#2015-06-0406:33sriharirobert-stuttaford: Hmm. Don't have pro at the moment.#2015-06-0406:33srihariIt's cool. I have a working solution. Was just wondering if there was a neat way to do it in a datalog query.#2015-06-0406:33sriharirobert-stuttaford: Thanks anyway simple_smile#2015-06-0406:33robert-stuttafordcool simple_smile#2015-06-0406:34robert-stuttafordi’m sure that Rich or Stu or one of the other cognitects would have an answer for you pretty quickly. the trick is to put it in front of them simple_smile#2015-06-0406:37srihariCan someone invite them here? 😉#2015-06-0406:37robert-stuttafordthey’re busy people. they probably won’t show up in here#2015-06-0407:07stijn@martintrojer: about your do’s and don’ts article: use dynamo. which other backends have you tried that failed?#2015-06-0407:08martintrojerwe tried PSQL (with many peer nodes) obviously doesn’t scale like dynamo#2015-06-0407:09martintrojerwanted to make that clear to people trying to figure out datomic#2015-06-0407:09stijni see#2015-06-0407:10stijnwe’re intending to use cassandra since we’re running on google cloud#2015-06-0407:10martintrojerany SQL store would show similar characteristics#2015-06-0407:10martintrojerYeah, we wanted to limit our infrastructure so introducing cassandra / riak would blow our devops budget#2015-06-0407:11stijnthe big string thing, ran into that too. indexing would take forever after storing enough raw xml in datomic simple_smile#2015-06-0407:11robert-stuttafordhow big were your strings, stijn?#2015-06-0407:11martintrojeryeah, I have a feeling people are putting json blobs into datomic without knowing the limitations.#2015-06-0407:11stijnranging from a few KB to a couple of MB#2015-06-0407:12stijnwe’re putting them in cassandra now, and store the key in datomic#2015-06-0407:12stijnyou lose the ‘change detection’ of datomic though#2015-06-0407:13martintrojerstill kind of off-putting if you ask me. 
It’s very nice to dump big documents into PSQL JSON columns (and to be able to query into them)#2015-06-0407:13martintrojerdatomic needs to feature#2015-06-0407:14martintrojeralso, the for love of god give us query planning.#2015-06-0407:14stijni read somewhere it’s on the todo list 😉#2015-06-0407:14stijnthe blob thing#2015-06-0407:33tjgIf I read this example properly (don't have a few minutes to test) Datomic supports unbounded recursion: https://gist.github.com/stuarthalloway/2002582#2015-06-0407:34robert-stuttafordnice tjg! @srihari , see tjg’s link#2015-06-0407:36tjgThough I hear datalog always terminates on finite data... If it supports sufficiently powerful recursion and always terminates... then I'd think something's gotta give. simple_smile#2015-06-0409:16sriharitjg, robert-stuttaford: thanks! Will take a look.#2015-06-0412:15tcrayford@martintrojer: I've thought about writing a query planner for datomic a few times. A trivial one ain't even that hard (but obviously you can ramp up how good it is a whole bunch)#2015-06-0412:18robert-stuttafordwhat would a planner actually do?#2015-06-0412:18robert-stuttaford-curious-#2015-06-0412:19robert-stuttaforddetermine the best order for the clauses on the fly?#2015-06-0412:21stijnthat would be pretty awesome simple_smile#2015-06-0412:25robert-stuttafordyup#2015-06-0412:25robert-stuttafordsome orders will always be the same - e.g. 
when working from :in values to :find values#2015-06-0412:26robert-stuttafordbut sometimes changing the order of clauses in the middle makes a huge difference#2015-06-0420:52arohnerusing the AWS transactor appliance, what is the recommended way to do backups?#2015-06-0420:52arohnerby default you can’t SSH in, or put files on the box, so do you modify the AMI, or something else?#2015-06-0506:08robert-stuttaford@arohner: separate, dedicated instance#2015-06-0506:09robert-stuttafordhttps://www.youtube.com/watch?v=vFX6T5oQC7Y#2015-06-0506:09robert-stuttafordthis talk is great for those with Datomic in prod or close to it#2015-06-0506:09robert-stuttafordhe says they use a t2.micro that backs up to S3 continuously#2015-06-0517:32ericfodeWhat is the best way to get a unix epoch into something that the datomic :inst type approves of?#2015-06-0520:53arohner(java.util.Date. 1433537571770)#2015-06-0520:53arohneris there a clojurebot in here?#2015-06-0520:53arohner@ericfode: ^^#2015-06-0520:53ericfodefacepalm @arohner thank you.#2015-06-0522:49maxHi everyone, I need some help.
I've developed a datomic app using the free filesystem transactor, and now it's time to deploy it. I decided to deploy it on Postgres.
I ran the postgres setup scripts that ship with datomic, and this is what I have in my transactor.properties:
protocol=sql
host=localhost
port=4334
### Postgres
sql-url=jdbc:
sql-user=datomic
sql-password=datomic
sql-driver-class=org.postgresql.Driver
Now when I try to create a database and connect to it, I get an error:
user> (d/create-database "datomic:)
true
user> (d/connect "datomic:)
ClassCastException [trace missing]
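(Note: the log scraper stripped the actual URLs from the snippet above. For reference, a hedged sketch of what the documented Datomic SQL-storage setup looks like; the host, port, and database names below are assumptions for a local Postgres install, not max's real values:)

```clojure
;; transactor.properties, SQL storage (values assumed for a local Postgres):
;;   sql-url=jdbc:postgresql://localhost:5432/datomic
;;   sql-user=datomic
;;   sql-password=datomic

;; Peer side: everything after the db name in the URI is handed to the
;; JDBC driver. "my-db" is a hypothetical database name.
(require '[datomic.api :as d])

(def uri
  "datomic:sql://my-db?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic")

(d/create-database uri)
(def conn (d/connect uri))
```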
#2015-06-0522:49maxany ideas?#2015-06-0522:50maxI've also had the ClassCastException return a more detailed error ClassCastException clojure.lang.Symbol cannot be cast to clojure.lang.Associative clojure.lang.RT.assoc (RT.java:778)#2015-06-0522:58maxGood news! I solved my problem!#2015-06-0522:58maxTurns out I had the wrong version of postgres in my project.clj#2015-06-0707:49robert-stuttafordhttps://twitter.com/robstuttaford/status/607454351985135616#2015-06-0708:02borkdudeare we in the same timezone now @robert-stuttaford ? it's 10 AM here#2015-06-0708:20robert-stuttaford@borkdude: yes, 10am here too#2015-06-0711:46gjnoonan@robert-stuttaford: are we going to record the hangout too#2015-06-0712:24robert-stuttaford@gjnoonan: yes, that’s a good idea#2015-06-0712:24robert-stuttafordnever recorded or hosted a recording before. any thoughts on that?#2015-06-0712:24tcrayford@robert-stuttaford: iirc you can just tell it "record this to youtube" and it'll do it#2015-06-0712:24robert-stuttafordwicked#2015-06-0712:24robert-stuttafordi’ll do that, then#2015-06-0712:34robert-stuttafordhey Jeff. ltns#2015-06-0715:45domkmIs there an open-source example of porting a non-trivial SQL schema to a Datomic schema?#2015-06-0715:45domkmAny recommended sources on data modeling in Datomic?#2015-06-0716:26meow@domkm: If anyone can help you it's probably @robert-stuttaford. Note that he is doing a datomic hangout soon: https://twitter.com/robstuttaford/status/607454351985135616#2015-06-0716:27meow@domkm: I think he's offered to do datomic design reviews as well.#2015-06-0716:29domkm@meow: Thanks#2015-06-0716:35meow@domkm: no simple_smile#2015-06-0716:35meowoops ^ s/np/no/g#2015-06-0717:13robert-stuttafordi humbly offer my input and feedback, on the understanding that i might be completely wrong simple_smile#2015-06-0717:14robert-stuttafordi.e. salt not included -grin-#2015-06-0808:39robert-stuttaford<!channel> as can be seen in the topic, i’m doing a hangout this friday. 
if you’re planning to attend, I’ve made a google form with some questions, and i’d appreciate any input you have for me. thanks! https://docs.google.com/forms/d/1opX6Br27woFrJvih4pMPuOpTwlWB4il0EYiHuRwcwYI/viewform#2015-06-0808:41danielcomptonOuch 1am in NZ#2015-06-0809:08gjnoonan@danielcompton: It will be worth staying up for simple_smile#2015-06-0809:09haduart@robert-stuttaford looking forward to it simple_smile#2015-06-0809:16robert-stuttaford@danielcompton: sorry about that. curse you, solar system!#2015-06-0809:20tjgHah, and a bunch of us in Bonn/Cologne Germany will be at the ClojureBridge workshop at 5pm.
Perhaps we'll come early and watch it on a big screen... 😛#2015-06-0809:21robert-stuttaford-grin-#2015-06-0815:37madJune 12th, 3pm UTC +2 OK, I’ll be there!#2015-06-0818:11robert-stuttaford@luke on large strings, would sticking those on their own entities in their own db partition help at all?#2015-06-0818:12robert-stuttafordthereby containing the slowness to reads of those datoms#2015-06-0818:12robert-stuttaford… a little#2015-06-0818:13lukeThat's getting into implementation details a bit deeper than I know. I don't think putting them on their own entities would make that big of a difference.#2015-06-0818:13lukeNot sure about partitioning - I don't know how that works with adaptive indexing.#2015-06-0818:14robert-stuttafordok. part of the problem is that they impact the read-side perf of the rest of the nice, small datoms around them. i thought perhaps if they’re isolated, that might help when using only those small datoms#2015-06-0818:15robert-stuttafordi guess for it to work you’d have to be careful about when you touch them, which is pretty much the same territory as just using a blob store#2015-06-0818:15lukeit seems logical but someone with a much deeper knowledge of the implementation than I do would need to answer that.#2015-06-0818:15robert-stuttaforddo you perhaps have such a person handy? simple_smile#2015-06-0818:15lukeI'll ask.#2015-06-0818:16robert-stuttafordthank you! i’d like to make mention of it in that hangout i’m doing on friday. would like to put the partition idea forward, but only if it’s actually a good idea!#2015-06-0818:21tcrayfordfinna guess that's top secret though#2015-06-0818:24tcrayford@robert-stuttaford: think the blessed story is just to use a squuid pointing at whatever blob store you're using for datomic still though#2015-06-0818:24robert-stuttafordyeah. it was just an intuition that i thought i’d sanity check. 
after all, i can actually talk to the people, unlike some other databases i’ve used in the past simple_smile#2015-06-0818:47luke@robert-stuttaford: So I forgot momentarily that partitioning is by entity, not by attribute. So you'd have to put all the of the "heavy" attributes on their own entity.#2015-06-0818:47robert-stuttafordyes. i was aware of that - they’d have to hide behind a ref#2015-06-0818:47lukeAt that point it depends on your usage patterns if the extra ref traversal cost outweighs the "heavy" attribute cost.#2015-06-0818:48lukeProbably a benchmark is the only way to tell for sure, absent Rich or Stu themselves weighing in to my question simple_smile#2015-06-0818:49robert-stuttafordyes. i guess my question really is, is this a viable strategy for dealing with this tradeoff of Datomic, and if so, when do you know when it’s time to a) use this approach and b) stop using this approach in favour of a blob store instead#2015-06-0818:49robert-stuttafordagain, just a thought experiment; just wondering if this approach would actually help or not simple_smile#2015-06-0818:50robert-stuttaforddatomic does kinda have some nice properties which are valuable to use even on bigger data -grin-#2015-06-0910:23stig+1 for recording! I cannot make the hangout in realtime, unfortunately 😞#2015-06-0910:24stig@robert-stuttaford: ^^#2015-06-0910:25robert-stuttafordi’ll be sure to record it simple_smile#2015-06-0920:30ericfodeI am curious... When you have an enum value in your datomic database, and an entity references it should the reference be marked as isComponent#2015-06-0920:44bhagany@ericfode probably not… if I'm understanding correctly, that would mean that retracting the referencing entity would retract the enum entry too#2015-06-0920:44ericfodeThat was my thought also. 
the problem is when i pull the entity it pulls the id of the enum value instead of the enum value#2015-06-0920:44ericfodedo you know of a good way to get around that ?#2015-06-0920:44bhaganyoh yeah, that's an annoying thing about the pull api#2015-06-0920:45ericfodeshould I prefer the entity api?#2015-06-0920:45bhaganyI get around it by doing something like (pull ?thing [* {:enum/thing [:db/ident]}])#2015-06-0920:45bhaganyI use the pull api a lot, and just deal with it#2015-06-0920:46ericfodelol sounds good. simple_smile Thanks for the tip, I really appreciate it#2015-06-0920:46bhaganysure, np#2015-06-1005:45robert-stuttaford@bhagany, @ericfode: i asked about this, and it’s by design that pull does this. i guess they didn’t want to hide any other data that might be on the enum entity, which is what would happen if they short-circuited it#2015-06-1005:45robert-stuttafordanother thing to note about pull; it doesn’t return :db/id#2015-06-1005:46bhaganyyeah, it's understandable, once you think about it. just a little unexpected after using the entity api.#2015-06-1008:47borkdude@robert-stuttaford: I probably can't make it to the hangout, because I'm supposed to work (read: make billable hours) then#2015-06-1008:47borkdude@robert-stuttaford: but if it's going to be recorded, I'm interested in viewing it later#2015-06-1009:45robert-stuttafordcool simple_smile#2015-06-1016:47ericfode@robert-stuttaford: your awesome, thankyou.#2015-06-1016:56ericfodeWhen i create a new entity, is there a way to get to it's id from the response?#2015-06-1016:59stigis there any benefit to :user/id + :user/email idents as opposed to :user-id + :user-email ? 
The Datomic docs seem to prefer namespaces, but the “bare” version would allow validating maps passed into datomic with Prismatic schema...#2015-06-1017:02marshall@ericfode: The map returned by transact contains an entry with the :tempids key that contains the created (or identified if they already exist) entity ids#2015-06-1017:02ericfodeahh#2015-06-1017:02ericfodeThank you!#2015-06-1017:04marshallif you’re using tempids from the tempid function, this might also be helpful: http://docs.datomic.com/clojure/#datomic.api/resolve-tempid#2015-06-1017:06tcrayford@stig: think that's just the datomic team's standard, I don't think there's a real win from them (though I do prefer namespaces personally)#2015-06-1017:37stig@tcrayford: ta.#2015-06-1017:39robert-stuttaford@stig, @tcrayford, i think namespaced keys index better#2015-06-1017:39robert-stuttaforduser/* lives together, etc#2015-06-1017:40robert-stuttaford@stig, you can use ns’d kws with Schema#2015-06-1017:40tcrayford@robert-stuttaford: reminder that datomic doesn't actually store keywords internally, they're just the eids of the idents#2015-06-1017:40robert-stuttafordyou’re right. don’t listen to me#2015-06-1017:40robert-stuttaford-grin-#2015-06-1017:41robert-stuttafordyeah. then there’s no semantic benefit other than it reads well#2015-06-1017:41stigheh. being able to use namespaced keywords with Schema would be interesting nonetheless. I couldn’t make it work.#2015-06-1017:42robert-stuttafordwe’re using them fine#2015-06-1017:43marshallClojure (keyword ns name) also makes it easier to generate programmatically#2015-06-1017:43robert-stuttaford(def User
"A schema for a User entity"
{(s/required-key :user/full-name) Text
(s/optional-key :user/nickname) Text
(s/required-key :user/email) Email
(s/required-key :user/status) (s/enum :status/active
:status/suspended
:status/deleted)
…
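(For anyone trying to reproduce this, a minimal self-contained sketch of the same approach; `Text` and `Email` are defined elsewhere in that codebase, so plain `s/Str` stand-ins are assumed here:)

```clojure
(require '[schema.core :as s])

;; Stand-ins: the thread's Text and Email types are not shown in the log.
(def Text s/Str)
(def Email s/Str)

(def User
  "A schema for a User entity, using namespaced keywords as keys."
  {(s/required-key :user/full-name) Text
   (s/optional-key :user/nickname)  Text
   (s/required-key :user/email)     Email
   (s/required-key :user/status)    (s/enum :status/active
                                            :status/suspended
                                            :status/deleted)})

;; Validation works with namespaced keys; throws on a non-conforming map.
(s/validate User {:user/full-name "Ada Lovelace"
                  :user/email     "ada@example.com"
                  :user/status    :status/active})
```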
#2015-06-1017:44tcrayford@robert-stuttaford: seems like you could generate those from the schema itself, right? DRY and all that#2015-06-1017:44robert-stuttafordyup#2015-06-1017:52stigrobert-stuttaford: all I know about prismatic schema I’ve learnt from “Clojure Applied” and they didn’t cover that way to build the schema 😛#2015-06-1017:52stigonly the s/defrecord way, which does not work. (As record fields cannot be namespaced.)#2015-06-1017:52robert-stuttafordright simple_smile i don’t know that way. the way i learned was from the project’s readme#2015-06-1017:54stigI should do that.. but I don’t have network connection at home, so have been using offline resources as much as possible. I (idiotically) concluded that it wasn’t supported.#2015-06-1017:54robert-stuttaford😮 no internet at home. gosh.#2015-06-1017:54stig(I’m in a pub leeching wifi and drinking coffee atm.)#2015-06-1017:54stigNot since mid march, no!#2015-06-1017:55stigThe withdrawal is fierce.#2015-06-1017:56robert-stuttafordwow. you’re either very brave or very unlucky. either way, well done for holding up#2015-06-1017:57robert-stuttafordhttps://github.com/cape-town-clojure/steel-plains-tcg/blob/deck-builder/src/cljx/sptcg/card_schema.cljx
https://github.com/cape-town-clojure/steel-plains-tcg/blob/deck-builder/test/clj/sptcg/tests/card_schema.clj#2015-06-1017:57robert-stuttafordthis was where i learned schema, for a fun-time clj usergroup we ran for a while#2015-06-1017:57robert-stuttafordthe tests show the actual data that’s being schematised#2015-06-1017:57stigit wasn’t planned! I signed up for internet a week before we got the keys to the place, but BT didn’t send an engineer until 9 weeks later (today) and he left without connecting everything so it’ll be another 2 weeks I think 😞#2015-06-1017:58robert-stuttafordyou might spot a M:tG nerd in there somewhere -grin-#2015-06-1017:58robert-stuttafordcrikey. my condolences#2015-06-1018:00stigmeh. it’s frustrating, but it helps me getting out of the house while being on sabbatical.#2015-06-1018:45stigThanks for the help!#2015-06-1018:53robert-stuttaford:+1:#2015-06-1019:19robert-stuttafordanyone know how to find an entity id’s birthday? i know the id is a composite of the partition it’s in and the timestamp. wondering if it’s possible with bit shifting magic to extract the timestamp component for use with e.g. d/tx-range#2015-06-1019:19robert-stuttaford@tcrayford: -nudge-#2015-06-1019:22robert-stuttaford(d/q '[:find (min ?t) . :in $ $h ?e :where
       [$h ?e _ _ ?tx]
       [?tx :db/txInstant ?t]]
     db
     (d/history db)
(:db/id (d/entity db [:attr "value"])))#2015-06-1019:22robert-stuttafordthis works, but takes 20ms to do#2015-06-1019:25robert-stuttafordif i just get the ?tx value directly, it’s 5ms#2015-06-1019:26robert-stuttafordgood enough for me, but i’m hoping for a bitshifting trick simple_smile#2015-06-1019:29bhaganyso, thinking about the construction of entity ids, and if I have everything straight in my head, this wouldn't be possible. As far as I recall, the lower 42 bits of an eid are related to the basis-t, which, if you can get to it with bit shifting, still won't give you the txInstant.#2015-06-1019:30robert-stuttafordah, but it would give me the t value, which is all i need#2015-06-1019:31bhaganyaha, okay, then that might work… I'm unsure how it arrives at unique bits for multiple entities in the same transaction though#2015-06-1019:31robert-stuttafordi’ve just remembered that i asked about this a while back#2015-06-1019:31robert-stuttafordhttps://groups.google.com/forum/#!searchin/datomic/partition$20temp$20id/datomic/pw2S0aI3H0A/o16Fnp97z4QJ#2015-06-1019:31robert-stuttafordi understand as much now as i did then about what daniel explained: close to nothing#2015-06-1019:32robert-stuttafordnever did learn the bit math stuff 😊#2015-06-1019:36bhaganyhmmm… yeah, I don't think I understand enough to get the info you want#2015-06-1019:36robert-stuttafordthank you for having a look simple_smile i’ll post on the group some time#2015-06-1019:38bhaganysounds good, I'd be interested to see if there's an answer#2015-06-1019:42robert-stuttafordi’ve asked the twinernet. let’s see simple_smile#2015-06-1107:10hmadelaine@robert-stuttaford, @tcrayford : Hi Robert and Tom ! do you know a bridge librairie between Prismatic-Schema and Datomic Schema ? As I am using both, I often think this could be useful ?#2015-06-1108:11robert-stuttaford@hmadelaine: i don’t think you need one. 
they both work with plain clojure data#2015-06-1108:13hmadelaine@robert-stuttaford: my idea was to have a unique Schema and derive the other. Often I have the impression to repeat myself#2015-06-1108:15robert-stuttafordit’s worth attempting simple_smile#2015-06-1108:16robert-stuttafordas long as you remember that they are fundamentally different things, and you can’t hide one in the other#2015-06-1108:21hmadelaine@robert-stuttaford: yes, that's why I did not start working on it yet. If anyone is interested in the reflexion, I would be glad to participate#2015-06-1212:08robert-stuttaford<!channel> damn. i’m so terribly sorry, folks, but i’m going to have to postpone today’s hangout 😞 work things have come up#2015-06-1212:11robert-stuttafordI won’t be able to reschedule for next friday, as I am on leave. I’m going to try for three weeks from today, which is the next Friday that I am available#2015-06-1212:26cmdrdats@robert-stuttaford: aw, that’s sad - sorry, hope things stabilise!#2015-06-1212:27robert-stuttafordthanks Deon#2015-06-1215:29madno problem#2015-06-1215:30madno hurry robert#2015-06-1220:48alexmillerhttp://clojure-log.n01se.net/date/2008-10-02.html#15:21#2015-06-1220:48alexmillersounds like a good idea - someone should do that#2015-06-1319:12arohner@alexmiller: wow, rhickey beat me by a few months. I have a blog post (https://arohner.blogspot.com/2009/04/db-rant.html) arguing for ACID, relational, non-SQL DB based on the JVM using sexprs and raw access to the index#2015-06-1408:49thomasdeutschHi everyone. I have a datascript question and i hope you can help me out. 
My scenario: i would like to change the entity-id of an entity, because i use the entity-nr 100 for an input form and on submit, i would like to change this to another eid, so that eid-nr 100 can be used for the input form.#2015-06-1414:08tonskyif anyone is interested answer to the @thomasdeutsch question is here https://github.com/tonsky/datascript/issues/88#2015-06-1513:43lboliveiraHello, I want to use an entity returned by d/touch as if it were a map, assoc'ing new keys, etc. How should I do that? Example:
(let [entity (->> (d/q '[:find ?e .
                          :where [?e :db/ident ]]
                        (d/db conn))
                   (d/entity (d/db conn))
                   d/touch)]
  (assoc entity :answer 42))
The code above is throwing a CompilerException java.lang.AbstractMethodError.
#2015-06-1513:51hmadelaine@lboliveira: why not#2015-06-1513:51hmadelaine(let [entity (->> (d/q '[:find ?e .
                          :where [?e :db/ident]]
                        (d/db conn))
                   (d/entity (d/db conn))
                   d/touch
                   (into {}))]
(assoc entity :answer 42))#2015-06-1513:53lboliveira@hmadelaine: It worked. 😃 Thank you!#2015-06-1513:54hmadelaine@lboliveira: not sure this is the best solution#2015-06-1513:55lboliveira@hmadelaine: it is small at least. simple_smile#2015-06-1514:19robert-stuttaford@lboliveira @hmadelaine: you should use the pull api which returns normal clj data#2015-06-1514:20robert-stuttaford(d/pull db ‘[*] your-id-or-lookup-ref-or-entity-reference-here)
http://docs.datomic.com/pull.html#2015-06-1514:22hmadelaine@robert-stuttaford: yes of course, I should have proposed this solution. I was to focused on the code 😉#2015-06-1514:23robert-stuttaford-grin-#2015-06-1514:23robert-stuttafordyour suggestion isn’t at all wrong. it’s just pull is more suitable simple_smile#2015-06-1514:26lboliveira@robert-stuttaford: thank you. I will use the pull api.#2015-06-1514:31cmdrdatscurious: https://github.com/Yuppiechef/datomic-schema/issues/13#2015-06-1514:31cmdrdatsFrom the day of datomic videos, Stu mentioned that in practise they found they pretty much put ‘index’ on everything#2015-06-1514:32robert-stuttafordyou don’t need to index type ref or anything with unique on it#2015-06-1514:32robert-stuttafordit’s redundant in both cases. already indexed#2015-06-1514:32cmdrdatsah, ok - so it’s making a little extra schema..#2015-06-1514:33robert-stuttafordyeah. no harm done, just no use either simple_smile#2015-06-1514:59stuartsierraleandro: EntityMaps (the return type from d/entity) are not Clojure maps. You can't assoc them. Instead, try d/pull, which returns real Clojure maps.#2015-06-1514:59stuartsierraoh, scrolling, whatever#2015-06-1515:00stuartsierraignore me#2015-06-1515:00robert-stuttaford-grin-#2015-06-1515:02lboliveira@stuartsierra: Thank you. Now I am using the pull api. Far better.#2015-06-1515:06stuartsierraYou're welcome.#2015-06-1617:46bhaganyI'd like to issue a query using the as-of filter, but I'm using the REST api. I somehow overlooked that this is only documented for datoms and entity over REST, but not query. Does anyone know of a workaround?#2015-06-1617:50bhaganyThe only thing that occurs to me is to build a small clojure wrapper around the api and figure out how to deploy it#2015-06-1617:51bhaganyHowever, I don't have much experience with JVM deployments and I'd prefer to keep the scope of this project as small as I can#2015-06-1618:08marshall@bhagany: You can supply an :as-of in the args map:
{:db/alias "dev/mbrainz-1968-1973", :as-of 12345}#2015-06-1618:14bhaganyah, excellent. Thanks @marshall!#2015-06-1810:22jthomsonHas anyone encountered datom conflicts after restores? I.e. two entities with the same value for a :db/unique attribute.#2015-06-1810:26jthomsone.g.
(d/q '[:find ?e ?unique-ident
       :in $ ?a ?v
       :where
       [?e ?a ?v]
       [?a :db/unique ?unique]
       [?unique :db/ident ?unique-ident]]
     (d/db (db/connection))
     :cat.block/key
     "designers22")
->
[[17592187845110 :db.unique/identity]
 [17592189560149 :db.unique/identity]]
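(An untested sketch generalising the query above: instead of checking one known value, scan the whole attribute for any value asserted on more than one entity. The :cat.block/key attribute is taken from the thread; `db` is a database value as before:)

```clojure
;; Values of a supposedly-unique attribute that are held by more than
;; one entity, returned as [value entity-count] pairs.
(->> (d/q '[:find ?v (count ?e)
            :in $ ?a
            :where [?e ?a ?v]]
          db :cat.block/key)
     (filter (fn [[_v n]] (> n 1))))
```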
#2015-06-1812:46marshall@jthomson: can you also return ?a in the :find spec and report that output?#2015-06-1812:47marshall@jthomson: sorry, never mind, i missed that you were passing it in.#2015-06-1812:47marshall@jthomson: what version of Datomic?#2015-06-1812:48jthomsoni'm pretty sure it's a real conflict, as when I try to retract the entity I get a two datoms conflict error.#2015-06-1812:48jthomsondatomic-pro "0.9.5130"#2015-06-1815:25kbaribeaurestores are a bit funny aren't they? I noticed yesterday that datomic will still do a differential restore even if I've delete-database'd the target db#2015-06-1815:26jthomsonhm, interesting#2015-06-1815:26jthomsonFYI in my case it this was due to trying to restore on divergent dbs (the target had some transactions that were not in the source)#2015-06-1815:27jthomsonwhich isn't supported at this point.#2015-06-1815:29kbaribeauyeah, that definitely seems like it could get weird#2015-06-1815:32kbaribeauif I really want to guarantee i've got two identical databases after a restore (like if I want my staging env to match prod exactly), I think I'd have to clean out everything related to my target db in the storage layer.
does anyone have a decent way to do that in dynamo besides deleting the table? I've got several databases in my dynamo table, but the raw data just looks like gibberish to me.
into the same bucket/table, w/e)#2015-06-1815:45jthomsonah yes of course#2015-06-1816:32bkamphausDatomic 0.9.5186 has been released: https://groups.google.com/forum/#!topic/datomic/IGzjBbHr10E#2015-06-1816:35bhaganycurious about this, "* Fixed bug in Pull API when using selectors built from strings."#2015-06-1816:36bhaganyI reported a bug in which a scalar find spec (aka, :find (pull ?e [*]) .) returned an edn string over the rest api that didn't match what you get at the repl#2015-06-1816:36bhaganyis that what this refers to?#2015-06-1816:38bhaganyiirc, over the rest api, you would get a map per attr/value pair#2015-06-1818:59bhaganyokay, I updated datomic, and it appears that it's not yet fixed#2015-06-1818:59bhaganyalso, I was wrong - you get a 2-vector for each attr/value pair#2015-06-1819:00bhaganyas though you called vec on the correct result#2015-06-1914:55frankiesardoWhat would be the idiomatic way of getting the time of creation of the db? Sort of like the opposite of basis-t#2015-06-1914:56frankiesardo(d/q '[:find [(min ?inst)] :where [?tx :db/txInstant ?inst]] db) doesn't work because there's always a transaction set at the epoch#2015-06-1914:59stuartsierrafrankie: I don't think the "time of creation of the DB" is stored anywhere. The best you could do is "time the first user-space entity was created."#2015-06-1915:07frankiesardo@stuartsierra: What would the query look like?#2015-06-1915:09stuartsierra@frankie Or just get the txInstant from the first transaction returned by (d/tx-range (d/log conn) nil nil)#2015-06-1915:09frankiesardoJust discovered that min actually takes an optional n parameter, so I could jus drop the first returned from (d/q '[:find [(min 2 ?inst)] :where [?tx :db/txInstant ?inst]] db)#2015-06-1915:11frankiesardo@stuartsierra: can't use d/log, I have db as a value passed down from other fns. 
That was another good idea tho#2015-06-2017:34bhaganyIt looks like one of my previous questions in this room resulted in an improvement to the REST API docs, so I'm posting again here, even though I've already figured it out. The current description of the args parameter to /api/query reads thus:#2015-06-2017:34bhagany> args - Vector of maps of arguments to query. Each map is a database descriptor that corresponds to a database argument to query. Each descriptor must include a value for :db/alias. #2015-06-2017:36bhaganyThis, however, is misleading. The items in the vector aren't always maps. For references to data sources, maps are needed, but for parameterized queries, they should just be the parameters you want bound.#2015-06-2017:40bhaganyhere's an example POST body, in case that's not abundantly clear: {:q [:find (pull ?layout [*]) :in $ ?layout] :args [{:db/alias "dev/stores"} 277076930200641]}#2015-06-2022:16maxI am having trouble grokking partitions. Right now, my whole data set is in :db.part/user. The docs say to not do this in production, but I haven’t had much guidance on how to organize partitions.#2015-06-2022:16maxShould I create several? How do I know how to split things?#2015-06-2022:18maxI have paths that pass through every type of entity, which makes me think multiple partitions will be bad?#2015-06-2022:18arohner@max partitions are just about cache locality of index segments#2015-06-2022:19arohnerthings in the same partition are stored next to each other, which optimizes locality of fetching data#2015-06-2022:19arohnerI haven’t heard much guidance about using them, but I typically create a new partition for each ‘table'#2015-06-2022:20max@arohner: even if they are linked? i.e. 
entity of type A has a key that points to entities of type B?#2015-06-2022:21arohner@max AIUI, partitions only deal with the entity id#2015-06-2022:22arohnerentity A will be in the partition you specified when you created it, and entity B will be in the partition you specified, when you created it#2015-06-2022:22arohnerrefs don’t affect their location#2015-06-2022:23maxright, so if they are in different partitions will searching across that ref be slower then if they are in the same?#2015-06-2022:24arohnerprobably#2015-06-2022:24arohnerunless both segments are hot#2015-06-2022:24arohnerremember that datomic fetches datoms by segment, which is ~5kB-50kB#2015-06-2022:25arohnerpartitioning is to keep entities ‘nearby’, so hopefully they’ll come from storage in the same segment#2015-06-2022:26arohnerbut this is mainly a thing to worry about when your data is much larger than ram, and your queries have to pull a lot of datoms from storage#2015-06-2022:26maxI suspect that my data will fit in ram for a long time.#2015-06-2022:27maxand when it doesn’t ram will be even cheaper simple_smile#2015-06-2022:27maxso to not think about it, I want to keep things in one partition. Is there any disadvantage to keeping things in :user?#2015-06-2022:27arohnerprobably not#2015-06-2022:29maxthanks Allen!#2015-06-2022:29arohnernp#2015-06-2203:27alwaysbcodingIf you have an attribute that is of type 'uuid' do you still need to declare that it is unique identity to use it as a lookup ref or is that done automatically because it's of type uuid?#2015-06-2207:36robert-stuttafordyou have to declare uniqueness yourself, @alwaysbcoding#2015-06-2213:34bkamphaus@bhagany - that’s correct, overzealous documentation of db descriptors specifically. Wasn’t meant to limit args to dbs but of course the ‘Vector of maps’ language was incorrect and did just that. Fix to REST docs has now been made.#2015-06-2213:49curtosisis there a library … or best practice … or recommended example 😉 for managing schemas? 
Something akin to rails db:create and db:migrate (but without necessarily implying that it be otherwise similar to rails).#2015-06-2214:28stuartsierra@curtosis: Look for "Conformity" by Ryan Neufeld for one example.#2015-06-2214:29stuartsierraRemember that schema is immutable like everything else in Datomic. You can always add new attributes, but it's very rare to replace or change them (once you have data in production).#2015-06-2214:29tcrayford+1 on conformity. Yeller's been using it in prod for a year and then some, been very happy with it.#2015-06-2214:39curtosisthanks! conformity seems pretty close to what I was looking for. (I also happen to think immutable schema is a pretty good thing.)#2015-06-2214:41tcrayfordthe datomic way if you really want to mutate schema is to do it via backup/restore, but that's a whole bunch of work (intentionally so, I think)#2015-06-2214:45curtosisyeah… changing attributes gets you into some bad places wrt temporal consistency.#2015-06-2214:46bhagany@bkamphaus - great, thanks!#2015-06-2313:10robert-stuttafordtcrayford: how would you mutate schema via backup/restore?#2015-06-2313:11tcrayford@robert-stuttaford: iirc: copy the data into a new database alongside the existing one. Not using datomic's backup/restore tools (shoutout too terminology)#2015-06-2313:11robert-stuttafordoh. yuck simple_smile#2015-06-2313:11robert-stuttafordthat’s lots of work#2015-06-2313:12robert-stuttafordso, you can alter anything except type and fulltext. and you can get around those by making new attrs with the original name (after first renaming the original to something else) with the new type/fulltext setting and a manual batch of transactions to transfer the data#2015-06-2313:18tcrayfordyeah, it's better than it used to be 😄#2015-06-2705:08bhaganyI should probably just put this down for now, but I'm stubborn. I've got a complex pull pattern that isn't working, and I cannot for the life of me figure out why. 
I've got it narrowed down to a pretty minimal test case. This works:#2015-06-2705:08bhagany(d/q '[:find (pull ?e [{:row/_columns [:db/id]}]) . :in $ ?e] db 277076930201266)#2015-06-2705:08bhaganyThis doesn't:#2015-06-2705:08bhagany(d/q '[:find (pull ?e [* {:row/_columns [:db/id]}]) . :in $ ?e] db 277076930201266)#2015-06-2705:09bhaganyEntire output: user=> (:row/_columns (d/q '[:find (pull ?e [{:row/_columns [:db/id]}]) . :in $ ?e] db 277076930201266))
[{:db/id 277076930201265}]
user=> (:row/_columns (d/q '[:find (pull ?e [* {:row/_columns [:db/id]}]) . :in $ ?e] db 277076930201266))
nil
#2015-06-2705:10bhaganyThe only difference between the two is the wildcard at the top level of the pull spec. If anybody can see something I'm missing, I surely would appreciate it if you'd point it out to me.#2015-06-2705:29bhaganyThe crazy thing about this is that it works fine with other attributes#2015-06-2706:04bkamphaus@bhagany: do you have this isolated in a test db that you could potentially share? Or that you have schema/tx instructions for?#2015-06-2706:05bkamphaus@bhagany: we had a similar bug reported but were unable to reproduce it with our own sample data (e.g. mbrainz) and the reporter didn’t follow up with their own data/instructions for repro.#2015-06-2706:06bkamphauscomplicating issues, the bug I’ve seen that looked similar was reported only for a test suite running against a mem db. The user didn’t report whether or not they were able to repro on durable storage.#2015-06-2713:22bhagany@bkamphaus: yes, I can share it, and I'm using dev storage. Do you just need datomic.h2.db?#2015-06-2713:46bhagany@bkamphaus: I hope I've got that right - I'm going to be leaving shortly for a vacation and won't be back until Tuesday. So, here's the db: https://www.dropbox.com/s/kb3f30irv1eu2z0/datomic.h2.db?dl=0#2015-06-2713:47bhagany@bkamphaus: And here's the schema too, just in case: https://www.dropbox.com/s/fv30duh11ejj1kr/schema.clj?dl=0#2015-06-2713:59bkamphaus@bhagany: easiest way to pass is just the backup/restore mechanism. e.g.,
$DATOMIC_DIR/bin/datomic backup-db datomic:dev://localhost:4334/dbname file:///path/to/dbname#2015-06-2714:07bhagany@bkamphaus: okay - https://www.dropbox.com/s/dw84mvu9292hfb2/db.backup.zip?dl=0#2015-06-2714:08bhaganyThe weekend support here is really very appreciated. Thank you.#2015-06-2714:17bkamphausNo worries, @bhagany - enjoy your vacation time!#2015-06-2714:46bhaganythanks 😄#2015-06-2913:58samfloresI have an attribute that uses the idiomatic way of representing enums (a :db.valueType/ref attribute “pointing” to values with :db/ident).
When I use the pull API with [:foo/enumerated-field] it returns {:foo/enumerated-field {:db/id 999}} (which I understand perfectly) and if I use the pattern [{:foo/enumerated-field [:db/ident]}] the return value is nicer: {:foo/enumerated-field {:db/ident :foo.enum/cool-name}}.
The question is: is there a convenient/idiomatic/built-in way to return it as {:foo/enumerated-field :foo.enum/cool-name}?#2015-06-2914:00stuartsierra@samflores: no#2015-06-2914:01samfloresthanks, @stuartsierra#2015-06-2914:05tcrayfordthe only reason I ain't used the pull api yet 😉#2015-06-2918:19robert-stuttafordmakes sense why. if they collapsed it, you can’t easily get at anything else on the enum entity#2015-06-2919:55tcrayford@robert-stuttaford: oh for sure. But it defeats the point of pull - to be able to declaratively define the shape you want, with no munging afterwards#2015-06-3015:46ghadiwhat's the current convention for storing lists with ordering in datomic?#2015-06-3015:47alexmillerbuilding your own linked list#2015-06-3015:49ghadiyikes 😉#2015-06-3015:55ghadiThen use entity navigation to grab all the results?#2015-06-3015:56ghadiOr recursive query?#2015-06-3015:56ghadinm, i don't know how to achieve it with a recursive query#2015-06-3015:57ghadirubber-ducking: using navigation simplifies, I suppose then the challenge becomes update#2015-06-3015:57alexmilleryeah#2015-06-3016:45stuartsierraOr an :order attribute on the elements in the list.#2015-06-3018:25robert-stuttaford:order on elements presupposes that the items only participate in a single list#2015-06-3018:27robert-stuttaford@ghadi: here’s one approach that uses Stuart’s idea http://dbs-are-fn.com/2013/datomic_ordering_of_cardinality_many/#2015-06-3018:28robert-stuttaford@tcrayford: i don’t think that’s pull’s intent. not to pull out the shape you want, but rather a generic way to pull whatever you want out at once for a linked graph of entities#2015-06-3018:29robert-stuttafordi’m not aware of any ability to transform the natural shape of the result in pull with pull, only how to describe what to pull#2015-07-0113:22mitchelkuijpersI have small question, I have a hard time getting a schema in clojure. I keep getting java.lang.RuntimeException: No reader function for tag db/id
#2015-07-0113:22mitchelkuijpersThis is my schema [
{
:db/id #db/id[:db.part/user]
:db/ident :company/name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/unique :db.unique/value
:db/doc "The name of the company"
:db.install/_attribute :db.part/db
}
]#2015-07-0113:22mitchelkuijpersAnd I try to load it with: (read-string (slurp "resources/schemas/company.edn"))
#2015-07-0113:23mitchelkuijpersI get the error but I don't know how to fix it 😞#2015-07-0113:25a.espolovhello#2015-07-0113:25a.espolovGuys, do I understand correctly that the datomic REST service only works in dev mode?#2015-07-0113:26mitchelkuijpersFixed it by using (d/tempid :db.part/user)
instead of #db/id[:db.part/user]
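For reference, the fix described above can be sketched as follows. This is an editor's sketch, not code from the thread: the file path and attribute come from mitchelkuijpers's snippet, and it assumes the Datomic peer library is on the classpath (datomic.Util/readAll is the peer library's edn reader that knows the #db/id and #db/fn tags).

```clojure
(require '[clojure.java.io :as io]
         '[datomic.api :as d])

;; Option 1: build the tempid in code, keeping the edn file free of reader tags.
;; Note: new attribute definitions belong in the :db.part/db partition.
(def schema
  [{:db/id                 (d/tempid :db.part/db)
    :db/ident              :company/name
    :db/valueType          :db.type/string
    :db/cardinality        :db.cardinality/one
    :db/unique             :db.unique/value
    :db/doc                "The name of the company"
    :db.install/_attribute :db.part/db}])

;; Option 2: keep #db/id literals in the edn file and read it with
;; datomic.Util/readAll, which installs Datomic's reader tags and
;; returns a list of all forms in the file.
(with-open [r (io/reader "resources/schemas/company.edn")]
  (datomic.Util/readAll r))
```

Either result can then be passed to d/transact as usual.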
#2015-07-0113:34tcrayford@mitchelkuijpers: have you already required the datomic namespace?#2015-07-0113:34mitchelkuijpersYes#2015-07-0113:34tcrayford@a.espolov: nope, rest datomic works against any transactor#2015-07-0113:35tcrayford@mitchelkuijpers: I mean, I don't use the tag thing, so not sure I can help 😞#2015-07-0113:35mitchelkuijpersI fixed it already 😉#2015-07-0113:35mitchelkuijpers@tcrayford: simple_smile#2015-07-0113:35mitchelkuijpersBut I don't get why this for example does not fail: https://github.com/Datomic/day-of-datomic/blob/master/src/datomic/samples/schema.clj#2015-07-0113:37a.espolov@tcrayford: Unfortunately I could not once get it to run when datomic is running in sql view#2015-07-0113:38bkamphaus@mitchelkuijpers: the tag is for raw data - i.e., edn. The general recommendation is to do what you’ve done - use d/tempid in code to generate temp ids, use #db/id in edn and read the edn. It might be helpful to see this topic on the mailing list: https://groups.google.com/d/msg/datomic/Fi7iXps2npw/ywEVFKZjJKQJ#2015-07-0113:39mitchelkuijpersOk awesome thnx @bkamphaus now I get it#2015-07-0113:50bkamphaus@a.espolov: By “sql view” you mean it does not work when using SQL as your Datomic storage? Can you show how you’re invoking the REST launch command? (feel free to obscure any details in URIs, ports, credentials, etc.) As well as the error you’re seeing?#2015-07-0117:30ericfodeis there an equivalent to order-by in datomic?#2015-07-0117:47tcrayford@ericfode: in datalog? Not that I'm aware of 😞#2015-07-0117:47tcrayfordit's set oriented, so it's distinctly not ordered as far as I remember#2015-07-0117:57marshall@tcrayford: That’s correct, Datomic query returns sets, which are inherently not ordered#2015-07-0117:58ericfodeSo what do you do when you want to page#2015-07-0117:58ericfode?#2015-07-0118:05ericfode@marshall: @tcrayford ?#2015-07-0118:06marshall@ericfode: You have a couple options.
You can use direct index access for lazy traversal#2015-07-0118:07marshallYou could also include an order attribute and restrict queries based on that#2015-07-0118:09marshallyou could also use index-range or datoms to pass a section of an index at a time to query#2015-07-0118:09ericfodeI have an 'at' attribute#2015-07-0118:10ericfode(time the entity was made)#2015-07-0119:16spiralganglionHi @stuarthalloway! Happy to have you here.#2015-07-0119:18spiralganglionOut of curiosity — how large is the Datomic team at Cognitect? I couldn't find a list of devs anywhere on the site.
I've just had a hard time figuring out, based on the public signalling, how significant the focus is on Datomic as compared to, say, CLJ or CLJS (which are open source, so it's easier to see who is active and how active they are). After FoundationDB disappeared, I'm a little shy about building my future on top of opaque foundations (pun intended). Your comments here are reassuring, and perhaps my comments might help inform future iterations of your marketing material.#2015-07-0208:15robert-stuttafordivanreese: we’ve been a paid Datomic customer since soon after its first release. whenever we’ve had an issue that we needed Cognitect’s help with, we’ve gotten it, usually same day, always within 48 hours. it’s clear that they’re very committed to this thing simple_smile#2015-07-0210:06spiralganglion@robert-stuttaford: I appreciate the endorsement, and look forward to joining you in the set of people who are happily using Datomic (...I can't help myself!)#2015-07-0214:32robert-stuttaford-grin-#2015-07-0214:56bkamphaus@bhagany: Thanks again for passing along the db. We’re including a fix for the bug you encountered in the next release. I’ll update you to let you know when the release is out.#2015-07-0214:58bhagany@bkamphaus: excellent! thanks for the update.#2015-07-0215:46marshall@ivanreese: If you’d like to talk about Datomic and Cognitect, I’d be happy to discuss with you over Skype sometime. Shoot me an email (address elided) and we can work out a date/time.#2015-07-0216:18maxHi all, I have a question about the entity api. Given an entity I can get the database back — in java db = entity.db().
In clojure-land I can do (.db entity) of course. Is that the only way? I expected (d/db entity) to also return the db, but it seems to expect connections only.#2015-07-0216:19tcrayfordentity-db#2015-07-0216:19tcrayford(d/entity-db entity)#2015-07-0216:20maxah thanks @tcrayford I missed that in the docs somehow#2015-07-0216:25tcrayfordnp simple_smile#2015-07-0216:25tcrayfordhappy to help#2015-07-0216:48spiralganglion@marshall: Thanks! I will take you up on that in a few weeks.#2015-07-0417:10val_waeselynckI know that the EULA won't let you publish any benchmarks, but does anyone know if using Datomic with NodeJS (with the REST server in-between) is tractable?#2015-07-0417:41spiralganglionI've wondered the same - and look forward to finding out. At the risk of hijacking the topic, I now also wonder why they don't let you publish benchmarks. Further, they seemingly don't let you talk about details of the EULA! (See the section on publicity - unless that section forbids me to mention the fact that it exists, in which case, don't see it.) IANAL, but this seems awfully restrictive.#2015-07-0417:42spiralganglionI'm reminded of the saying, "I'll see it when I believe it."#2015-07-0417:44spiralganglionTo the best of anyone's knowledge, has there been any mention (perhaps, in the Google Groups) of why these clauses so restrict public discussion of the software, its performance, and the license itself? 
What is the justification?#2015-07-0417:49tcrayford@ivanreese: I'm unsure if I'm breaching the EULA by replying 😮#2015-07-0417:49tcrayford(with actual data)#2015-07-0417:50tcrayford(well, not data, just thoughts on the EULA)#2015-07-0417:50tcrayford(sorry for misspeaking there)#2015-07-0417:50spiralganglionWe need a can-of-worms emoji#2015-07-0417:51spiralganglion:existentialquandary:#2015-07-0417:52tcrayford(felt real bad, I like the team and the product a lot)#2015-07-0417:53val_waeselynck> publicly display or communicate the results of internal performance testing or other benchmarking or performance evaluation of the Software;#2015-07-0417:53spiralganglionGood thing slack is update-in-place and you can just delete the message ;)#2015-07-0417:53val_waeselynckOK, so if it's not publicly it's okay simple_smile#2015-07-0417:53spiralganglionThis is an invite-only community.#2015-07-0417:53spiralganglionAnd they serve drinks.#2015-07-0417:54spiralganglionDefinitely not public.#2015-07-0417:54tcrayford@ivanreese: my understanding of the EULA is they just took heavy inspiration from other database vendors. There have for sure been mailing list threads about it#2015-07-0417:55spiralganglionHmm. Time to go spelunking.#2015-07-0417:56spiralganglionBut while we're on the topic... And since I'm not currently subject to the terms of the license... "Neither party will, without the other party's prior written consent, make any news release, public announcement, denial or confirmation of this EULA, its value, or its terms and conditions, or in any manner advertise or publish the fact of this EULA." ..... That's (possibly) bonkers!#2015-07-0417:56spiralganglion(arguably)#2015-07-0417:58tcrayford@ivanreese: no comment#2015-07-0417:58tcrayford😉#2015-07-0418:12spiralganglionThere was a great thread on the Google group just over a year ago: https://groups.google.com/d/topic/datomic/6XOdQNzrioY#2015-07-0418:30spiralganglion@tcrayford: you deleted it! 
You anti-immutable sneak, you.#2015-07-0418:30tcrayford😉#2015-07-0418:49spiralganglionI was so happy to see @jgehtland respond right out of the gate on that thread... And then so sad to see no further responses, and that (seemingly) nothing has changed since.#2015-07-0418:50spiralganglionPlanning for my next big work project, I've been torn between Datomic (for the power) and Postgres (for the OSS) for months. The decision gets ever harder the closer I get to needing to choose.#2015-07-0419:03spiralganglionThis is one of those "I'm sure it'll be fine" situations, where I bet I'll never have an issue because of the license. But we're so used to seeing Cognitect as forward-thinking and obsessed with good design. It's a discouraging shock to find an imprecise and overzealous license bearing their thumbprint.#2015-07-0419:28val_waeselynckI agree. I do understand that they make Datomic proprietary, and I'm grateful that they have released so much excellent stuff for free. Still, this EULA is disappointing. I'm wondering to what extent they're aware of this, maybe they just outsourced their legal stuff to the wrong guy.#2015-07-0419:29val_waeselynck@ivanreese: exactly same dilemma.#2015-07-0419:42spiralganglionEg: I want my company to pay for Datomic. I (dearly) want to use it to build the next generation of our service. But I also want to blog about the experience, without fear of legal repercussions because I might mention the name "Cognitect".#2015-07-0617:14bkamphausDatomic 0.9.5198 has now been released: https://groups.google.com/forum/#!topic/datomic/D0T6qyF-GPA#2015-07-0617:15bkamphaus@bhagany: this release contains a fix for the issue you encountered.#2015-07-0617:46bhaganygreat, thanks so much!#2015-07-0618:36bhagany@bkamphaus - just confirming that 0.9.5198 works for me.
Thanks again!#2015-07-0620:53canweriotnowWell, shit...#2015-07-0620:53canweriotnow> "Neither party will, without the other party's prior written consent, make any news release, public announcement, denial or confirmation of this EULA, its value, or its terms and conditions, or in any manner advertise or publish the fact of this EULA."#2015-07-0620:53canweriotnowDoes that mean my company has to stop telling people how much we love using Datomic?#2015-07-0621:23spiralganglionIt reads like a gag order, of the sort included in a national security letter, "which restricts the recipient from ever saying anything about being served with one."#2015-07-0621:24spiralganglionIn my sense of the EULA... by publicly discussing the facts of the EULA here, we are in violation of it.#2015-07-0711:23mitchelkuijpersI have a question, I am currently in the process of creating a single page application with datomic on the server. What we would like to do is make one endpoint where you can just post queries and get the results back (we don't need to create a rest server because it is only for use in the single page application). The application is multitenant and I would like to restrict the query to only the data that tenant can see. I have used filter on the complete database but this does not seem like a scalable solution at all. Now I think I should transform the queries on the server to restrict them to one tenant. Have you guys tried something like this?#2015-07-0711:39mitchelkuijpersOr might it be a good idea to create a database per tenant?#2015-07-0711:41mitchelkuijpersI am reluctant to do that because I would lose a lot of opportunities to query the database for overall stats or learning from my customers..#2015-07-0711:43val_waeselynck@mitchelkuijpers: Given that you can run code in queries (including code that has side-effects I believe, since all of clojure.core is included) I think you cannot contain the security issues with this approach.
Right ?#2015-07-0711:47mitchelkuijpers@val_waeselynck: Yeah that is basically my problem#2015-07-0711:48mitchelkuijpers@val_waeselynck: And using (datomic/filter) has pretty horrible performance unless I could use this on a result of a query#2015-07-0711:55val_waeselynck@mitchelkuijpers: even if you can restrict the set of datoms your client is able to perceive (using e.g filters) that does not change the problem that the client would have access to your whole runtime. That's where the big problem is IMO. Maybe you'll need to add a parsing step before the query to forbid function invocations.#2015-07-0711:56mitchelkuijpersYeah but that would be pretty easy I guess#2015-07-0711:57mitchelkuijpersMaybe I should just not do it, but I hate creating new rest endpoints for simple queries#2015-07-0711:59stuartsierra@mitchelkuijpers: Even ignoring the arbitrary-code-execution part, a client could easily construct a pathological query which consumes all the resources on the Peer.#2015-07-0712:00mitchelkuijpers@stuartsierra: and @val_waeselynck All valid points let's not do this#2015-07-0714:40bkamphausWe’ve added a Best Practices section to the docs today, see: http://blog.datomic.com/2015/07/datomic-best-practices.html#2015-07-0716:15arohner@mitchelkuijpers: I decided against doing that as well, but I still want something similar.#2015-07-0716:15arohnerWriting server endpoints that are just validation around datomic queries is tedious#2015-07-0716:22mitchelkuijpers@arohner maybe om next will give some good ideas how to achieve this#2015-07-0716:28jwmnice to have a best practices document always#2015-07-0717:36val_waeselynckhonestly, I think the solution is some kind of RPC that makes it very fast to write server endpoints.#2015-07-0717:36val_waeselynckYou don't always have to be RESTful, do you?#2015-07-0718:32jwmwebsockets works#2015-07-0718:32jwmbut for large data rest / json compressed is a lot faster since its compressed / decompressed by the client and 
server#2015-07-0719:02mitchelkuijpers@bkamphaus: awesome addition to the docs!#2015-07-0805:42maxI have a data modelling/querying question. I want to store a heartbeat that a client sends every hour. Datomic recommends using noHistory for things like counters, which makes sense, but I also want to use the heartbeat for billing, i.e. I bill you based on number of heartbeats per month as a proxy of usage. Is it possible to build that query with the tx log even if I have history turned off for an attribute?#2015-07-0809:01stijnif I filter a db to contain only datoms of a certain partition is that a performance win or not?#2015-07-0812:01robert-stuttafordstijn, the filter would still have to consider all datoms that your datalog looks at, so i’m thinking no#2015-07-0812:01stijn@robert-stuttaford: that’s what I thought as well#2015-07-0812:02robert-stuttaford@max: the tx log will contain everything that happened, so you could use it for no-history changes. just be aware that reading from the log doesn’t take advantage of any peer caching#2015-07-0914:22domkmHas anyone done user account merging in Datomic? I haven’t seen this topic addressed.#2015-07-0914:24tcrayforddomkm: don't think there's anything datomic specific in there, right?#2015-07-0914:28domkm@tcrayford: Sure. Bear with me because I don’t have much experience with Datomic so this question could be due to my ignorance. Specifically, I’m not sure how I’d deal with historic references that point to the wrong user entity. 
Let’s say you annotate transactions with the user who made the change for auditing but later that user is merged into another user.#2015-07-0914:29domkm@tcrayford: There’s nothing specific to Datomic about this question in terms of reading the most recent data but I don’t think I understand the implications of merging user accounts when it comes to Datomic time travel.#2015-07-0914:29tcrayford@domkm: you can't change the past 😉 So in those cases, the old transaction references would point to the old user#2015-07-0914:30tcrayfordbut it seems to me like you could store "this user was merged with user YYY" on that user (remember attributes are flexible compared to rows) and know about that at query time#2015-07-0914:32domkm@tcrayford: So retract one of the user entities and assert a ref like user/merged-with that refers to the remaining user?#2015-07-0914:33tcrayfordthat's probably what I'd do#2015-07-0914:33domkmOkay, thanks simple_smile#2015-07-0914:34jthomsonI've used a recursive rule in such situations, so you can have user/merged-with -> user/merged-with -> user#2015-07-0914:42domkmNice. Thanks @jthomson#2015-07-0921:09ghadiDoes a full Datomic license cover HA use in non-prod / dev scenarios? I forget#2015-07-0921:33marshall@ghadi: Yep, you can use all features in testing/dev.#2015-07-0921:36ghadithanks, Marshall#2015-07-1120:22a.espolovAfter a couple of seconds after the launch of datomic.
an exception is thrown: relation "datomic_kvs" does not exist#2015-07-1120:35val_waeselyncksounds like misconfigured storage?#2015-07-1305:06timothypratleyAny advice on how to structure data for an elementary school report card?
There are numeric metrics
Several lists and text fields… but I’m hoping to keep them flexible;
So the data might be:
[{:db/id 3
:components [[:metrics "Metrics" ["Productivity"
"Leadership"
"Happiness"]]
[:ol "Achievements" ["Won the spelling bee."]]
[:ol "Weaknesses"]
[:ol "Coach goals"]
[:ol "Player goals"]
[:textarea "Coach comments"]
[:textarea "Player comments"]]}])
Bearing in mind that the categories might change.
I’m thinking I should be using :metric/happiness :comments/coach instead of nested data… but it feels like then I have some path info tied up in my keywords#2015-07-1305:08timothypratleyAlternatively maybe I should be making a new entity for ‘metrics’ and ‘achievements’ etc#2015-07-1305:09bhaganyIf I'm reading your intentions here correctly, I think there's a mismatch between what you're attempting in the code snippet, and what datomic does -#2015-07-1305:10bhaganyin particular, datomic doesn't natively support vectors or lists#2015-07-1305:11bhaganyis the entity in your code snippet a student?#2015-07-1305:21timothypratleyyup#2015-07-1305:22bhaganyokay, when you say "the categories might change", what part of the snippet are you referring to?#2015-07-1305:22timothypratleynew/different metrics for example “Math"#2015-07-1305:22timothypratleyand new categories#2015-07-1305:23timothypratleylike a list of friends or something I haven’t thought of#2015-07-1305:23bhaganystill not sure what a category is… is "Achievements" a category?#2015-07-1305:24timothypratleyyes, though I don’t have any special meaning to ‘category’ simple_smile#2015-07-1305:26bhaganyheh, alright. well, at a first blush, here's how I'd do it -#2015-07-1305:28bhaganya student entity would have these (optional) attributes: :metrics, :achievements, :weaknesses, :goals, and :comments#2015-07-1305:29bhagany:metrics, :achievements, and :weaknesses look like they can just be db.type/string with :db.cardinality/many#2015-07-1305:30timothypratleyis there a way to preserve order in a many relationship? 
or would I need to make them entities that relate to each other to do that?#2015-07-1305:31bhaganyordering isn't really straightforward - the easiest way to do it, imo, is to give the things you want to order an :order attribute and sort in your client code#2015-07-1305:31bhaganyso if you need to order those first three, you'd need to make them entities, rather than just strings#2015-07-1305:31timothypratleyah nice#2015-07-1305:31bhagany:goals and :comments would be entities in their own right, because you need to note authors#2015-07-1305:33bhaganythey'd just have an attribute for the goal or comment, and then another attribute to note the author - either an enum-like thing if you just want it to be :coach or :player, or you could represent people as their own entities and refer to them from there#2015-07-1305:33timothypratleygotcha#2015-07-1305:33timothypratleythanks that makes a lot of sense#2015-07-1305:33bhaganyokay, good simple_smile#2015-07-1305:34timothypratleyis there a shorthand way to insert my root with its various sub-entities, or should I write a transformation from my data structure that produces all the entities and throw them at transact?#2015-07-1305:35bhaganyyou can nest your entities when you transact them… let me get you a link#2015-07-1305:36bhaganyCan't link you directly to the content, but search for "nested" here: http://docs.datomic.com/transactions.html#2015-07-1305:37bhaganyalso, note that even though the syntax for nesting uses vectors, the entities don't end up being ordered in datomic#2015-07-1305:37timothypratleyooo great!#2015-07-1305:37timothypratleythank you very much simple_smile#2015-07-1305:38bhaganysure, have fun! simple_smile#2015-07-1309:15stijn@bhagany: cool, I didn’t know about that.
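The approach bhagany describes above (an :order attribute plus client-side sorting, with sub-entities nested into the parent's transaction map) might be sketched like this. All attribute idents here are hypothetical, not from the thread, and :student/achievements is assumed to be a cardinality-many component ref so that the nested maps are legal in a transaction:

```clojure
(require '[datomic.api :as d])

;; Nested-map transaction: the achievement sub-entities are created
;; together with the student. Each one carries an :achievement/order
;; value, since :db.cardinality/many values are unordered sets.
(def tx
  [{:db/id                (d/tempid :db.part/user)
    :student/name         "Alice"
    :student/achievements [{:achievement/text  "Won the spelling bee."
                            :achievement/order 0}
                           {:achievement/text  "Perfect attendance."
                            :achievement/order 1}]}])

;; Reading back in order: pull the achievements, then sort client-side,
;; because neither query results nor cardinality-many refs preserve order.
(defn achievements-in-order [db student-eid]
  (->> (d/pull db
               [{:student/achievements [:achievement/text :achievement/order]}]
               student-eid)
       :student/achievements
       (sort-by :achievement/order)))
```

Inserting into the middle of such a list means renumbering, which is why the thread also mentions linked-list style schemas for heavily edited orderings.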
every once in a while you need to re-read the datomic docs for added features simple_smile#2015-07-1313:03bhagany@stijn: heck yeah, I learn new things every time#2015-07-1319:55wilkerluciohi nice people simple_smile one question: if I do a transaction using upsert, but nothing is changing (let's say I'm transacting data with the same value that is already on the DB), this will create a new transaction anyway or it's going to skip since nothing was going to change?#2015-07-1319:58arohner@wilkerlucio: that will create a new transaction#2015-07-1320:15wilkerluciothanks @arohner, so I think I'm better do a query to check if the data is the same to avoid unintended history I think#2015-07-1320:16arohner@wilkerlucio: depends on what you’re trying to do. Re-asserting a fact can be a valid strategy as well. “on today’s date, I observed X to still be true"#2015-07-1320:17wilkerlucio@arohner: my case is just a simple facebook login, I'm actually login at the client-side, I just send the access token and then I create/read the user ID, is on that point that I was going to always transact (in case it didn't generate a new one), but since it does I'll just check if the user is there and avoid when it is. makes sense?#2015-07-1320:18arohnersure#2015-07-1321:00arohnerI’d like to debug a query that is slow in production. Is there a way to clear the peer cache, without restarting?#2015-07-1321:04tcrayfordarohner: not that I know of. 
I'd bet you can, but it'll involve reflection and poking around in the internals of a db (which may violate a thing you signed when you downloaded datomic)#2015-07-1407:21robert-stuttaford@wilkerlucio: @arohner: you can also use d/with to dry-run the transaction and check that (-> tx-result :tx-data count (> 1)) to know if there will be any changes#2015-07-1407:22robert-stuttafordthere will always be one datom for the tx timestamp#2015-07-1407:22robert-stuttafordwe do this in our Onyx stats processing system to reduce noise#2015-07-1410:33wilkerlucio@robert-stuttaford: cool, I'll try that approach, thanks#2015-07-1410:46robert-stuttafordgreat#2015-07-1412:08stuartsierra@wilkerlucio @arohner A transaction which re-asserts existing facts will create a new Transaction entity in the log, but it will not contain the re-asserted Datoms.#2015-07-1412:09wilkerlucio@stuartsierra: thanks, that's nice to know#2015-07-1412:09robert-stuttaford@bkamphaus: do you perhaps know why peer would connect to 127.0.0.1 even though i’m giving it an FQDN which resolves to the IP of a different server on the dev storage? driving me nuts!#2015-07-1412:11robert-stuttafordbit of a shotgun approach, but trying @luke and @daemianmack too simple_smile#2015-07-1412:13robert-stuttafordthe docs clearly show that a host must be provided:#2015-07-1412:13robert-stuttafordDev Appliance:
datomic:dev://{transactor-host}:{port}/{db-name}#2015-07-1412:17lukeWell, first of all, definitely.not.localhost is in fact a subdomain of localhost 😄#2015-07-1412:17lukeBut I'll presume that's not actually the string you're using#2015-07-1412:18robert-stuttafordyes simple_smile not the real string#2015-07-1412:18robert-stuttafordi can ping the actual domain from the server in question and it resolves to the right IP#2015-07-1412:18lukeHm. Aside from some crazy networking nonsense I can't think of anything that would cause that.#2015-07-1412:19lukeCan you temporarily change the port on your db connection string to something else? Then your error message would show whether its somehow resolving your domain to localhost, or using a different config entirely.#2015-07-1412:19robert-stuttafordi was using /etc/hosts shortcuts before. let me make very sure it’s not a factor#2015-07-1412:19lukeyeah overriding in hosts would definitely explain what's going on.#2015-07-1412:20robert-stuttafordok. /etc/hosts eliminated. still happening#2015-07-1412:20robert-stuttafordbizarre#2015-07-1412:20robert-stuttafordyes. i’ll alter the port#2015-07-1412:21robert-stuttafordinteresting#2015-07-1412:22robert-stuttafordso now it’s attempting the domain#2015-07-1412:22robert-stuttafordswitching back#2015-07-1412:24robert-stuttafordwith 4334 it’s back to localhost again#2015-07-1412:24robert-stuttaford‽#2015-07-1412:26lukeThat's strange. It could be a bug in Datomic, but Occam's razor says its something with how your application code is loading the connection string. A caching effect, or something. Maybe restart the JVM process if you haven't already? 
Put a println immediately before you call datomic.api/connect at the lowest possible level?#2015-07-1412:26lukesorry, don't have any really good ideas.#2015-07-1412:26robert-stuttafordif the transactor machine’s firewall is not allowing connections, perhaps the code that reports this error isn’t reporting the right string for domain?#2015-07-1412:27robert-stuttafordthe strace has hornetq close to the throw. perhaps it’s in HQ.#2015-07-1412:27robert-stuttafordtrying to validate the firewall rules independently now#2015-07-1412:29lukeactually, @robert-stuttaford, looking at the error messages, I think your first one isn't a problem with hitting storage at all#2015-07-1412:29lukelook how the error comes from h2.jdbc when you change the port, but is a standard wrapped exception if the port is correct#2015-07-1412:29robert-stuttafordyeah, it’s a hornetq strace; it’s trying to connect to the transactor#2015-07-1412:30robert-stuttafordok. i’m pretty sure it’s a blocked nostril <BACKSPACE> port. thank you for the rubber ducking and apologies to all for the channel noise#2015-07-1412:39bkamphaus@robert-stuttaford: the host in the URL is the storage host, the transactor writes its location in storage and then the peer connects to the transactor. What’s the host in the transactor properties file?#2015-07-1412:39bkamphausSorry if I missed any details there, just a quick scan.#2015-07-1412:39robert-stuttafordthat’s exactly where i’m debugging now. it’s 127.0.0.1. it should be the public ip, right?#2015-07-1412:40robert-stuttafordor can i cheat and use 0.0.0.0#2015-07-1412:41tcrayfordnumber of bugs I've worked on at $dayjob this month that would be trivial to fix if we'd used datomic instead of postgres: 40. Fuck databases without history and transaction metadata#2015-07-1412:42tcrayford(nearly all of those take the form "which part of this 50k lines of code is modifying this data incorrectly")#2015-07-1412:42robert-stuttafordlanguage! 
simple_smile#2015-07-1412:43bkamphaus@robert-stuttaford: it should be an ip the peer can connect to the transactor with simple_smile if you’re using e.g. a container or virtual instance that doesn’t know its IP until running, you might have to look into something like e.g. https://groups.google.com/d/msg/datomic/wBRZNyHm03o/0SdNhqjF27wJ#2015-07-1412:43robert-stuttafordgreat, thank you ben. nice gas mask, by the way simple_smile#2015-07-1412:47robert-stuttafordthat explains why I saw 127…. now. it was fetching that value from storage (which it could reach), and the transactor had put 127 there because of the props file#2015-07-1418:26ghadiTo confirm a bit of undocumented knowledge http://docs.datomic.com/clojure/index.html#datomic.api/connect For a SQL backend, any connection pooling needs to be handled in userspace by providing the appropriate :datasource, right?#2015-07-1418:27ghadiAnd is connection pooling even necessary?#2015-07-1418:28ghadiThe latter question is more important.#2015-07-1420:12val_waeselynckhttps://stackoverflow.com/questions/31416378/recommended-way-to-declare-datomic-schema-in-clojure-application#2015-07-1420:13val_waeselynckIf anyone can be of help simple_smile#2015-07-1420:42mitchelkuijpers@val_waeselynck posted an answer#2015-07-1420:43mitchelkuijpers@val_waeselynck basically it is use conformity and read the blogpost from @tcrayford that's how I got it working simple_smile#2015-07-1420:52val_waeselynck@mitchelkuijpers: thanks for the quick answer simple_smile I'll give it a try then validate your answer#2015-07-1420:54mitchelkuijpers@val_waeselynck no problem if you need any more help let me know#2015-07-1516:30ghadiCan anyone answer my jdbc connection pooling question a few lines up? 
^#2015-07-1518:46robert-stuttaford@ghadi: i’m sure @bkamphaus or @luke or @stuartsierra can either answer you or direct you to some place where it’s written down#2015-07-1609:55gerstreeWhen following the guide (http://docs.datomic.com/aws.html) to deploy datomic transactor on AWS I end up with the problem described here: http://comments.gmane.org/gmane.comp.db.datomic.user/4568#2015-07-1609:56gerstreeI tried to find the startup.sh script that gets downloaded from a Datomic S3 bucket to find out what the problem could be, but cannot find it#2015-07-1609:58gerstreeI also tried to boot the Datomic AMI by hand to figure out how to fix this, but I cannot log into the instance by ssh, although I have a key pair set up and used it while booting the AMI#2015-07-1609:59gerstreeThe security group also allows inbound ssh traffic#2015-07-1610:00gerstreeCan someone help me figure out how to fix this, or am I better off setting up a custom AMI and a manual/custom automation transactor installation#2015-07-1611:26robert-stuttaford@gerstree, @bkamphaus should be able to assist when he appears#2015-07-1611:56gerstree@robert-stuttaford: great, thanks. I will continue with the rest of my setup and wait a bit for help on the transactor part.#2015-07-1612:43marshall@gerstree: What size EC2 instance are you attempting to start the txor on?#2015-07-1612:58gerstree@marshall It was a c3.large instance#2015-07-1613:13bkamphaus@gerstree: what’s the Xmx and memory settings you’re using?#2015-07-1613:15bkamphausre: process, you’re using the ensure-transactor, ensure-cf, create-cf-template, create-cf-stack commands? For the keypair, you added it to the generated JSON (using AWS docs or some other resource for guidance)?#2015-07-1613:24gerstree@bkamphaus, I did exactly what was documented in http://docs.datomic.com/aws.html: ensure-transactor, ensure-cf, create-cf-template and create-cf-stack.
I did not add the keypair to the generated JSON when using the create-cf-stack#2015-07-1613:27gerstreeI did however start an instance from the console using a keypair (that works for several other instances) and could not connect to that instance#2015-07-1613:27gerstreeShould the image allow ssh access?#2015-07-1613:27bkamphaussome common culprits for this sort of thing: heap size too large for instance, memory settings larger than 75% of heap size, unsupported instance type (looking into it), license not valid for version of Datomic requested#2015-07-1613:29gerstreeI am looking for a way to find the error, what would be the best approach? I was trying to start an instance and run the startup.sh manually.#2015-07-1613:38gerstreeIs that startup.sh script available somewhere?#2015-07-1613:38gerstreeThe Xmx was 2625m by the way#2015-07-1613:38gerstreeLooks reasonable for the 3.75G of that c3.large instance type not?#2015-07-1613:40bkamphausmemory-index-max + object-cache-max = ?#2015-07-1613:42bkamphausI don’t know if there's a good generic troubleshooting route for the dead start on CF I can offer, past addressing common config issues. The general approach I use is to validate the settings against e.g. local dev then to build the transactor properties + cf properties around the working settings, then push.#2015-07-1613:44gerstreeI understand, I was looking for a way to get a hold of the real error, it's annoying the instance terminates and can't be restarted. And the instance I start by hand is not accessible via ssh for some reason.#2015-07-1613:51bkamphaus@gerstree: I verified that ssh is turned off at the AMI level as a security measure#2015-07-1613:55bkamphausassuming you’re running with a super user AWS key (i.e. 
same one you used with ensure*) , can you start a local transactor against the ddb table set up by ensure-transactor using the same settings (also forcing the same JVM args re: Xmx, etc.)?#2015-07-1613:57bkamphausIf you want to PM me a version of the transactor properties and cf properties files redacted of any credentials or sensitive info on Slack, I can look it over. We do have customers that configure their own instance or CF for some of the reasons you mentioned - e.g. wanting to ssh in to access logs, etc.#2015-07-1613:59bkamphausIn general, it’s typically the issues I mentioned - wrong instance type (you should be fine here), too much heap (you’re ok here), not enough heap for transactor properties memory settings (unsure yet), invalid license (unsure yet) — if you use ensure with AWS key w/appropriate permissions your security groups, role policies, etc. should be fine, but if not these can contribute as well.#2015-07-1613:59bkamphausOnce you’re going, cloudwatch metrics and log rotation are almost always sufficient for figuring out problems that arise.#2015-07-1614:12gerstreeI am indeed running with a super user AWS key (we tried via IAM first, but that was hopping from policy to policy... no fun).#2015-07-1614:13gerstreeI have not tried connecting to ddb from a local transactor, let me try that first.#2015-07-1615:06kbaribeaucan anyone offer advice on a transactor that keeps timing out? It'll timeout even if I give it a transaction with just a single datom.#2015-07-1615:07kbaribeauI assume this means it's busy with something, but I'm unable to tell what that is or how to stop it#2015-07-1615:10bkamphaus@kbaribeau: do you have access to logs or metrics? In either do you see AlarmBackPressure ? (one scenario is that it could be in the middle of a large indexing job with several transactions backed up)#2015-07-1615:11kbaribeauthat would show up in both logs and cloudwatch metrics?#2015-07-1615:13bkamphausYeah. 
w/logs you can also just grep for [Ee]rror [Ee]xception and Alarm as a first sanity check for health.#2015-07-1615:14bkamphausOr w/metrics look for any Alarm(s) — other storage metrics as well such as StoragePutGetBackoffMsec etc. could be an indicator if e.g. storage provisioning is an issue.#2015-07-1615:14kbaribeauI've got a NullPointerException in the log from yesterday#2015-07-1615:14kbaribeaujava.lang.NullPointerException: null
at datomic.db$get_ids$fn__4004.invoke(db.clj:2352) ~[datomic-transactor-pro-0.9.5186.jar:na]
at clojure.core.protocols$fn__6074.invoke(protocols.clj:79) ~[clojure-1.6.0.jar:na]
at clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13) ~[clojure-1.6.0.jar:na]
at clojure.core$reduce.invoke(core.clj:6289) ~[clojure-1.6.0.jar:na]
at datomic.db$get_ids.invoke(db.clj:2367) ~[datomic-transactor-pro-0.9.5186.jar:na]
at datomic.db.ProcessExpander.getData(db.clj:2425) ~[datomic-transactor-pro-0.9.5186.jar:na]
at datomic.update$processor$fn__10124$fn__10125$fn__10126$fn__10130$fn__10133$fn__10134.invoke(update.clj:246) ~[datomic-transactor-pro-0.9.5186.jar:na]
at clojure.lang.Atom.swap(Atom.java:37) ~[clojure-1.6.0.jar:na]
at clojure.core$swap_BANG_.invoke(core.clj:2232) ~[clojure-1.6.0.jar:na]
at datomic.update$processor$fn__10124$fn__10125$fn__10126$fn__10130$fn__10133.invoke(update.clj:240) ~[datomic-transactor-pro-0.9.5186.jar:na]
at datomic.update$processor$fn__10124$fn__10125$fn__10126$fn__10130.invoke(update.clj:238) ~[datomic-transactor-pro-0.9.5186.jar:na]
at datomic.update$processor$fn__10124$fn__10125$fn__10126.invoke(update.clj:235) [datomic-transactor-pro-0.9.5186.jar:na]
at datomic.update$processor$fn__10124$fn__10125.invoke(update.clj:216) [datomic-transactor-pro-0.9.5186.jar:na]
at datomic.update$processor$fn__10124.invoke(update.clj:216) [datomic-transactor-pro-0.9.5186.jar:na]
at datomic.update$processor.doInvoke(update.clj:216) [datomic-transactor-pro-0.9.5186.jar:na]
at clojure.lang.RestFn.applyTo(RestFn.java:139) [clojure-1.6.0.jar:na]
at clojure.core$apply.invoke(core.clj:626) [clojure-1.6.0.jar:na]
at datomic.update$background$proc__10046.invoke(update.clj:58) [datomic-transactor-pro-0.9.5186.jar:na]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_55]
#2015-07-1615:16kbaribeauI'm not sure our metrics are configured correctly. looking into it#2015-07-1615:25kbaribeauI can't even see a metric named AlarmBackPressure, but other metrics look reasonable. I think.#2015-07-1615:28bkamphausdid transactions stop going through with the NPE? (you would no longer see, e.g. TransactionBytes metrics after that point if so)#2015-07-1615:31kbaribeauThe metrics are making it look that way, although there are log entries after the NPE (but not many)#2015-07-1615:31kbaribeauheartbeats are still getting through#2015-07-1615:35kbaribeauCould restarting the transactor instance help?#2015-07-1615:36bkamphausDo you have any really large transactions that precede that? (seen from high TransactionBytes values).#2015-07-1615:37bkamphausDef. worth restarting the transactor to see if that resolves the issue.#2015-07-1615:39bkamphausJust to note, though, that NPE looks like it's a transaction function error; I wouldn’t expect it to have an impact on the health of the system.#2015-07-1615:40kbaribeauOh, interesting.#2015-07-1615:40kbaribeauThe largest TransactionBytes value I see in the last day is 257 bytes. Seems pretty small.#2015-07-1615:42kbaribeauI think I'll restart, thanks for the help so far. Knowing which metrics to look at is definitely useful simple_smile#2015-07-1615:54bkamphausif you have a long indexing job (say something like excise where a lot of segments have to be rewritten), you’ll see IndexWriteMsec values during the indexing job, so having a lot of those when transactions are timing out could be an indication that you’re stuck in indexing.#2015-07-1615:55kbaribeauCool.
I had suspected indexing at one point but didn't realize there was a metric for it.#2015-07-1616:01bkamphausI don’t believe that metric is in any version but don’t remember off the top of my head in which one it was introduced#2015-07-1616:01bkamphauss/any/every#2015-07-1616:02bkamphausCreateEntireIndexMsec will also show up at the end of a successful indexing job. AlarmIndexingFailed will show up if indexing fails (and these failures are usually related to memory issues on the transactor if they do show up).#2015-07-1616:31bkamphausWe’ve released Datomic 0.9.5201 https://groups.google.com/forum/#!msg/datomic/GI5R-e_r100/nEPhvLd4E3IJ#2015-07-1617:45gerstree@bkamphaus, no need to look at our template anymore. I have the transactor up and running on AWS.#2015-07-1617:47gerstreeYou put me on the right track by making me start the transactor locally first, talking to ddb on Amazon#2015-07-1617:49gerstreeThanks so much#2015-07-1617:54bkamphaus@gerstree: glad you were able to get the issue resolved!#2015-07-1714:20martinklepschDo people using Datomic in Clojure projects usually run a separate Datomic instance for development or do you use an in-process version?#2015-07-1714:22martinklepsch@bkamphaus: The bin/maven-install script (and maybe others) don’t have a #!/bin/sh — maybe a good idea to add it?#2015-07-1714:23statonjrmartinklepsch++#2015-07-1714:24statonjrI have a bootstrap script that adds it#2015-07-1714:24tcrayford@martinklepsch: local transactor for dev, in memory db for tests#2015-07-1714:24statonjrSame here#2015-07-1714:25martinklepschlocal transactor = bin/transactor conf.properties did I get the lingo right?#2015-07-1714:27statonjrCorrect.#2015-07-1714:27tcrayfordyep#2015-07-1714:29martinklepschThanks! 
simple_smile#2015-07-1714:31martinklepschIf you upgrade Datomic moving a db from old to new is done by backup & restore basically?#2015-07-1714:31tcrayfordyou can typically just run the new transactor against the old db#2015-07-1714:31tcrayfordseveral times they've changed the format of what's in storage and done that via code in the transactor, which means no backup/restore#2015-07-1714:33tcrayfordfor local storage, that just means copying data out of the old transactor#2015-07-1715:06bkamphausone thing to note for the memory db is that it’s sufficient for testing ACI but not D aspects of Datomic's ACID semantics. I.e., it has no log, which underlies durability in Datomic.#2015-07-1715:06bkamphausso you’ll need e.g. local dev storage if you have any testing around Log API, excise, etc.#2015-07-1715:18stuartsierra@martinklepsch @tcrayford The on-disk format has changed extremely rarely in the past, and is documented in release notes. Most transactor upgrades are just restarting the process with a new JAR.#2015-07-1715:19tcrayfordjust the same as prod simple_smile#2015-07-1716:32val_waeselynckRegarding a question I asked a few days ago about schema declaration:
What do you think of using transaction functions to make declaring schema attributes less tedious?
Here's a POC: https://gist.github.com/vvvvalvalval/fe16f475b1656f28d4b8#2015-07-1716:36val_waeselynck(@mitchelkuijpers @guilespi this may interest you)#2015-07-1716:57bkamphausthe use case is a matter of opinion I’ll stay out of simple_smile but re: making it a transaction function, in general I would avoid tx functions in cases where you don’t need transaction-time access to the database value.#2015-07-1717:00val_waeselynck@bkamphaus: thanks, what do you imagine could go wrong?#2015-07-1717:04bkamphaus@val_waeselynck less “going wrong” and more about best fit. Data munging (producing a schema attr map from defaults/terse names) makes sense to me as library/API code. Transaction functions (the occasional dorky illustrative example aside) are really about enabling logic at transaction time that ensures valid transitions between state a la ACID isolation (using serialized writes as the mechanism), i.e. adding/subtracting from a balance.#2015-07-1717:04bkamphausthere are use cases for validation or helper functions that make sense as tx functions, b/c they would require transaction-time access to the db value#2015-07-1717:05bkamphausyou can run into issues by over-relying on transaction functions, mainly perf issues since tx function logic is performed in serial, a few commonly used tx functions w/non trivial perf characteristics can tank the throughput of a system#2015-07-1717:05bkamphausthough the perf impact case is less likely to be problem w/schema install#2015-07-1717:06val_waeselynckYeah I wouldn't worry about performance in this case ^^#2015-07-1717:09val_waeselynckThe thing is, in this case, the code has 2 purposes: being processed by the database (data) and being a reference for the application developer (which makes it a big deal to reduce noise)#2015-07-1717:12tcrayford@val_waeselynck @bkamphaus I'd utter some things about the suitability of abstracting away schema as well. 
Whilst the default definition is relatively verbose, it's easy to understand for the most part. I'd worry about losing that by moving to a more terse syntax.#2015-07-1717:12val_waeselynckAs for the fact that it does not use transaction-time access to the database, I don't really see it as a problem. The documentation itself says other uses for database functions may yet be found. http://docs.datomic.com/database-functions.html#2015-07-1717:13tcrayfordAlso, depends on your app, but I think in most apps schema changes are relatively rare, and so making them easier isn't a good economic tradeoff (at least for the apps I've worked on)#2015-07-1717:15val_waeselynck@tcrayford: maybe if you have a very good memory simple_smile but I often find myself consulting my schema declarations quite often e.g to remember attribute names.#2015-07-1717:15tcrayfordalso depends how big the schema is 😉#2015-07-1717:23val_waeselynck@bkamphaus: I do note your point about not mixing library code with data though#2015-07-1717:23robert-stuttafordval: a dev-time dev/user.clj fn that prints out all your schema should save you some time simple_smile#2015-07-1717:24tcrayford@robert-stuttaford: before cursive, I had a keybinding that "autocompleted" schema attributes (easy to wire up with ido-mode or similar)#2015-07-1717:24robert-stuttaforddefinitely agree with keeping schema simple: data. dsls are nice for writing it, but that’s about it. you lose when you stop keeping it as data. we use .edns with plain datomic schema defs#2015-07-1717:30val_waeselynck@robert-stuttaford: good call!#2015-07-1717:31val_waeselynckMmmh still can't think of a situation where this goes wrong. I think I'll be dumbly stubborn and tell you what happens when it gets ugly.#2015-07-1717:34val_waeselynck@robert-stuttaford: do you have an example of such helper? 
would you mind posting the code?#2015-07-1717:45bkamphausSomething like this should work for grabbing schema, we have a java example as well: https://github.com/Datomic/datomic-java-examples/blob/master/src/PrintSchema.java
(let [db (d/db conn)]
  (clojure.pprint/pprint
   (map #(->> % (d/entity db) d/touch)
        (d/q '[:find [?a ...]
               :where [_ :db.install/attribute ?a]]
             db))))
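A tabular variant of the snippet above (editor's sketch, not from the thread): it uses the datomic.api/attribute fn and clojure.pprint/print-table that robert-stuttaford points to just afterwards, and assumes a connected peer with the connection bound to `conn`.

```clojure
;; Sketch: tabular view of installed attributes via d/attribute,
;; assuming a live Datomic peer and a connection bound to `conn`.
(require '[datomic.api :as d]
         '[clojure.pprint :as pp])

(let [db (d/db conn)]
  (pp/print-table
   [:ident :value-type :cardinality :unique :indexed]
   (map #(let [a (d/attribute db %)]
           ;; d/attribute returns an AttributeMap supporting keyword access
           {:ident       (:ident a)
            :value-type  (:value-type a)
            :cardinality (:cardinality a)
            :unique      (:unique a)
            :indexed     (:indexed a)})
        (d/q '[:find [?a ...]
               :where [_ :db.install/attribute ?a]]
             db))))
```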
#2015-07-1719:44timothypratleyis there a sensible way to sync datoms to datascript? (query a datomic server and put the datoms into a client-side datascript data structure)#2015-07-1806:41robert-stuttafordhey lucasbradstreet !#2015-07-1806:43robert-stuttaford@timothypratley: you can read d/datoms to get datoms directly, but of course that’s not the result of any sort of query. you can always reconstitute your own datoms from query results, it’s not that hard, it’s just not obvious; as you’d have to fetch out t values for everything. @tonsky no doubt has a preferred method or two.#2015-07-1806:52robert-stuttaford@val_waeselynck, this gives you a nice data set you can easily filter on. see http://docs.datomic.com/clojure/#datomic.api/attribute to see what attribute returns. you can also run it through clojure.pprint/print-table#2015-07-1816:12timothypratleythanks @robert-stuttaford#2015-07-1816:43val_waeselynck@robert-stuttaford: thanks!#2015-07-1818:10robert-stuttaford100%#2015-07-1820:34a.espolovHello. How do I run the datomic rest service for sql storage?
I use this command: bin/rest -p 8001 sql datomic:sql://{db-name}?{jdbc-url}#2015-07-1820:36a.espolovthe rest service is running. But after selecting storage "sql" on the page localhost:8001/data#2015-07-1820:36a.espolovthe web interface returns a 500 error#2015-07-1905:58robert-stuttaford@a.espolov: you have the sql service and the transactor running, i assume?#2015-07-1905:58robert-stuttafordi recommend looking at the docs to see how the rest service is logged and then review those logs#2015-07-1905:58robert-stuttafordbin/logback.xml has a bunch of commented out loggers you can turn on, maybe one is for the REST service#2015-07-1910:53a.espolov@robert-stuttaford: https://gist.github.com/dark4eg/26a15c52dd33377b3850#2015-07-1915:25robert-stuttaford@a.espolov: psql isn’t getting your password. sorry. @bkamphaus can help you tomorrow, i think simple_smile#2015-07-1915:26a.espolov@robert-stuttaford:
bin/transactor datomic:sql://{db-name}?{jdbc-url} it's working#2015-07-1915:28a.espolovno matter how I tried to run the datomic rest service#2015-07-1915:28a.espolovit only works with dev#2015-07-1915:30bkamphaus@a.espolov: does the rest command work if you wrap the jdbc url in quotes or escape any characters that may need it?#2015-07-1915:32bkamphausI’m assuming in this case the transactor URL is specified by base + user + password, i.e.
sql-url=jdbc:
sql-user=datomic
sql-password=datomic
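One gotcha with sql-password values like the one above: characters that are special in URLs have to be percent-encoded when the password is spliced into the jdbc URL (bkamphaus makes this point in the thread). A quick REPL check, using a made-up password containing a %:

```clojure
;; Percent-encoding a hypothetical sql-password ("da%tomic" is made up)
;; before splicing it into a jdbc URL. Note URLEncoder does
;; form-encoding (spaces become "+"), but '%' -> "%25" as needed here.
(java.net.URLEncoder/encode "da%tomic" "UTF-8")
;; => "da%25tomic"
```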
#2015-07-1915:33bkamphausI believe sql passwords may also have to be url encoded i.e. if there’s a %, %25#2015-07-1915:34a.espolov@bkamphaus: Would it be difficult to show the full startup command for the datomic rest service using sql as storage?#2015-07-1915:35bkamphausIn general, the most likely culprit for this error is that the creds are missing, or somehow malformed in how the jdbc url is being supplied to bin/rest#2015-07-1915:54a.espolov@bkamphaus bin/rest -p 8001 sql datomic:sql://?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic#2015-07-1915:54bkamphausbin/rest -p 8001 sql "datomic:sql://?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic"#2015-07-1915:54bkamphausjust tested and this works for me#2015-07-1916:26a.espolov@bkamphaus: thx sir#2015-07-1923:20timothypratleyloving the reverse lookup capabilities of Datomic, I was able to make a basic entity browser in a weekend:#2015-07-1923:21timothypratleybeing able to answer “what is everything that relates to x?” is really powerful.#2015-07-2007:00robert-stuttaford@timothypratley: yup. it’s incredibly powerful. what makes it doubly so is that you can do the same with time!#2015-07-2012:11robert-stuttaford@stuartsierra: have you perhaps solved the issue of how to stop listening to datomic.api/tx.report.queue in the context of your component pattern?#2015-07-2012:12robert-stuttafordright now i’ve got a listener in a core.async thread, but i’m struggling to find a way to cleanly stop it#2015-07-2012:13stuartsierra@robert-stuttaford: I've found the 'Component' Lifecycle really only works for application start-up / shut-down.
Anything with a shorter or different lifespan needs to be handled separately.#2015-07-2012:13robert-stuttafordso you no longer use the tools.namespace/reset thing?#2015-07-2012:14stuartsierra@robert-stuttaford: No, no, I use that for everything that should be started and stopped as a whole with the rest of the application.#2015-07-2012:14robert-stuttafordthis is for app start/stop, but for the development workflow, i’m doing this many times over. of course, it’s all fine if i restart the jvm. but, for obvious reasons, i don’t want to do that#2015-07-2012:14robert-stuttafordyes. ok. in my case, this is true for the tx-report-queue as well#2015-07-2012:15stuartsierra@robert-stuttaford: For development, I would typically use a test DB with a unique generated name.#2015-07-2012:15robert-stuttaford-grin-#2015-07-2012:15robert-stuttafordok. so there’s no clean way to repeatedly listen/unlisten to the tx-report-queue that you know of?#2015-07-2012:16robert-stuttafordi get the test db thing, but again, in my case, i’m working with a large production database and working on a stats processing system that works with that data#2015-07-2012:17robert-stuttafordi know why this isn’t supported, but man, it would *rock* if datomic’s api provided a tx-report chan natively.#2015-07-2012:17stuartsierra@robert-stuttaford: Not sure what you're really trying to do here. You can use remove-tx-report-queue to disconnect the queue.#2015-07-2012:18robert-stuttaford-sigh-. of course. thank you for being the voice of reason, Stuart. i suppose that will cause the next .take to return nil or something?#2015-07-2012:19stuartsierraOr just create and manage the tx-report-queue outside of the component / reset.#2015-07-2012:21robert-stuttafordi’ll see how remove-tx-.. 
works out#2015-07-2012:28stuartsierra@robert-stuttaford: The Tx report queue is a BlockingQueue, so I expect once you call remove-tx-report-queue it will block forever.#2015-07-2012:33robert-stuttafordthat’s disappointing.#2015-07-2012:34robert-stuttafordwe’re using the tx-report-queue as an input for Onyx#2015-07-2012:35robert-stuttafordi remember from the pedestal talk at Cwest 2013, one of the architecture diagrams had Datomic doing this as well. i distinctly remember Tim Ewald speaking very highly of the capability. would be great if it were a little easier to start and stop listening to it in a repeatable way.#2015-07-2013:06stuartsierra@robert-stuttaford: Should be easy enough to ignore, just close your channel.#2015-07-2013:22lowl4tencyHi all simple_smile#2015-07-2013:24lowl4tencyGuys, I have a launchgroup with datomic transactor. If I want 2 transactors for HA purposes, which endpoint address should I use? Can I put up a loadbalancer and balance the requests? I mean aws ElasticLoadBalancer#2015-07-2013:25lowl4tency@bkamphaus: sup#2015-07-2013:26stuartsierra@lowl4tency: The Transactors don't need to be load-balanced. The active Transactor writes its location into Storage and the Peers get it from there.#2015-07-2013:27stuartsierraYou can't have two active Transactors for the same database. That's the point simple_smile#2015-07-2013:30lowl4tencystuartsierra: thanks a lot. Do you have any experience with AWS and datomic? I'm trying to get the datomic endpoint from a ScalingGroup with CloudFormation but don’t see any method for GetAtt and privateIP of the instances of the LaunchGroup#2015-07-2013:31lowl4tencyOne method I have is to use the aws cli and get the list of instances, but that feels like reinventing the wheel#2015-07-2013:32stuartsierra@lowl4tency: Not sure I understand your question. The only "Datomic endpoint" that matters is the Storage URI. Peers will automatically use that to find the Transactor.
The AWS set-up scripts in the Datomic distribution manage the rest.#2015-07-2013:33lowl4tencystuartsierra: endpoint, i mean the address and port of the datomic transactor ec2 instance#2015-07-2013:33lowl4tencyto pass it later to an application#2015-07-2013:35lowl4tencySo, let me clarify: I have a CloudFormation stack which runs the datomic transactor as an ec2 instance (got a template from the datomic generator), I have a stack with the application, and I want to pass the datomic transactor’s address to the application.#2015-07-2013:36lowl4tencyThe Datomic Transactor’s running in an AutoscalingGroup, so I’m not able to get the ec2 instance's private address#2015-07-2013:37stuartsierra@lowl4tency: I'm not sure how to get that. But you don't need it just to use Datomic. It is all handled automatically by the Transactor, Storage, and the Peer Library.#2015-07-2013:41lowl4tencystuartsierra: I don’t need it if I just run a transactor. simple_smile#2015-07-2016:26lowl4tencyhm, so, what about failover? If one transactor fails?#2015-07-2016:26lowl4tencythe one I connected to#2015-07-2016:35bhagany@lowl4tency: failover is coordinated through the storage, so you don't have to worry about it#2015-07-2016:35bhaganythe transactors store a heartbeat in storage.
when the primary fails, the secondary notices and takes over transparently#2015-07-2016:36bhaganythere will be a brief window in which transact will fail, though#2015-07-2016:36bhagany(afaik)#2015-07-2016:40lowl4tencybhagany: but how will my app know I have a new transactor?#2015-07-2016:41bhaganypeers know what transactor to use because they look at the storage too#2015-07-2016:41lowl4tencyLook, my transactor failed, autoscaling kills the old one and runs a new one, but the app is still configured with the old transactor#2015-07-2016:41bhaganyno, it's not#2015-07-2016:41lowl4tencyhm#2015-07-2016:41bhaganythat information is in the storage#2015-07-2016:43lowl4tencyso, I don’t need to pass the datomic URI?#2015-07-2016:43bhaganythe datomic URI specifies the storage#2015-07-2016:43lowl4tencyah#2015-07-2016:43lowl4tencyif I have postgres rds as the backend, I need to pass the rds endpoint?#2015-07-2016:44lowl4tencyas the datomic uri, I mean#2015-07-2016:44bhaganyI haven't used postgres as a storage, but I'm pretty sure, yes#2015-07-2016:44lowl4tencybhagany: that clarifies it all simple_smile#2015-07-2016:44bhaganyexcellent simple_smile#2015-07-2016:45lowl4tencybhagany: do you use dynamodb ?#2015-07-2016:45bhaganyI will be, I'm currently developing our first datomic-backed service#2015-07-2016:46bhaganybut I haven't gone through all the prod deployment stuff yet#2015-07-2016:46lowl4tencybhagany: I'm almost done with datomic on aws simple_smile#2015-07-2016:47lowl4tencyIt’s really awesome#2015-07-2016:47bhaganyexciting! I hope it goes well#2015-07-2016:48lowl4tencybhagany: it’s basically finished already#2015-07-2016:48lowl4tencyRunning an app and going to test updates and other related processes#2015-07-2016:49bhaganyI see.
Mine is a user-facing design system#2015-07-2016:49bhaganyfor self-serve e-commerce#2015-07-2021:01ghadiDesign question: Given a process that is stepping through the Log, how to identify all changes that happened to entities in a particular partition?#2015-07-2021:02ghadiI'd like to ignore datoms related to schema, as well as other partitions I don't care about#2015-07-2021:03ghadiLooking for mainly the appearance of entities, or the transaction of datoms related to entities#2015-07-2021:21stuartsierra@ghadi: You can get the partition from an entity ID with d/part, then resolve it to a keyword with d/ident.#2015-07-2021:22ghadithanks, stuartsierra . Is there a better approach than filtering through the log?#2015-07-2021:23ghadilike using d/q and making ad hoc queries against a db and its successor/predecessor?#2015-07-2021:23ghadi(I'm trying to broadcast to other systems changes that happen to certain types of entities)#2015-07-2021:24stuartsierra@ghadi: It really depends on the specifics of your entities and your queries.#2015-07-2021:25ghadiYeah. I don't care what the change actually is, I'm fine with re-publishing the entire representation of the entity (as opposed to publishing a specific delta)#2015-07-2021:25stuartsierraThe tx-report-queue and the Log, together, give you a guarantee you'll see every change when it happens.#2015-07-2021:26stuartsierraThen it's up to you what you want to do with that information.#2015-07-2021:34ghadistuartsierra: I just grokked this: ` :where [(tx-ids ?log ?t1 ?t2) [?tx ...]]
[(tx-data ?log ?tx) [[?e]]]` I think that will be sufficient, if I filter on ?e#2015-07-2022:10ghadifollow up -- using query to join the d/log against a particular d/db.#2015-07-2022:10ghadiworks swell.#2015-07-2105:55robert-stuttafordghadi: just be aware that ‘queries’ against the log don’t cache like index queries do#2015-07-2115:08jballancWhat would be the best way to log a request for the Datomic AWS support scripts?#2015-07-2119:48arohnertrying to debug a slow query, is there a way to find out how many segments were pulled down to do a query?#2015-07-2119:51stuartsierra@arohner: Run it on its own, you can look at the :StorageGetMsec and :StorageGetBytes metrics in the Peer.#2015-07-2119:53stuartsierraThe :count is the number of requests to Storage.#2015-07-2119:53arohner@stuartsierra: my local DB isn’t sending to cloudwatch, is there another way to get metrics?#2015-07-2119:53stuartsierra@arohner: Yes, they are logged under the datomic.process-monitor Logger via SLF4J.#2015-07-2119:54stuartsierraYou can also attach your own metrics callback function.#2015-07-2119:55arohnerhow? ctrl+f for ‘metrics’ and ‘callback’ are failing me in the API docs#2015-07-2119:55stuartsierrahttp://docs.datomic.com/monitoring.html#2015-07-2119:56stuartsierraSpecifically http://docs.datomic.com/monitoring.html#Custom#2015-07-2120:40jballanc@stuartsierra: would you be the person to handle a (very) simple request re: the AWS startup.sh script?#2015-07-2120:48stuartsierra@jballanc: No, I don't know anything about that. I recommend the Datomic mailing list.#2015-07-2120:48jballanccool, thanks!#2015-07-2121:02arohnerare there any docs on how attrs of type :bytes are indexed?#2015-07-2121:03arohnerI’m using seek-datoms on :bytes, and it appears the iteration starts well before the value I pass in#2015-07-2121:06arohneri.e. 
it appears the iteration starts at the first value in the attribute index, rather than right before the value#2015-07-2121:32stuartsierra@arohner: Possibly related to https://groups.google.com/d/topic/datomic/zhDiEqNPb3A/discussion#2015-07-2121:55arohner@stuartsierra: yeah, that looks suspicious. I have a gist testcase incoming#2015-07-2121:59arohnerhttps://groups.google.com/forum/#!topic/datomic/JqXcURuse1M#2015-07-2218:13robert-stuttaford@stuartsierra: just curious, but is using https://clojuredocs.org/clojure.core/seque a valid strategy with tx-report-queue?#2015-07-2219:29stuartsierra@robert-stuttaford: I don't know. I never really understood the use-cases for seque.#2015-07-2317:45lowl4tencyHm#2015-07-2317:46lowl4tencyI’m trying to download datomic distro#2015-07-2317:46lowl4tencydamn#2015-07-2317:46lowl4tencywget error: 20 redirections exceeded.#2015-07-2318:19bostonaholicare schema alterations not supported for datomic:mem databases?#2015-07-2318:20marshall@lowl4tency: you can increase the # of allowed redirects by including#2015-07-2318:20marshall--max-redirect=number
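ghadi’s `tx-ids`/`tx-data` snippet above can be filled out into a complete query, following stuartsierra’s `d/part` + `d/ident` suggestion. A hedged sketch — the partition keyword and function layout are illustrative, not from the original conversation:

```clojure
(require '[datomic.api :as d])

;; Find entities touched between times t1 and t2 whose partition
;; resolves to a given ident (e.g. :my/partition, hypothetical).
(defn changed-entities [conn t1 t2 part-ident]
  (d/q '[:find [?e ...]
         :in $ ?log ?t1 ?t2 ?part
         :where
         [(tx-ids ?log ?t1 ?t2) [?tx ...]]  ; transactions in range
         [(tx-data ?log ?tx) [[?e]]]        ; datoms of each tx
         [(datomic.api/part ?e) ?p]         ; partition id of the entity
         [(datomic.api/ident $ ?p) ?part]]  ; resolve to a keyword
       (d/db conn) (d/log conn) t1 t2 part-ident))
```

Filtering on the partition this way also drops schema datoms, since schema entities live in :db.part/db.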
#2015-07-2318:21robert-stuttafordbostonaholic: what happens when you try?#2015-07-2318:22robert-stuttafordno mention of a restriction in the docs#2015-07-2318:22bostonaholicdatomic.impl.Exceptions$IllegalArgumentExceptionInfo: :db.error/not-an-entity Unable to resolve entity: :foo in datom [:foo :db/ident :bar]#2015-07-2318:23robert-stuttafordmind sharing the code that produced that?#2015-07-2318:23robert-stuttafordthe + button on the slack text input has a snippet paste#2015-07-2318:23bostonaholic{:db/id :foo
:db/ident :bar}
#2015-07-2318:23bostonaholicthat's my "alteration"#2015-07-2318:24robert-stuttaford{:db/id [:db/ident :foo] :db/ident :bar}#2015-07-2318:24robert-stuttafordtry that#2015-07-2318:24lowl4tencymarshall: thank you, i’ve already got it through my laptop simple_smile#2015-07-2318:26bostonaholic@robert-stuttaford: Unable to resolve entity: [:db/ident :foo] in datom [[:db/ident :foo] :db/ident :bar]#2015-07-2318:27bostonaholica little backstory, I'm trying to load the schema for tests. transacting the schema is happening through https://github.com/rkneufeld/conformity#2015-07-2318:27robert-stuttafordwhat does (d/pull db '[*] [:db/ident :foo]) produce?#2015-07-2318:28robert-stuttafordah. altering schema has specific syntax. also, conformity will only ever load a given named norm once#2015-07-2318:28robert-stuttafordhttp://docs.datomic.com/schema.html search Alter schema#2015-07-2318:29bostonaholicyeah, I use the alter schema with conformity just fine for the production app
:app/v3 {:txes [[{:db/id :foo
                  :db/ident :bar}]]}
#2015-07-2318:30robert-stuttafordok. were i in your shoes, i’d first validate the db actually has :foo in it. does it? 😁#2015-07-2318:30robert-stuttafordstranger things have happened to me#2015-07-2318:30bostonaholicyeah, that's where I'm heading now#2015-07-2318:37bkamphaus@bostonaholic: schema alteration on mem should work — example
(ns datomic-manual-tests.mem-schema-alteration
  (:require [datomic.api :as d]))

(def db-uri "datomic:mem://mem-schema-alteration")
(d/create-database db-uri)
(def conn (d/connect db-uri))

(def schema [{:db/id (d/tempid :db.part/db)
              :db/ident :person/name
              :db/valueType :db.type/string
              :db/cardinality :db.cardinality/one
              :db.install/_attribute :db.part/db}])
@(d/transact conn schema)

(def pname-id
  (-> (d/db conn)
      (d/pull [:db/id] :person/name)
      :db/id))

(def alter-tx [{:db/id :person/name
                :db/ident :person/first-name
                :db.alter/_attribute :db.part/db}])
@(d/transact conn alter-tx)

(-> (d/pull (d/db conn) [:db/ident] pname-id)
    :db/ident)
; :person/first-name
#2015-07-2318:38robert-stuttafordyou just don’t need to sync to wait for results like you do with durable storage#2015-07-2318:39bostonaholicthanks @bkamphaus#2015-07-2318:40bostonaholicI wonder if it's because my alteration tx doesn't include :db.alter/_attribute :db.part/db#2015-07-2318:44bkamphausWell, if you change e.g. cardinality, uniqueness, index you should get this specific error:
java.lang.IllegalArgumentException: :db.error/invalid-attribute Schema change must be followed by :db.install/attribute or :db.alter/attribute […]
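bkamphaus’s point — structural changes such as cardinality need the alter marker — might look like this in map form. A sketch only: the attribute name is hypothetical and an existing connection `conn` is assumed:

```clojure
(require '[datomic.api :as d])

;; Changing cardinality is a schema *alteration*, so the transaction
;; must assert :db.alter/_attribute. A pure rename (asserting a new
;; :db/ident on the attribute entity) needs no alter marker.
@(d/transact conn
  [{:db/id :person/nickname              ; hypothetical existing attribute
    :db/cardinality :db.cardinality/many
    :db.alter/_attribute :db.part/db}])
```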
#2015-07-2318:45bostonaholicyeah, I'm just renaming#2015-07-2318:46bkamphausrenaming shouldn’t require alter then, actually: http://docs.datomic.com/schema.html#renaming-an-identity#2015-07-2318:46bostonaholicI wonder if this is because conformity isn't transacting each norm in the order I'm assuming it is#2015-07-2318:47bostonaholicsince it's a map of norm-name:transactions#2015-07-2318:48bostonaholicso when it's trying to rename a datom :foo and conformity hasn't transacted the definition of :foo#2015-07-2318:49stuartsierra@bostonaholic: As I recall, Conformity uses declared dependencies of schema sections to determine order.#2015-07-2318:51bostonaholic@stuartsierra: hmm, I can't find in the docs how I declare a dependency#2015-07-2318:53stuartsierra@bostonaholic: Yeah, neither can I. I must be thinking of something else.#2015-07-2318:55stuartsierraI guess it's transacting them in the order you give to ensure-conforms#2015-07-2318:58bostonaholicI give ensure-conforms my map that looks like:
{:app/v1
 {:txes [[{...}
          {...}]]}
 :app/v2
 {:txes [[{...}]]}}
#2015-07-2319:00bostonaholicand since it's a map, conformity cannot guarantee that :app/v1 is transacted before :app/v2#2015-07-2319:00bostonaholicthanks for your help everyone#2015-07-2321:13arohnerI’m trying to d/datoms :avet :foo/bar, where :foo/bar is an attr of type :db.type/ref. It’s failing with “:attribute-not-indexed”. I thought :db.type/ref was always indexed, and reverse indexed?#2015-07-2321:15marshallref types are indexed in VAET#2015-07-2321:15marshall@arohner: ^#2015-07-2407:42a.espolovMust I pass basis-t in the REST transact request?#2015-07-2407:43a.espolovhowever when I use basis-t, the transaction still doesn’t succeed (#2015-07-2413:36marshall@a.espolov: I would suggest trying your transaction from the web UI to ensure your data and parameters are correct. Also, errors from the REST service will be in the transactor logs and/or the peer logs#2015-07-2413:36marshall@a.espolov: Finally, for a transaction, you don’t use the :query-params, just the :tx-data map:
http://docs.datomic.com/rest.html#transact#2015-07-2413:37a.espolovkey :query-params for client from http-kit#2015-07-2413:37marshall@a.espolov: ah, sorry, misread#2015-07-2413:38marshall@a.espolov: Can you verify your txn works from the web UI?#2015-07-2413:38a.espolovthe web UI worked fine#2015-07-2413:40marshall@a.espolov: the transactor logs should include additional information about the error. Also, can you build the transaction as a URL-encoded string directly and see if that works (instead of through http-kit)#2015-07-2414:49lowl4tencyguys, I’m trying to restore a datomic backup to transactor with psql as backend#2015-07-2414:50lowl4tencyorg.postgresql.util.PSQLException: The server requested password-based authentication, but no password was provided#2015-07-2414:51lowl4tencyMy URI looks like datomic:sql://example?jdbc:postgresql://datomic.example.com:5432/datomic?user=datomic&password=datomic#2015-07-2414:51lowl4tencyWhat’s wrong with my restore-db command?#2015-07-2414:53stuartsierra@lowl4tency: As a first step, I would suggest looking at the output of the Transactor when you start it. It prints the Datomic URI with a placeholder for the database name. Make sure that matches your URL in restore-db.#2015-07-2414:54stuartsierra@lowl4tency: Also, make sure to quote the URI to prevent the shell from interpreting characters like ? and &#2015-07-2414:54lowl4tencydouble quote?#2015-07-2414:54stuartsierraEither double-quote or single-quote should work in Unix-style shells, double-quote only in Windows.#2015-07-2414:55lowl4tency"Starting datomic:sql://<DB-NAME>?jdbc:postgresql://datomic.example.com:5432/datomic?user=datomic&password=datomic#2015-07-2414:55lowl4tencyI’ve got clean transactor and psql, I’ve created only datomic and datomic_kv dbs.#2015-07-2414:56lowl4tencySo I don’t need to pass datomic db name?
only postgresql db?#2015-07-2414:56lowl4tencyOr is my transactor working incorrectly?#2015-07-2414:57stuartsierra@lowl4tency: In the restore-db command, replace <DB-NAME> in that URI with the name of the (Datomic) database you want to restore into.#2015-07-2414:57lowl4tencyit was done#2015-07-2414:57lowl4tencyHm#2015-07-2414:58lowl4tencystuartsierra: is it critical? I’ve got a CNAME for my postgresql rds instance#2015-07-2415:00lowl4tencyOh, passed the A record for the db and quoted the address#2015-07-2415:00lowl4tencyWorks#2015-07-2415:00lowl4tencySo, just to clarify, CNAMEs are not working?#2015-07-2415:01stuartsierra@lowl4tency: I don't know the answer to that. I do know that the Transactor's configuration file should have the correct URL to access the SQL database as the sql-url property.#2015-07-2415:02stuartsierraAs far as I know, if you can connect to the JDBC URI using Java's JDBC libraries, then Datomic should be able to connect to the same URI.#2015-07-2415:02lowl4tencyWill experiment so#2015-07-2415:02lowl4tencyJust it’s important for me: I’ve got a CNAME for RDS, and if the application can’t use the CNAME to connect to the datomic db, that’s sad simple_smile#2015-07-2415:03lowl4tencystuartsierra: okay, got it, thanks a lot!#2015-07-2415:03stuartsierraYou're welcome.#2015-07-2504:02shofetimIs there an idiomatic, or suggested pattern for modeling "effective at" or "effective between" time data in datomic?#2015-07-2511:14gjnoonanhttps://vimeo.com/130614731 Another great talk by @stuarthalloway#2015-07-2517:13val_waeselynckSharing a moment of joy here: migrating my web server from MongoDB to Datomic, I just rewrote my multi-level authorization system in pure Datalog.
Rules freaking rule.#2015-07-2611:37robert-stuttaford\o/ @val_waeselynck#2015-07-2611:39robert-stuttaford@shofetim: if you’re ok with System Time being synonymous with Domain time - that is, datomic’s transaction time is good enough for your app’s notion of the time when things occurred, then you can simply use d/as-of and d/since to constrain queries about time#2015-07-2611:40robert-stuttafordif you have timestamps that differ from Datomic’s transaction time, then you can annotate transactions as you write them#2015-07-2611:41robert-stuttaford@(d/transact conn
[[:db/add (d/tempid :db.part/tx) :your.domain/timestamp ts]
... ])
#2015-07-2611:41robert-stuttafordanything you write to a tempid for :db.part/tx will be asserted on the reified transaction entity directly#2015-07-2611:42robert-stuttafordthen, you can use d/filter to write your own as-of and since filters. i’ve not yet done this personally, so i don’t know how to do that performantly, yet.#2015-07-2612:20val_waeselynckCan I pass a set of 3-tuples as a data source to a Datalog query and treat it exactly as I would a Datomic database value? I think I read it's possible, but can't really get it to work#2015-07-2612:23val_waeselynckNvm I just had to remove the ? from datasources names#2015-07-2618:27kachayev@val_waeselynck: here is a nice set of examples from Stuart for doing this: https://gist.github.com/stuarthalloway/2645453#2015-07-2621:41val_waeselynck@kachayev thanks!#2015-07-2707:19kachayevI wonder if it’s possible to know in advance how many segments will be fetched from storage during query execution?#2015-07-2707:19kachayevI mean I understand that it depends on query itself, schema, data that was already fetched etc#2015-07-2707:19kachayevSomething like “select explain"#2015-07-2708:24robert-stuttafordi saw someone had made a library to attempt to count datoms at each clause, but that was before all the new stuff was added#2015-07-2708:25robert-stuttafordfrom reading the questions the author was asking and the sort of answer he was getting from Cognitect, it was directly in Datomic’s secret-sauce and no real progress was made#2015-07-2708:25robert-stuttafordif such a facility exists, it’ll be because Cognitect provides it#2015-07-2708:38kachayevsure#2015-07-2709:33robert-stuttafordanyone using the pull spec have a simple way to flatten the result such that all nested maps are merged with the root map?#2015-07-2713:28akiel@kachayev: i think the best thing you can do is monitoring your storage. as far as i know, index segments have a fixed size in kilobytes.
so even your data influences how many segments are needed for a query. the peer also caches all segments. so running a reasonably small query again should not reach out to the storage twice. in queries, it’s important to order the clauses by the number of datoms they bind. by having the most specific clause first, you will get the best performance.#2015-07-2713:29kachayevright#2015-07-2713:29kachayevthe question is that it’s hard to keep track of "order the clauses by the number of datoms” when you have many queries#2015-07-2713:29kachayevand/or dynamically built queries#2015-07-2713:31kachayevit’s also hard to “play” with data locality - it just takes too much time to do: change schema, load everything, run a lot of queries, then analyze charts about network consumption & storage performance (and they are not that obvious usually)#2015-07-2713:32robert-stuttaforddynamically built queries are not such a good idea. d/q caches the preparatory work it does for its first param. better to have standard queries with dynamic :in values#2015-07-2713:33robert-stuttafordhear you on the ease of play thing. immutability does have its downsides simple_smile#2015-07-2713:34robert-stuttafordi’ve just checked, we have 500+ invocations of d/q in our projects#2015-07-2713:34robert-stuttafordand we’ve done ok with perf testing each one as we go#2015-07-2713:35robert-stuttafordordering clauses such that :in values are handled early and :find values late, and then testing swapping things around in the middle#2015-07-2713:36robert-stuttafordon network traffic, you can stick memcached in the middle to get a big overall read perf boost#2015-07-2713:39kachayevdidn’t get the idea about “immutability” and “data locality” (in terms of “immutability downsides”). orthogonal concepts as for me#2015-07-2714:03akiel@kachayev: right a query planner may be helpful in bigger projects - I once heard from Rich that he likes the control one has when there is no query planner.
I think he was bitten by some SQL query planner in the past. If he still thinks the same and paying customers do not complain a lot, do not expect a query planner very soon.#2015-07-2714:05akiel@kachayev: I expect data locality is also not easy to track down inside say Oracle accessing files in a SAN. Other than that there is better tooling around.#2015-07-2714:08kachayevI can’t say that it’s a kind of “complain”, just curiosity. I understand that most modern databases don’t provide any tooling for this as well, so it’s not a “must-have” and definitely not a “deal-breaker”.#2015-07-2714:09robert-stuttafordi was talking more to the busy-work of having to recreate databases with new schema etc to test different setups#2015-07-2715:57jelleaIs the Datomic documentation available offline (as pdf, dash docset, repo)?#2015-07-2716:00tcrayford@akiel: note: not all segments are cached, log segments aren't, neither are things from the gc#2015-07-2717:22akiel@tcrayford: what do you mean with gc?#2015-07-2717:24tcrayfordthe "list of segments to gc" is stored in storage somewhere. When you run d/gc-storage it has to query that stuff and it's not cached#2015-07-2717:52akielah ok - so this is only a maintenance thing#2015-07-2718:21robert-stuttaford@jellea: nope. http://docs.datomic.com is it#2015-07-2814:04ljosaDoes anyone here use Datomic with multiple Couchbase clusters and cross-datacenter replication (XDCR)?#2015-07-2818:53stuartsierra@ljosa: Datomic is, in general, not designed to support cross-datacenter operation with one Transactor pair.#2015-07-2818:57stuartsierraCross-datacenter replication strategies usually allow data to diverge between the two datacenters, with some kind of arbitrary rule for conflict resolution. This is not a strong enough guarantee to preserve Datomic's consistency model.#2015-07-2818:59stuartsierraFor example, for Couchbase, http://docs.couchbase.com/admin/admin/XDCR/xdcr-architecture.html
'XDCR … provides eventual consistency across clusters. If a conflict occurs, the document with the most updates will be considered the “winner.” '#2015-07-2819:00ljosaHmm … but doesn’t Datomic store immutable segments, always with new keys?#2015-07-2819:00tcrayfordnot for the roots#2015-07-2819:00tcrayfordthe roots require CAS or consistent put#2015-07-2819:01tcrayford(they just contain uuids to the immutable segments afaik)#2015-07-2819:01stuartsierraYes, as @tcrayford says, there is one important piece that is not immutable: the pointer to the "root" of each database value.#2015-07-2819:01tcrayfordack, wrong term s/root/"pointer to the root"/g#2015-07-2819:02tcrayfordthere are afaik like 4-6 or something of those as well, not just a single thing#2015-07-2819:02stuartsierraAlso, the immutable segments are nodes in a tree structure… if the tree has a new root but not all the leaves have been replicated across the datacenters, you would see inconsistent results. Datomic doesn't allow this, so it would appear as unavailability.#2015-07-2819:02tcrayford(uh: db heartbeat, log tail, log, indexes, gc)#2015-07-2819:03stuartsierraBasically, you can't get Datomic's strong consistency guarantees and cross-datacenter (or cross-region) replication at the same time. simple_smile#2015-07-2819:03tcrayfordphysics, a thing#2015-07-2819:20ljosaI believe conflicts cannot happen in this case because the replication is one-way from the cluster that Datomic writes to. But I see that point that Datomic will be confused if the mutable documents are updated in the wrong order or if the leaves of the tree are delayed. Do you know how Datomic would react in such cases? Would it throw an exception?
(That might be OK: from playing with Datomic and XDCR, it seems that replication delays are usually masked because recent datoms are cached in the memory index, which is transferred directly from the transactor to the peers.)#2015-07-2819:21ghadiare entities comparable?#2015-07-2819:21ghadilike if I access a ref (through navigation) on two different database values.#2015-07-2819:22ghadiI should just test it out... but chat room#2015-07-2819:24stuartsierra@ljosa: In general Datomic will always prefer an error to returning inconsistent results. But you should be aware that cross-datacenter replication is not a supported use case so anything it does is, by definition, undefined behavior.#2015-07-2819:25ljosaunderstood. thank you for good answers.#2015-07-2819:26ljosaI suppose we’ll have to get by with a single Couchbase cluster in a single AZ and hope that caching in the peers together with the memory index is enough to smooth over AWS glitches.#2015-07-2819:27ljosaI suppose Datomic must be using strongly consistent reads when it’s running on DynamoDB?#2015-07-2819:37tcrayfordfor the pointers to roots, yeah, CAS too iirc#2015-07-2819:37tcrayford(I think dynamo supports it)#2015-07-2820:42arohneryes, dynamo supports strongly consistent reads#2015-07-2820:44arohnerand CS#2015-07-2820:44arohners/CS/CAS/#2015-07-2820:45arohnersome systems (not sure if dynamo is one), only have problems w/ consistent reads when (ab)using mutability#2015-07-2820:45arohnerif you never update-in-place, it will give you correct results for “give me the segment w/ this UUID”, even when using eventual consistency#2015-07-2908:55robert-stuttafordshould i be seeing a different d/basis-t value for an d/as-of database in the past?#2015-07-2908:56robert-stuttafordno matter what i do, i’m always getting the same basis-t and next-t values back#2015-07-2909:06robert-stuttafordanyone have any ideas?#2015-07-2909:51tcrayford@robert-stuttaford: I think it's because basis-t is an implementation detail 
that's leaking through as-of? Recall the part of @stuarthalloway's recent ete datalog talk about the history filters. as I understand things, as-of etc are implemented as things that a) filter the index b) merge the live index and parts of the history index. I don't think any of those actually needs to affect basis-t#2015-07-2909:57robert-stuttafordthanks, tom#2015-07-2909:59tcrayford@robert-stuttaford: uh, bad terminology there. When I say "live index" in that para, I mean as opposed to the historical indexes, not the peer in memory stuff#2015-07-2909:59robert-stuttafordyeah#2015-07-2909:59robert-stuttafordi kinda got that#2015-07-2909:59tcrayfordsimple_smile#2015-07-2914:28caskolkm@(d/transact conn [[:db.fn/retractEntity (BigInteger. company-id)]])
causes a:#2015-07-2914:28caskolkmdb.error/reset-tx-instant You can set :db/txInstant only on the current transaction#2015-07-2914:28caskolkmDoes anyone know what I'm doing wrong?#2015-07-2914:31bostonaholic@caskolkm: I was getting the same error earlier this week. Unfortunately, I don't believe I solved it. (Just deleted and recreated my local db)#2015-07-2914:38bostonaholicI know that probably doesn't help much 😜#2015-07-2914:41bostonaholicplus I usually do (Long. eid)#2015-07-2914:55bkamphaus@robert-stuttaford: kind of late, but if you want the as-of point you need to use as-of-t — http://docs.datomic.com/clojure/#datomic.api/as-of-t, likewise since requires since-t http://docs.datomic.com/clojure/#datomic.api/since-t#2015-07-2915:25robert-stuttaford@bkamphaus: thanks. i do recommend you update the docstrings for basis-t and next-t, because they are not correct#2015-07-2915:26robert-stuttaford"Returns the t of the most recent transaction reachable via this db value.”#2015-07-2915:27robert-stuttafordperhaps just a note that its result is not constrained by d/as-of or d/since#2015-07-2915:47caskolkm@bostonaholic: weird.. Hopefully someone else knows the answer :)#2015-07-2915:48bostonaholicI just tried it again and it worked...#2015-07-2915:50bkamphaus@robert-stuttaford: I think you’re correct that there may be doc improvements that would make sense around using filters, but I don’t think that is necessarily one of them. Sorry, I’m still working on how to best phrase it, but the db value returned by a call to a filter (`as-of`, since, or your own) - is a db value with the filter applied. 
the as-of-t and basis-t are different, but basis-t is still the correct basis of the filtered db, even though the filter may filter out the most recent (or several of the most recent) tx(es).#2015-07-2915:52bkamphausanother angle of this, that the db-after returned by with for an as-of db filter filters out the prospective data, is also surprising.#2015-07-2915:55bkamphaus@bostonaholic: and cc @caskolkm if you’re able to repro I’d be curious to see, but you’re correct that entity id’s should be java.lang.Long and I’d want to see it repro’d using the correct type of arg to retractEntity#2015-07-2916:02robert-stuttaford@bkamphaus: thank you for your feedback#2015-07-2916:03robert-stuttafordnot sure what the right way forward is, i just know that it’s not obvious that this is the case, and it can catch people. certainly caught me, and i’ve been using Datomic for a long time#2015-07-2916:06caskolkm@bostonaholic: can you show me your code? #2015-07-2916:08bostonaholicit was the same as yours, just (Long. eid) is different#2015-07-2916:08caskolkmOk, i will try it tomorrow#2015-07-2916:30tcrayford@robert-stuttaford: as somebody who's also been using datomic for a long time, I agree that it's confusing 😞#2015-07-3003:39erichmondis GPG broken on el capitan? I am trying to work with datomic-pro and am getting "pg: gpg-agent is not available in this session” even though that daemon is indeed running#2015-07-3006:21caskolkm@bkamphaus @bostonaholic: still the same error using: @(d/transact conn [[:db.fn/retractEntity (Long. company-id)]])#2015-07-3012:42mitchelkuijpers@bkamphaus: @bostonaholic I found our problem with retracting an entity somehow we saved the entities in the db.part/tx
which is obviously wrong 😅#2015-07-3014:21bkamphaus@mitchelkuijpers: that would do it. Note that to have gotten this outcome using e.g. the map form, you’d have specified a tempid for :db.part/tx in the same map. These attributes then become attributes on the transaction entity.
For annotating a transaction in the map form case you would need to supply separate maps for the attributes intended as tx annotations and attributes intended for a new or existing entity. Example (though Java) at: http://docs.datomic.com/transactions.html#reified-transactions#2015-07-3014:34erichmondFollow-up : It was because I didn’t fully nuke the gpg installed by brew before installing the one recommended by leiningen.#2015-07-3014:44maxI just realized that postgres is using 30gb of storage for a small app#2015-07-3014:45maxI assume this is because I haven’t been garbage collecting, or am I messing something up bad#2015-07-3015:05erichmondWhat is the best tutorial for someone who wants to use datomic + clojure#2015-07-3015:05erichmondthese docs are a mess#2015-07-3015:22bkamphaus@erichmond: the day-of-datomic repo is at: https://github.com/Datomic/day-of-datomic — if you’re talking about the tutorial on the docs page, it’s available in clojure in the datomic directory as mentioned here: http://docs.datomic.com/tutorial.html#following-along — if you’re looking at query specifically: http://docs.datomic.com/query.html points to the clojure examples from day-of-datomic here: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/query.clj#2015-07-3015:23erichmond@bkamphaus: thanks, also, this datomic for 5 year olds is helping too#2015-07-3015:25marshall@erichmond: We also have the full Day of Datomic training session as a series of videos here: http://www.datomic.com/training.html#2015-07-3015:25bhaganyI really got a lot out of those videos, fwiw#2015-07-3015:26meowI've heard good things about http://www.learndatalogtoday.org/#2015-07-3015:26erichmondthanks, I’ll check out the videos too.#2015-07-3015:27bkamphaus@max: you should be doing some gc http://docs.datomic.com/capacity.html#garbage-collection — you may also have to take additional steps for postgres (and other storages) to reclaim data, e.g.
VACUUM https://www.postgresql.org/docs/9.1/static/sql-vacuum.html#2015-07-3015:27erichmondActually, all the querying and whatnot is pretty straightforward to me#2015-07-3015:27maxso it looks like my vm ran out space (I had a 40gb vm)#2015-07-3015:27maxI upped the disk space#2015-07-3015:27maxand my database size is only growing#2015-07-3015:27maxand the transactor is unavailable#2015-07-3015:27erichmondI was looking more for “10 steps to firing up a mem based datomic connection” “10 steps to firing up a dev based datomic connection + datomic console”#2015-07-3015:27maxwill this resolve itself?#2015-07-3015:28erichmondI’m realizing now, if I want to run mem, I don’t even seem to need to download that datomic.zip, etc#2015-07-3015:29bkamphaus@max not enough info to tell. can you tail the logs to see if the txor is busy? e.g. indexing#2015-07-3015:29maxbkamphaus: debugging this I also found that my only log file is log/2015-06-26.log.#2015-07-3015:30maxI kept the default logback.xml#2015-07-3015:30maxso that’s another issue#2015-07-3015:30maxis there another place they could be?#2015-07-3015:30bkamphaus@max: does your transactor properties file specify a different log location?#2015-07-3015:33maxbkamphaus: ah thanks. Okay so it’s indexing#2015-07-3015:35maxaw crap#2015-07-3015:35maxI may have done a bad thing.#2015-07-3015:36maxI accidentally shoved some ~860kb strings into datoms#2015-07-3015:36maxam I hosed here?#2015-07-3015:38bkamphauswell it definitely can kill perf stuff, and will depend on how your system is provisioned. But yeah, you definitely want to avoid large blobby stuff in datoms. options for recovery — do you have a recent backup? You can also excise that stuff.#2015-07-3015:38bkamphausare those fields in :avet? i.e. 
indexed -- that’s when it will hurt the most by far.#2015-07-3015:38maxthey are indexed#2015-07-3015:42maxbkamphaus: in the future, if i want to store this, doing noHistory and without index would be a bad idea still?#2015-07-3015:43bkamphausless of a bad idea, but I’d still avoid it. Indexing it guarantees that it will be a huge perf drag. Your best option for blob/document type stuff is to put in storage directly and store the pointer/ref/key w/e for it in Datomic in the datom#2015-07-3015:44bkamphausor a file store, e.g. s3#2015-07-3016:04bkamphaus@arohner: Stu has replied re: your questions/issues on bytes reported here in slack and on group https://groups.google.com/forum/#!topic/datomic/JqXcURuse1M#2015-07-3016:04arohner@bkamphaus: yeah I just saw, thanks#2015-07-3016:04bkamphausDatomic 0.9.5206 has been released https://groups.google.com/forum/#!topic/datomic/kEAqsjeeMaE#2015-07-3016:19erichmond@bkamphaus: do you work on datomic for cognitect?#2015-07-3016:39bkamphaus@erichmond: yes, I’m on the Datomic team at Cognitect.#2015-07-3016:39erichmondvery cool!#2015-07-3016:43bkamphausI agree that it’s very cool to be on this team. simple_smile Also, typing is hard.#2015-07-3017:01maxbkamphaus: thanks for your help so far.#2015-07-3017:02maxI tried to run a garbage collect and an excision of one of the attributes#2015-07-3017:02maxmy database size is still growing (33->46 gb in the past hour)#2015-07-3017:02maxand datomic is running at 100% cpu#2015-07-3017:03maxhere’s a tail of the log#2015-07-3017:03maxso it looks like I’m still indexing?#2015-07-3017:04bkamphaus@max where you’re at, you’re waiting on indexing to push through — it will have to complete before space can be reclaimed and it will probably take longer for excision, etc.
(more indexing necessary) — gc also competes for transactor resources — cpu/mem.#2015-07-3017:04bkamphausfrom the log tail, seems that way#2015-07-3017:04maxhow long can I expect to wait and is there anything I can do to speed it up#2015-07-3017:05maxmy hd is now 160gb, can I be reasonably sure I won’t hit that?#2015-07-3017:05bkamphaushow many attr val pairs were targeted by the excision?#2015-07-3017:05max1#2015-07-3017:05maxI was just doing a test on one datom.#2015-07-3017:14bkamphaus@max can you grep for successfully completed indexing jobs, e.g. :CreateEntireIndexMsec metrics, index specific completion messages, grep ":[ea][aev][ve]t, :phase :end" *.log, possible failures (just grep for AlarmIndexingFailed).#2015-07-3017:16maxthe last index specific completion was 3 hours ago
2015-07-30 11:58:23.370 INFO default datomic.index - {:tid 150, :I 5265000.0, :index :eavt, :phase :end, :TI 8465930.540997842, :pid 1480, :event :index/merge-mid, :count 2110, :msec 14400.0, :S -283878.0409978423, :as-of-t 961005}
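The grep checks suggested above can also be scripted from a REPL; a minimal sketch in Clojure (the log path is a placeholder, and the patterns follow the suggestions above):

```clojure
(require '[clojure.string :as str])

;; Scan a transactor log for index completion messages and indexing
;; alarms, mirroring the greps suggested above.
(defn scan-transactor-log [path]
  (let [lines (str/split-lines (slurp path))]
    {:index-completions (filter #(re-find #":phase :end" %) lines)
     :indexing-failures (filter #(re-find #"AlarmIndexingFailed" %) lines)}))
```

Any hits in :indexing-failures correspond to the AlarmIndexingFailed metric and are worth alerting on.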
#2015-07-3017:17maxI have an AlarmIndexingFailed once a minute
2015-07-30 13:15:47.572 INFO default datomic.process-monitor - {:tid 13, :AlarmIndexingFailed {:lo 1, :hi 1, :sum 4, :count 4}, :CreateEntireIndexMsec {:lo 16500, :hi 18600, :sum 70500, :count 4}, :MemoryIndexMB {:lo 0, :hi 0, :sum 0, :count 1}, :StoragePutMsec {:lo 1, :hi 239, :sum 11097, :count 381}, :AvailableMB 2640.0, :IndexWriteMsec {:lo 1, :hi 659, :sum 35259, :count 381}, :RemotePeers {:lo 1, :hi 1, :sum 1, :count 1}, :HeartbeatMsec {:lo 5000, :hi 5346, :sum 60427, :count 12}, :Alarm {:lo 1, :hi 1, :sum 4, :count 4}, :StorageGetMsec {:lo 0, :hi 124, :sum 2204, :count 305}, :pid 1480, :event :metrics, :StoragePutBytes {:lo 103, :hi 4568692, :sum 128385966, :count 382}, :ObjectCache {:lo 0, :hi 1, :sum 231, :count 536}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :StorageGetBytes {:lo 1853, :hi 4568435, :sum 95278692, :count 305}}
2015-07-30 13:16:47.573 INFO default datomic.process-monitor - {:tid 13, :TransactionDatoms {:lo 3, :hi 3, :sum 3, :count 1}, :AlarmIndexingFailed {:lo 1, :hi 1, :sum 3, :count 3}, :GarbageSegments {:lo 2, :hi 2, :sum 4, :count 2}, :CreateEntireIndexMsec {:lo 15800, :hi 17400, :sum 50500, :count 3}, :MemoryIndexMB {:lo 0, :hi 0, :sum 0, :count 1}, :StoragePutMsec {:lo 1, :hi 291, :sum 11173, :count 474}, :TransactionBatch {:lo 1, :hi 1, :sum 1, :count 1}, :TransactionBytes {:lo 102, :hi 102, :sum 102, :count 1}, :AvailableMB 2460.0, :IndexWriteMsec {:lo 2, :hi 350, :sum 36373, :count 471}, :RemotePeers {:lo 1, :hi 1, :sum 1, :count 1}, :HeartbeatMsec {:lo 5000, :hi 5003, :sum 60006, :count 12}, :Alarm {:lo 1, :hi 1, :sum 3, :count 3}, :StorageGetMsec {:lo 0, :hi 100, :sum 2151, :count 351}, :TransactionMsec {:lo 19, :hi 19, :sum 19, :count 1}, :pid 1480, :event :metrics, :StoragePutBytes {:lo 86, :hi 4568692, :sum 146567666, :count 473}, :LogWriteMsec {:lo 8, :hi 8, :sum 8, :count 1}, :ObjectCache {:lo 0, :hi 1, :sum 247, :count 598}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :PodUpdateMsec {:lo 2, :hi 7, :sum 9, :count 2}, :StorageGetBytes {:lo 86, :hi 4568435, :sum 94665879, :count 351}}
#2015-07-3017:19bkamphaus@max which version of Datomic are you running?#2015-07-3017:19maxdatomic-pro-0.9.5173#2015-07-3017:22bkamphauscan you do a failover or start/restart to upgrade to 0.9.5201 (or latest 0.9.5206) to see if the indexing job is then able to run to completion?#2015-07-3017:22maxokay#2015-07-3017:22maxany reason to go with 5201 vs 5206?#2015-07-3017:23bkamphausI’d just drop into latest 5206 if no preference, 5201 is just minimal to get past a fix for a related issue. 0.9.5206 only adds error handling/explicit limits to byte attributes#2015-07-3017:32maxbkamphaus: I updated, am getting some out of memory errors
2015-07-30 13:31:43.668 WARN default datomic.update - {:tid 77, :pid 10386, :message "Index creation failed", :db-id "canary-f3e9a40e-2036-4ad9-aae7-52919cced434"}
java.lang.OutOfMemoryError: Java heap space
#2015-07-3017:33maxI’m using
# Recommended settings for -Xmx4g production usage.
memory-index-threshold=32m
memory-index-max=512m
object-cache-max=1g
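For reference, the doubled configuration discussed later in this thread (sizes are assumptions for an 8 GB heap; the usual guidance is to keep object-cache-max well under half the heap) would look something like:

```
# assumed settings for an -Xmx8g transactor
memory-index-threshold=32m
memory-index-max=512m
object-cache-max=2g
```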
#2015-07-3017:35bkamphaus@max some follow up q’s then — can you verify you’re using GC defaults? Either only setting Xmx, xmx as transactor args, or if using JAVA_OPTS, adding -XX:+UseG1GC -XX:MaxGCPauseMillis=50 to keep GC defaults? Also, would it be possible to up -Xmx (what’s current + available on machine)?#2015-07-3017:36max exec /var/lib/datomic/runtime/bin/transactor -Xms4g -Xmx4g /var/lib/datomic/transactor.properties 2>&1 >> /var/log/datomic/datomic.log#2015-07-3017:36maxthat’s my datomic command#2015-07-3017:36maxI could up the memory, should I change transactor props also?#2015-07-3017:41bkamphaus@max I would up memory, double it if you can — maybe up object-cache-max only slightly (i.e. to 25% of heap or so, not up to 1/2 for sure). I.e. something like -Xmx 8g, object-cache-max=2g, rest same#2015-07-3017:41maxok#2015-07-3017:49maxbkamphaus: the excision finished!#2015-07-3017:49maxthanks!#2015-07-3017:50bkamphaus@max awesome — make sure and spread out the excision the way you’d normally pipeline txes on an import#2015-07-3017:50bkamphausassuming you’re following up by removing more of the blobby string vals#2015-07-3017:50maxthere are only 35 attrs to excise#2015-07-3017:51max…my postgres db size is at 51gbs though#2015-07-3017:51bkamphausah, cool, so less of an issue then. as stuff pushes through, you’ll be able to run gc (or maybe it’s already running?)#2015-07-3017:51maxI ran a datomic garbage collect and it didn’t seem to do much, I assume I should run it again and vacuum#2015-07-3017:51bkamphausit runs async#2015-07-3017:52bkamphausbut yes you should do it after excision, more segments will need to be gc’d after that#2015-07-3017:52bkamphausthe gc-storage call when finished will log something like: 2014-08-08 03:24:14.174 INFO default datomic.garbage - {:tid 129, :pid 2325, :event :garbage/collected, :count 10558}#2015-07-3017:52maxso, how did this happen? I had 35 blobs some of which were like a meg at most. And the rest of my data is pretty small.
How did my db grow to 51gigs?#2015-07-3017:52maxAnd how do I make sure it doesn’t happen again, garbage collect daily?#2015-07-3017:53bkamphausI don’t know how much segment churn you go through, but it does build up over time from indexing. The blobs can be particularly bad with :avet on.#2015-07-3017:54bkamphausNightly may not be necessary, but you can set up a gc-storage call to run at w/e period you determine is necessary#2015-07-3017:55tcrayford(as a side reference, for my [relatively normal] webapp, I run it at application bootup, because only the webservers are datomic peers and they're deployed together)#2015-07-3017:55bkamphausand then periodically I’m assuming you’ll need to VACUUM in postgres before space is reclaimed since the deletion in Datomic will be handled/deferred by table logic in the storage#2015-07-3017:56bkamphausi.e. Cassandra via tombstone, Oracle space reclamation is deferred by High-water Mark stuff, etc.#2015-07-3017:58maxcool, thanks so much for your help @bkamphaus#2015-07-3018:09micahWeird datomic error throwing me for a loop:#2015-07-3018:09micahairworthy.repl=> @(api/transact @db/connection [{:segue/time #inst "2015-04-09T05:32:48.000-00:00", :segue/way :out, :segue/airport 277076930200614, :segue/user 277076930200554, :db/id 277076930200690}])
IllegalArgumentExceptionInfo :db.error/not-an-entity Unable to resolve entity: Thu Apr 09 00:32:48 CDT 2015 in datom [277076930200690 :segue/user #inst "2015-04-09T05:32:48.000-00:00"] datomic.error/arg (error.clj:57)#2015-07-3018:10micahWhy does it think the date is an entity?#2015-07-3018:11shaunxcodewhat is schema for :segue/time ?#2015-07-3018:11micah:instant#2015-07-3018:11shaunxcodeand :segue/user ?#2015-07-3018:12micah:ref#2015-07-3018:14mitchelkuijpersThank you for your help @bkamphaus#2015-07-3019:05max@bkamphaus: I ran a garbage collect and a vacuum, but pg still says the datomic database size is 51gb. Any suggestions?#2015-07-3019:24bkamphaus@max have you backed the db up recently so that you have a reference for how large the backup is?#2015-07-3020:09max@bkamphaus: 150mb#2015-07-3020:10bkamphaus@micah: as a sanity check, I would verify that all entities in the transaction exist and that all attr keywords are specified correctly (e.g. spelled correctly) and exist, including the (assumed enum) :out entity — it may be that something else wrong in the transaction is causing it to resolve to the incorrect datom that’s transacting the date as the value for the :segue/user attr (the cause of the exception)#2015-07-3020:13bkamphaus@max that datomic db is the only one in that instance? you haven’t e.g. run tests that generate and delete dbs (note that dbs when deleted have to be [garbage collected](http://docs.datomic.com/capacity.html#garbage-collection-deleted), also). And definitely nothing else you’re storing in postgres?#2015-07-3020:14maxI only use memory dbs for testing, and I don’t run tests on the production instance#2015-07-3020:15maxThere is another database on the pg instance, but it’s tiny:
datomic=> SELECT pg_database.datname,
pg_size_pretty(pg_database_size(pg_database.datname)) AS size
FROM pg_database;
datname | size
------------+---------
template1 | 6314 kB
template0 | 6201 kB
postgres | 6314 kB
canary-web | 7002 kB
datomic | 51 GB
(5 rows)
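The cleanup sequence discussed in this thread (Datomic storage garbage collection followed by a Postgres VACUUM) can be sketched roughly as follows; `conn` and the cutoff date are assumptions:

```clojure
(require '[datomic.api :as d])

;; Ask the transactor to collect storage garbage older than the cutoff
;; (here: everything collectable as of now). gc-storage runs
;; asynchronously; completion is reported in the transactor log as a
;; :garbage/collected event.
(d/gc-storage conn (java.util.Date.))

;; Once that event appears, the deleted segments still occupy space in
;; Postgres until a VACUUM runs there, e.g. from psql:
;;   VACUUM;
```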
#2015-07-3020:29bkamphaus@max have you restored versions of the same database when the restore has diverged? The incompatible restore is one thing I’m aware of which can potentially orphan segments so that they never get gc’d.#2015-07-3020:30maxI don’t think so. This is the production db, so it was initially restored from a seed db, and then only backed up#2015-07-3020:31maxit looks like a lot of the growth (20gbs worth!) happened after I ran out of disk space last night and was trying to do excisions.#2015-07-3020:32bkamphausDBs do pick up small amounts of cruft from operational churn, but this is well out of line with my expectation for the size of it. Depending on what kind of outage you could tolerate, you could do a test restore from backup to a clean postgres in a dev/staging environment and see what the resulting table size is.#2015-07-3020:34bkamphausThe failure to index could be contributing then, maybe leaving orphaned segments somehow. There’s always the possibility of clobbering the table and starting from a clean restore, obviously you want to backup and test a restore as I mentioned above first before considering going down that path.#2015-07-3020:35bkamphausDo you know what the table size was prior to running into the indexing failure?#2015-07-3020:36maxI’m not sure, I ran out of disk space at ~30gb#2015-07-3020:36maxI’m assuming it’s going to affect performance to keep this 51gb database around#2015-07-3021:05maxSo I did a restore on my dev system, and the pg database is 142mb after restore.
I can do a restore in prod again, but I’m worried about this happening again. Any suggestions as to what to do at this point?#2015-07-3021:06maxis it possible I hit a bug in datomic?#2015-07-3021:12bkamphaus@max hard to speculate about a possible bug without knowing more specifics. I’m wondering how much of this can be attributed to the failures to index w/the blob-ish strings. My general advice would be to make sure and make regular backups, and configure some kind of monitoring for Alarm* events - so that you can jump in more quickly (i.e. reacting to AlarmIndexingFailed, rather than to running out of space).#2015-07-3021:13maxbkamphaus: that makes sense, and it’s definitely my next plan of action#2015-07-3021:14maxwe ran out of space at db size 30 gb, so there must have been some failure before that that caused that 30gb to be written#2015-07-3021:14maxbut I guess that could have been cascading indexing failures?#2015-07-3021:15bkamphausI think it’s fairly typical for dbs in production over time to accumulate a little bit of cruft, but nothing like the difference in size from your backup to postgres table, which is why I think it must be linked to that indexing failure. I haven’t seen another report of that much excess size, usually when I’ve looked through those concerns about size differences it’s still less than 2-3x the expected size (after accounting for e.g. storages with replication factors, etc.) on dbs that have been running for a long time, nothing orders of magnitude larger than expected size like this — except with whole dbs not gc’d, or gc never having been run, etc.#2015-07-3021:16maxok. I’ll set up better monitoring and see if it happens again#2015-07-3021:17maxone more question: we’re not using aws, and I am using datadog for this data.
Do you generally recommend to use the built in cloudwatch stuff and push that data to other services, or is integrating with a non-AWS monitoring service pretty easy?#2015-07-3021:21bkamphaus@max we definitely have users doing both. Cloudwatch is what we use at Cognitect and test the most, but lots of people on premise just configure their own callback ( http://docs.datomic.com/monitoring.html#sec-2 ) stuff or point it at various other logging/metric tools.#2015-07-3022:16micah@bkamphaus: Thanks for the tip. I verify everything is correctly spelled and schema-fied.#2015-07-3114:16raymcdermottquick question … can I filter datoms based on transaction data? I guess yes but is that via a standard query or do I have to write code?#2015-07-3114:18raymcdermottin my use case, I have two transactions on the same data but would sometimes like to show the data back to the user based on the source system (tracked in the transaction)#2015-07-3114:19tcrayford@raymcdermott: the transaction entities are perfectly queryable from normal queries#2015-07-3114:20raymcdermottok cool - that’s what I hoped but I cannot see any examples#2015-07-3114:21tcrayfordyou just join against them via the part of the datom that is the transaction id simple_smile#2015-07-3114:22raymcdermottah - ok so I wouldn’t need to use the history view? Or maybe I would combine that?#2015-07-3114:23raymcdermottI see the example#2015-07-3114:23raymcdermott[:db/add
#db/id[:db.part/tx]
:data/src
"http://example.com/catalog-2_29_2012.xml"]#2015-07-3114:23raymcdermottso (just to nail it) I can just add :data/src to the query?#2015-07-3114:24tcrayfordyou'd need to join against txid, but yeah#2015-07-3114:24raymcdermottah, ok and how would that work with pull?#2015-07-3114:24potetm@raymcdermott: I believe It depends if the data you’re interested in is in the current db or not.#2015-07-3114:25tcrayford(d/q '[:find ?src :where [_ :user/email _ ?tx] [?tx :data/src ?src]] …)#2015-07-3114:25tcrayfordwith pull - I wouldn't be too surprised if it didn't work with transaction entities, or if they did. If they do, you'd probably just use the txid as the entity id#2015-07-3114:26tcrayford(I'd have to try it at a repl, but I can't right now easily)#2015-07-3114:27raymcdermottok let me try and play over the weekend and I’ll come back here#2015-07-3114:27raymcdermottthanks for the great guidance so far1#2015-07-3114:27raymcdermotts/1/!/#2015-08-0101:22maxsomething really strange is going on#2015-08-0101:22maxI have an attribute that has cardinality one, but it has two values#2015-08-0101:22maxuser=> (d/pull db '[* {:db/cardinality [:db/ident]} {:db/valueType [:db/ident]}] :server/last-heartbeat-at)
{:db/index false, :db/valueType {:db/ident :db.type/instant}, :db/noHistory false, :db/isComponent false, :db/fulltext false, :db/cardinality {:db/ident :db.cardinality/one}, :db/doc "", :db/id 158, :db/ident :server/last-heartbeat-at}
user=> (d/q '[:find ?e ?hb :in $ ?e :where [?e :server/last-heartbeat-at ?hb]] db 17592186962108)
#{[17592186962108 #inst "2015-07-31T23:45:01.167-00:00"] [17592186962108 #inst "2015-08-01T00:45:01.195-00:00"]}
user=>
#2015-08-0113:21bkamphaus@max: we’ll follow up with diagnostics, etc. on the support ticket. If this is a high churn attribute (the name leads me to suspect that it is), we have a suspicion about what’s going on. Will confirm before going into it more.#2015-08-0113:21maxthanks @bkamphaus#2015-08-0114:57potetm@bkamphaus: I would be very interested in knowing what ya’lls suspicion is, even if it turns out that it isn’t the cause of this issue.#2015-08-0115:27robert-stuttaford@potetm, @bkamphaus ditto#2015-08-0202:31lboliveiraHello! How do I get a max value and its entity id using the same query?
(d/q '[:find [(max ?a) ...]
:where [?e :a ?a]]
[[1 :a 10]
[2 :a 20]
[3 :a 30]])
=> [30]
Returns 30. ok.
(d/q '[:find [(max ?a) ?e]
:where [?e :a ?a]]
[[1 :a 10]
[2 :a 20]
[3 :a 30]])
=> [10 1]
Returns [10 1]. How do I write a query that returns [30 3]?#2015-08-0202:55bhaganyso… I can see what it's doing, but I'm not sure how to make it not do that#2015-08-0202:56bhaganyanytime you include a non-aggregate lvar in :find, it's going to group the aggregates by that lvar#2015-08-0202:56bhaganyI think I would just issue one query for (max ?a), and a second for ?e#2015-08-0202:58bhaganyI think that's the only way because there may be more than one ?e#2015-08-0202:58bhaganyheh, I should have tagged you - @lboliveira ^^#2015-08-0203:00lboliveira@bhagany: Ty. So the idiomatic way to query the id is making two queries?#2015-08-0203:01bhagany@lboliveira: in this case, I would say yes. In general, you don't need to worry about round tripping to the database with datomic like you do with other db's#2015-08-0203:02bhaganyI'm pretty positive that ?e is already going to be in local memory because of the first query#2015-08-0203:06lboliveira@bhagany: I am wondering if I could have some issues if a new value is inserted between these calls.#2015-08-0203:07bhagany@lboliveira: ah, this is another thing you don't need to worry about with datomic simple_smile#2015-08-0203:07bhaganyif you pass the same db value in to d/q, that is guaranteed not to happen; it's immutable#2015-08-0203:08bhaganyso you can be completely assured that you're getting a consistent view of your data#2015-08-0203:08lboliveirayou are soo right. It takes time to wrap the mind about it. 😃#2015-08-0203:08lboliveirathank you soo much.#2015-08-0203:09bhagany@lboliveira: my pleasure simple_smile it took me a bit to wrap my head around it too.#2015-08-0203:09lboliveiraand the "don't worry about round trips".#2015-08-0203:09lboliveirait make all queries diferent#2015-08-0203:10lboliveiraIt is very cool way to interact with the database.#2015-08-0203:12bhaganyyes, very! there's a great talk by Rich where he goes into detail about the benefits of the datomic operational model, and this one really struck me. 
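The two-query approach suggested above might look like the following sketch, run against the same immutable db value (here the same in-memory relation), which guarantees consistency between the two calls:

```clojure
(require '[datomic.api :as d])

(def rel [[1 :a 10] [2 :a 20] [3 :a 30]])

;; first query: the max value, as a scalar
(def mx (d/q '[:find (max ?a) . :where [?e :a ?a]] rel))

;; second query: all entities carrying that value (there may be ties)
(d/q '[:find [?e ...] :in $ ?a :where [?e :a ?a]] rel mx)
```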
It's just so nice not to have to grab all the data you might need up front#2015-08-0203:14lboliveira😃#2015-08-0203:18lboliveirahttps://github.com/Yuppiechef/datomic-schema
This is to arbitrarily support extra generating options, including the new index-all? option, which flags every attribute in the schema for indexing (in line with Stuart Halloway's recommendation that you simply turn indexing on for every attribute by default).
@bhagany: Do you have any thoughts about it?#2015-08-0203:18bhagany@lboliveira: I haven't used it, but I have seen people refer positively to it here and on IRC.#2015-08-0203:19lboliveiraAnd about index all?#2015-08-0203:19bhaganypersonally, I don't have a problem with the raw schema, and I kind of like having it there as data.#2015-08-0203:19bhaganydo you mean having :db/index true on all attributes?#2015-08-0203:19lboliveiraYes. Do you do that?#2015-08-0203:20bhaganyoh, I missed your first message somehow. Yes, I do that, based on the same recommendation from Stu.#2015-08-0203:21lboliveiraThis is a "wow" thing to me.#2015-08-0203:22bhaganysimple_smile#2015-08-0203:25lboliveiraI could not find the Halloway's recommendation. Do you have a link? I have some boolean attributes. It seems odd to index them.#2015-08-0203:26bhaganyI don't have a direct link, I saw it in one of the Day of Datomic videos#2015-08-0203:26bhaganyhere: http://www.datomic.com/training.html#2015-08-0203:26lboliveiraty#2015-08-0203:26bhaganynp simple_smile#2015-08-0203:42lboliveira{:db/error :db.error/incompatible-schema-install, :entity :ping.reply/start, :attribute :db/index, :was false, :requested true}
:ping.reply/start is a :db.type/instant#2015-08-0203:43lboliveiraI was trying to add an index to it#2015-08-0204:00bhagany@lboliveira: hmm, I would guess you can't change the :db/index setting of an installed attribute?#2015-08-0204:00bhaganythis rings a bell, I bet it's partially why they started recommending that you index all attrs. It isn't too expensive, and much easier than adding it after the fact.#2015-08-0204:02lboliveira@bhagany: This post says I can set :db/index to true : https://groups.google.com/forum/#!msg/datomic/UHGf2beACog/GKHqoSig0noJ#2015-08-0204:03bhaganyaha, did you do the :db/alter thing?#2015-08-0204:06lboliveirano 😳#2015-08-0204:06bhagany😄#2015-08-0204:13lboliveira\o/
#object[datomic.promise$settable_future$reify__6754 0x7227ddab {:status :ready, :val {:db-before
@bhagany: Thank you again. 😃#2015-08-0204:13bhaganyalright! congrats#2015-08-0219:05bkamphaus@bhagany and @lboliveira one thing to note re: turning index on, is that this recommendation is meant for data models which stick pretty fairly close to Datomic’s modeling recommendations. We’ve found (since making the recommendation) that it collides with some common antipatterns in Datomic - e.g. storing large blobs/documents in string attributes.#2015-08-0219:05bhagany@bkamphaus: thanks! good thing I'm not doing that simple_smile#2015-08-0219:10bkamphausCool. I don’t think most are doing that, but definitely a few have run into perf issues, index failures, etc. introduced by the combo of :avet + 10MB document-like string values.#2015-08-0313:12lboliveira@bkamphaus: thank you!#2015-08-0322:54ejhaving problems with boot, datomic and reader literals
ej [11:53 PM]
can anyone help?
keep getting NullPointerExceptions when running my datomic transactions wrapped in a function
even with d/tempid being used.#2015-08-0406:29robert-stuttaford@ej can you use the snippet paster to paste a stack trace?#2015-08-0509:26martinklepschwhen backing up to S3 — is there a way to specify a region?#2015-08-0509:38martinklepschApparently not needed :relaxed:#2015-08-0509:39robert-stuttafordyeah, S3 is not region specific as far as i’m aware#2015-08-0513:22domkmAre there any good resources for modeling user permissions with Datomic?#2015-08-0513:37tcrayforddomkm: the real hard part there is modeling user permissions at all. Shit is incredibly hard to do well imo#2015-08-0513:39domkmtcrayford: Agreed. I was thinking that Datomic might enable some interesting patterns where permissions could be modeled very granularly on the entity and attribute level.#2015-08-0513:53damionjunkDoes anyone have any pointers to discussions of pro/cons of various storage services? Or is it more the case that you’ve got to first weigh the pros/cons of things like riak vs. cassandra vs. postgresql on their own merits?#2015-08-0513:54damionjunkAside from configuration/administration/cluster issues, I’m sort of wondering if there are any base performance / capability comparisons of the underlying storage services.#2015-08-0514:00tcrayforddamionjunk: from my perspective: datomic's gonna be faster on riak/cassie/dynamo than one of the sql stores#2015-08-0514:00tcrayford(and more resilient)#2015-08-0514:02damionjunktcrayford: I was thinking so as well. especially given Riak’s similarity to Dynamo. I was just wondering if the extra administration effort was worth it. I guess I’ll actually have an opportunity to try SQL and Riak, this is a greenfield project, and step one is a data-load-throughput simulation anyway.#2015-08-0514:06tcrayford@damionjunk: as for comparing riak/cassie/dynamo: they have pretty different sets of tradeoffs. Dynamo is obviously completely commercial, the other two are open source. 
Cassie seems to be better maintained these days to me (disclosure: I used to know a bunch of folk who worked at the company that makes riak). I use riak, but then I use it for other things as well simple_smile#2015-08-0514:22damionjunkthis may be a bit noob’ish, but if you’re not actually running a 3+ node cluster, would Riak still make sense as a storage service?#2015-08-0514:24stuartsierra@damionjunk: My take is: If you're using AWS, use Dynamo. Can't be beat. If you're on-premise, pick whatever distributed storage you have the most experience / comfort with.#2015-08-0514:25damionjunk@stuartsierra: so, if you've got no experience with distributed storage on premise, would Postgres make sense? I'm leaning towards Riak, because it's something additional to 'learn', and I've got the time to do it though.#2015-08-0514:25damionjunkThere's nothing "special" about the data, it's entirely read heavy, medical sensor data, eventually around 200 billion "rows"#2015-08-0514:26stuartsierra@damionjunk: Well then that's another issue you have to contend with first: sharding those 200 billion rows. Even Datomic can't fit that much in a single transactional database.#2015-08-0514:27damionjunkyeah, that's a projection, but initially, the data set is in the 100 million range#2015-08-0514:28damionjunkit's entirely domain partitionable, afaik. (i've actually yet to see the full "plan" unfortunately)#2015-08-0514:33damionjunkat any rate, @stuartsierra @tcrayford , thanks for the replies. I'll definitely be reporting back. simple_smile#2015-08-0514:37robert-stuttaford@domkm: i have interesting stories to tell around user permissions#2015-08-0514:37stuartsierraAny one of the distributed storages has a maintenance burden, and you can lose your data if you don't configure things like replication correctly.
Make sure you understand how your storage works, and test for failure scenarios!#2015-08-0514:38robert-stuttafordit’s a source of considerable complexity for us#2015-08-0514:39domkmrobert-stuttaford: Please share simple_smile#2015-08-0514:39robert-stuttafordcan’t right now. wonderful south african power cuts are landing in 20 minutes. let’s chat on skype some time?#2015-08-0514:41domkmrobert-stuttaford: I'd love that! Thanks. I'll email you.#2015-08-0514:41robert-stuttaford100%#2015-08-0516:23wasserdomkm: at last year's conj Lucas Cavalcanti & Edward Wible gave a presentation called "Exploring four hidden superpowers of Datomic" which touched on access control as one of the four. https://m.youtube.com/watch?v=7lm3K8zVOdY #2015-08-0516:23domkmwasser: Thanks! I'll check it out.#2015-08-0517:48akiel@damionjunk: I would not use Riak with less than 5 nodes.#2015-08-0520:19robert-stuttafordjust a sanity check, @bkamphaus, but we definitely want to down all our peers when restoring databases, right? 😁#2015-08-0520:19robert-stuttafordit’s not like when upgrading the transactor version with the double-failover trick#2015-08-0520:21bkamphaus@robert-stuttaford: correct — if you’re using a storage other than dev/free, should take txor down as well, see http://docs.datomic.com/backup.html#other-storages#2015-08-0520:22robert-stuttafordthis’d be RDS -> DDB#2015-08-0520:22robert-stuttafordso the txor would go down and back up thanks to that we’re using CFN and whatnot#2015-08-0611:09lowl4tencyHi guys simple_smile are you planning to add cloudwatch logs to datomic AMI?#2015-08-0611:11robert-stuttaford@bkamphaus will likely know the answer to that, @lowl4tency#2015-08-0611:12lowl4tencyrobert-stuttaford: it’s simple to add the support to CFN template. 
Curious whether it might be present here simple_smile#2015-08-0611:13robert-stuttafordmy guess is that it was added after they last worked on the AMI#2015-08-0611:36lowl4tency@bkamphaus: also curious how you add custom metrics for datomic.#2015-08-0613:23bkamphaus@lowl4tency: no announcements to make re: cloudwatch logs. CloudWatch metrics and S3 log rotation ( http://docs.datomic.com/aws.html#s3-log-storage ) are the main AWS monitoring tools, and custom monitoring for metrics is configurable http://docs.datomic.com/monitoring.html#sec-2#2015-08-0613:25lowl4tencybkamphaus: thank you, second link is very useful#2015-08-0613:26lowl4tencyYes, I’ve configured the s3 bucket for logging, but I’m interested in realtime log watching. For example AWS CW is able to do it with the awslogs daemon.#2015-08-0613:26lowl4tencyIt looks like this#2015-08-0613:27lowl4tencyzookeeper logs example#2015-08-0614:22statonjrWe ship our transactor logs to Elasticsearch and view with Kibana.#2015-08-0616:22onetom@robert-stuttaford: domkm I would be super interested in permission stories because we’re just about to design and implement one.
(I’m aware of the superpowers video and showed it to all my colleagues too)#2015-08-0616:33lowl4tencystatonjr: logstash and s3 source?#2015-08-0616:34lowl4tencystatonjr: I just prefer to use one place and one vendor for the same tasks simple_smile#2015-08-0616:34lowl4tencybkamphaus: also, do you use packer for AMIs?#2015-08-0616:34lowl4tencybkamphaus: https://packer.io/#2015-08-0616:43statonjr@lowl4tency: We host our own stack of Logstash, Elasticsearch, Kibana, and Riemann at AWS.#2015-08-0616:47lowl4tencystatonjr: understood, do you send metrics from Riemann to ELK?#2015-08-0616:50statonjrNo. TBH, we haven’t integrated Riemann into the rest of our stack just yet, but the plan is to have Riemann read the stream of data and act accordingly.#2015-08-0616:56lowl4tencyDoes riemann have metric alarms for your stack?#2015-08-0617:09ejCould someone give an example of the best way to test if a query returns a value? #2015-08-0617:10jballanc@lowl4tency: we’ve got a hack to have Datomic ship logs directly via SQS to our Logstash instances#2015-08-0617:10jballancwell… a “hack” 😉#2015-08-0618:23robert-stuttaford@onetom perhaps you could join dom and i on a skype call soon#2015-08-0623:28onetom@robert-stuttaford: Thanks for the invitation, but I’m in HKT (GMT+8) unfortunately, so I was sleeping and just woke up#2015-08-0722:37arohnerI’m seeing weird behavior in a query. when I use a simple d/q, with a search on one attribute, (d/q [?find (pull ?e [*]) :where [?e :foo/bar ?x]] db x), everything works fine. There’s only one result in that query. When I add a second attribute that I know ?e possesses, instead of getting the result, I appear to get the tx entity: [{:db/id 13194139534324, :db/txInstant #inst "2015-08-07T22:34:21.299-00:00”}]#2015-08-0722:37arohner*add a second attribute = add a second attribute to the query, [?e :foo/bar ?x] [?e :foo/bbq ?y]#2015-08-0808:27samir@arohner you have probably inserted the attribute clause at the wrong place. The following query retrieves the transaction:
[:find ?tx .
:where [?a :artist/name "The Rolling Stones" ?tx]]
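Back in the original use case (reading transaction metadata rather than retrieving the tx id itself), the fourth position of a datom pattern binds the transaction, which then joins like any other entity; the attribute names here are assumptions borrowed from the earlier example in this thread:

```clojure
;; find values of :foo/bar along with the :data/src recorded on the
;; transaction that asserted them
(d/q '[:find ?x ?src
       :where
       [?e :foo/bar ?x ?tx]   ; 4-tuple pattern: also bind the tx
       [?tx :data/src ?src]]
     db)
```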
#2015-08-0817:13arohner@samir: I’m pretty sure I’m only using the 3-arity clause in :where#2015-08-0817:34arohnerarg. Figured it out. User Error#2015-08-0906:59robert-stuttaford@arohner: what was it? reading your code i saw a lack of :in clauses despite your providing args. was that it, or were your chat messages pseudocode?#2015-08-1112:45magnarsThe datomic docs say this: The example below uses retraction to stop retaining history for :person/address.
[[:db/retract :person/address :db/noHistory false]
[:db/add :db.part/db :db.alter/attribute :person/address]]
- does it really? You retract :db/noHistory false to stop retaining history?#2015-08-1112:47magnarsSeems more intuitive to me to add :db/noHistory true, I guess. Maybe this leaks some implementation detail?#2015-08-1113:16nbergerI guess it's an error in that doc. A few lines before it says "Altering an attribute's :db/noHistory to false or retracting it will ... start retaining history..." so I would say it's starting to retain history#2015-08-1114:17bkamphaus@magnars the docs are correct there, but require the context of the preceding section. It’s retracting the previous assertion of :db/noHistory false.#2015-08-1114:43bhaganyI'd like to understand this… isn't false the default value for :db/noHistory?#2015-08-1114:44bhaganyand since that's the case, why would retracting a fact that states the default cause :db/noHistory to become true?#2015-08-1114:49bhaganyBy my reading, neither the assertion of :db/noHistory false nor the retraction of the same would have any actual effect on the retention of history, given that there's no prior assertion of :db/noHistory true for :person/address.#2015-08-1115:34magnarsyeah, that is my intuition as well - it feels decidedly strange the way it is described in this example.#2015-08-1115:37bkamphausI think the behavior described may actually be incorrect - sorry, investigating further. I’ll update here.#2015-08-1115:53bhaganyfwiw, I just spent some time exploring in a repl. 
I can't find any retraction that causes :db/noHistory to become true.#2015-08-1116:06bkamphaus@bhagany: I can confirm that I see the same thing as you (I worked through several changes tracking the outcome in terms of datoms in :tx-data) and can reason fairly simply about why this is the case — I’m verifying expected behavior now before any follow up.#2015-08-1116:07bhagany@bkamphaus: I am very relieved that my mental model turns out not to be disastrously wrong simple_smile#2015-08-1116:07bhagany(probably)#2015-08-1116:46bkamphaus@bhagany and @magnars I can confirm that retracting [… :db/noHistory false] will not have the outcome previously stated in the docs (stop retaining history). I've revised the docs at: http://docs.datomic.com/schema.html#altering-nohistory-attribute#2015-08-1116:46bhagany@bkamphaus: thanks!#2015-08-1119:39potetmDoes anyone happen to know of a hashing algorithm that you can use to generate a unique attribute that will index “easily” (a la squuid)?#2015-08-1119:41potetmIn my example, I have an entity that is the child of two parents, and the combination of the two parents is unique. (hash-algorithm parent-id-1 parent-id-2) would generate a unique string for my entity.#2015-08-1119:45potetmBasically I want to guarantee my :db.unique/identity attribute is deterministic, but not in a way that harms indexing like SHA1 would.#2015-08-1120:02potetmOr, if not a hashing algorithm, some deterministic strategy that works well with datomic’s indexing.#2015-08-1121:30stuartsierra@potetm: Indexing performs best when new values are always added at or close to the "end" of the index. squuid achieves this by creating a hashed value where the high bits increase over time. But a deterministic hash function cannot do this. I wouldn't worry about it too much unless you have a measurable performance problem. Just generate a string combining unique values from the parents.#2015-08-1121:35potetm@stuartsierra: Awesome. We thought that might work as well. 
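Stuart's suggestion (combining unique parent values into a string instead of hashing) can be a one-liner. An editorial sketch, not from the thread — it assumes the parent ids are longs and that argument order should not matter:

```clojure
(require '[clojure.string :as str])

;; Deterministic key for a child entity identified by its two parents.
;; Sorting first makes the key independent of argument order; drop the
;; sort if (parent-1, parent-2) is an ordered pair.
(defn parent-key [parent-id-1 parent-id-2]
  (str/join "-" (sort [parent-id-1 parent-id-2])))

(parent-key 17592186045420 17592186045418)
;; => "17592186045418-17592186045420"
```

Transacted as the value of a :db.unique/identity string attribute, this gives upsert behavior for the child entity.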
Is there by chance some rule of thumb for an upper limit of items indexed in that way? Or is it basically, “do it until you notice problems”?#2015-08-1121:36stuartsierra@potetm: There is no upper bound that I know of, other than the much-discussed bounds on optimal size of a Datomic database. As always, testing on a production-sized set of your own data is the only way to be sure.#2015-08-1121:38potetmRight. Cool then. Thanks for your help! Much appreciated.#2015-08-1121:38stuartsierraYou're welcome#2015-08-1123:25mishagood morning guys.
I have a single-attribute entity, which represents a list of items.
Each item is an enum value.
How can I enforce uniqueness of such single-attribute entity?
;; schema for entity:
[{:db/id #db/id[:db.part/db]
:db/ident :some-set/items
:db/valueType :db.type/ref
; :db/unique :db.unique/value
; :db/unique :db.unique/identity
:db/cardinality :db.cardinality/many
:db/fulltext true
:db/doc "List of all items included in this set."
:db.install/_attribute :db.part/db}]
;; enums:
[{:db/id #db/id[:db.part/user] :db/ident :enum/a}
{:db/id #db/id[:db.part/user] :db/ident :enum/b}
{:db/id #db/id[:db.part/user] :db/ident :enum/c}]
;; trying to create 2 sets:
'[{:db/id #db/id[:db.part/user -1] :some-set/items [:enum/a :enum/b]}
{:db/id #db/id[:db.part/user -2] :some-set/items [:enum/a :enum/c]}]
When I use :db/unique :db.unique/identity in a schema – 1st set is created, and the 2nd one overwrites it.
When I use :db/unique :db.unique/value in a schema – I get "unique conflict" error:
:db.error/unique-conflict Unique conflict: :some-set/items, value: 17592186045438 already held by: 17592186045482 asserted for: 17592186045484
which basically says:
you cannot add :enum/a to a new :some-set/items, because :enum/a is already used in existing :some-set/items
Can I have a collection uniqueness constraint, not an item in collection one?#2015-08-1123:31mishaThe actual thing I am trying to achieve – is to have "nested :cardinality/many"
:some/things [:enum/a :enum/b [:enum/a :enum/c] :enum/d]
#2015-08-1123:37mishaAnd after a few exceptions here and there this is what I came up with:
:some/things [:enum/a :enum/b {:some-set/items [:enum/a :enum/c]} :enum/d]
which works with either of these used in schema:
:db/unique :db.unique/value
:db/unique :db.unique/identity
but does not work without :db/unique set (actually it makes me either do a lot of look ups, or to create a copy of :some-set/items each time I use it to create new :some/things).#2015-08-1200:59bkamphaus@misha I’m not sure I follow your example, but if you mean unique within the collection w/the collection being that implied by the set of card-many vals (refs to enums or otherwise), then it sounds like you just want the default behavior.#2015-08-1201:02bkamphaus(d/pull (d/db conn) '[*] 17592186045418)
;{:db/id 17592186045418
; :person/aliases ["Bert" "Bobby" "Curly" "Robert" "Robin"]}
@(d/transact conn [[:db/add 17592186045418 :person/aliases "Robert"]])
;{:db-after #2015-08-1208:32madvasHey guys, how do I connect to heroku postgresql database? I have installed:
[com.datomic/datomic-pro "0.9.5206" :exclusions [joda-time]]
[postgresql "9.3-1102.jdbc41"]
When I try to connect as:
(def uri "datomic:)
(def conn (d/connect uri))
I get error
java.sql.SQLException: No suitable driver
clojure.lang.Compiler$CompilerException: java.sql.SQLException: No suitable driver, compiling:(server.clj:28:11)
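For reference (an editorial sketch, not from the thread): a complete Datomic SQL URI embeds a full JDBC URL after the first ?, including the Postgres database name. "No suitable driver" usually means that jdbc: portion is malformed, or the driver jar is missing from the classpath. All host, database, and credential values below are placeholders:

```clojure
(require '[datomic.api :as d])

;; Shape: datomic:sql://<datomic-db-name>?<full-jdbc-url>
;; Placeholder values -- substitute your own host, db name, and credentials.
(def uri
  (str "datomic:sql://my-db"
       "?jdbc:postgresql://localhost:5432/datomic"
       "?user=datomic&password=datomic"))

(def conn (d/connect uri))  ; requires the postgres driver on the classpath
```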
#2015-08-1209:42misha@bkamphaus, not if you mean unique within the collection,
with the available enum values a b c I need to be able to have entities representing these combinations:
[a]
[b]
[c]
[a b]
[a c]
[b c]
[a b c]
And it would be nice, if adding another [a b] would just upsert already existing one.
Also, I am not entirely sure if this is significant or not, but you use strings as list items, and I use keywords, which are idents.
With :db/unique :db.unique/value in :person/aliases definition, you would not be able to have these 2 :
{:db/id 1
:person/aliases [:names/Bert :names/Bobby]}
{:db/id 2
:person/aliases [:names/Bert :names/Curly]}
And with :db/unique :db.unique/identity in :person/aliases definition, the second would have overwritten the 1st one.#2015-08-1209:48mishaNow – a is treated as unique value, not [a]. a and b, not [a b].
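One workaround for this set-level identity (an editorial sketch, not from the thread): since :db/unique applies to single values, reify the whole set as one value — store a canonical string of the sorted item idents in a separate string attribute (hypothetically :some-set/key) marked :db.unique/identity:

```clojure
(require '[clojure.string :as str])

;; Canonical identity string for a set of enum idents. [:enum/a :enum/b]
;; and [:enum/b :enum/a] yield the same key; [:enum/a :enum/c] differs.
(defn set-key [idents]
  (str/join "|" (sort (distinct (map str idents)))))

(set-key [:enum/b :enum/a])  ;; => ":enum/a|:enum/b"
(set-key [:enum/a :enum/c])  ;; => ":enum/a|:enum/c"
```

Transacting {:some-set/key (set-key items), :some-set/items items} then upserts on the set as a whole, leaving the individual :some-set/items refs non-unique.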
Re-using your example: :db.unique/value and :db.unique/identity enforce only one person being able to have Bert as an alias across entire db.#2015-08-1210:00magnarsAny ideas why I get multiple historic entries for an attribute that has :noHistory set to true? Here's a very short repl-session demonstrating it: https://gist.github.com/magnars/093c6c437b1760ac22cf#2015-08-1210:46magnarsTurns out Stuart Halloway has addressed this quirk here: http://datomic.narkive.com/gIuZhcp8/db-nohistory
>:db/noHistory is not a logical property of your data, it is only a storage optimization hint. The memory database has no storage, so :db/noHistory is irrelevant.#2015-08-1210:48magnarsI wish it wasn't so, tho - because this makes for a real difference between my development and testing environments, and production.#2015-08-1213:23robert-stuttafordanyone using dynamodb as a datomic backend, here?#2015-08-1213:24robert-stuttafordcurious what it’ll cost to restore 30gb of datomic backups to ddb#2015-08-1213:34tcrayford@robert-stuttaford: gonna bet you could e.g. dig in the backup format and count the number of segments, then assume it's something like that many writes?#2015-08-1213:35robert-stuttafordgosh#2015-08-1213:37robert-stuttafordsuch strange pricing#2015-08-1213:37robert-stuttafordpay per hour for the ability to write#2015-08-1221:03ericfodeis there any way to guess the number of segments in a database?#2015-08-1221:16arohner@ericfode: you could just go poking around your postgres instance or whatever#2015-08-1221:16arohnerbased on data size, I’m not sure#2015-08-1221:16ericfodeIt’s a ddb, should i just look for the number of rows?#2015-08-1221:21sdegutisHello. I am inexplicably receiving an error that I wasn't receiving this morning, during a transact.#2015-08-1221:21sdegutisI've reset my database to no avail, the error still occurs: clojure.lang.ExceptionInfo: :db/invalid-data Unable to resolve entity: :question-template/sort-pos {:db/error :db/invalid-data}#2015-08-1221:21sdegutisPlease help?#2015-08-1221:23shaunxcodeWait, is that exception happening when starting transactor or when doing a specific transaction? What are the details of the transaction?#2015-08-1221:24sdegutisWhile I'm doing a transaction. 
One of the tx-data (a list) contains a map like this: {:question-template/sort-pos 0, ...} and includes a legitimate temporary eid as :db/id#2015-08-1221:24shaunxcodewhat is schema for :question-template/sort-pos ?#2015-08-1221:26sdegutis{:db/id (d/tempid :db.part/db)
:db/ident :question-template/sort-pos
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}#2015-08-1221:27shaunxcodeis there a way to see entire transaction? (can you anonymize or what not?)#2015-08-1221:28sdegutisI don't think it's related to this specific attribute. When I remove this attribute from the map, I get the same error with another attribute, and so on.#2015-08-1221:28sdegutisSure.#2015-08-1221:28bostonaholichas the schema been transacted?#2015-08-1221:28shaunxcodeaww, there was someone else having a similar issue but I think it is lost to slack black hole#2015-08-1221:29sdegutisbostonaholic: Yes. At the beginning of starting the process, every schema is transacted again.#2015-08-1221:29sdegutisHere's the whole tx-data: https://gist.github.com/sdegutis/298b6f1db2672189cc4a#2015-08-1221:41sdegutisDid the gist share correctly?#2015-08-1221:41kbaribeaucan you produce the error with a smaller transaction?#2015-08-1221:41sdegutisYeah one sec.#2015-08-1221:43kbaribeauusually tempids look like #db/id[:db.part/user -1000106] not sure why yours are different#2015-08-1221:46sdegutiskbaribeau: Yeah I noticed that too, seemed odd. Assumed it was because that's the map it really boils down to.#2015-08-1221:46sdegutisSo, I got it to stop giving the error.#2015-08-1221:47sdegutisIt succeeds as long as I omit the fields :answer-template/correct?, :answer-template/sort-pos, :question-template/sort-pos, :question-template/enabled?, :question-template/allowed-types, :question-template/answers, and :question-template/science?.#2015-08-1221:47sdegutisIf any of those is present, it fails with the same basic error.#2015-08-1221:49kbaribeau(d/create-database "datomic:")
@(d/transact (d/connect "datomic:")
[{:db/id (d/tempid :db.part/user)
:thisthing/doestnexist 0}])
#2015-08-1221:49kbaribeau^ I get a very similar error by running that. it really looks to me like your schema has not been transacted#2015-08-1221:49kbaribeauor, it could be only partially transacted#2015-08-1221:50kbaribeauthe error I get due to missing schema is CompilerException java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity: :thisthing/doestnexist, compiling:(:62:1)#2015-08-1222:05sdegutisI'm using Datomic 0.8.4218 btw#2015-08-1222:05sdegutisDatomic Free#2015-08-1222:06bkamphaus^ simplest explanation, probably best next step is to confirm those exist (and that data type matches, etc.) - the tempid map is just what d/tempid resolves to.#2015-08-1222:06bkamphausi.e.
(d/tempid :db.part/user)
;
#2015-08-1222:10sdegutisAlso, it works just fine when I do it using the in-memory database.#2015-08-1222:34bkamphausIf it succeeds when you omit fields, my next step would be to confirm that a problem attr’s schema has been transacted and is in the db prior to transacting (in the isolated case of free storage).#2015-08-1222:39sdegutisbkamphaus: Very good idea, thanks, will do.#2015-08-1222:52misha@sdegutis looks like missing schema to me too#2015-08-1222:53mishagentlemen, what is (or is there) an idiomatic way to keep track of order of items in collections in datomic?#2015-08-1222:55mishaE.g. I have 3 "reusable" item-entities: a b c, and I want to have some list-entities which not only contain list of items, but know their order in the list as well.#2015-08-1222:55mishaIn which case [a c] and [c a] would be different.#2015-08-1222:57mishaThe only solution I have in mind, is to wrap items in an item-container-entities, and make lists out of those:
[{:item a, :idx 0} {:item c, :idx 1}] and
[{:item c, :idx 0} {:item a, :idx 1}]
or
[{:item a, :idx 0} {:item c, :idx 1}] and
[{:item a, :idx 1} {:item c, :idx 0}]#2015-08-1222:58mishais there a better/other way?#2015-08-1223:38sdegutis@bkamphaus: How do you confirm the existence of such a thing in code?#2015-08-1223:51bkamphausquery for the attr entity (e.g. By ident) using the db value from the conn prior to submitting the tx. Entity or attribute apis are options, too.#2015-08-1223:53sdegutisYeah yeah I know some of these words .gif#2015-08-1223:59misha@sdegutis
(datomic.api/entity (datomic.api/db (datomic.api/connect db-url)) :my/attr)#2015-08-1300:00misha=> {:db/id 17592186045443} means :my/attr is installed.
=> nil means it is not#2015-08-1300:01mishaor a bit fancier: https://gist.github.com/stuarthalloway/2321773#2015-08-1300:14sdegutis@misha, @bkamphaus aha! It's definitely not there, it returns nil.#2015-08-1300:14sdegutisWell, {:db/id nil}#2015-08-1300:16sdegutis@misha: So what reason could it be nil when I definitely installed it in a transact?#2015-08-1300:18misha@sdegutis how did you install it exactly?#2015-08-1300:27sdegutis@misha: (d/transact conn txs) where txs included {:db/id (d/tempid :db.part/db), :db/ident :question-template/sort-pos, :db/valueType :db.type/long, :db/cardinality :db.cardinality/one, :db.install/_attribute :db.part/db}#2015-08-1300:28sdegutisBut (d/entity db :question-template/sort-pos) returns {:db/id nil}#2015-08-1300:30bkamphausthere are a variety of reasons this could be the case — that transaction didn’t succeed, that transaction went to a different conn, the conn had the db deleted/recreated prior to following transaction, the order of transaction differs from your expectations, etc.#2015-08-1300:32sdegutisIs there a way to determine the reason?#2015-08-1300:34bkamphausAs a sanity measure, one thing you could do would be to check the basis-t of the :db-after returned by the map you get from deref-ing the schema transaction vs. the basis-t of the db retrieved from the conn prior to submitting the transaction that fails.#2015-08-1300:35bkamphausare the transactions on the same peer process, in a way where (for at least debugging) you could guarantee the order? I.e. submit schema transaction, deref future it returns (which will block until transaction succeeds), then follow up? 
If you can guarantee the order, you could look at the :tx-data returned by the schema transaction.#2015-08-1300:36sdegutisI'm using Datomic Free with the default H2 thing it uses, and using only one peer process.#2015-08-1300:36sdegutisI always deref the future it returns immediately after getting it back.#2015-08-1300:36sdegutisOh wait, in this specific transact I might not be!#2015-08-1300:43misha@bkamphaus transact's future will "complete" even if I will not deref it, right? right. what is the "correct" way to transact stuff (in the code, not in repl)?
"transact, deref, and see that transaction succeeded/log any errors"?#2015-08-1300:46lowl4tencyHi, guys as far as I remember datomic has a WebUI, hasn’t it?#2015-08-1300:47mishasort of, yes#2015-08-1300:49sdegutis@bkamphaus: I think you may have solved it!#2015-08-1300:50sdegutis@bkamphaus: I've been ignoring the (transact) call's result all this time. Well just now, I finally chose to print it out (without deref'ing it, mind you) and instead of printing it out, it just died, saying: "Exception in thread "main" java.util.concurrent.ExecutionException: java.lang.Error: :transact/bad-data Changing :db/valueType to 22 is not supported for entity 315"#2015-08-1300:50lowl4tencymisha: what should I do to get it? I’m using the official ec2 AMI for transactor#2015-08-1300:50bkamphaus@lowl4tency: you mean the web console? http://docs.datomic.com/console.html#2015-08-1300:50mishahttps://my.datomic.com/downloads/console#2015-08-1300:51lowl4tencyah, it’s running separately#2015-08-1300:52misha$ cat README-CONSOLE.md
...
If you downloaded Console as a separate download (i.e.: not bundled with Datomic Pro), you will need to install it alongside Datomic:
Run this command from the directory you unzipped Console to:
bin/install-console path-to-datomic-directory
Switch to your Datomic directory and run:
bin/console -p 8080 alias transactor-uri-no-db
...
#2015-08-1300:53lowl4tencyOkay, I don’t need it in this way simple_smile#2015-08-1300:53lowl4tencyCloudWatch metrics is enough#2015-08-1300:53mishaas a separate download (i.e.: not bundled with Datomic Pro)#2015-08-1301:01sdegutis@bkamphaus: Yep, I was changing the type of an existing attribute, which isn't allowed in our older version of Datomic. By not deref'ing the schema transact, we were never hearing about that error.#2015-08-1301:02sdegutis@bkamphaus: Thanks, I owe you a beer.#2015-08-1304:02bostonaholicdoes it make sense to :db/index true a :db.type/ref?#2015-08-1304:21robert-stuttaford@bostonaholic: no. refs are all in VAET index already#2015-08-1304:22bostonaholicthat's what I thought, thanks @robert-stuttaford#2015-08-1304:22robert-stuttafordso you can (->> (d/datoms db :vaet your-ref-id :some/attr) seq (map :e)) to get all relations#2015-08-1304:23robert-stuttaford@sdegutis: probably already something you’ll do from now on, but you should always look at the results of your transact calls simple_smile#2015-08-1317:14potetmIs there a limit on the number of attributes that d/pull will pull at one time? I’m in a situation where I’m trying to pull 14 attributes, of those 9 exist on this particular entity, but it appears to only return 8 attributes.#2015-08-1317:33bkamphaus@potetm which version of Datomic are you seeing this behavior on? There was a bug w/pull fixed in 0.9.5198 that if I remember correctly had a similar manifestation (showed up w/more than 8 attr):
* Fixed bug where the pull API did not always return all explicit reverse references.#2015-08-1317:34potetmThat’ll do it simple_smile I’m running 0.9.5153#2015-08-1317:34potetmThe thing that was being dropped was a reverse reference.#2015-08-1510:17gerrithi! I am having problems with running bin/datomic backup-db. the storage is postgres, and the postgresql-9.3-1102-jdbc41.jar should be on the classpath (from my understanding of the shell scripts), but I still get java.sql.SQLException: No suitable driver. any ideas on what I am doing wrong?#2015-08-1510:25gerritthe class is even loaded:
[Loaded org.postgresql.Driver from file:/var/lib/openshift/542fae5ae0b8cda9b7000ca0/app-root/data/datomic-pro-0.9.5153/lib/postgresql-9.3-1102-jdbc41.jar]
#2015-08-1511:17gerritthe script is called like this:
bin/datomic -Xmx512m -Xms512m backup-db 'datomic:sql://<DBNAME>?jdbc:postgresql://<IP>:5432?user=datomic&password=datomic' file:/var/lib/openshift/.../app-root/data/backup
the same happens with version 0.9.5206#2015-08-1517:29ckarlsentry ..<IP>:5432/datomic?user=...#2015-08-1608:16gerritgreat! that worked!#2015-08-1608:16gerritthanks#2015-08-1608:18gerritbut why? is that documented somewhere?#2015-08-1616:11bkamphaus@gerrit the Jdbc url was missing the Postgres db name, so was not valid.#2015-08-1702:20joshgCould anyone recommend some examples of how to handle user authorization with Datomic?#2015-08-1703:02meow@joshg: I'm not currently using datomic but I think this question has come up before so you might want to check the archives if you haven't already: http://clojurians-log.mantike.pro/datomic/#2015-08-1703:05joshg@meow: Thanks. The best resource I’ve found thus-far has been: https://www.youtube.com/watch?v=7lm3K8zVOdY#2015-08-1703:06meownp#2015-08-1711:33val_waeselynck@joshg: if you're planning to give your clients arbitrary query power (even on a subset of your db using filter) keep in mind that they still can build queries that will consume all of your resources. So I don't recommend that in the general case.#2015-08-1711:35val_waeselynckI personally used the old way: parameterize all my endpoints with some access restriction configuration, using Ring middlewares to query the db for the right authorization. 
I have found that recursive rules can be a very powerful tool for this.#2015-08-1715:19raymcdermotthi guys, this question was kind of answered before but I didn’t have the database in front of me like I do now so I couldn’t quite get the spoon feeding that I need 😊#2015-08-1715:20raymcdermottassume I have added some data and have added this to the record...#2015-08-1715:21raymcdermottthis all works and I can get back the data for that tx#2015-08-1715:22raymcdermottbut I cannot work out how to ask datomic… give me the entity that changed as a result of that tx#2015-08-1715:25raymcdermottI’m still breaking my head with this stuff so anything on a spoon would be happily received!#2015-08-1715:41robert-stuttafordraymcdermott: you have a txid and you want a list of entities it touched?#2015-08-1715:41robert-stuttaford(d/q '[:find ?e :where ?t [?e _ _ ?t]] db tx-id)#2015-08-1715:42robert-stuttafordthe transact function returns a bunch of data. included are all the new datoms, as well as a map of temp -> storage ids#2015-08-1715:42bostonaholic@raymcdermott: I usually return the tempid then d/entity on it#2015-08-1715:42robert-stuttafordyou should find everything you need in there#2015-08-1715:43robert-stuttafordhttps://github.com/clojure-cookbook/clojure-cookbook/blob/master/06_databases/6-12_transact-basics.asciidoc covers the essentials#2015-08-1717:23raymcdermottsuper - thanks robert (I was AFK for a while)#2015-08-1718:29sdegutisWhat major changes would someone upgrading from Datomic Free 0.8.4218 to Datomic Free 0.9.5206 encounter?#2015-08-1718:42sdegutisOnly one I've found so far is "Alter Schema" in 0.9.4470, which says "This feature breaks compatibility with older versions. Once a schema alteration has been performed on a database, only connect peers and transactors running at least [this version] to that database." 
but I don't quite know how to interpret that.#2015-08-1718:44bostonaholic@sdegutis: re: alter schema: it's just saying that once you use the alter schema feature of >=0.9.4470, there is no going back to an earlier version of datomic#2015-08-1718:44sdegutisAh got it.#2015-08-1718:44sdegutisThanks @bostonaholic.#2015-08-1718:44bostonaholicnp#2015-08-1718:45bostonaholicand I would suggest going through the changelog to get the best answer#2015-08-1718:45sdegutisThat's all the changelog says.#2015-08-1718:46sdegutis@bostonaholic: That's all it says. I've been going through the 500 lines in the full changelog between our current version and the latest.#2015-08-1718:46sdegutisSo I think you're right.#2015-08-1719:52sdegutisSuccessfully upgraded from 0.8.4218 to 0.9.5206 with no visible issues so far. It was pleasantly smooth. Just FYI.#2015-08-1722:20clojuregeekmy secret plan to start moving to datomic by using the ruby gem was deflated today, Yoko said the gem is behind and I'm better off using clojure#2015-08-1802:03bhagany@clojuregeek: using the raw REST api isn't too terrible. I'm doing it from Python.#2015-08-1802:05bhagany@clojuregeek: maybe I'm just very motivated to think about it positively, though. When I look at it a bit more objectively, I am doing quite a bit of string interpolation. But I do get to use datomic simple_smile#2015-08-1802:05bhaganyAlso, datomic is my way of getting clojure in the door, so… in the fullness of time...#2015-08-1802:05clojuregeek@bhagany simple_smile#2015-08-1802:05clojuregeek@bhagany: what data store are you using?#2015-08-1802:06bhaganyour own - I'm porting an existing system.
It's basically a row/column based page designer, for ecommerce#2015-08-1802:07clojuregeekohh#2015-08-1802:07bhagany@clojuregeek - I think I saw on twitter that you're doing the music brainz examples?#2015-08-1802:08clojuregeeki loaded it up#2015-08-1802:09clojuregeeki just finished all the day of datomic training videos from the site#2015-08-1802:11bhaganyI see. From my experience, doing things in clojure transfers pretty well to using the REST api from non-jvm-land, if that's where you're headed. Either way would probably result in some good learning simple_smile#2015-08-1802:12bhaganyor, if you're feeling really ambitious, update the gem!#2015-08-1802:21clojuregeeksimple_smile#2015-08-1812:54sdegutis@bhagany: even using the in-memory db?#2015-08-1812:58sdegutisGood morning everyone.#2015-08-1812:59sdegutisI've been considering making a function which wraps (d/transact ...), takes the :db-after key from the result, and swaps it out in a global atom/ref/var/whatever, so that I don't need to pass the most recent database value everywhere. What do you think of this idea? Would it be terrible on performance?#2015-08-1813:19borkdudeI'm trying to run a transactor, but I get: java.lang.IllegalArgumentException: :db.error/not-enough-memory (datomic.objectCacheMax + datomic.memoryIndexMax) exceeds 75% of JVM RAM#2015-08-1813:20borkdudewhich values should I change#2015-08-1813:22borkdudeI increased Xms and Xmx to 2G and that seems to work#2015-08-1813:34bhagany@sdegutis: I've never used the in-memory storage with the rest api, but I don't know of any reason why it would be different#2015-08-1813:35bhagany@sdegutis: also, I'm no expert, but having a global db atom would defeat one of the big benefits I get from datomic - a consistent view of time#2015-08-1813:38bhaganyto expand, I want to be able to arbitrarily compose functions that all use the same db value, so as not to get inconsistent results from updates that happen in other threads, etc. 
This can't happen if those functions are constantly referencing the most recent db-after#2015-08-1814:07sdegutis@bhagany: The docs say "The memory system is included in the Datomic peer library." which seems to imply that it needs a full-fledged Clojure Peer process?#2015-08-1814:08sdegutis@bhagany: Also the majority of the time I want to see updates that happen in all threads as soon as possible, since this is a live web app it's used in#2015-08-1814:09bhagany@sdegutis: The rest process is a peer#2015-08-1814:09sdegutisAhh right.#2015-08-1814:10bhagany@sdegutis: In a web app context, you'll get inconsistent results within a single request, if you don't use a consistent db value#2015-08-1814:10sdegutis@bhagany: How do you figure?#2015-08-1814:14bhagany@sdegutis: contrived example: your request is composed of 3 functions: func1 gets a count of some entities, func2 transacts some data and resets your db-after atom, func3 gets a list of the entities that func1 counted,#2015-08-1814:14bhagany@sdegutis: in another thread, after func1 runs, but before func2, another entity is added#2015-08-1814:15bhagany@sdegutis: now the count from func1 and the list from func3 are inconsistent#2015-08-1814:20sdegutis@bhagany: That seems like an expected type of inconsistency within the context of web apps#2015-08-1814:21bhagany@sdegutis: heh, I suppose you could say it happens all the time. I'd hesitate to call it good, and avoiding it is one of the benefits of using datomic.#2015-08-1814:22sdegutisAh touche simple_smile#2015-08-1814:22bhaganysimple_smile#2015-08-1814:24sdegutis@bhagany: But even when passing database values around, the user will seem to get somewhat inconsistent results. 
Consider that if a route /count-things is called, and immediately afterwards /insert-thing is called and returns before /count-things returns, then /count-things will give an outdated number, inconsistent with the effects of /insert-thing.#2015-08-1814:24sdegutisWhich I guess isn't better or worse, just different.#2015-08-1814:25bhagany@sdegutis: yes, this is true. If these are something like rest endpoints and the results will be displayed in a single view, I would use an as-of filter, so that the results are consistent#2015-08-1814:26bhagany@sdegutis: if these are two different page views, I think that the user's expectation is generally that time has passed in between page loads.#2015-08-1814:27sdegutisGood point!#2015-08-1814:27bhaganyI don't have a specific attribution, but that's almost certainly not originally my point simple_smile#2015-08-1814:27bhaganybut thanks#2015-08-1814:29sdegutis😛#2015-08-1814:39tcrayfordthe fact that you can stuff as-of into all ajax requests (in theory anyway) is a super good idea 😄#2015-08-1814:39tcrayford@sdegutis: typically ring/datomic apps have a middleware that derefs the database and stuffs it into the request somewhere for this stuff#2015-08-1814:41sdegutis@tcrayford: Yeah that's what we're doing now. But that means every function that my handlers call which need access to the database will need to take a database parameter.#2015-08-1814:41tcrayfordyep, and that's a good thing#2015-08-1814:42sdegutisI spose#2015-08-1814:42sdegutis@tcrayford: But since we also do extensive testing, all our functions return a new database value too. So return values often look like [db user] or [db cart item]#2015-08-1814:44tcrayford@sdegutis: so maybe have the thing inside the request be an atom and mutate that somewhere? My apps tend to do very minimal mutation/transactions anyway, and rarely does rendering the response require the updated database value#2015-08-1814:44sdegutis@tcrayford: Hmm that's an interesting idea. 
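The middleware tcrayford describes is only a few lines. An editorial sketch (names are illustrative; assumes datomic.api aliased as d):

```clojure
;; Ring middleware: capture one consistent db value per request, so
;; everything downstream of the handler sees the same point in time.
(defn wrap-db [handler conn]
  (fn [request]
    (handler (assoc request :db (d/db conn)))))
```

Handlers then read the value from (:db request) and pass it down explicitly.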
I'm not sure how much it will clean up my API though since they'd all still need to take the db atom.#2015-08-1814:47tcrayfordfor the record: ~40 route webapp, maybe 5k lines of code all told or something. Been doing stuff like this for nearly 3 years now, and not had any problems with it. I typically only test transactional things by having functions that take in a request and return the transaction data, then just test them with d/with. I don't worry about testing the actual call to transact via unit tests#2015-08-1814:56sdegutis@tcrayford: Hmm interesting technique.#2015-08-1814:58sdegutis@tcrayford: I haven't been testing the calls to transact either, I look at that as an implementation detail, but I have been testing the returned "presenter" values. Like, (create-user db "myname") should return [db user] where user is {:name "myname", :confirmed false} etc. which necessitates returning the value from the database (and in some cases the updated database itself for querying against in my tests).#2015-08-1816:45a.espolovhello#2015-08-1816:48a.espolov(s/schema inspection
(s/fields
[time :long]
[status :enum [:rejected :unverified :confirmed]]
[targets :ref :many]
[comment :ref :many]
[outlet :ref :one]
[promo :ref :one]
[user :ref :one]))
Guys, how do I find all "inspection" entities for a sample by user ids [... ... ...] and by outlet ids [... ... ... ] in a datomic database?#2015-08-1816:49a.espolovDo I have to use and or or in the query?#2015-08-1818:34val_waeselynck@a.espolov: sorry, I don't understand the question :s#2015-08-1818:43sdegutisJust came up with a pattern I'm liking and wanted to share: https://gist.github.com/sdegutis/7f75d257abbf037e8e48#2015-08-1818:43sdegutisIt's a push-based schema updating technique, that's friendly for code-reloading too.#2015-08-1900:55sdegutisQuestion: is it alright to install an attribute multiple times?#2015-08-1901:53bhagany@sdegutis: I've done it while developing and I'm pretty sure it's idempotent. At least I didn't notice any ill effects.#2015-08-1901:53bhaganyprobably wise to wait for someone more knowledgeable to weigh in though#2015-08-1902:26alexmillergenerally asserting datoms that match current state does not change anything (except creating transactions)#2015-08-1902:26alexmillerbut I cannot really attest to a definitive answer, just going by what I've seen#2015-08-1902:29sdegutisThanks.#2015-08-1909:51raymcdermott@bostonaholic: I need a general purpose way to query the tx data and the entities rather than just getting back the entity affected by the current tx (if I understand your answer!)#2015-08-1909:52raymcdermott@robert-stuttaford: I ran the query and get back an error from the program#2015-08-1909:52raymcdermott(def tx-id 13194139534372)
=> #'datomic-customer.core/tx-id
(d/q '[:find ?e :where ?t [?e ?t]] db tx-id)
IllegalArgumentException Argument ?t in :where is not a list datomic.query/validate-query (query.clj:290)#2015-08-1910:41robert-stuttafordsorry @raymcdermott, i’m a dork#2015-08-1910:41robert-stuttafordmy shoot-from-the-hip coding-in-slack code i gave you was nonsense. here’s the right way:#2015-08-1910:42robert-stuttaford(d/q '[:find ?e :in $ ?t :where [?e _ _ ?t]] db tx-id)#2015-08-1910:46robert-stuttaford@raymcdermott: look at http://docs.datomic.com/log.html#log-in-query too#2015-08-1910:51raymcdermottthanks robert but now I get#2015-08-1910:51raymcdermott(d/q '[:find ?e :in $ ?t :where [?e ?t]] db tx-id)
Exception Insufficient bindings, will cause db scan datomic.datalog/fn--6468 (datalog.clj:368)#2015-08-1910:52raymcdermotti’m actually struggling to unify the log with the db in my head - are they different query objects?#2015-08-1910:53tcrayfordthey're different. The log isn't cached either (which may matter a lot for you)#2015-08-1910:54raymcdermottok so maybe I should reiterate my understanding and what I thought I could do and you guys can tell me where I’m right / wrong#2015-08-1910:55raymcdermottI thought I could associate some data with a txn and then simply get that data back from the db#2015-08-1910:56raymcdermottso maybe I have to use two APIs instead?#2015-08-1910:56tcrayford@raymcdermott: no you can for sure#2015-08-1910:57tcrayfordsorry, I need to scroll up a bit#2015-08-1910:57tcrayford(to find out your original question) 😉#2015-08-1910:58tcrayfordare you trying to load all the attributes about a specific tx-id?#2015-08-1910:58raymcdermottI want to add provenance information to each update on customer records and then be able to view that provenance information when I access the customer record … end goal is that I may prefer to show the data from the customer over data from another source which was more recent though less ‘trustworthy’#2015-08-1910:59raymcdermottthat sounds a bit messy but we are trying to combine a way to show updates on a per source basis#2015-08-1911:00tcrayfordseems reasonable to me.
So the trick is: attributes about a transaction use the transaction id as the entity id#2015-08-1911:00raymcdermottand I thought tx data might offer that … perhaps another, more explicit design would be better#2015-08-1911:00tcrayfordoh, are you going from tx-id to entity then?#2015-08-1911:00tcrayfordor tx-id to "attributes about that tx"#2015-08-1911:01raymcdermottor the other way around - tell me which tx-ids made this entity data#2015-08-1911:01tcrayfordack yeah#2015-08-1911:02tcrayfordI think that's gonna not fly because of the fact that it'd do a full table scan 😕#2015-08-1911:02raymcdermottand from those tx-ids I can find some tx-data#2015-08-1911:02robert-stuttafordok. you’ll need to use tx-data then#2015-08-1911:02robert-stuttafordthe log.html link i pasted earlier#2015-08-1911:02tcrayfordyeah. You can use d/pull with a tx-id for getting tx-data, which is dope simple_smile#2015-08-1911:03robert-stuttafordusing what pattern, tom?#2015-08-1911:03tcrayfordhere's a query I ran on yeller's db a while back:#2015-08-1911:03tcrayford(clojure.pprint/pprint (d/q '[:find [(pull ?e [*]) (pull ?t [*])] :where [?e :project/name "stress1"] [?e :project/api-token _ ?t]] (-> webapp :datomic-connection d/db d/history)))#2015-08-1911:03robert-stuttafordall the datoms caused in that tx, or merely the values on the tx itself?#2015-08-1911:03tcrayfordah, values in the tx 😉#2015-08-1911:03tcrayfordthere's no index the other way except the log#2015-08-1911:03robert-stuttafordok. ray wants the first one, which means talking to d/log#2015-08-1911:04tcrayfordyeah#2015-08-1911:04tcrayfordand as mentioned: log is uncached#2015-08-1911:04robert-stuttafordray, be aware that reads on d/log are not cached#2015-08-1911:04tcrayfordhaha ^_^#2015-08-1911:04tcrayford(yeller doesn't use the log for anything for just this reason)#2015-08-1911:04robert-stuttafordno peer cache involved, as no index involved. 
it’s the tx log directly#2015-08-1911:04raymcdermottit’s not clear from the link how I go from entity to tx#2015-08-1911:05tcrayford@raymcdermott: you'd have to scan the log, looking for entries with that eid 😕#2015-08-1911:05tcrayfordI don't think this solution is gonna work too well though 😕#2015-08-1911:05tcrayfordI'd vote towards making an explicit entity for source stuff#2015-08-1911:05raymcdermottThe more I explore this, the better that option sounds!#2015-08-1911:05robert-stuttaford(d/q '[:find ?e ?a ?v ?added
:in ?log ?tx
:where [(tx-data ?log ?tx) [[?e ?a ?v ?tx ?added]]]]
(d/log conn) tx-id)
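The same lookup can also be sketched without datalog by reading the log directly, as arohner does later in the thread (assumes `conn` is an open connection and `tx-id` a transaction entity id; remember that log reads are not cached):

```clojure
(require '[datomic.api :as d])

;; Sketch: datoms asserted/retracted by one transaction, via d/tx-range.
;; `conn` and `tx-id` are assumed bindings, not part of the original snippet.
(->> (d/tx-range (d/log conn) tx-id (inc tx-id)) ; range covering just this tx
     first                                       ; {:t ..., :data [...]}
     :data                                       ; the transaction's datoms
     (map :e)                                    ; entity ids it touched
     distinct)
```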
#2015-08-1911:06robert-stuttafordthis is if you have tx-id and you want to know what entities were affected by it#2015-08-1911:07tcrayfordfor Yeller: I attach stuff to every transaction, but it's just debugging: git sha, which user account was logged in, uri/method of the http request and so on. It's super useful, but I don't think I'd use it for more than that#2015-08-1911:07raymcdermottin my case I want to go the other way … what is all the ex data for entity X#2015-08-1911:07raymcdermotts/ex/tx/#2015-08-1911:07tcrayfordyeah, so that's hard/bad/slow imo#2015-08-1911:08robert-stuttafordthinking#2015-08-1911:08raymcdermottmakes sense, perhaps I was just pushing the concept too hard#2015-08-1911:08robert-stuttafordso, all the attrs on all the txes for all modifications to an entity, right?#2015-08-1911:08raymcdermottyes#2015-08-1911:09tcrayfordI think specifically "all the datoms that make up the present state of the entity"#2015-08-1911:09raymcdermottyes, but specifically including the tx-data that created those datoms#2015-08-1911:10raymcdermottThe point is that I don’t want to abuse the feature if the performance / queries are all going to be non-idiomatic#2015-08-1911:11raymcdermottbut I would like it if they were 😉#2015-08-1911:11tcrayfordso that's like (d/q '[:find (pull ?t [*]) :in $ ?eid :where [?eid _ _ ?t]] db eid) right? Do you get a full table scan warning there?#2015-08-1911:11tcrayfordit should just be: "walk up to eavt, grab datoms about EID, then walk up to eavt, grab all datoms about TX entity"#2015-08-1911:12tcrayfordat least: I think tx metadata is stored in eavt as normal entries#2015-08-1911:12raymcdermottI didn’t try that although it looks close to Robert’s suggestion earlier which does come with that warning#2015-08-1911:12raymcdermottlet me give it s a try#2015-08-1911:12tcrayfordif not, you can do "get me all the tids for this entry" on top of raw eavt index access yourself#2015-08-1911:13robert-stuttaford(d/q '[:find [(pull ?t [*]) ...] 
:in $ ?e :where
[?e _ _ ?t]]
some-db
some-id)#2015-08-1911:13robert-stuttafordthis works for me#2015-08-1911:14tcrayford(I saw that before the edit)#2015-08-1911:14robert-stuttaforddev-mode only#2015-08-1911:14tcrayfordoh, maybe that's ok 😉#2015-08-1911:14robert-stuttafordprod uses TrapperKeeper with nice config management, relax simple_smile#2015-08-1911:14tcrayfordmaybe I should actually steal that thinking about it#2015-08-1911:14raymcdermottboom#2015-08-1911:14raymcdermott(d/q '[:find (pull ?t [*]) :in $ ?eid :where [?eid _ _ ?t]] db eid)
=> [[{:db/id 13194139534372, :db/txInstant #inst "2015-08-15T18:25:13.844-00:00", :data/src "A random place on the Internet, spooky heh?"}]]#2015-08-1911:15tcrayfordyay simple_smile#2015-08-1911:15raymcdermottyes tom, that worked#2015-08-1911:15robert-stuttafordafter first cold-cache call, i get 34ms for that query for my test entity with 722 txes#2015-08-1911:15robert-stuttafordso, perf is fine#2015-08-1911:15tcrayford(if 34ms is acceptable to you)#2015-08-1911:15robert-stuttafordits not seconds#2015-08-1911:15tcrayfordyeah#2015-08-1911:16robert-stuttafordand it caches#2015-08-1911:16tcrayfordyeller has a long way to go on pageload times, so I'm just teasing simple_smile#2015-08-1911:16robert-stuttaford😁#2015-08-1911:16tcrayford(the eventual aim: every pageload is under 100ms, ideally under 50ms)#2015-08-1911:16robert-stuttafordcode perf is like playing golf. you never get it perfect#2015-08-1911:16robert-stuttafordyeah i think you’ll be burning your java bytecode onto roms at that point#2015-08-1911:17raymcdermottso guys … that’s great stuff … any other warnings / perf issues around what we have now?#2015-08-1911:18robert-stuttafordyou should be fine#2015-08-1911:18tcrayfordI don't think I have any, iff tx attributes are in eavt#2015-08-1911:18robert-stuttafordit’s idiomatic datalog#2015-08-1911:18robert-stuttafordyup, everything’s in eavt simple_smile#2015-08-1911:19robert-stuttafordok. have fun ray, tom!#2015-08-1911:43raymcdermottnice - thanks!#2015-08-1912:59sdegutisCan you somehow return a map from a Datomic query?#2015-08-1913:01sdegutisRight now I'm doing :find [k v] and (into {} query-result)#2015-08-1913:11tcrayford@sdegutis: see the pull api#2015-08-1913:11tcrayfordand/or the entity api#2015-08-1913:17sdegutisI'm not sure the pull API can do this: I'm matching up arbitrary keys to arbitrary values in my query and turning that into a map.#2015-08-1913:17tcrayfordah, so no to that then. 
I'd strongly recommend using navigation via entities for that stuff over doing it in query (imo anyway)#2015-08-1913:20sdegutis@tcrayford: By navigation do you just mean building the map after the query is done using regular Clojure code?#2015-08-1913:21tcrayfordyeah, via mapping over entities you get by calling d/entity on the results of query#2015-08-1915:07kvltIs there a reason I can’t do this:
(d/q '[:find ?moo
:in $ ?account ?moo
:where
[?account :account/obj ?obj]
(or [?obj :obj/moo ?moo]
[(missing? $ ?obj :obj/moo)])]
(db/db) 17592186045619 17592186046122)#2015-08-1915:38sdegutisHow can I find duplicate values of :user/email in a Datomic query?#2015-08-1915:40bostonaholic@sdegutis:
(d/q '[:find ?u1 ?u2
:in $ ?email
:where
[?u1 :user/email ?email]
[?u2 :user/email ?email]]
db email)
#2015-08-1915:40bostonaholicsomething like that should work#2015-08-1915:41sdegutisHmm.#2015-08-1915:41sdegutis@bostonaholic: Sorry I meant to query the whole database looking for duplicates.#2015-08-1915:41bostonaholicmade a couple edits#2015-08-1915:41sdegutisNot for a given email.#2015-08-1915:42bostonaholiccan you take what I gave you and adjust it?#2015-08-1915:42bostonaholicwhat would you need to change in that query if you weren't passing in ?email?#2015-08-1915:42sdegutisThe context is that we didn't have a uniqueness constraint on emails because of how we used to handle authentication in the past, but now we've changed that, and I'm looking to see if I can add a uniqueness constraint now, which would require having no current duplicates.#2015-08-1915:43bostonaholicbeen there#2015-08-1915:43sdegutisMy current try was this:
(d/q '[:find (count ?email) .
:with ?user
:where [?user :user/email ?email]]
(db))#2015-08-1915:43bostonaholicso you just want the email addresses that are duplicated?#2015-08-1915:43sdegutisBut I don't think that actually works, I think it could be filtering out duplicates by the nature of how Datomic inherently uses sets.#2015-08-1915:43sdegutisYep.#2015-08-1915:44sdegutisSlack sucks btw.#2015-08-1915:45bostonaholictake my first example, how can you adjust it without passing in ?email?#2015-08-1915:45sdegutisHmm I wonder.#2015-08-1915:46sdegutisI don't think it's possible.#2015-08-1915:46bostonaholicwhat about something like
(d/q '[:find [?email ...]
:where
[?u1 :user/email ?email]
[?u2 :user/email ?email]]
db)
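A fuller sketch of that self-join, with a predicate to drop the trivial self-match (`not=` here is plain clojure.core/not=, callable from datalog; `db` is assumed to be a database value):

```clojure
;; Sketch: emails held by at least two distinct user entities, i.e. duplicates.
(d/q '[:find [?email ...]
       :where
       [?u1 :user/email ?email]
       [?u2 :user/email ?email]
       [(not= ?u1 ?u2)]]   ; without this, every email joins with itself
     db)
```

Without the predicate every email matches, because ?u1 and ?u2 are free to bind the same entity.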
#2015-08-1915:46sdegutisBtw the :in $ is implicit, you can omit that.#2015-08-1915:47bostonaholicif you're not familiar, the [?email ...] syntax will return a collection of emails#2015-08-1915:47sdegutisHmm interesting.#2015-08-1915:47sdegutisI do vaguely recall it. Didn't remember the [] being necessary around it.#2015-08-1915:47bostonaholicthanks, I just removed the bound ?email and forgot to remove $#2015-08-1915:47sdegutisI'll re-read that section of the docs.#2015-08-1915:49sdegutis@bostonaholic: I see now in the query.html page. Thanks. I'm running the query now, it'll take a few minutes to finish though.#2015-08-1915:50bostonaholicyou might also need to add the [(not (= ?u1 ?u2))] clause#2015-08-1915:54sdegutis@bostonaholic: That fails with an error about ?u1 not being resolveable in context.#2015-08-1916:12sdegutis@bostonaholic: btw this seemed to work with reasonable confidence: (d/q '[:find (frequencies ?email) . :with ?user :where [?user :user/email ?email]] (db))#2015-08-1916:13sdegutisQuestion: is it possible to have an (or ... 0) in the :find clause, for times when you're using something like (count ?e) but the result may be nil?#2015-08-1916:13sdegutisI mean, I know (or ... 0) literally won't work, but I'm wondering if there's something similar to that concept available.#2015-08-1916:38sdegutisAlso, is the only difference between :db.unique/value and :db.unique/identity that the latter has upsert behavior and the former doesn't?#2015-08-1916:55sdegutisQuestion: What's the benefit of Pull? When you e.g. get a user by (d/entity db [:user/email ", you can then get anything lazily from it as if it were a nested map, like (:user/name user) or even (-> user (:user/account) (:account/balance)) and it works fine.#2015-08-1917:06kvltIs there a reason I can’t do this?#2015-08-1917:30sdegutisDo you often find yourself mapping d/entity over [?ent ...] find results like this? (->> (d/q '[:find [?user ...] 
:where [?user :user/email]] db) (map (partial d/entity db)))#2015-08-1917:31sdegutisThat's a simplistic query, but the idea is that you're querying for a list of entities matching some predicates, and you want to get them as entities.#2015-08-1917:39bostonaholic@sdegutis: I use (pull ?e [*])#2015-08-1917:40bostonaholic[(pull ?user [*]) ...] for your example#2015-08-1917:42kvltAnyone?#2015-08-1917:43sdegutis@bostonaholic: Oh smart. I forgot about pull expressions.#2015-08-1917:43sdegutisWe've been using a 3-year-old version of Datomic until yesterday.#2015-08-1917:43bostonaholicwhoa#2015-08-1917:43sdegutisSo all these changes are very new to me.#2015-08-1917:44sdegutisWe were also on Clojure 1.5.1 until then, now Clojure 1.7 ❤️#2015-08-1917:44bostonaholic@petr: I'm failing to see what you're trying to accomplish with that query#2015-08-1917:44bostonaholic@sdegutis: good for you!#2015-08-1917:45kvlt@bostonaholic: I’m trying to find ?obj that have :obj/moo set to a value or nil#2015-08-1917:45bostonaholicthat will be everything#2015-08-1917:45bostonaholicit's either set, or not#2015-08-1917:45kvltNo, a specific value or nil#2015-08-1917:45bostonaholicnil doesn't exist in datomic#2015-08-1917:45bostonaholicyou cannot set an attribute to nil#2015-08-1917:46kvltLet me rephrase. Set to a specific value or not set at all#2015-08-1917:46bostonaholicah, ok#2015-08-1917:47bostonaholicso "find me all dogs whos names are "Fido" or not set at all?"#2015-08-1917:47kvltCorrect#2015-08-1917:51sdegutisYou can use missing? tho#2015-08-1917:51sdegutisOr whatever it's called. It's a new-ish expresison.#2015-08-1917:52bostonaholic@petr: try unwrapping your missing? call#2015-08-1917:52bostonaholic(or [?obj :obj/moo ?moo]
(missing? $ ?obj :obj/moo))
#2015-08-1917:52kvlt@sdegutis: It complains when I try to use missing
[#{?obj ?moo} #{?obj}#2015-08-1917:53kvlt@bostonaholic: i get the same error#2015-08-1917:53bostonaholicwhat's the error?#2015-08-1917:53kvltAssert failed: All clauses in 'or' must use same set of vars, had [#{?obj ?moo} #{?obj}] (apply = uvs) datomic.datalog/unifying-vars (datalog.clj:817)#2015-08-1917:54bostonaholichm#2015-08-1917:56kvltYeah. I’m not really sure how to do this#2015-08-1918:00marshall@petr: If your :obj/moo is cardinality-one, you can use get-else: http://docs.datomic.com/query.html#get-else#2015-08-1918:00kvlt@marshall: unfortunately it’s not#2015-08-1918:04kvltThere can be multiple moos#2015-08-1918:05kvltI also don’t know that get-else would allow me to use missing?#2015-08-1918:05kvltNor would get-some for that matter#2015-08-1918:06marshallhow about this:
(or
[?obj :obj/moo ?moo]
(and
[(missing? $ ?obj :obj/moo)]
[(ground :nil) ?moo]))
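The "same set of vars" error kvlt hit with plain or can also be avoided with or-join, which makes the unification vars explicit; a sketch against the schema from the question, untested (`db`, `account-id`, and `moo-id` are assumed bindings):

```clojure
;; Sketch: objects on an account whose :obj/moo is a given value or unset.
;; ?moo arrives already bound from :in, so the missing? branch can ignore it.
(d/q '[:find ?obj
       :in $ ?account ?moo
       :where
       [?account :account/obj ?obj]
       (or-join [?obj ?moo]
         [?obj :obj/moo ?moo]
         [(missing? $ ?obj :obj/moo)])]
     db account-id moo-id)
```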
#2015-08-1918:06sdegutisIt might not. I didn't notice the get-some part.#2015-08-1918:06marshall^ if it’s missing assign it nil, otherwise assign it the value there#2015-08-1918:11kvlt@marshall: that’s a good guess. But same result#2015-08-1918:16marshallAh, think I had a typo. Edited.#2015-08-1918:39bostonaholicdoes missing? work on :db.cardinality/many?#2015-08-1918:47kvlt@bostonaholic: It does indeed#2015-08-1918:47bostonaholicjust checking. I haven't used it much so I wasn't sure#2015-08-1919:17sdegutisWhat do the docs mean by the example [(ground [:a :e :i :o :u]) [?vowel ...]] ?#2015-08-1919:27marshall@sdegutis: it binds the ?vowel variable to the values :a :e :i 😮 and :u#2015-08-1919:28bostonaholic😮#2015-08-1919:28bostonaholiclol#2015-08-1919:28marshall@sdegutis: http://docs.datomic.com/query.html#collection-binding#2015-08-1919:28marshallyeah, emojis in code ftw#2015-08-1920:18sdegutisokay#2015-08-1920:19sdegutisI thought you could omit :db/id from a nested map in a transaction if one of the keys was a unique attribute?#2015-08-1920:19sdegutisIt's saying java.lang.IllegalArgumentException: :db.error/invalid-nested-entity Nested entity is not a component and has no :db/id#2015-08-1920:20sdegutisAm I misreading the requirement?#2015-08-1920:34bostonaholicI don't believe you can#2015-08-1920:34bostonaholicif I'm not mistaken, all entities in datomic have a :db/id#2015-08-1920:36sdegutisExcept components in a transact apparently.#2015-08-1920:36sdegutisThe wording in their documentation wasn't clear, but that's effectively what it seems to mean.#2015-08-1920:38sdegutisWhoa! 
You can use the /_ syntax in pull patterns!#2015-08-1920:39sdegutis(d/pull db '[:user/_orders] [:order/id "abc123"])#2015-08-1920:39sdegutisThis returns a list of users who have that order ID!#2015-08-1920:39sdegutisEnjoying this.#2015-08-1921:10kvlt@sdegutis: You need to supply a tempid to the map if you’re transacting nested entities and the nested entity is not a component of the parent#2015-08-2113:41stuarthallowayevery entity has a tx, you could lookup up precisely those txes in the log#2015-08-2115:12kvltHey, so it looks like my datomic transactor (0.9.5130) is dying 5-6 times a day with the message:
2015-08-21 01:37:28.879 ERROR default datomic.process - {:tid 184, :pid 1848, :message "Critical failure, cannot continue: Heartbeat failed"}
Could this be due to memory settings? We are running them on c3-xlarge with the following memory settings:
memory-index-threshold=64m
memory-index-max=512m
object-cache-max=1g#2015-08-2115:46sonnytoI am trying to recursively touch an entity to realize it so that I can send it over a websocket... however it's not working.#2015-08-2115:47sonnytoit does touch the sub entity but when printing out the original entity, the subentity is not realized#2015-08-2118:22arohnersonnyto: doseq returns nil. you’d need to use for to get the results#2015-08-2118:23arohnersonnyto: however, it looks like what you’re trying to accomplish can be done with the pull API, so I’d look at that first#2015-08-2118:23arohnerhttp://docs.datomic.com/pull.html#2015-08-2409:59a.espolov(GET "/api/group/:id/children" [id] (do
(println "a" id (d/pull (d/db conn) '[*] id))
...))
Guys what's the catch?
the route receives the id parameter and it is not empty, but d/pull doesn't return anything at all.
If you replace the id in the code with a real id from the database, then it all works#2015-08-2410:00tcrayfordis id from the route a string?#2015-08-2410:02a.espolovtcrayford: yes, sorry(#2015-08-2410:03tcrayfordnp 😉#2015-08-2410:03tcrayfordI’d bet that’s it, right?#2015-08-2410:03a.espolova silly question#2015-08-2410:05robert-stuttafordwrap id with (Long. id) and it should be good#2015-08-2410:06robert-stuttafordalso, be aware it’s not advisable to use internal entity ids as external identifiers#2015-08-2410:06tcrayfordyeah, use a squuid#2015-08-2410:09a.espolov@robert-stuttaford: But what about a tree-like data structure whose leaves refer to other entities?#2015-08-2410:10robert-stuttafordthe entity ids are not guaranteed to be stable between database backup/restore. tcrayford knows the details#2015-08-2410:10robert-stuttafordif you want to keep a reference to an entity, you should use one of your own making#2015-08-2410:10robert-stuttafordlookup refs make this pretty straightforward to do#2015-08-2410:10tcrayfordobviously you're still fine doing a pull from that - if you avet it and use a lookup ref it's real easy#2015-08-2410:10robert-stuttafordhttp://docs.datomic.com/identity.html#2015-08-2410:11tcrayfordin other news, I think I figured out how to do "injection" (like sql injection) to datomic, iff you have external endpoints that accept EDN or transit#2015-08-2410:12tcrayford(at least: I think I did.
I'm unsure if it actually works thinking about it)#2015-08-2410:12a.espolov@tcrayford: I have?)#2015-08-2410:12tcrayford@a.espolov: oh, you have EDN endpoints that point at datomic?#2015-08-2410:12tcrayford(or transit)#2015-08-2410:13tcrayfordI need to check it, and don’t have time for that…#2015-08-2410:14tcrayfordbut tldr: (d/q [:find WHATEVER :in $ ?id [?id :some/attribute whatever]] db id) - if id comes from EDN, can’t you just pass this EDN string: {:id (d/q WHATEVER_QUERY_YOU_WANNA_DO_FOR_INJECTION $)}#2015-08-2410:15tcrayfordreminder about "blind" database injection as well - even if you never display results of the query you can still work out the entire database#2015-08-2410:16tcrayford(google "blind sql injection" if you wanna blow your mind a bit)#2015-08-2410:17a.espolov@tcrayford: isn't sql injection about requests that add/update records in the database?#2015-08-2410:18tcrayfordI don’t think I understand…#2015-08-2410:21a.espolov@tcrayford: do I understand correctly that d/q, besides returning a result set, could execute an insert/update?#2015-08-2410:22tcrayfordoh, not insert#2015-08-2410:22tcrayfordbut even so, an attacker can query literally every entity/attribute/value in your db, which is quite a big deal#2015-08-2410:24a.espolov@tcrayford: but that's only when the query itself comes over the wire, rather than being hard-coded into each endpoint#2015-08-2412:41robert-stuttafordtcrayford: your example query is missing :where. you wouldn’t be able to inject arbitrary clauses like that#2015-08-2416:15tcrayford@robert-stuttaford: haha yeah. Just pretend it has it ;)#2015-08-2416:41a.espolov@robert-stuttaford: "if you want to keep a reference to an entity, you should use one of your own making"
What are the options for saving a reference to an entity, then?)#2015-08-2417:47robert-stuttafordfor example, make your own uuid attr and use that in your urls#2015-08-2419:57sdegutisIs it possible to make a transaction where it will do nothing (be a no-op) if the lookup-ref is invalid and returns no matching entity?#2015-08-2419:58sdegutisSo like [:db/add [:user/email maybe-email] :user/name "bob"]#2015-08-2419:58sdegutisWhere that'll work if maybe-email has a match but no-op (without throwing an exception) if it doesn't?#2015-08-2419:58shaunxcodewhere your goal is to avoid having a "nothing happened" transaction?#2015-08-2419:59sdegutisRight, especially one that does not throw an exception.#2015-08-2420:07shaunxcodeas far as I am aware there is not a way w/o the throwing of exception - are you just annoyed by the empty transaction?#2015-08-2420:15bostonaholic@sdegutis: I would just wrap in try/catch and have the catch block be empty#2015-08-2420:15bostonaholicand only catch that particular exception, maybe rethrow if another exception occurs#2015-08-2421:20arohnergiven a txid, how do I find the contents of the transaction?#2015-08-2421:48alexmiller[:find ?e ?a ?v ?op :in $ ?txid :where [?e ?a ?v ?txid ?op]] ;; something like that via a query (excuse typos)#2015-08-2421:48alexmilleror use the log directly http://docs.datomic.com/log.html#2015-08-2422:08arohner(d/q '[:find ?e :in $ ?tx :where [?e ?tx ?op]] db txid)
Exception Insufficient bindings, will cause db scan datomic.datalog/fn--6468 (datalog.clj:368)#2015-08-2422:18arohner(:data (first (d/tx-range (d/log dev-conn) txid (inc txid))) works, but I’d like to understand how to make that q work#2015-08-2422:43arohneralso, that’s slightly annoying because I need a conn rather than a db#2015-08-2516:32jonasUsing the raw indexes I can do db.datoms(AVET, :some/attribute) to get all the datoms with a particular attribute, sorted by V. I'd like the iterator to be reversed (i.e., from the largest V to the smallest). Is this possible?#2015-08-2516:41arohner@jonas: not built in, no. You can hack it by storing (- Long/MAX_VALUE your-value)#2015-08-2517:59jonas@arohner: that seems nice and hacky, and kind of brilliant simple_smile. Would only work for numeric values I suppose#2015-08-2518:02jonasHow do people approach pagination in datomic? There’s no offset/limit in datalog and the datoms api doesn’t quite fit either.#2015-08-2518:05jonasdatomic seems nice in principle because you could get the next “page” from the same db value.#2015-08-2518:18arohner@jonas: you can’t get pagination out of d/q, because it’s set-based. If you need iteration, you can look at d/datoms, d/seek-datoms and friends#2015-08-2518:18arohnerthose can also be faster than d/q in some cases, but obviously only reach for that tool when necessary#2015-08-2518:22jonas@arohner: yes, d/datoms would be perfect for my use case if I could get the reverse iterator.#2015-08-2518:23jonasI want to paginate and sort by a date field so the log api might be a possibility as well.#2015-08-2520:04tcrayfordNote that the log isn't cached, so accessing it is relatively slow compared to the indices#2015-08-2603:26sdegutisIs it typical that (d/connect "datomic:) takes 1678 msecs?#2015-08-2610:30tcrayford@sdegutis: it depends on a lot of things#2015-08-2610:31tcrayforde.g. 
if the dev transactor is doing a full GC, that latency could be even larger#2015-08-2610:46viesti(hello “world”)#2015-08-2610:47viestiwas chatting with colleagues about using Datomic in an open source project#2015-08-2610:47viestiDatomic is really neat technically but how to tackle the question about keeping your data in a closed system#2015-08-2610:47robert-stuttaford@jonas i can talk to you about pagination#2015-08-2610:48tcrayford@robert-stuttaford: curious to hear your take. I have a lot of thoughts about it#2015-08-2610:48robert-stuttaford@sdegutis: a new connect is also downloading the live index from the transactor#2015-08-2610:49tcrayfordI'd expect that to typically be very small in the dev transactor, but maybe that's just my use case 😉#2015-08-2610:49robert-stuttafordwe’re actually struggling with pagination at the moment#2015-08-2610:49robert-stuttafordbut i think our issue is conceptual more than it is a fault of Datomic's#2015-08-2610:50robert-stuttafordyou have 100k entities. you want to see the 100 most recently active ones. 
you have to get all 100k, sort by activity date descending, then grab the first 100#2015-08-2610:50robert-stuttafordhow to break this work up?#2015-08-2610:51robert-stuttafordwe’re doing things like memoizing (on a redis backend) with the db value so that once you’ve generated the set, pages 2..n are super fast#2015-08-2610:51robert-stuttafordbut making that initial set is still super slow#2015-08-2610:51robert-stuttafordwe’re pre-calculating as much as we can, too, so that queries only need look at a single attr per entity#2015-08-2610:51tcrayford@robert-stuttaford: in that case, raw index walking isn't too hard, right?#2015-08-2610:51robert-stuttafordbut that’s not always possible#2015-08-2610:52robert-stuttafordit is if you want descending order#2015-08-2610:52tcrayfordah yeah#2015-08-2610:52tcrayfordjust do an arohner and write an attribute in 2512 or whatever#2015-08-2610:52robert-stuttafordstoring ever descending values doesn’t work. it’s not really a solution if you have lots of existing data#2015-08-2610:53robert-stuttaforduh, wha? simple_smile#2015-08-2610:53robert-stuttafordif it were easy to enable some sort of ‘clutch’ and allow edits in the past, then it’s easy to fix with ETL#2015-08-2610:54tcrayfordah yeah#2015-08-2610:54robert-stuttafordbut it’s not. to do that we’d have to retransact our entire db in time order and alter txes on the fly#2015-08-2610:54robert-stuttafordwe’re at over 40mil txes#2015-08-2610:54tcrayfordoh, he stores them in ascending order by inverting them from Long/MAX_VALUE. 
Now I understand 😉#2015-08-2610:55tcrayfordconceptually all you'd need is access to an inverted index (which seems… relatively doable?)#2015-08-2610:55robert-stuttafordright now, we take the sort dimension you want to use, realise the full set for just that one ‘attr’ (might be computed, might be direct lookup), sort, paginate, then realise the rest of the data for each ‘row'#2015-08-2610:56jonastcrayford: surely that depends on the value type?#2015-08-2610:56robert-stuttafordwe’ve cut a lot of processing time like this, and i’ve got it all using datalog and transducers as much as possible#2015-08-2610:56robert-stuttafordbut it still takes long for big sets#2015-08-2610:56tcrayford(many folk have asked for inverted indexes though, so I assume they have good reasons for not doing it yet)#2015-08-2610:57robert-stuttafordwe have to find a better way. if we didn’t need to sort, then you can paginate very easily. unfortunately, unsorted data is fairly useless in a reporting context. sorting’s the real perf pain.#2015-08-2610:58tcrayfordyeah 😞 And conceptually, the indexes have the already sorted data, just there's no way to ask datomic for it 😞#2015-08-2610:58robert-stuttafordi would actually dig to have a 1 or 2 hour hangout with you tom, and whoever else has tried their hand at this to talk about novel options#2015-08-2610:58robert-stuttafordi can talk through what we’ve done so far#2015-08-2610:58robert-stuttafordwhat’s worked, how well, etc#2015-08-2610:58tcrayfordI uh, haven't tried anything#2015-08-2610:58tcrayfordI could write Yeller's database down on two or three sheets of paper#2015-08-2610:59robert-stuttaford-grin-#2015-08-2610:59tcrayfordso I don't have a sorting problem simple_smile#2015-08-2610:59robert-stuttafordbastard 😁#2015-08-2610:59jonas@robert-stuttaford: That would be great! I will need to read through and respond later. I have a few ideas myself as well#2015-08-2610:59tcrayford(actually it's bigger than that now, thinking about it. 
Still, like the number of entities is below 1k)#2015-08-2611:00robert-stuttafordyeah no we are WAY beyond that#2015-08-2611:00robert-stuttaford3 years of user data#2015-08-2611:00tcrayford@robert-stuttaford: yeah, understood 😉#2015-08-2611:00tcrayford@robert-stuttaford: aren't y'all paid users? Lean on dat support contract#2015-08-2611:01robert-stuttafordit’s not a datomic support issue. datomic isn’t doing anything wrong#2015-08-2611:01robert-stuttafordit’d be a consulting gig#2015-08-2611:01robert-stuttafordPEBKAC simple_smile#2015-08-2611:01tcrayfordno, but a "how do I use your product to do $COMMON_TASK" thing imo (and I think it is a datomic issue, because there's no inverted index access)#2015-08-2611:02robert-stuttafordsure#2015-08-2611:02tcrayfordlike, if you had a d/datoms-reversed or whatever, this'd be trivial#2015-08-2611:02robert-stuttafordfor descending sorts, yes#2015-08-2611:04robert-stuttafordi’m looking into pre-processing with Onyx and creating Sorted Sets in Redis now#2015-08-2611:04jonasI would like (the possibility) to get sorted sets out of the datalog queries where you can specify the sort-order. Then you could also specify offset/limit#2015-08-2611:04robert-stuttafordas Onyx is processing our data tx by tx, it can update many sets pretty quickly#2015-08-2611:05robert-stuttaford@jonas: yep, although it’s all still going to happen app-side#2015-08-2611:05robert-stuttafordif Datomic provides this, it’s going to be a layer around d/q, not a new internal part of it#2015-08-2611:05robert-stuttafordand we can pretty much do that ourselves#2015-08-2611:05robert-stuttafordthat’s a big fat assumption on my part, of course#2015-08-2611:06robert-stuttafordi don’t have any sort of insider knowledge or anything 😁#2015-08-2611:06jonasI agree we can do it ourselves and that’s an idea I’m exploring#2015-08-2611:07robert-stuttafordanyway. i have to be off. 
i’d love to show you guys what we’re doing at the mo, as it might help you, but also it’ll probably help me because you’ll likely poke holes in all of it simple_smile#2015-08-2611:07robert-stuttafordperhaps a hangout sometime in September?#2015-08-2611:16tcrayfordI'm interested, but September is kinda bad for me 😐#2015-08-2613:25shofetimSo from the docs http://docs.datomic.com/clojure/index.html#datomic.api/q it looks I can (and perhaps should prefer?) to write queries as maps rather then vectors, but whenever I try it, I get "java.lang.IllegalArgumentException Don't know how to create ISeq from: clojure.lang.Symbol" am I doing it wrong, or maybe the docs are describing an as yet unreleased API? (I'm running 0.9.5206 which I think is the latest)#2015-08-2613:45jonasI don’t think the map form is preferred (except for when you’re generating queries programmatically). The IllegalArgumentException is probably unrelated. Note that when using the map form you need to wrap the “arguments” in an extra vector (or list): {:find [?a ?b ?c] …} instead of [:find ?a ?b ?c …]#2015-08-2614:07sdegutisWhat settings do you use for the development transactor?#2015-08-2614:08sdegutisDo you ever change the min/max memory for it?#2015-08-2619:51sdegutisWhat are the advantages or disadvantages of using maps to describe transactions vs using vectors?#2015-08-2619:51sdegutisWhat would you typically want to use and when would you use the other kind?#2015-08-2619:59bensu@sdegutis: maps usually refer to a single entity and they are easier to generate since you can assoc attributes with values in.#2015-08-2620:01bensu@sdegutis: I'm not 100% confident on this next point: vectors might be the only way to leverage user defined or built in functions like :db.fn/retract-entity#2015-08-2620:11sdegutisI'm thinking so too.#2015-08-2620:54sdegutisWhen would you want to use transact-async over transact?#2015-08-2621:13sdegutisI guess I don't understand why transact returns a future when it's not async 
and waits for it to complete anyway.#2015-08-2621:19alexmillerso the result is not built if it's not needed#2015-08-2621:19alexmiller(would be my guess - I'm not on the datomic team)#2015-08-2621:26sdegutisOh that could be it.#2015-08-2621:29arohneram I allowed to assume txids are monotonically increasing?#2015-08-2621:33alexmillerhttp://docs.datomic.com/best-practices.html#t-instead-of-txInstant seems to say so#2015-08-2621:34arohner@alexmiller: those are ts, though, not txids?#2015-08-2621:34alexmillerah, right#2015-08-2621:35arohnerlooks like I can avoid it and just d/pull the txid’s txInstant, and sort those#2015-08-2621:35alexmillerI think it is logical that txids would be as well (for ordering in the index, plus they are serialized at creation time), but I don't know that that is guaranteed#2015-08-2622:12sdegutisI have some pretty messy code to automatically resolve the :tempids of a transaction. Is this common, or is there a better pattern that people use?#2015-08-2622:22bensu@sdegutis: for what is worth I also have a tx->ids function.#2015-08-2622:22bensu(not pretty)#2015-08-2707:32lowl4tencyHow should look the java opts for JVM for connection to ddb transactor?#2015-08-2707:33lowl4tencyFor PostgresSQL it looks like export DATOMIC_URI="datomic:<sql://dbname?jdbc:postgresql://example.com:5432/datomic?user=datomic&password=datomic>#2015-08-2707:37lowl4tencyhttp://docs.datomic.com/javadoc/datomic/Peer.html#connect(java.lang.Object)#2015-08-2707:37lowl4tencyThank you simple_smile#2015-08-2710:17lowl4tencyGuys, I've got infinity redirects when trying to download datomic from Amazon EC2 instance#2015-08-2710:18lowl4tencyI've added --max-redirection option but it doesn't help#2015-08-2710:33lowl4tencyInteresting, from laptop it works as expected#2015-08-2710:35lowl4tencyIt hurts me 😞#2015-08-2713:57jeffh-fpredirects all the way down#2015-08-2715:52sdegutisWhat's a fast way to rollback a test database (like "datomic:") to a blank slate, other than ((juxt 
d/delete-database d/create-database) uri)?#2015-08-2716:05kbaribeau@sdegutis: I've been just opening a connection to a new url to get a blank slate#2015-08-2716:08kbaribeau(def url "datomic:")
(def num-resets (atom 0))
(defn conn-to-fresh-db []
  (let [url (str url (swap! num-resets inc))]
    (schema/ensure-schema url)
    (d/connect url)))
^ seems like a total hack, but also seems to get the job done#2015-08-2716:51sdegutis@kbaribeau: I imagine that would make your memory usage skyrocket if you used an auto-test-rerunner?#2015-08-2716:53kbaribeauit might, I was having problems with data pollution when deleting dbs though, I figured there must be a cache somewhere that's operating by URL that delete-database wouldn't clear#2015-08-2717:02sdegutis@kbaribeau: I've been using delete-database with no problem for 2 years, using the same in-memory uri.#2015-08-2717:02sdegutis@kbaribeau: So you've got something else going on there that you might want to fix.#2015-08-2717:04kbaribeauyeah, this code has been sitting for about a year and a half without causing problems, so I'm content to let it sit#2015-08-2717:07sdegutiscool#2015-08-2717:09sdegutis@kbaribeau: afaict that means you've got an error somewhere in your code that's hiding, from my experience it'll probably come out and bite you at some point in the future, but any way your choice just psa/fyi that's all#2015-08-2717:15kbaribeaualright, thanks for the input I guess?#2015-08-2721:57sdegutisWhen should you not use the reader-macro form of #db/id?#2015-08-2807:14robert-stuttafordsdegutis: the macro is for .edn files. you should use d/tempid in your code, as #db/id reader macros produce one value only#2015-08-2807:15robert-stuttaford(fn new-id [] #db/id[:db.part/user] ) this would return the same value how ever many times you call it#2015-08-2807:16robert-stuttaford#db/id is a convenient way to specify a one off temp-id in a data-only manner#2015-08-2807:16robert-stuttafordsuch as in schema.edn#2015-08-2807:26lowl4tencyHm, are the way to do restoring to DDB a bit quicker? 
Maybe there are some best practices?#2015-08-2810:44robert-stuttaford@lowl4tency: http://docs.datomic.com/backup.html#performance#2015-08-2810:44lowl4tencyrobert-stuttaford: hah, why I didn't find it 😞#2015-08-2810:45lowl4tencyThank you#2015-08-2810:45robert-stuttafordperhaps you can increase the ddb write throughput#2015-08-2810:45robert-stuttafordfor the duration of the restore#2015-08-2810:45robert-stuttafordhttp://docs.datomic.com/capacity.html#storage-size-and-write-throughput talks about a write of 1000 for imports#2015-08-2810:51lowl4tencyHm, interesting#2015-08-2810:51lowl4tencyWill carefully learn it simple_smile#2015-08-2814:14lowl4tencyConcurrency option and increasing ddb write throughput made restoring much quicker#2015-08-2814:19lowl4tencyAlso, restoring is continuous and resumes from the last interrupt. It's pretty cool#2015-08-2816:32sdegutisDoes it make sense to give entities a UUID (if they don't already have a unique-based attribute) so that you don't have to resolve their :tempids after the transaction, and instead you can just return the UUID you gave to whoever needs it (which can then be used in a Lookup Ref)?#2015-08-2816:32sdegutisI've been seriously considering that technique for a few days, and the only drawback I can see is that it's an extra attribute and potentially superfluous data (considering the :db/id is essentially the same thing).#2015-08-2816:33sdegutisBut there are two benefits for this: (1) you don't have to resolve the tempid, and (2) the UUID can be shared/consumed externally whereas the :db/id cannot.#2015-08-2816:37raywillig@sdegutis: datomic best practices document seems to suggest the uuid approach though they don't specifically say use a uuid. the example they give of using a unique identity is a uuid generated with datomic's index friendly squuid function#2015-08-2816:39marshall@sdegutis: Yes, @raywillig is correct.
That is a good approach b/c entity IDs are not guaranteed to be stable across e.g. backup/restore, etc., so whenever possible you should supply a domain identifier or an externally unique identifier:
http://docs.datomic.com/best-practices.html#unique-ids-for-external-keys#2015-08-2817:18sdegutis@marshall: Excellent.#2015-08-2817:19sdegutisIn this case, there seems to be little to no use for Datomic users to use :db/id, right?#2015-08-2817:19sdegutisIt seems like the only place it would ever be used is to ensure that a transaction creates a new entity (via {:db/id (d/tempid :db.part/user) ...}).#2015-08-2817:19sdegutis@raywillig: Thanks!#2015-08-3023:11mishahello.
is there any way to "enhance" full text search?
Some query string syntax for exact match, etc.?
(datomic.api/q
  '[:find ?e ?v ?score
    :in $ ?s
    :where [(fulltext $ :some/attr ?s) [[?e ?v ?tx ?score]]]]
  db "xx yy")
returns me these search results:
([17592186045571 "yy xx" 1.0]
[17592186045571 "yy zz" 1.0])
1st one is ok, since it has both words (though in different order),
but the score of the 2nd one is not really ok, since result is missing 1 word, and has different one instead.#2015-08-3023:17mishaactually, false alarm, these 2 results belong to the same entity, this is why it returns me the irrelevant one too#2015-08-3023:19mishanot sure why it does that though.#2015-08-3107:34robert-stuttafordtwo matches in the same body of text?#2015-08-3109:02misha@robert-stuttaford good morning,
the actual value was:
:some/attr ["yy xx" "yy zz"]
#2015-08-3109:03mishareturning 2 matches within the same entity is probably ok, but both with score 1.0? meh#2015-08-3111:23robert-stuttaford@misha, they’ll simply be providing you what the underlying fulltext engine is providing datomic#2015-08-3111:23robert-stuttafordwhich is lucene#2015-08-3122:39sdegutisWhat's the idiomatic way to catch an exception when a Lookup Ref fails inside a transaction, e.g. [:user/email "?#2015-08-3122:41sdegutisSpecifically I'm trying to catch this: java.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity: [:user/email "foo"] in datom [#db/id[:db.part/user -[redacted]] :account/admin [:user/email "foo"]]#2015-08-3122:42sdegutisAre you supposed to just (catch IllegalArgumentException e) and look inside (.getMessage e) for the exact sub-string ":db.error/not-an-entity"?#2015-08-3122:43sdegutisBecause if so, that seems pretty fragile.#2015-09-0107:12robert-stuttafordprobably better to test the lookup in isolation first#2015-09-0109:41tcrayford@robert-stuttaford: but then if you want transactionality, you've gotta be in a db function 😕#2015-09-0110:02robert-stuttafordindeed simple_smile#2015-09-0114:53sdegutisSo it's better to do a query before a transaction, to first ensure that a Lookup Ref exists?#2015-09-0115:56bkamphaus@sdegutis: taking a step back — how do you want to handle the failing case? 
If you want to create if it doesn’t exist, but update if it does, then you may just want to use unique/identity and provide the unique att/val on an entity w/tempid to get upsert behavior as described here: http://docs.datomic.com/identity.html#unique-identities#2015-09-0118:53sdegutis@bkamphaus: In this case I just want to return :invalid-user from this function if the user doesn't exist.#2015-09-0118:58bkamphaus@sdegutis If you’re not expecting much nuance around coordination I would probably just do the existence check with query as @robert-stuttaford mentioned, rather than waiting for the exception to get thrown by the transaction attempt.#2015-09-0119:05sdegutisCool thanks simple_smile#2015-09-0119:22bhaganyJust want to ping any Datomic team members here about a bug I've reported before: Queries that use both pull and the scalar find spec (`[:find (pull ?e [*]) . …]`) that are done via the rest api return results that are in a different shape than you'd expect. If, in a repl, you get {:a 1 :b 2} as a result, the same query will return [[:a 1] [:b 2]] via the rest api.#2015-09-0119:22bhaganyAlso, just signed up for the Datomic training at the Conj, so, yay simple_smile#2015-09-0119:26alexmilleryay! we'll have more details on that in the next few weeks, lots of cool stuff in the works for it.#2015-09-0119:33sdegutisAlso it would be cool if you could do a recursive Lookup Ref, like [:account/user [:user/email "#2015-09-0119:33sdegutisRight now it needs a query, but it would be cool if it could.#2015-09-0120:04sdegutisAnyone got a solution for correctly indenting multiple :where clauses in Emacs?#2015-09-0121:59shaunxcode@sdegutis: hah I have just come to accept the way the clauses look stacked beneath :where but yes that does bug me when I think about it#2015-09-0122:22sdegutissimple_smile#2015-09-0122:22sdegutis@shaunxcode: I've often used the map-literal style instead of the vector-literal style, but that gets verbose.#2015-09-0122:23sdegutise.g. 
[:find ?foo :where [?foo :foo/bar]] == {:find [?foo] :where [[?foo :foo/bar]]}#2015-09-0122:23sdegutisBut it's the only way I can get the indentation right if I have more than one :where-clause.#2015-09-0122:41sdegutisQuestion: does it make sense to use a pull-expression inside a (d/q ...) query, or is that literally equivalent to just putting another :where clause in there?#2015-09-0122:42kbaribeauFWIW, this is I get with default formatting in vim, and it's been fine for a couple of years worth of a project:
(d/q '[:find ?foo
       :in $ ?thing
       :where
       [?id :foobar/baz ?thing]
       [?id :foobar/foo ?foo]]
     db
     thing)
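As an aside on the map query form jonas mentioned earlier in the channel: a sketch of the same kind of query written as a map, using the same hypothetical :foobar/* attributes as above. Note that in the map form each of :find/:in/:where takes an extra wrapping vector:

```clojure
;; Map-form equivalent of the vector query above.
;; :foobar/* attributes are hypothetical, carried over from the example.
;; Each key's value must be wrapped in a vector in the map form.
(d/q '{:find [?foo]
       :in [$ ?thing]
       :where [[?id :foobar/baz ?thing]
               [?id :foobar/foo ?foo]]}
     db
     thing)
```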
#2015-09-0122:43sdegutis@kbaribeau: When it comes up is if you put the first where-clause on the same line as the :where word.#2015-09-0122:43kbaribeauyeah, I just never do that#2015-09-0122:43sdegutisSo, j j J#2015-09-0122:43sdegutisok cool me too#2015-09-0122:43kbaribeaumaybe in a really short query, but for multiple where clauses it looks so much neater with where on its own line#2015-09-0122:43kbaribeauIMO anyway#2015-09-0122:44kbaribeaure: pull, stacking where clauses is pretty verbose. before they added pull I used the entity API a lot because of that#2015-09-0122:47kbaribeau(d/q '[:find ?foo ?bar ?wha
       :in $ ?thing
       :where
       [?id :foobar/baz ?thing]
       [?id :foobar/bar ?bar]
       [?id :foobar/wha ?wha]
       [?id :foobar/foo ?foo]]
     db
     thing)
;--------- VS ------------
(d/q '[:find (pull ?id [:foobar/foo :foobar/bar :foobar/wha])
       :in $ ?thing
       :where
       [?id :foobar/baz ?thing]]
     db
     thing)
(plus or minus a syntax error maybe, I didn't try to execute that)#2015-09-0123:25sdegutis@kbaribeau: Ah good point.#2015-09-0213:13bkamphaus@sdegutis: it depends also on what you actually want back, and whether or not you want the presence of the attributes specified in the pull to affect whether or not the entity is returned by the query. A write up on some of the differences is here: http://docs.datomic.com/best-practices.html#use-pull-to-retrieve-attribute-values#2015-09-0218:15robert-stuttaford@bkamphaus: :db.error/tempid-not-an-entity tempid used only as value in transaction. can you shed some light on what this might mean?#2015-09-0218:21robert-stuttafordi found it. was trying to tx schema with only id and install attrs#2015-09-0313:43sdegutis@bkamphaus: Ah thanks simple_smile#2015-09-0315:58robert-stuttafordi have a pretty nice bit of code that writes select entities to a file encoded as transit and imports it into a memory db. we have 30gb in our production db but only need a very tiny 100mb of it for our apps to function. was quite some fun writing this with transducers etc#2015-09-0321:50clojuregeekI'm working on using datomic for the first time. I signed up for datomic pro, and now trying to add the clojure library I am getting errors about unable to find valid certificate. Do i need to enrypt my creds to clojars just to be able to download a dependency?#2015-09-0321:51bostonaholic@clojuregeek: datomic pro isn't hosted on clojars#2015-09-0321:52bostonaholicif you're using leiningen, you have to add
:repositories {"" {:url ""
                   :username ""
                   :password ""}}
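For reference, a filled-in sketch of that snippet; the repository name and URL shown are the ones my.datomic.com's account page pointed at historically, and the username/password values are placeholders, not real credentials:

```clojure
;; In project.clj -- :username is your my.datomic.com login and
;; :password is the download key from your account page.
;; All values below are placeholders.
:repositories {"my.datomic.com" {:url "https://my.datomic.com/repo"
                                 :username "you@example.com"
                                 :password "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"}}
```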
#2015-09-0321:52clojuregeekok it must be my login to datomic, ok i got it simple_smile#2015-09-0321:52bostonaholicfill in the blanks 😜#2015-09-0321:52clojuregeekthanks#2015-09-0321:53bostonaholicpassword is the "download key" on your account page for datomic pro#2015-09-0321:53bostonaholicshould look like a uuid#2015-09-0321:55clojuregeeki see, thanks!#2015-09-0321:56bostonaholic(just be sure not to publish that code)#2015-09-0321:56clojuregeekgot it .. i'm trying to do a POC for work#2015-09-0322:23clojuregeekgot the dependancies downloaded simple_smile simple_smile#2015-09-0411:24damienHi all - question about using couchbase as storage in Datomic. I have started my transactor and it connects to my test bucket just fine. When I open Datomic console I can see my bucket under the storage alias I provided but the DB drop down is empty. It's blank and I can't run by queries with the error "DB name cannot be blank"#2015-09-0411:44bhagany@damien: it sounds to me like you need to create a db, like here: http://docs.datomic.com/tutorial.html#making-a-database#2015-09-0411:46bhagany@damien: to dispel what may be a confusion - the storage and the db are different things.#2015-09-0411:46bhagany@damien: also, you can do it in clojure, if you'd prefer http://docs.datomic.com/clojure/#datomic.api/create-database#2015-09-0413:28damien@bhagany thanks, good to go :+1:#2015-09-0413:29bhagany@damien: Excellent :)#2015-09-0413:54clojuregeekdoes anyone know of a good example of using datomic in clojure? I'd like to see one. The examples here are for java and groovy.. but.. i'm not seeing a whole lot for clojure on Datomic github 😭 I'd like to see where the attributes go and init of the application#2015-09-0413:56marshall@clojuregeek: This repo: https://github.com/Datomic/day-of-datomic has quite a few examples of using Datomic from Clojure#2015-09-0414:00clojuregeekyes there are some small things, i was kind of hoping to see a small example that was complete. 
but I can work with that.. thanks simple_smile#2015-09-0414:17bhagany@clojuregeek: I haven't done this myself yet, but when I get to the point you're at, I plan to take a look at how juxt's modular sets up datomic projects: https://github.com/juxt/modular#2015-09-0414:34clojuregeekcool, i'll check it out#2015-09-0416:34clojuregeekanyone use lein-datomic ?#2015-09-0422:40micahAnyone find that you can't query on certain attributes in datomic? It's F-ing weird!#2015-09-0422:40micahairworthy.repl=> (into {} (api/entity (db/db) 277076930200612))
{:airport/city "GRAYSLAKE", :airport/name "Campbell Airport", :airport/type :airport, :airport/public true, :airport/lng -88.0740880555556, :airport/lat 42.3246111111111, :airport/state "IL", :airport/code "C81", :airport/elev 788}#2015-09-0422:40micahnote the airport/lng attribute#2015-09-0422:41micahairworthy.repl=> (api/q '[:find ?e :in $ :where [?e :airport/lng]] (db/db))
#{}#2015-09-0422:41micahYet a query for all entities with :airport/lng attribute yields an empty set. WTF?#2015-09-0422:42micah:airport/lat works fine...#2015-09-0422:42micahairworthy.repl=> (count (api/q '[:find ?e :in $ :where [?e :airport/lat]] (db/db)))
19339#2015-09-0422:43micahevery other attribute is query-able... just not :airport/lng#2015-09-0422:48micahOK... by transacting all the airport entities (without changing any values) the problem is solved#2015-09-0422:49micahI guess an index got blown away somehow and transacting the entities repopulated it. Puzzling. Sorry for spamming the room here.#2015-09-0516:28tcrayford@micah: that sounds like a pretty serious bug that you should report to the datomic team#2015-09-0714:35raymcdermottdoes anybody have any insight on when Datomic will have JS / node bindings?#2015-09-0715:30tcrayford@raymcdermott: reminder that datomic bindings are very complex beasts, but also that cognitect only rarely talks about future plans#2015-09-0715:30tcrayfordThey could've been working on js bindings for a year and we wouldn't know ;)#2015-09-0717:12clojuregeekgreat video of a simple datomic database and query functions https://www.youtube.com/watch?v=ao7xEwCjrWQ .. using TDD too#2015-09-0719:01raymcdermott@tcrayford: fair points, just wondered if there was any info on it as it feels like it’s becoming a barrier to adoption at least from where I am these days#2015-09-0809:46caskolkmis there a simple way to flat a nested vector with transaction data?
[[[:db.fn/retractEntity 17592186045471]]
 [[[:db.fn/retractEntity 17592186045470]
   {:db/id {:part :db.part/user, :idx -1001126},
    :field/ref 17592186045462,
    :city/name "Fooo",
    :fieldset/_values 17592186045469}]]]
clojure.core/flatten does not solve it#2015-09-0810:30jthomson@caskolkm: does it need to flatten to an arbitrary depth or can you just use concat and vec?#2015-09-0810:32caskolkm@jthomson:
[
 [:db.fn/retractEntity 17592186045471]
 [:db.fn/retractEntity 17592186045470]
 {:db/id {:part :db.part/user, :idx -1001126},
  :field/ref 17592186045462,
  :city/name "Fooo",
  :fieldset/_values 17592186045469}
]
should be the result 😉#2015-09-0810:36jthomsonnot sure why you have those outer vectors, valid tx data is a sequence containing either vectors or maps (`[[:db/add ..] {:db/id ..}]`). If you can do without those then (apply concat tx-datas) will combine them for you.#2015-09-0810:37caskolkmthose outer vectors come from mapv on nested attributes#2015-09-0810:37caskolkmto decide what we need to edit, delete or add 😉#2015-09-0810:39jthomsonwell if there is guaranteed to be just one child in each outer vec then you can just (mapcat first tx-datas)#2015-09-0810:40jthomsonotherwise you'll need to concat them twice.#2015-09-0810:44caskolkmThnx @jthomson for the help! simple_smile#2015-09-0810:44caskolkmgot a solution simple_smile#2015-09-0810:44jthomsonany time! simple_smile#2015-09-0813:36bensu@clojuregeek: I used to use lein-datomic but after a while I decided it was not worth the overhead and started using shell scripts.#2015-09-0814:45clojuregeek@bensu: thanks, yeah it doesn't seem to add alot of functionality that you couldn't add in other ways. Thanks for your input simple_smile#2015-09-0818:31bostonaholicjust like to share a win I had today using datomic. I had dates across multiple entities that were saved incorrectly. Essentially the year was saved as 12 instead of 2012. So when writing a script to update all of those entities, I used the power of datomic to query the schema to find all date attributes, then filter those bad dates:
(d/q '[:find ?e (pull ?attr [:db/ident]) ?date
       :where
       [?attr :db/valueType :db.type/instant]
       [?tx :db/ident :db/txInstant]
       [(not= ?tx ?attr)]
       [?e ?attr ?date]
       [(.getYear ^java.util.Date ?date) ?year]
       [(< ?year 100)]]
     (d/db conn))
Super powerful#2015-09-0818:31bostonaholicwould have not been nearly as easy if we were just using a traditional rdbms#2015-09-0818:45potetm@bostonaholic: noice! I liiiiike.#2015-09-0919:16raymcdermottspeaking with the CLJS folks it seems like the REST API is the best option for JS clients#2015-09-0919:17raymcdermottdo you guys have any knowledge of tooling built around that?#2015-09-0920:46bhagany@raymcdermott: I don't think there's much. I'm using it from Python, and I basically just wrote my own helpers.#2015-09-0920:47bhaganyThe API itself is pretty simple#2015-09-0921:55sgroveWhat would be a single datalog query to find all entities with a given attr (e.g. :user/email), and then also pull out every other attribute the entities might have#2015-09-0921:57sgrovee.g. (d/pull-many db '[*] (d/q '[:find [?eid …] :where [?eid :user/email ?value]] db))#2015-09-1000:13bhagany@sgrove: Maybe I misunderstand what you want, but would (d/q '[:find (pull ?e [*]) :where [?e :user/email]] db) do the trick?#2015-09-1000:14sgrove@bhagany: Wow, that’s pretty cool, I didn’t know about the (pull …) fn#2015-09-1000:15bhagany@sgrove: great! I'm really looking forward to hearing more about dato, btw simple_smile#2015-09-1000:15sgroveYup, does exactly the trick!#2015-09-1000:16sgroveThanks! Need to ship a big update the the app that’s built on it, and then circling back to address some of the fundamentals#2015-09-1000:17bhaganyI'll be patient simple_smile#2015-09-1113:37ghadiDoes datomic use a delta compression scheme in the indices?#2015-09-1113:45alexmillerI assume it's delta blues compression#2015-09-1113:45alexmiller(I don't actually know anything :)#2015-09-1221:11arohnerAre keywords interned, or is is it just a serialization thing?#2015-09-1221:23arohneroh, that’s in the docs#2015-09-1221:23arohner"Keywords are used as names, and are interned for efficiency"#2015-09-1418:37azHi all, just started working through the tutorial on datomic, hitting a few road blocks. 
Any idea how to run
gpg --default-recipient-self -e \
    ~/.lein/credentials.clj > ~/.lein/credentials.clj.gpg
I get this error:
Enter the user ID. End with an empty line:
gpg: no valid addressees
gpg: /Users/Limix/.lein/credentials.clj: encryption failed: no such user id
Thanks#2015-09-1419:01marshall@limix: Do you require automated download of the peer library? If you’re just hoping to get started exploring Datomic, I would suggest skipping the GPG steps, downloading the Datomic Starter zip, and using bin/maven-install to install it into your local maven repo#2015-09-1419:11azThanks @marshall, will do.#2015-09-1419:35azHi @marshall, so I created the maven repo, but how do I register that repo with the musicbrainz sample project? I still get the follow error when trying lein repl:
Could not find artifact com.datomic:datomic-pro:jar:0.9.5130 in central ()
Could not find artifact com.datomic:datomic-pro:jar:0.9.5130 in clojars ()
This could be due to a typo in :dependencies or network issues.
If you are behind a proxy, try setting the 'http_proxy' environment variable.
Exception in thread "Thread-3" clojure.lang.ExceptionInfo: Could not resolve dependencies {:suppress-msg true, :exit-code 1}
#2015-09-1419:35azSorry I don’t know much about maven, does something need to be running?#2015-09-1419:36azeverything seems to have built correctly after running bin/maven-install:
Installing datomic-pro-0.9.5206 in local maven repository...
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building datomic-pro 0.9.5206
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-install-plugin:2.4:install-file (default-cli) @ datomic-pro ---
[INFO] Installing /Users/Limix/Documents/sandbox/datomic/datomic-pro-0.9.5206/datomic-pro-0.9.5206.jar to /Users/Limix/.m2/repository/com/datomic/datomic-pro/0.9.5206/datomic-pro-0.9.5206.jar
[INFO] Installing /Users/Limix/Documents/sandbox/datomic/datomic-pro-0.9.5206/pom.xml to /Users/Limix/.m2/repository/com/datomic/datomic-pro/0.9.5206/datomic-pro-0.9.5206.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.295 s
[INFO] Finished at: 2015-09-14T12:34:20-07:00
[INFO] Final Memory: 7M/289M
[INFO] ------------------------------------------------------------------------
#2015-09-1419:37azthe project file in the mbrainz sample project:
(defproject com.datomic/mbrainz-sample "0.1.0"
  :description "Example queries and rules for working with the Datomic mbrainz example database."
  :url ""
  :license {:name "Eclipse Public License"
            :url ""}
  :dependencies [[org.clojure/clojure "1.7.0-alpha4"]
                 ; [com.datomic/datomic-free "0.9.5130"]
                 ;; To run on Datomic Pro, comment out the free
                 ;; version above, and enable the pro version below
                 [com.datomic/datomic-pro "0.9.5130"]]
  :source-paths ["src/clj" "examples/clj"]
  :jvm-opts ^:replace ["-Xmx2g" "-server"])
#2015-09-1419:40marshall@limix: You have a version mismatch - you bin/maven-installed version 0.9.5206 but you have 0.9.5130 in the project.clj file. Update the project.clj to 5206 and you should be good.#2015-09-1419:41azworked!#2015-09-1419:42marshallgreat#2015-09-1419:42azdetails! pay attention 😣#2015-09-1419:42azthank you#2015-09-1420:10azThe pull api seems pretty amazing, am I wrong to think this is going to be the go to solution for building a graphdb backend?#2015-09-1420:12azare there other DBs that can shape data like this? Is it plausible to build a system that returns data straight out of datomic via the pull api, and not even have to remold the data at all? Just pass it through to a client?#2015-09-1420:13azwith the pull api, is it possible to apply authorization rules to just part of a pattern?#2015-09-1420:15stuartsierra@limix The Pull API describes only the attributes you want to get out. Authorization rules would have be implemented in your application code.#2015-09-1420:17az@stuartsierra: thanks.#2015-09-1421:48ghadiDoes datomic use some sort of delta compression scheme in the indices?#2015-09-1501:26devnFor some reason my Datomic Pro Starter license key is not working#2015-09-1501:27devnAnyone else have this problem? It says my key license expires November, 2015#2015-09-1501:28devnI overwrote the dev-transactor.properties file with the key I received via email, but when I try to run ./bin/transactor dev-transactor.properties#2015-09-1501:29devn=>
Critical failure, cannot continue: Error starting transactor
java.lang.RuntimeException: Unable to load license key
#2015-09-1501:29devnThis is on 0.9.5206#2015-09-1501:30devnSomething tells me that installing the Datomic Console might be the cause of the problem...#2015-09-1501:31devn😕 Nope#2015-09-1503:28bostonaholicmaybe you didn't copy it correctly into your config?#2015-09-1503:28bostonaholic@devn: ^^#2015-09-1503:30bostonaholicit should look like
license-key=bhnfyuijhiuhjuhjmnuyhjnbgt78uhn\
ftyuhjtyujnbhuyuijnbhu7yufdasdfasdfajbhu7yh\
nbvtyhj678ujnmjfdrtyhbnjfuydujhbasdfasdfasm\
bvtghj67ujbhytyujnbgh7tyudjnhuyasdfasdasduj\
vghu76tyujnbgy6yujhbgydfuyujnbhuasdfasdfasd\
vftyghjnby6ujbvgy76tyujbvgyuyj==
^^ this was random typing#2015-09-1514:17bkamphaus@devn was the license key working with a previous version?#2015-09-1514:18devnThe answer there is yes, though the last time I used it was about 9 months ago.#2015-09-1514:22devnSo, I clicked the button to have it send me a license key twice.#2015-09-1514:22devnOne of those keys does not work. The other one does.#2015-09-1514:23devnBut it's the same link, and the same email as far as I can tell.#2015-09-1514:24bkamphaus@devn nothing else at play, e.g. copy and paste then moving b/t OSes (such as introducing carriage returns in newlines from windows into unix text files)?#2015-09-1514:25bkamphausIf you have both working and non-working versions, also curious re: diff output b/t them (obviously don’t paste credentials in here for diff)#2015-09-1514:25devn@bkamphaus: nono, i got it working, but what I'm saying is that I received two license keys for "Datomic Pro Starter Edition"#2015-09-1514:26bkamphaushmm#2015-09-1514:26devnthat's what I just noticed. It seems to be cycling through two keys, one of which is inactive.#2015-09-1514:27bkamphausyou got completely different keys each time?#2015-09-1514:29devnYep#2015-09-1514:30devnNot reliably, though. If I click the link to have it send me my license key a few times, it's not consistently key 1, and then key 2.#2015-09-1514:32bkamphaus@devn looking into it, will get back to you#2015-09-1514:33devn@bkamphaus: ah, you at Cognitect?#2015-09-1514:33bkamphausyep#2015-09-1514:33devn@bkamphaus: forgive me for not recognizing your name simple_smile#2015-09-1514:33devn@bkamphaus: I can drop you both keys if that's helpful#2015-09-1514:34bkamphaus@devn don’t worry, I have a suspicion about what’s going on#2015-09-1514:34devncool#2015-09-1514:34devnthanks for your help!#2015-09-1514:37domkmIs it possible to move entities to new partitions?#2015-09-1514:56bkamphaus@devn should be resolved now.#2015-09-1514:56devn@bkamphaus: awesome. 
thanks!#2015-09-1516:34sdegutisWhy is Datomic Free no longer recommended for small production apps like it used to?#2015-09-1517:45bkamphausOne reason is that Pro Starter is required for additional storage options (other than file/free and mem).#2015-09-1519:59azHi, is it possible to alter a valueType in a schema?#2015-09-1520:00marshall@limix: No, you cannot alter :db/valueType#2015-09-1520:00marshallhttp://docs.datomic.com/schema.html#Schema-Alteration#2015-09-1520:01azI see#2015-09-1520:01marshallIf you need to emulate that ability, you can rename the existing attribute and create a new one with the altered valueType#2015-09-1520:01azI see#2015-09-1520:01azok#2015-09-1520:01marshallthen use application code and/or db functions to migrate existing data as necessary#2015-09-1520:02azdoes this ever cause you trouble?#2015-09-1520:03azdo you ever find yourself needing to switch like this?#2015-09-1520:05bkamphaus@limix: that’s a tough one to answer, you’ll see a large amount of variation in the user base for schema alteration to accommodate type changes vs. migrating to a new db/model.#2015-09-1520:05bkamphaussorry I don’t mean schema alteration specifically, but the rename strategy Marshall mentioned.#2015-09-1520:05bkamphausfor that case.#2015-09-1520:20azthanks @bkamphaus good to know. Do you guys think that in a scenario where you have a required attribute that needs to be changed, the strategy @marshall outlined would force one into making the field optional?
Example: zipcode starts off required and type long
Then we need to accommodate some special format and string is a better choice
How could we duplicate the attribute and still keep it as required but not satisfy the old attribute?#2015-09-1520:22azor is there no such thing as a required attribute in datomic?#2015-09-1520:27marshall@limix: That’s correct, there really isn’t any model of ‘required’ attributes in Datomic. Any validations/requirements should be enforced by your application code (or possibly via transaction functions)#2015-09-1520:31azthanks @marshall makes sense#2015-09-1523:22devn@bkamphaus: small thing, but one that might be worth mentioning is that the emails I received had a subject of [Datomic TEST MESSAGE] Your Datomic Pro Starter ...#2015-09-1523:23devnPurely cosmetic, I assume, but my guess is you guys probably want that to just be [Datomic] or something along those lines#2015-09-1523:36bkamphaus@devn: that’s purely from the issue that was impacting things yesterday night through this morning (actually what pointed me to problem), should no longer be the case.#2015-09-1605:07shofetimI have a prototype app that I'd like to run on a single EC2 t2.small instance (2 GB RAM) along with a datomic transactor. That seems reasonable, but the transactor keeps dying with OOM errors. Are there some knobs I can turn or is it just "transactors need at least 4GB's of RAM" ?#2015-09-1605:18danielcompton@shofetim: there are JVM memory settings you could probably tweak#2015-09-1605:20danielcomptonhttp://docs.datomic.com/monitoring.html#2015-09-1613:54mitchelkuijpersIs there a way to do a sort of set operation on an entity?#2015-09-1613:55mitchelkuijpersso that you can say on this entity only set these properties and delete everything else of this entity?#2015-09-1614:03tcrayford@mitchelkuijpers: not that I know of inbuilt.
Should be easy enough with a database function though.#2015-09-1614:03mitchelkuijpers@tcrayford: That was what I was thinking, updates and additions are very easy though simple_smile#2015-09-1617:29ericnormandHello!#2015-09-1617:30ericnormandI found what appears to be inconsistent behavior in the in-memory and the production datomic databases#2015-09-1617:30ericnormandit has taken me a while to isolate the bug to that#2015-09-1619:18tcrayford@ericnormand: what was the inconsistency?#2015-09-1619:19ericnormandIn a database function, I was calling coll? on a value from an entity.#2015-09-1619:19ericnormandFor a many cardinality property, the in-memory version was returning a Clojure set#2015-09-1619:20ericnormandso (:x/y entity) => #{}#2015-09-1619:20ericnormandbut in the production version, it returns a java.util.HashSet#2015-09-1619:21ericnormandso coll? was returning true in one and false in the other#2015-09-1619:28tcrayfordoh that's gnarly af#2015-09-1619:29tcrayfordI don't know, but I'd wager maybe the memory store doesn't even serialize segments (with fressian), just stores raw data in memory#2015-09-1619:44alexmillerI don't think Datomic makes any commitment on the concrete data type returned there#2015-09-1619:46alexmillerthe java api javadoc http://docs.datomic.com/javadoc/datomic/Entity.html#get(java.lang.Object) says it will be a "collection" which to me means java.util.Collection#2015-09-1619:46alexmillernot even that it must be a java.util.Set#2015-09-1619:47alexmillerbut standard caveat that I'm not on the datomic team and if you don't hear an answer here from someone on the team, feel free to ask on the mailing list#2015-09-1620:15bkamphaus@ericnormand: I can confirm that as @alexmiller mentions, Clojure collection type continuity is not guaranteed when using the API. 
In this specific case, it’s when passing the collection as a parameter to a database function that you see the behavior you describe (expected for current release, but the impl specific type info you see here is also not guaranteed).#2015-09-1620:17ericnormandthanks, @alexmiller and @bkamphaus#2015-09-1620:17ericnormandI'm using (instance? java.util.Collection x) now#2015-09-1620:18ericnormandand it's actually a little simpler that way#2015-09-1620:18ericnormandjust took a while to debug#2015-09-1620:18ericnormandit's not easy getting information out of a database function#2015-09-1620:18ericnormandI had to package it up in an exception simple_smile#2015-09-1621:48danielcompton@bkamphaus: sounds like that could be a good thing to put in a Datomic caveats/FAQ section if it’s not already documented.#2015-09-1621:50bkamphaus@danielcompton: documentation request acknowledged simple_smile -- I’ll discuss with the team.#2015-09-1714:14bkamphausDatomic 0.9.5302 has been released. https://groups.google.com/forum/#!topic/datomic/4qMFwE2Dcr8#2015-09-1719:49tcrayforddoes the datomic pro starter transactor depend on protobufs?#2015-09-1719:50tcrayfordseeing this kinda error when I upgrade to 5302 with the riemann metrics callback:#2015-09-1719:50tcrayfordSep 17 19:48:16 java.lang.VerifyError: class com.aphyr.riemann.Proto$Msg overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
Sep 17 19:48:16 at java.lang.ClassLoader.defineClass1(Native Method) ~[na:1.8.0_25]
Sep 17 19:48:16 at java.lang.ClassLoader.defineClass(ClassLoader.java:760) ~[na:1.8.0_25]
Sep 17 19:48:16 at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[na:1.8.0_25]
Sep 17 19:48:16 at java.net.URLClassLoader.defineClass(URLClassLoader.java:455) ~[na:1.8.0_25]
Sep 17 19:48:16 at java.net.URLClassLoader.access$100(URLClassLoader.java:73) ~[na:1.8.0_25]
Sep 17 19:48:16 at java.net.URLClassLoader$1.run(URLClassLoader.java:367) ~[na:1.8.0_25]
Sep 17 19:48:16 at java.net.URLClassLoader$1.run(URLClassLoader.java:361) ~[na:1.8.0_25]
Sep 17 19:48:16 at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_25]
Sep 17 19:48:16 at java.net.URLClassLoader.findClass(URLClassLoader.java:360) ~[na:1.8.0_25]
Sep 17 19:48:16 at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_25]
Sep 17 19:48:16 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.8.0_25]
Sep 17 19:48:16 at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_25]
Sep 17 19:48:16 at com.aphyr.riemann.client.TcpTransport.<clinit>(TcpTransport.java:34) ~[datomic-riemann-reporter-0.1.0-SNAPSHOT-standalone.jar:na]
Sep 17 19:48:16 at java.lang.Class.forName0(Native Method) ~[na:1.8.0_25]
Sep 17 19:48:16 at java.lang.Class.forName(Class.java:260) ~[na:1.8.0_25]
Sep 17 19:48:16 at riemann.client$eval9$loading__4910__auto____10.invoke(client.clj:1) ~[na:na]
Sep 17 19:48:16 at riemann.client$eval9.invoke(client.clj:1) ~[na:na]
Sep 17 19:48:16 at clojure.lang.Compiler.eval(Compiler.java:6619) ~[datomic-riemann-reporter-0.1.0-SNAPSHOT-standalone.jar:na]
Sep 17 19:48:16 at clojure.lang.Compiler.eval(Compiler.java:6608) ~[datomic-riemann-reporter-0.1.0-SNAPSHOT-standalone.jar:na]
Sep 17 19:48:16 at clojure.lang.Compiler.load(Compiler.java:7064) ~[datomic-riemann-reporter-0.1.0-SNAPSHOT-standalone.jar:na]
Sep 17 19:48:16 at clojure.lang.RT.loadResourceScript(RT.java:370) ~[datomic-riemann-reporter-0.1.0-SNAPSHOT-standalone.jar:na]
Sep 17 19:48:16 at clojure.lang.RT.loadResourceScript(RT.java:361) ~[datomic-riemann-reporter-0.1.0-SNAPSHOT-standalone.jar:na]
Sep 17 19:48:16 at clojure.lang.RT.load(RT.java:440) ~[datomic-riemann-reporter-0.1.0-SNAPSHOT-standalone.jar:na]
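To see which jar each of the conflicting classes actually loads from, something like this could be run from a REPL on the same classpath as the transactor (a sketch; note the VerifyError above means the riemann class may simply fail to load, hence the try/catch):

```clojure
;; print the code source (jar) for each class involved in the
;; protobuf version conflict; loading itself may throw
(doseq [cname ["com.google.protobuf.UnknownFieldSet"
               "com.aphyr.riemann.Proto$Msg"]]
  (try
    (let [cls (Class/forName cname)]
      (println cname "->"
               (some-> cls .getProtectionDomain .getCodeSource .getLocation str)))
    (catch Throwable t
      (println cname "failed to load:" (.getName (class t))))))
```

If the two classes resolve to jars bundling incompatible protobuf versions, that points at the classpath conflict behind the VerifyError.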
#2015-09-1719:50tcrayford@bkamphaus: ^^#2015-09-1719:51tcrayfordah yeah, looks like the protobuf library version changed in 5302 or so#2015-09-1719:54bkamphaus@tcrayford: looking into it#2015-09-1719:55tcrayfordthanks!#2015-09-1719:55tcrayfordI would, but licensing…#2015-09-1719:55tcrayfordI'm unsure if I'm even allowed to ls the lib directory#2015-09-1719:55tcrayford(and would feel uncomfortable doing so without consulting a lawyer)#2015-09-1719:59bkamphausthe drones have been dispatched#2015-09-1720:00tcrayfordgood thing they'll fall out of the sky somewhere over the atlantic ocean simple_smile#2015-09-1720:02bkamphausactually we circumvent the falling out of the sky problem with an immutable flight path#2015-09-1802:29domkm"Datomic does not provide a mechanism to declare composite uniqueness constraints; however, you can implement them (or any arbitrary functional constraint) via transaction functions." Reading the transaction function documentation, it looks like it applies to a single transaction entry but not to the transaction as a whole. Is this true? If so, doesn't this mean that, while you can guarantee that no single entry violates a composite uniqueness constraint in the current db, you cannot guarantee that the whole transaction maintains this constraint? In other words, multiple entries in the same transaction could have duplicate composite keys but there is no way to detect it?#2015-09-1818:32shofetimSo from looking over the docs at http://docs.datomic.com/schema.html it looks like if I need to change a :db/valueType or :db/fulltext I'm pretty much on my own? I'm guessing that the "right way to do it" would be to create another attribute, with the altered type or index setting, copy all data from the existing, then retract the old, and rename the new to the old?#2015-09-1818:42bkamphaus@shofetim: that’s essentially correct. To change valueType or fulltext you need a new attribute, which you can then migrate data to. 
You can alter :db/ident in sequence so as to point the new attribute to the old one — change the previous attribute to point to something like :person/id-old, make a new :person/id attribute, then migrate data from :person/id-old to :person/id.#2015-09-1818:44bkamphausIt may be appropriate to also introduce e.g. rules that will look for values in the appropriate place and so on. There are definitely users who prefer to migrate all data (i.e. by replaying the log from the entire db), preserving original tx-instants but remapping values when appropriate to match new type, rather than introduce that level of complexity into their schema for the db of record.#2015-09-1818:44bkamphausAs always, backup, backup, backup before you try any of this. simple_smile#2015-09-1819:57bhaganyI'm pretty curious if there's a way around @domkm's issue#2015-09-1819:57bhaganyaka, bump#2015-09-1820:12tcrayford@bhagany: you could wrap the entirety of every transaction in a transaction function, which takes all the other data per transaction as its arg. Probably overkill though.#2015-09-1820:12bhaganyheh, I had wondered about something like this#2015-09-1820:13bhaganyI'm guessing most people enforce uniqueness-per-transaction in their application code#2015-09-1820:13bhaganyseems better to me, anyway#2015-09-1820:15tcrayfordSure.
You can't do that and have it actually work in the face of concurrency though :)#2015-09-1820:16stuartsierraTransaction Functions are application code that happen to run on the Transactor.#2015-09-1820:18stuartsierraTo answer @domkm's question, you define a Transaction Function in terms of one unit of logical change for your data.#2015-09-1820:19stuartsierraIf you have an operation which makes multiple changes, all related to the same constraints, then that operation should probably be a single transaction function.#2015-09-1820:21stuartsierraIt's still your code responsible for enforcing the constraint, so it's a very different style of interaction than, say, "table constraints" in SQL databases.#2015-09-1820:27bhaganyI think what I had in mind would work with concurrency, even outside of a transaction function - I'm imagining a transaction that adds two entities. Your non-transaciton-function-application code would ensure that your uniqueness constraint holds for these two entities. Then the transaction function does the same thing for each entity, against db-before. I'm also assuming that if the uniqueness check munges values to make them unique, then that munging is guaranteed not to produce a collision between the two added entities.#2015-09-1820:30bhagany@stuartsierra: is there a less unwieldy word or phrase for "code that's not a transaction function"?#2015-09-1820:30stuartsierra@bhagany: "code"?#2015-09-1820:31bhaganytransaction functions are code that happen to run on the Transactor#2015-09-1820:31stuartsierrayes#2015-09-1820:32bhaganyI'm looking for a phrase that wouldn't run against your correction of my use of "application code"#2015-09-1820:32stuartsierraoh, "Peer code" maybe#2015-09-1820:32bhaganyokay, gotcha. 
much obliged simple_smile#2015-09-1820:32stuartsierrayou're welcome#2015-09-1820:32stuartsierraI admit these things get a bit fuzzy sometimes…#2015-09-1821:04domkmThanks @stuartsierra#2015-09-2201:33danielcomptonhttps://twitter.com/Opacki/status/645610294266322944#2015-09-2201:33danielcomptonZing!#2015-09-2212:35stuartsierraDoes DynamoDB promise five nines?#2015-09-2215:41sonnytois it possible to query across two databases? for example:#2015-09-2215:42sonnyto(def r2 (d/q '[:find ?e :in $ $1 :where (or [$ ?e :artwork/title _]
[$1 ?e :artwork/title _])]
[[123 :artwork/title "foo"]]
[[1234 :artwork/title "bar"]]))#2015-09-2215:42sonnytoi would like to get all entities with :artwork/title in both database value#2015-09-2215:42sonnytobut the query returns empty#2015-09-2215:55bostonaholic@sonnyto: I assume you are passing both dbs to d/q#2015-09-2215:55bostonaholicyour example is missing them#2015-09-2215:56sonnytothe dbs are [[123 :artwork/title "foo"]] [[1234 :artwork/title "bar"]]#2015-09-2215:56bostonaholicah, missed that#2015-09-2215:57sonnytohttps://groups.google.com/forum/?fromgroups=#!searchin/datomic/multiple$20databases/datomic/JiEfDPcWLkA/FCTFNCO4xK8J#2015-09-2215:57sonnytoapparently rules cannot take db as arguments#2015-09-2215:57sonnytois there another way to do this?#2015-09-2216:01sonnytoanother reference https://groups.google.com/forum/?fromgroups=#!searchin/datomic/multiple$20databases/datomic/CgcLgZ85vyA/_IoTMA6D1yEJ#2015-09-2216:03marshall@sonnyto: You can use separate logic variables for each source:
(d/q '[:find ?e1 ?e2 :in $ $a :where [$ ?e1 :artwork/title _] [$a ?e2 :artwork/title _ ]] d1 d2)
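As an alternative to the two-source query, the same query can be run against each source separately and the result sets merged (a minimal sketch over the same toy tuple collections; real db values would be passed in the same way):

```clojure
(require '[datomic.api :as d]
         '[clojure.set :as set])

;; query each source on its own, then union the result sets;
;; this avoids the n * m cartesian product of the two-source query
(defn titled-entities [db]
  (d/q '[:find ?e :where [?e :artwork/title _]] db))

(set/union (titled-entities [[123 :artwork/title "foo"]])
           (titled-entities [[1234 :artwork/title "bar"]]))
;; expect a set containing both entity ids
```

This keeps track of nothing about which source an id came from; if that matters, tag the results before merging.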
#2015-09-2216:05sonnytothat seems to work.. thanks! i'll play around with it#2015-09-2216:05marshallYou might end up having to deal with cartesian product there#2015-09-2216:05marshalljust fyi#2015-09-2216:05sonnytothanks#2015-09-2218:02bkamphaus@sonnyto: I would be cautious about using that solution - be sure you’re ok with the intermediate rep’s size (since it’s the cartesian product, the relation will return an intermediate set of n * m instead of n + m, which you’ll presumably be flattening out anyways). If you’re on the JVM, I’d just write two queries and merge the sets (you don’t have the same impetus to tackle everything in one query as in SQL w/Datomic due to local caching/peer direct read access to storage).#2015-09-2218:03bkamphausI’m also assuming here that your toy query example isn’t illustrative of your use (i.e., getting entity id’s), as if you’ll be using the entity id’s elsewhere you’d probably want to know which db/source they’re from.#2015-09-2218:04sonnytoThanks @bkamphaus yes. It's a toy example to understand how to work with multie db vals#2015-09-2218:04bkamphaus(another picky point, w/apologies to @marshall , is that the query is not particularly evident - I would have to do a little bit of guesswork to figure you’re hacking a merge w/a find specification that returns a relation that is not really a meaningful relation).#2015-09-2218:08marshallno apologies necessary. it was very much a hack.#2015-09-2220:50danielcompton@stuartsierra: I did some research, and doesn’t look like there is a published SLA. Guess you can’t just trust what you see on Twitter 😞#2015-09-2419:19robert-stuttaford@bkamphaus are you around?#2015-09-2420:49bkamphaus@robert-stuttaford: yep#2015-09-2508:42robert-stuttaford@bkamphaus: i decided to email <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> instead simple_smile#2015-09-2521:51domkmI remember reading something about the maximum practical length of strings in Datomic but I can't find the discussion. 
What limit should I set for string length?#2015-09-2522:48shofetimSo when I use a pull in a find spec ie like
(d/q '[:find (pull ?e [* {:order/line-items
[* {:package/size [*]
:line-item/package
[* {:good/_packages [*]}]}]}])
:in $ ?person-id
:where [?e :order/person ?person-id]]
(get-db) person-id)
I get a vector of vectors, of the objects I wanted.... is there something more idiomatic than calling flatten?#2015-09-2523:03shofetimI found it
(d/q '[:find [(pull ?e [* {:order/line-items
[* {:package/size [*]
:line-item/package
[* {:good/_packages [*]}]}]}]) ...]
:in $ ?person-id
:where [?e :order/person ?person-id]]
(get-db) person-id)
Not sure what that is called, but wrapping the query in another vector unwraps the results.#2015-09-2523:04bkamphaus@shofetim: you’re using the collection find spec http://docs.datomic.com/query.html#find-specifications#2015-09-2523:09shofetim@bkamphaus: any ideas how I could make the above query faster? or how to know what parts are expensive? I don't have much data yet and it's taking about 950 msecs (returns 24 results)#2015-09-2523:11bkamphausis person-id a unique attr?#2015-09-2523:11shofetimYes#2015-09-2523:12bkamphausyou might bypass query altogether then and just use a lookup ref as the entity identifier in a call to the pull API directly#2015-09-2523:12shofetim{:db/id #db/id[:db.part/db]
:db/ident :order/person
:db/valueType :db.type/ref
:db/index true
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}#2015-09-2523:12shofetimk, I'll try that.#2015-09-2709:16voytechHi. I would like to confirm something. If we have unique id e.g. username. Then we make a transaction to store userdetails. When we want to update user details it should be sufficient to make just similar transaction providing same username. Then tempid should be resolved to existing entity id ? And thus we can change other properties ?#2015-09-2714:41bhagany@voytech: In this case, you wouldn't use a temp id, you'd use a lookup ref. There's an example in this blog post: http://blog.datomic.com/2014/02/datomic-lookup-refs.html#2015-09-2716:26voytech@bhagany Thanks for the answer. I will use lookup refs then, But I think documentation says that tempid should be resolved to existing id. It is in section about transactions I think. Or maybe I understand it wrong...#2015-09-2716:27bhagany@voytech: It might, I'm not sure. But it's definitely easier to use the lookup ref.#2015-09-2716:37voytech@bhagany At least lookup ref gives You control because by providing lookup ref You are explicitly defining transaction as an update. Hmm but when you are providing transaction with unique property :username which already exists in db - this should be update too - what else could happen ? Knowing that username already has this value we cannot create another entity with this value because it already exists. For me there are two results - failure or successful update on existing entity :)#2015-09-2716:40voytech@bhagany. My question was added here because after adding entity with the same username again I still have the old entity. This is strange...#2015-09-2716:43bhagany@voytech: Hmm, that is odd… does your unique attribute have :db.unique/identity, or :db.unique/value?#2015-09-2716:48voytech@bhagany. It is identity. Db.unique/identity. #2015-09-2716:49bhagany@voytech: Okay.
I found this thread, which seems related, but the bug they were talking about should be fixed: https://groups.google.com/forum/#!topic/datomic/30ADvlLV9f4#2015-09-2716:49bhaganyI would expect an upsert in the situation you're describing#2015-09-2716:50voytech@bhagany yeah it is upsert.#2015-09-2716:51voytech@bhagany and it is on Datomic Free. H2.#2015-09-2716:51bhagany@voytech: oh, I understood you to be saying that you ended up with two entities with the same unique attribute#2015-09-2716:57voytech@bhagany no after upsert there is only old entity. I'll double check. Now I don't have access to my env. Maybe I'm doing something stupid somewhere around.#2015-09-2716:58bhagany@voytech: okay, I see. In that case, I would double check to make sure you're using the db-after value from the transaction, instead of an old db#2015-09-2717:02voytech@bhagany im getting new value using db func. But I need to ensure.#2015-10-0610:23gerstreeAnybody running datomic on ECS / docker?#2015-10-0616:01bkamphaus^ @potetm @devn order in a transaction is not guaranteed to be preserved, only order b/t transactions. I.e. the contents of a transaction are not a procedure and the serializable aspect of ACID only applies to the transaction as a whole. The order of datoms in a transaction doesn’t matter.#2015-10-0616:02bkamphaus^ @devn it might be useful to see a clearer illustration of what’s going on, but from the description above it looks like you’re treating Datomic as a document store for logs, which is really not the best use or the way to get sequential time semantics within a log.#2015-10-0616:02devnI'm surely abusing it.#2015-10-0616:03devnThis is not going to production, just playing at the moment.#2015-10-0616:05bkamphaus@devn I think the standard use you’d see for logs in Datomic would be something like this: (1) parse a log file, generating time stamps and information of interest per log line of interest.
(2a) transact one line per transaction, or (2b) add your own timestamp (inst) field as an attribute in the database. The same would apply if you wanted to preserve line numbers, for example.#2015-10-0616:05bkamphausnot really unlike lexing in general, but parsing into transactions#2015-10-0616:06bkamphausthe idea is that Datomic stores information, often already extracted from elsewhere, you’ll run into perf issues or hit a mismatch in semantics using it for raw data.#2015-10-0616:08bkamphausi.e. in this case if line is meaningful, and I want line context, I’d want to preserve the line so I could do +5, -5 on a line attribute, not try to reverse engineer the line context by extracting information from the log and reassembling it.#2015-10-0616:09gerstreeIs there a way to distinguish between the bind address for Datomic and the address it writes to the database? Both seem to be based on the host= parameter.#2015-10-0616:14gerstreeI've managed to deploy Datomic on AWS ECS (docker) but I don't see a way out on this ip issue.#2015-10-0616:17bkamphaus@gerstree: you can use alt-host to do this. This is an undocumented property that we’ll be adding to the docs. The host and alt-host settings were designed for AWS, where machines have internal and external IP addresses. On the Datomic AMI the AWS host is used for the (more efficient) internal IP, and alt-host is used for the (more broadly reachable) external IP.
The transactor will bind to the port specified by host, and advertise its reachability at both host and alt-host. If host and alt-host are both specified, peers must resolve each of them to the same actual transactor if they resolve them at all.
The peer will try to connect, first to host, and then if that fails to alt-host.#2015-10-0616:29gerstree@bkamphaus Thanks, trying it right now.#2015-10-0616:33gerstreeIt's quite a roundtrip (building / pushing / pulling) I will be right back with the results.#2015-10-0616:33gerstreeBut it will be awesome once I get it running#2015-10-0616:44gerstree@bkamphaus: working! Thank you so much#2015-10-0616:47gerstreeThat's a working datomic on AWS ECS deployed via AWS Cloudformation.#2015-10-0616:48bkamphaus@gerstree: cool, glad that resolved your issue :)#2015-10-0617:13gerstreePeers are showing stack traces for the 'primary' address during startup, that's the only glitch I see.#2015-10-0617:13gerstreeNo measurable delay in the failover to the second ip.#2015-10-0618:24potetm@bkamphaus: Good to know. Thanks for chiming in!#2015-10-0718:26paxanwhat's the best way to submit bug reports for Datomic software?#2015-10-0718:34marshall@paxan: Can you post your report to the Datomic Google Group? (https://groups.google.com/forum/?pli=1#!forum/datomic)#2015-10-0719:02paxanposted#2015-10-0803:56shofetim@gerstree: https://pointslope.com/blog/datomic-pro-starter-edition-in-15-minutes-with-docker/ is also good reading#2015-10-0809:47gerstree@shofetim: Thanks. That's what got me started. I improved on the Dockerfile a bit, next week I will share it all in a blog.#2015-10-0810:12robert-stuttafordwhat’s the most performant way to find the most recently transacted datom for a given attr?#2015-10-0813:16robert-stuttaford@bkamphaus / @marshall : could I ask a huge favour and ask if you can get any sort of clarity on this issue for us, please? https://groups.google.com/forum/#!topic/datomic/1WBgM84nKmc#2015-10-0813:55bkamphaus@robert-stuttaford: is there a specific aspect you want clarified? Re: changes to the behavior, we don’t have anything to report. The present recommended strategy for accommodating this is still the same, i.e. 
generate a unique db name with gensym or appending a uuid to the name.#2015-10-0814:06robert-stuttafordStu’s last message on that thread was ‘investigating a fix’ - presumably, that means, it’s not the intended behaviour. Do you plan to fix it?#2015-10-0814:09robert-stuttafordappending a unique suffix fixes it in the short term, but that’ll bloat durable storage very, very quickly#2015-10-0814:09robert-stuttafordgives us one more thing to manage in dev and staging environments#2015-10-0814:10robert-stuttafordsuffixes are totally fine for in-memory dbs, but then, this isn’t actually an issue for in-memory dbs#2015-10-0814:10robert-stuttaforddoes that make sense?#2015-10-0814:17bkamphaus@robert-stuttaford: I do understand the points you outline here. Just nothing additional to report at this time. The exact previous behavior is unlikely to be restored, as there was at least one bugfix related to insufficient coordination around deletion.#2015-10-0814:18robert-stuttafordok - all we’re really hoping to be able to do is delete and make durable databases with the same name without restarting the peer#2015-10-0814:19robert-stuttafordeven if we have to wait for a future to deliver or something#2015-10-0814:19bkamphausThe investigation comment Stu makes is exactly that - looking into tradeoffs. We are reluctant to have people rely on create/delete in a tight cycle as a promised fast behavior. I suspect, due to how it often comes up in testing, a slow but synchronous solution won’t match what a lot of people expect. Anyways, I have brought it up with the dev team again, but can’t promise any specific outcome.#2015-10-0814:20robert-stuttafordok, great. thank you, Ben. much appreciated#2015-10-0815:06marshall@robert-stuttaford: You don’t necessarily need to restart the peer process. You can ‘reuse’ DB names after a certain timeout. 
I just confirmed that I can successfully create a db, delete it, wait 60 seconds, then create again with the same uri and reconnect.#2015-10-0815:08bkamphaus@robert-stuttaford: re: the earlier questions for fast query to get last datom transacted, something like (working example on mbrainz):
(let [hdb (d/history (d/db conn))]
(d/q '[:find ?a ?attr ?aname ?atx ?added
:in $ ?attr
:where
[(datomic.api/q '[:find (max ?tx)
:in $ ?attr
:where
[_ :artist/name _ ?tx]]
$ ?attr) [[?atx]]]
[?a ?attr ?aname ?atx ?added]]
hdb :artist/name))
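As the follow-up message notes, :artist/name is hard-coded inside the subquery where the ?attr input was presumably meant; a corrected sketch (untested, same mbrainz and conn assumptions as the original):

```clojure
(require '[datomic.api :as d])

;; latest datom transacted for a given attribute, with ?attr used
;; consistently in both the outer query and the subquery
(let [hdb (d/history (d/db conn))]
  (d/q '[:find ?a ?attr ?aname ?atx ?added
         :in $ ?attr
         :where
         ;; subquery: the most recent tx in which ?attr was asserted
         [(datomic.api/q '[:find (max ?tx)
                           :in $ ?attr
                           :where [_ ?attr _ ?tx]]
                         $ ?attr) [[?atx]]]
         [?a ?attr ?aname ?atx ?added]]
       hdb :artist/name))
```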
#2015-10-0815:09bkamphausbah, hard coded artist/name in subquery is a refactoring artifact#2015-10-0815:10bhaganyI must say, I never considered using q inside a query like that before. Nice.#2015-10-0815:12bkamphausyou can also just compose the queries, but for rest api etc. where you want everything in one query subquery is a good way to get an answer based on an aggregate, or handle an aggregate of aggregate problem#2015-10-0815:13bhaganyyes, this gets around one of my biggest bugaboos with using the rest api. I'm always trying to minimize round tripping back and forth to a peer, so this is very nice.#2015-10-0815:14bhaganyalso, let this be a record of my surprise that my computer recognizes "bugaboos" as a word.#2015-10-0816:35ljosamy biggest bugaboo is the error handling#2015-10-0818:38bhagany@ljosa: with the rest api? if so, I hear you there.#2015-10-0818:42robert-stuttafordthanks, @bkamphaus !#2015-10-0818:43robert-stuttafordmust admit i’ve never seen d/q used inside a datalog clause before! in hindsight, it’s obvious that it’s possible simple_smile#2015-10-0819:51domkmIt would be nice if datomic.api was used in database functions like datomic.Peer and datomic.Util methods. Any chance of this in the future?#2015-10-0921:10sdegutisDo you usually call d/create-database at the beginning of every process, even though it usually returns false?#2015-10-1001:43bostonaholic@sdegutis: I usually have a leiningen alias lein create-db which runs the script to create the database. Documented in the README.md for development setup#2015-10-1001:45bostonaholicand recently we've been using https://github.com/rkneufeld/conformity for "schema migrations"#2015-10-1021:30magnarsI'm making an online game - but it is essentially single player. Reading up on Datomic's partitions, it seems to me that I should create a partition for each player, since they each have a lot of data, and I almost always want to query for facts about one player at a time. Am I understanding it correctly? 
Any thoughts?#2015-10-1021:31magnarsWe're looking at maybe 30-40k partitions in that case. How would datomic feel about that?#2015-10-1021:58bostonaholicmy understanding of partitions is to create a :users partition if you're going to retrieve a lot of users at once#2015-10-1105:36domkm@magnars: I think I read that they are committed to supporting up to 100k partitions but there is no hard limit built in.#2015-10-1108:22tcrayford@magnars @domkm : I'd recommend instead creating say 1024 partitions or so and then assigning users somehow into those (maybe randomly, maybe doing a hash of some external id and using mod to chunk that into 1024 sections). Rich said a thing about this a long time ago on the mailing list (bottom post): https://groups.google.com/forum/#!searchin/datomic/rich$20hickey$20partition/datomic/7HvCCbrsOJ0/IOKsOMMcahsJ#2015-10-1110:58magnarsExcellent, that makes sense. Thanks for the advice. #2015-10-1213:32aanDoes anyone know if it’s possible to save queries across sessions in the datomic console?#2015-10-1213:34martinklepschJust got a :backup/claim-failed error when trying to backup — it seems that the backup mechanism thought the backup location is being used by another system or so. Couldn’t find anything googling so asking here: how does datomic check if a backup storage is already in use?
java.lang.IllegalArgumentException: :backup/claim-failed Backup storage already used by solglas-1862a281-5d82-4fd1-97dc-10b7fa084e9e
#2015-10-1214:20bkamphaus@martinklepsch: That message indicates that the backup URI where you’re trying to backup to is already being used to backup a different database.#2015-10-1214:21martinklepsch@bkamphaus: yeah that’s what I understood too but how does Datomic know? Or what did I do that it doesn’t know anymore? simple_smile#2015-10-1214:21bkamphausi.e. you backed up “accounts” to that URI (file location, s3 location), so you can’t backup “dossiers” to it.#2015-10-1214:22bkamphausWell, it knows the exact detail it puts there — "solglas-1862a281-5d82-4fd1-97dc-10b7fa084e9e” is the id of the db backed up there.#2015-10-1214:23bkamphausif you want to overwrite with something different and don’t want to keep the other data, you can just use s3/file system to delete the backup, or if you’re not sure of course archive/backup the current backup folder first.#2015-10-1214:43sdegutisWhen making test databases for use in tests, is it efficient to create a new one for each test and let the system GC them after they're unused (str "datomic:mem://fake-db-" (java.util.UUID/randomUUID))?#2015-10-1214:45bkamphaus@sdegutis: if the tests don’t encounter behavior that introduces garbage that doesn’t get removed (restoring over an old db with a branching t, index failures, that sort of thing), probably. If you’re really worried, have all test dbs hit the same storage table/bucket/keyspace with the txor and periodically blow it away and recreate. There are users doing this as part of their testing, actually (delete and re-init keyspace/table).#2015-10-1214:54sdegutis@bkamphaus: That's what I was doing before, but I wanted to unify my test/live code, which meant removing the initial (d/delete-database uri) line.#2015-10-1215:56peterbakThinking of using Datomic as a backing store for a web app hosted on AWS (probably on top of DynamoDB). Can anybody share any gotchas from an operational perspective? 
Did you end up creating custom admin dashboards to make sense of the data stored in Datomic, or is console sufficient? etc etc. Thanks!#2015-10-1216:13bkamphaus@sdegutis: I’m not sure I follow - do you delete at the end now, or in some batch delete? You will still have to call delete to get the database recovered by gc. Also the initial delete call would be spurious if starting from a new keyspace/table/bucket each time? Or is that i.e. what you mean you had to drop to match live?#2015-10-1216:20sdegutis@bkamphaus: Our current tests use a function that re-uses the same memory-based database URI but calls delete-database before the tests run (followed immediately by create-database), in order to clear any state from previous tests.#2015-10-1216:28sdegutis@bkamphaus: But other than that one call to delete-database, the code for handling our in-memory test database and a live database was identical, thanks to how Datomic was designed.#2015-10-1216:28sdegutis@bkamphaus: So my solution is to remove the call to delete-database, leave the call to create-database (which is a no-op if the db already exists), and change the function that produces a test-database's URI so that instead of returning a single one, it returns a new one every time.#2015-10-1216:29sdegutis@bkamphaus: It works fine, I just worry about memory consumption and whether Datomic knows to GC them properly when I'm done with them, so that I don't have to manually release them when each test is done with the temporary in-memory database.#2015-10-1311:23robert-stuttaford@peterbak: we’re doing exactly that. web-based SaaS, Clojure/ClojureScript/Datomic/AWS/DDB. happy to answer your questions. custom dashboards for sure.#2015-10-1311:23robert-stuttafordadmin data access is just as business focused as end-user data access. console is not sufficient at all.#2015-10-1313:35mishaCan anyone help me understand why quoted tx-vector '[[:db.fn/retractEntity 17592186047366]] works here:
@(datomic.api/transact (datomic.api/connect db-url) '[[:db.fn/retractEntity 17592186047366]])
but does not work here:
((defn delete-entity [conn id] @(d/transact conn '[[:db.fn/retractEntity id]])))
?
;; quoted tx-vector inside function:
(defn delete-entity [conn id]
  @(d/transact conn '[[:db.fn/retractEntity id]]))
(delete-entity (d/connect db-url) 17592186047368)
=> {:db-before #2015-10-1313:36bkamphaus@misha not sure why you’re using the quoted form?#2015-10-1313:37misha@bkamphaus: I'm not sure either simple_smile#2015-10-1313:37bkamphausbut in the second case you’re not passing the evaluated var, you’re passing the symbol (b/c it’s in a quoted coll)#2015-10-1313:38mishaso it evaluates to :db.fn/retractEntity 'id?#2015-10-1313:38misha"retract symbol id"#2015-10-1313:38bkamphausI’m probably not being as precise as I mean to.#2015-10-1313:39bkamphausBut you’re not binding a value to the id symbol in the scope of the fn, without delving into how it gets treated or eval’d on the other side#2015-10-1313:39mishaok, then what makes it work, when I am passing the same quoted vector to d/transact directly? what happens/does not happen if the same "text" gets wrapped up in a function. does it have something to do with the clojure's reader? what should I read about this?#2015-10-1313:40mishaah, yes#2015-10-1313:40bkamphausthe main thing is, transact is intended to take vecs, not quoted vecs#2015-10-1313:40mishatrue#2015-10-1313:40mishathank you#2015-10-1313:40bkamphaus@misha you don’t pass the same quoted vec#2015-10-1313:40bkamphausin your example, you pass one with the value in it, not the symbol#2015-10-1313:41mishamakes sense.
this does not work either:
(let [id 17592186047360]
  @(datomic.api/transact (datomic.api/connect w.db/db-url) '[[:db.fn/retractEntity id]]))
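For reference, the fix for the quoting issue discussed above is to build the transaction data as a plain (unquoted) vector, so that `id` is evaluated rather than read as the literal symbol; a minimal sketch, assuming `d` is aliased to `datomic.api`:

```clojure
;; Unquoted tx vector: `id` is evaluated to the entity id value
;; instead of being passed through as the symbol 'id.
(defn delete-entity [conn id]
  @(d/transact conn [[:db.fn/retractEntity id]]))
```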
#2015-10-1313:42misha*for the same reason.#2015-10-1313:42misha@bkamphaus: cheers#2015-10-1315:11bkamphausDatomic 0.9.5327 is now available https://groups.google.com/d/msg/datomic/4BwCDxs6zKw/xRe-1uYqCQAJ#2015-10-1315:21bkamphaus@robert-stuttaford: ^ addresses your issue from a few days ago, with:
> * Bugfix: Fixed bug that prevented connecting from a peer that deletes and recreates a database name.#2015-10-1315:21bkamphausI’m sure there are other interested parties in that one lurking here as well… simple_smile#2015-10-1315:23robert-stuttaford@bkamphaus: -applause- -celebration- thank you!#2015-10-1409:46frankiesardoHi all, we're upgrading to datomic pro#2015-10-1409:47frankiesardoAre 5 processes enough to maintain a staging + prod web server?#2015-10-1409:47frankiesardo1 Peer prod, 1 Peer staging, 1 prod transactor, 1 staging transactor, 1 for occasional dev REPL#2015-10-1409:47frankiesardoor do you usually share the same transactor between staging and prod?#2015-10-1410:38mitchelkuijpers@frankie: you can use your license for multiple envs#2015-10-1410:38mitchelkuijpersThe limit is for the amount of peers you can apply to one transactor that is connected to a certain database#2015-10-1410:39mitchelkuijpersSo you can use the license for local dev#2015-10-1410:39mitchelkuijpersstart a test transactor with the max amount of peers#2015-10-1410:39mitchelkuijpersand a prod with the max amount of peers#2015-10-1410:40mitchelkuijpersfrom the datomic pricing page:#2015-10-1410:40mitchelkuijpersA Datomic Pro license is priced by the number of simultaneous processes using Datomic software (peers + transactors) in production at your company.
Testing and development use against non-production databases does not count towards your limit.#2015-10-1410:41adam_awan@mitchelkuijpers: how are non-production databases / testing / development differentiated from production in Datomic’s eyes?#2015-10-1410:41frankiesardoWe don't want to use datomic-free in our dev/staging environment, if that's what you mean#2015-10-1410:42mitchelkuijpers@frankie: No you can use the same license#2015-10-1410:42mitchelkuijpers@adam_awan: they don't but you accept a license agreement#2015-10-1410:44frankiesardo@mitchelkuijpers: I mean, we want to use a transactor, not an in-memory db#2015-10-1410:44mitchelkuijpersAgain: A Datomic Pro license is priced by the number of simultaneous processes using Datomic software (peers + transactors) in production at your company. Testing and development use against non-production databases does not count towards your limit.#2015-10-1410:44mitchelkuijpersyou can start multiple transactors with the same license#2015-10-1410:45frankiesardoInteresting. But obviously the question is how do they know which database is a production database?#2015-10-1410:46frankiesardoIf you can use the same license to start multiple transactors then you can potentially cheat and have N production databases, right?#2015-10-1410:46frankiesardo^ ok license agreement i see#2015-10-1410:47mitchelkuijpers@frankie: Yes#2015-10-1410:47mitchelkuijpers@frankie: So you can simply start with datomic pro starter and then grow from there simple_smile#2015-10-1410:50frankiesardo@mitchelkuijpers: thanks a lot that was really useful#2015-10-1410:50mitchelkuijpers@frankie We went through the same 2 months ago, glad to help#2015-10-1410:51adam_awan@mitchelkuijpers: yes thanks, very helpful#2015-10-1416:29bostonaholicwhen trying to alter an attribute to [:db/add :person/name :db/fulltext true] I'm getting:
Caused by: java.lang.IllegalArgumentException: :db.error/invalid-alter-attribute Error: {:db/error :db.error/unsupported-alter-schema, :attribute :db/fulltext, :from :disabled, :to true}
Do I need to first [:db/retract :person/name :db/index true]?#2015-10-1416:31bostonaholic(I'm assuming I can't have both :db/index true AND :db/fulltext true on the same attribute)#2015-10-1416:32marshall@bostonaholic: You can’t alter :db/fulltext#2015-10-1416:32marshallThe list of supported schema alterations is here: http://docs.datomic.com/schema.html#altering-schema-attributes#2015-10-1416:32bostonaholicthat's what I was afraid of#2015-10-1416:32bostonaholicah, thanks#2015-10-1416:33bostonaholicso I guess a new "shadow field" should be added?#2015-10-1416:34marshallif you need to add the fulltext capability, you can rename the existing attribute, create a new one with the appropriate settings (fulltext true) and migrate the data from the existing attribute to the new one#2015-10-1416:35bostonaholicthat would work, too#2015-10-1416:35marshallmake sure you backup your DB before doing any schema alteration or migration#2015-10-1416:35bostonaholicright on, thanks!#2015-10-1416:41bostonaholic@marshall: am I correct in my assumption that an attribute cannot have both :db/index and :db/fulltext?#2015-10-1417:41marshall@bostonaholic: an attribute can have both. if it is :db/index true, it will be indexed in AVET, :db/fulltext allows it to be substring searched using the fulltext query funcion#2015-10-1417:41marshallfunction#2015-10-1417:42bostonaholicthat makes sense. I wasn't sure#2015-10-1417:43marshallthe mbrainz sample repo shows use of both. https://github.com/Datomic/mbrainz-sample#2015-10-1417:43marshalli.e. : https://github.com/Datomic/mbrainz-sample/blob/master/schema.edn#L29-L36#2015-10-1419:01curtosisis Cloudant congruent enough to Couchbase to work as a storage backend?#2015-10-1419:44bhaganyRegarding the resolution of temp id's - I am using the rest api and thus need a way to go from temp id to actual id outside of the JVM. I'm currently doing some bitshifting that I found in a mailing list post, but this requires knowing the partition id. 
I notice that resolve-tempid doesn't need the partition id though, and I'm in a situation now where this is highly desirable. Is there a way for me to resolve tempids without knowing the partition id?#2015-10-1509:01syk0sajehi, all! i'm pretty new to clojure and just started trying out datomic. i just got the latest (0.9.5327) and have gone through the tutorials with the shell. however, i can't seem to get the console running as i hit the following:
Exception in thread main java.lang.IllegalAccessError: tried to access method clojure.lang.RT.classForNameNonLoading(Ljava/lang/String;)Ljava/lang/Class; from class datomic.console.error#2015-10-1509:01syk0sajeam pretty lost on how to proceed from here. would anyone have any tips?#2015-10-1510:02tcrayford@syk0saje: think there was just a mailing list post on that topic#2015-10-1510:06syk0sajeah i see. would you have a link to the post? or for joining the mailing list?#2015-10-1510:36tcrayfordhttps://groups.google.com/forum/#!forum/datomic#2015-10-1510:37tcrayfordspecifically the last post on https://groups.google.com/forum/#!topic/datomic/4BwCDxs6zKw#2015-10-1512:43dsapoetraHi everyone, i'm really new to datomic, well i need advice, i got datomic free 0.9.5302 ,
when i use
(d/create-database "datomic:free://localhost:4334/sample-database")
=> true
but
(d/connect "datomic:free://localhost:4334/sample-database")
=>NullPointerException com.google.common.base.Preconditions.checkNotNull (Preconditions.java:191)
Problem didn't occur when i use datomic:mem
Any advice, please?#2015-10-1512:51caskolkmDoes somebody know how i can get a list of retracted entities in the history?#2015-10-1514:19marshall@caskolkm: Something like:
(defn retracted-datoms
  [db]
  (let [hdb (d/history db)
        present? (fn [_ datom]
                   (not (:added datom)))
        ret-db (d/filter hdb present?)
        ret-datoms (d/datoms ret-db :eavt)]
    (seq ret-datoms))) #2015-10-1514:19marshallBut be aware that will be performing a full DB scan#2015-10-1514:19marshallso it will take a long time on a DB of any measurable size#2015-10-1514:20bkamphaus@caskolkm: to follow up on that, if you can limit by attribute or entity or something, can query the retraction filtered history db much faster#2015-10-1514:23bkamphaushow do I edit my comment, query not filter constraints by attr 😛 — though you could do it there, maybe should?#2015-10-1514:26bkamphausok, that entire thing was spurious b/c I started w/Marshall’s filter, but w/query by specific attr you don’t need that of course, can just do history query#2015-10-1518:31bostonaholicany particular reason why "fuzzy query" (`foobar~`) on a :db/fulltext attribute doesn't work, but prefix query does (`foobar*`)?#2015-10-1613:09caskolkmsee screenshots#2015-10-1613:30bkamphaus@caskolkm: can you provide the code for the queries?#2015-10-1613:31caskolkm(defn get-archived-company
  "Looks in the history for a retracted company"
  [db host company-id]
  {:pre [(db? db)
         (entity? host)
         (long? company-id)]}
  (q '[:find (pull ?e pattern) .
       :in $ ?host ?e pattern
       :where
       [?e :host/belongs-to ?host ?tx false]
       [?e :company/name _ ?tx false]]
     (d/history db) (:db/id host) company-id company-pull-pattern))
Note: a company is deleted when both name and host are removed#2015-10-1613:32bkamphaus@caskolkm: one issue is that you’re specifying the scalar find specification . - you would probably see both names but at present you’re limiting it to only a single value returned.#2015-10-1613:33caskolkm@bkamphaus: yes and no, without the .
in the console it shows the same#2015-10-1613:35caskolkm[:find (pull ?e [*])
 :where
 [?e :host/belongs-to _ ?tx false]
 [?e :company/name _ ?tx false]]
#2015-10-1613:40caskolkmit’s weird because it pulls information based on a entity id..#2015-10-1613:40bkamphausI’m not sure behavior for pulling from history is defined here in query, you can’t actually do it from the direct pull API — i.e. if you provide a history db input into pull directly you will see java.lang.IllegalStateException: Can't pull from history, or for entity java.lang.IllegalStateException: Can't create entity from history — in general, pull/entity not supported from history, as entities are derived from a database that contains only all assertions and retractions (applied) through the t of the database.#2015-10-1613:45caskolkmthat’s weird, you can call it and it returns something?#2015-10-1613:45caskolkm😛#2015-10-1614:08bkamphaus@caskolkm: investigating the fact that it returns something as a bug, which I’ve repro’d. I expect that it should throw and we’ll fix this in a future version, but can’t promise that as of yet.#2015-10-1614:09caskolkmwill the pull be supported in the future?#2015-10-1614:10bkamphausDepends on what you mean? pull specs in query and pull alone is definitely planned to be supported w/longevity you would expect for user facing API. What I mean specifically is a pull on history should fail/throw.#2015-10-1614:11bkamphausentities, either from entity (lazy traversing obj) or pull (pure data rep) are projections of datoms from a “current time” type db — they’re fairly nonsensical projected against history as any constraints that apply meaningfully to the entity no longer make sense, and e.g. retractions as part of an entity at once don’t make sense.#2015-10-1614:12bkamphausI think the more typical case would be to use history to figure out which t/tx values are interesting transition points for the entity, then use e.g. 
the as-of filter to see the entity at that point in time.#2015-10-1615:48lowl4tencybkamphaus: hi, ocassionally have you use syslog for datomic logs in the official AMI?#2015-10-1721:55val_waeselynckHey there, I have a question about mocking datomic.Connection, I put it in persistent form here: https://groups.google.com/forum/#!topic/datomic/z25ZAZzD_Ws#2015-10-1721:55val_waeselynckThanks simple_smile#2015-10-1813:36robert-stuttaford@val_waeselynck: you can preload a db with schema and fixtures before all your tests and then retain that db as a global value. then you can use d/with on that in each test as you need, saving having to recreate the db from scratch for each test. should be stupid fast.#2015-10-1816:05val_waeselynck@robert-stuttaford: yes that's my objective simple_smile the thing is, I want to get these benefits transparently when I'm using datomic.api/transact, hence a desire to mock the connection#2015-10-2110:34val_waeselynck@robert-stuttaford: I did the mocking stuff on my dev laptop, it takes on average 2ms to create a mock db and run one transaction on it simple_smile#2015-10-2110:34val_waeselynckSo yeah, good for testing.#2015-10-2118:07thosmos@dsapoetra: what's your series of repl commands?#2015-10-2119:55robert-stuttaford@val_waeselynck: nice#2015-10-2120:01bostonaholiccan anyone help with a query I'm trying to build? https://gist.github.com/bostonaholic/30e31f4620804632d3c8#2015-10-2120:04bostonaholicfull-search is what's falling apart#2015-10-2120:18jgdaveychange or to or-join, naming all vars that need bound#2015-10-2120:18jgdaveyLooks like maybe just ?person in your case?#2015-10-2120:19jgdavey@bostonaholic: Oh, I mean ?person and ?query#2015-10-2120:20bostonaholicI've tried the or-join and it still doesn't work#2015-10-2120:21bostonaholicwell, at least I don't get the exception. 
But only Bob is returned#2015-10-2120:25bostonaholicI updated the gist to use or-join#2015-10-2120:36bkamphausworks fine for me with clj re-pattern and your or-join#2015-10-2120:36bkamphaus(defn full-search [db query]
  (d/q '[:find [(pull ?person [* {:person/address [*]}]) ...]
         :in $ ?query
         :where
         (or-join [?person ?query]
           (and
             [?person :person/name ?name]
             [(re-find ?query ?name)])
           (and
             [?person :person/address ?address]
             [?address :address/street ?street]
             [(re-find ?query ?street)]))]
       db
       (re-pattern (str "(?i).*" query ".*"))))
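The two match paths in the or-join above can equivalently be factored into a Datalog rule, which often reads more clearly as the logic grows; a sketch under the same schema assumptions as the gist (`search-rules` and `matches` are illustrative names):

```clojure
;; Each rule body is one alternative way for ?person to match ?query.
(def search-rules
  '[[(matches ?person ?query)
     [?person :person/name ?name]
     [(re-find ?query ?name)]]
    [(matches ?person ?query)
     [?person :person/address ?address]
     [?address :address/street ?street]
     [(re-find ?query ?street)]]])

;; The rule set is passed as the % input to the query.
(defn full-search-with-rules [db query]
  (d/q '[:find [(pull ?person [* {:person/address [*]}]) ...]
         :in $ % ?query
         :where (matches ?person ?query)]
       db
       search-rules
       (re-pattern (str "(?i).*" query ".*"))))
```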
#2015-10-2120:36jgdaveyLooks like there’s a typo in your data#2015-10-2120:36jgdaveyhttps://gist.github.com/bostonaholic/30e31f4620804632d3c8#file-datomic-search-clj-L27 should be :db.part/user#2015-10-2120:36bkamphausAlso gist contains a couple of tempid partition typos#2015-10-2120:37bkamphausyeah ^ that one#2015-10-2120:37bostonaholicha, thanks#2015-10-2120:38jgdaveyI just ran this with the typos fixed, and it does return the expected results (Bob and Charles)#2015-10-2120:39bostonaholic@jgdavey: hmm#2015-10-2120:40jgdaveyjava version “1.8.0_20” and [com.datomic/datomic-pro “0.9.5302”]#2015-10-2120:42bkamphaus@bostonaholic can confirm what @jgdavey sees (working as expected), w/1.7.0_60 and 0.9.5327 — so I expect issue is in data.#2015-10-2120:42bostonaholicYAY, it works!#2015-10-2120:43bostonaholicthanks for your help @bkamphaus and @jgdavey#2015-10-2122:52bostonaholicI've updated the gist with an added bit of complexity. Note that line #102 works, but #96 does not https://gist.github.com/bostonaholic/30e31f4620804632d3c8#2015-10-2122:54bostonaholic(load-file "datomic-search.clj") works great for that file#2015-10-2123:07thosmos@dsapoetra: these commands work as expected for me in the IntelliJ repl: (require '[datomic.api :as d]) (d/create-database "datomic:free://localhost:4334/sample-database") (d/connect "datomic:free://localhost:4334/sample-database")#2015-10-2123:35jgdavey@bostonaholic: It seems like destructuring is tripping you up. Since you’re passing in an empty collection, the or-join cannot bind ?zip. The query will always return nothing.#2015-10-2123:36jgdaveyOne way to get around this is to destructure the collection within an or-join.#2015-10-2123:38jgdaveyIt’s probably not the most elegant way to do this, but it does work. Somebody else might be able to provide a more elegant way to destructure (i.e.
without identity).#2015-10-2123:52bostonaholicinteresting#2015-10-2123:53bostonaholicI like my (conj zips "") hack 😜#2015-10-2123:57bostonaholicthis is when I wish datomic was OSS so I could dig into the source to see if it's a bug, or a feature#2015-10-2202:30bkamphausThe or-join logic had gotten complex enough I'd probably just define a rule instead. Maybe more personal preference, though.#2015-10-2215:33bostonaholic@bkamphaus: actually, I have them all in rules, but I wanted to simplify my example for you all#2015-10-2215:34bostonaholicand I love that the clauses inside of a rule are by default and so I don't need to wrap them when in a rule#2015-10-2217:01bkamphaus@bostonaholic: yeah, the split use case for me is this (again, communicating preference more than anything) — different paths to a match = rule. Something like “people with blue or green eyes” is an or-clause. If I have to nest or/and too far, or write out a truth table to make sure I’m thinking through De Morgan’s laws correctly, well maybe a rule simple_smile#2015-10-2217:02bostonaholicheh#2015-10-2217:02bostonaholicCS to the rescue!#2015-10-2300:42domkmIs there any harm in putting enum values in :db.part/db instead of :db.part/user as the docs show?#2015-10-2306:29domkmIs it possible to create a transaction that contains a circular reference? I'm trying to install an attribute and annotate a transaction with that attribute at the same time but it's erroring. https://gist.github.com/DomKM/fda80d4d3e948ae31140#2015-10-2310:46robert-stuttafordpretty sure that’s not possible, @domkm - it uses the known-good schema to validate the transaction, the in-flight tx is not known-good at that point#2015-10-2310:46robert-stuttafordenums in db, what happens when you try – does it work?#2015-10-2315:51domkmThanks @robert-stuttaford#2015-10-2315:53domkm@robert-stuttaford: Re: enums in :db.part/db, I haven't tried but I assume it would work. 
However, I'm also able to (if memory serves) create an attr that begins with :db. which is explicitly disallowed, so I think there's not much validation.#2015-10-2416:47stuartsierra@domkm Circular references in general are not a problem. You can use tempids to transact a graph-like structure containing cycles. But schema is special — you cannot use an attribute (or partition) in the same transaction in which it was created.#2015-10-2417:16domkmThanks @stuartsierra#2015-10-2518:36raymcdermottpassword suppression ….#2015-10-2518:37raymcdermottI need to prevent the transactor from emitting the password used to connect to the backend server#2015-10-2518:37raymcdermottI use this at the moment
transactor ${DYNO_PROPERTIES} | sed 's/\(.*\)&password=.*&\(.*\)/\1\&password=*****\&\2/'#2015-10-2518:40raymcdermottis there a more supported mechanism for this instead of the Shell hack?#2015-10-2606:22domkmQuestion for the Cognitects out there: I noticed that there is a :db.install/function ident. Its docstring says that it is used to install functions but it is not documented on http://docs.datomic.com. Could you shed some light on :db.install/function?#2015-10-2606:24domkmAlso, any hint about when we can use :db.install/valueType? 😉#2015-10-2612:35raymcdermottanother connection question for the Datomics#2015-10-2612:36raymcdermottwhy does the client need to provide the JDBC url when making a connection to the transactor?#2015-10-2612:36raymcdermottthis seems like a leaky abstraction#2015-10-2612:36raymcdermottof course the fact that the client needs to specify the storage at all is leaky#2015-10-2612:37raymcdermottor am I missing somthing?#2015-10-2612:37raymcdermottapart from an e in something#2015-10-2612:45mitchelkuijpers@raymcdermott The client gets the transactor url from the database, this means that if you for example start a new transactor or one crashes and another one comes up it gets the new transactor url from the database. This also means that your data fetching keeps working when the transactor is down. It is actually pretty nifty#2015-10-2612:59robert-stuttaford@raymcdermott: it rocks that peers talk to storage directly. transactor doesn’t have to be involved for reads at all. means scaling reads is independent of scaling writes.#2015-10-2613:00robert-stuttafordalso failover process uses storage; so peers know when failover is happening even while primary transactor is dead#2015-10-2613:03marshall@raymcdermott: You can suppress the printing of the login information by passing:
-Ddatomic.printConnectionInfo=false
as a parameter to the transactor launch#2015-10-2613:03marshallhttp://docs.datomic.com/system-properties.html#2015-10-2613:04robert-stuttafordhey @marshall!#2015-10-2613:05marshallMornin’ @robert-stuttaford . Well, I guess not for you simple_smile#2015-10-2613:06robert-stuttafordhah, no#2015-10-2613:06robert-stuttafordso when’s the Next Big Thing coming for Datomic, @marshall? i just can’t imagine that you guys are ‘done’ simple_smile#2015-10-2613:08marshallNope, definitely not done, but I don’t have any visibility into when/what is coming; at least not until it’s essentially release-ready#2015-10-2613:10robert-stuttafordok, cool simple_smile good to know there’s still stuff cooking!#2015-10-2613:20raymcdermottthanks @robert-stuttaford and @mitchelkuijpers#2015-10-2613:20raymcdermottoh and you too @marshall !!#2015-10-2613:31pesterhazyHi. Does anyone have a solution for partial backups of a datomic db?#2015-10-2613:32pesterhazyThe use case: I'd like to extract a set of "fixtures" from an existing production db. With mysqldump (for example) I'd just exclude certain tables from the dump.#2015-10-2613:33pesterhazyBut a normal Datomic dump will typically include all facts (whereas I'd like to exclude, say, all events or all payment information).#2015-10-2613:35robert-stuttafordhah – is that you, @bkamphaus ?#2015-10-2613:36robert-stuttafordpesterhazy: we wrote custom code for this that exports to a transit file, zips it, and puts it on S3#2015-10-2613:36pesterhazythat sounds like a good solution#2015-10-2613:36robert-stuttafordtakes a database and a configuration of attrs to export, and exports all the schema (whether in the config or not), enums, and data for those attrs#2015-10-2613:37robert-stuttafordour prod db backup is 9gb. this exports the ‘control’ information which comes to 12mb gzipped#2015-10-2613:37robert-stuttafordor 55mb unzipped.
it’s super useful simple_smile#2015-10-2613:37robert-stuttafordlemme see if i can gist it#2015-10-2613:37pesterhazythat would be PERFECT simple_smile#2015-10-2613:38pesterhazyI tried https://groups.google.com/d/msg/datomic/eQQnnqYl67Y/z-Ib60NCJAAJ but it doesn't seem to cope well with 100s of MB of data#2015-10-2613:41raymcdermott@marshall: I noticed that password data does get leaked out when things go wrong (it dumps the properties) … maybe something more complete is needed?#2015-10-2613:50robert-stuttaford@pesterhazy: not a simple cut and paste job, i’m afraid. i’ve put a note on my list to get it done this week. are you on twitter?#2015-10-2613:50pesterhazysure#2015-10-2613:50pesterhazysame handle#2015-10-2613:50pesterhazywould be useful for lots of users, I think#2015-10-2613:51robert-stuttafordtotally. i’d share it as a public gist and tweet it, and cc you#2015-10-2613:51pesterhazycool thanks!#2015-10-2613:52robert-stuttaford:+1: was super fun to write with transducers#2015-10-2613:53bkamphauseverything is better with some transducers thrown in#2015-10-2613:54bkamphausthey’re the sriracha of clojure#2015-10-2613:54robert-stuttafordthis code actually builds transducers dynamically based on the config, and memoizes them#2015-10-2613:54robert-stuttafordfor teh speedz#2015-10-2613:55robert-stuttafordsriracha +1#2015-10-2613:55robert-stuttaford(had to look that up)#2015-10-2613:57bkamphaus@raymcdermott: taking over from @marshall here - if you can email me — bkamphaus @ cognitect — we definitely don’t want the creds to leak anywhere if their printing has been disabled by the command line arg.
If you can pass along your transactor launch command, logs or console output that shows the leaked creds (redact/replace the actual creds), and the type of failure required to repro (or steps to take to induce that failure), I’ll investigate as a bug.#2015-10-2614:01raymcdermottthanks @bkamphaus - I will double check the issue on this side first#2015-10-2614:02bkamphaus@raymcdermott: cool, feel free to ping me here whenever you send it along - also would be helpful to have the txor properties file (again w/any creds or sensitive info redacted from it).#2015-10-2614:03raymcdermottI struggling with Postgres / SSL on Heroku at the moment#2015-10-2616:40kbaribeauhey, does anyone have any experience with this error? Our transactor is currently unresponsive:
2015-10-26 16:35:07.170 INFO default datomic.update - {:tid 133, :pid 1947, :message "Update failed"}
java.lang.NullPointerException: null
at datomic.db$next_valid_inst.invoke(db.clj:2412) ~[datomic-transactor-pro-0.9.5206.jar:na]
at datomic.db.ProcessExpander.getData(db.clj:2468) ~[datomic-transactor-pro-0.9.5206.jar:na]
at datomic.update$processor$fn__10171$fn__10172$fn__10173$fn__10177$fn__10180$fn__10181.invoke(update.clj:246) ~[datomic-transactor-pro-0.9.5206.jar:na]
at clojure.lang.Atom.swap(Atom.java:37) ~[clojure-1.6.0.jar:na]
at clojure.core$swap_BANG_.invoke(core.clj:2232) ~[clojure-1.6.0.jar:na]
at datomic.update$processor$fn__10171$fn__10172$fn__10173$fn__10177$fn__10180.invoke(update.clj:240) ~[datomic-transactor-pro-0.9.5206.jar:na]
at datomic.update$processor$fn__10171$fn__10172$fn__10173$fn__10177.invoke(update.clj:238) ~[datomic-transactor-pro-0.9.5206.jar:na]
at datomic.update$processor$fn__10171$fn__10172$fn__10173.invoke(update.clj:235) [datomic-transactor-pro-0.9.5206.jar:na]
at datomic.update$processor$fn__10171$fn__10172.invoke(update.clj:216) [datomic-transactor-pro-0.9.5206.jar:na]
at datomic.update$processor$fn__10171.invoke(update.clj:216) [datomic-transactor-pro-0.9.5206.jar:na]
at datomic.update$processor.doInvoke(update.clj:216) [datomic-transactor-pro-0.9.5206.jar:na]
at clojure.lang.RestFn.applyTo(RestFn.java:139) [clojure-1.6.0.jar:na]
at clojure.core$apply.invoke(core.clj:626) [clojure-1.6.0.jar:na]
at datomic.update$background$proc__10093.invoke(update.clj:58) [datomic-transactor-pro-0.9.5206.jar:na]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_55]
#2015-10-2616:42kbaribeauwe're seeing timeouts every time we try to issue a transaction. the above error comes from the transactor log. we suspect the transactor is basically paused because of that error.#2015-10-2616:46bkamphaus@kbaribeau: two quick questions (1) does the problem persist across transactor restart (2) do you have the logs from when the error first started occurring?#2015-10-2616:46kbaribeau(1) yes#2015-10-2616:46kbaribeau(2) there's a bunch of instances of that exception in our logs. double checking#2015-10-2616:57kbaribeauwe've got the logs, for sure, there's just a lot of data to search through#2015-10-2617:00kbaribeauthere's :txid in these logs entries, can we use that to find out what was in the transaction that was failing?#2015-10-2617:02bkamphausIf you can get first log (just grep or something) that has “Update failed” in it w/the accompanying “next_valid_inst” in stack trace, can you email it to me at bkamphaus @ cognitect ?#2015-10-2617:08bkamphaus@kbaribeau have you backed up your database? Two pieces (1) back up right away to be safe, maybe to a new location in file store/s3 or w/e you back up to if your latest db backup does/did not show this issue, (2) verify backup will complete successfully#2015-10-2617:16bkamphauswait a minute, I may be misreading the error#2015-10-2617:18bkamphaus@kbaribeau: what’s the most recent instant in the database? Is it possible a transaction was incorrectly written future dated? w/explicit db/txInstant as specified here http://docs.datomic.com/transactions.html#reified-transactions#2015-10-2617:20kbaribeauhmm, there's a datomic api call to get that right?#2015-10-2617:21kbaribeaubasis-t?#2015-10-2617:26kbaribeauI doubt we've ever set a T value explicitly. d/basis-t tells me our most recent T value is 4053248. should that show up in the log?#2015-10-2617:31kbaribeaud/next-t seems to work. 
but I suppose that's different from datomic.db$next_valid_inst#2015-10-2617:40bkamphausDatoms about transaction entity from last successful transaction, replace safely-before-inst with date prior to issue occurring:
(let [log (d/log conn)
      safely-before-inst #inst "2015-10-01" ;example date, replace
      txes (d/tx-range log safely-before-inst nil)
      last-tx-data (:data (last txes))]
  (filter #(= 3 (d/part (:e %))) last-tx-data))
#2015-10-2617:40bkamphausif you annotate transactions with domain specific info, verify you’re ok sharing prior to pasting, or pm me w/tx datoms from last successful transaction.#2015-10-2617:46bkamphaus^ @kbaribeau#2015-10-2617:46kbaribeauthanks, running it#2015-10-2617:48kbaribeaujust need a minute, lost my repl 😞#2015-10-2617:55kbaribeau(let [log (d/log @connection/conn)
      safely-before-inst #inst "2015-09-25T00:00:00.000-00:00" ; example date, replace
      txes (d/tx-range log safely-before-inst nil)
      last-tx-data (:data (last txes))]
  (filter #(= 3 (d/part (:e %))) last-tx-data))
; (#datom[13194143586558 50 #inst "2015-10-23T21:30:09.200-00:00" 13194143586558 true])
(d/touch (d/entity (d/db @connection/conn) 13194143586558))
; {:db/id 13194143586558, :db/txInstant #inst "2015-10-23T21:30:09.200-00:00"}
#2015-10-2618:16kbaribeau^ not quite the most recent stuff#2015-10-2618:16bkamphaus@kbaribeau: are you able to upgrade to latest? 0.9.5327 - on transactor#2015-10-2618:16kbaribeauwe've got more than one database#2015-10-2618:18bkamphaus@kbaribeau: preferably upgrade both transactor and peer lib dep to latest, 0.9.5327 - we think there’s a possibility it stems from an issue corrected in that version.#2015-10-2618:18kbaribeauwe can update. what's in the latest release that's interesting? we're on 5206#2015-10-2618:18kbaribeauoh ok#2015-10-2618:18kbaribeauwe'll update#2015-10-2618:35kbaribeaudoes upgrading from 5206 to 5327 require that we also upgrade the peer? seeing some errors that might be indicating that#2015-10-2618:39bkamphaus@kbaribeau: yeah, push dep upgrade for peer apps as well#2015-10-2618:46kbaribeaunew error?
#object[datomic.promise$settable_future$reify__6305 0x262beb24 {:status :failed, :val #error {
:cause "java.lang.NullPointerException"
:via
[{:type java.util.concurrent.ExecutionException
:message "java.lang.NullPointerException: java.lang.NullPointerException"
:at [datomic.promise$throw_executionexception_if_throwable invoke "promise.clj" 10]}
{:type java.lang.NullPointerException
:message "java.lang.NullPointerException"
:at [clojure.core$eval329$fn__330 invoke "NO_SOURCE_FILE" -1]}]
:trace
[[clojure.core$eval329$fn__330 invoke "NO_SOURCE_FILE" -1]
[datomic.error$deserialize_exception invoke "error.clj" 135]
[datomic.peer.Connection notify_error "peer.clj" 401]
[datomic.connector$fn__8480 invoke "connector.clj" 169]
[clojure.lang.MultiFn invoke "MultiFn.java" 233]
[datomic.connector$create_hornet_notifier$fn__8486$fn__8487$fn__8490$fn__8491 invoke "connector.clj" 194]
[datomic.connector$create_hornet_notifier$fn__8486$fn__8487$fn__8490 invoke "connector.clj" 189]
[datomic.connector$create_hornet_notifier$fn__8486$fn__8487 invoke "connector.clj" 187]
[clojure.core$binding_conveyor_fn$fn__4444 invoke "core.clj" 1916]
[clojure.lang.AFn call "AFn.java" 18]
[java.util.concurrent.FutureTask run "FutureTask.java" 266]
[java.util.concurrent.ThreadPoolExecutor runWorker "ThreadPoolExecutor.java" 1142]
[java.util.concurrent.ThreadPoolExecutor$Worker run "ThreadPoolExecutor.java" 617]
[java.lang.Thread run "Thread.java" 745]]}}]
#2015-10-2618:47bkamphaus@kbaribeau: this is after upgrading both?#2015-10-2618:47kbaribeauyes#2015-10-2618:48bkamphausthis is on transactor or peer on startup?#2015-10-2618:49kbaribeauthis is on a d/transact call#2015-10-2618:49kbaribeautransactor and peer startup fine#2015-10-2618:53bkamphaus@kbaribeau: is there an exception being reported on the transactor for that transaction attempt also?#2015-10-2621:39zentropeModelling question:#2015-10-2621:40zentropeI have “user” entities and “club” entities. Users are members of one or more clubs, and clubs have one or more users (as members).#2015-10-2621:40zentropeI could go with :user/clubs or :club/users.#2015-10-2621:41zentropeBut I want that “membership” to be typed, as in, “president” “secretary” “member” and so on.#2015-10-2621:41zentropeSo, I can't just associate a user with a club via a cardinality/many because that association itself needs to be annotated.#2015-10-2621:42zentropeDo I create a “membership” entity which has refs from users to clubs along with a role enum (pres, veep, etc)?#2015-10-2621:42zentropeThat works, but feels kinda RDBMSish.#2015-10-2621:43zentropeOr do you go with :club/presidents :club/veeps :club/members?#2015-10-2621:45zentrope(Of course, what I want is to do a Pull API thing in a user and get a list of all the clubs s/he belongs to, and what the membership role is.)#2015-10-2621:46bkamphaus@zentrope: It depends simple_smile#2015-10-2621:47zentropeYeah. I don’t have a lack of ideas, just enough lack of experience to choose between them. ;)#2015-10-2621:48bkamphausFor simple case, I would probably use multiple attributes, :club/president :club/veep :club/member … etc. as card/many and define a rule that says a user is one of these. Of course the Pull API use case with this is less straight forward. I might also try to define the president of relation from the user if a cardinality one constraint should apply to that direction. 
You could reify the relation, also, as you mention.#2015-10-2621:49bkamphausIs the Pull API use case you mention above the most common use case you envision, or just something you want to make sure is included?#2015-10-2621:49zentropeI’m using the Pull API to make pulling a user’s data kind of document-like.#2015-10-2621:49zentropeGet me the whole state for that user so I can blob it up to a UI and render it.#2015-10-2621:50zentropeDoing the pulls in stages works, of course, but I’m hoping I’m not missing something given my past thinking in terms of using Postgres.#2015-10-2621:54bkamphausI guess it depends on how far your use of this membership idea. You could go so far as to make each “membership” a component entity of a person entity that refers to the class of membership and the organization, start date, end date, billed amount, etc.#2015-10-2621:55zentropeYes, that actually seems to be a reasonable approach, keeping it open ended.#2015-10-2621:56zentropeUsing the Pull API, how do I reference the membership?#2015-10-2621:56zentrope[:user/id :user/name …] ?#2015-10-2621:58zentrope[:user/id :user/name {:membership/_user [{:membership/role [:db/ident]}]}] seems to be getting me somewhere.#2015-10-2621:59bkamphaushttp://docs.datomic.com/pull.html#component-defaults — component defaults in pull documentation#2015-10-2622:02zentropeI can get from user -> membership -> club but can’t add -> membership -> users ; to get a list of all the members of that club.#2015-10-2622:02bkamphausthat would be assuming a structure like:
{:user/id
 :user/name
 :user/membership
 [{:membership/org …
   :membership/role ...
   :membership/start .. }]}
Where a membership is isComponent true, not sure if that’s a model that sounds reasonable for your use case.#2015-10-2622:02zentropeHopefully it’s just syntax.#2015-10-2622:02zentropeYes. That’s working for me.#2015-10-2622:04zentropeWhat I want, though, is [user] -> [clubs] -> [members in each club] but I can’t seem to make the club to a membership lookup to find users. Hm.#2015-10-2622:06zentrope[:user/id
 :user/name
 :user/email
 {:membership/_user
  [{:membership/role [:db/ident]}
   {:membership/club
    [:club/id
     :club/name
     :membership/_user]}]}]
#2015-10-2622:07bkamphausso from :membership/club -> (club entity id) -> :membership/_club (all membership entities that point to club) -> :user/_membership (back to user ids)#2015-10-2622:07zentropeThat last :membership/_user doesn’t appear.#2015-10-2622:13zentropeI can map this stuff through a transform that then pulls in, say, a list of all the members of a specific club via separate queries. I’m just wondering if the Pull API can do all that for me.#2015-10-2622:18zentropeShould user have a :user/membership (isComponent) and club have a :club/membership (isComponent)?#2015-10-2622:18zentropeRight now, I have just a standalone membership set of attributes for an entity, with :one for club and :one for user.#2015-10-2622:19bkamphausProbably differences of opinion on this — my quick mental mock up would say have :user/membership as isComponent of a user, points to an organization of which the user is a member.#2015-10-2622:19bkamphausI’m not sure what you mean be “standalone membership set of attributes for an entity” precisely.#2015-10-2622:20zentrope(attr :membership/role :ref :one "")
(attr :membership/user :ref :one "")
(attr :membership/club :ref :one "")
#2015-10-2622:21zentropeWell, that’s just my dsl. I mean there’s no :user/memberships attribute of any type. Nor :club/membership.#2015-10-2622:21zentropeJust like a SQL join table, is what I’m doing (and thus feels wrongish).#2015-10-2622:24bkamphausyeah, I would make membership entities that are components of users and have similar attributes. So no :membership/_user backref, but still role (points to an enum?), :club (points to a club org), and you can go from club org by back ref to all membership that apply to that club, and from those membership to their user parent entities.#2015-10-2622:24zentropeOkay. I totally understand what you’re saying. That seems quite reasonable.#2015-10-2622:25zentropeI’m much more interested in being able to walk from a user through to all the information they can see vs starting with a club.#2015-10-2622:36zentropeHow do you “go from club org by back ref to all membership” in a Pull API starting with a user?#2015-10-2622:43zentropeI think :user/memberships :many shouldn’t be a component such that you can have a :club/memberships :many also pointing to the same membership entity as the :user/memberships attribute does.#2015-10-2622:44bkamphaus@zentrope: why :club/membership also pointing to it?#2015-10-2622:45zentropeBecause otherwise I can get a list of the members of the club via a Pull API pattern.#2015-10-2622:45bkamphausI saw :user/memberships card many pointing to a membership entity which has :membership/club as a cardinality one#2015-10-2622:46bkamphaus:membership/_club being the reverse ref you would follow from club entity to get all memberships#2015-10-2622:46bkamphausand you can use :user/_memberships for all users with a membership to that club#2015-10-2622:46zentropeHm.#2015-10-2622:46bkamphausbut I haven’t thrown in the toy schema/data yet#2015-10-2622:47bkamphausmy reasoning being that it makes sense to constrain a membership to only be for one club#2015-10-2622:47bkamphausso there’s a direction of the relationship 
to which you could apply a cardinality/one constraint (have to try to enforce that via some other logic if you model the ref in the other direction)#2015-10-2622:48zentropeRight. I think it’s the Pull API pattern I might be having trouble with.#2015-10-2622:49raymcdermotthey guys … I am trying to connect to Datomic that is connected to a PostgresSQL DB#2015-10-2622:49raymcdermottthey require SSL and have some recommendations#2015-10-2622:49raymcdermottfor properties#2015-10-2622:50zentrope(def club-pattern
  [:user/id
   :user/name
   :user/email
   {:user/memberships
    [{:membership/role [:db/ident]}
     {:membership/club
      [:club/id
       :club/name
       {:membership/_club '[*]}]}]}])
#2015-10-2622:50zentropeI can’t get from :membership/_club to a list of users.#2015-10-2622:50raymcdermottssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory#2015-10-2622:52raymcdermottbut I’m still getting SSL off#2015-10-2622:52raymcdermottCaused by: java.util.concurrent.ExecutionException: org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "52.22.208.29", user "dyno", database "dd7fmhk85j9m9d&ssl=true", SSL off#2015-10-2622:52raymcdermottis there another way to enable SSL on the Postgres connection?#2015-10-2622:53zentropeI think I figured it out.#2015-10-2622:55raymcdermottit seems like it’s misinterpreting any additional parameters on the JDBC URL#2015-10-2622:58zentrope(def club-pattern
  [:user/id
   :user/name
   :user/email
   {:user/memberships
    [{:membership/role [:db/ident]}
     {:membership/club
      [:club/id
       :club/name
       {:membership/_club
        [{:user/_memberships
          [:user/name {:user/memberships [{:membership/role [:db/ident]}]}]}]}]}]}])
#2015-10-2622:58zentropeTag soup, but it works.#2015-10-2622:59raymcdermottthere is a sql-driver-params in the JDBC property file … I will give that a try#2015-10-2623:04raymcdermottnope that did not work nor sqlDriverParams#2015-10-2623:21bkamphaus@zentrope: it looks like you figured it out. I did end up doing a gist of a quick example of what I was talking about: https://gist.github.com/benkamphaus/4f991901b2fe8d00e20b#2015-10-2623:22bkamphaus@raymcdermott: you’re following the Heroku PostgreSQL instructions here? http://docs.datomic.com/storage.html#sql-database#2015-10-2623:22zentrope@bkamphaus: Thanks!#2015-10-2623:38raymcdermottyes - the transactor is connecting OK#2015-10-2623:38raymcdermottbut the Peer is failing#2015-10-2623:43bkamphaus@raymcdermott is it possible that there are issues with creds in the connection URL that need to be percent encoded?#2015-10-2623:45bkamphausAnother option to try would be to use the map form for the URI described here http://docs.datomic.com/clojure/#datomic.api/connect#2015-10-2623:45raymcdermottI have set the creds as system properties as mentuined in the docs#2015-10-2623:46raymcdermottoooh so datomic:sql://{db-name}?{jdbc-url} is two maps?#2015-10-2623:46raymcdermottI have just made them strings#2015-10-2623:47raymcdermottConnects to the specified database, returing a Connection.
URI syntax ({} indicate place holders to fill in, [] indicate optional):
#2015-10-2623:48raymcdermottah OK, read further now#2015-10-2623:48raymcdermottstill not clear how I set SSL#2015-10-2623:49raymcdermottbut maybe I can use the same terms as in the property file?#2015-10-2623:50zentrope@bkamphaus: Another version of the many-to-many thing: https://gist.github.com/vaughnd/3705861#2015-10-2700:37raymcdermott@bkamphaus: could you paste in an example map … I can’t get it working#2015-10-2700:37raymcdermottthis is what I have#2015-10-2700:37raymcdermottconn-map {:protocol :sql
:db-name "customer"
:sql-driver-params "ssl=true;sslfactory=org.postgresql.ssl.NonValidatingFactory"
:jdbc-url (env :jdbc-database-url) }
conn (d/connect conn-map)
db (d/db conn)#2015-10-2700:38raymcdermottthe JDBC URL is provided by Heroku#2015-10-2700:38raymcdermottbut I get#2015-10-2700:38raymcdermottException in thread "main" java.lang.IllegalArgumentException: :db.error/invalid-sql-connection Must supply jdbc url in uri, or DataSource or Callable<Connection> in protocolObject arg to Peer.connect, compiling:(web.clj:57:9)
#2015-10-2700:39raymcdermottso I am a little lost how I should form the data#2015-10-2710:14mitchelkuijpersQuick question will a datomic console count towards the peer process?#2015-10-2710:56robert-stuttafordi believe it does#2015-10-2712:36stuartsierra@mitchelkuijpers: yes#2015-10-2712:52mitchelkuijpersOk thnx @robert-stuttaford @stuartsierra#2015-10-2714:03robert-stuttaford:+1:#2015-10-2717:32maxDoes the speed of a cut querye (i.e. [:find ?e . :where …]) depend on the number of possible results#2015-10-2717:34max;; there are 1366 of these
user> (time (d/q '[:find ?e . :where [?e :vulnerability/title]] db))
"Elapsed time: 1.272812 msecs"
17592186931182
;; there are 676457 of these
user> (time (d/q '[:find ?e . :where [?e :artifact-version/number]] db))
"Elapsed time: 461.450483 msecs”
17592187088881
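The gap in the timings above follows from each clause's cost scaling with the number of datoms it matches, which also means clause order matters in multi-clause queries, since Datomic evaluates :where clauses in the order given. A sketch with made-up attributes (:order/status and :order/id are hypothetical, not from this thread):

```clojure
;; Hypothetical attributes for illustration only.
;; Datomic runs :where clauses top to bottom, so bind the selective clause first.

;; slower: first clause matches every :order/id datom
'[:find ?e .
  :where [?e :order/id ?id]
         [?e :order/status :status/flagged]]

;; faster: first clause narrows ?e to the few flagged orders
'[:find ?e .
  :where [?e :order/status :status/flagged]
         [?e :order/id ?id]]
```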
#2015-10-2717:35maxI understand the problems with using time in a repl, I get similar results using criterium#2015-10-2717:35maxalso both of those attributes have indexing turned on#2015-10-2717:46stuartsierra@max: As a general rule, the time cost of a query is proportional to the number of Datoms matched by each clause.#2015-10-2717:47stuartsierraThe :find ?e . syntax is just a convenience for when you only expect one result.#2015-10-2720:03domkmIs datomic.db.DbId an implementation detail or can I rely on tempid always returning an instance of DbId?#2015-10-2720:04domkmI ask because it isn't in Datomic's javadoc#2015-10-2720:07bkamphausNot sure I follow. tempid returns a tempid, which is different from an entity id. A temp id has to be resolved to the actual id assigned in the db with resolve-tempid (in clojure) - http://docs.datomic.com/clojure/index.html#datomic.api/resolve-tempid#2015-10-2720:09bkamphausOh you mean type#2015-10-2720:09alexmillerI don't think you should consider that guaranteed#2015-10-2720:09bkamphausinstance of, yeah.#2015-10-2720:09domkmYup#2015-10-2720:10domkmI want to be able to test if something is a tempid and then if that tempid is in the reserved space of -1..-1000000#2015-10-2720:11domkmWhich is (and (instance? datomic.db.DbId x) (<= -1000000 (:idx x) -1)), assuming that part of the API can be relied on.#2015-10-2720:12domkm@alexmiller: If this were to change, would it be documented in the release notes?#2015-10-2720:14alexmillerI'd defer to @bkamphaus for any official answer but my personal opinion is that you are relying on implementation details that may change without warning (and may or may not be documented when they do)#2015-10-2720:17domkmHmm, okay. Thanks @alexmiller. @bkamphaus, what do you think? Is there an official way to test if something is a tempid and to retrieve its index?#2015-10-2720:23bkamphaus@domkm I’ll look into it.
The standing answer in the meantime (and in general the default answer), consistent with @alexmiller’s gut take, is to consider undocumented specifics like types or members to be implementation details and not promised.#2015-10-2720:24bkamphausWhat’s the goal of the test?#2015-10-2720:29bkamphausNote that you can use with ( http://docs.datomic.com/clojure/#datomic.api/with ) to determine what will happen when you apply tx-data to a db without having to transact durably, which includes using resolve-tempid with :tempids in the returned map.#2015-10-2720:30domkm@bkamphaus: I am writing a function that composes mutators (transactor functions) together using d/with and d/resolve-tempid. It's based on this (https://github.com/webnf/webnf/blob/master/datomic%2Fsrc%2Fwebnf%2Fdatomic.clj#L181-L244) function. There are problems with aliasing tempids if any mutators create sub-mutations with reserved tempids but no problems with unreserved ones since those are unique (right?). I want to be able to test if a tempid is in the reserved range in order to throw an error instead of transacting invalid data.
In terms of unique tempids, could using with in a transaction introduce duplicate tempids?#2015-10-2721:26raymcdermotthi @bkamphaus … update on the connection saga#2015-10-2721:26raymcdermottsystem properties set...#2015-10-2721:26raymcdermott(System/setProperty "datomic.sqlUser" user-value)
(System/setProperty "datomic.sqlPassword" password-value)
(System/setProperty "datomic.sqlDriverParams"
"ssl=true;sslfactory=org.postgresql.ssl.NonValidatingFactory”)#2015-10-2721:26raymcdermottchecked it before connection and it’s in the System properties map#2015-10-2721:27raymcdermottthen I try to create the db#2015-10-2721:28raymcdermottconn-map {:protocol :sql
          :db-name "datomic"
          :sql-driver-params "ssl=true;sslfactory=org.postgresql.ssl.NonValidatingFactory"
          :sql-url simple-jdbc}
created! (d/create-database conn-map)
#2015-10-2721:28raymcdermottbut unfortunately the connection does not play out#2015-10-2721:28raymcdermottException in thread "main" java.util.concurrent.ExecutionException: org.postgresql.util.PSQLException: The server requested password-based authentication, but no password was provided., compiling:(web.clj:65:9)
#2015-10-2721:30raymcdermottso the good news is that the postgres server is being reached and SSL is working#2015-10-2721:30raymcdermottodd news is that the password is somehow not being sent along#2015-10-2721:31bkamphaus@raymcdermott: I don’t believe the peer looks for user and pw system properties (those are documented for txor only), it will require that username and password be set in the URI params or e.g. on a data source provided in the map.#2015-10-2721:31raymcdermottah, ok I will give that a try#2015-10-2721:32bkamphauswith data source, last time I tested and built an example was in Java, but if you stick with the map route maybe this will be helpful.#2015-10-2721:33raymcdermotttrying now...#2015-10-2721:34bkamphausJava example config (parameters elided, but described in comments).
java.sql.Driver driver = java.sql.DriverManager.getDriver(sqlUrl);
Class<?> driverClass = driver.getClass();
String name = driverClass.getName();
// then, on the pooled DataSource (parameters elided):
setURL()                  // JDBC portion of Datomic URI
setValidationQuery()      // use provider default or the one from your transactor properties file
setValidationInterval()   // should match the transactor properties file heartbeat-interval-msec
setTestOnBorrow()         // set to true
setInitialSize(2)
setDriverClassName()      // name, from the code above
setUsername()             // username
setPassword()             // password
setConnectionProperties() // anything else, i.e. what would be in transactor properties optional sql params
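For the "everything hard-coded in one URI string" route, the shape would be roughly as follows. Host, database, and credentials below are placeholders, and appending the ssl params to the JDBC portion follows the PostgreSQL driver's convention; this is a sketch, not a tested recipe from the thread:

```clojure
;; Single-string URI: everything after the first ? is the raw JDBC URL,
;; so additional driver params are appended with & as usual.
;; All names and credentials here are placeholders.
(def uri
  (str "datomic:sql://customer?"
       "jdbc:postgresql://db.example.com:5432/datomic"
       "?user=someuser&password=somepassword"
       "&ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory"))

(d/connect uri)
```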
#2015-10-2721:36bkamphausFor my local toy config, though, I just pass the URL as a string arg to connect (though no SSL), e.g.
datomic:#2015-10-2721:37raymcdermottthanks - I’ll play around with that#2015-10-2721:38raymcdermottwhat’s not obvious is the correct name of the parameters in the map#2015-10-2721:51raymcdermottafter setting username / password properties in the map … no joy#2015-10-2721:52raymcdermottafter adding the username to the driver params, message changed#2015-10-2721:52raymcdermottit now seems to be picking up the user.name from the system properties#2015-10-2721:53raymcdermottin the params I am using the ‘;’ separator convention … will try with ‘?’#2015-10-2721:57raymcdermottand now I’m back to SSL off#2015-10-2721:58raymcdermottGetting desperate, so I will try to set the user.name system property#2015-10-2721:58bkamphaus@raymcdermott: have you tried everything hard-coded in one URI as string?#2015-10-2721:59raymcdermottI’ll try that too#2015-10-2721:59bkamphausi.e. datomic:
I think this may have been working before… I think the error is coming from the data in postgres#2015-10-2722:12raymcdermottThinking about it, I think I need to update the hostname in the transactor properties file so that it no longer uses the default local host#2015-10-2722:13raymcdermottI will need to mess around with a few files / deployments and get back to you#2015-10-2722:15bkamphaus@raymcdermott: yeah, the transactor properties file hostname needs to be the address the peers can reach the transactor at.#2015-10-2722:27raymcdermottI get it now… fingers crossed - now that I know what connections are going back and forth - I should get it started up#2015-10-2722:27raymcdermottactually this will be a nice thing#2015-10-2722:27raymcdermottHeroku have a VPC in beta to which I have access#2015-10-2722:28raymcdermottthis will demonstrate it working in the VPC#2015-10-2722:28raymcdermottI will write a small blog once it’s all done (with appropriate attributions for support!!)#2015-10-2722:29bkamphaus@raymcdermott: good luck simple_smile Feel free to drop any additional questions here as needed.#2015-10-2722:30raymcdermottdon’t worry, will do - great help so far - thanks a lot#2015-10-2722:49domkm@bkamphaus: Could you clarify for me under what circumstances multiple invocations of (tempid :db.part/user) could possibly return duplicate tempids? There is no guarantee of uniqueness between transactor invocations and peer invocations, right? Inside a transactor function, can using tempid inside and outside of with potentially cause duplicates?#2015-10-2722:50domkmNote that I am asking about the unary tempid function that returns a random-ish tempid. I understand the binary tempid function.#2015-10-2723:06bkamphaus@domkm sorry, my comment above was the opposite side of the uniqueness constraint. If you specify (tempid :db.part/user) on multiple entities making assertions e.g. 
in a map with the same value for an attribute set as unique identity, then those will resolve as upserts on the same entity id.#2015-10-2723:07bkamphausRe: the collision issue re: user reserved (which you say you understand, but just for clarification/verifying), I mean the issue described on group here: https://groups.google.com/forum/#!searchin/datomic/transactor$20function$20tempid/datomic/xRWXX0coMcI/wrBY-YMzbE8J#2015-10-2723:10raymcdermott@bkamphaus: all is hooked up now… next to put some actual data and run a few queries#2015-10-2723:11raymcdermottdon’t worry - I’ll keep it simple!#2015-10-2723:11raymcdermottthanks again#2015-10-2723:11raymcdermottI can sleep tonight simple_smile#2015-10-2723:40domkm@bkamphaus: Thanks for the link. Magnar Sveen's last response answers my question about whether unreserved tempid conflicts can occur between a peer and the transactor and, as I suspected, they can. I don't think it answers the question about conflicts in a transactor function that uses with. I was thinking that with might cause the uniqueness-within-a-transaction guarantee of tempid to be violated because with is sort of like a transact (probably shares a lot of the same code). I haven't been able to clearly describe this question so let me go put together an example.#2015-10-2817:56wambatHi all. I was trying to update the value of an isComponent ref with a new set of data, is there an idiomatic way to do that?
Updating the ref with new values in a map transaction just adds them to the set.#2015-10-2818:11bhagany@wambat: I'm pretty sure you have to explicitly retract the elements in the old set, if you want to replace them.#2015-10-2818:14wambat@bhagany: thanks, but that's sad to hear.#2015-10-2818:17bkamphaus@wambat one strategy for this would be to define a cas-like transaction function that operates with logic: “if card/many attr on this entity still has the same set of values, retract that set and assert this new set"#2015-10-2818:18bkamphausbut yeah, cardinality one will handle retractions of old data w/assertions of new but card many will add new assertions unless you supply retractions.#2015-10-2818:21wambatI'll try adding the function, thanks @bkamphaus.#2015-10-2821:08domkm@bkamphaus: I think I figured out a solution that avoids the complications and any potential problems I might have had with 'with'. Thanks for your help. :)#2015-10-2821:10bkamphaus@domkm good to hear! If it’s something you end up sharing let me know, I’ll be curious to see what solution you settled on simple_smile#2015-10-2823:22zentropeI’ve got an attribute :event/attendees :ref :many isComponent and I want to replace it with a new set of values. I tried (in a tx) :event/attendees #{ … new entity ids … }, but it just adds the new ones without auto-retracting the old. Is there an explanation of how that should work?#2015-10-2823:34zentropeI thought I saw an example out there where you could just assert the value of a component attribute and if the value is of type set it would do the right thing.#2015-10-2823:45zentropeArgh! I know I saw that code fragment somewhere!#2015-10-2823:49zentropeMaybe here: http://docs.datomic.com/transactions.html#cardinality-many-transactions.
Guess I saw something I wanted to see, not what was there, alas.#2015-10-2823:51zentropeI guess the option is to use a db function of some sort, or just read in the old values, diff against the new stuff, and assert/retract to taste.#2015-10-2902:37bkamphaus@zentrope: I think my reply from a little earlier today to @wambat may be relevant here:
> one strategy for this would be to define a cas-like transaction function that operates with logic: “if card/many attr on this entity still has the same set of values, retract that set and assert this new set"#2015-10-2902:39zentropeI can do that, but I worry that there’s an overlap between those. I can use the clojure diff function to figure out which ones to retract and add, and leave the others: but, oy. It’s not so easy to compare these entities.#2015-10-2902:41zentropeDoes db.fn/cas account for that somehow, or is a “replace” generally okay?#2015-10-2902:41bkamphauswhat are the correct semantics? is it an assertion/re-assertion of all the refs? are you using nested maps w/unique identity attr where you’re potentially upserting on a component ref?#2015-10-2902:41zentropeSeems like it breaks history (the :db/ids are different).#2015-10-2902:42zentropeI have an “event” with “attendees” (similar to the user/club relationship).#2015-10-2902:43zentropeThe user clicks who’s invited and who isn’t, then I ship the “who should be in the list” to the server.#2015-10-2902:43zentropeSo hard to explain in tiny slack sentences. ;)#2015-10-2902:43zentrope:event/attendees -> [{:attendee/status :enum :attendee/user :ref}]#2015-10-2902:45zentropeSo, theoretically, there are 10 attendees already recorded. User “unclicks” one, so I ship down the new “version” of the event with the 9 remaining attendees.#2015-10-2902:46zentropeIn an SQL type system, I could just delete all attendees, then write the 9 back down. Brute force, but eh.#2015-10-2902:47zentropeIf I don’t do that here, it feels like I’m losing the continuity of history.#2015-10-2902:47bkamphausdoes that match the semantic you want to preserve? i.e., the user asserted “these are the 9 attendees”? i.e. 
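The cas-like transaction function described in that quote could be sketched roughly like this; untested, and the ident :my/reset-many and its argument shape are made up for illustration:

```clojure
(require '[datomic.api :as d])

;; Sketch: retract the current value set of a card/many attribute and
;; assert a new set, but only if the current set still matches what the
;; caller saw (compare-and-swap semantics over the whole set).
(def reset-many
  {:db/ident :my/reset-many
   :db/fn (d/function
           '{:lang "clojure"
             :params [db e attr expected new]
             :code (let [current (set (map :v (datomic.api/datoms db :eavt e attr)))]
                     (if (= current (set expected))
                       (concat (map (fn [v] [:db/retract e attr v]) current)
                               (map (fn [v] [:db/add e attr v]) new))
                       (throw (ex-info "card/many value set changed"
                                       {:expected expected :actual current}))))})})

;; usage inside a transaction, e.g.:
;; (d/transact conn [[:my/reset-many event-id :event/attendees old-set new-set]])
```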
you consider it a re-assertion that those are attendees, not a correction or change of just the diff as the correct semantic for the click?#2015-10-2902:48zentropeHm.#2015-10-2902:48bkamphausthis is all fairly similar to the blog post here, actually, as I think about it: http://blog.datomic.com/2012/08/atomic-chocolate.html#2015-10-2902:48zentropeWell, it’s like you load up the entire “event” document, tweak this or that, then save the whole thing back.#2015-10-2902:51zentropeYes. Describes the problem. I don’t see the solution, though.#2015-10-2902:52zentropeSend adds and retracts singly for recording?#2015-10-2902:53bkamphausthe solution statement is admittedly fairly buried:
> Given the set of checkbox states, you should do the diff in the web tier as soon as you pull data out of the form.#2015-10-2902:53zentrope> At this point, you should submit adds and retracts only for the new facts I created -- not a set with an add or retract for every UI element.#2015-10-2902:54bkamphausright, the diff in the web tier being the source of adds and retracts that are only for new facts.#2015-10-2902:55zentropeYeah, I get it.#2015-10-2902:55zentropeSure makes the transaction simpler.#2015-10-2902:59zentropeI guess I’ll need to ship around the entity IDs, or add a uuid to the membership man-in-the-middle entity.#2015-10-2902:59zentropeOtherwise, I’ve got nothing to grab onto for a retract without doing a look up.#2015-10-2903:01bkamphausso you’re reifying a component attendance entity, a card/many on event, which has attendee status and attendee/user which is your ref. Hm.#2015-10-2903:01zentropeYeah, the attendee middleman is just a way to annotate the link between and event and a user with “yes/no/maybe” information.#2015-10-2903:02zentropeSame as user to club with a membership link for “veep” “pres” and so on.#2015-10-2903:03zentropeI could just add a uuid attribute for that link, then use an entity retract using, [:db.fn/retractEntity [:attendee/id “uuid”]].#2015-10-2903:04bkamphaushow do you limit choices for invitees?#2015-10-2903:04bkamphausand do you intend a constraint in the other direction? i.e. a user can only have one ‘attendee' relationship to an event?#2015-10-2903:05zentropeThat’s done in the UI. 
A list of all the people, then I consult the set of “attendees” to indicate if they’re invited.#2015-10-2903:06zentropeI have :event/attendees which is an isComponent=true and I populate it with an enum and a user reference.#2015-10-2903:06zentropeAn event can have any number of attendees, but no user is duplicated in that list.#2015-10-2903:06zentropeAn attendee can attend many events.#2015-10-2903:06zentropePretty typical, I imagine.#2015-10-2903:08zentropeIf it weren’t for this extra info (attendence status), things would be simple. ;)#2015-10-2903:08bkamphausI’m just thinking of the constraints implied by the cardinality and the component entities there to allow attributes on the ref.#2015-10-2903:11bkamphausi.e. with a card/many attendees that was itself a ref, you would get the constraint of no more than one attendance per user per event trivially from set behavior, but with those reified, you must have some other application or transaction function logic to prevent a multiple attendance from a user for the same event being asserted, since the attendance or what have you will be its own entity.#2015-10-2903:12zentropeI do have the second case. I’m not sure how to do the first.#2015-10-2903:14zentrope:event/attendees, card:many, isComponent:true, type:ref right?#2015-10-2903:14zentrope:attendee/status card:one :type:enum#2015-10-2903:14bkamphausright, you can’t do the trivial case in either direction I suppose if you want attributes on the events <-> users many-to-many#2015-10-2903:15zentropeRight!#2015-10-2903:15zentropeThat’s the struggle.#2015-10-2903:15bkamphausobligatory comment on the struggle being real#2015-10-2903:15zentropeWould be nice if there was a “set” type.#2015-10-2903:18bkamphausI will say re: one of your previous comments that it’s fairly typical to see globally unique IDs, either domain or e.g. 
sqUUID, provided for all entities in a lot of Datomic data modeling.#2015-10-2903:18zentropeIf I add a UUID to the attendence entity, I can at least pull them into the UI as tourist information, then use that in a transaction to craft appropriate retract elements.#2015-10-2903:20zentropeYeah? I’ve been doing that, too — but only for domain entities, not these implementation specific linking strategies.#2015-10-2903:20zentropeI’ll rethink that.#2015-10-2903:21bkamphausI’m thinking on the linking strategy entities — i.e., if you assert, retract, then re-assert, is it correct to generate a new attendance linking entity, vs. e.g. re-asserting the previous attendance.#2015-10-2903:22bkamphausassuming the user’s view of history is agnostic (i.e. they don’t care whether or not there was previous invitation they retracted), a new assertion is probably correct (?)#2015-10-2903:23zentropeI think so.#2015-10-2903:23zentropeIt just shows, say, that Susan was invited, then un-invited, then invited again, and here’s the dates where that happened.#2015-10-2903:24zentropeTruthfully, all that really matters is the end result. 
I could just bulk retract and re-create on every change.#2015-10-2903:24zentropeNo user would care, but I personally feel like I’m wasting Datomic, and that if I can solve this, it might come in handy when it actually really matters.#2015-10-2903:25zentropeDoesn't Stu’s “chocolate” analogy indicate that it’s good to see what chocolate was a fave, then it wasn’t, then it was again?#2015-10-2903:27zentropeWell, he makes the point that retracting/adding (when the user made no actual change) is a non-skillful thing to do, but the implication is that the user does make those changes.#2015-10-2903:28zentropeEverything he says makes sense, but is only problematic with those linking entities.#2015-10-2903:36bkamphausI guess I’m missing some detail in the difficulty w/retraction#2015-10-2903:37zentropeIf I load up all the attendee entities into the UI, but don’t include :db/id, then I have no ID to use in a retract clause.#2015-10-2903:38zentropeSo, either I have to retrieve the :db/id or make something else that’s an identity attribute on that entity.#2015-10-2903:39bkamphausretract the entity where:
[?event :event/attendees ?a]
[?a :attendee/user ?u]
#2015-10-2903:39zentropeYou can use a where in a retraction?#2015-10-2903:39bkamphauspossibly ?event :id ?id and ?u :userid ?id or whatnot? I.e. if you ensure the uniqueness there will only be one entity with those refs?#2015-10-2903:40zentropeBut then I have to query to find the things to retract. Which I could do.#2015-10-2903:41zentropeBut I’d prefer just [:db.fn/retractEntity [:attendee/id #uuid “kajlkasd”]]. Done!#2015-10-2903:42bkamphausright, or keep the id w/o exposing in UI, or something else, but yes uuid/lookup rer will be more terse#2015-10-2903:46zentropeWhat if you had an “attendee” entity that had an attribute for the user and an attribute for the event (and attendence status, etc) and an ID.#2015-10-2903:47zentropeBut the event doesn’t have an :event/attendees.#2015-10-2903:47zentropeI guess it amounts to the same thing.#2015-10-2903:49bkamphausright, you'll have reverse ref :attendees/_event, and it’s indexed in :vaet, but I think the cases discussed remain the same?#2015-10-2903:50zentropeYep.#2015-10-2903:50zentropeThe basic lesson: in the UI, keep a list of things to retract, and things to assert, things that don’t change.#2015-10-2903:51zentropeAnd if you’re doing that, you might as well ship up something that can act as an entity identifier.#2015-10-2903:51zentropeUse those for things that need to be retracted.#2015-10-2903:52zentropeThings that need to be asserted won’t have that identifier.#2015-10-2903:52zentropeOther lesson: Everything gets a UUID of some sort.#2015-10-2904:29zentropeYep! Works.#2015-10-2904:30zentropeNo transaction functions and query before transacting stuff in the app.#2015-10-2904:31zentrope(defn do-update-event!
  [conn {:keys [id name description datetime duration location link asserts retracts]}]
  (let [uninvites (for [attendee-id retracts]
                    [:db.fn/retractEntity [:attendee/id attendee-id]])
        invites   (for [uid asserts]
                    {:db/id           (d/tempid :lattice)
                     :attendee/id     (d/squuid)
                     :attendee/status :attendee.status/unknown
                     :attendee/user   [:user/id uid]})
        event     {:db/id             [:event/id id]
                   :event/name        name
                   :event/description description
                   :event/date        datetime
                   :event/duration    (* duration 60)
                   :event/location    location
                   :event/link       link
                   :event/attendees   (set invites)}
        tx        (apply conj [event] uninvites)]
    @(d/transact conn tx)))
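The thread above converged on doing the diff in the web tier and shipping separate `asserts`/`retracts` collections into the transaction. A minimal sketch of that web-tier diff (hypothetical names; here keyed by user id for simplicity, whereas zentrope's version ships attendee uuids in `retracts`):

```clojure
(require '[clojure.set :as set])

;; `before` = set of user ids already invited (pulled from the db);
;; `after`  = set of user ids checked in the submitted form.
;; Only the differences become transaction data, per the blog post's advice.
(defn diff-invites [before after]
  {:asserts  (set/difference after before)   ; newly checked -> new attendee entities
   :retracts (set/difference before after)}) ; unchecked -> retract attendee entities

(diff-invites #{"u1" "u2" "u3"} #{"u2" "u3" "u4"})
;; => {:asserts #{"u4"}, :retracts #{"u1"}}
```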
#2015-10-2916:20tangrammerHi folks!, did anyone try to work with aws lambda and datomic pro starter edition ? I'm struggling with that https://groups.google.com/forum/#!topic/datomic/OYJ4ghelmF0#2015-10-2917:16bkamphaus@tangrammer: we don’t really have an expectation that the peer lib should work on (or be well suited to) AWS Lambda.#2015-10-2917:23tangrammerThanks @bkamphaus ! really appreciate your help!#2015-10-2918:17tim.linquistHey everybody! Can anybody tell me how to group by day in Datalog?#2015-10-2918:17tim.linquist[:find (count ?p) ?date
 :with ?date
 :where
 [?p :purchase/ext-purchase-id ?ext-id]
 [?p :purchase/occurred-at ?date]]
#2015-10-2918:17tim.linquistExample value: #inst "2013-01-09T11:46:23.000-00:00"#2015-10-2918:21tim.linquistWrite a func in clj and call it to return y/m/d for the timestamp? I was trying to do this in the console ...#2015-10-2918:31alexmillerif you're on Java 8, there are probably some good options in the new java 8 instant stuff. otherwise, there are (deprecated) getters for these things on java.util.Date (which is likely the instance you have). some other options are to construct a Calendar instance or to format the date to a string.#2015-10-2918:41tim.linquistthx alexmiller I'll post back when I get it#2015-10-3015:29tim.linquistFyi I never ended up writing the query in Datomic. I yanked the data into memory and grouped that way#2015-10-3018:14bhagany@tangrammer: I've thought about doing something like this - you could use the REST api. Having persistent peers that are always running kind of mitigates many of the advantages of using Lambda, but at least it makes it possible.#2015-10-3114:12magnarsIf I upsert an existing entity, where all facts in the map are already asserted in the db - how does Datomic behave? Will it create an empty transaction? Or re-assert all the facts?#2015-10-3114:20alexmillerempty txn#2015-10-3114:20magnarsthanks!#2015-10-3114:20alexmillerdatomic records novelty#2015-10-3114:22magnarsthat is very handy!#2015-10-3123:09zentropeIn the pull pattern, you can (default :some/type []), but what if you want to add specifiers to :some/type if it exists?#2015-10-3123:10zentrope(default {:some/type [:db/ident]} []) doesn’t work, for instance.#2015-10-3123:24zentropeStill surprises me when I {:db/id … :some/props #{}} and no enums are removed from :some/props.#2015-11-0213:32robert-stuttaford@magnars we use d/with in our stats processor to check for empty txes prior to submitting them, to prevent tx noise. empty txes have 1 datom: :db/txInstant#2015-11-0214:17pesterhazyI'm wondering. 
If I configure my datomic peer library using datomic:, how does it know which transactor to connect to?#2015-11-0214:18jgdavey@pesterhazy: the transactor URI is written to storage.#2015-11-0214:19pesterhazy@jgdavey: interesting. what happens if I have two transactor connect to the same storage uri?#2015-11-0214:20jgdaveyOne will become a fallback (HA), and won’t write it’s location unless the other one stops phoning home#2015-11-0214:20pesterhazythat's all automatic, then?#2015-11-0214:20pesterhazynifty simple_smile#2015-11-0214:21jgdaveyWell, HA technically only kicks in with the paid licenses. Anyone else care to expound?#2015-11-0214:23pesterhazyI just created a new Auto Scaling Group with a new license key. Based on what you said, if I disable the old ASG, the new one should kick in automagically#2015-11-0214:23pesterhazywith no client reconfiguration required#2015-11-0214:23jgdaveyBut yes, so long as peers and transactors share storage, the transactor location “communicated” to the peer through storage.#2015-11-0214:24pesterhazygreat#2015-11-0214:24pesterhazyI guess that requires that the transactor has a sort-of public IP address#2015-11-0214:25jgdaveyWell, it just needs to be accessible to the peer.#2015-11-0214:25pesterhazyright#2015-11-0214:26jgdaveyTransactors actually write two IPs to storage: host is normally the internal network address, and alt-host is usually the public IP#2015-11-0214:26pesterhazya reassuring word about this in the docs would be great (though maybe I didn't look hard enough)#2015-11-0214:26jgdaveyPeers try host first, then use alt if the first isn’t accessible.#2015-11-0214:26jgdaveyI don’t want to misspeak here, though. Other thoughts, @bkamphaus ?#2015-11-0214:27pesterhazyIn this case I'm actually fine with things working out of the box (as they seem to be)#2015-11-0214:27bkamphaus@jgdavey: a slight correction, alt-host is not usually the public IP, but only provided if a different public IP is needed. 
Of course with docker (or containerization in general) and more vms in the clouds setup, this does show up more.#2015-11-0214:28bkamphausHigh availability is documented in fairly high detail here: http://docs.datomic.com/ha.html#2015-11-0214:29pesterhazy@bkamphaus: thanks!#2015-11-0214:32bkamphausI do think there is an organizational deficiency in the docs at present around the heartbeat mechanism and how peers determine which transactor to correct to, including the alt-host mechanism (we’re transitioning this from an implementation detail to a public facing transactor property). We’re considering how we want to address it.#2015-11-0214:33pesterhazyyeah, for me it wasn't clear how peers discover the transactor in the case of dynamodb#2015-11-0214:34pesterhazyI considered the idea that the address is written to storage, but rejected it as unlikely simple_smile#2015-11-0215:04pesterhazy@robert-stuttaford: have you had time to look into turning your datomic-backup script into a gist yet?#2015-11-0215:06pesterhazythe use case is to get a partial backup of a prod db for development, which doesn't include credit card information or db sessions#2015-11-0215:06pesterhazysorry s/db sessions/session data/#2015-11-0215:09pesterhazyone way I can think of is to get the data on a test system, excise everything you don't want, and then do a backup. Is that what people do?#2015-11-0215:42bkamphaus@pesterhazy: for a few different reasons, to build a dev db I would avoid anything that implicitly “forks” the db (excise on a backup) and do something like replay the log, filtering out datoms that should not go in the other copy.#2015-11-0215:45bkamphausat the connection and storage level, dbs are unique and there’s no accommodation in Datomic for the concept of “two different versions of the same database” with forked, missing data, etc. The idea of using filtered dbs, or dbs as-of etc. e.g. 
in query using the API are ways of dealing with db values.#2015-11-0215:54pesterhazyyeah I'm also inclined to think that excision is not the right tool for the job#2015-11-0215:56pesterhazyI've looked into filtering the tx log, but haven't found an obvious way to determine that kind of entity a :db/add refers to#2015-11-0215:56pesterhazyand that's what I want -- filter out certain kinds of entities (payment records, session data), not filter out a specific attribute#2015-11-0215:57pesterhazyI think simple_smile#2015-11-0215:58pesterhazyplus an attribute (like :user) might be a possible attribute of both payments (which I want to discard) and addresses (which I want to keep)#2015-11-0216:12bkamphausyou can always pull the entity in question to see what’s associated with it. does everything in the db (or at least that has refs to/from it) have some kind of UUID - any unique identifier other than the entity id?#2015-11-0216:13bkamphausyou can always do stuff like pull the entity as of the time immediately before/after a tx, also to inspect it (using the as-of filter on a db), doing it a lot can get expensive perf wise, but it depends on the overall size of the db you’re filtering whether or not that really matters. Also, since it’s going to dev, and doesn’t impact a liveness window for prod.#2015-11-0216:18pesterhazythat's useful#2015-11-0216:18pesterhazymany things have a unique identifier, though maybe not all#2015-11-0217:16robert-stuttafordhavent had a chance, sorry, @pesterhazy !#2015-11-0217:17pesterhazyno worries#2015-11-0217:17pesterhazyI'm trying my hand at a simple edn dumper for datomic#2015-11-0217:17pesterhazythat might get the job done as well#2015-11-0217:19robert-stuttafordthat’s what i have, except it writes transit instead of edn#2015-11-0305:34gurdasAnyone here have experience with throwing analytical reporting loads at Datomic? 
(Basically what you'll tend to see in OLAP scenarios, with aggregations against large datasets for real-time reporting/drilldown etc)#2015-11-0305:35gurdasI'm trying to find if Datomic would be a good fit for an api that would have to do aggregations of "facts" by hierarchical dimensions (stretching the terminology a bit here, I know it doesn't fit with Datomic's view of the world)#2015-11-0308:40robert-stuttafordwe use Onyx to watch the Datomic transaction report queue to produce stats and write them back to Datomic, @gurdas#2015-11-0309:20tangrammerHi guys, is there anyway to build a project with datomic-pro with travis?#2015-11-0309:21tangrammerI tried to set and use the user credentials with travis https://docs.travis-ci.com/user/environment-variables/#2015-11-0309:22tangrammerand retrieving in lein with https://github.com/technomancy/leiningen/blob/master/doc/DEPLOY.md#credentials-in-the-environment#2015-11-0309:22tangrammerbut still no way to get the build#2015-11-0309:49tangrammerthanks @taylor.sando for your reply
I think these settings don't work for Travis, even though the env vars work in my terminal#2015-11-0309:51tangrammertangrammers-MacBook-Pro:tamara tangrammer$ echo $datomic_user_name
but the error stays the same when I start the REPL
Caused by: org.sonatype.aether.transfer.ArtifactTransferException: Could not transfer artifact com.datomic:datomic-pro:pom:0.9.5327 from/to (): Not authorized , ReasonPhrase:Unauthorized.
at org.sonatype.aether.connector.wagon.WagonRepositoryConnector$4.wrap(WagonRepositoryConnector.java:951)
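For context, the Leiningen setup being debugged looks roughly like this (a sketch based on the DEPLOY.md doc linked above; the Datomic repo URL is the standard `https://my.datomic.com/repo`). Note taylor.sando's point below: `:env/foo` keywords resolve to the upper-cased environment variable.

```clojure
;; project.clj fragment: pull my.datomic.com credentials from the environment.
:repositories {"my.datomic.com"
               {:url      "https://my.datomic.com/repo"
                :username :env/datomic_user_name  ; reads DATOMIC_USER_NAME
                :password :env/datomic_pass}}     ; reads DATOMIC_PASS
```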
#2015-11-0309:52taylor.sandoI don't know if it matters, but my actual environmental variables are all upper case DATOMIC_USER_NAME and DATOMIC_PASS#2015-11-0309:55tangrammer@taylor.sando: good point! it worked now simple_smile#2015-11-0309:55tangrammeryou saved my morning ! thanks a lot!#2015-11-0309:56taylor.sandoI think I remember having similar problems with lein environmental variables#2015-11-0315:00pesterhazy(let [uri "datomic:]
(println "Deleted" uri (d/delete-database uri))
(d/create-database uri)
(println "Created" uri)
(let [conn (d/connect uri)]
(println "Connected to" uri)
(d/release conn))) #2015-11-0315:01pesterhazyThe first time I run this, it works. The second time, I get Deleted datomic: true
Created datomic:
ExceptionInfo database does not exist clojure.core/ex-info (core.clj:4593)
#2015-11-0315:02pesterhazyI just want to start w/ an empty db 😐 Am I missing something?#2015-11-0315:04pesterhazyThere's nothing in the transactor log, either.#2015-11-0316:04marshall@pesterhazy: What version of Datomic are you using?#2015-11-0316:04pesterhazycom.datomic/datomic-pro "0.9.5130"#2015-11-0316:05marshall@pesterhazy: From the 0.9.5327 release notes: * Bugfix: Fixed bug that prevented connecting from a peer that deletes
and recreates a database name.#2015-11-0316:05pesterhazyhaha!#2015-11-0316:05marshall😉#2015-11-0316:05pesterhazythanks!#2015-11-0316:05marshallno problem#2015-11-0318:26robert-stuttafordsorry @pesterhazy, i should have warned you about that simple_smile#2015-11-0405:11grounded_sageHi I am interested in using http://www.phoenixframework.org/ for my backend, Om for my front end and I would like to use Datomic for my database this is a library I may be able to use https://github.com/edubkendo/datomex
I'm curious how difficult is it to run Datomic outside of JVM. I am relatively new to programming so a layman's response is ideal. Thanks in advance!#2015-11-0407:45robert-stuttafordyou’d have to use the REST API#2015-11-0407:45robert-stuttafordthe peer lib is only available for the JVM#2015-11-0412:40tcrayfordyo, I have datomic throwing an ArrayIndexOutOfBoundsException on a query. Is that a known bug in 0.9.5173? Or is this new?#2015-11-0412:41tcrayfordthe query looks like this:#2015-11-0412:41tcrayford(d/q '[:find ?k ?e
       :in $ ?run-name
       :with ?e
       :where
       [?s :yeller.sim/name ?run-name]
       [?e :yeller.operation/sim ?s]
       [?e :yeller/operation :yeller.db.operation/error]
       [?e :yeller/key ?k]]
     query run-name)
#2015-11-0412:54tcrayfordand here's the relevant bit of the stacktrace:
java.lang.Exception: processing rule: (q__60408 ?k ?e ?e), message: processing clause: [?e :yeller/key ?k], message: java.lang.ArrayIndexOutOfBoundsException: 2
at datomic.datalog$eval_rule$fn__6156.invoke(datalog.clj:1441)
at datomic.datalog$eval_rule.invoke(datalog.clj:1421)
at datomic.datalog$eval_query.invoke(datalog.clj:1464)
at datomic.datalog$qsqr.invoke(datalog.clj:1553)
at datomic.datalog$qsqr.invoke(datalog.clj:1510)
at datomic.query$q.invoke(query.clj:674)
at datomic.api$q.doInvoke(api.clj:35)
at clojure.lang.RestFn.invoke(RestFn.java:439)
Caused by: java.lang.Exception: processing clause: [?e :yeller/key ?k], message: java.lang.ArrayIndexOutOfBoundsException: 2
at datomic.datalog$eval_clause$fn__6130.invoke(datalog.clj:1387)
at datomic.datalog$eval_clause.invoke(datalog.clj:1350)
at datomic.datalog$eval_rule$fn__6156.invoke(datalog.clj:1436)
... 50 more
Caused by: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 2
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at clojure.core$deref_future.invoke(core.clj:2186)
at clojure.core$deref.invoke(core.clj:2207)
at clojure.core$mapv$fn__6727.invoke(core.clj:6616)
at clojure.lang.PersistentVector.reduce(PersistentVector.java:333)
at clojure.core$reduce.invoke(core.clj:6518)
at clojure.core$mapv.invoke(core.clj:6616)
at datomic.datalog$fn__5673.invoke(datalog.clj:588)
at datomic.datalog$fn__5531$G__5503__5546.invoke(datalog.clj:51)
at datomic.datalog$join_project_coll.invoke(datalog.clj:116)
at datomic.datalog$fn__5602.invoke(datalog.clj:219)
at datomic.datalog$fn__5510$G__5505__5525.invoke(datalog.clj:51)
at datomic.datalog$eval_clause$fn__6130.invoke(datalog.clj:1356)
... 52 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
at clojure.lang.RT.aset(RT.java:2326)
at datomic.datalog$fn__5673$project__5744.invoke(datalog.clj:480)
at datomic.datalog$fn__5673$join__5762.invoke(datalog.clj:578)
at datomic.datalog$fn__5673$fn__5767$fn__5768.invoke(datalog.clj:588)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invoke(core.clj:630)
at clojure.core$with_bindings_STAR_.doInvoke(core.clj:1868)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at clojure.lang.AFn.applyToHelper(AFn.java:156)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.core$apply.invoke(core.clj:634)
at clojure.core$bound_fn_STAR_$fn__4439.doInvoke(core.clj:1890)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more
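The rewritten rule in the first trace line, `(q__60408 ?k ?e ?e)`, shows `?e` bound twice: once from `:find` and once from `:with`, which conflict because `:with` vars are removed from the returned relation. A sketch of the query with the redundant `:with ?e` dropped (assuming `db` is bound to a database value):

```clojure
;; ?e already appears in :find, so it must not also appear in :with.
(d/q '[:find ?k ?e
       :in $ ?run-name
       :where
       [?s :yeller.sim/name ?run-name]
       [?e :yeller.operation/sim ?s]
       [?e :yeller/operation :yeller.db.operation/error]
       [?e :yeller/key ?k]]
     db run-name)
```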
#2015-11-0413:52grounded_sageOk so I get that I would have to use the REST API with Datomic when I use another language. My next question is... how easy is it to add and remove Datomic to a system? What if I decide to use some databases then want to add Datomic? Or I start with Datomic then I feel I am better off without it? I note that there is future plans to make Datomic available to non-JVM languages. So I am probably best off finding a workaround to use Datomic??? Though if I can easily add or pull it out I might start without it and build up to the point when I should add it.#2015-11-0414:00pesterhazy@grounded_sage: easiest is probably to just play with it and form your own conclusions#2015-11-0414:15grounded_sageI'm still relatively new to programming. Have yet to fully grasp database stuff let alone actually build something from scratch. I have something I really believe can change the world that I would like to build relatively quickly but still be maintainable (not wordpress or any other thing that tends to get in the way like others have said they could do it in) so I am essentially trying to get a good roadmap before I start pounding the pavement and getting stuck into it#2015-11-0414:29pesterhazyI'd (1) try datomic before I'd decide one it and (2) use a jvm language#2015-11-0415:34marshall@tcrayford: In your query you’re using :with ?e and you’re asking to include ?e in the returned relations in the :find specification. The with clause specifically removes the provided vars from the relations.#2015-11-0417:13bhaganyJust chiming in - I'm currently working on a project that uses datomic from python, and I concur with the advice to use a JVM language if you can. Using REST comes with a few quirks (eg. the '[:find ?e .] 
syntax not working correctly), and more significantly, negates the benefit of not having to worry about round-tripping to an external data source.#2015-11-0417:14bhaganyI'll be following my own advice as soon as I can - Datomic just happens to be my way to sneak Clojure into the business simple_smile#2015-11-0422:52grounded_sage@bhagany: thanks for your input. I'm really torn on this. I want to build an app similar to Meetup. The performance and fault tolerance of BEAM is definitely exciting but I'm mostly interested in the community behind it. I find on the server side there isn't really a well beaten path of app development other than a good tutorial with luminus (which I haven't done yet btw) and I am skeptical as to how many people use it. It also doesn't say anything about Datomic and Om which to me is the biggest things Clojure ecosystem has to offer. Since I have little plans to actually do heavy calculations etc I feel that Phoenix would be a better fit for me. I'd just love to be able to record immutable facts in time in my database :( #2015-11-0423:54tcrayford@marshall: aha. Once again I wish datomic and the Clojure community at large cared about error messages#2015-11-0500:33bhagany@grounded_sage: best of luck on your decision. I agree, it doesn't sound easy.#2015-11-0509:39pesterhazyIs it absolutely necessary/required/recommended to keep peer libraries and Transactor versions in sync?#2015-11-0509:39pesterhazyIn particular, can I backup my db using a more recent version of Datomic Pro?#2015-11-0511:30robert-stuttafordi think it’s ok as long as both sides are on the same side of a breaking change#2015-11-0511:31robert-stuttafordhttp://docs.datomic.com/release-notices.html#2015-11-0513:12pesterhazy@robert-stuttaford: good point#2015-11-0513:12stuartsierra@pesterhazy: In general, peer and transactor do not need to be at exactly the same version. 
The release notices make it clear for which releases they do have to be in sync.#2015-11-0513:12pesterhazyso basically as long as both versions are more recent than 0.8.3705, it should be okay#2015-11-0513:13stuartsierraObviously, it is recommended to keep both versions up to date, for bugfixes.#2015-11-0513:13pesterhazyright. it looks like the last few versions were mostly bugfixes anyway#2015-11-0513:13pesterhazyvery helpful, thanks#2015-11-0600:26sjolNew to datomic and trying to get a better grasp of how to use it, are there any recommended patterns to add ACL features to datoms? Should ownership be added as part of the metadata of the transaction?#2015-11-0600:28sjolAnd are there caveats to syncing datomic and datascript together for apps? I am still not quite sure i understand how writes from the client would get back to datomic if i am writting to datascript ( which doesn't have idents)#2015-11-0804:04gurdasAnyone have any recommended techniques for troubleshooting non-performant datalog queries?#2015-11-0804:05gurdasI've used a lot of the common practices of starting with the most restrictive where clauses, but am seeing some issues when "joins" are in effect#2015-11-0804:07bkamphaus@gurdas: If you share a copy of the query (or an analogous query against a toy schema or something like https://github.com/Datomic/mbrainz-sample ) I can take a look tomorrow or Monday (not staying on much longer tonight).#2015-11-0804:08bkamphaus(of course, someone else might be able to help in the mean time)#2015-11-0804:10bkamphausare you using any comparison predicates on unindexed attributes (or on a version of Datomic prior to avet optimizations w/comparison predicates in query), or log or history database values in the query?#2015-11-0804:13gurdasThanks @bkamphaus , It's basically an OLAP style reporting db, that has a few hierarchical "dimensions" that slice/aggregate a lot of individual facts.
For instance in this query the dimensions are known as "trees", and the nodes have on the order of a few thousand "lines" of financial data that needs to be aggregated up to them; a query for retrieving a count of all lines in a tree would look like this (this executes in ~60ms against 100k+ lines, and around 600 nodes)
[:find (count ?l)
 :where
 [?t :tree/id "someid"]
 [?n :node/tree ?t]
 [?l :line/nodes ?n]]
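As an aside on query hygiene: lifting the literal tree id into an `:in` parameter lets Datomic reuse its cached parse of the query across calls with different ids. It won't by itself explain a 60ms-to-300ms jump, but it is a cheap first step when profiling. A sketch (assuming `db` is a database value):

```clojure
(d/q '[:find (count ?l)
       :in $ ?tree-id
       :where
       [?t :tree/id ?tree-id]
       [?n :node/tree ?t]
       [?l :line/nodes ?n]]
     db "someid")
```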
#2015-11-0804:14gurdasLines are grouped into datasets too, so when I try to group the resulting lines by dataset (to get line count by dataset):
[:find ?ds (count ?l)
 :where
 [?t :tree/id "someid"]
 [?n :node/tree ?t]
 [?l :line/nodes ?n]
 [?l :line/dataset ?ds]]
Query time shoots up to ~300 ms#2015-11-0804:16gurdasMay help if I share the schema?#2015-11-0821:09nhaHello, I am following the datomic tutorial, trying to start the console and I get :
➜ datomic-pro-0.9.5327 bin/console -p 8080 dev datomic:
Exception in thread "main" java.lang.IllegalAccessError: tried to access method clojure.lang.RT.classForNameNonLoading(Ljava/lang/String;)Ljava/lang/Class; from class datomic.console.error$loading__5340__auto____1
What could it be?#2015-11-0821:10nhaI started the transactor in another console:
➜ datomic-pro-0.9.5327 bin/transactor config/samples/dev-transactor-template.properties
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:, storing data in: data ...
System started datomic:, storing data in: data
`#2015-11-0821:20alwaysbcodingit's a bug with version 0.9.5327 of Datomic#2015-11-0821:20alwaysbcodingIf you're just trying out the datomic tutorial I would recommend just downloading version 0.9.5302 and using that#2015-11-0821:21alwaysbcodingor else I think you can download a working version of the console in isolation and run it against version 0.9.5327#2015-11-0821:22nhaOh I see#2015-11-0821:23nhaIs it something usual ? I mean, how stable is datomic ?#2015-11-0821:26alwaysbcodingfor the most part it's pretty stable. I don't really use the console so can't say for sure whether or not there's usually bugs with it#2015-11-0821:28nhaOk thanks, I am just getting my feet wet with datomic. I'm sold on the general idea of course simple_smile#2015-11-0821:30alwaysbcodinghaha yup, welcome to Clojure. Works great in conference talks, a little trickier in the real world.#2015-11-0821:30nhaaha yes I spent quite a few hours making a nice setup, and looks like the foundations of my app will take a little time too simple_smile#2015-11-0821:31nhaBut this is fun, and the community is really nice and going in a good direction so.. pretty cool simple_smile#2015-11-0821:32nhaok it works with the previous version, thank you#2015-11-0914:41bkamphaus@gurdas if you can share the schema it would help.#2015-11-0914:49bkamphausmainly, I want to verify cardinality of ref attrs (am I ok to infer it from plural vs. singular naming convention? i.e. node/tree line/dataset = card one, line/nodes card many?#2015-11-0918:07arohnerFor those of you running on AWS + dynamo, how frequently do you get “transactor not available”, as an intermittent error?#2015-11-0918:22arohneroh, fun. It looks like if you get the dynamo ‘throughput exceeded’ error, the transactor kills itself and starts over#2015-11-0918:54gurdas@bkamphaus: Here's the schema i'm working with: https://gist.github.com/gurdasnijor/03e9ea105ed77775367c#2015-11-0918:59gurdasAppreciate the help! 
Let me know if a dump of the datomic db i'm working with would help as well and I can get that out to you#2015-11-0919:00pesterhazy@arohner: the transactor restarting is actually not a bad reaction to seeing errors#2015-11-0919:00pesterhazythough it probably won't help much if the issue is due to dynamo's throughput limit simple_smile#2015-11-0919:01arohnerindeed#2015-11-0919:01arohnerit also took some poking around to find out that’s what happened#2015-11-0919:05pesterhazyhow did you find out? I find it hard to understand what the AMI does#2015-11-0919:06pesterhazythere are logs on S3 but they only seem to be updated once a day#2015-11-0919:11arohnermy S3 logs were up to date#2015-11-0919:11arohnernot sure why mine are and yours aren't#2015-11-0919:12arohnerlooks like it pushes logs when a transactor restarts#2015-11-0919:12arohnerI have the full logs from each transactor that died#2015-11-0919:25pesterhazyit's possible I didn't look close enough#2015-11-0919:26pesterhazyI've also run into the issue that the AMI restarts continuously (every minute or so)#2015-11-0919:26pesterhazyprobably some misconfiguration of the auto-scaling group#2015-11-0919:29robert-stuttafordarohner: we went through quite some fun with this. you have to get your write prov, and memory-index-threshold, memory-index-max values tuned#2015-11-0919:29robert-stuttafordif your m-i-* values are too high, you can slam storage with a BIG amount of writes in an otherwise sleepy system#2015-11-0919:31robert-stuttafordyou want small, frequent indexing jobs, so small m-i-threshold. ours is 32mb. 
@bkamphaus is a wizard at reading CloudWatch, so if you’ve Pro, you should totally spend an hour with him, and have him read your account’s entrails#2015-11-0919:31robert-stuttafordwe had transactor-not-available issues all over until we got this right, and we did have two instances where things had to restart#2015-11-0919:32robert-stuttaford@arohner ^#2015-11-0919:32marshall@arohner: What size EC2 instance are you running the txor on?#2015-11-0919:34arohner@robert-stuttaford: thanks. Not sure what my m-i-threshold is, I’ll check it out#2015-11-0919:34arohner@marshall c3.large, but that’s probably overkill#2015-11-0919:34marshall@arohner: what do you have Dynamo write throughput set to?#2015-11-0919:35arohner20#2015-11-0919:35marshalli suspect that is the issue#2015-11-0919:35marshallthe transactor will need sufficient write throughput to handle both incoming transactions as well as background indexing and heartbeats#2015-11-0919:37marshall@arohner http://docs.datomic.com/capacity.html#dynamodb#2015-11-0919:37marshallThat page indicates starting values for common sstems#2015-11-0919:37marshallsystems#2015-11-0919:37marshallthe lowest recommended DDB write is 75#2015-11-0919:37marshalland that would be for a fairly small system (one supported by an m1.small)#2015-11-0919:46arohnerthanks#2015-11-0920:03robert-stuttaford@arohner: m-i-threshold is my laziness; i mean max-index-threshold as set out in your transactor.properties file when booting your transactor instances#2015-11-0920:03arohnerI understood simple_smile#2015-11-0920:05robert-stuttafordour write throughput is 400 😐#2015-11-0920:25thosmoskind of absurd I know, but I got an example app + transactor using postgresql running on a free heroku dyno: https://github.com/clojurous/shouter-datomic-heroku You can see it in action here: https://calm-castle-4835.herokuapp.com#2015-11-1000:05bostonaholicwhich method do you find yourself using?
;; 1
(map (comp (partial d/pull db [:account/number :account/date])
           :e)
     (d/datoms db :aevt :account/number))
;; 2
(d/q '[:find [(pull ?account [:account/number :account/date]) ...]
       :where [?account :account/number]]
     db)
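The two snippets above can be wrapped as named functions for side-by-side comparison; a rough sketch, assuming a Datomic peer `db` value and the `:account/*` attributes shown (the function names are illustrative only):

```clojure
(require '[datomic.api :as d])

(defn accounts-via-datoms
  "Option 1: walk the AEVT index lazily, pulling each entity as it is consumed."
  [db]
  (map (comp (partial d/pull db [:account/number :account/date]) :e)
       (d/datoms db :aevt :account/number)))

(defn accounts-via-query
  "Option 2: let datalog find the entities and pull inside :find.
   The [expr ...] find spec returns a collection of pull results."
  [db]
  (d/q '[:find [(pull ?account [:account/number :account/date]) ...]
         :where [?account :account/number]]
       db))
```

One practical difference: option 1 is a lazy seq over the index, while option 2 realizes the whole result collection on the peer before returning, which can matter for large result sets.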
#2015-11-1000:08bostonaholic1) retrieve the datoms then pull the attributes you want OR
2) query for entities and use pull from within the query#2015-11-1005:41robert-stuttaford@bostonaholic: easily both simple_smile#2015-11-1014:25nhaI'm taking baby steps in datomic... all the examples I have seen so far fill data from a file, and I am not sure how to use db/add to add data. Here is what I have so far : https://www.refheap.com/111536 now how would I go about adding a user ?#2015-11-1014:37cmcfarlen@nha http://docs.datomic.com/clojure/#datomic.api/transact#2015-11-1014:39nha@cmcfarlen: ah thanks I was looking for a different name.#2015-11-1014:40cmcfarlenyou'll have to transact your schema data before you can add a user#2015-11-1014:41nhaI understood that there was something like that, yes. I will play a bit now simple_smile#2015-11-1014:41nhaJust found this : https://gist.github.com/stuarthalloway/2948756 looks like it is going to help#2015-11-1014:45nhawhat do these mean ? : #db/id[:db.part/db] #2015-11-1014:46bostonaholic@nha it's an "edn tagged data literal". see http://docs.datomic.com/data-structure-literals.html and https://github.com/edn-format/edn#tagged-elements#2015-11-1014:47nhaThanks#2015-11-1014:47bostonaholicit's basically saying "create a temporary id in the :db.part/db partition"#2015-11-1014:53marshall@nha: You might want to look at the Day-Of-Datomic examples. The “Hello World” example shows a creating a very minimal transaction: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/hello_world.clj#2015-11-1014:54marshallThe other examples in the same tutorial directory show various other techniques and features#2015-11-1014:56nha@bostonaholic: thanks I did not know about EDN tagged.
@marshall: Alright I probably have to start there. I glanced through the official tutorial, but it did not seem to target Clojure users.#2015-11-1014:57marshallthe Seattle tutorial is available in Clojure in the Datomic distro you downloaded under samples/seattle/getting-started.clj#2015-11-1014:58nhaOk great that should get me started simple_smile#2015-11-1018:37robert-stuttaford@nha can’t recommend http://www.datomic.com/training.html enough#2015-11-1100:22domkmIs there something like macroexpand and macroexpand-1 but for Datomic transactions? I'd like to step through the expansion of transactor functions to diagnose issues.#2015-11-1100:30domkmEven if it's an implementation detail that is subject to change without notice, I'd quite appreciate a pointer to the right function(s).#2015-11-1105:47robert-stuttaford@domkm: that’d be awesome to have!#2015-11-1107:53domkm@robert-stuttaford: Indeed simple_smile It should be fairly easy to simulate by converting maps to vecs and calling invoke on any operation that isn't :db/add or :db/retract. I'm just hoping the official version is available because it seems like it would be a very useful tool for virtually all Datomic users.#2015-11-1108:02robert-stuttafordyeah. better if it was something we could invoke in their api rather than something provided by the community, because then we’d know it’s correct#2015-11-1108:02robert-stuttafordbtw, it’s not the same, but one trick you can use is to run it through d/withand inspect :tx-data#2015-11-1108:03robert-stuttafordyou’d only see datoms that were transacted – no datoms that were elided due to already being currently true would be included#2015-11-1114:56frankiesardoHi, I keep on getting an error when connecting to AWS transactor Caused by: HornetQException[errorType=NOT_CONNECTED message=HQ119007: Cannot connect to server(s). 
Tried with all available servers.]#2015-11-1114:57frankiesardoSomewhere I read it might be because I ran out of connections but I close all those I'm aware of. Is there a way to list all the connections to a certain transactor?#2015-11-1115:05frankiesardoif I try to restart the transactor sometimes the error changes to Caused by: HornetQException[errorType=SECURITY_EXCEPTION message=HQ119031: Unable to validate user:..#2015-11-1209:01nhaAdding datomic-pro - [com.datomic/datomic-pro "0.9.5153" :exclusions [org.slf4j/slf4j-nop org.slf4j/slf4j-log4j12]] causes the following message when running boot aot pom uber jar :
Writing pom.xml and pom.properties...
Adding uberjar entries...
Error while extracting /Users/nha/.m2/repository/org/apache/tomcat/tomcat-juli/7.0.27/tomcat-juli-7.0.27.jar:META-INF/LICENSE: java.io.FileNotFoundException: /Users/nha/.boot/cache/tmp/Users/nha/repo/vendor/saapas/20r6/mv1vpv/META-INF/LICENSE (Is a directory)
Writing clojure-backend-0.1.0-SNAPSHOT.jar...
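One workaround, which raywillig also mentions further down the thread, is boot's `uber` task `-j` option: it keeps dependency jars as jars instead of exploding them, so the conflicting META-INF/LICENSE entries are never extracted. A sketch of the adjusted invocation, assuming the same pipeline as above:

```shell
# keep dependency jars intact rather than exploding them into the uberjar,
# sidestepping the META-INF/LICENSE file-vs-directory clash during extraction
boot aot pom uber -j jar
```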
#2015-11-1213:19stuartsierra@nha That looks like a problem with how Boot is trying to create an Uberjar. I suspect it's because some library JAR has a file named META-INF/LICENSE and another JAR has a directory named META-INF/LICENSE/. This isn't specific to Datomic— maybe ask for advice in #C053K90BR or #C03S1KBA2.#2015-11-1213:22nhaOk sure. I will thanks#2015-11-1218:40iwilligin this function doc, t value is time? (java.util.Date)#2015-11-1218:40iwillighttp://docs.datomic.com/clojure/#datomic.api/t-%3Etx#2015-11-1218:41iwilligor datomic time#2015-11-1218:41stuartsierra@iwillig: t is a Datomic-specific value representing time. It's just a counter.#2015-11-1218:42iwilligthanks @stuartsierra#2015-11-1218:42iwilligis there a way to go from datetime to datomic t without using the log ?#2015-11-1218:42stuartsierrano#2015-11-1218:43iwilligthanks#2015-11-1218:43stuartsierraMost Datomic API functions that take a "time point" argument accept any one of t, a transaction entity ID, or a java.util.Date.#2015-11-1218:51iwilligI guess my question is something like this#2015-11-1218:51iwilligGiven to three arguments, two as java.util.Date (start end), and a
identifier on each transaction (:audit/group-id) what is the best (fastest)
way to query for these txs & datoms?#2015-11-1218:52iwilligmaybe i should open a support ticket for this#2015-11-1218:54stuartsierra@iwillig: Not sure I understand the description of your problem. If there are attributes on the transaction entities, then you can query for them using d/q. If you have start/end Dates, you can use d/tx-range to find all the transactions between those dates.#2015-11-1218:56iwilligso I should just request all of the transactions for that date range and then filter the transaction but group-id#2015-11-1218:57iwilligi am not explaining it clearly sorry#2015-11-1219:04stuartsierra@iwillig: Yes, what you describe will work. Or, as an alternative, something like (d/q '[:find ?tx :where [?tx :group-id 42] [?tx :db/txInstant ?inst] [(< ?inst #inst "2015-01-01")] [(>= ?inst #inst "2014-01-01")]] db)#2015-11-1219:04stuartsierraOne or the other might be faster depending on which set is smaller — transactions in the range of dates, or transactions with that :group-id.#2015-11-1219:05iwilligokay thanks#2015-11-1219:41raywillig@nha: not sure if this will help or if you already got it figured out but boot's uber task has a -j option that will keep your dependency jars as jars without exploding them in the process of making your uberjar#2015-11-1305:13sjolWhy can I not set the eid for an entity? doing (d/transact conn [{:db/id 175921860454230 :user/name "test_new" }]) throws an error [...]:cause ":db.error/invalid-entity-id Invalid entity id: 175921860454230"[...]
can't I set the eid?#2015-11-1306:36robert-stuttafordis the id already in storage? if not, you need to use d/tempid to create a temporary id for it to convert to an in-storage one#2015-11-1306:36robert-stuttafordyou can’t make eids directly by yourself#2015-11-1306:36robert-stuttaford@sjol ^#2015-11-1312:45nha@raywillig: Did not know that, it could come in handy.#2015-11-1314:05stuartsierra@sjol: No, you cannot set EIDs. The transactor creates & manages EIDs to support efficient indexing and storage. Transactions may refer to existing entity IDs, but must use d/tempid to create new entities.#2015-11-1315:01zirmiteI’m trying to import messaging data and set :db/txInstant to the timestamp for each message. I’m running into time conflicts when I try a second import with overlapping timestamps. do I have to import in strict timestamp order? i.e., import all messages in a set of transactions ordered by timestamp.#2015-11-1315:14marshall@zirmite: That’s correct. http://docs.datomic.com/best-practices.html#set-txinstant-on-imports
"note that you must choose a :db/txInstant value that is not older than any existing transaction's :db/txInstant value and not newer than the transactor's clock time"#2015-11-1315:16zirmitegot it, thanks!#2015-11-1315:17zirmitei had read that passage multiple times but only after asking my question here did it really sink in#2015-11-1315:20marshallrubber duck FTW#2015-11-1318:14ljosaAre there any limitations on the size of a pull pattern? I just ran into a bug where I had listed all the attributes I needed, and some were not returned. When I instead put * in the pattern, it worked.#2015-11-1319:36marshall@ljosa: What version of Datomic are you using?#2015-11-1319:36marshall0.9.5198 release notes included: * Fixed bug where the pull API did not always return all explicit reverse references.#2015-11-1319:37ljosa0.9.5153, it looks like#2015-11-1319:38stuartsierraI would also check for something simple but easy-to-miss like misspelled attribute names.#2015-11-1319:39ljosaI wish I had a cleaner example, but here I add one attribute and lose another: > (keys (first (d/q '[:find [(pull ?g [:group/docId
:group/blocked_psns
:group/keywords :group/negative_phrases :group/negative_urls :group/negative_domains :group/psns :group/topic :group/type :group/url_prefix :group/verticals
{:group/campaign [*]}
{:creative/_groups [*]}]) ...] :where [?g :group/docId "560d4087090adba02ce7d98e"]] (d/db conn))))
=> (:group/docId :group/verticals :group/type :group/keywords :group/blocked_psns :group/topic :group/campaign :creative/_groups)
^ So far so good, :creative/_groups is there. Now let's add :group/cpc to the pull form: > (keys (first (d/q '[:find [(pull ?g [:group/docId
:group/blocked_psns :group/cpc
:group/keywords :group/negative_phrases :group/negative_urls :group/negative_domains :group/psns :group/topic :group/type :group/url_prefix :group/verticals
{:group/campaign [*]}
{:creative/_groups [*]}]) ...] :where [?g :group/docId "560d4087090adba02ce7d98e"]] (d/db conn))))
=> (:group/docId :group/verticals :group/type :group/keywords :group/blocked_psns :group/topic :group/campaign :group/cpc)#2015-11-1319:40ljosa^ notice that :creative/_groups disappeared when we added :group/cpc#2015-11-1319:40stuartsierraAh, that's a reverse reference, so it looks like the bugfix @marshall just mentioned may be it.#2015-11-1319:41ljosagreat, I'll try that#2015-11-1319:41marshall@ljosa: Yeah, that is the issue we saw with that bugfix#2015-11-1319:57ljosayes, can confirm that upgrading to 0.9.5198 fixed it. thanks, @marshall and @stuartsierra!#2015-11-1320:10marshallGlad to hear it @ljosa#2015-11-1602:40tylerDo partitions work for the in memory database?#2015-11-1602:41tylerI can only get entities with :db.part/user to transact. I can create new partitions just can't get entities to transact into those partitions.#2015-11-1602:41tylerPartition looks like {:db/id #db/id[:db.part/db]
:db/ident :users
:db.install/_partition :db.part/db}#2015-11-1602:45tylernvm looks like lein clean and a repl restart fixed the issue#2015-11-1621:35sjol@robert-stuttaford: @stuartsierra thank you for the responses!
It does make sense, still having a hard time understanding how i could use the output of the log or history to sync a datascript db that already has some content#2015-11-1705:49robert-stuttaford@sjol it’s not a trivial problem simple_smile i might be wrong, but i think @sgrove has worked on this problem in https://github.com/sgrove/dato#2015-11-1723:11sjol@stuartsierra: I also thought that the nuBank guys may have solved this as their presentation seemed to hint at it and also a security model : https://www.youtube.com/watch?v=7lm3K8zVOdY#2015-11-1804:17domkmIt seems as if :db.fn/cas doesn't resolve lookup refs for values. Lookup refs and ids seem to be treated interchangeably elsewhere and I don't see this caveat mentioned in the docs. Is it a bug?#2015-11-1819:29ljosaHow can I get more than 1000 values for a cardinality-many attribute when I'm using * in a pull expression? I tried (pull ?g [(limit :group/keywords nil) *])
, but that still only gives me 1000 keywords.#2015-11-1820:47ljosanever mind, I upgraded Datomic and worked around it by specifying explicitly the attributes I need#2015-11-1911:48pesterhazyOur datomic AMIs keep crashing. In these cases they don't get a chance to push logs to S3. Is there a way to introspect what is wrong with a running instance?#2015-11-1911:49pesterhazyinspect even#2015-11-1914:25jgdavey@pesterhazy: Are you talking about the transactor? Or peers?#2015-11-1914:26pesterhazytransactor#2015-11-1914:27jgdaveyI’ve been able to SSH and poke around before. You’re likely to need to change the security group settings to allow that. However, in my experience it’s almost always memory settings. Not enough Xmx, or datomic cache exceding 75% of JVM mem.#2015-11-1914:39pesterhazythat's helpful, thanks#2015-11-1914:39pesterhazyI'd need to specify a keypair as well I guess#2015-11-1914:45pesterhazyit'd be great if you could ssh into the transactor and tail the logs by default; the log->s3 mechanism seems unreliable in exactly the circumstances where you need it#2015-11-1915:16lowl4tencypesterhazy: did you use CFN template?#2015-11-1915:17lowl4tencyactually usually you recieve the key when you start your AMI as ec2. So you need change sec group and start ssh daemon through userdata#2015-11-1915:18lowl4tencyYou can check the key name in Amazon Web console#2015-11-1915:19pesterhazyyes, I used the cloudformation template#2015-11-1915:19pesterhazy@lowl4tency: I'll check it out, that's useful#2015-11-1915:20lowl4tencypesterhazy: as well you need just add "service ssh start\n", in first line of your userdata#2015-11-1915:20lowl4tencyIt should start ssh daemon and you can check out logs and other. 
but I'm sure the best way is to try to increase memory#2015-11-1915:21pesterhazyyeah it may well be memory, as I'm using m3 instances#2015-11-1915:21pesterhazym3.medium more specifically#2015-11-1915:22lowl4tencyalso, you can share your template and I will review it#2015-11-1915:22jgdaveyWhat are your Xmx and datomic-specific memory settings?#2015-11-1915:24lowl4tencypesterhazy: I use c4.large and 3500mb for xmx#2015-11-1915:26jgdaveyFor m3.medium, you probably want between 2 and 3 Gb Xmx. http://docs.datomic.com/capacity.html has some good guidance for memory settings.#2015-11-1915:26lowl4tencypesterhazy: one moment, don't share your real licence key, replace it with fake numbers simple_smile#2015-11-1915:27pesterhazynot sure about my memory settings, which is probably a bad sign simple_smile#2015-11-1915:29lowl4tencyI recommend reading the link http://docs.datomic.com/capacity.html#2015-11-1915:29pesterhazyI'll read it, promise simple_smile#2015-11-1915:30lowl4tencyalso, do you have correct permissions for your transactor role?#2015-11-1915:30lowl4tencyit might be a reason why you have no s3 logs#2015-11-1915:31lowl4tencyand I recommend checking the running logs#2015-11-1915:33pesterhazyno it does write to s3 normally, just not when it dies and I have to kill the instance#2015-11-1915:33pesterhazythe system logs are normally truncated severely, unfortunately#2015-11-1918:58sdegutisI'm getting an awful lot of Consider using [com.datomic/datomic-free "0.9.5327" :exclusions [joda-time]]. -- is this common?#2015-11-1919:05bostonaholic@sdegutis: I believe so. That's what I have in my profiles.clj#2015-11-1919:10sdegutisAhh hmm.#2015-11-1919:10sdegutisThanks.#2015-11-1919:12sdegutisI see now. So, the aws package is what's pulling in joda-time. So excluding it is fine when I'm not using the aws feature within my process.#2015-11-1919:12sdegutisGreat.#2015-11-1920:02stuartsierra@sdegutis: It's not about excluding joda-time altogether.
The resolution to those kinds of conflicts is to figure out the version of joda-time which is backwards-compatible with the versions required by all your other dependencies, then make sure you end up with that version on the classpath. Popular Java libraries are usually pretty good about maintaining backwards-compatibility for precisely this reason.#2015-11-1922:10mattgIf dynamo is not an option (nor any cloud-based solutions), is there a de-facto storage engine recommendation for datomic? Assume little to no expert data services / dba availability.#2015-11-1922:43stuartsierra@mattg: I believe Cassandra is the most popular on-premise storage engine for Datomic at present.#2015-11-1922:48tcrayford@mattg: I think the usual recommendation is also to "go with what you know" - if your team has experience with a storage service, choose that, or choose a thing you're already running. Running a datomic storage service can be quite a bit of work and require quite some tuning (depending on usage).#2015-11-1922:55mattg@stuartsierra @tcrayford Thanks. “what my team knows” is also a challenge at the moment. I’m still gathering information on this end, but will explore Cassandra first.#2015-11-1922:58bkamphausre: choosing storage, a lot depends on your transaction throughput. SQL stores can be harder to mess up config for vs. Cassandra if your throughput needs are minimal.#2015-11-1922:59bkamphausif you do have high throughput requirements, and opt to go w/Cassandra, learning it at the same time as Datomic can be a bit of a challenge. 
Something like this can help some: http://www.ecyrd.com/cassandracalculator/#2015-11-1923:54timgilbertHi all, I'm a bit of a noob with datomic, but am I correct that the "5 processes" option here: http://www.datomic.com/pricing.html would cover basically one "database" and four "clients" (thinking of it as I might think about a postgres server or something)?#2015-11-1923:58timgilbertLike, my current setup has three application servers on a load-balancer talking to a postgres server - would that equate to four processes (one transactor and three peers)?#2015-11-2000:58danielcomptonOn that note, does running the dashboard consume a license too?#2015-11-2003:35bkamphausRight, each peer takes up a process in the license - REST and Console peers both included in that. The postgres comparison is tricky, though - Datomic peers are part of the database and directly access storage, cache segments, etc. You can still do things a traditional relational database client can without being peer and taking up a process (i.e. making a query and getting results via the REST API).#2015-11-2004:17danielcompton@bkamphaus: the transactor consumes a license too?#2015-11-2004:20bkamphausyes, simultaneous process use count includes peers and transactors#2015-11-2014:15bplatzI have an app that has a significant collaboration part to it. Datomic is a win for storing the core transactional data, I'm not sure about the collab part.#2015-11-2014:15bplatzThe colloaboration/chat part is more like a very active event stream, virtually no updates and high volume.#2015-11-2014:15bplatzMy concern is consuming signficant Datomic Peer memory for event stream data, but perhaps I shouldn't be concerned. Anyone tackle anything similar?#2015-11-2014:15bplatzI've contemplated using DynamoDB + maybe S3 for archives, but keeping all data in one source would be very attractive.#2015-11-2018:26paxanAbout transaction fns. What's the best practice for returning failures from txn functions? Exception? 
We've been raising IllegalArgumentException#2015-11-2018:29stuartsierra@paxan Throw ex-info and include data describing the failure.#2015-11-2019:43mattgSilly question du jour.
Assuming average commodity hardware (think laptop quality) and default tuning; given a model where consumers talk to peers but are not peers themselves, rough ballpark where is the tipping point size of dataset returned for a “realtime” frontend application, using datomic as the source of data. I’m being asked to off the cuff estimate without being able to explore and measure and get into details. (“10 thousand foot view”).
The rough hypothetical use case is a service backed by datomic that allows people to return “large” datasets. They want to pull it into memory and operate on it from languages like Ruby, PHP, Python, Perl, (not Java, not JVM-based). They want to know at what point pagination will be forced. 10k records, 500k records, 1MM records.
Unfortunately all of the meaty details I would ask if I heard this question are unknowns. I guess I’m just looking for anecdotal feedback.
rough context: think data warehouse trapped behind an API.#2015-11-2019:46mattg(Thinking out loud: I wonder if the peer can stream the data to the non-peer. Am I lucky enough to have that supported without significant custom development.)#2015-11-2315:54pesterhazyI transact a single, large transaction (10,000s of txs), but datomic reports :db.error/tempid-not-an-entity tempid used only as value in transaction. Is there a way to see which tempid is failing the transaction?#2015-11-2315:55pesterhazyI looked at transactor logs (with log level :debug), but it doesn't print the culprit#2015-11-2315:56robert-stuttafordyou have a tempid with no attr/values assigned#2015-11-2315:57robert-stuttafordsomething like {:db/id (d/tempid :db.part/user)}#2015-11-2315:58robert-stuttaford(shooting from the hip here, i admit simple_smile )#2015-11-2315:59pesterhazyI'm using only the [:db/add e a v] form#2015-11-2316:00pesterhazyMy assumption is that this error means that I'm using a tempid as a value but not (in the same tx-data) also as an entity#2015-11-2316:01robert-stuttafordah, yes#2015-11-2316:01pesterhazystill working on the EDN exporter btw simple_smile#2015-11-2316:01pesterhazyalmost there.. except for this pesky bug#2015-11-2316:01robert-stuttaforda tempid needs to appear at least once in E position#2015-11-2316:01robert-stuttafordyou likely have a ref to a tempid without actually giving that tempid some data#2015-11-2316:01pesterhazyright#2015-11-2316:02pesterhazythat's the thing -- if I read my tx-data correctly, I've removed all those instance#2015-11-2316:03robert-stuttafordyou should be able to write some scratch code to find all tempids in V position and validate that they all appear in E position at least once#2015-11-2316:04robert-stuttafordalso - are all your enums included in your dataset?#2015-11-2316:04robert-stuttaford[:db/add (tempid) :db/ident :status/awesome] these guys#2015-11-2316:04robert-stuttafordgotta run. 
good luck!#2015-11-2316:08pesterhazythanks, very helpful#2015-11-2322:42domkmQuestions for Cognitects: I'm building a type system on top of Datomic and have encountered a few anomalies. Why aren't functions installed with db.install/function (`:db.fn/cas` and :db.fn/retractEntity are values of :db.install/function)? What is :db.bootstrap/part? It's not referenced by any other entities. It looks like an artifact of DB bootstrap process that should maybe be retracted. CC @bkamphaus#2015-11-2322:43domkmMeta question: Is building a type system for entities a bad idea? 😉#2015-11-2322:54taylor.sandoSpec-tacular something with types. #2015-11-2322:54taylor.sandohttps://github.com/SparkFund/spec-tacular#2015-11-2414:06stuartsierra@domkm Anything prefixed with :db.install/ or :db.bootstrap/ is probably a "special" entity. Similar to special forms in Clojure, these are part of the implementation of Datomic and may not follow the same rules as normal entities.#2015-11-2418:04domkmThanks @stuartsierra. I guess I can just special-case those.#2015-11-2422:05haroldHello. I am calling ./bin/datomic restore-db ... and it's taking a lot longer on one machine than another. Is it possible to get some more verbose output or otherwise find out about progress?#2015-11-2506:08robert-stuttafordare you restoring the same database? is any version of the database already present on the storage to which you are restoring?#2015-11-2506:08robert-stuttafordhow many skipped vs new segments do you see in each?#2015-11-2514:19raymcdermottFWIW I made a buildpack for Datomic on Heroku … and blogged about it here: http://blog.opengrail.com/jekyll/update/2015/11/19/datomic-heroku-spaces.html#2015-11-2514:19raymcdermottI would welcome feedback#2015-11-2515:17bostonaholic@raymcdermott: I am excited to read that. 
thanks!#2015-11-2515:35raymcdermott@bostonaholic: the Private Spaces feature is not widely available yet but once it’s out there it will make it easier for people to get started with Datomic proper than EC2 - at least IMHO. Of course it will cost more too 😉#2015-11-2515:37alexmiller@raymcdermott: fyi, in your post you have "from Rich Hickey and the other guys at Cognitect" but there are women on the Datomic team as well and it would be polite to remove that gendered word there#2015-11-2515:40raymcdermottoh shit yes, I will change to folks#2015-11-2515:40raymcdermottthanks#2015-11-2515:40alexmillerthx#2015-11-2515:42raymcdermottdone#2015-11-2520:29sdegutisWhen is it useful to have the collection of Datoms produced by a transaction?#2015-11-2520:33danielcompton@raymcdermott: I talked to a Heroku account manager who indicated we needed to spend >$1000 / month to get onto Enterprise#2015-11-2520:34danielcomptonIs that roughly accurate for your spend?#2015-11-2520:51raymcdermott@danielcompton: yes, the corporation where I consult has many apps on their PAAS. I did not know the bar was that high however. I will check with some other sources to verify. I will need to update the post if that is the case since that is a pretty heavy ‘ceremony’ 😉#2015-11-2520:52danielcompton@raymcdermott: What I heard from them was that they really wanted you on Heroku Enterprise which is an annual agreement. They would maybe make an exception for standard customers paying $1000/month.#2015-11-2521:00raymcdermott@danielcompton: the client has an annual contract so yes, that makes sense. Most large companies favour an invoice rather than a credit card bill and Amazon has a similar policy in fairness. But I should be a little clearer on the barriers to entry. 
I hope it didn’t waste your time.#2015-11-2521:01danielcomptonno, not at all, it was very interesting#2015-11-2521:01danielcomptonjust not quite relevant for us yet simple_smile#2015-11-2521:01danielcomptonI talked to the Heroku person a few months ago, not in relation to your post#2015-11-2521:02raymcdermottthere are other options for exposing IP addresses in interesting ways using some Heroku addons to obtain a similar result#2015-11-2521:03raymcdermottgive me 2 mins and I’ll post the addon info#2015-11-2521:08raymcdermottone is Proximo and the other is Fixie … they work by setting up static IP addresses that you could use for the transactor. I didn’t go there because I couldn’t recommend off-piste services for core aspects of the system to a large corporate. You might find it interesting to explore but YMMV!#2015-11-2521:58raymcdermottJust checked with a contact at Heroku and yes the account is usually paid up front#2015-11-2521:59raymcdermottI have added that caveat to the post#2015-11-2522:00tcrayford@raymcdermott: @danielcompton that seems accurate with what I know simple_smile#2015-11-2608:22robert-stuttaford@sdegutis: we have an Onyx system that watches the Datomic log and inspects the datoms to figure out if it needs to do any stats work#2015-11-2610:23aspraHi! Not sure if this the right place to ask or the clojure channel is better. But here it goes: is there some sort of naming convention for functions that call datomic queries?#2015-11-2610:27asprafor instance how would you name a query function that retrieves all entities of a specific type and how one that takes a specific input?#2015-11-2612:55robert-stuttafordwe use Datomic as our sole database. 
i don’t think we have any sort of definable convention for our function names#2015-11-2612:56robert-stuttafordwe’ve taken the pragmatic approach of naming them as simply as possible, based on the semantic meaning - (defn active-users-for-group [group]) - that sort of thing#2015-11-2613:41asprathanks @robert-stuttaford. Probably it is a good enough approach. Only it can get verbose I suppose for multiple inputs?#2015-11-2613:42aspraI was wondering if there was some kind of styling standard a bit like https://github.com/bbatsov/clojure-style-guide#naming#2015-11-2614:11robert-stuttafordit definitely can get verbose, but we apply pragmatism to that, too. quite often, we end up breaking the function part#2015-11-2614:11robert-stuttafordwonderful thing about Datomic; no need to query the whole world in one go!#2015-11-2614:12robert-stuttafordi don’t think Datomic has seen broad enough use to warrant there being a guide such as that#2015-11-2614:13robert-stuttafordwhat’s really cool about Datomic is that you can almost forget that the data is not inside your app process… that is, it’s just normal data filtering code like any other FP#2015-11-2618:20dobladezIs there any tool/script to convert an existing SQL schema to a set of Datomic attribute definitions? I know there's no 1-to-1 correspondence, and cannot be 100% automated. I only expect something rough to start from#2015-11-2706:53robert-stuttafordhey dobladez ! long time!#2015-11-2706:54robert-stuttaford@dobladez: i’m not aware of such a tool. @cmdrdats (on here and on twitter) has undertaken such an effort and might have input for you#2015-11-2708:03lowl4tencystart transactor on sql database - create datomic backup. 
Start transactor on another DB engine - restore datomic backup#2015-11-2708:59josephany one knows how to operate when the "result sets are larger than can fit in memory" as described in the "Queries and Peer Memory" part in the page http://docs.datomic.com/query.html#2015-11-2708:59josephit mentioned datoms API and index-range, but is there some simple example or description about it?#2015-11-2710:21aspra@robert-stuttaford: cool, thanks for sharing your way of working. We do something similar but we name things in a different way. Thats why I started wondering if there was a standardised way that we might be missing.#2015-11-2711:16robert-stuttafordsure thing!#2015-11-2817:03raymcdermottguys … a quick design question for components in Datomic...#2015-11-2817:04raymcdermottI am trying to model a shopping basket with some items in the basket and figured that the items should be a component of the basket#2015-11-2817:05raymcdermottit all worked out nicely but ...#2015-11-2817:06raymcdermottI noticed that when I add another element to the basket that the items are duplicated#2015-11-2817:07val_waeselynck@raymcdermott: so that's a bug right ?#2015-11-2817:07raymcdermottor maybe to put another way… that I’m finding it unintuitive to model the current state of the items in the basket using the component model#2015-11-2817:07raymcdermottI’m not sure...#2015-11-2817:07raymcdermottwhich is why I asked#2015-11-2817:07raymcdermottmaybe I’m doing it wrong#2015-11-2817:08val_waeselynck@raymcdermott: I'm not sure I understand what you get#2015-11-2817:08val_waeselynckcould you put some code somewhere maybe ?#2015-11-2817:08raymcdermottthis is my original cart#2015-11-2817:08raymcdermott(def cart [{:db/id #db/id [:db.part/user -1]
            :cart/id #uuid "d213198b-36b5-4c19-8cb1-e172f59091d9"
            :cart/name "My Shopping Cart"
            :cart/sku-counts [{:sku-count/sku 12345
                               :sku-count/count 1}
                              {:sku-count/sku 54321
                               :sku-count/count 2}]}
])#2015-11-2817:08raymcdermottand then an updated cart#2015-11-2817:09raymcdermott(def new-cart [{
                :db/id 17592186045421
                :cart/sku-counts [{:sku-count/sku 12345
                                   :sku-count/count 1}
                                  {:sku-count/sku 54321
                                   :sku-count/count 2}]}
])#2015-11-2817:09raymcdermottand it creates two new records even though the data has not changed#2015-11-2817:09val_waeselynckis :sku-count/sku an identity field ?#2015-11-2817:10raymcdermott{:db/id #db/id[:db.part/db]
:db/ident :sku-count/sku
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one
:db/doc "Number of the SKU"
:db.install/_attribute :db.part/db}#2015-11-2817:10val_waeselynckso no simple_smile#2015-11-2817:10raymcdermottno simple_smile#2015-11-2817:10val_waeselynckIt's normal that it gets duplicated then. 2 strategies here :#2015-11-2817:11raymcdermott{:db/id #db/id[:db.part/db]
:db/ident :cart/sku-counts
:db/isComponent true
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many
:db/doc "SKUs with counts for this cart"
:db.install/_attribute :db.part/db}#2015-11-2817:11val_waeselynck1) have an identity field on the cart items, and do insert or update based on whether the line item is created or updated#2015-11-2817:12val_waeselynck2) On each transaction, erase the past line items and replace with the updated ones (it's a 'document-like' approach)#2015-11-2817:13raymcdermottok, so in either case I have to take care of the state rather than (my naive understanding) that it would be handled by Datomic because from my view there is no novelty#2015-11-2817:13val_waeselynckFor the second approach, I ended up writing a database function that does this:#2015-11-2817:16raymcdermottthanks - so I guess it’s a common issue!#2015-11-2817:16raymcdermottand how did you insert the snippet (not a Slackista)#2015-11-2817:16val_waeselynckclick the + sign in the bar at the bottom of the screen#2015-11-2817:17val_waeselynckAnd it's not that common - don't be too eager to use something like this simple_smile#2015-11-2817:17raymcdermottI take it you mean the function#2015-11-2817:18val_waeselynckyes#2015-11-2817:18raymcdermottisn’t this a classic master / detail#2015-11-2817:18val_waeselynck@raymcdermott: what do you mean?#2015-11-2817:19raymcdermottI mean the need to update component entries#2015-11-2817:19raymcdermottbasket / items ; order / order-lines ; etc...#2015-11-2817:20val_waeselynckMost of the time I would add /update /delete the line items individually, instead of 'resetting' the cart all the time#2015-11-2817:21raymcdermottok, so just so I understand … why didn’t you do that?#2015-11-2817:21raymcdermotti would like to understand the trade-offs#2015-11-2817:22val_waeselynckI didn't have a choice, I was migrating from a Document database, and my legacy clientside code relied on this design#2015-11-2817:23raymcdermottah, ok. 
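A minimal sketch of strategy 1 above, assuming :sku-count/sku may be declared :db.unique/identity. One caveat worth labeling: uniqueness in Datomic is database-wide, so this only fits if a given SKU appears in at most one cart; otherwise a per-cart compound key would be needed.

```clojure
;; Sketch (assumption): declare the sku attribute unique so re-transacting the
;; same line item upserts the existing entity instead of duplicating it.
{:db/id #db/id [:db.part/db]
 :db/ident :sku-count/sku
 :db/valueType :db.type/long
 :db/cardinality :db.cardinality/one
 :db/unique :db.unique/identity
 :db/doc "Number of the SKU (unique identity => upsert semantics)"
 :db.install/_attribute :db.part/db}

;; With that schema, this transaction matches the existing sku 12345 component
;; by identity and updates its count, rather than minting a new entity:
;; @(d/transact conn [{:db/id 17592186045421
;;                     :cart/sku-counts [{:sku-count/sku 12345
;;                                        :sku-count/count 2}]}])
```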
I think the add / update / delete approach is simpler for this use case#2015-11-2817:23raymcdermott@val_waeselynck: thanks for the suggestions#2015-11-2817:24val_waeselynckit really depends on what granularity is permitted on the client#2015-11-2817:24val_waeselynckOne thing to consider is that transaction functions are more costly#2015-11-2817:24val_waeselynckin performance#2015-11-2817:24raymcdermottit’s just a toy at the moment#2015-11-2817:25raymcdermottso I can decide#2015-11-2817:27raymcdermottI think it’s going to be easier to update the existing counts#2015-11-2817:28raymcdermottthe nested map is probably ideal for a case where the data will not change or only rarely#2015-11-2817:31raymcdermottI guess another benefit is that deleting the outer item will delete the components so that’s less tedious#2015-11-2817:31raymcdermottanyway, I’ll get back to my toys - thanks#2015-11-2817:46val_waeselynck@raymcdermott: have fun!#2015-11-2821:06raymcdermottfollowing on from the earlier conversation … real life crept in...#2015-11-2821:06raymcdermottanyway I now have this situation...#2015-11-2821:06raymcdermott(clojure.pprint/pprint (d/pull db '[*] cart-id))
{:db/id 17592186045421,
:cart/id #uuid "d213198b-36b5-4c19-8cb1-e172f59091d9",
:cart/name "My Shopping Cart",
:cart/sku-counts
[{:db/id 17592186045422, :sku-count/sku 12345, :sku-count/count 1}
{:db/id 17592186045423, :sku-count/sku 54321, :sku-count/count 2}
{:db/id 17592186045425, :sku-count/sku 12345, :sku-count/count 1}
{:db/id 17592186045426, :sku-count/sku 54321, :sku-count/count 2}]}#2015-11-2821:07raymcdermottand I would like to retract the first two sku-counts#2015-11-2821:07raymcdermottbut I am struggling to locate the right way to achieve the retraction … docs are not feeling obvious#2015-11-2821:09raymcdermottThis is close (I think) but I get errors...#2015-11-2821:09raymcdermott(def retraction [:db/retract 17592186045422 :sku-count/sku 12345 :sku-count/count 1])
=> #'shopping-cart-demo.datomic/retraction
@(d/transact conn retraction)
IllegalArgumentExceptionInfo :db.error/not-transaction-data Transaction data element must be a List or Map, got :db/retract datomic.error/arg (error.clj:57)
#2015-11-2821:10raymcdermottor maybe I’m way off… either way I would appreciate a pointer#2015-11-2821:26raymcdermottok, I have got as far as#2015-11-2821:26raymcdermott@(d/transact conn [[:db.fn/retractEntity 17592186045422 ]])
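For the record, the earlier :db/retract attempt failed on form, not intent: d/transact takes a collection of statements, and each :db/retract statement names exactly one datom as [:db/retract e a v]. A sketch reusing the entity ids from the pull above:

```clojure
;; retract individual datoms: one statement per [e a v] triple
@(d/transact conn [[:db/retract 17592186045422 :sku-count/sku 12345]
                   [:db/retract 17592186045422 :sku-count/count 1]])

;; or retract the whole component entity in one statement
@(d/transact conn [[:db.fn/retractEntity 17592186045422]])

;; then query a *fresh* database value; a value captured before the
;; transaction still shows the old data
(d/pull (d/db conn) '[*] 17592186045421)
```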
#2015-11-2821:27val_waeselynck@raymcdermott: that's the one you want#2015-11-2821:27raymcdermottalthough the pull API still reports the same data#2015-11-2821:27val_waeselynck@raymcdermott: try to always picture the low-level set of datoms that is involved#2015-11-2821:27raymcdermottdo I need to touch?#2015-11-2821:27val_waeselynckI think you're querying on an old version of your db#2015-11-2821:28raymcdermottcorrect!#2015-11-2821:29raymcdermottthanks .. I’m getting there#2015-11-2920:18dmi3yHello. I’m trying to start Datomic Console, but getting a weird Clojure stacktrace https://www.refheap.com/112188
Does anyone know what’s going on with it?#2015-11-3000:29dobladez@dmi3y: I had to install the Console from here: https://my.datomic.com/downloads/console, on top of the Datomic Pro installation#2015-11-3007:25dmi3y@dobladez: Thanks. I got it working using standalone datomic console.#2015-11-3008:22greywolveis it possible to restore a datomic database to a particular point in time, so that you can replay transactions at a systems level?#2015-11-3008:26robert-stuttafordfrom http://docs.datomic.com/backup.html:#2015-11-3008:26robert-stuttafordbin/datomic -Xmx4g -Xms4g restore-db from-backup-uri to-db-uri (t)#2015-11-3008:26robert-stuttaford> If you do not specify the optional t, the most recent backup will be restored. Note that you can only restore to a t that has been backed up. It is not possible to restore to an arbitrary t.#2015-11-3008:26robert-stuttafordin this quote, i’m not sure what the bolded sentence means by ‘arbitrary'#2015-11-3008:27robert-stuttaforddoes t have to point to a particular backup, and if so, how do we determine which ones are so available?#2015-11-3009:11greywolvenevermind, we are sorted now simple_smile list-backups shows all t values we can restore to#2015-11-3013:58a.espolovHello
Getting "HornetQNotConnectedException HQ119007: Cannot connect to server(s). Tried with all available servers." for the call (d/create-database uri).
I understand this occurs because the server is in the cloud and Datomic is running on a private IP
it’s containerized), etc.#2015-12-0115:17bkamphaus@dmi3y: the console error you encountered is due to an incompatibiltiy b/t bundled version of console and Datomic 0.9.5327, discussed on this thread: https://groups.google.com/forum/#!topic/datomic/4BwCDxs6zKw — the next release will address this. In the mean time, the suggestion @dobladez passed along to download the most recent stand alone console is the recommended workaround.#2015-12-0115:24dmi3ygot it, thanks for clarification @bkamphaus#2015-12-0115:26robert-stuttafordhttp://www.datomic.com/videos.html#2015-12-0115:26robert-stuttafordhttp://www.datomic.com/training.html#2015-12-0117:05domkm@bkamphaus: Sorry for not being clear. I think your inference was correct. I want to use the entity API to walk entity relationships, as it's intended for. The automatic replacement of ident entities with keywords makes this difficult. I was asking if there is a way to turn off this automatic entity->keyword replacement.#2015-12-0117:08domkm@bkamphaus: I suspected the answer was "no," so I wrote my own EntityMap to resolve it. It's working well except for one problem: I can't get Object#equals and IPersistentCollection#equiv to work correctly with the default datomic.query.EntityMap. It looks like the default EntityMap implements equals by first checking that the other object is an instance of datomic.query.EntityMap instead of checking that it implements datomic.Entity. This surprised me since Clojure favors programming to interfaces instead of concrete types. What do you think of changing the default EntityMap equals to check for the public datomic.Entity interface?#2015-12-0117:44bkamphaus@domkm: do you need laziness a la entity? If not, I would probably use [pull](http://docs.datomic.com/pull.html) and deal with the entity as a map (just the data) directly if the entity api’s ident behavior is not what you want (usually we have the opposite complaint for pull, as it does not return keywords). 
You could also build your own behavior for entities or ways of traversing or reifying data with pull.
I will say that it's not really typical to use :db/ident for entities that represent e.g. nodes in a graph with either attributes or further connections that are meaningful. Can you describe this aspect of your data model a little more?#2015-12-0117:52domkm@bkamphaus: Sure. I built a very basic type system and I'm using idents to specify entity types (not to be confused with value types). I want to walk the types to discover supertypes, attributes, etc. My EntityMap wrapper works perfectly for this except that equals is broken because datomic.query.EntityMap does a class check instead of an interface check.#2015-12-0117:56domkm@bkamphaus: In terms of pull returning maps instead of ident keywords, I think that it makes the most sense for the behavior of the entity and pull APIs to be opposite of what they currently are, since the entity API is commonly used for walking references, or for everything to be like the pull api (entities with idents treated the same as all other entities). Ideally, All APIs would be configurable so users could choose the behavior that best suits their use.#2015-12-0117:56jdubieQuestion about using d/db function vs using db-after. example is in this gist https://gist.github.com/jdubie/e7682a9c5cf7d5ecb60f#2015-12-0117:57jdubiebasically if you transact on a connection then run (d/db conn). will the resulting db always include the transaction?#2015-12-0118:04bkamphaus@jdubie: if you block until the transaction returns the db pulled from the conn should be at or later than the database value/t after the transaction, but, it may also include other things done to the database by transactions submitted by other peers (or even just threads) in the mean time. So the difference you should assess can be pointed the other direction - i.e., say it’s a deposit reporting the new balance back to a user. 
Do you want the balance immediately after the deposit, or the balance 100 milliseconds after the deposit, during which time either no or a small number of debits may have occurred?#2015-12-0118:05jdubieawesome - that make sense. thanks @bkamphaus !#2015-12-0118:05jdubiedatomic rules - i’ve really enjoyed using it#2015-12-0118:24bkamphaus@domkm: I’m thinking on your use case. I understand what you’re running into and why you’re using idents now, and why you’re hitting the issues (traversing the graph implied by the type hierarchy, etc.) I’ll probably discuss more here and get back to you. I do also understand the complaint re: the class vs. interface check and how it’s impacting the way you’re approaching the problem at present.#2015-12-0118:24domkm@bkamphaus: Okay, thanks.#2015-12-0120:44currentoorHello, I've heard DynamoDB is the recommended storage engine for ease of use and maintainability. Is this true?#2015-12-0120:52tcrayford@currentoor: it's recommended you use whichever storage engine you are more familiar or comfortable with, but if there is no familiarity with any of the existing engines, dynamo is probably the one that's easiest#2015-12-0120:52tcrayfordit may be somewhat pricier than other storage engines though#2015-12-0121:04currentoorWhat about performance constraints? Specifically, I'm worried about scaling postgres. Any quirks here?#2015-12-0121:07tcrayfordit depends, as with all scaling things. 
If you're actually worried about that, I'd spend some time doing performance testing and simulated load of the system#2015-12-0121:07tcrayforddynamodb will likely at the very least have better availability than postgres, but if you know postgres well and don't know dynamodb well, then likely using postgres as the storage service will work better for you#2015-12-0121:09currentoorbut is it true that in postgres it stores all of datomic in one table?#2015-12-0121:09currentoorwouldn't that be problematic?#2015-12-0121:37stuartsierraIf you are running your app on AWS, DynamoDB will be the easiest and most scalable storage option.#2015-12-0121:38stuartsierraDatomic storage on SQL is supported largely for the benefit of organizations which already have substantial investment in infrastructure supporting SQL.#2015-12-0121:42currentoorsounds good, thanks!#2015-12-0123:21bkamphaus@domkm: this topic has come up for a few people on our side as well, and the general consensus is the simplest possible solution is just to not use :db/ident for the name of the type system (using a unique string or something else as a unique/identity so you can use lookup refs). If you really want to stick with idents, pull is the way to go, you can make use of [nesting](http://docs.datomic.com/pull.html#nesting) and [recursion](http://docs.datomic.com/pull.html#recursive-specifications) to navigate between type entities. Recursion is fairly straight forward for traversal if you have consistent attributes for each type entity node - i.e., get the parent of the parent of the parent, (up to all parents), etc.#2015-12-0123:22bkamphausI’d opt to not use idents for your types, personally. The point of idents is pretty much be substitutable for the entity. I.e., with enums where all you ever care about the entity when ref’d is its ident/identity.#2015-12-0123:31domkm@bkamphaus: Thanks for bringing this up with your team. 
Using idents hasn't caused any problems for me except for the built-in EntityMap equality check. I should also add that another major motivating factor for me is that I want my code to be compatible with DataScript, which, unfortunately, lacks idents. Was the equals change from datomic.query.EntityMap to datomic.Entity discussed? If so and if it was rejected, may I ask why?#2015-12-0204:27devnso "hypothetically" let's say there's a pretty tricky and crazy relational data model that exists in postgres, and I want the cheapest and potentially incorrect solution to migrating that data into datomic. i know "there be dragons", but this is mostly for the purpose of forming a proposal to the rest of my team#2015-12-0204:29devntwo other questions: we moved to postgres 9.4 for JSONB support. curious if any datomicistas have thoughts on how one might go about pitching a smooth transition away from that in order to get to Datomic#2015-12-0204:32devnand second question: as far as im aware, the fulltext support is not really open in any way shape or form. we also take advantage of tsvector and tsquery in postgres. so a similar question to the above: thoughts? my understanding is that not enough of lucene is visible to be able to say they're capable of the same stuff#2015-12-0204:34devnTL;DR help me out on selling it. I think we'd benefit greatly in some respects if we were able to query history, the log. ewald's reified transaction talk + sagas also looked pretty ripe#2015-12-0204:36devnbut having indexing on JSONB and the fancier fulltext makes me iffy on whether the tradeoffs are enough to justify switching#2015-12-0214:32val_waeselynck@devn id say the usual motivations for Jsonb are either schema flexibility or the need for raw storage. 
You get the first one out of the box with datomic; as for raw storage, you can always encode your attributes in json, nippy or fressian#2015-12-0214:33val_waeselynck@devn migrating a schema from sql to datomic is pretty straightforward I believe #2015-12-0214:36val_waeselynck@devn as for selling it, don't forget one of the biggest advantages of datomic is that the database is not remote. When you no longer have to think of your queries as an expedition it really changes your life#2015-12-0216:09val_waeselynckTook me some time to discover this, maybe it will help someone: https://gist.github.com/vvvvalvalval/5547d8b46414b955c88f#2015-12-0223:08devn@val_waeselynck: totally reasonable on the jsonb front, though the fulltext sure would be nice#2015-12-0223:09devnwe're also leveraging hstore pretty heavily#2015-12-0319:15bkamphausDatomic 0.9.5344 is now available https://groups.google.com/d/msg/datomic/LgGjHCF_CGw/HMzjhVLpBAAJ#2015-12-0320:26raymcdermottmust be the n00best question ever … but here goes … how do I get the data back after an add on the DB? The only twist is that there are components in the record being saved so I cannot tell which id is the resolved ID for the ‘outer’ object#2015-12-0320:27raymcdermottdata looks like this#2015-12-0320:27raymcdermott(def cart {:cart/name "Bunky cart"
           :cart/id (java.util.UUID/randomUUID)
           :cart/sku-counts [{:sku-count/sku 12345
                              :sku-count/count 3}
                             {:sku-count/sku 54321
                              :sku-count/count 4}]})
=> #'shopping-cart-demo.datomic/cart
(save-new-cart conn cart)#2015-12-0320:27raymcdermottresult looks like this#2015-12-0320:27raymcdermott{:db-before datomic.db.Db @db33bdaf,
 :db-after datomic.db.Db @f673e86,
:tx-data [#datom[13194139534347 50 #inst"2015-12-03T20:15:35.122-00:00" 13194139534347 true]
#datom[17592186045452 64 "Bunky cart" 13194139534347 true]
#datom[17592186045452 63 #uuid"a192e41c-461a-44dc-a662-57ae652816d6" 13194139534347 true]
#datom[17592186045452 65 17592186045453 13194139534347 true]
#datom[17592186045453 66 12345 13194139534347 true]
#datom[17592186045453 67 3 13194139534347 true]
#datom[17592186045452 65 17592186045454 13194139534347 true]
#datom[17592186045454 66 54321 13194139534347 true]
#datom[17592186045454 67 4 13194139534347 true]],
:tempids {-9223354444667731342 17592186045454,
-9223354444667731343 17592186045453,
-9223350046623220336 17592186045452}}#2015-12-0320:27raymcdermottso it all worked#2015-12-0320:27raymcdermottquestion now is which ID do i plug into the following query#2015-12-0320:28raymcdermott(d/pull (:db-after tx) '[*] cart-id)
#2015-12-0320:29raymcdermottI happen to know (from querying the DB manually) that the ID is the last one in the map#2015-12-0320:29stuartsierra@raymcdermott: You have to hold on to something to reference the entity. It could be a unique identity, such as a UUID, to use in a lookup ref. Or keep the tempid and use resolve-tempid to get the real Entity ID.#2015-12-0320:42raymcdermottah nice ok I will use the resolve-tempid …. thanks#2015-12-0320:59raymcdermott@stuartsierra: boom! worked a treat, thanks#2015-12-0321:23stuartsierraYou're welcome#2015-12-0322:31raymcdermottquestion about component entities …. is there anyway to remove them all at once rather than one by one (I want to keep the entity they’re a part of as more elements in the component entries could be added later)#2015-12-0322:31raymcdermottit’s not a huge hassle to enumerate over them … just wondered if there was a shortcut#2015-12-0406:00robert-stuttafordyou could get a ref to the parent entity and walk its collection(s), or if you want what was just added, you’d use the tempid thing again#2015-12-0418:08jonasDoes the tx-report-queue tx-data not contain the fifth datom element (added?). It’s not mentioned in the docs http://docs.datomic.com/clojure/#datomic.api/tx-report-queue#2015-12-0418:35marshall@jonas: The :tx-data entry does contain full E/A/V/T/Op datoms. 
I will suggest that be clarified in the doc.#2015-12-0418:35jonas@marshall: Thanks#2015-12-0421:03raymcdermott@robert-stuttaford: i want to be able to remove all component IDs in one go … for the moment I have to go through them one by one#2015-12-0421:06raymcdermottI should clarify … I am using a component ref for items in a shopping basket#2015-12-0421:06raymcdermottwhich seems like a logical way to use that feature#2015-12-0421:08raymcdermottbut now I have to do a lot of book-keeping around the elements that are added / updated deleted rather than just sending the map back into datomic and let it handle the book-keeping#2015-12-0421:08raymcdermottdoes that make sense?#2015-12-0422:27tcrayford@raymcdermott: seems like it'd be easy enough to write a transaction function for that.#2015-12-0422:55raymcdermott@tcrayford: that sounds tempting … what would I win?#2015-12-0422:57tcrayfordatomicity for one#2015-12-0423:00raymcdermottYes, makes sense but it still feels like I’m moving boiler plate from one place to another#2015-12-0423:01tcrayfordah, but the transaction function could be very generic, it doesn't need to be tied to your application at all (which is a pattern I'd generally recommend for transaction functions)#2015-12-0423:01tcrayfordlike :db.fn/replaceSubComponents or something#2015-12-0423:02raymcdermotthmmm - yes …. 
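A generic transaction function along the lines of tcrayford's :db.fn/replaceSubComponents idea might be sketched as below. The name, params, and install ident are all hypothetical; this is not a Datomic built-in.

```clojure
;; Hypothetical generic tx fn: retract all current component entities under
;; attr on entity e, then assert the replacement maps as new components.
(def replace-components
  (d/function
    '{:lang "clojure"
      :params [db e attr new-components]
      :requires [[datomic.api :as d]]
      :code (concat
             ;; retract every existing component entity reachable via attr
             (for [c (get (d/entity db e) attr)]
               [:db.fn/retractEntity (:db/id c)])
             ;; assert the replacements as fresh components of e
             (for [m new-components]
               {:db/id e, attr [m]}))}))

;; Install it under an ident, then call it inside a transaction:
;; @(d/transact conn [{:db/id (d/tempid :db.part/user)
;;                     :db/ident :cart/replace-components
;;                     :db/fn replace-components}])
;; @(d/transact conn [[:cart/replace-components cart-eid :cart/sku-counts
;;                     [{:sku-count/sku 12345 :sku-count/count 3}]]])
```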
I was hoping that would be in Datomic 😉#2015-12-0423:02raymcdermottbut nice idea … I’ll give it a crack#2015-12-0423:04raymcdermottI’m working on an example for the Heroku Datomic thing so it could be good to show that feature#2015-12-0423:05tcrayfordneat simple_smile#2015-12-0423:06tcrayford@raymcdermott: I've only written 3 transaction functions in 3 years of using datomic, all of them super generic, which I like a lot.#2015-12-0423:07raymcdermott@tcrayford: I’ll shoot you a link to the code for your comments once it’s done (should be able to finish the Datomic parts over the weekend)#2015-12-0423:07raymcdermottthen om.next!#2015-12-0423:09tcrayfordnice 😄#2015-12-0515:19raymcdermottcan anyone point me at any transaction functions that use require or are more than one line examples? Struggling with google for this 😕#2015-12-0516:27raymcdermotti found a more interesting example on the day of datomic#2015-12-0516:27raymcdermotthttps://github.com/Datomic/day-of-datomic/blob/master/tutorial/transaction_function_exceptions.clj#2015-12-0521:12paxanIs it considered an anti-pattern if a database transaction function can only produce consistent results if it runs in a transaction all by itself?
E.g. running :do/thing twice in one transaction on parameters that aren't independent like so: (d/transact conn [[:do/thing x y z] [:do/thing x y p q]]) would, due to the implementation details, yield bad results.
So instead we have to run that as two separate transactions:
(d/transact conn [[:do/thing x y z]])
(d/transact conn [[:do/thing x y p q]])
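The root cause here is worth naming: every transaction-function call in a single transaction is expanded against the same basis value of the database (the value before the transaction), so sibling calls never observe each other's results. A hypothetical counter function makes it concrete:

```clojure
;; Hypothetical :counter/inc tx fn: read the current value, assert value + 1.
(def counter-inc
  (d/function
    '{:lang "clojure"
      :params [db e attr]
      :requires [[datomic.api :as d]]
      :code (let [cur (or (get (d/entity db e) attr) 0)]
              [[:db/add e attr (inc cur)]])}))

;; Both calls below expand against the same pre-transaction db, so both emit
;; [:db/add e attr 1] and the counter ends at 1, not 2:
;; @(d/transact conn [[:counter/inc e :counter/value]
;;                    [:counter/inc e :counter/value]])
;; Run as two separate transactions, the second call sees the first's result,
;; which matches paxan's conclusion above.
```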
#2015-12-0521:17paxanI've just realized that it's not an anti-pattern, all thanks to writing this question down.#2015-12-0612:31magnarsI'm seeing collisions between tempids generated by the transactor, and tempids generated by my process. I understand why it happens, but it is a scary trap to fall into.#2015-12-0612:31magnarsI could do something like this:#2015-12-0612:31magnars(some #(when (even? (:idx %)) %)
(repeatedly #(d/tempid :db.part/user)))#2015-12-0612:32magnarsand replace even? with odd? on the transactor#2015-12-0612:32magnarsbut is that really the best way to do it?#2015-12-0712:13tcrayford@magnars: were you using just any old negative number, or were you specifically calling d/tempid?#2015-12-0712:13tcrayfordohh, that has to do with transaction functions…#2015-12-0712:48magnarsyes, calling d/tempid from transaction functions is perilous, since you might get the same tempid from your own process, resulting in non-deterministic bad data.#2015-12-0712:58tcrayford@magnars: yeah, eww 😞#2015-12-0713:02tcrayford@magnars it seems like the datomic team could maybe "fix" this by reserving a portion of the tempid space to be transactor generated only. It probably doesn't need to be too large.#2015-12-0713:06magnars@tcrayford: They have known about the issue for a few years. I'm not sure why it isn't a priority. Seems like a pretty bad result. Maybe people aren't using transaction functions.#2015-12-0713:29robert-stuttaford@magnars i think i saw someone saying that by convention they explicitly set tempids in tx functions, starting at a number far away from where in-proc tempids would be#2015-12-0713:30robert-stuttafordwe’ve got like two tx functions and we haven’t had this issue with either of them#2015-12-0713:32magnars@robert-stuttaford: that sounds good. Maybe it was fixed and I haven't spotted it in the change log.#2015-12-0713:36magnarsCan't see it in the changelog, but I'll upgrade regardless.#2015-12-0713:47robert-stuttaford@magnars: sorry, i meant to say that i saw someone who merely uses Datomic say that this is how they dealt with this issue#2015-12-0713:47robert-stuttafordthey explicitly make temp-ids starting at a number that’s far away from where Datomic starts by default#2015-12-0713:47robert-stuttafordbeen trying to find the source but i can’t#2015-12-0713:48magnarsaha, yes. That would be another way of fixing it. 
I think I like my even/odd approach better tho, since there's no risk of collision.#2015-12-0713:48robert-stuttafordsure simple_smile#2015-12-0713:48robert-stuttafordyou’re never going see em, so whatever’s sensible#2015-12-0713:48magnarsmight be a purely theoretical advantage tho simple_smile#2015-12-0717:16pesterhazy@robert-stuttaford: I managed to finish my datomic exporter, based on your work#2015-12-0717:16pesterhazyit should work for pretty large dbs#2015-12-0717:46robert-stuttaford@pesterhazy: hey, cool! any feedback for me? any bits in particular that made the cut?#2015-12-0717:51robert-stuttafordregardless of how much of it you ended up using, i will shamelessly admit that i thoroughly enjoyed writing it. transducers ftw#2015-12-0720:28raymcdermottguys… I have made my first tx function to handle component updates a little more completely than comes out of the box#2015-12-0720:29raymcdermottI would appreciate some comments on whether the code follows the intentions of such functions#2015-12-0720:34raymcdermottI have also create a gist if it’s too hard to read here https://gist.github.com/raymcdermott/57ade5ae671f83d2444e#2015-12-0720:37raymcdermottthe idea is to support cases where the component entities are not just the ones created with the entity … it should be possible to add new entries, delete entries and update existing entries by passing in the component entity map#2015-12-0720:38raymcdermottI am using it for a shopping cart use case where items are often subject to these operations#2015-12-0720:38raymcdermottany hints or tips would be appreciated#2015-12-0721:43raymcdermott🕸 is it just me or it getting dusty in here#2015-12-0808:51pesterhazy@robert-stuttaford: I decided to rewrite it mostly so that I understand it better#2015-12-0809:08pesterhazyWhen restoring, I'm using a two-phase approach to deal with tempids. 
First I safe bogus refs, then I fix them to point to the new entities.#2015-12-0810:02jonpitherHi, is it possible given a connection to restore an in-memory db to db from an earlier point in time? For test purposes (using conformity for migrated a DB to the desired version, but it's too slow for each test we have)#2015-12-0810:12ustunozgurI'm running a datomic instance and a simple jetty/ring web server on a 2GB RAM instance, but it is consuming about 1.5GB RAM. The database is pretty empty, maybe 1000 records or so. Is this normal?#2015-12-0810:13ustunozgurdoes datomic consume that much memory out of the box or am I doing something wrong in my server?#2015-12-0810:14ustunozgurI just attach to the db at every function call, for example, I have the following db queries: https://github.com/YoungTurks/hackerdict/blob/master/src/hackerdict/db.clj#2015-12-0810:14ustunozgurI have also experimented with storing the connection in an atomic#2015-12-0810:14ustunozgurbut then reverted back to using a function call to get a brand new one each time.#2015-12-0810:15ustunozgurwhat is the ideal way to keep a connection to the db?#2015-12-0810:35robert-stuttaford@jonpither: interesting problem#2015-12-0810:36jonpitherAny ideas, I'm stumped#2015-12-0810:36robert-stuttafordso you have a storage-backed db and you want to take an in-mem db pointing to a particular point in that storage db for tests to use?#2015-12-0810:36jonpitherLooking at excision also, but that seems non-trivial#2015-12-0810:36jonpitherno it's not storaged backed#2015-12-0810:36robert-stuttafordwhat form does your test data take when your tests aren’t running? 
transactions in edn?#2015-12-0810:37jonpitherIt's a pure in-memory DB, the problem is the cost of creating a new DB for each test (running schema migrations) is around ~ .5 sec#2015-12-0810:37robert-stuttafordi assume you transact in your tests?#2015-12-0810:38jonpitherYes#2015-12-0810:38jonpitherthe migrations use plugged in with https://github.com/rkneufeld/conformity#2015-12-0810:38robert-stuttafordyeah we use that too#2015-12-0810:38jonpitherideally, given a freshly created in-mem DB, I'd like to shove it back down the connection, saying "revert/restore yourself to this"#2015-12-0810:38robert-stuttafordhow many datoms in the test database post schema?#2015-12-0810:39jonpithernot a great deal#2015-12-0810:39jonpither(just rocked on to this codebase)#2015-12-0810:40robert-stuttafordone approach is to use d/with instead of d/transact#2015-12-0810:41robert-stuttafordbut that means alterations to production code which sucks#2015-12-0810:42robert-stuttafordyou can’t rewind datomic databases like that. 
you can assert mirror transactions in reverse order, but that’s probably going to be slower than simply remaking it#2015-12-0810:42robert-stuttafordyou could also make the db, save off the datoms (out of :EAVT) and simply transact the whole lot into a new in-mem db for each test#2015-12-0810:43jonpitheryeah the latter sounds better#2015-12-0810:43jonpitheris there a resource you could point me at?#2015-12-0810:43robert-stuttafordto save off the datoms, or to build a tx based on them?#2015-12-0810:44jonpithergiven I have a freshly created / migrated DB with some initial seed data, can I extract all the txs from this necessary for building up a new db?#2015-12-0810:44robert-stuttaford(vec (for [[e a v] (d/datoms db :eavt)] [:db/add e (d/ident db a) v]))#2015-12-0810:45robert-stuttafordyou’d have to filter out all the stuff present in every datomic database, so a’s with :db/ or :fressian/ as a namespace#2015-12-0810:46jonpitherI'll give this a go, thanks Bob, be interesting to see if it's faster#2015-12-0810:47robert-stuttafordshould be, you’d get full persistent data-structure advantages#2015-12-0810:49robert-stuttaford@jonpither: untested, but i’m fairly sure this’ll work:#2015-12-0810:49robert-stuttaford(vec (for [[e a v] (d/datoms db :eavt)
:let [a (d/ident db a)]
:when (not (-> a namespace #{"db" "db.install" "fressian"}))]
[:db/add e a v]))#2015-12-0810:50robert-stuttafordthat should be empty for a completely new database#2015-12-0810:50jonpithernice - legend#2015-12-0810:51robert-stuttafordof course, you can optimise further; keep the db around too because you can use it for any tests that don’t transact anything (as few as those might be)#2015-12-0810:54robert-stuttaford@jonpither: one small fix to the snippet above#2015-12-0810:55jonpitherI see namespaces such as db.install#2015-12-0810:55robert-stuttaford😁#2015-12-0810:58jonpither`(defn- db-creation-datoms
[db]
(vec (for [[e a v] (d/datoms db :eavt)
:let [a (d/ident db a)]
:when (not (or (-> a namespace #{"db" "fressian"})
(.startsWith (namespace a) "db.")))]
[:db/add e a v])))`#2015-12-0810:59robert-stuttaford@jonpither, i’m a dork. i’ve completely ignored that transacting requires tempids, so you’ll need to post-process the e and v values in that output to replace all the ids with tempids, and you’ll need to make sure that schema gets db partition tempids#2015-12-0811:00jonpitherah yeah#2015-12-0811:00robert-stuttafordnot insurmountable, and it’s work that would only happen once at the start of the test run, so still worth trying#2015-12-0811:01robert-stuttafordjust not a 4 line code snippet any more, heh#2015-12-0811:01jonpithergood point#2015-12-0811:01robert-stuttafordprogramming!#2015-12-0811:01robert-stuttafordgood luck. must run#2015-12-0811:01jonpitherthanks Bob#2015-12-0811:02jonpitherI'll let you know of my travails#2015-12-0818:16currentoorHi, if I transact a few entities using tempids, is there a way to get the entity ids they resolved to from the tx-data?#2015-12-0818:16currentoorreturned from transact#2015-12-0818:17currentooror should i just query the DB after the transaction, i'm interested in a subset of the new entity ids#2015-12-0818:17currentoorthe entity ids for some of the newly transacted entities#2015-12-0818:55marshall@currentoor: They are returned from transact in the map.
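A rough sketch of the tempid post-processing robert-stuttaford describes above (replaying a seed db's datoms into a fresh transaction) might look like this, assuming the Datomic peer library; it deliberately ignores two cases called out in the conversation, schema datoms (which need :db.part/db tempids) and ref-typed values (which need the same id-to-tempid remapping as e):

```clojure
(require '[datomic.api :as d])

(defn replay-tx
  "Rebuild a db's user datoms as a transaction, mapping every
  original entity id to a fresh tempid. Sketch only."
  [db]
  (let [datoms  (seq (d/datoms db :eavt))
        ;; one fresh tempid per distinct original entity id
        tempids (zipmap (distinct (map :e datoms))
                        (repeatedly #(d/tempid :db.part/user)))]
    (vec (for [dtm datoms
               :let [a (d/ident db (:a dtm))]
               :when (not (or (#{"db" "fressian"} (namespace a))
                              (.startsWith ^String (namespace a) "db.")))]
           [:db/add (tempids (:e dtm)) a (:v dtm)]))))
```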
They’re in the :tempids entry of the map.#2015-12-0818:55marshallhttp://docs.datomic.com/clojure/#datomic.api/transact#2015-12-0818:56currentoorHello @marshall!#2015-12-0818:56currentoorLooking forward to talking to you this afternoon.#2015-12-0818:56marshall@currentoor: You can also use http://docs.datomic.com/clojure/#datomic.api/resolve-tempid#2015-12-0818:56marshall@currentoor: Likewise#2015-12-0818:57currentoorresolve-tempid was exactly what I needed#2015-12-0818:57currentoorthanks!#2015-12-0818:57marshallsure#2015-12-0822:10stuartsierraAs I understand it, you must use d/resolve-tempid to correctly resolve tempids, you can't just get them from the map returned by d/transact.#2015-12-0912:49josephHello, does anyone have some knowledge about the cooperation among multiple peers? for query and transact?#2015-12-0912:50josephafter searching google, I only found Ian Eslick's gist code about peer reservation: https://gist.github.com/eslick/4122604#2015-12-0912:51josephdoes anyone know some doc?#2015-12-0913:07josephor that's what we can get with paid version?#2015-12-0913:23robert-stuttafordhttp://docs.datomic.com/clojure/#datomic.api/sync#2015-12-0913:24stuartsierra@joseph The only differences among the Free, Pro Starter, and Pro editions are those described here http://www.datomic.com/pricing.html#2015-12-0914:04joseph@robert-stuttaford @stuartsierra thanks for replying, I have checked the datomic api sync, but do you have a detailed tutorial or examples to show how to use it for peers to coordinate#2015-12-0914:04josephlike how to query a large dataset with two peers#2015-12-0914:06robert-stuttafordhttp://blog.datomic.com/2013/06/sync.html#2015-12-0914:06robert-stuttafordbreaking query up across peers is more something you’d need to do yourself#2015-12-0914:07robert-stuttafordyou might look at http://onyxplatform.org or #C051WKSP3 on here#2015-12-0914:09josephmany thanks, that's what I am looking for#2015-12-0914:10robert-stuttafordwe use Onyx, so i can likely
answer many questions you might have simple_smile#2015-12-0914:28stuartsierra@joseph: Datomic's transactional capabilities are sufficient to coordinate any activity across multiple Peers, but Datomic just gives you the primitives. You will either have to write the coordination code yourself or use a library/framework that provides it.#2015-12-0915:03joseph@robert-stuttaford @stuartsierra that's awesome...#2015-12-0915:12a.espolovguys#2015-12-0915:13a.espolovI run datomic on a remote server#2015-12-0915:14a.espolovI try to create a database, but catch an exception#2015-12-0915:24joseph@a.espolov: in my experiences, after you create a new database, it seems like you have to wait around 1 minute for inserting data, it sounds unreasonable, maybe other guys know more about it#2015-12-0915:25a.espolov@joseph: there is another problem#2015-12-0915:26a.espolovrunning datomic repl on the remote machine#2015-12-0915:26joseph@a.espolov: have you started the transactor with right configuration?#2015-12-0915:27a.espolovI created a database(using public IP for VPS in the connection string)#2015-12-0915:27a.espolov@joseph: yes#2015-12-0915:34robert-stuttaford@a.espolov: your peer can not connect to your transactor. are the ports open? 4334 and 4335 if i’m not mistaken?#2015-12-0915:37robert-stuttaford@joseph: creating a new database should not take a minute to ‘work'#2015-12-0915:38joseph@robert-stuttaford: my experience about it is after I delete and create a database, it seems I need to wait around 1 minute to be able to insert the data into datomic#2015-12-0915:40josephin this post, http://permalink.gmane.org/gmane.comp.db.datomic.user/6873, it says "At some point in the last few
months they changed the internals of the transactor such that a database
name is not available for reuse for up to a minute after it is deleted."#2015-12-0915:46a.espolov@robert-stuttaford: thanks#2015-12-0916:50robert-stuttaford@joseph they fixed that in the second-to-last release simple_smile#2015-12-0917:17joseph@robert-stuttaford: yes, that’s right, thanks#2015-12-0919:04laforge49Newbie datomic question. Getting started.
C:\datomic-pro\datomic-pro-0.9.5344>bin\groovysh.cmd
Dec 09, 2015 2:01:15 PM java.util.prefs.WindowsPreferences <init>
WARNING: Could not open/create prefs root node Software\JavaSoft\Prefs at root 0x80000002. Windows RegCreateKeyEx(...) returned error code 5.
Groovy Shell (1.8.9, JVM: 1.8.0_31)
Type 'help' or '\h' for help.
-----------------------------------------------------------
groovy:000>#2015-12-0919:04laforge49Ignore the warning?#2015-12-0920:09robert-stuttafordwhat does google say about it?#2015-12-0923:27domkmCognitects, would you be open to providing official (non-implementation) versions of datomic.impl.Exceptions$IllegalArgumentExceptionInfo and datomic.impl.Exceptions$IllegalStateExceptionInfo? They are very useful for transactor functions but I am wary of relying on implementation details.#2015-12-1001:51currentoorI've got a Cassandra cluster in a data center setup for Datomic (using the provided CQL scripts) and a locally running transactor connected to it.
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:cass://<IP-ADDRESS>:9042/datomic.datomic/<DB-NAME>?user=iccassandra&password=369cbbab59f6715bfde80cce13cde7cc&ssl= ...
System started datomic:cass://<IP-ADDRESS>:9042/datomic.datomic/<DB-NAME>?user=iccassandra&password=369cbbab59f6715bfde80cce13cde7cc&ssl=
But when I try to launch the web console with
bin/console -p 8080 staging datomic:cass://<IP-ADDRESS>:9042/?user=<username>&password=<password>&ssl=false
I get this error in the browser
Cannot support TLS_RSA_WITH_AES_256_CBC_SHA with currently installed providers trying to connect to datomic:cass://<ip address>:9042/?user=<username>&password=<password>&ssl=false, make sure transactor is running
#2015-12-1001:52currentoorHas anyone seen this error before?#2015-12-1002:06currentoorIf I try to programmatically connect it says Caused by: java.lang.IllegalArgumentException: Cannot support TLS_RSA_WITH_AES_256_CBC_SHA with currently installed providers.#2015-12-1002:06currentoorin a stack trace.#2015-12-1002:06paxan@domkm: if you're trying to throw informative exceptions from a transaction function, just use clojure.core/ex-info#2015-12-1002:07domkm@paxan: ex-info doesn't differentiate between state and argument exceptions but those Datomic exception classes do.#2015-12-1002:21paxanFair point @domkm. I've settled on just using ex-info in our txn functions based on the recommendation from one of the datomic people.#2015-12-1008:03roelofCan datomic be a good solution for an ecommerce solution made in clojure. Or an accounting app made in clojure?#2015-12-1008:05robert-stuttafordhell yes, and hell yes#2015-12-1008:06robert-stuttafordboth require a full audit trail to be sound. datomic excels at that#2015-12-1008:07roelofoke, then I have to search for a good tutorial / book to learn datomic and the way I have to make the queries#2015-12-1008:09robert-stuttafordsee the 3 pinned items in here#2015-12-1008:09robert-stuttafordthere’s no book yet#2015-12-1008:09robert-stuttafordbut there are great videos from the folks who make Datomic#2015-12-1008:10roelofsorry for a newby question.
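The ex-info approach paxan settled on for transaction functions might look roughly like this; the :account/balance attribute and the withdraw logic are hypothetical:

```clojure
(require '[datomic.api :as d])

(defn withdraw
  "Hypothetical transaction function: throws a data-carrying error
  with clojure.core/ex-info instead of a bare exception."
  [db account-id amount]
  (let [balance (or (:account/balance (d/entity db account-id)) 0)]
    (if (< balance amount)
      (throw (ex-info "insufficient funds"
                      {:account   account-id
                       :balance   balance
                       :requested amount}))
      [[:db/add account-id :account/balance (- balance amount)]])))
```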
How can I find the pinned items?#2015-12-1008:10robert-stuttafordalso see http://learndatalogtoday.org#2015-12-1008:11robert-stuttafordopen the People list and then click pinned items just above all the people in the list#2015-12-1008:11roelofthanks, learned another thing about slack#2015-12-1008:13robert-stuttaford:+1: also a bunch of recipes in the clojure cookbook here: https://github.com/clojure-cookbook/clojure-cookbook/tree/master/06_databases, 6-10 through 6-15#2015-12-1008:41pingQ: we are evaluating datomic for a social network app, and we are storing a large body of text.
what storage are you using?#2015-12-1009:21josephI used to consider it's the problem of memory, coz it's shown in the log outofmemory#2015-12-1009:21robert-stuttafordyour import is crushing the transactor simple_smile#2015-12-1009:22robert-stuttafordhttp://docs.datomic.com/capacity.html#data-imports#2015-12-1009:22josephmemory-index-threshold=32m
memory-index-max=4g
object-cache-max=2g
#2015-12-1009:22josephI gave 8 GB to transactor#2015-12-1009:22robert-stuttafordwhat storage are you using? dynamo or something else?#2015-12-1009:23robert-stuttafordthe other issue is that threshold. it’s going to index to storage whenever it reaches that threshold. so you might want to increase that quite a bit#2015-12-1009:24josephno, just dev mode, we are still in the experiment#2015-12-1009:25robert-stuttafordok. try increasing that threshold to 128 or even 256mb#2015-12-1009:25robert-stuttafordyou should also give it time between batches to catch up with itself#2015-12-1009:28josephok, and we have around 120 variables, and each variable has around 100 000 datum, should I do the request-index after importing each variable's data?#2015-12-1009:28robert-stuttafordthat’s a wise idea#2015-12-1009:29robert-stuttafordimport one variable’s datoms, request index, wait for it to go back to sleep#2015-12-1009:29robert-stuttafordare you batching the transactions for those 120,000?#2015-12-1009:29josephno, batching around 200-400 datom#2015-12-1009:29robert-stuttafordso 2000 datoms at a time, say#2015-12-1009:29josephok#2015-12-1009:30robert-stuttafordok. if the values are small, then you can go higher than that#2015-12-1009:31josephwhen you were saying "wait for it to go back to sleep", do you mean I should wait for the return of request-index_#2015-12-1009:31robert-stuttafordyou can also tag the transaction itself if you want to keep track of which source values you’re transacting - e.g. [:db/add (d/tempid :db.part/tx) :source-range “variable-A__4001-6000”]#2015-12-1009:31robert-stuttafordwait for its CPU usage to die down#2015-12-1009:31robert-stuttafordi’ve never used request-index, reading docs#2015-12-1009:31robert-stuttafordhey vijay simple_smile#2015-12-1009:32vijaykiranHi Robert!#2015-12-1009:32robert-stuttafordyeah request-index returns immediately. 
you can wait for the deref of http://docs.datomic.com/clojure/#datomic.api/sync-index to return#2015-12-1009:33robert-stuttaford(do (d/request-index conn) @(d/sync-index conn (d/basis-t (d/db conn))))#2015-12-1009:33robert-stuttafordsomething like that#2015-12-1009:33josephright#2015-12-1009:34robert-stuttafordlet me know how it goes, i am curious to learn from your use-case simple_smile#2015-12-1009:36josephno problem:grinning:#2015-12-1009:39josephtesting now, but I am a little unclear about the reason to increase the index threshold instead of decreasing#2015-12-1009:40robert-stuttafordwell, if you’re manually controlling when you index, then doing so is only necessary to prevent it from indexing before you’re ready#2015-12-1009:40robert-stuttafordit might start indexing before you’re done transacting all the datoms for a variable#2015-12-1009:43josephhmm...the transactor crash after first variable's sync-index...#2015-12-1009:45robert-stuttafordstacktrace?#2015-12-1009:50josephok, I tried again, it goes well now, but it's obviously slower than before...#2015-12-1009:50robert-stuttafordi prefer slow and correct to fast and incorrect simple_smile#2015-12-1009:51robert-stuttafordgetting to fast and correct is a matter of tuning, which might not be worth the time investment if you achieve your goal before you get there#2015-12-1009:51robert-stuttafordi say that as someone who’s been there XD#2015-12-1009:52josephyes, of course correct is most important, used to batch around 150 datoms, it also works, and almost the same speed as now...#2015-12-1009:53robert-stuttafordi recommend you reach out to @michaeldrogalis in #C051WKSP3, i think they might be working on some sort of SQL->Datomic ETL tool. 
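The import loop being described here (batch the transactions, then request an index and block until it completes) could be sketched like this, assuming the Datomic peer library; the source of tx data is a placeholder:

```clojure
(require '[datomic.api :as d])

(defn import-variable!
  "Transact one variable's datoms in ~2000-datom batches, then kick
  off an index job and wait for indexing to catch up. Sketch only;
  variable-datoms is whatever seq of assertions your ETL produces."
  [conn variable-datoms]
  (doseq [batch (partition-all 2000 variable-datoms)]
    @(d/transact conn (vec batch)))
  (d/request-index conn)
  @(d/sync-index conn (d/basis-t (d/db conn))))
```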
your case might just be a great test case for them if that happens to be true#2015-12-1009:54josephah....nice, thanks a lot#2015-12-1009:55ping@robert-stuttaford: I’m refactoring a legacy publishing system, and in the midst of experimenting with datomic for it hence the large text requirement.#2015-12-1009:55robert-stuttafordso it is large text blobs?#2015-12-1009:56pingI’m not familiar with text blobs tbh, we were using old mongo#2015-12-1009:56pingand never really concerned too much with text size issue.#2015-12-1009:57robert-stuttafordhow big is your biggest string?#2015-12-1009:57robert-stuttaford10s of kb? 100s of kb? 1s of mb?#2015-12-1009:58pingaverage around 20kb#2015-12-1009:58robert-stuttafordoh you can stick that in Datomic no problem#2015-12-1009:59pingreally? 😄 yay#2015-12-1009:59pingbut why the warning about 1k string limit?#2015-12-1009:59pingmaybe those are old version#2015-12-1009:59robert-stuttafordwe’re using strings of that size and it’s totally fine#2015-12-1009:59robert-stuttafordhow many records are you talking?#2015-12-1010:00pingat what size should I be concerned?#2015-12-1010:00pingin context of mongo’s document, not a lot, around 800k docs so far#2015-12-1010:03robert-stuttafordok.
so, the pressure that large strings put on the system is that indexing takes longer, and fewer datoms are stored per index segment, which means more have to be retrieved from storage when satisfying a query#2015-12-1010:03pingahhh#2015-12-1010:04robert-stuttaford@bkamphaus and @luke can both comment with more detail than that#2015-12-1010:04pingthat makes sense.#2015-12-1010:04robert-stuttafordso it’s not like it’s a boolean GOOD or BAD; it’s a slow degradation as your size and volume increases#2015-12-1010:04pinggot it.#2015-12-1010:05robert-stuttafordpersonally i think you might consider spending a day writing a migration and put a whole bunch of data in and write some queries, and see how it all feels.#2015-12-1010:05pingyeah that’s what I am thinking too#2015-12-1010:06pingI have never heard of ppl trying to build something like https://medium.com backed by datomic#2015-12-1010:06robert-stuttafordmy biased opinion is that the benefits Datomic will bring you will far outweigh any perf costs you might pay. and, there are ways to deal with it if you do find that the perf pressure is too great (put strings in KV store, store keys in Datomic)#2015-12-1010:06pingthen again, I have not really come across datomic being positioned as some kind of all-purpose backend storage#2015-12-1010:07robert-stuttafordwe use it for everything simple_smile#2015-12-1010:07pinginteresting.
My plan B is to keep storing large text in mongo docs, and have datomic refer to it by key/id#2015-12-1010:08pingbut of course, that would mean 2 backend storages to maintain and possibly n+1 queries#2015-12-1010:08pingto get the text data.#2015-12-1010:09pingCurious, are you using dynamodb or mix of dynamodb and other db?#2015-12-1010:09robert-stuttafordwe use DDB as a Datomic backend only#2015-12-1010:09robert-stuttafordno other storages or direct-use dbs, aside from Redis as a post-query cache for some hot pages#2015-12-1010:09robert-stuttafordwe used memcached as a 2nd tier cache for Datomic as well#2015-12-1010:11pinggot it, thanks for your input. very helpful. simple_smile#2015-12-1010:11robert-stuttaford100%, happy to assist#2015-12-1011:44joseph@robert-stuttaford: I am a little bit confused about more peers. because I ran into one situation that read around 1 million datoms' values from datomic, and the query failed every time; I figured that's because the result is too big and runs out of memory.
So I am thinking of if more peers will help it?#2015-12-1011:50robert-stuttafordthe result set of a Datalog query has to be able to fit into memory#2015-12-1011:50robert-stuttafordthe datoms under consideration do not have to, but if they don’t, you’ll have cache churn as it cycles index segments in and GC cleans up#2015-12-1011:51robert-stuttafordyou can, however, lazily walk over the datoms yourself, building up some sort of a result#2015-12-1011:51robert-stuttafordthis talk has stuff about doing that <http://www.infoq.com/presentations/datomic-use-case>#2015-12-1011:51robert-stuttafordthe relevant api is http://docs.datomic.com/clojure/#datomic.api/datoms#2015-12-1011:53robert-stuttafordthis way you can lazily walk your million datoms, performing functional transformations (filtering, mapping, reducing, etc), and either arrive at an end result, which still has to fit into ram, or do some sort of processing and commit results to some sort of I/O so that your results no longer need to fit into ram#2015-12-1011:53robert-stuttafordi hope all that makes some sense simple_smile#2015-12-1011:58josephyes, that's also what I am thinking of, the limited ram is one problem, but these days I read some info about the coordination among peers, and get confused about if more peers can help...#2015-12-1012:00robert-stuttafordthe fact that you can hold on to a database value indefinitely solves the timing problem#2015-12-1012:01robert-stuttaforddoesn’t matter how long the query phase takes, you don’t have to worry about the database changing on you#2015-12-1012:01robert-stuttafordthis allows you to perform all the work on a single peer, in a lazy-sequence fashion, perhaps parallelising some of the work along the way#2015-12-1012:02robert-stuttafordi point you at http://onyxplatform.org again, as it’s built for precisely this sort of work coordination#2015-12-1012:03josephthanks, that's very helpful#2015-12-1012:04josephbtw, the experiment fails for the same reason, and the
strange thing is there is neither error nor warn in the log#2015-12-1012:04robert-stuttafordtransactor unavailable?#2015-12-1012:04josephyes,#2015-12-1012:04josephhere is my logback.xml config file:#2015-12-1012:04joseph<logger name="datomic.transaction" level="DEBUG"/>

<!-- uncomment to log transactions (peer side) -->
<logger name="datomic.peer" level="DEBUG"/>

<!-- uncomment to log the transactor log -->
<logger name="datomic.log" level="DEBUG"/>

<!-- uncomment to log peer connection to transactor -->
<logger name="datomic.connector" level="DEBUG"/>

<!-- uncomment to log storage gc -->
<logger name="datomic.garbage" level="DEBUG"/>

<!-- uncomment to log indexing jobs -->
<logger name="datomic.index" level="DEBUG"/>
#2015-12-1012:04robert-stuttafordcheck the transactor's logs - anything in there?#2015-12-1012:05robert-stuttafordi would uncomment the last one indexing jobs#2015-12-1012:05robert-stuttafordi have to go. good luck simple_smile#2015-12-1012:05josephok, thanks again#2015-12-1117:35statonjrGreetings! I’m getting an “Error communicating with HOST” message after my transactors died and rebooted on AWS. The IP address in the error message no longer exists because the transactors came back on new IP addresses. It appears that the transactors did not write their new IP addresses to storage (DynamoDB) and the app is getting the wrong IP address. How do I check what IP address is stored in Dynamo?#2015-12-1117:35statonjrFWIW, I’m also unable to connect in the REPL.#2015-12-1117:36alexmiller@bkamphaus: ^^#2015-12-1117:42statonjrThanks, @alexmiller!#2015-12-1117:44statonjrAnother interesting note: this only happens on one of our production apps. The other production apps connect as expected.#2015-12-1119:27statonjrQuick update: In the REPL, I see the following error:#2015-12-1119:28statonjrMight be an SSL issue?#2015-12-1119:28statonjrhttp://docs.datomic.com/storage.html#2015-12-1119:58bkamphaus@statonjr: if the transactor can’t write its ip address, it should be failing. Are the transactors staying up?#2015-12-1120:05statonjrTransactors are staying up. Both fell down on Thursday evening, but came right back up and have been up ever since.#2015-12-1120:07bkamphaus@statonjr You can run datomic.peer/transactor-endpoint (side note: diagnostics tool, not stable api, so don’t use outside of this intended purpose) to sanity check what the current transactor endpoint is. If peer can’t connect with that error, endpoint is what you expect, and transactors are fine, check to see if anything about security groups, etc. changed? That’s what the issue looks like on the surface at present.#2015-12-1120:09statonjrThe app is pointing at the wrong endpoint for some reason. 
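The lazy datom walk robert-stuttaford described earlier (d/datoms instead of a Datalog query, so the full result set never has to sit in RAM at once) might look like this; :order/total is a hypothetical attribute and the Datomic peer library is assumed:

```clojure
(require '[datomic.api :as d])

(defn sum-order-totals
  "Reduce lazily over every :order/total datom rather than collecting
  a query result set in memory. Sketch only."
  [db]
  (transduce (map :v) + 0 (d/datoms db :aevt :order/total)))
```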
Also using the previous transactor version. Probably a deploy error on our side.#2015-12-1120:19statonjr@bkamphaus: Is :version the transactor version or the peer version?#2015-12-1120:32bkamphaus@statonjr: for the map returned by transactor-endpoint, it’s the transactor version.#2015-12-1120:32statonjrMakes sense. Thanks.#2015-12-1120:53statonjr@bkamphaus: Our staging environment works with the previous version. Only the host is different.#2015-12-1120:53statonjrWe’re checking security groups, but we haven’t changed anything there recently.#2015-12-1120:57statonjrThe :host version is incorrect and has been since the transactors went down. When they came back up, it appears that they either failed to write the IP addresses or they did write their IP addresses and DynamoDB failed somewhere.#2015-12-1121:45statonjr@bkamphaus: Fixed. We rebooted the transactors and then the app. Our app has this code: (def conn (delay (d/connect url))) that I think does some caching somewhere. I’m going to investigate, but we’re back.#2015-12-1121:47bkamphaus@statonjr: glad to hear you’re back. I wouldn’t expect transactors failing to write their IP address to be an issue - they should be writing their hostname and alt-host (assuming you’re using our cf tools or your own bootstrap logic, they should get that when machine goes up), and the IP is written as part of heartbeat. I.e. if they can’t write and read it, they would experience heartbeat failures and go down.#2015-12-1121:50sdegutisWhat is the meaning of added? in [entity attribute value transaction added?] ?#2015-12-1121:50statonjrMakes sense. BTW, after we rebooted the transactor but before bouncing the app, we ran datomic.peer/transactor-endpoint and could see the new IP addresses, which matched our EC2 instances. When I created a connection with (d/connect url), I could connect and run queries. 
When I tried to use the delay above, it failed and showed the old :host IP.#2015-12-1121:50statonjrStrange.#2015-12-1121:52bkamphaus@sdegutis: added distinguishes assertions from retraction. true for assert, false for retract.#2015-12-1121:52statonjrOnly after I rebooted the app was I able to connect to the new transactor.#2015-12-1121:52sdegutisOooh. Cool, thanks.#2015-12-1122:18bkamphaus@statonjr: I’m not sure what effect the caching implied by delay should have, but peers should automatically reconnect to a new transactor on failover.#2015-12-1122:19bkamphausat present Datomic will cache the call to conn, so it’s probably not necessary to put it in the body of a` delay` (i.e. if you connect twice in same app/peer lib to same database, you’ll get the previous connection).#2015-12-1122:22statonjrI’m not sure, either, and we have Immutant in there, too. I’m going to look closer this weekend.#2015-12-1122:22statonjrAt least I have a stack trace to look at!#2015-12-1123:35davebryandIs it expected behavior that if I have a transactor A on AWS with ddb storage table X, starting up a second transactor B, pointing at ddb table X, will crash transactor A?#2015-12-1201:59bkamphaus@davebryand: if you don’t have a full pro version which supports [HA](http://docs.datomic.com/ha.html), yes.#2015-12-1202:00bkamphausWith an HA configuration, instead of transactor B barging in (at which case transactor A sees it’s no longer got a unique, current claim on heartbeat), transactor B will start in standby mode where it will only take over if transactor A misses two heartbeats (plus a little tolerance).#2015-12-1202:00bkamphausWith pro starter/free, transactor B doesn’t enter standby mode, so it claims heartbeat at which point transactor A self destructs.#2015-12-1204:41davebryandBingo—great explanation, thanks @bkamphaus#2015-12-1204:42davebryandLoving Datomic, btw. Great work by the whole team#2015-12-1215:38laforge49laforge49 [2:04 PM]
Newbie datomic question. Getting started.
C:\datomic-pro\datomic-pro-0.9.5344>bin\groovysh.cmd
Dec 09, 2015 2:01:15 PM java.util.prefs.WindowsPreferences <init>
WARNING: Could not open/create prefs root node Software\JavaSoft\Prefs at root 0x80000002. Windows RegCreateKeyEx(...) returned error code 5.#2015-12-1215:38laforge49So I found and tried this: https://stackoverflow.com/questions/16428098/groovy-shell-warning-could-not-open-create-prefs-root-node#2015-12-1215:39laforge49Hopefully this will be the end of it. I'm running windows 10, which might have been part of the problem. simple_smile#2015-12-1216:13roelof@laforge49: why open a groovy shell? When I did the beginners tutorial I did everything in repl#2015-12-1216:13roelofI opened getting_started.clj in my ide#2015-12-1216:15laforge49Just following the tutorial. As for opening a repl in my IDE, I'm using cursive and frankly have never opened a repl that way--I use lein.#2015-12-1216:15roelofoke,#2015-12-1216:15roelofI can help you with that easily#2015-12-1216:15roelofIm also a cursive user#2015-12-1216:15laforge49cool#2015-12-1216:15roelofshall we do it in private#2015-12-1216:16laforge49could you just post a pointer? I'm trying to renew my health care coverage right now. 😄#2015-12-1216:16roelofit's only 5 - 6 steps#2015-12-1216:17roelofif you do want to do it later it's also no problem#2015-12-1216:17laforge49On hold at the moment but I may be torn away at any point.#2015-12-1216:17laforge49ok, sounds better.#2015-12-1216:17laforge49ttfn#2015-12-1216:17roelofcan be that im then making or eating dinner#2015-12-1216:17laforge49😄#2015-12-1216:17roelofit's now dinner time in the Netherlands#2015-12-1216:18laforge49not a real worry.
can probably find something on the web and if not, post something to the clojure chat.#2015-12-1216:18laforge49--it was a few days back I started digging into datomic and hopefully today I can get back to it.#2015-12-1218:17robert-stuttaford@laforge49: see the pinned items in this channel; one of them is some getting-started recipes from the clojure cookbook simple_smile#2015-12-1220:36bkamphausA lot of people miss this on the tutorial page, but the examples are provided in Clojure and Java in the datomic directory as well - e.g. (assuming latest version directory name) datomic-pro-0.9.5344/samples/seattle/getting-started.clj#2015-12-1313:25gerritwhat is the recommended way of storing something like clj-time/local-date in datomic? Convert it to a java.util.Date and then convert it back when reading the attribute? could that be done generically/centrally?#2015-12-1315:52robert-stuttafordDatomic only knows how to store java.util.Dates, so you’ll have to convert to/from those as appropriate. this is pretty seamless with clj-time.coerce/to-date and /from-date#2015-12-1316:06gerritso you typically convert data from/to your domain manually or do you stick something like prismatic schema coercion in between somewhere? I like it a lot how datomic allows one to store domain data directly without mapping it before and after, and would love to extend that to other types#2015-12-1321:37mishagreetings!
what is the idiomatic(?) way to represent ordered collections in datomic?
items in a collection do not have a "natural" index attribute, and should not know about their order in a collection.
collection, on the other hand – should know the order items are in.
something tells me, that:
- keeping extra entity just for that {:id wrapper-id2 :idx 4 :item item-id1},
- and storing collection of such wrappers {:items #{wrapper-id1 wrapper-id2}}, rather than the actual items {:items #{item-id1 item-id2}} - might be an overkill.
is it?#2015-12-1321:55alexmillerI think linked list is common too#2015-12-1322:02misha@alexmiller: keeping knowledge of the next item in the current one?#2015-12-1322:04misha@alexmiller: btw, reading "clojure applied" - so smooth, thank you.#2015-12-1322:09mishaok, I see, linked lists introduce 2 extra levels in between the actual collection and its items. 😕#2015-12-1322:10misha(https://github.com/dwhjames/datomic-linklist)#2015-12-1322:28mishagreetings!
global uuids
say, you have 2 entity types: bus and route.
which is more preferable:
1. having both entities sharing :global/id attribute
{:db/id 2 :global/id uuid2 :bus/model ...}
{:db/id 1 :global/id uuid1 :route/destination ...}
2. or each have global id under entity's namespace:
{:db/id 2 :bus/id uuid2 :bus/model ...}
{:db/id 1 :route/id uuid1 :route/destination ...}
I foresee a tradeoff between "(not)dealing with irregular entity attribute names" and "more(shorter)/less(larger) datalog queries".
Does one of them win in common case?#2015-12-1402:19bostonaholicit really depends on how you're going to query them#2015-12-1402:20bostonaholicif you have a uuid and you know you want a bus, then maybe the latter is better#2015-12-1402:20bostonaholicbut if you have a uuid and you don't know if you want a bus or a route then the former is probably what you want#2015-12-1402:27bostonaholic@misha: also consider using squuids for better indexing of uuids -> http://docs.datomic.com/identity.html#sec-6#2015-12-1402:53misha@bostonaholic: using squuids, yes. No idea about the queries, beyond pull api yet, working through this now.#2015-12-1402:59mishaIn my case, those guids are secondary, as I need them only (for now) to maintain relationships between entities during export/import db data, and to have common id (ref) between ui (datascript) and be (datomic(s)). Most of the other things I thought of – will/can be covered with pull api or other attributes.#2015-12-1405:18robert-stuttaford@misha: if you're ever going to seek using this uuid, you should make schema for each entity type. otherwise your queries will have larger datasets to seek through#2015-12-1407:37domkm@robert-stuttaford: What do you mean by "seek?" Does that include using a global id in a lookup ref?#2015-12-1409:21robert-stuttafordyeah. it has to scan through the AVET index to find your value when you use lookup refs or any [?unbound-id :attr ?bound-value] datalog clause. using semantically assigned attrs lessens the size of that seek space.#2015-12-1413:12ustunozgurthis might be a dumb question, but why doesn't datomic have an edn or json type? or does it? sometimes I want to tuck away some random bag of data to an attribute of an entity, do I have to convert that to a string and save it that way?#2015-12-1413:17ustunozguris datomizer an answer for this?
https://github.com/GoodGuide/datomizer#2015-12-1415:07robert-stuttafordjust pr-str when transacting, and clojure.edn/read-string when reading#2015-12-1415:07robert-stuttafordwe do this#2015-12-1415:07robert-stuttafordworks great#2015-12-1415:08curtosisI would think that depends on the scope of "random bag of data". At some point you're working against datomic, no?#2015-12-1415:08robert-stuttafordyes. large strings do create performance pressure on Datomic#2015-12-1415:09robert-stuttafordwe use it for very small edn blobs, under 1k, where only the client-side consumer cares about it#2015-12-1415:10robert-stuttaforda trade-off decision against making unnecessary schema for stuff we’ll never query against or call on directly#2015-12-1415:22curtosisgood, my understanding is relatively accurate simple_smile#2015-12-1415:28curtosisworth highlighting that that's clojure.edn/read-string, not clojure.core/read-string.#2015-12-1415:30robert-stuttafordyes, that’s important; the former does not evaluate Clojure code#2015-12-1417:14val_waeselynckis there a commonly accepted name for databases like Datomic in the academic world? 
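The pr-str / clojure.edn/read-string round-trip described above can be sketched as follows. This is a minimal sketch: the :widget/meta attribute is hypothetical, and `conn` is assumed to be a live Datomic connection with that attribute already in the schema.

```clojure
(require '[clojure.edn :as edn]
         '[datomic.api :as d])

;; Transacting: serialize a small EDN blob to a string first.
;; :widget/meta is a hypothetical :db.type/string attribute.
@(d/transact conn [{:db/id (d/tempid :db.part/user)
                    :widget/meta (pr-str {:color "red" :sizes [1 2 3]})}])

;; Reading back: clojure.edn/read-string, NOT clojure.core/read-string,
;; because the edn reader never evaluates code embedded in the string.
(edn/read-string (pr-str {:color "red" :sizes [1 2 3]}))
;; => {:color "red", :sizes [1 2 3]}
```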
It seems that 'Functional Database' is already taken for something else: https://en.wikipedia.org/wiki/Functional_Database_Model#2015-12-1417:22bhagany@val_waeselynck: I think of it as a variant on these: https://en.wikipedia.org/wiki/Triplestore#2015-12-1417:23bhaganyor these: https://en.wikipedia.org/wiki/Entity%E2%80%93attribute%E2%80%93value_model#2015-12-1417:43val_waeselynck@bhagany: I'm actually more interested in the 'database as a value' quality, not the data schema#2015-12-1417:44bhaganyah, I don't have a good comparison for that part of it#2015-12-1417:45val_waeselynckI'll go ask on the mailing list, the guys in the Datomic team have probably done that research simple_smile#2015-12-1419:43robert-stuttafordthe marketing site seems to focus on 'immutable database'#2015-12-1419:47bkamphausWhile I’m not going to add anything definitive, I’ll note you have to be cautious when pulling terminology from the literature or generalizing from terminology we or others use like “deductive database”, “universal schema”, “triple store”, “append only”, “accumulate only”, etc. — because a lot of the assumptions/typically included components with those models historically largely do not match Datomic’s architecture as a whole.#2015-12-1419:51bkamphausThe literature itself isn’t always crystal clear on what a lot of these are precisely, so it’s not just Datomic per se. I think the problem implied by the questions, i.e. something like “If I only knew what Datomic’s data model was called precisely I could find an article on how to design my schema to represent X”, isn’t necessarily solvable on those terms, if that makes sense.#2015-12-1419:52robert-stuttafordso, Datomic is a chimera of many good ideas from many places simple_smile#2015-12-1421:36ljosaI have started getting exceptions like these in my peer: HornetQNotConnectedException: HQ119006: Channel disconnected
HornetQNotConnectedException: HQ119010: Connection is destroyed
ExceptionInfo: Error communicating with HOST 10.43.180.240 on PORT 4334
HornetQInternalErrorException: HQ119001: Failed to create session
IllegalStateException: Connection is null
HornetQInternalErrorException: HQ119001: Failed to create session
This is a peer that does a couple of large queries, then does some computation and saves the result to disk. It does not write anything to datomic. The queries work fine, and in fact the computation is able to finish after the exceptions appear in the log. Any idea what's going on?#2015-12-1422:31misha@ljosa: garbage collection? did you try to increase peer's memory?#2015-12-1422:37ljosayou're thinking that the connection object was GCed?#2015-12-1422:58mishaIf computation and queries were large enough – it might have been.
or maybe the host is just unreachable, as the ExceptionInfo suggests.#2015-12-1500:58zentropeIs there any way to store “ordered” values on a card/many attribute?#2015-12-1501:00zentropeOr do you have to add an entity with an :entity/order attribute?#2015-12-1504:38domkm@zentrope: Nope, card many is unordered.#2015-12-1508:08robert-stuttaford@bkamphaus: what method can i use to identify which transactor in a HA pair is the primary?#2015-12-1508:52misha@zentrope: https://clojurians.slack.com/archives/datomic/p1450042662000836#2015-12-1513:06josephSince DynamoDB is based on Amazon's cloud service, we do not want to save the data there. Although DynamoDB has a local version, that's obviously not recommended for production. So apart from DynamoDB, which storage server is recommended or performs better considering the load balancing of peers?#2015-12-1513:24robert-stuttafordyou can use clustered postgresql, or the fault tolerant versions of any of the storages available, basically#2015-12-1513:27joseph@robert-stuttaford: thanks, robert, I read this article: https://martintrojer.github.io/clojure/2015/06/03/datomic-dos-and-donts/; it says DynamoDB is the best choice. Since we've decided not to use it, what do you think of Riak?#2015-12-1513:28stuartsierraI am not aware of any testing with Datomic on clustered PostgreSQL.#2015-12-1513:28robert-stuttafordi’ve only got experience with postgres (a single node) and ddb.
never used the rest#2015-12-1513:28robert-stuttafordah, thanks for correcting me stuart.#2015-12-1513:29stuartsierraIt might work (clustered Postgres), I just don't know if anyone has tried it.#2015-12-1513:29stuartsierraSupport for SQL as a storage engine is mostly targeted at organizations that already have substantial investment in SQL infrastructure.#2015-12-1513:30robert-stuttafordthat’s as i understood it as well - which i took to mean 'at scale'#2015-12-1513:33josephfrom wiki: https://en.wikipedia.org/wiki/Riak, Riak implements the principles of AWS's DynamoDB paper#2015-12-1513:36stuartsierraPerformance of Datomic will be roughly similar on any of the distributed storage engines. Cassandra is probably the most popular distributed storage among Datomic users, after DynamoDB.#2015-12-1513:36stuartsierraWhichever one you choose, read the Datomic docs carefully — improperly configured storage can lead to data loss.#2015-12-1513:37joseph@stuartsierra: Thanks for the recommendations and advice.#2015-12-1514:11statonjrWe briefly used clustered Postgres before switching to DynamoDB. We had no issues with Datomic, only with managing clustered Postgres.#2015-12-1514:12statonjrWe also tried Riak. Again, no issues with Datomic. Managing 5 Riak nodes was difficult for us.#2015-12-1514:24joseph@statonjr: Thanks for sharing your experiences.#2015-12-1514:34curtosisthe advice shared at Datomic Conf this year -- which seems both eminently wise and supported by comments here -- is to use the storage you already know. There is probably some performance differential in the limit, but operational issues with a new storage backend are far more likely to bite you, and more badly (= data loss).#2015-12-1515:00joseph@curtosis: OK, I will keep it in mind, thanks for the advice.#2015-12-1518:25davebryanddoes anyone know of a hosted version of Datomic? I love it but would prefer not to take on the ops overhead.
Looked around and couldn’t find anything out there#2015-12-1518:32ustunozgurdoes the license allow it?#2015-12-1518:40alexmillerno#2015-12-1519:39davebryandIs this something Cognitect would think about providing at some point?#2015-12-1519:46bkamphaus@davebryand: we are aware of the request for a hosted Datomic. I’ll note that there’s additional interest being expressed in it (you’re welcome to add your voice to the group post as well). https://groups.google.com/d/msg/datomic/Vcx7LtaK65U/-rVfFLrILPQJ#2015-12-1519:48bkamphausIn the mean time, happy to gather feedback on whether you think the ops overhead looks intimidating for intrinsic reasons (i.e. just running all the pieces), or if there are specific things unclear in docs, etc. that you’ve had trouble with.#2015-12-1519:48davebryandthanks ben—just posted#2015-12-1519:51davebryandAfter all is said and done, I probably will end up taking on the ops overhead because I truly love Datomic.#2015-12-1519:54davebryandI had a bunch of trouble where I didn’t understand the side effects of things like ensure-transactor, for instance. When I initially ran it, I didn’t have an s3 log bucket uncommented. I got everything up and running and then decided I wanted one. Here are my notes from that experience.#2015-12-1519:56davebryandBasically my transactor just kept getting terminated and restarting and I had no logs to look at. It would also have been helpful to SSH into the transactor, but SSH appears not to be enabled on that AMI#2015-12-1519:56bkamphausright, the AMI can’t be ssh’d into.
Logs can be helpful, and a lot of problems can be addressed with metrics, though of course if the problem is getting started in the first place, logs won’t rotate and metrics won’t be put.#2015-12-1519:57davebryandAnyway, it was a rocky few days to get things running but I think it’s stable now#2015-12-1519:57davebryandright, no logs, no help#2015-12-1519:57bkamphausNot sure if you reviewed this section of the docs, but the manual setup walks through everything ensure does for you: http://docs.datomic.com/storage.html#automated-setup#2015-12-1519:57bkamphausIt handles role, transactor, and permissions for table and s3 access but doesn’t create the bucket.#2015-12-1519:58bkamphausIn terms of where the boundaries are (i.e. if it’s not listed in manual setup there, ensure-transactor doesn’t do it for you).#2015-12-1519:58davebryandNet net is that I think Datomic is the PERFECT database for most startups to use and I want less friction for my friends to try it simple_smile#2015-12-1519:58davebryandahhh, that’s super helpful—thanks#2015-12-1519:58davebryandI did see that but didn’t read it closely#2015-12-1519:59bkamphausBut yeah, rolling your own stuff on AWS is definitely tricky, and navigating the docs for the different possibilities AWS + Dynamo, AWS + something else, on site + something else, mix and match AWS and Heroku, etc. — all of the different blends out there — is tricky.#2015-12-1520:14davebryandOf course now that I’m on the other side of the experience it feels much less tricky, but yes, it was. Thanks for your help!#2015-12-1520:44danielcompton@bkamphaus: does the Datomic license prohibit third parties providing hosted datomic?#2015-12-1520:49bkamphausyes#2015-12-1520:50tcrayfordrunning hosted databases is hard work#2015-12-1520:51tcrayford(it's not actually as hard as you might think though. We have much better automation than the average devops person at a startup)#2015-12-1521:06tesseractis datomic easy to administer on heroku?
i.e. 3 commands and you’re up and running, like heroku postgres?#2015-12-1521:07tcrayfordnoooooooooooooooooooooooooooooo#2015-12-1521:07tesseractwell I guess I have my answer simple_smile#2015-12-1521:09tesseractthat’s good to know. Is there any plan for it in the future? Or what’s the easiest way to administer it with “no hands”#2015-12-1521:09tcrayfordscroll up, but there's no currently available option for that#2015-12-1521:10tcrayfordthe license prevents third parties from doing that for cognitect, and they haven't released or publicly talked about doing hosted datomic at all#2015-12-1521:41magnarsWith all this talk about storage, there's one option I rarely see mentioned: the file system. Given that you take backups, is there a good reason to avoid file storage? We're not on AWS, so should I set up Cassandra just to have a "real" data store?#2015-12-1602:22ljosaI didn't see Couchbase mentioned. It is at least easy to set up. Any downsides to Couchbase for use with Datomic?#2015-12-1609:21afhammadFor those running on Datomic Pro Starter/Free, how far has it taken you? What kind of load can it handle? (apologies if this has been asked a hundred times)#2015-12-1610:14raymcdermott@tesseract: I have a blog post about running Datomic on Heroku which explains that answer in more detail and offers hope for how it can be made easier http://blog.opengrail.com/jekyll/update/2015/11/19/datomic-heroku-spaces.html#2015-12-1614:10tesseract@raymcdermott: so just a few commands and up and running?
not too bad#2015-12-1616:01raymcdermott@tesseract: yes that’s right … although like I say you have to have an enterprise account and access to the Private Spaces beta so there are more barriers than usual#2015-12-1616:03raymcdermottthere are some options to mimic private networking using their standard offerings https://elements.heroku.com/addons#network but YMMV#2015-12-1623:16domkmIs there a way to get the previous t from a t?#2015-12-1623:28domkmAlternatively, is there a way to simulate as-of returning a db that is exclusive of t (by default it returns a db that is inclusive of t)?#2015-12-1623:34domkm(...without running through the tx-range until finding the tx before the given t.)#2015-12-1705:44magnars@domkm it may not be part of the official API, but (dec t) works. I saw it used first here: http://dbs-are-fn.com/2013/datomic_history_of_an_entity/#2015-12-1705:45domkm@magnars: ts are not sequential (on my system).#2015-12-1705:45domkmI wish they were 😞#2015-12-1705:46magnarsthey don't have to be - (as-of (dec t)) works even if there is no such t as (dec t)#2015-12-1705:50domkm@magnars: (tx-range log (dec t) nil) provides the same range as (tx-range log t nil) if (dec t) isn't a real t. It rounds up.#2015-12-1705:51domkm...with the caveat that I am using the free storage protocol. Maybe other ones work differently, though I hope they would be consistent.#2015-12-1705:51magnarsThat's strange. The code in http://dbs-are-fn.com/2013/datomic_history_of_an_entity/ shouldn't work then. Maybe it no longer does.#2015-12-1707:55robert-stuttafordit does round up.
the only real way to find a previous t is to iterate dec, testing d/t->tx until you find one, @domkm#2015-12-1707:55robert-stuttafordpersonally i think it’s annoying that the api doesn’t provide an easy way to traverse time backwards like this#2015-12-1707:55robert-stuttafordwe do an activity stream (most recent first) and had to deal with this too#2015-12-1707:57domkmThanks @robert-stuttaford#2015-12-1714:23joseph@robert-stuttaford: Hi, robert, just an update on importing the large data set (around 340 million datoms, split into 120 variables) from mysql to datomic: it hasn't finished yet, still processing, but it seems to be going well. I request-index and sync-index after importing each variable's datoms.#2015-12-1714:25josephFound the reason for the previous failure: a peer/transactor timeout error. So I set datomic.peerConnectionTTLMsec and datomic.txTimeoutMsec from the default 10 seconds to 1 minute#2015-12-1714:25josephit's working now#2015-12-1714:25josephBTW, each sync-index call takes around 15-25 seconds#2015-12-1714:33robert-stuttafordthat’s fantastic!#2015-12-1714:47joseph@robert-stuttaford: hmm... it failed with the error java.lang.RuntimeException: HQ119028: Timeout waiting for LargeMessage Body in the peer log
I guess it's because I set memory-index-threshold too high at 256m#2015-12-1714:47josephchanged it to 64m, and this is the rest of my config:#2015-12-1714:48josephobject-cache-max=2g
memory-index-max=4g
memory-index-threshold=64m#2015-12-1714:48josephtry again, and set timeout to 2 minutes#2015-12-1714:48robert-stuttaford@bkamphaus: over to you on this one simple_smile#2015-12-1714:50robert-stuttafordjoseph, are you starting from scratch or resuming where it failed?#2015-12-1714:50robert-stuttafordif you’re not resuming, i would highly recommend you stop and make it so you can resume before doing anything else simple_smile#2015-12-1714:52bkamphaus@joseph: do you have metrics reported on the imports anywhere (i.e. cloudwatch), or alternatively are you at least saving logs where you can grep through them?#2015-12-1714:53joseph@bkamphaus: I am using dev mode, so I just save the logs#2015-12-1714:54bkamphausAll contained on one machine? I.e. same machine is running peer import process, transactor (and its writes to file system via dev)?#2015-12-1714:54joseph@robert-stuttaford: because I don't have any clue where it failed when importing the datoms, so i just started from scratch#2015-12-1714:55josephyes#2015-12-1714:55joseph@bkamphaus: is that a problem?#2015-12-1714:56bkamphausThere will be intrinsic constraints due to processes running on the same box - gc induced pauses, multi-JVM stresses, etc. It will at least introduce a level of unreliability you have to accommodate with increased timeouts, probably increased heartbeat time, and potentially GC tuning on the peer app (G1GC settings similar to transactor from command line).#2015-12-1714:57bkamphausIt won’t be near the volume you’d get of transaction/import using a distributed system with a dedicated transactor box and a dedicated storage. The indexing overhead will be harder to bear, too, without a dedicated transactor.#2015-12-1714:57bkamphausbut the big step back for any of these imports, in terms of assessing machine/system config, approach taken in peer app doing the imports, transactor settings, etc., is what is the target throughput?
In Bytes, Datoms, or transaction count at least?#2015-12-1714:58bkamphausYou can use TransactionBytes or TransactionDatoms sum (grepped from metrics or monitored on cloudwatch) to see how much throughput you’re getting in terms of both (Datomic metrics are reported per minute)#2015-12-1714:59bkamphausSum that over several hours or e.g. day (a long enough time period that you’re capturing the overhead introduced by indexing), and you can assess your current throughput and whether or not it fits your requirements.#2015-12-1715:03bkamphausBut our expectation is that Datomic’s throughput capacity, even when not optimally configured, is sufficient to get you into trouble with database size very, very quickly. I.e. we know that you can transact more than 1 billion datoms a day with a well tuned but not particularly rigorously optimized system and a fast, well provisioned storage, and that you start incurring more performance costs for ongoing operations over 10 billion datoms in size.
Note these numbers are soft estimates based on hardware/networking practicalities, etc. and just a small snapshot, not a definitive statement on any actual limits or discrete performance change or level.#2015-12-1715:38joseph@bkamphaus: hmm, thanks for the very informative advice. I never thought that running the peer and transactor on the same machine could cause potential JVM and GC problems. I will try to put them on different PCs. Since we will not save the data in the cloud, so far we haven't used any of Amazon's services and products.#2015-12-1715:40josephBut I am thinking of using a real storage backend instead of just dev mode. Hope that would improve the performance#2015-12-1716:07curtosis@bkamphaus: double-checking that I've understood this: essentially, throughput is almost never a problem itself except that it can mask the size-related performance issues the high throughput enables?#2015-12-1716:10bkamphaus@curtosis: to be a little more precise, I’m commenting that any high level of sustainable Datomic throughput is fairly simple to achieve.#2015-12-1716:11bkamphausstruggling to really push optimization with throughput, unless you already have e.g. a rotating time or domain based sharding strategy or something, indicates that your data may be too large to be a good fit with Datomic.#2015-12-1716:12curtosisah, I see. the "inverted" perspective is a better one.
"If you're having throughput problems, that's a good sign you're gonna have a bad time for other reasons" (modulo mitigation strategies).#2015-12-1716:14bkamphausRight, the “I want to put 1 billion datoms in a Datomic db a day” problem indicates you should step back and say “Is Datomic a good fit for 365 billion facts?” Today, probably not so much (modulo mitigation strategies) — I like that phrasing simple_smile#2015-12-1716:15curtosissimple_smile#2015-12-1716:15curtosisand from an architect's perspective I think I'd be inclined by "mitigation strategies" to mean, in part "confidence your domain is naturally (or at least sanely) shardable/rotatable"#2015-12-1716:17bkamphausBut import tuning, etc. is worth making sure you’re taking reasonable first steps, tuning, distributing the system correctly, etc. I’m not trying to dissuade anyone from perf tuning their imports. Just run through this sanity check list:
1. Know what your current throughput is (and how to measure it).
2. Know what your throughput target/requirements are.
3. Know what their implications are and if they’re compatible with your use of Datomic in general.#2015-12-1716:17stuartsierra1 billion datoms/day is ~11,500 datoms/second.#2015-12-1716:18bkamphausI see the problem shape come up often of “What are you trying to put into Datomic?” “everything that’s coming in from somewhere else”, “how much is that?” I don’t know”, “how fast is it going in now?” “I don’t know”, “how fast do you need it to be?” “faster” “What problem does that solve?” “It’s not going fast enough” … simple_smile just want to tease out the salient details.#2015-12-1716:18curtosislol. I'm very familiar with this sequence of questions/answers.#2015-12-1716:19curtosisfrequently reduced to just the last two.#2015-12-1716:20stuartsierra10 billion datoms/year = 317 datoms/second average#2015-12-1716:24curtosisfor comparison, you can hit 10k+ triples/sec on a tuned high-performance triplestore. But it's not exactly in the same problem space as Datomic. And your query throughput will be very different.#2015-12-1716:25stuartsierra@curtosis: yes, and that's a peak burst rate. How many systems can sustain 10k triples/second 24 hours per day for weeks at a time?#2015-12-1716:25curtosis(although, also, I doubt you can sustain that load rate!)#2015-12-1716:25stuartsierra!#2015-12-1716:25curtosisheh.. @stuartsierra same thought track simple_smile#2015-12-1716:31stuartsierraUnfortunately, most database benchmarks only report peak burst rate. It's expensive (and less impressive) to test throughput per week.#2015-12-1716:49curtosisalso easier to optimize for#2015-12-1815:25curtosisis there a best-practice size limit for a single tx? The main scenario I'm thinking of is creating a bunch of entities/datoms in a batch import, which I'd like to keep together. (Sagas would work ok, I'm just looking to understand where the practical line sits.)#2015-12-1815:27curtosisRelatedly, is there a best practice for nested entities? 
I have a use case where the structure is 3 deep, with a fanout of ~1:10:30.#2015-12-1815:29curtosisIt would obviously be easier to just build the nested structure, let Datomic handle the tempids, and send it as one big ("big"?) tx.#2015-12-1815:32bkamphausDatomic is more optimized for transactions ~40k or lower, it can usually survive ok with some transactions in the 100Ks, but I would definitely stay out of 1MB+ territory, which is where you’ll run into problems introduced by exceeding practical limits.#2015-12-1815:39curtosisso it's mostly raw size, not number of datoms? (which are still roughly correlated, yeah...)#2015-12-1815:42curtosisI think most of my current use cases would stay under 10k. I'm just not used to building 10k strings to send to the db. 😉#2015-12-1815:43bkamphausYeah, I would say while datom counts are correlated to size for most data in Datomic, for anything involving blobs/document strings, etc. size concerns dominate datom count concerns (but hopefully you don’t have those values sufficiently large so as to break the correlation too much anyways) simple_smile#2015-12-1815:43curtosisright simple_smile#2015-12-1819:02naomariksilly question: for the free version of datomic does 2 peers on the transactor mean only 2 app servers can synchronize with each other?#2015-12-1916:10davebryandI’m still trying to wrap my head around the best way to model created-by in Datomic. Say that I have a task entity which needs to track who created it, is it idiomatic to have a :task/created-by attribute or should I use an :audit/user attribute on the transaction? Appreciate thoughts...#2015-12-1917:00hkjelsI’m no Datomic guru, but from what I’ve read, the idiomatic way seems to be to add it to the transaction-entity#2015-12-1920:22zentropeYou could have a :user/tasks card/many entity pointing to all the tasks for that user (if it's an "owned" relationship).#2015-12-1920:22zentropeOh, sorry. This discussion is hours old. 
;)#2015-12-1922:32nwjsmithIs there a way to do recursive queries?#2015-12-1922:33nwjsmithI have entities category and categorization, the categorization entity relates a category with a subcategory. I’d like to be able to find all of a category’s ancestors by traversing its categorizations in a query#2015-12-1922:35nwjsmithYou can see the ‘flattened’ query written to find the parent, grandparent and great grandparent of a category. I’d like to generalize that though#2015-12-1923:51davebryandthanks @hkjels and @zentrope — I’m thinking that because the notion of creation of a task is as first-class as the assignment of a task, I’m going to model this as a task attribute with :task/created-by and :task/assigned-to#2015-12-1923:52zentropeSeems reasonable. Associating a user with a transaction itself seems more about auditing on a meta level rather than a normal data model.#2015-12-1923:53zentrope(As in, do both, if "who made the transaction" is valuable.)#2015-12-2000:07davebryandgreat call—I think adding that across every transaction uniformly will pay dividends down the road#2015-12-2001:05tcrayford@nwjsmith: rules#2015-12-2001:09jgdavey@nwjsmith: Just to add on to Tom’s response, the datomic musicbrainz repo has examples of using rules for recursive queries: https://github.com/Datomic/mbrainz-sample/blob/master/src/clj/datomic/samples/mbrainz/rules.clj#2015-12-2001:19zentropeEh? :db.error/tempid-not-an-entity tempid used only as value in transaction. Can't you use a (d/tempid) defined for another entity in the tx when creating a reference attribute?#2015-12-2001:22zentropeOh, I see. The "temp-id" was generated for use in a different transaction.
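zentrope's tempid error can be illustrated with a rough sketch. The attributes :user/name, :task/title, and :task/owner are hypothetical, and `conn` is assumed to be a live connection; the point is that a tempid only resolves within the single transaction that asserts it.

```clojure
(require '[datomic.api :as d])

(def owner-id (d/tempid :db.part/user))

;; OK: the tempid appears in entity position and as a ref value
;; within the SAME transaction, so both rows resolve to one new entity.
@(d/transact conn [{:db/id owner-id :user/name "ada"}
                   {:db/id (d/tempid :db.part/user)
                    :task/title "write docs"
                    :task/owner owner-id}])

;; Throws :db.error/tempid-not-an-entity: here `owner-id` appears only
;; as a value, and it was minted for (and resolved in) a different tx.
@(d/transact conn [{:db/id (d/tempid :db.part/user)
                    :task/title "review docs"
                    :task/owner owner-id}])
```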
That makes sense.#2015-12-2020:11nwjsmith@tcrayford, @jgdavey RULES 💖#2015-12-2122:02firinneHey, not sure if this is the best place to ask this, but I’m running into an issue where whenever I run lein uberjar to try and compile my app, it is trying to connect to datomic (but since I’m using a different host for datomic in production, it can’t reach that url while it is being compiled), should I just be using a (try … catch …) block for creating the db and connecting to it?#2015-12-2123:48firinnethis might be a better question:
Hey there, new to Datomic, Clojure, and ClojureScript as a whole, also learning Om, and have been trying to figure out if there is a set of things a person must learn to get a sort of Hello World live on the web backed by datomic. I imagine some folks new to Clojure may have become interested in datomic, and may desire to deploy things to the web. So far here is what I’ve got
Compojure/Ring: Handle requests — ping the db
(used the om intermediate tutorial for the old om for this, and the book Web Development with Clojure)
Datomic Queries — http://learndatalogtoday.com was the most useful resource; I got some value out of the Day of Datomic material but couldn’t find some answers I felt I needed, since most written stuff seemed to be for Java.
Om next — om next devcards (but honestly om isn’t needed for this project, reagent/re-frame seem to have better documentation so might be better for those starting)
Docker — using Docker’s networks you can name a container to have a particular hostname that other containers will point to — I’m starting with datomic free, since I know how to make that work locally, and want to get this ironed out before going onto datomic pro.
Currently I think I’ve almost got the full stack connected, I’m just trying to figure out the best way to compile an app using lein uberjar without having it try to connect to the new host for my production db on datomic.
Might anyone here have some examples of how one might do that? Or just an explanation of how one uses the Component library to handle db connections during compilation of an uberjar?
Datomic is seriously cool; is anyone teaching beginners how they can build apps all the way to deployment that are backed by datomic? I feel like I can’t be the only person who has spent a while trying to figure this out.#2015-12-2123:50firinne@dnolen: is that more in line with an answerable question?#2015-12-2123:50firinneor rather, I could say simply — is the best/only path to deploying an app backed by datomic learning to use components?#2015-12-2123:51dnolen@firinne: the latter simple_smile#2015-12-2123:51dnolenin general assume that in any particular topic channel folks won’t necessarily know about the tech involved in another topic channel#2015-12-2123:51dnolenso you can drop that stuff since it just muddies the waters#2015-12-2123:53firinnemakes sense — the reason I mention the others is just that there seemed to be no roadmap and I’m unsure which pieces are essential to getting something live#2015-12-2218:03currentoorif i have an organization entity that has many user entities and i want to replace the set of users in an organization how should i do that?#2015-12-2218:04currentoori figured i should retract all the current users in the organization and then transact the new set in#2015-12-2218:04currentooris that the right way to go about it?#2015-12-2218:04stuartsierrano#2015-12-2218:05stuartsierraIf you want to do it in a single transaction, you need a transaction function that compares the old set with the new set and asserts/retracts what changed.#2015-12-2218:06stuartsierraYou can't both retract and assert the same fact in a single transaction.#2015-12-2218:07currentooroh right#2015-12-2218:08currentoorthanks#2015-12-2218:08stuartsierrayou're welcome#2015-12-2218:09currentoorBut why can't I just query for the current set then transact the diff?
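A rough sketch of the kind of transaction function stuartsierra describes, diffing the old and new cardinality-many ref sets atomically inside the transaction. The function name, the :org/users attribute, and the install ident are hypothetical:

```clojure
(require '[datomic.api :as d])

;; Install under e.g. :db/ident :replace-to-many, then call in a tx as
;; [[:replace-to-many org-id :org/users new-user-ids]]. Because the diff
;; runs against the db value the transactor sees, it can't race a
;; concurrent change the way a peer-side query-then-transact diff can.
(def replace-to-many
  (d/function
   '{:lang     "clojure"
     :requires [[clojure.set :as cset]]
     :params   [db e attr new-vals]
     :code     (let [old (set (map :v (datomic.api/datoms db :eavt e attr)))
                     new (set new-vals)]
                 (concat (for [v (cset/difference old new)]
                           [:db/retract e attr v])
                         (for [v (cset/difference new old)]
                           [:db/add e attr v])))}))
```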
Then I won't need a transaction function right?#2015-12-2218:09currentoorOh in case the data has changed in the mean time, right?#2015-12-2218:10stuartsierra@currentoor: yep#2015-12-2218:10currentoorgot it#2015-12-2219:51potetmSo it’s a little late to the game, but I’m having a problem getting that old bug diagnostic tool running.#2015-12-2219:52potetmI’m getting:
Exception in thread "main" com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Requested resource not found: Table: ${TABLE_NAME} not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: ${REQUEST_ID})
#2015-12-2219:54potetmThe command I’m running is:
bin/run -Xmx12g -m datomic.tools.detect-865 "${URI}" /tmp/identities.edn > /tmp/check-index.edn
#2015-12-2219:55potetmI know IAM perms can be a problem here, so I ran the following:
aws dynamodb describe-table --region ${REGION} --table-name ${TABLE_NAME}
#2015-12-2219:55potetmWhich returns successfully.#2015-12-2220:00potetmIn case it’s in question, this is running in AWS using IAM roles on the ec2 instance.#2015-12-2222:32bkamphaus@potetm: that form of invocation with appropriate permissions succeeds fine for me.#2015-12-2222:33potetm@bkamphaus: Yeah we ran it in one VPC and it worked fine.#2015-12-2222:33potetmThis one was run in eu-west-1, and it fails.#2015-12-2222:33potetmIs there another way I could check perms?#2015-12-2222:34bkamphaus@potetm: can a repl from the same machine connect? You can use bin/repl or a script invoked with bin/run on the same instance.#2015-12-2222:35bkamphausif you have enough for a repl to connect + the appropriate describe-table permission, the diagnostic tool should be able to run in detect mode fine.#2015-12-2222:38potetmRunning (d/connect “${URI}”) appears to work fine from bin/repl.#2015-12-2222:39potetmI’m able to call (d/db conn) and get a db instance back.#2015-12-2222:41potetmAnything else I can run to get more info?#2015-12-2222:44potetmAlso don’t mind giving more specifics in a 1-on-1 or via zendesk or whatever. Tried to anonymize what I posted in here.#2015-12-2222:45bkamphaus@potetm: Yeah, can you open a support ticket and I’ll follow through on ZenDesk? I want to verify details that aren’t anonymized if possible. I would suspect what you’re doing should work if you have peer + describe-table permissions (assuming the aws invocation you used is sufficient to verify the permissions for that role, or that you’d see a different error).#2015-12-2222:46potetmGladly.#2015-12-2222:46bkamphausNot sure if a basic sanity check on echo $URI makes sense, assuming you would have checked for problem characters, typos, etc.#2015-12-2222:48potetmYeah I’ve just been filling it in when I type into slack. I’ve checked it a few times w/ a few pairs of eyes.
No luck so far.#2015-12-2222:48potetmI’ll put it into the zendesk submission so you can see it too.#2015-12-2222:55davebryandIf I have something like :task/due-date which represents the date on which something is due, what is the idiomatic type for this? I’m guessing :db.type/instant with the date set to midnight of the day which the task is due. If that’s right, is index-range http://docs.datomic.com/clojure/index.html#datomic.api/index-range the way I’d query against that field? Thanks!#2015-12-2223:07potetm@bkamphaus: done#2015-12-2223:07potetmsimple_smile#2015-12-2223:07bkamphaus@potetm: thanks, I’ll respond on ticket.#2015-12-2223:07potetmthanks so much man#2015-12-2400:01currentoorIf I want all entities of a certain type, what's the correct way to do this? I'm doing the following but it feels wrong.
(d/q '[:find [(pull ?e [*]) ...]
       :in $ ?id-attr
       :where
       [?e ?id-attr _]]
     (d/db conn)
     id-attr)
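(Editor's note: a minimal sketch of an alternative, assuming a Datomic peer with datomic.api required as d and a live conn as in the thread — for large sets, the same "all entities carrying this attribute" lookup can skip query and read the :aevt index directly.)

```clojure
(require '[datomic.api :as d])

;; All entity ids that have a value for `attr`, read straight off the
;; :aevt index. `conn` and the attribute name are assumptions from the thread.
(defn entities-with-attr [db attr]
  (map :e (d/datoms db :aevt attr)))

;; Hypothetical usage:
;; (entities-with-attr (d/db conn) :user/id)
```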
#2015-12-2419:38zentropeI see a doc about rules, but not how to get them into a query.#2015-12-2419:39zentropeAh, a tiny bit buried: First, you have to pass the rule set as an input source and reference it in the :in section of your query using the '%' symbol.#2015-12-2419:39zentropeSuggestion: An actual example of a query with rule right there in the "rules" section?#2015-12-2420:17zentropeSuper cool. Rules to do a "do some of these not-always present attributes contain this string" works really well.#2015-12-2420:18zentropeBy "works well" I mean it's a tiny amount of code.#2015-12-2422:14currentoorIs there a way to parameterize a pull expression inside a query?
I want to do something like:
(d/q '[:find [(pull ?e ?pull-exp) ...]
       :in $ ?id-attr ?pull-exp
       :where
       [?e ?id-attr]]
     (d/db conn)
     :user/id
     [:user/first-name {:user/task [:task/id]}])
#2015-12-2422:14currentoorBut I get the following error:
:db.error/invalid-pull Invalid pull expression (pull ?e ?pull-exp)
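(Editor's note: as zentrope points out later in the thread, the pattern can be passed as a regular input when it is bound to a plain symbol without the ? prefix. A sketch under that assumption, reusing the attribute names from the example above:)

```clojure
(require '[datomic.api :as d])

;; Pull pattern passed as an ordinary input (note: `pattern`, not `?pattern`).
(d/q '[:find [(pull ?e pattern) ...]
       :in $ ?id-attr pattern
       :where
       [?e ?id-attr]]
     (d/db conn)
     :user/id
     [:user/first-name {:user/task [:task/id]}])
```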
#2015-12-2422:20currentoorI have to write several queries like this, and it would be nice to be DRY and avoid writing macros...#2015-12-2422:30currentoorSo I figured out I can work around it with the backquote.
(let [pull-exp [:user/first-name :user/last-name :user/id
                :user/email {:user/task [:task/id]}]]
  (->> (d/q `[:find [(~'pull ~'?e ~pull-exp) ...]
              :in ~'$
              :where
              [~'?e :user/id]]
            (d/db conn))
       (map normalize)))
Not sure if this is an anti-pattern though.#2015-12-2423:01zentropeYou can pass in the pull-spec as a parameter. Is that what you're talking about?#2015-12-2423:02zentrope(d/q '{:find [[(pull ?e pat) ...]] :in [$ pat] ....} (d/db conn) the-pat)#2015-12-2423:02zentropeSomething like that.#2015-12-2423:03zentropecurrentoor ^^#2015-12-2423:04zentropecurrentoor: In your first example, remove the ? prefix from the pattern reference?#2015-12-2423:07zentropeOh, hm. Maybe @currentoor is the alert? Oy. Slack.#2015-12-2423:11currentoorOh does ? have special semantic meaning?#2015-12-2423:11zentropeNot sure. But I don't use it for my patterns and they work.#2015-12-2423:12currentoorYup that worked. Thanks!#2015-12-2423:16zentropePhew! ;)#2015-12-2602:29currentoorIf I create a collection of entities using nesting like:
[{:db/id order-id
  :order/lineItems [{:lineItem/product chocolate
                     :lineItem/quantity 1}
                    {:lineItem/product whisky
                     :lineItem/quantity 2}]}]
And I want to update some of these nested lineItems, what’s the best way to do that?#2015-12-2602:30currentoorShould I just retract the order entity and create a new one with the same entity-id?#2015-12-2602:37zentropeI think you make those line items entities in and of themselves.#2015-12-2602:37zentropeThen you can just assert new facts about them individually, if you want.#2015-12-2602:39zentropeI’d make an attribute {:lineItem/id (d/squuid) } and have that be a :db.unique/identity.#2015-12-2602:40zentropeWhen you make changes, something like {:db/id [:lineItem/id “lkasjdlkas”] :lineItem/quantity 2}#2015-12-2602:41zentrope[{:db/id order-id
  :order/id (d/squuid)
  :order/lineItems [{:lineItem/id (d/squuid)
                     :lineItem/product chocolate
                     :lineItem/quantity 1}
                    {:lineItem/id (d/squuid)
                     :lineItem/product whisky
                     :lineItem/quantity 2}]}]
#2015-12-2602:42zentropeSomething like that.#2015-12-2616:08raymcdermott@currentoor: if you update the properties in existing lineItems, Datomic will take care of it if you send the map through (assuming the entity and component entity IDs are present)#2015-12-2616:08raymcdermottit gets funny if you want to add / delete items and its not so simple then#2015-12-2616:09raymcdermottI wrote a DB function to cope with it (after some advice from @tcrayford here)#2015-12-2616:09raymcdermotthttps://gist.github.com/raymcdermott/57ade5ae671f83d2444e#2015-12-2618:14currentoor@zentrope: Thanks, but lineItems are components entities and I’d like to interact with them as part of Orders.#2015-12-2618:14currentoor@raymcdermott: Thanks, I’ll look into that.#2015-12-2618:17raymcdermott@currentoor: let me know if it’s useful and I will write that blog post simple_smile#2015-12-2618:18currentoorWill do, it looks like exactly what I need!#2015-12-2618:22currentoor@raymcdermott: so how would I use it? Like this?
[:component-crud {:db/id order-id
                  :order/lineItems [{:lineItem/product chocolate
                                     :lineItem/quantity 1}
                                    {:lineItem/product whisky
                                     :lineItem/quantity 2}]}
 :order/lineItems]
#2015-12-2618:23currentoorwhere the value of :order/lineItems has been updated.#2015-12-2618:27raymcdermottvery close but the updated items should have the entity IDs (that were created automatically on insert)#2015-12-2618:28raymcdermottthey come back if you do a pull query on the order-id#2015-12-2618:28raymcdermottitems that do not have an entity ID are treated as novelty#2015-12-2618:29raymcdermottthe general idea is just to send back in the updated map and let the function work it out#2015-12-2618:32raymcdermottI have a small example based on a shopping cart...#2015-12-2618:33raymcdermott(defn- save-new-cart [conn cart]
  ;; New: any embedded skus will be created as component entities
  (let [temp-id (d/tempid :db.part/user)
        tx-data [(assoc cart :db/id temp-id)]
        tx      @(d/transact conn tx-data)
        {:keys [db-after tempids]} tx
        cart-id (d/resolve-tempid db-after tempids temp-id)]
    (d/pull db-after '[*] cart-id)))

(defn- save-updated-cart [conn cart]
  ;; Update: embedded skus will be handled by the DB CRUD function
  (let [tx-data  [[:component/crud cart :cart/sku-counts]]
        tx       @(d/transact conn tx-data)
        db-after (:db-after tx)]
    (d/pull db-after '[*] (:db/id cart))))

(defn save-cart! [cart]
  (let [conn (d/connect uri)]
    (if (:db/id cart)
      (save-updated-cart conn cart)
      (save-new-cart conn cart))))
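(Editor's note: the :component/crud function referenced above lives in raymcdermott's gist; the following is only an illustrative sketch, not the gist's actual code, of what such a component-sync transaction function might do — diff the stored component ids against the incoming map and retract the stale ones, inside the transactor so the read and write are isolated. All names here are hypothetical.)

```clojure
(require '[clojure.set :as cset]
         '[datomic.api :as d])

;; Hypothetical component-sync tx function body. `entity-map` is the updated
;; parent map; `ref-attr` is the :isComponent ref attribute (e.g. :cart/sku-counts).
(defn component-crud [db entity-map ref-attr]
  (let [eid      (:db/id entity-map)
        existing (set (map :db/id (ref-attr (d/entity db eid))))
        incoming (set (keep :db/id (ref-attr entity-map)))
        stale    (cset/difference existing incoming)]
    ;; Retract components dropped from the map, then assert the new state.
    (into (mapv (fn [e] [:db.fn/retractEntity e]) stale)
          [entity-map])))
```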
#2015-12-2618:34raymcdermottthe save functions return the committed updates#2015-12-2618:38currentoorHmm, I was thinking about this a little bit differently. I’d like to only keep track of the outer Order entity and not worry about the entity-ids of the nested lineItems. Instead I can pass new lineItems (inside the order) in and the db-function can lookup the current lineItems and do a data-value based comparison to figure out the additions/retractions needed.#2015-12-2618:40raymcdermottRetractions is tricky that way. How do you mark something as deleted?#2015-12-2618:41raymcdermottadditions would be easy - you would need to provide some key or composite key for the comparisons#2015-12-2618:41zentrope@currentoor That's why I give lineItems unique IDs. When you pull-api the order out, you have all the information you need to construct retractions.#2015-12-2618:41raymcdermottmy thinking was that the data is cohesive by its nature#2015-12-2618:42raymcdermott@zentrope: datomic gives the lineitems IDs by default if it is passed a map#2015-12-2618:42zentropeFor instance, in a web form, when the user "deletes" the line items, you can add that to a "dropped items" structure. Throw the whole thing back to the server and you can just loop through and delete.#2015-12-2618:43zentropeYeah, db/ids or you can make your own as well to avoid relying on those, but either way, doing set math on the data in your client turns out to be reasonably simple.#2015-12-2618:43raymcdermottmy approach is that missing data is retracted but you could keep another structure too#2015-12-2618:44raymcdermott@zentrope: indeed my function just does some set functions#2015-12-2618:44currentoor@zentrope: I see your point but in my use-case the nested entity is an implementation detail that should not be exposed outside.#2015-12-2618:44raymcdermottone advantage of using a function is that the actions are atomic#2015-12-2618:45zentropeYeah. 
I guess the surprising thing is you can just assert a set of values for an attribute and thus replace what was already there. So, db function, or client-side stuff.#2015-12-2618:45currentooryeah I definitely want this to be an atomic function otherwise data can get into a weird state#2015-12-2618:46currentoor@raymcdermott: I think we may be talking about two different things, I’ll go try out my ideas first. I’m probably mistaken about something here.#2015-12-2618:47raymcdermott@currentoor: no worries, let me know how you get on#2015-12-2618:54currentoor@raymcdermott: is there an advantage to using datomic.api/function? I’ve been writing them as regular functions in regular namespaces (requiring them into the transactor).#2015-12-2620:26currentoor@raymcdermott, @zentrope: This is what I had in mind. It appears to be working.
https://gist.github.com/currentoor/dcdfbe4d8e99513a4135#2015-12-2620:29zentropeCool. Bookmarking. ;)#2015-12-2620:33currentoorAwesome!#2015-12-2620:44curtosisanyone know of a good example of building datomic entities up from SQL queries?#2015-12-2620:45curtosis(i.e., migrating from tables to Datomic)#2015-12-2621:44tcrayford@curtosis: is the goal schema translation? Or am I confused?#2015-12-2621:45curtosismore or less, yes. I have a bunch of data in postgres that I want to move over into Datomic. I have both schemas.#2015-12-2621:46curtosisit's not too complex, but the source tables are relatively normalized (= lots of nested entities)#2015-12-2621:47tcrayfordyeah. Are you looking to automate the schema translation or just do it once manually?#2015-12-2621:48curtosisautomated would be nice -- I'll probably have to do it more than once simple_smile -- but that's not a major consideration.#2015-12-2621:49tcrayfordsure. In which case maybe the mbrainz example from datomic itself? Mbrainz was originally a sql database (if you're looking for an example of how to translate schema)#2015-12-2621:52curtosishmm... I thought mbrainz was meant to be loaded from an already-datomicized restore file. is the generating code in the repo somewhere?#2015-12-2621:53tcrayford@curtosis: no, that's a sample of a manual translation#2015-12-2621:54curtosisoh, wait... I don't mean translating the schema ... I already did that. I just want to move the data.#2015-12-2621:54tcrayfordI don't think doing automated translation would ever work that well, but "never say never" 😉#2015-12-2621:54tcrayford@curtosis: oh. I don't think I have a good example of that#2015-12-2621:55curtosisyeah... I would think that an automated schema translation would probably work but hamstring your datalog into some unnatural shapes#2015-12-2621:58curtosismeanwhile, I'm also stuck trying to figure out what the datomic console url looks like for a sql backend, so that's going well for me too. 
😛#2015-12-2622:00curtosis"uri is a Datomic db uri with the dbname missing" doesn't make sense with datomic:sql://<DB-NAME>?jdbc:#2015-12-2622:03raymcdermott@currentoor: the main thing is atomicity#2015-12-2622:07bkamphaus@curtosis: The URI arg for Datomic console is exactly that, omit the <DB-NAME> — e.g., datomic:sql://?jdbc: is my local URI (w/user,password changed to datomic).#2015-12-2622:10curtosis@bkamphaus: I'm clearly an idiot.... I tried that, which didn't work at all... until I noticed that I wasn't specifying a port.#2015-12-2622:11curtosisbut it looked like all my other failures 😛#2015-12-2622:11bkamphausAh, got it, yeah full invocation for me looks like: bin/console default $(pg-uri "") -p 1121 — helper to generate the local pg-uri because I can’t be arsed to do all the typing.#2015-12-2622:12bkamphausOh, right, as in it just prints the usage instead of complaining about which specific thing was missing?#2015-12-2622:12curtosislol, yeah.#2015-12-2622:13curtosisI've at least got the console talking to it, and successfully transacted my schema and the static data (from groovysh)#2015-12-2622:14curtosisbut thanks for pointing out that it was correct, no matter how wrong it looked... that helped see that something else was broken.#2015-12-2622:15curtosisso now I'm back to the hard part, of walking my sql results and building entity maps \o/#2015-12-2622:16bkamphausGotcha. FWIW, I’ll make note of the implied feature request for a more specific error message.#2015-12-2622:17curtosisah, yeah. thanks for translating simple_smile#2015-12-2622:26currentoor@raymcdermott: what's not atomic about my implementation?#2015-12-2622:30currentoorAre you talking about d/function?#2016-12-2701:35zentropeI don't get the "atomic" argument. 
Each transaction (consisting of lots of updates and retracts) is executed one at a time.#2016-12-2701:35zentropeIf two users are editing "the same order with line items" at the same time, and each one hits save, one of them gets applied first.#2016-12-2701:36zentropeSo, you still have to see "if the database has changed" or whatever, whether it's inside a transaction function, or outside.#2016-12-2701:36zentropeThe issue is that two people have a copy of the order they're fiddling with and the DB can change that order out from under them while they're still fiddling.#2016-12-2701:39zentropeSeems like you still need app logic in one way or another (just before a transaction, or in a tx function) to decide if the second order-save can happen, given that it's now based on an out-of-date assumption about the order.#2016-12-2706:10currentoor@zentrope: atomic won’t always be sufficient, like the example you mentioned, but there are cases where it is necessary and sufficient. Consider the case where you need to add $10 to an account balance and store the updated value. The current value is $100. Two users pay the account $10. If we read the data in application code then two simultaneous peers could read the current value of $100 and both write transactions like:
[:db/retract eid :account/balance 100]
[:db/add eid :account/balance 110]
Then the second transaction will not be accurate. But if you read the current value of the account balance in the transaction function then incrementing will always be accurate.#2016-12-2706:11zentropeYep. That makes sense to me.#2016-12-2706:11zentropeIn the case of order/items, I guess you'll have to reject the request when someone else beat the user to the update?#2016-12-2706:12zentropeI think in your case, you're just asserting/retracting based on what's there. But it's possible the second user's update will put something back that the first user deleted.#2016-12-2706:13zentropeSo, you'd have to do some sort of compare-and-set strategy?#2016-12-2706:13currentoorI don't think so, cuz my implementation is like clojure.core/reset! for atoms. It makes sure the last transaction “wins".#2016-12-2706:13zentropeOy. ;)#2016-12-2706:13currentoorRegardless of what is or isn't there previously.#2016-12-2706:14currentoorMy function generates retraction transactions if it needs to, otherwise just additions.#2016-12-2706:14zentropeRight. But reading the order/items out into a client, then sending an update request with drops/asserts already calculated adds up to the same thing. The last one will win.#2016-12-2706:15zentropeI guess if there's a retract and the entity is already retracted, Datomic will error?#2016-12-2706:15currentoorPrecisely#2016-12-2706:15zentropeI know that asserting the same fact over again if nothing changes is okay.#2016-12-2706:15zentropeDatomic does the right thing, there.#2016-12-2706:16currentoorIf the data changes while you’re reading it in the client then I believe datomic will error when you try to send pre-determined retractions.#2016-12-2706:16currentoorThat’s why I’m doing reads in the transactor.#2016-12-2706:20zentropeHm. I'm trying to force that in my app.
Same user editing the same master/detail structure.#2016-12-2706:21zentropeRetracting an entity that isn't there isn't breaking.#2016-12-2706:21zentropeI'm not sure if that's a good or bad thing. ;)#2016-12-2706:21zentropeI guess if you assume that "retractEntity" is just making a statement about the system, and if it's already accomplished, so much the better.#2016-12-2706:23zentropeAh! But if I add two new "details" to the master, each with the same name, I get a dup. Then again, I can get a dup with just one user: I don't prevent it anywhere.#2016-12-2706:24zentropeSo, I'm convinced by your technique, but I don't see any penalties in ignoring it.#2016-12-2710:36raymcdermottthe docs at the web site are clear that DB functions are atomic http://docs.datomic.com/database-functions.html ; convincing yourself (and others) that your functions have the correct outcome is possible but tougher#2016-12-2710:43raymcdermottOTOH I found the DB Function to be a major PITA from a flow / code / debug perspective so that's a big downside. The tooling aspect (handling errors / log output for example) in database functions should have some attention.#2016-12-2711:51tcrayford@raymcdermott: a thing to note is that you can run database functions in your local codebase and write tests for them there (that's what I've done)#2016-12-2712:35raymcdermottwell yes but once they’re installed in the DB you lose some capabilities … glad to be corrected if I’m doing it wrong#2016-12-2712:35raymcdermottyou also need to cut and paste the code into the DB function which is a little weird (again, let me know if I’m doing that wrong!)#2016-12-2717:21curtosisI'm not sure I have a full grasp of lookup refs yet... is there a way to use them "upsert-wise" when creating an entity to find-or-create a ref'd entity? I'm thinking something like this:
{:invoice/number (make-invoice-number)
 :invoice/company [:company/code "CODE"]
 ...}
#2016-12-2717:22curtosiswhere I want to find [?e :company/code "CODE"], creating it if not found, and set :invoice/company to ref to it#2016-12-2717:24curtosiscan I do that with straight tx syntax, or does it need a database function?#2016-12-2800:30zentropeA transaction function is atomic in the sense that you can query "the value of the database" and do all kinds of lookups before committing the transaction.#2016-12-2800:30zentropeBut transactions without db-functions are also atomic.#2016-12-2800:31zentropeIf you already have all the info you need to construct an appropriate transaction, a db function isn't necessary.#2016-12-2800:33bkamphaus@zentrope: it provides ACID isolation in that it will ensure that it acts on the immediately preceding database value when querying, etc. I.e. nothing happens between reading and transacting, or throwing instead of transacting, etc.#2016-12-2800:34zentropeThat's my understanding.#2016-12-2800:35zentropeBut if a user is revising ingredients to a recipe they have the whole master/detail to hand. When they want to save it, it doesn't matter what the immediately preceding value is or was. Unless, by policy, you don't want "last one to win".#2016-12-2800:37zentropeSo, if you want to figure out which "ingredients" to retract, based on the user's intention, you could just use what the user has, or you could compare it using a DB function. To me, it becomes a matter of taste at that point.#2016-12-2800:38bkamphaus@zentrope: Basic isolation example: if you’re updating a balance from $1000 to $1100 because someone deposited $100, and another deposit of $50 comes in between querying the database value and submitting the transaction, you want to (a) throw and re-attempt with latest balance (peer coordination approach), or (b) have a transaction function that guarantees isolation and adds the deposit to whatever the current (most recent prior to transaction) balance is.#2016-12-2800:38bkamphausI.e. a cas (compare and swap) use vs.
dedicated add-to-latest tx function.#2016-12-2800:38zentrope@bkamphaus: Yes. I get that. But that's not the case with editing a master/detail thing.#2016-12-2800:39bkamphausIt’s true that there are use cases where you don’t need isolation. But if you do, transaction function or optimistic concurrency using e.g. a cas approach where peers handle the work to build the tx and retries when necessary, etc. are what to reach for.#2016-12-2800:40bkamphausIt’s true that transactions without transaction functions are atomic in ACID terms, i.e. entire transaction succeeds or fails.#2016-12-2800:40zentropeI think the problem some folks have (I did) was when you need to adjust a bunch of isComponent style objects related to a single :ref entity.#2016-12-2800:41zentropeYou can't just assert, "Hey, these are the new ones, remove the ones no longer in this set".#2016-12-2800:42zentropeOne solution is to load all the items into client space, the user adds, deletes, alters, but you keep a reference to the original, construct asserts/retracts, then put them in a transaction.#2016-12-2800:42zentropeAnother solution is to just send down the set the user wants to keep, then use a db function to sort out the retracts.#2016-12-2800:42zentropeWhat I can't see is that there's really any difference.#2016-12-2800:43bkamphausWell you do need isolation in that case with card many refs, component or not, if you need to avoid the race. I.e. if you need to remove all refs/entities pointed to by the attribute, so you want to ensure that you remove all of the latest things. That is, in case something was asserted in between you retracting all the previous values and asserting new ones. You would have a stale entry.#2016-12-2800:43zentropeUltimately, you're saying, "here's the new state, overlay it".
The coolness of a db function doesn't protect you.#2016-12-2800:43zentropeYes, that's true.#2016-12-2800:44bkamphauscaveat: I haven’t really scrolled up and re-read thoroughly, so I could be missing nuance in the use case.#2016-12-2800:44zentropeNah, your last comment addresses it.#2016-12-2800:44zentropeEven so, a user adds an item, and then it's suddenly gone because some other user futzed with it at the same time.#2016-12-2800:45zentropeDatomic gives you tools to deal with whatever policy you want to implement, but it's still a bunch of tough decisions.#2016-12-2800:45zentrope(Luckily, history lets you untangle it!)#2016-12-2800:46bkamphausIf a user chooses the application equivalent of “commit” and commits something that immediately overwrites another user, or a user says “forget everything” and someone commits something just beforehand, the correct action is arguably to overwrite what that user said. But yes, exactly, preservation of retractions in history means you can determine what vanished and why, and if you want, expose ways to recover things.#2016-12-2803:38kingoftheknollHey everyone! I have a schema question related to a FIFO stack setup. Here’s how I did it before in Django/SQL. Imagine a job scheduling tool for recursively subcontracting jobs from vendor to vendor. Start with an Event w/ 1..n unfilled jobs. Each job can be 1) filled by a worker OR 2) linked by fk to a new job, where the vendor of the new job can 1) fill with worker OR 2) subcontract with new job. So it’s a linked list using fk’s to build a stack representing a relationship of subcontracting.
An event can have multiple of these job stacks.#2016-12-2803:39kingoftheknollIt seems I can easily implement the same thing with Datomic; however, I’m wondering if there’s a better way to represent this structure.#2016-12-2803:41kingoftheknollOne crazy thought is to leverage the history of a single entity over time to represent the change, but it seems not to fit very well.#2016-12-2818:26currentoor@kingoftheknoll: I also came from a SQL background. The best advice I got was that datomic schema entities should represent what. When, who, and why should be addressed by annotating transactions. Keep in mind transactions in datomic are entities in their own right.#2016-12-2818:27kingoftheknollinteresting I didn’t realize transactions could be annotated#2016-12-2818:32kingoftheknollMy current thought after stewing on things since last night is that I should embrace doing a linked list rather than focusing on the transactions. So :event/jobs cardinality/many to :job entities and :job entities have cardinality/one to other jobs. Do isComponent for all the jobs then sort them by their links to each other post-query#2016-12-2818:32kingoftheknoll@currentoor:#2016-12-2818:32kingoftheknollthanks for taking the time to look!
Say I have a collection of entities, I need to sort them by a particular attribute then get a particular page in this sorted collection.#2016-12-2820:32currentoorI know sorting has to be done in the client but are there any suggestions for doing pagination otherwise?#2016-12-2821:43tcrayford@currentoor: you have to do that with raw index walking right now. Datomic query doesn't support sorting of any kind which means no pagination inherently#2016-12-2821:54currentoor@tcrayford: could you please elaborate on what raw index walking means?#2016-12-2821:54currentoorDoes that mean just pulling the whole collection into memory and then extracting what I need?#2016-12-2822:59domkm@currentoor: Re: raw index walking, look at datomic.api/datoms#2016-12-2823:03currentoorthanks#2016-12-2904:22robert-stuttaford@currentoor: we use tx annos in a couple ways. 'who': every web-generated tx is tagged with the signed-in user who created it = easy audit trail. and our back-end event-stream processor tags its own txes as 'processed-by' so that it can keep track of what work it's done and has to do. i've also seen examples mentioned like marking a past tx as 'error', or marking a new tx as a 'correction'#2016-12-2904:22robert-stuttafordthings like that#2016-12-2904:28robert-stuttaford@currentoor: on pagination, it's actually an interesting problem to solve. the problem is fundamentally this: Datomic doesn't do arbitrary sorting for you like SQL or Mongo do, beyond the sort order present in the 4 indexes (eavt aevt avet vaet). if you needed to e.g. sort a 3 'column' dataset by any of its columns ascending or descending, you're on your own. it's easy to implement, but not performant in the large. 
i went down the road of caching large data-sets in redis to make paginating and re-sorting the set faster, as all the work required to get the dataset to the point where it's sortable and ready for render is slow when you get to 10,000s and 100,000s#2016-12-2904:29currentoorhmm, and how did that perform for you?#2016-12-2904:29currentoorredis i mean#2016-12-2904:29robert-stuttafordgenerating the initial set is still slow, but once cached, it's very fast#2016-12-2904:29robert-stuttafordbut this just moves the problem somewhere else: cache expiry#2016-12-2904:30robert-stuttafordusing core.memoize, the cache key is all the fn's args, one of which is the datomic db#2016-12-2904:30robert-stuttafordso every time a new db is used, the cache is empty#2016-12-2904:30currentoori see#2016-12-2904:32robert-stuttafordso, we're now looking into ways to reduce the total dataset size before you start sorting, by warning the user of the dataset size up-front and prompting them to apply filters to reduce it#2016-12-2904:32robert-stuttafordbecause the likelihood that you're going to page through 1000s of records is ultra low#2016-12-2904:33robert-stuttafordactual pagination code is very easy: (->> (d/datoms ...) 
seq (drop (* page-index page-size)) (take page-size))#2016-12-2904:33robert-stuttafordyou could have a datalog query or any other collection producing code at the beginning, of course, and you'd also sort before you drop+take#2016-12-2905:19currentoornice#2016-12-2905:20currentoorthanks for showing#2016-12-2910:38tcrayford@robert-stuttaford: would recommend also tagging transactions with: a) git sha of the process that produced it b) basic info about the http request (I just do method and path)#2016-12-2916:19robert-stuttafordboth great tips#2016-12-2916:19robert-stuttafordi assume you inject the git sha into your build artifact somehow#2016-12-2917:04bkamphausJust as a side note, any generated or domain supplied unique identifier on a transaction is great for dealing with retry logic, since you have to sync/coordinate after unavailability to see what made it in otherwise.#2016-12-2917:08bkamphausTim Ewald also covered some other use cases of Reified Transactions at the Datomic Conf portion of the Conj this year: http://www.datomic.com/videos.html#2016-12-2919:48davebryandhow do you guys think about creating partitions for your data? Should I be creating a different partition for every type of entity? So, if we had a notion of users, teams, games, stadiums we would do a separate partition for each?#2016-12-2920:23curtosis@davebryand: as I understand it, partitions (primarily) drive index locality, so you want to keep entities you work with together a lot under the same partition. 
It really depends on how you use teams/games/stadiums/etc.#2016-12-2920:25curtosis(I can imagine use cases for those entities where each strategy could be more appropriate.)#2016-12-2920:29curtosispresumably someone with more experience will correct me if I'm wrong simple_smile#2016-12-2920:59davebryandgotcha—so depending on the app logic, it might make sense to have a partition per team or something, if that’s a common query pattern?#2016-12-2921:49davebryandanyone know if there is a way to expand a transaction map form into a list form for debugging?#2016-12-2923:24kschraderhas anyone tried to use multiple count functions in a query?#2016-12-2923:25kschraderI’m seeing behavior where it seems to sum up across all of the counts instead of giving me individual counts#2016-12-2923:26kschraderor perhaps multiplying both values together…#2016-12-2923:26bkamphaus@kschrader: can you share an example of what you want the output to look like and a version of the query, obfuscated from your domain if need be?#2016-12-2923:27kschrader@bkamphaus: give me a second to put together a minimal example#2016-12-2923:30kschraderIf I only use one of the count statements I get the expected result#2016-12-2923:31kschrader@bkamphaus: but using both of them seems to multiply the values together and return that value for both statements#2016-12-2923:34kschraderit should be 7 projects and 1264 stories#2016-12-2923:35bkamphaus@kschrader: let me think through setting up an analogous query with mbrainz to test, and see expected behavior. What happens if you put ?org in a :with clause ( http://docs.datomic.com/query.html#with )#2016-12-2923:36kschradersame behavior#2016-12-2923:39kschrader@bkamphaus: need to head home, can you email me (kurt at http://clubhouse.io)?#2016-12-2923:39bkamphaus@kschrader: sure#2016-12-2923:39kschraderthanks#2016-12-2923:45currentoorShould I store a JSON blob as a string or bytes?#2016-12-3000:43bkamphaus@kschrader: ok, that’s expected behavior.
I ran an analogous query and similar ones in mbrainz. The important thing is to look at the tuples you generate w/o the aggregate:
[:find ?artist ?release ?track
:in $ ?name
:where
[?artist :artist/name ?name]
[?track :track/artists ?artist]
[?release :release/artists ?artist]
]
Returns 8250 tuples (the cartesian product of relations between ?track and ?release)#2016-12-3000:44bkamphausbut you can limit to unique values for each with count-distinct:#2016-12-3000:44bkamphaus[:find ?artist (count-distinct ?release) (count-distinct ?track)
:in $ ?name
:where
[?artist :artist/name ?name]
[?track :track/artists ?artist]
[?release :release/artists ?artist]
]
Returns:
17592186047016, 30, 275#2016-12-3001:51tcrayford@curtosis: @davebryand note that partitions only impact the clustering of the E portion of the sort, unlikely to have much impact outside eavt and aevt indexes. Potentially huge impacts on both of those though.#2016-12-3016:11kschrader@bkamphaus: that’s the behavior that I want#2016-12-3016:11kschraderthanks#2016-12-3016:11kschraderdidn’t know that count-distinct existed#2016-12-3016:13kschraderthe count behavior seems non-obvious to me, I can’t think of a case where I’d want that to happen#2016-12-3016:13bkamphausGlad to help. If you’re building queries to do aggregation, might be worth a skim of the aggregates section of the docs: http://docs.datomic.com/query.html#aggregates - I have to confess to having needed to do that myself to reason about what you ran into and find the solution simple_smile#2016-12-3016:15kschraderyeah, good to know#2016-12-3016:15kschraderI searched Google for datomic count#2016-12-3016:16kschraderand then searched within the query page for count#2016-12-3016:16kschraderand the first thing that comes up is reference to count in the Not clauses section#2016-12-3016:16kschraderThe following query uses a not clause to find the count of all artists who are not Canadian:#2016-12-3016:17kschraderdidn’t occur to me that there might be another function to do counts that isn’t mentioned until further down the page#2016-12-3016:17kschraderjust FYI, how I got stuck#2016-12-3016:20bkamphausI get that it seems non-obvious, it’s a case where count basically does row/tuple counting, so it depends on the shape of the relation from the query. There are different aspects where you run into issues using aggregates given the set-of-tuples model: :with allows duplicates for values so you can e.g. sum multiple instances of the same value; count-distinct will only count unique things if the relation you construct in the query ends up with multiple values from the implied many-to-many, i.e.
when relating to reverse refs to an entity id.#2016-12-3016:21kschradergot it#2016-12-3016:22kschraderis there a way to do this that’s faster? seems to be taking a lot of time in the console, but if you explore entities in the console it seems like they pull up the ref counts right away#2016-12-3016:25bkamphaus@kschrader: haven’t tested but it will probably be faster to get the counts with two separate queries and merge the results. count-distinct, while returning the answer you want, will still operate over the cartesian product from the many-to-many of ?project to ?story per ?org.#2016-12-3016:26kschradergot it, that explains why every count-distinct I add seems to increase query time exponentially simple_smile#2016-12-3017:27kschrader@bkamphaus: is there any way to get a count of zero in an aggregation function?#2016-12-3017:28kschraderright now it just leaves out the values when I run them and no results are found#2016-12-3017:28kschraderand get-else doesn’t work on a cardinality-many ref#2016-12-3018:16bkamphaus@kschrader: even a custom aggregate won't do anything if there are no relations to aggregate over. I’d just check outside the query, if empty then 0.#2016-12-3018:18kschraderok, was hoping to have something that some of the less technical users here could just drop in the Console#2016-12-3018:18kschraderbut I can do that#2016-12-3019:25alandipertanyone know if there's a better way to do something like this using q directly? https://gist.github.com/alandipert/73e923a690061d18ba3a#2016-12-3020:38alandipertused the technique to query over AWS w/ amazonica, still interested to know if there's better ways - https://gist.github.com/alandipert/d2cb38ee869448182c4b#2016-12-3107:08jimmyhi guys how can we write seed in with relation for datomic in datalog ?#2016-01-0100:17curtosisfinally trying to transact some nested maps, and stymied by: :db.error/not-a-db-id Invalid db/id: #db/id[:part/user -1000725].
It's not supposed to be a valid db/id yet, right?#2016-01-0100:18curtosisthus the whole point of tempids...#2016-01-0100:21tylerdoes anyone know of any libraries for converting a nested map to eav?#2016-01-0100:27curtosisnever mind... shoulda been :db.part/user, not :part/user. sigh I'll take that as a sign I should step away from the keyboard until next year.#2016-01-0120:44raymcdermottFYI I made a demo for using Datomic with Hoplon that is now part of the demos project. I have just created a PR that also shows the update round trip. Uses the in-memory database for simple provisioning.#2016-01-0120:45raymcdermottso if anybody wants to get started with Datomic on Hoplon there are now a few simple examples#2016-01-0201:51jamesnvcHello, I’m having an issue trying to add a transaction function; when it tries to add, I get “Can’t embed object in code, maybe print-dup not defined: clojure.lang.Delay"#2016-01-0201:51jamesnvcfn looks like
{:db/ident :add-user
:db/id #db/id [:db.part/user]
:db/fn #db/fn {:lang "clojure"
:params [db params]
:code (pr-str
'(if-let [e (datomic.api/entity db [:user/email (:user/email params)])]
(throw (Exception. "User already exists with email"))
[params]))}}
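A sketch (not from the thread) of the `d/function` approach that comes up a few messages later: building the function value with `datomic.api/function` from plain data avoids the reader-literal evaluation that triggers "Can't embed object in code". Assumes the Datomic peer library is on the classpath; the `:user/email` attribute is from the message above.

```clojure
;; Sketch: same transaction function, built with d/function instead of #db/fn.
(require '[datomic.api :as d])

(def add-user-fn
  {:db/ident :add-user
   :db/id    (d/tempid :db.part/user)
   :db/fn    (d/function
               {:lang   "clojure"
                :params '[db params]
                ;; :code is a plain quoted form, nothing evaluated at read time
                :code   '(if-let [e (datomic.api/entity db [:user/email (:user/email params)])]
                           (throw (Exception. "User already exists with email"))
                           [params])})})
```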
#2016-01-0201:52jamesnvcalternatively to fixing that, is there a way to tell datomic that I don’t want to upsert? I’m using this function because I want to ensure that users have unique email addresses, but if I use a tempid when I insert the new user, duplicates result in updating the user with the existing email, instead of throwing an error#2016-01-0201:56jamesnvcoh, never mind the fn question; apparently d/function works instead of the reader macro#2016-01-0201:57jamesnvcI would still be interested to know if there’s a better way to do this and let datomic’s unique checking do the job for me#2016-01-0202:19bkamphaus@jamesnvc: use unique/value instead of identity. Unique/value throws, unique/identity upserts http://docs.datomic.com/identity.html#unique-values#2016-01-0203:40jamesnvc@bkamphaus: oh, thanks! I think I was misunderstanding what unique/value meant#2016-01-0217:48sparkofreasonAnybody know of a converter from JSON schema to Datomic schema? Grasping at straws here, Google turned up nothing.#2016-01-0218:45curtosis@dave.dixon: haven't seen one, but I'd be interested in a JSON-to-Datomic data converter if you've seen one. 😉#2016-01-0218:49curtosis(which i suppose is really json-edn...)#2016-01-0218:50sparkofreasonIs there some reason data.json doesn't work? https://github.com/clojure/data.json#2016-01-0218:51sparkofreasonThat's what I was planning to use to read the JSON schema into Clojure maps etc.#2016-01-0218:51curtosishaven't tried it... didn't get that far yet. lol#2016-01-0218:51alandiperti dabbled in putting arbitrary graphs into datomic with https://github.com/tailrecursion/monocopy, concept was flawed though#2016-01-0218:52curtosisI'm just in the process of rethinking my planned architecture... 
I wanted datomic on AWS but I think it'll just be too expensive.#2016-01-0218:52curtosisInstead now I'm considering dumping the raw data into DynamoDB using Lambda, then extracting it into Datomic for the real work.#2016-01-0218:53alandipertif you want to query maps in datomic datalog (not store them) you can circumvent the structural identity problem with an approach like https://twitter.com/alandipert/status/682597011141558273#2016-01-0218:54curtosis(run-time is mostly data collection with some lookups; "real" work happens periodically -- as in monthly/annually and is offline)#2016-01-0218:55alandipert@curtosis: what kind of data? would you characterize your work as "analytic"?#2016-01-0218:58curtosisThe data is essentially scores submitted by judges. The core entity is an event: {:ballots [{:judge judgeid :category score}]} (simplifying greatly .. it's actually 4-5 layers deep)#2016-01-0218:58curtosisit's less "analytic" than just "tabulation". There are a bunch of rules for how scores get averaged/dropped/qualified etc.#2016-01-0218:59curtosisand it seems dramatically easier to do it in datalog than SQL.#2016-01-0219:02curtosisI can't really justify the ~$300 to keep a t2.medium instance running all year.#2016-01-0219:03curtosis(thus the workaround... I'd prefer to stay in Datomic from the outset. Audit trail is, as you might expect, rather important here.)#2016-01-0219:04alandipertinteresting#2016-01-0219:04alandipertdo you already have a SQL db where you store the collated results?#2016-01-0219:05curtosishmmm...
I could also flip it around and use DynamoDB directly for the live lookup stuff (name regularization etc) and put the scores on SQS, fire up Datomic once a day or so to consume the queue and update the names list....#2016-01-0219:06curtosisI have a SQL db currently with the raw scores#2016-01-0219:06curtosis(and clojure code now to map it into Datomic)#2016-01-0219:07curtosisthe grand impetus for all of this is that a) the inputs look like documents more than tables, b) rails stack consistency over time is a tire fire, and c) I really like Datomic. simple_smile#2016-01-0219:08alandiperti'm in the ad business and we store "events" like clicks and impressions in in S3, then EMR them periodically and put the aggregates in a combination dynamo and redshift#2016-01-0219:08alandipertseems like in your case, if your real-time query requirements are light, the most economical thing would be to aggregate somewhere cheap and "wake up" periodically to process#2016-01-0219:08curtosisare they more like flat structures when they go into S3?#2016-01-0219:09alandipertnewline-delimited json maps#2016-01-0219:09curtosisyeah, that's what I'm thinking. And the free tier of Dynamo is probably plenty sufficient for that "cheap" aggregation.#2016-01-0219:09alandipertmostly flat but decorated with geo and other info, maybe 3-4 things deep in spots#2016-01-0219:09curtosisah, ok.#2016-01-0219:10curtosisS3 would be simpler (no Dynamo schema to care about) but there's no long-term free tier simple_smile#2016-01-0219:11curtosiswaking up once a day is more than sufficient, and would run $20/year. way better.#2016-01-0219:12curtosisOTOH, I'm also being silly... S3 is like $0.50/year for this use case.#2016-01-0219:13curtosisthe only gotcha is the name-lookup service backing... easy to do in Dynamo, harder to do in S3.#2016-01-0219:14curtosisbut regardless, I think this discussion is super helpful... 
I think I've reduced the problem now to a simple recurring load process, with a feedback to the name lookup stuff.#2016-01-0219:16curtosishmm.. for that matter, just dumping the records onto SQS for the instance to pick up when it wakes up might work too.#2016-01-0219:16curtosishow do you keep track of which ones you've processed out of S3?#2016-01-0219:33alandipert@curtosis: dynamo#2016-01-0219:33alandipertwell, there are a few "stages"#2016-01-0219:34alandipertthe first stage is gathering up files from S3 into batches and EMRing them... that's where we use dynamo, to keep track of the files/batches#2016-01-0219:34alandipertwhen EMR is done it puts a batch id on an SQS queue... where a thing that specializes in loading aggregated data into Redshift picks it up#2016-01-0219:35curtosisspiffy#2016-01-0219:35curtosisand so the first stage just looks at S3 for everything newer than its oldest batch?#2016-01-0219:36alandipertthat's one way to do it... another is to attach a lambda function to S3 events#2016-01-0219:36alandipertbut we do neither, we have a convention for storing in S3 and make a date segment out of the key#2016-01-0219:36alandiperteg 2015/10/2/3/0 for "the 0-30 minutes of 3am on 10-20-2015"#2016-01-0219:37alandipertso when the EMR wakes up it figures out what the previous segment path was, and scoops the files up there#2016-01-0219:38curtosisfair enough. I kind of like the idea of a lambda on S3 events putting something on SQS for my Datomic "core" to pick up.#2016-01-0219:38curtosisbut that may be more complicated than necessary... 
I can just get my last-pulled from Datomic and go from there.#2016-01-0219:48curtosisand, since I control the front-end, I can just dump EDN into S3 for Datomic to load.#2016-01-0219:48curtosisskipping the JSON stage entirely#2016-01-0219:50curtosisand I think if it's just EDN I can use #db/id[db.part/user] to defer tempid generation until it gets picked up#2016-01-0219:50curtosis\o/#2016-01-0219:51alandipertsounds pretty sweet#2016-01-0219:52curtosisthe only hard part (for some values of hard) is the "I don't know this name; create a new one".#2016-01-0219:53curtosisI very much like that there's no actual server I have to write.#2016-01-0219:53alandipertoh you mean like make a new name entity?#2016-01-0219:54curtosisyeah... I need to create it with a tempid on the client, process it into Datomic, and then update the front-end lookup service.#2016-01-0220:34curtosishmm... I guess I need to decide whether to build my own all-in-one transactor + processor AMI, or use the default Datomic AWS deploy template plus my application-code AMI.#2016-01-0220:36curtosisapart from the obvious cost difference, are there any advantages to running one way or the other?#2016-01-0221:16caspercSo I am trying to programmatically generate a query, or at least some of one, but I am coming up short. I am wondering if anyone can help me.#2016-01-0221:18caspercI want to make a function to which the db-ident to join on is being passed, so the called can chose the entity that is being joined. I have something like this:#2016-01-0221:18casperc(defn get-all-entities-with-tag [tag-title attr]
(let [db (d/db @conn)
eids (map first (d/q '[:find ?e ?log-title
:in $ ?tag-title
:where
[?tag-e :tag/title ?tag-title]
[?e ~attr ?tag-e]
[?e :log/title ?log-title]]
db
tag-title))]
eids))#2016-01-0221:19caspercThis results in an exception though: IllegalArgumentExceptionInfo :db.error/not-an-entity Unable to resolve entity: clojure.core/unquote#2016-01-0221:20caspercSo the unquote isn’t doing the trick and it doesn’t work without it either. Any idea what will do the trick?#2016-01-0221:20alandipertthe problem is you're using ~ inside a regular quote, not a syntax quote ' vs \`#2016-01-0221:20alandiperterr `#2016-01-0221:21caspercHmm ok, so should I use a syntax quote? I am trying to figure out the right way to generate a query like this programmatically#2016-01-0221:21alandipertunfortunately clojure's native syntax quote isn't a great fit either because it will try to resolve in namespaces things like ?log-title#2016-01-0221:22alandiperti recommend checking out the template macro in https://github.com/brandonbloom/backtick#2016-01-0221:22alandipertuser=> (let [a 1 b `(~a 2 ?log-title)] b)
(1 2 user/?log-title)
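A sketch of the recommended fix: backtick's `template` macro (the library linked above) supports `~`/`~@` like syntax-quote but does not resolve symbols, so `?log-title` stays `?log-title`. The `:log/tags` attribute passed in the comment is illustrative.

```clojure
;; Sketch: build the query with backtick's template so ?vars stay unresolved
;; while ~attr is substituted.
(require '[backtick :refer [template]])

(defn tag-query [attr]
  (template
    [:find ?e ?log-title
     :in $ ?tag-title
     :where
     [?tag-e :tag/title ?tag-title]
     [?e ~attr ?tag-e]
     [?e :log/title ?log-title]]))

;; (tag-query :log/tags) yields a query vector with :log/tags spliced in,
;; suitable to pass to d/q.
```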
#2016-01-0221:23caspercOk thanks I will.#2016-01-0221:23alandipertyour other option is to leverage the fact that the query is a vector... so you can use update on it#2016-01-0221:23alandipertbut that would be kind of brittle, as you'd need to maintain the index of what inside you want to change#2016-01-0221:24alandipertuser=> (assoc [1 2 3] 1 "hi")
[1 "hi" 3]
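Since the query is plain data, it can also be assembled with ordinary sequence functions instead of positional `assoc`; a hedged sketch (the helper name and the optional-attribute handling are mine) that simply omits the join clause when no attribute is given:

```clojure
;; Sketch: build the query with concat; when attr is nil the
;; [?e ?attr ?tag-e] clause is left out entirely.
(defn tag-query-via-concat [attr]
  (vec
    (concat '[:find ?e ?log-title
              :in $ ?tag-title
              :where
              [?tag-e :tag/title ?tag-title]]
            (when attr
              [['?e attr '?tag-e]])
            '[[?e :log/title ?log-title]])))
```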
#2016-01-0221:23caspercOk thanks I will.#2016-01-0221:23alandipertyour other option is to leverage the fact that the query is a vector... so you can use update on it#2016-01-0221:23alandipertbut that would be kind of brittle, as you'd need to maintain the index of what inside you want to change#2016-01-0221:24alandipertuser=> (assoc [1 2 3] 1 "hi")
:in $ ?attr ?tag-title
:where
[?tag-e :tag/title ?tag-title]
[?e ?attr ?tag-e]
[?e :log/title ?log-title]]
db
attr
tag-title)
#2016-01-0221:35caspercIt is already an input param for the function#2016-01-0221:35caspercah i get it#2016-01-0221:35caspercdoh, that might just be it 😄#2016-01-0221:36alandipertphew 😅#2016-01-0221:38caspercYup, that’s the ticket simple_smile#2016-01-0221:38caspercI am officially a dummy 😄#2016-01-0221:39pesterhazyyup, that's datalog simple_smile#2016-01-0221:53caspercSo maybe I am getting hung up on pointless little things, but I also want it to look for any ident via _ (underscore) when the attribute ident isn’t passed to the function#2016-01-0221:54casperc(defn get-all-entities-with-tag [tag-title & [attr]]
(let [db (d/db @conn)
join-attr (or attr '_)
eids (map first (d/q '[:find ?e ?log-title
:in $ ?tag-title ?attr
:where
[?tag-e :tag/title ?tag-title]
[?e ?attr ?tag-e]
[?e :log/title ?log-title]]
db
tag-title
join-attr))]
eids))#2016-01-0221:54caspercBut that way is not working#2016-01-0221:55caspercGiving this error: IllegalArgumentExceptionInfo :db.error/not-an-entity Unable to resolve entity: _#2016-01-0221:55alandipertif you don't have an attr... you could omit the whole [?e ?attr ?tag-e] clause, right?#2016-01-0221:56caspercSo I guess it is trying to resolve the entity#2016-01-0221:57caspercTrue, it isn’t needed in that case. But I still don’t know how to operate on the query programmatically, so I don’t know how to remove it simple_smile#2016-01-0221:57alandipertyeah - i think template comes back#2016-01-0221:57caspercHehe yeah#2016-01-0222:01alandipertbtw you may consider keeping your queries in functions that take db as an argument#2016-01-0222:01alandipertthis gives you more control as the code evolves, since you don't have to coordinate conn access#2016-01-0222:01caspercYeah, thanks. I will. This is just me messing around with the REPL at the moment.#2016-01-0222:03caspercI guess you would generally make a db at beginning of a route (if exposing a service) and operate on that throughout the call#2016-01-0222:05alandipertyeah, i think anywhere you want to do a bunch of queries and get consistent results#2016-01-0300:17raymcdermottPSA I made a blog post around entity components (highlighting needs for coding updates)#2016-01-0300:17raymcdermotthttp://blog.opengrail.com/jekyll/update/2016/01/02/datomic-entity-components.html#2016-01-0314:58pesterhazy@raymcdermott: thanks for writing this, I enjoyed reading it#2016-01-0315:25raymcdermott@pesterhazy: thanks!#2016-01-0319:20raymcdermottAs an end of holidays activity I also posted a blog about running Datomic on Heroku with DynamoDB as a storage option http://blog.opengrail.com/jekyll/update/2016/01/03/datomic-heroku-spaces.html#2016-01-0422:55darrelleshHaving a hard time with the built-in database function db.fn/cas when trying to set a ref many field. Is it possible to perform a compare and set for a REF MANY field.
If so, any help would be appreciated. As a sidenote, I am trying to transact in Clojure.#2016-01-0423:10domkm@darrellesh: As far as I know, :db.fn/cas does not work for any cardinality many attribute types or even for cardinality one refs if the ids passed are idents or lookup refs. In my experience it's only really good for cardinality one non-ref attributes.#2016-01-0423:14darrellesh@domkm: Yeah. That is what I was seeing in my repl tests. Is there any documentation around this function as to how it should be used. The Datomic transaction function docs do not mention any limitations. Thanks in advance...#2016-01-0423:15domkm@darrellesh: I'm not familiar with official docs on it's limitations. I recall seeing some unofficial statements in the Datomic Google Group but I wouldn't be able to point you to the exact post.#2016-01-0423:18darrellesh@domkm: Good to know. Not what I was hoping for - but thanks for your help. There should be some Docs on this! But, then again we are not talking about SQL databases.#2016-01-0423:19domkm@darrellesh: This might be of use to you: https://github.com/democracyworks/datomic-toolbox/blob/master/resources%2Fdatomic-toolbox-schemas%2F01-transaction-functions.edn#L1-L46#2016-01-0423:27darrellesh@domkm: Wow! that is exactly what we are looking for at first glance.#2016-01-0423:27darrelleshthanks..#2016-01-0423:28domkm@darrellesh: np#2016-01-0504:14peterromfeldHi!#2016-01-0504:15peterromfeldMy license is still good until May 11, 2016
I followed the gpg guide from https://my.datomic.com/account -> https://github.com/technomancy/leiningen/blob/master/doc/DEPLOY.md#authentication
created credentials.clj and then encrypted it into .gpg
however, when running lein deps i still get:
Could not transfer artifact com.datomic:datomic-pro:pom:0.9.5344 from/to (): Failed to transfer file: . Return code is: 204, ReasonPhrase: No Content.
This could be due to a typo in :dependencies or network issues.
If you are behind a proxy, try setting the 'http_proxy' environment variable.
#2016-01-0504:16peterromfeldno proxy running#2016-01-0504:16peterromfeldin project.clj:
...
:repositories {""
{:url ""
:creds :gpg}}
...
#2016-01-0504:17peterromfeldgpg-agent is asking for pass-phrase and not complaining, so i guess that part is correct too#2016-01-0504:17peterromfeldthe unencrypted credentials.clj i just copied from http://my.datomic.com#2016-01-0504:19peterromfeld{#"my\.datomic\.com" {:username “#2016-01-0504:25peterromfeldi also did a bin/maven-install which installed the peer libs into
~/b/datomic-pro-0.9.5344> ls -la ~/.m2/repository/com/datomic/datomic-pro/0.9.5344/
_maven.repositories    datomic-pro-0.9.5344.jar       datomic-pro-0.9.5344.pom
_remote.repositories   datomic-pro-0.9.5344.jar.sha1  datomic-pro-0.9.5344.pom.sha1
so i thought it wouldn't even have had to try to install it..#2016-01-0504:28peterromfeldok nvm simple_smile
after rm -rf ~/.m2/repository/com/datomic/datomic-pro/0.9.5344/
it worked.. seems that if you install peer libs via maven-install it conflicts somehow with lein deps#2016-01-0516:41sdegutisDoes setting the host key in datomic.properties to "localhost" prevent remote machines from being able to connect to it?#2016-01-0516:41sdegutisSpecifically in the Free transactor/protocol.#2016-01-0518:35stuartsierra@sdegutis: Possibly relevant mailing list discussion: https://groups.google.com/d/topic/datomic/wBRZNyHm03o/discussion#2016-01-0522:03kschraderdoes the connect function key off of something other than the uri to do its caching?#2016-01-0522:04kschraderwe had a local backup of a database that we thought we were working with#2016-01-0522:04kschraderbut we had an earlier connection to production#2016-01-0522:04kschraderin the same REPL#2016-01-0522:04kschraderthat it ended up using#2016-01-0522:05kschraderI’ve verified that whichever storage we connect to first gets cached#2016-01-0522:06bkamphaus@kschrader: cached of the db’s unique identifier / name — which is the same between restored backups in different storages.#2016-01-0522:07kschraderok, the API docs say: Connections are cached such that calling datomic.api/connect multiple times with the same URI value will return the same connection object.#2016-01-0522:07kschraderwhich is wrong#2016-01-0522:07kschraderand dangerous 😔#2016-01-0522:08bkamphaus@kschrader: it’s not intended that a peer should talk to two instances of the same database. That said, I understand your concern, I’ll look at correcting the API documentation. Discussion of this behavior has come up before.#2016-01-0522:11kschraderwe were able to roll back the changes by looking at the DB in the past and reversing the transactions#2016-01-0522:11kschraderbut it made for a more eventful afternoon than I would have liked#2016-01-0522:11bkamphaus@kschrader: Specifically we’ve discussed making it throw. I.e. 
we want to prevent the “and dangerous” portion of that, but at present don’t have any intention of supporting database forking (i.e. connection from one peer to multiple forks of a previous database).#2016-01-0522:12kschraderthat would be fine#2016-01-0522:12kschraderwhen I do a connect the second time it should fail#2016-01-0522:12kschradernot silently keep the original connection open#2016-01-0522:14bkamphausif a different URI, right. That’s the change in behavior that’s been discussed. Nothing’s been slated for a release at present but I’ll update the dev team with your experience, and I’ll let you know how it will be addressed.#2016-01-0522:15kschraderok#2016-01-0522:16kschraderit’s probably only something we’d run into from the REPL#2016-01-0522:16kschraderbut when it goes sideways it goes very sideways#2016-01-0522:16kschraderthanks#2016-01-0522:17kschraderis there any way to force a change of the :db-id after a restore?#2016-01-0522:21bkamphaus@kschrader: db-id is locked in. As I mentioned, connect and the peer library in general aren’t intended to support the idea of having forks of the same database. If you do need a workaround, i.e. to take data you’ve tested in staging and push it to prod, or to update selectively, etc. you have to be outside of the same peer app. Separate REST API peers or your own endpoints in different peer apps, etc. can work (no collision in the connection cache in that case), but it’s still a bit outside expected use.#2016-01-0522:22bkamphausI guess the question is, what’s the use case/goal of talking to both the db and restore from the same instance? A lot of the speculative transaction stuff is meant to be handled by e.g.
with.#2016-01-0522:23kschraderspecifically what happened today was to pull a copy of our production DB locally, tested a script against it, and then ran the script against production#2016-01-0522:23kschraderall from a repl#2016-01-0522:24kschraderbut the repl got restarted between step one and two#2016-01-0522:24kschraderso it had a connection to production#2016-01-0522:24kschraderand then later in the day we ran a function that has a hardcoded URL to localhost#2016-01-0522:24kschraderand it ran against our production data#2016-01-0522:26kschraderif connect had thrown an exception it would have prevented it#2016-01-0522:27kschraderwe generally have our production DB firewalled as well, but it was a bit of the perfect storm#2016-01-0522:27kschraderthe behavior was still unexpected though#2016-01-0522:28bkamphaus@kschrader: got it. I understand and am sympathetic to the unexpected aspect of it and will get the docs corrected and the dev team focused on the standing request for an error there.#2016-01-0522:29bkamphausKeeping a separation in your environments such that the same peer can’t talk to both prod and staging/test dbs is probably the best means to prevent anything similar in the meantime.#2016-01-0522:31kschraderyep, that’s what we usually do#2016-01-0522:31kschraderfirewall reactivated#2016-01-0600:09currentoorSo I get the impression that non-additive changes to the schema are tricky. Is creating a new DB and re-importing data the only way to fundamentally alter the schema?#2016-01-0600:09currentoorAnd is this feasible to do?#2016-01-0600:10currentoorI'm just worried that if our requirements change down the road in a way we did not anticipate, are we stuck?#2016-01-0600:10bkamphaus@currentoor: it depends on what you need to change. Supported schema alteration is documented here: http://docs.datomic.com/schema.html#Schema-Alteration#2016-01-0600:13currentoorRight so when they say "You cannot alter :db/valueType or :db/fulltext."
Can I work around that with a re-import?#2016-01-0600:13currentoorif absolutely necessary?#2016-01-0600:13currentoorOr would that not work?#2016-01-0600:17domkm@currentoor: I think it would work but in the case of a valueType change you'd need to convert all relevant datoms in each tx during import from old type to new type.#2016-01-0600:17currentoorI see.#2016-01-0600:18currentoorMakes sense.#2016-01-0601:58atrocheis there a way to specify the transactor’s host via environment variables, rather than in the config file?#2016-01-0604:18robert-stuttaford@currentoor: you can also use renaming. :old has type A. make :new with type B, transact all the values (while converting types) from :old to :new, and then rename :old to :old-unused and :new to :old.#2016-01-0604:19robert-stuttaforddisadvantage: you lose all your transaction time history, but then you would anyway if you recreated your db without re transacting everything in time order#2016-01-0607:15jimmyhi guys what is part in db.part in datomic ?#2016-01-0607:17isaacwhy [(= (f ?var-1) (f ?var-2))] is invalid?#2016-01-0607:19isaacI know there is an equivalent way: [(f ?var-1) ?value] [(f ?var-2) ?value]#2016-01-0607:23robert-stuttaford@nxqd: http://docs.datomic.com/schema.html#partitions#2016-01-0607:23robert-stuttaford@isaac: you can’t nest function calls in datalog. if you need to, you should defer to a namespaced function instead#2016-01-0607:33jimmy@robert-stuttaford: thanks#2016-01-0607:46isaac@robert-stuttaford: Does that mean nested functions are not supported? or I need to use a namespaced function?#2016-01-0609:29robert-stuttaford@isaac: not supported. must use namespaced function#2016-01-0609:29isaacboth?#2016-01-0609:31robert-stuttafordnested function calls are not supported. you must use a namespaced function call instead. no: [(= (f ?x) (f ?y))] yes: [(my.ns/is-=-with-f?
?x ?y)]#2016-01-0609:31robert-stuttafordgot it, now?#2016-01-0612:37isaac@robert-stuttaford: I got it ,thank you#2016-01-0615:32bkamphaus@isaac: there’s also an example stepped out (rather than writing a custom function and calling it with namespaced example) here: https://stackoverflow.com/questions/32164131/parameterized-and-case-insensitive-query-in-datalog-datomic/32323123#32323123
So i.e.:
[ (re-find (re-pattern (str "(?i)" ?par)) ?bfn)]
Becomes:
[(str "(?i)" ?match) ?matcher]
[(re-pattern ?matcher) ?regex]
[(re-find ?regex ?aname)]
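Dropped into a full query, the decomposed clauses look like the sketch below; the `:artist/name` attribute and the search string follow the mbrainz-style examples used elsewhere in this thread and are illustrative.

```clojure
;; Sketch: case-insensitive substring match, one function expression per step
;; (no nested calls in any single clause).
(d/q '[:find ?aname
       :in $ ?match
       :where
       [?a :artist/name ?aname]
       [(str "(?i)" ?match) ?matcher]   ; build the pattern string
       [(re-pattern ?matcher) ?regex]   ; compile it
       [(re-find ?regex ?aname)]]       ; predicate: keep matching names
     db "beatles")
```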
#2016-01-0617:15isaac@bkamphaus: that good, thanks#2016-01-0619:50caspercI am wondering if it is possible to make a sort of custom as-of function. I am marking my transactions with a “transaction time” or :db/tt instant, which is different (and before) :db/txInstant (which is when the data was received in my use case), and I want to be able to query datoms that have :db/tt that are before a certain time.#2016-01-0619:50caspercIs that at all possible?#2016-01-0620:14andrewboltachevHi. How do I get last Datomic version?#2016-01-0620:16andrewboltachevUPD: Found this: https://my.datomic.com/downloads/pro Last one must be on the top of the list simple_smile#2016-01-0620:17domkm@casperc: Sure, why not just write a function to find the tx with the correct :db/tt and then use as-of with that tx's id?#2016-01-0620:18casperc@domkm: I thought about that, but I need the as-of to operate on the :db/tt value, not :db/txInstant#2016-01-0620:19caspercI think that filter might be a workable solution though, so I am taking a look at that#2016-01-0620:19domkmas-of takes a t, tx, or inst#2016-01-0620:20domkmjust give it the tx id of the tx with the desired :db/tt#2016-01-0620:21casperc@domkm: I don’t need the database as-of the txInstant of some :db/tt, I need all datoms with :db/tt bigger than some date. There is a difference in my use case#2016-01-0620:22domkmOh#2016-01-0620:22domkmYeah, then filter#2016-01-0620:22caspercDon’t know how it performs though, but I guess I will find out simple_smile#2016-01-0620:22domkmHeh, yup#2016-01-0620:24domkm@casperc: index-range might also be useful for you#2016-01-0620:25casperc@domkm: I was looking at that too, but I don’t quite see how it can help me, unfortunately#2016-01-0620:27ljosaI'm benchmarking a query (a big pull of a few hundred entities) both as a peer and via the REST API. To my great surprise, querying via the REST API is consistently faster. Can I believe this result? 
Does the REST server do some clever caching of serialized results or something?#2016-01-0700:05bkamphaus@ljosa: you’re sure the comparison is otherwise apples to apples? I.e. impact from other project dependencies/processing steps in your peer app is minimal?#2016-01-0700:07bkamphaus@casperc: you can bind to the ?tx entity and specify a datalog clause that matches attribute on ?tx entity. I.e., instead of typical [?p :person/name ?name] form, something like:
[?p :person/name ?name ?tx]
[?tx :db/tt ?tt]
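Fleshed out with a cutoff, the two clauses above might look like this sketch; the custom :db/tt attribute on transaction entities and :person/name come from the example, while the `.before` comparison and `conn` are assumptions:

```clojure
;; Sketch: bind the transaction entity in the fourth datom position,
;; join to its (custom) :db/tt instant, and keep only datoms whose
;; :db/tt falls before a cutoff date.
(require '[datomic.api :as d])

(d/q '[:find ?p ?name
       :in $ ?cutoff
       :where
       [?p :person/name ?name ?tx]
       [?tx :db/tt ?tt]
       [(.before ?tt ?cutoff)]]   ; java.util.Date interop as a predicate
     (d/db conn) #inst "2016-01-01"))
```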
#2016-01-0700:08bkamphaus@andrewboltachev: that’s correct, the most recent one will be highest in the list on our downloads site. For the free version, which is on clojars, it will be the one listed here: https://clojars.org/com.datomic/datomic-free#2016-01-0700:11casperc@bkamphaus: Pretty sure that wouldn’t work since :db/tt is an instant, and what I want is a version of the database showing datoms with :db/tt vals before the specified date#2016-01-0700:11caspercI ended up making an as-of function using filter.#2016-01-0700:12caspercMy problem is that I think the filter is applied before the queries that you perform on that database, and the more efficient way would be last, but I need to verify that.#2016-01-0703:56bkamphaus@casperc I guess I'm not following some of your logic. A Datom is part of a universal schema, e/a/v/tx/assert -- you can't add an additional attribute (eg db/tt) to datoms, only to entities (by adding a Datom with db/tt in the attr position in the underlying data model) - so you must get the associated tx entity and look at its attributes in the custom filter?#2016-01-0703:57bkamphausYou can use a comparison predicate (function expression) http://docs.datomic.com/query.html#function-expressions as part of any query in place of the second tuple in the example, which may be more performant if any of the preceding clauses are more selective than applying your filter to the database value as it’s passed in to each query, for example.#2016-01-0710:06jimmyhi guys, do we need another separate id besides db/id, like other records we create in SQL? For example :project/id#2016-01-0710:08dm3do you need to communicate the Id of your entity to anyone outside? If yes and you don't have a natural key already - then you'll need another identifier.#2016-01-0710:10jimmyyes, like in a normal application, I need to use the id to query, for example project/by-id, but I cannot query against the db/id. I run the query like so and it doesn't work:
(prn (d/q '[:find [(pull ?e [*]) ...]
:in $
:where
[?e :db/id 1]
]
(d/db conn)))
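One reason this fails: :db/id is the entity id itself, not a stored attribute, so [?e :db/id 1] can never match a datom. Two hedged alternatives (the :project/id unique attribute is hypothetical, echoing the question):

```clojure
(require '[datomic.api :as d])

;; If you already hold the entity id, skip query and pull directly:
(d/pull (d/db conn) '[*] eid)

;; For external lookups, query on a unique domain attribute instead;
;; the trailing `.` find spec returns a single scalar result.
(d/q '[:find (pull ?e [*]) .
       :in $ ?id
       :where [?e :project/id ?id]]
     (d/db conn) project-id)
```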
#2016-01-0711:26tcrayford@nxqd: so two things: a) if you're exposing an id externally, use an attribute with a uuid type and use squuid to generate the values. b) entity ids are the ?e part of a datom, so if you need internally to use entity ids for some reason (e.g. you already have an entity from some other external based lookup and now you want to query against it)#2016-01-0711:27caspercDoes anyone know, if I filter a database and then query against it, will the filter then be applied before or after the query?#2016-01-0711:28tcrayford@casperc: before#2016-01-0711:28caspercOuch#2016-01-0711:29caspercIsn’t that the same as doing a full db scan of all datums?#2016-01-0711:29tcrayfordah: no#2016-01-0711:29tcrayfordthe filter applies as the query walks the indexes#2016-01-0711:30caspercah ok, so filtering a database by some domain date, would not end up being horribly inefficient then?#2016-01-0711:30caspercThis is the same question explained a bit more btw: https://groups.google.com/forum/#!topic/datomic/nmFsEFk6LDE#2016-01-0712:42jimmy@tcrayford: thanks#2016-01-0712:43tcrayford@casperc: it is a very "it depends" thing. Ultimately for performance, the best you can do is to benchmark it 😉#2016-01-0712:44tcrayfordso the real thing with filters to note is that they're applied as effectively a lazy clojure.core/filter call over the indexes (which are ordered sets of datoms). The query runtime uses those indexes, and then whilst it's walking them, the filter is applied. It's definitely less efficient than putting that restriction in your index, at least potentially#2016-01-0712:45tcrayford@casperc: here's a question: is :db/tt monotonically increasing as transactions go up?#2016-01-0713:49casperc@tcrayford: re benchmarking, I agree. My problem is that I don’t have realistic data in the amounts that we are going to see. 
But I am working on that simple_smile#2016-01-0713:50casperc@tcrayford: And no, they are not monotonically increasing, since the data will come from different sources with different update speeds.#2016-01-0714:30bkamphaus@casperc: the reply from Linus on group fleshes out what I mentioned. I would benchmark that or something else derived from the time-rules in day of datomic ( https://github.com/Datomic/day-of-datomic/blob/master/tutorial/time-rules.clj ) against your filter on realistically sized data.#2016-01-0715:53casperc@bkamphaus: Thanks, that is a good place to look, I’ll have a look at it simple_smile#2016-01-0716:58akielIs it possible to have pull over ordinary data like q does it?#2016-01-0720:12bkamphaus@akiel: nope simple_smile#2016-01-0720:48akiel@bkamphaus: ok I was hoping to leverage pull using om.next in a non-datomic context. But I wrote it myself. It's not that hard.#2016-01-0721:14tcrayford@akiel: it's a shame to me that datomic's query and a bunch of it's features don't work at all over ordinary data in certain kinds. It means e.g. you have to stand up an actual database in test instead of giving it a seq of vectors (that are datoms) etc etc#2016-01-0800:10gworley3are there any known problems with running multiple transactors and cassandra? i'm seeing my two transactors flip-flop even when i set heartbeat-interval-msec relatively high (at the moment it's at 15000)#2016-01-0800:12gworley3i'm also not sure how much of a problem this is, but it seems to be causing connection problems for the peers in our app anyway#2016-01-0800:26bkamphaus@gworley3: on latest version? It wouldn’t be caused by heartbeat latency, but there was a bug fixed in 0.9.5327 that could cause transactor terminations. From changes.md:
* Bugfix: Fixed bug that could cause a transactor to terminate when
deleting a database against Cassandra storage.#2016-01-0800:32bkamphausThere’s nothing otherwise intrinsic to a Cassandra configuration that would result in that high frequency of failovers. The first sanity questions for failovers like that are:
* Are processes (storage, transactor, peers) isolated, and is the network free of partition events?
* Is the Cassandra system seeing other use, up to and including compaction events, etc., that can cause storage unavailability?
Do you see, e.g., the cause of transactor failover listed in logs? I.e., :cause :conflict with AlarmHeartbeatFailed?#2016-01-0800:35gworley3@bkamphaus: 0.9.5344. this is on a cluster just for datomic and it has no traffic to it other than the transactors right now. i'm not seeing obvious signs of network partitions. i am seeing :event :transactor/heartbeat-failed, :cause :conflict in the logs#2016-01-0800:37gworley3i don't see AlarmHeartbeatFailed anywhere, though#2016-01-0801:46tcrayford@gworley3: @bkamphaus check JVM GC logs for long pause times as well, both on Cassie and transactor (Cassie only matters if all the nodes sync a long GC together, but never say never)#2016-01-0802:10bkamphaus@gworley3: another sanity check, if they’re flip flopping immediately, does either one ever successfully become a standby (i.e. issue HeartMonitorMsec metric) and related question, is this with a paid or eval license? (no HA is one of the pro starter limitations along with non memcached and 3 process cap).#2016-01-0802:20gworley3@bkamphaus: ohhhhhh, that's the answer. this is with a pro starter license right now since we're still in development and not ready to deploy yet. if that's the case i'll check into getting the full thing since we're already past the committed point#2016-01-0802:20gworley3i didn't realize it didn't work with ha. Thanks!#2016-01-0816:29timgilbertHey, newbie question: is it possible to execute transactions in the datomic console? 
Would like to set up some test data while I try to comprehend datalog#2016-01-0816:30timgilbert...or do people generally use the REPL for this sort of thing?#2016-01-0816:49bostonaholicREPL is king#2016-01-0816:50marshall@timgilbert: The Console doesn’t support transactions.#2016-01-0816:51timgilbertOk, got it, thanks#2016-01-0817:26stuartsierraThe ReST API can do transactions, and it includes basic HTML forms when visited from a browser.#2016-01-0817:34isaaccan I treat backward-reference as an attribute?#2016-01-0817:34isaac:community/_neighborhood#2016-01-0819:06bostonaholic@isaac: if you're asking what I think you're asking, yes#2016-01-0819:08bkamphaus@isaac: it depends on where you’re using it. Works in pull/entity, not in query/datalog (but just reorder the from e.g. [?n :community/_neighorhood ?c] to [?c :community/neighborhood ?n]#2016-01-0905:05isaac@bostonaholic: @bkamphaus
yeah! I find it cannot be used in a query#2016-01-0905:05isaacbut it can appear in a transact
@(d/transact conn [{:db/id (d/tempid :db.part/user)
:neighborhood/name "jack"
:community/_neighborhood c-id}])
#2016-01-0905:05isaacthis transact is ok#2016-01-0905:07isaac@(d/transact conn [{:db/id (d/tempid :db.part/user)
:neighborhood/name "jack"
:community/_neighborhood [c-id1 c-id2]}])
but in this case [c-id1 c-id2] will be treated as a lookup ref.#2016-01-0908:52raymcdermottFYI I have created another demo of Datomic / Hoplon / Castra. The first one just showed off random queries; this time it shows updates via Castra - it’s simple and not especially pretty, but it shows how to achieve the combination#2016-01-0908:52raymcdermotthttps://github.com/hoplon/demos/tree/master/castra-datomic-free-state#2016-01-1015:55gerritAccording to the docs it is not possible to specify the licenseKey as a command line arg. Would you be open to supporting that?
My use case goes like this: I'd like to pass it as arg because it would allow me to add the transactor.properties to a docker image (and check the transactor.properties into SCM) and then provide the licenseKey in the CMD at container start. Or is there another way to achieve that?#2016-01-1017:35raymcdermottHow about a small wrapper script that took the license on the command line, then use something like sed or awk to paste it into the properties file before you start up the transactor#2016-01-1019:13raymcdermottYou can (and I have done) the same thing with an ENV variable#2016-01-1019:15raymcdermott@gerrit: see the buildpack I made to run Datomic on Heroku if you want to grab some code that could be chopped up to do this https://github.com/opengrail/heroku-buildpack-datomic#2016-01-1021:00gerrit@raymcdermott: sounds interesting. I'll have a look. Thanks!#2016-01-1106:49jimmyhi guys, what is a good way to query the current schema in user partition ? thanks#2016-01-1106:52lowl4tencyHi guys!#2016-01-1115:40andrewboltachevHi. Are there any migration tools for Datomic? Say, I'm in development and I'm changing my schema arbitrarily, and I want to preserve all the data though?#2016-01-1115:44timgilbert@andrewboltachev: I’ve been using https://github.com/rkneufeld/conformity for that (I’m a newbie though)#2016-01-1115:45andrewboltachev@timgilbert: interesting! Thanks#2016-01-1115:46timgilbertNo prob. Seems to cover most of what I was using lobos for in pgsql-world. There are a few other similar tools around if you google a bit#2016-01-1115:48andrewboltachevMain problem is Datomic's support around schema changes, e.g. 
the fulltext attribute can't be changed once it's set#2016-01-1115:53andrewboltachev@timgilbert: so, would that library, say, help me to (1) take all the data (2) drop existing and create a new schema (3) import preserved data, and apply transformations to it where appropriate?#2016-01-1115:54timgilbertUh, sort of, except for (2), to my understanding#2016-01-1115:55timgilbertIt just basically lets you run a set of transactions on your database, and then it keeps track of which ones have already been run#2016-01-1115:55timgilbertBut as I understand it, you don’t really drop old schema changes, since everything is immutable#2016-01-1115:56andrewboltachevLike I said, there are cases where you can't apply a change to the schema#2016-01-1115:56timgilbertSo transactions can include schema updates, and also adding new data from somewhere#2016-01-1115:56timgilbertNot sure about that, I’ll defer to someone with more expertise#2016-01-1115:57andrewboltachevOk, thanks anyway!#2016-01-1115:57timgilbertNo prob, good luck#2016-01-1116:04andrewboltachevMy real situation is like this: (0) I have an idea about a (web) app I want to build
(1) I'm starting with "categories". I want a schema to save categories, and to build the category widgets first
(2) I realize that there should be a "collapsed" flag for each category (in the category widget: is it collapsed or not), so I need to add a new attribute
(3) I'm finished with the widget and I want to add an "items" entity.
(4) Adding users to my project
(5) Say the category list (or tree, or graph ['cause yep, there's no tree]) is common to all users, but each user has their own collapsed-or-not state. So the "collapsed" flag must be turned into some relation then.
Could I achieve (5) (and all of the above) without creating new DB?#2016-01-1116:08andrewboltachevAnd, have old data preserved (say, I added bunch of categories)?#2016-01-1116:13andrewboltachevAlso, interesting thing is that this way of development I just described is sort of "Agile", i.e. categories, items and users are "features". According to Rich Hickey we should be solving problems, not building features. So, is my way fundamentally incorrect, i.e. I should first imagine what my app would operate (design it) and then implement it (so no changes to DB like that would arise)? Would much appreciate any answers/opinions.#2016-01-1116:18jonahbenton@andrewboltachev: hey andrew- definitely don't go in the "waterfall" direction of having heavy "design" and "implementation" cycles. instead, tune your workflow to make it easy to try, discard, and try again. use datomic in mem mode, or even datascript, to iterate on a data model, and separately, if you need to "seed" your data model to test your app, keep the seed datoms in an edn file and just transact them at app startup#2016-01-1116:20andrewboltachev@jonahbenton: so, you say I'll be able to switch to Datomic when finally ready to production?#2016-01-1116:22andrewboltachevWell, seems like easiest approach#2016-01-1116:22andrewboltachevw/o need for external tools#2016-01-1116:24andrewboltachevOne big thing is confidence, though. Whoever said that building and deploying an app must not be harder than in 1-click.#2016-01-1116:31jonahbentonyes, you can use Datomic in a "disposable" mode as you iterate through design and implementation, and even when you first show it to users in 1-on-1s. getting positive user feedback, and being comfortable with your feature set, helps build confidence. 
when you have confidence to capture user data durably, then you can switch Datomic to a durable storage#2016-01-1116:32jonahbentonbut you don't want to saddle yourself with unnecessary workflow obstacles while getting to that place, because then you will lose momentum#2016-01-1116:35andrewboltachevWell, it seems I have an idea (and some thesis) for a library (but I'll probably build it for SQL, at least at first). 'Cause migrations are what I missed in the Clojure world when switching from Python/Django. Thanks for the help @jonahbenton! I'll take into account what you've answered#2016-01-1122:11sdegutisIs there a way to specify inside a :where clause that the number of ?things must be more than a given number, e.g. (> (count ?things) 6) ?#2016-01-1122:13bkamphaus@sdegutis: if you want to get it done in a single query, you can do so with a subquery, as in this example: https://groups.google.com/d/msg/datomic/5849yVrza2M/31--4xcdxOMJ#2016-01-1122:14sdegutisOh wow I never knew about subqueries until now.#2016-01-1122:15sdegutisHmm. That seems a bit hacky though, doesn't it?#2016-01-1122:15bkamphausmain use case (at least re: performance) is REST; on the JVM it may be as or more efficient to just chain query results into another query or sequence manipulation.#2016-01-1122:15sdegutisAhh interesting.#2016-01-1122:16sdegutisSlack needs a "the more you know" reaction.#2016-01-1122:19bkamphausI’d say only mildly hacky, maybe the fully qualified namespace stands out a little : ) It is a use case it’s designed to support. I.e., chaining queries together or using subquery are both means of composing queries.
Composing queries is one reason datalog query in Datomic takes in a set of relations and returns a set of relations by default (when the return is not overridden by a find specification).#2016-01-1122:20sdegutisHmm, I wonder if it can be cleaned up using that function-storing feature Datomic has.#2016-01-1122:37alwaysbcodingDoes anyone have an example of querying the Datomic Rest API through clojure? I can't figure out how to do it...#2016-01-1122:38alwaysbcodingIf I try something like that ^ it just gives me an error because of the formatting of all the ?eid fields#2016-01-1122:39alwaysbcodingPutting a quote in front of the query doesn't work either#2016-01-1122:40alwaysbcodingthe documentation just shows this {:q [:find …] :args [{:db/alias … }]} with no information about how to deal with the ?eid syntax#2016-01-1200:23domkmDo Floats and Doubles not compare as equal in Datomic?#2016-01-1200:25bhaganyI've been using the repl that comes with datomic (bin/repl) for all of my datomic repl needs, but I'm finally feeling the need to use a repl from emacs. However, I'm using the rest api, and I don't have a lein or boot project that cider can hook into. Should I make a dummy project just to start it with cider, or is there some way to accomplish this that I'm missing?#2016-01-1200:32domkmTo answer my own previous question, Datomic does not do float/double coercion. 😞#2016-01-1203:24meowfyi, we're working on a slack replacement and might use datomic https://hackpad.com/collection/wnikaeBENEE#2016-01-1205:06isaacHow do I choose storage for datomic? Are there any guiding principles?
I read the documentation about how to set up the various storage DBs, but not about how to choose.#2016-01-1213:16bhagany@alwaysbcoding: this probably isn't terribly helpful, but your example looks a lot like what I do from python. I don't do anything special with the ?vars#2016-01-1213:17bhaganythe error suggests to me that your query datastructure isn't being coerced to a string before being sent?#2016-01-1213:19bhagany@isaac: the canonical advice is to use whatever is operationally easiest for you. for example, if you already have postgres in production, use that. I have also seen people say that if you're on AWS, consider using dynamo.#2016-01-1213:24tcrayford@isaac: @bhagany the last thing is that cassandra's probably the easiest/best if you don't have any other particular constraint, want distributed/perf etc., and cannot use dynamo#2016-01-1213:27isaacdoes datomic just support cassandra at 2.x?#2016-01-1213:32stuartsierrayes, only cassandra 2.0 and 2.1#2016-01-1213:33isaacWill it support higher cassandra versions?#2016-01-1213:36stuartsierraProbably, at some point.#2016-01-1215:53donaldballHey, I’m new to datomic and could use some guidance solving a problem. Let’s say I have some book entities that have a :book/color attribute which could be :red, :green, or :blue. Let’s say also that I have a map of colors to ordinal values {:red 1 :green 2 :blue 3} which I don’t have stored in datomic. I’d like to have a query that, given an eid and such a map, returns the ordinal value of the book’s color. (I realize this is trivially done after the fact; I’m actually working on a transaction fn for which I think a solution to this simple problem will help.)#2016-01-1215:55donaldballMy naive attempt fails: (defn get-book-color-ordinality
[db eid color-ordinals]
(d/q '[:find ?ord
:in $ ?eid ?ords
:where
[?eid :book/color ?color]
[(get ?ords ?color) ?ord]]
db eid color-ordinals))
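For reference, a sketch of the same function with a scalar find spec; it behaves as intended when :book/color holds keywords matching the map keys (if :book/color were a ref to enum entities, ?color would bind to an entity id and miss the map):

```clojure
;; Sketch: pass the ordinal map in as a query input and index into it
;; with clojure.core/get. The `.` find spec returns a single scalar.
(require '[datomic.api :as d])

(defn get-book-color-ordinality
  [db eid color-ordinals]
  (d/q '[:find ?ord .
         :in $ ?eid ?ords
         :where
         [?eid :book/color ?color]
         [(get ?ords ?color) ?ord]]
       db eid color-ordinals))

;; e.g. (get-book-color-ordinality db book-eid {:red 1 :green 2 :blue 3})
```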
#2016-01-1216:05donaldballNevermind, this does seem to work, I had just mixed up the type of my color attrs#2016-01-1218:33donaldballBut it doesn’t work as a txn fn; datomic complains with: java.lang.RuntimeException: Unable to resolve symbol: ?ords in this context
#2016-01-1218:34donaldballAny clue why this might be true? I even get this when I inline the color-ordinals map.#2016-01-1219:51domkmDoes anyone know why I would be able to successfully use d/with but transacting the same tx-data to an unchanged connection would fail with a java.lang.ExceptionInInitializerError?#2016-01-1220:07domkmEven weirder, transacting this particular data does work against an in memory connection but it does not work against a free dev connection. I'm stumped (and blocked). Any ideas? CC @bkamphaus @stuartsierra#2016-01-1221:09stuartsierraSorry, I got nothing @domkm#2016-01-1221:10domkm@stuartsierra: Thanks for responding. simple_smile Would you pass this on to your colleagues?#2016-01-1221:16marshall@domkm: can you share your tx-data ?#2016-01-1221:17donaldballI’ve written up a simple gist illustrating the txn-fn problem I’m having: https://gist.github.com/dball/f4cd5a52dddc7b812b86#2016-01-1221:18donaldballThe former form is a macro which generates a txn fn suitable for installation in a schema; the macro is well tested and I have no reason not to trust it. It fails when used in a txn. The latter form works exactly as intended locally. I’m not sure why; are clojure map literals not allowed within clojure txn fns?#2016-01-1221:27domkm@marshall: I pasted a snippet above.#2016-01-1221:30domkm@marshall: It's sort of a contrived example because I don't intend to use that exact code in production (which essentially prevents any concurrent modification), but I intended to do something related and, now that I know the above works in a mem transactor but not in a dev transactor, I am concerned about the approach.#2016-01-1221:31marshall@domkm is it the invocation of the txn function that fails or the installation of it?#2016-01-1221:32domkm@marshall: Installation works fine in both mem and dev. Let me get back to you on dev invocation...#2016-01-1221:35domkm@marshall: Dev invocation works as well. 
I used this tx-data: '[[:fn/query {:query [:find ?e :where [?e :person/email "foo"]]}]]#2016-01-1221:44bkamphaus@domkm: The snippet is invoking one query to make an exception, and a second query using that exception as an arg, to throw an exception if the db has been changed and no longer has the basis-t 1031?#2016-01-1221:46domkm@bkamphaus: Almost exactly. It doesn't invoke the query which makes and throws an exception. It simple returns that query in a list and the transactor invokes it after splicing it in.#2016-01-1221:53bkamphaus@domkm: I guess I’m not clear whether this is the intended use case of opening query in a transaction function, or a contrived example meant to illustrate a failure. That said, a suspect for the deserialize exception might be something that does a sequence manipulation in a transaction function that requires a clojure map or vec specifically rather than targeting a java.util.Hashmap as the precision of type preservation for collections is only guaranteed at the interface level (well, I think, either way it’s not preserved for the exact type) on the wire. That would produce something like the symptom you describe, where something would work on mem, but not dev/free. Maybe the assoc or cons in the code for :fn/query would be the points where the failure is encountered?#2016-01-1221:53bkamphaus(speculation, not tested at this point)#2016-01-1221:55domkm@bkamphaus: Okay, thanks, I'll test that.#2016-01-1222:32domkm@bkamphaus: Hmm, so you're correct that they are not Clojure data structures but assoc and cons are working as I expected on this: '[[:fn/query {:query [:find ?e :where [?e :person/email "foo"]]}]] I'm confused#2016-01-1223:44bkamphaus@domkm: I’ll use your transaction function code and investigate a bit more, see if I can identify what the exact issue is. 
Will update after digging some.#2016-01-1223:44domkm@bkamphaus: Thank you!#2016-01-1301:37ljosaDoes a stack trace like this indicate connectivity issues with the storage (couchbase in this case)? ExecutionException java.lang.RuntimeException: Exception waiting for value
java.util.concurrent.FutureTask.report (FutureTask.java:122)
java.util.concurrent.FutureTask.get (FutureTask.java:192)
clojure.core/deref-future (core.clj:2180)
clojure.core/future-call/reify--6320 (core.clj:6420)
clojure.core/deref (core.clj:2200)
datomic.catalog/get-catalog (catalog.clj:30)
datomic.coordination/cluster-conf->resolved-conf (coordination.clj:160)
datomic.cache/fn/reify--2419 (cache.clj:342)
clojure.lang.RT.get (RT.java:672)
datomic.cache/lookup-cache/reify--2416 (cache.clj:287)
datomic.cache/lookup-cache/reify--2416 (cache.clj:280)
clojure.lang.RT.get (RT.java:645)
Caused by:
RuntimeException Exception waiting for value
Caused by:
ExecutionException java.lang.RuntimeException: Cancelled
Caused by:
RuntimeException Cancelled
#2016-01-1301:40ljosa(worked after pagerduty and automatic restart)#2016-01-1315:06bkamphaus@donaldball: I’m fairly certain the data model and approach are a mismatch for what you’re trying to do re: book quality. Some of the changes I would make to the approach (a) use pull to get attr/value from a know entity id, (b) if you use keywords that have a corresponding integer value, add that value to the enums in the database. Otherwise, use numbers in the database to store values and track their meaning in your application. (c) if you pass in ordinal map, don’t resolve/get etc. in the query itself, use pull to get quality ident from db, then get corresponding value and generate transaction. (d) I wouldn’t do this in its own transaction function, if you need isolation this is a good fit for the built-in transaction function cas http://docs.datomic.com/transactions.html#built-in-transaction-functions - do the lookup work on the peer, you don’t want that overhead in the transactor’s serialization.#2016-01-1315:08bkamphaus@ljosa: it’s possible, hard to diagnose conclusively without more information — do you have more stack trace and/or logs (assuming you want to dig further). Did you encounter this on peer?#2016-01-1315:36ljosaI think "it's possible" is a good enough answer. It only happened once, and yes, it was a peer. No other logs; the only thing to note is that this happened well after the peer had connected and had been running for some time. This peer fails fast when something goes wrong. Marathon restarted it, and all was fine after that. #2016-01-1315:38bkamphaus@ljosa: Ok, sounds good. If you encounter anything similar in the future and opt for a deeper dive, feel free to ping me.#2016-01-1315:40ljosaThanks!#2016-01-1317:24hugodI’m trying to connect to a datomic instance but am getting HornetQException[errorType=SECURITY_EXCEPTION message=HQ119031: Unable to validate user: …elided…]. 
What does this mean?#2016-01-1317:25bkamphaus@hugod - possible that you’re exceeding the peer count?#2016-01-1317:29hugod@bkamphaus: The peer count is the number of processes connecting to datomic?#2016-01-1317:30hugodAs far as I know this is the only process connecting, but I can check that.#2016-01-1317:31bkamphaus@hugod: right, i.e. on free or starter, transactor + two peers. When up, REST server and Console count against limit. You can also look through transactor logs, will see a message where they’re logged with datomic.transactor - {:event :transactor/remote-ips …}#2016-01-1317:38hugodThis is with datomic-pro, btw#2016-01-1317:38bkamphausby default transactor logs will go to log/ subdir of the datomic dir with naming by date.#2016-01-1317:40hugodI don’t see transactor/remote-ips anywhere in the logs#2016-01-1317:42bkamphausWhich version are you on? I would expect to see even a blank message, e.g.:#2016-01-1317:42bkamphaus2016-01-12 00:01:39.312 INFO default datomic.transactor - {:event :transactor/remote-ips, :ips #{}, :pid 15750, :tid 28}
#2016-01-1317:44hugodWould two transactors running on the same table show this symptom?#2016-01-1317:44donaldballThanks, @bkamphaus, I realize there are probably better data models that would yield a better txn design, but I’m quite curious now what apparent constraint of txn fns I’m violating hereby#2016-01-1317:45hugod@bkamphaus: This is datomic-pro-0.9.4880#2016-01-1317:46bkamphaus@hugod: nope, two transactors can’t run live against the same table. With paid pro, only one transactor can be active, others will see it writing heartbeat and enter standby. With pro starter/free, transactors will constantly fail with AlarmHeartbeatFailed :cause :conflict.#2016-01-1317:51bkamphausIt may be that the remote-ips logging wasn’t yet included in that version (over a year old at this point), if you can consider upgrading transactor it should contain that logging info, should still be compatible with same peer lib version.#2016-01-1317:52hugod@bkamphaus: Thanks, I’ll assume that having two transactors is the cause of the issue for now, and test with just the one transactor up.#2016-01-1318:19bkamphaus@donaldball: I can’t reproduce your failure using code in your gist - the transaction function works for me with two caveats: (1) I’m using namespaced datomic.api/q for query, and (2) I’m installing the db fn through standard process, not using your deftxfn, e.g.
{:db/id (d/tempid :db.part/user)
:db/ident :db.fn/increase-book-quality
:db/fn #db/fn {:lang "clojure"
:params [db eid attr quality]
:code ...
#2016-01-1318:22donaldballCool, thanks, I’ll dig in a little deeper then#2016-01-1319:21domkm@bkamphaus: Related to my dev/mem issue yesterday, it seems like dev is swallowing the root cause of exceptions while mem is properly wrapping them. See: https://gist.github.com/DomKM/bedbb9ef2f281c1254fe#2016-01-1400:34domkmWhy doesn't {:find [?e], :in [$ ?user], :where [[?e _ ?user]]} return anything? It returns the correct ?es if the wildcard _ is replaced with an concrete attribute like :org/member.#2016-01-1402:21bkamphaus @domkm what value is being passed in for '?user'?#2016-01-1402:24domkm@bkamphaus: An ident...but when you asked that I just tried with an eid and it worked. Thanks! But I don't understand why...#2016-01-1402:25bkamphausAlso, not confirmed or tested yet, but I suspect the exception logged by transactor will be more detailed than reported on peer when transactor function is invoked by transactor as a different process (i.e. Not mem)#2016-01-1402:26bkamphausI believe ident resolution to entity id only works automagically when attribute bound is type ref - I.e. blank won't handle that case.#2016-01-1402:26domkmAh, interesting#2016-01-1402:37domkm@bkamphaus: Regarding the transactor exception issue: If I throw an exception from inside a proper transactor function (not inside a query invoked by a transactor function) on a dev transactor, the exception is not swallowed. The full stacktrace is returned to the peer. The only difference between mem and dev that I can see is that dev exception data might be returned in Java types instead of Clojure types.#2016-01-1411:49timgilbertHey, got a noob question: I’m trying to run this code…
(d/transact
(get-database-connection)
[;; Create the notification
{:db/id #db/id[:db.part/user -1]
:notification/id (:id new-notification)
:notification/content (:content new-notification)}
;; Create/update the user entity
{:db/id [:user/id user-id]
:user/notifications #db/id[:db.part/user -1]
}])
#2016-01-1411:49timgilbert…and I’m getting an error "datomic.impl.Exceptions$IllegalArgumentExceptionInfo: :db.error/invalid-attr-spec Attribute identifier quote of class: class java.lang.String does not start with a colon"#2016-01-1411:51timgilbertIt seems to have to do with the :user/notifications #db/id[:db.part/user -1] since if I comment that out it disappears, but I’m flummoxed. Did I miss something obvious?#2016-01-1412:19timgilbertOh, sorry, never mind… That exception is coming from elsewhere, I misread the stacktrace#2016-01-1413:52skadinyoGuys, should i download datomic free using wget ?
I can do that on my computer, but it always fails when I do that on my SoftLayer server.#2016-01-1415:24skadinyoturns out the problem is in my server locales#2016-01-1415:33stuartsierra@timgilbert: Unrelated note: use d/tempid in code instead of #db/id. #db/id is a reader literal, usually not what you want in code.#2016-01-1415:55timgilbertThanks @stuartsierra. I saw a note about that somewhere, was a bit surprised that the above code seemed to work verbatim in my tests actually#2016-01-1415:57stuartsierra@timgilbert: It will work, but it has surprising effects if you do, say, (defn make-tx [] [ ... #db/id[:db.part/user] ]]) because the db/id gets created once at read-time.#2016-01-1416:07timgilbertOh, right... thanks, that looks like a subtle one. I do think I remember some discussion of it in one of the training vids, now that I think back#2016-01-1417:31jballancI don’t suppose the datomic-transactor-pro JAR is available in a maven repo anywhere?#2016-01-1417:34bkamphaus@jballanc: running the transactor requires the Datomic distribution (i.e. jar isn’t sufficient)#2016-01-1417:36jballanc@bkamphaus: hmm…I’d argue otherwise (though I’m almost sure I’m off the “recommended” path)#2016-01-1417:41bkamphausI’m speaking in terms of supported configuration. You could of course take the provided transactor jar and pom, deps, etc. from the distribution and manage those assets in something like artifactory and generate your transactor invocation script and properties etc. on the fly and I’m sure there are people doing so to manage their datomic deployment/ops.#2016-01-1417:43jballancYeah, that’s essentially what I’m doing. It’s just annoying that the transactor JAR is only available in the downloaded zip.
It means having to automate the download/unzip/copy of the JAR from the zip instead of being able to manage it as a usual maven dependency.#2016-01-1421:17timgilbert@jballanc: my team uploaded the datomic jars to s3 via https://github.com/technomancy/s3-wagon-private (which we were already using for private shared libs). Works like a charm for developers.#2016-01-1421:17timgilbert...you need s3 though.#2016-01-1421:29maxis there a canonical way to merge two entities?#2016-01-1421:51jballanc@timgilbert: yeah, s3-wagon is probably going to be the easiest solution...#2016-01-1421:54timgilbertIt was pretty straightforward, FWIW. I did need to pass -DcreateChecksum=true to the maven command to install in my local repository, but after that it was as easy as doing aws s3 sync ~/.m2/repository/com/datomic/datomic-pro/ #2016-01-1422:09timgilbertI have another newbie question: since there doesn't seem to be a way to sort query results from either the pull or datalog sequence, I don't quite see the use of the (limit) function in the pull API. Like, which 10 values will be returned? And can I specify the offset to start with, like "10 results starting with the 25th result"?#2016-01-1422:09timgilbertI'm trying to figure out the best approach to do sorting and pagination in my app, basically#2016-01-1422:15jballanc@timgilbert: There’s not really a good solution that I’ve found. You can come close by using < and > on the attribute of interest and managing a window on your own.#2016-01-1422:16jballancI think the limit function is really more just as a stop-gap when you might have very many large entities returned…less so for the usual pagination solutions that, for example, SQL’s limit is typically used for.#2016-01-1422:18timgilbertOk, that makes sense, yeah#2016-01-1422:20timgilbertDefinitely being able to specify sort order is the biggest thing that I'm missing going from postgres to datomic#2016-01-1422:42bkamphaus@timgilbert: no limit, offset, sort by, etc. 
in datomic datalog at present. Assuming you’re using JVM language and not REST API, remember peer is part of db with local caching, etc. so there’s not the same incentive to force everything through in one query.#2016-01-1422:43bkamphausif you’re in a use case where you’re really paging, you probably need to use datoms instead, pulling or using the entity api as appropriate, rather than going through query.#2016-01-1422:48timgilbertThanks @bkamphaus. Just curious, do you know if there is any consideration of this use case for future Datomic development? I do have several use cases where I will want to page / sort / etc, though I think I'll be able to use data modeling to limit my result set in a lot of them and just sort / take / drop the results in application code#2016-01-1422:50bkamphausIt’s definitely a use case we’re considering and sort/limit/offset are frequently requested features. That said, we don’t have anything planned for any particular release at this point in time.#2016-01-1423:48kschrader@bkamphaus: it looks like my.datomic is down#2016-01-1423:48kschraderif you weren’t already aware#2016-01-1423:49kschraderInternal server error: exception#2016-01-1500:02bkamphaus@kschrader: looking into it, looks like possibly AWS related ( https://status.aws.amazon.com/ 😞 3:13 PM PST We are investigating connectivity issues for some instances in the US-EAST-1 Region.
3:33 PM PST We can confirm connectivity issues when using public IP addresses for some instances within the EC2-Classic network in the US-EAST-1 Region. Connectivity between instances when using private IP addresses is not affected. We continue to work on resolution.#2016-01-1500:04kschraderok, thanks#2016-01-1500:10bhagany@bkamphaus: in case you're not continually refreshing, they updated the status with a workaround 10 minutes ago:#2016-01-1500:11bhagany> 4:00 PM PST We continue to work on resolving the connectivity issues when using public IP addresses for some instances within the EC2-Classic network in the US-EAST-1 Region. For instances with an associated Elastic IP address (EIP), we have confirmed that re-associating the EIP address will restore connectivity. For instances using EC2 provided public IP addresses, associating a new EIP address will restore connectivity.#2016-01-1500:51bkamphaus@bhagany @kschrader should be back up now.#2016-01-1500:51bhaganyexcellent, thanks#2016-01-1500:51kschraderthanks Ben#2016-01-1500:53bhaganytiming out here… I'll give it a bit#2016-01-1500:53bkamphaushold on#2016-01-1500:54bkamphauswas back up, and timing out now again#2016-01-1500:55bhaganyI'm trying to think of a portmanteau of "devops" and "sympathize", but nothing is quite working#2016-01-1501:22bkamphaus@bhagany: @kschrader looks like it’s genuinely back up, I’ll keep checking in on it for a bit.#2016-01-1501:23bhaganydanke simple_smile#2016-01-1501:24kschraderthanks#2016-01-1501:25bhaganyconfirmed successful download here#2016-01-1506:32domkmAre the ordering of query results deterministic? They seem to be from my brief testing. I am asking because I want a transactor function to compare the result of a peer query with the result of that same query inside a transaction.#2016-01-1510:29luposlipHi there!
Managed to get Datomic Pro running on AWS ECS, in a way that enables me to connect to it from the EC2 host itself (still need to test the connectivity from another container though).#2016-01-1510:46luposlipCompiled a small container to show the proof of concept (without the actual Datomic files): https://github.com/enterlab/docker-aws-ecs-env#2016-01-1510:53robert-stuttaford@luposlip: https://twitter.com/robstuttaford/status/687950657702215680 simple_smile#2016-01-1511:28isaacCan I put a function into rules?
I got an error when I tested it.
If I can, how?#2016-01-1513:24robert-stuttafordjust a small experience report, in which I use some awesome features of Datomic along with Onyx: http://www.stuttaford.me/2016/01/15/how-cognician-uses-onyx/. super happy to answer any questions!#2016-01-1513:45meowsharing is caring 💌#2016-01-1520:00domkmHas anyone run into a query working with with but erroring with transact? I'm getting an IndexOutOfBoundsException from processing rule when using transact on a dev db. I think it has something to do with serialization, which is evidently my new nemesis. 😉#2016-01-1520:02stuartsierra@domkm: I've never seen that, but I would suggest looking at the transaction data closely, especially the types of all the data structures.#2016-01-1520:05domkm@stuartsierra: Will do. I think it has something to do with a query map being deserialized in the transactor in an odd way. It would be really great if with tx-data went through the same serialization/deserialization pipeline as transact tx-data.#2016-01-1520:06stuartsierraA rough approximation is (read-string (pr-str data)). The real implementation is https://github.com/Datomic/fressian#2016-01-1520:36domkm@stuartsierra: I'm stumped. I've replaced the tx fn with a static implementation. It creates the tx data inside the tx fn so there's no serialization/deserialization. However, it still errors when transacted into dev but not when "transacted" with with.#2016-01-1520:57stuartsierraI got nothing, sorry.#2016-01-1521:00domkmOkay 😞#2016-01-1522:32domkmI finally figured out what causes the rules above to throw the error above when transacted into a dev transactor but not a mem transactor. Renaming the rule from type to something that doesn't shadow clojure.core causes it to work in both cases. How does one submit a bug report?
Also, is there a list of known, but not yet fixed, bugs (like JIRA)?#2016-01-1522:35bkamphaus@domkm: there’s not a public facing bug report list at present, but you can send bug reports to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> or post them on group: https://groups.google.com/forum/#!forum/datomic#2016-01-1710:36raymcdermottI’m fiddling around with trying to obtain transaction events using the tx-report-queue facility and seeing some odd behaviour….#2016-01-1710:37raymcdermott(save-new-cart conn {:cart/name "Cart-2"})
=> {:db/id 17592186045484, :cart/name "Cart-2"}
(def txns (d/tx-report-queue conn))
=> #'shopping-cart-demo.datomic/txns
txns
=> #object[java.util.concurrent.LinkedBlockingQueue 0x418201de
"[{:db-before #2016-01-1710:38raymcdermottthe save-new-cart resolves the tempid so I can see the entity that was created and of course that agree with the data in the txns queue#2016-01-1710:39raymcdermottbut when I come to query it, I get a NPE … and I then reconnect to the DB and the data is there#2016-01-1710:40raymcdermottIt seems like the :db-after is not null, the entity ID is not null so something weird is going on#2016-01-1710:41raymcdermottI am running the peer lib and transactor from datomic-pro-0.9.5327 starter edition#2016-01-1710:42raymcdermottmaybe I shouldn’t use the data like this so any advice on how I can retrieve the data for the transaction from the queue would be great!#2016-01-1710:43raymcdermott[my goal, perhaps obviously, is to watch the DB and generate changed data out to other systems]#2016-01-1710:53caspercI am wondering, is it possible to use aggregates in a where clause to answer a question like: what is the newest entity of a given type, and then use that entity to join some other stuff in the same query#2016-01-1713:25caspercOr the entity with the biggest of some value and then use that in the query again.#2016-01-1713:54yendaif I want to keep the position of each element in a component list do I need a position attribute or is the list of element already ordered ? I want to have an ordered list of requests for each scenario with the following schema :#2016-01-1714:16robert-stuttaford@yenda, see http://dbs-are-fn.com/2013/datomic_ordering_of_cardinality_many/#2016-01-1714:18robert-stuttaford@casperc, as aggregates are declared at the :find level, that’s not possible in a single query. however, you can nest queries! [(datomic.api/q [… query with aggregates ...] $ ?something) ?something-else]. 
good to do this if the subquery isn’t likely to be reused, otherwise probably better to make it a separate fn and compose#2016-01-1714:28casperc@robert-stuttaford: Woah, I did not know you can nest queries!#2016-01-1714:30casperc@robert-stuttaford: That could be the way to go, if aggregates can indeed answer the first part of the question, e.g. give me the entity with the biggest of some value. I’ll need to play around with it#2016-01-1715:08robert-stuttafordlet me know if it solves it for you simple_smile#2016-01-1715:13casperc@robert-stuttaford: Will do. Still trying to sort it out, but an aggregate can definitely answer the first part of the question, so just need to grok completely how subqueries work simple_smile#2016-01-1716:27isaacWhy Datomic use memcached as cache system instead of redis?#2016-01-1716:48bhagany@raymcdermott: I think the problem is that you're treating txns like it's a record with a :db-after key. But it's not that - it's a LinkedBlockingQueue with a record inside it.#2016-01-1716:49bhagany@raymcdermott: you want something like (d/pull (:db-after (.take txns)) '[*] 17592186045484), if I remember that api right#2016-01-1718:02raymcdermott@bhagany: ok, thanks I didn’t find any good examples with tx-report-queue directly so I’ll check out the LBQ#2016-01-1718:22raymcdermott@bhagany: that works a treat - thanks!!#2016-01-1718:22bhaganygood to hear simple_smile#2016-01-1718:23raymcdermottI guess people use core.asynch or something similar to manage the events#2016-01-1718:25raymcdermottbit of a shame that day of datomic etc. does not have some examples of this functionality … I will play around with it and make a blog post#2016-01-1720:08tcrayford@isaac: memcached is a much better cache for datomic's purposes. It scales multicore whereas redis is limited to one. 
Given that datomic only uses its cache for k/v writes with bytes in each, it doesn't need any of the features of redis#2016-01-1721:15raphaelHello, I am starting with datomic#2016-01-1721:16raphaeldoes somebody see the error in my snippet?#2016-01-1721:16raphaelHeu.. is it the right place to ask? simple_smile#2016-01-1721:16raphaelthank a lot#2016-01-1721:17raphaelAh sorry, I received this error: IllegalArgumentExceptionInfo :db.error/not-an-entity Unable to resolve entity: :dialog/bubbles datomic.error/arg (error.clj:57)#2016-01-1722:44currentoorIs there a way to invalidate the caching in Datomic in dev?#2016-01-1722:45currentoorIn the peer and anything potentially lingering in the transactor?#2016-01-1723:14currentoorI’m seeing sporadic issues when trying to load seed data in dev. I’ve tried deleting the database and restarting both the transactor and the JVM but loading seed data still fails sometimes.#2016-01-1800:32curtosis@raphael: it doesn't look like you're installing the :dialog/bubbles attribute. Your tx map for the schema needs to include :db.install/_attribute :db.part/db.#2016-01-1806:18raphael@curtosis: thank you simple_smile#2016-01-1809:25yenda@robert-stuttaford: thanks that's a good start, it doesn't use components maybe I don't need them after all#2016-01-1813:42tcrayford@currentoor: datomic's caching should never have any notable impact on the app, except for memory pressure. Also a reminder that "it doesn't work" isn't a great bug report. What were you trying to do? What happened? What should have happened? Tell us a bit about your environment? (OS, Datomic version, JVM version)#2016-01-1815:05pesterhazycurrentoor: there are occasionally issues when restoring a database on top of a version of the same database. Are you using the restore functionality?#2016-01-1815:32jannisHi!
I've dug around in the datomic query docs and various examples/tutorials but I'm unsure whether this is easily possible: Given a db and an entity, I would like to return a single boolean value if it has more than N references via a specific attribute.
Example: given a user with :user/friends (a many ref attribute), return a boolean for whether the user is "popular" (i.e. has more than 2 friends). I can probably make it work using a custom aggregate function but I wonder if it's possible with built-in features?#2016-01-1815:55bkamphaus@jannis: you can use a nested query/subquery to limit query results to a ref attribute with refs of a count greater than some number, that’s pretty much what’s going on in the second query here - https://groups.google.com/forum/#!msg/datomic/5849yVrza2M/31--4xcdxOMJ-- re: returning a single boolean, I think I’d just handle that outside query. I.e., the query returns something in the results or it does not.#2016-01-1815:55bkamphaus(the second query from the link)
(d/q '[:find ?track ?count
       :where
       [(datomic.api/q '[:find ?track (count ?artist)
                         :where [?track :track/artists ?artist]]
                       $)
        [[?track ?count]]]
       [(> ?count 1)]]
     (d/db conn))
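[editor's note: applied to the :user/friends question above, the same nested-query shape might look like the following; a sketch using the attribute and threshold from the example, untested]

```clojure
;; Users with more than 2 :user/friends, via the same subquery pattern:
;; the inner query counts refs per user, the outer clause filters.
(d/q '[:find ?user ?count
       :where
       [(datomic.api/q '[:find ?user (count ?friend)
                         :where [?user :user/friends ?friend]]
                       $)
        [[?user ?count]]]
       [(> ?count 2)]]
     (d/db conn))
```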
#2016-01-1816:01jannisI want to try doing it inside the query. (My goal is to have a declarative, query-based way of describing data derived from entities.)#2016-01-1816:01jannishttps://gist.github.com/Jannis/8fd22f556b55f02589bf - this almost works, except for when no friends are set.#2016-01-1816:04bkamphaus@jannis: ah, aggregates in find won’t get called if there are no results, you could look at using http://docs.datomic.com/query.html#get-else maybe#2016-01-1816:05bkamphausin body#2016-01-1816:05bkamphausoh wait card many attribute#2016-01-1816:05jannisSo if I'd simply want to query the number of friends with :find (count ?f) ., it would return nil and not 0 if there are no friends?#2016-01-1816:07jannisJust tested it. The answer is yes.#2016-01-1816:07bkamphausthat’s correct,#2016-01-1816:07jannisI find that odd but ok, I'm new to Datomic.#2016-01-1816:12jannisThere must be a way to achieve this though.#2016-01-1816:47bkamphaus@jannis here’s one, but it’s a hack (against mbrainz sample)#2016-01-1816:47bkamphaus(defn prolific [tracks]
  (> (count (filter #(not= :nil %) tracks)) 500))

(d/q '[:find (user/prolific ?t) .
       :in $ ?a
       :where
       (or [?t :track/artists ?a]
           (and [(missing? $ ?a :track/_artists)]
                [(ground :nil) ?t]))]
     (d/db conn)
     1111)            ;; no tracks for non-existent artist
     ;; 17592186046909) ;; pink floyd - works for true condition in artist
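[editor's note: for the single-boolean case, handling the count outside the query, as suggested earlier in the thread, is simpler; a sketch assuming a db value and the :user/friends attribute from the example]

```clojure
(defn popular?
  "True when the entity has more than n :user/friends.
   Returns false when the attribute is absent, since pull
   simply omits the key and (count nil) is 0."
  [db user-eid n]
  (> (count (:user/friends (d/pull db [:user/friends] user-eid)))
     n))
```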
#2016-01-1816:51jannisNice job. simple_smile Yes, it may be a hack but I played with or and ground as well and couldn't get anything to work. This is still reasonably readable.#2016-01-1817:03bkamphausI have to give credit to @marshall for suggesting that use of or, missing?, and ground to me.#2016-01-1817:31currentoor@tcrayford: My apologies. I was having the same issue as @domkm, same codebase. Namely we have this rule.
https://clojurians.slack.com/files/domkm/F0JJN0G9X/rules.clj
And it sporadically results in this exception.
https://clojurians.slack.com/files/domkm/F0JJMNC65/-.txt#2016-01-1817:41raphaelHello, what's the difference between (d/tempid :db.part/db) and #db/id[:db.part/db] ? thank a lot simple_smile#2016-01-1817:47bostonaholic@raphael: you would use the former in code, and the latter in a data structure to be read (like an edn file)#2016-01-1818:15currentoorI'm on the Datomic Pro Starter license and it says I get to have 2 peers + 1 transactor. I was wondering if it is ok for me to have a staging environment with that same setup and a production environment as well, so 4 peers and 2 transactors total (albeit two separate apps).#2016-01-1818:15currentoorOr does that violate the license?#2016-01-1818:17marshall@currentoor: That is perfectly acceptable. The process limit specifically refers to production processes. You’re permitted to maintain separate unlimited development/staging instances.#2016-01-1818:22currentoorAwesome, thanks @marshall!#2016-01-1822:15jannisWhat can lead to this error? java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity: :user/email?#2016-01-1822:17bhaganyit looks like the :user/email attribute isn't installed#2016-01-1822:17bhagany(assuming it's an attribute)#2016-01-1822:18jannisIt is, yep#2016-01-1822:18jannisI'll check whether the schema makes it into the database#2016-01-1822:19bhaganyworth mentioning: recently there was someone who missed the :db.install/_attribute :db.part/db part of the schema installation. that's a hard one to see.#2016-01-1822:20jannisNope, that's there. The schema is generated by datomic-schema. But it's not currently transacted into the db apparently. I'll dive deeper.#2016-01-1822:21jannisOh, (d/transact ...) not @(d/transact ...), so my guess is since it was never deref'd it didn't run at all?#2016-01-1822:22tcrayfordNot that it didn't run, but if transact throws an exception you have to deref to see it#2016-01-1822:24jannisAh! Excellent, it is working now.
There was an error: :db.type/int doesn't exist, I should be using :db.type/long.#2016-01-1822:24jannisSo the schema transaction failed and the exceptions weren't visible.#2016-01-1822:25bhaganyglad you found it simple_smile#2016-01-1822:25jannisThanks for the help, that was quick simple_smile#2016-01-1822:31jannisI hit another problem: Exception in thread "async-dispatch-5" java.lang.IllegalStateException: :db.error/connection-released The connection has been released. inside (d/db conn).#2016-01-1822:38bhaganynot sure about that one#2016-01-1822:45jannisThe web suggests someone might be deleting the db before the (d/db conn) call.#2016-01-1822:46bkamphaus@jannis: that’s likely. Or a different thread releases the connection through a call to release, or memory issues/gc on peer or transactor cause excessive latency and peer loses connection, etc. (or genuine network partition), etc.#2016-01-1822:53jannisI am using multiple connections in the same process, could that be a problem?#2016-01-1822:56jannisOh, I know what it is. My process terminates while a go loop is still busy and tries to access a conn/db.#2016-01-1822:57bkamphausyep, that would do it.#2016-01-1822:58bkamphausDatomic caches connections, so successive connect calls, etc. are usually not an issue.#2016-01-1822:59bkamphausif you’re interested in more discussion re: managing connections, I think Ryan’s blog post here is pretty good: http://www.rkn.io/2014/12/16/datomic-antipatterns-eager-conn/#2016-01-1912:53yendadoes this mean using datomic as a component with system is a bad idea ?#2016-01-1913:19stuartsierra@yenda: No, it's not a bad idea. You can use a component to handle the Datomic connection if you want.#2016-01-1913:20yendacool, I'm using it in danielsz/system#2016-01-1915:05robert-stuttafordwe use it with trapperkeeper. 
it uses https://github.com/rkneufeld/conformity on start to update schema#2016-01-1915:06robert-stuttafordsee https://github.com/robert-stuttaford/tk-app-dev/blob/master/src/tkad/services/datomic.clj (explained on http://www.stuttaford.me/2014/09/24/app-and-dev-services-with-trapperkeeper/)#2016-01-1916:40dexterquick question, I think I'm missing something with Datomic. I see everywhere people saying how it is horizontally read scalable especially when used with something like Dynamo or infinispan because all the querying is done on the client (peer). But this seems contradictory. With large datasets, sending say 80GB of data over the wire from say 50 machines to a single client for that one machine to process down to whatever tiny subset is needed is, even if you could manage to make it not OOM, going to be very slow. This gives a relatively small absolute limit on dataset size. Whereas if the query was pushed down to the servers then the query can be distributed across the nodes allowing it to actually scale as nodes are added. Scale this up to multi-terabyte datasets and it very quickly becomes obvious this is impossible. What am I missing here? Can datomic actually handle large datasets; how much is actually done on the servers?
Because other than scale concerns it sounds perfect for us.#2016-01-1916:45curtosis@dexter: I'm not entirely sure what you mean by "sending 80GB of data from 50 machines to a single client" in the context of read scaling.#2016-01-1916:47curtosissending data from 50 machines sounds like putting data in, which is the job of the transactor (and so single-threaded), but that addresses write scalability.#2016-01-1916:49dexterlets say I have 50 machines, with my dataset evenly distributed#2016-01-1916:50dexterif I create a query and push it to the servers a-la h-base server side filters, SQL stored procedures etc then the task is subdivided across all 50 machines and each one conducts data-local ops where possible only sending the minimal amount needed over the wire to the client#2016-01-1916:51dexterthis means as your data scales you simply divide it more and the overall data needing to be returned to the client as the answer remains constant#2016-01-1916:54dexterif however the client has a snapshot of all relevant data so it can perform the query client side that is fine at small scale but with large datasets you will be limited by a) network bandwidth which with a cluster of 50 machines throwing data at you to process will surely saturate the network and cause a serious bottleneck. Moving 80GB of data to a single machine takes a loooong time and b) once you finally get all the data you then have to perform the query but one machine will easily hit CPU limits long before it can efficiently process that much data#2016-01-1916:55dextermost horizontally read-scalable systems rely on pushing as much of the work as possible out for divide and conquer, so logically a system that does the inverse cannot scale to large datasets as it will take hours just to get all the source data for a query even before you manage a query#2016-01-1916:57curtosisok, I think I see... A couple of thoughts: 1) Peers need to see the transactor and storage.
If that's not local, that's a huge bottleneck. (80GB is of course a lot on anything other than very fast networks.)#2016-01-1916:57curtosis(on the other hand, once you have the data it's there)#2016-01-1916:58dexterbut you are still limited processing-wise; even just streaming over 80GB on one machine will take hours#2016-01-1916:58dexterscale that to terabytes and it is really impractical#2016-01-1916:58dexterI'm looking at a dataset of around 30T#2016-01-1916:59dexterif you co-locate your peer with part of the dataset that bit can be fast but you still need to get data from the rest of the cluster and process it#2016-01-1916:59dexterto use the example from flyingmachinestudios:#2016-01-1916:59dexter1) Peer: Peer Library! Find me all redheads who know how to play the ukelele!
2) Peer Library: Yes yes! Right away sir! Database - give me your data!
3) (Peer Library retrieves the database and performs query on it)
4) Peer Library: Here are all ukelele-playing redheads!
5) Peer: Hooraaaaaaaay!#2016-01-1917:00dexterstep 3 is a massive bottleneck that limits any datomic dataset to whatever a single machine can practically get its hands on in time#2016-01-1917:00curtosisI think the Peer is generally smarter than that, and will only load the chunks it needs to get those datoms#2016-01-1917:02dextertrue, but that's still the source so if I read from my 30T dataset, 80G datoms -> some filter -> 20G datoms -> some transform -> 20G datoms -> some filter -> 1G datoms -> some join -> 100 datoms#2016-01-1917:03dexterI only needed the 1G for the join, but I have to send all 80G and do all that parallelisable work on one very beefy machine#2016-01-1917:04dexterin particular consider http://internetmemory.org/en/index.php/synapse/on_the_power_of_hbase_filters#2016-01-1917:06curtosisbased on my understanding, most of what you would do with HBase filters is handled by the indexes in datomic.#2016-01-1917:06dexterso you pre-calculate the filters as indexes?#2016-01-1917:07dexterdoesn't that heavily limit flexibility in that you need to know every query in advance#2016-01-1917:07curtosisnot really.... that's a relational way of thinking of it.#2016-01-1917:08curtosisdatoms are much more granular, kind of like a triplestore.#2016-01-1917:10curtosisso to find "all the redheads" it just needs to look at the AVET index, to get all the Entities with the Attribute :hair/color and the value :hair.color/red (hypothetically)#2016-01-1917:11curtosis(disclaimer: I'm relatively new to datomic myself, so I'm only speaking from my understanding)#2016-01-1917:11dexterno worries there, every hint helps 😉#2016-01-1917:13dexterbasically we have a large transaction dataset that we could never fit on a single box, and we want very fast reads for complex matrix manipulation / modelling purposes. People submit queries asking for models to be generated over large swathes of data and they want them sub-second.
The problem with things like HBase is that they are too slow with their clunky HDFS backing stores. Datomic seems ideal but we really need to do the work data-local on the servers because no one machine could process all the data fast enough, more to the point you couldn't even send the source data to one box in-time. Fortunately the problem can be sub-divided and thus as long as each section completes fast enough we should be fine.#2016-01-1917:14dexterif I'm understanding right, to use datomic in this context we may want a proxy process to conduct the queries data-local on subsets of data then manually collect the results from the delegates?#2016-01-1917:15dexterthese queries also are heavily time-series oriented hence why something like datomic is better than just say postgres#2016-01-1917:16dexterso you might collect all the transactions for one transactor using an index and shard by transactor#2016-01-1917:16dexterbut sadly we need to join across transactors#2016-01-1917:16curtosisno. that breaks the core datomic model.#2016-01-1917:17curtosisthere is one transactor, and it runs single-threaded. period. (well, HA failover excepted)#2016-01-1917:17dexterthat's for writes though right?#2016-01-1917:18curtosiswell, yes.... but "sharding by transactor" doesn't make sense in this context.#2016-01-1917:19dextertransactor in my parlance meaning a legal entity which conducts transactions#2016-01-1917:19dexternot the datomic transactor#2016-01-1917:19curtosisah. a very bad namespace collision in here! simple_smile#2016-01-1917:19dexterindeed, new enough to this thing I chose bad wording#2016-01-1917:19dexterlets say legal entity#2016-01-1917:19dexterand err trades#2016-01-1917:20curtosisso in this context, you could definitely partition the data set by legal entity.
partitions are all about index locality#2016-01-1917:21dexterso in a datomic 'cluster' I could partition by legal entity while retaining the ability to join across partitions (after filtering to reduce the cost of course)#2016-01-1917:21curtosisthat said, the peers need to be sized for their working set, so if they really do need 80GB to answer a query, then they need that much space.#2016-01-1917:22dexterthe thing is it's divisible, so ideally the part that works on 80G would be farmed out to many machines, it pares down to small sizes for actual result-sets#2016-01-1917:22dextervery little needs all the data at once#2016-01-1917:22curtosisI don't think that model really maps to datomic that well.#2016-01-1917:23dexteryeah, we are currently evaluating a whole bunch of things, if it could handle the scale datomic is the best fit by far#2016-01-1917:23dexterbut we don't want to invest months of time into something that just won't scale#2016-01-1917:23curtosisI mean, you could do it manually and do a join across results from a bunch of Peers, but that seems like re-inventing the wheel.#2016-01-1917:24curtosiswithout knowing more about what the data & access patterns look like, it's hard to say.#2016-01-1917:24curtosisI'd recommend talking with the Datomic staff directly.#2016-01-1917:25dextermy confusion here is just because people say reads scale horizontally here, but it seems like they very definitely don't: the number of clients is irrelevant to the datomic structure but the scale is limited by how much one machine can do#2016-01-1917:26jonahbenton@dexter: yeah, what the Datomic folk mean is that in the traditional centralized OLTP database world, your read and write scalability is tightly bound.#2016-01-1917:26jonahbentonif you have heavy read traffic- again, OLTP-like traffic- you can marry your load to your capacity very easily#2016-01-1917:27jonahbentonfor your problem, have you looked at Onyx?
http://www.onyxplatform.org/#2016-01-1917:28dexterreally it's basically OLAP queries with OLTP access patterns.#2016-01-1917:28curtosisread scalability is always limited to what a single node can do... in a cluster/map-reduce database, there are mechanisms for breaking up small bits of work and putting them together. Datomic is not a map-reduce system, and using it that way (on its own) you're gonna have a bad time.#2016-01-1917:28dexteryeah, Onyx looks interesting, it's actually what reminded me of datomic. The question there is that you still need to store and get access to the data somehow#2016-01-1917:29curtosisyou might be able to use Onyx to work across a number of datomic peers.#2016-01-1917:30dexterindeed, our feeling is that without an MR option our load cannot be handled but all the MR options are slow or 100% in memory which would cost a lot. VoltDB looks promising too#2016-01-1917:30curtosishere's one Onyx+Datomic solution: https://yuppiechef.github.io/cqrs-server/#2016-01-1917:32jonahbenton@dexter at what rate is your dataset changing, or is it fixed?#2016-01-1917:34dexterit varies, small data (100s MB) changes slowly but constantly, maybe 10M updates at a time. Big Data updates say minimum 30G /day come in overnight via our batch process (grab giant files from clients and normalise). Then people can submit arbitrary queries across almost any field which does e.g.
Monte Carlo simulations on the streams of transactional data#2016-01-1917:35dexterreally tiny data changes via users inputting on the site, fairly constantly, which is tiny data but causes many changes#2016-01-1917:35dextervery little is pre-calculatable#2016-01-1917:36dexterapparently competitors manage thousands of these sorts of things in massive oracle deployments#2016-01-1917:36dexterwe want something that would scale to ~100 times that load#2016-01-1917:37dexteras we are carefully only pulling in basically samples atm#2016-01-1917:37dexter@curtosis: that seems about right simple_smile#2016-01-1917:38dexterwe only have ~30T atm in total#2016-01-1917:38dexterbut the lack of ability to pre-calc just due to the variance of the queries is a real pain#2016-01-1917:39dexteryou could think of us like an analytics database with a web ui#2016-01-1917:41jonahbentonyes. a number of tricky problems in that.#2016-01-1917:41dexterI know right 😛#2016-01-1917:42dexterat present we have basically bunged it all in Redis, but well, it's creaking, we've hit the limit#2016-01-1917:42dexterand it's a pain to join#2016-01-1917:42jonahbentonyes.
you're definitely at a scale in terms of data and potential licenses that you should be talking to cognitect directly but a couple of other things come to mind#2016-01-1917:42jonahbentondatomic has datom count limits within a single database#2016-01-1917:43jonahbentonand likely peer count limits within a "cluster"#2016-01-1917:43dexteras long as you can join cross db that's not necessarily an issue, in that there are a lot of partitions, at least a few hundreds#2016-01-1917:44dexterbut if there are absolute limits that may be an issue worth looking into#2016-01-1917:45dexterlike I say atm this is exploration work, we have a few options to consider, none are exactly perfect so we need to find what's best or maybe even divide the issue and have two solutions#2016-01-1917:45dexterparing down what's needed for the OLTP style stuff knocks you down to under 1TB#2016-01-1917:46dexterbut if we can solve the problem once rather than twice that would be nice#2016-01-1917:49jonahbentonare you able to partition your model executions into groups based on the amount or type of data they'd need to churn through?#2016-01-1917:50dexteramount no, type not quite, we can partition via which top level entity the data is about#2016-01-1917:50dexterso by key basically#2016-01-1917:51dexterso e.g. one for Google / Amazon / Ebay#2016-01-1917:51dexterthough then the results may need joining#2016-01-1917:52jonahbentonright. cpu consumption per model may vary greatly as well, no?#2016-01-1918:00lucasbradstreet#onyx can certainly be used to scale out the reads. We've just shipped with a new scheduler implementation. One feature we're hoping to achieve soon is better scheduling constraints to prefer Datomic peers be collocated on nodes.
http://www.onyxplatform.org/jekyll/update/2016/01/16/Onyx-0.8.4-Colocation.html is a decent discussion of how the scheduler can be used for performance optimisation purposes#2016-01-1918:06lucasbradstreet@dexter: looking at your requirements above, onyx may be a good fit. Sounds like you have some interesting issues at scale though. @michaeldrogalis and I would be happy to have a chat with you about it if you're interested. #2016-01-1918:09lucasbradstreet@dexter I think you could likely scale out your reads OK, but it's worth mentioning that all writes must go through a single transactor so you may have write scalability issues, depending on your needs#2016-01-1918:17dexter@lucasbradstreet: onyx does indeed sound like a good fit#2016-01-1918:18dexterespecially with the new co-location scheduler#2016-01-1918:19dexterthough I'm concerned about the likely stability of onyx, both operationally and api-wise, given its sub v1 state#2016-01-1918:20dexterI don't think writes should be too much of an issue though we'd need to ensure the batch insert did not affect read performance#2016-01-1918:21dexterI think data being a bit behind at 2AM is less of an issue given that it just came out of FTP -> Hadoop#2016-01-1918:21dexterduring the day when reads are heaviest there will be little write traffic#2016-01-1918:23dexterif datomic/onyx stays in our top options I'll take you up on that chat with some concrete examples of actual queries#2016-01-1918:29lucasbradstreetYup. I understand your position on the sub v1 state given where you’re at and what you’re looking for.
We’re quickly stabilising with recent work, including jepsen testing, but it sounds like you need something very solid now.#2016-01-1918:30lucasbradstreetbatch datomic writes don’t affect read performance as far as I know, because the peers don’t interact with the transactor at all#2016-01-1918:30lucasbradstreetI guess it could affect index creation, but I don’t understand the subtleties there#2016-01-1918:31dexterreally sounds like a good option still if its stable enough. What's the roadmap for stability like#2016-01-1918:31dexterare we talking months or years for a stable release#2016-01-1918:32dexterI'm sure I'm not the only one that remembers storm 😛#2016-01-1918:41curtosisIIRC batch transactions might need some tuning (and back pressure). There's also at least a theoretical upper bound on bandwidth to Peers for the index updates. But again, the performance-at-scale questions are better answered by Cognitecticians. 😉#2016-01-1918:44michaeldrogalis@dexter: It's in a pretty stable state right now. Datomic itself is < 1.0, for a comparison.#2016-01-1918:45michaeldrogalisAnyhow - I don't want to encroach on the Datomic conversation. Over to #C051WKSP3 if you have more questions.#2016-01-1921:28firstclassfuncAnyone have any pointers for the best way to deal with transact errors?#2016-01-1921:31stuartsierra@dexter Datomic does not try to position itself as a "big data" system. It's in a completely different category from something like HBase. 
"Horizontally-scalable reads" is in contrast to traditional relational databases where both queries and transactions are executed on a single machine.#2016-01-1921:32stuartsierraDatomic centralizes the transaction load on one machine to get efficient ACID transactions, but each Peer can execute its own queries without affecting transaction throughput or queries on other Peers.#2016-01-1921:38stuartsierraEach Peer may be interested in a different subset of the whole database, but a single query is limited in scope to what a single Peer can hold in RAM.#2016-01-1922:47timgilbertSay, if I'm creating my own partition for some related entities, is it good style to use a :db.part/my-partition identifier for it, or should I just use :my-partition and then pass that keyword to (d/tempid)? Trying to decide between this:
{:db/id #db/id[:db.part/db]
:db/ident :db.part/notifications
:db.install/_partition :db.part/db}
...and this:
{:db/id #db/id[:db.part/db]
:db/ident :notifications
:db.install/_partition :db.part/db}
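Either way, the created partition would then be used when asserting new data. A minimal sketch (untested here, assuming the Datomic peer API, an existing `conn`, and the `:db.part/notifications` form from the first snippet; `:notification/message` is a hypothetical attribute already installed in the schema):

```clojure
;; Sketch: asserting a new entity into a user-created partition.
;; Assumes the Datomic peer API, an existing `conn`, and that
;; :notification/message is an installed attribute (hypothetical name).
(require '[datomic.api :as d])

;; d/tempid takes the partition's :db/ident, whichever naming style was chosen:
@(d/transact conn
             [{:db/id (d/tempid :db.part/notifications) ; or (d/tempid :notifications)
               :notification/message "hello"}])
```

Either spelling works with `d/tempid`; the `:db.part/` prefix on user partitions is a naming convention rather than a requirement.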
#2016-01-1922:49timgilbertParenthetically, I wish the datomic docs around this had a few more examples, specifically ones where new data is asserted in the created partitions#2016-01-2001:58bhagany@timgilbert: I do it without the namespace#2016-01-2001:59bhaganyalso, I think the :db namespace is reserved, and I'm not sure if this usage falls into the "okay" category or not#2016-01-2009:36dexter@stuartsierra: Thanks, that was what I needed to know#2016-01-2012:22paramemeProbably a popular question : But is there any blessed way of getting datomic data into Tableau / Wolfram .... insert pretty ad-hoc exploration and graphing system here?#2016-01-2012:26parameme(I found https://github.com/lynaghk/c2) but that looks a little old and I was hoping to have some sizzle (ie. pretty dashboards) as a way of demonstrating datomic's power.#2016-01-2014:15robert-stuttafordparameme probably through the use of some wrapper you write yourself. also, know that https://github.com/kovasb/session exists simple_smile#2016-01-2014:20yendawow thanks session is awesome. I join this link because the readme is not much to read https://medium.com/@kovasb/session-1a12997a5f70#.lfxz6kpdc#2016-01-2014:59yendathe github repo has been overwritten with a non working project though#2016-01-2015:07pesterhazy@yenda, did you dig up a link to a working one?#2016-01-2015:10yendano sadly I thought of going through the old commits but he just smashed the old code#2016-01-2015:12parameme😞#2016-01-2015:12paramemeSession looks brilliant#2016-01-2015:12yendaoh wait there are forks#2016-01-2015:13yendahttps://github.com/skhurram/session#2016-01-2015:15bkamphausgorilla repl is also in that space, (not really an endorsement one way or the other, I don’t use it personally) http://gorilla-repl.org/#2016-01-2015:16bkamphausregarding getting data in, under what terms or with what requirements? 
You can always get back results via REST server or your own export logic from a clojure/java peer into an environment for other language, or export queries in table form etc. (query results without find specifications are sets of tuples).#2016-01-2015:20paramemeBasically it is a business app - spreadsheets come in - we mangle them (hopefully add value) and then allow the data to be sliced and diced... several existent customers have requested ODBC access to a reporting schema or OLAP access...#2016-01-2015:22paramemeTableau (desktop) has been amazing in the ad-hoc data visualisation and dashboard construction - but is pretty expensive for a web-based system... I have been looking around to see if there was anything that would speak datomic sufficiently to allow for either a) a reporting schema / cube or b) an ETL process to Redshift or some other supported columnar / MPP datastore so that existent tools could be used.#2016-01-2015:35bkamphaus@parameme you might be interested in Nubank’s talk from Strange Loop https://www.youtube.com/watch?v=VexLSuOvb0w - in this case its their own analysis environment but they outline some of their data wrangling process.#2016-01-2015:38paramemeThanks @bkamphaus!#2016-01-2018:08sdegutisWhen asserting a value on an entity which is the same as it had before, is it expensive, even though it's basically a no-op?#2016-01-2018:09sdegutisIn other words, is transacting {:db/id 1234 :foo/bar "quux"} fast when 1234 already has {:foo/bar "quux'}?#2016-01-2018:12bkamphaus@sdegutis: it’s not entirely a no-op, it will create a transaction entity for the otherwise empty transaction, i.e. transaction data will be something like [#datom[13194139534316 50 #inst "2016-01-20T18:11:45.847-00:00" 13194139534316 true]]#2016-01-2018:13sdegutisAhhh. Alright then.#2016-01-2018:13sdegutisDoesn't sound terribly slow then.#2016-01-2018:15bkamphausnot slower than a normal transaction, no. 
takes more time than not transacting anything simple_smile#2016-01-2021:32tmortenHello All: I have a question about db partitions and when to use them wisely...for instance: would it make sense to have different partitions in the case where you have an ID that is only unique inside a particular scope? Ex: Company/items has many items which have an item number that identifies the item but only inside that particular company. Other companies may have the same item number. So in this case each company would have its own partition?#2016-01-2021:50stuartsierra@tmorten: Datomic partitions have no relationship to unique attributes.#2016-01-2021:50stuartsierraPartitions are only related to Entity IDs, which Datomic generates.#2016-01-2021:50stuartsierraValues of an attribute declared :db.unique/value or :db.unique/identity must be globally unique across an entire database.#2016-01-2021:52stuartsierraIf you need uniqueness scoped within another identifier, create a composite ID value that includes enough features to be globally unique, like "<company-ID>:<product-id>"#2016-01-2022:00tmorten@stuartsierra: Composite IDs are exactly the path I was headed down. I wasn't sure if there was a "best-practices" type approach, however.#2016-01-2022:01stuartsierraComposite IDs are the recommended approach. Use strings, not byte arrays.#2016-01-2022:02tmortenThen "split" via ":"?#2016-01-2022:34stuartsierraor whatever character you want to use#2016-01-2023:00tmorten@stuartsierra: thank you for the help!#2016-01-2023:06timgilbert@tmorten: if you haven't watched it yet, I thought the final Day of Datomic video was very helpful in helping me get an idea about what partitions are useful for: http://www.datomic.com/part-vi-the-datomic-operational-model.html#2016-01-2023:07tmorten@timgilbert: Got through video IV so far simple_smile#2016-01-2023:08tmortenGood stuff though.
I am definitely a Datomic noob!#2016-01-2100:29currentoorIs there a built in way to get the created-at and updated-at values for an entity?#2016-01-2100:30currentoorOr should I store those as attributes on an entity?#2016-01-2102:12bkamphaus@currentoor: you can get created-at and updated-at information using Datomic’s time capabilities, answer here - https://stackoverflow.com/a/24655980 - is relevant.#2016-01-2102:13currentoorawesome! I should have searched around more.#2016-01-2113:29firstclassfuncI am curious how many are leveraging Datomic REST interface vs. a server side handler.#2016-01-2115:28yendaWhen I run the first function (with transactions expressed as a list with :db/add) it works, but it doesn't with map, am I doing something wrong ? according to the doc I should be able to do {:db/id entity-id attribute value}#2016-01-2115:29bkamphaus@yenda: if [:param/name parent] is meant to be a lookup ref, the map form key corresponding to it should be :db/id not :db/ident.#2016-01-2115:29yendayeah sorry I have that it says :db.error/entity-missing-db-id Missing :db/id"#2016-01-2115:38bkamphaus@yenda: if :param/attributes is cardinality many, you need to put the vector specifying the lookup ref in another vector, e.g. [[:param/name child]], and in that case it would be interpreting child as the second item in a list of refs, which could return that error.#2016-01-2115:40yenda@bkamphaus: thank you that was it !#2016-01-2116:06sdegutisDoes an attribute of :db.type/string and :db.cardinality/many preserve order of the strings?#2016-01-2116:13bkamphaus@sdegutis: nope, just sets, so order is not guaranteed. However, if they’re all individually transacted and you want the order in which they were transacted, that’s recoverable from the transaction entity for each transaction (in query, etc.). 
If you mean within the collection of values supplied to the transaction, there’s neither intrinsic nor recoverable order.#2016-01-2116:13sdegutisAhh good point!#2016-01-2116:14sdegutisI was afraid I'd have to create a new many-ref that points to entities that have a string and timestamp.#2016-01-2116:14sdegutisBut in this case, the transaction order is what matters. So it's easier, thanks to Datomic's history stuff. Phew!#2016-01-2116:14sdegutisThanks @bkamphaus#2016-01-2116:15bkamphausright, provided it’s the time it went into Datomic or the time you supplied by setting the transaction instant, rather than some other domain time that’s of interest.#2016-01-2116:16yendafor the case of an ordered list of things I couldn't find a better solution than having a position attribute#2016-01-2117:55sdegutis@bkamphaus: Interestingly, :db/noHistory seems to have no effect on this, as I'm still able to add strings to a cardinality-many string attribute on a single entity, and query their transaction date along with the strings.#2016-01-2118:13bkamphaus@sdegutis: noHistory is essentially an optimization hint and eventual: history is not preserved in the indexing process (as it usually is) and falls away. It doesn’t have the same guarantees as e.g. excise.#2016-01-2118:13bkamphausbut the most recent attribute’s tx will always be there.#2016-01-2118:14sdegutis@bkamphaus: So you're saying it's guaranteed that something like this will work on a no-history attribute? (d/q '[:find ?when ?what :where [_ :some/thing ?what ?tx] [?tx :db/txInstant ?when]] db)#2016-01-2118:16bkamphausanything that is part of the present db is a datom and has a txid, so yes. It’s just things that have been retracted or previously asserted that will not be kept.#2016-01-2118:19sdegutisExcellent.
Thanks!#2016-01-2221:15gworley3i didn't find anything on google: is there any experience with monitoring datomic peers and transactors with new relic or has it all been done with cloud watch?#2016-01-2221:16gworley3i especially care about the transactors here since i obviously already have some application monitoring in the code that runs the peer#2016-01-2223:42currentoorI was thinking about using datalog like GraphQL, where the client (a browser) sends queries to the datomic peer. But being able to execute arbitrary code inside a :where clause scares me. Is there a reliable way to make sure queries don't have method calls? Can I just check for parentheses?#2016-01-2223:42currentoorOr is this just a terrible idea?#2016-01-2302:10jimmy@currentoor: you might want to look at om next.#2016-01-2315:15stuartsierra@currentoor: Any Datomic Peer (or anything with unrestricted access to a Peer, like the ReST interface) can do anything to the database. If you are offering a service to untrusted clients, you'll want to provide a service interface that limits what they can do. For example, you could write your own parser for a restricted subset of Datomic's datalog that you know is safe. You have to decide what to allow: For example, parentheses could be either rule invocation or method invocation.#2016-01-2316:09tmortenin case anyone else finds this useful...I created a db function to compose an existing parent identifier with another association in an entity transaction: https://gist.github.com/tylermorten/d8484a07229ec4d4f5ff#2016-01-2320:00currentoor@nxqd, @stuartsierra thanks for the advice#2016-01-2414:19jimmyhi guys, is it a good idea to store edn in datomic string field ? We can eval it to clojure data structure later ?#2016-01-2416:26robert-stuttaford@nxqd: nothing wrong with it. we do it in a couple places. mostly where we’re just storing stuff that only the client SPA will use#2016-01-2416:26robert-stuttafordi.e. 
we’d never query on it server side#2016-01-2416:35jimmyyeah of course. I find it's quite convenient in the development phase as well. It's easy to shape the data structure without worrying too much about schema. Thanks for answering simple_smile#2016-01-2502:44richiardiandreaHello guys, can I connect to a datomic (free) server from outside? I forwarded 4334,5,6 but I still get org.hornetq.api.core.HornetQNotConnectedException: HQ119007: Cannot connect to server(s). Tried with all available servers. type: #object[org.hornetq.api.core.HornetQExceptionType$3 0x66b1f02d "NOT_CONNECTED"] What am I missing?#2016-01-2508:10luposlipHi @richiardiandrea, this is typically an error you see if the Transactor has bound to an IP/port unavailable to the peer.#2016-01-2508:11luposlipIf e.g. the Transactor is bound to localhost:4334, but the peer runs on a different host (or in a docker instance), then you need to set the alt-host property to the publicly available IP.#2016-01-2508:12luposlipAnd make sure to expose the ports as well if in a container/firewalled environment#2016-01-2517:18ljosaI have a query where I don't want to pull all the values of a multivalued attribute; I just need to know whether the attribute has any values. I'm doing this: (d/q '[:find (pull ?c [:campaign/docId]) ?m
:where
[?c :campaign/enabled true]
[(missing? $ ?c :campaign/zipcodes) ?m]]
db)
That works. But is there a way to do the same thing in the pull syntax? It would be easier for the consumer code if the true/`false` were in the pull result.#2016-01-2518:07a.espolovHello.
Please enlighten me about how datomic free works
1. can I limit the amount of memory for the memory storage, e.g. to 8 GB?
2. what happens when these 8 GB are full of data and I keep adding new entries: will old ones be removed,
or do I have to remove them?#2016-01-2518:38stuartsierra@a.espolov: Datomic Free offers two kinds of storage: mem is in-memory only, on a single machine, limited to available space in the JVM Heap. If your mem database exceeds the available space on the heap, you will get an OutOfMemoryException. free storage uses local disk on the same machine that is running the Transactor; it is limited by available disk space.#2016-01-2518:39a.espolov@stuartsierra: thx#2016-01-2518:40stuartsierra@ljosa: pull cannot "rewrite" or transform anything about the data it pulls; it's just a selection.#2016-01-2518:40a.espolovbefore, the ' free ' storage using local disk option was not there#2016-01-2518:40ljosa@stuartsierra: ok, thanks#2016-01-2518:42stuartsierra@a.espolov: Datomic Pro Starter Edition is a no-cost way to use Datomic that offers more options for storage back-end. http://www.datomic.com/pricing.html#2016-01-2518:42a.espolovoh, ok#2016-01-2518:43luposlipand the cost of [EDIT:] DynamoDB can be below 1 USD/month, if you set the read/write capacity to 1-2 per second.#2016-01-2522:01timgilbertHey, got a probably slightly dumb question about this page, I'm trying to deploy a Datomic AMI: http://docs.datomic.com/aws.html
So I have my transactor dynamodb properties up and running, but it's set up with host=localhost which I'm kind of assuming won't work for a transactor which I want to connect to over the network
...but it seems as though the CloudFormation config is going to provision my eventual host itself, so I'm not sure how to look into the future and get the AMI's eventual allocated hostname
Is there some Route53 config I need to do or something?#2016-01-2522:03bkamphaus@timgilbert: you don’t have to manage this aspect of it. The cloud formation template generated by Datomic will handle getting host and alt-host values via AWS, and you only need DynamoDB region + table information to connect to the transactor. Peers connect to storage, the transactor writes its location (`host` and alt-host) to storage, peers read it there.#2016-01-2522:03timgilbertOh, right! slaps forehead#2016-01-2522:03timgilbertI had forgotten about the peers-talk-directly-to-storage aspect of datomic#2016-01-2522:12timgilbertOk, so then my peer URI is just basically like datomic:ddb://${aws-dynamodb-region}/${aws-dynamodb-table}/${my-db-name}, correct?#2016-01-2522:13bkamphausyep#2016-01-2522:13timgilbertOk, that makes sense, and there's no hostname in the connection URL. Thanks @bkamphaus!#2016-01-2601:15currentoorI'm using conformity for migrations but I'm a little confused about the API. Based on the README it takes a map of the form
{:migration/143424234_add_user {:txes [[...]]}
:migration/143424235_modify_user {:txes [[...]]}}
But the map is not ordered so how does it know what order the migrations should be applied?#2016-01-2601:15currentoorOr should I pass in a new map for each migration?#2016-01-2601:29taylor.sandoThere is a :requires key in addition to :txes where you can supply the names of the prerequisites, so if modify user depends on add_user, you'd put :add_user in the :requires map for modify_user#2016-01-2603:24donaldballI have a general usage pattern question, I’ll try to summarize in prose. I have a query that returns a bunch of musicians. I’m also going to be interested in the publishers of any albums the musicians have released, but have no guarantee that every musician has done so, and don’t want to exclude them from the results. In sql, I’d balance the cost of the repeated musician data against the cost of another query to determine my execution strategy. In datomic, it’s not clear if a. it would even be possible to do this in a single query or b. if there’s any benefit from doing this in a single query.#2016-01-2603:34bkamphaus@donaldball: for your use case I would use a pull expression. As described here - http://docs.datomic.com/best-practices.html#use-pull-to-retrieve-attribute-values - pull expressions don’t limit results missing attributes/ref traversals, etc. Documentation on using pull expressions is here: http://docs.datomic.com/query.html#pull-expressions#2016-01-2603:37donaldballThanks, that does seem like the tool for which I was looking.#2016-01-2604:00donaldball[:find (pull ?musician [:db/id :album/_artist])] gives me the musicians and their sets of albums, if any, but I don’t really want the albums here, I just want the publishers#2016-01-2604:01bkamphaus@donaldball: you can nest patterns to retrieve publishers from albums, http://docs.datomic.com/pull.html#nesting#2016-01-2604:02donaldballAh, so you can, thanks again#2016-01-2620:15timgilbertHi all...
I'm not having any luck getting a CloudFormation stack to deploy because the EC2 instances keep failing their health checks and being restarted. Anything obvious I should look for?#2016-01-2620:20timgilbertThe instances are reporting "Client.InstanceInitiatedShutdown: Instance initiated shutdown"#2016-01-2620:36timgilbert...then if I try to launch the AMI in an EC2 instance I set up manually I can't seem to ssh to it#2016-01-2620:39bkamphaus@timgilbert: likely culprits are (a) inadequate permissions for transactor role, (b) incompatible memory settings, (c) unsupported instance type *e.g. one without a file system, some more details here in the “transactor fails to start” section — http://docs.datomic.com/deployment.html#troubleshooting#2016-01-2620:40timgilbertThanks again @bkamphaus, will look through that#2016-01-2703:04jimmyhi guys what is the proper way to update refs in datomic ? Normally I transact many ref like this :user/comments #{1 2 4 5} but when I re-update it with :user/comments #{1 2}, it doesn't seem to change#2016-01-2703:09bhagany@nxqd: in that case, you'd have to issue retractions for 4 and 5#2016-01-2703:09bhaganydatomic doesn't replace the whole set like you seem to want#2016-01-2703:11bhaganyI find it helpful to remember that the map form of transactions always expands to vector forms that begin with :db/add#2016-01-2703:12bhaganyso, your original transaction expands to [[:db/add 123 :user/comments 1] [:db/add 123 :user/comments 2] [:db/add 123 :user/comments 4] [:db/add 123 :user/comments 5]]#2016-01-2703:13bhaganyand your second transaction expands to [[:db/add 123 :user/comments 1] [:db/add 123 :user/comments 2] ], which means you're just re-asserting 1 and 2, but not doing anything with the already-existing 4 and 5#2016-01-2703:16jimmyhmm, that explains.
so if I have to do it manually I have to do [[:db/remove :user/comments 4]] ?#2016-01-2703:17bhaganyyes, but it's :db/retract#2016-01-2703:17jimmyah ok#2016-01-2703:18jimmyfor example if I want to 'just update' then I have to retract all and add the new ones ( most simple way ) or I have to find which one is new and which is not then explicitly add or retract those ( which is more complex )#2016-01-2703:19bhaganyyes, those are your options#2016-01-2703:20jimmyok, thanks for helpful insight simple_smile#2016-01-2703:20bhaganyyou're welcome simple_smile#2016-01-2703:28bhaganyI suppose I should point out that this only applies to :cardinality/many attributes. For :cardinality/one, datomic will do the retraction for you#2016-01-2703:28jimmy@bhagany: it's good to know#2016-01-2703:29jimmyI just found out this gist, stuart does point out how to retract all refs. https://gist.github.com/stuarthalloway/2948756#2016-01-2703:29jimmyIt would be a bit easier, now I need to find a general way to solve this#2016-01-2705:16ebahsiniAny good walkthroughs for Datomic that aren't groovy based? #2016-01-2706:11jimmy@ebahsini: java or clojure ? There are tutorials on frontend page of datomic ( it's in java ) and another one in clojure in a Datomic org on github. You can find it right away.#2016-01-2712:52isaaccan not add partition and enum type at the same transact
{:db/id #db/id[:db.part/db]
:db/ident :table
:db.install/_partition :db.part/db}
[:db/add #db/id[:table] :db/ident :resource.type/table]
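Spelled out, the two-transaction fix looks something like the following. A minimal sketch, assuming the Datomic peer API and an existing `conn`: a partition cannot be installed and used within the same transaction, so the install and the enum creation are separated.

```clojure
;; Sketch: install a partition, then use it, in two separate transactions.
;; Assumes the Datomic peer API and an existing connection `conn`.
(require '[datomic.api :as d])

;; 1) install the :table partition
@(d/transact conn [{:db/id (d/tempid :db.part/db)
                    :db/ident :table
                    :db.install/_partition :db.part/db}])

;; 2) in a second transaction, tempids in :table now resolve,
;;    so the enum entity can be created there
@(d/transact conn [[:db/add (d/tempid :table) :db/ident :resource.type/table]])
```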
#2016-01-2712:54isaacFirst, I install partition table. And then add entity to this partition. It will throw exception.#2016-01-2715:10stuartsierra@isaac: That is correct: You cannot create a partition and create entities in that partition within the same transaction. The same is true for attributes: You cannot create an attribute and use it in the same transaction.#2016-01-2715:25isaac@stuartsierra: Yeah, thanks. I separated it into two transaction.#2016-01-2721:45meowI'm trying to install Datomic Pro Starter Edition on Windows 10 - I can't run bin/maven-install because there is no .cmd file equivalent. What gives?#2016-01-2721:48currentoorI'm trying to store a JSON blob as :db.type/bytes but I get this error
ERROR java.lang.IllegalArgumentException: :db.error/wrong-type-for-attribute Value {:href "", :method "POST", :title "corge (Phrase)"} is not a valid :bytes for attribute :action/form
Any suggestions? Also is bytes even the right type?#2016-01-2721:49currentoorWhat are the requirements for something to be storable as bytes?#2016-01-2721:50meowThe Datomic "Getting Started" isn't giving me any warm fuzzies - more like I feel like an idiot. Just FYI.#2016-01-2721:54meowSeriously, does nobody install Datomic on Windows?#2016-01-2722:39jonahbentonhey @currentoor a json blob should be put into a :db.type/string#2016-01-2722:40currentoor@jonahbenton: i plan on storing error information that could be on the order of a megabyte in size. Is that ok?#2016-01-2722:45jonahbenton@currentoor my understanding is that current design considerations around datom storage are in the vein of a max of low 10s of k for strings. so writing and retrieving blobs closer to a MB in size is considered "suboptimal" for some definition of suboptimal. whether it's ok probably depends on your use case, throughput requirements, etc.#2016-01-2722:49stuartsierra@meow: bin/maven-install is only needed if you're going to use the Datomic Peer libraries with Java or Clojure tools that use Maven for dependency resolution, such as Maven or Leiningen.#2016-01-2722:50stuartsierrabin/maven-install is a very simple script that just calls the Maven command-line executable.#2016-01-2722:50meowAssuming that were the case, how does one do that on Windows 10?#2016-01-2722:51stuartsierraI'm afraid I don't know exactly, but if you have Maven (https://maven.apache.org) installed you should be able to copy the mvn command from bin/maven-install.#2016-01-2722:52meowSince I'm just starting with it and it isn't critical I'm putting it off for a few days. Not in the mood.#2016-01-2722:53meowNo offense, I appreciate the help. 
Just don't feel like I should need it just to install the thing.#2016-01-2722:54currentoor@jonahbenton: thanks for the advice.#2016-01-2723:04jonahbentonhey @meow hit me up when you want to come back to it, happy to help#2016-01-2807:45jimmyhi guys, in my entity I have :project/started-date and its type is instant. Is there anyway that I can get all the projects that is within a date range using datomic rule ? Or it's better get the data out of datomic then do the filter on it ?#2016-01-2813:48stuartsierra@nxqd: You can use a regular Datomic query with < and > predicate constraints on the value of :project/started-date#2016-01-2813:49jimmy@stuartsierra: thanks. It seems I overthought the solution then simple_smile#2016-01-2818:52bplatzHi, I'm trying to convert a #datom into just a vector, basically doing some post-processing on this data before sending it out to subscribed clients.#2016-01-2818:52bplatzIs there an easy and performant way to do this in clojure? Right now the way I do it seems like it would be inefficient.#2016-01-2818:52bplatzI'm using: (juxt :e :a :v :tx :added)#2016-01-2821:36stuartsierra@bplatz: try vec#2016-01-2821:37bplatzPretty sure that throws... I'll try to make sure.#2016-01-2821:37stuartsierraAh, maybe (vec (seq datom)) or something similar.#2016-01-2821:39bplatzseq throws: CompilerException java.lang.IllegalArgumentException: Don't know how to create ISeq from: datomic.db.Datum,#2016-01-2821:39bplatzvec throws: CompilerException java.lang.RuntimeException: Unable to convert: class datomic.db.Datum to Object[],#2016-01-2821:52stuartsierraah, then maybe juxt is the way to go#2016-01-2821:53stuartsierraIf you need a vector.#2016-01-2821:54stuartsierraor destructure like (let [[e a v tx add] datom] [e a v tx add])#2016-01-2900:29bplatzThanks Stuart, glad to get a second opinion on it. It just felt like maybe I was missing something, but I guess not!#2016-01-2912:36serioga@meow: I did smth like this:
- run transactor.cmd with dev-transactor-template.properties
- run shell.cmd and import demo data in there
- run console.cmd to see console in browser
Then I was able to see smth in action simple_smile But documentation is not clear here 😞
https://clojurians.slack.com/archives/datomic/p1453931405000091#2016-01-2915:48misha@robert-stuttaford: greetings!
I saw you asked @tonsky whether datomic<->datascript sync is a solved problem or not.
What is the outcome of that? Are there any good approaches,
or did it turn out to be way too specific to a particular project's needs to have something useful to share?
thanks#2016-01-2920:02jgdaveyHas anyone ever had to implement something that’s just like :db.fn/cas, only it retracts the fact if and only if the current value matches? Basically, allowing “new value” to be effectively nil?#2016-01-2920:03jgdaveyI basically want "compare-and-retract"#2016-01-2920:08marshall@jgdavey: You can simply issue a retraction.#2016-01-2920:09marshall@jgdavey: If the value in the [:db/retract E A V] doesn't match, you just get an empty transaction#2016-01-2920:09jgdaveyThat works when the fact exists.#2016-01-2920:09jgdaveyIt doesn’t “blow up” like cas if there is no fact to retract#2016-01-2920:10marshalli.e. you want something to throw or error if you try to retract a non-existent value?#2016-01-2920:11jgdaveyThe specific thing is: I have an expirable token. When that token is used, it is retracted in the same transaction.#2016-01-2920:12jgdaveyI want to ensure that token is valid (in an atomic sense) so that it can’t be used twice#2016-01-2920:12jgdaveyI was thinking in terms of cas, but maybe there’s another way#2016-01-2920:13marshallAh. I’d say probably a good candidate for a check and retract inside a custom tx function#2016-01-2920:13jgdaveyAnd just throw if it’s gone?#2016-01-2920:13marshallif that’s your desired behavior#2016-01-2920:13marshallthat will abort the whole transaction#2016-01-2920:13jgdaveyIs that the correct way to abort a transaction?#2016-01-2920:14marshallhttp://docs.datomic.com/database-functions.html#uses-for-transaction-functions#2016-01-2920:15marshallspecifies that "You can use them to ensure atomic read-modify-update processing, and integrity constraints. (To abort a transaction, simply throw an exception)."#2016-01-2920:15jgdaveyGotcha.
Thanks!#2016-01-2920:15marshallsure#2016-01-2920:17marshallThis page http://docs.datomic.com/exceptions.html also provides some additional info on how exceptions are propagated back to the peer from the transactor#2016-01-2920:44jgdaveyFYI, here’s what I came up with:#2016-01-2920:45jgdaveyhttps://gist.github.com/jgdavey/57208328312ebecbc6c6#2016-01-2920:49marshallseems reasonable to me#2016-01-3114:25robert-stuttaford@bkamphaus: what might cause "org.h2.jdbc.JdbcSQLException: Database is already closed” when restoring from S3 into the dev storage?#2016-01-3114:26robert-stuttafordfreshly installed test server. i keep retrying and it manages another 3k-5k segments before it fails again. slooooowly restoring all 170k segments by retrying over and over.#2016-01-3114:27robert-stuttafordspecifically happening on SQL insert#2016-01-3118:42robert-stuttaford@bkamphaus, please disregard; turns out {DATOMIC_HOST} is not a valid alt_host value XD#2016-01-3119:14bkamphaus@robert-stuttaford: ok, good to hear you got it sorted.#2016-02-0103:31ghufranHi all, I’m going through the datomic tutorial at http://docs.datomic.com/tutorial.html , and trying to understand the samples/seattle/getting-started.clj file in the free datomic distribution. The transactions have an @ symbol before the form, e.g. `;; submit seed data transaction
@(d/transact conn data-tx)`. What does this symbol signify? Is this a clojure annotation of some kind, or specific to datomic?#2016-02-0103:41ghufrannever mind, managed to find it by googling “at sign clojure”, found this useful link https://yobriefca.se/blog/2014/05/19/the-weird-and-wonderful-characters-of-clojure/#2016-02-0108:18luposlipOK @ghufran, so you are now aware that the transact function returns a future, that you deref with the @ sign?
This code is identical:
(deref (d/transact conn data-tx))
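For context, the future returned by d/transact derefs to a map describing the completed transaction; a minimal sketch, assuming a live connection `conn` and valid transaction data `data-tx`:

```clojure
;; Sketch: deref the future returned by d/transact (equivalent to @).
;; Blocks until the transactor acknowledges the transaction.
(let [result (deref (d/transact conn data-tx))]
  ;; the result map contains :db-before, :db-after, :tx-data and :tempids
  (:tx-data result))
```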
#2016-02-0205:39jimmyfor example I have location, and project/location :ref :one to location. now I want to query all projects that have that location?
In your case this means you get a location entity from Datomic. With that entity in hand, you get all project entities for that location simply by calling:
(:project/_location location-entity)
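For reference, the same reverse navigation can also be written as a Datalog query or a pull pattern; a sketch using the attribute names from the question, assuming `db` is a database value and `loc` is the location's entity id:

```clojure
;; Sketch: query-form equivalent of (:project/_location location-entity) -
;; find all projects whose :project/location points at ?loc.
(d/q '[:find [?project ...]
       :in $ ?loc
       :where [?project :project/location ?loc]]
     db loc)

;; the same reverse-attribute syntax works in pull patterns:
(d/pull db [{:project/_location [:db/id]}] loc)
```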
#2016-02-0208:47jimmy@luposlip: thanks simple_smile#2016-02-0211:33val_waeselynckmaybe this has been asked before, but how does t->tx work exactly ? I don't see how a transaction number could map to any one transaction independently of the database.#2016-02-0213:59lucasbradstreetJust to check my rough understanding of how the transactor works. When you transact something, the transactor first writes it to the log, and updates the indexes periodically in the storage layer. The peers and memcached never have to touch the transactor as they can follow the log along fetch from the indexes in the storage layer once the transactor has indexed. I’m assuming there’s no “push” from the transactor to the peer to say that indexes have been updated, instead it’s pulled from the storage layer?#2016-02-0215:21robert-stuttaforddefinitely pushed, @lucasbradstreet#2016-02-0215:22robert-stuttafordthe txor and peers all keep a live index of the newly transacted but not yet indexed-in-storage datoms#2016-02-0215:22robert-stuttafordfor this reason, the txor is pushing all novelty to peers as it happens, so that queries on peers can consider the full database, not just storage index#2016-02-0215:24robert-stuttafordmy rough understanding is that the process is like this:
1. txor logs to tx-log in storage
2. transacting peer informed
3. live index updated; pushed to peers
4. merge of live-index to storage-index possibly triggered due to threshold reached
@bkamphaus can confirm simple_smile#2016-02-0215:31bkamphausessentially. log is always durable and all data is in live/memory index after the transaction, as in diagram here: http://docs.datomic.com/indexes.html#real-time-query#2016-02-0215:31bkamphaustransactor notifies peers of new data (logged on peer)#2016-02-0216:00lucasbradstreetBut the live index isn’t the actual data indexes - i.e. some reduced form of the log that notifies the peer of new data, but not the data itself? Thanks for that diagram, I thought I’d seen that somewhere#2016-02-0216:06timgilbertSay, if I'm going to provide a unique identifier for an entity that will be used in an external process (web service), should I create my own ID with (d/squuid) and :db/unique :db.unique/identity, or can I just use the actual entity ID?#2016-02-0216:08timgilbertThis seems to imply that entity IDs are only for internal keys, but I don't see it explicitly stated anywhere: http://docs.datomic.com/best-practices.html#unique-ids-for-external-keys#2016-02-0216:12bkamphaus@lucasbradstreet: those details are documented in the “On the Peer” section of the memory index docs in caching, http://docs.datomic.com/caching.html#memory-index#2016-02-0216:12bkamphausSpecifically:
* A peer builds the memory index from the log before the call to connect returns.
* A peer updates the memory index when notified by the transactor of a transaction.
* A peer drops the portion of the memory index that is no longer needed when notified by the transactor that an index job has completed.#2016-02-0216:15bkamphaus@timgilbert: you should create an identifier with (d/squuid), it’s true that an entity is essentially an internal id. If, e.g., you ever have to migrate records to another Datomic database, a uuid will be stable in the migration. An entity won’t as it’s assigned by the db and can’t be specified.#2016-02-0216:16bkamphausthese details are distributed through the docs in different places as you acknowledge, is there a first place you would have looked where you’d want that info to be more explicit?#2016-02-0216:23timgilbertThanks @bkamphaus. I did scan the "uniqueness" page looking for this, but then I thought I thought I remembered it from one of the datomic videos, which are a little hard to grep 😉#2016-02-0216:25lucasbradstreet@bkamphaus: thanks so much. I guess I should’ve done more homework in the docs!#2016-02-0216:25bkamphaus@lucasbradstreet: no worries, I’ll admit it’s not immediately apparent that you should jump to the caching topic to answer that question simple_smile#2016-02-0216:26lucasbradstreetYeah, heh. What does “notify” mean here? "A peer updates the memory index when notified by the transactor of a transaction.”. Is that as little as a the tx-id?#2016-02-0216:29bkamphausyep. with peer logging on you’ll see a message like:
2016-01-26 14:16:43.046 INFO default datomic.peer - {:event :peer/notify-data, :msec 2, :id #uuid "56a7e23a-2e3e-41ca-bf4d-b9113aba6e41", :pid 24212, :tid 28}
#2016-02-0216:39lucasbradstreetAh, cool, I am definitely going to turn peer logging on. That’s a good trick#2016-02-0217:00sonnytodoes anyone know of a tool that will generate datomic schema from prismatic schema?#2016-02-0218:20kvltCan anyone tell me of a way to query on a date range on a non-indexed attribute?#2016-02-0218:36bkamphaus@petr same query will work with or without index, although performance will differ. Do you mean a date for your own attribute or domain, or Datomic transaction time?#2016-02-0218:51lucasbradstreetIf I add a UUID attribute with db.unique/identity, that will mean that my whole entity is stored an extra time (so four times vs three), since it’s additionally stored in AVET, right?#2016-02-0219:06kvltbkamphaus: it’s a date for my own attribute#2016-02-0219:06bkamphaus@lucasbradstreet: that’s partially correct, avet will be set to true for any attribute, but only that attribute/value will be indexed, not everything on the entity.#2016-02-0219:07kvltI have only found datomic.api/index-range though (http://docs.datomic.com/clojure/#datomic.api/index-range)#2016-02-0219:07bkamphausentities are derived from datoms, not directly stored in their entirety in indexes.#2016-02-0219:07kvltI basically want to find all entities that had a date attribute between two date-times#2016-02-0219:07bkamphaus@petr you can use standard comparison, < and > etc. in query as the default case. index-range or datoms with :avet will work if you need to page through things by time.#2016-02-0219:09kvltThanks!#2016-02-0219:09bkamphaus@petr sorry re: the datoms + :avet, just remembered your condition specified not indexed. So, yes, index-range and datoms with :avet won’t work, but query will.#2016-02-0219:10lucasbradstreetOh that totally makes sense#2016-02-0219:10bkamphausit’s also fairly cheap to turn on :avet - especially for a regularly sized value like an inst, long, etc.
any particular motivation for keeping it off?#2016-02-0219:10kvltbkamphaus: [(< :moo/some-date #inst "2015-10-14T17:30:00.953-00:00")] ?#2016-02-0219:11lucasbradstreetSo maybe it’ll be used to lookup the eid, and then if you wanted to access a bunch of attributes on that entity, they’d be accessed via EAVT#2016-02-0219:13lucasbradstreetThanks#2016-02-0219:15kvltNevermind. I just bind it to a var#2016-02-0219:15bkamphaus@petr I would parameterize the time values myself, i.e. have ?inst1 and ?inst2 in the :in and provide values.#2016-02-0219:16kvltYep, I would do too. Was just testing#2016-02-0222:00gworley3i'm trying to get cloudwatch monitoring to work but so far not having any luck. i wrote up the details of what i've done on a stackoverflow question. any suggestions of what i could do to get it working would be appreciated https://stackoverflow.com/questions/35164549/monitoring-datomic-in-cloudwatch-without-cloudformation#2016-02-0222:06bkamphaus@gworley3: I’ve only used our documented permission granularity or let it be set via the ensure-transactor process and never had any issues. i.e.,
{"Statement":
[{"Resource":"*",
"Effect":"Allow",
"Action":
["cloudwatch:PutMetricData", "cloudwatch:PutMetricDataBatch"],
"Condition":{"Bool":{"aws:SecureTransport":"true"}}}]}
#2016-02-0222:07bkamphausin general my first troubleshooting step (if the situation allows) for any AWS config that’s not working is to try and run the transactor locally with keys in the environment with pretty wide permissions, so I can get a sanity check on my settings with the complexity of role config factored out.#2016-02-0222:08bkamphausYour situation may or may not allow for a troubleshooting step like that, of course.#2016-02-0222:14bkamphaus@gworley3: also just a sanity check, you’re using Pro or Pro Starter?#2016-02-0222:14gworley3@bkamphaus: pro starter#2016-02-0222:15bkamphausok, should be fine, it’s just free that's not supported for cloudwatch metrics.#2016-02-0222:36gworley3interesting. when i look at the iam role access advisor it says nothing has tried to access cloudwatch through the role#2016-02-0222:52gworley3also, where should i expect to see them show up when it works? metrics on the ec2 box or as a separate datomic section or somewhere else?#2016-02-0223:24bkamphaus@gworley3: re: where they’ll show up, I use CloudWatch from the AWS console, from the left drop down menu there’s a “Custom Metrics” drop down where you can select “Datomic"#2016-02-0223:26gworley3ah, ok. i don't (yet) show anything like that#2016-02-0223:30bkamphausI would double check that the IAM Role that displays on the instance description in the EC2 Dashboard is the correct one, also. I just checked a working transactor IAM role and its inline policy for metrics is verbatim from the docs:
{"Statement":
[{"Resource":"*",
"Effect":"Allow",
"Action":
["cloudwatch:PutMetricData", "cloudwatch:PutMetricDataBatch"],
"Condition":{"Bool":{"aws:SecureTransport":"true"}}}]}
#2016-02-0223:31bkamphauson startup I usually see it take 5 minutes or so for metrics to show up.#2016-02-0223:34gworley3i changed the role to have this exact policy but still not seeing anything#2016-02-0300:13gworley3just thinking of other things that could interfere (or at least maybe could in my mind since I don't know the code): i'm not shipping logs to s3 and i'm running this on a box i built on aws running ubuntu 14.04 without using either cloudfront or the datomic transactor ami and i'm using cassandra as the datastore#2016-02-0315:21chadhsarchitecture question: could you start by running everything on one server instance: nginx, your clojure app + datomic peer, datomic transactor, and sql storage?#2016-02-0315:22chadhsthen grow by breaking things out… like moving storage to dynamodb#2016-02-0315:22chadhsetc etc#2016-02-0315:23bkamphaus@chadhs: for testing, initial running it would work, but we don’t provide support for datomic configs that aren’t distributed in production. The reason being that combining processes that way impacts the stability of other processes and you need storage and the transactor to run smoothly to avoid hiccups in availability.#2016-02-0315:23chadhs@bkamphaus: so at a minimum you’d want appserver, transactor, storage split#2016-02-0315:25bkamphaus@gworley3: I’m still stuck thinking if there are any other differences I can probe. I may run an end-to-end deploy/config test with the latest version for aws metric reporting to see if anything of note comes out. 
Apart from that, not sure what the difference could be.#2016-02-0315:26bkamphaus@chadhs: correct.#2016-02-0315:27chadhscool thnx, that helps#2016-02-0315:35timgilbertHey, quick question about the console and licensing: I'm setting up a separate staging and production environment, and I'm considering having the console running on a dedicated server somewhere via console -p 8080 staging datomic: prod datomic:#2016-02-0315:36timgilbert...so I'm wondering if doing that would wind up consuming a processor license from both staging and prod even when nobody is actively using the console, or whether it only takes up a process when someone has, say, logged into it and selected "staging" from the dropdown#2016-02-0315:37bkamphaus@timgilbert: whenever it’s running it consumes a process.#2016-02-0315:37timgilbertPart of the motivation is to allow developers to use the console without necessarily needing the full datomic install on their laptops#2016-02-0315:38timgilbertOk, thanks @bkamphaus. So if I run it as above, it will be consuming one each from staging and production as long as the process is up, correct?#2016-02-0315:39bkamphausthat’s correct (its connection to each as a peer)#2016-02-0315:39timgilbertOk, cool. Thanks again for the info.#2016-02-0318:18gworley3@bkamphaus: thanks for taking a look. i keep hoping there's something obvious that i've failed to do that would address it. fyi running version 0.9.5344#2016-02-0319:14currentoorIs there a way to get the size of the the DB?#2016-02-0319:23arohnercurrentoor: not via the API, that I’m aware of. But you can always go look at your storage directly#2016-02-0319:23currentoor@arohner: good point#2016-02-0405:29currentoorI'm seeing very slow queries and my database is not even that large. Any suggestions for how to proceed?#2016-02-0406:42currentoorIf I have an expensive query that returns 5MB of data, I can see that the first time the query is made it takes about ~3 seconds. 
But the second time that same query is made, shouldn't it be way faster because of caching?#2016-02-0406:43currentoorI'm wondering if I've setup something incorrectly. #2016-02-0407:18currentoorI thought perhaps the peer does not have enough memory but based on New Relic I can see that I haven’t hit the max heap size yet, so that’s probably not the cause.#2016-02-0408:46currentoorNevermind, turns out to be a different issue.#2016-02-0420:00bkamphaus@currentoor: if you revisit this again and can share the query or an obfuscated form of it, there are common issues like clause ordering, typos in variable bindings, inclusion of clauses that don’t relate and lead to cartesian product intermediate sets of tuples, etc. that result in inefficient queries (and sometimes those inefficiencies may only become glaringly obvious at scale).#2016-02-0420:02bkamphausalso note that index or log segments go into the object cache and won’t by default consume the entire heap, you can change the objectCacheMax system property (defaults to half of heap), more on that here: http://docs.datomic.com/caching.html#object-cache#2016-02-0421:07currentoor@bkamphaus: much obliged!#2016-02-0423:10ljosaDoes Datomic do okay with the transactor and storage on the other side of the country (~100 ms)? Our west coast people are trying to get started with Datomic and are reporting slowness.
From a newly started peer JVM, a query that takes 5 s within the same AWS region as the servers takes 99 s from laptops in our Portland, Oregon, office (~100 ms ping times). And d/connect takes 80 s.
It's much better for subsequent queries, as the peer starts to cache most of what it needs. But is this expected behavior, and is it network latency that is the determining factor? Or is something wrong? Should I be looking for Couchbase connection problems?#2016-02-0423:11bkamphaus@ljosa: it sounds like network latency is certainly a contributing factor and that’s not a configuration I would typically recommend. Is there also a cross-regional consistency setting (i.e. replication or something) that’s a confounding factor as well?#2016-02-0423:13ljosano, no couchbase xdcr, as it doesn't guarantee the consistency that Datomic requires. just a transactor and a couchbase cluster, both in us-east-1.#2016-02-0423:14bkamphaus@ljosa: what are your memory-index settings?#2016-02-0423:15ljosaon the transactor?#2016-02-0423:15lockdownyep, I would try couchbase direct queries first#2016-02-0423:15lockdownto make sure you can discard it#2016-02-0423:16ljosamemory-index-max=512m
memory-index-threshold=32m
object-cache-max=128m
#2016-02-0423:16bkamphausok, that looks reasonable.#2016-02-0423:18bkamphausreason I ask re these two things is (1) really common issue with sudden latency spikes of users on e.g. Cassandra is cross-datacenter consistency/replication, have seen two orders of magnitude jump in latency out of that (2) peers have to accommodate memory index (and read all log/memory index segments into memory) with the initial call to connect, so that could be a contributing factor where even a small amount of latency could have a big impact.#2016-02-0423:19ljosais the peer able to pipeline its couchbase reads, or is there a lot of read-wait-read?#2016-02-0423:24ljosaI did some couchbase testing from my house in Massachusetts. Ping times around 25 ms. Connecting takes a few seconds. The query that they used in Oregon takes 30 s. Also tested directly with Couchbase, and things look reasonable: 200 ms to create cluster, 930 ms to open bucket, 30 ms to read a small document. No errors from Datomic or Couchbase.#2016-02-0423:29bkamphaus@ljosa: it’s certainly true that (especially with the cross-country latency contributing) a warm query will be significantly faster as it won’t be retrieving segments from storage. 
If the entire database or most frequently accessed segments can be held in the object cache on the peer, performance should be fine after the warm up period.#2016-02-0423:29bkamphausdo you have peer logging enabled?#2016-02-0423:31bkamphausthe concurrency of peer reads can be adjusted, also: http://docs.datomic.com/system-properties.html#peer-properties#2016-02-0423:31ljosayes, after my 30 s query I get the first metrics: [Datomic Metrics Reporter] INFO datomic.process-monitor - {:tid 22, :AvailableMB 2590.0, :StorageGetMsec {:lo 26, :hi 389, :sum 33313, :count 857}, :pid 37440, :event :metrics, :ObjectCache {:lo 0, :hi 1, :sum 75, :count 944}, :LogIngestMsec {:lo 0, :hi 601, :sum 601, :count 2}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :DbAddFulltextMsec {:lo 0, :hi 29, :sum 29, :count 2}, :PodGetMsec {:lo 54, :hi 76, :sum 186, :count 3}, :LogIngestBytes {:lo 0, :hi 3581246, :sum 3581246, :count 2}, :StorageGetBytes {:lo 67, :hi 48478, :sum 10179767, :count 857}}#2016-02-0423:33bkamphaushm, the average StorageGetMsec time for the peer doesn’t seem notably slow from the Datomic peer view, (39 msec average)#2016-02-0423:34ljosaI'm going to try to increase concurrency and see if it changes.#2016-02-0423:36bkamphausthe same query is an order of magnitude increase? I would only expect that from latency if e.g. the StorageGetMsec time is extremely fast (i.e. an order of magnitude lower if we’re talking 3 vs. 30 sec), though this assumes storage reads dominate.#2016-02-0423:37bkamphauscold and hot query comparisons, system configs identical re: heap and object-cache size? (i.e not cross a memory threshold for intermediate representation on differently configured systems?)#2016-02-0423:37ljosa-Ddatomic.readConcurrency=10 didn't change anything.#2016-02-0423:38ljosasame query, in lein repl on identical laptops. No -Xmx#2016-02-0423:40ljosaThe query takes 5.3 s from an AWS instance in the east. 
Metrics: {:tid 19, :PeerAcceptNewMsec {:lo 1, :hi 1, :sum 1, :count 1}, :AvailableMB 1200.0, :StorageGetMsec {:lo 0, :hi 5, :sum 444, :count 846}, :pid 12134, :event :metrics, :ObjectCache {:lo 0, :hi 1, :sum 81, :count 936}, :LogIngestMsec {:lo 1, :hi 619, :sum 620, :count 2}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :PeerFulltextBatch {:lo 1, :hi 1, :sum 1, :count 1}, :DbAddFulltextMsec {:lo 0, :hi 35, :sum 35, :count 2}, :PodGetMsec {:lo 12, :hi 31, :sum 71, :count 3}, :LogIngestBytes {:lo 0, :hi 5165426, :sum 5165426, :count 2}, :StorageGetBytes {:lo 67, :hi 48478, :sum 10071059, :count 846}}#2016-02-0423:43bkamphauswow, StorageGetMsec average is 0.52 msec, vs. 39 msec in the other example, so I’d say that could certainly account for the difference (very good fit actually to 5.3 second versus 30 second ratio).#2016-02-0423:46ljosaI tried -Ddatomic.readConcurrency=1000 also, without much effect. (Well, it went from 30.8 s to 28.8 s, not sure if I just got lucky.)#2016-02-0423:47bkamphausmay just be luck, I think the latency is the bottleneck. The storage retrieval component of the query just being masked by the extremely fast storage access in the primary config.#2016-02-0423:47ljosaDo you have other tricks that may speed up the connect and first query? Or do our people in Oregon just have to get used to long startup times? (This is for dev work and ad-hoc analysis; we don't have Datomic peers on production servers in the west.)#2016-02-0423:50bkamphausthe usual answer for reducing latency in population the object cache is memcached ( http://docs.datomic.com/caching.html#memcached ) but not sure you’ll want to configure it for the dev work and ad-hoc analysis situation you describe. I’m not sure where the costs with the queries are being made perf wise.#2016-02-0423:52bkamphausi.e. if it’s intermediate reps and joins, narrowing, etc. or if your clauses match a ton of results that have to be then passed on. 
You could throw up a REST server to return query results for ad hoc analysis and submit queries to the endpoints, that way the peer stays warm, though I’m not sure that would save you much trouble if you’re getting really large result sets.#2016-02-0423:52ljosadoes such a memcached have to be reachable by both the transactor and the peer?#2016-02-0423:53bkamphaussome of the costly queries may be able to be tuned via clause re-ordering, or strategies for handling time/tx provenance if those are a component?#2016-02-0423:54bkamphausdifferent Datomic processes can use a different memcached#2016-02-0423:56ljosaso a developer could have a memcached on his laptop without the transactor needing to be configured with memcached as well?#2016-02-0423:58ljosathe query itself is just a simple join and pulling four attributes on ~1285 joined pairs of entities: (count (d/q '[:find
(pull ?c [:c/d :c/i])
(pull ?b [:b/n :b/x])
:where
[?c :c/e true]
[?b :b/c ?c]] db))
=> 1285#2016-02-0500:00bkamphaus:b/c is card one or many?#2016-02-0500:00ljosaone#2016-02-0500:02ljosa:c/d and :c/i contain short strings; :b/n and :b/x are floats.#2016-02-0500:13ljosawhoa! the memcached solution reduced the cold query time from my house (25 ms ping time) from ~30 s to 2.2 s. I think we have our solution!#2016-02-0500:14bkamphauscool, good to hear. I wonder if there’s a cost in the structure of that pull that’s non-obvious. I’m doing testing against a larger mbrainz than the sample we provide, I see a several orders of magnitude bump in perf when I put in the second pull statement, I’ll discuss that with the dev team, though, too.#2016-02-0500:16bkamphausactually never mind, that time is only introduced when I have a typo in one of the pulled attributes, interesting.#2016-02-0500:16ljosathanks, we'll keep that in mind and see if we notice differences with two-pull queries.#2016-02-0500:16ljosaah simple_smile#2016-02-0500:16bkamphaussorry thinking aloud simple_smile#2016-02-0500:16ljosathank you for your help!#2016-02-0500:21bkamphausyeah, I’m not sure, I see < 150 msec w/local postgres storage for this query (larger mbrainz than public) with 10,340 count:
(time
(count
(d/q '[:find (pull ?t [:track/name :track/release]) (pull ?a [:artist/sortName :artist/startYear])
:where
[?a :artist/name "Pink Floyd"]
[?t :track/artists ?a]]
(d/db conn))))
#2016-02-0500:21bkamphausanyways, glad the memcached option seems to be helping! simple_smile#2016-02-0500:24bkamphaus~500 msec with reverse ref in first pull instead of typo 😛 (again 10,340 total results)
(time
(count
(d/q '[:find (pull ?t [:track/name :medium/_tracks]) (pull ?a [:artist/sortName :artist/startYear])
:where
[?a :artist/name "Pink Floyd"]
[?t :track/artists ?a]]
(d/db conn))))
#2016-02-0510:22nha@sonnyto: you could maybe have a look at https://github.com/cddr/integrity#integritydatomic#2016-02-0518:42currentoorBased on this stack overflow post I understand how I can get updated-at values using the history db.
https://stackoverflow.com/questions/24645758/has-entities-in-datomic-metadata-like-creation-and-update-time
But for performance I wanted to retrieve these timestamps together as part of another query. So is that possible? And is this the correct way to do it?
(d/q '[:find (pull ?a structure) ?created-at (max ?updated-at)
:in $ structure
:where
[?a :action/status "foo"]
[?a :action/id _ ?id-tx]
[?id-tx :db/txInstant ?created-at]
[?a _ _ ?all-tx]
[?all-tx :db/txInstant ?updated-at]
]
(d/db conn)
ent/ActionStructure)
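An alternative to folding the timestamps into the big query is a small per-entity query against the history db; a sketch, where `entity-timestamps` is a hypothetical helper name and `eid` can be an entity id or lookup ref:

```clojure
;; Sketch: first/last :db/txInstant touching an entity, via the
;; history db. Collection find spec returns a vector of instants.
(defn entity-timestamps [db eid]
  (let [insts (sort (d/q '[:find [?inst ...]
                           :in $ ?e
                           :where
                           [?e _ _ ?tx]
                           [?tx :db/txInstant ?inst]]
                         (d/history db) eid))]
    {:created-at (first insts)     ;; earliest tx touching the entity
     :updated-at (last insts)}))   ;; most recent tx touching it
```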
#2016-02-0518:43currentoorAssuming :action/id is a unique attribute that is only set when the entity is created.#2016-02-0518:51stuartsierra@currentoor: "for performance I wanted to retrieve these timestamps together as part of another query"
There is usually no need to combine queries for performance reasons.#2016-02-0518:52stuartsierraSmaller, simpler queries usually perform better than large, complex queries.#2016-02-0518:53currentoorYeah I can totally see where you're coming from @stuartsierra but for this specific use-case I'm fetching about 1000 entities from the DB then mapping over them to get their created-at updated-at timestamps. The timestamp loop makes up about half my total execution time.#2016-02-0518:54currentoorIndividually these created-at updated-at queries are negligible but in aggregate they take a significant amount of time.#2016-02-0518:55currentoorDo you think they would still take just as long if I put them inside the larger query?#2016-02-0518:56stuartsierra@currentoor: As with any performance question, measure first. But I would not expect the combined queries to perform any better than separate queries.#2016-02-0518:58stuartsierraI would look at the size of the ?updated-at query results. If you have many transactions updating each entity, that could account for some of the cost of the query.#2016-02-0519:29currentoorHmm. So I know this is hearsay but I'm getting pressured to store created-at updated-at attributes directly on the entity, just like other DBs. I know this is re-inventing stuff but what about performance, do you suspect this would be faster than using Datomic's built in time facilities?#2016-02-0520:42stuartsierra@currentoor: As always, test and measure. Make sure you have realistic-sized data to test.#2016-02-0521:25currentoorWill do, thanks.#2016-02-0522:36currentoorI'm having trouble getting a set of tx-times with this query.
(defn timestamps [db lookup-refs]
(d/q '[:find (min ?tx-time) (max ?tx-time)
:in $ [?eid ...]
:where
[?eid _ _ ?tx _]
[?tx :db/txInstant ?tx-time]]
(d/history db)
lookup-refs))
I'm passing in four lookup-refs so I would expect the result to be four tuples, one for each of the lookup-refs. But instead I get this.
[[#inst "2016-02-05T22:22:31.085-00:00" #inst "2016-02-05T22:31:29.292-00:00"]]
#2016-02-0522:38currentoorCan a query be used to take in a collection and return a collection in the same ordering?#2016-02-0522:38currentoorOh I get it, uniqueness is the issue. This works.#2016-02-0522:39currentoor(defn timestamps [db lookup-refs]
(d/q '[:find ?id (min ?tx-time) (max ?tx-time)
:in $ [?eid ...]
:where
[?eid _ _ ?tx _]
[?eid :action/id ?id ?tx _]
[?tx :db/txInstant ?tx-time]]
(d/history db)
lookup-refs))
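As for returning results in the caller's ordering: query results are unordered sets, so one approach is to re-key the tuples by :action/id and then walk the input refs; a sketch, where `ordered-timestamps` is a hypothetical wrapper around the `timestamps` fn above and each lookup ref has the form [:action/id id]:

```clojure
;; Sketch: restore the input ordering after the set-returning query.
(defn ordered-timestamps [db lookup-refs]
  (let [rows  (timestamps db lookup-refs)        ;; #{[id min max] ...}
        by-id (into {} (map (fn [[id & times]] [id times])) rows)]
    ;; destructure [:action/id id] and look each id up in order
    (map (fn [[_attr id]] (by-id id)) lookup-refs)))
```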
#2016-02-0606:50robert-stuttaford@currentoor: there’s also the :with clause in Datalog query#2016-02-0606:51robert-stuttafordbtw, the first datalog pattern in your timestamps query [?eid _ _ ?tx _] is made redundant by the second#2016-02-0608:39robert-stuttaford@bkamphaus: what is the maximum size a Datomic database can reach? i vaguely remember Stu either talking about or writing about this somewhere but i can’t find it. i know 1 billion datoms is possible. what’s the total ‘address space’?#2016-02-0611:44tcrayford@robert-stuttaford: ~10 billion datoms is the problem point. Not an address space thing, but problematic#2016-02-0611:44tcrayford@robert-stuttaford: also note that you can have at most ~20k idents in the db, because every ident is in memory in every peer/transactor#2016-02-0612:42robert-stuttafordthanks @tcrayford ! what makes 10b datoms a problem? can you direct me to something to read or watch?#2016-02-0614:39bkamphaus@robert-stuttaford: Stu's answer on this thread elaborates a little more: https://groups.google.com/forum/m/#!topic/datomic/iZHvQfamirI -- it's a practical limit and the value is a rough rule of thumb. the database still functions, but probably not with acceptable performance characteristics especially if the transaction volume would reach that size limit quickly for any given use case.#2016-02-0616:19robert-stuttafordthanks ben!#2016-02-0616:19robert-stuttafordsuper valuable info#2016-02-0616:55meowWhat is an ident of which there can be at most 20k? 
I'd like to understand this limit.#2016-02-0617:23bkamphausthe in-memory aspect of idents is documented here: http://docs.datomic.com/identity.html#idents#2016-02-0617:32meow@bkamphaus: Thank you for that link.#2016-02-0617:36meowSo is it fair to say that the ident limitation is primarily felt with more complex schemas?#2016-02-0617:37meowIf so, what is the impact of schema evolution?#2016-02-0617:40bkamphaus@meow: I’m not familiar with anyone running up against practical limits with ident count, though I imagine it would have an impact if you had e.g. generated or flexible tagging that users provided (if you anticipated thousands and thousands of that sort of tag, I would say switch to a unique/identity keyword or string attribute of your own.#2016-02-0617:40bkamphausthere’s also a limit on on schema elements but it’s pretty high, 2^20 http://docs.datomic.com/schema.html#schema-limits#2016-02-0617:42meowBraid has open-ended tagging of conversations.#2016-02-0617:43meowWe will hit those limits.#2016-02-0617:43meowIs there a performance penalty to the unique/identity keyword or string attribute of our own.#2016-02-0617:44meowAnd can you address the impact of schema evolution?#2016-02-0617:48bkamphausident is more performant but carries more memory overhead (pre-loaded). With your own unique attr on ref’d entity vs. ident you pay cost for retrieving segments and require warm cache etc. (three rough orders of magnitude to get segment from storage, memcached, object cache).#2016-02-0617:49meowThat is unfortunate.#2016-02-0617:50bkamphausif by schema evolution you mean how to make the change, you can find every one of those enums and give it an identical attr/val keyword name for what the ident was, leave the entity intact.#2016-02-0617:51bkamphausbut obviously pull, query, etc. 
and automagic around identy/eid translation is lost and requires more verbose lookup ref.#2016-02-0617:52meowBy schema evolution I mean the addition and/or removal of enitity attributes over time as the database design changes in a production environment along with the issues of migration of existing entities and how that works in datomic given that it is immutable.#2016-02-0617:53bkamphausI want to double check on that 20k limit, not sure if calculated or from a rule of thumb Stu or someone provided i.e. on a video. I do know that we caution people against too many idents but I’m not familiar with that specific boundary, @tcrayford if you don’t mind my quick follow question, can you refer me to the source for the 20k ident limit?#2016-02-0617:53bkamphaus@meow: not immutable over time, i.e. you can retract idents, assert them on other attributes, etc. But for testing, staging, etc. a lot of times you’re using the database itself as a test then migrating the portion of the schema/data you prefer to keep.#2016-02-0617:54meowWe always migrate the production instance of Braid.#2016-02-0617:55meowWe have the full history.#2016-02-0617:55bkamphausthe “present” database t/snapshot is the efficient one I mean, as in: http://docs.datomic.com/filters.html#usage-considerations#2016-02-0617:55bkamphaus“queries about "now" are as efficient as possible–they do not consider history and pay no penalty for history, no matter how much history is stored in the system."#2016-02-0617:58meowWhat schema is used when I query for something that happened yesterday. 
Is it yesterday's schema or today's schema, assuming the schema was changed?#2016-02-0618:01meowBraid is an online group chat application with groups and tags, and no limits on either.#2016-02-0618:02meowAnd the schema is evolving daily.#2016-02-0618:02meowAnd we have a production instance running since day 1.#2016-02-0618:03meowI use it every day.#2016-02-0618:03bkamphaus@meow answers to many of your questions are covered here: http://docs.datomic.com/schema.html#Schema-Alteration — however, an ident is not a schema element intrinsically (i.e. your own enums not in :db.part/db and an entity having an ident now or in the past doesn’t introduce the kind of complications you get from e.g. relaxing then trying to re-assert a unique constraint#2016-02-0618:03meowI understand that aspect.#2016-02-0618:04meow"Thus traveling back in time does not take the working schema back in time, as the infrastructure to support it may no longer exist. Many alterations are backwards compatible - any nuances are detailed separately below."#2016-02-0618:05meowThat was the answer I was looking for.#2016-02-0618:06meowI wrote Schevo in Python. Schevo was for "schema evolution". It was similar to datomic but OO.#2016-02-0618:07bkamphausI have to step away for a while, I’ll check in on the 20k limit re: idents Monday AM with the dev team. I’ll let you know how precise that limit is or if there are tradeoffs you can make (i.e. if you can keep running it up if it’s an important enough aspect of the architecture and you can accommodate via schema provisioning, cache settings, etc.).#2016-02-0618:07meowThank you for all your help.#2016-02-0618:08bkamphauss/schema provisioning/machine provisioning#2016-02-0618:09meowWe could also take a federated approach to scaling.#2016-02-0618:10meow@jamesnvc: @rafd @crocket See above for details on datomic limitations. 
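The alternative bkamphaus suggests above for user-generated tags (a unique string attribute of your own instead of `:db/ident`) might look like this hypothetical schema sketch; `:tag/name` is an invented attribute:

```clojure
;; Hypothetical schema sketch: model user-generated tags as entities
;; with a unique string attribute rather than as :db/ident keywords,
;; since every ident is held in memory on every peer and transactor.
(def tag-schema
  [{:db/id                 (d/tempid :db.part/db)
    :db/ident              :tag/name
    :db/valueType          :db.type/string
    :db/cardinality        :db.cardinality/one
    :db/unique             :db.unique/identity
    :db.install/_attribute :db.part/db}])

;; After transacting the schema, a tag upserts on its name and can be
;; referenced via a lookup ref instead of an ident:
;; (d/transact conn [{:db/id (d/tempid :db.part/user) :tag/name "clojure"}])
;; [:tag/name "clojure"]
```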
^#2016-02-0618:13jamesnvcIf I understand correctly, the ident limit is with regards to :db/ident things?#2016-02-0618:14jamesnvctags in braid are just strings that we do look-up on, so the schema shouldn’t actually be growing#2016-02-0618:15jamesnvc(this would be relevant for another project @rafd and I have worked on though)#2016-02-0618:16bkamphaus@jamesnvc: yes this is only about the count of entities that have :db/ident and the impact on memory, I’m trying to source the practical limit that was quoted here as I’m not familiar with it, but the softer principle of limiting the total number of things with idents because you always pay their memory overhead should be a modeling consideration.#2016-02-0618:16jamesnvcyeah, that makes sense#2016-02-0618:37currentoor@robert-stuttaford: thanks!#2016-02-0619:04stuartsierraI would apply the same guideline for Datomic Idents that I use for Keywords in Clojure applications: do not use Keywords for anything user-generated.#2016-02-0623:48tcrayford@bkamphaus: pretty sure I was wrong and the limit is just 2^20#2016-02-0701:41bkamphausAh, ok.#2016-02-0702:20crocketDatomic free vs datomic pro#2016-02-0702:20crocket@meow: I was referring to the limitations of datomic free.#2016-02-0703:55meow@crocket This was not in response to your question. This was my own question about a different limitation.#2016-02-0721:46kschraderis this the correct way to remove an index:#2016-02-0723:53bkamphaus@kschrader: should be an ok approach except for the case where the attribute is unique (i.e. being unique is sufficient to keep the index, you’d also have to additionally drop the uniqueness constraint to drop the index)., i.e. (example from docs - last section: http://docs.datomic.com/schema.html#schema-alteration )
[[:db/retract :person/external-id :db/unique :db.unique/identity]
 [:db/retract :person/external-id :db/index true]
 [:db/add :db.part/db :db.alter/attribute :person/external-id]]
#2016-02-0800:35kschradergot it thanks#2016-02-0800:35kschrader@bkamphaus: is there any way to know how much memory an index will take up?#2016-02-0810:25pesterhazyI'm seeing Transaction error clojure.lang.ExceptionInfo: :db.error/transactor-unavailable Transactor not available {:db/error :db.error/transactor-unavailable} pretty regularly#2016-02-0810:26pesterhazyit always recovers but this is a bit worrying. (This is using AWS, official AMIs with dynamo)#2016-02-0810:26pesterhazycould this be related to GC pauses in the peer?#2016-02-0811:48dm3yes, that could trigger it#2016-02-0811:48dm3same way as a broken network#2016-02-0813:36pesterhazythe GC pauses we see are only 5 seconds, though -- would that be sufficient?#2016-02-0813:37pesterhazynot that 5 second GC pauses aren't indicative of a problem in our code simple_smile#2016-02-0813:56dm3is there a timeout parameter of some sorts?#2016-02-0814:00bkamphaus@pesterhazy also large transactions on the peer or indexing not keeping up on transactor. gc pause of just 5 seconds could possibly impact it if times poorly or in quick succession.#2016-02-0814:02pesterhazythis peer is processing hardly any transactions#2016-02-0814:02bkamphausTimeout tolerance can be set by upping transactor heartbeat (Datomic level), or on peer changing datomic.peerConnectionTTLMsec to be higher (HornetQ level)#2016-02-0814:11bkamphaus@pesterhazy: if the peer isn’t processing many transactions, and the transactor (verified from metrics or logs) is heartbeating fine and not reporting alarms, peer GC is most likely culprit. If you’re not using non-default JVM GC settings on peer app, you could adopt some similar to those on transactor if goal is to avoid pauses. 
Or tolerate the GC by upping one or both of the settings mentioned above.#2016-02-0814:12pesterhazylike stu halloway says in his debugging talk, the culprit is always the GC#2016-02-0814:13pesterhazylooking at the transactor metrics, the heartbeating looks fine#2016-02-0814:13pesterhazyI guess there's no way around finding where those GC pauses are coming from#2016-02-0814:13pesterhazythanks for your help!#2016-02-0907:36onetomim trying to find a most minimal example of fast in-memory datomic db tests which doesn't require a global connection object and uses d/with.
i found http://yellerapp.com/posts/2014-05-07-testing-datomic.html but it doesn't share what does that (empty-db) function does to avoid recreating the db and reconnecting to it.#2016-02-0907:36onetomi found https://gist.github.com/vvvvalvalval/9330ac436a8cc1424da1 too but it seems a bit harsh and doesn't show what is solution is it comparing to.#2016-02-0907:41onetomah, i see vvvvalvalval has a recent article on this topic https://vvvvalvalval.github.io/posts/2016-01-03-architecture-datomic-branching-reality.html#2016-02-0908:25pesterhazy@onetom, interesting article!#2016-02-0908:36pesterhazydo I read the article correctly: you can use d/with to do multiple "transactions" one after another where the second one builds on the first?#2016-02-0908:51pesterhazyso you can basically completely emulate, or "fork", a connection#2016-02-0909:00onetomyup, that's the idea#2016-02-0910:15robert-stuttafordyou totally can#2016-02-0910:16robert-stuttafordwe do this with great success#2016-02-0910:16robert-stuttafordyou do need to shepherd the temp ids from prior d/with’s to later ones if you mean to ultimately transact something for real#2016-02-0910:20robert-stuttafordmake tx -> d/with. query with db, make another tx (now using ids that look like ones in storage but actually just came from d/wtih) -> d/with. repeat N times. actually commit final tx to storage which includes all the intermediate txes together, after swapping out the d/with “real” ids for the tempids again, so that the final tx has all the right real and temp ids.#2016-02-0910:20robert-stuttafordi have code for this if anyone wants#2016-02-0910:25robert-stuttafordwe use it here: http://www.stuttaford.me/2016/01/15/how-cognician-uses-onyx/#2016-02-0910:25pesterhazyinteresting#2016-02-0910:26robert-stuttafordmultiple onyx tasks each doing their own work, but each building on the data of the previous one. 
only actually goes into storage at the end.#2016-02-0910:26pesterhazyin my use case, I'm not planning to actually "really" commit anything#2016-02-0910:26robert-stuttafordthey each use d/with and return tx data#2016-02-0910:26pesterhazycurious, why would you want to commit at the end?#2016-02-0910:27robert-stuttafordusing the onyx-datomic commit-bulk-tx plugin#2016-02-0910:27robert-stuttafordyou’ll see if you scan my post#2016-02-0913:26pesterhazy@robert-stuttaford: will do, thanks#2016-02-0914:50bkamphausDatomic 0.9.5350 is now available https://groups.google.com/d/msg/datomic/TIGnE3Dtjgs/PEAWEQdcEgAJ#2016-02-0915:11jgdavey@bkamphaus: Can you elaborate on this bullet:
* Improvement: connection caching behavior has been changed so that peers can
now connect to the same database served by two (or more) different
transactors.#2016-02-0915:12jgdaveyMore than one transactor can serve a single datomic database?#2016-02-0915:22marshall@jgdavey: That bullet specifically deals with peers connecting to multiple databases than originated from the same call to create-database. I.e. if you have a staging database that is restored locally (dev) from a backup of a production database on some other storage, you can now launch a single JVM peer that can connect to both the staging and the production instance.#2016-02-0915:51jgdaveyJust to make sure I’m understanding correctly: is the connection caching now based on URI and database id?#2016-02-0915:58bkamphaus@jgdavey: aspects of the connection+storage config, but caching in that respect is just an implementation detail. The contract-level from this release forward is that two different transactors, one serving a database and the other a restored copy of that database in a different storage, can be reached from the same peer.#2016-02-0916:00jgdaveyThat makes sense. Whereas before, peers wouldn’t be able to simultaneously connect to a db and a restored copy of it on another transactor.#2016-02-0916:00jgdaveyAnd/or the behavior was undefined/unsupported#2016-02-0916:00jgdaveyNot trying to beat a dead horse, just want to make sure I understand simple_smile#2016-02-0916:42kschrader@jgdavey: before if you tried to establish a second connection it would stay connected to the first DB#2016-02-0916:42kschradersilently#2016-02-0916:42kschraderassuming that I’m understanding this change correctly, this fixes that#2016-02-0916:45kschraderif you did (def prod-conn (d/connect PROD_URI))#2016-02-0916:45kschraderand then (def local-copy-of-prod (d/connect LOCAL_COPY_URI))#2016-02-0916:45kschraderin a REPL#2016-02-0916:46kschraderlocal-copy-of-prod would actually be pointing at PROD_URI#2016-02-0916:46kschraderwhich was bad#2016-02-0919:21pesterhazyyeah I'm happy that's getting fixed#2016-02-0919:34jgdaveyThank you everyone for the clarification. 
simple_smile#2016-02-1007:01timothypratleyAre there any command line tools for importing TSV files into Datomic? (Assuming an existing schema, just want to transact in new facts, ideally with a low startup time cost)#2016-02-1007:44val_waeselynck@onetom happy to share more details about how we do testing by forking connections if the blog post is not enough simple_smile#2016-02-1007:44val_waeselynckI may release a sample application or Leiningen template at some point#2016-02-1007:45onetom@val_waeselynck: that would be really great!#2016-02-1007:45onetomi tried your mock connection and it works so far#2016-02-1007:46onetomi was using this function to create an in-memory db with schema to serve as a starting point for forking in tests:
(defn new-conn
  ([] (new-conn db-uri schema))
  ([uri schema]
   (d/delete-database uri)
   (d/create-database uri)
   (let [conn (d/connect uri)]
     @(d/transact conn schema)
     conn)))
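robert-stuttaford's `d/with` pipeline described earlier (speculative transactions that build on each other, with tempids shepherded between steps, and nothing committed to storage until the end) can be sketched roughly as follows; `:user/name` and `:user/email` are made-up attributes:

```clojure
;; Rough sketch of chaining speculative transactions with d/with.
(let [tid (d/tempid :db.part/user)
      db0 (d/db conn)
      ;; first speculative tx
      r1  (d/with db0 [{:db/id tid :user/name "Ada"}])
      db1 (:db-after r1)
      ;; resolve the tempid d/with assigned, so the next tx can use it
      eid (d/resolve-tempid db1 (:tempids r1) tid)
      ;; second speculative tx, building on the first
      db2 (:db-after (d/with db1 [[:db/add eid :user/email "ada@example.com"]]))]
  ;; db2 reflects both txes without anything having touched storage;
  ;; to commit for real, transact the accumulated tx data against conn.
  (d/q '[:find ?e . :where [?e :user/email "ada@example.com"]] db2))
```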
#2016-02-1007:47onetomi guess your empty-db fn is doing something similar#2016-02-1007:49onetomhave you released this mock connection as a lib anywhere yet?
if it served you well so far, it would make sense to create a lib, no?#2016-02-1007:50onetomactually i would expect cognitect to supply such a solution out of the box if it is a really sound approach as @robert-stuttaford hinted above#2016-02-1007:50val_waeselynck@onetom: yes I'll probably roll out a lib soon, just wanted to get some criticism first#2016-02-1007:51onetomok, here is my criticism: why is it not on clojars yet!? ;D#2016-02-1007:51val_waeselynckMy next blog post will be a guided tour of our architecture, so it'll probably cover this in more detail#2016-02-1007:51onetomhappy to hear!#2016-02-1007:51val_waeselynckAnd I wouldn't be surprised if this was actually the implementation of Datomic in-memory connections simple_smile#2016-02-1007:52onetomyet it takes longer to just create/delete in-mem dbs#2016-02-1007:52onetomdo u think it's just the overhead of specifically transacting the schema?#2016-02-1007:53val_waeselynckperformance is not the biggest win IMO, being able to fork from anything is#2016-02-1007:53val_waeselynckincluding your production database, I do it all the time#2016-02-1007:53onetomthat's what i was missing from your article. you haven't established a baseline which you are comparing your solution to, so im not sure what would be the alternative approach and how much faster is it to use the mock connection#2016-02-1007:55onetomthat sounds a bit risky to work w the production fork, no?
i always work on restored backups, but our db takes only a few seconds to restore still, so that's why it's viable atm#2016-02-1007:55pesterhazy@val_waeselynck: your article is inspirational, will def try that for us as well#2016-02-1007:55val_waeselynckwhy risky? once you fork, it's basically impossible to write to the production connection#2016-02-1007:56val_waeselynck(well, granted, the risk is that you forget to fork :p)#2016-02-1007:57onetomthat's what i meant simple_smile#2016-02-1007:57val_waeselynck@pesterhazy: thank you simple_smile this encourages me to roll out a lib then#2016-02-1007:57pesterhazythat would be very useful I think#2016-02-1007:58pesterhazyjust the mock connection itself would be great as a lib#2016-02-1007:58val_waeselynck@onetom: anyway, I'm generally not too worried about accidental writes with Datomic, they're pretty easy to undo#2016-02-1007:59onetom@val_waeselynck: your test example is the most heartwarming thing i've seen in a long time
that's how i always hoped to describe integration tests and now you made it a reality by putting the dot on the I (where I = datomic simple_smile)#2016-02-1008:01pesterhazynow if someone could build a better deployment strategy for datomic on AWS with live logging, that'd be great too (I just had the prod transactor fail to come up twice, without a way to find out what the problem was; only to work the third time, for no apparent reason)#2016-02-1008:01onetom@val_waeselynck: are you using any datomic wrapper framework, like http://docs.caudate.me/adi/ or something similar?#2016-02-1008:02val_waeselynck@onetom: no, never heard of such a framework 😕#2016-02-1008:02val_waeselynckquite happy with datomic's api (except for schema definitions)#2016-02-1008:03onetomwell, that's one of the obvious areas where some framework could help#2016-02-1008:04onetombut then migrations become tricky if u have a source file representing your schema, since the DB itself is not the single place of truth anymore#2016-02-1008:05onetombut i read your article about conformity, so i will try that approach soon#2016-02-1008:10val_waeselynck@onetom @pesterhazy I gotta run but happy to discuss this further, actually it would be really nice if you could persist your main questions and critics as comments of the blog post, so others can benefit from it :)#2016-02-1008:39robert-stuttaford@pesterhazy: that logs rotate from the transactor rather than stream is problematic for me too. it’s made logs totally useless for every instance that our transactors failed in some way#2016-02-1008:41caspercSo is it just me or does the Datomic client just never return when submitting a malformed query?#2016-02-1008:42caspercLike this one:
(d/q '[:find (pull ?be [*])
       :where $ ?id
       :where
       [?be :building/building-id ?id]]
     (d/db @conn)
     2370256)#2016-02-1008:43casperc(with two :where clauses)#2016-02-1008:48casperccurrently the process is using a lot of CPU, so apparently it is doing something#2016-02-1008:58onetom@casperc: this doesn't hang for me:
(defn idents [db]
  (q '[:find ?eid ?a
       :where $
       :where
       [?eid :db/ident ?a]] db))
(->> (new-conn) db idents pprint)
#2016-02-1008:59onetombut it doesn't have a 2nd param either; let me try that#2016-02-1009:00onetomthat still works and no cpu load#2016-02-1009:01onetomim on [com.datomic/datomic-free "0.9.5344"]#2016-02-1009:02pesterhazy@robert-stuttaford: exactly. you have logs, but only the next day and only in case nothing goes wrong (which is precisely the case where you're not particularly interested in the logs)#2016-02-1009:02pesterhazyit'd be already helpful to be able to specify a logback.xml so you can set up your own logging#2016-02-1009:03robert-stuttafordyep#2016-02-1009:03robert-stuttafordwe use http://papertrailapp.com and it’d be great to use logback’s syslog appender with that#2016-02-1009:03pesterhazyI know that this is possible in principle by hacking the startup scripts, but that's way harder and hit-and-miss than any admin would like#2016-02-1009:03pesterhazywe use papertrail as well#2016-02-1009:03pesterhazyit's great#2016-02-1009:04pesterhazythe other thing the ami's are missing is the ability to pull in your own libraries (which you require when you use them in transactor fns)#2016-02-1009:31casperc@onetom: Weird. Thanks for testing it though. What are you getting as a return value?#2016-02-1010:35onetomi was getting the exact same results#2016-02-1010:37onetomor i was getting this error:
java.lang.Exception: processing rule: (q__23355 ?name ?ip ?cluster-name ?cluster-subdomain), message: processing clause: [$rrs ?subdomain ?ips], message: Cannot resolve key: $rrs, compiling:(ui/dns.clj:74:1)
#2016-02-1014:17caspercI am wondering a bit about lookup refs. It looks like they throw an exception when the external id being referenced, is not present which I think is fair. For my use case, I just want the ref to be nil (or not added). Any way to make the lookup ref optional?#2016-02-1015:14bkamphaus@pesterhazy: and @robert-stuttaford definitely understand the point around log rotation vs. streaming. Re: launch problems, we did add a section to the docs on troubleshooting common “fails with no info” issues under “Transactor fails to start” here: http://docs.datomic.com/deployment.html#troubleshooting — adding to lib/ and configuring different logging, though, definitely fall under the use case (at least at present) for configuring your own transactor machine vs. using the pre-baked AMI.#2016-02-1015:16bkamphausWe do hear and consider your feedback there, but nothing short term to promise on those topics.#2016-02-1015:30pesterhazy@bkamphaus: I realize it's a larger undertaking, not blaming you#2016-02-1015:30pesterhazyI'm probably going to build an amazon playbook to set up datomic on ec2+dynamo, that should make things a lot easier for folks#2016-02-1016:45robert-stuttafordthose docs are great, @bkamphaus ! thanks for taking note simple_smile#2016-02-1017:36val_waeselynck@casperc: in what context? Query / transaction / entity ?#2016-02-1017:52devn@pesterhazy: yes please#2016-02-1020:02akielIs there a defined order in which the pull api returns multiple results? I ask this because the entity api returns sets for cardinality-many attributes which can be compared without taking the order into consideration. But the pull api returns vectors where the order matters.#2016-02-1022:46ljosano, the order is undefined#2016-02-1106:43casperc@val_waeselynck: Ah yeah sorry, that wasn’t a given. It is in a transaction i am using the lookup ref.
I am basically doing a big data import from another database with a lot of tables that are linked by ids. For each id I am creating a ref, which I create by finding the entity that it references. The reference I am trying to resolve should be there, but in case it isn’t (due to a data error) I still want the transaction to succeed. However when using lookup refs, the transaction fails when the lookup ref doesn’t resolve to an entity.
Currently I am using a query at the peer to find the id if present, but it is not as nice a solution as just putting a lookup ref in the transaction (and it adds the problem of making sure that the peer is synchronised with all the relevant transactions).#2016-02-1106:47akiel@ljosa: thanks @bkamphaus Can you say something more here? Does the order depend on the index used? #2016-02-1107:28val_waeselynck@casperc Id use the query in a transaction fn#2016-02-1108:18casperc@val_waeselynck: Good point, that might be the way to go there.#2016-02-1114:34bkamphaus@akiel: no order is promised as when the entities are retrieved the underlying semantic is a set. Regarding implementation as vector as opposed to entity API — due to flexibility of nested pull specifications, it’s possible to get conflicts in the set which would reduce the count of elements and mislead about the number of matched entities. The tradeoff, as you recognize, is not being guaranteed correctness in equality, etc. of retrieved vectors since order is not promised.#2016-02-1115:21akiel@bkamphaus: I understand your point with nested pull specifications. I also understand that promising an order would constrain future implementation changes. But is it possible to declare the ordering as undefined but consistent within a version of datomic and either within one point in time or over all points in time?#2016-02-1115:48hjrnuneshi all#2016-02-1115:52pesterhazyhi @hjrnunes#2016-02-1115:54hjrnunesCan I add an entity like this, nesting components is shown?#2016-02-1115:55bkamphaus@akiel: I understand the case you’re making and can make a note of the request. To set expectations, though, we’re pretty conservative on making guarantees, and consistent ordering within a few caveats like within but not between versions is not likely to be one we’ll be making. At least for the short term it’s just a tradeoff when using pull.#2016-02-1115:58akiel@bkamphaus I’m fine with this. 
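val_waeselynck's suggestion above (do the lookup inside a transaction function, so a missing id just means the ref is omitted rather than the whole transaction aborting on a dangling lookup ref) might be sketched like this; `:add-optional-ref` and the attribute names in the usage comment are hypothetical:

```clojure
;; Hypothetical transaction function: assert a ref only if the target
;; entity exists, instead of letting a missing lookup ref abort the tx.
{:db/id    (d/tempid :db.part/user)
 :db/ident :add-optional-ref
 :db/fn    (d/function
            '{:lang   :clojure
              :params [db e ref-attr id-attr id]
              :code   (if-let [target (datomic.api/q '[:find ?t .
                                                       :in $ ?a ?v
                                                       :where [?t ?a ?v]]
                                                     db id-attr id)]
                        ;; target found: assert the ref
                        [[:db/add e ref-attr target]]
                        ;; target missing: emit no datoms
                        [])})}

;; Usage in a transaction (hypothetical attributes):
;; [[:add-optional-ref tmpid :order/customer :customer/external-id "C-42"]]
```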
Thanks.#2016-02-1116:06akiel@hjrnunes: I’m not sure whether you can specify nested entities. But if so, they miss the :db/id (d/tempid …) part.#2016-02-1116:08akielBut it’s alsways possible to add the entities first and reference them later by there tempid (in the same transaction)#2016-02-1116:09hjrnunesok#2016-02-1116:09hjrnunesso I’m trying to do two different things here, I guess#2016-02-1116:10hjrnunessome of the components reference existing entities, others reference new entities that are to be created#2016-02-1116:10hjrnunesI understand your point re. the second case i.e. the new ones#2016-02-1116:10hjrnunesbut what about the first case?#2016-02-1116:11hjrnunesbtw, I’m trying to use lookup refs but I tried with the actual id in long format, and still get the same error#2016-02-1116:12bkamphaus@hjrnunes: See: http://docs.datomic.com/transactions.html#nested-maps-in-transactions — I believe issue is that you need to use map form, not list form for parent entity (can’t nest the map as a value in a :db/add list form).#2016-02-1116:13hjrnunesso the entire transaction needs to be a map then?#2016-02-1116:14bkamphaus@hjrnunes: not the entire transaction, but the nested map has to be inside a map. You could specify other [:db/add …] or [:db/retract …] forms in the transaction as a whole.#2016-02-1116:15hjrnunesI see; does the map format implicitly means :db/add?#2016-02-1116:16bkamphaus@hjrnunes: Yes, internally transformed into the add form, see: http://docs.datomic.com/transactions.html#map-forms#2016-02-1116:18hjrnunesok, so I guess the right transaction would look something like this then:#2016-02-1116:20bkamphaus@hjrnunes: I would just put the :recipe/name assertion in the map as well, to be honest. As opposed to passing the same tempid twice.#2016-02-1116:20bkamphausI.e. 
if you’re just asserting the entity, one big map is typically the most readable form.#2016-02-1116:20hjrnunesyeah I suppose that’s a good idea#2016-02-1116:20hjrnunesBtw, can I use lookup refs in nested components?#2016-02-1116:22bkamphausI believe you should be able to, not sure if I’m done that specifically before but I would expect that you can.#2016-02-1116:23hjrnunesok, i’ll give it a try#2016-02-1116:23hjrnunesthank you!#2016-02-1116:45hjrnuneswell, I’m getting something more puzzling now#2016-02-1116:45hjrnunesIllegalArgumentExceptionInfo :db.error/not-a-data-function Unable to resolve data function: :db/id datomic.error/arg (error.clj:57)#2016-02-1116:46bkamphaus@hjrnunes: that error would indicate you’re using :db/id in a list form somewhere rather than map form.#2016-02-1116:47hjrnunesYeah, I thought that initially#2016-02-1116:47hjrnunesI’ll double check#2016-02-1116:48hjrnunesjust to confirm#2016-02-1116:48hjrnunesis the tx data supposed to be wrapped in a vector when it’s passed on to transact if it is a map?#2016-02-1116:49hjrnunesi.e. (transact db tx-data) or (transact db [tx-data]) assuming tx-data is a map?#2016-02-1116:51bkamphausUsing the example from: http://docs.datomic.com/transactions.html#nested-maps-in-transactions
(d/transact conn [{:db/id order-id
                   :order/lineItems [{:lineItem/product chocolate
                                      :lineItem/quantity 1}
                                     {:lineItem/product whisky
                                      :lineItem/quantity 2}]}])
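As bkamphaus notes above, map forms are internally transformed into `:db/add` list forms; a rough sketch of what the order example expands to (using the same `order-id`, `chocolate`, and `whisky` bindings as the docs snippet):

```clojure
;; Roughly what the nested map form above expands to: each nested map
;; gets a fresh tempid plus a reference from the parent entity.
(let [li1 (d/tempid :db.part/user)
      li2 (d/tempid :db.part/user)]
  [[:db/add order-id :order/lineItems li1]
   [:db/add li1 :lineItem/product chocolate]
   [:db/add li1 :lineItem/quantity 1]
   [:db/add order-id :order/lineItems li2]
   [:db/add li2 :lineItem/product whisky]
   [:db/add li2 :lineItem/quantity 2]])
```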
#2016-02-1116:54hjrnunesright, so I can’t see what the issue is then#2016-02-1116:55hjrnunesthat’s my tx map, it gets wrapped in a vec before it goes to transact#2016-02-1116:55bkamphaus@hjrnunes: and you’re sure you’re wrapping it in a vec and not converting it to one?#2016-02-1116:56hjrnunesactually, I’m doing that exactly#2016-02-1116:56hjrnuneslol#2016-02-1116:56hjrnunesthx#2016-02-1117:01hjrnunes@bkamphaus: perfect, got it working. Thank you sir!#2016-02-1118:23currentoorIs it bad to include indexes on most schema attributes?#2016-02-1118:25bkamphaus@currentoor: nope, in fact the overhead is fairly cheap, I’d probably turn :avet indexes on anything that wasn’t blobby, though we’ve improved perf. for that specific problem (large string or binary values in :avet).#2016-02-1118:26currentoor@bkamphaus: Oh ok awesome. I need to add them retroactively. Anything I need to be careful about?#2016-02-1118:30bkamphausnope, I would just review the docs on schema alteration - http://docs.datomic.com/schema.html#Schema-Alteration - to get example forms of the alteration transactions you need to submit.#2016-02-1118:35currentoorCool, thanks!#2016-02-1120:13timgilbertHey, maybe kind of a dumb question, but I've got my transactors deployed to AWS/DDB with IAM roles, and my deployed peer machines can connect to them just fine from the aws-peer-role roles, but now I can't figure out how to tell Amazon to treat my laptop as also being in that peer role during development.#2016-02-1120:14timgilbertWhat do people generally do to get this working, is there a way I can avoid setting up environment variables on every development laptop?#2016-02-1121:50settingheadwould datomic be suitable for real-time survey program? say a lot of people cast votes on items and also see real-time updates on vote counts#2016-02-1121:50settingheadmy main concern is the write throughput. 
any insight in this would be appreciated#2016-02-1121:54bkamphaus@timgilbert: for running a transactor or peer on a laptop against ddb, as it’s a dev/testing scenario only, I just have my AWS user access keys in my environment. I think to use roles outside of AWS resources you’re still stuck with a user who must assume the role.#2016-02-1121:55bkamphaus@settinghead: nothing about your use case intrinsically disqualifies Datomic. I guess the main question is what’s your best estimate of the throughput you’d need?#2016-02-1122:25currentoorIs a sorted collection of squuids the same as if they were ordered by the time they were created?#2016-02-1122:26currentoorAssuming they were created simply with (squuid)#2016-02-1203:23andrewboltachevHi. Is it possible to get-database-names for datomic:mem://{host}:{port}/* URI? I.e. is is possible for *mem* DB's? Throws an error for me, asking about * in place of db name, but it's already there.#2016-02-1204:28bkamphaus@andrewboltachev: works fine for me:
(d/get-database-names "datomic:mem://*")
;; => ("a" "b")
#2016-02-1204:29bkamphausnote that in process memory URI is just: datomic:mem://{db-name}#2016-02-1207:49danielstocktondo datomic queries always return a set? or can i example, pull out a single value for an attribute?#2016-02-1207:52danielstocktonHere for example, I'd like to avoid (first (first#2016-02-1207:53kristianuse ffirst#2016-02-1207:55danielstocktonbut is the result of a query always a vector in a set? i don't mind having to pull things out, it just feels like im missing a trick#2016-02-1207:55danielstocktonffirst is useful#2016-02-1207:55dm3you can do [:find ?v . to get a single item#2016-02-1207:56dm3or [:find [?v] to get a seq#2016-02-1207:56danielstocktonthanks dm3, that was the trick i was missing#2016-02-1207:57jthomsonsee http://docs.datomic.com/query.html#find-specifications#2016-02-1207:57danielstocktonis there a name for that ., documented somewhere?#2016-02-1207:57danielstocktona great#2016-02-1207:57dm3important to note that :find ?a . will just return the first item, even if there are more results#2016-02-1207:58danielstocktonthanks a lot!#2016-02-1214:34bkamphaus@danielstockton: it’s worth mentioning that Datomic query doesn’t return a set of tuples by default just to be arbitrary. This behavior is the means by which you can compose queries. (It’s a similar story with SQL select and result tables/sets). That said, as others mentioned, for returning an (expected) single value or one collection, etc, that’s the exact use case that find specifications exist for.#2016-02-1214:37danielstocktonYep, understood. I actually tried implementing datalog on a toy project but didnt get into all the nitty gritty and special syntax#2016-02-1214:53stuartsierra@currentoor: Not quite. sqUUIDs have leading bits making a timestamp with an accuracy of 1 second. Multiple d/squuids created within the same second will be randomly ordered. 
Also, clock on different peers may not be in sync, so the timestamps in the sqUUIDs generated on different peers may not reflect the order in which they were transacted.#2016-02-1218:04currentoor@stuartsierra: Makes sense, thanks.#2016-02-1218:18currentoorI have an id attribute on entities and this attribute is only set when the entity is created and it is never updated. If I want to sort a collection of entities by created-at times, I guess I can use the transaction time of that id attribute. Is this fine or is there a preferred way to do it?#2016-02-1218:30bkamphaus@currentoor: that’s the approach I’d probably recommend (as long as you’re sure about the guarantee that it’s set at creation time and never changed), see answer here: https://stackoverflow.com/questions/24645758/has-entities-in-datomic-metadata-like-creation-and-update-time#2016-02-1218:31currentoor@bkamphaus: sounds good, Thanks!#2016-02-1314:37akielWhy get transactor functions java collections instead of persistent Clojure collections passed? It’s annoying if I try to test with coll? inside a transactor function only to find out that it works in tests with in-memory DB but not with a real transactor.#2016-02-1410:53jimmyhi guys how do we implement sort and filter in datomic ?#2016-02-1414:06val_waeselynck@nxqd: datomic queries return sets, so you'll want to sort as a post-processing step#2016-02-1414:06val_waeselynckand what do you mean by filter ?#2016-02-1419:41bkamphaus@nxqd as @val_waeselynck says you don’t do this at the datalog level, but it’s fairly easy to get any equivalent e.g. SQL ORDER BY from Clojure by calling sort-by` on query results, e.g. (sort-by first (d/q '[:find ?ident ?doc :where [?e :db/doc ?doc][?e :db/ident ?ident]] (d/db conn))). 
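Since that sorting happens in the peer, it can be shown with plain Clojure; the `results` set below is a hypothetical stand-in for a d/q result:

```clojure
;; Hypothetical stand-in for a d/q result: a set of [ident doc] tuples.
(def results #{[:user/name "a user's name"]
               [:db/doc    "docstring attr"]
               [:user/age  "a user's age"]})

;; Ascending by the first element of each tuple:
(sort-by first results)

;; Descending, by supplying a reversed comparator:
(sort-by first #(compare %2 %1) results)
```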
Note that sort-by can be called with a comparator to get ascending or descending (or arbitrary sort) behavior.#2016-02-1421:13val_waeselynckHi all, I just released a library to mock and fork Datomic connections in memory: https://github.com/vvvvalvalval/datomock
Your feedback is welcome.#2016-02-1421:13val_waeselynck@onetom: @meow @pesterhazy ^#2016-02-1421:15hueypdo you need the same version of datomic to restore a backup?#2016-02-1421:18hueypI’m getting a :restore/no-roots when pointing to s3 (it shows owner / roots / values when I look at it) but its with a newer version than the backup (which is old) so wondering if that would cause it#2016-02-1421:28bkamphaus@hueyp: backup/restore should be compatible across versions. What do you see when you invoke list-backups? http://docs.datomic.com/backup.html#listing-backups#2016-02-1421:28bkamphausthere is a windows specific issue that manifests as what you’re seeing, also (as of yet unresolved) https://groups.google.com/forum/#!topic/datomic/O6LVi2OjvJ4 but workaround is to provide a valid t.#2016-02-1421:29hueypI get ()#2016-02-1421:30hueypI’m on amazon linux atm … trying out a restore#2016-02-1421:30hueypbut I did the backup like … november 😜#2016-02-1421:30hueypI saved a snapshot of the db too just in case so can use that I think simple_smile#2016-02-1421:31bkamphausif you use aws command line for s3 you can list the bucket contents fine, though? (i.e. it’s not a role/permissions issue from the instance?)#2016-02-1421:31hueyp[
#2016-02-1421:31hueypI’m not using ec2 credentials but have .aws/credentials set#2016-02-1421:31hueypnot sure if that would matter#2016-02-1421:32hueypI’m copying it local atm to try that out as well#2016-02-1421:33hueypits possible I was running a free version maybe ...#2016-02-1421:33bkamphaushm, for all my backup cases I’m pointed at a subfolder of the bucket (i.e. not directly at the bucket), not sure if backup dir == bucket (no reasoning apart from the fact that it’s different between configs).#2016-02-1421:33hueypbut that can’t backup to s3 so probably not?#2016-02-1421:34bkamphauseven from free version, backup is storage/version agnostic. free can’t backup to s3 I don’t believe (doesn’t contain AWS deps, etc.)#2016-02-1421:34hueypokay, so maybe having an empty prefix (or whatever, no folder) is an issue … once its done sync’ing local I can just try that out#2016-02-1421:34bkamphausonly issue you’ll have with local copy and s3 is it has to have integrity, may take multiple calls to sync etc. for a full robust copy if it’s large (an issue we see when people copy from s3 bucket to s3 bucket is files not always making it through)#2016-02-1421:35hueypgood to know!#2016-02-1421:39hueyp[
#2016-02-1421:39hueyp\o/#2016-02-1421:41bkamphausglad it shows up fine locally! should be good to go. if you try restore and get any segment missing errors will need to attempt sync/copy again to see if anything from s3 is missing.#2016-02-1421:41hueypthanks for your help!#2016-02-1421:55hueypnice, restore started (had to quote an ampersand in the uri for postgres woops)#2016-02-1503:13jimmy@val_waeselynck, @bkamphaus thanks for helping. i have done the same thing, I do sorting by using sort-by and paging by partition and take#2016-02-1504:22jimmyhi guys, can't we use a rule like this in datomic : [[(rule-name rule-arg) [(not (< rule-arg 0.5))]]#2016-02-1504:25bkamphaus@nxqd: this looks like a similar case to what was raised here: https://stackoverflow.com/questions/32164131/parameterized-and-case-insensitive-query-in-datalog-datomic — function expressions in datalog don’t nest.#2016-02-1504:27jimmy@bkamphaus: yeah it seems so. I think I will try to solve the problem on the returned data instead of from datalog for now.#2016-02-1504:30bkamphaus@nxqd: sorry, may have read too quickly. You should be able to make rules using negation and disjunction, though I haven’t put together a rule like that any time recently that I remember. Of course, in the example you provide, you can negate < with >=#2016-02-1504:32jimmy@bkamphaus: I'm not sure If I remember correctly or not that we cannot apply nested clauses in datomic ? just make sure I understand things correctly#2016-02-1504:37bkamphausyou can’t nest function calls. not, not-join, or, and or-join, though specify negation and disjunction which apply to the clauses that follow them, which is a different case. They have an inside/outside e.g. not vs. 
the default and-like behavior in datalog.#2016-02-1504:40jimmy@bkamphaus: thanks simple_smile#2016-02-1515:06val_waeselynck@bkamphaus: is it safe to use db.with() on a memory database value after the mem connection which created was released or deleted ?#2016-02-1515:15robert-stuttafordvery likely not#2016-02-1516:48bkamphaus@val_waeselynck: that’s definitely on the edges of my knowledge — have you already encountered issues doing this or are you checking prior to attempting? I’ll have to look into it, just helpful to know context of my doing so beforehand.#2016-02-1516:49val_waeselynckI had an issue where my auto-reloaded tests would complain about a database having been released, but haven't been able to reproduce#2016-02-1516:50val_waeselynckso mostly checking prior to attempting#2016-02-1516:50val_waeselyncklet me rephrase it: it's worked for me so far, but maybe it's only because I have been lucky#2016-02-1518:42bplatzDoes Datomic provide any access to its' transaction expansion? I want to fully expand a transaction (tx functions, tx maps) into the final tx that the transactor will invoke (`[ [:db/add :e :a :v :tx] ...]`) so I can do some pre-validation work before allowing a transaction to go through.#2016-02-1518:42bplatzI suppose I could try to cram all conceivable logic into a db transaction, but I'm trying to keep it out of there.#2016-02-1518:43bplatzI don't need the real-time tx, just what it would be if executed at that point in time... that is enough for me to do my pre-work.#2016-02-1518:45bkamphaus@bplatz: not at present. It is a feature that’s been requested and I can note your interest in it. 
https://groups.google.com/forum/#!searchin/datomic/transaction$20map$20form/datomic/9D6zZYkiGlw/hc06dtalyS0J — it is a fairly simple transform as noted here: http://docs.datomic.com/transactions.html#map-forms — "Each attribute, value pair becomes a :db/add list using the entity-id value associated with the :db/id key."#2016-02-1518:47bplatzTx functions are the ones I'm more concerned about. Nested entities/refs as well, but those aren't too bad.#2016-02-1518:49bkamphaus@bplatz: a transaction function called with datomic.api/invoke instead of in a transaction will return its transaction data (rather than transacting it).#2016-02-1518:55bplatzso... assuming I have [[:db.fn/retractEntity [:person/email ", I'd look at the tx for two-element vectors, then call (d/invoke db (first v) (second v))? And I assume I'd do that in first-to-last order of the transaction, assuming I need to do that for multiple functions?#2016-02-1518:56bplatzI guess it is like macro-expansion... so probably each function invocation I start the parse over from scratch.#2016-02-1519:14bplatzAnd I'm just trying to be specific to make sure I replicate what Datomic does internally, else it doesn't achieve the goal. Thanks.#2016-02-1519:37bplatzActually just realized there can be multiple params passed to a function, so scratch the first/second comment. So just wrapping up my logic, the order I'll look to implement is (1) expand all maps with :db/add, (2) look for any tx function (where first vector element is not :db/add or :db/retract) and use (d/invoke db ...). 
(3) repeat step 2 to look for more functions, stop when no addition functions are found.#2016-02-1520:13caspercWhat is the best way to implement a limit or paging functionality in Datomic?#2016-02-1520:16caspercI guess just doing paging an a query doing a (pull ) is a bad idea since it is pulling a bunch of data up front, so should I find the entities, filter/page and then pull?#2016-02-1520:21val_waeselynck@casperc yes I usually run a query to find the entities ids Im interested in, then convrt them to entities, sort, paginate, then map a function which converts them to maps (or whatever format the client needs)#2016-02-1520:22casperc@val_waeselynck: so you use the d/entity function to get the entity before paginating? Is that fast enough?#2016-02-1522:37val_waeselynckwell as we all know i cant disclose any benchmark :) but this approach has worked for me so far. I dont believe entities have a lot of overhead since theyre lazy and whatever segment they live in has been loaded in the previous query#2016-02-1523:14meowWhy can't benchmarks be disclosed#2016-02-1607:37val_waeselynck@meow such is the EULA#2016-02-1608:28jan.zyoh dear that’s true.. http://www.datomic.com/datomic-pro-edition-eula.html#2016-02-1608:28jan.zy[grep for ‘benchmarks’]#2016-02-1608:40meowWhy would they make that part of the EULA#2016-02-1608:41meowThat makes no sense to me. 
Can someone explain the reasoning behind this restriction?#2016-02-1608:45dm3performance benchmarks are rarely done correctly#2016-02-1608:45val_waeselynck@meow: @jan.zy I thought everyone would know by now, this has been brought up quite a few times simple_smile apparently this is standard for proprietary databases#2016-02-1608:45dm3and rarely can be objectively compared outside of original cases#2016-02-1608:45jan.zyo rly, I wonder what’s the reason behind that#2016-02-1608:46jan.zy(and I think that this is a good moment to start anonymous benchmarking blog 😉 )#2016-02-1608:46dm3but they have a disproportionate effect on the public image of the product which is very hard to change later#2016-02-1608:47val_waeselynckI also think it would be detrimental to Datomic's adoption, not because Datomic has objective performance issues, but because non experts would misinterpret such benchmarks, because they would still rely on assumptions that don't apply to Datomic#2016-02-1608:51val_waeselynckhaving said that, I would welcome a summary of various companies using Datomic along with their performance requirements. I don't think that would count as benchmarking, the only information disclosed being that it's fast enough for them.#2016-02-1608:56meowI would want to know that we are making the best use of Datomic#2016-02-1608:57meowI don't care about public image or non experts or whether Datomic is fast enough for someone else's use case.#2016-02-1608:57meowJust being honest.#2016-02-1608:57meowI don't like the restriction at all.#2016-02-1610:31val_waeselynck@meow: I'm not saying I approve of this restriction, just trying to explain it realistically.#2016-02-1610:33meow@val_waeselynck: I appreciate that. I am anything but realistic. I'm not too fond of reality. That's why I intend to create alternate realities. 
Thanks.#2016-02-1610:36val_waeselynck@meow: I think the best thing to do is to voice your concerns to the Cognitect team, you probably can find a thread in the mailing list on this topic#2016-02-1610:37meowI don't participate in mailing lists. I think I've voiced my concerns here. Thank you again.#2016-02-1610:37val_waeselynckI personally was not happy about it either, but now that I know it works for my use case I feel less of a need to complain simple_smile#2016-02-1610:38meowI have several concerns about committing to Datomic in the long run. This issue is one.#2016-02-1610:39meow@val_waeselynck: I truly appreciate your responses, suggestions, and support.#2016-02-1622:00augustlhey folks. Getting a critical error via h2 when starting up my free transactor. Anyone know how to recover from this? https://gist.github.com/augustl/a34a31f9df7c37fa26fd#2016-02-1623:02augustlI've been trying the procedure outlined here - using h2's own recovery tools http://infocenter.pentaho.com/help/index.jsp?topic=%2Ftroubleshooting_guide%2Ftask_h2_db_recovery_tool.html#2016-02-1623:02augustlfor some reason this process yields a database with a much smaller size than my original, and it only seems to contain really old versions of the database#2016-02-1623:05bkamphaus@augustl: have you taken any Datomic level backups?#2016-02-1623:06augustlit's a staging environment (with some semi-important data) so we don't have a backup of it unfortunately#2016-02-1623:10bkamphausfree and dev storages do pose durability risks in usage without datomic level backups. I’ll add that the ops cost of taking backups is not particularly high - it’s a matter of a cron or even a tight loop in a shell running bin/datomic restore-db to e.g. a file target. http://docs.datomic.com/backup.html#backing-up - I would recommend it for dev or staging environments if you want failure tolerance.#2016-02-1623:14bkamphausyou’re correct in looking to storage level recovery tools barring the presence of a Datomic backup. 
It may be that you can’t get very recent data from it, as you observe. I’m not sure re: H2 write consistency and expectations across failures, but in general Datomic storage level writes are acked by all storages before they’re persisted to disk just as a performance reality. The reason I’m not sure on H2 guarantees off the top of my head is it’s not typical for Datomic users to rely on it for much other than sandbox scenarios, so guarantees around durability if it e.g. crashes mid-write usually doesn’t come into play.#2016-02-1623:16bkamphausbut e.g. if you lost an entire Cassandra cluster mid-write (or sufficient nodes to account for your write consistency level spread across those nodes), you’d also corrupt the database. Or if you had replication factor of 1. It’s the storage/disk level reality of what can be guaranteed, and Datomic can’t tolerate missing data or inconsistent writes given its expectations for its data in storage.#2016-02-1701:43lboliveiraHi, I am trying write a parametrized query that returns n random elements.
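The cron-or-tight-loop backup approach mentioned above might look like the sketch below. The URIs and paths are placeholders, and `backup-db` is the subcommand documented at the backup link:

```shell
#!/bin/sh
# Sketch of a minimal Datomic backup loop. All URIs/paths here are
# placeholders for illustration; adjust to your transactor setup.
DB_URI="datomic:free://localhost:4334/my-db"
BACKUP_URI="file:/var/backups/datomic/my-db"

while true; do
  bin/datomic backup-db "$DB_URI" "$BACKUP_URI"
  sleep 300   # re-run periodically; repeat backups skip existing segments
done
```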
The following query works:
(d/q '[:find [(rand 10 ?m) ...]
       :in $
       :where [?m :cliente/name]]
     db)
but this one does not:
(d/q '[:find [(rand ?n ?m) ...]
       :in $ ?n
       :where [?m :cliente/name]]
     db 10)
ClassCastException clojure.lang.Symbol cannot be cast to java.lang.Number clojure.lang.Numbers.isPos (Numbers.java:96)
Can somebody please point out what I am doing wrong?
ty.#2016-02-1703:52jimmyhi guys can we use isComponent on enum type ?#2016-02-1708:18val_waeselynck@lboliveira: my guess is that Datalog is not dynamic about this#2016-02-1708:18val_waeselynckyou'll probably want to build the query data structure programatically as a workaround#2016-02-1710:08lboliveira@nxqd: I think so. I am using without any issues.#2016-02-1710:10jimmy@lboliveira: thanks for letting me know, I use datomic-schema, it does seem to have problem with component. I might just use plain datomic schema in the future then simple_smile#2016-02-1710:16lboliveira@nxqd: i am using datomic-schema too:
(s/schema foo
(s/fields
[type :enum [:type-a :type-b] :component]
[ref1 :ref]
[ref2 :ref]))])#2016-02-1710:16lboliveiraand this works#2016-02-1710:16lboliveirawhat is happening?#2016-02-1710:16jimmy@lboliveira: cool, thanks. my mistake#2016-02-1710:16jimmyI put :component between :enum and its values xD#2016-02-1710:17lboliveira😃#2016-02-1710:26lboliveira@val_waeselynck: =(#2016-02-1715:28kschradercan someone shed some light on how to interpret the MemoryIndexMB metric in Datomic?#2016-02-1715:28kschraderours bounces up and down like that everyday#2016-02-1715:29kschradernot sure if it’s something to worry about#2016-02-1715:29marshall@kschrader: MemoryIndexMB is the size of the ‘memory index’ - that is the part of the index with novelty that has not yet been incorporated into the disk index.#2016-02-1715:30kschraderahhh#2016-02-1715:30kschraderok, that’s not how I read it#2016-02-1715:30kschraderbut it makes sense now#2016-02-1715:30marshall@kschrader: I would expect that pattern. As the memory index size approaches the MemoryIndexThreshold, a background indexing job will incorporate that data into the disk index#2016-02-1715:30marshallso the drops you see are likely right after the completion of an index job#2016-02-1715:42marshall@kschrader: docs on the memory index are here: http://docs.datomic.com/caching.html#memory-index#2016-02-1715:44kschradergot it#2016-02-1715:45kschraderperhaps the metrics table on the monitoring page could link out to the longer definitions in the docs#2016-02-1715:45kschraderit’s not always clear how to interpret all of them#2016-02-1715:49bkamphaus@kschrader: noted, I can take a look at adding more links to the relevant sections that explain what the metrics correspond to.#2016-02-1808:46caspercWhen i want to replace an entity with a new version of the same entity, but still be able to see the old version of the entity in the history, how do I go about doing this?#2016-02-1808:48caspercAnd by replace i mean that: cardinality many entities should be replaced (not added to), and values not present in the new 
version not be present after the replace.#2016-02-1808:51caspercI was thinking to retractEntity before asserting the new in the same transaction, but I am getting a :db.error/datoms-conflict Two datoms in the same transaction conflict#2016-02-1809:04caspercI guess I could write a transaction function that finds out which fields to add and which fields to retract. Just seems like an obvious use case that might already be present.#2016-02-1809:22robert-stuttafordor you could do two transactions - db.fn/retractEntity followed by asserting all the new facts#2016-02-1810:03meowCan multiple transactions take place in the same "commit"?#2016-02-1810:05robert-stuttafordyou can put lots of data into a single transaction, but you can’t make multiple conflicting alterations to a single entity in one go#2016-02-1810:05robert-stuttaforde.g. assert a two new values for a cardinality/one attr#2016-02-1812:48casperc=> (instance? (d/entity (d/db @mem-conn) [:bygning/id 12345]) datomic.query.EntityMap)
ClassCastException [trace missing]
=> (instance? (d/entity (d/db @mem-conn) [:bygning/id 12345]) datomic.Entity)
ClassCastException [trace missing]
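For reference, `clojure.core/instance?` takes the class as its first argument; the snippets above pass it second, so Clojure tries to cast the entity to a Class, hence the ClassCastException. A plain-Clojure illustration (the Datomic line is commented out since it needs a peer):

```clojure
;; instance? is (instance? Class value); reversing the arguments is
;; what produces the ClassCastException seen above.
(instance? java.util.Map {:a 1})   ; Clojure maps implement java.util.Map
;; With Datomic on the classpath, the intended check would be:
;; (instance? datomic.query.EntityMap (d/entity (d/db @mem-conn) [:bygning/id 12345]))
```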
#2016-02-1812:49caspercHow do I test if what I have in hand is an Entity map?#2016-02-1812:52dm3you have your arguments in the wrong order#2016-02-1813:31caspercah doh, you are right.#2016-02-1914:31caspercIs it possible to transact named query rules to the database, so you can avoid adding the % in your query, but still name them in your where-clause?#2016-02-1915:17jgdavey@casperc No. All query-time stuff lives in a peer. That includes any rules you bring to the party.#2016-02-1915:19casperc@jgdavey: Ok thanks, I just figured it would be able to fetch it like it is able to with database functions. But I guess that is a choice they made to not allow that.#2016-02-1917:41chadhsCan you set a retention period for data with Datomic so it doesn't grow forever?#2016-02-1917:46bkamphaus@chadhs: not at present. That said, dropping history after a retention period is a feature that’s been requested several times and is under consideration.#2016-02-1917:47chadhsGood to know thanks!#2016-02-1918:33sdegutisWhy does missing? have special syntax like [(missing? $ ?product :product/parent)]? Considering refs cannot ever have nil, why doesn't Datomic just use the syntax [?product :product/parent nil] instead?#2016-02-1921:00val_waeselynck@sdegutis less explicit?#2016-02-1921:18iwankaramazowI have a question about generating unique id's. Let's say I have a :person-entity that needs an id I can query through :person/id.
In my schema should I just enter:
{:db/id (d/squuid)
:db/ident :person/id
:db/valueType :db.type/uuid
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity
:db.install/_attribute :db.part/db}
I am hoping whenever I transact a new Person, Datomic automatically generates that id...
Does this make any sense, is there a better way of achieving this? (I'm very new to Datomic...)#2016-02-1921:21jgdaveyIt looks like you’re conflating creating schema entities with domain entities.#2016-02-1921:22jgdaveyThe :db/id of schema entity should be #db/id [:db.part/db]#2016-02-1921:23jgdaveyOnce you have that :person/id attribute available to you, you could create a “new” person like {:db/id (d/tempid :db.part/user), :person/id (d/squuid)}#2016-02-1921:27iwankaramazowOk, thanks!#2016-02-1921:35iwankaramazowMy code works 😄#2016-02-1921:36iwankaramazowfeels like a very stupid question, now I understand the difference 😬#2016-02-2002:41pleasetrythisathomehey #C03RZMDSH!#2016-02-2002:42pleasetrythisathomejust managed to possibly royally screw my prod db#2016-02-2002:42pleasetrythisathomehad a backup server running for ages#2016-02-2002:42pleasetrythisathomeand just restored from the most recent backup#2016-02-2002:42pleasetrythisathomewhich was actually from months ago#2016-02-2002:42pleasetrythisathomeand overwrote the current db#2016-02-2002:43pleasetrythisathomeany miracles? it's my understanding that deleting a db/restoring isn't destructive of data#2016-02-2002:43pleasetrythisathomei have the rotated s3 logs if that's helpful#2016-02-2107:30lowl4tencyGuys hi all#2016-02-2107:30lowl4tency2016-02-21 06:51:13.541 WARN default org.hornetq.core.server - HQ222113: On ManagementService stop, there are 2 unexpected registered MBeans: [core.acceptor.f8f6f6db-d6fe-11e5-bf5a-3f7dbcb4ed23, core.acceptor.f8f6f6dc-d6fe-11e5-bf5a-3f7dbcb4ed23]
2016-02-21 06:51:13.567 INFO default org.hornetq.core.server - HQ221002: HornetQ Server version 2.3.17.Final (2.3.17, 123) [fa61aa6d-d6fe-11e5-bf5a-3f7dbcb4ed23] stopped
#2016-02-2107:31lowl4tencyIt's on AWS with official AMI. HA enabled.#2016-02-2107:31lowl4tencyActive transactor was restarted (terminated and started again)#2016-02-2107:34lowl4tencyHow I can investigate or get reason of that#2016-02-2118:11bkamphaus@lowl4tency: do you have logs/metrics? Look for any Alarm WARN, FAIL, or ERROR messages. Also log gaps#2016-02-2118:11bkamphausHeartbeatMsec aggregate on 1 minute, max with metrics if you have it, you can see missed or latent heartbeats that way.#2016-02-2217:44gardnervickersHey all, What is the preferred way to ignore certain EID’s in a query? I know before I run the query what EID’s I don’t want to include. What has better performance, a predicate in the query, or joining against a filtered database using d/filter#2016-02-2218:00stuartsierraTest and measure.#2016-02-2219:13gardnervickersIf anyone’s interested, I found that querying the result of (d/filter) in my specific case (where my filter predicate was filtering out certain EID’s) added a large overhead to any query against it. It is much faster to run my query on an unfiltered db and either use a query predicate or filter the query results.#2016-02-2219:13gardnervickersPM me if you’d like the repo I used, it’s quickly thrown together and specific to my use case#2016-02-2223:21grzmI'm using datomic with om.next and would like to strip the datomic portions from the results (e.g., :db-before, :db-after, :tx-data). I just want the data results. Is there a straightforward, standard way of doing this?#2016-02-2302:13lfn3@grzm: select-keys is probably what you’re after#2016-02-2309:20pesterhazy@grzm: are you talking about the results of d/transact?#2016-02-2314:50grzm@pesterhazy yup. 
I'm using dissoc and it appears to be working.#2016-02-2315:07pesterhazyWhat's the idiomatic way to check if d/entity returned an actual entity?#2016-02-2315:07pesterhazy(seq (d/entity (rdb) 999999)) works, but is that the best option?#2016-02-2315:08bkamphaus@pesterhazy: I would use query instead of entity for an existence check, see Rich’s comment here: https://groups.google.com/d/msg/datomic/jZYXqtB4ycY/sbfHvVm6P5oJ#2016-02-2315:09pesterhazyhmm#2016-02-2315:10pesterhazyseq seems like a good way to test if anything is know about an entity simple_smile#2016-02-2315:59pesterhazy@bkamphaus: thanks for the pointer though!#2016-02-2412:52akiel@pesterhazy: I currently use (d/pull db [:my-entity/id] 99999) where :my-entity/id is the external id attribute of the entity. Apart from that I find the behavior of seq on the result of d/entity rather funny to stay friendly. (d/entity db 9999) prints as {:db/id 9999} but (seq (d/entity db 9999)) returns nil where (seq {:db/id 9999}) returns ([:db/id 9999]) as expected. I always thought d/entity returns something which behaves like a map (a lazy map).#2016-02-2414:07pesterhazy@akiel, d/pull is a good suggestion#2016-02-2414:10pesterhazyEntityMap does behave a bit funny, but that's a consequence of its laziness I guess#2016-02-2414:35ampHi there, having some trouble with lookup refs in in transactions. I want to create a new entity “A” and set a reference to a existing “B” entity “b” via the attribute A/bs. I have b’s unique id (B/id) therefore I write my transaction as:
[{:db/id eid :A/id “new id”}
{:db/id eid :A/bs [:B/id “b’s id”]}]
But when executing this transaction I always get:
IllegalArgumentExceptionInfo :db.error/not-an-entity Unable to resolve entity: “b’s id” in datom [#db/id[:db.part/user -1000079] :videos/tags #uuid “b’s id”] datomic.error/arg
Are lookup refs not allowed as values in transactions, or am I using them wrong her? Thanks for any help in advance!#2016-02-2414:37marshall@amp: try using [[:B/id “b’s id”]]#2016-02-2414:38amp@marshall: :+1: thanks it worked!#2016-02-2414:39marshall@amp: Because you’re permitted to transact multiple values for a card many attrib (http://docs.datomic.com/transactions.html#cardinality-many-transactions) you have to use the extra vector wrap to indicate it’s a lookup ref, not a list of 2 things#2016-02-2414:39ampokay ... now I understand, thanks for the link#2016-02-2414:40marshallno problem#2016-02-2415:16bkamphaus@akiel and @pesterhazy if you can guarantee in your data model that some other id will be on the entity, that approach with pull is a reasonable take. Regarding entity behavior - it’s a combination of the fact that (1) it is lazy (as you mention) and (2) :db/id is not an attr/val specified by any datom (it’s just the entity position that groups the datoms). entity only returns actual attr/val pairs. I don’t really like the seq approach as it’s not particularly evident what’s going on in the code, if you’re married to it I’d change it up to something more clear like (empty? (into {} (d/entity db 999)))#2016-02-2415:19bkamphausIt would make more sense to hear about the use case in which you end up with an entity number and aren’t sure if it’s in the db or not. Assuming as-of or since or history vs. present db filtering (probably the typical case where you’d encounter it), note that if you use a query again e.g. as-of or history originally, it might make sense to pass multiple points in time and modify the original query ( http://docs.datomic.com/best-practices.html#multiple-points-in-time ).#2016-02-2415:20akiel@bkamphaus: Good point about :db/id being no attribute. It’s the same as in transaction maps. 
:db/id is just syntactic sugar.#2016-02-2415:29pesterhazy@bkamphaus: we get entity ids from untrusted source -- say, passed from APIs -- and need to check if the entity actually exists. For example, d/touch throws a NPE if the entity does not exist#2016-02-2415:33bkamphaus@pesterhazy: ah, I see. I do recommend against doing that to be honest. I would use a separate external key (generating one via squuid if necessary), and use the entity id only when retrieved in datalog or by the API (see http://docs.datomic.com/best-practices.html#unique-ids-for-external-keys )#2016-02-2415:34pesterhazyinteresting. didn't know that was recommended#2016-02-2415:35pesterhazyit's a bit tedious to have to have an explicit "primary key" attribute for everything exposed to the api#2016-02-2415:36bkamphausOne primary issue is entity id’s aren’t assignable and therefore can’t be guaranteed stable across migration or sharding strategies.#2016-02-2415:36pesterhazythat is true, we've seen that issue before#2016-02-2415:46pesterhazyyou can always use lookup refs [:my/uuid #uuid "..."] wherever you'd use the entity id otherwiese#2016-02-2420:33arohnerAre there any docs on why a backup can fail?#2016-02-2420:33bkamphaus@arohner: can you be a little more specific about what you mean by a backup failing?#2016-02-2420:33arohnerI’m running bin/datomic -Xmx4g -Xms4g backup-db datomic: file:/Users/arohner…#2016-02-2420:34arohnerCopied 5356 segments, skipped 16536 segments.
Copied 5379 segments, skipped 16536 segments.
:failed
#2016-02-2420:36arohnerI’ve done this before, successfully, on this machine#2016-02-2420:36arohnerthough not in a few weeks#2016-02-2420:38bkamphaus@arohner: anything in the backup process logs? (by default should go to log/ dir of that Datomic directory)#2016-02-2420:38arohnerah, I didn’t know that existed#2016-02-2420:38arohnercom.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: #2016-02-2420:38arohneryeah, ok, I know how to deal with that#2016-02-2420:39arohnerthanks#2016-02-2420:40bkamphausCool, glad it was something obvious! Usually failures to write to disk etc. I believe get reported, if something dies without reporting error to stdout or stderr, logs are the next place to check (for datomic processes in general, sometimes something at WARN or not typically catastrophic FAIL/ERROR/Exception level can still be responsible for failure)#2016-02-2509:11isaacWhat meaning of ‘sound data model’ in summary of Datomic?#2016-02-2523:02bkamphaus@isaac: the use of the word ‘sound’ in the high level overview doesn’t have a technically specific meaning, if that’s what you’re inquiring about. I’d say the highlights of the "sound data model" are (a) the universal schema for facts implied by datoms and the flexibility in both (b) projecting entities that match your own minimally defined schema from all present datoms and (c) considering all datoms ever recorded via immutable history. Those details are all elaborated on in “The Datomic Data Model” section from the page I assume you’re referring to: http://www.datomic.com/rationale.html#2016-02-2602:51isaac@bkamphaus: thanks for you explain. 
One more question: ‘sound’ is the Datomic’s invention OR is a concept of general in programming?#2016-02-2610:07pesterhazy@isaac, it just means "complete, solid or secure", as per https://en.wiktionary.org/wiki/sound (nothing to do with acoustics)#2016-02-2615:57chadhsi like this doc: http://docs.datomic.com/capacity.html#2016-02-2615:57chadhstrying to get a feel for how far i could get with 2 peers, transactor, and storage (aka datomic starter pro)#2016-02-2615:58chadhsi think/hope the license cost and yearly maintenance that follows could be trivial if you scaled to a point of needing pro#2016-02-2616:00dm3you can get pretty far if you're efficient with resources. The only thing you don't have is HA in case 1 transactor fails#2016-02-2616:17chadhs@dm3 that makes sense, if HA becomes a concern because real money is at stake, then pulling the trigger on a license would make sense#2016-02-2616:17chadhsthnx#2016-02-2702:31isaac@pesterhazy: thanks, I got it#2016-02-2713:53caspercSo I have this function “versions”, which returns all the versions of a given entity throughout the history of a db. Now I want to make a similar function, which takes an entity id and a pull expression and returns all the versions of that “pull”. How would I go about making this function?#2016-02-2714:13caspercThis is the current “versions” function, which I am now trying to make work based on a pull expression:#2016-02-2714:13casperc(defn versions
  "Returns all the different versions of the given entity that have existed."
  [db e]
  (let [txes (d/q '[:find ?tx
                    :in $ ?e
                    :where [?e _ _ ?tx]]
                  (d/history db) e)]
    (mapv (comp (partial into {}) d/touch #(d/entity (d/as-of db (first %)) e))
          (sort txes))))
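A hedged sketch of the variant casperc is asking about (the name `pull-versions` and its shape are mine, untested): reuse the same tx-finding query, then run the pull pattern against each as-of view. Note it still only catches txes that touched the root entity `e`, which is exactly the limitation raised later in the thread.

```clojure
;; Hypothetical, untested sketch: versions as pulls instead of touched maps.
;; Only finds txes on the root entity, not on entities reached via the pattern.
(defn pull-versions
  [db e pattern]
  (let [txes (d/q '[:find [?tx ...]
                    :in $ ?e
                    :where [?e _ _ ?tx]]
                  (d/history db) e)]
    (mapv #(d/pull (d/as-of db %) pattern e) (sort txes))))
```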
#2016-02-2714:13caspercSo any pointers would be appreciated. simple_smile#2016-02-2715:28val_waeselynck@casperc: you shoud just factor out your versions fn to return entities, not maps#2016-02-2715:29val_waeselynckthen as a post-processing step, you can either convert it to a map (using (into {} ...)) or call pull on them#2016-02-2809:30casperc@val_waeselynck: I think the problem with that implementation is that it only looks at the parent entity and not all the entities in the pull for determining the tx’es to look at.#2016-02-2809:39caspercE.g. my pull might look something like this:
[* {:building/_property-ref [*]}]
where I am pulling on the property entity and using a reverse lookup from the building entity to pull that as well. I want to have all the versions of the whole pull (meaning in this case also the tx’es that changes building entity).#2016-02-2809:41caspercMy thinking is that I need to parse the pull expression and follow the lookups and reverse lookups in order to get all the tx’es in the “tree” structure being pulled.#2016-02-2817:45val_waeselynck@casperc: 😕 not sure how I would go about this then#2016-02-2817:45val_waeselynckthe thing is, as you pull in other entities, these may add other versions of the database as well#2016-02-2817:47val_waeselynckI don't know how the pull clauses behave relative to the history db#2016-02-2817:48casperc@val_waeselynck: Yeah, I would need to follow the refs in all the “versions” of the entities, so it might end up being alot. For my usecase, mostly the refs will be constant (but obviously, I still need to write the code to take that into account)#2016-02-2817:50caspercBut if I can find the tx’es, it’s pretty much doing what you are doing by pulling once for each tx#2016-02-2817:50val_waeselynckI think I would split the pull clause into a set of paths, then programmatically build a query which gets the txs of the terminal datoms of the paths on a history db#2016-02-2817:52val_waeselynck@casperc: actually that seems pretty doable#2016-02-2817:52caspercAh interesting, you mean paths in terms of the actual entities that the pull hits through history?#2016-02-2817:53val_waeselyncknot entities, datoms#2016-02-2817:53val_waeselynckbut yeah$#2016-02-2817:53val_waeselynckwell the paths would just be lists of datalog clauses actually#2016-02-2817:55caspercI am not totally sure I understand your thinking, but I am thinking that each lookup, e.g. 
:building/_property in the pull would be a query in the history db#2016-02-2817:56val_waeselynck[:a {:b [:c]}] -> #{([?e1 :a _ ?tx]) ([?e1 :b _ ?tx]) ([?e1 :b ?e2] [?e2 :c _ ?tx])}#2016-02-2817:58val_waeselynckthen the query you build would be#2016-02-2817:58val_waeselynckoops#2016-02-2817:58val_waeselyncklet me put that in a snippet#2016-02-2818:01val_waeselynck@casperc: voila, you'd build this query programmatically and run it on the history, this should yield the correct txes#2016-02-2818:02casperc@val_waeselynck: Ha, that is quite awesome actually simple_smile#2016-02-2818:03val_waeselynckyeah simple_smile#2016-02-2818:03val_waeselynckoh and for the * clauses all you'd have to do is put an _ in attribute position#2016-02-2818:03casperc@val_waeselynck: I get the principle and I agree that an approach like this is the way to go, I think I need some time to understand it though tbh simple_smile#2016-02-2818:04caspercI don’t know 'rules-to-match’ for instance, so need to look that up simple_smile#2016-02-2818:04val_waeselynckoh that's nothing#2016-02-2818:04val_waeselynckjust a placeholder for whatever logic you use to find the root entity#2016-02-2818:05caspercoh 🐵#2016-02-2818:05caspercMakes sense simple_smile#2016-02-2818:06casperc@val_waeselynck: But again, quite awesome. Mind if I ping you for a sanity check when I have a working implementation?#2016-02-2818:06val_waeselyncksure, I'd gladly do it right now if I had the time, let me know#2016-02-2818:08caspercThanks, I appreciate it.#2016-02-2818:09caspercI am considering making a library for datomic common patterns, since I keep coming up against problems that I suspect other people are having (or have had) as well.#2016-02-2818:11val_waeselynck@casperc: yeah that would be useful#2016-02-2818:12val_waeselynckwhat's more, this could be a data library, i.e a library of common attributes definitions or transaction functions#2016-02-2818:15caspercYep for sure. 
Plenty of transaction functions are being written over and over again, I am sure. And it is a real barrier to entry in using Datomic when common use cases aren’t available as a library.#2016-02-2818:34val_waeselynck@casperc, having read the docs a bit, I think you can't easily build the set of paths directly from the pull clause because of the way pull works wrt component attributes, recursive attributes etc.#2016-02-2818:36val_waeselynckSo I think the safest course of action would be:
1. run your pull clause on the history db
2. derive the set of paths from the result
3. build the query which finds the set of txes
4. then run your pull clause on each of the databases of the txes
Given the time steps 3 and 4 will take I don't think this adds a lot of overhead#2016-02-2818:41casperc@val_waeselynck: I think you are right about component entities. I guess in principle it would be possible to query the schema to still construct the query programmatically as you originally proposed (just adding in the ones that are components from the schema).#2016-02-2818:42val_waeselynck@casperc: that could lead to an infinity of paths#2016-02-2818:42val_waeselynck(components attributes could form a cycle)#2016-02-2818:43casperc@val_waeselynck: I never considered doing the pull against the history db. Does that return all the datums that the pull matches throughout the history?#2016-02-2818:44casperc@val_waeselynck: Hmm, you may be right, but then wouldn’t the pull do that if performed normally as well?#2016-02-2818:44val_waeselynck@casperc: never tried, but I guess it's as if all your attributes had a cardinality many#2016-02-2818:45val_waeselynck@casperc: no, because datoms might form a cycle doesn't mean they do#2016-02-2818:45caspercTrue#2016-02-2818:46val_waeselynckactually, forget what I said about cycles. All I mean is that there's no limit to the length of a path that a pull clause could yield, because of components and recursive rules#2016-02-2818:48val_waeselynck@casperc: gotta go now, have fun!#2016-02-2818:48casperc@val_waeselynck: See ya, and thanks for the input simple_smile#2016-02-2820:17val_waeselynck@casperc: (seems I cannot get my head off this idea). 
Instead of deriving the paths from the results of step 1, just derive the datoms (i.e. a set of [entity attribute value] tuples) and run a query to find the txes of these datoms.#2016-02-2821:22casperc@val_waeselynck: Hehe, not complaining here, and I think it is an interesting problem as well.#2016-02-2821:25casperc@val_waeselynck: I actually tried doing a pull query against a history db and it is not supported, so unless I am missing something, I don’t think that would work.#2016-02-2821:26casperc(ffirst (d/q '[:find (pull ?e [*])
               :where [?e :building/id]]
             (d/history (d/db conn))))
IllegalStateException Can't pull from history datomic.pull/pull* (pull.clj:291)
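Since pull is rejected on a history db, one thing that does work there is plain datalog over the full five-position datoms. A hedged, untested sketch (`e` is assumed to be bound to the entity id in question):

```clojure
;; Untested sketch: a history db exposes each datom's tx and its added?
;; flag (true = assertion, false = retraction) — the raw material that the
;; per-tx as-of pulls discussed in this thread reconstruct views from.
(d/q '[:find ?a ?v ?tx ?added
       :in $ ?e
       :where [?e ?a ?v ?tx ?added]]
     (d/history (d/db conn)) e)
```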
#2016-02-2821:30casperc@val_waeselynck: My current thinking is to use the entity api to traverse the pull expression. Each entity/lookup will be traversed in all its versions, to find all the txes.#2016-02-2821:33caspercOf course implementation is still pending. I hope to have time for it tomorrow.#2016-02-2822:06val_waeselynck@casperc: aaaarg, yeah I was afraid it would not be supported. FWIW if I wanted to be exhaustive my approach would probably be to use the indexes directly to get the equivalent of this "pull history" query.#2016-02-2822:09casperc@val_waeselynck: That could be an option. One reason I want to use the entity api though, is to get the same semantics for components that pull has#2016-02-2917:02greywolvequick question, does the order matter when you use not clauses?#2016-02-2917:02achesnais@greywolve: Hey! I was going to ask that!#2016-02-2917:03greywolve😄#2016-02-2917:04achesnaisso yeah, just to make it clearer – is there any case where clause order has an influence on the result of a query (other than impacting the query time)? Particularly when using not clauses#2016-02-2917:07stuartsierraClause order never has an impact on the result of a query.#2016-02-2917:09stuartsierraFrom http://docs.datomic.com/query.html
"Datomic will attempt to push the not clause down until all necessary variables are bound, and will throw an exception if that is not possible."#2016-02-2917:13achesnaisthanks – so, I currently have in front of me a query where, depending on the placement of the not clause vs a missing? clause, the result differs 😕#2016-02-2917:14greywolvepossible bug?#2016-02-2917:15stuartsierraNot having seen the query, my first guess is that this would be caused by different variables being bound in the not clause from the missing? clause.#2016-02-2917:16achesnaisso clause order would impact how variables are bound?#2016-02-2917:17stuartsierraI don't think so.#2016-02-2917:18stuartsierraI think we would have to see a minimal test case to explain what was happening.#2016-02-2917:18achesnaissure – I'll try and build one I can share ^^#2016-02-2917:19stuartsierraIf you don't get an answer on Slack, I would suggest posting the question to the Datomic mailing list.#2016-02-2917:22achesnaisthanks!#2016-02-2917:22stuartsierraYou're welcome, sorry I didn't have an easy answer ready.#2016-02-2917:27achesnaisnp – I totally understand you can't do much without some actual code you can run on your side#2016-02-2917:27achesnaisor even look at#2016-02-2918:20achesnais@stuartsierra: ^#2016-02-2918:25stuartsierra@achesnais: One thing, datomic.peer is not a public API.#2016-02-2918:26achesnaisoh oops sorry#2016-02-2918:26achesnais🙈#2016-02-2918:27achesnaisis it okay for me to post this snippet to the datomic mailing list though?#2016-02-2918:27stuartsierraAlso the reader literal #db/id is not meant for use in Clojure code.#2016-02-2918:28stuartsierraAnd you can't transact an attribute in the same transaction in which it was defined.#2016-02-2918:33stuartsierraAfter fixing those things, though, I get the same result you reported.#2016-02-2918:34stuartsierraI admit I don't entirely understand what is happening.#2016-02-2918:34stuartsierraIt could be some subtlety related to the joining variables 
in not clauses.#2016-02-2918:34stuartsierraIt might even be a bug.#2016-02-2918:36achesnaisokay cool, glad someone else can reproduce. Sorry about the newbiness of my snippet#2016-02-2922:22caspercIf I am making a client for the rest API in Clojure (or actually a home made one, but still using EDN), what is a good way to make tempids if I don’t have a dependency on the datomic.api in the client?#2016-02-2922:26mvDatomic newbie here. Do you guys usually manage your schemas directly, or do you use a tool like datomic-schema? Writing schemas seems to involve a lot of boilerplate#2016-02-2922:35stuartsierra@casperc: Create your own datatype and render it in EDN as #db/id[:partition]#2016-02-2922:37casperc@stuartsierra: I was actually going that way just now, I am just not sure how to make the rendering work:
(defprotocol IPrintable (toString [this]))

(deftype DbId [partition id]
  IPrintable
  (toString [this]
    (str "#db/id[" (keyword partition) (when id " " id) "]")))
#2016-02-2922:39casperc=> (str (->DbId :db.part/id nil))
"#db/id[:db.part/id]"
#2016-02-2922:39casperc=> (pr-str (->DbId :db.part/id nil))
"#object[common.datomic.client.DbId 0x57a76d2f \"#db/id[:db.part/id]\"]"
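For reference, the fix casperc reports arriving at further down the thread (hooking Clojure's `print-method` multimethod) might look roughly like this; an untested sketch against the `DbId` type defined above:

```clojure
;; Untested sketch: teach pr-str/pr to emit the tagged literal for DbId by
;; extending print-method, so the REPL prints #db/id[...] instead of #object[...].
(defmethod print-method DbId [^DbId v ^java.io.Writer w]
  (.write w (str "#db/id[" (keyword (.partition v))
                 (when (.id v) (str " " (.id v)))
                 "]")))
```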
#2016-02-2922:40stuartsierraLook at the definition of pr in ClojureScript to see what protocol or multimethod it's using.#2016-02-2922:43casperc@stuartsierra: Tricky, I was looking in Clojure’s definition, but that didn’t tell me too much tbh. I’ll look in CLJS#2016-02-2922:45caspercAh, actually it ends up in the print-method multimethod, so I would probably need to add a defmethod for that simple_smile#2016-02-2922:58casperc@stuartsierra: It works, thanks for the pointer.#2016-03-0114:57akielI try to test my code even without an in-memory database. What is the reason resolve-tempid needs a database especially the :db-after from the transaction? The tempids map already contains the mapping from tempid to actual id. What does resolve-tempid on top of a simple map lookup?#2016-03-0115:13stuartsierra@akiel: It needs the database to resolve entity partition IDs.#2016-03-0115:14akiel@stuartsierra: in case I create a partition?#2016-03-0115:15stuartsierrano#2016-03-0115:15stuartsierrad/tempid doesn't return a simple number. It returns a record containing a partition ident (keyword) and a number.#2016-03-0115:15akielok I see#2016-03-0115:15stuartsierraThe partition gets resolved into a number as part of the transaction.#2016-03-0115:16akielthanks#2016-03-0117:55alexisgallagherHello all. I'm having a look at datomic to evaluate it and I have a question about getting started.#2016-03-0117:55alexisgallagherI noticed that there are day of datomic training videos ( http://www.datomic.com/part-i-what-is-datomic.html ) and a repo of day of datomic sample code ( https://github.com/Datomic/day-of-datomic ).#2016-03-0117:55alexisgallagherBut the code in the repo does not match the configuration that Stu Halloway actually sets up and encourages people to follow along with. Does anyone know if there are material online that allow you to actually follow along with Stu's configuration and demoes? 
Or are you on your own with that?#2016-03-0118:09alexisgallagherAh, I see, it's broken out over separate repos and help pages. Not integrated. Ok.#2016-03-0219:50lowl4tencyHm, guys, what's the reason for this?
[
cc @bkamphaus#2016-03-0305:24lowl4tency@bkamphaus: hi Ben, we use official AMI of Datomic for our transactors, we have a couple troubles with failover situation, for investigating it's good to have ssh, datadog and papertrail on the boxes, could we get copy of the AMI (i'm not sure, if you use Packer might be you will share the packer file)? I've found you delete ssh init script with initialization of datomic, I don't want to apply ugly hacks for the transactors simple_smile Thank you!#2016-03-0305:28lowl4tencyBtw, I'm able to install datadog-agent and configure syslog (we need it for realtime log stream) with UserData, but I prefer to have it prepared before running. Running Datomic transactors very important for our apps#2016-03-0312:25jimmyhi guys, I don't know if I understand reverse attribute navigating correct or not, I have :project/city if I query :project/_city it would return all the project has the city.
in my case it returns nil ...#2016-03-0315:39bkamphaus@nxqd if you mean inside a query, don’t use reverse ref - just swap the variables in the tuple, e.g., not [?f :person/_friend ?p], just [?p :person/friend ?f]. Reverse navigation is for pull/`entity`, not needed for datalog.#2016-03-0315:41bkamphaus@lowl4tency: are the failover situations recent? One issue, regarding streaming logging options, we’re considering this as a feature but there’s no way to hook this up with our provided AMI at present.#2016-03-0315:41lowl4tencybkamphaus: yeah, last fail of failover simple_smile#2016-03-0315:42bkamphausSome notes - we can only guarantee good shutdown behavior and metrics reporting for a well behaved self destruct. If something happens that causes the datomic process to suspend or stop suddenly, we can’t guarantee the shutdown behavior for HA failover.#2016-03-0315:42lowl4tencybkamphaus: we are using papertrail, all what we need it's or statfull name of file like datadog.log or opportunity to logback.xml with writing to syslog#2016-03-0315:43bkamphausOne sanity check you can put in place for the AMI is alarms around time window without HeartbeatMsec (ensure an active is up) or HeartMonitorMsec (ensure a standby is up).#2016-03-0315:43lowl4tencybkamphaus: can we get something like that in next version of AMI?#2016-03-0315:43lowl4tencyOr might be you can share your AMI?#2016-03-0315:44lowl4tencyI afraid we have to change it for our purposes, or we should create own 😞#2016-03-0315:44bkamphausThis just gives you an indication to manually kill/force cycle the instance, and you lose logs doing this, but it’s more consistent with the disposable cloud assets deployment model to do so. Regarding diagnosis steps:#2016-03-0315:44bkamphausWe are definitely considering doing something like, but not sure exactly what action we’ll take. 
It doesn’t provide a safe guarantee that you will know what happened/be able to diagnose.#2016-03-0315:45bkamphausAt present the instance goes dead and stops reporting metrics. If there’s a JVM process that dies without the transactor following its self destruct process.#2016-03-0315:45lowl4tencyI understood, but if we can get realtime logs it can help us to investigate and recognize what happened#2016-03-0315:45bkamphaus… this just adds one more layer of status reporting that goes silent. There’s no guarantee the alarm or metric would come through there, either.#2016-03-0315:46bkamphausBut having more options could possibly make it more likely, I just want to make sure that its understood that it’s not guaranteed to add more meaningful metrics if e.g. the JVM segfaults or something, it just dies and the machine doesn’t cycle and the logs are just another form of monitoring that will go silent without definitive answers.#2016-03-0315:47lowl4tencyThe issue are not getting alarm, the issue are getting understand why it happened. We are not able to ssh into the transactor, we are not able to grab logs from host, we can only restart instances and noone guaranties we get logs on S3 correctly#2016-03-0315:48lowl4tencyMight be we have small perfomance or it's DDB issue, or we catch something else. 
It's extrimely important#2016-03-0315:48lowl4tencyNow transactors are blackbox for us#2016-03-0315:49bkamphaus@lowl4tency: it may be for the level of investigative steps you want to take, the disposable transactor machine model with our provided AMI isn’t appropriate, and it’d make more sense to keep two transactor vms up with your own process monitor and relaunch logic, with your own configured logging, etc.#2016-03-0315:50lowl4tencybkamphaus: yeah, it sounds good, but I don't wanna to create the own AMI from scratch simple_smile#2016-03-0315:50bkamphausWe do intend to explore options for providing better logging/monitoring configuration for the AMI, so I don’t mean that as a “won’t fix” type response, it’s just that what we provide won’t be available immediately and there will always be some other degree of monitoring or logging you could do if it was your own managed machine. So the time/ops investments to do so may be worthwhile.#2016-03-0315:53lowl4tencybkamphaus: I even don't ask change your logic or processes, I'm just asking about opportunity to change your AMI or add our own apps/scripts to the AMI. Now with dropped ssh is not so simple.#2016-03-0315:53lowl4tencyI can operate by UserData btw, but it looks like ugly ugly hack#2016-03-0315:57lowl4tencyI'm realising how many work you to do for getting all staff together, just if it could be a bit flexible it might do your work a bit easier. If you just provide interfaces for customizing simple_smile#2016-03-0321:39alexisgallagherSo Datomic does not have a data type for BLOB storage. But it does support "small" byte arrays and strings. Is there any documentation or guidance on how large is too large for "small"? 10 kB? 100 kB?#2016-03-0322:48bkamphaus@alexisgallagher: it can be dependent on config - underlying storage, mem provisioning etc. 
As a rule of thumb, I’d try to stay below 100K max size (stay on the order of 10K) as a guideline.#2016-03-0322:49alexisgallagherah, ok.#2016-03-0322:49alexisgallagherreally not very big then. good to know...#2016-03-0400:32tcrayfordNo TOAST here#2016-03-0413:59isaacShould I prefer enumerated value or keyword?#2016-03-0415:34tianshuAfter a query, I listen to tx-report-queue. How can I compute the difference between old result and new result, with the tx-data that is provided in tx? (PS. without re-run the query)#2016-03-0417:58ljosaWhich version of the couchbase-client library should I be using with Datomic 0.9.5302?#2016-03-0418:10ljosaI see that 1.0.3 comes with the transactor, so I suppose that's the right one to use. The reason I'm asking is that we're having intermittent problems with connecting from Datomic peers to an otherwise well-behaved Couchbase 3.0.1-1444 cluster. The error messages look like this: 2016-03-04T16:48:40.35395 2016-03-04 11:48:40.352 WARN c.c.client.CouchbaseConnection - Closing, and reopening {QA sa=prd-useast-couchbase-config-node-01.node.us-east-1.consul/10.51.155.229:11210, #Rops=1, #Wops=0, #iq=0, topRop=Cmd: 0 Opaque: 4 Key: pod-catalog, topWop=null, toWrite=0, interested=1}, attempt 0.
2016-03-04T16:48:40.35399 2016-03-04 11:48:40.353 WARN n.s.m.p.b.BinaryMemcachedNodeImpl - Discarding partially completed op: Cmd: 0 Opaque: 4 Key: pod-catalog
2016-03-04T16:48:40.40825 2016-03-04 11:48:40.407 WARN c.c.client.CouchbaseConnection - Node exepcted to receive data is inactive. This could be due to a failure within the cluster. Will check for updated configuration. Key without a configured node is: pod-catalog.
2016-03-04T16:48:42.35671 2016-03-04 11:48:42.356 WARN n.s.memcached.auth.AuthThreadMonitor - Incomplete authentication interrupted for node {QA sa=prd-useast-couchbase-config-node-01.node.us-east-1.consul/10.51.155.229:11210, #Rops=0, #Wops=2, #iq=0, topRop=null, topWop=Cmd: 0 Opaque: 5 Key: pod-catalog, toWrite=0, interested=8}
2016-03-04T16:48:42.35758 2016-03-04 11:48:42.357 WARN net.spy.memcached.auth.AuthThread - Authentication failed to prd-useast-couchbase-config-node-01.node.us-east-1.consul/10.51.155.229:11210#2016-03-0418:51alexisgallagher@tcrayford: thanks for the pointer re: TOAST. Didn't know this was what I was wondering about.#2016-03-0421:37weiwhat’s a good way to a keep global config value in datomic? i.e. a property that only has one value#2016-03-0421:38alexisgallagherWhy not just define an attribute to name that config value and define its type, and then have a single entity with that attribute?#2016-03-0421:38weicould you give me an example definition? still wrapping my head around the syntax#2016-03-0421:41weioh, I think I get what you’re saying. the lookup for that value would be annoying though, right?#2016-03-0421:41alexisgallagherI'm still in the head-wrapping stage as well, so you should take my thoughts with a grain of salt.#2016-03-0421:42alexisgallagherBut..., basically there are two entities you want to create, I think.#2016-03-0421:42alexisgallagher1. The schema entity which defines a new kind of attribute.. Let's say it's name (it's "ident") is :mysystem/config.#2016-03-0421:43alexisgallagherYou only need to create this one the database initialization stage, as part of the usual schema definition.#2016-03-0421:43alexisgallagherThen, there's also,#2016-03-0421:43alexisgallagher2. An entity which has no purpose except to have this attribute.#2016-03-0421:43alexisgallagherWhen you change the system configuration, you would add/retract assertions about this second entity.#2016-03-0421:44alexisgallagherI don't see why lookup would be any more or less annoying than lookup elsewhere in datomic.#2016-03-0421:45alexisgallagher(first (d/q '[:find ?config-value :where [?e :mysystem/config ?config-value]] db)) might be right.. 
not sure off the top of my head.#2016-03-0421:45alexisgallagherI'm just grabbing the first query result since I assume that in your application logic you enforce that there is only ever one entity holding this value.#2016-03-0421:46alexisgallagherYou could even give the entity an alias, and refer to it as :mysystem, if you wanted to, and then maybe you could get easier lookup by using (d/entity :mysystem). Not sure.#2016-03-0421:48jgdaveyFor single-value-returning queries the . is probably more idiomatic: (d/q '[:find ?config-value . :where [?e :mysystem/config ?config-value]] db)#2016-03-0421:53alexisgallagher@jgdavey: what is the . formally in the query grammar? it's news to me.#2016-03-0421:53jgdaveyfind-scalar#2016-03-0421:54alexisgallagherAh, thanks.#2016-03-0421:55alexisgallagherSo datomic-free is directly available via maven. Is there also a way to download the datomic free transactor directly for non-interactive deployment, or is the only way to click-thru the EULA on the website?#2016-03-0422:24wei@alexisgallagher: alias is a good idea, thanks#2016-03-0611:58lucasbradstreetDoes pull-many employ any batching retrieval from storage. I'm thinking about giving Urania (https://www.niwi.nz/2016/03/05/fetching-and-aggregating-remote-data-with-urania/) a play and thought auto batching pulls might be a fun experiment #2016-03-0612:00lucasbradstreetI'm considering adding a new feature to #onyx to allow users to supply a batch-fn which operates on a full onyx batch, rather than just a single segment at a time. Urania/haxl style auto batching would make this pretty powerful. #2016-03-0620:24caspercSo, regarding database functions. Is it possible to have multi arity functions and optional arguments?#2016-03-0704:19isaacthere is two problem confuse me about Datomic:
1. Should I prefer an enumerated value (db.type/ref) or db.type/keyword?
2. I have an entity (a user) which was created before t.
Will this user be recycled when I call (gc-storage conn t)
if this user never updated after created?#2016-03-0704:20bkamphausHi @isaac - see my reply here: https://groups.google.com/d/msg/datomic/m2aVI9IY9Ms/o7CI0AnMAAAJ#2016-03-0704:21isaac@bkamphaus: there is no reply#2016-03-0704:23bkamphaus@isaac: there definitely is one, but Google Groups always had bad delays for update/refresh - it may take a couple of minutes to show up in your view. File under “eventually consistent” I guess.#2016-03-0704:26isaac@bkamphaus: I seen your reply. thanks#2016-03-0704:59weiI’m trying to parameterize the arguments to a find specification#2016-03-0704:59wei(defn sample-n [db n]
  (d/q '[:find (sample ?n ?a) .
         :in $ ?n
         :where [?a :account/uuid]]
       db n))#2016-03-0704:59weibut I’m getting an Exception: clojure.lang.Symbol cannot be cast to java.lang.Number#2016-03-0704:59weiwhat’s the right way to pass in n?#2016-03-0705:06hiredmanyou could stick the number in the quoted form, which is how I've seen it done in the examples#2016-03-0705:07hiredman`[:find (sample ~n ?a) ...]#2016-03-0705:08hiredmanbut my experience with datomic is limited, so I am not sure if you'll run into issues using syntax-quote there#2016-03-0705:10weilike this? (defn sample-n [db n]
  (d/q `[:find (sample ~n ?a) .
         :in $
         :where [?a :account/uuid]]
       db))#2016-03-0705:11hiredmanyeah#2016-03-0705:12hiredmanare you sure you want the find scalar dot?#2016-03-0705:12weiThat results in :db.error/invalid-data-source Nil or missing data source. Did you forget to pass a database argument? when called#2016-03-0705:12hiredmanI don't think that makes sense with sample#2016-03-0705:12weiwhich is surprising for me#2016-03-0705:13weiwithout the dot it returns the sample with another layer of nesting#2016-03-0705:13hiredmanoh, no kidding#2016-03-0705:15hiredmanyou could try building the expression using regular quote, in case the namespace qualifying that syntax quote does is screwing it up#2016-03-0705:16hiredman[:find (list 'sample n '?a) '. :in '$ :where ['?a :account/uuid]]
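hiredman's regular-quote idea wrapped back into wei's function might look like this (hedged, untested; the point is only that building the query as plain data splices `n` in as a number rather than a namespace-qualified symbol, which is what syntax-quote produced):

```clojure
;; Untested sketch of the regular-quote approach: quote the symbols
;; individually so n is evaluated, then pass the whole vector to d/q.
(defn sample-n [db n]
  (d/q [:find (list 'sample n '?a)
        :in '$
        :where ['?a :account/uuid]]
       db))
```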
#2016-03-0705:31weiinteresting thing is, this seems to work in pull expressions:#2016-03-0705:31wei(defn all-products [db]
  (d/q '[:find [(pull ?e product-props) ...]
         :in $ product-props
         :where [?e :product/slug]]
       db product-props))#2016-03-0705:31weinot sure what’s different about the two#2016-03-0705:52hiredmanyou used ?n for the other one#2016-03-0705:52hiredmannot n#2016-03-0714:18marshall@wei: This might be what you’re shooting for:
(defn sample-query [x]
  (let [my-sample (list 'sample x '?m)]
    {:find [my-sample]
     :in '[$]
     :where '[[?m :track/name]]}))

(d/q (sample-query 3) db)
#2016-03-0815:28actsasgeekI’m using q to combine the results from 4 queries. In this query, ?a is a count and I’d basically like to do an outer join against it (default of 0). get-else only works with $ and datomic schemas, I feel like an or might be involved but I can’t quite figure it out. Suggestions?
(d/q '[:find ?u1 ?a ?b ?c
       :in [[?u1 _]] [[?u2 ?b]] [[?u3 ?c]] [[?u4 ?a]]
       :where
       [(= ?u1 ?u2)]
       [(= ?u2 ?u3)]
       [(= ?u3 ?u4)]]
     data data-b data-c data-d)
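One way to read the "data sources instead of relation binding" suggestion that follows in the thread (hedged, untested; the source names `$d1`..`$d4` are mine): pass each relation as its own data source and join through tuple clauses. This is still an inner join, so the default-of-0 outer-join behaviour would need `get-else` or post-processing.

```clojure
;; Untested sketch: each relation is a separate data source rather than a
;; destructured :in binding; clauses pattern-match tuples in each source.
(d/q '[:find ?u ?a ?b ?c
       :in $d1 $d2 $d3 $d4
       :where
       [$d1 ?u _]
       [$d4 ?u ?a]
       [$d2 ?u ?b]
       [$d3 ?u ?c]]
     data data-d data-b data-c)
```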
#2016-03-0920:15val_waeselynck@actsasgeek use data sources instead of relation binding#2016-03-0920:20actsasgeekThanks for the pointer but I’m not sure what you mean. These four relations are the results of previous, simpler queries which I’m trying to tie back together. The only thing I did notice was that I couldn’t use things like or-join without using $. I actually ended up solving the problem a different way by taking the aggregation out of datalog.#2016-03-0920:25actsasgeekoh, cool. I didn’t realize that was possible. Thanks.#2016-03-1012:18greywolveassuming i have a database value which has a certain entity id present within it, and i then use (as-of db some-point-in-time-before-that-entity-id exists) , and I do (d/with ...) on that as-of db, with a tx like [:db/add entity-id some-attr some-value], why does that succeed still? surely it should blow up with an invalid entity-id , since it technically doesn't exist at that point in time?#2016-03-1012:52lmergenman, it's so tempting to start using Datomic for a project I'm working on right now, because I saw the video about resource ownership/authorization, and it just felt so elegant to solve it that way (transform the database object into a new database that only contains objects that are "owner" by the current user)#2016-03-1012:52lmergenbut I feel like it would introduce so much complexity at the same time#2016-03-1013:16val_waeselynck@greywolve: weird indeed, are you positive about the entity id not existing before that ?#2016-03-1013:17val_waeselynck@lmergen: what complexity are you worried about ?#2016-03-1013:17lmergenoperational complexity#2016-03-1013:17lmergeni see that i need to maintain & operate several services#2016-03-1013:17val_waeselynckah yes#2016-03-1013:17lmergenright now we do $everything in AWS and the ecosystem it provides#2016-03-1013:17val_waeselynckbiggest impediment is the limit on the number of processes IMO#2016-03-1013:18val_waeselynckI do deploy it on AWS, and it's OK 
operations-wise#2016-03-1013:18val_waeselynckand we're only a 2-devs startup#2016-03-1013:18val_waeselynckbut you do have to plan and accomodate for it#2016-03-1013:18lmergenyes#2016-03-1013:19lmergenbut it seriously goes against my gut feeling right now... it's a beautiful thing, but it would be awesome if there would be some datomic-as-a-service with AWS or so#2016-03-1013:19greywolveval_waeselynck: yeah i set the entity id to about 2 years back, so pretty sure 😛 when i restore the entire db back to a point in time just before that entity existed, then d/with correctly blows up when i try that tx#2016-03-1013:19lmergenthen I would be using it without even thinking about it#2016-03-1013:19val_waeselynck@lmergen: amen to that#2016-03-1013:19robert-stuttafordapps + 2 transactors + ddb#2016-03-1013:19lmergenalso, i wonder how well Datomic scales on a write level#2016-03-1013:19robert-stuttafordit’s not that big of an issue. there are details, but it’s not rocket surgery simple_smile#2016-03-1013:20val_waeselynck@lmergen: official stance on this is that your write volume must be consistent with the upper limit on your database size (10 billion datoms)#2016-03-1013:20lmergenoh then that's not going to work anyway... 
shite#2016-03-1013:22val_waeselynck@lmergen: having said that, it can be interesting to use a hybrid architecture with some NoSQL database for collecting most of your data and Datomic to post-process it#2016-03-1013:22lmergenwel#2016-03-1013:22lmergenthe thing is#2016-03-1013:22lmergenwe have a huge data warehouse (think 10 billion records / month)#2016-03-1013:22lmergenand we have some relational database#2016-03-1013:23val_waeselynckso a lot simple_smile#2016-03-1013:23lmergenand I'm trying to find a solution to query our data warehouse, mix & match it with our relational database, and not having to worry about the slow query time of our data warehouse#2016-03-1013:23lmergenin other words, i want magic!#2016-03-1013:23lmergenand it needs to scale up to the moon too simple_smile#2016-03-1013:24robert-stuttafordworrying about running two transactors is the least of your problems, then simple_smile#2016-03-1013:24lmergenyeah, I'm trying to explore whether Datomic is a solution for this#2016-03-1013:24lmergensomeone suggested me into this direction, to "pre-warm" the pipelines with our data warehouse data...#2016-03-1013:25lmergenbut I don't think it will work#2016-03-1013:28val_waeselynckI'm not an expert in this kind of deployment 😕 maybe you shoud just contact the guys at Cognitect#2016-03-1013:29lmergenyeah I'm already in contact with one of their sales people, but I think what I'm after is actually some consultancy on this matter.. hmm#2016-03-1013:30lmergenwell if anyone is reading this, and thinks they can help us with this, send me a pm#2016-03-1013:47stuartsierraI think Datomic aims to fulfill the role of a traditional relational database — with better scaling characteristics than most SQL's — more than the role of a "big data" solution.#2016-03-1014:06lmergenyeah#2016-03-1014:49jonahbenton@lmergen yeah, Datomic is not a solution to this problem, it plays in the oltp space. 
Likely in your situation your sql database will need to be able to utilize pre-calculated historical aggregations done on warehouse data, in the appropriate shapes, so that you can use sql across both datasets. In the warehouse/sql world this is an ETL problem, one needs to orchestrate pushes from the warehouse into some version (slave/standby/replica) of the oltp system. This usually is very messy and complicated. What's appealing about Datomic from an architectural perspective is being able to look at this problem from a pull perspective, with layers of declaratively-defined caches#2016-03-1014:50lmergenyep, this is exactly the path we're going to be taking#2016-03-1014:51lmergenas in, I don't think it's possible to do this in realtime, we simply need to schedule periodical jobs#2016-03-1014:52jonahbentonyup#2016-03-1015:11dm3people are doing all sorts of fancy stuff to make results appear faster, e.g. https://www.mapr.com/developercentral/lambda-architecture, but all of this brings huge amounts of complexity#2016-03-1017:22bkamphaus@greywolve: branching off of an as-of point with with isn’t supported.#2016-03-1017:31bkamphauswith works against the db without the filter. as-of dbs will filter out prospective transactions from with when queried against. At present, prospective branching from points in the past isn’t supported.#2016-03-1018:31greywolvebkamphaus: thanks for the clarification simple_smile#2016-03-1021:29arthur.boyerHi all, I’ve been examining the schema of a Datomic database, using this kind of query:
```clj
(d/q '[:find ?attr
       :where
       [_ :db.install/attribute ?e]
       [?e :db/ident ?attr]]
     db)
```
But I get retracted attributes as well.
Does anyone know how to filter out retracted attributes?#2016-03-1021:38hiredmanwhat makes you think you are getting back retracted attributes?#2016-03-1021:47arthur.boyerI’m getting back attributes like :customer-account-state/customer-username_retracted5632a067-4048-4186-8610-8e5286596ebe
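One peer-side way to drop these renamed attributes from a schema listing (a sketch, not from the log; it assumes the `_retracted<uuid>` naming convention shown above and that the schema query returns keyword idents):

```clj
;; Hypothetical helper: filter out idents renamed with the "_retracted<uuid>"
;; deprecation convention before displaying the schema.
(defn active-attrs [attrs]
  (remove #(re-find #"_retracted" (name %)) attrs))
```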
#2016-03-1021:48arthur.boyerI’ve inherited this code, with no handover, so there’s a possibility that’s this is some non standard craziness and I just haven’t found the place where it comes from.#2016-03-1021:56bkamphaus@arthur.boyer: You can’t retract attributes. That looks like renaming (with a retracted+sha convention) only, and renaming is a typical way to deprecate an attribute. In general, retracted values will only appear in history databases, and you have to bind the fifth position of the datom (the ?added) portion to see if it was an add or retract op.#2016-03-1021:57arthur.boyerOk that makes sense. I’ve run queries binding the fifth position and they came back true. I’ll do some more digging and see if there’s a way I can clean them up. Thanks.#2016-03-1021:58bkamphausif you don’t want those values returned by a schema check, I guess a regex against string rep of the ident would be fine (assuming those are only cases of “retracted”)#2016-03-1021:59arthur.boyerThat’s what I’ve been doing, but it feels like a nasty hack, and not what I think idiomatic datomic should look like.#2016-03-1022:01arthur.boyerSo, what do you do if you do want to retract an attribute? Do you retract all the datoms that use it?#2016-03-1022:02ethangracerhey all, I have a question on the pull syntax bnf grammar (copied from the pull api site):
```
pattern         = [attr-spec+]
attr-spec       = attr-name | wildcard | map-spec | attr-expr
attr-name       = an edn keyword that names an attr
wildcard        = "*" or '*'
map-spec        = { ((attr-name | limit-expr) (pattern | recursion-limit))+ }
attr-expr       = limit-expr | default-expr
limit-expr      = [("limit" | 'limit') attr-name (positive-number | nil)]
default-expr    = [("default" | 'default') attr-name any-value]
recursion-limit = positive-number | '...'
```
I just tried putting a default-expr in as the first item in a map-spec and it worked. so i’m thinking the doc on the site is incomplete? or am I missing something?#2016-03-1022:03bkamphausa more common pattern is to migrate data to a new Datomic database (by i.e. replaying the log with filter/transform) with a finalized schema. That works if you want to drop a lot of the initial modeling learning process, say. But I think the possibility of retracting attributes could introduce more operational complexity than keeping them and removing them from queries.#2016-03-1022:04bkamphaus@ethangracer: do you have a repro or just a code example of the specific expression that works and violates the grammar? I can take the repro case to the team and see whether the grammar or the behavior is correct.#2016-03-1022:06arthur.boyer@bkamphaus: Thanks for that, I think the practical upshot now is that I can safely ignore things named retracted and we can consider a more radical solution later.#2016-03-1022:13ethangracer@bkamphaus: sure, for this sample the idea is that I want to load todos from the server. but maybe I haven't sent any todos to the server yet. In which case I want to return an empty vector (to distinguish between loaded / not loaded from the server). So the bnf compatible query is
(d/pull db [{:todos [:id :title :is-completed]}] __id-for-todo-list__)
which returns nil if that todo-list has no todos. If instead, I write:
(d/pull db [{(default :todos []) [:id :title :is-completed]}] __id-for-todo-list__)
then the pull returns [], even though this is not documented by the bnf#2016-03-1022:14bkamphaus@ethangracer: thanks, I’ll investigate and get back to you.#2016-03-1022:15ethangracer@bkamphaus: sounds good, thanks#2016-03-1109:09greywolve@bkamphaus: is it possible that a peer will remain up, but have its notification queue down for quite a while ? (15-30 mins) or possibly never recover?#2016-03-1116:15lucasbradstreet Are peer licenses per JVM or are they per host?#2016-03-1116:30ckarlsenper JVM in my experience#2016-03-1116:41marshall@lucasbradstreet: ckarlsen is correct - the license is intended to be per JVM#2016-03-1117:08lucasbradstreetK, good to know. Thanks!#2016-03-1117:52therabidbananaIs there a way for db.cardinality/many attributes to preserve order, or is it exclusively for sets?#2016-03-1118:09bkamphaus@therabidbanana: exclusively sets, you’d have to provide your own data modeling for order (i.e. next for lists, enumerated order attrs, etc.), or use something sortable and wrap query results or datoms calls with a sort-by.#2016-03-1118:53alexmillerhe's too modest to mention it here, but @bkamphaus did a great writeup about AlphaGo if you're interested in what Ben does besides supporting Datomic. :) http://benkampha.us/2016-03-11.html#2016-03-1118:59bkamphaus<— known AI sympathizer.#2016-03-1119:04alexmilleraka "human battery for the machines"#2016-03-1119:06bkamphausI’m waiting for the GCU Prosthetic Conscience to come by and displace me on board. Any day now.#2016-03-1119:06arohnertrivia: the original script of the matrix had the humans as CPU hardware, not batteries, which is why Neo could hack the system. 
Later revisions changed them to batteries, to dumb it down#2016-03-1119:09alexmillertrivia OR HISTORY#2016-03-1119:10bkamphaus@alexmiller @arohner that’s enough, take this to the #C0533TY12 room.#2016-03-1119:10alexmillerhah#2016-03-1208:17greywolvejust experienced something odd with datomic in production, there was a stream of "transactor unavailable" exceptions in the log, and when i manually repled into that peer, and tried something like (d/sync conn) , i got the same error. we've seen this before ,and it never recovers#2016-03-1208:17greywolvehave any of you experienced this before?#2016-03-1208:26greywolvewe basically have to restart the jvm#2016-03-1208:26greywolvesimilar to this: http://datomic.narkive.com/ho4GcCOS/recover-from-db-error-transactor-unavailable-exception#2016-03-1211:36bkamphaus@greywolve: do you have metrics/monitoring (or logs you can grep)? One case where this can happen is with extremely large transaction sizes (1MB+).#2016-03-1211:59greywolvewe have the transactor logs, and it usually begins with this:#2016-03-1212:00greywolve3-5 of those , and then everything goes to hell later#2016-03-1212:00greywolveour txes are quite small, and we weren't under load when this happened#2016-03-1212:01greywolveit's happened a couple of times now, next time we'll have some flight recorder metrics too#2016-03-1212:01greywolveis there anything i can check the transactor for?, that's the only thing we have in the peer logs#2016-03-1212:04greywolveand connection destroyed follows the above:#2016-03-1212:05greywolveand after that the transactor is never available again#2016-03-1212:05greywolvethis is our onyx cluster, we have other peers up on our regular servers, and they seem fine#2016-03-1212:06greywolvewe haven't run into this issue there#2016-03-1212:07greywolve(also the transactor metrics look perfectly fine throughout this ordeal)#2016-03-1212:08bkamphausfunction metric-grep () {
cat *.log | perl -n -e 'print "$1 $2\n" if /^(.*) INFO .* '"$1"' {.*?'"$2"' ([0-9]+).*?}/' | less
}
#2016-03-1212:09bkamphausmetric-grep :TransactionBytes :hi
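For what it's worth, the same extraction can be sketched in Clojure (a hypothetical helper mirroring the perl regex above; it assumes the plain-text INFO metric lines the shell function greps for):

```clj
;; Pull numeric values for a metric/stat pair (e.g. ":TransactionBytes" "hi")
;; out of a seq of transactor log lines.
(defn metric-values [lines metric stat]
  (keep (fn [line]
          (when-let [[_ v] (re-find (re-pattern (str metric " \\{.*?" stat " (\\d+)")) line)]
            (Long/parseLong v)))
        lines))
```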
#2016-03-1212:09bkamphausor metrics (max over one minute), just to double check, what’s the largest transaction size?#2016-03-1212:12greywolvedatomic.transaction_bytes ?#2016-03-1212:13greywolve0.41k is the highest during that period#2016-03-1212:13greywolvehighest over the past day is 12.03k#2016-03-1212:16greywolvetrouble started around ~8:00am#2016-03-1212:16greywolvewe had to restart at ~10:00am#2016-03-1212:19bkamphausOk, transaction size unlikely to be the issue then. Hmm, I’m not familiar enough with what Onyx is doing to reason about it much further difference wise yet. Have you done the basics lein deps :tree check for any dependency conflicts, etc.?#2016-03-1213:07greywolvebkamphaus: onyx isn't really doing any more than reading from the log api (polling it), and using datomic's transact, that's about it, nothing fancy. i'll check the deps though to be safe simple_smile#2016-03-1214:04bkamphausIf there's a final tx from the transactor logs, it will be logged with a uuid - you can use that against the log API with tx-range to figure out which final transaction the peer made before failing. It's a key in the nested data structure, not something you can look up directly, and you need a reasonable t/tx/inst bound for the tx-range.#2016-03-1214:05bkamphaus^ @greywolve #2016-03-1214:05bkamphausOn phone now, I can pull up a code example when I get back to a keyboard :)#2016-03-1214:07greywolvebkamphaus: awesome, thanks! 
that's a good idea simple_smile#2016-03-1214:08greywolvebkamphaus: code example would be welcome if you can#2016-03-1214:36bkamphaus@greywolve https://gist.github.com/benkamphaus/7eaa6484a254a14f8f1f just pulled this out of another project and slightly refactored without testing in isolation (will test it and fix any typos if I get a chance later), so you may have to make a minor correction or two.#2016-03-1214:43greywolvethanks so much simple_smile#2016-03-1302:48codonnellI have a reference to an enum and I'm struggling to get the :db/ident value of the enum inside a pull. Is it possible to do this, or should I just stick to the entity api?#2016-03-1313:15greywolvecodonnell: are you trying to get the ident for the attr's value instead of a map with {:db/ident .. :db/id ... } etc ? if so, i don't think that's possible#2016-03-1314:17bkamphausThat ident behavior is the default behavior for the entity API. There's about equal preference in the community, and there is a corresponding split between the philosophies in pull and entity and how they return facts about entities.#2016-03-1316:09codonnell@bkamphaus: I'm getting an error when trying to restore the mbrainz database. After downloading it and untarring, I run bin/datomic restore-db file:datomic-mbrainz-backup-20130611 datomic: but get the error clojure.lang.ExceptionInfo: :restore/no-roots No restore points available at file:datomic-mbrainz-backup-20130611 {:uri "file:datomic-mbrainz-backup-20130611", :db/error :restore/no-roots}. Any idea what went wrong?#2016-03-1317:50bkamphausTry the absolute path. A lot of people had size issues with the size of that db so the examples, etc. are all now tailored for a subset (info for it documented here: https://github.com/Datomic/mbrainz-sample ). 
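The gist above has the full helper; the heart of the log-API approach described earlier (tx-range bounded by a reasonable t/tx/inst range) is roughly this (a sketch; `conn`, `t-start`, and `t-end` are assumed to exist):

```clj
;; Walk transactions in the log between two bounds and summarize each:
;; tx-range yields maps with :t (basis t) and :data (the transaction's datoms).
(let [log (d/log conn)]
  (for [{:keys [t data]} (d/tx-range log t-start t-end)]
    [t (count data)]))
```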
There’s also an outstanding bug on windows where it will fail to restore unless you manually provide the t value as an argument.#2016-03-1317:50bkamphausyou can look at the roots/ subdir of a backup to find any of those t values you could restore to if you are on windows and that’s the issue.#2016-03-1414:36tianshuHow to sort and limit result in datomic query? I saw seek-datoms, but it seems that can only query one attribute?#2016-03-1415:22caspercI have this database function where I require a namespace in my own project, using the requires syntax. This is working when using the in memory database, but not in one that is backed by a Cassandra base.#2016-03-1415:22caspercDo I need to put the namespace on the transactor classpath somehow to make it work?#2016-03-1415:23bkamphaus@casperc: yep, see the second paragraph for create a function here: http://docs.datomic.com/database-functions.html#create-a-function#2016-03-1415:25casperc@bkamphaus: I have read it and I don’t think it explains the issue very well. It just says to use require or imports, but not how to get that that code loaded in the transactor#2016-03-1415:25caspercCan you elaborate what should be done?#2016-03-1415:27bkamphausjar in the lib subdirectory of the Datomic distribution, i.e. datomic-pro-0.9.5350/lib - I can re-read and see if we need to add it.#2016-03-1415:29casperc@bkamphaus: Ok thanks. I guess it makes it a bit harder to use. I think it makes more sense to define the functions in the scope of the code block then (even though that is a bit ugly)#2016-03-1415:30caspercAnd regarding the docs, IMO the documentation is quite weak for database functions, so I think it could use a brush off with some more examples, e.g. with a :require in there.#2016-03-1415:31caspercThanks for the swift answer though simple_smile#2016-03-1415:31bkamphausI can take a look at improving the dbfn docs. 
I will say, that it’s not a path that we want to be too encouraging simple_smile but maybe we can document that to some extent as well.#2016-03-1415:32bkamphausthere aren’t a lot of use cases for dbfns outside of a transaction function, and transaction function uses should be really limited — i.e. when you need to guarantee ACID isolation and can’t do so with a builtin like cas and an optimistic concurrency strategy.#2016-03-1415:37casperc@bkamphaus: Well maybe there is a better way to do what I am doing then. simple_smile I have made a replace-entity function, which takes an entity id and a new “version” of the entity which should replace the old. The function makes retractions for the fields that are not present in the new version and updates the fields that are still present - cardinality many attrs are retracted and new values are inserted.#2016-03-1415:39casperc@bkamphaus: So it is a CAS-like functionality. Actually I would have expected it to be a built-in, since would seem like a fairly common use case in my mind.#2016-03-1415:43bkamphausI would say what you’re doing matches the transaction function use case fine. When you say “CAS-like” do you mean it should fail on update from time of submission, or is the transaction written so that it should ignore any updates made (i.e. implicit retract) between transaction submission and write time?#2016-03-1415:49casperc@bkamphaus: Well actually no, it’s not really cas-like come to think of it. 
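The replace-entity transaction function described above might look something like this (an illustrative sketch only, not casperc's actual code; it enumerates current datoms with d/datoms so cardinality-many values are retracted individually):

```clj
;; Hypothetical "overwrite" transaction function: retract datoms for attributes
;; absent from the new version, then assert the new map. Because it runs on the
;; transactor, the read of the current entity and the writes are atomic.
(def replace-entity
  #db/fn {:lang "clojure"
          :requires [[datomic.api :as d]]
          :params [db eid new-m]
          :code (let [keep? (set (keys new-m))]
                  (concat
                   (for [dat (d/datoms db :eavt eid)
                         :let [attr (d/ident db (:a dat))]
                         :when (not (keep? attr))]
                     [:db/retract eid attr (:v dat)])
                   [(assoc new-m :db/id eid)]))})
```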
It only updates the current entity with a sort of “overwrite” semantic on the current entity.#2016-03-1415:50caspercSo it uses the current database value to see which adds and retracts it should make, but doesn’t really compare and fail if the value is not the expected like cas#2016-03-1415:52caspercBut doing it in the peer would allow for a race condition i believe#2016-03-1415:54bkamphausyep, it matches the transaction function use case shape pretty well, so I think your approach makes sense.#2016-03-1415:55casperc@bkamphaus: On a different note, I read your blog post about AlphaGo and the previous one. I liked them simple_smile#2016-03-1415:57caspercI am trying to get a grasp of deep learning so I appreciate your approach of trying to address obstacles when looking into it#2016-03-1416:01bkamphausglad to hear you’ve found it to be worthwhile reading. The way I find it helpful to think about it is frequently different than what I see in my own reading and research, so I’m happy to write the posts as an exercise in understanding for my own benefit. I’m doubly glad if anyone else gets anything out of it.#2016-03-1416:01bkamphausI definitely plan to continue with the deep learning theme and touch on other frameworks, probably at the same 1-2 post/month rate for now.#2016-03-1416:02caspercSounds good, I’ll keep an eye out then.#2016-03-1501:23weiis there a good way to remove an entity’s components and assign new ones in the same transaction?#2016-03-1507:41val_waeselynck@wei I wrote a transaction fn which retracts the target entities except for a whitelist of lookup refs, and I use that in combination with the specs that add or update the "replacing" entitires#2016-03-1513:54wei@val_waeselynck: ended up doing something similar#2016-03-1518:03matthavenerdoes datomic support connecting to a SQL store without using SSL? 
It seems like its passing ssl=<something> even if I leave ‘sql-driver-params’ blank or some innocuous#2016-03-1518:27matthavenerwell, luckily postgres also has ‘sslmode=disable’, so I’ve fixed this with setting “sql-driver-params=sslmode=disable"#2016-03-1519:10kvltWhen setting up datomic with memcached. Is it preferable to have multiple nodes in the cluster?#2016-03-1519:11kvltDoes datomic make use of consistent hashing?#2016-03-1519:30jamespwThis sounds like Datomic 101 but how do I add a child entity that relates back to a parent entity? For example a University has many Classes. I can add the university and the classes in a single transaction but I can’t add a single new Class to the existing classes. I’ve tried googling and looking through the Datomic docs but hit a wall. What’s the idiomatic way to do this?#2016-03-1519:38codonnell@jamespw: If you're looking to add a class that belongs to a university, you need to get the university's entity from the database. You probably added classes to your university originally using something like [:db/id (d/tempid :db.part/user) :class/university #db/id[:db.part/user -1]]. 
If your university is an existing entity university-entity, you would add a class to it with [:db/id (d/tempid :db.part/user) :class/university university-entity].#2016-03-1519:39hiredmancorrect me if I am wrong, but if you have an unique attribute on the university, you can also use that right?#2016-03-1519:40codonnellThat's right.#2016-03-1519:40codonnellThere's more info at http://docs.datomic.com/transactions.html#2016-03-1519:46jamespwWhen I created the classes in the initial transaction I used the :university/classes relation and had a vector of classes, as I have a cardinality many ref type for :university/classes#2016-03-1519:50hiredmanso you can query datomic to find the entity id of each university, then use that entity id to add more classes#2016-03-1519:56hiredmanI have not used datomic very much, but if I recall correctly, if you have a unique attribute on a university like maybe :university/name and you had already transacted a university named "Foo" if you do another transaction that contains something like [#db/id[-1] :university/name "Foo"] the tempid will be resolved to the entity id of the already existing university with the name "Foo"#2016-03-1520:01jamespwIs there a ‘belongs to’ association that’s auto created when I add the one to many ref? I can find the university name but when I try and add a new Class it just overwrites the existing one#2016-03-1520:03jamespwOh it looks like I can use the underscore association (:university/_classes)#2016-03-1520:04bkamphaus@jamespw can you provide a code example of how how you’re trying to add a new class? If you’re working with entities for each, one mistake you might be making is retrieving a class entity and overwriting its attributes, rather than transacting a new class entity.#2016-03-1521:24jdubie@bkamphaus: it’s very convenient to transact the schema and create the database every time my server process (re)starts. i’ve heard this is not best practice. do you agree? 
if so what are the specific bad things that will happen if you do this?
```clj
(d/create-database datomic-uri)
(let [conn (d/connect datomic-uri)]
  @(d/transact conn schema))
```
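An idempotent alternative, sketched with rkneufeld/conformity (the norm key and resource name here are invented for illustration):

```clj
;; Only transacts norms the database doesn't already conform to, so it is safe
;; to run on every (re)start.
(require '[io.rkn.conformity :as c])

(def norms-map (c/read-resource "schema.edn"))        ;; hypothetical resource

(defn init! [conn]
  (c/ensure-conforms conn norms-map [:my-app/schema])) ;; hypothetical norm key
```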
#2016-03-1522:03bkamphaus@jdubie: I think it makes the most sense to use something like the approach in conformity - https://github.com/rkneufeld/conformity - where you ensure that schema (and entites, w/e) specified in e.g. an edn file are in the database in the form you specify, but don’t submit spurious transactions.#2016-03-1522:04bkamphausThe create call is pretty much harmless. Additional schema transactions will generate empty transactions (i.e., if nothing to be done, won’t add any datoms apart from the transaction datom itself), so they're not completely idempotent.#2016-03-1522:25jdubieawesome. thanks. very helpful.#2016-03-1523:59elijahchanceyHowdy! I’m following the instructions here: http://docs.datomic.com/aws.html and have concluded that I can’t create a transactor on AWS while specifying a VPC. We have a multi-vpc environment and need to be able to create the transactor in a non-default VPC. Help?#2016-03-1614:08gerstree@elijahchancey: What exactly keeps you from creating a transactor?#2016-03-1615:59haroldIs there a way to get more feedback from ./bin/datomic restore-db file:./data/db-backup datomic:?
A progress bar, and/or a few prints outlining the steps it is taking would be tremendous.#2016-03-1616:15elijahchancey@gerstree: This is what happens when I try to create the transactor#2016-03-1616:15elijahchancey$ bin/datomic ensure-transactor dev-transactor.properties dev-transactor.properties
{:success dev-transactor.properties}
$ bin/datomic ensure-cf dev-cf.properties dev-cf.properties
com.amazonaws.AmazonServiceException: The security group 'ec2-securitygroup-databases-xxxx' does not exist in default VPC 'vpc-xxxxxxx' (Service: AmazonEC2; Status Code: 400; Error Code: InvalidGroup.NotFound; Request ID: xxxxxx)
#2016-03-1616:16elijahchanceyin our environment we have multiple VPCs to separate our infrastructure. it’s important for us to be able to specify which VPC to create the datomic infrastructure in so it can communicate with the correct EC2 resources. The default VPC isn’t specific enough.#2016-03-1616:27elijahchanceyAlso, the security group ec2-securitygroup-databases-xxxx does exist, but it’s in a different VPC than the default VPC.#2016-03-1617:20bkamphaus@elijahchancey: at present the ensure-, etc. process doesn’t accommodate VPC. It’s a feature request we’re considering to document steps or to have it supported via the auto/generative steps. We do have some users who generate the cloud formation json with create-cf-template initially, then edit additional information for VPC. There are users who use the transactor AMI still while doing so.#2016-03-1618:25gerstree@elijahchancey: Yeah, VPC is the reason we left the ensure process and roll our own setup.#2016-03-1618:27gerstreeWe currently run the transactor as a docker container in AWS ECS. You could do that (if you are looking for trouble of a different kind 😉 )#2016-03-1619:02elijahchancey@bkamphaus @gerstree thanks for the workaround suggestions!#2016-03-1619:16gerstreeWhenever you need more details, ping me.#2016-03-1623:09elijahchanceythanks!#2016-03-1704:30kahunamooreIs it possible to save the queries created in the Datomic Console into the browser's Local Storage (on a per-db/uri basis)? I searched the docs and this ML but found no mentions of it.#2016-03-1704:59bkamphausNot at present.#2016-03-1705:36wei@marshall late to respond but thanks for your solution to sample-query last week. worked perfectly#2016-03-1710:58isaacI got this error when I try to connect to remote transactor:
HornetQNotConnectedException HQ119007: Cannot connect to server(s). Tried with all available servers. org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:906)
#2016-03-1710:59isaacI can use telnet connect those 3 ports — I opened(20177,20178,20179)#2016-03-1711:03isaacI use Dotomic in dev mode#2016-03-1714:17matthavenerisaac: using datomic free?#2016-03-1717:40harold@bkamphaus: Are you in CO?#2016-03-1717:41haroldWe are doing sweet machine-learning in Boulder on top of Datomic (❤️) and would love to chat.#2016-03-1717:43haroldhttp://thinktopic.com#2016-03-1717:44bkamphaus@harold: yep, I’m in Longmont. Went to one of the Clojure meetups that Jeff and Jon organized. Happy to chat with you guys about how you’re using Datomic.#2016-03-1717:44haroldRadical! We'll be doing more meetups for sure.#2016-03-1718:31akielIn my peer, I have a ObjectCache hit ratio of 1 and no reported StorageGetBytes. But I see access to my Riak storage. Is that normal?#2016-03-1718:38bkamphausI would expect some access — peers read active transactor heartbeat from storage, also the transactor notifies peers of newly written data that needs to be read into memory index on peer.#2016-03-1718:40akielBut the access correlates with my load test. I have nearly no storage access during no load.#2016-03-1719:09akiel@bkamphaus: Sorry it was another process accessing the Riak cluster which is triggered by the same load test. I have no storage access from Datomic itself.#2016-03-1719:10bkamphausah, ok, makes sense.#2016-03-1719:11akielyes my oAuth tokens are also stored there...#2016-03-1720:17wei@harold are you using any of the open source ML tools, or your own stack?#2016-03-1722:02ljosaI got a ProvisionedThroughputExceededException while restoring a database to an underprovisioned DynamoDB store. 
Will the transactor and peers back off and retry instead of dying if DynamoDB's read or write capacity is exceeded?#2016-03-1722:03bkamphausto some extent, Datomic will respond to throttling (and metrics like StoragePutBackoffMsec or StorageGetBackoffMsec will be reported), but it is possible that it throttling is too severe that a transactor will fail to heartbeat and fall over, for example.#2016-03-1722:08ljosathat makes sense.#2016-03-1722:15ljosaAre the transactor and peers resilient to memcached dying as long as the hostname of the memcached server resolves when the transactor or peer starts?#2016-03-1800:08harold@wei - both :: definitely planning on open sourcing our stuff as well, when the time is right.#2016-03-1800:28wei@harold have you found that datomic is sufficient for the volume of data you generate?#2016-03-1815:41curtosiswould there be any issues with creating enum entities dynamically (i.e. to reflect user additions)? I want to provide a hierarchical tagging system and think they'd be more useful than raw strings.#2016-03-1815:42curtosisThe one issue I can think of is that I'd have to manage them indirectly to e.g. handle "deletions"#2016-03-1815:43bkamphaus@curtosis: so for the first pass I’ll throw out the basic rule of thumb — if it’s something you’d type into your application code, that’s the best fit for an ident.#2016-03-1815:43bkamphausfor most other cases having a unique identifier and using lookup refs is the best approach.#2016-03-1815:43curtosishmm.. why is that?#2016-03-1815:46bkamphausthe point of ident is for its keyword name to be interchangeable in many contexts with its entity id. If you have to do partial matching on the stringified ident, or generate things on the fly, you’ll give up performance advantages that come from having the ident in memory.#2016-03-1815:47bkamphausalso ill suited for existence tests in query (error vs. 
no matches)#2016-03-1816:19stuartsierraAlso, if I recall correctly, all Idents are kept in memory, all of the time, on all Peers. If your users create millions of tags, that could lead to inefficient use of memory.#2016-03-1816:19stuartsierraAs far as convenience, Lookup Refs are almost as convenient as Idents.#2016-03-1816:50curtosis(sorry, Internet decided to die on me. finally gave up and switched to iOS app.)#2016-03-1816:53curtosisThat makes sense. I don't think I expect anywhere near that many tags, but there doesn't seem to be much cost to just avoiding it in the first place. And there's indeed something to be said about not exposing program-internal language to the user.#2016-03-1816:54curtosisas I understand it, I don't even really need to create a uuid attribute - the string name has to be unique anyway for lookup refs to work, right? So there'd be an entity with one attribute, which would be a name instead of a db.ident, right?#2016-03-1816:56bkamphausright, in this case by “unique identifier” I meant to point towards a string (or potentially keyword) attribute with :db/unique :db.unique/identity rather than suggest a uuid valued field.#2016-03-1816:58bkamphausfor tags or something similar, unique string makes sense. Whether or not it makes sense to model it as an entity is a separate question, it looks like given the hierarchy of tags you expect, the entity with an attribute for a name and other attribute(s) for relations to other tags makes sense.#2016-03-1817:02curtosisgood, I'm following. :) And in fact I do have it modeled as an entity, precisely because I also want a :tag/parent ref attribute.#2016-03-1817:12curtosisrereading the page on identity and uniqueness, it's all laid out pretty nicely - the key bit here is that my "enumerated tags" usage is not programmatic, so the semantics should be different (as are the tradeoffs). 
got it, thanks!#2016-03-1817:53sdegutisWhen a peer upgrades its Datomic version, must the transactor that it connects to also be on the exact same version? If not, how do you know when to upgrade both the peer and transactor, as opposed to just the peer?#2016-03-1817:55bkamphaus@sdegutis: don’t need to match versions except across compatibility breaking changes (all documented here: http://docs.datomic.com/release-notices.html ) — recommendations for upgrading an entire live system are here: http://docs.datomic.com/deployment.html#upgrading-live-system#2016-03-1817:56sdegutis@bkamphaus: Excellent, thanks very much.#2016-03-1821:49curtosishow would you use get-else where the non-default value is not the value that would be missing? This doesn't do it:
(d/q '[:find ?label ?parent
:in $
:where
[?t :tag/label ?label]
[?t :tag/parent ?r]
[(get-else $ ?r :tag/label "N/A") ?parent]] (d/db conn))
#2016-03-1822:25curtosishmm... maybe get-else isn't the right tool here. I got it to do what I expect using an or-join, but it looks a little ugly:
[:find ?label ?parent
:in $
:where
[?t :tag/label ?llabel]
(or-join [?label ?parent ?t]
(and [(missing? $ ?t :tag/parent)]
[(ground "N/A") ?parent])
(and [?t :tag/parent ?r]
[?r :tag/label ?parent]))]
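Another way to express the default, sketched here untested against the same hypothetical :tag schema: let get-else bind either the parent eid or the literal default, and resolve the label outside the query.

```clojure
;; Untested sketch: bind either the parent eid or a sentinel string via
;; get-else, then resolve eids to labels in plain Clojure afterwards.
(defn labels-with-parents [db]
  (for [[label r] (d/q '[:find ?label ?r
                         :where
                         [?t :tag/label ?label]
                         [(get-else $ ?t :tag/parent "N/A") ?r]]
                       db)]
    [label (if (string? r) r (:tag/label (d/entity db r)))]))
```

Whether get-else accepts a default of a different type than the (ref) attribute is an assumption here; if it does not, the or-join above remains the reliable form.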
#2016-03-1822:27curtosisI suppose that's not actually terrible, but perhaps there's a simpler way.#2016-03-1901:23weidoes anyone have experience exporting datoms into a SQL db or other SQL-queryable format?#2016-03-1904:10isaacWhat is the uplimit of :db/id in datomic#2016-03-1904:11isaacI found the bigest integer in js is 2^53 - 1#2016-03-1905:19oli@wei whyfor?#2016-03-1905:21wei@oli using an analytics service that requires SQL#2016-03-1905:23oliwell there goes my initial thesis (what query do you want to express in sql that can't be expressed in datalog)#2016-03-1905:32oliwithout knowing the shape of your data it's hard to answer (e.g. are all attributes grouped in a common namespace for a given entity?)#2016-03-1905:37oliif so, you might be able to get away with creating a table for each namespace with columns mapped to the attributes, inserting nulls for missing attributes#2016-03-1905:37oliif not, it might be sixth normal form time#2016-03-1905:39olithat is, creating a [k v] table for each attribute with the v column type mapped to the datomic schema type of the attribute#2016-03-1905:40oliassuming you don't care about transactions#2016-03-1905:41olihttp://www.anchormodeling.com/#2016-03-2023:45j0nidatomic noob question: I see that the tutorials suggest only using :db.part/user for experiments, and presumably making other partitions for real work - all the examples for enumerated type entities use the :db.part/user partition even after saying we shouldn't use it - and when I make a schema that creates a new partition, my #db/id[:db.part/other-partition] ids cause an error#2016-03-2023:45j0niwhat's the right way to do this?#2016-03-2023:46j0nifwiw I'm using conformity and have a norm-map with one item containing the partition, and another containing everything else, with a :requires pointing back to the partition item#2016-03-2112:51caspercI might be coming up against a bug in Datomic here. 
I am doing a pull call against an as-of db value, and Datomic seems to be doing a full db scan.#2016-03-2113:04casperc(time (d/pull (d/as-of db 13194152884511) '[*] 17592199395616))
"Elapsed time: 127993.736043 msecs"
#2016-03-2113:04casperc(time (d/touch (d/entity (d/as-of db 13194152884511) 17592199395616)))
"Elapsed time: 0.844873 msecs"
#2016-03-2113:06caspercDoes the same thing basically, but the pull version takes forever. When doing the pull against the current db there is no problem.#2016-03-2113:07casperc@bkamphaus: Mind taking a look at this when you get on?#2016-03-2113:43bkamphaus@casperc one big difference between pull and entity is that entity is lazy. So the entity call isn’t really measuring any retrieval. Are there a lot of component refs? If there are, depending on the size of the graph implied by component refs from the starting entity, a wildcard pull could result in a lot of work.#2016-03-2113:45caspercYeah, but I am doing a touch, doesn’t that fetch all the datoms that are lazy?#2016-03-2113:45bkamphausgah, reading. Yes.#2016-03-2113:46caspercAnd it doesn’t actually have any component refs in the mix simple_smile#2016-03-2113:46bkamphausHave you restarted and tested in isolation?#2016-03-2113:46bkamphausI.e. is entity taking advantage of pull’s having cached segments?#2016-03-2113:46bkamphausOr are both hot cache examples.#2016-03-2113:47caspercThe database does have a lot of data in it though, 5M of that type and about 20M entities in total. Dunno if that makes a difference.#2016-03-2113:47caspercBoth are hot in the sense that pull takes a long time regardless of doing it right after entity or a previous (slow) pull.#2016-03-2113:49caspercAnd it takes up a lot of CPU on the peer with the pull, so I think it must be missing an index somehow.#2016-03-2113:54caspercJust retested from a cold start again, and it is the same.
And the peer is fetching a lot of data on top of the CPU usage, even though it should be cached (via the previous entity/touch call)#2016-03-2113:54bkamphauswhich version of Datomic are you using?#2016-03-2113:54casperc[com.datomic/datomic-pro "0.9.5344"]#2016-03-2113:58caspercMy process is at 1,08 GB received from just having done those two calls simple_smile#2016-03-2114:18bkamphausI can’t repro a disparity in performance like that with any of the large dbs I have available locally. I suspect there’s something specific to your data or schema that’s hitting a corner case. Quick sanity check - are you essentially getting the same results out of both? E.g. checking count and contents of each returned map? You can (into {:db/id ent-id} (d/entity asofdb ent-id)) to put the data into a map similar to what pull returns (except differing re: some retrieval behavior and ident resolution).#2016-03-2114:25caspercJust checking. Count is the same
(= (keys ent-res-map) (keys pull-res))
true
#2016-03-2114:26caspercBut equality is false for some reason, let me just check why
(= ent-res-map pull-res)
false
#2016-03-2114:26bkamphaus{:db/id ...} vs ident probably.#2016-03-2114:29caspercIt is due to a ref in the entity map being a datomic.query.EntityMap not clojure.lang.PersistentArrayMap like from the pull#2016-03-2114:29caspercOtherwise they are equal#2016-03-2114:29bkamphauswhat size?#2016-03-2114:30casperc(count ent-res-map)
47
#2016-03-2114:32caspercI should mention that there is only the one tx on the entity i am pulling, so pulling from the current db actually gives the same result as pulling from the as-of db. Dunno if that could be the edge case being hit, given that for most uses you would just use the current db value.#2016-03-2114:32bkamphausI wouldn’t expect that to matter and that was true for the first local repro I tried.#2016-03-2114:34caspercSo any way to debug this? I’d be happy to take it in a private convo or file a bug to avoid spamming the channel.#2016-03-2114:35bkamphausFollowing up in private message.#2016-03-2116:43sonnytoI built a spreadsheet like clojurescript app using OT https://en.wikipedia.org/wiki/Operational_transformation and using datomic for persistence. every key stroke is sent to datomic. It works well but I want to get feed back if this is a good use case for datomic. I'm afraid the transactor cannot handle the load. I like using datomic for this use case for its time model. I'd like a user to go back in time and see all changes to the data#2016-03-2116:44sonnytoperhaps git is a better use case for this? but i'd like the user to be able to query the data as well#2016-03-2117:23sonnytothis looks interesting and probably would work better for OT use case than datomic https://github.com/Jannis/gitalin#2016-03-2117:23kingoftheknollJust throwing it out here. Yesterday I was setting up my first project using Datomic Pro and I noticed that when I do lein run the process will hang, but if I start the repl with lein repl or cider-jack-in, I get the gpg password request and then I can start the server from the repl. It seems like the lein run somehow blocks the gpg auth popup or just doesn’t know how to do that.#2016-03-2117:25sonnyto@kingoftheknoll: strange. i've never had that experience. I'm using boot but that shouldnt make a difference. 
what is your DB URL?#2016-03-2117:25kingoftheknollNot actually requiring anything in yet.#2016-03-2117:25kingoftheknollJust loading the deps#2016-03-2117:26kingoftheknollI mean I can get around it and I’ve testing that I can use datomic in the repl with an in memory db but just can’t start my ring server with lein run#2016-03-2117:33sonnytokingoftheknoll no error messages?#2016-03-2117:50kingoftheknollnope just hangs#2016-03-2117:52bkamphaus@kingoftheknoll: I always get gpg prompting rather than indefinite hanging when I use it (though it hangs a little sometimes). That said, a workaround that bypasses the entire process is to download the distribution and run bin/maven-install - it will put the dep in your local maven and then you don’t need repo+creds in lein.#2016-03-2117:54kingoftheknolldoes that mean I don’t need to include it as a dep in project.clj?#2016-03-2117:54kingoftheknollwait, I think I’ve already done that#2016-03-2117:55bkamphausYou’ll still have: [com.datomic/datomic-pro "0.9.5350”] in the :dependencies list, it will just find it in your local maven.#2016-03-2117:58kingoftheknollwhat about making a jar file, would it bundle the dep from maven for me?#2016-03-2117:58kingoftheknoll^ sheer ignorance here sorry#2016-03-2118:11dm3@sonnyto - did you do a write up about your app anywhere? sounds interesting#2016-03-2118:25sonnyto@dm3 no but I would like to opensource it later#2016-03-2119:37bkamphaus@kingoftheknoll: ls ~/.m2/repository/com/datomic/datomic-pro/0.9.5350#2016-03-2119:39jannis@sonnyto: Please be aware that gitalin is highly unfinished. 😉#2016-03-2120:19sonnyto@jannis: have you played with it?#2016-03-2120:21jannis@sonnyto: I only tried it last week.#2016-03-2120:21jannis@sonnyto: Just kidding, I'm the author. 
😉#2016-03-2120:21sonnytolol cool#2016-03-2120:22sonnytoit looks cool and woudld fit my use case nicely#2016-03-2120:22jannisIf you can call me that, since it's so unfinished...#2016-03-2220:10actsasgeekis there something like get-else but for relations?#2016-03-2220:14actsasgeekbasically, I need to integrate some external data into a query. The data is a vector of tuples with an id and the constant true. I know I can include it via :in $ [[?id ?val]] but I need to default ?val to false for all the ?id not included. 😕#2016-03-2220:39actsasgeekinteresting. if I turn it into a map instead, and refer to it as :in $ $1 and use [(get $1 ?id false) ?val] that works which is unexpectedly…good?#2016-03-2220:44bkamphaus@actsasgeek: You shouldn’t need to turn it into a map or invoke Clojure functions, but you should be using the $ to use the tuples as a src-var as opposed to the relational binding if you want to treat it as a data source (which is what get-else would require).#2016-03-2221:24actsasgeekbut get-else also requires an attribute and I’m not sure what that would be…an index?#2016-03-2221:28bkamphausIt’s tailored for a datom, that’s true, (i.e. EAV…) , so if the data isn’t in that form, the options are to use your own function, which appears to be working in your other example (so maybe fine to use that), or structure the data into a datom like structure which would work with the builtin get-else or other provided functions.#2016-03-2221:30actsasgeekso would transforming [[?id ?val]] into something like [[?id :my/val ?val]] have worked? Or is there more to it than that?#2016-03-2221:32actsasgeek(or better yet, just generate the second form at the start)#2016-03-2309:18lowl4tencyHi guys, are there a way to ask datomic about address of secondary (passive) instance of HA transactors? I've got a way to get Active instance details:
(datomic.peer/transactor-endpoint db-uri)
#2016-03-2309:22lowl4tencyI can't find a doc for datomic.peer lib for take a look on all methods#2016-03-2309:23lowl4tencycc @bkamphaus#2016-03-2312:41stuartsierraThe active transactor writes its location into storage for Peers to find it. I don't think the passive transactor(s) record their location until they become active.#2016-03-2313:10lowl4tencystuartsierra: so I haven't another way except "get active and compare"?#2016-03-2313:11stuartsierraI do not know of any mechanism in Datomic to find passive transactors.#2016-03-2313:12lowl4tencyfeature request simple_smile#2016-03-2313:12bkamphaus@lowl4tency: the issue here is that the passive transactor doesn’t write to storage, so there’s no way for a peer to get to it.#2016-03-2313:13bkamphausIf this is re: cloud formation ops - i.e., which instance to kill, you should kill at the cloud formation level. I.e., put up a new transactor pair, then when you see HeartMonitorMsec sum go up, kill the old transactor pair.#2016-03-2313:15bkamphausThis will force a failover to the new transactor pair (they’re ready to take over when they’re reporting HeartMonitorMsec metrics. Depending on whether or not there’s a standby at present (again discernible if anything is posting that metric), you can either wait for HeartMonitorMsec to show up again after being absent, or look at samples/count or sum values which will tell you that more than 1 standby transactor is ready to go.#2016-03-2313:18lowl4tencyThank you#2016-03-2321:41p.brcI am trying to alter my schema. In order to prepare the addition of a unique constraint I added an index with something along the lines of
[{:db/id :person/external-id
:db/index true
:db.alter/_attribute :db.part/db}]
But now my transactor is dying with:
WARN default datomic.update - {:message "Index creation failed", :db-id “store-0b6b5518-0141-4392-b077-1729ea4464c7", :pid 1965, :tid 12}
java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method) ~[na:1.8.0_72]
at java.io.File.createTempFile(File.java:2024) ~[na:1.8.0_72]
at datomic.external_sort$temp_file_io$reify__2757.make_temp_file(external_sort.clj:22) ~[datomic-transactor-pro-0.9.5350.jar:na]
at datomic.external_sort$file_system_sorter$fn__2850.invoke(external_sort.clj:113) ~[datomic-transactor-pro-0.9.5350.jar:na]
I am not sure what I am doing wrong here. I assume permissions for the default temp directory are wrong even though it does not seem like they are and /tmp exists. What is confusing to me is why this has not been an issue before. Does Datomic only create these temp files when I alter an existing schema to add an index?#2016-03-2413:49bkamphaus@p.brc: how large is the db? Is this a small test db? (i.e., has it crossed the memory index threshold and actually persisted an index, or did schema alteration result in the first indexing job?)#2016-03-2414:49p.brc@bkamphaus: Yes that might be the case. It is a tiny database on a dev server (the backup is only about 500KB ). Is there a separate directory setting where the temp files for the indices a written to. Or will they go into the data-dir (which is already writeable for the process)?#2016-03-2713:17afhammadHi, i’m sure you get this question a ton, is there any information on how much "Datomic Pro Starter” can handle before I need to upgrade? I’m just looking for vague example/case studies to gage off of. I’m more than happy to pay when it makes sense and is feasible#2016-03-2715:30raymcdermottanother generic question - are there any references on what 1.0 will mean for Datomic? Are there some specific features that are missing?#2016-03-2813:26robert-stuttaford@bkamphaus and friends: is Datomic ‘complete’? Last major feature release was Jan last year, with 0.9.5130. Is anything new going to be added/updated?#2016-03-2813:34alexmillerI don't think anyone believes Datomic is "complete" :)#2016-03-2813:39robert-stuttafordi agree!#2016-03-2813:54bkamphausYes, new Datomic things are coming.#2016-03-2813:55robert-stuttafordoh, good simple_smile#2016-03-2813:57bkamphausAs usual, though, we follow the first rule of Datomic future club.#2016-03-2813:58robert-stuttafordof course. 
the key disadvantage to that rule is that it doesn’t allow non-members to know whether the club is still meeting at all, or not simple_smile#2016-03-2814:06timgilbertHey, kind of a philosophical question, but datomic’s pricing model seems a little hostile to a microservices architecture, since every new microservice peer I add winds up counting as a CPU. I can sort of shield the datomic bits behind a bespoke API or the REST API, but then I’m losing a lot of the benefits of its distributed architecture, eg the caching. Just curious if this is something the datomic team has thought about or has recommendations about...#2016-03-2814:14dm3@timgilbert: as a datapoint - we had a similar question and even had a call with Datomic people ~9 months ago. They didn't offer any alternative licensing models at the time.#2016-03-2823:06currentoorIs it possible to call d/pull on a collection? Instead of
(map #(d/pull db pull-exp %) eids)
#2016-03-2902:03bhaganyyou could put that pull expression in a query that takes the eids as a parameter#2016-03-2902:31bhaganyI didn't have time to elaborate before, but I mean something like:
(d/q `[:find [(pull ?e ~pull-exp) ...] :in $ [?e ...]] db eids)
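The Peer API also has d/pull-many, which takes a collection of entity ids directly; a minimal sketch (the attribute names here are hypothetical):

```clojure
(require '[datomic.api :as d])

;; Pull the same pattern for every eid in one call; returns a vector of
;; maps, one per eid, in the same order as the input collection.
(d/pull-many db [:user/name :user/email] eids)
```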
#2016-03-2902:34bhaganyI don't think that syntax quote/unquote will actually work, but hopefully you get the idea#2016-03-2904:55currentooryeah that could work, thanks @bhagany#2016-03-2904:56currentoorand I can just make the pull-exp an argument as well#2016-03-2904:59tomjackwhy use a query for that?#2016-03-2905:08currentoor@tomjack: I have a very long list of eids and I wanted to test if batching the query together would be more performant.#2016-03-2905:08currentoorI believe the query engine parallelizes stuff under the hood.#2016-03-2905:09currentoorI suppose I could use pmap myself but hopefully I can rely on people smarter than me to do the parallel stuff.#2016-03-2915:36caspercAnyone know of a good way to count the number of entities in the database?#2016-03-2916:39bostonaholic@casperc: (count (distinct (map :e (d/datoms db :eavt))))#2016-03-2916:39bostonaholic^^ that will count ALL entities, including entities which describe the schema and datomic structure itself#2016-03-2918:13casperc@bostonaholic: Thanks, I guess I was hoping for to be able to count just the “user entities”, but this is close enough for now simple_smile#2016-03-2918:13gworley3hi. i'm trying to run the datomic transactor on an m4 aws instance but getting errors like this: https://groups.google.com/forum/#!topic/datomic/IXsSUqMkgGo#2016-03-2918:14gworley3unfortunately that doesn't seem to quite suggest a solution for me. 
in this case i'm just trying to use the dev adapter but seem to be having host problems#2016-03-2918:15gworley3i got around this before by having a fixed dns entry that referenced the machine, but in this case i can't do that because this is on a testing instance that is self containing and many copies of will spin up/down#2016-03-2918:25matthavenergworley3: what are your host and alt-host params in your transactor.properties file?#2016-03-2918:57bkamphaus@casperc: you can modify what @bostonaholic provided and use :aevt :user/id (replace with whatever attribute will limit results to those entities). You can also write a query with and use the count aggregate.#2016-03-2918:58bkamphaus[:find (count ?e)
:where
[?e :user/id]]
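The index-based variant bkamphaus mentions can be sketched like this (:user/id stands in for whatever attribute marks your domain entities):

```clojure
;; Walk the AEVT index for one attribute instead of the whole EAVT index.
(count (distinct (map :e (d/datoms db :aevt :user/id))))
```

For a cardinality-one attribute each entity contributes a single datom, so the distinct is only a safeguard.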
#2016-03-2919:53gworley3@matthavener: host=localhost. alt-host not set#2016-03-2919:54gworley3i tried setting to 0.0.0.0 but didn't help.#2016-03-2919:54matthavenerare you using datomic free (h2 storage) with your tests?#2016-03-2919:56gworley3no, i have our license key put in#2016-03-2919:56gworley3and i'm setting to dev#2016-03-2919:57matthavenerfrom the exception, it looks like H2 storage org.h2.jdbc.JdbcSQLException#2016-03-2919:58gworley3hmm#2016-03-2919:58matthaveneri’ve had issues with that when the host doesn’t know its ‘real’ IP (docker).. i had luck setting host=0.0.0.0 and alt-host=<real host ip>#2016-03-2919:58matthavenerbut that was with H2 storage, which is why I asked#2016-03-2920:00gworley3ok, i'll try that#2016-03-2920:07gworley3@matthavener: hmm, none of that seems to work whether i use public or private ip addresses or if i use aws supplied dns entries for those two or not. same effect if i use host or alt-host#2016-03-2920:12gworley3ah, i see, seems this is a known issue with h2: https://stackoverflow.com/questions/1881546/inetaddress-getlocalhost-throws-unknownhostexception#2016-03-2920:13gworley3the (unfortuante) solution seems to be to modify /etc/hosts (i did this and it worked)#2016-03-2920:24bkamphaus@gworley3: in general, I don’t work with dev transactors on ec2 instances, but maybe worth noting that the Datomic generated AWS cf sets host and alt-host as follows:
"host=`curl http:\/\/169.254.169.254\/latest\/meta-data\/local-ipv4`",
"alt-host=`curl http:\/\/169.254.169.254\/latest\/meta-data\/public-ipv4`",
#2016-03-2920:25gworley3i'm actually not too picky here. i was only using dev because it seemed easiest. i'm trying postgres now to see if that gets around the issue since i'm already running it on these boxes anyway#2016-03-2920:26gworley3but i might try that if this doesn't work. thanks!#2016-03-2920:29gworley3okay, looks like using sql with postgres works. thanks @matthavener for pointing out about h2. that led me to avoiding it to get around this#2016-03-3013:15caspercWhen doing a large import, is it recommended to leave off the indexes and then add them later?#2016-03-3013:23bkamphaus@casperc: recommended is maybe strongly stated, but it’s definitely something that makes sense to do if indexing is a bottleneck for the import. If you haven’t might be helpful to review the [import docs](http://docs.datomic.com/capacity.html#data-imports) for other ideas.#2016-03-3013:28caspercI did check them out, but I can’t e.g. do batching, since I am adding data to the transaction.#2016-03-3013:29casperc@bkamphaus: How can I tell if the indexing is the bottleneck? I only have the logs to go by for now#2016-03-3013:30caspercIs the AlarmBackPressure metric logged as well?#2016-03-3014:12bkamphaus@casperc: yes, any AlarmBackPressure in the logs will indicate transactions are pushed back waiting on indexing.#2016-03-3016:50biggertI have a question about a way I’m trying to write a datalog query where I’m using a collection binding and negation. Long story short, I want the entire list of attributes that are components except for a select few. My query looks like this:#2016-03-3016:50biggert[:find [?name ...]
:in $ ?e [?skip ...]
:where
[?e ?a ?v]
[?a :db/ident ?name]
[?a :db/isComponent true]
(not [?a :db/ident ?skip])]#2016-03-3016:52biggertSo if I pass in my entity-id and a list that contains only one attribute that I want left out, it works. Any other variation on the list and it doesn’t work… so an empty list (saying that I want no attributes ignored) or a list with 2 or more (saying I want all of these attributes ignored).#2016-03-3016:53biggertDoes anyone know why that’s happening? I can’t explain the empty list scenario at all but the 2+ scenario seems to be doing an ‘and' instead of an ‘or'.#2016-03-3016:59bkamphaus@biggert: trying to parse through the problem. My quick reading of this query is that with 2+ it will union several subsets of relations with one thing excluded, which would be the same as the query with no exclusions (without the not clause). I.e. one thing gets excluded at a time and the results are unioned.#2016-03-3017:04biggertGotcha. From the queries we’ve played with and parsing the different results, I can see that being the case. I’m having difficulty coming up with a different approach in datalog to avoid that.#2016-03-3017:05bkamphauspull ‘[*] and dissoc resulting map maybe?#2016-03-3017:12biggertYeah... We have another approach where instead of the ?skip we map over the query and build it dynamically adding nots for every individual attribute and it works but we'd like to do that work in datalog and avoid a dynamic query if possible.#2016-03-3018:55calebp[:find ?e
:in $ ?e ?atts-to-skip
:where
[?e ?a ?v]
[?a :db/isComponent true]
[?a :db/ident ?i]
(not [(?atts-to-skip ?i)])]
#2016-03-3018:55calebpThis also seems to work if you pass in the atts-to-skip as a set#2016-03-3018:57calebpSorry about the typo in the find clause. Obviously not looking for ?E#2016-03-3018:58bkamphausyes, you can pass a collection and invoke Clojure collection manipulation in the body of Datalog (i.e. in this case testing set membership), Not usually a solution I tend to reach for, though it may be a fit for this case. Still thinking on it a bit.#2016-03-3101:09biggertThanks @bkamphaus for the help and understanding earlier.#2016-04-0318:50jetzajachello everyone. I'm here with a question: is there way to subscribe to queries efficiently in datomic? Is it possible to determine if this query is affected by this novelty without executing it against whole db?#2016-04-0318:53jetzajacmaybe we could log index access for such query and check if it intersects with new datoms?#2016-04-0320:40tomjackI suspect it's harder than that. It seems like a research problem to me, which gets harder the more of Datomic's datalog you want to support#2016-04-0320:44tomjackE.g. you can find some papers by Dong from the 90's about "non-recursive incremental evaluation" for a subset of vanilla datalog. I guess we don't care about the "non-recursive" here, though (he wants something implementable in SQL).#2016-04-0412:35stuartsierra@jetzajac: There's no efficient way to determine if an arbitrary query is affected by a transaction, in the general case. As an alternative, you can annotate transactions with metadata describing the “type” of change, and use that to determine what “types” of queries might be affected.#2016-04-0412:37jetzajacok, that’s likely to be a solution, thanx!#2016-04-0414:32mlimotteHow should host and and alt-host be set for Transactor. I didn't find any explicit documentation on these fields, but here's my guess:
'host' is what address the transactor should listen/bind on.
'alt-host' is the address that is advertised (by putting it in datomic storage) for clients to connect to.
In my case, I'm running in a docker-image in AWS EC2. So I believe I should set 'alt-host' to the public-ip of the AWS instance. And I found an example from Stu setting 'host' to be the EC2 private-ip.
But this isn't working, getting "Failed to create netty connection ... connect timed out".
I'm wondering if 'host' should really be the IP of the docker container, rather than the host internal IP.
I also tried 'host' as 0.0.0.0, but that didn't work either.#2016-04-0417:28luposlip@mlimotte: I personally set host to 0.0.0.0 and alt-host to the internal ip address of the host (docker-machine).
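For reference, a minimal sketch of the two properties as described above (the addresses are placeholders, not a verified recipe):

```properties
# transactor.properties fragment
# host: the interface the transactor binds/listens on
host=0.0.0.0
port=4334
# alt-host: the address advertised via storage, i.e. what peers will dial
alt-host=10.0.1.23
```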
If you need to connect to it from the outside, you need a different setup with the public IP. In this case your security settings for the EC2 security group is probably blocking your incoming requests on port 4334-4336.
Also remember to expose the same ports in the docker config.#2016-04-0417:29luposlipPerhaps this will help (really rudimentary, but it was enough inspiration for me to get up and running):
https://github.com/enterlab/docker-aws-ecs-env#2016-04-0419:06mlimotte@luposlip: thanks for the response. I'm using dynamodb storage, so I believe I only need 4334 exposed. This port is exposed by the docker container and is setup for ingress from our network in the EC2 security group.#2016-04-0419:07mlimotteI discovered that I'm able to connect to the transactor from an external IP, but Datomic Console (inside the same VPC) is unable to connect to the transactor.#2016-04-0419:09mlimotteI think using public IP for 'alt-host' is the problem. I need public-ip so I can access the transactor from outside EC2 (batch scripts). But inside VPC I probably need an internal ip.#2016-04-0419:25mlimotteOk, to close the loop... I got this working by specifying the AWS public-hostname (i.e. curl >) for 'alt-host' instead of public IP. The public host resolves to the public IP when used outside of EC2, and resolves to the internal IP when used inside EC2.#2016-04-0517:59josh.freckletonbefore using datomic, I want to make sure: how much can I do with datomic without paying for it? IE can I launch an instance on something like Heroku and use it for free for commercial use, for as long as I want? Scale it as big as i want?
this sounds cheap, but I'm working on personal projects, and I'm spoiled by every other database being freely usable, and the answer to this isn't clear to me from the datomic pricing page simple_smile#2016-04-0519:00bkamphaus@josh.freckleton: the cost scales up with the number of processes. You can run a transactor + two peers against a durable storage (and scale those peers up to large instances, etc.) so can get a long ways. You can run it on AWS or Heroku etc. without a paid license. You won’t be able to take advantage of speedups from memcached or an HA configuration for transactors.#2016-04-0520:10josh.freckleton@bkamphaus: thanks!#2016-04-0523:40podviaznikovfollow up questions to about using datomic. What does 12 months of updates are included mean in the Pro Starter version?#2016-04-0601:41bkamphaus@podviaznikov: it means that your license key will work with any version of Datomic Pro released within one year of when your key is issued.#2016-04-0601:59podviaznikovbut it will work forever with that version, right?#2016-04-0603:06bkamphaus@podviaznikov: Yep.#2016-04-0603:07bkamphausWell, forever is a long time. For your lifetime, sure simple_smile Might not be able to run it around the time that the sun expands into a red giant.#2016-04-0611:37raymcdermott@josh.freckleton: I posted a blog about running Datomic on Heroku (TL;DR it’s either insecure or transient if you want to do it for free) http://blog.opengrail.com/datomic/heroku/postgres/2015/11/19/datomic-heroku-spaces.html#2016-04-0612:52gravHey! I have a list of datoms from a production db, that I want to copy over to an in-mem test db. Can I re-use the ids somehow? Or do I need to create new temp-ids, maintaining relations and everything?#2016-04-0613:03bkamphaus@grav: you can’t set entity id’s with an import. 
For migrations we generally recommend putting unique identifiers on entities (a domain unique identifier or perhaps a generated uuid) so that you can use lookup refs to resolve refs rather than juggling e.g. negative values to tempid).#2016-04-0614:14josh.freckleton@raymcdermott: Awesome post! Towards the end, aren't you suggesting there's a free way of using it with Heroku Postgres that's safe, and persistent? (I'm confused since your tl;dr suggests otherwise)#2016-04-0614:15raymcdermott@josh.freckleton: you can use Heroku Private Spaces but that is $1k per month#2016-04-0614:16raymcdermott@josh.freckleton: there are some potential hacks with secondary providers to manage IP addresses but YMMV#2016-04-0614:18josh.freckleton@raymcdermott: hm, slightly more than free! Thanks for mentioning your post, it's awesome!#2016-04-0614:18raymcdermott@josh.freckleton: thanks - I have also documented the same for DynamoDB as a backend#2016-04-0614:22raymcdermott@josh.freckleton: the services I refer to are called ‘Fixie’ and ‘Proximo’; with some effort you could provision Datomic that way but it was too hard, for me at least 😉#2016-04-0614:27raymcdermott@josh.freckleton: just checked and seen that there is another one (QuotaGuard Static - perhaps there are more) but same advice applies#2016-04-0614:32raymcdermott@josh.freckleton: just looked and found another buildpack that exposes Datomic as a REST API https://github.com/upworthy/datomic-peer-svc … seems very nicely done#2016-04-0615:16zane@raymcdermott: My colleague @paxan made that buildpack! Hit us up if you've got any questions.#2016-04-0615:18zaneI'm curious. Is anyone using Datomic for the serving layer (batch views) portion of the Lambda Architecture? 
It seems like a weird fit at first, since they're essentially transient, but seems like it could have some advantages (such as being able to join directly into your speed layer Datomic database).#2016-04-0620:44raymcdermott@zane: send @paxan a HT#2016-04-0700:44tmortenSo I am just wondering how you all are handling wrapping a Datomic connection inside pedestal/ring and what best practices are? I've seen people store the connection and/or the actual db inside the request map. What is better? or is storing both in the request map the best approach? Seems like storing the db in the request map is overkill to me but I've seen code which does this...#2016-04-0701:20taylor.sandohttp://www.rkn.io/2014/02/10/datomic-antipatterns-connnnn/#2016-04-0706:12raymcdermott@tmorten: also see the mount framework for minimal configuration management#2016-04-0708:33gravIf I do a pull directly in the query with two dbs, I always get the entity from the first matching database.#2016-04-0708:33gravThat seems like a bug?#2016-04-0708:34gravEg :
(d/q
  '[:find (pull ?e [*])
    :in $1 $2
    :where
    [$2 ?e :foo 42]]
  db1 db2)
Now, if both db1 and db2 have an entity with the matching entity id, I apparently get the entity from db1#2016-04-0708:35gravI would expect the entity from db2#2016-04-0708:40gravActually, if the entity isn’t in db1, I just get the db-id. I can then pull specifically from db2, and I then get the entity in full#2016-04-0712:00jouerosehi all. i am having a read about datomic and i would like to know whether geolocation/spatial is baked into it. if not, are there recommended ways to do so ? thank you.#2016-04-0712:05luposlipHi @jouerose. It isn’t. But it’s easy to do with an entity consisting of a couple of attributes like the following:
{:db/id #db/id[:db.part/db]
 :db/ident :location/longitude
 :db/valueType :db.type/float
 :db/cardinality :db.cardinality/one
 :db.install/_attribute :db.part/db}
{:db/id #db/id[:db.part/db]
 :db/ident :location/latitude
 :db/valueType :db.type/float
 :db/cardinality :db.cardinality/one
 :db.install/_attribute :db.part/db}
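A point-in-bounding-box lookup over those two attributes might look like the following sketch. The attribute idents come from the schema above; the concrete coordinate bounds are invented for illustration:

```clojure
;; Sketch: find entities whose coordinates fall inside a bounding box.
;; Assumes the :location/latitude and :location/longitude attributes
;; defined above; the numeric bounds are made up.
(d/q '[:find [?e ...]
       :in $ ?lat-min ?lat-max ?lng-min ?lng-max
       :where
       [?e :location/latitude ?lat]
       [?e :location/longitude ?lng]
       [(>= ?lat ?lat-min)] [(<= ?lat ?lat-max)]
       [(>= ?lng ?lng-min)] [(<= ?lng ?lng-max)]]
     db 55.5 55.8 12.4 12.7)
```

Range predicates like these are evaluated on the peer over the matching datoms, which is why workloads that genuinely need spatial indexing (R-trees etc.) are a different story.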
#2016-04-0712:07jouerose@luposlip thanks for chipping in. this is new to me. do you think such a solution can go a long way as far as efficiency is concerned?#2016-04-0712:11luposlipIt depends on what you’re going to use the data for, of course. But in general, yes, due to the architecture of Datomic (most importantly in this case is probably the peer cache)#2016-04-0712:14jouerose@luposlip: thanks for your input. it clears it up for me. cheers.#2016-04-0714:21tmorten@taylor.sando and @raymcdermott: thank you for the info!#2016-04-0714:24bkamphaus@grav: pull will only pull from the first data source in a query (at present) - you should probably just consider behavior for multiple source variables undefined. You’ll note in the grammar: http://docs.datomic.com/query.html#query-grammar — that a pull expression doesn’t involve a src-var ($).#2016-04-0714:27bkamphaus@jouerose: avet performance from memory is probably ok for some basic point features; if something really needs R-trees to be efficient, nothing in Datomic at present for that.#2016-04-0714:51haroldIs it possible to write a query that will return the keyword name of every attribute in the schema?#2016-04-0714:53haroldhere's my first cut:
'[:find [?i ...]
  :where [_ :db/ident ?i]]
#2016-04-0714:53haroldIs that everything?#2016-04-0714:55bkamphaus@harold:
[:find [?i ...]
 :where
 [_ :db.install/attribute ?a]
 [?a :db/ident ?i]]
#2016-04-0714:58haroldNice! Thank you.#2016-04-0715:00bkamphausfull schema printer invokable with bin/run at (but uses entity touch, not limited to just idents, and also not limited to : https://gist.github.com/benkamphaus/e5e85def4c08afae4591#2016-04-0715:01bkamphausREDACTED simple_smile#2016-04-0715:23jouerose@bkamphaus: thank you#2016-04-0716:22stuartsierra@bkamphaus: I thought all attributes had to be installed in :db.part/db#2016-04-0716:22bkamphausbrain fail from me, yes that’s correct.#2016-04-0716:22stuartsierrasimple_smile#2016-04-0716:32bkamphausI think I mixed that up with how I’ve done it before, which checking now is just relying on implementation detail of where user added attribute entity ids start (at present):
[:find ?i
 :where
 [:db.part/db :db.install/attribute ?a]
 [?a :db/ident ?i]
 [(> ?a 62)]]
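A sketch that avoids the entity-id implementation detail above: every attribute entity has a :db/valueType, so you can match on that instead. Note this lists Datomic's built-in system attributes as well as user-defined ones:

```clojure
;; Sketch: list the ident of every installed attribute by matching on
;; :db/valueType, which every attribute entity must have. Includes
;; Datomic's built-in attributes, not just user-defined ones.
(d/q '[:find [?i ...]
       :where
       [?a :db/valueType]
       [?a :db/ident ?i]]
     db)
```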
#2016-04-0717:33grav@bkamphaus: re: pull syntax and multiple dbs. Thanks for the clarification. Currently, we just retrieve entity ids and then pull afterwards. I’m thinking it should be possible to parse the query to automatically figure out which db to pull which entity from. I’m thinking of pursuing that idea. Any comments as to whether it’s possible?#2016-04-0717:37bkamphausit might be true for your use case but I don’t think it’s true in the general case. I.e. if you’re thinking because ?e was used in a clause with data source var $2, it should pull from there — it’s just a long as far as the query is concerned, even granted specificity of entity id, it’s a really common use case to want to retrieve an entity e.g. from a fn called on the log or an as-of/since database, and then pull things from the most recent database (or vice versa).#2016-04-0717:41bkamphausI do realize that use case is not a good fit with current lack of ability to specify a src-var in a pull expression, I think that (vs. automatic behavior) is probably the feature request statement of what people want, and the current workaround is to do it outside of query (pull or entity) where behavior is well defined (i.e. you’re passing in the database from which the entity is being projected explicitly.)#2016-04-0717:44grav@bkamphaus: yeah, too much magic might not be the way to go. Anyway, our current method does work fine, even if it means taking another roundtrip.#2016-04-0717:48bkamphaus@grav: not sure what you mean by another roundtrip in this case simple_smile Remember that composing multiple datomic api calls (datoms, entity, query, pull) hits the memory in your application (when warm), so a roundtrip to database in storage is typically not implied. 
In your example pull/entity will probably hit the same index segment in :eavt as the query, so local memory access only.#2016-04-0717:49bkamphausYou don’t have the same “do everything in one query to avoid roundtrips” motivation that you typically do when building out complex SQL queries. (well, REST API use is an exception here, but pull expressions in query not supported there yet).#2016-04-0717:50grav@bkamphaus: ah ok. I’m ignorant (eg n00b) as to those kinds of details of datomic simple_smile#2016-04-0717:51bkamphausNo worries, the architecture overview ( http://docs.datomic.com/architecture.html ) and memory and caching topics ( http://docs.datomic.com/caching.html ) are good primers on that particular aspect of Datomic.#2016-04-0718:12gravCool, thanks!#2016-04-0720:00jouerosehi all. any news of an upcoming book on Datomic ?#2016-04-0721:04bkamphaus@jouerose: no news re: upcoming Datomic books.#2016-04-0721:38jouerose@bkamphaus: ok. thanks#2016-04-0803:28alexmillermore Datomic or more Datomic books: pick one#2016-04-0803:28alexmiller:)#2016-04-0809:30isaacWhy doesn’t Datomic support altering the fulltext setting of an attribute?
At the beginning of our project we did not need fulltext search on an attribute. But now we want it.#2016-04-0809:36jouerose@alexmiller: I want both simple_smile in due time. I think a solid 10 to 20 pager could be made as a great introduction to Datomic.#2016-04-0814:18bkamphaus@jouerose: what’s the hardest thing to discern from docs/group etc. that you’d want to see addressed in a book? Or is it more a general wish for a certain tone and pacing of introduction?#2016-04-0814:21donaldballThe best way to implement an entity with an ordered list of entities#2016-04-0814:26jouerose@bkamphaus: i think you are spot on. Pacing -(we can learn from Carin Meier @gigasquid and her book "Living Clojure") - starting simple, a building block first approach.#2016-04-0814:27bkamphaus@isaac: it has to do with the overhead of creating and maintaining the fulltext index. The fulltext index is also eventually consistent, and sort of a small step outside of Datomic’s model.#2016-04-0814:27bkamphaus@jouerose: have you seen Carin’s “Conversations with Datomic” blog posts? First one at: http://gigasquidsoftware.com/blog/2015/08/15/conversations-with-datomic/#2016-04-0814:27jouerose@bkamphaus: yes i have#2016-04-0815:00bkamphaus@donaldball: I think solutions vary. It's admittedly not easy with Datomic schema at present. Explicit ordering attributes, linked lists via ref attrs, strings - probably more typical approaches.#2016-04-0815:07donaldballSure, I was just suggesting that as a topic for a book chapter. It’s dead simple in most rdbms and is a common requirement, yet solving it correctly in datomic seems to require either learning about transaction fns or recursive rules.#2016-04-0816:28p.brcI see a weird issue: the transactor does not seem to honour my data-dir setting via transactor.properties. But when I set the property directly via the (undocumented?) system property datomic.dataDir it works fine.
Interestingly HornetQ is able to pick up the setting from the properties file just fine and writes its server.lock into the configured directory.
So this does not work for me:
#transactor.properties
data-dir=/var/lib/datomic
this works:
./bin/transactor -Ddatomic.dataDir=/var/lib/datomic transactor.properties
Any ideas?#2016-04-0816:39bkamphaus@p.brc which version of Datomic are you using and where is it putting data instead? I’m assuming you’re using dev/free and talking about where it’s writing the db dir with the h2 files? and that transactor has permissions to access it? (seems implied by it working with command line arg).#2016-04-0816:44bkamphausI’ve tested locally and it works fine for me. Not sure what’s going on in your case.#2016-04-0817:19p.brc@bkamphaus: I am using the latest pro with cassandra. Strace tells me that it is trying to write indexer data relative to the distribution base directory into ‘data/indexer/$UUID’ which is AFAIK the default.#2016-04-0817:53sdegutisIs it common to write code like this, in order to get a bunch of entities from a query? (->> (d/q '[:find [?entity ...] :where [... clause goes here ...]] db) (map #(d/entity db %)))#2016-04-0817:54sdegutisOr is there an easier or less redundant way to get entities from a query? Or perhaps should getting entities from a query be less common than I'm thinking it is?#2016-04-0818:18stuartsierraThat pattern is common.#2016-04-0818:19stuartsierraWhen defining functions, I find it useful to put just the "query" part in its own function definition. That makes it easier to reuse the query in other ways.#2016-04-0818:45haroldIs that also potentially a good place to employ the pull api?#2016-04-0818:48bkamphausDefinitely possibly a fit for pull expressions. Using a pull expression ( http://docs.datomic.com/query.html#pull-expressions )doesn’t provide the same separation of concerns for the select and project portions that @stuartsierra mentions, but it’s a good fit if you want all the data (or a particular kind of data) for the entities in question and don’t plan on otherwise repurposing that query.#2016-04-0818:50bkamphausThe reason to potentially prefer entity in @sdegutis case over pull would be laziness. I.e. 
the entities I retrieve from the query are starting points from which I will traverse an entity graph.#2016-04-0818:53bkamphausOf course you can do targeted traversals that return data structures with pull and recursive specifications, etc. Just the data vs. an API. simple_smile Coupled with the other usual concern here about keeping the things a query does simple and composable vs. expressing as much as you can in a single declarative query.#2016-04-0819:09sdegutis@stuartsierra, @bkamphaus, @harold: Excellent points, thanks very much all.#2016-04-0819:35haroldExcellent elucidation Ben, lazily pulling with d/entity could be a huge advantage (assuming you're in a situation where you don't consume the entire sequence - no sort, etc...).#2016-04-0821:15randytQuick question for the listeners:#2016-04-0821:16randytJust trying to get my head around a migration from SQL Server to Datomic.#2016-04-0821:16randytLooking at onyx-etl as one possible transition tool.#2016-04-0821:16randytExamples for onyx-etl seem to suggest a table-by-table migration...#2016-04-0821:17randytIf transitioning this db to Datomic, my initial assumption is that they schema would likely be very different from the relational manifestation of this data.#2016-04-0821:18randytanyone know if I can migrate from the SQL Server to a schema that might span multiple tables in the SQL Server?#2016-04-0901:16bkamphaus@randyt it's hard to tackle that question generically, but I think the basic point would be that yes, multi-table data can be migrated to Datomic usually and depending on e.g. 
how the tables are typically joined to answer questions, a Datomic schema may be a better fit, especially if it involves implied graphs, card many values, varying presence/absence of fields, etc.#2016-04-0901:18bkamphausI doubt there will be a generic ETL tool you can grab if the best fit for your data is a model fundamentally different in some ways than the relational/table model.#2016-04-0917:27randyt@bkamphaus: thanks for the feedback#2016-04-0920:15mjhamrickIf you know that you are transacting to only one entity, (only one map literal), what is the best way to get the entity id you transacted to from the transaction future. I was doing this via
(defn- get-e-id-for-transaction
  [transaction-future]
  ((comp first vals :tempids)
   @transaction-future))
But I realized that only works if you are transacting to a new entity since there won't be tempids in that case.#2016-04-0920:29hiredmanhttp://docs.datomic.com/clojure/#C03RZMDSH.api/transact says :tx-data cotnains the datoms produced by a transaction, so maybe mapping first over that?#2016-04-0920:30hiredmanoh, right, the operation type, so not first, but you may be able to get entity ids there#2016-04-0920:30mjhamrickThanks, I'll give that a shot.#2016-04-0920:35mjhamrickGot something working. Thanks @hiredman
(-> test-db-name
    d/connect
    (d/transact [{:db/id #db/id[:db.part/user]
                  ;; transact something here
                  }])
    deref
    :tx-data
    second ;; This used to say first, but the first datom is the transaction
    .e)
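Another route here is d/resolve-tempid: keep a handle on the tempid you put into the transaction map and resolve it against the transaction result. A sketch, assuming a live connection `conn` and a hypothetical :user/name attribute:

```clojure
;; Sketch: resolve a known tempid instead of fishing datoms out of
;; :tx-data. `conn` is a live connection; :user/name is a made-up
;; attribute for illustration.
(let [tempid (d/tempid :db.part/user)
      result @(d/transact conn [{:db/id tempid
                                 :user/name "example"}])]
  ;; map the tempid to the real entity id in the resulting db value
  (d/resolve-tempid (:db-after result) (:tempids result) tempid))
```

This also works when the transaction touches several entities, since each tempid resolves independently.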
#2016-04-1101:23arthur.boyerDoes anyone know if there’s a way that I can wrap a Datomic connection and log all the calls that get made to it?
I’m working with some inherited spaghetti code and it’s not unusual for me to have no idea when certain kinds of entities are created or updated. My hope is that I can log the events I care about so I can work out what code is responsible. Ideally I’d get a stack trace at those times.#2016-04-1101:28adamkowalskicould you make a core async channel which you use to talk to datomic? so rather then calling something directly you put it into the channel which will then do the actual call. then the channel could do arbitrary things, like maybe log something to a file#2016-04-1101:43arthur.boyerThe way this thing is built there are a whole bunch of auto-magic macros that wire up functions and local mutable state into a graph. I have control of the place where the connection comes from, but the other functions are littered across the code base, so it is currently impractical to refactor. As such I don’t think a channel will help. Thanks though.#2016-04-1103:10arthur.boyerIt looks like I can probably use the [transaction report queue](http://blog.datomic.com/2013/10/the-transaction-report-queue.html) for my needs.#2016-04-1103:11arthur.boyer@adamkowalski: I found a promising lead that uses core.async here: https://github.com/thegeez/gin/blob/master/src/gin/system/database_datomic.clj#L127
Thanks for the suggestion.#2016-04-1117:12sdegutisIs there a simple way to run a query that matches a given string against one of a few attributes? For example, something like this pseudocode: :where [?user :user/email ?string] :or [?user :user/name ?string]#2016-04-1117:12sdegutisIs this possible?#2016-04-1117:14stuartsierraQuery syntax supports or — check the docs for the correct syntax#2016-04-1117:16sdegutisOh wait, I remember why this was more complicated...#2016-04-1117:16sdegutisIt was because I'm using clojure.string/includes? to check for a substring of any of these. Which exploded the query into a bunch of temporary variables which I then check with or, and needed to bind with or-join.#2016-04-1117:18sdegutisThat explosion of a query is what I'm trying to avoid. Is that possible?#2016-04-1117:19sdegutisHmm. I'll experiment with it a bit more. I think I can get it simpler.#2016-04-1117:22zentropeI had to use rules to get that working.#2016-04-1117:23zentropeOr lets you test different values of the same attribute, but if you want to (say) search for a string among a few attributes, the rule stuff was what worked for me.#2016-04-1117:24sdegutis@zentrope: Thanks, that sounds promising, will look further into rules.#2016-04-1117:26zentropeHere was my in-a-pinch solution: https://gist.github.com/zentrope/aea55ff520da2fad85837c4ce6514222#2016-04-1117:30sdegutis@zentrope: Ahh, I see why that works. Per the docs: "Rules with multiple definitions will evaluate them as different logical paths to the same conclusion (i.e. logical OR)."#2016-04-1117:30sdegutisBeautiful, sounds like exactly what I need.#2016-04-1120:47zaneAnyone have any experience with Adi? https://github.com/zcaudate/adi#2016-04-1121:30dominicm@zane: @zcaudate hangs out mostly on his gitter simple_smile https://gitter.im/zcaudate/adi answers loads of questions there#2016-04-1121:31zane@dominicm: Thanks for the heads up! I don't have a specific question. 
I was just wondering what others' experiences with it have been like.#2016-04-1121:32dominicm@zane: Positive. I really like it.#2016-04-1121:32dominicmHe has a lot of great ideas about clojure in general, and his libs reflect that#2016-04-1121:35zaneSeems like I have a lot of reading to do.#2016-04-1121:54dominicmCheck out hara.event, it's a personal favourite#2016-04-1207:28gurdasAny idea what i’m doing wrong with this pull query?
(d/q `[:find [(pull ?person [:odata/first_name :odata/last_name])]
       :where [?person :odata/ssn "343-34-3434"]]
     (d/db conn))
Can’t seem to get anything other than a Argument [:odata/first_name :odata/last_name] in :find is not a variable exception being thrown#2016-04-1208:56dominicmI'm not sure, but is it possible you mean :find (pull ?person...#2016-04-1209:03gurdasGetting the same exception when omitting the wrapping vector#2016-04-1211:49dominicm@gurdas: I'm not sure if this is it either, but I tend to use the ' not the ` when writing datomic queries, I can never remember the distinction, but it may have some effect.#2016-04-1211:52dominicm@gurdas: yeah, just tried locally, that looks like it's the problem.#2016-04-1212:22stuartsierraSyntax-quote (backquote) is used when you want to substitute in some values, it will automatically namespace-qualify any bare symbol, making it impractical for Datomic queries.#2016-04-1212:28dominicm@stuartsierra: I'm guessing that pull was becoming some.namespace/pull then? and therefore, it was looking for variables to pass in, not values?#2016-04-1212:28stuartsierrapossibly#2016-04-1212:29dominicmI guess pull is a "special form" if that is true.#2016-04-1212:29stuartsierra“special form” generally means symbols that are built-in to the Clojure compiler, so nothing to do with Datomic really.#2016-04-1212:30dominicmA special form within the datomic datalog query syntax. I might be wrong, I don't understand datomic's rules engine, and all the custom functions you can implement.#2016-04-1213:10stuartsierrayes, pull is something Datomic's datalog parser recognizes.#2016-04-1213:17bkamphausJust jumped in here right after typing this up: https://stackoverflow.com/a/36574439/3801886 — that answer has the link you’re looking for, to the grammar ( http://docs.datomic.com/query.html#grammar ) and pull expressions ( http://docs.datomic.com/query.html#pull-expressions ).#2016-04-1215:16caspercI need to implement limit and paging functionality on resultsets that can potentially be in the millions. 
Is there a recommended way of doing this, since it is not directly supported by datomic?
The best thing I can come up with is doing the query finding only entity ids, doing the limiting and then doing pulls on the limited resultset.#2016-04-1215:17caspercIs there any better way or will this actually perform well even if the query matches alot of entities?#2016-04-1215:22stuartsierra@casperc: That's a good start.#2016-04-1215:24stuartsierraAnother possibility is to start with iteration, using d/datoms or d/index-range, get batches of candidate entities, then use queries to filter them.#2016-04-1215:25stuartsierraStop as soon as you have a “page” full of results.#2016-04-1215:25casperc@stuartsierra: Hmm, and use the list of entity ids as input for the query?#2016-04-1215:26stuartsierrayes#2016-04-1215:26stuartsierraBut I would try the straightforward query approach first.#2016-04-1215:26caspercThanks, I’ll do that#2016-04-1215:27caspercI would hope that limit and/or paging is added as a built in functionality eventually though. Seems a bit strange to have to implement it yourself tbh.#2016-04-1215:29bkamphaus+1 to @stuartsierra ’s suggestions. A lot of this depends on the shape of your query. I.e. if it’s equivalent to filtering by Attribute or Attribute + Value and you want to get the same attributes from each entity, you could use d/datoms or d/index-range filtered to match the queries pretty easily and then map a d/pull for attributes. 
But definitely stick with query unless you’re sure it is/will be a performance bottleneck as it’s the simplest approach.#2016-04-1215:30bkamphausUnderstood re; limit/offset/order by equivalents, it’s a feature request under consideration.#2016-04-1215:31casperc+1 to that feature request from here then 😉#2016-04-1215:32bkamphausThere are some obvious advantages to expressing those in declarative form, but do note that the work for it (perf cost) will still be done in the peer given Datomic’s architecture.#2016-04-1215:36caspercI understand, but it is natural to compare features with other databases and through that lens it seems like an oversight.#2016-04-1215:36casperc(though it is still completely doable)#2016-04-1215:38caspercAnother way to address it would be to add a section on how to do paging/limits to the best practices section of the docs. This way it would at least be addressed, which I haven’t seen anywhere currently.#2016-04-1215:41bkamphausYep, like I said we’re investigating adding it 😉 It does run into the difference between “get everything done in one query” vs “use queries and other datomic api to build composable pieces of your application” approaches, which runs into the difference of Datomic being a db in your app vs. having roundtrip pressure inform a lot of design decisions. I definitely understand the request and that there’s potential value, just trying to point to some of the context that informs how features are prioritized.#2016-04-1215:43dominicm@bkamphaus: Just so I'm sure that I understanding correctly, are you saying that any implementation of limit/offset/sort would be similar/the same as what Stuart suggested for this problem?#2016-04-1215:49bkamphaus@dominicm: nope simple_smile not offering any precise comments specifically on what the implementation would look like. 
Just that it would necessarily involve sorting query results on the peer/in the app (because query work isn’t done on some server somewhere).#2016-04-1215:53bkamphausObviously you can skip a sorting step if you are mapping pull or entity on a seq from datoms when you can guarantee that you’re (1) filtering by attr or attr + value as your query, and (2) you’re ordering by value (3) you have the index set for the attr and (4) the sort order of datoms aligns with the sort order you want precisely, etc.#2016-04-1215:55dominicm@bkamphaus: ah, so it would be possible for the datomic peer will be smart about things like sorting then. That would make it a worthwhile feature to implement.#2016-04-1215:58stuartsierraYeah, right now Datomic doesn't have any built-in support for “secondary indexes,” another oft-requested feature. That would probably make it easier to support efficient limit/offset. But you can create your own secondary index by adding another attribute with a computed value (e.g. “Join Date, Lastname, Firstname”) if you need to maximize the efficiency of iterating over that specific thing.#2016-04-1215:59stuartsierraStill though, start with simple queries and see if performance is adequate first.#2016-04-1216:10dominicmDatomic is pretty good for paging now that I think about it. If you've ever used reddit, you'll notice that on page2+ they include an "after" as part of the query params, this is a reference to the last posting on the previous page. This obviously doesn't work perfectly if the last posting has changed position, but because datomic's database is a value, you would just have a "at" parameter for the time the page2 should be checked at.#2016-04-1216:11dominicmYou might want to get the "current" value of those entities, for changes to upvotes or deletions (spam). That would also be pretty easy in datomic. 
I like it.#2016-04-1218:01gurdas@stuartsierra: @dominicm Thanks for the clarification; didn’t know about the differences between backtick and single quote when it came to namespace resolution#2016-04-1303:12lauri@casperc: maybe this is helpful - @jonas talks in this presentation how he implemented pagination with Datomic:
https://www.youtube.com/watch?v=hGWovJGbKJk&t=18m45s#2016-04-1309:18a.espolovHello
Previously we used PostgreSQL as storage for Datomic.
Does Datomic support Couchbase Community Edition?#2016-04-1319:50hiredmanI am trying to run a (datomic-free) transactor on one host, with a peer on that same host (loading in data) and a peer on another host (where I am trying to explore the data). The connection uri I am using seems to work fine on the same host as the transactor (not using the loopback ip), but the same uri results in a NoRouteToHostException (seems to be coming from h2) on the peer not on the same host as the transactor#2016-04-1319:53hiredman
#2016-04-1319:54hiredmanthat sort of looks like it is trying to use host+port as just the host, but I dunno#2016-04-1319:58hiredmanoh, looking at netstat it looks like h2 database is being started listening locally, regardless of 'host' setting in the properties file of the transactor#2016-04-1320:10hiredmanoof, my network must be screwed up, because in fact, the thing I should have checked first (telneting to the host and ip) is also throwing the same error, even though I am sshed in to the host, and ping is working fine#2016-04-1320:24hiredmannope, just a combination of firewall rules I didn't know I had, and tools saying "no route to host" for any kind of connection error#2016-04-1420:12jonathan.langenshi#2016-04-1622:12amashiI know this is a very broad and ill-defined question (I'm just starting to understand the datomic model,) but I'm trying to get a very back-of-the-envelope idea of what "datomic doesn't have arbitrary write-scalability" means in practice.#2016-04-1622:14amashiI guess I'm curious, on an order of magnitude level, about what scenarios datomic can support, and where you would hit a wall with datomic even if you were willing to throw a lot of machine at it.#2016-04-1723:52francoiswirionWhat are people's approaches to modeling sum (aka union) types in datomic?
In Clojure you can have a single-key map, with the key as the type. For Datomic I thought of the following way:
Have attributes :my-sumtype/string and :my-sumtype/int.
To know which one it is, test each attribute for existence (this might be a problem if the type changed over time from e.g. int to string, since both would then exist; maybe retract the old value?)
Attach this sum entity to other entities via a ref.
Are there better ways?#2016-04-1813:09stuartsierra@francoiswirion: I've used that approach (two attributes) successfully#2016-04-1818:00leontalbot@madvas and I were wondering what is the cheapest way to host datomic? Say we have a hobby project, very small and hosted on heroku. I heard datomic free cannot be host on separated server. #2016-04-1818:02leontalbot(And heroku resets every 24 hours so we can't host datomic free there...)#2016-04-1818:02leontalbotMuch thanks in advance!#2016-04-1818:33leontalbotJust found https://github.com/colinrymer/docker-datomic-free/blob/master/README.md would you recommend it? Also found free hosting for a docker container here https://cloud.docker.com/ #2016-04-1818:42bvulpesleontalbot: you can try out my alpha hosted transactor service!#2016-04-1818:42bvulpeshttp://tx.survantjames.com#2016-04-1818:48leontalbotReally cool! cc @madvas#2016-04-1818:48leontalbotThanks @bvulpes!#2016-04-1819:01francoiswirionThanks @stuartsierra !#2016-04-1820:27jgdaveyI’m having a heckuva time trying to upgrade my peer to the latest release. I’ve added the necessary dependency for [com.amazonaws/aws-java-sdk-dynamodb “1.9.40”], but when the system is attempting to connect to the transactor, I get a java.lang.NoSuchMethodError: com.amazonaws.services.dynamodbv2.model.AttributeValue.getM()Ljava/util/Map;#2016-04-1820:29jgdaveyI previously was on 0.9.5327, trying to get to 0.9.5350#2016-04-1820:30bkamphaus@jgdavey: Not sure if version issues could be behind it, (1) we have tested and include/suggest version 1.9.39 (documented in the pom.xml provided scope in the datomic distribution), and it could be 1.9.40 introduces (or you have some other) dependency conflict, lein deps :tree might help you troubleshoot (that error looks like it could easily caused by a dependency conflict).#2016-04-1820:31jgdaveyI should have mentioned: 1.9.39 yields the same error. I’ll hunt more for dependency clashes.#2016-04-1820:36jgdaveyI found it. 
[clj-aws-s3] pulls in [com.amazonaws/aws-java-sdk], which is the whole she-bang.#2016-04-1820:37jgdaveyThanks for the help, @bkamphaus !#2016-04-1909:13caspercI have a question regarding transaction functions and which guarantees the database that is passed provides. Can I always be certain that the db value is “sync’ed”, meaning that it holds all the previously transacted values?#2016-04-1909:19pesterhazy@casperc: what do you mean?#2016-04-1909:22casperc@pesterhazy: Can you expand on which part is unclear? I am trying to find out if the db value in a transactor function is always up to date or can be lagging behind (like in a peer, unless you sync it)#2016-04-1909:22pesterhazyah now I get you#2016-04-1909:24pesterhazydidn't know that peers needed to sync actually, so maybe I'm not in the best position to answer this#2016-04-1909:24pesterhazybut: it would be very surprising if the db wasn't guaranteed to be up to date, because otherwise the transactor wouldn't be able to ensure consistency#2016-04-1909:25caspercI ask because I have a database function which runs during an import, which obviously puts a fair amount of pressure on the transactor, and it doesn’t seem like the db value is up to date. Sometimes I will get an “java.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity” for a lookup which has already been passed to the transactor. Retrying the transaction later will succeed.#2016-04-1909:27casperc@pesterhazy: I agree, which is why i find it surprising. I might be making some error in my import, but it would be nice to know the guarantees that are provided.#2016-04-1909:27caspercI’ll leave it here for when @bkamphaus wakes up#2016-04-1913:04stuartsierraTransactions execute serially. 
A transaction function, when called as part of a transaction, always has the most recent value of the database.#2016-04-1913:18bkamphaus@casperc — Re: "for a lookup which has already been passed to the transactor.” — and in that case the transaction has definitely succeeded (e.g. no retry) and for a t prior to the dependent transactions arrival? If you’re loading a transactor with a bulk async import from the peer, I would suspect the order transactions are being submitted on the peer may be behind this.#2016-04-1913:20casperc@bkamphaus: So just to be clear, the db value passed to a transaction function should be up to date e.g. it cannot lag behind as is possible with a peer?#2016-04-1913:22bkamphaus@casperc: yes, it’s guaranteed to be the most recent db value at the time the transaction arrives at the transactor. (as @stuartsierra mentions, guaranteed by the transactor serializing all transactions, which is inclusive of transaction function calls).#2016-04-1913:23caspercAh, sorry I missed stuarts response.#2016-04-1913:23stuartsierra@casperc If you are running an import job with many transactions, it may be the case that the transactions are not arriving at the Transactor in the order you expect.#2016-04-1913:24caspercYeah, that sounds to be the case simple_smile#2016-04-1913:25caspercMy problem is that several 100k transactions have not completed at the point i thought they were, but clearly the problem is on my end then.#2016-04-1913:26caspercThanks @stuartsierra and @bkamphaus#2016-04-2020:28currentoorI'm trying to do a simple join in a pull api call. And I get
:db.error/invalid-recur-limit Cannot interpret as a recursive pull
specification: :db/ident
{:db/error :db.error/invalid-recur-limit}
#2016-04-2020:28currentoorAny ideas what the cause might be?#2016-04-2020:29currentoor(d/pull db [{:data-source/response-type :db/ident}]
[:data-source/id data-source-id])
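[Editor's note: per the fix given in this thread, the value position in a map spec must itself be a pull pattern, so :db/ident needs to be wrapped in a vector. A corrected version of the call above:]

```clojure
;; corrected: the map-spec value is now a pattern ([:db/ident]),
;; so the pull parser no longer tries to read it as a recursion limit
(d/pull db
        [{:data-source/response-type [:db/ident]}]
        [:data-source/id data-source-id])
```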
#2016-04-2020:29bkamphaus@currentoor: :db/ident needs to be [:db/ident]#2016-04-2020:32currentoor@bkamphaus: oh right!#2016-04-2020:32currentoorthanks#2016-04-2020:32bkamphausMore specifically, the value position in a map-spec needs to be a pattern itself, so wrapped in a list or a vector. Since it can’t parse it as a pattern it tries to parse it as a recursion-limit, which fails. May seem overkill at times but I find stepping through the grammar helpful when troubleshooting patterns that don’t seem to be working: http://docs.datomic.com/pull.html#grammar#2016-04-2020:33currentoorgood idea#2016-04-2109:41dominicmIs there a way for the pull syntax to pull something up to the "top-level" of the returned entity? I have a cardinality one reference, and I would like one of it's keys to be included in the returned object. For now I have done an update/get combo, but I wonder if there was something in the pull syntax I hadn't understood, hiding this functionality#2016-04-2110:56gravIs there something like an UPDATE … WHERE … syntax for transactions? Currently we do a find and then generate the transactions on the client, but that takes a while.#2016-04-2110:58grav@dominicm: can’t you just do [:find (pull ?p [*]) (pull ?c [*]) :in $ :where [?p :rel ?c]] or am I misunderstanding something?#2016-04-2111:00dominicm@grav: The relations would be like {:ref {:ref/key :foobar}} I want to be able to just get {:ref/key}#2016-04-2112:16pesterhazy@grav, I think that's the normal way to do UPDATEs. How long does it take? Maybe there's a way to speed up the query.#2016-04-2113:17grav@pesterhazy it’s not the query that takes time, it’s the transactions.#2016-04-2113:17pesterhazyif so, doing it inside a transactor fn will probably not help, true?#2016-04-2113:19bkamphaus@dominicm: pull will return data in a standard way for entities and nesting/refs, etc. - it’s just normal Clojure data. 
If you want it to be in a different shape, you’ll have to manipulate the returned data - there’s no part of a pull specification that will e.g. flatten multiple entities into a single map.#2016-04-2113:21bkamphaus@grav: There shouldn’t be something intrinsic to transactions taking a while from the peer unless you’re e.g. calling transact instead of transact-async in parallel or looping transaction submission logic (not using async implies blocking on a round trip per submitted transaction).#2016-04-2113:23bkamphausyou don’t want to push that logic to the transactor in e.g. a transactor function as it will then run in serial if e.g. the logic that’s generating the transactions that’s the bottleneck. Of course, if you need to query the exact database state before the transaction goes in (i.e. ACID isolation) then inside a transaction function is the correct place for that logic.#2016-04-2113:24dominicm@bkamphaus: I thought not, just wanted to check. I wasn't sure how comprehensive the pull api was aiming to be.#2016-04-2119:33actsasgeekso I’ve used hashmaps as inputs where the logic variable was a key. For example, [(get $1 ?k) ?v] but I’m not sure how—or if it’s even possible—to get a value into a logic variable [(get $1 :id) ?v]. I understand why it doesn’t work but I can’t quite figure out if something else could work.#2016-04-2119:34actsasgeekI sometimes feel like I’m playing Jeopardy…can you put that expression in the form of a relation, Alex?#2016-04-2119:37actsasgeekI thought something like [(= (get $1 :id) ?v)] might work but it can no longer resolve $1.#2016-04-2119:46grav@bkamphaus: thanks, that cleared it up a bit. We are doing sync. transactions, but only for small amounts of data. This was for a migration, so it would make sense to do it async. 
Thanks!#2016-04-2119:52bvulpes> hashmaps as inputs where the logic variable was a key#2016-04-2119:52bvulpesneat!#2016-04-2120:34bkamphaus@actsasgeek: I think I’m gonna need you to step back to describe the use case and maybe see a full query example. simple_smile#2016-04-2121:13sdegutisIs there anything built into Datomic to allow case-insensitive substring matches on string attributes?#2016-04-2121:14sdegutisLike testing "foo" against a :user/email of " and ".#2016-04-2121:17bkamphaus@sdegutis: you can use regex stuff in a query if that’s what you mean. Example from a different use case here: https://stackoverflow.com/questions/32164131/parameterized-and-case-insensitive-query-in-datalog-datomic?answertab=oldest#tab-top#2016-04-2121:20sdegutis@bkamphaus: Thanks. I think fulltext might actually be what I'm looking for, still figuring this out.#2016-04-2121:23bkamphausfulltext makes sense if what you’re actually trying to match/search is compatible with Lucene’s defaults simple_smile Also worth noting that it’s the only aspect of Datomic that’s essentially eventually consistent (fulltext index updates in the background, not always guaranteed to reflect most recent transactions).#2016-04-2121:24sdegutis@bkamphaus: Oh, so :db/fulltext needs to be true on an attribute for it to even work?#2016-04-2121:24bkamphausNot trying to discourage use of fulltext, but I think it is worth commenting on. For limited text match/search use it’s fine.
For anything less trivial it suggests keeping text data you want to search that way outside of Datomic and pointing to it from there.#2016-04-2121:24sdegutisThat would explain why my query is turning up nothing.#2016-04-2121:24bkamphaus@sdegutis: yep#2016-04-2121:24sdegutisWelp.#2016-04-2121:25sdegutis@bkamphaus: From a high level, what technique would you recommend to search for users whose :user/name or :user/email or (comp :account/name :user/account) match a given substring case-insensitively?#2016-04-2121:26sdegutis@bkamphaus: I only provide those examples to demonstrate the scope of the kind of query I'm trying to make, i.e. that the thing I'm searching for (via case-insensitive substring) may not all be on the same attribute or even entity.#2016-04-2121:26sdegutisI have a solution in mind, but I'd like to know how you'd personally go about this, at a high-level.#2016-04-2121:30bkamphausI would probably start with something like re-matches in an or clause. I would reserve fulltext for cases where I had something like posts or tweets or short descriptions of something (i.e. paragraph or a few sentences) I needed to search.#2016-04-2121:31sdegutis@bkamphaus: Ah. My solution was going to be three separate d/q queries, and then to just distinct the results together.#2016-04-2121:31sdegutis@bkamphaus: But I like your idea, it may be quicker to run and easier to write.#2016-04-2121:32sdegutisThanks.#2016-04-2121:35bkamphausnothing wrong with composing the queries and I’d prefer composing separate queries if there’s a use case for checking to see if only one match applies. But if you always want to collapse those, or or a rule makes sense.#2016-04-2121:35sdegutisI guess my only worry was that all the variables in the query unify, but in this case that's not a problem.#2016-04-2121:39sdegutis@bkamphaus: Ah, I remember now why my solution wouldn't work. 
It's because my :user/account (not real attribute name) may be nil, and thus it would fail to match any ?users which didn't have one.#2016-04-2121:39sdegutisOn account of how a :where clause specifying an attribute inherently assumes that attribute exists.#2016-04-2121:40sdegutisOr rather, rejects entities which lack it.#2016-04-2121:41bkamphausThe clauses establishing e.g. ?u :user/account ?a and possibly ?a :account/name ?name would each need to be a different path in a rule or handled by an and in an or clause possibly for that use case.#2016-04-2121:41sdegutis@bkamphaus: right, and then the or would need a join, and it gets real messy real quick.#2016-04-2121:41sdegutisI wonder if it would just be cleaner and still run reasonably fast to just do three queries.#2016-04-2121:42sdegutisAlthough, rules (via %) might solve that like you suggest.#2016-04-2121:42sdegutisYeah, I'll experiment with rules.#2016-04-2121:43bkamphausthis is exactly the use case where I jump from or to rules personally simple_smile “Three different ways by which a particular condition is met”, especially when the or clause can get messy.#2016-04-2121:44sdegutisRight on.#2016-04-2121:45bkamphausha, using this example:
[[(social-media ?c)
[?c :community/type :community.type/twitter]]
[(social-media ?c)
[?c :community/type :community.type/facebook-page]]]
#2016-04-2121:46sdegutisAh, there's a typo in the example in the docs.#2016-04-2121:47sdegutisIt says In this rule, the name is "twitter", but the first line is actually [(twitter? ?c)#2016-04-2121:47bkamphausexcept you’ll have three (user-search ?string) rule heads, for user name, user email, or -> user/account account/name#2016-04-2121:48sdegutis@bkamphaus: which I'll probably just make into a function and refer to it as a fully qualified Clojure function in the query#2016-04-2121:48sdegutisAssuming that's not gonna be horribly slow.#2016-04-2121:50bkamphausall done at typical Clojure/JVM speed in the Peer/ your app. Nothing really faster inside/outside query (after your little bit of overhead for the query to be parsed.#2016-04-2121:50sdegutisGreat, just as I'd hoped.#2016-04-2121:51bkamphauswhy taking a union of relations from multiple queries isn’t a big deal typically, either, unless each part match benefits from the previous clauses restricting the number of datoms that are matched.#2016-04-2121:52sdegutis@bkamphaus: I was kind of figuring it would be quicker as one query because otherwise it has to enumerate all users thrice.#2016-04-2121:52sdegutisWhich may take a while, I dunno.#2016-04-2121:53sdegutisI try not to worry about performance until things get too slow.#2016-04-2121:55bkamphausyep, if each query enumerates all users and the rule w/three paths can operate on a one time numeration of those users that’s a performance win, but semantics and composability can trump performance if it’s not a bottleneck or requirement for that case.#2016-04-2121:56sdegutisRight on.#2016-04-2121:56sdegutisOkay, just fighting against errors now. But I think I've got what I need. Thanks!#2016-04-2121:57sdegutisAha! First case is working!#2016-04-2121:58sdegutisSweet, this works amazingly. 
Thanks @bkamphaus.#2016-04-2122:01sdegutis@bkamphaus: the only thing that would make this sweeter is being able to bind a function to a local name, so I don't have to put a fully qualified function (including namespace path) into the query.#2016-04-2122:02sdegutisSo instead of (myapp.a.b.c/matches? ?a ?b) I could bind that to matches? in the rule definition or something and then just use that.#2016-04-2122:02sdegutisBut maybe I'm overthinking it.#2016-04-2122:11actsasgeek@bkamphaus: I’m not sure why the use case is mysterious. 😛 It’s easy to include a vector of tuples as an input into a query. Suppose I have a vector of tuples, [[id email]], then I can do
[:find ?id ?name ?email :in $ [[?id ?email]] :where [?id :person/name ?name]]
Now what if I had {:id id :email email} instead…is it possible to use it directly or must the data be transformed to a relation?#2016-04-2122:14actsasgeekI could do (map (juxt :id :email) input-data), I suppose but I’d been able to use hashmaps directly in the past.#2016-04-2122:17bkamphausPossible to use? Yes. Simple to use? No. I think it’s probably simpler to transform to a collection of tuples/relations for binding inputs to a parameterized query and using those inputs in evident ways in clauses in the general case.#2016-04-2122:20actsasgeekok, thanks!#2016-04-2215:18firstclassfuncWhen loading data into datomic does one need to populate the entities associated with a reference before adding that reference to a new entity. for example. Do I have to populate :room/name before I reference it by another entity? {:my/name "myname", :my/room {:room/name "roomname"}}#2016-04-2215:26bkamphaus@firstclassfunc: you can use tempids to transact both sides of the ref at the same time. You can nest map forms in the tx if the ref is a component.#2016-04-2215:41firstclassfunc@bkamphaus: ok great tks.. I am using Tupelo so that generates the negative IDs for me..#2016-04-2218:12ethangracerhey all, i’m playing around with aggregates and built this query:
'{:find [(count ?b) (count ?a)]
:in [$ ?id1 ?id2]
:where [[?b :attr/three ?id2]
[?b :attr/two :attr.two/enum]
[?b :attr/one ?id1]
[?a :attr/two :attr.two/enum]
[?a :attr/one ?id1]]}
essentially I want to get two counts, the first is a count of all entities with the specified id and enum, and the second is a count of a proper subset of the first set of entities
however, right now these counts seem to be multiplied together. I.e. if (count? a) is 5 and (count ?b) is 10, then [(count? a) (count? b)] returns [[50] [50]]. anyone have some insight?#2016-04-2218:16bkamphaus@ethangracer: you're counting the result of the Cartesian product of ?a and ?b twice essentially. The short version is you'll need to split this into two queries.#2016-04-2218:17ethangracerI was hoping that wouldn’t be the answer. Why are they being combined? I don’t understand#2016-04-2218:17bkamphausOne query will aggregate over one set of tuples/relations with grouping implied by find.#2016-04-2218:18bkamphausAll a to each b, then count. All b to each a then count.#2016-04-2218:18ethangracerhuh, interesting#2016-04-2218:18ethangracerso the find specification binds the two counts together similar to a non-aggregate query#2016-04-2218:19ethangracerthat makes sense#2016-04-2218:19ethangracerthanks#2016-04-2220:34sdegutisIs it possible for a :db.cardinality/many to be ordered?#2016-04-2220:40bkamphausnot naively at rest. At present, defining ordering of values in Datomic implies building a linked list in your schema (node/value, node/next) or refs of (coll/first, coll/second), or having a second attr for the representation that’s a concatenated string (same strategy used for composite keys in some cases), etc.#2016-04-2220:40bkamphausand yes, we’re aware of and considering the features that resorting to such workarounds suggests simple_smile#2016-04-2220:41sdegutis@bkamphaus: I'm surprised you didn't mention something like :sortable/position, as I imagine that would be a common way to do this.#2016-04-2220:42bkamphausyep, a position attr that points to an enum is also an option.#2016-04-2220:43bkamphausor has a numeric value, this all of course assumes you’re writing ordering by logic outside of query/pull as you can’t indicate the equivalent of e.g. 
SQL “order by” (but that work is done in the peer anyways w/Datomic so it’s not implying a different perf cost per se)#2016-04-2220:47sdegutis@bkamphaus: right on#2016-04-2220:48sdegutis@bkamphaus: hmm, I don't understand those last two suggestions, the enum that might or might not have a numeric value, used for ordering#2016-04-2220:49bkamphausyou can just use e.g. an int for position if you provide a position attribute: 1,2,3, etc. I was overthinking it in mentioning an enum there.#2016-04-2220:49sdegutisah ok#2016-04-2220:50sdegutisI figure the easiest is to have any sortable entity just have :sortable/position which is a :db.type/long#2016-04-2220:50sdegutisBut maybe I'm thinking of this too Java-ish.#2016-04-2220:50bkamphausI think as I was imagining cases before (with the first suggestion) that might have to point to things that aren’t reified as entities, but maybe you don’t like the implicit sort order. I.e. a bunch of strings but you don’t want them sorted in lexicographical order.#2016-04-2220:50sdegutisThen you'd do (->> entities (sort-by :sortable/position))#2016-04-2220:50sdegutis@bkamphaus: ah yeah, I see now what you mean.#2016-04-2220:51bkamphausit does imply having to offset a bunch of values on update to insert in first/mid position though, or some other logical ordering for determining spacing and insert logic on noncontiguous integers, etc. Other solutions have the same problem though.#2016-04-2220:51sdegutis@bkamphaus: unfortunately, we do have an "ordered list of strings" on a given attribute, which we had to split out into their own entities referenced by :product/things, which each have :thing/value (string) and thing/sort-position (int).#2016-04-2220:52sdegutisrather than :product/things being a :db.cardinality/many of :db.type/string which I'd have preferred.#2016-04-2306:28tomjackhmm, the facts for a card/many are [e a v ...], where v is a single value#2016-04-2306:29tomjackI would understand [e a vs ...] 
for a :db.type/listOfString (or something..)#2016-04-2312:34bkamphaus@tomjack it stores multiple datoms of that shape (remember Datomic has a universal schema), but it does return a collection e.g. In pull or entity. That universal schema is what enables positional binding to be consistent for every clause in query, among other things.#2016-04-2312:35bkamphausAn ordered list of strings (or any type of data) is essentially a feature request, and one with several applications (including composite keys), but it's a different semantic from card many.#2016-04-2312:57isaacHow can I apply missing? to backwards attribute?
'[:find ?e
:where
[?e :location/code]
[(missing? $ ?e :location/_children)]]
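[Editor's note: whatever missing? does with a reverse attribute, "no parent" can be expressed directly with not-join against the forward attribute. A sketch, assuming the forward attribute is :location/children:]

```clojure
;; find entities with :location/code that no other entity points to
;; via :location/children, i.e. the roots
'[:find ?e
  :where
  [?e :location/code]
  (not-join [?e]
    [_ :location/children ?e])]
```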
I want to find some root entities that has no parent(`_children`)#2016-04-2319:37noziarDoes anyone know what the backupPaceMsec property means exactly? The doc says "setting this value to an integer will cause backup to pause that many milliseconds between backup operations", but I'm not sure what a "backup operation" is. Is it querying a single segment, multiple segments, the whole table, or is it something else entirely? I'm using a MySQL backend.#2016-04-2320:18bkamphaus@noziar: it’s paced per segment copied. Most significant with e.g. DynamoDB where it’s otherwise really simple for backup/restore to exceed provisioning when it runs at full tilt. If your storage can’t keep up with the copy/write volume, that’s when you look at upping that settings.#2016-04-2320:19noziar@bkamphaus: thanks for the answer! That does indeed correspond to my observations#2016-04-2513:58pheuterIs it considered idiomatic to sort Datomic result sets outside of the query? For larger result sets, is it more performant to use Datomic functions?#2016-04-2513:58pheuterUpon an initial read through the docs, it doesn’t seem like there’s a SQL order by equivalent.#2016-04-2514:00bkamphaus@pheuter: that’s correct, there’s not an order by equivalent at present and it’s necessary to sort outside query. If the default sort of an indexed attribute (in :avet) works for your use case, datoms/`seek-datoms`/`index-range` are tools you can use to lazily iterate/page through facts.#2016-04-2514:02pheuterbkamphaus: thanks! that’s good to know, index-range looks like what I was looking for.#2016-04-2514:02pheuteri wonder how expensive avet indexes are#2016-04-2514:05bkamphausIt’s not really expensive to create/maintain. I’d just turn it on for anything I ever suspected I’d want to look something up by simple_smile The cost incurred by indexing is fairly trivial, and storage, well, usually really cheap given the size of most Datomic dbs. 
We’ve considered just having all attributes indexed by default but thus far haven’t done so. (the only thing you want to avoid indexing is anything overly blob/document like).#2016-04-2514:06pheutermakes sense, good to know nothing funky going on with avet#2016-04-2514:06pheuternot being turned on by default spooked me a little bit#2016-04-2514:24bkamphausDatomic 0.9.5359 is now available https://groups.google.com/d/msg/datomic/P9QtPhzswPY/e4r_Fn_LBAAJ#2016-04-2514:54caspercI am having some trouble with a transaction where d/with and d/transact give different answers. Am I correct in thinking this should really not ever happen?#2016-04-2514:55caspercThe discrepancy between d/with and d/transact I mean#2016-04-2514:57pheutertransact takes a connection I believe and applies the tx-data to the latest version of the database whereas with takes a particular database value which may not be the latest one.#2016-04-2514:58pheuteralso, applying transactions using with will not affect the source database passed in#2016-04-2514:59bkamphaus@casperc: as @pheuter mentions there could be a time discrepancy between what occurs over the conn since you don’t transact to a db value directly. Another case could be using with on a database with e.g. an as-of filter. The as-of filter will prevent the resulting database from seeing the data added by with (it’s after the as-of-t)#2016-04-2515:03caspercYes, I should mention that the database is not being written to, so the db is not changing and should be up to date. I am using a transaction function to remove values containing lookup-refs if the entity that they reference doesn’t exist. d/with correctly removes the lookup-refs but when transacting the same thing the transaction fails due to the lookup ref not being removed.
So something is going on with a function which is not acting the same when run on the transactor compared to in the peer using d/with.#2016-04-2515:07bkamphaus@casperc: with a transaction function, there are definitely some possible differences, the primary being how arguments to the transactor function are serialized. If you’re using Clojure specific collection logic on args the java level interface behavior is preserved in parameters passed over the wire (i.e. they’re java.util.Collections, etc.) it may be if you’re checking to see if something is an instance of a vector, for example, with the transactor function run in the peer (when the arguments don’t go over the wire) it evaluates true, but on the transactor it evaluates false.#2016-04-2515:07caspercOh, ouch#2016-04-2515:08caspercThat would explain it. I am indeed checking if something is an vector to see if it is a lookup ref.#2016-04-2515:10casperc@bkamphaus: How would you recommend that I check if something is a lookup ref?#2016-04-2515:10caspercThis is my current check: (and (vector? v) (= 2 (count v)) (keyword? (first v)) (= :db.type/ref (:db/valueType (d/entity db k))))
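[Editor's note: given bkamphaus's point that transaction-function arguments arrive on the transactor as java.util collections rather than Clojure vectors (so vector? can return false there), a sketch of a check that behaves the same in both places; k and v are named as in the thread, and datomic.api is assumed aliased as d. Clojure vectors implement java.util.List, so this also passes on the peer:]

```clojure
;; lookup-ref test that works on both the peer and the transactor:
;; k is the attribute keyword, v the candidate value
(defn lookup-ref? [db k v]
  (and (instance? java.util.List v)
       (= 2 (count v))
       (keyword? (first v))
       (= :db.type/ref (:db/valueType (d/entity db k)))))
```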
#2016-04-2515:11caspercWhere v is the value and k is the attribute.#2016-04-2515:13bkamphausI’m not sure from the use case if I’d use a transaction function? Is there a race by peers to write something first? If it’s unique/identity and what you want is “write a new entity if this doesn’t exist, otherwise add/change the existing entity”, that’s basically how upsert with unique/identity behaves.#2016-04-2515:15bkamphausFor the other use case of just removing transactions that don’t match to anything, if there’s not an expectation that peers are racing to create, etc. it might be a use case for running that processing on the peer and then just submitting the cleaned transaction.#2016-04-2515:18bkamphaus{:db/id (d/tempid :db.part/user)
:unique/id “myId"
:some/fact “some value”}
{:db/id [:unique/id “myId”]
:some/fact “some value”}
If :unique/id “myId” entity exists and it’s unique identity upsert will mean those two transactions behave the same. Under those same assumptions it will create a new one if it doesn’t exist with the first form, and fail with the second form.#2016-04-2515:20caspercWell I might get by using d/sync and that is probably the route I will end up taking, but there are multiple threads transacting at the same time which can cause a race condition for my specific case.#2016-04-2515:21bkamphausRight, which does imply handling it in a transaction function, I just wasn’t sure about the behavior for handling that you want. I.e. if the lookup ref fails just drop those datoms on the floor? Or create it instead of using the lookup ref? Which points as relying on upsert behavior instead.#2016-04-2515:22caspercAh, if it doesn’t exist, I drop the datom with the lookup ref.#2016-04-2515:23bkamphausand no retry of that fact later, etc. or reporting that that fact was invalid anywhere?#2016-04-2515:24caspercCurrently our dataset is incomplete, which is why I drop the refs that don’t exist.#2016-04-2515:24caspercLater it will result in a failed transaction.#2016-04-2515:26caspercI can tell that you have some amount of aversion to me using transaction functions though, so I’ll take that into account 😉#2016-04-2515:27bkamphausIf it’s a temporary solution, I would probably just invoke the invalid lookup ref dropping logic on the peer when the transaction is built (prior to it being submitted). But it’s an odd use case to me, because the precision/isolation guarantee seems somewhat arbitrary. 
I do get that it’s not going to be the production logic in the end.#2016-04-2515:29bkamphausThe aversion is that transactor functions that do things like walk all the transaction data to clean them can end up being huge performance bottlenecks simple_smile I just always try to push back to see if there’s an optimistic concurrency strategy that makes sense peer side, or if the time-based logic is not really about serialized, atomic updates of entities but just about preferring the time by coincidence that shows up on the transactor vs. the peer.#2016-04-2515:31caspercPoint taken simple_smile And honestly, a d/sync would probably do the trick but since I was seeing a difference between d/with and d/transact I was wondering why.#2016-04-2515:32caspercSeems like a bit of a gotcha that should probably be documented if it isn’t simple_smile
You have to consider how much you want to trust your users — Datomic's datalog queries allow arbitrary code execution.#2016-04-2612:18stuartsierraUser-defined attributes are pretty safe, as long as the names don't clash.#2016-04-2612:19stuartsierraAlso, keep in mind that all idents are kept in memory all the time.#2016-04-2612:19conawCan you have the user defined attribute stored as an identity — and map it to some generated attribute name — that way the user doesn’t deterimine what the actual query is?#2016-04-2612:19conawsorry, stored as an entity#2016-04-2612:19stuartsierraAttributes are entities already.#2016-04-2612:21conawso presumably you could create a new entity when the user is creating some custom relationship, 'user/custom-attr and then generate a unique attribute based on that, which would be the actual entity#2016-04-2612:23conawthanks @stuartsierra#2016-04-2612:27conawI may not have explained that clearly, but the idea is that you limit what queries a user can actually run, and avoid the risk of conflicting attribute names, by having some intermediary entity sitting between what the user thinks the attribute is called and what the actual attribute is that ties an entity to a value#2016-04-2617:56bvulpes(d/q '[:find (count ?e) :in $ :where ... ] db)) returns nil if there are no entities matching, and not zero. am i misusing count?#2016-04-2618:05bkamphaus@bvulpes: at present, aggregates return nil instead of 0 in cases where there are no matched datoms. We are considering requests around changing this behavior, but for now you'll need to check for nil case manually if you want to return 0. Note this has to do with aggregates in general so you can't e.g. 
use a workaround with a custom aggregate with that same query.#2016-04-2618:31bvulpes@bkamphaus: thank you for the clarification!#2016-04-2619:50p.brc@bkamphaus: I managed to reproduce the issue mentioned a while ago in this channel: datomic seems to ignore the data-dir setting for indexing jobs when setting it via the property file. I have put together what I think is a minimal example that reproduces it here: https://github.com/pebrc/datomic-datadir-issue#2016-04-2620:18bkamphaus@p.brc: thanks for putting together the repro and report, I’ll look into it.#2016-04-2621:54currentoorIf I want to store blobs of data as bytes, how big is too big?#2016-04-2621:55currentoorIs under 1MB (when serialized as edn) acceptable?#2016-04-2621:55stuartsierrarecommended max value size is 1 KB#2016-04-2621:55currentoorthank you#2016-04-2622:03therabidbanana@stuartsierra: what's the reasoning behind that recommended size? Are there instances where it might be safe to go over? If you can't store blobs bigger than 1kb, any recommended approaches for how to handle it separately?#2016-04-2622:04stuartsierraThe way Datomic stores values in its indexes. Index segments in storage are 40-60 KB in size. Large values would bloat the segment sizes.#2016-04-2622:05stuartsierraYou can always go over, it's not a hard limit, but performance will degrade with a large quantity of large blob values.#2016-04-2622:05stuartsierraInstead, use a separate blob store and keep just metadata in Datomic.#2016-04-2622:07therabidbananaI see - so even if we don't want any indexes on the blob values, they still can bloat the indexes?#2016-04-2622:07stuartsierrayes, the values still have to be stored. 
All values are stored in indexes.#2016-04-2622:08bkamphauseven if avet/indexed isn’t turned on, eavt, aevt are covering indexes of segments for all values in Datomic.#2016-04-2622:09therabidbananaI see - thanks for the additional details#2016-04-2622:15therabidbananaAre there any recommendations for ways to store larger blobs that integrate well with datomic holding the metadata? Essentially we just need a key-value DB that's easy to query and join onto datomic data I guess?#2016-04-2622:16stuartsierrayep#2016-04-2622:46bvulpes@therabidbanana: for maximum ops simplification, use a postgres data store and a field of blobs in there#2016-04-2622:46bvulpesthen you'll have datomic_kvs and therabidbananas_blobs#2016-04-2622:47bvulpesuuid up and you're done#2016-04-2622:47therabidbananaThat's basically what we're thinking of doing, though we had planned on using Cassandra as the datomic datastore (it's what we've used in another project)#2016-04-2622:48bkamphausYeah, it can be reasonable to just use the underlying storage, or on aws s3 might be preferred over dynamo. Also if you’re transacting a lot of blob data to the underlying storage it could impact performance, so take the volume you expect to be transacting into consideration.#2016-04-2622:49therabidbananaI had heard that Postgres was not a preferable storage backend for Datomic if we can avoid it - not sure if that's still the case?#2016-04-2622:50bkamphaus@therabidbanana: not preferable for what reasons? The storage choice is really contingent on your familiarity and use case.#2016-04-2622:51bvulpes@therabidbanana: i asked some cognitect staff about data store selection criteria and the answer was "budget first, then familiarity. if you can swing the price of DDB, use that."#2016-04-2622:52bvulpesto that end, i use pg as a datastore. 
granted, the stuff i do is pretty low-throughput, so i've not yet hit performance ceilings or the like, but provisioned reads and writes on ddb get priiiiicey quickly.#2016-04-2622:52bvulpesplus, mother postgres can do no wrong!#2016-04-2622:58therabidbanana@bkamphaus: heard it from @currentoor, so not sure of the exact details - but apparently someone at Cognitect advised against it since we also had familiarity with Cassandra? Maybe that was because our other use case was more likely to have high amounts of writes though.#2016-04-2622:59bkamphausScaling up to high throughput would point to Dynamo, Cassandra, etc. yeah. If you’re familiar with Cassandra then it’s less risky to take on.#2016-04-2622:59currentooryeah I believe we were told cassandra or dynamoDB is preferred because of write scalability, @marshall I believe you mentioned this on the phone#2016-04-2623:00therabidbananaThis database, especially without blobs being stored in it, is much less high-throughput on the write side than our other use case though#2016-04-2623:00bkamphausYep, given an expectation to scale writes that’s reasonable. Just want to clarify that’s only indicated by scale and not a generic recommendation to avoid Postgres by any means.#2016-04-2623:01therabidbananaMaybe we could swing consolidating in Postgres - we're planning on using it as a store for Quartzite (http://clojurequartz.info/articles/guides.html) already anyway.#2016-04-2623:19ambroisehi
I’m trying to get a transactor running in a Docker container, linking it to a mysql database that I am running locally.
Things seem fine when I docker run my image: in mysql, when I query, I get
mysql> select * from datomic_kvs;
| id | rev | map | val |
| pod-coord | 9 | {:key "[\"192.168.1.95\" nil 3306 \"QfiEuYt70Bt3Qy7JVPuuW47I4uLze8+jKUCAcrrXCAI=\" \"uIi+Qhy2RQPs8JHqb6pChvuEWoQTeK0S26hPDrjlcNM=\" 1461711438104 \"0.9.5350\" true 2]"} | NULL |
1 row in set (0.00 sec)
in the repl, I then try to create a database, running
(datomic.api/create-database "datomic:)
and i get
HornetQNotConnectedException HQ119007: Cannot connect to server(s). Tried with all available servers. org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:906)
ERROR org.hornetq.core.client - HQ214016: Failed to create netty connection
java.nio.channels.ClosedChannelException: null
at org.jboss.netty.handler.ssl.SslHandler.channelDisconnected(SslHandler.java:649) ~[netty-3.6.7.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102) ~[netty-3.6.7.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.6.7.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) ~[netty-3.6.7.Final.jar:na]
at org.jboss.netty.channel.Channels.fireChannelDisconnected(Channels.java:396) ~[netty-3.6.7.Final.jar:na]
at org.jboss.netty.channel.socket.oio.AbstractOioWorker.close(AbstractOioWorker.java:229) ~[netty-3.6.7.Final.jar:na]
at org.jboss.netty.channel.socket.oio.AbstractOioWorker.run(AbstractOioWorker.java:104) ~[netty-3.6.7.Final.jar:na]
at org.jboss.netty.channel.socket.oio.OioWorker.run(OioWorker.java:51) ~[netty-3.6.7.Final.jar:na]
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[netty-3.6.7.Final.jar:na]
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) ~[netty-3.6.7.Final.jar:na]
at org.jboss.netty.util.VirtualExecutorService$ChildExecutorRunnable.run(VirtualExecutorService.java:175) ~[netty-3.6.7.Final.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_74]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_74]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_74]
Any ideas on where I am missing something?
Also subsidiary question, is the Peer connecting to the transactor or is it the other way around?
thanks a lot for any help!#2016-04-2623:23bvulpes@ambroise: peer connects to transactor#2016-04-2623:23bvulpesensure that you have a clear network path to the transactor from wherever the repl is#2016-04-2623:26ambroisehi @bvulpes, thanks for your help. I’m actually not sure how the repl (peer) gets the network path to the transactor. Is it supposed to be in the mysql database?#2016-04-2623:26bvulpesprecisely#2016-04-2623:28ambroiseright. because right now, in the mysql db, I have 192.168.1.95 and 3306, which points to the database (and not the docker container)#2016-04-2623:28bvulpesodd#2016-04-2623:28bvulpessounds like you have host=<mysqlhost> in your transactor.properties#2016-04-2623:29ambroiseaaah#2016-04-2623:29bvulpesgimme a cookie#2016-04-2623:29bvulpesparrot#2016-04-2623:29bvulpes(i have spent hours automating transactor.properties lately is all)#2016-04-2623:30ambroisethanks a lot!#2016-04-2623:32bvulpesanytime#2016-04-2623:32kenbiercan reverse attribute navigation be used in the :db.cardinality/many situation? something like :foo/_bar [vector-of-lookups-refs], when creating a new foo entity.#2016-04-2712:29christianromney@ambroise: check out https://github.com/pointslope/docker-datomic-example#2016-04-2712:30christianromneyneeds to be updated (i’ll try to get to that soon), but serves as a decent starting point#2016-04-2716:01ambroise@christianromney: thanks a lot. I did use it, the tricky part was the alt-host. This link helped a lot as well: http://datomic.narkive.com/7OyGDp2N/peer-cannot-connect-to-dockerized-transactor#2016-04-2721:29christianromney@ambroise: awesome, glad you got set up. yes, alt-host is the magic sauce#2016-04-2808:16rauhFirst steps with datomic: WhenI enter or click anywhere in datomic console it just reloads the entire page. Any clues?#2016-04-2819:55bkamphaus@rauh: it does that with various browsers? 
with/without plugins disabled, etc.?#2016-04-2821:05ethangracerhey all, anyone have tips / methodologies / links to resources about debugging queries? I’m talking both about debugging queries that throw errors, and debugging queries that return data in an unexpected way#2016-04-2821:05ethangracerthe only way i’ve found so far is just to play with where clauses#2016-04-2821:06ethangraceris there any way to see how datomic is processing a query?#2016-04-2821:16bkamphaus@ethangracer: is there a specific issue you’re trying to address? There’s no equivalent of e.g. a SQL query plan available in Datomic.#2016-04-2821:19ethangracerno specific issue at the moment, i’ve just had a very difficult time deciphering error messages and understanding how complex queries work (many :where clauses, :with and aggregates, using clojure functions in queries)#2016-04-2821:19ethangracerI end up just adding and removing random stuff to see what happens#2016-04-2821:20ethangracerwhich is fine from a learning perspective, but when I’m trying to achieve a certain result in production it takes a whole lot longer than I think it would with a debugger#2016-04-2821:22ethangracerso if tinkering is the best available option, that’s fine, just curious if there are others#2016-04-2821:22ethangraceror to see if anyone has thoughts on what is a more / less effective way of tinkering#2016-04-2821:26bkamphausI personally haven’t run into issues adding/removing clauses in a controlled manner to get a query to work the way I’d like it to, or to build understanding while doing so. But I tend to build up queries incrementally and check intermediate results either with a small db or a set of tuples I use in place of a db along the way. And I tend to keep queries smaller and compose multiple queries, or e.g. a query versus pull many to split selection and projection.#2016-04-2821:26bkamphausThat’s just my personal take, though. 
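One documented lever for the tuning side of this question is :where clause ordering: Datomic evaluates clauses top to bottom and does not reorder them, so leading with the most selective clause keeps the intermediate result sets small. A sketch using hypothetical mbrainz-style attributes (both queries return the same rows; only the evaluation cost differs):

```clojure
;; Slower: the first clause binds every release in the db
;; before the artist filter prunes the set.
'[:find ?name
  :where
  [?r :release/name ?name]
  [?r :release/artists ?a]
  [?a :artist/name "Pink Floyd"]]

;; Faster: the first clause binds a single artist entity,
;; and each later clause only grows the set from there.
'[:find ?name
  :where
  [?a :artist/name "Pink Floyd"]
  [?r :release/artists ?a]
  [?r :release/name ?name]]
```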
I do understand the implied feature request for tools for explaining or tuning queries, and it’s one others have made and that we’re considering.#2016-04-2821:30ethangraceryeah some kind of tool for tuning queries would be great. I like the idea of creating a set of tuples to use in place of a db to troubleshoot queries, didn’t realize that was an option#2016-04-2821:30ethangracerso far I’ve just been using the repl hooked up to our full database, which makes it tricky to understand the behavior… lotsa data#2016-04-2821:32ethangracerhow do you build your own tuples to test against?#2016-04-2821:41kennyHow do you transact a new entity to a :db.cardinality/many? This will throw "Expected number or lookup ref for entity id, got {:new "entity"}":
(d/db-with db [[:db/add id :children {:new "entity"}]]) where the schema for :children is {:db/cardinality :db.cardinality/many
:db/valueType :db.type/ref}#2016-04-2821:48bkamphaus@kenny: for the list form :db/add you need to generate one add transaction per set of things in the relation. If you were using a map form and [:new entity] was a stand in for a lookup ref, it needs to be nested in another collection, e.g. {:db/id ent-id :children [[:lookup “1”]]} (or a list of ids there). You can nest a new map without using an entity id only in the case that :children is a component, and still only in the map form.#2016-04-2821:49kenny@bkamphaus: Ah I see. Thanks!#2016-04-2821:52bkamphaus@ethangracer: by tuple examples I mean:
(def tuples [['sally :age 21]
             ['fred :age 42]
             ['ethel :age 42]
             ['fred :likes 'pizza]
             ['sally :likes 'opera]
             ['ethel :likes 'sushi]])
(d/q '[:find ?e :where [?e :age 42]] tuples)
; #{[ethel] [fred]}
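kenny's :db.cardinality/many question above can be made concrete. A sketch of the two transaction shapes bkamphaus describes, using a hypothetical :children / :child/name schema against an in-memory database (the child entities must already exist for the lookup refs to resolve):

```clojure
(require '[datomic.api :as d])

;; hypothetical schema: :child/name (unique identity), :children (cardinality-many ref)
(def uri "datomic:mem://card-many-sketch")
(d/create-database uri)
(def conn (d/connect uri))

@(d/transact conn
   [{:db/id                 (d/tempid :db.part/db)
     :db/ident              :child/name
     :db/valueType          :db.type/string
     :db/unique             :db.unique/identity
     :db/cardinality        :db.cardinality/one
     :db.install/_attribute :db.part/db}
    {:db/id                 (d/tempid :db.part/db)
     :db/ident              :children
     :db/valueType          :db.type/ref
     :db/cardinality        :db.cardinality/many
     :db.install/_attribute :db.part/db}])

;; the children must exist before a lookup ref can point at them
@(d/transact conn [{:db/id (d/tempid :db.part/user) :child/name "a"}
                   {:db/id (d/tempid :db.part/user) :child/name "b"}])

;; list form: one :db/add per child
@(d/transact conn [[:db/add (d/tempid :db.part/user) :children [:child/name "a"]]])

;; map form: the lookup refs (or entity ids) nested in a collection
@(d/transact conn [{:db/id    (d/tempid :db.part/user)
                    :children [[:child/name "a"] [:child/name "b"]]}])
```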
#2016-04-2821:53bkamphausbut I think with just a little bit of added complexity (especially considering pull specifications, etc.) I’d move pretty quickly to a small dataset in edn that I’d transact into a mem db. Or use the mbrainz subset if you can construct analogous queries with it.#2016-04-2821:53bkamphausIf you’re trying to get a feel for perf/clause ordering, then you definitely want something with a datomic db’s indexes.#2016-04-2821:56ethangracer@bkamphaus: thanks, that’s super helpful! for simpler queries I’d definitely prefer to use tuples, but I see the value in pulling down the mbrainz data and trying to build analogous queries too.#2016-04-2900:23bvulpesis there a way to use aggregates to return entities that don't have more than a certain number of values, or don't have fewer than a certain number of values?#2016-04-2900:28bvulpes(which clearly doesn't do what i want, attempting to get count of a long, longs being the entity id type...)#2016-04-2900:29bkamphaus@bvulpes: you can do it by chaining queries or in one query using subquery. I have a subquery example here: https://stackoverflow.com/a/30851855#2016-04-2900:30bvulpeswhoa#2016-04-2900:30bvulpesdope#2016-04-2900:30bvulpesgotcha#2016-04-2900:30bvulpesthanks @bkamphaus!#2016-04-2900:33bvulpesbkamphaus: does this strategy work with named functions?#2016-04-2905:52rauh@bkamphaus: Reload problem for console: No plugins, also happens in Incognito mode (which has no browser plugins). Chrome on Linux. Also happens with FF.#2016-04-3007:38bvulpesis there any way to call user-defined clojure code in queries? a la <, am i missing some clever way to use my own code in the queries? 
i'm using a rule, but having my own code in queries would be tres neat.#2016-04-3007:46tomjacksee "calling clojure functions" and the stuff above at http://docs.datomic.com/query.html#2016-04-3017:32bvulpes@tomjack: no idea why that didn't work last night.#2016-04-3017:35bvulpesbut thanks!#2016-05-0213:26grounded_sageIm looking into different databases to learn them and I was wondering if Datomic is an in memory database what happens when it goes down or restarts?#2016-05-0213:41bostonaholic@grounded_sage: the in-memory database is typically only used for experimentation and testing#2016-05-0213:43bkamphausIf you're talking about the mem storage, right, it isn't durable. If you're referring to the db being cached in memory on peers, it's still on durable storage and the cache will be repopulated as segments are read on the peer after restart.#2016-05-0218:50jgdaveyCan anyone refresh my memory about how to pass a “predicate” into query? I know there are other ways, but wanted, if possible to use a predicate within the query itself.#2016-05-0218:51jgdaveyThat is: ’[:find ?e :in $ ?pred :where [?e :foo/bar ?val] [(?pred ?val)]]#2016-05-0218:51jgdaveyThat obviously doesn’t work, but I think illustrates what I’m going for#2016-05-0218:52bkamphausjust define it outside query and fully namespace it when you invoke it in the query.#2016-05-0218:52jgdaveySo anonymous functions are out?#2016-05-0219:13bkamphaushmm, haven’t tried or tested anonymous functions in query before, now I’m curious myself… simple_smile#2016-05-0219:24bkamphaus@jgdavey: so I was able to do it with a user defined generic predicate applier (ignore the fact that the toy example could be replicated with a provided comparison operator):
(defn apply-pred [some-fn x]
  (some-fn x))
(d/q '[:find ?name ?year
       :in $ ?fn
       :where
       [?a :artist/name "Pink Floyd"]
       [?r :release/artists ?a]
       [?r :release/name ?name]
       [?r :release/year ?year]
       [(user/apply-pred ?fn ?year)]]
     (d/db conn) #(< 1972 %))
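The same result can also be had without passing the function as a query input at all, via the "define it outside query and fully namespace it" route mentioned just above. A sketch, assuming a hypothetical my.preds namespace on the peer's classpath and the same mbrainz-style attributes and conn as the example above:

```clojure
(ns my.preds)  ; hypothetical namespace; must be loadable by the peer

(defn after-1972?
  "Plain predicate, referenced by its fully qualified name in the query."
  [year]
  (< 1972 year))

;; elsewhere, at the call site:
(d/q '[:find ?name ?year
       :where
       [?a :artist/name "Pink Floyd"]
       [?r :release/artists ?a]
       [?r :release/name ?name]
       [?r :release/year ?year]
       [(my.preds/after-1972? ?year)]]
     (d/db conn))
```

The trade-off versus the ?fn input is that the predicate is fixed at the call site rather than chosen by the caller.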
#2016-05-0220:10jgdavey@bkamphaus: Oh nice!#2016-05-0222:18pheuterSo I’m trying to do an “upsert” in a transaction using the list form and am getting the following error: :db.error/not-an-entity Unable to resolve entity. Does that mean the entity id does not exist in the database or that i’m not specifying it properly? This is what the transaction list looks like: [:db/add [:user/email “some email”] a v]#2016-05-0222:24bkamphaus@pheuter an entity has to exist for a lookup ref to work. An upsert would be that attr val asserted for an entity with a tempid (typically in map form) and the tempid would resolve to the entity that already exists in the event that that unique identifier is already in the db.#2016-05-0222:25pheuterright, that makes sense, i guess i’m wondering if my list form is properly setup, specifically that i can specify a vector containing a unique attribute and its value as an entity id.#2016-05-0222:40taylor.sandoCan you pass anonymous functions to a transactor function? I know the transactor can run namespaced functions as long as they are on the classpath. I was thinking that you could make a transactor function called validate/fn, which you could call with the arguments, :query, :params, :error-msg, checker-fn. In the actual function you would do (apply datomic.api/q db (:query params) (:params params)), then you would check the result returned against the checker fn, which, depending whether it was true or false, could raise an exception to reject the transaction.#2016-05-0223:29taylor.sandoI can do it locally, but on my dev database I'm getting a illegal argument exception from a fressian write handler. Is it trying to serialize the transaction when sending to the remote transactor, but not locally? 
I kind of assumed you wouldn't be able to pass a function to the transactor, but wanted to try it.#2016-05-0301:12bkamphausIt serializes transactions to dev or another storage, not to mem. And yeah anon fns not passable to tx functions over wire.#2016-05-0310:40rauhIn all of:
http://docs.datomic.com/excision.html
The partitions are specified without a keyword (missing colon). Is this a mistake? On other pages it's all like :db/id[:db.part/db]#2016-05-0316:11pheuterAnyone have any experience enforcing composite unique attributes? The documentation briefly alludes to transaction functions, but I’m not sure where to go from there.#2016-05-0316:13bkamphaus@pheuter: there are really basic examples in Clojure https://github.com/Datomic/day-of-datomic/blob/master/tutorial/transaction_function_exceptions.clj#L15 and Java https://github.com/Datomic/datomic-java-examples/blob/master/src/java/datomic/samples/TxFunctions.java#L37 but in practice I think many people lean more on defining an additional attribute for the composite key (i.e. a concatenated string).#2016-05-0316:16pheuter@bkamphaus: thanks! will take a look. also interesting point about a composite attribute that serves as a concatenation.#2016-05-0317:26danielwoelfelCareful when you're choosing the attributes to concat--unless they've changed the contract, you can't rely on :db/ids to stay the same across db restores#2016-05-0317:27bkamphausright, don’t use entity id as an external identifier in general. :db/id isn’t an attribute. simple_smile#2016-05-0317:34mlimottegood tip. i hadn't thought about that before.#2016-05-0318:51pheutergood point#2016-05-0413:15jgdaveyI’m having an issue where I’ve killed one peer to spin up another, but the Transactor still thinks the first peer is connected (I think), and won’t let the new one connect. The CloudWatch metrics report RemotePeers as the max, even hours after the other peer spun down.#2016-05-0413:15jgdaveyIs there a way to have the transactor release that connection without taking down the transactor completely?#2016-05-0413:19bkamphaus@jgdavey: a peer should definitely time out. You’re sure there aren’t other peers you’re not counting (e.g. console, REST server, etc.)?#2016-05-0413:19jgdaveyActually, I made an incorrect assumption. 
There’s a Datomic-unrelated problem going on.#2016-05-0413:20jgdaveyDisregard. simple_smile#2016-05-0413:20jgdaveyAnd as always, thank you for the quick reply.#2016-05-0501:46noziarIs there a way to force a transactor to take over? I have a transactor plugged to a MySQL storage but it stays in standby although I'm pretty sure no other transactor is connected (there was one before but I had to kill it - maybe it left the storage in a bad state somehow?)#2016-05-0501:47bkamphaus@noziar transactor will automatically take over unless there's a live heartbeat from another transactor.#2016-05-0501:50bkamphaus@noziar You can run datomic.peer/transactor-endpoint (side note: diagnostics tool, not stable api, so don’t use outside of this intended purpose) to sanity check what the current transactor endpoint is.#2016-05-0501:53noziarAmazing! datomic.peer/transactor-endpoint was exactly what I needed to figure out where the transactor was. Thanks a lot!#2016-05-0518:26ethangraceris there a way to specify that pull should follow all non-component refs?#2016-05-0518:26ethangracerpulled this line off of the Non-component Defaults part of the pull documentation:
> If the reference is to a non-component attribute, the default is to pull only the :db/id.#2016-05-0518:27ethangracersuggests there may be a way to override the default without changing schema?#2016-05-0519:36tomjackthe implied way to override the defaults is to use a map-spec instead of a bare attr-name. I don't think you can follow all non-component refs -- unless you build a pull expr using the schema, maybe?#2016-05-0519:54ethangraceryeah I haven’t been able to find a way other than being explicit about each ref that I want to follow#2016-05-0600:17stuartsierrathat's right, you have to specify which refs you want to follow#2016-05-0613:17imaximixHello, I've been playing around with datomic, and I've stumbled upon the following link https://gist.github.com/stuarthalloway/2645453 . My question is: Can the pull syntax be used with queries against clojure collections?#2016-05-0616:43bvulpesimaximix: nope. pull works with entities specifically, which are a db-only artifact.#2016-05-0618:15haroldCan I search, historically, for a retracted entity which once had a value for a unique attribute, but no longer does?#2016-05-0618:18bkamphaus@harold: yep, I believe you should be able to do it using a query against a history db. Have you attempted anything yet?#2016-05-0618:20haroldI have a history query thing, but I need an entity ref to feed into it.#2016-05-0618:20bkamphausa “retracted entity” is kind of a confusing category to ask about since entities are not intrinsically reified in the underlying datom model. Facts are asserted or retracted, all facts are about an entity, all facts about an entity may be retracted, but the entity’s existence is not otherwise a fact that can be asserted or retracted. 
If that makes sense.#2016-05-0618:20haroldIt does.#2016-05-0618:20haroldSay I knew that at some point in the past there was an entity e with value v for attribute a.#2016-05-0618:20haroldBut that's all I know.#2016-05-0618:21haroldNow I'd like to find the eid associated with that entity.#2016-05-0618:21haroldand it no longer has value v for attribute a, because the entity was retracted.#2016-05-0618:21bkamphausenter and shift enter, ugh#2016-05-0618:22bkamphausso all facts, assertions or retractions are in a history db, and you can limit to retractions with a fifth position of false in a :where clause.#2016-05-0618:23bkamphaussome queries against history (not an exact match for what you’re asking but maybe helpful) in the day of datomic provenance example: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/provenance.clj#L61#2016-05-0618:24haroldNice! Didn't think of false in the fifth position.#2016-05-0618:24haroldI'll give it a whack.#2016-05-0618:27bkamphaus(d/q '[:find ?e
       :in $ ?a ?v
       :where
       [?e ?a ?v]]
     (d/history db))
@harold that should return every entity that ever had that ?a ?v (as query parameters) asserted or retracted for it, could put in [?e ?a ?v _ false] as :where clause instead to limit to retractions (untested as of yet, just building query off expectations).#2016-05-0618:27haroldYup! Just hit on something similar. Thank you! I am off and running.#2016-05-0622:45bkamphaus@fenton: http://docs.datomic.com/javadoc/datomic/Datom.html — you can use :e, :a, :v, :tx, :added etc. from Clojure instead of usual Java interop.#2016-05-0622:46fenton@bkamphaus: thx#2016-05-0622:57fentonis this idiomatic...is there a way to query for available functions?#2016-05-0623:07brian_mingus@fenton (map ns-publics (all-ns))#2016-05-0623:16fenton@brian_mingus: crashed my repl! lol!#2016-05-0623:25bkamphaus@fenton: you mean what you can call on a datom in clojure? supports nth, juxt (with keywords), destructuring.#2016-05-0623:26bkamphaus((juxt :e :a :v :tx :added) datom)
[0 10 :db.part/db 13194139533312 true]
(let [[e a v tx added] datom] [e a v tx added])
[0 10 :db.part/db 13194139533312 true]
etc.#2016-05-0623:27bkamphausthen it’s just values in Clojure you can do w/e with.#2016-05-0623:35fenton@bkamphaus: thx#2016-05-0623:35fentonanyone have issues with uniqueness...when i transact two items quickly in sequence, seems to violate the uniqueness of that attribute?#2016-05-0623:37bkamphausUnique value or identity? Identity will upsert rather than raise an exception.#2016-05-0623:40bkamphausIt's upserting, if you query the resulting db only one entity will have that attr/value, more here: http://docs.datomic.com/identity.html#2016-05-0623:40fenton@bkamphaus: ok...thank you again! simple_smile#2016-05-0705:10zentropeI’m adding an entity with [:data/user [:user/id #uuid “…”]] but get a :db.error/not-an-entity Unable to resolve entity: :data/user error. But the user entity with that UUID does exist. Is that error telling me that data/user (an attribute) doesn’t exist?#2016-05-0705:13zentropeI’m getting the same answer for all refs I attempt to lookup in that way.#2016-05-0705:19zentropeHuh. Trashing and restarting the database fixed it. Somehow.#2016-05-0712:23imaximixGot it. Thanks #2016-05-0819:25hueypis this a legit query to see if an attribute was created as part of a tx? (vs an upsert) : (d/q '[:find [?v ...]
       :in $ [[?e ?a ?v _ ?added]]
       :where
       [?e ?a ?v]
       [(= ?added true)]
       [?a :db/ident :loan/id]]
     db-after tx-data)#2016-05-0819:26hueypthe attribute in this being :loan/id#2016-05-0819:29ckarlsenI deleted a reasonably large database and ran "bin/datomic gc-deleted-dbs ...", but I terminated the process by accident after a short time. Pretty sure nothing actually got gc'ed, because storage size is about the same as before (postgres, and I manually did a full vacuum and restarted both transactor and postgres). Is this fixable?#2016-05-0819:53bkamphaus@ckarlsen: if you can afford the down time, the guaranteed way to get the smallest size in storage is to back the dbs up, blow away the underlying storage construct (keyspace, table, bucket, w/e) and then create one and restore the dbs back there. Are you saying that additional calls to gc-deleted-dbs don’t accomplish anything?#2016-05-0819:57bkamphaus@hueyp: not sure I follow the intended behavior of that query. Any ?e ?a ?v should necessarily filter to ?added as true if you’re not using a history db (only history has retractions). So even supplying and binding datom values in the query, the first :where clause should limit to assertions. Apart from that, it seems as though you’re checking for a hard coded attribute and whether it was in that transaction data, not sure what role using a query or db in that context plays. Is this in the context of with ?#2016-05-0820:14ckarlsen@bkamphaus: luckily this was on my dev machine. Additional calls just yielded "no deleted dbs found"#2016-05-0822:21rmuslimovExcuse me, may I ask a newbie question here:
Let’s suppose I have 3M records in datomic database and I want to get first 20 sorted by particular field (checkin date for example, or whatever) since datomic query API doesn’t support ordering - does it mean that I have to download all 3M records to my peer and sort them afterwards?#2016-05-0822:24bkamphaus@rmuslimov: if the sort you want matches the lexicographical sort order for the value when indexed (with :avet present), you can use index-range/`datoms` or seek-datoms to handle paging behavior lazily.#2016-05-0822:46hueyp@bkamphaus: sorry, missed this … I’m doing a scheduled import and upsert’ing say 100 entities at once. I want to know which of those entities were created vs updated. I could look at the datums directly, but saw an example of querying tx-data here : http://docs.datomic.com/transactions.html … I think the only thing I use the db-after for is to map the attribute to its ident (`:loan/id`) … but again, not sure simple_smile#2016-05-0822:46hueyp:loan/id is a unique identity#2016-05-0822:47hueypthe thought was — filter tx-data to just added, look for attribute of :loan/id … get the value?#2016-05-0822:48bkamphaus@hueyp: just off the top of my head, it may be a better option to get the create time of the entity (tx or t) and see if it’s the same as the tx or t? but that does imply doing the lookup for an entity in the log of a min aggregate, so I could see why it’d be desirable to just inspect the tx-data. But you’re really just looking at the tx-data then to see if the attr/value that would result in an upsert is asserted there or not? (it won’t be there in add form if the resulting tx was an upsert).#2016-05-0822:49hueypyah, I’m just looking at the tx-data to see if the attribute is present as a new fact#2016-05-0822:57hueypyah, trying out the example from : http://docs.datomic.com/transactions.html : [:find ?e ?aname ?v ?added
 :in $ [[?e ?a ?v _ ?added]]
 :where
 [?e ?a ?v _ ?added]
 [?a :db/ident ?aname]]
it only shows added of true versus "show each datom of the transaction”.#2016-05-0822:58hueypwhich makes sense as you said ?added is always true for a normal db#2016-05-0822:58hueypso really I just want to filter against the tx-data and not bother with db-after I think#2016-05-0822:59hueypI was just liking the idea of using datomic query syntax vs map / filter simple_smile#2016-05-0823:00bkamphausFor what you're doing yeah - I would say that docs query is probably better suited for understanding less trivial changes.#2016-05-0823:01hueypthanks!#2016-05-0916:20bostonaholichas anyone implemented a blockchain in datomic?#2016-05-0916:25syk0sajehey everyone, just ran into the same issue as this one and was able to use his "fix" as well, but i thought you datomic devs might wanna look into it: https://stackoverflow.com/questions/31371993/basic-logging-in-clojure-web-service-not-appearing-on-console#2016-05-0916:26syk0sajein a nutshell, i'm using hoplon, and adding datomic suppresses some errors from showing up in the log#2016-05-0916:28syk0sajespecifically, the error was "http request header too large". 
the accepted answer explains what the author thinks the cause could be.#2016-05-0917:14spiralganglion@bostonaholic: I haven't heard of anyone doing this, but if you do find out (perhaps, elsewhere), I'd be excited to hear about it so definitely let us know here.#2016-05-0917:15spiralganglionI'm about to start working on a project that is spiritually similar to IPFS, so I'm voracious for ideas in that neighbourhood (blockchain, DHTs, etc)#2016-05-1003:40sdegutisDoes retracting an entity which has {:db/noHistory true} also result in excising that entity from the database?#2016-05-1003:42sdegutisI read the part that says there are no guarantees about how much is removed (it could have more than it should seem to) but it also says the purpose of noHistory is to reduce storage usage, which seems to suggest that it would be a reasonable optimization.#2016-05-1003:43bkamphaus@sdegutis: it’s not really equivalent to excision. :db/noHistory is really just a performance hint that goes into effect with indexing. And :db/noHistory is on an attribute, not an entity. Entities are just projected from all facts about them, but an entity with only :db/noHistory attributes with all facts retracted could result in no facts for that entity in history.#2016-05-1003:55sdegutis@bkamphaus: Right, and no facts about an entity throughout history is really not much different than having never existed in history.#2016-05-1015:45sdegutisThe query docs (http://docs.datomic.com/query.html) don't mention that < and > can compare two :db.type/instant values. Is this undefined behavior that just happens to work? I'm pretty sure it's not allowed in plain Clojure, so it must be a Datomic-specific feature.#2016-05-1017:41peterromfeldi quite like clojure, but i also wanna get back to elixir and phoenix for my BE.. but i also cant live without datomic anymore 😛
is there a way to write a peer lib for other langs without using REST api? http is so slow...#2016-05-1017:42peterromfeldor it all ends up to get java interop first, then use java peer lib#2016-05-1017:50peterromfeldi cant look into the source to see how peer caching actually works 😞
im also not that senior to understand if it would make sense to have the lookup caching abstracted away with common things like redis or co, so it eventually could be easier to implement peer libs in other langs, just would be awesome if you could use datomic to full potential in every language!#2016-05-1017:52peterromfeldsilly me, i found how to look into source#2016-05-1017:53peterromfeldits java (puke)#2016-05-1018:12curtosisAFAIK, at present, a peer library has to be implemented by Cognitect. The wire protocol is not public, and reverse engineering would violate the license.#2016-05-1018:19curtosisfor one solution, you could look at https://github.com/psfblair/datomic_gen_server (not used, just aware of)#2016-05-1018:20bvulpes@peterromfeld: there's always the REST API#2016-05-1018:20bvulpesis there a shorthand to retract all values for an attribute?#2016-05-1019:20spiralganglion@peterromfeld: I too have been spending a bit of time with Phoenix and wishing I could back it with Datomic. Doubtful it's high on anyone's list of priorities, but an Erlang peer lib would probably be a pretty huge deal if it ever happened.#2016-05-1020:02bkamphausWe definitely are considering the requests to open up peer lib/full client capabilities to other languages, but no specific plans to share at this time.#2016-05-1021:55bhaganyif we're voting, +1 for erlang peer lib#2016-05-1117:53vinnyataideI'm learning datomic right now and seeing that it works with arbitrary data could it be an replacement to filter func in clojure?#2016-05-1117:59bvulpesam i intuiting correctly that the pull api doesn't return tx data?#2016-05-1118:52val_waeselynck@vinnyataide: you don't like filter ? 
simple_smile yes Datalog can be useful for declarative data manipulation and expressing rules, doesn't mean it's always better than clojure.core functions though#2016-05-1118:53val_waeselynck@vinnyataide: but given that querying is practically non-remote, nothing stops you from mixing both styles, that's one of the wonderful things with Datomic#2016-05-1118:53vinnyataide@val_waeselynck: well I didn't say but I was asking if it blends in like the core#2016-05-1118:54vinnyataideall right#2016-05-1118:54vinnyataidethat's exactly my point, it's very nice indeed#2016-05-1118:56val_waeselynck@vinnyataide: my everyday Datomic code mixes Datalog queries for selecting entities and clojure.core function that manipulate entities (i.e instances of the Entity class)#2016-05-1118:57vinnyataideokay#2016-05-1118:57vinnyataidemy next question is a little more scientific. the "bound" variables follow the same principles from the monads?#2016-05-1118:59val_waeselynck@vinnyataide: not sure what you mean by that#2016-05-1119:27vinnyataide@val_waeselynck: no need to answer just found out, now... Is there any resource to learn datomic inserts anywhere? I'm not finding it#2016-05-1119:36val_waeselynck@vinnyataide: have you watched the Day of Datomic videos?#2016-05-1119:37val_waeselynckthose + some REPL experimentation worked well for me#2016-05-1119:41vinnyataide@val_waeselynck: all right gonna check it out and play with the api a little bit#2016-05-1119:46vinnyataidebtw stu's hair is epic 😄#2016-05-1120:48curtosisstu's hair being epic is a system invariant. simple_smile#2016-05-1121:49vinnyataidean interesting aspect is that I came upon this database already thinking about a more generalized way to build immutable databases that sees the facts as timeless atoms. I commented this idea with my research teacher but was like daydreaming for him, sadly there's not much research in my area about functional programming. 
but I'm very happy to see this idea work so perfectly#2016-05-1121:58vinnyataidethe state machine as we know today takes pictures of the whole universe in a fraction of a clock, now this db builds atoms that represent all the dimensions that it is in, that what I got#2016-05-1205:47fasihaHi friends. I'm reading the Datomic docs on tuple bindings for multiple inputs (http://docs.datomic.com/query.html#sec-5-7-1) in order to ask the http://learndatalogtoday.org dataset, "What movies did both Mel Gibson and Danny Glover collaborate on?" And here's what works for me (apologies for using DataScript syntax, since that's what I'm using):
(d/q '[:find ?movie-title
:in $ [?name-1 ?name-2]
:where
[?person-1 :person/name ?name-1]
[?person-2 :person/name ?name-2]
[?movie :movie/cast ?person-1]
[?movie :movie/cast ?person-2]
[?movie :movie/title ?movie-title]]
@conn
["Danny Glover" "Mel Gibson"])
#2016-05-1205:48fasihaMy question is, if I wanted to generalize this function to handle N>=2 collaborators, would I basically build that :where clause programmatically to include N sub-vectors of [?person-n :person/name ?name-n] and N other vectors of [?movie :movie/cast ?person-n], for n running from 1 to N?#2016-05-1205:51fasihaI.e., is there another, less intricate, way to enforce AND requirements other than this? Of course using collection bindings, with :in $ [?name ...] will give me movies where any of the N actors have appeared, with an OR relationship, rather than AND (collaborations), which is what I need.#2016-05-1205:53fasihaI suspect it would be advantageous for me to read up on Datomic rules at this juncture#2016-05-1208:43val_waeselynck@fasiha: good question, worth asking on stackoverflow or the mailing list#2016-05-1208:49val_waeselynck@fasiha: not sure if it's less intricate, but I think I'm able to do in one query it using 2 datasources and double-negation#2016-05-1218:28adamkowalskiis there a good guide you guys can recommend to learn about data modeling in datomic? 

I’m hoping to learn about modeling one to many relationships, and just relationships between entities in general#2016-05-1218:45bvulpes adamkowalski one-to-many is a cardinality many 'ref'#2016-05-1221:54bvulpesam i correct in my reading of the pull API documentation that it doesn't specify any ordering?#2016-05-1221:55bvulpesit appears to be ordered by transaction, but confirmation would be nice#2016-05-1221:56val_waeselynck@bvulpes: I don't believe you can rely on any ordering in Datomic queries#2016-05-1221:58bvulpesthanks val_waeselynck #2016-05-1222:00val_waeselynck@adamkowalski: @bvulpes technically I'd say that a cardinality many ref attribute is always many to many - IMO the best way to enforce one-to-many from A to B is a to-one ref attribute from B to A; keep in mind that such an attribute cannot be a component then#2016-05-1222:01bvulpeshm good point#2016-05-1222:02val_waeselynck@adamkowalski: have you read the best practices? http://docs.datomic.com/best-practices.html#2016-05-1222:07val_waeselynckAnd finally, some other personal data modeling tips:
- don't hesitate to reflect on how generic your attributes can be. Sharing attributes across different entity types can be very powerful.
- for "joint table"-like entities you may need compound identifiers, see this discussion https://groups.google.com/forum/#!topic/datomic/4cjJxiH9Lkw#2016-05-1222:10adamkowalski@bvulpes: @val_waeselynck Thanks for all the tips! I have not read those yet, so I will definitely look at those before I continue in my project.#2016-05-1302:02fasiha@val_waeselynck: thanks for the suggestion about Stack Overflow: I posted a related but more basic question to https://stackoverflow.com/questions/37200086/how-to-construct-a-query-that-matches-exactly-a-vector-of-refs-in-datascript — any advice?#2016-05-1308:14val_waeselynck@fasiha: added a quick answer, tell me if that's enough for you, I can also add a code sample, and also an implementation using Datomic's datalog.#2016-05-1310:07lowl4tencyHi guys, when restoring db are in progress, any jobs on the db might be failed?#2016-05-1311:50fasiha@val_waeselynck: thanks so much! This is trickier than I thought 😄#2016-05-1312:36jimmyhi guys is there any way we can backup mem db ?#2016-05-1312:58val_waeselynck@nxqd: if you need persistence, why not use dev storage?#2016-05-1313:00jimmy@val_waeselynck: I will use dev storage then. just asking if it's possible 😛#2016-05-1313:00val_waeselyncknot that I know of 🙂 but curious too#2016-05-1313:12stuartsierraNo, you cannot back-up a mem database.#2016-05-1509:28jonaskelloI just started experimenting with datomic today so I don’t known much about it.. I’m having some trouble getting the rest service to work with postgres… .. what I don’t understand is what URI I should provide to the rest service..#2016-05-1509:29jonaskelloThe transactor says: System started datomic:sql://<DB-NAME>?jdbc:<postgresql://localhost:5432/datomic?user=datomic&password=datomic>, you may need to change the user and password parameters to work with your jdbc driver#2016-05-1509:30jonaskelloWhat is the <DB-NAME> part? 
Is it a datomic database name or a postgres database name?#2016-05-1509:30jonaskelloI guess it is a datomic database name…. but then I could only start the REST service for a specific datomic database, not for all of them at the storage service?#2016-05-1509:40jonaskelloDid some more experiments and it looks like I could just leave the <DB-NAME> part out when starting the REST service...#2016-05-1509:53jonaskelloNow I’m having trouble with the console…. I can start it and access the web page but when I click something or type a character it just reloads… no error messages or anything, just a reload of the whole web-page#2016-05-1509:54jonaskellobin/console -p 8080 sql "datomic:sql://?jdbc:<postgresql://localhost:5432/datomic?user=datomic&password=datomic>"#2016-05-1509:54jonaskelloConsole started on port: 8080#2016-05-1509:54jonaskellosql = datomic:sql://?jdbc:<postgresql://localhost:5432/datomic?user=datomic&password=datomic>#2016-05-1509:54jonaskelloOpen http://localhost:8080/browse in your browser (Chrome recommended)#2016-05-1509:55jonaskelloSame issue if I try it with dev storage#2016-05-1509:55jonaskelloAnyone seen this before?#2016-05-1509:59val_waeselynck@jonaskello: I know that at least one version of Datomic was released with a broken console, would recommend downloading another version and try the console of that one (just the console, you don't have to change the version you're running the transactor with)#2016-05-1510:00jonaskello@val_waeselynck: Thanks, I’m currently running datomic-pro-0.9.5359.. will try another version of the console..#2016-05-1510:04jonaskelloDownloaded console v0.1.206 (which was the only version I could find in http://my.datomic.com)… I just pasted those files into the datomic-pro-0.9.5359 directory replacing any duplicates.. not sure if that is the correct way to do it? 
When I try to start the console now it says "Error: Could not find or load main class clojure.main"#2016-05-1510:07jonaskello@val_waeselynck: Downloaded the full package for 0.9.5350 and now the console works! Thanks for the tip!#2016-05-1615:55lowl4tencyHi guys, I did backup-db with the encryption sse flag to S3. I downloaded the same file and don't see a difference between encrypted and non-encrypted backups#2016-05-1615:56lowl4tencyI would like to store my backups encrypted. What am I doing wrong? ._.#2016-05-1615:57lowl4tency./bin/datomic backup-db --encryption sse -Ddatomic.s3BackupConcurrency=350 $uri s3://$backup_bucket/encrypted/${db_name}#2016-05-1616:01lowl4tencyAnd is there maybe a way to check this behaviour? Also, if I copy my backups from s3 to another place, am I able to restore-db with encrypted backup files?#2016-05-1616:02bkamphaus@lowl4tency: the invocation looks correct to me on first glance. I’ll look into it. Also, re: your previous question, the expectation is that processes are down for a restore until it’s complete for non-dev transactors. if transactor/peer are up they may fall over on seeing inconsistency.#2016-05-1616:03lowl4tencybkamphaus: thank you for the answer, btw I use datomic-pro-0.9.5344 for backup-db#2016-05-1616:04lowl4tencyI use diff for comparing the files. I also checked them with cat. Don't see a difference ._.
as far as i understand the files shouldn't be the same if I use encryption#2016-05-1616:11bkamphausthe encryption is amazon level, I haven’t thought through it entirely before but I don’t think you should see a file level difference if you’re actually able to inspect the files - it’s probably handled by some aspect of AWS access control.#2016-05-1616:14lowl4tencybkamphaus: thx, will check out how it works on the AWS level and report back 🙂#2016-05-1616:16lowl4tencybkamphaus: if I understand correctly, encryption in datomic is implemented via this https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html#2016-05-1616:29bkamphaus@lowl4tency: if you look at one of the files in the AWS management console view for s3, under properties in the drop down menu there should be a “Details” expandable view where you can see if “Server Side Encryption” is listed as either None or AES-256.#2016-05-1616:33bkamphaustrying to find an equivalent cli invocation, no luck thus far.#2016-05-1619:01sdegutisDoes :db.error/transaction-timeout mean the transaction did not happen, or only that requesting the transaction's result simply timed out?#2016-05-1619:04sdegutisAh, found it: "When a transaction times out, the peer does not know whether the transaction succeeded, and will need to query a recent value of the database to discover what happened."#2016-05-1619:04bkamphaus@sdegutis: you don't know until you reconnect and check. Pitfalls of distributed systems and all that.#2016-05-1619:04sdegutisGreat. Just queried the database and figured it out. Thanks.#2016-05-1619:05sdegutisThe docs say to use the overloaded Future.get() method that accepts a timeout, but I still got a timeout even though I didn't pass one and didn't set it via System/setProperty either.
Is there a default timeout value or something?#2016-05-1619:07sdegutisAh, datomic.txTimeoutMsec defaults to 10000.#2016-05-1619:07sdegutisThanks.#2016-05-1619:28zaneDatomic doesn't have a built-in query log or anything like that, does it?#2016-05-1620:30lowl4tencybkamphaus: wow, yeah exactly I see the checkpoint AES-256. I've read the doc more carefully, so when I downloaded the file it's non-encrypted yet 🙂#2016-05-1717:10sdegutisHello. I'm considering a migration that will clone 7,322 entities resulting in the creation of 139,337 more entities. The documentation doesn't say much in the way of any risks having this many entities may pose on a Datomic database, from the stand-point of queries, future transactions, and storage capacity.#2016-05-1717:11sdegutisCame here to see if there is any conventional wisdom on this matter to be found in this channel. /cc @bkamphaus#2016-05-1718:57ckarlsen@sdegutis: thats "nothing". IIRC they recommend to limit your database to around 10 billion datoms#2016-05-1719:26sdegutis@ckarlsen: Sweet, we only have 5 million datoms. So that's pretty reasonable.#2016-05-1721:52chadhsi have to say when you start playing with some example data of your own creation, it’s fun to create some simple nested data and then query it#2016-05-1817:21caspercI am looking to generate some queries based on user input. 
I think this is doable but given that Datomic doesn’t have a query optimiser, how do I make sure that I order the clauses in my :where the right way (or at least a reasonably good way)?#2016-05-1817:22caspercAre there some guidelines that I can go by when generating the query?#2016-05-1817:25gardnervickersAre datomic s3 backups just the new transactions since the previous backup?#2016-05-1817:57sdegutisIs it possible for a transaction to have only partially completed due to java.lang.OutOfMemoryError?#2016-05-1818:05marshall@gardnervickers: Since 0.9.5130 Datomic backups have been incremental if they’re issued against the same storage location: http://docs.datomic.com/backup.html#differential-backup#2016-05-1818:06marshall@sdegutis: Transactions are atomic, so they either complete successfully or fail, there is no way to have a ‘partial transaction’; did you see an OOM error on the transactor?#2016-05-1818:07marshall@casperc: Are you generating the entire query or just altering parameters based on the user input?#2016-05-1818:07sdegutisPhew. @marshall I just verified that it did not partially go through. Thank goodness for ACID compliance I suppose.#2016-05-1818:08sdegutis@marshall: I think so... I tried to d/transact-async a hundred thousand entities into existence, and got java.lang.OutOfMemoryError.#2016-05-1818:10marshall@sdegutis: create 100,000 entities within a single transaction? that is a bit on the large side for the number of datoms in a single txn - do you need to be able to create those together atomically?#2016-05-1818:10sdegutis@marshall: Probably not, I'm devising a way of splitting this data migration into multiple transactions. Think I found a way.#2016-05-1818:10sdegutis@marshall: It does need to be done within the same 20 minutes though.#2016-05-1818:11marshallI’d definitely recommend splitting something that size up.
I don’t think it should be particularly hard to get it through in 20 minutes, of course it will depend on the specifics of your system and schema, etc.#2016-05-1818:11sdegutis@marshall: Someone yesterday mentioned that Datomic recomends a maximum of 10 billion datoms in a database. After this migration we'll have gone from 5 million to 7 million, which eases my mind considering it's not even 1000th of the max.#2016-05-1818:12sdegutisBut I just didn't anticipate that it would be too big for a single transaction.#2016-05-1818:12sdegutisBut yeah I've got me an idea for splitting it up.#2016-05-1818:13marshallIncidentally, the “Understanding and Using Reified Transactions” talk here: http://www.datomic.com/videos.html discusses a few approaches to large operations that span transaction boundaries#2016-05-1820:04casperc@marshall: I am generating the entire query. Our data model forms a DAG and I am generating a :where clause joining from (if that is the right way to put it) one of the leaf nodes to the root.#2016-05-1820:07caspercIt might just be that it is not a problem though if I put the clauses with input parameters first and then just join up towards the root.#2016-05-1820:13marshall@casperc: If your user-paramaterized clauses narrow the dataset fairly substantially, that sounds like a reasonable place to start. I’d recommend against premature optimization and tend to worry about making it faster only if you see significant perf issues#2016-05-1820:15bkamphaus@casperc: might be helpful to look at the code in the mbrainz sample database for generating rules: https://github.com/Datomic/mbrainz-sample/blob/master/src/clj/datomic/samples/mbrainz/rules.clj and the resulting rules: https://github.com/Datomic/mbrainz-sample/blob/master/resources/rules.edn for graph traversal for collaborating artists.#2016-05-1820:15casperc@marshall: Sound advice, I’ll see how it performs. 
🙂 I guess I was looking for some reference material of some sort for generating the query#2016-05-1820:15casperc@bkamphaus: Perfect, I got my wish delivered 🙂#2016-05-1820:16marshallRight, the other thing I was going to say was that it sounded like a recursive rule might fit the problem, depending on your schema.#2016-05-1820:17bkamphaus@sdegutis: if you haven’t yet, might want to check out the transaction pipeline example here: http://docs.datomic.com/best-practices.html#pipeline-transactions — though that’s for the step after you break up the transaction. (you put transactions on a channel that the tx-pipeline function would take from).#2016-05-1821:20bvulpesis there any way that calling (d/tempid :db.part/user) in lazy-seq's would result in producing the same db/id?#2016-05-1821:20bvulpes(when i go to transact the lazy seq, i mean#2016-05-1821:46bkamphaus@bvulpes: so not generally but two possible issues: 1 - messing up the code so you just generate it once and repeat the generated value, and 2 - transaction functions that generate tempids can unintentionally conflict with tempids generated on the peer#2016-05-1821:47bvulpesthanks bkamphaus #2016-05-1821:47bvulpesran it down to a mistaken db/unique #2016-05-1905:27arthur.boyerI’m trying to work out which partitions the various attributes in the database belong to. I can get a list of partitions:
(defn get-partitions [db]
(d/q '[:find ?ident
:where [:db.part/db :db.install/partition ?p]
[?p :db/ident ?ident]]
db))
And I can get all the attributes:
(d/q '[:find ?attr ?name
:where
[_ :db.install/attribute ?attr]
[?attr :db/ident ?name]]
db)
But I can’t seem to find how they’re connected.#2016-05-1911:09eplokoHey all. I’m quite new to Datomic and have an issue trying to query the database in the Console. I have a dev transactor and the console app running and can access the Console UI on http://localhost:9000/browse, but… whenever I try to click on the database selection dropdown in the UI, or pretty much do anything else, the only thing that happens is a browser refresh and I’m back to the initial state.
The browser console shows this:
0.js:6027 XHR finished loading: POST "…Fbrowse&v-wn=browse-1380604278-0.20254203379239133&v-wsver=7.1.10&v-uiId=8".qy @ 0.js:6027Hec @ 0.js:6116afc @ 0.js:6068Dec @ 0.js:6274Gec @ 0.js:5870Ifc @ 0.js:6374Xk @ 0.js:5587Nk @ 0.js:4640tk @ 0.js:4415sk @ 0.js:5616(anonymous function) @ 0.js:4646
Navigated to
and nothing else. Any ideas what can be the cause of this and how this can be fixed? I’m running datomic-pro 0.9.5359 and the corresponding Console that comes with the distribution.#2016-05-1911:11eplokoAlso, if that ever matters, I’m on OS X 10.11.4 and run Chrome 50.0.2661.102. I’ve tried to access the console in Safari, but it behaves in exactly the same way.#2016-05-1911:20eplokoHa. Apparently the console that comes with 0.9.5359 is broken somehow. I’ve just downloaded the separate Console distribution from http://my.datomic.com and it works fine. 🙂#2016-05-1915:52dustingetzWhen a datomic peer does a query like [:find (count ?r) :where [complicated]], and the jvm peer cache is cold, how does datomic know what pages of entities to load from storage?
To be fair, it’s a long sentence.#2016-05-1918:29bkamphausbut yes, something like (subtract 1000 from the latest basis-t) as a dumb heuristic to only have a small collection to do seq manipulation on.#2016-05-1918:30bkamphausthere isn’t a direct (last 10) otherwise; it’s get something you know will be small and do the last 10 of that.#2016-05-1918:30bkamphausThere is the tx report queue if you want to hold onto the last few things in a live situation#2016-05-1918:32jeff.terrellOK, that makes sense. Thanks for explaining. That would solve the problem, but a bit awkwardly (I should perhaps mention that @adamfrey and I are trying to solve the same problem). Is there no way to get a lazy sequence from Datomic (whether transactions, datoms, or whatever) that's sorted by time, most recent first? Even if it were fairly low-level, I imagine that would be easier to work with in the end.#2016-05-1918:35bkamphausnothing in the api for it at present#2016-05-1918:38jeff.terrell@bkamphaus: Fair enough. I'm not sure whether y'all discuss such things—and please forgive the question if you don't—but do y'all have any plans to add a feature like that? It sure would make our problem (ultimately, paging through most recent things in a webapp) easier, and I imagine our problem is not very unusual, especially for Datomic-backed webapps.#2016-05-1918:39bkamphauscan’t pass on specific plans, but I’ll note the feature request. For most recently updated entities, etc. though note that the log is still ordered by all transactions.#2016-05-1918:39bkamphausentities are ordered monotonically by transaction time by their id, but only by partition (i.e. :db.part/user, :db.part/mydomain), so you can’t infer anything if they span partitions.#2016-05-1918:42jeff.terrell@bkamphaus: Thanks, I appreciate that. Good to know about the partitioning. I expect that will be more helpful than harmful for us, but definitely worth noting, thanks. I'm not sure what you mean that the log is only the last few transactions.
I think the log does go back to the first transaction, right?#2016-05-1918:43bkamphausjust caught me editing#2016-05-1918:43jeff.terrellha#2016-05-1918:43bkamphausbrain to fingers message garble. I mean that it was ordered by all transactions, so not constrained by “last five of these types of entities"#2016-05-1918:44jeff.terrellRight, OK, that makes sense. Thanks again for explaining all that.#2016-05-1918:44adamfrey@bkamphaus: so to start receiving the log at a given time for a given partition I have to use entid-at right?#2016-05-1918:47adamfreyas opposed to a bare Date being passed to tx-range#2016-05-1918:49adamfreywait, that might be wrong. when you said "entities are ordered monotonically by transaction time by their id, but_only by partition_” where were you talking entities being ordered?#2016-05-1918:50bkamphaus@adamfrey: entid-ad just simulates/fabricates a new entity id reachable from a db -- entities are ordered in datoms/seek-datoms :eavt#2016-05-1918:52adamfreyyes I see that now. but could explain this a little further, please: https://clojurians.slack.com/archives/datomic/p1463683184000677#2016-05-1918:54bkamphausyou’ll have to tell me what’s not clear before I know what to explain differently 🙂#2016-05-1918:54bkamphausor maybe you’re feeling in catch 22 territory#2016-05-1918:55bkamphausHmm, entities are created in partitions. For each entity in each partition, they are ordered by increasing entity id in a way that matches the time in which they were transacted.#2016-05-1918:56bkamphausthis is also the sort order you access with datoms or seek-datoms specifying :eavt as the index.#2016-05-1918:57bkamphausBut if you install entities to multiple partitions, they are only ordered by their entity ids in relation to the other entity id’s in the same partition (partition is part of the entity id)#2016-05-1918:59adamfreyok. But we can always infer transaction time moving forward from the order of the transaction Log right? 
Partitions aren’t relevant to the Log ordering?#2016-05-1919:00adamfreyor am I missing something because transactions are also entities?#2016-05-1919:00bkamphauspartitions aren’t relevant to the log ordering.#2016-05-1919:01bkamphausI’m just saying the transaction ordering doesn’t constraint by entity type#2016-05-1919:01adamfreyyes, that makes sense#2016-05-1919:01bkamphausi.e. “I want to see the last users who bought this product” is not the last 10 facts in the db typically, but something that must be constrained by “users” entities first.#2016-05-1919:02adamfreyright. Thanks for your help!#2016-05-2014:29limistHi Datomicists, I'm running some tests that use Datomic (both in-memory, and queries against a persistent backend) and seeing a ton of DEBUG logging statements that clutter the test output. Is there a way to turn off those logging statements? I've tried adding the following code to the test namespace, but it has no effect:
(.setLevel (Logger/getLogger (str *ns*)) Level/WARN)
#2016-05-2018:23taylor.sandoThere should be a logback.xml in the root of your source folder. Here is what I have for mine:#2016-05-2018:24taylor.sando<configuration scan="true">
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d %-5p [%c{2}] %m%n</pattern>
</encoder>
</appender>
<logger name="org.eclipse.jetty.server" level="WARN"/>
<logger name="org.eclipse.jetty.util.log" level="WARN"/>
<logger name="org.jboss.logging" level="WARN"/>
<logger name="datomic" level="WARN"/>
<logger name="io.pedestal" level="ERROR"/>
<root level="info">
<appender-ref ref="STDOUT" />
</root>
</configuration>#2016-05-2019:22stuartsierra@limist: Datomic uses SLF4J for logging, which forwards to whatever logging framework you want to use, which is controlled by the presence of SLF4J bindings in your project's dependencies. So you need to know what logging framework you have set up in your project, then configure it to reduce the logging level of the datomic.* loggers.#2016-05-2022:10gravI’m using java.lang.Math/abs in a query, and I get a reflection warning, if I don’t define (defn abs [v] (java.lang.Math/abs v)). Then, my IDE tells me to type-hint v, so it ends up being (defn abs [^double v] (java.lang.Math/abs v)). Is this the way to go? Seems a bit boiler-platey 🙂#2016-05-2022:11grav(query ends up being something like :where [(my.ns/abs ?v) ?abs-v])#2016-05-2022:13gravOh, and just using Math/abs works fine, save for the reflection warning#2016-05-2022:23bkamphaus@grav: put the type hint in the query, as per: http://docs.datomic.com/query.html#type-hinting#2016-05-2508:25rauhIs there a list of how database function parameters get converted?#2016-05-2508:27rauhFor instance, a clojure set gets quietly converted to a java.util.HashSet. Which was very confusing and broke my function. This is also not documented anywhere. I posted this on the mailing list a few days ago, but it hasn't been published.#2016-05-2513:42bkamphaus@rauh: at present Java collections — java.util.HashSet, java.util.ArrayList, etc. — not documented at present because it falls out of an implementation details. We’re considering whether or not to change the behavior in a future release and will document the boundaries when we make that decision.#2016-05-2513:43bkamphausdefinitely understand why it’s surprising at present.#2016-05-2517:23jannisHi. There may be an obvious question but how do I query only the datoms that were introduced/modified/removed in a specific transaction? I know I can run (let [db (d/db conn)] (d/q ... 
(d/history db) (d/t->tx (d/basis-t db)))) but then I can't use pull in the query because that doesn't work against the history.#2016-05-2517:25bkamphaus@jannis: you want to do a query against the log, as with the last few examples on this page: http://docs.datomic.com/log.html#2016-05-2517:26bkamphausis X in the db? - (d/db conn)
was X ever in the db? - (d/history db)
was X in the db at this time? (as-of filter)
did X get added after this point? (since filter)
what data was in X transaction (log)#2016-05-2517:37jannis@bkamphaus: Nice summary of scenarios, thanks 🙂#2016-05-2517:37jannisHowever, I'm testing against an im-memory db, which doesn't have a log.#2016-05-2517:37jannisI guess since would work if there is a simple way to obtain the penultimate transaction.#2016-05-2517:38jannisSomething like d/next-t just with previous instead of next.#2016-05-2521:05zaneWhen testing queries that use the pull syntax I'm assuming I need to create a memory-backed Datomic connection and transact some test fixtures into it?#2016-05-2521:06zane(Or use d/with.)#2016-05-2521:16bvulpeszane: 'tis what i do#2016-05-2521:16bvulpesi don't think d/with will work, as those datasets aren't "proper datomic databases"#2016-05-2521:18bkamphausnot sure I follow what is meant by “those datasets”, d/with returns db-after, db-before etc. and these are database values.#2016-05-2521:47haroldFor protecting tests that mutate state from each other's damage we also enjoy: https://github.com/vvvvalvalval/datomock
(bonus: our tests run a zillion times faster because we only migrate the test database once at the start of the test runs)#2016-05-2522:10bvulpeswat?!#2016-05-2522:11bvulpes(let [test-str (str gensym)] (d/create-database test-str) (d/connect test-str)) ?#2016-05-2522:11bvulpes(d/transact schema/schema.clj (d/connect test-str))#2016-05-2522:12bvulpesand if you don't want to do that for each test you can always create-database and delete-database in the test ns fixtures#2016-05-2522:12bvulpeswhich is vastly slower but whatever at least you don't have to worry about cross-test state persistence#2016-05-2522:13bvulpesbkamphaus: i must've misunderstood how d/with works.#2016-05-2522:42zane@bvulpes: Something like this:
(defn empty-db-with-schema []
(d/create-database uri)
(let [conn (d/connect uri)
db (d/db conn)]
(d/delete-database uri)
(d/release conn)
(-> db
(d/with schema)
:db-after)))
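[Editor's note: datomock, mentioned earlier in the thread, essentially automates this d/with pattern. A minimal hand-rolled sketch for the "code that calls d/transact" case, assuming `(require '[datomic.api :as d])`; `with-all` is a hypothetical helper, not part of any API.]

```clojure
;; Thread speculative transactions through d/with so each "transaction"
;; builds on the previous :db-after, never touching a real connection.
(defn with-all
  "Applies each tx-data in txs speculatively against db, in order,
  and returns the final db value."
  [db txs]
  (reduce (fn [db* tx] (:db-after (d/with db* tx))) db txs))

;; usage, starting from the schema-only value returned by the
;; empty-db-with-schema helper above:
;; (def test-db
;;   (with-all (empty-db-with-schema)
;;             [[{:db/id (d/tempid :db.part/user) :person/name "Ada"}]]))
```

Code under test can then take a db value (or a function returning one) instead of a connection, which is the same design move datomock makes.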
#2016-05-2522:43zaneIf all your tests need to do is query you could call that once and use the returned database value in all of your tests.#2016-05-2522:44zaneIf you need to test code that calls d/transact it's another story.#2016-05-2522:44bvulpesmy tests are pretty side-effecty, so i tend to test code that transacts pretty frequently.#2016-05-2522:45zaneSeems like vvvvalvalval/datomock just automates use of d/with.#2016-05-2522:48bvulpes#nodeps#2016-05-2523:02zane#nih?#2016-05-2523:07bvulpes#railslifestyle#2016-05-2523:08bvulpesthere's a pragmatic middle ground.#2016-05-2602:18arthur.boyerI’m getting this intermittent error:
Critical failure, cannot continue: Could not write log
java.lang.Error: Conflict updating log tail
at datomic.log.LogImpl$fn__7023.invoke(log.clj:484)
at datomic.log.LogImpl.append(log.clj:476)
at datomic.log$fn__6684$G__6596__6688.invoke(log.clj:59)
at datomic.log$fn__6684$G__6595__6693.invoke(log.clj:59)
at clojure.lang.Atom.swap(Atom.java:65)
at clojure.core$swap_BANG_.invoke(core.clj:2234)
at datomic.update$writer$log_block__18834.invoke(update.clj:335)
at datomic.update$writer$proc__18842.invoke(update.clj:355)
at datomic.update$writer$fn__18845$fn__18847.invoke(update.clj:364)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:745)
I’m not sure how to go about debugging this. I think it’s caused by some data and schema migration code, but at the application end I just get:
May 26, 2016 2:14:37 PM org.hornetq.core.protocol.core.impl.RemotingConnectionImpl fail
WARN: HQ212037: Connection failure has been detected: HQ119015: The connection was disconnected because of server shutdown [code=DISCONNECTED]
ExceptionInfo :db.error/transactor-unavailable Transactor not available datomic.peer/transactor-unavailable (peer.clj:186)
Has anyone got any ideas on how to track the cause of this down?#2016-05-2612:02taylor.sandoWould there be a way to ask for a recursive attribute, but only a certain number of them? For example person/friend, you're asking the system to get me this person and 100 people related to him through person/friend. So if the person has 50 friends, and all his friends have 50 friends. You'd get the 50 friends, and then it would grab the next friend, and his 50 friends, but it would stop there? I feel like you'd have to do that manually through entity, rather than a query. You'd get the person, and then you'd have a local transient/atom which would hold friends while you walked the entity. Seems like you'd have to do it with reduce, so you could call reduced and stop the function early once you had found the 100 people.#2016-05-2612:05taylor.sandoI guess it would be loop/recur, not reduce#2016-05-2612:43stuarthalloway@taylor.sando: related https://hashrocket.com/blog/posts/using-datomic-as-a-graph-database#2016-05-2612:45taylor.sandoI'll have a look at it.#2016-05-2613:07bkamphaus@arthur.boyer: is the storage responsive, or do you see a trend toward spikes in storage latency? (StoragePutMsec metric) — the log write is a write to storage. Basically if the transactor can’t update the log tail it’s usually a symptom of not being able to write to storage. Since this is how transactions persist/are made durable, the transactor will fail (Datomic is a CP system in CAP terms).#2016-05-2613:29bkamphaus@arthur.boyer: didn’t read that carefully enough “Conflict updating tail” usually comes up if something changes storage out from under the transactor. Either a manual write or truncation to Datomic’s table/keyspace, or restoring a database in place (while transactor is up)#2016-05-2613:31bkamphauscan also be consistency guarantees not met by storage during e.g. a riak or cassandra node rebalancing#2016-05-2621:40arthur.boyer@bkamphaus: Restoring a database in place. 
It’s happening in my dev environment when I restore a database backup. Restarting the transactor prevents the problem. Thanks.#2016-05-2718:51jamesnvcIs there a best practice around deref'ing datomic transactions? i.e. should I always deref d/transact so I get errors thrown, or will that be suboptimal?#2016-05-2718:56bvulpesjamesnvc: i do#2016-05-2718:59jamesnvcAny cases where it is better to not deref?#2016-05-2719:00bvulpesi suppose when you don't care about seeing an error on the spot, or when you're okay running the transaction asynchronously#2016-05-2719:00bvulpes"when you don't have to" i imagine would be the advice#2016-05-2719:05stuartsierrad/transact is actually synchronous, even though it returns a Promise, so you almost always want to deref it immediately.#2016-05-2719:06jamesnvcstuartsierra: ah, good to know, thanks!#2016-05-2719:06jamesnvc(I guess that would explain why transact-async exists#2016-05-2719:06bvulpeshuh. today i learned!#2016-05-2719:07stuartsierraYes. d/transact-async is truly async, it returns immediately, so you can choose if / when you want to deref it.#2016-05-2719:07bvulpeslol right#2016-05-2719:08bkamphausNote in the way the pipeline example here: http://docs.datomic.com/best-practices.html#pipeline-transactions - uses transact-async, we still deref to be alerted to errors and handle (in this case just closing channels and aborting, reporting the error).#2016-05-2811:54pesterhazytransact returns a promise for no reason other than API parity with transact-async (that's something that it took me a while to figure out)#2016-05-2915:47dryewohi all, I’m new to datomic and I have a datalog question.
This is my query:
'[:find (pull ?p [:]) (count ?v)
  :where
  [?p :]
  [?v :td.vote/post ?p]]
and it returns no result if there are no votes for a post. I’d like datomic to put count 0 instead of omitting the post from results. Is there a nice way to achieve that?#2016-05-2915:53bkamphausNot a nice way, no.#2016-05-2916:01bkamphausAt present to get any value for an aggregate you have to have some tuple that matches in the intermediate set. I think @marshall may have an example using or or not with ground and missing or something as a workaround hack for this or a similar problem.#2016-05-2916:01bkamphausPull does support default expressions so it may make more sense to get all values with a 0 as a default in the pull and do count or sum outside the query.#2016-05-2916:04marshallYes, you can sort of get that behavior with or and missing?, but it's not great. @bkamphaus suggestion of pull with default is probably cleaner#2016-05-2916:07dryewolike this?
'[:find (pull ?p [: (default :td.vote/_post [])])
  :where
  [?p :]]
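To make the "pull with a default, count on the peer" idea concrete, here is a sketch; the post attribute was elided in the transcript, so `:td.post/title` below is a hypothetical stand-in:

```clojure
(require '[datomic.api :as d])

;; Sketch only: :td.post/title is a hypothetical attribute standing in
;; for the one elided in the transcript above.
(defn posts-with-vote-counts [db]
  (for [post (d/q '[:find [(pull ?p [:td.post/title :td.vote/_post]) ...]
                    :where [?p :td.post/title]]
                  db)]
    ;; pull simply omits :td.vote/_post for a post with no votes, and
    ;; (count nil) is 0, so such posts get a zero count instead of
    ;; disappearing from the result.
    (assoc post :vote-count (count (:td.vote/_post post)))))
```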
#2016-05-2916:07dryewoand then map the counting#2016-05-2916:12dryewoThe only thing that scares me is, what if I have too many votes so they don’t fit in memory (very unlikely, but I’m curious)? With aggregation functions this problem should be avoided, but not with manual counting.#2016-05-2916:41bkamphausThe intermediate rep has to fit in mem for the query to work in the first place (query runs on peer).#2016-05-2916:41bkamphaus@dryewo: ^#2016-05-2916:42dryewoah, I see#2016-05-2916:42dryewomakes sense#2016-05-2916:42dryewothen it should be ok#2016-05-2916:42dryewothanks#2016-05-3005:38vmarcinkohi, just started playing with datomic the last few days, and noticed something which IMO is very important but not much discussed... Unlike SQL dbs, where the bunch of mutation operations you do on the db during a unit of work are not "transacted" until db commit is called. In datomic, since we don't have a TX open/commit/rollback mechanism, does this mean that it simply forces the developer to structure the code in a functional way, so that every function called during this unit of work "adds" its own datomic TX data to a "unit of work global vector of TX data", so that the top level function, which should enforce atomicity of the db change, finally takes this global vector and transacts it to datomic?#2016-05-3005:40hans@vmarcinko: yes. your application needs to organize transactions into small logical units.#2016-05-3005:40vmarcinkobecause top level functions are nothing but compositions of low level functions, where each of these low level ones can mutate the db in its own way, so I must actually prevent this (low level fns calling d/transact), and only collect their TX data and execute the transact in the top level one at the end?#2016-05-3005:41vmarcinkomeaning, this requirement substantially affects the style of code structure one is used to when working with sql dbs#2016-05-3005:42hansYes.
and it can be a major obstacle when you're used to dealing with large transactions, because Datomic does not support large transactions.#2016-05-3005:45vmarcinkowhat is meant by a large TX? even sql dbs suggest not making a TX too large#2016-05-3005:45vmarcinkohow many datoms approximately is considered large?#2016-05-3005:45vmarcinkofor a TX#2016-05-3005:46hansTen thousand is large. The sweet spot for transaction size is in the low hundreds of Datoms, I'd say.#2016-05-3005:48vmarcinkoand if I make a TX contain say 30,000 datoms, does it mean the transact will take a bit longer, or will some other thing actually prevent it from executing successfully at the end?#2016-05-3005:48vmarcinkoi'm OK if slowness is the only problem in such rare cases#2016-05-3005:54vmarcinkobecause in sql dbs, i sometimes, rarely, commit 5000 new records, which is roughly the same as 30000 datoms, and it sometimes takes a few seconds, and I'm OK with that
Do datomic users sometimes use a dynamic var to represent this datomic TX data for a single unit of work, thus relieving the low level functions of the burden of adding this TX data to their return value (besides other things)?#2016-05-3006:05vmarcinkoI know this dynamic var approach is not the cleanest from a functional perspective, but just wanting to see if there is some way to keep the old way of structuring code everyone is used to with sql dbs#2016-05-3006:05hansThere are many ways to organize the collection of transaction data, and it is certainly possible to use a dynamic variable for that. I'm not a big fan of dynamic variables (anymore) because they make testing difficult and don't play well with threading.#2016-05-3006:06vmarcinkoare there any docs around about approaches to collecting TX data? I couldn't find any on the web at a quick glance#2016-05-3006:06hansWe collect data that we want in a transaction in a top-level function that transacts it.#2016-05-3006:06vmarcinkook#2016-05-3006:09potetm@vmarcinko: No, you don't want to build up some global state like that.#2016-05-3006:09potetmThat's just asking for a mess.#2016-05-3006:10potetmI'm having trouble seeing why something like
(concat (gen-one-set-of-facts ...)
        (gen-two-set-of-facts ...)
        ...)
wouldn't work.#2016-05-3006:11potetmIf you're worried about transaction size, then you need to break it up, not build it up.#2016-05-3006:14vmarcinko@potetm: Thanx, I understand the benefits of functional approach, it's just that I find functional way sometimes difficult with code modularization... Sturart sierra's component and "dependency injection" it provides is mostly used for that, and I sometimes dunno if some lower level module (component) and its functions are wanting to transact some data, so I guess I should make much of these functions that belong to lower level modules, and can be plugged polymorphically , shoudl return map that has optional :datomic.tx-data key, thus higher level code can check if some tx data from lower level function exists and should be conjoined to global unit of work TX data vector#2016-05-3006:15vmarcinkoand this looks like major influence on code structure#2016-05-3006:16vmarcinkoon simple code bases it is easy to reason with purely functional way, but large codebases sometimes bring problems due to modularization requirement#2016-05-3006:17vmarcinkobecause there can be cases when one impl of some component wants to transact some new data, and some other impl doesn't#2016-05-3006:19potetmRight, so why wouldn't this work:#2016-05-3006:20potetm(defn my-transacting-fn [uri args]
  (d/transact (d/connect uri)
              (concat (ns-1/set-of-facts args)
                      (ns-2/set-of-facts args)
                      (ns-3/set-of-facts args)
                      (ns-4/set-of-facts args))))
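To illustrate the shape of one of these hypothetical module fns (`ns-1` etc. are placeholders from the snippet above, and `:phone/country` is a made-up attribute): a module that has nothing to contribute for a given deployment just returns an empty vector, which vanishes in the `concat`.

```clojure
;; Hypothetical module fn in the style of ns-1/set-of-facts above.
;; :phone/country and the arg keys are made up for illustration.
(defn set-of-facts [{:keys [entity-id country]}]
  (if country
    [[:db/add entity-id :phone/country country]]
    []))  ; nothing to transact for this call

(concat (set-of-facts {:entity-id 17 :country "DE"})
        (set-of-facts {:entity-id 17}))
;; => ([:db/add 17 :phone/country "DE"])
```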
#2016-05-3006:20potetmEach set-of-facts fn can make whatever decision it wants based on args.#2016-05-3006:21potetmIt can return [], it can return [[:db/add 1 :my-attr "val"]]#2016-05-3006:22potetmHow is that not modular?#2016-05-3006:23vmarcinkothis is your top level fn that does transact, right?#2016-05-3006:23potetmYeah, that's the logical unit of work.#2016-05-3006:23hansI guess what @vmarcinko is looking for is a way to make several distinct modules add to a transaction that is then committed at the end of the request, for example.#2016-05-3006:23potetmthat's what that does^#2016-05-3006:23hansWhich is a common pattern with SQL databases.#2016-05-3006:25hans@potetm: Not quite, because you're enforcing a direct call relationship between the request handler and the modules that create data to transact.#2016-05-3006:26hans@potetm: All that @vmarcinko wanted to have confirmation for is that it is common to structure applications to cater for Datomic's requirements, which I'd say can be confirmed. Datomic requires significant architectural support from the application. It is not at all a drop-in replacement for an SQL database.#2016-05-3006:29potetm> I find functional way sometimes difficult with code modularization
I think there's a fundamental misunderstanding about the value referential transparency provides. That's what I'm trying to get after.#2016-05-3006:30hansOkay, but that's no longer something that is Datomic specific.#2016-05-3006:31potetmIf you go make a global mutable var you've thrown away all of the leverage datomic and clojure have to offer.#2016-05-3006:32potetmYou should take the decision that seriously. That's way I'm pushing a bit on this.#2016-05-3006:35hansI don't agree at all. We're really talking about databases, and databases by their nature are about effects. It can certainly not be said that the only proper way to deal with effects is to organize them as a call chain.#2016-05-3006:35vmarcinko@potetm: Thanx, though I know the value f referential transparency and functional way, sometimes I struggle a bit to organize large code bases whcih are modular in their very nature to allow purely functional aproach..SQL dbs don't enforce me to use functional way, while datomic seems to do just that, which as @hans nicely put, it makes datomic not drop-in replacement to SQL dbs in one's code#2016-05-3006:35vmarcinkoeg.#2016-05-3006:36vmarcinkoI have top level fn which calls in other pluggable module function which is called resolve-country (phone number), and this fn returns coutnry code#2016-05-3006:36vmarcinkobut you wouldn't believe, although this polymorphic function doesn't imply to return anything else, there were cases where in some deployments, meaning, other implementations of this functions, we have to register something in db#2016-05-3006:36hansvmarcinko: Instead of a dynamic var, collecting the transaction data in a stream or queue-like structure might be better.#2016-05-3006:37vmarcinkomeaning, one impl of this function shoudl return datomic TX data#2016-05-3006:37vmarcinkowhereas in SQL db, we touch global state (SQL db) and nothing is seen from the caller#2016-05-3006:37vmarcinko@hans ok, thanx#2016-05-3006:38vmarcinkoI don't have any problem with 
datomic pushing some application structure, just that I haven't seen this documented around in contrast to all pervasive SQL dbs#2016-05-3012:01stijnI have a couple of questions about running datomic on google cloud#2016-05-3012:02stijn1/ anyone any experience with running the transactor in google container engine (or aws container service). HA looks possible to me.#2016-05-3012:03stijn2/ is there any chance that Google's Cloud DataStore will ever become a Datomic backend?#2016-05-3013:45caspercI am wondering, is there any way to have Datomic participate in a two-phase commit? I am writing a file to a file store and some metadata in Datomic, but I want to make sure that the transaction doesn’t commit if the push to file storage fails or vice versa (creating an inconsistency between the two).#2016-05-3013:47marshall@casperc: The Reified Transactions video here http://www.datomic.com/videos.html discusses approaches to solving that issue#2016-05-3013:48marshall@vmarcinko: incidentally, that video ^ also touches on solutions to the large transaction issue #2016-05-3013:54casperc@marshall: Thanks, I’ll take a look at that.#2016-05-3013:57hans@casperc: generally, Datomic's transaction model does not blend well with the transaction model that two-phase commit systems usually prescribe.#2016-05-3013:58hans@casperc: as Datomic's transactions are basically determined by the serialized execution of the transaction code in the transactor, you cannot really wait for other transactional systems to commit their work before you do.#2016-05-3013:59hans@casperc: reified transactions can help grouping large operations together, but that is a very different thing from the isolation that traditional transactional systems provide.#2016-05-3014:02casperc@hans: Yeah, I was afraid of that. 
I am thinking something along the lines of reverting the transaction if the file storage fails.#2016-05-3014:03hans@casperc: we're committing to our file storage before we transact in datomic and live with the potential garbage that we accumulate that way.#2016-05-3014:04caspercYeah that is probably a good option#2016-05-3014:05caspercInconsistencies can still happen though, as we need to change files and metadata, so it is not just a missing reference to a file in our case.#2016-05-3014:14bkamphausAnnotate transactions by file or a hash on state of file they refer to, if file store commit fails then retract all tx data for all transactions with that annotation.#2016-05-3014:14bkamphausThat and other strategies can be trivially adapted from the examples in the second half of that video.#2016-05-3014:16caspercCool, I am looking through the video so I’ll keep watching.#2016-05-3014:19bkamphausI would say the semantics in the domain and the other file store's consistency have more to do with the problem than Datomic's constructs.#2016-05-3014:19bkamphausI.e. If you can only transact to Datomic with a push from an event where the file is definitely stored, it's a simple problem.#2016-05-3014:20bkamphausIf you need Datomic to know you tried to commit something elsewhere and then somehow update it when the file is available for sure, and you can only poll after some vector clock time or something to build an expectation that the file is there for good, that's more complicated.#2016-05-3014:23hans@bkamphaus: The problem is not so much whether one can undo something that is not wanted because of an error that occurs later, it is more that the burden of filtering out "uncommitted" data is on the other readers of the database.#2016-05-3014:29bkamphausYep, totally understand that limitation and it's true for other aspects of use as well. I would say it's a trade off of the distributed query/read model that applies to other domains as well (I.e.
filtering for permission to access) #2016-05-3014:31bkamphausDepending on the domain as well it may make more sense to just annotate metadata entities as whether their file backing has been verified.#2016-05-3023:44vinnyataidehello, as I was searching through ways to expose my datomic through graphql I stumbled upon the rest api. Firstly I thought it would expose the models, but I saw that it exposes the system itself, so my question is, is it somehow possible to use it as an complete api or is the graphql the best option for non optionated back end apis?#2016-05-3109:13xsynQuick question, which I think I know the answer to but want to sanity check.#2016-05-3109:14xsynWe’re using Datomic with a DynamoDB back-end, I was just looking at AWS Quicksight, and I imagine that even though we’re using DynamoDB as the store I imagine we would’t be able to just transact directly from DynamoDb into QuickSight because Datomic would obfuscate the data in Dynamo somehow. Is that correct?#2016-05-3111:35jballancxsyn: That's accurate. DynamoDB is just the backing store. 
The data itself is encoded by Datomic.#2016-05-3111:59xsynThanks very much#2016-05-3114:43nwjsmithI've started on spec-ing Datomic queries: https://gist.github.com/nwjsmith/0b87c522cfeba68f1928d48c89adef78#2016-05-3114:43nwjsmithNeed to get generators working, pull syntax, and fix some stuff up around rule-vars#2016-05-3114:43nwjsmithBut it's a start#2016-05-3114:43robert-stuttafordi wonder if Cognitect will be doing the same officially#2016-05-3114:43nwjsmithNot sure if anyone else has started on it yet#2016-05-3114:44robert-stuttafordquery generators is a fascinating idea 🙂#2016-05-3114:44nwjsmith@robert-stuttaford: yeah, would be cool#2016-05-3114:45nwjsmithI'm impressed with how easy it is to translate the grammar from http://docs.datomic.com/query.html#grammar to spec.#2016-05-3114:47nwjsmithMy goal here is to be able to test.check some query-generating code I'm going to write#2016-05-3114:49bkamphausNo promises about the future. 🙂 Though the generic background context is - as with most companies -Cognitect designs and ships things that solve problems we have or our customers have.#2016-05-3114:50bkamphausYou can typically assume that Datomic will take advantage of new Clojure language features (with Clojure version adoption lag) and that Datomic itself will, from time to time, motivate new language features in Clojure.#2016-05-3114:51potetm@bkamphaus: Can you elaborate on the specific problems that drove you to spec?#2016-05-3115:27stuartsierra@potetm: The motivations for clojure.spec are described in the guide http://clojure.org/about/spec#2016-05-3115:32potetmYeah I was curious whether there were specific projects you guys had that pushed this to the fore.#2016-05-3115:33potetmSo I guess when I said "problems" I meant "projects" 🙂#2016-05-3115:36potetm(I know there's only so much you can/will divulge. 
I was just curious, so I thought I'd ask.)#2016-05-3116:02marshallDatomic 0.9.5372 is now available https://groups.google.com/d/msg/datomic/sF5C5mpLCmE/gq29iOxpAgAJ#2016-05-3120:26currentoorIs it true that transactions should not have more than one million datoms in them?#2016-05-3120:37bkamphaus@currentoor: “should not have” is a pretty strong phrasing, but transactions of that size are likely to run into performance issues (best performance size is ~40k or so) — do you have transactions of that size that represent an atomic boundary in the domain?#2016-05-3120:37currentoor@bkamphaus: not in the domain, I just need to migrate some old data#2016-05-3120:39currentoorso this script will just run once and not be user-facing#2016-05-3120:41bkamphaus@currentoor: if it's not atomic in the domain I would break it up into smaller transactions and pipeline it, as per the example here: http://docs.datomic.com/best-practices.html#pipeline-transactions#2016-05-3120:43currentoor@bkamphaus: thanks I'll check it out.#2016-06-0112:30vmarcinkois there some function in datomic to convert TX data maps to a list of datoms?#2016-06-0112:30vmarcinkoin other words, when we define datoms in map form, to convert it to list-of-lists form?#2016-06-0112:34bkamphausNothing provided; the logic is basically an entid, db/add, mapkey, mapval for every mapkey, mapval in the map form (except the ID, of course)#2016-06-0112:35bkamphausI believe someone in here or on the group has pointed to a gist or list post with an example before.#2016-06-0112:51vmarcinko@bkamphaus: thanks... one more question...#2016-06-0112:51vmarcinkoWhen using datomic, in some top level function which should execute atomically, meaning its execution constitutes a unit-of-work#2016-06-0112:52vmarcinkoand this top level fn is calling a few low-level fns which are doing something with the db, but these low level fns are actually just preparing tx-data that the top level fn collects and waits until the very end to transact this collected
tx data (postpone side-effect until the very end)...#2016-06-0112:54vmarcinkoDo you code your functions this way when using datomic in your apps, or do you deal with domain objects represented as maps, transform them in these functions, and then at the very end just transact the whole new state of the domain entity by using some entity->datomic tx data functions? But that means that one doesn't know which fields of these maps were updated, and always transacts the whole entity to datomic (all key/value pairs as datoms)?#2016-06-0112:55vmarcinkoto recap, the 2 approaches are:#2016-06-0112:55vmarcinko1. collect TX data from each db "mutating" low level function, and at the end of the top level fn just transact the whole collection of tx data#2016-06-0112:55vmarcinkoor#2016-06-0112:55vmarcinko2. load entities from the db as clojure maps, transform them during fn execution, and at the end just transact the whole new state of these entities back to datomic, with all keys#2016-06-0112:57vmarcinkoin the second approach we skip the fine-grained approach where you make sure to only transact changed entity keys; we just transact the whole entity state at the end of the top level fn (unit of work)#2016-06-0112:57vmarcinkoI'm a total datomic newb, just started evaluating it in one of my apps, thus the question - maybe the second approach is not even feasible, dunno, just speaking what's in my head right now#2016-06-0113:01vmarcinkoof course, in the second approach, i could at the end of the top level fn compare the final entity state with the initial one, and maybe by diffing these states resolve which keys were changed, construct datomic tx datoms from that, and transact those#2016-06-0115:21mlimotte@vmarcinko I don't know if this is best practice, but in one similar case, I do #2 with DIFF. When I query the db to get the data, I also get the basis-t and store it with the data.
When I get the modified data structure back, I get the basis-t from it, obtain the db at that point and do a nested diff (original vs new). In addition, since I have the basis-t (original value), when I go to update I can use db.fn/cas to assert the prior value (this way I can avoid conflicts if the data was changed due to some other process).#2016-06-0115:53vmarcinko@mlimotte: Thanx, though I think we don't speak about same case. When I say diff, I mean the process of diffing at the end of top level function, just prior to doing single trasnact to datomic, and diff is necessary because there could be few fucntions that this top level function called and that modified some initially loaded entity#2016-06-0115:59vmarcinkoSo you see in example above, this top level function calls some-fn1 and some-fn2 which "change" initially loaded entity state, and at the end of this top level fn, I make diff, and transact this final state of entity#2016-06-0115:59vmarcinkothat's option 2 in my inital question#2016-06-0116:00vmarcinkoand this diff is needed to transact only those attributes that changed#2016-06-0116:01vmarcinkobut diffing is burden, so I could just convert whole final state of entity to datomic datoms, and transact all its attributes, regardless if they were changed during this function#2016-06-0116:01vmarcinkothat would definitely simplify code#2016-06-0116:23nwjsmithI've hit a bump in the road using clojure.spec to specify Pull queries#2016-06-0116:23nwjsmithSpecifically map specifications#2016-06-0116:25nwjsmithLooks like there isn't a way in clojure.spec to say 'here is a spec for keys and a spec for values, give me a spec for a map'.#2016-06-0116:35mlimotte@vmarcinko: I believe we are saying the same thing. Get some entity (in my case a complex nested entity using a pull pattern). It gets edited. Then do a diff on the original vs. the edited structure. This diff is then converted into datomic txns and passed to d/transact.
In my case, I save and then use the basis-t to get the original data... in your case, looks like it all happens in the same thread, so you still have the original data and don't need to use basis-t, so a little simpler than my case.
I can tell you it's feasible... up to you if the diff is worth the effort.#2016-06-0117:23jdkealyHi guys, is there a way for me to bulk import large transactions without tying up my transactor? I just tried importing 2M records using transact-async and my transactor was unresponsive for up to 5 minutes.#2016-06-0117:29marshall@jdkealy: You can set up a pipeline import and tune the number of concurrent transactions to find a balance between import speed and availability of the transactor: http://docs.datomic.com/best-practices.html#pipeline-transactions#2016-06-0117:35jdkealythanks @marshall... out of curiosity, can you pay to scale this kind of thing? Since I'm doing this on my localhost, would I get 3x the availability + speed if I paid for 3 processes in production?#2016-06-0117:36jdkealysorry.. i mean 5 processes#2016-06-0117:38marshall@jdkealy: Datomic serializes all transactions to maintain ACID semantics. A given Datomic database is only written to from a single transactor process.
The additional processes available in larger licenses would be peers, not transactors, and allow horizontal scaling of read/query, not of writes.#2016-06-0118:29nwjsmithIf I've found an issue in the documentation (I think I've found an inaccuracy in the Query grammar), where would I report it?#2016-06-0118:32marshall@nwjsmith: You can email it to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> or <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>#2016-06-0118:32nwjsmith@marshall: thanks!#2016-06-0213:15jgdavey@jdkealy: If you’re doing a bulk import for a brand new database, I’d recommend backup/restore. That is, run the import locally, backup the db, restore to prod.#2016-06-0213:16jgdaveyOnly works for fresh dbs though. I collected a few of my other thoughts about doing bulk imports here: https://hashrocket.com/blog/posts/bulk-imports-with-datomic#2016-06-0217:22bvulpesare there plans to migrate datomic's time representations to the java 8 time api?#2016-06-0218:09bkamphaus@bvulpes: nothing concrete at present.#2016-06-0218:10bvulpesnbd, just curious.#2016-06-0310:29bendyhey, I’m very new to datomic, and I’m trying to set up an auto incrementing field (I need a serial id for reference). I get the general idea of database functions, but how can I set up a field to be auto incrementing? Can anyone point me to a beginner friendly resource? 😄#2016-06-0312:41viniciushanadepending on the constraints you need for this number, perhaps the db/id itself may suffice#2016-06-0312:43viniciushanayou could keep an entity to control the sequential generation, always keeping the current value, and calling inc in a transaction fn alongside with persisted the incremented value at the entity using it (and also updating the value at the control entity).#2016-06-0312:43pesterhazya function I use to generate order numbers: (let [new-number (d/q ' [:find (max ?number) . 
:in $ :where [_ :order/number ?number]] db)] [{:db/id eid, :order/number (inc (or new-number 0))}])#2016-06-0312:44viniciushanamy experience with using max for high insertion volumes is not so good 😕#2016-06-0312:45viniciushanahaving a control entity helped tremendously - although it’s not sharding-proof, for this case we plan on using zookeeper.#2016-06-0312:47pesterhazy@viniciushana: shouldn't that attribute be indexed so max would be pretty instant?#2016-06-0312:49pesterhazy@bendy, the idea would be to transact a transaction that uses this fn, rather than setting :order/number directly#2016-06-0312:49viniciushanaeven indexed, for hundreds of thousands of entities a day, it takes more than 20 seconds to run a max#2016-06-0312:49viniciushanawe keep those sequentials unique#2016-06-0312:50pesterhazythe other option is to remember the highest order number in an entity somewhere#2016-06-0312:54viniciushanayep, and I also recommend to keep it accessible by a :db/ident to ease retrieving it#2016-06-0313:00pesterhazythat's what I would do too#2016-06-0313:18viniciushana@bendy so in practical terms you’ll want the transactor fn to be like:
1 - lookup the control entity
2 - call inc on the attribute you keep as a sequential control
3 - return the tx vector asserting the business entity with the incremented sequential + a db.add for asserting the incremented sequential attribute at the control entity#2016-06-0313:19bendyjust to make sure, this is what I’ve come up with so far - is this what you’re suggesting?#2016-06-0313:20bendy{:db/id #db/id[:db.part/db]
 :db/ident :invoice/next
 :db/valueType :db.type/long
 :db/cardinality :db.cardinality/one
 :db/doc "Next invoice number"
 :db/noHistory true
 :db.install/_attribute :db.part/db}
{:db/id #db/id [:db.part/user]
 :invoice/next 1}
{:db/id #db/id [:db.part/user]
 :db/ident :number
 :db/doc "Function that returns the next id for an invoice."
 :db/fn #db/fn {:lang "clojure"
                :params [db id]
                :code (let [[e n] (datomic.api/q '[:find [?e ?n]
                                                   :where [?e :invoice/next ?n]]
                                                 db)]
                        [[:db/add id :invoice/number n]
                         [:db/add e :invoice/next (inc n)]])}}
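For reference, a counter function like the `:number` one above is exercised by calling it by ident inside tx-data; a sketch, assuming `conn` is a connection that already has the schema, the seed entity, and the function installed:

```clojure
(require '[datomic.api :as d])

;; Sketch: the transactor passes the current db value as the first
;; argument of the function, and transactions are serialized, so the
;; read-and-increment of :invoice/next cannot hand out duplicates.
(let [invoice (d/tempid :db.part/user)]
  @(d/transact conn [[:number invoice]]))
```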
#2016-06-0313:20bendy(don’t worry this isn’t for a real billing system)#2016-06-0313:21bendyI also ask because I’m getting an error and I can’t seem to figure out why… still debugging#2016-06-0313:24bendyah… well the issue is with my query, apparently you can’t do a :find ?a ?b ?c . 😁#2016-06-0313:24bendyanyways, I think I’m doing what your recommending, so thanks for the confirmation!#2016-06-0313:24bendydidn’t know whether this was the right track or not#2016-06-0313:24bkamphausyou can put them in [?a ?b ?c] . (omit the . actually)#2016-06-0313:25bendyoh ok, that gave me a null pointer exception#2016-06-0313:25bendyI guess there also another bug somewhere else!#2016-06-0313:25bendydon’t mind me haha#2016-06-0313:26bkamphauswait one second#2016-06-0313:26bkamphausno ., but you do want find single tuple from here: http://docs.datomic.com/query.html#find-specifications#2016-06-0313:27bendyah, bingo!#2016-06-0313:27bendythanks#2016-06-0313:46stijnas soon as I add datomic as a dependency to the peer, logging in other parts of the application starts to break. I have configured leiningen as instructed here http://docs.datomic.com/configuring-logging.html#2016-06-0313:46stijnbut e.g. ExceptionInfo doesn't get logged with the data printed#2016-06-0313:46stijnwithout datomic
(log/error (ex-info "oo" {:a 1 :b 3}) "aaa")
Jun 03, 2016 3:32:40 PM clojure.tools.logging$eval420$fn__425 invoke
SEVERE: aaa
clojure.lang.ExceptionInfo: oo {:a 1, :b 3}
at clojure.core$ex_info.invokeStatic(core.clj:4617)
at clojure.core$ex_info.invoke(core.clj:4617)
#2016-06-0313:47stijnwith datomic & logback
(log/error (ex-info "oo" {:a 1 :b 3}) "aaa")
=> nil
19576820 2016-06-03 15:39:40,202 [nREPL-worker-1] ERROR user - aaa
- clojure.lang.ExceptionInfo: oo
at clojure.core$ex_info.invokeStatic(core.clj:4617)
at clojure.core$ex_info.invoke(core.clj:4617)
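The cause (which stuartsierra explains below) is visible at a plain REPL: `ExceptionInfo` carries the data map in its `toString` but not in its `getMessage`, and logback's default throwable rendering uses only `getMessage` plus the stack trace.

```clojure
(let [e (ex-info "oo" {:a 1 :b 3})]
  [(.getMessage e)  ; what logback's default converter prints
   (str e)])        ; what the java.util.logging output above shows
;; => ["oo" "clojure.lang.ExceptionInfo: oo {:a 1, :b 3}"]
```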
#2016-06-0313:47stijnlet's say I have zero experience with JVM style logging 🙂#2016-06-0313:53stijnIt is a problem with logback (pattern: %date{ISO8601} [%thread] %-5level %logger{36} - %msg%n %ex{full}) But why does a lein repl without logback print it properly then?#2016-06-0313:56bendygot everything working, updated my snippet with the working code, thanks everyone for your help!#2016-06-0317:56stuartsierraI know that clojure.lang.ExceptionInfo includes data in its toString implementation but not its getMessage.#2016-06-0318:17stuartsierraIt looks like Logback only uses getMessage and the stack trace in its default exception printer.#2016-06-0318:18stuartsierra@stijn: Whatever logging framework is active in your standalone REPL (maybe java.util.logging) must be printing the ExceptionInfo with toString.#2016-06-0318:21stijn@stuartsierra: thanks I'll try with another logging backend, or get logback to use toString#2016-06-0319:39stijngot it to work with logback by getting the formatted message, which displays exceptions the same way as a standard leiningen repl#2016-06-0509:04madvasHow can I pass collection of referenced subitems inside where query. for example like this
(defn make-decision [subitems]
;; subitems is only 1 ID not collection of ids
true)
(d/q '[:find ?item-name
:where
[?i :item/name ?item-name]
[?i :item/subitems ?s]
[(myns.sth/make-decision ?s)]] db)
#2016-06-0514:33bhaganyIf I understand you correctly, are you looking for something like:
(d/q '[:find ?item-name
:in $ [?i ...]
:where
[?i :item/name ?item-name]
[?i :item/subitems ?s]
[(myns.sth/make-decision ?s)]] db subitems)
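bhagany's eventual suggestion in this thread, querying without the last :where clause and applying the decision function on the peer, might look roughly like this (a sketch; `make-decision` and the attribute names are taken from madvas's example above):

```clojure
;; assumes (require '[datomic.api :as d])
;; Query only for each item's name and the set of its subitem ids,
;; then filter in plain Clojure on the peer, where the data is local.
(->> (d/q '[:find ?item-name (distinct ?s)
            :where
            [?i :item/name ?item-name]
            [?i :item/subitems ?s]]
          db)
     (filter (fn [[_ subitem-ids]]
               (make-decision subitem-ids)))
     (map first))
```

This way `make-decision` receives the whole collection of subitem ids at once, instead of being invoked once per subitem binding inside the query.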
#2016-06-0514:35bhaganyshould have tagged you: @madvas ^^#2016-06-0514:38bhaganyah, I think I understand better now - my post isn't what you're looking for. I shouldn't try to help pre-coffee...#2016-06-0514:41bhaganymy second impression is that I wouldn't try to do this in a query. All of the data is local to the peer anyway, so I think I'd make the query without the last where clause, and pass the results of the query to make-decision#2016-06-0514:46madvas@bhagany: Thank you. I figured out that what I want is probably undoable in one query, but that’s okay#2016-06-0515:14jannisQuick question because I haven't found any information on it so far: is the tx-report-queue available for in-memory dbs? I use an in-memory Datomic db for testing but even though my calls to d/transact succeed, I get no transactions reported from the queue.#2016-06-0515:18hansit is not, and it is somewhere in the documentation.#2016-06-0515:32jannisOk. I was aware of the log being unavailable but not the report queue. Thanks 🙂#2016-06-0515:49jannisActually... I think that's wrong. tx-report-queue works now, even with an in-memory db. So it's only the log then.#2016-06-0516:05hansoops, sorry for spreading wrong info.#2016-06-0516:05hansit was the log that was mentioned.#2016-06-0601:30adamkowalskihey if you use the dev transactor where is the actual data stored on your filesystem?#2016-06-0601:38marshall@adamkowalski: the default location is a directory named data at the top level of the Datomic distribution. You can change the location in the transactor properties file#2016-06-0623:57kennyIf you have an arbitrarily nested set of refs, what is the best way to query for the parent? For example, take a set of facts that looks like below.
{:label "parent"
:props [{:type :type1
:content [{:label "Label1"
:content [{:type :type2
:content [{:db/id 1234
:type :type3
:label "Label2"
:content [{:type :type3
:key "Key1"}]}
{:label "Label4"
:content [{:type :type4
:key "Key2"}]}]}]}]}]}
Say I have the :db/id 1234 for the nested fact in the data structure above. What is the best way to query for the parent entity? Or would it be better to just attach a parent id to all necessary children?#2016-06-0706:32hansIt is better to attach the parent to the children rather than the children to the parent because that enables referencing in both directions.#2016-06-0706:34hansi.e. zou can then select the children using the inversion syntax (:_<attribute>) in pull expressions. also, if a child needs to be removed, the parent entity does not need to be updated.#2016-06-0706:57bvulpeshans you can walk refs both direction#2016-06-0706:57bvulpesnxqd: i use it for basically all state storage#2016-06-0707:00hansbvulpes: sure. but with an 1:m relationship, it is better to put the "1" into the "m"s rather than the other way round.#2016-06-0707:03bvulpesi suppose it depends on how frequently you'll be walking which direction, and how much you like typing underscores#2016-06-0707:03bvulpesbut that's kind of glib#2016-06-0707:06bhaganyI agree it depends a lot on your use case. I do the opposite of what hans recommends, because I need multiple parents pointing to the same children, without modifying the children#2016-06-0707:07bhaganyI also end up typing a lot of underscores, though#2016-06-0711:14dm3is it true that Tx size can affect query performance?#2016-06-0711:14dm3found this thread from 2013 which seems to indicate that: https://groups.google.com/forum/#!searchin/datomic/performance/datomic/ijZV-PKTCQc/-yGSEViwJUUJ#2016-06-0711:15dm3suggests it's best to keep Txs in 1-45kb range. However one would think this only matters for writing transactions#2016-06-0714:46jonpitherhi - do people run the transactor in a docker image?#2016-06-0714:46jonpitherfor dev purposes#2016-06-0715:09marshall@jonpither: Hey Jon. Yes, I know several folks run Datomic in docker containers. Not sure about dev vs. production, though.
The guys at PointSlope had a blog post with a basic rundown of running the transactor in a docker container: https://pointslope.com/blog/datomic-pro-starter-edition-in-15-minutes-with-docker/#2016-06-0716:12nwjsmithIn the docs for the query grammar, the grammar for inputs is ':in' (src-var | variable | pattern-var | rules-var)+, but shouldn't it be ':in' (src-var | variable | pattern-var | rules-var | binding)+?#2016-06-0716:13nwjsmithinputs can specify bindings, right? (http://docs.datomic.com/query.html#sec-5-7)#2016-06-0716:17jonpitherthanks @marshall#2016-06-0815:36ckarlseniirc someone here posted some cassandra keyspace optimizations or something for datomic a couple of months ago. mind sharing them again? 🙂#2016-06-0905:27peterromfeldjust a initial commit, and by far not finished. But would like some input:
https://github.com/peterromfeldhk/datomic-schema#2016-06-0905:29peterromfeldbtw throw exceptions and lazyiness is a bit painfull#2016-06-0905:30peterromfeldhow to be able to still catch ex-data, without dropping lazyness? is it possible?#2016-06-0905:37peterromfeld(i was working as sys admin and devops before, so this is my first lib since i entered developent! Im super thankful for any input and help!)#2016-06-0906:19peterromfeldneed to make it more clear what i mean without constraints compared to other libs with example, but too tired now. will do over weekend#2016-06-0906:21peterromfeldits mostly about that a entity is not a table, and you might be open that certain attribute could belong to one or another entity#2016-06-0906:21peterromfeldgithub example#2016-06-0906:21peterromfelda user can have repos#2016-06-0906:22peterromfeldbut also an org can have repos#2016-06-0906:22peterromfeldbut when you lookup you might dont want to care if its a user or org entity, but you still wanna know the associated repos#2016-06-0906:23peterromfeldso you have a entid(user or org) and want to get repos for it, without caring if its a user or org entity#2016-06-0906:24hanspeterromfeld: we have such a library as well, seems like many people do#2016-06-0906:24peterromfeldyou could still use or, but why constrain yourself?#2016-06-0906:25hanspeterromfeld: one comment: why use refs for enums? we're moving over to keywords because they're much easier to work with.#2016-06-0906:25peterromfelda enum is just a ref to a ident#2016-06-0906:25peterromfeldwe had start problems with datomic because we used ADI at first#2016-06-0906:25peterromfeldand almost all devs got missleaded how datomic schema works#2016-06-0906:26hanspeterromfeld: i know. 
yet, as there is no type checking anyway and a ref attribute can take any value, we don't see why one not want to use keywords instead.#2016-06-0906:26hanspeterromfeld: funny, we used and abandoned adi as well.#2016-06-0906:26peterromfeldi dont know the difference of ref datoms to keyword datoms#2016-06-0906:27peterromfeldbut a ref to the same ident may be more effective then a new keyword for every entity#2016-06-0906:27hans"effective" in what way?#2016-06-0906:28peterromfeldi dont know whats the difference between a ref to same datom, and a new datom for every keyword for a entity#2016-06-0906:28peterromfeldin the end it may be the same, you have a ref datom or a keyword datom#2016-06-0906:29peterromfeldbut i read that ident enums more effective then keyword datoms#2016-06-0906:29hansnot quite, because a keyword is stored as a string.#2016-06-0906:29hansyou probably mean efficient. i'd like to know more.#2016-06-0906:30hansanyway, thank you for sharing your library. i hope that at some point, there will be a unified schema syntax for datomic#2016-06-0906:30peterromfeldits not a lib yet, didnt published it yet 🙂#2016-06-0906:30peterromfeldi would love that we could build it out a bit together#2016-06-0906:31peterromfeldbut i have to say, i have to deal much with afterdamage done from ADI#2016-06-0906:32peterromfeldmany still dont understand datomic#2016-06-0906:32hansit is hard to understand, and there are many ways to use it.#2016-06-0906:32peterromfeldand its was very bad discussion to use for initial learning#2016-06-0906:33peterromfeldbetter go forward a bit slower, but at least you understand what you working with#2016-06-0906:33peterromfeldnow we need to refactor soo much..#2016-06-0906:35peterromfeldanyway, i would love if we could build out my initial idea (which is most likly very similiar to many others)#2016-06-0906:35hanswe've kind of succumbed to not creating any abstraction layer over datomic and letting it bleed into our application code in various 
ways.#2016-06-0906:36hanswe cannot share our library for legal reasons. i don't like it, but that's how it is.#2016-06-0906:36peterromfeldyou can still have a ds namespace which takes care of it#2016-06-0906:36peterromfeldand if you decide to change you only need to change there and in subns#2016-06-0906:38peterromfeldalways keep ds stuff in one place! you never know if you wanna change... easier to refactor in one ns + subns then all over the place#2016-06-0906:39peterromfeldeven with good IDE#2016-06-0906:40peterromfeldyou dont know, if/when you leave that next people wanna move forward#2016-06-0906:40peterromfeldyou always develop legacy of tomorrow#2016-06-0906:40peterromfeldand needto make it easy to throw away#2016-06-0906:41peterromfeldsry im a bit drunk and too talk active 😉#2016-06-0906:47peterromfeldanyways if you have inputs, just put it into issues 🙂 i welcome every input#2016-06-0911:22mtgredIs there any graphical data modeling tool that generates Datomic schema?#2016-06-0912:57danielstocktonwhy can some attributes be looked up and not others? (d/attribute (d/db conn) :db/add) => nil (d/attribute (d/db conn) :db/unique) => 42#2016-06-0913:00stuartsierra@danielstockton: :db/add is not an attribute. It is a special form in transaction data.#2016-06-0913:02danielstocktonit looks like it is an attribute, at least there is an entity with this ident:
1
([:db/ident :db/add]
[:db/doc
"Primitive assertion. All transactions eventually reduce to a collection of primitive assertions and retractions of facts, e.g. [:db/add fred :age 42]."]),#2016-06-0913:02danielstocktonwith entityid 1#2016-06-0913:09danielstocktoni also wonder what 'installing' an attribute really does and why it isn't allowed to just use anything with a :db/ident as an attribute#2016-06-0913:14danielstockton(d/entid (d/db conn) :db/add) => 1 there seems to be something special that makes something an attribute but i don't really see what that is#2016-06-0913:42jannisAre there any decent examples on querying tx data out there?#2016-06-0913:57marshall@jannis: This blog post has an example of querying provenance data https://www.jayway.com/2012/06/27/finding-out-who-changed-what-with-datomic/#2016-06-0913:58marshallThis example also queries for the transaction entity to identify a user who changed a value: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/provenance.clj#2016-06-0914:04jannis@marshall: I'm specifically interested in this case: http://docs.datomic.com/transactions.html#sec-5-3#2016-06-0914:05jannisWhere you have the db after and the datoms from the transaction and want to query the db starting with one or more matching datoms in the tx data.#2016-06-0914:08jannisUnfortunately, with the input datoms you don't seem to be able to use [?e :some/attr ?v2] to match attributes. You have to do something like [?a :db/ident :some/attr] to match the attribute separately. I guess I don't understand why that is yet.#2016-06-0914:15marshall@jannis: if you look at the datoms that are returned, they are things like:
[#datom[13194139534313 50 #inst "2016-06-09T14:13:57.997-00:00" 13194139534313 true]]
where the attribute is a number (50 in this case)
That is the entity ID of the attribute definition in the schema#2016-06-0914:15jannisI suppose the reason is that the ?as are IDs of attribute entities in the db partition and not keywords.#2016-06-0914:15jannisYeah#2016-06-0914:15marshallyep, there you go#2016-06-0914:16jannisI wonder what the most elegant way to query just a particular transaction is if one already has the tx-data available, except in a slightly less useful form.#2016-06-0914:16marshallso providing the database as a source allows you to look up the entity ID of the ?a
and resolve it to the :db/ident
keyword#2016-06-0914:17marshallYou might want to look at the Log API if you’re looking for info about a specific transaction: http://docs.datomic.com/log.html#2016-06-0914:18jannisWell, I already have the info by receiving the transaction data from the log (via onyx-datomic).#2016-06-0914:22jannisWhen I do [?a :db/ident :command/data] then ?v will give me the ID of the entity that the :command/data attribute points to in the transaction. Now my only remaining problem is how to query information about that entity. [?v :data/some-attr ?v2] should give me - in ?v2 the value of :data/some-attr in the entity ?v, right?#2016-06-0914:23jannisI'm not an experienced user of datomic terminology, so I hope this description isn't too weird.#2016-06-0914:34marshallYes I believe that’s right#2016-06-0914:34marshall@jannis: ^#2016-06-0914:34marshallassuming I understand your question#2016-06-0914:34jannis🙂#2016-06-0914:34jannisAnd that's what isn't working. It's almost as if it can't match the input datoms and the db in the same rule.#2016-06-0914:34marshallHow are you binding ?v#2016-06-0914:35marshallto the entity ID you’re interested in#2016-06-0914:35jannis?v comes from the tx datoms, just like in the example on http://docs.datomic.com/transactions.html#sec-5-3#2016-06-0914:35marshallah, ok. Then in that case it would only be an entity ID if the added datom was a reference#2016-06-0914:36marshallso, in the example datom I posted above - the ?v there would be the inst#2016-06-0914:37jannisQUERY [:find ?v :in $ [[?e ?a ?v ?tx ?added]] :where [?a :db/ident :command/data]]
> #object[java.util.HashSet 0x2071c184 [[285873023223181]]]
#2016-06-0914:38jannisSo ?v is the entity ID of the entity refered to by the matching [?e :command/data ?v] datom in my tx data.#2016-06-0914:38marshallRight, in that case you’re getting all the values of ?v that are passed in as input#2016-06-0914:38jannisMhm. So far so good. 🙂#2016-06-0914:39jannisNow what I really want though is query an attribute of that entity, e.g. by adding a rule like [?v :data/some-attr ?v2] and than return ?v2.#2016-06-0914:40danielstockton@jannis to your earlier question, you should be able to do [?e :some/attr ?v2] to match attributes but it doesn't bind it to anything. If you want to bind it, you need the ?a symbol and it gets bound to the entityid of the attribute so you then need to join that with :db/ident to the actual keyword value#2016-06-0914:42jannisQUERY [:find ?v2 :in $ [[?e ?a ?v ?tx ?added]] :where [?a :db/ident :command/data] [?v :project.create/project ?v2]]
> #object[java.util.HashSet 0x7008a6b8 []]
#2016-06-0914:42jannisvs. the following query against the db directly (which works):
boot.user => (datomic.api/q '[:find ?v2 :in $ :where [?c :command/data ?d] [?d :project.create/project ?v2]] (datomic.api/db (:conn (:datomic system))))
#{[1]}
#2016-06-0914:44marshalltry replacing [?v :project.create/project ?v2] with [?v ?a ?v2]#2016-06-0914:45marshalloh, hang on, you’ve got two levels of reference here#2016-06-0914:45jannisYes 🙂#2016-06-0914:46marshallso, the same thing is happening with the project.create/project#2016-06-0914:46marshallas with the command/data#2016-06-0914:46marshallyou need to resolve the keyword attribute ID to an entity ID if you’re going to query it that way#2016-06-0914:47marshallalternatively, and possibly more simply, I’d do two separate queries#2016-06-0914:47marshallsince all query happens in process in Datomic there’s not the traditional client/server type of need to jam all the logic into a single query#2016-06-0914:48marshallyou can run the first query to get the entity IDs of the intermediate entites, then a second query to populate the values from those#2016-06-0914:48jannisTrue. Run the first against the datoms and the second only against the db.#2016-06-0914:49marshallespecially if it’s data that was just transacted, that will all be in the peer’s local cache#2016-06-0914:49jannisHowever, since it's such a simple chain of matching rules with only two levels of reference, this should easily be doable in a single query I'm thinking.#2016-06-0914:49jannisAh, that's certainly good to know 🙂#2016-06-0915:05jannisOdd. A separate query against db-after, after extracing the entity ID of the first reference, fails:
VALUE FOR ?d 285873023223262
QUERY [:find (pull ?e [*]) . :in $ ?d :where [?d :project.create/project ?e]]
> nil
But running the query against the latest db of the connection separately works:
boot.user => (datomic.api/q '[:find (pull ?e [*]) . :in $ ?d :where [?d :project.create/project ?e]] (datomic.api/db (:conn (:datomic system))) 285873023223262)
{:db/id 1, :db/ident :db/add, :db/doc "Primitive assertion. All transactions eventually reduce to a collection of primitive assertions and retractions of facts, e.g. [:db/add fred :age 42].", :project/name "New project 1"}
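The two-query approach marshall recommends can be sketched as follows (attribute names come from jannis's example; `tx-data` stands for the datoms received from the log):

```clojure
;; assumes (require '[datomic.api :as d])
;; Step 1: query the raw tx datoms, where attributes appear as
;; entity ids, resolving the attribute id against the db.
(let [data-eid (d/q '[:find ?v .
                      :in $ [[?e ?a ?v ?tx ?added]]
                      :where [?a :db/ident :command/data]]
                    db tx-data)]
  ;; Step 2: an ordinary query against the same db, where keyword
  ;; attributes resolve as usual.
  (d/q '[:find ?v2 .
         :in $ ?d
         :where [?d :project.create/project ?v2]]
       db data-eid))
```

As the thread shows, `db` here has to be at least as recent as the transaction (the db-after value or a fresh `(d/db conn)`), or the second query finds nothing.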
#2016-06-0915:07jannisOh.#2016-06-0915:09jannisThe onyx-datomic inject-db-calls lifecycle helper must be injecting an old version of the db into the function that runs the first query (basis-t = 1095 vs. 1503 in the second case).#2016-06-0915:10jannisNo wonder my query over db + tx datoms couldn't resolve references any further - they are probably missing from that old db.#2016-06-0915:17jannisThere we go. Using inject-conn-calls instead and then using the latest db for the queries makes it all work.#2016-06-0915:21marshallAh. well, that makes a lot more sense 🙂#2016-06-0915:23jannis@marshall: I still can't get the single query to work but with the two separate queries you recommended it works.#2016-06-0915:23jannisThat'll do for now, thanks 🙂#2016-06-0916:26marshall@jannis: I usually prefer the single query approach, but you could also do a nested query if necessary: https://groups.google.com/d/msg/datomic/5849yVrza2M/31--4xcdxOMJ#2016-06-0916:45achannot sure if this is the place to ask for help but i am having some trouble with connecting to transactor from a remote peer#2016-06-0917:02hansit can happen when you're specifying a host name in the transactor.properties file which is not resolvable by clients.#2016-06-0917:03hansor if you use "localhost" there.#2016-06-0917:03hans@achan: ^#2016-06-0918:53achani can ssh by the ip address. 
Can i put the ip address in the transactor.properties?#2016-06-0918:54marshall@achan: Yes, the hostname value in the properties file should be something that the peers can resolve to the transactor machine.#2016-06-0919:02achanthis is how i tried to connect:#2016-06-0919:02achan(def uri "datomic:sql://merckx?jdbc:postgresql://gmahg0ppav8v6d.cbv0gfpvcvqy.us-west-2.rds.amazonaws.com:5432/datomic?user=datomic&password=datomic0")#2016-06-0919:02achan(d/create-database uri)#2016-06-0919:02achangot error: HornetQNotConnectedException HQ119007: Cannot connect to server(s). Tried with all available servers. org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:906)#2016-06-0919:03achani think i am missing something#2016-06-0919:03achanhow does my peer know where the transactor is when my uri for datomic specifies only the RDS endpoint?#2016-06-0919:04marshallThe peer reads the transactor location from storage. http://docs.datomic.com/deployment.html#getting-connected#2016-06-0919:05marshallDo you have a transactor up and running against that URI and does the machine you’re attempting to connect from have proper AWS credentials to access that RDS table?#2016-06-0919:06marshallAnd to access the transactor machine#2016-06-0919:06achanyes the transactor is up and running against that URI#2016-06-0919:07marshalland, as @hans mentioned, the address of the transactor needs to be correct in the transactor properties file.#2016-06-0919:07achani tested with psql command to connect to RDS from my machine and passed#2016-06-0919:07marshallThen, yes, it’s most likely the host value in the properties file#2016-06-0919:08achanmy properties file looks like this:#2016-06-0919:08achanprotocol=sql
host=ip-10-16-25-24
port=4334#2016-06-0919:09achanOne thing is after i built the EC2. It couldn't resolve the hostname. So i had to add "ip-10-16-25-24" to /etc/hosts file#2016-06-0919:09achanthen Peer from the same EC2 can connected to it#2016-06-0919:09marshallhost needs to be something that your remote peer can use to connect to the transactor#2016-06-0919:10achani can connect to the EC2 from my machine.#2016-06-0919:10achan4334 port is open#2016-06-0919:10marshallalso, if you’re using the provided CF template and you’re trying to connect from a non-EC2 instance, you need to set aws-ingress-cidrs=0.0.0.0/0
#2016-06-0919:11marshallhttp://docs.datomic.com/aws.html#create-cloudformation-template#2016-06-0919:11achanno i didn't build it with CF template because of some permission error#2016-06-0919:11achanso i built it by unzipping the file.#2016-06-0919:11achandatomic#2016-06-0919:11hans@achan: can you "telnet ip-10-16-25-24 4334" from your client machine?#2016-06-0919:12achanummm....no#2016-06-0919:12marshallthen you need to set the value of host to something you can reach from the client machine#2016-06-0919:12marshallor add it to your /etc/hosts#2016-06-0919:12achanis there a way for me to set aws-ingress-cidrs=0.0.0.0/0 without cloudformation?#2016-06-0919:13achanok, i will try that out. thank you so much.#2016-06-0919:16marshall@achan: you should be able to set inbound rules from the EC2 instance security group settings: http://docs.datomic.com/aws.html#aws-management-console#2016-06-0919:17achangreat. that is what i need! thanks#2016-06-1003:00danielwoelfelDoes anybody else run Datomic on FreeBSD? About to upgrade from openjdk7->openjdk8 and was curious if there are any known problems.#2016-06-1012:13mitchelkuijpers(d/q '[:find [?e ...]
:in $ ?host-id [?ignore-keys ...]
:where
[?e :host/belongs-to ?host-id]
[(!= ?e ?ignore-keys)]]
(d/db (user/get-connection))
:company/name
(:db/id (user/get-tenant))
[17592186104562 17592186081714 17592186098554])
Does anyone know why [(!= ?e ?ignore-keys)] does not work? it works when I change != to =#2016-06-1012:13mitchelkuijpersOr is the problem that I try to filter by id's#2016-06-1016:39rwtnortonI see the same result. If I pass in ?ignore-keys whole (`:in $ ?host-id ?ignore-keys`) and look for ?e missing from ?ignore-keys via homegrown predicate (`(defn nadda [e vs] (-> vs set (get e) not))`), I get the expected behavior, using [(#'datomic.fun/nadda ?e ?ignore-keys)].#2016-06-1017:06bhaganyI'm not sure why it doesn't work, but in this case I'd just leave that where clause out, and filter the result of the query. The data's all local.#2016-06-1017:06bhaganyunless you're using the rest api#2016-06-1017:07bhaganyI'm all about encouraging people to remember the data is local, because I'm using the rest api, and I'm constantly wishing it were true for me.#2016-06-1018:19stuartsierraNegation in Datalog is tricky.#2016-06-1018:20stuartsierra[(!= ?e ?ignore-keys)] will be evaluated as a predicate.#2016-06-1018:21stuartsierraThat means, for all bindings of ?e and ?ignore-keys established so far in the query, it will evaluate the expression (!= ?e ?ignore-keys) and remove any bindings for which that expression returns false.#2016-06-1018:26stuartsierraThe bindings include every possible combination of ?e and ?ignore-keys, so the != predicate will return true at least once for every ?e.#2016-06-1019:13pheuteris it possible to parameterize the pull query like so?
(d/q '[:find (pull ?e ?query) :in $ ?query :where [?e …]] db query)
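Returning to the `!=` question above: consistent with stuartsierra's explanation, one fix is to pass the ids as a single set rather than as a collection binding, so membership is tested once per `?e` instead of once per combination. A sketch against mitchelkuijpers's query:

```clojure
;; assumes (require '[datomic.api :as d])
;; ?ignore-set is bound to one value (a set), not unified per element,
;; so the membership test behaves like the homegrown predicate above.
(d/q '[:find [?e ...]
       :in $ ?host-id ?ignore-set
       :where
       [?e :host/belongs-to ?host-id]
       [(contains? ?ignore-set ?e) ?ignored]
       [(false? ?ignored)]]
     db
     host-id
     #{17592186104562 17592186081714 17592186098554})
```

The binding names are taken from the original query; `db` and `host-id` stand for the connection's current db value and the tenant id.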
#2016-06-1019:25pheuternvm, got it! https://github.com/Datomic/day-of-datomic/blob/master/tutorial/pull.clj#L110#2016-06-1021:52petterikuse not= instead of !=?#2016-06-1118:50raymcdermottI was playing with Datomic Pro metrics (using a Clojure callback). It seems not to be supported in the free edition although that’s not clear from the documentation. Has anyone managed to get that working or is it a known limitation?#2016-06-1118:51raymcdermott[It reads like it’s just the CloudWatch aspect which is only available to the Pro edition]#2016-06-1119:41raymcdermott[ there is a some information in the log but this is obviously trickier to work with 😉 ]#2016-06-1309:18danielstocktonis it right that attributes are idents and therefore are stored in memory in every peer? is this why the recommended limit is 1000000?#2016-06-1313:19potetm@danielstockton: That is my understanding.#2016-06-1313:22danielstocktonReading the docs a bit more, it's a bit more clear: Idents should be used for two purposes: to name schema entities and to implement enumerated tags.#2016-06-1313:22danielstocktonWhat I'm calling an attribute is actually an ident to a schema entity (attribute)#2016-06-1313:24danielstocktonI wonder if you can create schema entities which don't have an ident and refer to them by id#2016-06-1313:34marshall@danielstockton: the ident is a required element of a schema entity (attrib definition): http://docs.datomic.com/schema.html#required-schema-attributes#2016-06-1313:35danielstocktonah yes, so it is#2016-06-1313:44danielstocktoni'm trying to get my head around the point of installing an attribute and why this is useful
i understand why a schema entity must have particular attributes but not this one:
:db.install/_attribute :db.part/db#2016-06-1313:44danielstocktoncan't the partition be inferred from the tempid anyway?#2016-06-1313:52rauh@danielstockton: Not an expert, but there is also :db.alter/_attribute so it doesn't have to be :db.install/_attribute#2016-06-1313:56danielstocktoni guess that could also be inferred, if the ident already exists#2016-06-1313:56danielstocktonim just wondering if there is any leverage to be had from this or its just an implementation detail#2016-06-1314:17curtosis@danielstockton: not an expert, but as I understand it the purpose of :db.install/_attribute is to hook the schema entity to the db entity. Otherwise the database doesn't know about it (and can't use/enforce it).#2016-06-1314:18conanDoes anybody know of good resources about running multiple datomic instances on the same backing storage? we're trying to work out what the best practice is around this sort of thing#2016-06-1314:23curtosisThe tempid is intended to be opaque, so no inference about partition should be made from it. 
Implementation-wise, the transactor could presumably infer the partition, but it can't infer that the entity you're creating is intended to be a schema entity that should be used to define legal attributes.#2016-06-1314:23curtosiss/define/constrain/#2016-06-1314:26curtosis(remember that :db.install/_attribute is an inverse, so the schema entity doesn't have the partition as an attribute; the db partition entity has the schema entity as an attribute.)#2016-06-1314:28danielstocktonthat's another thing i don't understand, i thought refs were in both directions#2016-06-1314:28danielstocktonit just depends whether you use the eavt or vaet index#2016-06-1314:29danielstocktoni guess you have to state a direction so datomic knows which one to use actually#2016-06-1314:29curtosisexactly#2016-06-1314:30curtosisthey're traversable in either direction b/c of the indexes, but you still have to pick which entity has the attribute.#2016-06-1314:30danielstocktonso when you want to transact a new attribute on to an entity in a certain partition, i guess datomic looks up whether this is a valid attribute for that partition#2016-06-1314:31danielstocktonstill seems like it would be possible to infer this information throug the tempid and if :db/cardinality is present#2016-06-1314:31curtosisnot quite... it looks up whether there is a valid attribute for the db.#2016-06-1314:31danielstocktonhmm, why does an attribute need to be installed in a specific partition then#2016-06-1314:32curtosisit's not being installed in the partition, it's installed as an attribute of the db partition, which is special.#2016-06-1314:35curtosis(that may not be really accurate, implementation-wise, but the idea is roughly the same. 
you're telling the db to use this entity as a schema entity; it's very different from just transacting an entity into a partition.)#2016-06-1314:36danielstocktonyeah, thats why i wonder if it can't be inferred#2016-06-1314:37danielstockton:db/cardinality is a reserved attribute, it could tell the transactor that 'this is a schema entity'#2016-06-1314:37curtosisby very extremely inappropriately coarse analogy to SQL: DDL does different things than DML.#2016-06-1314:39danielstocktonor there could have been a different function transact-schema#2016-06-1314:39danielstocktonit makes me think there is some leverage to be had from it#2016-06-1314:39danielstocktonthere has clearly been some effort to define everything using the same data model#2016-06-1314:41curtosiswell, yes. that's very much the point.#2016-06-1314:42curtosisthere's nothing stopping you from writing transact-schema that adds the :db.install attribute for you#2016-06-1314:42danielstocktoncan i transact an entity that uses an attribute defined in a different partition from that entity?#2016-06-1314:42curtosisyou can't not#2016-06-1314:43curtosisattributes are always and only defined in the db partition; user entities should never be in the db partition.#2016-06-1314:44curtosisessentially, all your data should be in :db.part/user unless you need to split it up for index locality reasons.#2016-06-1314:44marshall@danielstockton: partitions are strictly an indexing performance enhancement. 
there are no restrictions about attributes from multiple partitions being associated with a given entity#2016-06-1314:44marshallsorry - not indexing performance, but index locality#2016-06-1314:44marshallwhich may impact query performance, depending on usage patterns#2016-06-1314:46danielstocktonmakes sense, so you want an entity to be in the same partition as its attributes (typically) so you get more cache hits around them#2016-06-1314:46danielstocktonbut it isn't a hard rule, there might be reasons for not having them in the same partition#2016-06-1314:47curtosisnot quite#2016-06-1314:47marshallkeep in mind, an entity is not something that ‘exists’ in it’s own#2016-06-1314:47curtosisthe db partition is special (and small) so it really shouldn't be a factor#2016-06-1314:47marshallhttp://docs.datomic.com/glossary.html#entity#2016-06-1314:48marshallthe entity is simply an ID, which is the first element of a set of datoms, each of which is an attribute/value combination#2016-06-1314:48curtosisoh, sorry - getting a little imprecise in this about "attributes" and "attribute definitions"#2016-06-1314:49danielstocktonyeah, think i was getting them muddled too#2016-06-1314:49marshallthe idea with partitions is if you have a set of attributes that are likely to be accessed together, then having them in the same partition can improve performance via index locality#2016-06-1314:50danielstocktoni get that, since its encoded in the entityid and therefore they're adjacent in the eavt index and you'll get more segment cache hits#2016-06-1314:50marshallyep exactly#2016-06-1314:50curtosisright. 
:db.install puts the schema "entity" (attribute definition) into the db partition.#2016-06-1314:50danielstocktonjust trying to think why you might want to install a schema entity in a different partition to db.part/db#2016-06-1314:50danielstocktonwhy that's explicit in the transaction, rather than just inferred#2016-06-1314:50marshallyou cant install the schema definition anywhere but the db partition#2016-06-1314:51curtosisI would say that it's explicit because it has to be explicit somewhere that you intend to create a schema definition.#2016-06-1314:51danielstocktonthe entity id is in the db.part/db partition and it has a :db/cardinality attribute, those are two clues it would seem#2016-06-1314:51danielstocktoni don't know, it seems strangely verbose#2016-06-1314:53curtosisit would be surprising for "enforce this as a schema definition" to be implicit behavior.#2016-06-1314:53curtosis@marshall: huh? I've not seen that constraint before. Or at least not understood it.#2016-06-1314:53marshallsorry, I mis spoke#2016-06-1314:53danielstocktonwhat else would you be transacting into #db.id[:db.part/db]?#2016-06-1314:54curtosispartition entites#2016-06-1314:54curtosis(for example)#2016-06-1314:54danielstocktonah and functions maybe?#2016-06-1314:55marshallhttp://docs.datomic.com/schema.html#installing-attribute-definition#2016-06-1314:55danielstockton"The attributes you create for your schema must live in the :db.part/db partition."#2016-06-1314:55curtosisfunctions don't need to be in :db.part/db, IIRC#2016-06-1314:56marshallhttp://docs.datomic.com/database-functions.html#using-transaction-functions#2016-06-1314:56danielstocktonam i right that nearly everyone uses db.part/user for their data, even though it says its for experimenting (like the default repl namespace)#2016-06-1314:56marshallthey do not#2016-06-1314:56marshallfunctions go in user data#2016-06-1314:57curtosis(aha! I learned something today as well! I've been playing in :db.part/user by default. 
🙂 )#2016-06-1314:59danielstocktoni haven't ever seen any justification for that advice though#2016-06-1315:00danielstocktonit seems to me like you'd usually have a totally separate db for experiments#2016-06-1315:02curtosisI think it's at least advisable for the same reasons that you shouldn't put your code in the :user namespace, or in Java in the default package, or in SQL in the "dbo"/"admin" schema/namespace/partition/whatever-your-SQL-calls-it.#2016-06-1315:03danielstocktonbut why create a user namespace at all, it seems like most people have taken to using it as the main partition for their data#2016-06-1315:04curtosisfor the same reason there is a default package, etc. 😉#2016-06-1315:04curtosisso that it works out of the box#2016-06-1315:05danielstocktonyeah, fair point#2016-06-1315:06curtosisa disturbing number of production MSSQL databases are in the dbo schema.#2016-06-1315:06curtosis(IMO)#2016-06-1315:08danielstocktonmhm, should probably be a bit louder in the docs and getting started type tutorials, i imagine its a pain to start building your application and then realise later all your entities are in the wrong partition (especially with datomic)#2016-06-1315:11danielstocktonwith datomic, you kind of have to get things right from the beginning#2016-06-1317:45sdegutisIs it possible to get a list of transactions? Something like (d/q '[:find ?tx :where [_ _ _ ?tx]] db)?#2016-06-1317:49stuartsierra@sdegutis: yes, with the Log API#2016-06-1317:51sdegutis@stuartsierra: Perfect, thanks. It seemed a bit hidden, but it's exactly what I need.#2016-06-1317:52sdegutisHmm, is 16,000 transactions high?#2016-06-1419:04hueypanyone using mysql rds with encrypted connections?#2016-06-1517:19bostonaholicis it reasonable to call (.release connection) in a Database record in the stop function for a stuartsierra/component?#2016-06-1517:58stuartsierra@bostonaholic: You can, but it's usually not necessary.#2016-06-1517:59bostonaholicThanks. 
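A sketch of the Log API approach @stuartsierra points to above for listing transactions; `conn` is an assumed existing peer connection, and this is not the thread's exact code:

```clojure
;; Walk the transaction log via the Log API instead of scanning the
;; ?tx position with datalog.
(require '[datomic.api :as d])

(let [log (d/log conn)]
  ;; tx-range with nil bounds covers the whole log; each item is a map
  ;; with :t (the basis t) and :data (the datoms of that transaction).
  (count (d/tx-range log nil nil)))
```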
I wasn’t sure since the docs for .release are a bit unclear#2016-06-1517:59bostonaholicis there any reason I would or wouldn’t use .release?#2016-06-1518:01stuartsierraIf you're going to acquire another connection to the same database, in the same process, you should not call release.#2016-06-1518:03stuartsierraIf a long-running process had to connect to multiple different Datomic databases for different periods of time, you might want to release the one you are no longer using.#2016-06-1518:03bostonaholicperfect. thank you!#2016-06-1518:42lellisHello! Any tip on how to implement an accent-insensitive query?#2016-06-1519:55marshall@lellis: are you wanting that behavior in a fulltext query? If so, the fulltext dictionary and configuration are not currently configurable.#2016-06-1520:00marshallIf that is the case, I will note your interest for that feature.#2016-06-1520:04lellisYes @marshall. In a fulltext search. Ty!#2016-06-1610:12gravCan I check if all entities in a list have the same value for an attribute with a single query?#2016-06-1613:26bahulneel@grav you can look for entities that do not have the same value as any other entity#2016-06-1613:26bahulneel[:find ?e :where [_ :attr ?v] (not-join [?e] [?e :attr ?v])]#2016-06-1613:28grav@bahulneel: cool!#2016-06-1613:29bahulneel@grav this goes some way toward explaining https://math.stackexchange.com/questions/528990/relationship-between-universal-quantifier-and-existential-quantification-is-it#2016-06-1613:31bahulneelalso, there's a mistake, the not-join should include ?v#2016-06-1613:33bahulneelalso not sure the query is safe as there is a free variable in the negated part that isn't in the positive part#2016-06-1613:39bahulneel@grav this would be better [:find (count-distinct ?v) :where [_ :attr ?v]]#2016-06-1706:16grav@bahulneel: thanks!#2016-06-1706:17gravA different question: I like the feature that allows me to query a datastructure directly, eg (d/q ‘[:find …] ‘[[:foo :foo/bar 42] [:bar :foo/bar
43]])#2016-06-1706:18gravIs it possible to query a nested datastructure in any way? Eg, converting the nested structure to a list of datoms somehow?#2016-06-1709:10bahulneelyes, for example, you have the following {:foo {:bar 4}}#2016-06-1709:10bahulneelif you make :foo a reference then you can directly store this in datomic#2016-06-1709:12bahulneelyou will need to provide :db/ids for both maps; however, you can make :foo a component and datomic will delete the children when you remove foo#2016-06-1709:13bahulneelwhen you query you just follow the join: [[?e :foo ?foo] [?foo :bar ?bar]]#2016-06-1709:13bahulneel@grav here's a blog post explaining http://blog.datomic.com/2013/06/component-entities.html#2016-06-1709:17bahulneel@grav I think I misunderstood your question#2016-06-1709:18bahulneelIf you're passing in a data-structure rather than a db then you would need to do something like flatten it to datoms.#2016-06-1709:23grav@bahulneel: yes, exactly, I need to convert it to datoms somehow. I cannot find an api for it.#2016-06-1709:31bahulneel@grav so, the shape you need is a triple of [E A V] this would mean flattening out the data structure so that each map becomes an entity and then you assign ids to those.#2016-06-1709:32bahulneeldepending on the complexity of the data structure and how much you know about it in advance you may want to consider using datascript as any implementation will eventually become some implementation of this#2016-06-1709:34bahulneelor you can use the in-memory datomic but you'll have to specify all the schema rather than just the references#2016-06-1709:35rauh@grav: Haven't tried, but you can just create an empty dummy db in your app, then use (d/with ) on it to get back a proper db (it always keeps the database empty).
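A minimal sketch of the flattening @bahulneel describes, with generated ids and no schema; `map->datoms` is a hypothetical helper, not an existing API:

```clojure
;; Flatten a nested map into [e a v] triples that d/q can consume as a
;; plain relation; nested maps become child entities referenced by id.
(def ^:private next-id (atom 0))

(defn map->datoms [m]
  (let [e (swap! next-id inc)]
    (mapcat (fn [[a v]]
              (if (map? v)
                (let [child (map->datoms v)]
                  ;; reference the child entity's generated id
                  (cons [e a (ffirst child)] child))
                [[e a v]]))
            m)))

;; (map->datoms {:foo {:bar 4}}) ;=> ([1 :foo 2] [2 :bar 4])
;; which can then be queried directly:
;; (d/q '[:find ?b :where [?e :foo ?f] [?f :bar ?b]] (map->datoms {:foo {:bar 4}}))
```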
Needs a schema though.#2016-06-1710:51bahulneel@grav: looks like @conaw has some code to help you, the transactions he produces should be directly queriable by d/q#2016-06-1721:41noziarHi - I am sending a lot of transactions to my database, and many of them are timing out because of the load. I assume that even if there is a timeout, the transaction will still go through as long as the queue on the transactor is not full. Is there a way to detect if the transactor actually starts rejecting transactions? Not sure what to look for in the logs#2016-06-1722:16marshall@noziar: you may want to look at the docs on imports http://docs.datomic.com/capacity.html#data-imports#2016-06-1722:19noziaryes, I did it but I forgot the part about using transactAsync instead of transact - so I ended up having a bunch of timeouts and wanted to figure out if there was a way to detect actual transaction failures at the transactor level. I'm not using CloudWatch so I don't see alarms either#2016-06-1722:29marshallAlarms should be reported in your txor logs as well#2016-06-1804:17jimmyhi guys, when I run datomic free locally I got this:
Starting datomic:, storing data in: data ...
where is that data folder ?#2016-06-2009:28dominicm@nxqd: For me, it's in the root folder of datomic. It's a sibling of README, bin, etc.#2016-06-2009:30jimmy@dominicm: thanks for your answer. I think i should look at where the datomic root is when installed using homebrew 🙂#2016-06-2105:38andmedhi. is it possible to find information on how datomic represents data in the SQL storage layer: which tables, relations, and how the data is actually stored? Can't find it anywhere#2016-06-2105:47jimmy@andmed: I don't think you'll find that, since it's part of datomic's internal design and implementation and it's not open source.#2016-06-2106:13andmed@nxqd got it. I now have a weird task to build some schemaless data design on top of an RDBMS; everything I have seen says it is a bad idea, so I was interested to know how datomic does it#2016-06-2107:43kahunamoore@andmed: If you are only talking about something schemaless (and not full blown datomic which is somewhat-schemaless but also maintains history) AND you don’t expect it to scale in any significant way, you should be able to get away with a dead simple table with entity, attribute, value columns. Anything more than that and your intuition about it being hard (to do right, fast, reliable) is correct. Having said that I think there are likely other OSS dbs out there that do what you want. See also:
http://nosql-database.org/
https://blog.jooq.org/2014/10/20/stop-claiming-that-youre-using-a-schemaless-database/
Good luck with your project!#2016-06-2111:19andmed@kahunamoore: yes, the hard case seems to be my case exactly 🙂 The guy wants to manage relations in a separate table, and have more crazy ones with "relatives", "relatees" and such like, full awfulness of which I am yet to discover. Thank you, these are good links to start#2016-06-2113:26stuartsierraDatomic uses all backing stores the same way — as a key/value store for opaque binary blobs. You can't query them with the backing store's native query language, e.g. SQL.#2016-06-2116:53noogaA while ago I stumbled upon a tutorial that showed how to implement basic fact oriented database much like datomic/datascript. It was on github, written as a md document. The repo contained a bunch of other articles, presumably some kind of sources for a book by various authors. I can’t find it anymore. Maybe someone seen that and remembers where it was?#2016-06-2117:47adamfrey@nooga: is this it? https://aosabook.org/en/500L/an-archaeology-inspired-database.html#2016-06-2117:48noogayay!#2016-06-2117:48noogathanks @adamfrey#2016-06-2117:48adamfreynp#2016-06-2119:41noogaI’m going to implement that in haskell as a learning project#2016-06-2217:25bhaganyI made a terrible assumption about :where clauses pruning the results of pull in a query. I'm now pretty sure the only way to prune pull is to use a filtered db, which I can't do because REST api, and probably would be a bad idea in this case anyway. However, I'm throwing this out there before I engage in a large refactor, in case anyone has an idea off the top of their heads that I haven't thought of.#2016-06-2223:32bkamphaus@bhagany: afaik pull doesn’t work in query via the REST api, though if you use it with a find spec you can sometimes nest it just so that the normal collection logic for sets of tuples will return something anyways.#2016-06-2223:53bhagany@bkamphaus: pull in query via REST is my primary way of using query. I do it like [:find [(pull ?e [*])] …]. 
My only complaint in this area is that [:find (pull ?e [*]) . ...] (the scalar find spec) returns a vector of lists, instead of a map.#2016-06-2223:56bhaganynot entirely sure I've explained my problem well, but in any case, I'm still pretty confident I'll have to go through with my big refactor#2016-06-2223:58bkamphaus@marshall (or someone at Cognitect, not there any longer) can speak more definitively about the status of pull in the REST API, but if it’s the issue I remember, the behavior you’re seeing is falling out of an implementation detail and not explicitly defined or promised (i.e. you should flat out receive an error doing a scalar find spec w/o the pull expression), it just so happens that those forms fall into the same abstract collection shape as a set of tuples (as the REST API is designed to support).#2016-06-2223:59bkamphauseither way I’m not entirely sure what you mean by the :where clause pruning results, anything not matched in :where shouldn’t be expanded into a map via pull.#2016-06-2300:00bkamphausthere is of course the (expected) divergence in behavior that pull will omit fields not matched but keep the entities, whereas where clauses must all unify (i.e. if an attribute is specified but missing the entity won’t be included in the results).#2016-06-2300:02bhagany@bkamphaus: ah, I was just doing [*] as the pull expression as convenient shorthand, I don't believe I've ever actually done it that way. My pull expressions are all fairly long and nested.#2016-06-2300:06bhaganyre: what I'm expecting, perhaps an example would suffice. 
Given 3 entities with this shape: {:db/id 1 :children [{:db/id 2 :attr :foo} {:db/id 3 :attr :bar}]}, I was trying to write a query like [:find [(pull ?e [:db/id :attr])] :in $ ?e :where [?e :children ?child] [?child :attr :bar]]#2016-06-2300:07bhaganyand then expecting only entities 1 and 3 to be returned by the pull#2016-06-2300:07bhaganymy actual situation is more complex than this, and it was easier to not test this than it looks 🙂#2016-06-2300:12bkamphausNot sure I follow why that's your expectation, ?e will only match the first entity, and won't have :attr so it would return the ID for 1 and omit any key/value for :attr (is that what you see?) -- a nested pull spec could retrieve the child but won't limit to the child you intend to retrieve, assuming that's what you mean by pruning.#2016-06-2300:13bhaganyright. when I say it was a "terrible assumption", I mean, it was really stupid and doesn't hold up to even the most cursory thought experiment.#2016-06-2300:14bhaganyvery focused on whether there are other ways to recover from my blunder#2016-06-2300:14bkamphausQuery at present will limit results to parents that have a child with a bar Val for attr, but pull will retrieve anything it can.#2016-06-2300:15bkamphausCan refactor to tuples though, you can still get [e child child-attr-val] shaped results#2016-06-2300:17bhaganyI will give that some thought. I'm dealing with some decently long graph traversals with my pull expressions though, and that makes it seem more prohibitive than refactoring without a bad assumption#2016-06-2300:30bkamphausHmm, yeah that's a fairly tough shaped problem. Non-trivial graph traversal via included rest API functionality I mean. I would probably reach for rules though i haven't used any with rest API calls before. 
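The distinction above can be sketched against the example entities; `db` and `parent-id` are assumed bindings, and the attribute names come from the example:

```clojure
;; :where decides WHICH ?e match, but pull then fetches everything its
;; pattern names, so ALL :children come back, not only the :attr :bar one.
(d/q '[:find (pull ?e [:db/id {:children [:db/id :attr]}])
       :in $ ?e
       :where
       [?e :children ?child]
       [?child :attr :bar]]
     db parent-id)

;; To keep only the matching children, return tuples instead of pulling:
(d/q '[:find ?e ?child
       :in $ ?e
       :where
       [?e :children ?child]
       [?child :attr :bar]]
     db parent-id)
```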
Rules are just edn Params though I'm not sure if there are any hiccups there.#2016-06-2300:35bhaganyI've used them with REST without issue… I don't think I'm grokking how they'd apply here, but it's been a long day#2016-06-2300:44bkamphausWithout knowing the nontrivial expansion of your graph I'm just thinking of something like the mbrainz sample rules https://github.com/Datomic/mbrainz-sample/blob/master/resources/rules.edn#2016-06-2409:08rauhIs it possible to get the source of squuid? I'd like to implement it in lua (for nginx)#2016-06-2409:25cursork@rauh datascript has an implementation that seems to agree with datomic:
boot.user=> (ds/squuid) (d/squuid)
#uuid "576cfc7a-f3e7-493c-bb6a-da9536ec6caf"
#uuid "576cfc7a-8671-4d59-8053-6be834ac295f"
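For reference, a common pure-Clojure squuid implementation (a sketch in the style of the clojure-cookbook recipe; Datomic's internal version may differ):

```clojure
;; A squuid packs the current epoch seconds into the high 32 bits of the
;; UUID's most-significant long, keeping the rest random, so values sort
;; roughly by creation time.
(defn squuid []
  (let [uuid (java.util.UUID/randomUUID)
        secs (quot (System/currentTimeMillis) 1000)
        msb  (bit-or (bit-shift-left secs 32)
                     ;; keep the random low 32 bits (incl. the version nibble)
                     (bit-and 0x00000000ffffffff
                              (.getMostSignificantBits uuid)))]
    (java.util.UUID. msb (.getLeastSignificantBits uuid))))
```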
#2016-06-2409:40rauhJust finished it: https://gist.github.com/rauhs/b93bcf0d676f0335fd483d7c7c77303d#2016-06-2409:44rauh@cursork: Thanks! I'll fix my version, it doesn't set the variant (IETF) properly.#2016-06-2410:04stuarthalloway@rauh also https://github.com/clojure-cookbook/clojure-cookbook/blob/master/01_primitive-data/1-24_uuids.asciidoc#2016-06-2410:11rauh@stuarthalloway: Thanks! Got it all working. 🙂 The java.util.UUID/randomUUID also shows setting the variant nicely#2016-06-2415:47adamfreyHi everyone. I wrote a blog post about the first steps to using property-based testing to test your Datomic transaction building code: http://blog.altometrics.com/2016/06/property-based-testing-with-clojure-and-datomic-part-2/ I’m interested in your thoughts on the approach.#2016-06-2701:10conawHey is there a way to do recursive specification on a pull AND use wildcard [:person/firstName :person/lastName {:person/friends 6}] is possible, how would I do [* {:person/friends 6}]#2016-06-2708:01mitchelkuijpers@adamfrey: nice blogpost!#2016-06-2810:13robert-stuttafordcan i restore a database that was backed up as name A but restore it to name B -- the transactor has no existing copies of the db in storage?#2016-06-2811:11stuarthalloway@robert-stuttaford: sure can "Restore can rename databases. However, you cannot restore a single database to two different URIs within the same storage.” — http://docs.datomic.com/backup.html#restoring#2016-06-2811:20robert-stuttafordthanks, Stuart. 
i thought so, but i wasn't sure#2016-06-2811:43leontalbot@conaw have you found an answer?#2016-06-2811:48conaw@leontalbot: my answer was to only recursively pull explicit relationships, and then pull wildcard for each of those separately#2016-06-2811:48conawso, no#2016-06-2811:48conawnot yet#2016-06-2820:56pheuterIs it possible to enforce Datomic to do a write and fail with a uniqueness conflict if it will do an update, that is if a lookup-ref already exists?#2016-06-2820:56pheuterThe driving force for this is to explicitly separate creating new entities from updating existing ones.#2016-06-2820:57pheuterIt seems like one possible solution is to create a database function that does a lookup for any possible look-up refs and fails the transaction if results come back, but I was hoping there may be a more straightforward solution.#2016-06-2821:44codonnell@pheuter: I believe if you use :db.unique/value rather than :db.unique/identity, you'll get the fail rather than update behavior (see http://docs.datomic.com/identity.html)#2016-06-2821:45pheuterHm, but doesn’t :db.unique/value serve as a lookup ref as well?#2016-06-2821:46pheuterOh I see:
> Unique values have the same semantics as unique identities, with one critical difference: Attempts to assert a new tempid with a unique value already in the database will cause an IllegalStateException.#2016-06-2821:46codonnellexactly#2016-06-2821:46pheuterGood catch!#2016-06-2821:46pheuterThanks#2016-06-2821:46codonnellno problem#2016-06-2915:23marshallDatomic 0.9.5385 is now available https://groups.google.com/d/topic/datomic/WY6kEZb0KAk/discussion#2016-06-2921:06jdkealyHi, I've been trying to set up a transactor. I am using dynamo-db, tried the ensure-transactor command, and an EC2 instance called DatomicTestTransactor keeps being created, started, stopped, and terminated. It happens over and over (like a new instance every 5 mins) until I remove the IAM roles that datomic created. That doesn't seem right to me -- though i'm not sure.#2016-06-2921:10jdkealyanyways, i should formally ask a question... Is this desired behavior or is this a sign that there's something wrong with my setup#2016-06-2921:10marshall@jdkealy: Did you start with the http://docs.datomic.com/storage.html#provisioning-dynamo instructions#2016-06-2921:11marshallThe ensure-transactor script should create dynamo db tables, roles, etc.#2016-06-2921:11marshallbut not actually start a transactor#2016-06-2921:19jdkealyyes. i started with that. what would be creating DatomicTestTransactor then ?#2016-06-2921:45jdkealythat's useful info, i'll scratch everything and try again#2016-06-2921:52marshallI don’t believe a transactor will be started until you get to the create-cf-stack step#2016-06-2921:52marshall(assuming you’re using the included CF process)#2016-06-2923:24bvulpesjdkealy: sounds precisely like something i ran into as well.#2016-06-2923:24bvulpesi 'just' scripted the provisioning of transactors.#2016-06-2923:38jdkealycool.. i just started from scratch and no ec2 instance was launched.
i think i was tinkering around and overwrote the original properties file and didn't realize it#2016-06-3001:14jdkealyI finally just got the transactor working. Thanks for your help 🙂#2016-06-3008:59lenHi all, I have this existing system that uses a trad ORM with the party model. Party is a base entity that has children of either an Org or a Person. Whats the way to approach such a thing with datomic ?#2016-06-3009:06danielstockton@len divide the attributes into :party/attributes, :org/attributes and :person/attributes#2016-06-3009:07lenand then just join them together as needed, seems so simple 🙂#2016-06-3009:12danielstocktonyep, you just get things which have one or more of the attributes you're interested in, you don't have to think in terms of models and inheritance like a trad ORM#2016-06-3009:13lenI am going through that switch now yip, very elegant#2016-06-3017:21zentropeIf you’ve got many :refs associated to an attribute, how do you remove one of them from the list?#2016-06-3017:22marshall@zentrope: simply transact a retraction
[:db/retract entity-id :my-ref-attrib current-value]#2016-06-3017:23zentropeAh, okay. ;) Hm.#2016-06-3017:23marshallBased on your description i’m assuming it is a cardinality many attribute#2016-06-3017:24marshallthat ^ form will retract only the "current-value” ref, but leave the others#2016-06-3017:25zentropeYes, I have something like :product/orders :many. I just want to retract one of those orders.#2016-06-3017:26marshallso, that retraction will retract the reference from your entity to the order.#2016-06-3017:26marshallif you want to retract the whole order entity you’ll need to handle that separately#2016-06-3017:26zentropeWhat’s the current-value in your example?#2016-06-3017:26zentrope(No, don’t want to retract the actual order.)#2016-06-3017:27marshallit would be the entity id of the order that is referred to by the reference#2016-06-3017:27marshallso if :product/orders has 3 values, call them A, B, C#2016-06-3017:27marshalland you want to retract the reference to B#2016-06-3017:27marshallyou’d put the entity ID of B in for current-value#2016-06-3017:28zentropeAnd a reference to :product/orders’ entity as entity-id.#2016-06-3017:28marshallyou might also want to have a look at http://docs.datomic.com/transactions.html#built-in-transaction-functions
specifically the retractEntity function - if you ever want to remove the order itself and all references to it#2016-06-3017:28zentrope[::db/retract [:product/id “asda”] :product/orders [:order/id “asa”]] <— that sort of thing (pseudocode)?#2016-06-3017:29zentropeOkay, I think I got it.#2016-06-3017:31zentropeRetract the attribute/value from the entity.#2016-06-3017:31marshallthe pseudocode you have there says something along the lines of
“i have an entity with :product/id “asda” and it has a bunch of references through the attribute :product/orders. one of those references is to an entity that has the attribute :order/id “asa” and I want to remove that reference from the “asda” entity"#2016-06-3017:31zentropeOk. Thanks.#2016-06-3017:44zentropeHm. Can you actually use [:ns/attr “an-id”] in db/retract?#2016-06-3017:44zentropeI’m getting an “invalid lookup ref”.#2016-06-3017:45zentrope[:db/retract 277076930200665 :spec/controls [:product/id "a82f5914d899edc259c96303e89133299f14b635”]]#2016-06-3017:47marshallI believe retract requires a value there, not an identity. #2016-06-3017:48marshallYou can query for the value first then issue the retraction#2016-06-3017:48marshallYou should be able to use a lookup ref in the e position however#2016-06-3017:49zentropeHm. I’m getting an “invalid list form” even with proper eids. Must be something else.#2016-06-3017:50zentropeOh, I see. I’m just adding the retract/asserts as a value to an attribute in a map, rather than just concatting them on to the TX itself.#2016-06-3017:54zentrope@marshall: Using [:product/id “foo”] inside a [:db/retract] works fine.#2016-06-3017:55marshallAh. Good deal. #2016-06-3017:55zentropeYeah. Makes sense that it would.#2016-06-3021:50jdkealywhen i try to compile my app, it starts a peer connection and i go over my peer limit. Do people typically have a def like this (def conn (d/connect uri)) or do you set the connection as an atom var on start ?#2016-06-3021:54bkamphaus@jdkealy: the component (Stuart Sierra’s lib) example here is the typical way I see people approach this (also some other good advice in the blog post in general): http://www.rkn.io/2014/12/16/datomic-antipatterns-eager-conn/#2016-06-3022:19ddellacostadatomic newbie here—is there any way for me to introspect into the schema itself?#2016-06-3022:20ddellacosta...assuming that's a meaningful question#2016-06-3022:21bkamphausYeah, it's just data and you can query it.
https://github.com/Datomic/day-of-datomic/blob/master/tutorial/schema_queries.clj#2016-06-3022:22ddellacosta@bkamphaus: just the sort of thing I was looking for—thanks so much!#2016-06-3022:31jdkealyim confused about what peers might be connected. is there some way to reset them all ?#2016-06-3022:33bkamphausKilling the process or calling release for a peer is sufficient, transactor will log all ips connected.#2016-07-0114:56eraserhdIs there a way to make datomic entities unique across multiple attributes, instead of just a single one?#2016-07-0115:06danielstocktonno eraserhd, don't think so#2016-07-0115:07danielstocktonDatomic does not provide a mechanism to declare composite uniqueness constraints; however, you can implement them (or any arbitrary functional constraint) via transaction functions.#2016-07-0115:07danielstocktonhttp://docs.datomic.com/database-functions.html#2016-07-0115:07danielstocktonso in a way, yes there is#2016-07-0115:10eraserhdHrmm. But you can’t, say, make a :db.unique/identity attribute which is a component.#2016-07-0115:13danielstocktonnope#2016-07-0115:13eraserhdAh. OK#2016-07-0115:16bkamphaushttps://github.com/Datomic/day-of-datomic/blob/master/tutorial/transaction_function_exceptions.clj#L15 and https://github.com/Datomic/datomic-java-examples/blob/master/src/java/datomic/samples/TxFunctions.java are Clojure, Java versions of a transaction function to ensure a unique composite key, essentially.#2016-07-0115:17eraserhdI’m still reading docs, but I’m guessing these can’t handle the upsert case?#2016-07-0115:21eraserhdOh, I see, it’s like a stored procedure, and you call it to do the thing, it’s not like a hook.#2016-07-0115:29eraserhdOK, so I see it’s not really possible to enforce uniqueness on multiple fields. The transaction function thing only works if everyone uses the transaction function, and it doesn’t add the ability to do lookup refs on multiple fields.
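In the spirit of the day-of-datomic example linked above, a hedged sketch of such a transaction function; the `:person/*` attributes and the function itself are hypothetical names, not an existing API:

```clojure
;; A transaction function that rejects a transaction when an entity with
;; the same composite key already exists. It only protects writers that
;; actually route their writes through it.
(require '[datomic.api :as d])

(def ensure-unique-name
  (d/function
    '{:lang :clojure
      :params [db first-name last-name tx-data]
      :code (if (seq (datomic.api/q '[:find ?e
                                      :in $ ?first ?last
                                      :where
                                      [?e :person/first-name ?first]
                                      [?e :person/last-name ?last]]
                                    db first-name last-name))
              (throw (IllegalStateException.
                       (str "composite key exists: " first-name " " last-name)))
              tx-data)}))
```

Once installed under an ident, it would be invoked as the first form of a transaction, e.g. `[[:my/ensure-unique-name "Jane" "Doe" [...asserts...]]]`.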
Does that sound right?#2016-07-0115:38bkamphausthat’s correct, true of transaction functions in general and really how you should think about anything in Datomic. “security only works if people use the API / query layer I define” “domain specific transactional constraints only work if people use the transaction functions"#2016-07-0115:42bkamphausthat said, there are other strategies people use for composite keys - i.e. a string concat field, etc. — but then you have to be careful with how you handle the composite key munging and alignment between the key and field it refers to in the transaction and query strategies you use.#2016-07-0115:47eraserhd@bkamphaus @danielstockton Thanks!#2016-07-0116:23timgilbertHi all, anyone have best practices for working with datomic data from plain JavaScript (rather than ClojureScript)? Most of the EDN->JSON libraries I’ve seen don’t seem to handle namespaced keywords very well (they drop the namespace bit)#2016-07-0120:46val_waeselynck@timgilbert: maybe serialize to Transit then ?#2016-07-0314:03isaacDoes datomic database function support overloading?#2016-07-0316:28val_waeselynck@isaac: do you mean arity-based overloading? I doubt it 🙂#2016-07-0317:13isaacyeah, include this case?#2016-07-0413:26casperc@isaac: No, it doesn't#2016-07-0413:38isaac@casperc: I got it, thanks#2016-07-0418:21zentropeIs there a reason upgrading to 5385 would give me a “Transactor request timed out” even after destroying and re-creating the database, etc, etc?#2016-07-0418:30zentropeSeems to get hung up on create-database.#2016-07-0418:33zentropeWhy would (d/create-database “datomic:) fail with a timeout? Strange!#2016-07-0418:36zentropeHm. Downgrading to client 5372 talking to server 5385 works.#2016-07-0418:42zentropeDoes 5385 somehow break something running on OpenJDK 8?#2016-07-0418:46zentropeNope.#2016-07-0418:49zentropeHm. Not sure where to report this. Mailing list?
Huh.#2016-07-0418:50bvulpeszentrope: is your transactor also 58385?#2016-07-0418:50zentropeYes.#2016-07-0418:50bvulpesah, i see it now.#2016-07-0418:50bvulpesso 5385->5385 doesn't work but 5372 -> 5385 does?#2016-07-0418:51bvulpeswaacky!#2016-07-0418:51zentropeYes.#2016-07-0418:51zentropeI see this in the logs org.hornetq.core.server - HQ222190: Disallowing use of vulnerable protocol: SSLv2Hello. See for more details. but it’s just a WARN.#2016-07-0418:52zentropeBut that’s not every time. Kinda one of those “note it once” things.#2016-07-0418:53zentropeAlso: {:event :transactor/admin-command, :cmd :create-database, :arg {:db-name “my_db"}, :result {:exists “my_db"}, :pid 83706, :tid 125}#2016-07-0418:53zentrope{:event :transactor/remote-ips, :ips #{"127.0.0.1"}, :pid 83706, :tid 63}#2016-07-0419:14zentrope2016-07-04 12:12:45 | DEBUG | org.hornetq.core.client | ClientSessionFactoryImpl received backup update for live/backup pair = TransportConfiguration(name=4a096a42-421b-11e6-b265-a7082dd004e2, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?trust-store-path=datomic/transactor-trust-jks&trust-store-password=transactor&port=4334&ssl-enabled=false&key-store-path=datomic/transactor-key-jks&key-store-password=****&host=127-0-0-1 / null but it didn't belong to TransportConfiguration(name=4a096a42-421b-11e6-b265-a7082dd004e2, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?trust-store-path=datomic/transactor-trust-jks&trust-store-password=transactor&port=4334&ssl-enabled=false&key-store-path=datomic/transactor-key-jks&key-store-password=****&host=127-0-0-1
2016-07-04 12:12:45 | ERROR | org.hornetq.core.client | HQ214000: Failed to call onMessage
java.io.EOFException: null
at org.fressian.impl.RawInput.readRawByte(RawInput.java:40)
at org.fressian.FressianReader.readNextCode(FressianReader.java:927)
at org.fressian.FressianReader.readObject(FressianReader.java:274)
at datomic.hornet$create_deserializer$fn__6605.invoke(hornet.clj:353)
at datomic.hornet$create_rpc_client$fn__6638.invoke(hornet.clj:412)
at datomic.hornet$set_handler$reify__6592.onMessage(hornet.clj:288)
at org.hornetq.core.client.impl.ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1116)
at org.hornetq.core.client.impl.ClientConsumerImpl.access$500(ClientConsumerImpl.java:56)
at org.hornetq.core.client.impl.ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1251)
at org.hornetq.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:104)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
#2016-07-0419:15zentropeSomething in my client app is messing things up. I can run an inlein script and connect just fine.#2016-07-0419:26zentropeAh hah. If I have a dependency on aleph, the latest datomic client breaks.#2016-07-0419:29zentropealeph -> [io.netty/netty-all "4.1.0.CR3”] , datomic -> [io.netty/netty-all "4.0.13.Final"]#2016-07-0419:29zentropeand “lein deps :tree” didn’t find that as an issue.#2016-07-0419:33zentropeWhat do you do when you have incompatible dependencies like this? If I exclude from datomic, it’s broken. If from aleph, aleph is broken.#2016-07-0419:51alexmillerthis is the problem colloquially known as “jar hell”. The “tree” of dependencies is flattened into a linear classpath (often with little control from a user standpoint) where the jar version being used is whichever happens to come “first”, rather than the one that is actually needed by a particular dependency. This is one of the primary motivations for the classpath module system being added in Java 9.#2016-07-0419:52alexmillerUsually you first try to exclude unneeded deps#2016-07-0419:52alexmillerThen try to find a version that is harmonious with both#2016-07-0419:52alexmillerThen change versions of the upstream deps to find a compatible set#2016-07-0419:53alexmillerThen pester people producing those deps#2016-07-0419:54alexmillerFrom a lib perspective, there are other solutions as well, like modifying the dep to use a new package root such that there is no package conflict#2016-07-0419:56alexmillerThe use of custom classloaders with dynamic classpaths can sometimes be helpful in isolating parts of the classpath from the other#2016-07-0419:58zentropeI can revert to the earlier version of Datomic, I guess. Interesting that there’s some weird nuance such that any newer version of netty breaks datomic.#2016-07-0419:58alexmillerwould be good to report that#2016-07-0420:00zentropeWhere do I do that? The mailing list?#2016-07-0420:00zentropeOh, maybe there’s a support email. 
I can look for that when I get back.#2016-07-0420:02zentropeGot it: http://www.datomic.com/support.html. ;)#2016-07-0420:14alexmilleryes, either of those#2016-07-0422:14zentropePosted to the Datomic group. Not seeing the message. Hopefully it’s just in a moderation queue.#2016-07-0422:34zentropeApparently, the hornetq client depending on netty 4.0.13 is not compatible with 4.1.0, but the client using netty 3.6.7 is compatible with 4.1.0. Curious! ;)#2016-07-0500:03zentropeWhen I run a query on d/history, I get a number back for the attribute:#2016-07-0500:03zentrope[277076930200636 102 #uuid "577ae77c-6c62-407f-8123-d48f40dc399f" 13194139534395 true]#2016-07-0500:03zentropeHow do I resolve that “102” into the actual :ns/attr?#2016-07-0500:18zentroped/ident#2016-07-0510:19pesterhazyWhen using lookup refs in d/q, an invalid input will raise an exception like java.lang.IllegalArgumentException: Cannot resolve key: [:cat.feature/slug "asdf"]#2016-07-0510:20pesterhazyIf I want to handle these gracefully (say, return a 404 instead of 500), what's the best way to do that?#2016-07-0510:20pesterhazy1. catch IllegalArgumentException#2016-07-0510:21pesterhazy2. check if the lookup ref check out before (using (->> lookup-ref (d/entity db) :well.known/attr) perhaps)?#2016-07-0511:30robert-stuttafordi'd go with 2., @pesterhazy; presumably your LRs are user input coming in off the web. you'll want to validate those, just like any other user input#2016-07-0512:14rauh@pesterhazy: I'd probably use d/entid instead of entity#2016-07-0512:20pesterhazy@rauh, unfortunately (d/entid (rdb) 12341234) returns 12341234#2016-07-0512:20pesterhazyso it won't discover if an entity-id does not exist#2016-07-0512:23pesterhazyI know I shouldn't be using entity ids anyway, so I guess the real solution would be to stop doing that#2016-07-0512:35marshall@zentrope: Thanks for the deps conflict report. 
I will look into it today.#2016-07-0516:15conawHey all, I’m storing a multigraph in datomic, and I’m interested in the shortest path (or all paths) between 2 or more vertices.
I found one example https://hashrocket.com/blog/posts/using-datomic-as-a-graph-database
Here, using rules, the author @jgdavey (thanks for the post) was able to get back a path, provided he specified the depth to search.
Since that approach is very limited, he ended up just working with pure datoms to do the graph search stuff
I found a number of papers about implementing recursive algorithms in datalog
http://blogs.evergreen.edu/sosw/files/2014/04/Green-Vol5-DBS-017.pdf
http://web.cs.ucla.edu/~zaniolo/papers/tplp01.pdf
but before trying to convert those into the datomic flavored datalog syntax I know and love, I wanted to see if anyone else has an opinion on this.
Has anyone used Loom and Datomic together? Are there some smart patterns for using rules to do recursive querying? Or is @jgdavey’s approach still the best 2 years later?#2016-07-0517:31jgdavey@conaw I ended up using the raw datoms access in part because there was so much less overhead than using the datalog approach. I’d love to see what implementations of those whitepapers would look like (selfishly), but my hunch is that the raw datoms approach is nearly always going to be more performant.#2016-07-0522:09eraserhdOK, so if I `(d/transact conn [[:db.fn/retractEntity :foo]])`, and on the next line (d/entity (d/db conn) :foo), it’s not nil. What am I missing?#2016-07-0522:14hueypI think entity always returns a ‘map’ with the id you give it even if it does not exist … I feel like I have to do a query to test for existence#2016-07-0522:15hueyp(d/entity (user/db) 123123123123) => {:db/id 123123123123}#2016-07-0522:17hueyp(d/q '[:find ?e
:in $ ?e
:where
[?e]]
(user/db) 123123123123) => #{}#2016-07-0522:18hueyppull is the same way — always get the id regardless of existence#2016-07-0522:19eraserhd@hueyp: Hi! Hrmm… I thought I tried this, but now I’m not so sure.#2016-07-0522:22marshall@eraserhd: @hueyp is correct. Entities exist independent of whether they have any currently asserted values.#2016-07-0522:23eraserhdEven if I query for it, I get it after a :db.fn/retractEntity#2016-07-0522:24marshallFrom the Entity documentation: Entities do not "exist" in any particular place; they are merely an associative view of all the datoms about some E at a point in time. If there are no facts currently available about an entity, Database.entity will return an empty entity, having only a :db/id.
(http://docs.datomic.com/entities.html)#2016-07-0522:25eraserhdDoes :db.fn/retractEntity not retract all values?#2016-07-0522:25marshallDid you get a new database value between issuing the retraction and doing the query?#2016-07-0522:25eraserhdYes#2016-07-0522:28eraserhdI can definitely query it and show that it still exists.#2016-07-0522:29hueyphm, dunno … would query for the attributes / values and see if that helps explain? 😜#2016-07-0522:30eraserhdThey seem to be all still there.#2016-07-0522:31eraserhdUh geez. I didn’t deref the transact result.#2016-07-0522:31marshallException in there?#2016-07-0522:32eraserhdI think just that it hadn’t completed.#2016-07-0522:32hueypyah thats all I could think? I think the normal transact blocks regardless of deref#2016-07-0522:32hueypor not!#2016-07-0522:34jdubieyup transact blocks regardless of deref. i think it just returns a promise to conform with transact-async interface#2016-07-0522:37eraserhdYeah, it failed because one of the entities being retracted was an attribute definition.#2016-07-0522:37eraserhdOK, so unblocked. Thanks!#2016-07-0522:37eraserhd@hueyp: Good to see you, too.#2016-07-0522:38marshallAh. Yeah, can't retract schema. Glad you got it sorted#2016-07-0522:39hueyp@eraserhd: likewise, small world 🙂#2016-07-0600:53arthurMy transactor keeps getting stuck in a state where it can't start and just hangs on start for 50 min and dies with:
Jun 30 21:51:49 ip-172-16-211-135.ec2.internal bash[2005]: Launching with Java options -server -Xms1g -Xmx24g -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -Xloggc:/var
Jun 30 21:51:53 ip-172-16-211-135.ec2.internal bash[2005]: Starting datomic: ...
Jun 30 21:51:53 ip-172-16-211-135.ec2.internal bash[2005]: System started datomic:
Jun 30 22:55:36 ip-172-16-211-135.ec2.internal bash[2005]: Critical failure, cannot continue: Indexing retry limit exceeded.
Jun 30 22:56:07 ip-172-16-211-135.ec2.internal systemd[1]: datomic.service: Main process exited, code=exited, status=255/n/a
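The launch line in the log above shows a 24g max heap; as bkamphaus notes just below, there is little benefit beyond roughly 12g of transactor heap, and an oversized heap invites catastrophic GC pauses, while indexing backlog is bounded by the memory-index settings. A hedged transactor.properties sketch (the property names are real Datomic transactor settings, but the values here are illustrative placeholders, not a recommendation from the conversation):

```properties
# Illustrative values only - tune against observed metrics.
memory-index-threshold=32m
memory-index-max=512m
object-cache-max=4g
# Heap is set via the launcher JVM options (e.g. -Xms4g -Xmx12g),
# not in this file.
```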
#2016-07-0600:54arthurI'm at something of a loss, I can't make it get into this state intentionally and once it gets into that state it stays that way for up to five days#2016-07-0600:57arthurohh i'm running datomic-pro-0.9.5359#2016-07-0601:19arthurIs there such a thing in datomic as a "rebuild all the indexes from scratch" command or is this a silly question#2016-07-0603:01bkamphaus@arthur: doesn’t look like that’s necessarily the solution to your problem. Your system may be underprovisioned (i.e. not enough dynamo throughput provisioned), or you might have catastrophic GC happening with a heap that large (really no benefit to going more than ~12g max heap on a transactor). It’s hard to say which it is, though logs/metrics will show bottlenecks for write throughput and things like laggy heartbeat that would point to catastrophic GC. If you can’t find an obvious culprit in storage throughput, you may want to reach out to someone at Cognitect support in case it’s a bug or something their diagnostic tools will help you locate.#2016-07-0622:21pheuterSeeing some funky behavior where the tx metadata associated with a particular transaction becomes stale after the first transaction. That is, the db/txInstant and db/id of the transaction associated with the latest entity do not change after the first transaction, even when the actual entity attributes returned reflect more recent transactions.#2016-07-0622:21pheuterIs there some kind of caching of transaction entities?#2016-07-0622:28pheuterIt can’t be Redundancy Elimination (http://docs.datomic.com/transactions.html#redundancy-elimination) because of the :db/txInstant#2016-07-0622:29bkamphaus@pheuter not sure I follow what you mean.
An entity can span multiple transactions, so you need some logic defining which transaction concerning the entity you want information about.#2016-07-0622:29pheuterI see, I was assuming that given a single d/db value, a (pull ?tx-id [*]) will return the latest transaction attributes, no?#2016-07-0622:30pheuterfor a particular entity#2016-07-0622:31bkamphaushow does your query relate the entity in question to the transaction?#2016-07-0622:32pheuter[:find (pull ?e [*]) (pull ?tx-id [*])
:where [?e ?a ?v ?tx-id]]
#2016-07-0622:34bkamphaushow are ?a and ?v bound? if not at all, should replace with _ — and that version of the query should return multiple relations of the pulled e to the pulled ?tx-id.#2016-07-0622:35pheuterin our particular case we’re dynamically generating these queries from a json request, they’re being filled in with actual values, so it looks more like:
[:find (pull ?e [*]) (pull ?tx-id [*])
:where [?e :user/firstName “Bob" ?tx-id]]
#2016-07-0622:35bkamphausso in that case you’ll get the ?tx-id for when that specific fact — the user first name — was specified.#2016-07-0622:36pheuterlet’s say the query is this:
[:find (pull ?e [*]) (pull ?tx-id [*])
:where [?e :user/firstName _ ?tx-id]]
#2016-07-0622:36pheuterwe’ll get back a list of users that have a firstName#2016-07-0622:36pheuterwhich ?tx-id will we get back then?#2016-07-0622:37bkamphausany ?tx-id in the database (note if you’re using a present database this does not include history) in which a fact about :user/firstName for that ?e was asserted.#2016-07-0622:38pheuteri see, so we can potentially be getting back an older transaction as long as the entity has a first name, even if it has changed since then#2016-07-0622:38pheuteris there a way to specify to get the latest transaction associated with that entity?#2016-07-0622:38pheuterDamn, I guess that was my question.#2016-07-0622:38bkamphausout of order reply#2016-07-0622:38pheutergotcha#2016-07-0622:38bkamphausNo that’s not what’s going on, I don’t think.#2016-07-0622:39bkamphaustransaction ids are for facts about entities#2016-07-0622:39bkamphausthere is no idea of an entity id -> transaction id mapping that transcends facts#2016-07-0622:39bkamphausentities are projected from facts#2016-07-0622:39bkamphaustransaction ids are for facts#2016-07-0622:39bkamphausif you want to get the latest transaction with an entity, you get the max ?tx-id for [?e ?tx-id]#2016-07-0622:40pheuteryeah, i guess i was hoping to be able to order all transaction facts associated with an entity by txInstant and get the most recent one#2016-07-0622:40pheuterright#2016-07-0622:40pheuterthat could work#2016-07-0622:40bkamphausnot necessary to order by txInstant and less reliable to do so (transaction granularity means transactions can occur on the same instant) — transaction ids are guaranteed to ascend monotonically, just max that.#2016-07-0622:41pheutergood point, thanks! this has been really helpful#2016-07-0622:42bkamphausof course if you want readable time it might make sense. 
Here’s an SO answer where I don’t follow the advice I just gave you 🙂 https://stackoverflow.com/a/24655980#2016-07-0622:47pheuterheh#2016-07-0622:58pheuterhm, seeing the following error now: clojure.lang.PersistentList cannot be cast to java.lang.Number
query looks like this:
[:find (pull ?e pattern) (pull ?tx-id [*])
 :where [?e :account/email _] [?e _ _ (max ?tx-id)]]
#2016-07-0623:02pheuteri’m basically trying to avoid having to break it out into two separate queries#2016-07-0623:04marshallTry the max in the find specification#2016-07-0623:05pheuterInvalid pull expression (pull (max ?tx-id) [*])#2016-07-0623:05marshall[:find (max ?tx-id)
:where [?e ?tx-id]]#2016-07-0623:05bkamphausdefinitely can’t nest function calls inside single where clauses in general and aggregates can only legally be called in the :find clause as @marshall points out. I think you’ll run into issues using the pull combined.#2016-07-0623:05bkamphausYou could probably get max txInstant and still pull the ?tx-id? Or try and subquery it, e.g. https://stackoverflow.com/a/30771949#2016-07-0623:06pheuterah i see#2016-07-0623:06pheuterinteresting#2016-07-0623:06pheuterthanks!#2016-07-0623:06pheuterdidn’t know subqueries is a thing o.0#2016-07-0623:08bkamphausnot the first thing I’d reach for, depends on what your need to do things in one query is. In general, you’re better off breaking up pull/entity (project analogues) of query from the where clause/select portions, they’re more composable and general purpose that way.#2016-07-0623:08marshallAlso keep in mind since query occurs in process on the peer and a lot of this data will be cached there isn't the same cost for multiple queries as there would be in a traditional rdbms#2016-07-0623:08bkamphausif you’re building queries (as your earlier discussion implies) via REST or a client layer where you don’t get the same benefit of caching and have to round trip to the peer it’s a different story.#2016-07-0623:09pheuteryeah, in this case i’m building a query dynamically from a client request and the desired behavior is to return the latest tx metadata associated with the entities returned#2016-07-0623:10pheuterit’s a fair point that queries are heavily cached on the peer at it may not be the worst thing to make two queries to fulfill each request#2016-07-0623:33pheuterhm, is it possible to pass a particular entity into a subquery as a query arg?
[:find (pull ?e [*]) (pull ?latest-tx [*])
:in $
:where
[?e :account/email _]
[(datomic.api/q [:find (max ?tx-id) :where [?e _ _ ?tx-id]]
$ ?e) [[?latest-tx]]]
[?e _ _ ?latest-tx]]
#2016-07-0623:34pheutergetting error: processing clause: [?e _ _ ?tx-id], message: Insufficient bindings, will cause db scan#2016-07-0623:37pheuterah i forgot the in clause in the subquery, doh#2016-07-0623:38pheuteryay it worked#2016-07-0701:47bhaganyMy data model is a digraph in which some children can have multiple parents, with multiple ref attributes linking parents to children, depending on the kind of parent. I'd like to write a query that takes parent eids and returns all the nodes that are children of the complete set of the parents. I could do this if I could pass in only a tuple of parent-ids, but since I also need to pass in ref attributes for each parent, I am not sure how to do it.
Something like:
[:find ?child
:in $ [[?parent-id ?parent-ref] ...]
:where [?parent-id ?parent-ref ?child]]
except that will pull back children of any of the parents, whereas I want children of all of the parents. I'm hoping I've missed something.#2016-07-0702:02marshall@bhagany: I would have to spend #2016-07-0702:02marshallArgh. Phone keyboard#2016-07-0702:03marshallI think you can accomplish that with a rule that resolves the parent relationship via the specified attribute but I'd have to play with it a bit to be sure#2016-07-0702:03marshall@bhagany: ^ I'll see if I can get something that works tomorrow #2016-07-0702:29bhagany@marshall: okay, thanks for the direction. I'll play around with that too. #2016-07-0717:41marshall@bhagany: I thought a bit more about your question.
The reason you’re getting the behavior you’re seeing is that using the collection binding form (…) will run the query on each element of the collection (your [id ref] tuples) individually and return the union of those results.
If you need to join across elements, they should be provided in the same tuple to the query.
There are a couple options for your particular problem. Are you generating these queries dynamically? If so, I’d recommend just generating them entirely programmatically with each parent/reftype relation as its own datalog clause (i.e. [?parentA :ref/typeA ?child] [?parentB :ref/typeB ?child]).#2016-07-0717:42marshallAlso, if you’re fully generating them, there’s not any need to parameterize the parentA and parentB in that example - you can simply generate the query with the entityIDs of those elements in place.#2016-07-0718:02bhagany@marshall: I started out fully generating them, which I can still do. But I'm using the rest api, so it basically comes down to string interpolation, and I was hoping to avoid all that :). I'll stick with that until I manage to get this written in clojure. #2016-07-0718:07bhaganyThanks for looking into it#2016-07-0718:07marshallAh. gotcha. Sure. Happy to help if I can#2016-07-0718:08marshallAnother thought/question - do you have a fairly small set of ref types and are they mostly fixed? if so, you could hand write a rule to handle them and that might make things easier#2016-07-0718:09marshallthen you could still parameterize the query with the parent node IDs, but the rule could take care of all the ref type stuff#2016-07-0718:10bhaganyYes, it's a small fixed set of refs... that's a good idea. #2016-07-0718:10bhaganyThanks!#2016-07-0720:19timgilbertHey all... just curious, is anybody using https://hub.docker.com/r/pointslope/datomic-pro-starter/ in production? Seems a lot simpler than manually setting my transactors up directly on ec2 instances, but I'd be curious about other people's experiences#2016-07-0721:21eraserhdSo, I think I discovered there’s no way to recursively pull all attributes, right? I have to specify individual ones.#2016-07-0721:38currentoorIn our app we've got some attributes in client side queries that are not actually retrievable with the pull API.
For example the query [:dashboard/title :dashboard/created-at] title works with datomic.api/pull but created-at is derived with a query over the :db/txInstant.#2016-07-0721:38currentoorIdeally we'd like to have an extensible pull wrapper where we could specify custom ways to query keys like :dashboard/created-at but the results get merged back in such that it looks like a regular pull API call.#2016-07-0721:39currentoorAny suggestions?#2016-07-0722:09ckarlsenre. "Datomic supports 2.0.X and 2.1.X versions of Cassandra." - does this imply that datomic does not support the latest versions?#2016-07-0807:52caspercSo I have a cardinality many value (ref actually, but never mind that), and I want to perform a query taking a list as input giving me of all entities that have at least those values in its cardinality many attribute. My problem is that when binding the input values to a list, Datomic does an ‘or’ on the values meaning that if I provide [:a :b] as input it will also give me entities having only 😛. So can I have the query treat the input with ‘and’ semantics somehow?#2016-07-0808:00caspercJust to make it a bit more specific, I have a template entity which can contain some fields. I want to query for templates having a number of fields (though they can have more). The following query threats the inputs as ‘or’ where I would like ‘and’:
(d/q '[:find [?e ...]
:in $ [?ids ...]
:where
[?e :template/field ?fields]
[?fields :template.field/id ?ids]
]
db [101 102])
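One way to get "and" semantics for casperc's query (as marshall explains further down, collection binding `[?ids ...]` unions the results per element) is to bind the ids as a tuple so each id gets its own pair of :where clauses, which are implicitly and-ed. A hedged sketch with the arity fixed at two ids:

```clojure
;; Sketch only - requires a live Datomic peer; `db` is assumed bound, and
;; the clause pairs must be generated to match the number of input ids.
(d/q '[:find [?e ...]
       :in $ [?id-a ?id-b]
       :where
       [?fa :template.field/id ?id-a]
       [?e :template/field ?fa]
       [?fb :template.field/id ?id-b]
       [?e :template/field ?fb]]
     db [101 102])
```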
#2016-07-0810:33danielstocktonif i transact 'bob likes cheese' and that fact already exists, is any transaction created that reaffirms this fact?#2016-07-0810:35danielstocktonfor example, if at tx1 bob tells me that he likes cheese, and then at tx2 alice tells me that bob likes cheese#2016-07-0810:35danielstocktoni want some auditing capabilities that can not only tell me that bob told me he likes cheese at tx1 but also that alice reaffirmed that bob likes cheese at tx2#2016-07-0810:42pesterhazyHow do I find all the datoms transacted in a tx? (d/q '[:find ?e ?a ?v ?tx ?added :in $ ?tx :where [?e ?a ?v ?tx ?added]] (rdb) 13194153065948)
datomic.impl.Exceptions$IllegalArgumentExceptionInfo: :db.error/insufficient-binding Insufficient binding of db clause: [?e ?a ?v ?tx ?added] would cause full scan
#2016-07-0810:55pesterhazythat obviously doesn't seem to work 🙂#2016-07-0811:09pesterhazyI guess (first (-> conn d/log (d/tx-range 13194153065948 nil))) works#2016-07-0811:23casperc@pesterhazy: Yep, that’s how.#2016-07-0811:30pesterhazyhere's what I came up with: https://gist.github.com/pesterhazy/699455c956759c793c9802cf7ad4714b#2016-07-0811:30pesterhazyrepl-based helpers for inspecting an entity's history ^^ @casperc#2016-07-0813:49marshall@danielstockton: If the datom being transacted is identical in the second transaction, it will result in an empty transaction generated. You could use transaction metadata to indicate that the transaction was issued by a different user (i.e. ‘alice said’ in transaction #2), however the transaction itself won’t be associated with the ‘bob likes cheese’ fact. If you need to track that information as well, you’ll want to model that directly (possibly also as transaction metadata).
The reified transactions talk here: http://www.datomic.com/videos.html provides some more detail on using transaction metadata for provenance tracking.#2016-07-0813:53marshall@casperc: This is essentially the same question as was asked by @bhagany yesterday. Binding using the collection form (…) will run the query on each element of the collection and return the union of those results.
To ask “and” questions your list of ids need to be provided in the same tuple, using the tuple binding form (http://docs.datomic.com/query.html#tuple-binding)
Each id will have its own individual datalog clause and the default behavior of multiple clauses in a query is “And”.#2016-07-0814:09danielstocktonthanks @marshall, pretty much what i had thought#2016-07-0815:47lvhIs there any particular recommended docs for taking existing nested data and turning it into datoms? I’m currently doing it semi-manually — it’s fine, but it feels like there should be an easier way#2016-07-0815:47lvhin particular, I’m trying to put swagger API specs in datomic to query them#2016-07-0816:21stuartsierra@lvh You can transact nested data as maps if it matches your schema. You have to create entity IDs for any map that isn't a component nested under another map.#2016-07-0816:23lvhOh, cool! That'll definitely save me a bunch of time, I'll just go in and add appropriate namespaced keywords #2016-07-0816:24stuartsierra@lvh reference docs at http://docs.datomic.com/transactions.html#nested-maps-in-transactions describe what forms of nesting are allowed#2016-07-0816:27lvhThanks!#2016-07-0816:27lvhLooks like I will still have to mangle the data somewhat since e.g. paths/status codes aren’t very useful keys:
https://github.com/OAI/OpenAPI-Specification/blob/master/examples/v2.0/json/petstore-minimal.json#L26-L46#2016-07-0816:29stuartsierraYes. For cases like that, my usual pattern is:
- every function returns a vector of valid transaction data
- (vec (concat ...)) things together to build the full transaction#2016-07-0816:29stuartsierra- create tempids in the "parent" and pass them to a "child constructor"#2016-07-0816:30lvhAlso, some parameters/definitions are inline, some of them are defined at the top level (so e.g. you don’t have to repeat how your limit query parameter works each time)#2016-07-0816:32lvhAh; I saw something earlier like {:widget/manufacturer [:manufacturer/e-mail “ ; are idents on insertion a Datascript-only feature, or would those work too?#2016-07-0816:32lvhI’m wading through a whole bunch of links and tutorials and they don’t always specify#2016-07-0816:33lvhI’m glad that I’m going in the right direction because I did in fact have:
(->> (concat (operations swagger))
(map (fn [m] (assoc m :swagger/api (:info swagger))))
vec)
in my code already 😄#2016-07-0820:38alexmiller@stuartsierra: how about (into [] cat […]) instead of (vec (concat …))…#2016-07-0820:39stuartsierra@alexmiller: I have that defined as concatv 🙂#2016-07-0820:39alexmiller:) I just enjoy putting things into my cat#2016-07-0821:47lvhIs there a recommended way for describing, uh, … overlays? I have a high-level default, which specific components lower in the hierarchy can override
it seems the two options are:
1. propagate the default to the leaves
2. walk the graph until you find the default
In particular, is there a standard way of writing e.g. fns that produce clauses that go in where, or do you need to concat in that case?#2016-07-0821:47lvhI don’t know if my description makes any sense#2016-07-0821:47lvhin my specific case, apis have operations, apis produce and consume mime types (e.g. application/json); individual operations can claim to produce e.g. application/xml in addition to json#2016-07-0822:38lvhalso, are attributes generally spelled in the singular if they have cardinality many?#2016-07-0912:20val_waeselynck@lvh regarding your first question I would need an example #2016-07-0912:21val_waeselynckI mean with attribute definitions and specific questions you want to ask#2016-07-0914:03casperc@marshall: Hmm, I tried that but it seems I got caught in a bit of a gotcha. I was trying to do the ‘and’ logic on the child entity being referenced by the cardinality many ref, which does not work.#2016-07-0914:04caspercWorks:
(db/q
'[:find [(pull ?e [*]) ...]
:in $ ?one ?two
:where
[?e :template/field ?one]
[?e :template/field ?two]]
db [:template.field/id 102] [:template.field/id 101])
#2016-07-0914:05caspercDoesn’t work:
(db/q
'[:find [(pull ?e [*]) ...]
:in $ ?one ?two
:where
[?e :template/field ?f]
[?f :template.field/id ?one]
[?f :template.field/id ?two]]
db 102 101)
#2016-07-0914:10caspercWorks too:
(db/q
'[:find [(pull ?e [*]) ...]
:in $ ?one ?two
:where
[?f1 :template.field/id ?one]
[?f2 :template.field/id ?two]
[?e :template/field ?f1]
[?e :template/field ?f2]
]
db 102 101)
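The "Works too" shape above can be factored into a rule so that each id needs only one clause in the calling query. A hedged sketch (the rule name `has-field` is invented for illustration; `db` is assumed bound):

```clojure
;; Sketch only - requires a live Datomic peer.
(def rules
  '[[(has-field ?e ?id)
     [?e :template/field ?f]
     [?f :template.field/id ?id]]])

(d/q '[:find [(pull ?e [*]) ...]
       :in $ % ?one ?two
       :where
       (has-field ?e ?one)    ;; each clause is and-ed with the next
       (has-field ?e ?two)]
     db rules 102 101)
```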
#2016-07-0914:12caspercI guess I see the logic now, in that the second one only sees one template.field at a time and then asks it to be both ?one and ?two.#2016-07-0918:42lvhval_waeselynck: More detail than what I provided in https://clojurians.slack.com/archives/datomic/p1468014473000883 ? I’m trying to translate Swagger into datomic#2016-07-0918:47lvhI’m trying to find a sufficiently complex and pretty-printed swagger example#2016-07-0918:51lvhI am trying to turn this: https://gist.github.com/lvh/b45dc7994336b4171d78622af81f7e52 into datomic so I can query it. petstore-minimal.json specifies the generally accepted and returned mime types here: https://gist.github.com/lvh/b45dc7994336b4171d78622af81f7e52#file-petstore-minimal-json-L20-L25
but also overrides it for a specific operation here:
https://gist.github.com/lvh/b45dc7994336b4171d78622af81f7e52#file-petstore-minimal-json-L30-L32#2016-07-0918:51lvhincidentally, in this case they agree (application/json)#2016-07-0918:53lvhaccording to the spec, which unfortunately doesn’t have anchor links for a specific part (http://swagger.io/specification/) the operation one overrides the global one#2016-07-1020:38bhaganypretty sure this is going to be a "no", but - is there any way to use the log api with the rest api?#2016-07-1021:42val_waeselynck@lvh OK, I think I understand, correct me if I'm wrong. You want to store REST configuration in Datomic.#2016-07-1021:43val_waeselynckBasically, you'd like global defaults for the configuration, as well as resource-scoped / endpoint-scoped configuration to override it.#2016-07-1021:46val_waeselynckThe way I would do it would be to have entities representing the global configuration, the resources and the endpoints, attributes for the options (e/g :rest.config/consumes) and a to-one ref attribute representing configuration hierarchy (similar to Javascript's prototypal inheritance, e.g :rest.options/parent).#2016-07-1021:47val_waeselynckI would then write something that generically climbs the inheritance chain to find the value for an option.#2016-07-1021:48val_waeselynckIf you're querying from Datalog, you could use a recursive rule, e.g :#2016-07-1021:51val_waeselynckIf you're querying from Clojure with Entities, I'd walk the inheritance chain manually, e.g :#2016-07-1021:53val_waeselynck(or you could use a transducer of course :))#2016-07-1021:54val_waeselynck@lvh does this answer your questions ?#2016-07-1022:05marshall@bhagany: I believe the log in query is supported via REST http://docs.datomic.com/log.html#log-in-query I'm not at a computer to test currently.#2016-07-1022:06bhagany@marshall: Aha, it hadn't occurred to me to try inside query. I'll give it a shot. 
Thanks!#2016-07-1022:08bhagany@marshall: hrm, on second thought, doing that appears to require passing the log as a data source to query. I don't see a way to do that via REST. #2016-07-1023:40lvh@val_waeselynck: I think so! Thanks!#2016-07-1023:40lvhfind-opt was the missing piece#2016-07-1116:47cap10morganSince the recommended transactor upgrade approach is to stand up the new version and then kill the old ones (letting the new ones fail over), is there a way to do that without losing the ability to transact data during the failover delay? Or do you just have to schedule a bit of downtime for all your systems?#2016-07-1117:45cap10morganGot my answer in a support ticket. The key was reading and fully understanding this: http://docs.datomic.com/ha.html#peer-recovery-time. TL;DR is you either design your whole system to tolerate this or yes, you need scheduled downtime.#2016-07-1120:57ethangracerhey all, the schema documentation says "You can also retract optional schema attributes with :db.alter/attribute.” Does anyone know how to do this? There don’t appear to be any examples.#2016-07-1121:00marshall@ethangracer: This section of the docs: http://docs.datomic.com/schema.html#schema-alteration discusses altering schema. Further down in this section there is an example that uses retraction to ‘unset’ isComponent#2016-07-1121:05ethangracer@marshall: thanks, i’m looking for something a bit different. I think by optional schema attributes they mean things like isComponent. I want to take one attribute name and effectively ‘merge’ it with another. So I have :category/name that I want to now be equivalent to :tag/name. So I was hoping to retract the :category/name attribute, and either replace it with :tag/name or make new entries on the entity called :tag/name#2016-07-1121:06marshallThe key point there is “optional schema attributes”
:db/ident is a required attribute#2016-07-1121:07marshallthat same page has a table listing the optional attributes that can be altered#2016-07-1121:07ethangracerI was afraid you were gonna say that 😄#2016-07-1121:08ethangraceris removing the uniqueness attribute a bad idea?#2016-07-1121:08marshallhttp://docs.datomic.com/schema.html#renaming-an-identity There is a way to ‘rename’ an attribute#2016-07-1121:09marshallso, :db/ident can be renamed, it can’t be retracted.#2016-07-1121:09ethangraceryeah I saw that, unfortunately I want to rename it to an attribute name that already exists#2016-07-1121:09ethangracerso we’re getting a :db.err/unique-conflict#2016-07-1121:09marshallah, in that case you’ll want to transition your data from one attribute to the other#2016-07-1121:10marshallassuming the new data don’t violate uniqueness constraints#2016-07-1121:10ethangraceroh boy. alright, thanks for the help!#2016-07-1121:11marshallNote: please backup your DB before any schema alteration#2016-07-1121:11marshalland if at all possible, do the alteration in staging or dev prior to rolling it out in prod#2016-07-1121:19ethangracerthanks, those are both absolutely happening!#2016-07-1121:33ethangracer@marshall: so, am I right in thinking that it’s impossible to remove the old name from the schema? but from now on, consistently use the new one?#2016-07-1121:40marshallThat's correct. You can't remove schema elements #2016-07-1123:26zentropeWell! Far as I can tell, Aleph (latest) and Datomic (latest): no go in the same JVM: fundamentally incompatible versions of netty-all.#2016-07-1123:57marshall@zentrope I'm looking into the compatibility issue and potential solutions
I'll update on the Google group thread when I have something to report#2016-07-1123:58zentropeOkay. It’s not that big of a deal (other than just worrisome jar-hell stuff). But if a simple tweak on Datomic would make everything magically work, I’d not turn it away! ;)#2016-07-1123:58zentrope@marshall: Meanwhile, I’ll see what it’s like to use httpkit stuff again.#2016-07-1207:38stijn@zentrope, @marshall I ran into the same issue with aleph and latest datomic. current solution for me: going back to datomic 0.9.5372#2016-07-1215:19pheuterIs there any potential harm in transacting the same schema attributes multiple times?#2016-07-1215:19pheuterfor example, when doing naive migrations of new attributes, not worrying about filtering out attributes already in the db#2016-07-1215:29marshall@pheuter: If a given transaction has no ‘novel’ datoms, it will create an empty transaction entity in the log. Otherwise, no, re-asserted datoms are ignored#2016-07-1215:30marshallIn general I wouldn’t do it all the time (i.e. on startup), but occasionally (as in part of a migration) shouldn’t be a big deal#2016-07-1215:33dominicm@pheuter: You've probably seen this, but just in case https://github.com/rkneufeld/conformity#2016-07-1215:39pheuterthis is great, thanks guys! yeah, this isn’t an on-startup thing as much as one-off migrations of schema changes#2016-07-1215:46davinconformity seems to be heavily inspired by the day-of-datomic code for migrations, that is worth checking out as well: https://github.com/Datomic/day-of-datomic/blob/master/src/datomic/samples/schema.clj#2016-07-1217:42pheuteryeah i saw that earlier, just wasn’t sure if that was required#2016-07-1316:55zaneIf I use the pull API to retrieve a value for an attribute that doesn't exist on the specified entity do I get null or does the result map just omit the attribute entirely?#2016-07-1316:56zaneSeems like the latter.#2016-07-1317:11bostonaholic@zane: yes, the latter#2016-07-1404:40zentropeOh, no. 
0.9.5385 on FreeBSD: client can’t talk to host!#2016-07-1404:43zentrope{:type clojure.lang.ExceptionInfo
:message Error communicating with HOST 127.0.0.1 on PORT 4334
:data {:alt-host nil, :peer-version 2, :password cvDgzartcCIkdLS6YW2s0M7IJd5zG1p0BEPfo28Dn/c=, :username Knqi3QCMoTYAPzQZgsRpbX9IcqLjhjgbvArvmUeOv64=, :port 4334, :host 127.0.0.1, :version 0.9.5344, :timestamp 1468471153847, :encrypt-channel true}#2016-07-1404:43zentropeversion 5344?#2016-07-1404:45zentropeAh. My pkg that updates datomic is … deficient.#2016-07-1414:59isaacHow many Datoms in one transaction is recommended#2016-07-1414:59isaac?#2016-07-1415:21hansdepends on how long you're willing to let other transactions wait. if you're committing very large transactions, you'll also have to tune the transactor and the peers to tolerate long pauses.#2016-07-1415:22hans100s are safe#2016-07-1415:27isaacI have 272 new entities (about 700 datoms). I want to save them in one transaction, but I got a tempid conflict.#2016-07-1415:27isaachow many tempids in one transaction is safe?#2016-07-1415:31marshall@isaac: 700 datoms should not be an issue. Can you post the exception here?#2016-07-1415:34isaacCaused by datomic.impl.Exceptions$IllegalArgumentExceptionInfo
:db.error/datoms-conflict Two datoms in the same transaction conflict {:d1
[17592186046132 :project.stage/step 3 13194139535021 true], :d2
[17592186046132 :project.stage/step 1 13194139535021 true]}
{:d1 [17592186046132 :project.stage/step 3 13194139535021 true], :d2 [17592186046132 :project.stage/step 1 13194139535021 true], :db/error :db.error/datoms-conflict}
error.clj: 124 datomic.error/deserialize-exception
peer.clj: 400 datomic.peer.Connection/notify_error
connector.clj: 169 datomic.connector/fn
MultiFn.java: 233 clojure.lang.MultiFn/invoke
connector.clj: 194 datomic.connector/create-hornet-notifier/fn/fn/fn/fn
connector.clj: 189 datomic.connector/create-hornet-notifier/fn/fn/fn
connector.clj: 187 datomic.connector/create-hornet-notifier/fn/fn
core.clj: 1938 clojure.core/binding-conveyor-fn/fn
AFn.java: 18 clojure.lang.AFn/call
FutureTask.java: 266 java.util.concurrent.FutureTask/run
ThreadPoolExecutor.java: 1142 java.util.concurrent.ThreadPoolExecutor/runWorker
ThreadPoolExecutor.java: 617 java.util.concurrent.ThreadPoolExecutor$Worker/run
Thread.java: 745 java.lang.Thread/run
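As a sketch of the failure mode this trace reports (the :stage/code attribute name here is hypothetical, not from the conversation): two maps with distinct tempids can still collide if they share a value for a :db.unique/identity attribute, because upsert resolves both tempids to the same entity.

```clojure
;; Both maps carry fresh tempids, but if :stage/code is declared
;; :db.unique/identity, the shared value "intro" upserts both maps
;; onto the SAME entity. The cardinality-one attribute
;; :project.stage/step then receives two conflicting values (1 and 3),
;; producing :db.error/datoms-conflict.
[{:db/id (d/tempid :db.part/project)
  :stage/code "intro"
  :project.stage/step 1}
 {:db/id (d/tempid :db.part/project)
  :stage/code "intro"
  :project.stage/step 3}]
```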
#2016-07-1415:37marshallYou’re attempting to assert that :project.stage/step is both 1 and 3 for the same entity (17592186046132) in the same transaction. If it is cardinality one, it can only get a single value at a time.#2016-07-1415:43isaacYeah! the message told us that. But I store them in two different entities; every entity is assigned a :db/id via (d/tempid :db.part/project)#2016-07-1415:45marshallare you using the (d/tempid :db.part/project <SomeNegativeNumber>) arity anywhere? If so, you may have inadvertently used the same negative indicator for what should have been two different entities#2016-07-1415:48isaacno, I just use (d/tempid :db.part/project)#2016-07-1415:52isaacOh, I found the bug! There are two entities with the same attribute value, and that attribute is :db.unique/identity#2016-07-1415:54isaacThe data is too large? 🙂#2016-07-1415:54isaacthanks @marshall#2016-07-1418:20cezarPlaying with Datomic Pro and it appears to me that every transaction is saved in one (or more) rows in the DATOMIC_KVS table.
I think this could pose problems for me given that I intend to have 1B+ datoms in a single database and I doubt Postgres or MySQL will cope well with billions of rows in a single table. Especially given that Datomic appears to add at least one index to the DATOMIC_KVS table (the primary key).
Should I try to bunch up my datoms into transactions as coarse as possible or should I look for a different underlying storage? (might be a pain given that I'd need to convince enterprise architects that we need a new NoSQL system deployed)#2016-07-1418:24marshall@cezar: The ‘soft limit’ for Datomic datoms is 10Billion per database.
There should be no problem using Postgres or MySQL for a 1B datom DB. Transactions should be sized ‘transactionally’ - that is put things you need to have atomically combined in a single transaction into the same transaction. It’s best to limit transactions to 100s or low 1000s of datoms if possible, but there is no harm in having small granular transactions.#2016-07-1418:28cezarYeah, I understand this @marshall but I'm trying to figure out if Datomic won't inadvertently slaughter the performance of the underlying RDBMS by simply inserting billions of rows into DATOMIC_KVS. In the past I have not had good experiences with tables that have huge row counts on traditional RDBMS engines like Postgres. Their B+ trees became progressively slower to the point where beyond ~100 million rows inserts became unusably slow.#2016-07-1418:29cezarso my question is. Has anyone had good success with using a RDBMS for a large Datomic database or should I go with Cassandra/Couchbase etc as the backing store?#2016-07-1418:30marshallThe backend storage is strictly used as a k/v storage and all data are immutably written - in practice, yes, we have multiple customers running very large databases (>1B datoms) in production using RDBMs backing storage#2016-07-1418:30cezarI see. Maybe I'm just not configuring my Postgres properly... on a related note does Datomic actually need the primary key on DATOMIC_KVS?#2016-07-1418:31cezarbecause I'd be a lot more comfortable if there wasn't any indexes on the DATOMIC_KVS table#2016-07-1418:37marshallThe primary key is required.#2016-07-1419:05cezarAre expectations though that NoSQL backing storages (e.g. 
Cassandra, Couchbase) are more likely to provide more consistent performance for large Datomic databases or is it a wash in your experience?#2016-07-1419:08marshallWe’ve seen very good and consistent perf with RDBMs backing storage; if you absolutely need the highest level of performance and throughput, yes, something like DynamoDB or Cassandra is merited#2016-07-1419:26hans@cezar: We're running out .25 bn datoms database against a basically untuned Postgres and have no performance issues.#2016-07-1419:28cezar@hans what is the ingestion rate that you observe if you fire your datoms at a rapid speed? Also how large or small are your transactions? I get about 34,000 datoms/sec which I'm very happy with but I still have a fairly small set and I'm sending them within large transactions (not ideal for my case)#2016-07-1419:30hansLarge transactions are one of the pain points that made us look for alternatives. We have not measured insertion rates in a long time, but we had no reason for that because performance is sufficient for our application.#2016-07-1419:50cezar> Large transactions are one of the pain points that made us look for alternatives.
@hans: are you saying that your transactions are too large for Datomic to cope with or that you had to resort to large transactions to make the ingestion performance acceptable?#2016-07-1419:51hansWe have a lot of transactions that are too large for Datomic, and we've been engineering around that by splitting the operations up into smaller units. This is just a workaround, though, and the lack of support for larger transactions is one of the reasons why we're using another database for our next project.#2016-07-1419:54cezaroh I see. Kind of the opposite of the issue I may have. I have lots of data with no clear transaction delineation. Which is nice because I can make them arbitrarily large or small as long as I get good performance. However, the "time travel" of Datomic is nearly ideal for my use case so I'm not eager to explore alternatives.
Out of curiosity though, what alternative are you looking at?#2016-07-1419:57hansWe're now migrating to MarkLogic for most of the things that we do. It suits our application space better, and in fact it has time travel as well. We're not planning to use that feature in the same way as we used Datomic's reified transactions, though, because we've learned that it is important to us to change the history if the business requires it or if we need to correct errors that occurred in the past.#2016-07-1421:18cezarNever heard of that DB. Hard to find any real info on it beside marketing hype. Have you done a proper evaluation of its RDF capability? I'm a bit skeptical tbh from what little googling I've done on it...#2016-07-1503:39hans@cezar We're not interested in RDF, and this is of course not the right forum to discuss it, but http://www.marklogic.com/resources/inside-marklogic-server/ links to a paper describing the internals.#2016-07-1515:54isaacDoes datomic support customize valueType :db.install/valueType#2016-07-1516:08stuarthalloway@isaac the underlying design was made to facilitate that, but not today. What value type(s) do you need?#2016-07-1604:41isaac@stuarthalloway: Many times, We just want to save some presenting data (some additional info of an entity, it's normally is a Map). Just now, we save via serialize it to bytes.
BTW: does your team have plans to let Datomic's fulltext support customizable analyzers? We want fulltext to support Chinese; our company is in China.#2016-07-1611:28jonsharrattSo was just trying to spike something with Datomic and the SSE endpoint, enabled CORS as per docs to #{/*/} for testing. In Chrome still get:
EventSource cannot load . No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '' is therefore not allowed access.
#2016-07-1611:45jonsharrattfor now just proxying them through but wondered if I am doing something daft (more than likely :D)#2016-07-1613:46cap10morganJust upgraded my transactors to 0.9.5385, now trying a 0.9.5385 peer against them (DynamoDB storage, so I added [com.amazonaws/aws-java-sdk-dynamodb "1.11.6"] to deps). Now I'm getting this error from datomic.api/connect: CompilerException java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.JavaType.isReferenceType()Z, compiling:(form-init2275309853874966894.clj:1:16) Anyone seen that or have ideas how to fix it?#2016-07-1613:50cap10morganah, looks like another dep is bringing in a different version of com.fasterxml.jackson.core. so probably just some exclusions I need to add#2016-07-1708:22tengHi. I’m new to Datomic. I have some problems with the Seattle example: https://github.com/espeed/dskel/blob/master/samples/seattle/getting-started.clj. The first thing that I don’t understand is why i can execute "(d/create-database uri)” and it returns true (success) and then stop the REPL, restart my computer and then try again and it returns false (like if the database still exists), so I have to execute "(d/delete-database uri)” first. It is an in memory database in the example "(def uri "datomic:<mem://seattle>”)”. I think this is the root of my problem, because what happens next is that I get ”duplicate key” errors when I try to load the data (submit the transaction), like if the data is already there, when I run the whole example again. Any ideas?#2016-07-1715:31paulspencerwilliamsHey all. Has anyone been using Spec-tacular much?#2016-07-1715:54calvis@paulspencerwilliams: well, I have! 
but I don’t think many other people do haha#2016-07-1717:28paulspencerwilliams@calvis: it looks like a neat DSL.#2016-07-1717:31paulspencerwilliams@calvis: can / would use use the specs as ‘records’ throughout the rest of an application, or keep these contained within a ‘db layer’, and use records or maps across the app?#2016-07-1717:40calvis@paulspencerwilliams: yes we use them throughout, I think the only time we don’t is when we have a required field on the database that isn’t necessarily required off the database. I’d like to add a more precise specification for that type of field#2016-07-1717:48paulspencerwilliams@calvis: and you just use the (house
functions to create them as needed in controller for example?#2016-07-1717:49calvisyeah#2016-07-1717:50paulspencerwilliamsCool. Cheers. Will play. It seemed a little boilerplaty to have both defrecords and defspecs around.#2016-07-1719:30jaret@teng: I tried to re-create the issue you are facing and was unsuccessful. Tell me if I have your steps in correct order:
1. Def the URI
2. d/create the uri
3. C-d (control D or quit the REPL)
4. Restart machine#2016-07-1719:30jaret@teng when I even c-d the Repl I am able to create a new in-memory DB with the same name with no problem.#2016-07-1719:31jaret@teng I also skipped ahead and tried d/deleting the in-memory DB and was able to remove and re-create it and load the schema again#2016-07-1719:33jaret@teng are you sure you are passing in the :mem protocol and not :dev or :free?#2016-07-1817:49bhaganyI'm beginning the process of getting datomic into production on aws, and running into problems. I'd like to start the transactor in a VPC, but ensure-cf created a security group outside of a VPC. I manually created a security group in the VPC and re-ran ensure-cf, which resulted in the error:
com.amazonaws.AmazonServiceException: Invalid value 'goodsie-datomic-production' for groupName. You may not reference Amazon VPC security groups by name. Please use the corresponding id for this operation.
So I replaced the name of the security group with its id, and got:
com.amazonaws.AmazonServiceException: Value (sg-feb18a85) for parameter GroupName is invalid. Group names may not be in the format sg-*.
I don't see any other id's for security groups. What kind of value does it want?#2016-07-1817:56cap10morgan@bhagany: ran into and worked around this very thing. let me dig up the details.#2016-07-1817:57bhagany@cap10morgan: excellent, thank you#2016-07-1818:01cap10morgan@bhagany: I just switched the LaunchConfiguration’s Properties to use the SecurityGroups parameter (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html) and put our pre-existing VPC security group IDs into that (the sg-* values). This is all in the CF template JSON file.#2016-07-1818:02bhaganyokay, so just get to the point where datomic will generate the cf template, and edit it to be correct?#2016-07-1818:03cap10morganyes, that’s all I’ve found to make it work in a VPC, unfortunately.#2016-07-1818:03bhaganyno problem, I was about to do that anyway 🙂 thanks!#2016-07-1818:03cap10morgannp, good luck#2016-07-1818:06kschraderhey all, we moved to Datomic 0.9.5385 on Friday and now when I’m trying to do a data restore to our backup database from s3 I start getting one “Copied” message over and over again (i.e. 
“Copied 2223 segments, skipped 93335 segments.”) and then eventually the restore times out java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Read timed out#2016-07-1818:06kschraderhas anyone else seen this happen?#2016-07-1818:09kschraderseemed to have worked after 3 tries#2016-07-1818:49marshall@kschrader: Have you retried and had the same problem?#2016-07-1818:49kschraderit just took 3 tries#2016-07-1818:49kschraderhaven’t done it again since then#2016-07-1818:50kschraderit’s just the first time that I’ve ever seen it#2016-07-1818:50kschraderis it a normal error to come across?#2016-07-1819:12marshallit may be a transient failure of S3#2016-07-1819:51kschraderok, I’ll watch it and see if it comes up again#2016-07-1820:51zaneIs there a predicate for whether a value is a datomic connection?#2016-07-1821:41codonnell@zane: (or (instance? datomic.peer.Connection conn) (instance? datomic.peer.LocalConnection conn)) ?#2016-07-1821:42codonnelloops, missed datomic.peer.RemoteConnection#2016-07-1904:30jjunior130Can someone help me make Datomic schemas for use with DataScript? At least for the ones with => Schema for DataScript
(def Name
  Str)
(def FundamentalUnit
  (enum "K" "s" "bit"
        "dollar" "cd" "kg"
        "A" "m" "mol"))
(def FundamentalUnits
  {(conditional FundamentalUnit
                FundamentalUnit
                :else (enum "bits" "<<IMAGINARY_UNIT>>" "oz"
                            "lb" "1000inch"))
   Int})
(def Unitless
  {})
(def Unit
  [Name {:v Num
         :u (conditional empty?
                         Unitless
                         FundamentalUnits)}])
(def PhysicalQuantity
  {:amount pos?
   :unit Unit})
(def Transaction
  {(enum :production :consumption) PhysicalQuantity
   :timestamp DateTime})
=> Schema for DataScript
;; most important piece of data for what I'm working on.
(def TransactionHistory
  '(Transaction))
=> Schema for DataScript
(def Dimension
  [(conditional empty?
                Unitless
                FundamentalUnits)
   Name])
The units are based on https://github.com/martintrojer/frinj/blob/master/resources/units.edn
I'm studying to understand writing Datomic schemas.#2016-07-1919:25lockdownfor writes, one should expect performance to be equal or lower to sqlite correct? since datomic serializes all writes just as sqlite does#2016-07-2017:28tengWhat is the best way to ”convert”/import a relational database to Datomic?#2016-07-2017:54stuartsierra@teng: There's nothing that can do it automatically for you. You need to understand the schema of the relational database, then recreate equivalent schema in Datomic.#2016-07-2017:58teng@stuartsierra: Ok, so I create the schema manually, that sounds reasonable. But should I just write a simple ”program” by myself to import the data from my RDBMS to Datomic?#2016-07-2017:59stuartsierra@teng I'm not aware of any other way it could be done.#2016-07-2018:05teng@stuartsierra: Thanks. It’s a quite straight forward problem that shouldn’t be too hard I think (even if I’m quite new to Clojure). I will start working with Clojure and Datomic in August, after 20 years of Java and relational databases! And by the way, your Pod is great 😉#2016-07-2018:06jgdaveyIn my experience, modeling in Datomic is different enough from RDBMS’s that you wouldn’t want an automated tool to do anything for you.#2016-07-2018:07stuartsierra@teng I'm not sure what "Pod" refers to, perhaps you're thinking of someone else?#2016-07-2018:09teng@stuartsierra: thought you were the man behind Cognicast pod cast.#2016-07-2018:10stuartsierra@teng: No, although I have participated in the Cognicast, I'm not usually involved in producing it. The Cognicast host is Craig Andera#2016-07-2018:11stuartsierraSo if you meant “you” as Cognitect in plural, then I will say thank you on their behalf 🙂#2016-07-2018:11teng! 🙂#2016-07-2019:54bkamphaus@teng: there may be some things worth looking over in the Onyx examples that go from datoms->rows and back, e.g. 
https://github.com/onyx-platform/onyx-examples/blob/master/datomic-mysql-transfer/src/datomic_mysql_transfer/core.clj#2016-07-2019:58teng@bkamphaus: This can be helpful. Thanks!#2016-07-2117:05lucasbradstreet@teng we created onyx-etl for this purpose but it hasn't seen much use and is not actively maintained https://github.com/onyx-platform/onyx-etl#2016-07-2117:07teng@lucasbradstreet: ok, thanks!#2016-07-2118:39fentonif i'm calling a function in my datomic query....is there a way to debug that? i.e. know what the values are bound to etc...#2016-07-2121:29kenbieris it preferable to store t or tx-id if i plan to query the for the attribute and then pass it into d/as-of? does it even matter, considering i can map between the two?#2016-07-2121:37kenbieri think the T in the EAVT is the tx-id, hence that would be better to use. plus ids are garunteed not to change?#2016-07-2206:25val_waeselynck@kenbier: not sure if that's what you imply, but you cannot set the t in datoms#2016-07-2206:44hansyou cannot set anything in datoms. it is possible, however, to set the t when you assert a datom, provided the t is higher than the previous t asserted to the database.#2016-07-2206:44hansthat feature can be used to construct a database with a "fake" history, i.e. inserting events in the past.#2016-07-2208:24kenbier@val_waeselynck: @hans to clarify, i transact something to the database. i then get the t from the database value after that transaction
(-> db-after d/basis-t d/t->tx). Then I store that value in another transaction, i.e. [:db/add :foo/as-of-point some-tx-id]#2016-07-2208:28kenbieri was just curious if i should store t or tx-id. i ended up doing neither, as the requirements for some feature changed so i didn’t care about the old value. though there doesn’t seem anything wrong about it, unless you are planning to join on many foo’s.#2016-07-2208:30kenbierperhaps it’s bad practice?#2016-07-2208:44hansthere is nothing wrong with referring to a transaction entity from another entity per se. they're just entities.#2016-07-2210:51robert-stuttaford@kenbier you'd store the tx-id as a :db.type/ref. that's part of their intended design#2016-07-2210:52robert-stuttafordthe t value is outside of the reified transaction entity data; it's placed separately from the datom indexes, in the transaction log#2016-07-2214:28bhaganyAgreed, I do this, and I refer to the tx-id#2016-07-2216:27michaeldrogalisIs it possible to obtain the last time an entity was referenced from another transaction? I have a component representing an API key, and I'm storing references to that API key for all transactions representing API calls. I'm trying to efficiently query for the last time an API key was used without using an explicit "last-used" field on the API key component.#2016-07-2216:43bkamphaus@michaeldrogalis: apply max on (map :tx (datoms db :vaet ent-id)) maybe? (brainstormed, not attempted)#2016-07-2216:44michaeldrogalis@bkamphaus: Unfortunately that pulls all transactions. Doing an aggregation across all API calls will be linear time. Can't see a reason not to get this one done in constant time.#2016-07-2216:48bkamphausah, didn’t read the constraint carefully enough, yes, linear scan for all transactions referring to that entity. But I think you’re looking at a scan through the log until first instance as the alternative.
At least, I think for constant/log time in all components you’d need a subindex of time on references or references on time, which aren’t provided by Datomic. Maybe not thinking about it correctly.#2016-07-2216:49michaeldrogalisCame to the same conclusion if I wanted to not maintain a timestamp myself. The alternative is keeping a timestamp on the API key component itself, and bumping that ts on each call.#2016-07-2216:49michaeldrogalisThen I can just get the latest entity for the API key and get the ts.#2016-07-2216:54bkamphausYeah, I think it’s reasonable to explicitly annotate the timestamp for last use of the API key as an attribute on the key if you’re expecting the query on ref index and/or log to touch too many datoms on the linear portion and be a drag on the system.#2016-07-2216:56michaeldrogalisOkie dokie. Thanks for the confirmation @bkamphaus 🙂#2016-07-2217:17bhagany@michaeldrogalis: I'm certain to be misunderstanding something here, so if you don't mind indulging me - if you're tagging your transactions with a ref to your API key entity, would something like [:find (max ?used-t) :in $ ?key :where [?tx :tx/key ?key] [?tx :db/txInstant ?used-t]] work?#2016-07-2217:17bhaganyI'm doing something similar to this to generate last-modified times for a sitemap#2016-07-2217:18michaeldrogalis@bhagany: That works, yes. But if you have 500 billion API calls, do you really want to query for the max date each time the user accesses an API page? 🙂#2016-07-2217:18michaeldrogalisBit of an exaggeration, but you get the point.#2016-07-2217:19bhaganyah, okay. I read you as trying to avoid the full log scan error. thanks 🙂#2016-07-2217:20michaeldrogalisNo worries, thanks for pitching in ^^#2016-07-2218:17kenbier@hans: @robert-stuttaford @bhagany that makes sense, thanks.#2016-07-2307:54robert-stuttaford@michaeldrogalis: know that scanning the datomic log does not benefit from peer caching at all. 
you want to avoid any sort of regular query on the transaction log directly#2016-07-2319:37michaeldrogalis@robert-stuttaford: Thanks Robert, I actually didn't know that. I ended up putting a UUID attribute with noHistory on the API key component, and I rotate a new UUID in everytime I make an API call. It's not pretty, but it yields a constant time look up and still maintains a ref on every API call back to the API key.#2016-07-2319:59robert-stuttafordnot a bad way to go: you're trading storage for compute. storage is cheaper 🙂#2016-07-2401:28cezarquestion regarding d/q vs d/datoms performance... I notice that for queries which grab a lot of entities the runtime performance and the memory usage of d/q is quite bad. Here's an example from my prototype. The query runs to determine the last time an attendee had an encounter with an RFID reader. This is modeled using the :encounter entity which contains a timestamp and links via a ref to a reader and an attendee.
Here's my naive implementation of the filter:
(defn get-max-encounters [db]
  (d/q '[:find (max ?et) ?ea ?er
         :in $
         :where
         [?e :encounter/time ?et]
         [?e :encounter/attendee ?ea]
         [?e :encounter/reader ?er]]
       db))
the above query takes well over 10 minutes and frequently blows up with OOM on a Peer with Xmx4G. The number of :encounter entities is 20,000,000 and the number of readers around 600 and about 100,000 attendees.
Using d/datoms however, is a whole different story with the "query" completed in less than 40 seconds and memory usage staying well within the Xmx limit. However, that means adding extra code to basically do Datomic's job by hand.
Unfortunately my code makes assumptions about the number of datoms in the :encounter entity so in that sense it just feels "wrong" so I'd prefer not to use it. But maybe it's the only way to make the thing work. I'll post the snippet in the next message#2016-07-2416:18stuartsierra@cezar: What you see is what I would expect. The implementation of d/q realizes all intermediate results in memory. Roughly speaking, each datalog clause in the :where part of the query generates a set of intermediate results in memory. For this reason I would say that, in general, d/q is not appropriate for operations that must scan all (or nearly all) entities in the database.#2016-07-2416:19stuartsierraOn the other hand, if your query were restricted to a single entity (for example, a single attendee) it could be made more efficient by placing that restriction first in the :where clause.#2016-07-2416:19stuartsierra(defn get-max-encounters [db attendee]
  (d/q '[:find (max ?et) ?ea ?er
         :in $ ?ea
         :where
         [?e :encounter/attendee ?ea] ; narrow results to 1 attendee
         [?e :encounter/time ?et]
         [?e :encounter/reader ?er]]
db attendee))#2016-07-2416:21donaldballJust spitballing, but if I were faced with this problem, I’d consider storing the most recent encounter in a no-history attribute on the attendee datom. Would that be a bad idea?#2016-07-2421:55cezar@donaldball: the problem is that I may be getting encounters out of sequence. so I can't rely on Datomic's time keeping for the exact calculations of the attendance based off that. I can for the rough approximates however, and I already do that.#2016-07-2423:07donaldballWhen applying a transaction that records a new encounter, you could check and see if the attendee does not already have an encounter with a later time, right?#2016-07-2502:25cezarI'm worried that would impact ingestion performance too much. I want to be able to ingest about 100,000 encounters per minute. testing as it is (without a transaction function) I'm already left with not too much headroom.#2016-07-2510:00x1n4uhow can I remove or deprecate a attribute in an existing schema-file ?#2016-07-2511:31robert-stuttaford@x1n4u: one simple way is to remove the code from your schema.edn with a good git commit message, and then to make a new attr ':some-suitable-ns/deprecated-reason' as a string, and then transact a message to that attr on the attribute you're deprecating: [[:db/add (d/entid db :your/old-attribute) :your/deprecated-reason "We don't sell widgets any more, so we no longer need this schema."]]. then, if you really want to, you can wrap d/transact with your own function that checks all the attrs you're writing to and warns if any of them have a deprecated message#2016-07-2520:48fentoni have a query that calls a custom function. I'm trying to find out what the value of the parameters of the function call are. I've updated the logback.xml file to turn on logging for namespace of my custom function, restarted datomic, but I don't see the logging. Am I approaching this wrong? 
How does one inspect the values of function arguments for a custom function in a datomic query?#2016-07-2604:45hans@fenton: Your queries are evaluated in your application process, not in the transactor. Thus, whatever logging you use in your client application can be used to log functions called from within Datalog queries. The same does not hold for functions evaluated by the transactor. The easiest way to debug transactor functions is to use an in-memory database to make them be evaluated in your application process as well.#2016-07-2604:48fentonIf I have a separate datomic server, the transactor is out of my process#2016-07-2604:48fentonOK, maybe that is how I should change the deployment then, I guess. Thank you for the advice!!#2016-07-2604:50fentonHmmm I guess I'm unclear which are transactor functions... I have a function call in my datomic query datalog...#2016-07-2605:01hansYour queries are never sent to the transactor.#2016-07-2605:02fentonok.#2016-07-2607:09yonatanelIs the datomic:free:// protocol still relevant? Is it the same as :dev?#2016-07-2610:50robert-stuttaford:free means limited to 3 peers at the transactor (so two other peers), and using the local h2 in-transactor storage#2016-07-2610:50robert-stuttaford:dev means limited to your paid license peer limit, and using the local h2 in-transactor storage#2016-07-2610:51robert-stuttafordyou can use :free with a pro transactor.#2016-07-2611:07yonatanelI'm using the Pro Starter license#2016-07-2613:15stuartsierraWith Pro Starter you can use the datomic:dev://... protocol for local storage on the filesystem where the transactor is running. This is effectively the same as the datomic:free://... protocol.#2016-07-2613:20marshall@fenton: You can use stored database functions (http://docs.datomic.com/database-functions.html) in query OR as transaction functions (assuming they return the proper type of data for the latter).
If they’re executed in a datalog query, as @hans says, they’ll be run entirely in your peer (your application process). If you call them in a transaction (i.e. inside a call to d/transact - http://docs.datomic.com/transactions.html#built-in-transaction-functions) they will be executed on the transactor.#2016-07-2613:29yonatanelHow would you tag some entities in order to toggle them in and out of query results, having multiple tags where only one can be active at any moment?#2016-07-2613:31yonatanelI have normal data, and extra data that is part of a simulated data generation. I'd like to run several simulations, saving them, and later choose which one to active. If none is activated, I ignore the tagged entities.#2016-07-2613:33yonatanelEither a :simulation/active attribute for each of those simulations, where I have to remove and add this attribute manually, or some kind of singleton entity for the currently active simulation. But I don't know how to implement this singleton entity.#2016-07-2613:33danielstocktoncan't you just tag them all with a string, and query for a certain string?#2016-07-2613:34yonatanelHow do I know which string?#2016-07-2613:35marshallIs your set of tags fairly small and fixed?#2016-07-2613:36yonatanelfairly small but dynamic. not fixed. Every time someone runs a simulation, I'd like to represent it as an entity, with time of execution etc.#2016-07-2613:36marshalland that entity is the tag?#2016-07-2613:37yonatanelyes, and only one of them can be active, or none at all.#2016-07-2613:37marshallyou could use a cardinality one reference attribute to the 'tag entity'#2016-07-2613:38marshallyou can retract to set ‘none'#2016-07-2613:40yonatanelthat cardinality one ref attribute, which entity does it itself belong to?#2016-07-2613:40marshallthe data entities you’re tagging#2016-07-2613:42yonatanelBut if I retract the tag from data entities, they will become normal data. 
They must always remember to which tag they belong, so I can activate/deactivate any set of simulation data.#2016-07-2613:45yonatanelperhaps an entity with
{:db/ident :active-simulation
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}#2016-07-2613:45marshallif you need to maintain both ‘on/off’ and a specific tag, then yes, you’ll need two separate attributes.#2016-07-2613:48yonatanelmaybe {:db/id #db/id[:db.part/user]
:db/ident :active-tag
:active-tag/current 123}
That way I can always refer to the single entity with :active-tag ident.#2016-07-2618:54fenton@marshall: thank you marshall, i understand better now what a transactor function is. I was confused about query function and datalog in general. it seemed like it was all shipped off somewhere as functions had to be fully qualified, but i now just assume that they are run from a different namespace ( the datomic namespace ), but same process. I had cider-debug instrumented the functions, but they weren't getting breakpointed in, so I assumed, incorrectly, they were shipped off to the datomic process. Now I've got the correct mental model, I'll pursue finding out where/why the logs didn't seem to work. Thanks!!#2016-07-2619:05marshall@fenton: Glad to help.#2016-07-2701:22hueypwhen sorting a list of entities by an attribute … is there a way to read from the sorted index (well, is there a sorted index?) … or just query entity-id and attribute-value and sort in memory?#2016-07-2701:23hueypindex being EAVT in this case (right?)#2016-07-2701:31cezarwell, AEVT and AVET are sorted by attribute... you could use one of those. The doc page on Datomic has a good description of how the different indices are laid out:
http://docs.datomic.com/indexes.html#2016-07-2703:59hueypyah, I’m dumb, I meant AVET, not EAVT 😜#2016-07-2703:59hueyp@cezar: thanks 🙂#2016-07-2704:01hueypperfect, have never used the datoms API, looks like what I want#2016-07-2704:10bkamphaushttp://docs.datomic.com/clojure/#datomic.api/index-range#2016-07-2717:08ethangracerhi all, is anyone familiar with a way to incorporate a recursive search into a query? I know that there is the … symbol for recursion in pull syntax, but I’m not seeing anything in the query grammar. Right now I’m resorting to writing recursive rules, which are kind of brutal:
'[[(item-for-subitem ?item ?sub)
   (or-join [?item ?sub]
     [?item :item/subitem ?sub]
     (and
       (item-for-subitem ?item ?mid)
       [?mid :item/subitem ?sub]))]]
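For reference, a rule set like the one above is supplied to the query through the % input. A minimal invocation sketch, where `rules` is the quoted rule set above and `db` and `root-item-id` are illustrative assumptions:

```clojure
;; Sketch: invoke the recursive rule via the % (rules) input.
;; `rules` is the rule set shown above; `db` and `root-item-id`
;; are assumed to be a database value and an item's entity id.
(d/q '[:find ?sub
       :in $ % ?item
       :where (item-for-subitem ?item ?sub)]
     db rules root-item-id)
```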
#2016-07-2717:14val_waeselynck@ethangracer: recursive rules is the way to go I think, but you usually don't write them with an or-join, you just declare 2 implementations of the rule#2016-07-2717:18ethangracer@val_waeselynck: so, something like this?
'[[(item-for-subitem ?item ?sub)
   [?item :item/subitem ?sub]]
  [(item-for-subitem ?item ?sub)
   (item-for-subitem ?item ?mid)
   [?mid :item/subitem ?sub]]]
That seems less clearly recursive to me for some reason, but if that’s the standard, good to know#2016-07-2718:53nandohi all, I’m new to both Clojure and Datomic, but I’ve been hovering around both for awhile now. Downloaded Datomic Pro Starter, have the http://my.datomic.com credentials, trying to experiment with Datomic Pro using boot. For an IDE, I’m using Atom with the proto-repl plugin.#2016-07-2718:55nandoWhen I tried to start the repl from the build.boot file I have open, I got the following error:
No such task (dev)
boot.core/construct-tasks
core.clj
#2016-07-2718:56nandoI figured I needed a deftask named dev, and sure enough, adding this to build.boot got around that error:
(deftask dev [])
#2016-07-2718:57dominicm@nando: You probably want (deftask dev [] (repl))#2016-07-2718:58dominicmI think dev is the task proto repl expects to start your repl, but doesn't call repl directly to allow you to do things like start file watchers or compile sass#2016-07-2718:58nandoOk, I changed it and that works.#2016-07-2718:58dominicmGreat 🙂#2016-07-2719:00nandoNow, to start using Datomic pro, I’ve added a dependency to my build.boot file, like so:
(set-env! :resource-paths #{"src"}
          :dependencies '[[org.clojure/clojure "RELEASE"]
                          [framework-one "RELEASE"]
                          [com.datomic/datomic-pro "0.9.5385"]]
          :repositories #(conj %
                           ["my-datomic" {:url ""
                                          :username "my-datomic-user"
                                          :password "my-datomic-pass"}]))
#2016-07-2719:03nandoUsing the require statement I see in the tutorial doesn’t seem to work if I try and evaluate it in the repl:
(ns telogenix.model.schema
  (:require [datomic.api :as d]))
user=> ClassNotFoundException clojure.lang.Tuple java.net.URLClassLoader.findClass (URLClassLoader.java:381)
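A plausible cause, not confirmed in the log: clojure.lang.Tuple only ships with Clojure 1.8 and later, so this error usually means an older Clojure won the "RELEASE" resolution somewhere on the classpath. Pinning explicit versions rules that out (a sketch; the exact version numbers are illustrative):

```clojure
;; build.boot sketch: pin concrete versions instead of "RELEASE".
;; Datomic 0.9.5385 expects a recent Clojure; 1.8.0 is illustrative.
(set-env! :dependencies '[[org.clojure/clojure "1.8.0"]
                          [com.datomic/datomic-pro "0.9.5385"]])
```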
#2016-07-2719:04nandoI’m stuck here.#2016-07-2719:05nandoThe dependency and repository seem to be correct, because I can run the project and all dependencies seem to resolve without error.#2016-07-2719:05bkamphausSmells like an IDE/config issue, not Datomic specific. I’d ask boot, atom users or in an editor/ide channel. also “RELEASE” if present in that form also needs a real version number added.#2016-07-2719:06nandoIt works without datomic#2016-07-2719:06dominicm@bkamphaus: I think RELEASE works without a number, no?#2016-07-2719:07dominicm@nando: Could you try (require '[datomic.api :as d]) at the repl instead? Not entirely sure how the user repl behaves when subject to (ns)#2016-07-2719:08nandoThat works!#2016-07-2719:08dominicmExcellent!#2016-07-2719:08nandoAt least it doesn’t throw an error.#2016-07-2719:09dominicmYou should be able to do (d/conn ...) and all the other good parts of the datomic api. connecting is a decent test.#2016-07-2719:10bkamphausoh, ok, I haven’t used metaversions ever, nevermind on that.#2016-07-2719:11nandoOk, I’m playing with it.#2016-07-2719:11dominicm@bkamphaus: They're, admittedly, dodgy. But a fairly good way to get going fast.#2016-07-2719:12nandoIs following the example code in the seattle subdirectory still the recommended way to get started?#2016-07-2719:13nandoOr is there something more current? I’m referring to the sample app included with the datomic download.#2016-07-2719:17nando@dominicm: thanks for getting me unstuck#2016-07-2719:18dominicm@nando: No problem. Sorry I can't answer your other questions.#2016-07-2719:20nandoNP. I’ll continue to muddle through and try to get something working. 
I’ll start by creating a schema.edn file now that I seem to have datomic set up.#2016-07-2720:38kschraderif I’m using the memcached support and the memcached server falls over do the peers just fall back to using storage directly for everything?#2016-07-2720:41kschrader(assuming yes, but can’t find anything in the docs about how it handles failure conditions)#2016-07-2802:45zentropeIs there a way to query such that you pass in a vector for one of the terms? (q '[:find ?e :in $ ?ns :where [?e :foo/bar ?ns]] db [“a” “b” “c”])#2016-07-2802:45zentropeThat sort of thing?#2016-07-2802:46zentropeOr do you have to call it X times, or use a rule?#2016-07-2803:27bkamphaus@zentrope: you just mean like: http://docs.datomic.com/query.html#collection-binding ?#2016-07-2812:36yonatanelWhen :finding multiple aggregations on different entity attributes, how can I avoid ignoring entities that are missing one of the attributes?
In the following query, entities 2 and 3 are not participating in the aggregation because of unification.
(q '[:find ?e (min ?a) (max ?b)
     :where
     [?e :a ?a]
     [?e :b ?b]]
   [[1 :a 400]
    [1 :b 100]
    [2 :a 200]
    [3 :b 500]])
=> [[1 400 100]]#2016-07-2813:15dominicm@yonatanel: two answers to that:
Easy one is: http://docs.datomic.com/query.html#sec-5-12-1
The long one is: don't. Query is just for saying what the base rules are for an entity, and how to return that entity. Then you lazily/using pull syntax, get the attributes of that entity.#2016-07-2813:28yonatanel@dominicm: Currently I've separated the queries, one for min and one for max, and merged the results. How would you have done it?#2016-07-2813:31dominicmderp, didn't see the min/max. Your approach is good, there's no need to try and cram as much into a single query as the db is local.#2016-07-2813:57yonatanelThanks. Though it's not cramming stuff into a query, but declaring what you want and expecting the library to worry about the rest. Splitting queries because of implementation details is just the same as cramming everything into one because of remote connections.#2016-07-2814:01dominicmHmm, not sure about it being an implementation detail.
When you make a query, you say you're looking for an entity with all these attributes.#2016-07-2814:04dominicmYou could implement your own aggregate if you're really interesting in pursuing this. Using http://docs.datomic.com/query.html#sec-5-12-1 get-else, you can return nil if the value isn't there.
max would first filter out nil values, before applying max like normal.#2016-07-2814:04yonatanelI'm not sure that's what I'd like to see, but Cascalog for example has optional binding with '!' instead of '?'#2016-07-2814:05dominicmActually, max might be smart enough to handle nils already... Yeah, try get-else#2016-07-2814:12yonatanelI wonder if max/min are using indexes and whether get-else interferes with it#2016-07-2816:20zentrope@bkamphaus: Yes, that’s it. My first attempt didn’t work because I forgot the “…” on the pull function.#2016-07-2817:11zentropeWhen you’re associating entities, do you just :thing/list [3424234, 32423432, 45634342] (those being :db/ids), or do you structure them with [:db/id 43534534]?#2016-07-2817:17zentropeHuh. I can’t see an example in the docs. I suppose the error messages will teach me. ;)#2016-07-2817:22codonnell@zentrope: I think it has to be in a map (see http://docs.datomic.com/transactions.html#building-transactions)#2016-07-2817:24zentrope@codonnell: I think it worked using a vector of entity ids.#2016-07-2817:25zentropeI suspect [:db/id <some-existing-id>] would also work.#2016-07-2817:26codonnellI'm trying to learn this stuff myself, as well. What does your transaction look like, if you don't mind? I tend to use lookup refs, which sit as maps in my transactions.#2016-07-2817:28codonnellFor example, [[:receipt/id 42 :db/id (d/tempid :db.part/user) :receipt/customer {:customer/id 15}]]#2016-07-2817:30zentropeAh, I see in the second example in the “Nested maps in transactions” shows that you can just use a vector of IDs.#2016-07-2817:30codonnell"When a transaction adds an attribute of reference type to an entity, it must specify an entity id as the attribute's value." Ok, so attributes of ref type are just entity ids. Makes sense how your vector of db/ids works.#2016-07-2817:33zentropeMy particular case is something like “tags” and “notes”. 
You have tags, and you have notes, but for this given note, you need pointers to a list of tags. (Assume tags are more interesting than just a keyword or something — the analogy isn’t perfect.)#2016-07-2817:33zentropeSo, I already know the tag ids.#2016-07-2817:33zentropeThe struggle is a DB function that knows how to update the “note” such that it retracts or adds appropriate tags, depending on the value of the DB at the time of the transaction. (So a nice history is maintained.)#2016-07-2817:34zentropeThe other option is to retract all the tags, then assert all new tags (which may or may not have changed).#2016-07-2817:35zentropeIt’s the one use case I wish datomic had some help with.#2016-07-2817:35zentropeA “set” type.#2016-07-2817:35zentropeOr set cardinality, or something like that.#2016-07-2817:36codonnellI don't think your :thing/list has any particular order; it just associates the :thing entity with each of the ids it is passed#2016-07-2817:36codonnellthe vector notation is just a convenient syntax for adding multiple associations at once#2016-07-2817:37zentropeYes, but if :thing/list already exists and I give it a subset of what it used to be, it’ll just add the two sets together. It won’t retract anything.#2016-07-2817:37codonnellI see#2016-07-2817:38zentropeOne option is to make a TX function that queries for the old values, does some comparisons (clojure.data/diff is nice for that), then you build the transaction that way.#2016-07-2817:38marshall@zentrope: there is a discussion of this topic here: https://groups.google.com/d/msg/datomic/MgsbeFzBilQ/fzR_P43-jTIJ
Hm.#2016-07-2819:25zentropeIf you have a database transaction and decide no changes need to be made, do you return an empty vector?#2016-07-2820:12marshall@zentrope: yes, an empty transaction will create an entry in the log, but not affect any other datoms#2016-07-2823:51zentropeIs there a discussion of not and not-join somewhere other than in the docs?#2016-07-2900:25zentropeThe database function docs are kinda sketch on how one tx function can call another tx function. Is that possible, even?#2016-07-2900:37cezaris there a way to efficiently read through all datoms within a given partition? I'm thinking of using indexRange and the EAVT index. But don't know how to specify the lower and upper bound. Can i somehow find out what the lowest and highest entId is for a given partition?#2016-07-2900:52kenny@cezar: I'd question why you want to do something like that. This should do what you want though.
(d/datoms (d/filter db (fn [_ ^Datom datom]
                        (->> datom .e d/part (= part)))) :eavt)
#2016-07-2900:54cezarwhat you showed is a sequential scan over the whole database. If I have a 1000 customers and each in its own partition and I want to trawl through a reasonably large collection of datoms (say 10,000,000 for each customer) it will be way more efficient to hone in only on the section of the index where their specific data is located. Isn't that what partitions are for?#2016-07-2900:56kennyJust query the database?#2016-07-2900:57cezarqueries are very eager and I find that running d/q where the result is more than about 5,000,000 entities can easily blow up a Peer with OOM even on a Peer with Xmx4g#2016-07-2900:58cezarI prefer the lazy nature of d/datoms or indexRange or seekDatoms#2016-07-2900:58cezarfor my use case here#2016-07-2900:59bkamphausThe locality of reference is in :eavt, you could take-while with a pred on part - http://docs.datomic.com/clojure/#datomic.api/part — starting from the beginning of the partition with seek-datoms — http://docs.datomic.com/clojure/#datomic.api/seek-datoms#2016-07-2901:48cezarah, figured it out. looks like seekDatoms on EAVT is the way to do this#2016-07-2901:48cezarthanks @bkamphaus for pointing me in the right direction#2016-07-2901:49cezarI can simply look use entIdAt with Date 0 and Date now to get all entities over a given partition#2016-07-2902:21zentropeHow do you lookup a database function so that you can call it from within another database function?#2016-07-2902:24zentropeI ask because I find myself wanting to (mapcat (fn [x] [:my/fn db a b c]) values) which seems, odd? Hm. Maybe it’ll work.#2016-07-2903:20zentropeHa! It works! My tx function is huge with multiple functions, but I don’t think there’s much else I can do when updating a complicated “document-like” structure.#2016-07-2903:47jimmyhi guys how do I avoid transacted value being printed to *out* in repl ? I import quite big amount of data and it's annoying.#2016-07-2903:47zentrope@nxqd (do @(d/transact ….) 
:done) Something like that?#2016-07-2903:49jimmyyeah totally forgot T_T thanks#2016-07-2908:45val_waeselynck@bkamphaus: I'm having a doubt about how Datomic works: are the indexes persisted in storage are immutable trees of segments, or is this just the in-memory representation?#2016-07-2909:26danielstocktonin storage too#2016-07-2910:43robert-stuttafordi usually do (-> (d/transact ...) :db-after) or ( .. :tx-data count)#2016-07-2912:50marshall@val_waeselynck: it’s turtles all the way down.#2016-07-2912:51marshall@zentrope: Yes, you can call txn functions from other txn functions. As long as they all eventually resolve to valid txdata (i.e. list of datoms)#2016-07-2912:53marshall@cezar: Yep, as you found (and @bkamphaus indicated), the most significant bits of the entity ID encode the partition. Thus, the partition is contiguous in EAVT index.#2016-07-2913:07val_waeselynck> it’s turtles all the way down.
@marshall: sorry, I don't know what that means 😕 (not a native English Speaker)#2016-07-2913:08marshall@val_waeselynck: Sorry. Yes, it’s immutable segments everywhere.#2016-07-2913:08val_waeselynck@marshall: thanks! 🙂#2016-07-2913:09bkamphaushttps://en.m.wikipedia.org/wiki/Turtles_all_the_way_down#2016-07-2913:13dominicm@val_waeselynck: As an english speaker, I do not understand either.#2016-07-2917:34zentrope@marshall: Yes, it all makes sense if you think of a TX function as producing a data structure. Kinda like a macro.#2016-07-2919:25nandoI have a few questions regarding the console.#2016-07-2919:26nando1) Does it work with mem databases?#2016-07-2919:27nando2) From where do I get the dev-transactor.properties file?#2016-07-2919:33nando3) The example command to start the console here http://docs.datomic.com/console.html:
bin/console -p 8080 dev datomic:
Has 2 port designations 8080 and 4334. I get that the web app that is the console is then available on 8080, but what is running or available on 4334? Do I need to start a transactor in a local dev environment on 4334? I think I read somewhere that I don’t, but it’s not clear from the explanation on this web page.#2016-07-2919:34marshallin that example, the "datomic:<dev://localhost:4334/>“ is the URI of the database#2016-07-2919:34nandoWhere do I set that?#2016-07-2919:34marshallThere is an example dev-transactor.properties file in the Datomic Pro distribution under config/samples#2016-07-2919:35nandoNot in the most recent download ...#2016-07-2919:36marshalldo you have Datomic Pro Starter ?#2016-07-2919:36nandoOH, sorry. I found it#2016-07-2919:36nandoIt is in a samples subdirectory.#2016-07-2919:37marshallif you’re using free, there is a free-transactor-template.properties file in config/samples#2016-07-2919:40nandoThanks @marshall. When do I need to specify the transactor.properties file in a command? Do I need to start a transactor in a local dev environment?#2016-07-2919:44jaret@nando you want to specify the file when you launch your transactor. Yes, you will need to start a transactor in a local dev env.#2016-07-2919:45nandoThanks @jaret#2016-07-2919:46nandoIs there a guide somewhere I’m missing that explains how to set up a dev environment?#2016-07-2919:47jaret@nando: http://docs.datomic.com/run-transactor.html#2016-07-2919:47nandohttp://docs.datomic.com/console.html jumps into explaining how to start the console without mentioning that you need a transactor running#2016-07-2919:49jaret@nando The getting started section is where I went first. I didn’t make my way into the overview section until after the tutorial.#2016-07-2919:50nandoOk, I’m more clear what I need to do now. 
Thanks again.#2016-07-2919:53zentropeUsing the pull api: Is there a way to turn {:foo {:id “1234”}} to {:foo “1234”} without doing any post processing?#2016-07-2920:14uwoIf I want to use an aggregate result to track down another value on an entity, will I always need to use two queries?
(defn last-edited [conn eid]
  (d/q '[:find (max ?inst)
         :in $ ?e
         :where
         [?e _ _ ?tx _]
         [?tx :db/txInstant ?inst]]
       (d/db conn) eid))
;; (the following is not what I want because “Query variables not in aggregate expressions will group the results and appear intact in the result.”)
(defn last-edited [conn eid]
  (d/q '[:find (max ?inst) ?user
         :in $ ?e
         :where
         [?e _ _ ?tx _]
         [?tx :db/txInstant ?inst]
         [?tx :user/source ?user]]
       (d/db conn) eid))
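A two-step sketch of what the grouped query was aiming at: aggregate first, then look up the winning transaction. Untested; it relies on Datomic transaction entity ids increasing monotonically with time, and on the hypothetical :user/source attribute from the query above:

```clojure
;; Sketch: find the latest tx touching the entity, then read its
;; attributes in a second step. Assumes tx entity ids grow with time
;; and the :user/source attribute from the query above.
(defn last-edited-by [conn eid]
  (let [db (d/db conn)
        tx (ffirst (d/q '[:find (max ?tx)
                          :in $ ?e
                          :where [?e _ _ ?tx]]
                        db eid))]
    (when tx
      (d/pull db [:db/txInstant :user/source] tx))))
```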
#2016-07-2920:38uwoalternatively I could just query the entire history, sort by txInstant, yadda yadda#2016-07-2920:38bhagany@zentrope: no, pull will always do that for refs#2016-07-2920:40bhagany@uwo: I believe you can use a subquery for this, but I don't know the right syntax off the top of my head. I think that you can put a second (d/q …) in the place of a where clause, though.#2016-07-2920:40uwohmm! thanks I’ll look into that#2016-07-2921:27zentrope@bhagany: I was willing to be totally surprised that it was possible. ;) Fortunately, clojure.walk solves all problems. #2016-08-0114:47davinhi all. I’m having a bit of trouble with what I thought would be a simple create transaction, creating a post entity with an existing author:#2016-08-0114:47davin(defn createPost [id to-be-created]
  (def conn (d/connect (:db-uri env)))
  (let [post-body (:body to-be-created)
        post-name (:name to-be-created)]
    (d/transact
      conn
      [[:db/add (d/tempid :db.part/user)
        :post/public-id id
        :post/body post-body
        :post/name post-name
:post/author [:author/email “#2016-08-0114:48davinIs there something simple I’m missing? I’m new to datomic.#2016-08-0114:48davinthe :author/email is marked as identity#2016-08-0114:48yonatanel@davin: what's the error?#2016-08-0114:49davinYeah, that’s the strange thing. It isn’t throwing an error, but I don’t see a registered transaction in the console either#2016-08-0114:49davinI’m running everything locally with bin scripts#2016-08-0114:49stuartsierrad/transact returns a Future, you won't see an error unless you deref it.#2016-08-0114:50davin@stuartsierra: ok I’ll pipe to deref and see what comes back, thanks#2016-08-0114:50yonatanel@davin: aren't you mixing list form and map form of transaction?#2016-08-0114:51davin@yonatanel: I might be, I’m not familiar enough with it yet to know the difference 🙂#2016-08-0114:51marshall@davin: @yonatanel is correct. You look to be using the map form, but it’s in a list. You can replace the :db/add with :db/id and turn the whole thing into a map.#2016-08-0114:52marshallhttp://docs.datomic.com/transactions.html#transaction-structure#2016-08-0114:52yonatanel@davin: look here: http://docs.datomic.com/transactions.html#transaction-structure#2016-08-0114:52davinthanks both of you, I’ll have a look at the link#2016-08-0114:52marshallYou also want to be careful defining a connection inside a function that transacts#2016-08-0114:52davin🙂#2016-08-0114:53davin@marshall: I’m refreshing the conn everytime until I’m more familiar, what do you recommend?#2016-08-0114:55marshallManaging things like your connection is something people often use tools like Stuart Sierra’s Component for https://github.com/stuartsierra/component#2016-08-0114:57marshallIn general the connection is something shared across your program, so defining it in a single function is not ideal. 
Functions that transact data should probably take a connection as an argument, but also keep in mind you’ll want to be careful about that if all you’re doing is reading (http://www.rkn.io/2014/02/10/datomic-antipatterns-connnnn/)#2016-08-0114:57davinThanks @yonatanel @stuartsierra and @marshall, it was the list vs map form that was the problem. 🙂#2016-08-0114:58davinThanks @marshall I’ll read everything you linked to#2016-08-0114:58stuartsierraAlso, don't def inside another def — def is always global, regardless of where it appears.#2016-08-0114:58davinYes, the def inside did seem weird, I probably meant it to be a let#2016-08-0115:00davinWill the conn timeout at some point?#2016-08-0115:00davinif it is defined more globally?#2016-08-0115:01yonatanelIs there a datomic cookbook or something similar except the best practices section of the docs, with explanation of each solution and addressing performance, indexes, gotchas etc?#2016-08-0116:50timothypratleyIs it possible to write a retraction query, like 'retract all entities that have a :from attribute 1'?#2016-08-0116:56timothypratleyand/or if I have a schema like:
(def schema
  {:to   {:db/type :db.type/ref}
   :from {:db/type :db.type/ref}})
should it have been retracted automatically? (seeing it was a ref)#2016-08-0116:57timothypratleywhen I [:db.fn/retractEntity 1]#2016-08-0116:59timothypratley(I'm using DataScript, so maybe it is just a missing feature, hmmm I shall try it in Datomic proper)#2016-08-0118:11marshall@timothypratley: You might want to look at the retractEntity database function: http://docs.datomic.com/transactions.html#built-in-transaction-functions#2016-08-0118:19timothypratley@marshall right, rectractEntity appears to only work with a provided id or lookup ref, what I'm wondering is whether I need to do 2 steps: a) lookup all the entities, b) retract them... or if there is some way to in a single transaction identify and retract them together.#2016-08-0118:21timothypratleyIn this case I'm representing nodes and edges. When deleting a node, some edges become invalid and need to be removed. So I find the edges and remove them. But it seems like it should be possible to do it in a transaction, instead of as a lookup + transaction.#2016-08-0118:22marshallretractEntity will remove all references from and to a given entity
it does this by retracting all datoms with that entity ID in either the e or v position
if you need to retract more than one entity this way, then yes, you’ll need to create a list of them (presumably via query) and call retractEntity on each#2016-08-0118:23marshallif you want it all to occur in a single transaction, you can write a transaction function that does the lookup and calls retractEntity on all results of the lookup#2016-08-0118:25timothypratleyah so in the case above it removes the :from, but leaves the :to (which might point to 2) -- which is an invalid edge -- so I need to go with writing a transaction function, thanks.#2016-08-0121:25kennyI am getting java.lang.NoClassDefFoundError: Could not initialize class datomic.ddb_cluster__init when calling (d/create-database "datomic:). Is there something else I need to do besides adding com.amazonaws/aws-java-sdk-dynamodb to my project?#2016-08-0121:26kennyI am running the transactor locally and I am running create-database in a REPL.#2016-08-0121:30kennyAlso, I'm using datomic 0.9.5385#2016-08-0121:33kennyAlso probably relevant: I can create a test database using Datomic shell#2016-08-0121:36kennyHmm.. After restarting the REPL, the first time I run create-database I get java.lang.ClassNotFoundException: com.amazonaws.DnsResolver but every following time I run create-database I get the previous error, java.lang.NoClassDefFoundError: Could not initialize class datomic.ddb_cluster__init.#2016-08-0122:13kennyHmm.. just updated to 1.11.22 and it worked. You guys should update the docs.#2016-08-0122:14shinychHi all, how possible is it to run Datomic on DB2?#2016-08-0207:30yonatanel@shinych: from http://docs.datomic.com/storage.html#sql-database it seems that if it has a jdbc driver, you can use it, but this is software, so nothing is certain.#2016-08-0207:35shinychexactly 🙂 interested if anyone is using (has tried it) with DB2
we have kind of an enterprise hell here, and it is quite possible that DB2 would be the only option in production deployment#2016-08-0212:54marshall@shinych: you should be able to use DB2. any jdbc compliant SQL store. You’ll need to bring your own jdbc driver#2016-08-0212:55marshallyou’ll also have to adapt the sql provisioning scripts included with the datomic distribution for postgres#2016-08-0213:14yonatanelDoes the transactor serialize transactions per database or is it a global queue? Is it possible to manually "shard" the DB and have parallel writes that way (ignoring the underlying storage limitations since it's all on the same DB table)?#2016-08-0213:14marshallper database, but you should have only one DB of any substance per transactor#2016-08-0213:15yonatanel@marshall: because of the underlying storage provisioning?#2016-08-0213:18marshall@yonatanel: there are multiple reasons. storage throughput is one, but process isolation is another, particulars of how indexing behaves and is triggered is another#2016-08-0213:19yonatanelI see#2016-08-0213:22yonatanelSo when the license refers to 5 processes in http://www.datomic.com/pricing.html it means I can have 5 transactors?#2016-08-0213:30marshall@yonatanel: a Datomic system can only have a single transactor; the process count is the sum of your transactor and all your peers. you can have as many peers as you need/want on the system#2016-08-0213:30marshallmore details here: http://docs.datomic.com/architecture.html#2016-08-0213:31marshallso, a 5 process license will let you run a transactor and up to 4 peers#2016-08-0213:35yonatanelIn one of the Nubank talks they mentioned maybe sharding/partitioning on customers. I wonder how they're going to do that.#2016-08-0213:36marshallmultiple transactors. each peer can connect to multiple separate databases (and even query across them). they’ll simply need to handle traffic sharding upstream (i.e. 
at the load balancer or at the peer application)#2016-08-0213:41yonatanelWhich means multiple licenses or a special license I suppose.#2016-08-0213:43marshallin general, yes, if you’re considering needing multiple transactors we’d prefer you gave us a call so we can discuss your system and work out an appropriate licensing option#2016-08-0218:29kennyWhat is the recommended amount of disk space the transactor needs?#2016-08-0219:18bostonaholic@kenny: probably not a straight up answer, but this resource should help you http://docs.datomic.com/capacity.html#2016-08-0219:19bostonaholicyou will probably adjust your transactor.properties file several times to get it just right#2016-08-0219:20bostonaholicit’s a living system, not a “set it and forget it”#2016-08-0219:21kennyI saw that. I am talking about actual disk space, not RAM. That article has one small section about storage size but does not make any recommendations for the amount to allocate.#2016-08-0220:14bvulpestransactor doesn't need much disk space.#2016-08-0220:14bvulpeskennyjwilli: are you planning to run the transactor on a very storage-constrained vm or the like?#2016-08-0220:16kenny@bvulpes: No. Will be running as a marathon app in Mesos. Just wondering how much disk space I should allocate for the app.#2016-08-0220:28bvulpesi've run it handily with 8g on an aws t2 micro#2016-08-0220:32kenny@bvulpes: Ok. 
Will start with that then#2016-08-0221:33bvulpeskennyjwilli: it never came close to needing that, but if you can overprovision...#2016-08-0223:14zentropeI’m running datomic in a script (http://inlein.org): is there a way to suppress debug logging?#2016-08-0300:12zentropeI have an attribute [:foo/bar :ref :many], is there a datalog query that allows me to query for “foo” such that :foo/bar must have three specific refs?#2016-08-0300:36zentropeSeems I can query: find entities that have all the related entities in my param list, or any of the params, but not a subset of the params.#2016-08-0305:54zentropeTo clarify, I want to write a query like “find all the clubs in which “bill” “fred” and “sue” are members.#2016-08-0306:54yonatanel@zentrope: can you show the two queries you mentioned that are not quite what you want? "all related entities" and "any of the params"#2016-08-0306:55zentropeI ended up just dynamically generating the query.#2016-08-0306:56zentrope[[?e :class/students [:student/id “ted”]] [?e :class/students [:student/id “fred”]] something like that.#2016-08-0306:58zentrope(q [:find ?e :in $ [?students …] :where [?e :class/students ?students]] db [“ted” “fred”])#2016-08-0306:58zentropeThat gives me all the classes where either ted or fred is a student.#2016-08-0306:59zentropeAnd of course finding the classes where “ted” is a student is trivial. (That’s the “any one of” the params.)#2016-08-0307:00zentropeMy solution is to just take the vector of names and generate clauses for each one, thus creating the “and” condition. I sort them so that query caching has a chance.#2016-08-0307:17yonatanel@zentrope: just an idea, add a (count ?students) and :with ?students (not sure if you need that actually), and later keep only the results which has count equal to input students#2016-08-0312:35robert-stuttaford@zentrope assuming the need for exactly 3 is fixed:
(d/q '[:find ?foo :in $ [?bar1 ?bar2 ?bar3] :where
       [?foo :foo/bar ?bar1]
       [?foo :foo/bar ?bar2]
       [?foo :foo/bar ?bar3]]
     db ["one" "two" "three"])
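The dynamic-generation approach zentrope describes earlier ("take the vector of names and generate clauses for each one") can be sketched roughly as follows. This is an illustration, not zentrope's actual code; it assumes `(require '[datomic.api :as d])` and the `:class/students` / `:student/id` attributes used in the examples above:

```clojure
;; Generate one pair of :where clauses per required member, so only
;; classes containing ALL of the given students match.
(defn classes-with-all-students [db ids]
  (let [id-vars  (map #(symbol (str "?id" %)) (range (count ids)))
        stu-vars (map #(symbol (str "?stu" %)) (range (count ids)))
        query    {:find  '[?class]
                  :in    (into '[$] id-vars)
                  :where (vec (mapcat (fn [stu id]
                                        [[stu :student/id id]
                                         ['?class :class/students stu]])
                                      stu-vars id-vars))}]
    (apply d/q query db ids)))

;; (classes-with-all-students db ["ted" "fred" "sue"])
```

Because the generated query varies only with `(count ids)`, repeated calls with the same number of members still hit d/q's query cache, which addresses the caching concern raised later in this thread.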
#2016-08-0314:08robert-stuttafordanyone using https://www.terraform.io and Datomic together? @marshall, do you know if this has been done? i'm guessing it's fairly straightforward, if one uses your AMI#2016-08-0314:10marshall@robert-stuttaford: I’ve not heard of anyone using it. Is it supposed to take the place of e.g. CF?#2016-08-0314:10robert-stuttafordyeah#2016-08-0315:36fentonNoob question: How do people programmatically interact with datalog? I dont want to always hard code all my queries, I'd like to parameterize it somehow. Any blogs or suggestions. Do people use macros for this. I haven't got into macros myself yet fwiw.#2016-08-0315:37marshall@fenton: are you using Clojure?#2016-08-0315:37fentonyes#2016-08-0315:37marshallthe great thing about queries is they are just data#2016-08-0315:37marshallyou don’t have to use macros to generate them#2016-08-0315:38marshallthis might help: http://docs.datomic.com/query.html#building-queries-programmatically#2016-08-0315:39marshallyou can generate each element of a query as members of a map#2016-08-0315:39fentonok, let me read and I'll come back if not understood. thx! 🙂#2016-08-0315:39marshallsure#2016-08-0315:41marshallalso an example at the bottom of this: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/building_queries.clj#2016-08-0315:55fentonI guess what I'm wondering is could a datomic query be parameterized in a function like so: (defn generic-find [data] (d/q 'data (get-db)))#2016-08-0316:05fentonI'm a big confused about how to programmatically work with a quoted map.#2016-08-0316:05fenton*bit#2016-08-0316:15fentonI think i figured it out:#2016-08-0316:16fentoninstead of quoting all the datalog, just quoting the bits I didn't want evaluated.#2016-08-0316:47mlimotte@fenton: you can also use syntax quoting (`) and unquote (~) which can be more convenient:
`{:find [[?ent ...]] :where [[?ent :type ~type]]}#2016-08-0316:48fentonoh that looks nice!#2016-08-0316:48mlimottealso, you should have a namespace for the :type attribute#2016-08-0316:49fentoncan u give a short explanation why it needs to be namespaced?#2016-08-0316:49fentondoes it become like a global keyword or something otherwise?#2016-08-0316:50fentonit seems to work without the namespacing.#2016-08-0316:59fenton@mlimotte: f--king cool! omg! sorry had to share my enthusiasm about finally understanding the use of this stuff! 🙂#2016-08-0317:01mlimottewithout namespace, it's all global. the namespace helps to organize things. best practice.#2016-08-0317:01fenton@mlimotte: ok cool thanks.#2016-08-0317:01mlimotteyw#2016-08-0317:21robert-stuttaford@fenton, be aware that d/q does computation to prepare queries (its first arg) and caches that work using the query value itself as a key. so if you generate them dynamically, you could be generating unnecessary work for d/q. i recommend seeing if you can parameterise with :in, first.#2016-08-0317:24fenton@robert-stuttaford: ok. i had some where clause 'datoms' that were being used over and over, so I pulled those out to keep it DRY. I wonder if that will cause a cache miss. Hmmm... Wonder how to tell what gets cached.#2016-08-0318:49robert-stuttaford@fenton: datalog rules 🙂#2016-08-0318:49robert-stuttafordrules is a noun, there, not a verb 🙂#2016-08-0318:50robert-stuttafordhttp://docs.datomic.com/query.html#rules#2016-08-0318:50robert-stuttaford@fenton, here's a fairly concise example (using datascript) https://github.com/robert-stuttaford/stuttaford.me/blob/master/src/stuttaford/client/components/codex.cljs#L20-L52 see it in action http://www.stuttaford.me/codex view the source of the page to see the data#2016-08-0318:53robert-stuttafordnote the % in the :in clause#2016-08-0318:54fenton@robert-stuttaford: ok, let me read and try to grok that... 
looks promising.#2016-08-0318:55fenton@robert-stuttaford: so cool!#2016-08-0320:22marshallDatomic 0.9.5390 is now available https://groups.google.com/d/topic/datomic/QLdZ_WePR5A/discussion#2016-08-0321:21zentropeI bet a lot of folks will appreciate the use of the log API in the in-memory version of the DB.#2016-08-0321:29zentrope@robert-stuttaford: Thanks! The problem is that I don’t have a fixed size. Instead, I just generate the where-clauses dynamically. The end result looks just like your version, more or less.#2016-08-0403:23robert-stuttaford@marshall: https://twitter.com/mrmcc3/status/760950046330277888 🙂#2016-08-0403:26robert-stuttafordseems someone is using https://terraform.io with Datomic already#2016-08-0414:42marshall@robert-stuttaford: cool#2016-08-0421:51kennethkalmerI’m experiencing an issue with :db.fn/cas failing when trying to compare dates (or #inst) values#2016-08-0421:51kennethkalmerNot sure if this is related to using a with-db in the tests, or just not possible#2016-08-0421:51kennethkalmerI read the old value directly from the entity, so not like I’m experiencing drift or recalculating it#2016-08-0421:52kennethkalmerOr if it is because the entities are components...#2016-08-0423:01kennethkalmerSeems related to with-db, which makes kinda sense since I assume the transactor isn’t participating#2016-08-0423:04kennethkalmerThat said, does anyone have a workaround for applying transactions with :db.fn/cas when using a with-db?#2016-08-0423:04kennethkalmerSpecifically for testing#2016-08-0514:21anmonteiroprobably already in the works, but I just noticed that this needs to be updated following the most recent release
http://docs.datomic.com/clojure/#datomic.api/log#2016-08-0514:21anmonteiroi.e. the mem db now has log#2016-08-0518:12marshall@anmonteiro: Yep, it’s on the list to update. Thanks for mentioning!#2016-08-0609:46fossifoofrom googling around i got the impression that datomic is not really fit for use in a microservice environment because of the "process" based licensing. is this still the case? we could really use some ACID properties for our cassandra cluster 😕#2016-08-0609:48fossifooso far the "solution" has been to implement all the great stuff datomic provides in an ad-hoc manner as is custom, but i'd rather get our customer to maybe license a fairly big installation with about 30 microservice, currently 3 instances each. but paying all of them seperately is clearly insane#2016-08-0609:48atrochedo your microservices all need a direct connection to the database?#2016-08-0609:49fossifooi assume you mean running peers and transactors as seperate services and adding 1-2 hops?#2016-08-0609:50fossifooi guess for most of them this would probably be okayish, since a memcached installation would probably answer about as fast as the cassandra does now#2016-08-0609:51fossifooand we honestly don't write that much (don't ask... -_-)#2016-08-0609:52fossifooso is that actually a "valid" workaround to have your peer as a seperate microservice and talk over the REST api with datomic? 
that seems like "cheating" the license from the other side#2016-08-0609:52atrochehave you seen http://docs.datomic.com/rest.html ?#2016-08-0609:53atrochethat doesn’t exactly answer your question, but it’s a sign that having peers act as REST APIs isn’t outside what the creators intended#2016-08-0609:54atrocheAFAIK cognitect don’t mind you having as many clients as you want talking to your peers (in your example, the microservice(s) with direct connections to datomic)#2016-08-0609:54atrochebut definitely can’t speak for them#2016-08-0609:55fossifoowell, i guess we would need to just call/write and ask#2016-08-0609:56atrochebut if you already have 30 microservices that talk to a database, it might be a pain to rewrite them to get what they need from other microservices instead#2016-08-0610:08atrochein any case, good luck 🙂#2016-08-0610:09fossifoowell, i really hope i can get this through. cassandra lightweight transactions are a major pain to deal with#2016-08-0610:10fossifoobasically they tell you that you should almost never use them and if, they don't even guarantee that they will be consistent under contention... -_-#2016-08-0610:11fossifooso you are basically forced to write both a transactor and some "aggregator" anyway or do leader election and such yourself. totally not what i expected from a persistent storage 😕#2016-08-0612:40iwankaramazowA few weeks/months ago someone posted a link to a blogpost here on Slack, that showed how to implement Datomic or a datalog engine from scratch. I don't remember 100% anymore. Does anybody know the link to that blog?#2016-08-0612:46anmonteiro@iwankaramazow: https://aosabook.org/en/500L/an-archaeology-inspired-database.html#2016-08-0612:46anmonteirowould that be it?#2016-08-0612:47iwankaramazowYea that was it 😄#2016-08-0612:48iwankaramazowMuchas gracias#2016-08-0612:57marshall@fossifoo: I'd be happy to discuss options and approaches folks use for microservices with Datomic. 
Shoot me an email (marshall at http://cognitect.com) and we can schedule something. #2016-08-0612:58fossifoo@marshall: thanks. i'll discuss this again internally and get back to you in the next week#2016-08-0612:59marshallSounds good #2016-08-0612:59fossifoothe weekend is for using datomic for my private projects 😉#2016-08-0703:00podviaznikovquick question: is there terraform templates for setting up datomic on aws?#2016-08-0706:42robert-stuttaford@podviaznikov: @mrmcc3 has Datomic + Terraform running.#2016-08-0710:25mrmcc3I’ve got a terraform module which from my understanding sets up a more or less
identical system on AWS as the datomic scripts. It ended up being pretty succinct and readable.#2016-08-0710:26mrmcc3Only tedious bit was the transactor bootstrap/userdata script which i converted to a terraform template.
Because I essentially copied the userdata script produced by datomic scripts I wasn’t sure if I could share it publicly.#2016-08-0713:45codonnellis it possible to connect to a datomic database while the transactor is unreachable?#2016-08-0714:00yonatanel@codonnell: Not when using dev protocol#2016-08-0714:01codonnell@yonatanel: my database is on RDS; sql protocol#2016-08-0714:07yonatanel@codonnell: I honestly don't know. I just tried now with dev protocol. Even if you tried and it worked with RDS, I don't know what the nuances are, so better wait for an authoritative answer#2016-08-0714:09codonnell@yonatanel: appreciate the attempt#2016-08-0714:12yonatanel@codonnell: Not sure how helpful this is, but here it says "Transactor does not participate in queries. Peers with no transactor connection can still do reads from storage": http://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2016-08-0714:15codonnell@yonatanel: yeah, I think I heard that somewhere in the datomic tutorial videos. That page looks like a nice resource, though. I'll definitely look through it. If I try to connect to the database from a box that can't reach the transactor, I get an exception (reasonable behavior). I just want to know if there's a way to get my peer to ignore the transactor and set up a kind of "read-only" connection.#2016-08-0714:16robert-stuttaford@codonnell, @yonatanel: peers connect to storage first, which has the connection details for the primary and backup transactors. this is how failover is possible 🙂#2016-08-0714:16robert-stuttafordhave you tried connecting to storage with your transactor down, and issuing only queries?#2016-08-0714:17codonnell@robert-stuttaford: Yeah, I tried connecting to storage, and I got CompilerException clojure.lang.ExceptionInfo: Error communicating with HOST x.x.x.x on PORT 4334#2016-08-0714:17robert-stuttafordi don't think it'll work, but it's worth trying. 
when first connecting to a database, peers have to grab the latest in-memory live indexes, which the transactor has, but storage does not. this may prevent reads against 'now' dbs. you may be able to read from the past#2016-08-0714:19codonnellI didn't know that live indexes lived in the transactor; that's good to know.#2016-08-0714:20robert-stuttafordthe live indexes have to live there (and in all peers) because they haven't made it to storage yet. that's what indexing's job is: integrating all live index into storage#2016-08-0714:20robert-stuttafordalso memcached if you have one connected#2016-08-0714:27codonnell@yonatanel: this blog post is really fantastic; thanks for the link#2016-08-0714:34yonatanel@codonnell: yep. kinda makes you wanna write an open source clone ;)#2016-08-0714:39marshall@codonnell: You can’t start up a connection to a Datomic DB without access to the transactor. It is possible that under certain circumstances, loss of connectivity to the transactor will not prevent peers from running queries, but this is not guaranteed behavior, and it will depend on specifics of storage/config/deployment/etc#2016-08-0714:40robert-stuttafordcan confirm that queries continue on fine during transactor outages#2016-08-0714:40robert-stuttaford(using dynamodb)#2016-08-0714:41robert-stuttafordjust wasn't sure about starting new connections#2016-08-0714:41codonnell@marshall: Alright, thanks for the definitive answer. Given what @robert-stuttaford mentioned about live indexes living on the transactor and each live peer, it makes a lot of sense.#2016-08-0717:25pesterhazy@mrmcc3: any chance you could put that up as a gist or on github? that would be tremendously helpful#2016-08-0717:26pesterhazyI'm not too comfortable with the default CF-based set up where you can't ssh into the machines and tail logs in realtime#2016-08-0719:23podviaznikovdoes anyone have problem with getting datomic license via email? 
Registered yesterday, but didn’t get email yet.#2016-08-0804:50mrmcc3@pesterhazy, @robert-stuttaford, @podviaznikov: https://github.com/mrmcc3/tf_aws_datomic i’m sure there are many things that could be improved but I can at least confirm that this gives you a running transactor (with dynamo) and properly configured peers#2016-08-0806:40robert-stuttafordthank you @mrmcc3 !#2016-08-0807:45pesterhazywow, awesome 🎁 @mrmcc3#2016-08-0812:11yonatanelWhy is the semi sequential squuid better for indexing than a random uuid?#2016-08-0812:38robert-stuttafordindexing sorts datoms#2016-08-0812:38robert-stuttafordsequential uuids sort faster#2016-08-0812:39robert-stuttafordbecause most of the uniqueness is towards the end of the uuid value, which means that on average, it takes less bits of the uuid to determine a sort decision#2016-08-0812:39robert-stuttafordthat's my layperson understanding, anyway 🙂#2016-08-0813:09yonatanelI see. So there's no concept of a hash index that doesn't care about ranges, as in mongo?#2016-08-0814:28danielstocktondatomic's indexes are like b-trees, if things are sequential it means the tree needs much less re-balancing#2016-08-0814:39stuarthalloway@yonatanel: semisequential guids are a good idea in general, you never know what kinds of indexes your data might want to live in#2016-08-0814:42stuarthalloway@yonatanel: adapting indexing (http://blog.datomic.com/2014/03/datomic-adaptive-indexing.html) means that Datomic will index efficiently even if store purely random values#2016-08-0814:43stuarthallowayit is a principle of Datomic that you should get good indexes by default, without having to make indexing choices up front (that inevitably serve some jobs well and other jobs poorly)#2016-08-0814:44stuarthallowaythis is exactly the opposite of the “Model Around Your Queries” idea encouraged in e.g. Cassandra http://www.datastax.com/dev/blog/basic-rules-of-cassandra-data-modeling#2016-08-0815:30alexatiHi! 
I have a super-newbie question: After leiningen downloads the datomic peer library dependency correctly, when I (require ‘datomic.api) on my project, I get ClassNotFoundException clojure.lang.Tuple java.net.URLClassLoader.findClass (URLClassLoader.java:381). What’s happening? I’m stuck… thanks in advance!#2016-08-0815:34marshall@alexati: I’m guessing you’re using an incompatible version of Clojure#2016-08-0815:34marshallthe latest release of Datomic requires Clojure 1.8#2016-08-0815:36marshall@alexati: Alternatively it may be some other incompatible dep. You can run lein deps :tree to find what may be conflicting#2016-08-0815:38alexmillerthat has to be the Clojure dep#2016-08-0815:38alexati@marshall Thanks a lot, I completely overlooked that - I was using clojure 1.7#2016-08-0815:39marshallSure. I may have found that one myself…. once or twice 🙂#2016-08-0815:39alexatierror messages are the best part of clojure 🙂#2016-08-0816:16yonatanel@stuarthalloway: Can I just use the squuid function when I need to generate an ID, even without datomic? e.g when I know I will need to store it later#2016-08-0816:16stuarthallowayyes, and the source code is like ~10 lines if you don’t want a dep#2016-08-0816:17stuarthallowayIt is in the Clojure cookbook https://github.com/clojure-cookbook/clojure-cookbook/blob/master/01_primitive-data/1-24_uuids.asciidoc#2016-08-0816:41yonatanelbtw, when there's partitioning involved, isn't it recommended to have the most significant part of the id all over the place, as is recommended with amazon s3 keys? (and of course ranges are not needed)#2016-08-0817:22stuarthalloway@yonatanel: depends on the partitioning scheme, but yes that can be a concern#2016-08-0817:59severed-infinityhey guys just getting into using datomic, right now I am working on just a simple registered user system. testing the add and retract feature but I’ve notice I had to re-evaluate the db after each add/retract to see the effect take place. 
Am I missing something?#2016-08-0818:13jaret@severed-infinity: I imagine you are querying against the same database value. In Datomic the database is a value and database values are immutable. So whatever item you are adding and retracting has another value (new) that you need to use. Each transaction in Datomic adds a new database value and all of the old values are still present, but if you want the most recent value you will need to retrieve it first.#2016-08-0818:14severed-infinityso I assume then what I’d want is some function (say add function) that gets the db in let binding?#2016-08-0818:14jaretFor a much better and more detailed explanation: https://channel9.msdn.com/posts/Rich-Hickey-The-Database-as-a-Value#2016-08-0818:14jaret@severed-infinity: sure, that is one way of doing it.#2016-08-0819:56rwtnorton@severed-infinity: There is also :db-after you could use (without having to make another roundtrip): http://docs.datomic.com/clojure/index.html#datomic.api/transact#2016-08-0820:06severed-infinityah never thought of that, using that approach then would it be best to create the db as an atom?#2016-08-0821:54val_waeselynck@severed-infinity an atom holding the db value wouldn't give you persistence, so it's only useful for speculative work. Semantically a Connection is already a kind of reference holding db values #2016-08-0822:09severed-infinity@val_waeselynck: ah okay, currently I am using the approach I mention earlier of using it in a let
context but are there any good code examples or best practices code examples in regards to handling the db change over time?#2016-08-0822:10val_waeselyncktransacting and obtaining :db-after is the way to go, but you usually don't need to do it more than once #2016-08-0822:11val_waeselynckif you're adding several things, just make a batch transaction that adds them all at once#2016-08-0822:17sdegutisIs there a way to see the tx-data of a given transaction in the database history?#2016-08-0822:20sdegutisSorry, I meant tx-data. Fixed the typo.#2016-08-0822:21marshall@sdegutis: yes, you can use the log api.#2016-08-0822:21sdegutisAh right, thanks.#2016-08-0822:21marshallhttp://docs.datomic.com/log.html#log-in-query#2016-08-0822:22sdegutisAhh, that's gonna suck. Having to parse through datoms.#2016-08-0822:23marshallYou can use it in query. Use the tx-data function to bind e, a, v, and op#2016-08-0822:23sdegutisThanks again.#2016-08-0822:25severed-infinity(def db (d/db conn))
…some transaction…
(set! db (:db-after transaction))
would this be the suggested approach?#2016-08-0822:26sdegutis@severed-infinity: that's usually not necessary since Datomic stores the most recent transaction's :db-after along with its connection, so all subsequent calls to (d/db conn) will return it.#2016-08-0822:26marshall(d/q '[:find ?e ?a ?v ?tx ?op
:in ?log ?tx
:where [(tx-data ?log ?tx)[[?e ?a ?v _ ?op]]]]
(d/log conn) <your-tx-id>)
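Besides querying the log, the Log API can also be read directly with `tx-range`, which yields each transaction's `:t` and its raw datoms under `:data`. A sketch, assuming an open connection `conn` (the `<your-tx-id>` placeholder mirrors the snippet above):

```clojure
(require '[datomic.api :as d])

;; Read a single transaction's datoms straight off the log, without
;; going through query. <your-tx-id> is a placeholder for a real tx id.
(let [log (d/log conn)
      tx  <your-tx-id>]
  (:data (first (d/tx-range log tx (inc tx)))))
```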
#2016-08-0822:26sdegutis@severed-infinity: The only time you ever need to access :db-after directly is when you need to make sure you have a consistent view of the database, e.g. to avoid race conditions.#2016-08-0822:27marshallHrm. Not sure how to code block on mobile. Anyway that query will get all the datoms for a given tx#2016-08-0822:27sdegutis@marshall: It looks formatted like a code block to me (on desktop).#2016-08-0822:27marshallOh. Go me!#2016-08-0822:27sdegutisThanks btw.#2016-08-0822:28marshallSure. And of course you can add clauses as normal to do whatever else you want. Remember to pass a db in addition to the log if you want to do any 'normal' querying additionally#2016-08-0822:30marshall@severed-infinity: you might want to look at Component (https://github.com/stuartsierra/component) for managing your db state throughout an application #2016-08-0822:31marshallIn very general terms, functions that only query should take a db as an argument and functions that transact should take a connection#2016-08-0822:33marshallAlso relevant: http://docs.datomic.com/best-practices.html#consistent-db-value-for-unit-of-work#2016-08-0822:34severed-infinity@sdegutis: ah okay, its just that after adding a new user to the db and looking them up using the following query with id in this case representing an id in my system and db being the same as I declared above just simply returns an empty set unless I re-evaluate db before the call.
(d/q '[:find ?e :where [?e :user/id id]] db)
In actuality I use this function specially
(defn lookup-user [id]
  (let [db (d/db conn)
        result (d/q '[:find ?e
                      :in $ ?id
                      :where [?e :user/id ?id]]
                    db id)]
    result))
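A minimal sketch of the `:db-after` approach being recommended in this exchange, reusing the `:user/id` attribute from the snippet above (the tempid/partition details are illustrative):

```clojure
;; The :db-after value returned by the transaction is guaranteed to
;; contain the datoms just transacted, so querying it avoids the
;; stale-db problem discussed here.
(let [tx-result @(d/transact conn [{:db/id   (d/tempid :db.part/user)
                                    :user/id "ted"}])
      db        (:db-after tx-result)]
  (d/q '[:find ?e :in $ ?id :where [?e :user/id ?id]]
       db "ted"))
```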
#2016-08-0822:34severed-infinitybut that db bound in the let form still returns an empty set upon querying#2016-08-0822:35sdegutis@severed-infinity: Right, like I said, if you want to see the value of your DB after your transaction, either use :db-after or (d/db conn).#2016-08-0822:37severed-infinitygotcha#2016-08-0823:12sdegutisHow would you go about "reversing" a transaction?#2016-08-0823:13marshallQuery for the datoms in the txn (as above), reverse the op position of all datoms in the txn (other than the txInstant one) and re-transact#2016-08-0823:16marshallTim Ewald gave a really nice talk on Reified Transactions last year at conj: http://www.datomic.com/videos.html#2016-08-0823:19sdegutisAh, nice.#2016-08-0900:07podviaznikovhit a problem with aws deployment. I have a micro ec2 instance with a clojure app that uses datomic. I’m getting the following error:
got an error java.lang.IllegalArgumentException: :db.error/not-enough-memory (datomic.objectCacheMax + datomic.memoryIndexMax) exceeds 75% of JVM RAM
Is this problem with machine where my app runs or machine where transactor runs?#2016-08-0900:08marshallAre you running both peer and transactor on the same instance?#2016-08-0900:10bvulpespodviaznikov: naively provisioned t2.micro will struggle to run both a transactor and a clojure app for precisely reasons of RAM allocation. using swap to increase RAM can help.#2016-08-0900:11marshallIf so, you'll definitely want to run them on separate instances. Regardless, that error indicates that the system you're running on has too little memory to run with your configured settings.
You can try reducing the object cache size in your object cache max http://docs.datomic.com/capacity.html#peer-memory#2016-08-0900:12podviaznikovsorry, I have two instances. transactor on m3.medium and my app on t2.micro. I’m getting error above in my clojure app. What I’m not sure about whether problem with datomic instance or my app instance#2016-08-0900:12podviaznikovthanks for the System.setProperty("datomic.objectCacheMax", "256m"); tip. Trying now#2016-08-0900:12marshallAnd @bvulpes is correct. A micro is pretty small for a datomic app, although not impossible#2016-08-0900:13marshallSpecifically regarding the error you saw, the sum of object cache max and memory index max (set in transactor properties file) cannot exceed 75% of JVM heap#2016-08-0900:14bvulpesbezos' artificial ram constraints are easily gotten around with an SSD drive and swap space.#2016-08-0900:14podviaznikovyeah, I’m assuming that is from my app. I was confused because I google that error and some people were complaining about transactor. So I wanted to double check#2016-08-0900:15podviaznikovI’d like to change instance type, but now lost how to do that. I’ve launched instance using amazon container service and there is no easy way to change instance type. Trying to figure that out#2016-08-0900:15marshallIt's affected by transactor memory settings because the peer will always use the same memory index values as the transactor#2016-08-0900:16marshall@bvulpes clever. I like it:)#2016-08-0900:19bvulpesheh i can't take much credit for it but ty#2016-08-0900:21marshallNot sure if the JVM will let you provision a larger than ram heap or not. I'll have to try it tomorrow #2016-08-0900:25marshallI would worry about GC doing that though#2016-08-0900:47podviaznikovIt would be good if “java.lang.IllegalArgumentException: :db.error/not-enough-memory (datomic.objectCacheMax + datomic.memoryIndexMax) exceeds 75% of JVM RAM” error included current JVM RAM limit. Hard to tell now what is missing. 
I changed instance type to t2.small with 2GB of RAM and my single docker container has limit of 1.8GB. That should be enough right to connect to empty datomic database?#2016-08-0900:49bvulpespodviaznikov: usually one provides heap size explicitly when booting a jar#2016-08-0900:49bvulpeshow are you booting this app?#2016-08-0900:50bvulpesyrva << rot13'd guess#2016-08-0900:50podviaznikovyeah, I think I’m missing that parameter. My last limes of the Dockerfile:
RUN mv "$(lein uberjar | sed -n 's/^Created \(.*standalone\.jar\)/\1/p')" api.jar
EXPOSE 3000
CMD ["java", "-Dfile.encoding=UTF-8", "-jar", "api.jar"]
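Applying the -Xmx advice from this thread, one hedged sketch of the final CMD line (the 1500m figure follows marshall's suggestion for an instance with ~2 GB of RAM; adjust for your container's limit):

```dockerfile
# Explicit heap ceiling so the peer's
# (datomic.objectCacheMax + datomic.memoryIndexMax) <= 75% check is
# measured against a known heap size rather than a JVM default.
CMD ["java", "-Xmx1500m", "-Dfile.encoding=UTF-8", "-jar", "api.jar"]
```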
#2016-08-0900:51bvulpesyeah xmx and xms are the flags i think you want#2016-08-0900:52bvulpes"-Xmx 1g" for instance#2016-08-0900:52bvulpes(re: rot13, i was wrong :P)#2016-08-0900:53podviaznikovthanks! trying#2016-08-0900:53bvulpeser#2016-08-0900:53bvulpesno space#2016-08-0900:53bvulpespodviaznikov: the flag is "-Xmx1g"#2016-08-0900:53podviaznikovyeah, I did add it without a space#2016-08-0901:01marshallIf you have 1.8gb, I'd probably try to get 1.5g heap#2016-08-0901:03marshallI.e. -Xmx1500M#2016-08-0901:06podviaznikovI don’t have that error anymore, but now a new problem:
Aug 09, 2016 1:01:46 AM org.hornetq.core.client.impl.ClientConsumerImpl$Runner run
ERROR: HQ214000: Failed to call onMessage
java.io.EOFException
at org.fressian.impl.RawInput.readRawByte(RawInput.java:40)
at org.fressian.FressianReader.readNextCode(FressianReader.java:927)
at org.fressian.FressianReader.readObject(FressianReader.java:274)
at datomic.hornet$create_deserializer$fn__6615.invoke(hornet.clj:353)
at datomic.hornet$create_rpc_client$fn__6648.invoke(hornet.clj:412)
at datomic.hornet$set_handler$reify__6602.onMessage(hornet.clj:288)
at org.hornetq.core.client.impl.ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1116)
at org.hornetq.core.client.impl.ClientConsumerImpl.access$500(ClientConsumerImpl.java:56)
at org.hornetq.core.client.impl.ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1251)
at org.hornetq.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:104)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Happens when I call (d/connect data/uri)#2016-08-0901:09marshallDoes your peer instance have the correct role etc to reach the transactor?#2016-08-0901:12podviaznikovI can’t reach the transactor from my laptop in the shell. I enabled ingress for all addresses. I assume I don’t need to do anything specific to my peer instance in that case, right?#2016-08-0901:15marshallIf you're on aws you'll need to set up IAM roles#2016-08-0901:15marshallhttp://docs.datomic.com/aws.html#2016-08-0901:16marshallPerhaps not if you've enabled access from anywhere. Best way to #2016-08-0901:17marshallCheck is to try accessing the transactor instance from the peer instance#2016-08-0901:18marshallhttp://docs.datomic.com/deployment.html#peer-fails-to-connect#2016-08-0901:19marshallYou'll want to make sure you can reach the transactor from the peer using either host or alt-host, as specified in the transactor properties file #2016-08-0901:34podviaznikovthe last link suggests checking out the transactor logs. I didn’t configure saving logs initially. Is there a tutorial on how to update the configuration?#2016-08-0901:37marshallTransactor logs default to the log directory in the datomic distribution. http://docs.datomic.com/configuring-logging.html
Peer logs need to be enabled#2016-08-0901:41podviaznikovright, but now it’s unclear how to ssh into the ec2 instance with the transactor. A KeyPair wasn’t added when the instance was first created. Is there a way to specify a keypair when creating the cloudformation template for the datomic transactor?#2016-08-0901:42marshallNot if you're using the provided AMI. You'd need to set up log rotation. Were you able to test host and alt-host from the peer machine?#2016-08-0901:43marshallIf neither of those resolve to the transactor machine that is the issue.#2016-08-0901:46podviaznikovhow do I test the host and alt-host from the peer machine? It wasn’t clear to me how to do that based on http://docs.datomic.com/deployment.html#peer-fails-to-connect#2016-08-0901:48marshallIf you can ssh to the peer machine, try pinging the host and alt-host from it#2016-08-0901:49marshallMake sure at least one of those resolves to the transactor machine#2016-08-0901:49marshallIf the peer is in a container you should test from within the container#2016-08-0901:53podviaznikovand by host you mean the public DNS for the ec2 instance? Like “http://ec2-54-152-44-27.compute-1.amazonaws.com”?#2016-08-0901:53podviaznikovwhat is alt-host?#2016-08-0901:54marshallNo, host and alt-host are specified in the transactor properties file #2016-08-0901:54marshallThey are the values the transactor writes to storage that the peer uses to locate and connect to the live transactor#2016-08-0901:57podviaznikovthose on top of the file protocol=ddb
host=localhost
port=4334
?#2016-08-0901:57marshallAre you running in a vpc? If so, are both transactor and peer in the same vpc?#2016-08-0901:58marshallRight, what is the alt-host value?#2016-08-0901:58podviaznikovthere is no alt-host value in the ensured-transactor.properties file#2016-08-0902:00marshallYou might want to review http://docs.datomic.com/storage.html#provisioning-dynamo
The host should be specified by the scripts, but there are manual configuration directions below if necessary#2016-08-0902:02podviaznikovI followed this video today: https://www.youtube.com/watch?v=wG5grJP3jKY Is this still an up-to-date resource on how to set up datomic in aws?#2016-08-0902:02podviaznikovalso you are correct, my app and datomic are in different vpcs#2016-08-0902:02podviaznikovI assume that is the main issue now#2016-08-0902:06marshallThe video may not be up to date. I'd highly recommend following the steps in the docs here http://docs.datomic.com/storage.html#provisioning-dynamo
They're definitely the most up to date #2016-08-0902:08podviaznikovoh, I see#2016-08-0902:09podviaznikovI see now that docs mention setting up host
host=<FINAL-HOST-NAME>
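Putting the properties discussed in this thread together, a sketch of the relevant transactor.properties entries (both hostnames below are placeholders, not real values): `host` is the address the transactor binds to, and `alt-host` is an additional address written to storage for peers that cannot resolve `host`, e.g. a public DNS name.

```properties
protocol=ddb
host=ip-10-0-0-12.ec2.internal
alt-host=ec2-xx-xx-xx-xx.compute-1.amazonaws.com
port=4334
```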
#2016-08-0902:11codonnellFWIW, I had no trouble setting up dynamo with the linked instructions a few weeks ago#2016-08-0902:11codonnellso I don't think anything is out of date#2016-08-0902:17marshall@podviaznikov: the ensure scripts should handle all of that. Did you run ensure-transactor as described?#2016-08-0902:23marshallI just reviewed the video you linked. I believe it is up to date. You'll probably want to resolve the vpc thing #2016-08-0902:24marshallAlso, if you did use the provided cloud formation template your transactor should be rotating logs to s3 of you want to check them for the values of host and alt-host. They should be printed to the logs on syartup#2016-08-0902:24marshallStartup#2016-08-0902:45podviaznikovyeah, but video doesn’t mention changing “host” property, right? Or I didn’t pay attention. Also I don’t think I’ll have s3 logs. I have # aws-s3-log-bucket-id= commented it out. It was like that by default#2016-08-0902:59marshallAh. Yeah, you'll need to uncomment that to get logs. Correct, you don't need to manually specify host if you use the provided scripts#2016-08-0903:01marshallThe test he shows in the video of connecting from his laptop to the local transactor then to the remote transactor is a good one. It helps you narrow down the issue to the peer vs the transactor config#2016-08-0903:02marshallIf you ran that test successfully, then it's most likely the vpc issue and/or IAM config issue for the peer instance. The ensure scripts should have created a datomic peer role. You need to grant it to the peer instance#2016-08-0903:05podviaznikovyeah, I run that test both from shell and running my clojure app locally. I was able to connect to datomic on aws. I already fixed issue with datomic peer role and now fixing issue with VPC. Hopefully that will work#2016-08-0903:09marshallSounds like you're on the right track then. If you can connect from your local machine with your app it's definitely an issue with the peer instance. 
Sorry for the meandering to get there :)#2016-08-0905:20bvulpesfwiw i vastly prefer 'manually' provisioning datomic into my clusters.#2016-08-0905:20bvulpespodviaznikov: ^^#2016-08-0905:21bvulpesbut i also don't use spacemacs#2016-08-0908:32mrmcc3:address/city-id -1000001 — should that be :address/city-id #db/id[:db.part/user -1000001]?#2016-08-0909:04tengNow it works. Needed to change ":person/address-id -1000002" also.#2016-08-0909:04tengThanks!#2016-08-0909:07mrmcc3👍#2016-08-0911:37yonatanelIs there a single-page infographic for "managers", emphasizing datomic advantages and presenting it as less scary, or comparing it to sql, for instance?#2016-08-0913:45stuartsierra@yonatanel: maybe http://www.datomic.com/benefits.html#2016-08-0913:54yonatanelThanks, stuart. I saw it but I guess I need something a little more technical and schematic.#2016-08-0914:07stuartsierraThen http://www.datomic.com/rationale.html perhaps#2016-08-0914:07stuartsierra(Both are links from the http://datomic.com homepage)#2016-08-0914:45yonatanelDo I need to call shutdown when developing in the REPL? Currently I'm using Component and just set :connection to nil on Lifecycle/stop.#2016-08-0914:47robert-stuttafordno#2016-08-0914:50bostonaholicwhat is the benefit of using datomic.Util/readAll https://github.com/Datomic/day-of-datomic/blob/053b3bd983d165b8fa7c0c039712fb1cb75eddf3/src/datomic/samples/io.clj#L18#2016-08-0914:54bostonaholicI’ve always just used (read-string (slurp (io/resource f)))#2016-08-0914:55marshallhttp://docs.datomic.com/javadoc/datomic/Util.html#readAll-java.io.Reader-
readAll will read multiple items (i.e. multiple transactions) from an edn file and return a list#2016-08-0914:55marshallthe Day of Datomic repo uses it to get multiple transactions out of a file#2016-08-0915:06bostonaholicah ok, I guess usually what I’m reading in (a small schema) is one tx#2016-08-0916:29sdegutisDo medium-complexity queries basically have the same performance profile as starting with an entity and navigating it via entity relationship pseudo-keys (such as :foo/bar and :bar/_quux, etc.)?#2016-08-0917:25podviaznikovStill having problems talking from my clojure app to the new datomic on aws. This time both instances are in the same VPC and I’m getting the error clojure.lang.ExceptionInfo: Transactor request timed out {:db/error :peer/request-timed-out, :request :start-database, :result #object[java.lang.Object 0x697bf0e8 ".
Also, side question: what is the recommended way to deal with the situation where I want to change e.g. ensured-transactor.properties? Should I regenerate cf.json and then update the cf stack with it?#2016-08-0919:47bvulpespodviaznikov: can you telnet to the transactor ip/port from your app instance?#2016-08-0919:53podviaznikovtelnet 4334
Trying 172.31.23.102...
Connected to .
Escape character is '^]'.
Connection closed by foreign host.
#2016-08-0919:54podviaznikovis this correct? port 4334, right?#2016-08-0920:02jgdavey@podviaznikov: You may be missing an entry in your transactor’s security group setting.#2016-08-0920:02podviaznikovI have automatically created sg for datomic instance:
22 tcp 0.0.0.0/0 ✔
4334 tcp 0.0.0.0/0, sg-d34eb2a9 ✔
#2016-08-0920:03podviaznikovsomething is missing there?#2016-08-0920:05jgdaveyNope that seems legit#2016-08-0920:21colindresjJust getting started with Datomic and trying to wrap my head around limiting access to particular data when querying. As a hypothetical, given a person entity and a company entity, how would I get the full collection of company members (person entity ref) only if the given person entity is contained within that collection of company members, otherwise return an empty collection?#2016-08-0921:03podviaznikovI see this line in the docs: 'The transactor writes the value of the host transactor property in storage’. How can I see what are the host and alt-host values? are they somewhere in the dynamo db now? I see dynamo db key pod-coord, is it the place?#2016-08-0921:30bvulpespodviaznikov: i'm going to gently suggest doing your own devops instead of wrestling the cloudformation. this'd be trivial for you to run down if you had access to the transactor.properties on disk.#2016-08-0922:26atroche@colindresj: how about https://gist.github.com/atroche/a8e731202cdc01d7e0bbe0c4102704b9 ?
i’m making some assumptions about your data model, but hopefully that helps.#2016-08-0922:28atrochesdegutis: i’d say it depends on factors like a) how many entities are you doing the navigation on? b) how much of that data is cached on the peer already?#2016-08-0922:29atrochedo you have a specific example you’re wondering about?#2016-08-0922:34atrochesdegutis: i’ve been running datomic in production for ~9 months on https://www.bookwell.com.au/ and my approach has been to “make it nice, then make it fast”. and it’s super rare that i have to do the second step 🙂#2016-08-0923:03podviaznikov@bvulpes: setting up infrastructure manually sounds like a good idea. But I’m not sure I’d be able to do everything correctly since I don’t know what is the problem now. Datomic error messages and documentation for troubleshooting is not excellent for sure#2016-08-0923:04bvulpeshey now buddy nobody said "manually"#2016-08-0923:05podviaznikovhow to do that without cloudformation?#2016-08-0923:19bvulpesi use https://aws.amazon.com/sdk-for-java/#2016-08-0923:20bvulpespodviaznikov: ^^#2016-08-1000:27podviaznikovI have those logs in transactor:
2016-08-09 23:43:59.604 WARN default org.hornetq.core.client - HQ212040: Timed out waiting for netty ssl close future to complete
2016-08-09 23:44:00.573 WARN default org.hornetq.core.server - HQ222190: Disallowing use of vulnerable protocol: SSLv2Hello. See for more details.
2016-08-09 23:44:00.573 WARN default org.hornetq.core.server - HQ222190: Disallowing use of vulnerable protocol: SSLv3. See for more details.
Those are just warnings, right? I assume I can ignore those#2016-08-1013:20yonatanelIs there any advantage other than convenience to having :where clauses that don't use any index? Maybe caching of those extra filters?#2016-08-1013:26jimmyrcomDoes anyone know what would cause this timeout to show up in the logs when transacting files of a few MB in size: "PoolingHttpClientConnectionManager - Closing connections idle longer than 60 SECONDS"#2016-08-1013:45hansjimmyrcom: datomic is not really good at storing large amounts of data in single transactions. transactions with at most a few hundred datoms, a few kilobytes per datom, is where its sweet spot sits.#2016-08-1013:46jimmyrcomThanks Hans, so the number of items per transaction could trigger this?#2016-08-1013:47hansin the end, it is the overall size of the transaction that matters. if you put too many datoms into one transaction, indexing can have trouble keeping up. if the datoms are too large, datomic's assumptions regarding segment sizes become invalid, making it less efficient.#2016-08-1013:48hansalso remember that if you have large transactions, you're blocking out other writers for the duration of the transaction. you mentioned "a few megabytes", and that is a lot of data to be committed in one transaction.#2016-08-1013:48hansthe general advice is: make your transactions smaller, and store blobs somewhere else (e.g. directly in the backing store without using datomic for it).#2016-08-1013:49jimmyrcomThanks for the advice Hans#2016-08-1014:33colindresj@atroche, in your example is that assuming I’ve already queried for a company entity and a person entity? I’d like to be able to solve my case within the query alone. For more background, a company has a members attribute which is a cardinality-many ref. I managed to have some success doing something like this:
[:find [?name ...]
:in $ % ?p-name ?co-name
:where [?p :person/name ?p-name]
[?c :company/name ?co-name]
[?pc :company/members ?p]
[(= ?pc ?c)]
[?c :company/members ?m]
[?m :person/name ?name]]
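A hedged sketch of a simpler but equivalent form of the query above (untested; same hypothetical attribute names): binding the company once as ?c makes the separate ?pc variable and the [(= ?pc ?c)] clause unnecessary.

```clojure
;; Sketch only: same intent as the query above, with ?c bound once.
[:find [?name ...]
 :in $ % ?p-name ?co-name        ; % kept to match the original's inputs
 :where [?p :person/name ?p-name]
        [?c :company/name ?co-name]
        [?c :company/members ?p] ; ?p must be a member of ?c
        [?c :company/members ?m] ; bind every member of ?c
        [?m :person/name ?name]]
```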
Something in my head is telling me, however, that I should be working with contains?#2016-08-1014:56yonatanel@colindresj: I don't think you need ?pc. If you use ?c instead you can drop the equality check.#2016-08-1014:56robert-stuttaford@marshall: great article on the blog. am i correct that queries on d/log do not interact with the peer query cache mechanism? or do they indeed cache as well?#2016-08-1014:59marshallThe log is a separate index, so the segments retrieved via log access are different than those retrieved when you access one of the other indexes (i.e. AVET, EAVT, etc.)
If you have a query that uses both the Log API (via helper functions) and other datalog clauses, the query engine will still use the other indexes as appropriate to satisfy the query, and those will be accessed the ‘regular’ way (i.e. with caching)#2016-08-1014:59robert-stuttafordright. so queries that only work with d/log are, in essence, not cacheable#2016-08-1015:00robert-stuttaforde.g. if i used d/log with a filter on the datom :a values and reverse to make an activity stream view, that'd be bad from a performance perspective, because no caching happens on the log segments in the peer library#2016-08-1015:03marshallLog segments are cached.
http://docs.datomic.com/caching.html#object-cache
Whether or not certain segments are in cache at a given time is, of course, dependent on usage#2016-08-1015:07robert-stuttafordok, awesome!!#2016-08-1015:07robert-stuttafordfor some reason i had this idea that only the covering indices were cacheable, and something told me to double-check#2016-08-1015:07marshallwell, the log is a covering index 😉#2016-08-1015:07robert-stuttafordyaknowwhatimean 🙂#2016-08-1015:08marshallyep#2016-08-1015:08robert-stuttafordeavt avet aevt vaet#2016-08-1015:08robert-stuttafordrather than .... teav?#2016-08-1015:08marshallor at least t___#2016-08-1015:08marshalli’d have to check, but i don’t think the log provides any ordering within a transaction#2016-08-1015:09robert-stuttafordi know tx datoms come first#2016-08-1015:09robert-stuttafordwhich is contrary to storage indexes, i think#2016-08-1015:27bhaganythis is excellent news, I also thought d/log didn't cache#2016-08-1017:08robert-stuttafordi know, right#2016-08-1017:08robert-stuttafordtotally changes my perception of it, actually#2016-08-1018:15kschraderis the easiest (only?) way to allocate more memory to peers just to set -Xmx8g -Xms8g (for example) from the command line?#2016-08-1018:17kschraderwhich will then allocate 50% of that to the object cache?#2016-08-1018:41jgdaveyIt depends on what you want. 50% of the JVM’s max heap is a good start, but for your particular needs, you might be able to change that to something else. You can set it to a custom value (in bytes) with the datomic.objectCacheMax java property.#2016-08-1018:42jgdavey(see http://docs.datomic.com/system-properties.html)#2016-08-1018:43jgdaveyBut for the JVM instance itself -Xmx would need to be high enough as well#2016-08-1018:51marshallYour system will throw an exception if you request an objectCacheMax greater than 75% of your JVM heap#2016-08-1019:32timgilbertHey, quick question: given a datomic connection object, is there a quick way to get the URI it connects to? 
I scanned the Java API for a .getUri() method or similar but didn't see anything#2016-08-1019:41timgilbertI just want to use it for logging in the case of a failed connection#2016-08-1019:41bhaganyI don't see anything relevant upon reflecting the connection either, but I agree this would be useful#2016-08-1108:31yonatanel@stuarthalloway: I know this is an old answer, but what did you mean by "For attributes that are marked :db/index true, it would be possible for the query engine to use predicates to avoid streaming all values of an attribute. That is planned, but not yet implemented."? Isn't that what the index is for? From https://groups.google.com/forum/#!msg/datomic/IxyFQnodrF0/in1vewXEzs8J.
And here's another example where it comes up: https://groups.google.com/forum/#!msg/datomic/fsowOBbhvXU/dWHpj7JiDdQJ#2016-08-1108:45yonatanelCan anyone explain the following? In http://docs.datomic.com/clojure/#datomic.api/datoms it says
:vaet contains datoms for attributes of :db.type/ref
:vaet is the reverse index
It looks like a mistake when reading it since there's one line for each index but two lines for :vaet, but it might mean that :vaet is the reverse of :eavt.#2016-08-1109:42danielstocktonHas anyone solved the datomic (recent) with aleph problems without excluding hornetq server and depending on hornet 2.4.8? https://groups.google.com/forum/#!msg/datomic/pZombLbp-tQ/pyU37oAnAgAJ#2016-08-1110:26danielstockton@timgilbert the id of the database value is the uri afaict, unless you're using the mem db (:id (d/db conn))#2016-08-1110:35robert-stuttaford@danielstockton yep, just confirmed, you do get the uri out if you use :id, which gives you a java.util.URI. str on that value to get back to the value you gave to d/connect#2016-08-1110:36robert-stuttaford(keys some-db) reveals a bunch of interesting things: (:id :memidx :indexing :mid-index :index :history :memlog :basisT :nextT :indexBasisT :elements :keys :ids :index-root-id :index-rev :asOfT :sinceT :raw :filt)#2016-08-1112:50timgilbertOh, interesting. Thanks guys!#2016-08-1116:30val_waeselynckDo you see anything wrong with the following transaction? I'm getting a IllegalArgumentExceptionInfo :db.error/not-an-entity Unable to resolve entity: :bot.log.msg.kind/outgoing in datom [#db/id[:bs.parts/raw-data -1] :bot.log.msg/kind :bot.log.msg.kind/outgoing] when using it with datomic.api/with#2016-08-1116:34val_waeselynckEven more strangely, it seems to work if I wait a few minutes.#2016-08-1117:42robert-stuttaford@val_waeselynck: is :bot.log.msg/kind valuetype ref or keyword? if ref, have you previously transacted [:db/add <tempid> :db/ident :bot.log.msg.kind/outgoing] ?#2016-08-1117:47val_waeselynckit's ref, and yes I have transacted it#2016-08-1118:59potetmAlright, I have an offbeat question. When you pass d/tx-range an #inst, does it resolve that to a t in order to pull from the log? 
If so, how is that resolution done?#2016-08-1119:54jaret@potetm: http://docs.datomic.com/best-practices.html#t-instead-of-txInstant You could have several transactions processed within the same second (inst), but different t.#2016-08-1119:55jaret"Datomic's own t time value exactly orders transactions in monotonically ascending order, with each transaction receiving its own t value."#2016-08-1120:02potetmHey @jaret! I see what you mean, multiple t values could be applicable for a given #inst. I was wondering how that resolution happened. For example, does it look in the index? (Yeah this in reference to the issue I just updated 😄 )#2016-08-1120:02potetmI thought someone here might just know offhand, and that would explain it.#2016-08-1120:02potetmIn hindsight, it was a real shot in the dark.#2016-08-1122:02devn@marshall: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/log.clj#L39 -- are those hard-coded entity IDs safe for the tutorial?#2016-08-1122:02devns/safe/going to work/#2016-08-1122:05marshallThey should if you follow the steps as written#2016-08-1123:12d._.bHeya folks, I am having some trouble wrapping my head around modeling data in Datomic. Let's say I have Company which has 0 or many Applications. I would like to ensure that while I only have a few types of Applications named: "A", "B", and "C", that a Company cannot have more than one "A" application.
I imagine something like:
Company
name: string, cardinality: one, unique: value
applications: ref, cardinality: many, unique: ???
Application
_company: company, cardinality: one
name: string, cardinality: one, unique: ???
#2016-08-1123:12d._.bAn Application will have additional attributes beyond just a name and a reference back to company.#2016-08-1123:14d._.bReasoning out loud: I can't set unique on an attribute which has a cardinality of many, so would I use unique: identity on an application?#2016-08-1123:15d._.bShould I use an enum for the name, since there is a limited, known set of application names?#2016-08-1200:05mrmcc3if you’ve got a fixed list of application types you could make an attribute for each
:application/A (valueType ref, cardinality one)
:application/B (valueType ref, cardinality one)
...
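A sketch of how mrmcc3's one-attribute-per-type suggestion might be transacted, using the full schema-transaction syntax of the era (attribute names are the hypothetical ones from the discussion; untested):

```clojure
;; Hypothetical schema sketch: one cardinality-one ref attribute per known
;; application type, asserted on the company entity. :db.cardinality/one
;; means a company can hold at most one value for each attribute.
[{:db/id                 #db/id[:db.part/db]
  :db/ident              :application/A
  :db/valueType          :db.type/ref
  :db/cardinality        :db.cardinality/one
  :db.install/_attribute :db.part/db}
 {:db/id                 #db/id[:db.part/db]
  :db/ident              :application/B
  :db/valueType          :db.type/ref
  :db/cardinality        :db.cardinality/one
  :db.install/_attribute :db.part/db}]
```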
#2016-08-1200:11d._.b@mrmcc3: In general, I am confused about cardinality and uniqueness (identity vs value).#2016-08-1200:18d._.bSay I have:
:company/name (valueType string, cardinality one, db.unique/value)
:company/applications (valueType ref, cardinality many)
:application/name (valueType string, cardinality one)
:application/cool (valueType boolean, cardinality one)
I would like to make it so there can never be a duplicate :application/name belonging to a particular :company.#2016-08-1200:21d._.bSo transacting something like the following will fail:
[{:db/id #db/id[:db.part/user]
:company/name "Foo"
:company/applications
[{:application/name "Duplicate"}
{:application/name "Duplicate"}]}]
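Per-company uniqueness like this can't be expressed with :db/unique alone; one way to get the guarantee is a transaction function that checks for an existing application of the same name before asserting. A rough, untested sketch, with a hypothetical function name and the example's attribute names:

```clojure
;; Hypothetical transaction function :company/add-application: throws if the
;; company already has an application with the given name, otherwise asserts
;; a new application entity and links it to the company.
{:db/id    #db/id[:db.part/user]
 :db/ident :company/add-application
 :db/fn
 #db/fn {:lang   "clojure"
         :params [db company app-name]
         :code
         (if (seq (datomic.api/q '[:find ?a
                                   :in $ ?c ?name
                                   :where
                                   [?c :company/applications ?a]
                                   [?a :application/name ?name]]
                                 db company app-name))
           (throw (IllegalStateException.
                   (str "company already has an application named " app-name)))
           (let [tid (datomic.api/tempid :db.part/user)]
             [[:db/add tid :application/name app-name]
              [:db/add company :company/applications tid]]))}}
```

Invoked in a transaction as `[[:company/add-application company-eid "Duplicate"]]`; because transaction functions run serially on the transactor, the check and the write are atomic.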
#2016-08-1200:23d._.b(I realize that sort of makes it look like they're component entities, but I am not using a component here, using a ref)#2016-08-1200:24marshall@d._.b: you might want to look at transaction functions http://docs.datomic.com/database-functions.html#2016-08-1200:24d._.b🙂 I sort of had a feeling that might be coming.#2016-08-1200:24d._.bIs what I'm asking crazy talk?#2016-08-1200:27marshallEntity-level uniqueness enforcement is something you'd need to implement via something like transaction functions. Uniqueness by value can be enforced database-wide with :db.unique/value#2016-08-1200:28marshall:db.unique/identity is for enforcing unique entity identities (ie your company name)#2016-08-1200:28d._.b@marshall: Based on my reading, the same is true of specifying an attribute is "required", yes?#2016-08-1200:28marshallCorrect#2016-08-1200:29marshallDatomic is 'inherently sparse'. If you need to ensure the presence of certain attributes you can do that via transaction functions used as 'constructors'#2016-08-1200:31marshallPossibly relevant example: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/constructor.clj#2016-08-1200:31d._.bSo, I could conceivably achieve the same effect as what I was describing above by enforcing some set of attributes are present#2016-08-1200:32d._.b(which are also unique)#2016-08-1200:32marshallYep. That logic can be put in the transaction function. #2016-08-1200:32d._.bWhether that's a good idea or not is another matter 🙂#2016-08-1200:33d._.b"depends on what you're trying to do" of course#2016-08-1200:35marshallIn general, I'd say that is the right approach for that use case. 
The major caveat is that transaction functions run on the transactor and can affect overall write throughput, but I'd argue that these cases (creating a new customer/user/etc) are infrequent and important, so I would tend to implement validation and enforcement that way#2016-08-1200:36d._.bYeah, when I can in relational land, I like to get out of the way and let the database do the work. Validation functions vs simply enforcing constraints#2016-08-1200:37marshallIn a sense, transaction functions are exactly that - letting you define the constraints the db enforces#2016-08-1200:38d._.b@marshall: Perhaps I've just missed a couple of places in day-of-datomic, but a suggestion: one file containing a schema with some many/ref attributes, uniqueness, and backward references (e.g. :_foo), where all of the datoms are transacted in the same file, using a nested map, the list form, and finally, adding onto and retracting an item from the many/ref attribute.#2016-08-1200:39d._.bThat might be deeply specific, but I've read the Transaction, Schema, Identity and Uniqueness, etc. docs several times over, and clicked around the day of datomic repo, and wanted to pass along the feedback.#2016-08-1200:40d._.bOf course, I missed constructors 🙂#2016-08-1200:40marshallI appreciate it. I'll look at what we have and see if I can put together something along those lines#2016-08-1200:59d._.b@marshall: In general, I think a fuller example of an app with a slightly-less-than-trivial data model would be much appreciated. For instance, the Best Practices documentation mentions that Database updates often have a two-step structure: .... The examples I've seen don't do a whole lot of this.#2016-08-1201:00d._.bLast suggestion is: the mbrainz example doesn't create a partition, and it seems to me it ought to.#2016-08-1201:07d._.bAlong those lines, two more questions:
- Is :foo.bar a legal partition name?
- "Your schema should create one or more partitions appropriate for your application." <- I seem to recall hearing during the Q&A at Datomic Conf something about when it's appropriate to mess with partitions, and I thought it was "not much, if ever". If I have :foo/name, :bar/name, :baz/qux, should I be creating partitions for :foo, :bar, and :baz?#2016-08-1201:09marshallPartitions are an optimization for index locality only. If you know ahead of time that some set of data will often be accessed together, then it might make sense to put those data in a partition
1.) The partition was created after import of the seattle data. Can the creation of a partition after-the-fact change anything about locality of datoms that were already added?
2.) Along the same lines, is there any secret handshake w/r/t the partition's name :foo and attributes which use the namespace :foo? (See: :community/name w/ a partition named :communities) There's not, right? I assume that the only way to get the locality boost that partitions allow for is to reference that partition as the :db/id when adding a datom.#2016-08-1201:32d._.bAnd, I suppose finally -- if you were to realize at some point: "Wow, I really need better locality..." what would you do?#2016-08-1201:32marshallCorrect. The partition a datom is in can only be defined when it is transacted#2016-08-1201:34marshallAnd the namespaced keyword you use for the attribute id is not related to the partition#2016-08-1201:36marshallFor your last question - I've never seen that happen, but if it did, the approach would be the same as that for 'I need fulltext' or 'I need to change the data type of an attribute'
- rename the existing attribute
- create a new one with the old name in the desired partition
- migrate the data from the 'old' attribute to the new one#2016-08-1201:38d._.b@marshall: I realize you get paid for it, but it's late, so I owe you a beer.#2016-08-1201:38d._.bto help people out in this channel, I mean#2016-08-1201:38d._.bThanks a lot for all of the help; I really appreciate it.#2016-08-1201:38d._.bHave a good night.#2016-08-1201:45marshallNo worries:) you too#2016-08-1210:16val_waeselynck@robert-stuttaford: was a typo in my schema, my bad 😕 thanks for your help#2016-08-1214:32jaret@yonatanel: Per your question from yesterday at 4:31 AM EST, I talked to Stu about the 2012 post you linked. We optimized query with predicates in 2013:
## Changed in 0.9.5130
* Performance Enhancement: The query engine will make better use of AVET
indexes when range predicates are used in the query.
In terms of your second question, the API is correct and is essentially saying :vaet contains datoms for attributes of :db.type/ref and is the reverse index.#2016-08-1218:04yonatanel@jaret: Do you know if only a single index is used in queries? I wonder if I should cram filtering logic into queries, or have a minimum of that in queries and the rest in regular clojure code. I have reasons for both#2016-08-1218:38kennyHow can I query to see if a :db.cardinality/many value is exactly equal to a passed in collection? For example, I pass in a collection ?coll and I want to find all entities whose :cardinality-many value is exactly equal to ?coll. So I write:
'[:find ?e .
:in $ [?coll ...]
:where
[?e :cardinality-many ?coll]]
But this returns all entities whose :cardinality-many value contains an entity in ?coll.#2016-08-1218:41kennyWhere the passed-in collection is a set, so order does not matter in the equality check.#2016-08-1218:41robert-stuttaford@kenny: you can just compare the set of coll to (:c-m (d/entity db your-e)) with =#2016-08-1218:42kennyI am trying to find your-e though#2016-08-1218:43robert-stuttafordafaik datalog doesn't support this. to express it in datalog terms, it'd be [:find ?e :in $ ?c1 ?c2 <and more> :where (and [?e :attr ?c1] [?e :attr ?c2] <and more>)]#2016-08-1218:45robert-stuttafordyou could write a function (defn has-exact-coll? [db e your-coll-as-set] (= your-coll-as-set (:attr (d/entity db e)))) and call it from within your datalog (d/q '[:find ?e :in $ ?your-coll-as-set :where [?e :attr] [(your-ns/has-exact-coll? $ ?e ?your-coll-as-set)]] db (set your-coll))#2016-08-1218:46robert-stuttafordbut you'll want to find some other way to first restrict which ?es you're looking at, because otherwise you're checking every entity this way 🙂#2016-08-1218:47robert-stuttafordone simple way to do that is to include a clause that first checks for ?e with :attr, as i have done above#2016-08-1218:47robert-stuttaforddoes that make sense?#2016-08-1218:48kennyYes. Thank you 🙂#2016-08-1219:58flipmokidHi all!
I'm playing around with querying clojure data structures with datomic's query engine. I've covered off most of the things I wanted to try but I'm finding it difficult to express one particular thing.
Say I have two lists of lists, the first containing user information (account id, gender, zip code) and the second containing some replacements (account id and zip code). I want to get all users from the first list and return the zip code from the second list if the user is in it, or the zip code from the first list otherwise.
I'm not sure the best way of expressing it, whether the data should be merged before I query it or whether I can do this within the query. So far I have
`
(ns datalog-test.core
(:use [datomic.api :only (db q pull) :as d]))
(q
'[:find ?accid ?gender ?zip
:in $p $r
:where (or (and [$p ?accid ?gender _]
[$r ?accid ?zip])
[$p ?accid ?gender ?zip])]
[[1 :m 22321]
[2 :f 23343]
[3 :m 32431]
[4 :f 34958]]
[[2 49884]
[3 4857]])
`
but I get the error:
`
:db.error/invalid-data-source Nil or missing data source. Did you
forget to pass a database argument?
{:input nil, :db/error :db.error/invalid-data-source}
`
Can anyone offer guidance on how best to achieve the above?#2016-08-1220:16vinnyataidehello, I've installed the datomic dep in my clojure project but don't know where to find the transactor#2016-08-1220:18vinnyataide@flipmokid: the db connection is an obligatory property that should be passed as the last parameter#2016-08-1220:20vinnyataideaka
(def db (d/db conn))
#2016-08-1220:26flipmokid@vinnyataide Hi, I'm using this directly on Clojure data structures and not using a Datomic instance.#2016-08-1220:27vinnyataidethe problem is that you are using a q function that expects a db#2016-08-1220:27vinnyataidethe datomic api expects a data source, even an in memory one#2016-08-1220:28bhaganypassing data structures instead of db's is a thing you can do#2016-08-1220:28bhaganynot sure what's up with the error, though#2016-08-1220:29vinnyataideoh ok#2016-08-1220:29vinnyataidesorry#2016-08-1220:30vinnyataideI see 2 data structures right?#2016-08-1220:31vinnyataidecan you do that?#2016-08-1220:31bhaganymaybe not?#2016-08-1220:32bhaganyI also think that you don't need the :in clause#2016-08-1220:33flipmokid@bhagany: Yes it's an odd one (unless I'm doing something silly), I find I'm only seeing the errors when using the or/and expressions
@vinnyataide: Check out https://gist.github.com/stuarthalloway/2645453, I was surprised and happy that you could do datalog queries on clojure data directly#2016-08-1220:33flipmokidIn the gist he uses multiple collections too#2016-08-1220:34vinnyataide@flipmokid: this is gorgeous#2016-08-1220:34bkamphausI suspect it’s related to the or clause and multiple data sources in there? and the in clause is definitely necessary when passing more than one data source.#2016-08-1220:34bhaganyI was looking at this: https://github.com/alandipert/intension, and noted the lack of :in… but now I realize that it's because it's implicit#2016-08-1220:35bhaganyah, that's right… with or it has to be like ($ or …)#2016-08-1220:35flipmokidI see, so I can only refer to one data source at a time in the or#2016-08-1220:36flipmokidHmm... I wonder how I could achieve what I want to with that restriction#2016-08-1220:36bkamphausI believe or, not, and pull expressions may all have rough edges when it comes to handling multiple data sources.#2016-08-1220:36bhaganyfwiw, this returns results for me:#2016-08-1220:36bhagany(d/q
'[:find ?accid ?gender ?zip
:where (or (and [?accid ?gender]
[?accid ?zip])
[?accid ?gender ?zip])]
[[1 :m 22321]
[2 :f 23343]
[3 :m 32431]
[4 :f 34958]
[2 49884 0]
[3 4857 0]])#2016-08-1220:37bhaganyI added the 0's in the last two datoms to resolve an IndexOutOfBoundsException#2016-08-1220:37flipmokid@bhagany: Thanks for trying, I'll give it a go now and see what results it gives#2016-08-1220:42vinnyataidewhat about the transactor?#2016-08-1220:42vinnyataideI can't find anything about the location in the documentation#2016-08-1220:42vinnyataideit expects you to download the lib as a standalone service#2016-08-1220:43bhaganyare you trying to run a dev transactor?#2016-08-1220:44vinnyataideyeah#2016-08-1220:44bhaganythere's a shell script to start it up in the package you download - bin/transactor#2016-08-1220:45vinnyataidebut I downloaded it as a dep in lein#2016-08-1220:45vinnyataideidk where it is#2016-08-1220:46bhaganythere are two parts - the thing you downloaded via lein is the client library. the thing you download from http://my.datomic.com is needed as well, for the transactor#2016-08-1220:47vinnyataideoh, thanks#2016-08-1220:47bhaganyI'm using pro starter though, if you're using free, the process might be somewhat different#2016-08-1220:48vinnyataideme too, I'm using pro starter#2016-08-1220:50vinnyataideYeah, since the transactor is only one per machine it's kinda obvious#2016-08-1220:50vinnyataidethanks for the help#2016-08-1220:50bhaganyokay, then just to put it all together, here's my whole install process:
- download a zip from http://my.datomic.com and unzip it
- bin/maven-install for the client library
- modify the sample transactor.properties to fit my needs
- bin/transactor + appropriate args to run the transactor#2016-08-1220:51vinnyataidedo I need the maven install if I did the project with credentials gpg?#2016-08-1220:52bhaganyI think that accomplishes the same thing, but I've never tried it#2016-08-1221:01vinnyataideI'm gonna make a technical report about a system that I'm making in datomic with om next, so these details are really good to me 🙂#2016-08-1222:36zaneAre pull queries supposed to work with history databases? http://docs.datomic.com/best-practices.html#use-history#2016-08-1222:37zane(d/q '[:find ?p (pull ?tx [:db/txInstant]) ?added
:in $ ?userid
:where [?u :user/purchase ?p ?tx ?added]
[?u :user/id ?userid]]
(d/history db) userid)
#2016-08-1222:37zaneThrows an IllegalStateException for me.#2016-08-1223:04marshall@zane: no, pull is only supported on current value of db#2016-08-1223:43marshallOr rather, not on a history db. I believe it does work on asOf dbs#2016-08-1300:17vinnyataide@marshall: you can edit your messages#2016-08-1300:18marshallCouldn't figure out edit on mobile :)#2016-08-1300:18marshallAh, it appears to be hold to edit #2016-08-1300:19marshallSoftware iz hard#2016-08-1300:49bkamphausEntity doesn't work either. The schema constraints which allow a valid projection of an entity via the entity API or pull don't hold for a history database (retraction datoms in history db, cardinality or uniqueness violations possible, etc)#2016-08-1302:07zane@marshall: That example should probably be removed from the official docs, then?#2016-08-1302:10bbloom@zane: what is the message of your illegal state exception?#2016-08-1414:18robert-stuttaford@marshall et al, is there any conceivable reason why d/tx-range would return txes outside of the given start and end when both are of type Date?#2016-08-1414:19robert-stuttafordi'm busy working on an epic task to rebuild a database from the first transaction, and integrate a separate database into it as i go#2016-08-1414:20robert-stuttafordi'm trying to find txes in Database Two that are between the current tx for Database One and the next tx for Database One. but, bizarrely, Database Two is giving me a tx from before the start#2016-08-1414:22robert-stuttaford(seq (d/tx-range tx-log2
#inst "2012-12-20T12:32:38.900-00:00"
#inst "2012-12-20T12:32:41.244-00:00"))
({:id #uuid "...",
:data
[#datom[13194139534503 50 #inst "2012-12-20T12:14:12.391-00:00" 13194139534503 true]
...],
:t 1191,
:tempids
{...}})
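If a Date-based tx-range seek can hand back a transaction outside the window, as in the paste above, one defensive option is to re-filter the returned entries on their :db/txInstant datom (entity id 50, visible in the #datom above). A sketch; `conn`, `start`, and `end` are assumed names, not from the log:

```clojure
(require '[datomic.api :as d])

(defn txes-between
  "Entries from d/tx-range whose :db/txInstant truly falls in [start, end)."
  [conn start end]
  (->> (d/tx-range (d/log conn) start end)
       (filter (fn [{:keys [data]}]
                 ;; 50 is the entity id of :db/txInstant
                 (when-let [inst (some #(when (= 50 (:a %)) (:v %)) data)]
                   (and (not (.before inst start))
                        (.before inst end)))))))
```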
#2016-08-1414:23robert-stuttafordgiven a ~3 second timespan, it gave me a tx from 18 minutes earlier 😞#2016-08-1414:27robert-stuttafordseems like something is off with one particular transaction. seeks in other areas behave well. given the age of the transaction, it's possible that the tx in question would no longer be accepted by current version of Datomic.#2016-08-1517:19marshallDatomic 0.9.5394 is now available https://groups.google.com/d/topic/datomic/HV9Xero74P0/discussion#2016-08-1520:11timgilbertSay, inside an EDN file I'm running with conformity, is there a way to call (d/squuid)? The context is I'm setting up some sample data and some of my attributes use them#2016-08-1520:11timgilbert...like as external identifiers#2016-08-1520:13timgilbertRight now I'm sort of hacking around it by just specifying #db.id[:db.part/db -1] etc, but they are not technically EIDs so that doesn't seem exactly correct#2016-08-1520:14timgilbertIs this the sort of thing I might use #db/fn for?#2016-08-1522:11atroche@timgilbert: when i needed to do that, i used clojure to generate the EDN (with squuids), then spit it into a file#2016-08-1522:23timgilbertCool, thanks#2016-08-1522:24timgilbertI'm currently thinking I'll just generate a bunch of UUIDs and just hardcode them in there. It's all fake data anyways#2016-08-1605:32robert-stuttafordit would be nice if datomic provided a reader literal for squuids #squuid "etc"#2016-08-1610:35danielstocktonaren't they just uuids when you're reading them?#2016-08-1610:36danielstocktonif you already have them, what part needs to know whether they were generated sequentially or not?#2016-08-1611:16robert-stuttafordgosh. you're right. i'm a dork. 
i guess what i meant is it would be nice to generate squuids via a tag in edn#2016-08-1611:16robert-stuttafordkinda like temp ids#2016-08-1611:22danielstocktonno, i thought that's what you meant, just got confused by the "etc" i think#2016-08-1611:34robert-stuttafordyeah, that was incorrect#2016-08-1611:39danielstocktonI was curious, looks like these are the data readers defined by datomic
{db/id datomic.db/id-literal
db/fn datomic.function/construct
base64 datomic.codec/base-64-literal}#2016-08-1611:39danielstockton#squuid might be handy too, I don't see why not#2016-08-1611:40robert-stuttafordif we could do the same as we do with temp ids, e.g. #squuid[1], so that you could create relationships with squuids the same you can with db ids#2016-08-1611:40robert-stuttafordthat would be awesome#2016-08-1613:45eggsyntaxAnyone done a query/pull-exp cheat sheet? Because I would use that thing every day...#2016-08-1613:58robert-stuttafordhttp://docs.datomic.com/query.html and http://docs.datomic.com/pull.html are pretty comprehensive. i found that good old practice embedded the concepts quickly#2016-08-1613:59eggsyntaxYeah, those are my go-tos. Still, it'd be nice to have a one-pager to quickly refer back to, especially if I haven't been doing it for a while & I'm forgetting particular details of syntax.#2016-08-1614:00val_waeselynck@robert-stuttaford: the trouble with a squuid tag is that the generated uuids would not be deterministic... at this point I'd say the content of the EDN file has stopped being 'just data',#2016-08-1614:00robert-stuttafordsounds like a good opportunity to contribute 🙂#2016-08-1614:00eggsyntaxTrue, true.#2016-08-1614:00robert-stuttaford@val_waeselynck: true, but this is already the case with #db/id#2016-08-1614:01robert-stuttafordIF you supported determinism with e.g. #squuid -1 so that multiple uses of the token resolve to the same value#2016-08-1614:01robert-stuttafordusing a basic cache#2016-08-1614:01val_waeselynckI know, I feel there's a difference with tempids though, not sure how to express it#2016-08-1614:02val_waeselynckat least with tempids it's always the same datoms that ends up in storage, not so with random uuids#2016-08-1614:03val_waeselynckso it's kinda more deterministic#2016-08-1614:03robert-stuttafordyeah#2016-08-1614:03robert-stuttafordwhich is probably why we don't have a reader tag 🙂#2016-08-1614:04val_waeselynck@robert-stuttaford: I guess so. 
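For reference in this squuid discussion: a squuid packs seconds-since-epoch into the UUID's high 32 bits, which is what makes it index-friendly. A standalone approximation (a sketch, not datomic.api/squuid itself; in a real peer you would call d/squuid):

```clojure
(import '(java.util UUID))

(defn squuid
  "Approximation of datomic.api/squuid: epoch seconds in the high 32 bits,
  random bits elsewhere, so values sort roughly by creation time."
  []
  (let [secs (quot (System/currentTimeMillis) 1000)
        rnd  (UUID/randomUUID)
        msb  (bit-or (bit-shift-left secs 32)
                     (bit-and (.getMostSignificantBits rnd) 0xFFFFFFFF))]
    (UUID. msb (.getLeastSignificantBits rnd))))
```

A data-generation script can then pr-str maps containing these values and spit them into an EDN file, which is the approach atroche describes above.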
Even the #db/id tag felt weird to me in the beginning TBH#2016-08-1614:04robert-stuttafordhttp://blog.cognitect.com/cognicast/107#2016-08-1614:06val_waeselynck@robert-stuttaford: btw, I recently stumbled on your podcast about Datomic and Onyx, I really liked it#2016-08-1614:06robert-stuttafordthanks! which one? on defn.audio?#2016-08-1614:06val_waeselynckyeah that's the one#2016-08-1614:07robert-stuttafordthat was a fun chat. Vijay and Ray are a blast#2016-08-1614:07val_waeselynckI'm looking for solutions to make my analytics faster and more scalable, so definitely looking into tools like Onyx#2016-08-1614:08robert-stuttafordthis may be of service to you http://www.stuttaford.me/2016/01/15/how-cognician-uses-onyx/#2016-08-1614:09val_waeselynck@robert-stuttaford: read it too 🙂#2016-08-1614:09robert-stuttafordoh heh#2016-08-1614:25robert-stuttaford@iwillig: enjoying your episode 🙂#2016-08-1614:27iwilligthanks @robert-stuttaford#2016-08-1614:27robert-stuttafordyou mentioned about how you're having to think differently about historical data#2016-08-1614:27robert-stuttafordhave you started to realise the difficulty of technical debt in your data ? 🙂#2016-08-1614:31bhaganyoh man. I am already stressing out about this, and I haven't had any problems yet.#2016-08-1614:31robert-stuttaford-grin-#2016-08-1614:31robert-stuttafordi'm busy working on an epic to rebuild our database, transaction by transaction#2016-08-1614:32robert-stuttafordinitial analysis of the first 2mil txes yields ±120k txes i want to keep. 
the rest is either schema, data we no longer want, or bad programming#2016-08-1614:32robert-stuttafordthe bad programming and old data are about equal!#2016-08-1614:32bhaganyyikes!#2016-08-1614:33bhaganyI'm worried about the ever-increasing complexity of historical queries that have to deal with schema changes#2016-08-1614:33robert-stuttafordyou mean having to query across all the versions of the schema?#2016-08-1614:33bhaganyyes, correct#2016-08-1614:33bhaganywhich exacerbates my tendency to bikeshed such things#2016-08-1614:34robert-stuttafordyeah. we've handled that in a couple ways. small data sets, we just re-transact and lose the time information. larger ones, we've continued to query across#2016-08-1614:34bhaganyIt may not even become a problem in practice, I'm not yet sure how far back we'll need to go. But here I am worrying about it 🙂#2016-08-1614:35robert-stuttafordi'm looking forward to unifying all that in the rebuild#2016-08-1614:35robert-stuttafordthe primary driver for doing this is to be prepared to shard in future, by building good tooling now#2016-08-1614:36robert-stuttaford10 billion datoms is the theoretical upper limit for a db. we're at around 100mil, which means we have 99 copies to go. that's what's worrying me 🙂#2016-08-1614:36bhaganyI have a looooooooong way to go before I'm there 🙂#2016-08-1614:36robert-stuttafordalso, i get to re-partition the data according to the read patterns we've since discovered we have#2016-08-1614:37bhaganythat kind of thing keeps me from worrying about partitioning too much - I just don't know how it'll be. For some reason, that reasoning works on me for partitions, but not for future schema changes.#2016-08-1614:39robert-stuttaford-grin-#2016-08-1614:41val_waeselynck@bhagany: curious about your specific problem. 
Is it that you are querying on asOf dbs and need to compensate for "future schema change" in your queries that go too far in the past ?#2016-08-1614:41bhaganyyes, that's right#2016-08-1614:42bhaganyI don't actually have that problem yet. But I am trying to anticipate future schema needs now (and at the same time trying to not try, because that kind of thing can get you in trouble too)#2016-08-1614:43val_waeselynck@bhagany: my take on this was to actually stop using asOf in application code#2016-08-1614:43val_waeselynckhistory is not programmable#2016-08-1614:44bhaganyI can see your point there. I may come around to endorsing it, depending on how this goes.#2016-08-1614:50val_waeselynck@bhagany: That's a very interesting problem actually. I think what you could do in a technology like Apache Samza is derive a new Log of facts from an old Log of facts, adding the migration, and using the new Log as the data source in the application code.#2016-08-1614:53val_waeselynckThat'd be an indirection between facts-recording and querying which Datomic does not have (yet)#2016-08-1614:54bhaganyinteresting idea. I'll have to give that some thought.#2016-08-1614:56bkamphaus@robert-stuttaford: have you found in testing how much re-partitioning could possibly speed up query patterns that are currently problematic for you? Just curious.#2016-08-1615:14robert-stuttaford@bkamphaus: not yet. i haven't managed to actually rebuild the db yet. it's a big task -- 58mil txes, ~4 years worth. the first 2mil txes yielded over 100 transaction shapes to reason through alone#2016-08-1615:14robert-stuttafordi'll be certain to share any findings, though. this gon' be fun!#2016-08-1615:25bkamphausyeah, one of the big value props for Datomic for me is being able to support arbitrary query without over-engineering any particular aspect of the schema/model for particular query patterns. 
Obviously you never quite hit that point 100% with any database, but I’m curious if in the wild people end up needing to solve pain points with partitioning, or something like a reasonable set of partition across a few logically grouped domains is usually sufficient.#2016-08-1615:36danielstockton@val_waeselynck im not sure that's true.. #db/id[:db.part/user] isn't deterministic, it's using a counter behind the scenes which is increased on each transaction#2016-08-1615:36danielstockton#db/id[:db.part/user -1] would be deterministic#2016-08-1615:37danielstocktonor it depends on the basis-t of the db-after the transaction, im not sure it uses a counter or not#2016-08-1615:47val_waeselynck@danielstockton: yes but I would argue that it's the same datoms [eav] that end up in storage, so it's more deterministic in a way#2016-08-1615:47val_waeselyncke.g you can rely on transacting your edn file being idempotent#2016-08-1615:48danielstocktonbut the tempid determines the e in a datom, which can be different?#2016-08-1615:49danielstocktonit depends when you transact and against what db#2016-08-1615:49danielstocktonbut if you're importing one edn file on a fresh database, then i guess it is...#2016-08-1615:50bkamphausthe idempotent aspect of the schema comes from upsert for a :db/ident att/val pair (which is a unique identity) and special rules for tempid resolution in that case.#2016-08-1615:50bkamphauswhich is dependent on the fact that an entity id is not an attr/val pair but its own thing, and which is not true of e.g. a uuid attribute.#2016-08-1615:52bkamphausit’s less what #db/id[:db.part/user -1] resolve to across all invocations versus the fact that they will resolve to the same tempid within a particular transaction, meaning that it will result in the implied link/relation/join for tempids being fulfilled by the resulting entity id generation.#2016-08-1617:15jdkealyis it possible to have a function to get-or-create an entity? 
I wanted to write a function that does a lookup, if it finds the criteria, it returns the ent-id of the match otherwise it creates the entity and returns the ent-id of the newly created entity. I can do the lookup in a regular datalog query but I believe that would not be thread-safe, e.g. if I'm importing big datasets and the query is run on multiple machines, it will rapidly create multiple duplicate entities. My function looks like this:
https://gist.github.com/jdkealy/42bf630ceba6385914a43d5645d31d55#2016-08-1617:17jdkealymy function returns tx-info like so {:db-before datomic.db.Db@..., :db-after datomic.db.Db@..., :tx-data #object[java.util.ArrayList 0x787bfc5c [...]], :tempids {}}.... but i didn't actually transact anything... do i access the returned query via tx-data ?#2016-08-1617:20jdkealyalso... im calling the function like so... @(d/transact @db/conn [[:person/namer oid name]])... so i guess i am transacting... i'm a bit confused obviously on this subject#2016-08-1617:21bkamphaus@jdkealy: you’re crossing a couple of concerns that are decoupled in Datomic. I would split the logic somewhat.#2016-08-1617:22bkamphausDo the query to see if what you’re looking for exists yet, if not, go through either a transaction function to create it or rely on assigning entity a unique identity so you can rely on Datomic’s upsert behavior#2016-08-1617:22bkamphauseventually to figure out the outcome of the transaction to get the entity that was created you’ll want: http://docs.datomic.com/clojure/#datomic.api/resolve-tempid#2016-08-1617:23jdkealyright.. but datomic's uniqueness constraint is only on a single attribute as far as i know#2016-08-1617:23bkamphausIf something has a unique identity in Datomic, it will handle that race for you, i.e.
it will resolve the transaction to the existing entity ( http://docs.datomic.com/identity.html#unique-identities )#2016-08-1617:23bkamphauscomposite uniqueness isn’t a thing in Datomic at this point in time, yeah.#2016-08-1617:24jdkealyi thought that this kind of thing was the point of datomic functions#2016-08-1617:25bkamphausyes, it is, though there’s an advantage to taking opportunities to rely on predefined behaviors rather than explicitly program your own with transaction functions.#2016-08-1617:25bkamphausbut composite uniqueness would preclude being able to rely on the default behavior for this case.#2016-08-1617:26jdkealyindeed 🙂 so is there any way to do what i'm trying to do ?#2016-08-1617:26jdkealylike... return the entity id or else create it in a single-threaded way? i'm worried about creating dozens of dupes as i'm going to be running this code on like 4 servers#2016-08-1617:27robert-stuttaford@stuartsierra: hi 🙂 in the latest Cognicast, Craig mentioned your predilection for "decanting databases". it sounds like you've done this a couple times. i'm embarking on a rather large decanting of my own soon, and i wonder if you have any tips, or perhaps even generalised code that may be useful?#2016-08-1617:27jdkealyi.e. can the datomic function return the result of a query or does it only return data related to a transaction#2016-08-1617:28bkamphausif this is basically a big import and you can provide a unique identifier from the domain or by pre-generating uuids for everything prior to import, the default unique identity upsert behavior gets you there for free.#2016-08-1617:29bkamphausa transaction function (note this isn’t the only kind of database function but the typical one) returns transaction data that are then transacted on the transactor (provided it doesn’t throw an exception), but the results are standard transaction result maps. I.e. 
you can’t change the behavior of what happens on the other side.#2016-08-1617:30bkamphausbut you could define things like for example, attempt to create this thing it it doesn’t exist, throw an exception, rely on that exception on the peer to know that if you get/sync a database value after your attempted transaction you can get the entity via query.#2016-08-1617:30jdkealyok... so perhaps instead of returning the entity id and then transacting with the id i should focus on doing the full transaction in the function ?#2016-08-1617:32jdkealyor... another way would be ... if i do call the thread-safe transact function, i can do a lookup directly after and it's guaranteed to be unique right ?#2016-08-1617:32bkamphausyes though that implies a blocking deref on the transaction and inspecting the :db-after, which is fine but may slow down import logic considerably if you’re doing this e.g. on every typical transaction.#2016-08-1617:33jdkealyit would be like.... 20k times a day maybe ? tops#2016-08-1617:35jdkealyi'm not as worried about slowness as i am about my app crashing 😕#2016-08-1617:36bkamphausMy first pass (knowing nothing else of the domain) would probably be the transaction function that tries to transact the thing and if it already exists, aborts the transaction via exception, and then either uses A. tempid resolution for a successful transaction result and a query to find it if the transaction aborts, or possibly B. just query to find it on a database value after the transaction attempt (successful or not) since it should be there either way.#2016-08-1617:37jdkealyawesome... i think B sounds pretty straightforward... many thanks!#2016-08-1617:39robert-stuttafordbasically: find, or try: create-via-tx-fn, catch: find#2016-08-1617:40robert-stuttaford(or (d/q ...) (try (d/transact ... [[:your-make-fn-which-first-also-does-the-d/q-thing ...]]) (catch ... 
(d/q ...)))#2016-08-1617:40robert-stuttafordyou'd move the query bit to a function of its own to keep things DRY of course#2016-08-1617:43robert-stuttaforddistributed systems are hard 🙂#2016-08-1617:43pheuterif i have multiple (pull) expressions inside of the :find clause in a query, is it possible for Datomic to not return nil if one of the pull queries doesn’t return anything?#2016-08-1617:44robert-stuttafordno. nil is not a thing that datalog does at all#2016-08-1617:44pheutersorry, not nil, in this case just []#2016-08-1617:44pheuterstrangely enough the value the find returns is nil#2016-08-1617:45robert-stuttafordsounds like a good candidate for breaking your code apart#2016-08-1617:46robert-stuttafordi may not fully understand how you're getting an empty vec though#2016-08-1617:46pheuter[:find (pull ?e […]) (pull ?e […]) :where [?e …]]
#2016-08-1617:46pheuterif one of those pulls doesn’t return any data, even if the other one does, the query will return [[nil]]#2016-08-1617:47robert-stuttafordi'd put the pulls outside of d/q in a separate fn call#2016-08-1617:47robert-stuttafordand deal just with ids in d/q#2016-08-1617:48robert-stuttafordi don't know the answer to your actual question, though#2016-08-1617:48robert-stuttafordwhat happens if you explicitly include :db/id in your pull expressions?#2016-08-1617:48pheuteryeah, the underlying problem is a complex query for various data and metadata associated with certain entities, some of which can be potentially missing, and i still want to get all the data back, instead of constraining the result set#2016-08-1617:48pheuteri feel like i might have to settle for making n separate queries#2016-08-1617:49robert-stuttafordnothing wrong with separate queries#2016-08-1617:49robert-stuttafordit's all in local memory anyway 🙂#2016-08-1617:49pheuteryeah, maybe not the first request but perhaps it’s not such a big deal#2016-08-1617:50robert-stuttafordit's a non issue; datalog is working with sorted sets of datoms in local memory, always#2016-08-1617:50robert-stuttafordvery often it's better to decouple things!#2016-08-1617:51pheuterthanks! makes sense...#2016-08-1617:52robert-stuttafordgood luck!#2016-08-1617:54jdkealythanks for your help @bkamphaus... i went with solution B... it appears to work on https://gist.github.com/jdkealy/4d8da9c5bbb37df19978c45256ea1856#2016-08-1618:02kennyWhat is the S3 backup-uri format? I tried http://bucket.s3-aws-region.amazonaws.com and <s3p://bucket-name>.#2016-08-1618:02kennyAh, found it. Never mind 😛#2016-08-1618:03bkamphaus for reference, which is probably what you just found 🙂#2016-08-1618:04kennyYes. Where backup-name is a folder or an actual backup?#2016-08-1618:11kennyHmm.. Is it possible to use backup/restore to copy one DB to another? 
I tried, however, I got this exception: java.lang.IllegalArgumentException: :restore/collision The database already exists under the name '...'
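The :restore/collision above follows from the restore rules discussed in the replies: within one storage, a given backup can only live under one database name. A hypothetical CLI sketch of the two permitted shapes (all URIs and paths are placeholders):

```shell
# take a backup of the source db
bin/datomic backup-db 'datomic:sql://mydb?jdbc:postgresql://...' file:/backups/mydb

# OK: restore to the SAME name in the same storage (overwrite)
bin/datomic restore-db file:/backups/mydb 'datomic:sql://mydb?jdbc:postgresql://...'

# OK: restore under any name into a DIFFERENT storage
bin/datomic restore-db file:/backups/mydb 'datomic:dev://localhost:4334/mydb-copy'
```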
#2016-08-1618:24bkamphausCan’t copy one db to two different names in the underlying storage. You can overwrite a db by restoring to the same name, or restore that db to new name on a different storage.#2016-08-1618:25kennyAh I see, thanks#2016-08-1618:44pheuter[:find (min ?e) (max ?e)
:in $
:where [?e :some/attr "some-value"]]
does it make sense to interpret the two values returned above as the earliest entity associated with that value vs. the latest entity associated with that value, assuming that there are multiple entities that share that same attribute-value pair?#2016-08-1619:13bkamphaus@pheuter: leading part of entity id is from partition, so multiple partitions can break that strategy.#2016-08-1619:15bkamphausI would bind the 4th position (tx) and use that if it’s what you mean specifically. Also note that unless the parameter $ is a history db, you won’t find the earliest association if it has since been retracted.#2016-08-1619:16pheuterGood points, thanks for the heads up!#2016-08-1619:17pheuterso my question is then how does it resolve aggregating multiple entities, and then for each entity multiple tx-entities?#2016-08-1619:17pheuterif i do a max on the ?tx, will that be across all entities?#2016-08-1619:21bkamphausIf you do a (max ?tx) on [?e :some/attr "some-value” ?tx] it will be the most recent tx in which the datom matching the leading portion [?e :some/attr “some-value …] was asserted. (and is still true as of the most recent database value). If you pass a history db, it will be most recent tx to touch it (even a retraction) unless you also bind the 5th position to true, i.e. [?e :some/attr “some-value” ?tx true].#2016-08-1619:23pheuterin my particular case i’m looking to use ?e in a subsequent :where clause to get a related entity, how can i know that i’m getting the entity associated with the latest tx?#2016-08-1619:26pheuterbasically, there are two entities, a and b. b has an attribute that’s a ref to a, and it’s possible to have multiple b entities that ref to the same a#2016-08-1619:26pheutergiven a, i’d like to get the latest transacted b that links to a#2016-08-1619:26pheuterthat’s the general problem#2016-08-1619:43bkamphausI’m not sure I follow what you’re asking as it looks like your concern is covered. 
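One way to express pheuter's "latest b that refs a" lookup: because an aggregate like (max ?tx) in :find groups per ?b, return plain [?b ?tx] tuples and pick the maximum on the peer instead. A sketch; the attribute :b/ref-a and the fn name are assumptions:

```clojure
(require '[datomic.api :as d])

(defn latest-b-for
  "Entity id of the most recently transacted b that refs the given a."
  [db a-id]
  (when-let [rows (seq (d/q '[:find ?b ?tx
                              :in $ ?a
                              :where [?b :b/ref-a ?a ?tx]]
                            db a-id))]
    ;; take the [?b ?tx] tuple with the greatest tx, then return ?b
    (first (apply max-key second rows))))
```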
The where clause limits the results, so you only get the relation from entity to transaction constrained by the presence of that attribute and value, for the most recent transaction.#2016-08-1619:48bkamphaus:where [?b :some/ref ?a ?tx] aggregated on the max value of the ?tx returns that datom. I guess you could get a set of matches in the event that there are multiple b entities which assert :some/ref a-id in one transaction.#2016-08-1619:48bkamphausOh, grouping behavior.#2016-08-1619:48pheuterwhat i need is something like: :where [?e :some/attr “some-value” (max ?tx-id)]#2016-08-1619:48pheuterwhere ?e would represent the entity associated with the latest tx#2016-08-1619:49pheuterright, it seems like a workaround now is to manually build a map of tx-ids to entity-ids, find the max, then get the entity-id associated with it#2016-08-1619:50bkamphausif the grouping behavior runs afoul of what you need, as is the case here (just tested it), I would just return the ?e ?tx tuple and apply max-key second on the result.#2016-08-1619:50pheuterthat seems like what i need#2016-08-1619:50bkamphausthe aggregation in query always realizes the intermediate set in memory on the peer anyways, so it doesn’t save you any performance cost to avoid the seq manipulation, really.#2016-08-1619:52bkamphaussorry for initial detour, forgot that ?e (max ?tx) only shows you max ?tx grouped by e, not what you wanted in this case. It’s also possible to use a subquery, if you’re stuck with REST API or don’t have clojure manipulations and don’t want to realize the whole thing in a query, but if you’re in clojure I’d stick with a single query and a sequence manipulation.#2016-08-1619:53vinnyataidehello! how to recover from a connectexception? I wanted to create a db if there's none so I made the follow command
(try
(def conn (d/connect uri))
(catch ConnectException e (d/create-database uri)))
#2016-08-1619:53vinnyataide(:import (java.net ConnectException)))
#2016-08-1619:55vinnyataideShow: Clojure Java REPL Tooling Duplicates All (3 frames hidden)
3. Unhandled java.util.concurrent.ExecutionException
2. Caused by org.h2.jdbc.JdbcSQLException
1. Caused by java.net.ConnectException
Connection refused
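The trace above is a plain connection failure (the transactor was not running). Separately, the create-if-missing logic pasted earlier can be simplified: d/create-database is idempotent and returns false when the db already exists, so no exception handling is needed for that part. A sketch with a hypothetical URI:

```clojure
(require '[datomic.api :as d])

(def uri "datomic:dev://localhost:4334/my-db") ; hypothetical URI

(defn ensure-conn
  "Create the database if it does not exist, then connect."
  [uri]
  (d/create-database uri) ; no-op (returns false) if it already exists
  (d/connect uri))
```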
#2016-08-1619:56pheuter@bkamphaus: thanks for the patience and help, makes a lot of sense now 🙂#2016-08-1619:56vinnyataideoh I guess I need to start the transactor#2016-08-1620:03kennyI am having trouble connecting a third peer. We currently have a license for 10 peers. Two of the peers are being used by a staging and production server. I want to query the Datomic instance running in the cloud from the REPL. However, when I try and connect to my transactor running in the cloud from the REPL, I get clojure.lang.ExceptionInfo: Error communicating with HOST ... or ALT_HOST ... on PORT 4334. Both the staging and production servers are able to connect to the Datomic instance. Do I need to set a username and password locally somewhere or change a local license key?#2016-08-1620:22kennyI also cannot connect to the database from the shell on a server running in the cloud#2016-08-1620:24bhaganyin cases like these, network configuration is always my first stop. have you checked that the transactor is reachable, ports are open, etc?#2016-08-1620:25kennyIt is reachable. Both my staging and production servers can connect to it#2016-08-1620:25bhaganybut is it reachable from the machine you're on?#2016-08-1620:26bhaganyI mean, obviously you can't connect with the peer library. But can you, say, telnet to it?#2016-08-1620:26bhaganyon 4334#2016-08-1620:28kennytelnet ip 4334
Trying ip...
Connected to ip.
Escape character is '^]'.
#2016-08-1620:28kennyYes#2016-08-1620:29bhaganyalright, to be honest, that just about exhausts my advice. it's always the network for me 🙂#2016-08-1620:30kennyIt would be nice if there was a different exception thrown if it was a peer problem or a network problem#2016-08-1620:32kennyShutting down my staging server allows me to connect to the db from the REPL.#2016-08-1620:33kennyIs it possible there is an issue with the license?#2016-08-1620:34bhaganythat exception would really surprise me, if that's the case#2016-08-1620:34bhaganynot sure I can explain what you're seeing any other way, though#2016-08-1620:42kennyHmm.. Will someone from the Datomic team see these messages or should I email them directly?#2016-08-1620:46bhaganythey're usually on here, but if you're paying, I'm pretty sure that comes with direct support#2016-08-1621:30jaret@kenny: Sent you a private message so we can get a support case going 🙂#2016-08-1708:50danielstocktonAre there any interesting strategies for dealing with data spread over geographical regions? The single transactor seems like a weakness here, although it's easy to replicate the storage for reads.#2016-08-1708:50danielstocktonJust accept slightly slower writes? 
Separate DBs for different regions and accept slower reads when you have to query across DBs from other regions?#2016-08-1708:51danielstocktonThose are the obvious trade-offs I can think of#2016-08-1708:52danielstocktonWait, in fact you don't have to accept slower reads if storage is replicated...That seems like the best choice, having different DBs per region and querying across multiple DBs?#2016-08-1708:55danielstocktonUse case: I have an application served from multiple regions all making writes and a cron job in the background which needs to compile an XML using data from all regions to send off to a third party..#2016-08-1709:15val_waeselynck@danielstockton: I would definitely try to see if I can live with slow writes and 1 transactor before jumping to geographical sharding 🙂#2016-08-1709:16danielstockton@val_waeselynck: yeah, my application isn't very write heavy, that would probably be just fine#2016-08-1710:26robert-stuttaford@danielstockton: do you need transactionality across regions? 
or can you consider them to be sharded from each other?#2016-08-1710:31robert-stuttafordi would definitely try 1-transactor first, and see what the latency does to reads and writes from other regions#2016-08-1710:37val_waeselynckI know that's what my SQL-users friends do: they distribute their read replicas and have one master write replica#2016-08-1710:37val_waeselynckAnd this is so much easier to do with Datomic that we have no excuse for not trying it 🙂#2016-08-1710:38val_waeselynckSurprisingly, low-latency writes is not such a common requirement#2016-08-1710:46danielstocktoni can consider them to be sharded, i think, but i do need to be able to see a full view of the data from the background job#2016-08-1710:47danielstocktonthe requirements are forever changing, which makes it difficult#2016-08-1710:47danielstocktonone thing that worries me is making a bad choice that is hard to reverse#2016-08-1712:55vinnyataidehow to create a nil reference when the entity has a cyclic reference?#2016-08-1712:55vinnyataideof cardinality one#2016-08-1712:56danielstocktonyou can just omit the attribute#2016-08-1712:59vinnyataidehm, that's odd, it's saying that I'm inserting a string type, even when I ommit#2016-08-1713:01vinnyataideseems like an odd change in the schema that made my db react#2016-08-1713:01vinnyataidejust deleted it and now its ok#2016-08-1721:41cezarI'd like to use Datomic on Postgres but I have a deep concern about it using one table with a Primary Key (which means a BTree index in postgres). 
Is the uniqueness constraint actually necessary for it to work or could I create the datomic_kvs table without declaring the id column as the Primary Key and instead creating a hash index for it?#2016-08-1807:40danielstockton@cezar I'm not sure the hash index is WAL logged so it might not be reliable enough#2016-08-1807:42danielstocktonhttps://www.postgresql.org/docs/9.5/static/indexes-types.html I'm looking at the warning here#2016-08-1807:48val_waeselynck@cezar: just out of curiosity, what is the issue you see with BTree indexes?#2016-08-1807:51danielstocktoni think its a bit faster for KV type lookups and quite significantly faster for writes#2016-08-1807:54danielstocktonunless you're really concerned with writes, I don't think it's an issue, just cache reads as much as possible so you never have to go to storage#2016-08-1807:54danielstocktonor infrequently at least#2016-08-1807:56danielstocktonif writes are the bottleneck, datomic probably isn't the best fit anyway#2016-08-1808:01val_waeselynck@danielstockton: ah ok. I agree, I would add that data transfer time will likely dominate the btree read lookup time, and that indexing time will likely dominate the btree write time 🙂#2016-08-1808:05danielstocktontrue, log index is also a b-tree though and needs to be written to before a transaction is committed (not via background indexing)#2016-08-1812:18cezarMy concern is not so much with speed but with data volume. If I have a bunch of Datomic databases managed by a single transactor the sole datomic_kvs table will become massive and the corresponding B-Tree index will be very slow for new inserts. In my experience anything over 100M entries in a BTree is just not performant for most applications. Again, I'm more concerned over inserts than reads.
Also to preempt some, yes, I realize there is an option to use Dynamo, Couchbase etc but within an organization it's always easier to deploy on infrastructure that's already in place#2016-08-1812:24val_waeselynck@cezar: I doubt you'll reach 100M entries (would mean 100M segments, each of which contains from 1000 to 20000 datoms according to the docs - http://docs.datomic.com/capacity.html#sec-6), whereas we know the practical limit of Datomic is 10G datoms.#2016-08-1812:24cezar@val_waeselynck: the limit is per database not per transactor#2016-08-1812:24val_waeselynckSo theoretically, you'll stay 1 order of magnitude below the 100M limit I guess#2016-08-1812:25cezarI want to blow way past that limit by having many databases#2016-08-1812:25val_waeselynckhmm I see what you men#2016-08-1812:25val_waeselynck*mean#2016-08-1812:25robert-stuttaford10billion datoms is the theoretical upper limit#2016-08-1812:25robert-stuttaforddue to the size of the index roots in peer memory#2016-08-1812:26cezarok let's work with my actual numbers:
~1000 databases (only a handful used at any one time)
1 transactor
up to 500M datoms per database#2016-08-1812:26robert-stuttafordat 64k datoms per segment, that gives you 156 250 segments#2016-08-1812:27cezarthere is not 64,000 datoms per segment#2016-08-1812:27cezareach segment is about 64Kbytes#2016-08-1812:27cezarthat's far fewer datoms#2016-08-1812:27robert-stuttafordoh, you're right! feel the learn!#2016-08-1812:27cezarusually a couple of thousand in my experience#2016-08-1812:28robert-stuttafordat 20k a seg, that's a lower bound of half a million segments#2016-08-1812:28val_waeselynckpessimistically, assuming you have 1000 datoms per segment, that's about 500k segments per database#2016-08-1812:29cezaryeah, but times 1000 databases total I'm looking at 500M+ rows in postgres#2016-08-1812:29cezarnot to mention we have to remember there are at least three indexes#2016-08-1812:29robert-stuttafordand you can't use more than one transactor?#2016-08-1812:29cezarI can... I just don't want to because my traffic isn't very concurrent#2016-08-1812:29robert-stuttafordlicense limits only apply at the txor level. you can run as many as you want#2016-08-1812:30cezari will have spikes of heavy writes to a couple of database at a time and then they go dormant for a long time#2016-08-1812:30cezarbut I can't excise or archive them. they have to be theoretically accessible due to SLA#2016-08-1812:30robert-stuttafordright#2016-08-1812:30robert-stuttafordthen perhaps psql isn't the right storage for you#2016-08-1812:30val_waeselynck@cezar: If you don't want your BTrees to get too deep you could maybe create several datomic_kvs tables#2016-08-1812:31cezar@val_waeselynck: but how do I set up the transactor to write to a bunch of tables vs just one?#2016-08-1812:31cezaror do you mean use N transactors?#2016-08-1812:31val_waeselynckI'm not sure you can share a transactor between several databases actually#2016-08-1812:32cezaryes you can... 
I tested that with no ill effects#2016-08-1812:32val_waeselynckinteresting!#2016-08-1812:32cezarI believe it is officially supported#2016-08-1812:33val_waeselynckoh you're right, that's even how dev storage works#2016-08-1812:33robert-stuttafordmultiple databases on a transactor? definitely#2016-08-1812:33val_waeselynckmy mistake#2016-08-1812:33robert-stuttafordmultiple storages on a transactor? nope#2016-08-1812:34val_waeselynckI think for this kind of advanced stuff I should definitely leave you in the good hands of Cognitect support 🙂#2016-08-1812:34cezarI hope they could pipe in here 🙂 I don't have a contract with them yet (though we are currently 90% committed to Datomic for this project)#2016-08-1812:35cezarbut I do have to resolve the BTree growth issue or get the buy in to use a proper KV store like Cassandra or Couchbase#2016-08-1812:35robert-stuttaford@marshall and @bkamphaus can likely offer useful info#2016-08-1812:40stuarthalloway@cezar Datomic does not currently provide any way to remove the “dormant” dbs from a transactor, you would have to fail over to another transactor to do that#2016-08-1812:41cezar@stuarthalloway: that's not really my issue. Scroll up for the start of this conversation#2016-08-1812:41robert-stuttafordor Stu 🙂#2016-08-1812:41stuarthalloway@cezar: maybe it is an issue you don’t know about yet 🙂#2016-08-1812:42stuarthallowaytransactors are intended to manage a small number of dbs#2016-08-1812:42stuarthallowayif you have lots of dbs, you will need lots of transactors#2016-08-1812:42stuarthallowayseparately from any storage implications#2016-08-1812:43cezarwhy is this? What if I only access a handful of dbs at any one time?#2016-08-1812:43stuarthallowaythey stay in memory forever#2016-08-1812:43cezaroh#2016-08-1812:44stuarthallowayI am not saying it has to be that way — nobody has requested a use case like yours before#2016-08-1812:44cezarhow do folks like Nubank who eventually plan to use datomic at scale? 
tons of transactors?#2016-08-1812:44stuarthalloways /tons /tens 🙂#2016-08-1812:44cezarI see 🙂#2016-08-1812:45cezarwell, I was hoping to neatly subdivide my data into a database per customer account (we have a couple of thousand customers)#2016-08-1812:45cezarand they will rarely access that data concurrently#2016-08-1812:45stuarthalloway@cezar how quickly do you need to bring up a db for a mostly dormant customer?#2016-08-1812:46stuarthallowaye.g. you could make a process manager that spins up an appropriate peer/transactor pair on demand, and then have your own external logic to spin them back down#2016-08-1812:46cezardepends on the request, but usually if it's dormant then a couple of minutes might be OK... but I'd have to consult product managers on this#2016-08-1812:47robert-stuttafordyou'd have to shard storages with that approach, right, @stuarthalloway ?#2016-08-1812:47stuarthallowayrough guess, you should be able to spin up a system in about a minute#2016-08-1812:47cezaroperationally that may be a hard sell for me#2016-08-1812:48cezarto have all this infrastructure around standing up/shutting down transactors#2016-08-1812:49stuarthalloway@robert-stuttaford: transactors cannot share the same storage, but can cohabit in the same storage engine under different table names#2016-08-1812:49robert-stuttafordthat's what i thought, which does solve the original problem cezar mentioned#2016-08-1812:49cezar@stuarthalloway: how many dbs per transactor are "reasonable"?#2016-08-1812:49cezar10? 100?#2016-08-1812:50cezarmore?
fewer?#2016-08-1812:51stuarthallowayDatomic is fundamentally a Cloud architecture, built for a world where processes are cheap and isolation is a Good Thing#2016-08-1812:52stuarthalloway@cezar transactor should only handle a tiny number of dbs that are both (a) large and (b) have ongoing write volume, and by tiny I mean <10, probably closer to 1-3#2016-08-1812:52stuarthallowaylots of customers at scale shard by time, so only 1 db has ongoing write volume#2016-08-1812:52cezarwhat if only a couple are written to concurrently#2016-08-1812:53stuarthallowayright, see above#2016-08-1812:54stuarthallowayin e.g. the AWS cloud, the answer is clear — you just do 1 transactor per db and be done with it#2016-08-1812:54stuarthallowayfor people running their own data centers, this can be more of a challenge because they lack something as polished as CloudFormation, ASGs, etc.#2016-08-1812:55cezarthis is my use case unfortunately 😕#2016-08-1812:55robert-stuttafordi'd love to know how they accomplish that time based sharding#2016-08-1812:55cezarie internal data center#2016-08-1812:56stuarthalloway@robert-stuttaford it is easy if your queries are time-scoped by the nature of the domain. Just start a new transactor+db on each domain time boundary#2016-08-1812:56stuarthallowayit is not easy otherwise 🙂#2016-08-1812:57robert-stuttafordi guess i'm more curious about the boundary between the shards and the control database#2016-08-1812:57robert-stuttafordbut i suppose i could figure it out if i thought it through!#2016-08-1812:57robert-stuttafordi hear you, though. it has to make sense for the domain#2016-08-1812:58stuarthalloway@cezar I understand, and Datomic may not be a great fit. What was your aggregate data size across all customers, in datoms?#2016-08-1812:58robert-stuttaford@stuarthalloway do you guys have any tools for rebuilding databases? what was mentioned as 'decanting' on the last cognicast episode.
i'm gearing up to do so at the moment, and i'd love to leverage any shortcuts that may exist, if you have any 🙂#2016-08-1812:59robert-stuttafordreason is to get rid of all the accumulated cruft over 4 years - badly named schema, unwanted data (in the 100,000s datoms range), no-op transactions, etc#2016-08-1813:01cezar@stuarthalloway: Datomic is a very good fit otherwise. Plus we already started building on it. It never occurred to us that the limit of DBs per transactor was so small. We might still manage somehow but it's certainly making our lives a lot harder.
I don't have an "aggregate" figure now but the data (like most data) will be cumulative over time. I forecast about 100B datoms per year (spread across many separate DBs)#2016-08-1813:01cezarvery rough estimate#2016-08-1813:01stuarthalloway@robert-stuttaford: several customers have written tools, some with our help. Some planned to open source but not sure any have.#2016-08-1813:02stuarthalloway@cezar we should have @marshall give you a call and talk through options#2016-08-1813:02cezarI'll PM him with my phone number#2016-08-1813:03robert-stuttafordthanks Stu#2016-08-1813:07robert-stuttaford@stuarthalloway forgive my cheekiness, but is it perhaps possible for you to put me in touch with those who planned to open source theirs? it's a big job i'm tackling, and i'd love an independent perspective on this, as i may save myself some time and effort#2016-08-1813:07robert-stuttafordtotally cool, if not possible#2016-08-1813:32stuarthalloway@robert-stuttaford: I will defer to @marshall to sort that out#2016-08-1813:32robert-stuttafordthank you 🙂#2016-08-1813:50pesterhazy@robert-stuttaford I have a tool like that, useful for making a subset db, based on your work#2016-08-1813:50pesterhazyseems like everyone has a tool like that#2016-08-1813:55robert-stuttafordyeah. this time i care about maintaining the transaction order, and not losing the original timestamps. the one i shared with you before is just a 'now' snapshot, which is a lot simpler to produce#2016-08-1814:02marshall@robert-stuttaford I'll do a bit of asking around#2016-08-1814:18robert-stuttafordthank you, sir 🙂#2016-08-1815:35kvltHey all... I've got kind of a difficult query. The issue is that the data set is rather large. I have an event of a specific type that I'm trying to tie back to another entity based on related refs they each have. I'm finding that I'm running out of memory before this query completes.
I was wondering if anyone had any tips#2016-08-1815:45robert-stuttaford@petr datalog queries are set based; the whole result needs to fit in ram#2016-08-1815:45kvltSo you'd suggest breaking them up?#2016-08-1815:46robert-stuttafordyou could instead use d/datoms -- which is lazy -- to walk one entity kind and use pull / entity to discover the rest#2016-08-1815:46robert-stuttafordthis at least allows you to do partial query or batched query#2016-08-1815:47robert-stuttafordif you need to do this query often, you could write cache refs into the database that shorten the path from one to the other#2016-08-1815:47kvltWouldn't that require there to be indexes?#2016-08-1815:47kvltThis is a once-off query#2016-08-1815:47robert-stuttafordeverything's indexed already 🙂#2016-08-1815:47kvltThe idea is to create this cache ref#2016-08-1815:47kvltThat's the reasoning behind the query#2016-08-1815:47robert-stuttafordgotcha#2016-08-1815:47robert-stuttafordhave you used d/datoms before?#2016-08-1815:48kvltI have but I think only with the :avet index#2016-08-1815:48kvltIt's been a while#2016-08-1815:48robert-stuttafordright. you can use it with others eavt vaet aevt#2016-08-1815:48kvltI know 🙂 Just trying to refresh my memory#2016-08-1815:49robert-stuttafordthe event type - is that expressed as an attr?#2016-08-1815:49kvltYeah#2016-08-1815:49robert-stuttafordperhaps as a ref to an enum?#2016-08-1815:49kvltI can show you the query I've written#2016-08-1815:49robert-stuttaforde.g. one of several entities with db/ident values?#2016-08-1815:50kvlt(d/q '[:find ?ue
       :in $
       :where
       [?ue :user-event/type :user-event.type/create-share]
       [?ue :user-event/asset ?asset]
       [?share :share/assets ?asset]
       [?share :share/created-at ?t]
       [?ue :user-event/occurred-at ?t]
       [?user :user/events ?ue]
       [?user :user/shares ?share]]
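An editorial aside: the memory pressure from a set-based query like the one above can be relieved by walking an index lazily with d/datoms, as robert-stuttaford goes on to suggest. A sketch only, reusing the attribute and ident names from the query above, and untested against a real database:

```clojure
;; Sketch: instead of one set-based datalog query, lazily walk the VAET
;; index for the enum ref, then process entity ids in small batches.
;; `db` is a Datomic database value; names come from the query above.
(require '[datomic.api :as d])

(defn create-share-event-eids
  "Lazy seq of the :e values whose :user-event/type is
   :user-event.type/create-share, read from the VAET index."
  [db]
  (map :e (d/datoms db :vaet
                    (d/entid db :user-event.type/create-share)
                    :user-event/type)))

(defn process-events-in-batches!
  "Realises only `batch-size` entity ids at a time, calling
   `handle-batch!` (e.g. a pull + transact step) on each chunk."
  [db handle-batch! batch-size]
  (doseq [batch (partition-all batch-size (create-share-event-eids db))]
    (handle-batch! batch)))
```

Each batch could then be expanded with d/pull (including reverse refs such as :user/_events) and the cache refs transacted incrementally, so only a small portion of the work is in memory at once.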
#2016-08-1815:50kvltThe first line refers to an enum#2016-08-1815:50robert-stuttafordif so, then you can cheat: (seq (d/datoms db :vaet (d/entid db :user-event.type/create-share) :user-event/type)) all the :e values on this seq will give you ?ue#2016-08-1815:50kvltThe others are refs#2016-08-1815:51robert-stuttafordlazily#2016-08-1815:52robert-stuttafordyou can then craft a pull spec which expresses the rest of your clauses, or perhaps several normal clojure operations#2016-08-1815:52robert-stuttafordsee where i'm going with this?#2016-08-1815:52kvltYep I do#2016-08-1815:53kvltI don't see how this would be much better though#2016-08-1815:53robert-stuttafordnot sure if you know, but pull does support reverse ref traversal#2016-08-1815:53kvltI did know that#2016-08-1815:54robert-stuttafordwell, this way, you have the option of batching results and transacting those cache refs every so often#2016-08-1815:54kvltAt some point I'm going to evaluate this expression anyway. Now I could chunk it#2016-08-1815:54robert-stuttafordwhich allows you to do a small portion of the work at a time#2016-08-1815:55kvltYep, I was originally just thinking of doing the first line and then using partitions to chunk the data into smaller pieces#2016-08-1815:55kvltThanks rob#2016-08-1815:56robert-stuttafordgood luck 🙂#2016-08-1818:19danielstocktonDid Datomic used to have attributes which were later removed? I'm wondering why there seem to be gaps (e.g. no entities 5-7) and nil entries in (:elements db)#2016-08-1821:39jdkealyDoes anyone here use Datomic in tests with Circle CI ?
I can't seem to figure out if this is possible#2016-08-1822:12zentropeSay you regularly receive a broadcast entity (say, a User profile) that probably hasn't changed.#2016-08-1822:13zentropeIf you save it to the DB every time, you get a bunch of empty transactions.#2016-08-1822:13zentropeIf you make a db/tx function to check if it needs to actually be transacted, you also get a bunch of empties (because you return []) most of the time.#2016-08-1822:14zentropeIf you query the DB to resolve the entity, then compare it to the one coming over the wire, you're not transactionally safe.#2016-08-1822:14zentropeIs there a story for this?#2016-08-1822:15zentropeIf you've only got one client, you can serialize I guess.#2016-08-1822:16zentropeAnd if you have more than one, perhaps the occasional empty if the "value of the db" you're querying is updated elsewhere. Hm.#2016-08-1822:16zentropeOr you could have your db/tx throw a specific "short-circuit" exception you don't have to log as an error.#2016-08-1822:55val_waeselynck@zentrope you could also batch them to reduce the number of empty transactions#2016-08-1822:57zentropeHm. Makes sense. Or even put a cache/memoize in there somewhere. Store incoming message checksums.#2016-08-1822:58val_waeselynck@zentrope or a Bloom filter or whatnot, but you may run into cache invalidation issues#2016-08-1822:59val_waeselynckyou can also serialize externally even with several peers using e.g. HornetQ with Message Grouping#2016-08-1823:00zentropeYeah. All techniques outside of datomic itself.
Perhaps the "throw a special exception" idea is the least amount of work.#2016-08-1823:01val_waeselynckwhatever floats your boat 🙂 what's the frequency ?#2016-08-1823:01zentropeWell, I'm working on a POC, so it's about every 10, 15, 30 seconds or so.#2016-08-1823:02zentropeRegardless, I was mainly interested in if there was an obvious Datomic answer.#2016-08-1823:02zentropeFor instance, with RDBMS, you can use a .rollback if you discover things don't need to be done. That kind of thing.#2016-08-1823:03zentropeEven if I do the naive thing and just query the database right before deciding to write, if I do overwrite something, I've always got the history. ;)#2016-08-1823:04val_waeselynckhum i guess in your case problems arise if you decide not to write#2016-08-1823:07zentropeOops. Yep. Right.#2016-08-1900:25stuarthalloway@zentrope if you throw an exception from a tx function then nothing is added to the durable store, similar to rollback#2016-08-1901:09zentrope@stuarthalloway Thanks. I personally like that approach.#2016-08-1903:33robert-stuttaford@jdkealy: tests on circleci are totally possible IF your tests use a memory database - datomic:mem://#2016-08-1903:41bostonaholic@robert-stuttaford: CI aside, that's a pattern I'd suggest anyway#2016-08-1907:17robert-stuttafordyep!#2016-08-1908:42pesterhazyStoring information in reified transactions can be pretty inflexible sometimes#2016-08-1908:43pesterhazyI wrote an exporter that exports a (subset of) a prod db, so that developers can use it as a dev db on their laptops#2016-08-1908:44pesterhazyThe exporter exports the datoms but transacts them in a different granularity (in chunks of 1000)#2016-08-1908:45pesterhazyBut this also means that the dev dump lacks the tx metadata#2016-08-1908:46pesterhazyWe're using conformity for migrations, which uses tx metadata to mark transactions as "already performed".#2016-08-1908:47pesterhazyUnfortunately, this means that conformity considers all txs as "not yet performed", which leads
to issues with schema alterations.#2016-08-1908:47pesterhazyAnd as transactions are immutable, it's not easy to fix after the fact.#2016-08-1909:06robert-stuttafordyep. you basically have to not-use-conformity for schema in your dump db#2016-08-1909:08pesterhazyI mean it works for the most part, until you start renaming attributes#2016-08-1909:08pesterhazy(I don't know why we started doing that, not a good idea unless you're forced to)#2016-08-1909:24onetomis there an idiomatic way to get enums in query results as keywords (their :db/ident)?#2016-08-1909:26onetomif i do pull [* {:user/status [:db/ident]}] i get a nested map which is an unnecessary complication#2016-08-1909:51robert-stuttaford@onetom, pull does that by design, because you may have other things on that entity that you care to pull#2016-08-1909:51robert-stuttafordd/entity returns the keyword; d/pull returns what you've printed#2016-08-1909:52onetomawesome, thanks!#2016-08-1911:00stuarthalloway@pesterhazy I think the inflexibility is in the exporter, not in reified transactions. The exporter ignores a semantic of the db, so you cannot then expect that semantic to be preserved#2016-08-1911:00stuarthallowayand conformity is hardly the only thing that uses reified transactions#2016-08-1913:20lvh@robert-stuttaford: I was listening to your podcast episode on Datomic on a plane ride; it was interesting to hear about your experiences even though I'm already somewhat familiar with datalog/datomic/datascript -- thank you 🙂#2016-08-1913:21robert-stuttafordabsolutely my pleasure 🙂#2016-08-1913:21robert-stuttafordwas great fun to chat to the guys. it got Deep Nerd in there#2016-08-1916:18codonnellIs there anything I can do to reduce transactor memory usage? I'm using datomic for a personal project; the database is only a couple hundred MB, but the transactor is eating 1.5G of memory.#2016-08-1916:50potetmIs there any way to log dynamodb requests?#2016-08-1916:50potetmSpecifically a slow log? (e.g.
log request id every time a request takes longer than Xms)#2016-08-1917:44jdkealy@robert-stuttaford but how do you boot the transactor? do you need to create a docker container ?#2016-08-1918:18jaret@codonnell: The transactor by default launches with 1 gig of memory. You can modify the properties in the transactor properties file and try to push this lower, but 1 gig is recommended for use on laptops/development. The transactor also accepts java arguments when executed so you can pass in -Xmx1g or whatever your desired memory usage is.#2016-08-1918:27robert-stuttaford@jdkealy we use the AWS AMI that Datomic provides, after sprinkling some https://datadoghq.com agent code on it#2016-08-1918:28robert-stuttafordhowdy @jaret 🙂#2016-08-1918:28jaretHey @robert-stuttaford ! parrot#2016-08-1918:55cmcfarlenIs this a reasonable way to get the last few transactions or will it perform poorly for large transaction logs? (take-last 5 (d/tx-range (d/log connection) nil nil))#2016-08-1919:29arohnerpotetm I’m not sure I’ve ever seen a slow request in dynamo. i.e. I don’t think it’s possible to write a slow query, aside from scan#2016-08-2007:27robert-stuttaford@cmcfarlen you'll have to check the behaviour of take-last, but i'm pretty sure that'll walk your entire transaction log#2016-08-2014:59bkamphaus@cmcfarlen Yes, as @robert-stuttaford mentions it will perform poorly (always scan the transaction log in its entirety) with nil passed to both time limiting arguments. Best to make some educated guess/use some heuristic about the start argument (position of the first nil) so as to limit the total number of transactions considered.#2016-08-2022:28zentropeIs there a way to get the wall-clock time of a TX returned from the d/history database?#2016-08-2022:29zentropeI get numbers like: 13194139534536.#2016-08-2022:30zentropeResolves to year 2388.#2016-08-2022:35zentrope(d/tx->t 13194139534536)#2016-08-2022:37zentropeNope. 
Misunderstood the glossary on that one.#2016-08-2022:40zentrope:db/txInstant. Hm.#2016-08-2107:51jethroksyhi#2016-08-2107:52jethroksyI have a query that fetches all locations#2016-08-2107:52jethroksyeach location belongs to a space#2016-08-2107:52jethroksy(defn get-locations [conn]
  (->> (d/q '[:find [(pull ?locations [*]) ...]
              :in $
              :where [?locations :location/id]]
            (d/db conn))
       (map c/db->loc)))
#2016-08-2107:52jethroksyI was assuming that the wild card for the pull would recursively fetch attributes for the space#2016-08-2107:52jethroksybut I still get a result like this:#2016-08-2107:52jethroksyactual: ({:address "19 Foobar Street",
          :postal-code "S890123",
          :space {:db/id 277076930200554}}
         {:address "19 Barbaz Street",
          :postal-code "S123456",
          :space {:db/id 277076930200554}})
#2016-08-2108:02pesterhazyI believe the space needs to be marked as :db/isComponent true: http://blog.datomic.com/2013/06/component-entities.html#2016-08-2108:09jethroksyah I'll play around with that, thanks!#2016-08-2108:20jethroksyfwiw I think I had to touch the return results#2016-08-2108:38hansyou can also recursively pull referenced entities, i believe the syntax was [* {:space [*]}]#2016-08-2108:39hansit is documented, though.#2016-08-2108:49pesterhazytouching is giving up 🙂#2016-08-2109:11robert-stuttafordalso, d/touch only works with d/entity results#2016-08-2109:30jethroksyyup, I ended up with#2016-08-2109:30jethroksy(defn get-locations [conn]
  (->> (d/q '[:find [(pull ?locations [* {:location/space [*]}]) ...]
              :in $
              :where [?locations :location/id]]
            (d/db conn))
       (map c/db->loc)))
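For reference, the alternative pesterhazy points to above: if :location/space were installed as a component attribute, the plain [*] wildcard would recurse into it. A hypothetical schema sketch (untested), with the caveat that component semantics imply ownership:

```clojure
;; Hypothetical schema sketch: making the ref a component so that
;; pull's [*] wildcard recurses into it automatically.
;; Caveat: a component is retracted along with its parent entity, and
;; d/touch walks into it — usually wrong for a shared `space`, which is
;; why the explicit join spec [* {:location/space [*]}] is used above.
{:db/id #db/id[:db.part/db]
 :db/ident :location/space
 :db/valueType :db.type/ref
 :db/cardinality :db.cardinality/one
 :db/isComponent true
 :db.install/_attribute :db.part/db}
```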
#2016-08-2109:31jethroksyI was misled into thinking it would recursively pull my referenced entities#2016-08-2109:31jethroksy> Wildcard Specifications
The wildcard specification * pulls all attributes of an entity, and recursively pulls any component attributes:#2016-08-2109:31jethroksythis was from http://docs.datomic.com/pull.html#2016-08-2109:32jethroksy:release/media seemed to be recursively pulled, as well as :medium/tracks#2016-08-2119:33bkamphaus@jethroksy just to note this, that documentation specifies component attributes only, and those attributes are set with isComponent/true :: https://github.com/Datomic/mbrainz-sample/blob/master/schema.edn#L266 — also see the blog post: http://blog.datomic.com/2013/06/component-entities.html (though that’s in the context of touch, same applies to pull wildcard pull spec).#2016-08-2209:00jethroksy@bkamphaus: thanks, it hadn't occurred to me that component was treated as a technical term before the above two posts. Forgive my mediocrity.#2016-08-2209:10greywolve@bkamphaus Does d/with work on as-of db values ?#2016-08-2211:25yonatanelCan I add a nil attribute value with the map form and expect it not to be asserted at all while the rest of the attributes are?
{:db/id (d/tempid :db.part/user)
:order/status (get-status)}
;; get-status might return nil#2016-08-2211:56robert-stuttafordno. you need to exclude it from your transaction#2016-08-2211:57yonatanelthanks robert#2016-08-2211:57robert-stuttaford(let [s (get-status)]
(cond-> {:db/id (d/tempid :db.part/user)}
s (assoc :order/status s)))#2016-08-2211:57robert-stuttafordyou probably aren't doing this, but you can't assert {:db/id ...} without at least one attr along with it#2016-08-2211:58yonatanelyeah i have a bunch of them#2016-08-2212:50pesterhazyso I'm not the sole inventor of the (cond-> m x (assoc :x x)) pattern. bummer#2016-08-2213:50gerrityou bet! rich did that years ago 😉 https://github.com/Datomic/codeq/blame/master/src/datomic/codeq/core.clj#L406#2016-08-2213:55robert-stuttaford-grin- use the hell out of that pattern#2016-08-2215:13bhagany@greywolve: I did an experiment over the weekend using d/with on as-of dbs, and I haven't seen any issues, fwiw#2016-08-2215:22bkamphausI’ll have to defer to @marshall or @jaret re: anything changing with that behavior, but last I was aware with prospective transactions don’t change what an as-of database will find/retrieve with query, datoms, etc. - the with transaction is filtered out of the as-of db.#2016-08-2215:27jaret@greywolve: @bhagany https://groups.google.com/d/msg/datomic/-gDB2EX7r58/CP2n4GELAgAJ with and as-of should not be used together.#2016-08-2215:28bhagany@jaret I saw that, and I was under the impression that that thread applies to doing the reverse#2016-08-2215:28greywolveThanks all 🙂#2016-08-2215:28bhagany(d/as-of (d/with …)) does lead to unexpected results#2016-08-2215:30bhagany(if you're expecting your d/with stuff to show)#2016-08-2215:30marshallspeculative branching (i.e. with) is only allowed on the “current” DB
as-of is a filter, so I wouldn't recommend doing either#2016-08-2216:30nlessaHi, I think I am understanding something wrong in the CORS option of REST API.
Isn't it the way of specifying all origins allowed, e.g.,
bin/rest -p 8001 -o /'*'/ ddb datomic:<ddb://us-east-1/nameofdynamodatabase>#2016-08-2217:10zentropeWhen you retract an entity related to an isComponent attribute, is the history for that entity lost? I can’t find some of them even with (d/as-of db tx).#2016-08-2217:32zentropeOh. Hm. If a given TX has retracted an attribute, you have to use an earlier “value of the database” to find out what it was.#2016-08-2217:35zentropeIf that history datom indicates the attribute was “deleted”, then decrement the TX and run the query with the as-of db to find out what it pointed to. Seems to work.#2016-08-2218:18chris_johnsonoperational question: if you have a transactor running in EC2 and you want to schedule backups of the db, is the best practice to have a separate EC2 instance (or a Lambda function or what have you) with a separate Datomic install and a crontab? My initial thought was to add an /etc/cron.d entry in the CF template but my stumbling block is the fact that the actual install location of …/bin/datomic is not easily found#2016-08-2218:18chris_johnsonwhich makes me suspect that I’m not supposed to use it for anything. 🙂#2016-08-2218:26pesterhazyIMO better to use a separate instance. We use our jenkins for that#2016-08-2218:41robert-stuttaford@chris_johnson we use a separate t2.medium which runs hourly. i haven't yet figured out how to make it run backups in a loop (start again as soon as you're done). 
it's NOT recommended to run backups on your transactor machine because of the CPU hit#2016-08-2218:41robert-stuttafordbackups run from storage directly, no need for transactor involvement!#2016-08-2218:42chris_johnsonright, I was thinking since we don't (yet) have anything running but transactors, and since I'm about to push a version update, maybe we could throw the cron job on the machine we have#2016-08-2218:43chris_johnsonbut I actually reached the same conclusion as you as soon as I read "please give backup as much memory as your transactor usually gets"#2016-08-2218:43chris_johnsonour data is small and backups that have been done by hand are pretty snappy, I think I'm going to try doing it in a Lambda function#2016-08-2218:43robert-stuttafordyeah, not a good idea. too much of a disturbance to transactor process. of course, we did exactly this for the first 18 months, when we ran a snowflake server with transactor, postgres, memcached, backups, and rabbitmq all on one instance 😄#2016-08-2218:43chris_johnsonI will let you know how it goes#2016-08-2219:10bkamphausI'll just echo the recommendation to use a separate instance and have it back up in a loop or at regular (frequent) time intervals.#2016-08-2310:17jimmyhi guys, is there any way we can output missing facts as nil in a datomic query? Thanks#2016-08-2311:25hans@nxqd: Datomic has no concept of a "missing fact", so there is no direct way to include keys in entities that are not present with a value. You need to be aware that any entity can have any attribute attached. There are no "entity types" which would restrict the possible attributes to a certain set.#2016-08-2311:26hans@nxqd: What this means is that Datomic can't do it for you, but you can certainly do it yourself in your data layer which sits between your application and datomic.#2016-08-2312:11yonatanel@nxqd: I've used a similar query to this (not with nil in my case). Maybe it will work:
(or (and [(missing? $ ?entity :some/attribute)]
         [(identity nil) ?value])
    [?entity :some/attribute ?value])#2016-08-2312:42yonatanel@nxqd: Also, there's get-else (http://docs.datomic.com/query.html#get-else) and pull default expressions (http://docs.datomic.com/pull.html#default-expressions)#2016-08-2313:06ckarlsenI've always had the impression that the rest api service counts as a normal licensed peer - is this correct?#2016-08-2313:06marshall@ckarlsen Yes, the REST API is a peer.#2016-08-2313:09ckarlsenfor some reason I can run 2 jvm peers + 1 rest service with a 2 peer license...#2016-08-2313:10ckarlsenusually the third peer just throws an exception on connect#2016-08-2314:35yonatanelIs it possible to use a lookup ref instead of entity id in :db.fn/cas? Or even better, a temporary id (negative) in the same transaction that asserts a :unique/identity attribute and the :db/id?
Also, what if entity has two identity attributes in the same transaction, one matches and the other doesn't? Does it resolve entity ID by the first one in the transaction sequence?#2016-08-2323:50jimmy@yonatanel cool, those seem to be what I need.#2016-08-2401:11jimmy@yonatanel the default expression doesn't seem to support '[* (default :something/a "na")]. I have to write my function to get all keys, mapping to default value and merge back to current one.#2016-08-2404:54hans@nxqd: why does that bother you? merging the defaults into the entity returned by datomic can be done succinctly in clojure.#2016-08-2406:39magnarscan I use pull syntax to get the count of a :db.cardinality/many attr?#2016-08-2406:45robert-stuttafordi don't believe so, no, @magnars#2016-08-2406:48magnarsI guess it wouldn't compose well#2016-08-2409:10jimmy@hans yeah, I thought having default expression would solve the problem cleanly but it doesn't.#2016-08-2409:12hans@nxqd It is not really unclean to do it the way I suggested. Datomic has a very broad data model and what you're describing is narrow in comparison. If you want to restrict Datomic's open data model to something closer to your application, implementing a layer between your application and Datomic is the right thing to do.#2016-08-2409:13hans@nxqd I personally think it is not good practice to let database queries leak into the application anyway, but there are different opinions regarding that.#2016-08-2409:15jimmytrue, I agree. I support that idea also#2016-08-2514:13karol.adamiechi, how do i find proper ami for datomic in my region? 
i am trying to use https://github.com/mrmcc3/tf_aws_datomic#2016-08-2514:14karol.adamiecis datomic peer and transactor bound to specific ami?#2016-08-2515:15robert-stuttaford@karol.adamiec i suppose you'll need to use the commands on http://docs.datomic.com/aws.html to generate some CFN and copy the AMI out of there#2016-08-2515:16robert-stuttafordi see that the changelog lists regions that are supported Transactor AMIs are now available in all AWS regions that support DynamoDB: us-east-1, us-west-1, us-west-2, eu-west-1, ap-northeast-1, and ap-southeast-1.#2016-08-2515:17karol.adamiecrather new to this so i will have to sum it up to make sure i get it 🙂#2016-08-2515:18karol.adamiecpeer is not bound to ami, should work with any linux amazon image#2016-08-2515:18karol.adamiectransactor needs specific ami image, to find it one should run aws commands from linked article?#2016-08-2515:50robert-stuttafordyes. or wait for @marshall or @jaret to point us all to a table of region + ami codes 🙂#2016-08-2517:29andrewhryou could make some searches on AWS console to discover the AMI id#2016-08-2517:30andrewhrthere is a new feature on terraform 0.7 to use this as a data-source... I made a proof of concept here and it worked quite well#2016-08-2517:32andrewhrI planned to port and make a PR to this recipe, but didn't manage to take the time yet#2016-08-2518:22colindresjHey all, if I've got a message entity with a topics attribute which is a ref attribute of cardinality many, and I've got a project entity, how can I write a pull pattern that allows me to get all messages where a given project is a topic?#2016-08-2518:26bkamphausif you want a set of things filtered by some criteria - i.e.
the selection for which some constraint holds, you’ll need to use query (pull is projection, essentially)#2016-08-2518:41colindresj@bkamphaus I was planning to write the pull inside of a query, not use the pull api directly, does that change anything?#2016-08-2518:44bkamphausYou'll just need to use the pull expression to get the attributes of interest and the where clause to select the entities (projects that are topics).#2016-08-2518:47colindresjThat makes sense, but what if I want to return a project that includes those very messages where it was referenced, kind of like
{:name “My project”
:messages [{:text “Msg text”}]}
?#2016-08-2518:48colindresjWould that be done as two separate queries that are then merged in clojure-land?#2016-08-2518:56bkamphauscaveat: not tested. Could also just put pull portions in the where clause if you’re ok with those limiting results (i.e. you don’t need to keep projects in the results if there's no project name, or messages in the results if the messages don’t have text).
[:find (pull ?p [:project/name]) (pull ?m [:message/text])
:in $ ?t
:where
[?p :project/topic ?t]
[?p :project/message ?m]]
inputs: database, topic#2016-08-2519:23jaret@robert-stuttaford @karol.adamiec We do not have a table of regions and AMI codes. Robert’s method of generating CFN and copying the AMI is the way to get the code.#2016-08-2520:06zaneIs this a bug?
dev> (d/q '[:find (pull ?eid [:db/id "*"]) .
:in $
:where
[?eid :db/ident :state/CA]]
(d/db (:core moyo.datomic/db-connections)))
{":db/id" 17592186045434, ":db/ident" :state/CA}
dev> (d/q '[:find (pull ?eid [:db/id *]) .
:in $
:where
[?eid :db/ident :state/CA]]
(d/db (:core moyo.datomic/db-connections)))
#:db{:id 17592186045434, :ident :state/CA}
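The string-keyed variant above can be folded back into keyword keys in user code; a minimal pure-Clojure sketch (untested against a live Datomic; keywordize-pull-keys is a hypothetical helper, and note the string keys carry a leading colon that must be stripped):

```clojure
;; Hypothetical helper: convert string keys like ":db/id" (as returned by
;; the "*" string wildcard) back into keywords. `subs` drops the leading
;; colon before `keyword` is applied; non-string keys pass through as-is.
(defn keywordize-pull-keys [m]
  (into {}
        (map (fn [[k v]]
               [(if (string? k) (keyword (subs k 1)) k) v]))
        m))

(keywordize-pull-keys {":db/id" 17592186045434, ":db/ident" :state/CA})
;; => {:db/id 17592186045434, :db/ident :state/CA}
```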
#2016-08-2520:07zaneSpecifically, whether or not you pass in "*" or '* changes the type of keys returned by a pull query.#2016-08-2520:07jgdaveyThat is by design, I believe. The ”*” version is a convenience for e.g. Java clients#2016-08-2520:08zaneI would imagine that the Java client would handle that kind of translation.#2016-08-2520:09jgdaveyIt can, but getting strings back immediately is sometimes desired#2016-08-2520:13jgdaveyNot sure where I got that impression, but my understanding is that ”*” and * have those differing semantics.#2016-08-2520:14zaneI see. Having that happen implicitly depending on the type of * passed in seems like it's an easy way to confuse consumers.#2016-08-2520:15jgdaveyI think it surprised me the first time I saw it.#2016-08-2603:42mrmcc3@karol.adamiec You could use the AWS CLI aws ec2 describe-images --owners 754685078599 otherwise last time I checked they’re all in the generated CFN template.#2016-08-2608:45karol.adamiec@jaret @mrmcc3 thanks a lot, will do.#2016-08-2614:49karol.adamiec@mrmcc3 found them all in the template. As an additional twist there are separate architectures, so you also need to match AMI => instance type#2016-08-2619:11jonasI finally got around to fixing the query evaluation on http://www.learndatalogtoday.org. It’s been running for years on Heroku without problems so I had forgotten how to use that platform 🙂#2016-08-2619:18jaretThanks @jonas!#2016-08-2621:44renanHi 😄, is there a way to use datomic for log systems & the log tables of SQL databases?#2016-08-2700:08mrmcc3@karol.adamiec yeah this process could be more automatic in the terraform module by having it look up the correct AMI https://www.terraform.io/docs/providers/aws/d/ami.html.
Would need to have a virtualization_type variable (HVM as default) to find the correct image#2016-08-2707:41magnarsI have a db and some transactions, and I'd like to get a db after applying the transactions without actually running it through the transactor - it's the famous "what if" datomic feature. I must be blind, but I just can't find the command in the docs. Any help please? 🙂#2016-08-2707:49magnarsI found it: (d/with db tx-data) Not the easiest word to google for. 🙂#2016-08-2715:58isaacDoes Datomic have a plan to support GIS?#2016-08-2722:11magnarsHow can datomic.api/as-of take both a transaction number and a transaction ID and consistently tell them apart?#2016-08-2805:52robert-stuttaford@magnars t->tx, tx->t provide the machinery for that. you'll notice that these fns don't take a db, which makes this conversion an algorithm, not a lookup#2016-08-2907:48danielstockton@magnars t starts at 1000 and tx starts from 13194139534312, 10 billion datoms is Datomic's recommended limit#2016-08-2914:00marshall@danielstockton @magnars In fact the 10B recommendation doesn’t stem from tx or t numbers, but rather from logistical size considerations for DBs that large (i.e. restore from backup of that size can take hours or days depending on infrastructure)#2016-08-2914:02danielstocktoni know, i just interpreted magnars' question as "when as-of takes a number, how does datomic know whether it represents a T or a tx" and i presumed the id ranges are far enough apart for them never to overlap#2016-08-2914:02marshallah.#2016-08-2914:31robert-stuttaford@marshall, and also the amount of ram that holding all the roots and idents and whatnot would use, right?#2016-08-2914:48marshallyep#2016-08-3000:50fentonout of the blue i'm unable to connect to datomic.
getting: ConnectException Connection refused java.net.PlainSocketImpl.socketConnect (PlainSocketImpl.java:-2)#2016-08-3000:50fentonit's running, the port is open/listening#2016-08-3000:51fentondatomic:#2016-08-3010:23magnarsThat was a great interview you did on the defn podcast btw, @robert-stuttaford 🙂#2016-08-3010:24robert-stuttafordwhy thank you -bends a knee, tips hat-#2016-08-3011:07karol.adamiectransactor autoscaling is terminating my instance and then starts another one. and then stops it, and so on forever. any ideas what could go wrong and how to get more data?#2016-08-3012:19karol.adamiecafter further digging here is a log from instance that stopped:
'user-data: inflating: datomic-pro-0.9.5390/README-CONSOLE.md
user-data: pid is 1879
user-data: ./startup.sh: line 26: kill: (1879) - No such process
/dev/fd/11: line 1: /sbin/plymouthd: No such file or directory
initctl: Event failed'#2016-08-3012:20karol.adamiecseems like issue with startup.sh, but unable to see whats inside there atm.#2016-08-3013:10robert-stuttaford@jaret hi 🙂 are you around, and able to help with a transactor downtime analysis?#2016-08-3013:11jaret@robert-stuttaford absolutely. Whats up? or should I say down?#2016-08-3013:12robert-stuttaford-grin- it's up again, but i'd really like to get a LOT better at analysing root cause#2016-08-3013:12robert-stuttafordPMing you#2016-08-3013:13jdkealywhen i get the message "Critical failure, cannot continue: Heartbeat failed" how can i find out what the failure was? it's happening every time i try to restart my transactor#2016-08-3013:14marshall@jdkealy Heartbeat failed indicates that the transactor can’t write to storage. What storage are you running on?#2016-08-3013:14jdkealydynamo#2016-08-3013:16jdkealyit happens before anything even connects to the app#2016-08-3013:16jdkealyi mean... it happens before the app connects to datomic#2016-08-3013:17marshallduring transactor startup, yeah.
Are you using our provided cloudformation scripts/etc?#2016-08-3013:18jdkealyyes#2016-08-3013:18marshalli.e. did you follow the process here http://docs.datomic.com/storage.html#provisioning-dynamo and here http://docs.datomic.com/aws.html#2016-08-3013:19jdkealyyes. it was working for over a month#2016-08-3013:20marshallwhat changed when you started seeing heartbeat failures?#2016-08-3013:20jdkealyi did change a lot of application code, but now that's not even running#2016-08-3013:21marshallis the transactor starting for some amount of time before you get the heartbeat failure?#2016-08-3013:22jdkealylike 5 seconds perhaps#2016-08-3013:22marshallif you look in cloudformation, do you see active heartbeats for any time or is it immediate failure?#2016-08-3013:23marshall@karol.adamiec Similar to my question to @jdkealy, did you follow this: http://docs.datomic.com/storage.html#provisioning-dynamo and this http://docs.datomic.com/aws.html#2016-08-3013:24robert-stuttafordguys, i've just had dynamo db issues as well#2016-08-3013:24robert-stuttafordin production#2016-08-3013:24robert-stuttafordwith a system that has been up for months#2016-08-3013:24robert-stuttafordbet you DDB is having a tantrum#2016-08-3013:24marshallaha! perhaps there is a plot afoot 😉#2016-08-3013:24karol.adamiec@marshall i used terraform module from https://github.com/mrmcc3/tf_aws_datomic#2016-08-3013:25jdkealyi don't see any events since the thing initially launched#2016-08-3013:25robert-stuttafordnothing on aws status#2016-08-3013:26marshallThe twittersphere seems to agree#2016-08-3013:27marshallas of about 15 min ago there are reports of a DDB outage. possibly more out in us-east1#2016-08-3013:27robert-stuttafordexpletives and swearwords#2016-08-3013:30marshallEC2 now shows “elevated launch error rates” on AWS status page#2016-08-3013:37potetmI should have checked in earlier. We were on the line with the AWS guys half an hour ago.
They said "we're updating the status page soon."#2016-08-3013:37robert-stuttafordlol#2016-08-3013:37marshallhah. nice#2016-08-3013:37robert-stuttafordwas that because you had downtime @potetm ?#2016-08-3013:37marshall@potem I saw your tweet - figured something like that was happening#2016-08-3013:37marshall@potetm ^#2016-08-3013:38jaretI need to start following everyone...#2016-08-3013:38potetmYeah we're out completely right now.#2016-08-3013:38robert-stuttafordso do i#2016-08-3013:38robert-stuttafordhow did you know there's an outage, @potetm ?#2016-08-3013:38robert-stuttafordby calling AWS?#2016-08-3013:38potetmWe were on chat with AWS. Yeah.#2016-08-3013:38marshallnothing like finding out the best source for outage news is twitter vs. official status pages.#2016-08-3013:39robert-stuttafordass. that's not scalable at all#2016-08-3013:39potetmYeah, that's why I tweeted about it 🙂 Just doing my duty!#2016-08-3013:39potetm#moreimpactfulthanvoting troll#2016-08-3013:39robert-stuttaford-follows you- think you could keep doing that? -grin-#2016-08-3013:39jaret@robert-stuttaford you mean you can’t keep an open chat with AWS 24/7 for status updates?#2016-08-3013:39jaret😉#2016-08-3013:44robert-stuttafordso what do we do now? game of chess? 🙂#2016-08-3013:45potetmHeroes of the Storm#2016-08-3013:48robert-stuttafordlooks like things are stable again#2016-08-3013:49robert-stuttafordoh, wait, no. my EC2 console was stale#2016-08-3013:49robert-stuttafordlast new transactor was 5 minutes ago#2016-08-3014:00pesterhazynot seeing any dynamo issues (eu-west-1). fingers crossed!#2016-08-3014:00robert-stuttafordAmazon DynamoDB (N. Virginia) Increased latencies
6:47 AM PDT We are currently investigating increased API latencies in the US-EAST-1 Region.#2016-08-3014:17ljosawe're also seeing transactor restarts (and flapping between our two transactors, as one tries to take over when the other kills itself). Is it expected behavior for the transactor java process to kill itself and restart when a heartbeat fails? Selected log lines: 2016-08-30 14:01:27.031 INFO default datomic.lifecycle - {:tid 18, :pid 7028, :event :transactor/heartbeat-failed, :cause :timeout}
2016-08-30 14:01:27.033 ERROR default datomic.process - {:tid 120, :pid 7028, :message "Critical failure, cannot continue: Heartbeat failed"}
2016-08-30 14:01:27.057 WARN default org.hornetq.core.server - HQ222113: On ManagementService stop, there are 2 unexpected registered MBeans: [core.acceptor.7b92fd66-6eb7-11e6-a9c9-eb6e98878cd4, core.acceptor.7b932477-6eb7-11e6-a9c9-eb6e98878cd4]
2016-08-30 14:01:27.076 INFO default org.hornetq.core.server - HQ221002: HornetQ Server version 2.3.17.Final (2.3.17, 123) [5d3fd9ae-ed45-11e5-a317-db72314f6b95] stopped
2016-08-30 14:02:03.345 WARN default datomic.slf4j - {:tid 10, :pid 12511, :message "Starting datomic: ..."}#2016-08-3014:18robert-stuttafordare you using your own instance configuration, rather than the AMI provided by Cognitect, @ljosa?#2016-08-3014:18ljosayes. so runit restarts the process when it quits.#2016-08-3014:19robert-stuttafordyes. it kills itself to allow auto-scaling to notice that it's dead and replace the instance entirely#2016-08-3014:21mitchelkuijpersWe are also down 😞#2016-08-3014:21robert-stuttafordwe've been stable for 30 mins now#2016-08-3014:21mitchelkuijpersWe were stable for 5 minutes and then it went dark again#2016-08-3014:22potetmI'm still having lots of errors, but I'm still able to run queries successfully.#2016-08-3014:22mitchelkuijpersData reads keep working for us too#2016-08-3014:22mitchelkuijpersbut that could be cached#2016-08-3014:22potetm#retriesFTW #ReleaseItPatterns#2016-08-3014:22robert-stuttafordalmost certainly cached#2016-08-3014:23ljosawe don't see any SystemErrors in CloudWatch, and the SuccessfulRequestLatencies are normal. In between the failed heartbeats and transactor restarts, things work normally.#2016-08-3014:25mitchelkuijpersAnd ours is back#2016-08-3014:25mitchelkuijpers😄#2016-08-3014:26ljosaour most recent failed heartbeats were at 14:01Z and 14:11Z. So good for 15 min now.#2016-08-3014:29jdkealyhave you guys seen this happen before?#2016-08-3014:29potetmAlright, on your word @ljosa we'll try and bring the backend services up.#2016-08-3014:30potetmI'll send you the bill if it doesn't work troll#2016-08-3014:30ljosa😛#2016-08-3014:30mitchelkuijpers@jdkealy There was also a dynamodb issue a while ago, but it does not happen often#2016-08-3014:31ljosaon the upside, this shows that the transactor HA is working. 🙂#2016-08-3014:31jdkealyim back up too... is there any way to protect against this?
should i be thinking of different backends other than dynamo ?#2016-08-3014:33ljosawe had to switch from couchbase to dynamodb, and ddb has been great so far (about 6 months).#2016-08-3014:33marshallRealistically, the kind of downtime DDB has is still an order of magnitude (or more) better than pretty much any option you could run on your own behalf#2016-08-3014:34robert-stuttafordthe last time DDB went down was 1 week after a MAJOR launch at Cognician. that was so much fun. September last year#2016-08-3014:34robert-stuttafordyup Marshall totally#2016-08-3014:35robert-stuttafordno issues for 45 mins now#2016-08-3014:36potetm@jdkealy There was an outage similar to this last year. https://aws.amazon.com/message/5467D2/#2016-08-3014:37potetmI agree with marshall about relative ddb uptime though.#2016-08-3014:38potetmSome guy must be watching #dynamodb on twitter. The second I said something about an outage, he likes it and tweets this: https://twitter.com/shirleman/status/770614099726114816#2016-08-3014:39ljosaCassandra is not a great option for Datomic if you're worried about downtime because Datomic cannot work across Cassandra data centers, and a Cassandra data center must be in a single AWS availability zone.#2016-08-3014:41potetmThe fallacy there is that there is zero cost to managing your own machines vs using a hosted service. Even assuming the claims are accurate.#2016-08-3014:43marshallCassandra is a fine option, but I’d be shocked if you could maintain a Cassandra ring with the same uptime and perf as DDB for anywhere near the cost#2016-08-3014:43marshallnot to mention you have to do all the work, which means hiring ops staff#2016-08-3014:44ljosaDDB has been great for us cost wise. 
Memcached is very effective at reducing the number of DDB requests.#2016-08-3014:44robert-stuttafordhas anyone used DDB streams to set up real-time multi-region replication?#2016-08-3014:44robert-stuttafordi wonder how quickly one can shift regions with Datomic and DDB#2016-08-3014:45ljosathere's no guarantee that the replica will be consistent, so you're just praying that Datomic will be okay after being started in the other region, right?#2016-08-3014:46robert-stuttafordwell, that's why i'm asking, i guess -- is it even a valid strategy#2016-08-3014:46robert-stuttafordso far we've done the old backup-datomic, restore-datomic thing to switch transactor+storage, just once, back when we moved off of our snowflake transactor+postgres server#2016-08-3014:47ljosawe're doing hourly datomic backups to S3. we populate our dev environment with those. I guess we could in theory restore them to a disaster recovery environment in another region. but realistically, if the entire us-east-1 goes down, we're out of business until it comes back.#2016-08-3014:48potetmSame.#2016-08-3014:48robert-stuttafordwe're also doing the backups that way#2016-08-3014:48mitchelkuijpersSame#2016-08-3014:49robert-stuttafordalthough i'm planning to switch from hourly to continuous#2016-08-3014:49robert-stuttafordis anything keeping any of you in US-EAST-1 in particular?#2016-08-3014:49potetmNo.
I want off.#2016-08-3014:49robert-stuttaford-ing ditto#2016-08-3014:50mitchelkuijpersWe are creating an Atlassian Connect addon and their servers are also in US-EAST-1. That is the only reason#2016-08-3014:50ljosawe could operate in other regions for maybe an hour or two before we'd have to shut down because we rely on processing in us-east-1 to turn off ad campaigns when they exceed their daily budgets, etc.#2016-08-3014:50robert-stuttafordi'm about 90% of the way to having a fresh env set up in oregon - new AMIs and Ubuntu LTS and whatnot#2016-08-3014:51robert-stuttafordswitching from upstart to systemd has been fun#2016-08-3014:51potetmBut the cost of relocating is very non-trivial.#2016-08-3014:52potetmAnd the gain is theoretical.#2016-08-3015:21jdkealyI've read a bit about laziness in datomic and i wanted to ask a quick Q about my use case... i have accounts, accounts have collections, collections have photos, photos have tags. Tags are often removed / edited and i'd like to mark them as active / inactive. The criteria for being active/inactive is having just ONE photo that is not hidden. Is there any way I can have datomic fetch that criteria without scanning every photo in every collection? i.e. is it possible to write a query that returns true / false and will stop scanning after it hits a truthy value ?#2016-08-3015:25marshall@jdkealy Depending on your schema, you might be able to use get-some: http://docs.datomic.com/query.html#get-some
I would need to do some testing and thinking 🙂#2016-08-3015:51severed-infinityhey guys I’ve this query
(defn mult-lookup-user [db phones]
(let [result (d/q '[:find ?e ?phone
:in $ [?phone ...]
:where [?e :user/phone ?phone]] db phones)]
(map second result)))
(mult-lookup-user (d/db connect) ["0862561423" "0877654321"])
where it will return only existing phones, and running it standalone works perfectly, but my issue is using it with a yada resource; the important section is below
{:post {:parameters {:form {:users [String]}
:body [String]}
:consumes #{"application/json" "application/x-www-form-urlencoded;q=0.9"}
:produces #{"application/json" "application/edn"}
:response (fn [ctx]
(let [users (or (get-in ctx [:parameters :body])
(get-in ctx [:parameters :form :users]))]
(when-let [valid-users (mult-lookup-user (d/db connect) users)]
(println "valid" valid-users)
(if (seq? valid-users)
(json/generate-string valid-users)))))}}
using the same input as when called standalone returns an empty list, but if I include just one value (valid of course) it returns that singular result. Can anyone help explain this issue?#2016-08-3017:09robert-stuttaford@severed-infinity i would trace the inputs going into mult-lookup-user in both cases and compare that to what happens when you call it directly#2016-08-3017:09robert-stuttafordthose are south african numbers, right? 🙂#2016-08-3017:11severed-infinity@robert-stuttaford I’ve removed the println calls for clarity but input shows the list of numbers coming in, but the results are as follows before and after
["0862561423","0877654321"]
valid ()
they are Irish mobile phone numbers#2016-08-3017:12robert-stuttafordagainst the same database value?#2016-08-3017:13robert-stuttafordare you printing the result coming from datalog directly?#2016-08-3017:14robert-stuttafordie, put (prn :in phones db :out result) before (map second result)#2016-08-3017:14severed-infinityI assume you mean like so
(defn mult-lookup-user [db phones]
(let [result (d/q '[:find ?e ?phone
:in $ [?phone ...]
:where [?e :user/phone ?phone]] db phones)]
(println results)
(map second result)))
#2016-08-3017:14robert-stuttafordwell, result not results 🙂#2016-08-3017:14robert-stuttafordand print the inputs#2016-08-3017:15severed-infinity:in ["0862561423" "0877654321"] :out #{[17592186045419 "0862561423"] [17592186045453 "0877654321"]}#2016-08-3017:15robert-stuttafordlooking good so far#2016-08-3017:15severed-infinitybut when called from the resource modal
:in ["\"0862561423\",\"0877654321\""] :out #{}#2016-08-3017:15severed-infinitymodel*#2016-08-3017:17severed-infinitythese are the two I am testing currently, as you can see with more than one I get an empty set but with one value I get the results
[0862561423,0877654321]
:in ["0862561423,0877654321"] :out #{}
valid ()
[0862561423]
:in ["0862561423"] :out #{[17592186045419 "0862561423"]}
valid (0862561423)
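For reference, the :in vectors above show the two numbers arriving as one comma-joined string rather than as separate strings, which is why the [?phone ...] binding finds nothing. A minimal pure-Clojure sketch (untested against the yada resource; parse-users is a hypothetical helper) of splitting such an input back apart:

```clojure
(require '[clojure.string :as str])

;; Hypothetical helper: split a single comma-joined string of phone
;; numbers into a vector, stripping any embedded quote characters.
(defn parse-users [users]
  (->> (str/split (first users) #",")
       (mapv #(str/replace % "\"" ""))))

(parse-users ["\"0862561423\",\"0877654321\""])
;; => ["0862561423" "0877654321"]
```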
#2016-08-3017:17fentonis there api to insert into datomic...letting datomic set the :db/id? Seems odd to force the user to create a temp id for all the inserts...#2016-08-3017:17robert-stuttafordlooks like you're passing in a string#2016-08-3017:17robert-stuttafordin your yada impl, before you pass the numbers to your query fn, first (clojure.edn/read-string) it#2016-08-3017:17severed-infinityoh so does appear to be#2016-08-3017:18robert-stuttaford@fenton, no. you have to make tempids every time#2016-08-3017:18robert-stuttafordwhich, imho, is a far better tradeoff than some hidden magic you can't control 🙂#2016-08-3017:19fenton@robert-stuttaford I'd have preferred it to create it auto if not specified. 😞#2016-08-3017:19fentondo people do more than just call 'create temp id'?#2016-08-3017:19robert-stuttafordif you absolutely must have it, write a function that does it for you. speaking as someone who's been there, and learned the hard way, you really just want to get used to providing them#2016-08-3017:20fenton@robert-stuttaford ok...its a minor inconvenience only...and can be abstracted like u suggest. thx! 🙂#2016-08-3017:20robert-stuttafordyep!#2016-08-3017:21robert-stuttafordthe danger with the abstraction is it makes it harder for you to use them in more complex ways later on when you realise the full power of the design#2016-08-3017:21robert-stuttafordyou end up either ditching the abstraction part of the time, or making more convoluted abstractions. either way, you lose, either consistency, or simplicity#2016-08-3017:22fentonwhy is this feature powerful?#2016-08-3017:22robert-stuttafordi know. 
i've got several tens of thousands of lines of code written over several years by many people which bears the evidence of this#2016-08-3017:22fentoni believe you 🙂#2016-08-3017:22severed-infinity@robert-stuttaford thank you for that, solution parsed-users (str/split (first users) #",") though I do not know why an array of strings is turned into an array with a single joined string value#2016-08-3017:22robert-stuttafordyou can express complex relationships in a single transaction#2016-08-3017:23robert-stuttaford@severed-infinity likely something yada or a middleware is doing#2016-08-3017:23severed-infinityYea I tried to ask in the yada chat before I got to the datomic stuff but got no response and continued on with what seemed like a working solution#2016-08-3017:25robert-stuttaford@fenton e.g. transact an order and all of the individual order items together#2016-08-3017:26robert-stuttafordwith all the relationships expressed in the same transaction#2016-08-3017:29robert-stuttaford@fenton a contrived example#2016-08-3017:30robert-stuttaford(let [order-id (d/tempid :db.part/user)]
[{:db/id order-id
:order/uuid (d/squuid)
:order/user [:user/email "#2016-08-3017:30robert-stuttafordnote order-id#2016-08-3017:31fenton@robert-stuttaford ok...yes that makes good sense for sure.#2016-08-3017:32robert-stuttafordthis makes mocking fake databases with d/with to test functions at the repl an absolute pleasure#2016-08-3017:32fentonoh?! i've not seen the d/with thing...#2016-08-3017:34fentonhmm... I'll have to keep that in mind...#2016-08-3017:35robert-stuttafordhighly recommend a scan-read of http://docs.datomic.com/clojure/#datomic.api/#2016-08-3017:35fentonalready there! 🙂#2016-08-3017:35fentonjust trying to understand the d/with part a bit better...how do u use that in the repl for testing?#2016-08-3017:36robert-stuttaford(def mock-db (->> some-made-up-tx-that-uses-real-data-and-adds-some-mock-data,-like-above (d/with some-actual-storage-backed-db-value) :db-after))
#2016-08-3017:37robert-stuttafordmock-db is a db you can pass into any api fn that takes a db (including d/with!) that you can query against as normal. you'll find all the stuff in storage, and all the stuff in your mock transaction, all together as though it was really transacted#2016-08-3017:37robert-stuttafordyou may have heard of time-travel databases, or speculative databases. this is that.#2016-08-3017:38robert-stuttafordit's all just in local memory#2016-08-3017:39fentonok, obviously this is something I'll need to know and being slow will take a bit of time to grok...I'll share it with our local clojure meetup for discussion...#2016-08-3017:39robert-stuttafordyou'll get it sooner than you think, once you poke at it for a bit#2016-08-3017:39robert-stuttafordif you've not found it already; https://github.com/clojure-cookbook/clojure-cookbook/tree/master/06_databases#2016-08-3017:39fentonkk @robert-stuttaford thanks for taking the time to hand hold...really appreciate! 🙂#2016-08-3017:40robert-stuttaford10-15 are about Datomic#2016-08-3017:40robert-stuttafordsome good (peer-reviewed and nicely edited) explanation in there!#2016-08-3017:40robert-stuttafordenjoy!#2016-08-3017:40fentonoh really cool! thx!!! 🙂#2016-08-3019:26fenton@robert-stuttaford I get it now. Pretty straight forward actually. Just to re-iterate. d/with allows you to run transactions with a 'seeming' copy of the database. Then you can inspect the results to see that they are what you want them to be. 
Thereby allowing you to test new DB functions on a live database without mucking up the live database.#2016-08-3019:59pheuterIf our current Datomic Pro license supports 5 processes, and we’re currently running 2 transactors and 3 peers (different environments), what happens when a 4th peer attempts to connect?#2016-08-3019:59pheuterWill it throw an error?#2016-08-3020:00pheuterOr perhaps it will bump another peer off?#2016-08-3020:06pesterhazydon't think it'll ever bump others off#2016-08-3020:08dm3license is # of peers per transactor IIRC#2016-08-3020:09dm3so you can have 5 peers. 6th peer will not be able to connect#2016-08-3020:10pheuteroh interesting, so not a total number of processes#2016-08-3020:10pheuterwe have a big deploy coming up, any documentation around this just to make sure?#2016-08-3020:12pheuterthe website seems to suggest otherwise#2016-08-3020:12pheuteras in total process count (transactor + peers)#2016-08-3020:13potetm@pesterhazy that hasn't been my experience. When you cross the limit, the existing peers don't get to keep their connections.#2016-08-3020:14potetm@pheuter my experience has been that it's peer count, txors don't go against the total#2016-08-3020:14potetmBut you can always fire it up in AWS to test before you get started.#2016-08-3020:14pheutermakes sense :+1:#2016-08-3020:15pesterhazyyeah in my experience the transactors don't count#2016-08-3020:15pheuter> the existing peers don't get to keep their connections.
that’s scary, no?#2016-08-3020:16pesterhazybut we have some distance to the limit of 5, so ymmv#2016-08-3020:16pesterhazywhat I've seen is that you can't connect if the limit is reached#2016-08-3020:18pheuterthe website says “processes (transactors + peers)"#2016-08-3020:18pheuterthat seems to suggest transactors count towards process count, no?#2016-08-3020:19potetmIt does suggest that, but if you turn on CW logging, there's a specific peer count metric. And we ran into problems when that metric was over the max.#2016-08-3020:20potetmI don't believe I've seen the "existing peers don't keep connections" documented anywhere, but that's what appeared to happen to me last week. So, def wanna confirm that with @marshall or @jaret#2016-08-3020:20ljosait's specifically the number of different IP addresses, it seems.#2016-08-3020:21ljosawe haven't hit the limit of 22, but before we bought the licenses we kept hitting the limit of 2.#2016-08-3020:22marshallThe limit is transactor + peers (i.e. a 5 process license would be 1 txor, 4 peers). HA Transactors don’t count.
Each license is contractually limited to a single production system, so if you have a 5 process license, you should be running no more than 1 transactor and 4 peers concurrently in production#2016-08-3020:23pheuteris “production” defined as a sql driver vs dev?#2016-08-3020:23pheuteror can we run a sql backed transactor on stage as well without incurring license costs?#2016-08-3020:24marshallyou can fully replicate your system on staging/dev/etc#2016-08-3020:24marshallproduction in this case is defined as your production application that faces users/runs the business, etc#2016-08-3020:25marshallyour testing/staging/dev instances can use whatever storage you like#2016-08-3020:25marshallas long as their purpose is for staging, etc, not for production use#2016-08-3020:26pheuterThanks for clarifying! Makes more sense now.#2016-08-3020:26marshallsure 🙂#2016-08-3020:26ljosahttps://clojurians.slack.com/archives/datomic/p1472588454000286 ^ by this I meant that the technically enforced limit is a little more permissive than the agreement, so you'll still have to count manually to stay honest. the tech just prevents massive overruns from happening when you forget.#2016-08-3021:29timgilbertSay, what's the quickest / simplest way to check whether an entity with a given value exists? I'm trying to come up with a system where some things use a String slug as their ID, like {:company/name "Boris LLC"} winds up as {:company/name "Boris LLC" :company/slug "boris-llc"}...#2016-08-3021:30timgilbertSo I'm looking at writing a loop where if there's already a [:company/slug "boris-llc"] I generate "boris-llc-1", "boris-llc-2" etc#2016-08-3021:31timgilbertRight now I'm planning on (d/entity db [:company/slug "boris-llc"]) but I thought I'd check to see if anyone has some advice on it first
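The slug loop described above can be sketched roughly as follows (untested; unique-slug is a hypothetical name, and it assumes d/entity returns nil when the [:company/slug ...] lookup ref does not resolve):

```clojure
;; Try "boris-llc", then "boris-llc-1", "boris-llc-2", ... until a slug
;; is found that no existing entity uses.
(defn unique-slug [db base]
  (->> (cons base (map #(str base "-" %) (iterate inc 1)))
       (remove #(d/entity db [:company/slug %]))
       first))
```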
I wish I could do it with entity.#2016-08-3022:09timgilbertCool, thanks#2016-08-3022:12adammiller@timgilbert believe you could do something like this to avoid a loop and get them all at once: (q '[:find ?e
:in $ ?slug-partial
:where
[?e :company/slug ?slug]
[(.startsWith ?slug ?slug-partial)]]#2016-08-3022:12adammillerthat way you could just pass “boris-llc” and get everything that begins with it in one call.#2016-08-3022:13adammillergranted that may not be the most efficient#2016-08-3102:25adammillerUpon further thought, that’s probably abysmal performance unless you had something to filter on before evaluating the matching of the slug (or the db is extremely small).#2016-08-3107:02caspercI am wondering what the best practice in Datomic is for doing autoincrementing sequences. Currently I have this implementation, using a db function to increment a noHistory field.
(def gen-id
  (d/function
    '{:lang "clojure"
      :params [db entity-type entity-id]
      :code (let [seq-ent (d/entity db [:sequence/name entity-type])
                  _ (cond
                      (empty? seq-ent)
                      (throw (Exception. (str "Sequence with name " entity-type " not found in database. Are the sequences initialized?")))
                      (nil? (:sequence/sequence seq-ent))
                      (throw (Exception. (str "Sequence with name " entity-type " is nil. Are the sequences initialized?"))))
                  new-value (inc (:sequence/sequence seq-ent))]
              [[:db/add (:db/id seq-ent) :sequence/sequence new-value]
               [:db/add entity-id entity-type new-value]])}))
#2016-08-3107:05caspercIt can then be used like this when creating the transaction data:
(let [tempid (d/tempid :db.part/user)]
  [[:fns/gen-id :task/task-id tempid]
   {:db/id tempid
    :task/status "PENDING"}])
where :fns/gen-id is the function and :task/task-id is the noHistory attribute that is incremented.#2016-08-3107:08caspercMy problem is that I want to add several incremented ids in a transaction, but adding several just means that they all see the same old value of the :task/task-id attribute and increment to the same new value, meaning that all the entities get the same id.#2016-08-3107:10caspercSo I want to be able to create one transaction, that adds several entities, each of which is assigned a (different) incrementing id. Any ideas?#2016-08-3107:11caspercJust pinging @bkamphaus for when he gets up 🙂#2016-08-3107:33alexmillerBen doesn’t work at Cognitect anymore, but perhaps @marshall can help#2016-08-3107:35caspercAh, thanks. I’ll make sure to not bug Ben then 🙂#2016-08-3110:45karol.adamiectrying to set up transactor on AWS proves to be problematic. AutoScaling group keeps cycling the transactor. I followed the AWS setup docs and still get the same results (had similiar issue with terraform before). I see Dynamodb table, roles, scaling group and launch configuration. But it keeps cycling transactors because it claims they fail health checks. When trying to connect using shell:
datomic % uri = "datomic:ddb://eu-west-1/temporary/mydb";
datomic % Peer.createDatabase(uri);
// Error: // Uncaught Exception: bsh.TargetError: Method Invocation Peer.createDatabase : at Line: 2 : in file: <unknown file> : Peer .createDatabase ( uri )
Target exception: java.lang.IllegalArgumentException: :db.error/read-transactor-location-failed Could not read transactor location from storage
#2016-08-3110:48pesterhazyperhaps dynamo aws permission issues?#2016-08-3110:49karol.adamiecwell roles are ensured#2016-08-3110:49karol.adamiecthey do exist#2016-08-3110:49karol.adamiecsee them in iam console#2016-08-3110:50pesterhazyI mean maybe your peer (not the transactor) can't connect to dynamodb#2016-08-3110:50karol.adamieci did open cidr 0.0.0.0 so i would expect it to work...#2016-08-3110:51karol.adamiecso basically i do have dynamo table transactor. and want to connect from datomic shell from my local machine#2016-08-3110:51karol.adamiecthing that worries me is constant cycling of transactors#2016-08-3110:52karol.adamiecseems like they do not initialize properly#2016-08-3110:52karol.adamiecso they do not drop information about transactor into dynamo so shell peer can not connect#2016-08-3110:52karol.adamiecon my ec2 transactor instance this is my log#2016-08-3110:54karol.adamiecuser-data: inflating: datomic-pro-0.9.5394/LICENSE-CONSOLE
user-data: inflating: datomic-pro-0.9.5394/datomic-pro-0.9.5394.jar
user-data: inflating: datomic-pro-0.9.5394/README-CONSOLE.md
user-data: pid is 1557
user-data: ./startup.sh: line 26: kill: (1557) - No such process
/dev/fd/11: line 1: /sbin/plymouthd: No such file or directory
initctl: Event failed
Stopping atd: [ OK ]
Shutting down sm-client: [ OK ]
Shutting down sendmail: [ OK ]
Stopping crond: [ OK ]
#2016-08-3110:54karol.adamiecit definitely is not looking good#2016-08-3110:54karol.adamiecit spins up and then a fatal error shuts instance down#2016-08-3110:56karol.adamiecand what is /sbin/plymouthd ?#2016-08-3110:57karol.adamiecand in aws console in my dynamo table there is no data whatsoever…. that is why peer can not connect i suppose.#2016-08-3111:08karol.adamieccould someone take a peek at a correct transactor's logs and paste what it does right after unpacking of datomic?#2016-08-3111:45karol.adamiecas a side note it is definitely not a license issue, i just mangled the license key on purpose and same results followed#2016-08-3112:01karol.adamiecsame behaviour on us-west-1 region 😞#2016-08-3113:16jaret@karol.adamiec are you sure you are not exceeding the allotted number of processes for your license? Do you have transactor logs so I can verify it is failing on a health check to heartbeat? Finally, I assume you reviewed this page: http://docs.datomic.com/aws.html#2016-08-3113:30jaret@karol.adamiec other thoughts: Are the memory settings valid? (e.g. heap fits within instance size)
Test that the license key is valid by launching the transactor locally (not just garbling the key). Are you using a supported AWS instance type? (writable file system?) Per the docs, supported instance types can be found in a Datomic-generated cloud formation template under the key “AWSInstanceType2Arch"#2016-08-3113:30karol.adamiecheartbeat is not on#2016-08-3113:31karol.adamiect1.micro#2016-08-3113:31karol.adamiecapparently supported#2016-08-3113:31karol.adamiecwill follow on rest of hints shortly 🙂, thx#2016-08-3113:32jaretIf the transactor fails to write heartbeat at launch, the docs indicate you should verify that the storage connection information Datomic has can be used to reach the storage.#2016-08-3113:33karol.adamieci will clean all infra, get a fresh datomic zip and follow the docs very carefully. might be some detail somewhere#2016-08-3113:34karol.adamiecsecond time the charm 🙂#2016-08-3113:34jaretI think t1 Micro is not officially supported. People have done it and I am not sure what they configured to get it to work#2016-08-3113:34jaretok good luck! Let us know how it goes.#2016-08-3113:34karol.adamiecsure thing#2016-08-3113:35karol.adamieci hoped to shortcut the setup with terraform template but time to go elbows deep would come anyway….#2016-08-3113:49marshall@karol.adamiec Yes, @jaret is correct - t1.micro is not officially supported. Default settings won’t work with that instance type. i’d recommend getting it running on a larger (supported) instance type (i.e. 
these: http://docs.datomic.com/capacity.html#dynamodb) then you can go back and tweak configuration to get it running on the instance type that suits your needs#2016-08-3113:53karol.adamiecthanks @marshall will do and report back#2016-08-3114:45marshallYou need to have 2 transactions#2016-08-3114:46marshallThe first defines the partition#2016-08-3114:46marshallthe second can use it#2016-08-3114:46marshallyou can’t both define and transact something to the same partition in a single txn#2016-08-3114:46marshallditto install/use an attribute#2016-08-3114:48tengOk, will try that. Thanks!#2016-08-3114:55tengCompilerException java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: :db.error/not-a-db-id Invalid db/id: #db/id[:db.part/mypartition -10100001], compiling:#2016-08-3115:18marshallChange your data tx to:
`#2016-08-3115:18marshallargh#2016-08-3115:19marshall@teng ^#2016-08-3115:21marshallAlternatively, change the ident in your partition definition to:
:db/ident :db.part/mypartition
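For anyone following along, marshall's two-transaction approach can be sketched from a peer like so (a sketch only; `conn` and the `:my/name` attribute are hypothetical placeholders):

```clojure
;; sketch of the two-transaction approach described above (hypothetical names).
;; requires datomic.api, e.g. (require '[datomic.api :as d])

;; tx 1: install the new partition
@(d/transact conn [{:db/id (d/tempid :db.part/db)
                    :db/ident :db.part/mypartition
                    :db.install/_partition :db.part/db}])

;; tx 2: only now can entities be created in that partition
@(d/transact conn [{:db/id (d/tempid :db.part/mypartition)
                    :my/name "example"}]) ;; :my/name is a placeholder attribute
```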
#2016-08-3115:23tengNow it works, thanks! What is most correct, to use “:mypartition” or to use “db.part/mypartition”?#2016-08-3115:23marshallProbably the latter.#2016-08-3115:24tengOk, thx.#2016-08-3115:56karol.adamieci had success with us and eu regions for datomic. previous issues most likely due to instance type. now using m3.medium.#2016-08-3115:56karol.adamiecwhat is the cheapest supported transactor instance type so one can spin dev/qa environments?#2016-08-3115:58marshall@karol.adamiec Some discussion of this here:
https://groups.google.com/forum/#!searchin/datomic/micro%7Csort:relevance/datomic/9q-HGWulKwo/U4GwZXI2DQAJ
https://stackoverflow.com/questions/26102584/decrease-datomic-memory-usage/26108628#26108628#2016-08-3116:02karol.adamiecthanks @marshall very helpful links 🙂#2016-08-3116:55karol.adamiecanyone having tried @mrmcc3 terraform module? i would be glad for syntax for the licence. The one from email text file pasted into tf file complains about escaping. Are the licence trailing \ needed?#2016-08-3117:04karol.adamiechmpphhhhh. Long story short: most likely it was license encoding issue. Would be nice to get some sensible output from transactor startup on AWS in such cases!!#2016-08-3117:42marshall@casperc regarding your incrementing function from this morning - if you’re calling the transaction function more than once (i.e. on multiple entities), the value you get from the current db (in your call to d/entity) will be the same for all those calls - the A in ACID means there is no ‘order’ within a transaction, it all happens at the same time.
If you need to have dependent ordering, you could either write a transaction function that handles the multi-entity case manually (i.e. you resolve which of the entities gets what values) or break it up into multiple transactions#2016-08-3118:03jgdaveyYou probably want something like this
(sort (d/q '[:find [?ident ...]
:where
[?e :db/ident ?ident]
[_ :db.install/attribute ?e]]
db))
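Circling back to marshall's reply to casperc: a transaction function that handles the multi-entity case in one call might look like the following untested sketch, reusing casperc's :sequence/* schema from above (`gen-ids` and its `entity-ids` parameter are made up for illustration):

```clojure
;; sketch: allocate n consecutive ids in one transaction-function call,
;; so every tempid in entity-ids gets a distinct value.
(def gen-ids
  (d/function
    '{:lang "clojure"
      :params [db entity-type entity-ids] ;; entity-ids: collection of tempids
      :code (let [seq-ent (d/entity db [:sequence/name entity-type])
                  current (:sequence/sequence seq-ent)]
              (when (nil? current)
                (throw (Exception. (str "Sequence " entity-type " not initialized"))))
              (cons
                ;; bump the counter by the number of ids handed out
                [:db/add (:db/id seq-ent) :sequence/sequence (+ current (count entity-ids))]
                ;; assign each entity the next value in the sequence
                (map-indexed (fn [i eid] [:db/add eid entity-type (+ current i 1)])
                             entity-ids)))}))
```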
#2016-08-3118:03marshall@teng that would list all datoms in the database. if you…. yep what @jgdavey said#2016-08-3118:03jgdavey(taken from https://github.com/Datomic/day-of-datomic/blob/master/tutorial/schema_queries.clj)#2016-08-3118:04jgdaveyConceptually, your query is like saying: “Give me all the datoms in the database, and return a set of the attribute of each”.#2016-08-3118:06tengGreat way to look at the problem @jgdavey! Thanks.#2016-08-3118:07jgdaveyYeah, an easily queryable schema comes in very handy.#2016-08-3118:08tengI’m still new to Datomic, but I really like the design and the simplicity of it!#2016-08-3118:10tengI’m a "SQL pro", but I need some more days to adjust the mindset to Datalog!#2016-08-3119:18tengThought the order only affected the performance?!#2016-08-3120:20jaret@teng I got the same results with both:#2016-08-3120:20jaret(d/q '[:find ?attr ?type ?card
:where
[_ :db.install/attribute ?a]
[?a :db/valueType ?t]
[?a :db/cardinality ?c]
[?a :db/ident ?attr]
[?t :db/ident ?type]
[?c :db/ident ?card]]
db)
=>
#{[:db.alter/attribute :db.type/ref :db.cardinality/many]
[:db.install/function :db.type/ref :db.cardinality/many]
[:db.install/valueType :db.type/ref :db.cardinality/many]
[:db.excise/attrs :db.type/ref :db.cardinality/many]
[:db/ident :db.type/keyword :db.cardinality/one]
[:db.excise/before :db.type/instant :db.cardinality/one]
[:db/index :db.type/boolean :db.cardinality/one]
[:db/fn :db.type/fn :db.cardinality/one]
[:db/fulltext :db.type/boolean :db.cardinality/one]
[:db/unique :db.type/ref :db.cardinality/one]
[:db.excise/beforeT :db.type/long :db.cardinality/one]
[:db.sys/partiallyIndexed :db.type/boolean :db.cardinality/one]
[:db/isComponent :db.type/boolean :db.cardinality/one]
[:db/lang :db.type/ref :db.cardinality/one]
[:db.sys/reId :db.type/ref :db.cardinality/one]
[:db.install/partition :db.type/ref :db.cardinality/many]
[:db/txInstant :db.type/instant :db.cardinality/one]
[:db/valueType :db.type/ref :db.cardinality/one]
[:db/cardinality :db.type/ref :db.cardinality/one]
[:db/excise :db.type/ref :db.cardinality/one]
[:db/doc :db.type/string :db.cardinality/one]
[:fressian/tag :db.type/keyword :db.cardinality/one]
[:db/noHistory :db.type/boolean :db.cardinality/one]
[:db.install/attribute :db.type/ref :db.cardinality/many]
[:db/code :db.type/string :db.cardinality/one]
[:country/name :db.type/string :db.cardinality/one]}
(d/q '[:find ?attr ?type ?card
:where
[?a :db/valueType ?t]
[_ :db.install/attribute ?a]
[?a :db/cardinality ?c]
[?a :db/ident ?attr]
[?t :db/ident ?type]
[?c :db/ident ?card]]
db)
=>
#{[:db.alter/attribute :db.type/ref :db.cardinality/many]
[:db.install/function :db.type/ref :db.cardinality/many]
[:db.install/valueType :db.type/ref :db.cardinality/many]
[:db.excise/attrs :db.type/ref :db.cardinality/many]
[:db/ident :db.type/keyword :db.cardinality/one]
[:db.excise/before :db.type/instant :db.cardinality/one]
[:db/index :db.type/boolean :db.cardinality/one]
[:db/fn :db.type/fn :db.cardinality/one]
[:db/fulltext :db.type/boolean :db.cardinality/one]
[:db/unique :db.type/ref :db.cardinality/one]
[:db.excise/beforeT :db.type/long :db.cardinality/one]
[:db.sys/partiallyIndexed :db.type/boolean :db.cardinality/one]
[:db/isComponent :db.type/boolean :db.cardinality/one]
[:db/lang :db.type/ref :db.cardinality/one]
[:db.sys/reId :db.type/ref :db.cardinality/one]
[:db.install/partition :db.type/ref :db.cardinality/many]
[:db/txInstant :db.type/instant :db.cardinality/one]
[:db/valueType :db.type/ref :db.cardinality/one]
[:db/cardinality :db.type/ref :db.cardinality/one]
[:db/excise :db.type/ref :db.cardinality/one]
[:db/doc :db.type/string :db.cardinality/one]
[:fressian/tag :db.type/keyword :db.cardinality/one]
[:db/noHistory :db.type/boolean :db.cardinality/one]
[:db.install/attribute :db.type/ref :db.cardinality/many]
[:db/code :db.type/string :db.cardinality/one]
[:country/name :db.type/string :db.cardinality/one]}
#2016-08-3120:27tengIf you @jaret have a look at http://www.learndatalogtoday.org/chapter/4, the tab ”2” at the bottom, and then click the link ”I give up”, then <Run Query> works just fine, but if you swap the first and the second :where clauses, then it doesn’t work. I have the same behavior in my database.
Can you verify that you see the same count when you run against an empty database?#2016-08-3120:32marshallThen re-transact whatever it is you have in your DB and try again#2016-08-3120:38tengI got the same result on an empty database.#2016-08-3120:38marshallI verified that the same thing happens on the mbrainz example database#2016-08-3120:38marshall(different counts)#2016-08-3120:38marshallI’ll look into the reason and get back to you#2016-08-3120:39tengOk, thanks!#2016-08-3121:22marshall@teng Were you running against a restore of a db of some kind? If so, I do have an explanation.
Datomic 0.9.5206 included a fix for some bootstrap datoms not being in the VAET index. If you run the queries against a database created by a version older than that the result you see is expected.#2016-08-3121:29marshallI will look into creating a new version of the mbrainz example database that doesn’t exhibit this behavior#2016-08-3122:17sdegutisIs there anything else needed to connect a Java program to a Datomic database, besides adding com.datomic/datomic-free:0.9.5372 and doing Connection conn = Peer.connect("datomic:); in my static main function?#2016-08-3122:18sdegutisThat line throws ExceptionInitializationError because it can't load "datomic.db__init" internally. The Clojure runtime has loaded, I've verified that.#2016-09-0109:05teng@marshall No, it was not a restore of a db. I created a database from scratch (using datomic-pro 0.9.5372), then I added the schema-definitions, and then the data. Everything fresh from scratch.#2016-09-0112:32robert-stuttaford@mrmcc3 you had a nifty aws cli command that listed the datomic amis -- i can't find it in the history here -- what was it, again, please? 🙂#2016-09-0112:38karol.adamiec@robert-stuttaford aws ec2 describe-images --owners 754685078599#2016-09-0112:38karol.adamiecmaybe that?#2016-09-0112:40robert-stuttafordthat's the one! thank you!#2016-09-0112:40robert-stuttafordkarol.adamiec are you the person who had this issue?#2016-09-0112:40robert-stuttaforduser-data: pid is 2173
user-data: ./startup.sh: line 26: kill: (2173) - No such process
/dev/fd/11: line 1: /sbin/plymouthd: No such file or directory
initctl: Event failed
#2016-09-0112:41karol.adamiecyes#2016-09-0112:41robert-stuttaforddid you overcome it?#2016-09-0112:41karol.adamiecyes 🙂#2016-09-0112:41robert-stuttafordmy current theory is my ami is too old#2016-09-0112:41karol.adamiec99% it was malformed license key#2016-09-0112:41robert-stuttafordoh. bugger. yes. that's quite possible#2016-09-0112:41robert-stuttafordseeing as i've just replaced it#2016-09-0112:41karol.adamiecwould expect a nice WRONG licence in there though 🙂#2016-09-0112:43robert-stuttafordthank you, karol#2016-09-0112:43karol.adamiecno prb#2016-09-0112:43karol.adamiecgiving back 😄#2016-09-0112:43karol.adamieclet us know if it was license when done and dusted#2016-09-0112:49robert-stuttafordhow did you know which among the AMIs returned by that command to use, @karol.adamiec ?#2016-09-0112:50karol.adamieci did not use that in the end. i picked amis from CloudFormation template json#2016-09-0112:50robert-stuttafordok, cool#2016-09-0112:51karol.adamiec"AWSRegionArch2AMI":
{"ap-northeast-1":{"64p":"ami-952c6a94", "64h":"ami-bf2f69be"},
"us-west-1":{"64p":"ami-3a9fa47f", "64h":"ami-789fa43d"},
"ap-southeast-1":{"64p":"ami-ecfaa8be", "64h":"ami-92faa8c0"},
"us-west-2":{"64p":"ami-1b13652b", "64h":"ami-f51264c5"},
"eu-central-1":{"64p":"ami-e0a4a9fd", "64h":"ami-e2a4a9ff"},
"us-east-1":{"64p":"ami-34ae4c5c", "64h":"ami-82a94bea"},
"eu-west-1":{"64p":"ami-6d67a11a", "64h":"ami-a566a0d2"},
"ap-southeast-2":{"64p":"ami-2d41da17", "64h":"ami-c942d9f3"},
"sa-east-1":{"64p":"ami-df238ec2", "64h":"ami-ad238eb0"}}},
#2016-09-0112:51karol.adamiecrecent run on newest datomic from yesterday#2016-09-0112:53robert-stuttafordthanks -- the one we're using is present there, so we're good. just gotta get this license key right#2016-09-0112:53robert-stuttafordfiguring out how to include it in a CFN Fn::Join is fun#2016-09-0112:55karol.adamiecyeah, i had to figure out how to include it in terraform. ;/#2016-09-0112:55karol.adamiecfun as hell#2016-09-0112:56robert-stuttafordhow's the terraform going?#2016-09-0112:56robert-stuttafordit's too late for us to switch now, but i'm planning to revise things again with that#2016-09-0112:58karol.adamiecmanaged to put up datomic in dev env with a press of a button so i would say nice 🙂#2016-09-0112:58karol.adamiectook some time though#2016-09-0112:59karol.adamiecthe module from @mrmcc3 is gold, just needs a bit of ironing… i have put that into future tasks of mine#2016-09-0112:59robert-stuttafordthe thing for me is making the infrastructure code approachable for others. right now our CFN codebase is hella scary#2016-09-0113:00karol.adamiecpeeking at datomic cftemplate is definitely scary 🙂#2016-09-0113:00robert-stuttafordterraform looks a lot simpler to reason about, even if it ends up doing the same stuff#2016-09-0113:01karol.adamiecyeah, for me it is even i do not want to look at aws console. it is easier to trace sec groups and put together complete flow in your mind if it is in terraform#2016-09-0114:13mrmcc3a recent change to the module means terraform looks up the correct AMI for you. You can verify it with terraform plan before you apply changes. pretty slick#2016-09-0114:49sdegutisWhat's needed to connect a Java program to an existing database, besides adding the "datomic-free" dependency and calling Peer.connect with the right URI?
Doing so throws an exception for me about an internal Datomic class not being found or something.#2016-09-0115:25jaret@sdegutis did you "import datomic.Peer”?#2016-09-0115:26jaretHere is our java Mbrainz example. Maybe it will serve as a comparison to catch anything you might have missed: https://github.com/Datomic/mbrainz-sample/blob/master/examples/java/datomic/samples/Mbrainz.java#2016-09-0115:31robert-stuttaford@jaret, i'm having the same issue that karol had earlier, where the transactor instance dies during initialisation right after extracting the datomic runtime. i've made very sure that my license key is represented correctly; i used ensure-* to generate cf via datomic and my license string is identical. how do i diagnose further, given that i can't ssh in and dig around?#2016-09-0115:31robert-stuttafordthis is in our UAT env. not a prod problem#2016-09-0115:33robert-stuttaforduser-data: inflating: datomic-pro-0.9.5394/datomic-pro-0.9.5394.jar
user-data: inflating: datomic-pro-0.9.5394/README-CONSOLE.md
user-data: pid is 2180
user-data: ./startup.sh: line 26: kill: (2180) - No such process
/dev/fd/11: line 1: /sbin/plymouthd: No such file or directory
initctl: Event failed
Stopping atd: [ OK ]
#2016-09-0115:36robert-stuttafordtested the key locally, it works fine#2016-09-0115:39robert-stuttafordmaybe there's a variant of s3:\/\/$DATOMIC_DEPLOY_BUCKET\/$DATOMIC_VERSION\/startup.sh that logs a lot more?#2016-09-0115:41jaret@robert-stuttaford I am asking around to see what we can do. Usually testing locally is my cure-all#2016-09-0115:42robert-stuttafordthank you#2016-09-0115:42robert-stuttafordi'm pretty sure i'm doing something dumb, but i'm at a loss as to what it could be#2016-09-0115:43karol.adamiechave you looked at generated user data script to verify the licence?#2016-09-0115:43robert-stuttafordyes#2016-09-0115:45sdegutis@jaret yes, did not help#2016-09-0115:45sdegutisI'm getting two simultaneous exceptions: ExceptionInInitializerError and org.fressian.handlers.IWriteHandlerLookup#2016-09-0115:46sdegutisIt's getting the second one when trying to load a class via URLClassLoader.findClass.#2016-09-0115:46marshall@sdegutis: what versions of Datomic and of Clojure?#2016-09-0115:46sdegutisClojure 8, Datomic-free 0.9.5372#2016-09-0115:48marshallOther deps possibly conflict in the project? Can inspect with mvn dependency:tree -Dverbose #2016-09-0115:48sdegutisThat could be it.#2016-09-0115:49robert-stuttaford@jaret could i show you the UserData my CFN produces? perhaps you spot something?#2016-09-0115:50jaretSure#2016-09-0115:50sdegutisFwiw it was the default Jooby project. 
I will try it in a fresh Java project now.#2016-09-0115:51marshall:+1: #2016-09-0115:54sdegutisGreat, it works in a fresh Maven project.#2016-09-0115:54sdegutisTurns out Jooby just has some sort of conflicting dependency.#2016-09-0115:54sdegutisDang 😞#2016-09-0115:55marshallYou can probably track it down with that mvn command and a bit of time#2016-09-0115:56marshallMight be something you can exclude or upgrade#2016-09-0116:02sdegutisThanks a ton marshall.#2016-09-0117:49marshall@teng Can you provide a repro case to me (possibly offline - email me)?
I have tried creating multiple databases with large imported data sets and I don’t see that behavior if I’m using 0.9.5372#2016-09-0119:08timgilbertPersonally, we eventually decided against using the AMIs, especially because of the lack of ssh access for diagnosing install problems. It's not too difficult to just fire up a plain ubuntu ec2 instance and install the transactor files on there, and you can still use the ensure-transactor bits to set up the AWS permissions#2016-09-0119:09timgilbertWhat I would really like is a docker container with the transactor in it, but that doesn't appear to exist#2016-09-0119:11marshall@timgilbert : our friends at Pointslope do maintain this: https://github.com/pointslope/docker-datomic-example#2016-09-0119:12marshallblog post about ^ : https://pointslope.com/blog/datomic-pro-starter-edition-in-15-minutes-with-docker/#2016-09-0119:13timgilbertHmm, will look into that, thanks @marshall.#2016-09-0119:22kennyWhy is there no .keySet function in Datomic Clojure API?#2016-09-0120:45kennyIs it guaranteed that all the tx ids in a transaction report are the same? e.g.
(= 1
   (count (reduce (fn [txs datom]
                    (conj txs (.tx datom)))
                  #{}
                  (:tx-data tx-report))))
#2016-09-0121:09jaret@kenny you should be able to use java.util.hashmap.keyset().#2016-09-0121:09jaretI am also not sure I understand your second question#2016-09-0121:12kennyIs the code I posted always true, given that tx-report is a transaction report returned from transact?#2016-09-0121:13kennyFor example, here is a tx-report:
{:status :ready,
 :val {:db-before datomic.db.Db
       :db-after datomic.db.Db
       :tx-data [#datom[17592186045421 67 17592186045418 13194139534313 true]
                 #datom[17592186045421 67 17592186045419 13194139534313 true]
                 #datom[17592186045421 67 17592186045420 13194139534313 true]],
       :tempids {-9223350046625933567 17592186045418,
                 -9223350046625933568 17592186045419,
                 -9223350046625933569 17592186045420,
                 -9223350046625933566 17592186045421}}}
Is it guaranteed that the tx value for each datom in :tx-data is the same?#2016-09-0121:14kennyIn this case it is ^#2016-09-0121:15kenny(map #(.tx %) (:tx-data tx-report))
=> (13194139534313 13194139534313 13194139534313)
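Assuming the answer is yes (jaret and marshall confirm it below), the transaction id can be read off any single datom; a sketch, relying on datoms supporting keyword lookup for :e :a :v :tx :added:

```clojure
;; sketch: every datom in :tx-data shares the same transaction, so the
;; report's tx id is just the :tx of the first datom.
(defn tx-id [tx-report]
  (:tx (first (:tx-data tx-report))))
```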
#2016-09-0121:15kennyIs that always true?#2016-09-0121:20jaretYes it is always true. tx-report is recorded as :tx-data in EAVT form and the tx values returned from transact should be the same. I will double check with @marshall but that is my understanding.#2016-09-0121:21kennyIt makes sense that they would be, just wanted to confirm#2016-09-0121:26kenny@jaret About the .keySet: I was referring to calling .keySet on an entity returning a set of strings instead of keywords. For example:
(.keySet (d/entity db-init 17592186045419))
=> #{":foo/bar"}
#2016-09-0121:28kennyPff.. I'm silly:
(keys (d/entity db-init 17592186045419))
=> (:foo/bar)
#2016-09-0122:44marshallYes @kenny , @jaret is correct. All the datoms in tx-data returned by transact are in the same transaction, and therefore all have the same tx value#2016-09-0122:47kenny@marshall @jaret Thanks#2016-09-0122:52codonnellI need to (very) occasionally do a sweep and update "large" amounts of data (commiting ~14k datoms). Should I break that update up into chunks of a certain size?#2016-09-0123:39marshall@codonnell: yes, I wouldn't suggest 14k datoms in a single transaction. I'd recommend keeping transactions to the hundreds of datoms. Low thousands would be ok, but not ideal #2016-09-0123:39marshallOf course it depends on the size of individual datoms as well as schema#2016-09-0123:47codonnell@marshall: alright, thanks. I'm guessing one at a time would also not be ideal? I can play with batch size to see what works best. #2016-09-0205:58teng@marshall I can prepare a repo and mail you when I’m finished.#2016-09-0213:27jaret@teng Learndatalogtoday has a DB that was generated prior to the fix I believe. That is why it still exhibits this error. Any DB created prior to Datomic 0.9.5206 will have this issue.#2016-09-0213:27teng@jaret Ok!#2016-09-0217:04jonas@teng I bumped the datomic version and redeployed learndatalogtoday. Could you check if ths solves the issue?#2016-09-0217:06teng@jonas Now it works. Thanks!#2016-09-0220:43sdegutisWhen using Datomic from Java without ever initializing Clojure myself, how can I avoid "No reader function for tag db/id" errors when reading Datomic schema from EDN via datomic.Util.read(ednString)?#2016-09-0220:44sdegutisI've tried clojure.var("clojure.core/require").invoke("datomic.api") based on my google searches, but it had no effect on resolving this error.#2016-09-0221:08rwtnorton<stab-in-the-dark>
Curious if calling (#'clojure.core/load-data-readers) after your require might help.
</stab-in-the-dark>#2016-09-0221:09sdegutisNever mind, it works with readAll which takes a stream. That works for now.#2016-09-0221:09sdegutisThanks.#2016-09-0223:26adammilleranyone put much thought into modeling translations inside datomic? For instance if I have simple entity like :content/name
:content/description
:content/type
but I want name and description to be able to be translated into multiple languages. Curious what others have done? I have done this in RDBMS and that would translate ok, just wondering if there is better way.#2016-09-0223:27adammillerI think the obvious may be to just do [property]-[isocode] so like have :content/name-es and :content/name-cn etc…?#2016-09-0223:32adammillerMy other thought is to create an entity like this: :translation/iso
:translation/attribute
:translation/text
Then in my content entity I’d have an attribute of :content/translations#2016-09-0223:34adammillerAn aside, I’d be very interested in book/blog series/etc… on datomic data modeling best practices!#2016-09-0306:44robert-stuttaford@adammiller, we use layered entities#2016-09-0306:46robert-stuttaford@adammiller, i wrote a gist for you: https://gist.github.com/robert-stuttaford/50acaa23986a52281f15982baa4922c9 - Language translations for Datomic entities with fallback to base entity#2016-09-0315:32adammillerThanks @robert-stuttaford , I’ll have a look!#2016-09-0411:39yonatanelHow can I distinguish between a :db.fn/cas exception thrown when the old value is wrong, and some other exception that might occur? Even better, is there a quiet way of doing this? I want to generate a UUID the first time that transaction is called and then keep the same value.#2016-09-0510:57gravHey! How can I find the type of stuff in my schema?
'[:find ?ident ?v
:in $
:where
[?e :db/ident ?ident]
[?e :db/valueType ?v]
[_ :db.install/attribute ?e]]
This gives me eg [[:foo/bar 23] [:foo/baz 22]], so I somehow need to resolve these numbers to a type, eg. 23 -> :db.type/string#2016-09-0511:23drankardHi there.
I'm using local cassandra storage, now I'm trying to set up Cloudwatch metrics in aws, following the doc: http://docs.datomic.com/aws.html#other-storages.
It works perfectly if running it all with direct internet access; I see metrics coming in.
But if I start up behind my corporate proxy I get "Connect to http://my-metrics.s3.amazonaws.com:443 timed out" when starting the transactor.
The proxy grants access to aws; if I curl with HTTPS_PROXY, everything is fine.
I'm starting the transactor with -Dhttps.proxyHost=my-proxy-host -Dhttps.proxyPort=8080 like any other Java application, but it seems like the transactor ignores the options.
HTTPS_PROXY or https_proxy env vars are ignored too.
Any ideas?#2016-09-0511:59gravTo answer my own question: I just needed to add [?v :db/ident ?t]#2016-09-0512:00gravSo
'[:find ?ident ?t
  :in $
  :where
  [?e :db/ident ?ident]
  [?e :db/valueType ?v]
  [?v :db/ident ?t]
  [_ :db.install/attribute ?e]]
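As an aside, the pull API can resolve the value-type ident without the extra join; a sketch, with :foo/bar standing in for any installed attribute:

```clojure
;; sketch: pull walks the :db/valueType ref straight to its ident
(d/pull db '[:db/ident {:db/valueType [:db/ident]}] :foo/bar)
;; e.g. {:db/ident :foo/bar, :db/valueType {:db/ident :db.type/string}}
```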
#2016-09-0513:28robert-stuttafordright, @grav. and you don't need the db.install clause at all#2016-09-0513:29robert-stuttafordassuming ?ident is a schema attr, it'll work just as well#2016-09-0513:29gravcool!#2016-09-0608:17tomtauin the REST tx-data, what is the syntax for adding something to existing entities? I've got this: [
{:db/id 17592186045432
:foo/refAttr #db/id[:db.part/user]}
{:db/id #db/id[:db.part/user] :bar/something true :bar/baz "blabla"}
] and get an error... if my tx-data is only [{:db/id #db/id[:db.part/user] :bar/something true :bar/baz "blabla"}], it works fine, so i suppose the problem is in {:db/id 17592186045432
:foo/refAttr #db/id[:db.part/user]}#2016-09-0608:36tomtauThis one works: [
{:db/id 17592186045432
:foo/refAttr #db/id[:db.part/user -1]}
{:db/id #db/id[:db.part/user -1] :bar/something true :bar/baz "blabla"}
] 🤔#2016-09-0612:51yonatanel@tomtau #db/id[:db.part/user] represents a new ID on every appearance. In order to correlate two entities you need to use the same #db/id[:db.part/user <negative-id>] to represent the same entity inside a transaction.#2016-09-0613:25jaret@drankard I just saw your google group question. What version of Datomic are you using?#2016-09-0613:29drankard@jaret the transactor jar is: datomic-transactor-pro-0.9.5344.jar#2016-09-0613:32drankardit’s like the (aws) ClientConfiguration does not get the proxy settings#2016-09-0613:33jaret@drankard 0.9.5385 added support for AWS SDK 1.11.6 I think there is a deps problem on your version. Could you try on 0.9.5385 or a more recent version?#2016-09-0613:34jaretI am going to update the ggroup discussion if that works for you#2016-09-0613:57andersAnyone from Cognitect around? The AWS Datomic AMIs (ami-e0a4a9fd and ami-e2a4a9ff) for eu-central-1 seems to be missing#2016-09-0614:07robert-stuttaford@marshall @jaret ^^#2016-09-0614:11marshallLookin into it#2016-09-0614:12andersthanks#2016-09-0614:25marshall@anders I’ve pushed the AMIs. They might take a few to show up, but they should be available soon#2016-09-0614:25andersgreat, thanks#2016-09-0614:25marshallno problem. Thanks for catching it#2016-09-0622:01kennyIs there an easy way to tell when a differential backup was last updated?#2016-09-0623:21kennyI just used this: https://stackoverflow.com/questions/31062365/get-last-modified-object-from-s3-cli#2016-09-0706:18robert-stuttafordthe most recently modified file in the roots folder will do, @kenny#2016-09-0707:29achesnaisHi all 🙂 quick question: when using lookup-ref in parametrised queries, what is the ref resolved against? 
The most recent available index, or the datasource provided in the query?#2016-09-0708:20val_waeselynck@achesnais I experimented a bit, AFAICT it's definitely from a data source#2016-09-0708:20val_waeselynckbut if there are several data sources not sure what's going on#2016-09-0708:21val_waeselynckit seems to me it gets resolved once per datasource used in a Datalog clause involving the entity#2016-09-0710:30achesnaisIn your ‘ambiguous data source’ example, isn’t the error stemming from the fact that only $ can be an implicit datasource, meaning that if you don’t specify it in the :in clause datomic won’t know where to find it?#2016-09-0710:31achesnaisexample 2 is super super interesting. I would have expected it not to work if [:a/id …] were to resolve to the entity within the data source, but it seems what’s binding is indeed the lookup-ref itself#2016-09-0710:33achesnaisor rather, it seems that the resolution is limited to the clause scope meaning this works because you’re not passing raw id directly#2016-09-0710:33achesnaisAnd thanks for taking time to experiment @val_waeselynck#2016-09-0710:36misha@robert-stuttaford greetings!
Do you store datetime/user-id on a "data-entity" or on a transaction data?
How does it work for you in datascript (level of convenience)?
Currently, I am storing those on data, but it feels a bit dirty.
Can you share any insight on whether migrating datetime/user-id to tx-data is worth it?#2016-09-0710:38mishaAnother question is: is there a recommended approach to describing content sharing?
e.g. I am the author of the blog post, and I grant permission to read/modify it to these 3 users#2016-09-0710:49robert-stuttafordhi @misha - can you describe what you mean by datetime/user-id ? do you mean linking the user who caused a transaction to occur, to that transaction - so, an audit trail?#2016-09-0710:58misha@robert-stuttaford classic created-by created-datetime updated-datetime#2016-09-0711:03mishain my case, the entity being created/edited is private/individual in a sense of "ownership", where only 2 use cases of transactions are possible:
1. the same user updates his own data.
2. some other user might update the data, if he is a collaborator (this is where my 2nd question originates).
but for the sake of the 1st use case only: if I need to mark an entity as belonging to user1, where do I put :entity/user-id: in entity data, or in tx data?#2016-09-0711:04misha(from the "needs to be synchronized with datascript a lot" point of view, if that matters)#2016-09-0711:47robert-stuttafordok, gotcha. basically, you only need to do something for 'created-by'. the rest is discoverable already#2016-09-0711:48robert-stuttafordwhen transacting something you want to track, you can link directly to the in-flight transaction's reified entity:
(d/transact conn [{<your entity here>} [:db/add #db/id[:db.part/tx] :transaction/responsible-user your-logged-in-user-id-here]])#2016-09-0711:49robert-stuttafordwhat's nice about this is you can use it for lots of stuff. we do this for all txes performed by a logged in user.#2016-09-0711:49robert-stuttafordhowever, you could also just make an attr that links the creating user directly to your entity#2016-09-0711:50robert-stuttaford(d/transact conn [{<your entity here> :your-entity/created-by your-logged-in-user-id-here}])#2016-09-0711:51robert-stuttaford@misha ^#2016-09-0711:55misha@robert-stuttaford so you chose to go with user-id in tx-data?
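An editor's sketch (not from the chat) of the reified-transaction pattern shown above. `:transaction/responsible-user` is the example ref attribute and is assumed to already be installed in the schema; `d/tempid` is the programmatic equivalent of the `#db/id[:db.part/tx]` reader literal:

```clojure
(require '[datomic.api :as d])

;; Annotate the transaction entity itself with the acting user.
;; Assumes :transaction/responsible-user (a ref attribute) is installed.
(defn transact-with-user [conn tx-data user-eid]
  @(d/transact conn
               (conj (vec tx-data)
                     [:db/add (d/tempid :db.part/tx)
                      :transaction/responsible-user user-eid])))
```

Every datom in the batch then shares a tx that points at the user, so an audit trail falls out of the transaction log for free.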
how often do you flush UI (datascript) data to server? Do you accumulate any period of txs at all (e.g. offline usage for minutes/hours)?#2016-09-0711:58robert-stuttafordyes, on tx#2016-09-0711:58robert-stuttaforddatascript syncs as early and as often as it can, but of course if you're offline for a long time, it'll only sync when it's back on#2016-09-0711:59robert-stuttafordNikita wrote a rad bi-di event source sync mechanism that can handle just about any amount of data, and batches events, etc#2016-09-0712:59marshall@kenny You can also use list-backups http://docs.datomic.com/backup.html#listing-backups#2016-09-0713:00marshallthat gives you approximate t values for the backups at a location#2016-09-0713:00marshallwhich you can convert to wall clock time via the log#2016-09-0713:00misha@robert-stuttaford but you keep individual tx-data intact, and send it to datomic?
or attaching tx-data with user-id/date happens in back-end only, and datascript just sends something like data + cookies to infer the tx-data for datomic upon sync payload arrival?#2016-09-0713:07mishaI'd like to know if constructing tx-data on datascript side is viable, since I need:
- client must be able to work offline for days
- still have correct timestamps on things in datomic (time of update, not time of sync)
E.g. update thing on Monday, send it to Datomic on Friday, and on Saturday, as a result, be able to see that thing was updated on Monday by going to Datomic only (no help from datascript at this point)#2016-09-0713:23robert-stuttaford@misha, we track client-side timestamps per-event (which is the source of truth for timing on those), and batch transact them#2016-09-0713:24mishathank you#2016-09-0816:21pesterhazyIs there a way to "expand" a transaction? Esp. if I use :db.fn/retractEntity, I'd love to see before running it what it would expand to.#2016-09-0816:21pesterhazyWhen I look at txs returned from @(d/transact...), I see the expanded txs, but then it may be too late.#2016-09-0816:22pesterhazyd/with allows me to see if the tx is fine, but doesn't return the datoms#2016-09-0816:44marshall@pesterhazy d/with returns a map that is analogous to the map from d/transact and should contain tx-data#2016-09-0816:45pesterhazy@marshall, oh I misremembered#2016-09-0816:45pesterhazythanks! perfect 🙂#2016-09-0905:01magnarsHow can I query for the x most recently created entities (in my case chat messages) without pulling in all of them, sorting them in the client, and then do a take ?#2016-09-0906:04robert-stuttafordit's a tough problem, because you can't traverse the transaction log in reverse order - you can go back some arbitrary period and walk forwards, and keep taking chunks like that until you've found PAGE_SIZE#2016-09-0906:06robert-stuttafordif you're modelling a new system with an empty db, then you can save an ever-decrementing value with all the things you want to walk backwards in this way, because then you can take advantage of d/datoms to walk that index in sorted (incrementing) order -- giving you a naturally reversed index to traverse#2016-09-0906:08robert-stuttafordhappy to discuss in more detail, @magnars, because it's an interesting problem that i'm wide open to solving better 🙂#2016-09-0906:09magnarsah, that's an interesting solution. 
I'll give that a shot. Thanks again, Robert. 🙂#2016-09-0906:09robert-stuttafordi'd love to hear how it goes, if you're amenable!#2016-09-0906:09magnarsI'll let you know. 🙂#2016-09-0909:54drankard@jaret I now see logs in S3, but no transactor metrics in CloudWatch.
There must be a missing step or something in the documentation.
The only thing I can think of is that ensure-transactor is calling out to AWS and setting something up?
I'm not using EC2; I have on-premise Cassandra storage and a transactor running with:
aws-s3-log-bucket-id=ice-dev-transactor
aws-cloudwatch-region=region=eu-west-1
aws-cloudwatch-dimension-value=ice-dev-transactor
The policies are set up as documented (PutObject and PutMetricData, PutMetricDataBatch) in Security Credentials->Policies
I see the logs in S3 and I see the S3 metrics in CloudWatch (PutMetricData and BucketSizeBytes) but no transactor metrics.#2016-09-0912:35jaret@drankard do you see a HeartbeatMsec metric in the transactor logs? or any of the metrics IN the transactor logs?#2016-09-0912:37jaretTo see your Transactor's logs:
Go to the S3 console
Select your log bucket (see the transactor properties file output by the bin/datomic ensure-transactor command, it contains the bucket name)
Drill down in the directory hierarchy to find .zip'd log files#2016-09-0912:40drankardnope, no metrics#2016-09-0912:45drankardI'm unable to run ensure-transactor but added the properties manually#2016-09-0912:45drankardensure-transactor cassandra-transactor.properties cassandra-transactor.properties cassandra-transactor.properties.ensured
java.lang.IllegalArgumentException: No method in multimethod 'ensure-transactor*' for dispatch value: :cass
at clojure.lang.MultiFn.getFn(MultiFn.java:156)
at clojure.lang.MultiFn.invoke(MultiFn.java:229)
...#2016-09-0913:39jaretIf you do not have metrics in your transactor logs then you wont see any metrics in Cloudwatch. It indicates to me that your transactor is not up. Additionally wherever (environment) your transactor is running it needs AWS access keys. Those access keys have to be for the user you have granted permissions for in AWS.#2016-09-0916:23magnars@robert-stuttaford Using an indexed attribute with a declining value to find the x last entities seems to be working fine. 🙂
(defn next-chat-event-id [db]
  (if-let [datom (first (d/datoms db :avet :chat-event/id))]
    (dec (nth datom 2))
    0))

(defn get-recent-events [db num]
  (->> (d/datoms db :avet :chat-event/id)
       (take num)
       (map (fn [[eid _ _ tx]]
              (get-event (d/entity db eid)
                         (:db/txInstant (d/entity db tx)))))))
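An editor's aside in plain Clojure (no Datomic needed) showing why the decrementing id above works: `:avet` walks values in ascending order, so ids that count downward put the newest entity at the front of a plain forward walk. Here a `sorted-map` stands in for the index:

```clojure
(defn add-event
  "Adds text under the next (smaller) id, mimicking next-chat-event-id."
  [index text]
  (let [next-id (dec (or (first (keys index)) 0))]
    (assoc index next-id text)))

(def index
  (-> (sorted-map)
      (add-event "first")
      (add-event "second")
      (add-event "third")))

;; a plain forward walk now yields newest first
(map val (take 2 index))
;; => ("third" "second")
```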
#2016-09-0916:25magnars(d/datoms) is powerful stuff!#2016-09-0916:33jdkealydoes anyone else find converting dates to instants to be a little tedious? Coming from using ORM's, I'm used to passing a string value and having things be converted automatically. The case I have is a datepicker component from the frontend, and saving its value in datomic. I'm tempted to just store it as a timestamp and have the type be a long... Since I will be living with this decision for years to come, is this a bad idea ?#2016-09-0916:37magnarsIs your datepicker on the frontend sending longs, tho? Wouldn't you have to convert the string either way?#2016-09-0916:39jdkealywell... converting is more convenient at the datepicker level than in some backend function that updates the database -- which may or may not have the date attribute passed to it#2016-09-0916:39jdkealyso.. yes in javascript, before the the update API call is made, convert it to a long, and parse from long when setting its value#2016-09-0916:41magnarsI think I would go for the proper data type in the db over a little convenience.#2016-09-0916:45jdkealycool thanks for the advice. I'll go for the proper data type then 🙂#2016-09-0916:54robert-stuttafordfantastic, @magnars 🙂 i'm working on a tx-by-tx rebuild of our prod database ( north of 40mil txes so far ), and i'm definitely going to include a decrementing index with the new data where necessary#2016-09-0916:57magnarsoh man! Are you worried about the 10 billion datom limit at all?#2016-09-0916:57robert-stuttafordi am, a little. we need to make another ~165 copies of our current database to reach that#2016-09-0916:58robert-stuttafordthis is why i want a codebase that can rebuild the db, given rules for each transaction shape#2016-09-0916:58robert-stuttafordso that we can shard things later on#2016-09-0916:58magnarsAye, makes sense. #2016-09-0916:59robert-stuttafordadmittedly, a lot of the data in our db right now is trash. 
we made so many beginner mistakes in the first couple years 🙈#2016-09-0916:59robert-stuttafordwhich is another reason for the rebuild#2016-09-0916:59magnarsHaha, I bet you're not alone in that. #2016-09-0916:59robert-stuttafordindeed 🙂#2016-09-0917:24jdkealyi've been storing cookies in my datomic DB.. i guess it's time to rethink that#2016-09-0917:24pesterhazya selective database rebuild would be incredibly useful, for almost every production user of datomic#2016-09-0918:38jdkealywhat's the preferred way to unparse the instants... i've been so confused about these different date classes, like joda vs clojure.instant... Is there a simple way to take the instant and return "yyyy-MM-dd"... clj-time seems to want a joda instance, so do you convert from instant to joda, then use clj-time on it ?#2016-09-0918:39robert-stuttafordclj-time.coerce/from-date clj-time.coerce/to-date gets you 80% there#2016-09-0918:46jdkealyperfect, exactly what i was looking for 🙂#2016-09-0920:52domkmI ran into an error with fulltext search. (fulltext $ :recall/search-text ?search) errors when ?search is RAGE3:10". The " in the search criteria is the culprit. Is this a bug?#2016-09-1103:13kingoftheknollIs conformity still the go to for schema migrations?#2016-09-1103:13kingoftheknollI haven't seen much written about the subject. #2016-09-1106:45robert-stuttafordpretty much, @kingoftheknoll . it's actually a pretty straightforward topic 🙂#2016-09-1114:55kingoftheknoll@robert-stuttaford: thanks, also how common is the pattern of injecting the db conn into a web request via interceptors/middleware?#2016-09-1116:02robert-stuttafordvery common 🙂 we inject the conn and the latest db, as most requests only need to read#2016-09-1116:10codonnellanother option is to close over your db conn in your handler function#2016-09-1116:27kingoftheknollthanks for the advise!#2016-09-1202:13lvhhm; I’m trying to see if I can keep my stuff in datomic or if I should use core.match instead. 
I have data describing the general structure of something in terms of data (mostly predicates); I have something that’s maybe a specific instance of one of those things, and want to efficiently query if it is and if so, which one, e.g.
kinds of things:
[{:id :xyzzy :path ["a" odd? "b" even?]}
 {:id :iddqd :path ["c" "d"]}]
example:
{:path ["a" 3 "b" 2]}
… and I get :xyzzy. It seems easier to use the data there to construct a core.match expr than to query through datalog directly.#2016-09-1202:19lvh(I appreciate that I can call arbitrary fns from datalog, but I guess core.match can probably do that path dispatchy bit faster :))#2016-09-1208:20lenHi all, is there any ref on setting up datomic with postgres, specifically the URI for the peers and how it hangs together ?#2016-09-1208:22danielstocktonhttp://docs.datomic.com/storage.html#sql-database#2016-09-1208:23hansI'm not sure about the ref, but it is simple and not really PG specific: Each peer needs to know the location of the storage and how to read it. When starting, it reads from the storage to determine the address of the current transactor.#2016-09-1208:24lenthanks hans that was what was confusing me#2016-09-1208:24lensince I only had the storage in the uri#2016-09-1208:26hansThis is how failover works - A normal peer reads the transactor location and fails if it cannot contact it. The transactor tries to contact the transactor that's mentioned in the storage and if it cannot establish contact, registers itself as the new transactor in the storage.#2016-09-1208:27lennice and simple !#2016-09-1211:01yonatanelGiven a parameter that is either an entity id (integer) or an identity uuid, I want to use the same query for both. I tried
(or [?e :entity/ex-id ?id]
    [?e :db/id ?id])
But I get :db.error/not-an-entity Unable to resolve entity: :db/id data: {:db/error :db.error/not-an-entity}
How can this be done?#2016-09-1212:51luposlip@yonatanel: I usually do something like this:#2016-09-1213:03yonatanel@luposlip thanks. I wanted to use pull inside the query but ended up using if as well. Separating the identity from the pull could be nice though. Thanks.#2016-09-1213:03luposlipYou could also do something like this:#2016-09-1213:05luposlip(or [?e :entity/some-ident ?obj-ident]
    [?e :entity/some-other-ident ?obj-ident]
    (and [?e :entity/some-attr-that-is-always-present-for-this-entity-type _]
         [(= ?e ?obj-ident)]))
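An editor's sketch (untested) of how that or pattern might sit inside a full query; the attribute names are luposlip's placeholders, and `?obj-ident` is passed in holding either an entity id or an ident value:

```clojure
;; Hypothetical wrapper for the or-pattern above.
(d/q '[:find ?e .
       :in $ ?obj-ident
       :where (or [?e :entity/some-ident ?obj-ident]
                  [?e :entity/some-other-ident ?obj-ident]
                  (and [?e :entity/some-attr-that-is-always-present-for-this-entity-type _]
                       [(= ?e ?obj-ident)]))]
     db obj-ident)
```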
#2016-09-1213:07luposlipWhich is best/fastest depends on your style and use case 🙂#2016-09-1213:09yonatanel@luposlip I remember trying that but couldn't make it work. Next time I'll remember to try this pattern again#2016-09-1213:10luposlipJust tried it, it works fine.#2016-09-1213:10luposlipIn my case I use it for finding users.#2016-09-1213:12luposlipBut for performance reasons I usually check if the user-ident is a number, and then use a different (faster) query if that is the case. Then only if the ident is a String, I use the above, but without the (and …) clause.#2016-09-1213:13robert-stuttaford@yonatanel (or [?id] [?e :entity/ex-id ?id])#2016-09-1213:13robert-stuttafordbut, let me introduce you to the wonder of lookup refs#2016-09-1213:14luposlipYes, I know lookup refs 🙂#2016-09-1213:14luposlip[:user/username "some-user"]#2016-09-1213:14robert-stuttaford
(d/q '[:find ?e :in $ ?e :where [?e]] db [:entity/ex-id a-uuid])
#2016-09-1213:14robert-stuttafordthose two work the same#2016-09-1213:18luposlip@robert-stuttaford for some reason I cannot get your example to work.#2016-09-1213:18yonatanelIf I remember correctly, if the ex-id doesn't belong to any entity, the lookup ref method explodes instead of returning nothing.#2016-09-1213:19luposlipAssertionError Assert failed: All clauses in 'or' must use same set of vars, had [#{?id} #{?id ?e}]
#2016-09-1213:20yonatanel@robert-stuttaford Also, you must use the same vars on both sides of or#2016-09-1213:20robert-stuttafordthat is indeed a pity#2016-09-1213:21robert-stuttaforddidn't realise it threw when LR doesn't resolve#2016-09-1213:21robert-stuttaforddoesn't do that for missing ids#2016-09-1213:23robert-stuttaford@luposlip: your query pattern works if the data type is the same for the first two clauses, but i think it may throw if not#2016-09-1213:23robert-stuttaford@luposlip: curious; why return [?e ...] and not ?e .?#2016-09-1213:23luposlipyou’re probably right @robert-stuttaford#2016-09-1213:24luposlipJust because in my case I simply want the user :db/id, not anything else.#2016-09-1213:24robert-stuttaford?e . will give you the first result#2016-09-1213:24robert-stuttaford[?e ...] will give you a vector of the results, no matter how many#2016-09-1213:24luposlipahh, didn’t realize that, cool! 🙂#2016-09-1213:24robert-stuttafordyou want ?e . 🙂#2016-09-1213:26robert-stuttaford@luposlip @yonatanel these are pretty handy https://gist.github.com/robert-stuttaford/39d43c011e498542bcf8#2016-09-1213:26luposlipwill look at that @robert-stuttaford, seems sane.#2016-09-1213:26robert-stuttaford(def uri "datomic:....")
(as-conn uri)
(as-db uri)
(as-db (as-conn uri))
etc#2016-09-1213:27robert-stuttafordoh, it's out of date, i've got more to add#2016-09-1213:28robert-stuttafordadded the log#2016-09-1213:33luposlip👍:skin-tone-4:#2016-09-1215:51luposlipLove the ?e . notation @robert-stuttaford. wonder why I haven’t seen this before 🙂#2016-09-1217:08yonatanelWhen an entity has multiple identity attributes, which one identifies the entity if I change some of them in a transaction to values already in other entities?#2016-09-1217:49robert-stuttaford@yonatanel what do you mean by identity attr -- indexed, or unique?#2016-09-1217:50robert-stuttafordcurious, and eager for input from the Cognitects amongst us, aside from d/query's timeout, what other protections do we have against queries that OOM the process cc @marshall @jaret?#2016-09-1217:51robert-stuttafordby OOM, i mean the result set can't fit in the available ram. we've had it a couple times now where some query by someone somewhere will consume all the ram and then peg the CPU with a GC moshpit#2016-09-1217:51robert-stuttafordsometimes it recovers, sometimes not#2016-09-1217:52robert-stuttafordi'd like to know, conceptually, what the api offers to protect us from this. with traditional dbs you can issue a COUNT to the box 'over there' and protect the client app, but in this case, the peer will download everything when issuing a count#2016-09-1217:52marshallIn Datomic there isn’t an “over there"#2016-09-1217:52robert-stuttafordi know 🙂 which is why i ask the question#2016-09-1217:52jaretThe results have to be fully realized in the JVM. So if the result set can’t fit in ram...#2016-09-1217:53robert-stuttafordthis is a growing pain, of course; things that were small a while ago have gotten bigger. 
we're already auditing our code and our queries, and finding ways to further partition and break down the work, but i just thought i'd ask what's in the peer library to help us#2016-09-1217:53robert-stuttafordaside from d/query, which certainly does help somewhat#2016-09-1217:55marshallYeah, I’d say you might consider breaking up queries if you’re trying to realize large result sets.#2016-09-1217:55marshallPossibly split into “get entites” then populate subsets of entities#2016-09-1217:56marshallalternatively, there’s the direct datoms approach#2016-09-1217:57robert-stuttafordi fully acknowledge that the answer may be 'nothing'#2016-09-1218:05robert-stuttafordwe're going to season everything we suspect with d/query and log the timeouts, and see where the fires start#2016-09-1218:06marshallare you having OOM as the only issue or are you also looking for queries with perf issues?#2016-09-1218:08robert-stuttafordwe're having OOM on a box that only has Datomic as a ram sink. the rest is http + websockets, and we have way too little active connections to cause an OOM#2016-09-1218:08robert-stuttafordwe suspect that some query or queries for certain users (due to large source datasets) are consuming all the ram#2016-09-1218:09robert-stuttafordso the diagnostic vector for us is query#2016-09-1218:10marshallif you have things set up with timeouts and the ability to record them that sounds like a good approach. particularly if you can log the query parameters along with the timeouts#2016-09-1218:11robert-stuttafordyes!#2016-09-1219:43domkmDoes sync-index include fulltext indexes?#2016-09-1219:44domkmAnd if not, is there any way to ensure that fulltext indexes are fully created for a particular t before querying?#2016-09-1220:11stuartsierraFulltext indexes are eventually-consistent by definition, so I expect there is no way to ensure they are up to date. 
http://docs.datomic.com/release-notices.html#0-9-4699#2016-09-1220:53domkmThanks @stuartsierra.#2016-09-1222:50yonatanel@robert-stuttaford By identity attr I mean :db.unique/identity. I have several per entity. Perhaps I should have used only one as identity and the rest as unique, but I wanted to use flexible lookup refs, and even if I didn't have this problem it's still interesting to know which of the identity attributes will determine the entity.#2016-09-1223:10yonatanelThe docs say that in case of conflict the transaction will throw IllegalStateException, but in my case it didn't, since my "conflict" is between identities in the same entity, e.g. asserting that [<email>, "new unique name"] while [<email>, "old name"] already exists will overwrite the existing name. Now I know.#2016-09-1223:27bkamphaus@yonatanel you asserted both of those datoms in one transaction (what that section of docs applies to)? It looks more like you’re running into the difference between unique identities http://docs.datomic.com/identity.html#unique-identities and unique values http://docs.datomic.com/identity.html#unique-values and how they behave when you assert different information.#2016-09-1301:05arthurI must be doing this:
(let [db (d/db *conn*)]
  (->> (take 50 (d/datoms db :avet :y.example/replaced))
       (map #(.e %)) ;; <----- wrongness here ?!?!
       (map #(d/entity db %))
       pprint))
wrong. what is the proper way to get an entity out of an item returned by the d/datoms call?#2016-09-1305:08robert-stuttaford@arthur (map :e) 🙂#2016-09-1319:48robert-stuttaford@jaret is it possible to control the location of the log directory that bin/datomic backup uses?#2016-09-1319:55jaret@robert-stuttaford specifically for backup? I am not sure, let me ask around. But generally you can modify the path in bin/logback.xml <fileNamePattern>${DATOMIC_LOG_DIR:-log}/%d{yyyy-MM-dd}.log</fileNamePattern>#2016-09-1320:03robert-stuttaforddoh, of course, thanks#2016-09-1320:04robert-stuttafordyeah, so control DATOMIC_LOG_DIR#2016-09-1320:08marshall@robert-stuttaford I just checked locally - doing an export of DATOMIC_LOG_DIR to my env followed by the backup command did work to direct logs to that location.#2016-09-1320:10robert-stuttafordsweet, thanks for that!#2016-09-1320:10marshall@jaret found it 🙂 i just checked it#2016-09-1320:10robert-stuttaforddon't know if you guys have a need for it, but highly recommend checking https://terraform.io out. been rebuilding our whole set up with it and it. is. fantastic#2016-09-1320:12bkamphausWe've also been doing some stuff with terraform at ThinkTopic. Pretty nice 🙂#2016-09-1404:09kvlt@robert-stuttaford I did some work with terraform. Found the infrastructure as code to be incredibly valuable#2016-09-1404:15robert-stuttafordwe were using CFN for both infrastructure and provisioning before. separating them into two codebases (terraform and ansible) has simplified both sides quite a bit. ~decomplection~ simple wins again!#2016-09-1406:05robert-stuttafordis there a non-hacky way to pipe logs produced by the transactor binary to syslog?#2016-09-1408:14karol.adamiecbig thumbs up for terraform as well. If Datomic would support that as an alternative way and officially support terraform modules that would be awesome <wink>#2016-09-1410:08val_waeselynckHaving a doubt about the behaviour of datomic.api/transact-async.
Does it return (its Future) immediately or is there a blocking part to the call?#2016-09-1410:10danielstocktonDocs say immediately http://docs.datomic.com/clojure/#datomic.api/transact-async, what leads you to have doubts?#2016-09-1410:12val_waeselynck@danielstockton this thread: https://groups.google.com/forum/#!topic/datomic/Qn55MLE_nfg#2016-09-1410:13val_waeselynckIIUC seems to me transact-async makes an attempt to contact the transactor before returning,#2016-09-1410:17danielstocktonSeems to me that might only apply to transact, I'll let someone else answer definitively. You could see what happens when you transact-async and the transactor is down#2016-09-1412:14kurt-yagramUsing pull, can I add transaction metadata of a nested entity?#2016-09-1412:20kurt-yagramFor example:
{:find [(pull ?e
              [:con/id
               :con/class
               {:con/items [:item/id {:item/data [...]}]}])
        ...]
 :in $ ?cl
 :where [[?e :con/class ?cl]]}
How to add transaction metadata to all item-entities (`:item/id`)?#2016-09-1413:22robert-stuttaford@kurt-yagram you can't reach tx via pull, because txes aren't linked to e via a. d/pull (like d/entity) is about traversing e <=> a relationships. you can reach via d/q clauses (via fourth binding position) or d/datoms (via :tx)#2016-09-1413:26robert-stuttafordin your code, you'd add ?tx after ?cl in your where clause, and then you can do a separate (pull) on that in your find.#2016-09-1413:28kurt-yagramyeah, but that doesn't fetch me the tx's of the items. A separate pull/query is necessary... Thx!#2016-09-1413:31robert-stuttafordthat's right#2016-09-1414:06ethangracerare datomic rules evaluated in order? as in, if I have 3 rules with the same name defined in one vector , then the first one is checked first?#2016-09-1414:07ethangracerthe behavior I’m seeing would suggest otherwise, but want to be sure I’m not missing something#2016-09-1414:10bkamphaus@ethangracer can you describe how you believe the order would matter in your case?#2016-09-1414:17robert-stuttafordafaik, in the order you've written them#2016-09-1414:17robert-stuttafordlooking forward to discovering if i'm mistaken, though 🙂#2016-09-1414:19ethangracer@bkamphaus the database is configured so that I have item templates and item instances (kind of like classes vs objects in OO). So if an item instance has the :item/tag field, I want to get that field instead of getting the :item/tag field on the item template#2016-09-1414:21ethangracerso I wrote the following 2 rules:
'[[(item-with-attr ?item ?attr ?value)
   [?item ?attr ?value]
   [?item :item/template _]]
  [(item-with-attr ?item ?attr ?value)
   [?item :item/template ?template]
   [?template ?attr ?value]]]
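An editor's sketch anticipating the fix suggested a few messages later: rather than relying on rule order, an explicit `not` clause makes the template fallback conditional on the instance lacking its own value:

```clojure
;; Only fall back to the template when the instance has no own value.
'[[(item-with-attr ?item ?attr ?value)
   [?item :item/template _]
   [?item ?attr ?value]]
  [(item-with-attr ?item ?attr ?value)
   [?item :item/template ?template]
   (not [?item ?attr _])
   [?template ?attr ?value]]]
```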
#2016-09-1414:22ethangracerthe idea being, if the entity is an instance, get me the value on that instance. if the instance doesn’t have that field, then go and get me the attribute on the template#2016-09-1414:23ethangracerthe behavior is currently nondeterministic — if I add additional schema, the rule sometimes returns the template attribute (`:item/tag`, in this case) before the instance attribute#2016-09-1414:36bkamphaus“if the instance doesn’t have that field” - rules are not meant to guarantee an order to implicitly handle what you mean to be a conditional evaluation on something not being there (unless there’s a specific not cause). I would use missing http://docs.datomic.com/query.html#missing or a not clause http://docs.datomic.com/query.html#not-clauses explicitly.#2016-09-1414:45ethangracer@bkamphaus awesome, that worked, thank you! if rules don’t guarantee an order to handle conditional evaluation, why can you use multiple rules in place of an or or or-join? Just want to make sure I understand the distinction#2016-09-1414:47bkamphausbecause you only have to fulfill one criteria in the or to return true. I guess there could be performance implications for which clause is matched, but any clause should be sufficient to fulfill the criteria.#2016-09-1414:48ethangracergot it, that makes sense#2016-09-1414:49ethangracerdoesn’t quite work like or in a programming language then, where it ignores all false values (in order) until finding the first true value#2016-09-1414:52bkamphausyeah, I don’t want to speak too generally about when order is/isn’t guaranteed but you should always approach the correctness aspect of any query as if order doesn’t matter. Clause ordering typically only matters e.g. within a where clause for performance reasons, filtering the set down to smaller intermediates rather than larger ones.#2016-09-1414:55bkamphausthe or short-circuit behavior you describe where you want the first truthy thing is definitely a different beast. 
I think with datalog (or even SQL, etc. for that matter) it’s easier to just think of set unions and intersections, etc.#2016-09-1414:58ethangracerahh, right. rules being literal substitutes for where clauses helps to clarify that distinction. I’m definitely still new to database programming in general, so I appreciate the tip about considering queries in terms of set operations rather than programming constructs.#2016-09-1415:02drankard@jaret sorry for the long response time.
I tried a lot of different things now, but I still see no metrics in CloudWatch
I have the AWS keys in place and the proxy settings are working for S3 logging; when I uncomment aws-s3-log-bucket-id, the transactor log is updated in S3
When I uncomment datomic.lifecycle in logback.xml I see heartbeat lines in the transactor log file, but not the HeartbeatMsec metrics format, just regular log statements
I set all AWS and HTTP log levels to DEBUG and I can see a few lines in the log:
com.amazonaws.metrics.AwsSdkMetrics - Admin mbean registered under com.amazonaws.management:type=AwsSdkMetrics#2016-09-1419:25jaret@drankard Datomic reports metrics to Cloudwatch under a separate namespace.
At the bottom of the left navigation bar in the CloudWatch dashboard there is a pull down menu (Custom Metrics...). There should be a Datomic namespace there.#2016-09-1419:26jaret@drankard could you supply your transactor logs if that doesn’t do the trick?#2016-09-1510:56magnars@robert-stuttaford Hi! The ever-decreasing index for latest entities is working great, but I stumbled over this from DataScript: https://github.com/tonsky/datascript/wiki/Tips-&-tricks#getting-top-10-entities-by-some-attribute - Quote Nikita:
> "Reverse returns a special view on an index that allows walking it in the reverse direction. This operation is allocation free and about as fast as direct index walking."
Would this be possible in Datomic as well? Or is this possible for DataScript because it keeps everything in memory? Any thoughts? 🙂#2016-09-1511:11robert-stuttafordi'll try it and let you know, @magnars 🙂#2016-09-1511:20robert-stuttafordvery quick testing seems to suggest that the same thing works in Datomic, @magnars#2016-09-1511:20robert-stuttafordwhich is -ing awesome#2016-09-1511:20magnarswhoa, that is impressive#2016-09-1511:21robert-stuttafordwith a warm cache, counting 50k datoms forwards and backwards takes the same amount of time#2016-09-1511:21robert-stuttafordtrying a bigger index#2016-09-1511:22magnarsis that using a special trick, or does it work out of the box with reverse?#2016-09-1511:23robert-stuttaford(time (->> (d/datoms (db/db) :aevt :chat.event/client-uuid)
seq
reverse
count))
;; with no reverse
"Elapsed time: 925.572616 msecs"
3809777
;; with reverse
"Elapsed time: 2667.018015 msecs"
3809777
#2016-09-1511:23robert-stuttafordnearly 3x slower for a larger index#2016-09-1511:24robert-stuttafordbut still quite quick#2016-09-1511:24magnarsyeah, not bad#2016-09-1511:25robert-stuttaford(time (->> (d/datoms (db/db :events) :aevt :event/uuid)
seq
reverse
count))
;; with no reverse
"Elapsed time: 1492.233743 msecs"
6576672
;; with reverse
"Elapsed time: 4446.358167 msecs"
6576672#2016-09-1511:25magnarsBut the implementation of reverse in clojure.core is not lazy, and looks like (reduce1 conj () coll)#2016-09-1511:26robert-stuttafordthat-shrug-emoji 🙂#2016-09-1511:27magnarsNikita says "This operation is allocation free", so he can't be talking about that implementation of reverse.#2016-09-1511:28robert-stuttaford@tonsky, any thoughts?#2016-09-1511:30robert-stuttaford@magnars https://github.com/tonsky/datascript/blob/0f942b9666c7c7bfbecb5a45368c61569718d1ff/src/datascript/btset.cljc#L838#2016-09-1511:31robert-stuttafordi guess we'll have to ask the fine Cognitects if a similar thing happens for Datomic indexes.. @marshall, @jaret? 🙂#2016-09-1511:33tonskyno idea what datomic does#2016-09-1511:34tonskyI think you can get a pretty good idea just by trying to iterate reasonably big database in reverse#2016-09-1511:35tonsky@robert-stuttaford you clearly don’t want to do seq before reverse#2016-09-1511:35tonskyif there were any kind of optimizations that might kill it#2016-09-1511:39tonskyuser=> (time (first (d/datoms (db/db) :aevt :event/uuid)))
"Elapsed time: 0.322985 msecs"
user=> (time (first (reverse (d/datoms (db/db) :aevt :event/uuid))))
"Elapsed time: 124.845047 msecs"
user=> (time (count (seq (d/datoms (db/db) :aevt :event/uuid))))
"Elapsed time: 188.143681 msecs"
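Since the index can only be walked forward cheaply, the workaround tonsky mentions next (store a negated copy of the sort key so the newest entries sort first) can be sketched as follows. This is a hypothetical sketch: `conn` is assumed to be an open connection, and `:event/neg-stamp` is an invented attribute that would need to be defined in the schema as an indexed `:db.type/long`.

```clojure
(require '[datomic.api :as d])

;; Hypothetical attribute :event/neg-stamp (:db.type/long, :db/index true)
;; holds the negated creation time, so newer events sort first in AVET.
@(d/transact conn
             [{:db/id           (d/tempid :db.part/user)
               :event/uuid      (d/squuid)
               :event/neg-stamp (- (System/currentTimeMillis))}])

;; Newest events first via a plain forward walk of the AVET index --
;; no reverse, no realizing the whole sequence.
(take 10 (d/datoms (d/db conn) :avet :event/neg-stamp))
```

The trade-off is a second, denormalized attribute to keep in sync, in exchange for newest-first access that stays lazy.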
#2016-09-1511:39tonskypretty sure there’s no optimization for it#2016-09-1511:39tonskywhich is a real bummer: all they had to do is implement a reverse iterator#2016-09-1511:40tonskyyou can write to the mailing list and they might consider that for the next version#2016-09-1511:41magnarsThat's very interesting. Thanks for the answer, @tonsky. I'll keep my ever-decreasing index for now then. 🙂#2016-09-1511:41tonskyI remember reading (that was ~2 years ago, but still) that they recommend storing negative values if you want quick access to the latest, not the first, datoms#2016-09-1511:42tonskywe even used to store negative timestamps (as longs) back in those days, for that reason#2016-09-1511:43tonskyyeah, that’s the solution#2016-09-1511:43tonskyI believe a B-tree index can handle insertion at the head just as well as it does insertions at the tail#2016-09-1511:52robert-stuttafordthanks @tonsky!#2016-09-1514:17stuartsierraThe structure of Datomic's indexes in storage makes iterating in reverse non-trivial. When you call reverse you're just using ordinary Clojure sequence reverse, which realizes the whole sequence in memory.#2016-09-1514:24magnarsThanks for the clarification, Stuart. :+1:#2016-09-1518:18iwilligis 100k datoms still considered the rough size limit for datomic transactions?#2016-09-1518:28jgdavey#2016-09-1518:28marshallper transaction?#2016-09-1518:28jgdavey^whoops. misread#2016-09-1518:29marshallI wouldn't suggest over 100k. That actually is a bit high, but of course, like everything, it depends#2016-09-1518:29marshallideally you’re in the thousands to tens of thousands max#2016-09-1518:29marshallper transaction#2016-09-1519:46jdkealyI'm trying to excise an entity, though in my tests the entity is available immediately to (d/touch ) on a new db instance... Is there some kind of indexing period when the entity will still be there?#2016-09-1519:52jdkealyI'm successfully excising in other parts of my app... 
My excised entity does have some ref'd entities that are components, I don't know if that makes a difference.#2016-09-1519:56jaretHow many datoms are you excising? At some point after the excision the indexing job runs. The resulting index will no longer contain the datoms excised. The indexing job however is proportional to the size of the entire database.#2016-09-1519:56jaretyou can utilize sync-index to force the execution of the indexing job#2016-09-1519:58jdkealyCool. I'll look into that... It's in my tests, just a single entity with 5 datoms, and a component with 2. I don't totally need to excise it, I'm just a little paranoid after reading about 10B datoms and I'm not particularly worried about preserving history on this individual entity type.#2016-09-1520:09bkamphaus@jdkealy everything in Datomic is optimized around immutability so in general it’s a terrible idea to excise. I’d leave it to the “legally compelled to remove this fact” case.#2016-09-1520:20jaret@jdkealy To echo what Ben said I highly recommend that you not go that route. @marshall and I have been discussing this “problem” with other clients and if you think you’d benefit maybe we can chat about solutions#2016-09-1520:26jdkealyYes, I'd like that... I'm fine with adding some attribute like archived:true/false or something like that... I'm really just concerned with the 10BN datoms I've been reading about. I think this kind of data I might want to use a different data store for.#2016-09-1520:27marshall@jdkealy We’re happy to set up a call. Can you email me at <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> ?
Also, you could consider noHistory on some attributes if you don’t want/need any history tracking on them#2016-09-1520:29jdkealyCool, will do marshall#2016-09-1520:57iwilligthanks @marshall#2016-09-1521:14eggsyntaxI think I'm missing something obvious here: why is it that
'[:find ?id :where [?id :foo/bar 33]]
works (finds several ids), but
'[:find ?id :where [?id _ 33]]
returns an empty set?
I could imagine that being disallowed by datomic, but if so, I'd expect an error rather than just getting no results.#2016-09-1605:43robert-stuttafordjust curious; what'd happen if i attempt to restore a backup that's also currently being backed-up-to?#2016-09-1612:20bkamphaus@robert-stuttaford I believe it should work because you can only restore a backup that is in the set of ts which have been fully backed up in that directory, and segments are immutable so nothing will have been changed by backup process (2).#2016-09-1613:00robert-stuttafordrad, thanks#2016-09-1614:24eggsyntaxBump on my question from yesterday, in the hope that there’s someone around now who can help me understand what’s going on:
https://clojurians.slack.com/archives/datomic/p1473974074000693#2016-09-1614:59bkamphaus@eggsyntax I’d write up a gist with a repro case and point it at @marshall or @jaret — as it could be a bug that there’s no error. My estimated reasoning would be that a query like that might work with a reference and since you’re sticking in a Long it might treat it as there not being anything in the reverse index (v/raet).#2016-09-1615:34eggsyntax@bkamphaus OK, thanks.#2016-09-1617:34eggsyntax@marshall or @jaret -- as @bkamphaus suggested above, here's a gist with a minimal complete example of this unexpected behavior in Datomic, where using an underscore as the attribute in a query doesn't return matching datoms. Any insight into what's going on here would be very much appreciated. Thanks!
https://gist.github.com/eggsyntax/09aa74cd86f26cffc44c79019216f182#2016-09-1617:34eggsyntax(Or really @whoever, if anyone else has some clarity about this)#2016-09-1617:34jaret@eggsyntax taking a look now#2016-09-1618:19eggsyntax[EDIT: correct outdated comment]#2016-09-1618:36stuartsierra@eggsyntax: I think it's a consequence of indexes. d/q will not do a full scan of the database, which is essentially what you're asking it to do with [?e _ "whatever"] since ?e is unbound.#2016-09-1618:37stuartsierraWith the attribute, the query can use the AEVT index.#2016-09-1618:37stuartsierraBut there is no index that starts with V for primitive (non-ref) values.#2016-09-1618:38stuartsierraIn most cases where your query would imply a full scan, d/q signals an error, e.g. (d/q '[:find ?e :where [?e _ _]] (d/db conn))#2016-09-1618:38marshall@stuartsierra is correct. if you don’t supply e or a to query, but do supply v, it must be a ref type you’re looking for#2016-09-1618:39marshallbut since values of reftypes are just numbers (eids) the query engine doesn’t know if you asked for 42 the entity or 42 the value#2016-09-1618:39eggsyntax@stuartsierra @marshall thanks, y’all, much appreciated. That makes sense. Obviously it’s a terrible idea in most cases, but of idle curiosity, is there a way to actually request a full scan?#2016-09-1618:40marshallnope. any query you try to do it with will throw an exception#2016-09-1618:40marshallyou can scan manually with the datoms API#2016-09-1618:40eggsyntaxAh, ok.#2016-09-1618:40stuartsierraThe datoms, index-range, and log APIs are all different ways to "scan" the database. 
But you can't do it with query.#2016-09-1618:40eggsyntaxOK, thanks!#2016-09-1618:40stuartsierrayou're welcome#2016-09-1618:47eggsyntaxI do think that throwing an error would be a useful behavior there, btw, to help out other folks who encounter this.#2016-09-1619:21eggsyntaxBut ultimately, for me, it's a good reminder that I still have leftover SQL intuitions to eradicate 😊#2016-09-1620:20magnarsA good way to find all datoms in a given transaction? I've been looking at using d/log, but that requires the conn. This should be doable with just the db, right?#2016-09-1620:38stuartsierra@magnars d/log is best, but I think you can also do something like (d/q '[:find ?e ?a ?v ?tx ?add :in $ ?tx :where [?e ?a ?v ?tx ?add]] (d/history db) transaction-eid)#2016-09-1804:06magnarsAny thoughts on the namespaced keywords in Datomic vs the ones we need for clojure.spec? I don't want to put fully qualified namespaces into my data model, since data lives longer than code.#2016-09-1808:06robert-stuttafordcan you expand on what you mean by fqns, @magnars?#2016-09-1808:07robert-stuttafordyou mean e.g. :org.your-company.your-app.your-module/thing vs simply :module/thing?#2016-09-1808:12magnarsyes. As far as I understand, spec encourages the former, since you can then use the :: - but I guess that it is entirely optional since the spec registry is global.#2016-09-1808:12robert-stuttafordyeah. so we have to deal with this at some point too, when we start using spec#2016-09-1808:13robert-stuttaford:: syntax is really nice, but, of course, you can also just not use it for those things you're specing that will live in Datomic#2016-09-1808:13robert-stuttafordthat is, i'd just (s/def :module/thing some-spec)#2016-09-1808:14magnarsyeah, that makes sense. 
I was initially under the impression that the fqns were significant (that clojure would look up the spec def in that ns), so that contributed to my confusion.#2016-09-1808:16robert-stuttafordnaw, it's in an atom i believe#2016-09-1808:16magnarsyeah, it's global, so that's no issue#2016-09-1808:16robert-stuttafordhttps://github.com/clojure/clojure/blob/master/src/clj/clojure/spec.clj#L45#2016-09-1907:19magnarsI heard some rumors about people having issues with using datomic entity ids in JavaScript, since JS' highest safe integer is 2^53-1, while Datomic entity IDs are 64-bit longs. But I see that all my entity IDs are around 175e+11, meaning I would run out of my allotted 10B datoms way before I encounter that issue. What am I missing?#2016-09-1907:30danielstockton@magnars https://groups.google.com/forum/#!topic/datomic/s0u3vjb0GG4#2016-09-1907:31danielstocktonI think you'll only have problems if you have a large number of partitions#2016-09-1907:42magnarsThanks, that makes sense. :+1:#2016-09-1907:49yonatanelHow does Datomic implement consistency on top of DynamoDB's eventual consistency?#2016-09-1908:11danielstocktonAs far as I understand, it uses conditional put for the root refs (index-root-id etc..) 
and then subsequent segments are either there or not there yet#2016-09-1908:11danielstocktonCan't remember the exact behaviour if a query needs a segment that isn't there yet, I guess it throws an error and you just wait until it's available#2016-09-1908:11danielstocktonBut might be completely wrong#2016-09-1911:21yonatanelIf I change the system clock will it mess up squuid order?#2016-09-1911:24danielstocktonpotentially i think, it depends when the last one was created, how much you change it, and in what direction#2016-09-1911:25danielstocktoni think you're ok as long as you don't change it backwards past the time that the last one was created#2016-09-1911:26danielstocktonthat might cause other problems with transaction instants anyway#2016-09-1911:26yonatanelyeah...#2016-09-1913:46stuartsierra@yonatanel If you change the system clock to a time earlier than the last recorded transaction, Datomic will reject transactions until the clock "catches up" again#2016-09-1913:48marshall@magnars One note about EIDs in your external client - It’s generally not recommended to use the datomic-generated entity IDs as external identifiers. Whenever possible you should model identity yourself with a domain or generated identifier.#2016-09-1913:50marshall@yonatanel @danielstockton is mostly correct about the way consistency is implemented. The conditional put of the root ONLY occurs once all the segments below that root are written. This means that you can never have an inconsistent database. If the root is present, all nodes below it are as well.#2016-09-1913:53magnars@marshall: thanks for the heads up. I'm using them for only short-lived transient IDs in the client tho. They're never stored anywhere. #2016-09-1913:53marshall:+1:#2016-09-1913:57tengWe have a Datomic database where the "core entities” are categorised by countries and a lot of other entities around these (indirectly also categorised by country). 
We also have users corresponding to one of these roles: ADMIN, COUNTRY_ADMIN and REVIEWER. Unless you are an ADMIN, you can’t read information from other countries than the ones you belong to. ADMIN always has “write” rights (to add facts) to all entities. All other roles can only write to the entities belonging to countries you are a member of. A REVIEWER can read ADMIN related information, but is not allowed to write ADMIN information (the same for COUNTRY_ADMIN -> ADMIN). We keep track of the current logged in user, and we store the countries he belongs to and which role he has.
How should we best implement this?
1. By adding extra parameters to every function that does a Datomic query + extra criteria in the query. Maybe by using Datomic rules.
2. Have a central function that returns a filtered database, based on the current users countries and user role level, that we use to query the database.
3. Any other ideas to solve these cross-cutting concerns?#2016-09-1914:19stuartsierraWithout getting into your specific use case, a filtered database is a convenient general solution, but comes with a performance cost (examining every Datom your query touches). Adding extra selection criteria to every query might be more efficient, but comes at the cost of added complexity on every query. Another possible approach is to do all your normal queries without considering authorization, then trim the results based on what the current user is allowed to see.#2016-09-1914:19robert-stuttaford@marshall and @jaret, i've encountered this issue in my new infrastructure codebase https://groups.google.com/forum/#!msg/datomic/IXsSUqMkgGo/hMVLcUeqmNEJ#2016-09-1914:20robert-stuttafordroot cause is transactor isn't receiving a public ip (doesn't need one), and so setting alt-host is failing. any recommendations for options, or should i just set an ip?#2016-09-1914:25stuartsierraI've encountered that issue as well. Adding a public IP is the easiest thing to do. Some users have reported success removing or editing the alt-host line in the CloudFormation template; I don't know if that works.#2016-09-1914:26robert-stuttafordpublic ip it is!#2016-09-1914:33robert-stuttaford@stuartsierra: a recent cognicast mentioned your predilection for 'decanting Datomic databases'. is this something you've done a lot?#2016-09-1914:35stuartsierrayes#2016-09-1914:37robert-stuttafordi'm busy preparing to do this for a pretty large database. any .. uh, tips? 🙂#2016-09-1914:38robert-stuttafordby decant, i mean, rebuild in transaction order. and per tx, either discarding, or altering in flight#2016-09-1914:38robert-stuttafordi have to use a streaming approach because it's tens of millions of transactions. i was wondering if there are any gotchas you may be able to warn me about#2016-09-1914:58stuartsierra@robert-stuttaford: The main challenge is translating entity IDs from the "old" DB to the "new." 
If every entity in your database has a :db.unique/identity attribute, then just use those.#2016-09-1914:59stuartsierraWithout that, you have to maintain a mapping from old EIDs to new EIDs. I used a key/value store like LevelDB.#2016-09-1915:01stuartsierraIf you're relying on that EID mapping in an external store, then you cannot stream the transactions, because you have to get the resolved tempids from the previous transaction before you can translate the subsequent transaction.#2016-09-1915:04stuartsierraAlso make sure the process you're building is resumable: During a long import job, the Transactor will pause occasionally, causing transaction errors. Your Peer process has to be able to continue where it left off without skipping any transactions. Ideally, you want it to persist its state (i.e., last transaction copied) on disk.#2016-09-1915:06robert-stuttafordthank you, @stuartsierra -- i'm definitely planning a pause capable approach#2016-09-1915:07robert-stuttafordhappily, i think i will be able to avoid the external ID mapping, because i can just add unique ids to everything in the source database first#2016-09-1915:07robert-stuttafordand use the source database as the mapping, because i don't care about its cleanliness in the long run#2016-09-1915:08robert-stuttafordi may have a question or two, but what you've shared so far is great. thank you#2016-09-1915:19stuartsierraYou're welcome.#2016-09-1915:23robert-stuttafordwhat's the largest database you've decanted, @stuartsierra?#2016-09-1915:25ckarlsenI've deleted all datomic db's, ran gc-deleted-dbs, ran full/freeze VACUUM in postgres, restarted all processes and somehow the "datomic_kvs" table use ~2.5GB of disk space?#2016-09-1915:30stuartsierra@robert-stuttaford about 9 billion datoms.#2016-09-1915:31robert-stuttafordwow. 
that's awesome!#2016-09-1915:31stuartsierraThat took days.#2016-09-1915:31robert-stuttafordyeah i was just about to ask#2016-09-1915:31robert-stuttafordi haven't counted datoms yet, but we're looking at 50mil+ txes#2016-09-1915:32robert-stuttafordi'm going to be interleaving two databases into one#2016-09-1915:32robert-stuttafordgoing to be quite a lot of fun, and it's going to feel really good to expunge all the newbie mistakes we made over the last 4 years#2016-09-1915:33robert-stuttafordsome real 🙈 moments in there#2016-09-1915:33stuartsierraThat's a common motivation for doing it.#2016-09-1915:35robert-stuttafordany idea how big that database was in storage, @stuartsierra ?#2016-09-1915:35stuartsierrano#2016-09-1915:35stuartsierraAnother trick: consider "decanting" into a dev database and then use backup/restore to move into distributed storage.#2016-09-1915:36robert-stuttafordthe longer term motivation for building this out now is that it becomes possible to rebuild far more quickly in future, e.g. to shard#2016-09-1915:36robert-stuttafordoh, yes. definitely#2016-09-1915:40ckarlsenis the diagnostic tool mentioned in the announcement of version 0.9.5302 easily available?#2016-09-1915:49robert-stuttafordhuh. think we'll go quite quickly; we're only at 84mil datoms#2016-09-1915:51jaret@ckarlsen So if you deleted the DB as mentioned earlier I am not sure you can run the diagnostic. Is there a reason you cannot just delete the table and assume the 2.5gig is garbage that can no longer be collected?#2016-09-1916:06ckarlsen@jaret no reason, just curious. I've been doing lots of retractions and additions lately on local dev db during testing, and often the transaction throughput is horribly slow.. from ~5ms to 2-3sec for no apparent reason. This is originally a database that's been through a lot of software upgrades#2016-09-2007:30kurt-yagramIs it possible with pull to pull all attributes only one level deep?
Having tree-like data which contains many components, using pull pulls the whole tree (which makes sense). However, I'd like to pull only one level deep. I could use attribute names, but since there may be a lot of them and the pull shouldn't depend on knowledge of all the attributes, I'd rather have a pull that goes only one level deep.
This is not recursion I'm after: it's not about friends-of-friends-of-friends relationships. Taking that analogy, it's more friends of one or more of friends/mother/father/nephews/nieces/... of, well, that depends on whether it's a friend of a friend, or a friend of a mother, or a friend of a father, and so on.#2016-09-2007:32tengThanks @stuartsierra#2016-09-2013:21ethangracer@kurt-yagram you could write a query that returned the attributes that you were looking for, i.e.:
'[:find [?attr1 ?attr2 ?attr3]
:in $ ?e
:where [?e :entity/attribute ?attr1] [?e :entity/attribute ?attr2] [?e :entity/attribute ?attr3]]
It wouldn’t be in a map, but it would be guaranteed to only go one level deep#2016-09-2014:16bahulneel@kurt-yagram what about:
(into {} (d/q '[:find ?an ?v :in $ ?e :where [?e ?a ?v] [?a :db/ident ?an]] db eid))
#2016-09-2014:24ethangracer@bahulneel nice query#2016-09-2014:25bahulneelremembering that the schema is queryable is something that keeps coming up over and over again for me#2016-09-2014:26ethangracervery handy#2016-09-2014:26ethangraceronly downside of the above is it returns one ID for ref many fields#2016-09-2014:27ethangracerbut it gets really close#2016-09-2014:27bahulneelsure#2016-09-2014:27bahulneelI think you could write a rule to fix that#2016-09-2014:27ethangracerah, because of the into#2016-09-2014:27ethangracerthat’s the only reason#2016-09-2014:27bahulneelah yes#2016-09-2014:27ethangracerif you do a merge-with it’ll work#2016-09-2014:27bahulneelmaybe a reduce#2016-09-2014:28kurt-yagram@bahulneel yeah, I think something like that may work#2016-09-2014:29ethangracer@bahulneel you’re right, a reduce is easier#2016-09-2014:32bahulneelThis works
(->> eid
(d/q '[:find ?an ?v
:in $ ?e
:where
[?e ?a ?v]
[?a :db/ident ?an]]
db)
(reduce (fn [m [k v]]
(update m k (fn [o]
(cond
(nil? o) v
(sequential? o) (conj o v)
:else [o v]))))
{}))
#2016-09-2014:32bahulneelAlthough if there's only one in a many you'll only get one#2016-09-2014:34devthI just switched from dev storage to sql and now I'm trying to figure out how much disk space I should give the transactor(s). The docs only say it's used as temp storage. Does it need much? Does it need to be fast / ssd?
Only relevant docs I could find were from the config samples:
# Data directory is used for dev: and free: storage, and
# as a temporary directory for all storages.
# data-dir=data
#2016-09-2014:35bahulneel@ethangracer this would work properly:
(->> eid
(d/q '[:find ?an ?v ?c
:in $ ?e
:where
[?e ?a ?v]
[?a :db/ident ?an]
[?a :db/cardinality ?c]]
db)
(reduce (fn [m [k v c]]
(update m k (fn [o]
(if (= :db.cardinality/many c)
(if o
(conj o v)
[v])
v))))
{}))
#2016-09-2014:38ethangracer@bahulneel looks right but isn’t quite working for me#2016-09-2014:38ethangraceranyway it’s a fun idea#2016-09-2014:38ethangracerdefinitely doable, just some nuance to figure out#2016-09-2014:43bahulneelI didn't have a dataset handy to test it on, but the idea is there.#2016-09-2015:08bahulneel@ethangracer ok so I forgot to get the ident of the cardinality the query part should be:
'[:find ?an ?v ?cn
:in $ ?e
:where
[?e ?a ?v]
[?a :db/ident ?an]
[?a :db/cardinality ?c]
[?c :db/ident ?cn]]
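Putting the corrected query together with the reduce, a one-level pull helper might look like the following sketch. This is an untested assembly of the pieces from the thread, not a verified implementation; it assumes `db` is a database value and `eid` resolves to an entity.

```clojure
(require '[datomic.api :as d])

(defn pull-one-level
  "Return a map of attribute ident -> value(s) for eid, one level deep.
   Cardinality-many attributes accumulate into vectors."
  [db eid]
  (->> (d/q '[:find ?an ?v ?cn
              :in $ ?e
              :where
              [?e ?a ?v]
              [?a :db/ident ?an]
              [?a :db/cardinality ?c]
              [?c :db/ident ?cn]]
            db eid)
       ;; Collapse the [attr value cardinality] tuples into a single map.
       (reduce (fn [m [k v cn]]
                 (if (= :db.cardinality/many cn)
                   (update m k (fnil conj []) v)
                   (assoc m k v)))
               {})))
```

Note that, like the versions in the thread, this can't distinguish a cardinality-one value from a cardinality-many attribute that happens to hold one value other than by consulting the schema, which is why the query joins on `:db/cardinality`.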
#2016-09-2015:09marshall@devth Datomic doesn’t need a lot of space. Faster certainly wouldn’t hurt.#2016-09-2015:09marshallit’s used for local temp/swap#2016-09-2015:10devthOk, so e.g. a 10gb local ssd would be enough? Should it be persistent or can it be lost when a transactor fails and gets brought up elsewhere?#2016-09-2015:10marshallmore than enough. doesn’t need to be persistent#2016-09-2015:10devth@marshall awesome, thanks!#2016-09-2015:10ethangracerawesome, @kurt-yagram the solution above from @bahulneel is far better than mine 🙂#2016-09-2015:15bahulneel@kurt-yagram @ethangracer altogether here https://gist.github.com/bahulneel/0ac62dda604936b6a22e497bffb33769#2016-09-2107:06kurt-yagram@bahulneel @ethangracer Thanks! This is really cool...
For now, since it's inside another pull (I need one-level deep attributes of one of the attributes of a pull), I'm defining which attributes are used:
(d/q '[:find [?an ...]
:in $
:where
[?e :whatever/value ?m]
[?m ?a _]
[?a :db/ident ?an]]
(d/db conn))
and I use this one inside a pull like:
...
whatever-attributes (d/q '[:find [?an ...]...) ;; query above
attributes [:whatever/id
:whatever/value whatever-attributes]
find `[[(~'pull ~'?e ~attributes) ...]]
...
(d/q `{~':find ~find
~':in ~in
~':where ~where})
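As an aside on the syntax-quote splicing above: newer Datomic versions let you bind the pull pattern itself as a query input, which avoids building the query form dynamically at all. A sketch, reusing `db` and `whatever-attributes` from the snippet above and assuming your Datomic version supports bound pull patterns:

```clojure
;; The pattern is plain data passed via :in -- no syntax-quote needed.
;; :whatever/id and :whatever/value are the attributes from the snippet above.
(d/q '[:find [(pull ?e pattern) ...]
       :in $ pattern
       :where [?e :whatever/id]]
     db
     [:whatever/id {:whatever/value whatever-attributes}])
```

Because the pattern is ordinary data, it can be computed at runtime (as `whatever-attributes` is here) without any quoting gymnastics.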
#2016-09-2113:22ethangracer@kurt-yagram awesome! take a look at the following tutorial to see how you can parameterize your queries, so that you don’t have to inject them using syntax quotes, etc. which is both time consuming and hard to get right
http://www.learndatalogtoday.org/chapter/3#2016-09-2113:26kurt-yagram@ethangracer yeah, I didn't like all the syntax-quote stuff... didn't think of parameterizing it though, stupid me. I only parametrized single values so far. Thx!#2016-09-2113:29ethangracer@kurt-yagram I just discovered the full power of datomic’s parameterization too. It’s pretty fantastic.#2016-09-2114:20kurt-yagramfrom datomic docs:
(d/q '[:find ?p (pull ?tx [:db/txInstant]) ?added
:in $ ?userid
:where [?u :user/purchase ?p ?tx ?added]
[?u :user/id ?userid]]
(d/history db) userid)
However, when I try something like that, I get IllegalStateException Can't pull from history datomic.pull/pull* (pull.clj:294)
So... when did that change?#2016-09-2114:37marshall@kurt-yagram Where in the docs is that example? I’ll fix it.#2016-09-2114:38marshall## Changed in 0.9.5344
* Improvement: Throw error when datomic.api/pull is passed a history db.#2016-09-2114:41ethangracer@marshall why was it decided that pull couldn’t be used on a history db?#2016-09-2114:42marshall@ethangracer it never would have worked, the improvement was to throw when tried.
The semantics of pull are current-schema-dependent#2016-09-2114:44ethangracer@marshall Interesting. Is there any documentation / talks about why that is? I’m curious to understand more#2016-09-2114:44kurt-yagram@marshall http://docs.datomic.com/best-practices.html#use-history#2016-09-2114:45marshall@kurt-yagram thanks. i’ll fix that#2016-09-2115:15kurt-yagramjust a quick question: if an entity is retracted, and that entity is a component of other entities, it's retracted from the parent-entities as well. Also, all components of the retracted entity are retracted - I never used retract so often in 1 sentence. However, can I add the retracted entity again? What will happen? Will it restore the old state, like, add previously retracted components? Will it be added as a component to the previously parent, where it was a component of? (Does this actually make any sense?)#2016-09-2115:40robert-stuttafordthat all depends what you mean by 'add the retracted entity again' 🙂#2016-09-2115:42bahulneelDoes anyone know if it's possible to know if a variable is already bound in a rule?#2016-09-2115:42jgdaveyI’ve done a “reverse” transaction before, and that works well for this kind of thing#2016-09-2115:50robert-stuttaford@kurt-yagram: remember that it's all just datoms. [this-e :value ":-)"] [this-e :relationship that-e] [that-e :value " 😎 "]. 
if you restore these datoms, you restore those entities and their relationships#2016-09-2115:53robert-stuttafordDatomic becomes crystal clear when you understand this fact, imho#2016-09-2118:41jfntnHas anyone used datomic as an http-session store for something like ring-session?#2016-09-2118:42jfntnI’m thinking the history would be really useful for analytics and the peer cache should mitigate performance concerns, but I don’t know if there are reasons why this would be a bad idea…?#2016-09-2118:43marshallI have heard of several people considering Datomic as a session store#2016-09-2118:43marshallI don’t have much in the way of specific details#2016-09-2118:44marshallOne question I might consider is the total volume of sessions you expect to need to handle over time#2016-09-2118:44marshallsince you can’t delete data from Datomic#2016-09-2118:45marshallif you only want to keep sessions around for a few days (or however long), you’d have to address how to handle that#2016-09-2118:47marshallevidently a couple folks have indeed thought about it at least:
https://github.com/gfZeng/datomic-session-store
https://github.com/hozumi/datomic-session
Caveat - I have no personal knowledge of these repos and their value/quality/live-ness/etc 🙂#2016-09-2118:49jfntninteresting!#2016-09-2118:51jfntnMaybe having a session/key that doesn’t track history could be a way to invalidate them while keeping the data around?#2016-09-2118:51marshallYou could definitely use noHistory. of course then you don’t get the benefit of having history for audit/metrics/debugging/etc#2016-09-2118:52marshallbut unless you have a really busy site, the overall data volume will likely not be prohibitive even if you keep everything#2016-09-2118:53marshallthat is the Datomic Way after all 😉#2016-09-2121:03chris_johnsonsilly question - is Cognitect aware that has an invalid SSL cert on it (cert is for , Safari on (i|mac)OS 10 at least carps about the hostname mismatch)?#2016-09-2121:35csmalso, chrome and curl both complain about that cert, but https just forwards to http#2016-09-2203:59robert-stuttaford@jfntn do not use datomic as a session store for logged out users. you do not want 100s of 1000s of google bot and phpmyadmin and sql injection attacks cluttering up your database 🙂 trust me on this. i'm busy crafting a codebase to rebuild our database from scratch partly because we have so much of this junk. we settled on using cookie storage for sessions, to store a uuid, and we create a session entity in datomic with that uuid when the user signs in, to capture ua and ip, and to link activity to a session. we use the uuid as a value directly for the things where we need to record the user when signed out. this works out much better. happy to answer questions if you have 'em#2016-09-2205:54robert-stuttaford@marshall please consider allowing multiple Download Keys for Datomic Pro. having one active at a time means that we have to reset it and very quickly go and update all the places it's used, or e.g. builds break. Supporting multiple gives us a grace period to update things without breakage. 
thanks!#2016-09-2209:50pesterhazyseconding @robert-stuttaford's suggestion: use cookie storage for sessions wherever possible and for as much data as possible. Session storage used to be one of the most important causes of database slowness, and an append-only db is really not the best place to store what is essentially ephemeral data#2016-09-2212:48marshall@robert-stuttaford I’ll register the request with the team!#2016-09-2213:14shooodookenIf Cognitect are updating the website, they could also update the 'Project Setup' section so it references current version of the product. http://docs.datomic.com/project-setup.html#2016-09-2213:16jfntn@robert-stuttaford great advice thanks. So if I understand correctly the flow is: sign-in -> tx session -> store session/key (uuid) in cookies. Then on the next request you’d lookup the session/key from the cookies (probably indexed?) and get a hold of the user through a session/user ref?#2016-09-2213:31robert-stuttafordfirst visit: write session uuid to cookie. middleware does this. maybe use said uuid as a normal value in datomic transactions.. e.g. :cart/guest-session-uuid
visitor signs in: create session entity, using uuid as its unique identifier. capture additional metadata, such as ip and user-agent. link session entity to user entity.
subsequent visits from signed in user: middleware uses uuid from cookie to find session, and uses session to find user.
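A rough sketch of that three-step flow as Ring-style middleware plus two helpers. All attribute and function names here (`:session/uuid`, `wrap-session-uuid`, etc.) are hypothetical, not from the conversation; it assumes Ring's `wrap-cookies` is in the middleware stack and a running Datomic peer:

```clojure
;; Sketch only: names are hypothetical. Assumes datomic.api and Ring cookies.
(require '[datomic.api :as d])

(defn wrap-session-uuid
  "First visit: make sure a session uuid cookie exists; no db write yet."
  [handler]
  (fn [req]
    (let [uuid (or (get-in req [:cookies "session-uuid" :value])
                   (str (java.util.UUID/randomUUID)))]
      (-> (handler (assoc req :session-uuid uuid))
          (assoc-in [:cookies "session-uuid"] {:value uuid})))))

;; Sign-in: escalate the cookie uuid to a session entity linked to the user,
;; capturing ip and user-agent.
(defn create-session! [conn uuid user-eid ip ua]
  @(d/transact conn [{:db/id        (d/tempid :db.part/user)
                      :session/uuid (java.util.UUID/fromString uuid)
                      :session/ip   ip
                      :session/ua   ua
                      :session/user user-eid}]))

;; Subsequent visits: cookie uuid -> session entity -> user entity.
(defn user-for-session [db uuid]
  (d/q '[:find ?u .
         :in $ ?uuid
         :where [?s :session/uuid ?uuid]
                [?s :session/user ?u]]
       db (java.util.UUID/fromString uuid)))
```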
@jfntn 🙂#2016-09-2213:35jfntn@robert-stuttaford gotcha, makes sense to create a cookies-only session then escalate it to an entity on login, that’s great advice thank you!#2016-09-2215:30jaret@shooodooken I updated the project-setup page 🙂 http://docs.datomic.com/project-setup.html#2016-09-2216:16zaneTrying to make sense of this error:
datomic.impl.Exceptions$IllegalStateExceptionInfo: :db.error/unique-conflict Unique conflict: :email/address, value: N/A already held by: <eid elided> asserted for: <eid elided>
data: {:db/error :db.error/unique-conflict}
#2016-09-2216:16zaneThe N/A is what's tripping me up.#2016-09-2217:07marshalllooks like it’s saying the string “N/A” is set for an EID in the db#2016-09-2217:08marshallI’m assuming :email/address is set to db/unique value#2016-09-2217:15zaneIt is indeed.#2016-09-2217:16zaneLet me check if we have an :email/address" set to "N/A".#2016-09-2217:16marshallyou should be able to pull the first EID#2016-09-2219:40bmaysIs there a way to invalidate the cache for a connection?#2016-09-2220:02jaret@bmays by invalidate do you mean to delete or reset cache entries? Are you referring to the object cache or memcached?#2016-09-2220:06bmays@jaret — the object cache. I’m hoping to empty/reset the cache. I’m noticing entities in the cache after restoring the database to a previous version.#2016-09-2220:23robert-stuttaford@bmays you absolutely should reboot any peers after a restore#2016-09-2220:24bkamphausNot entities in the way it sounds like you’re thinking — Datomic caches segments and segments are immutable. But also as indicated here you should run restore with all processes down: http://docs.datomic.com/backup.html#restoring#2016-09-2220:37jaret@bmays as Robert indicated you can restart jvm to clear the object cache#2016-09-2220:37bmays@robert-stuttaford I’m trying to avoid impacting another database being served by the same process. We have multiple databases, some which are periodically ingesting large amounts of data (attempting to use restore to avoid using the transactor).#2016-09-2220:40bmaysMy goal is really to set a database state without impacting the transactor’s performance — attempting to use the restore-db function to do that.#2016-09-2307:15robert-stuttafordfrom my understanding of how things work, @bmays, i don't think that's possible. 
perhaps @stuarthalloway can shine more light#2016-09-2614:14zaneI'm also curious about the answer to this question.#2016-09-2614:16robert-stuttafordi know that you can release a connection#2016-09-2614:16robert-stuttafordhttp://docs.datomic.com/clojure/#datomic.api/release#2016-09-2614:17robert-stuttafordthe doc string seems to suggest that it's not designed for this though#2016-09-2614:43marshall@zane @robert-stuttaford I’d be interested in what the use case is for that#2016-09-2614:49potetm@marshall: re: https://stackoverflow.com/questions/39688899/if-you-discover-a-fact-after-the-fact-how-do-you-datomic/39690195#39690195 I think he's more interested in how to efficiently access historical values. I know datomic uses AVET for as-of lookups with datetimes. Is there a way to efficiently use that index to give you the datom at or immediately before a particular value? Or is there an optimized query for datetimes that does the same?#2016-09-2614:51marshallYou could use the log potentially#2016-09-2614:51marshallI’m not sure what you mean by datom at or before a particular value#2016-09-2614:57potetmNever mind. I was thinking for some reason that historical values would be in the live AEVT index.#2016-09-2614:58potetm(Thinking was, you're given some date X by the user, you want to do an AEVT lookup to get the closest date prior to X)#2016-09-2615:01potetmOrdinarily I would just use a query. Not certain what his use case is.#2016-09-2615:07robert-stuttaford@marshall this person ( @bmays ) wanted to delete a time-sharded database and purge all its cache, and start a new time-sharded database. the question is, how to purge that cache#2016-09-2615:08marshallI know what was asked. 
Why the need to purge the cache?#2016-09-2615:08marshallIf your new DB was created with a separate call to create-database#2016-09-2615:08robert-stuttafordah, my explanation was off: I’m noticing entities in the cache after restoring the database to a previous version.#2016-09-2615:09robert-stuttafordhis words are about a page up 🙂#2016-09-2615:09marshallright, after restoring over an existing DB you need to restart peers and txor#2016-09-2615:09marshallbut if you’re putting in “separate” DBs, the cache shouldnt matter#2016-09-2615:09robert-stuttafordyeah. so he's restoring, and that's why he's having issues#2016-09-2615:10marshallyou can use restore to put a ‘new’ DB (i.e. not a backup of the same db) into storage#2016-09-2615:10marshallin a running system#2016-09-2615:10marshallany time you restore a DB to ‘replace’ one in there, restart definitely required#2016-09-2615:11robert-stuttafordi'm guessing because uuids#2016-09-2615:11marshallindeed.#2016-09-2615:40bmays@marshall thanks for the reply. The use case is that we have a ETL process for large data sets into different logical DBs that share a transactor. We don’t want to consume transactor resources to do the ETL so we want to simply ‘replace’ the database periodically. We’d like to avoid restarting the peers/stopping the transactor, because we have another database serving reads/writes to a web service.#2016-09-2615:40bmaysWe can live with a JVM restart but it does feel like we’re abusing the restore functionality#2016-09-2616:22marshall@bmays Are you replacing the database that serves the web service or a separate one? Do the same peers serve both?#2016-09-2616:25bmays@marshall: The peers serve both and it’s a separate database.#2016-09-2616:40marshall@bmays If you had dedicated peers to the DB in question I’d say you should take down the peer, delete the db, call gc-deleted-dbs, restore, then restart the peer. 
I’m thinking that won’t work well if the other (web service) DB is also served by those same peers.#2016-09-2616:41bmaysGotcha, thanks guys#2016-09-2616:42bmaysWe considered having dedicated peers for the DBs but it’s not worth it at the moment. We’re going to just do a staggered reboot of the peers post restore#2016-09-2616:42bmaysthanks for your help @robert-stuttaford#2016-09-2616:43robert-stuttafordhappy to learn along with ya!#2016-09-2618:00zaneThat robert-stuttaford is all right.#2016-09-2622:38mishagreetings!
does datomic's ~10 billion datom capacity limit include history datoms (retracted ones)?
or is it the current db view only? (since those are stored in separate trees)
and "sharding" user-generated data into many db-s is not particularly useful, right? (data for users A-M goes here, for N-Z goes there)#2016-09-2712:04yonatanelDoes anyone know how to catch and handle transact exception specific to when a :db.unique/value conflict occurs? I can look inside the exception but I don't know what data is part of the public api and can be trusted (as mentioned in http://docs.datomic.com/exceptions.html#clojure-iexceptioninfo and a little in http://docs.datomic.com/clojure/#datomic.api/transact)
I need this for cqrs idempotency (handling the at-least-once behavior of my events log): I'm transacting the event ID as unique so duplicates just abort the transaction. If there's a better way that doesn't rely on exceptions, that could be helpful as well.
This works but uses the undocumented ex-data value:
(try
  @(d/transact conn [{:db/id (d/tempid :db.part/user)
                      :cqrs.command/uuid #uuid "a1ad6690-8498-11e6-8495-c4fae74533c7"}])
  (catch java.util.concurrent.ExecutionException e
    (= (:db/error (ex-data (.getCause e))) :db.error/unique-conflict)))#2016-09-2712:55stuartsierra@yonatanel Although the contents of ex-data are not promised as a stable public API, I think it is not likely to change. One way to be safer is to write a unit test for the behavior you expect, and run it every time you upgrade Datomic. As an alternative, you could write your own transaction function to enforce the constraint, throwing its own ex-info with data you can catch and use in your application code.#2016-09-2713:02yonatanel@stuartsierra Good idea, though I still don't want to touch transaction functions. Also I changed my code just now to be even dirtier, checking the exception message string because it's the only thing that contains my specific uuid attribute ident, as it's the only one I want to silently ignore. Would be nice to have a silent "conditional put" or more info outside of a message string.#2016-09-2713:04stuartsierra"Conditional put" is a clear use case for transaction functions.#2016-09-2713:14yonatanelIt might have been simpler to just do it instead of the whole exception thing :)#2016-09-2713:52annataberskiI changed an ident name in an in memory database, which seems to change the name as expected, except when I try to query by the new name I get :db.error/not-an-entity. Any insight as to why this is happening? Maybe related to renaming an ident in an in memory db?#2016-09-2713:55marshall@misha sharding by either time or domain (ie users) can be a good approach if necessary. However, it's generally recommended that each shard be served by its own transactor #2016-09-2714:01jaret@annataberski Can you try the same ident change on local storage and not in-memory?#2016-09-2714:02annataberski@jaret sweet. i’ll give that a try#2016-09-2719:26mlimotteDoes anyone have any suggestion, or better yet, examples of how to limit data access for incoming queries when they are coming from an internet-facing client?
For context, the relevant portion of the proposed architecture looks something like this:
We have a client application (Web UI). It authenticates and connects to a backend service. Then it can hit a generic /query/ endpoint in the service. The generic end-point allows the user to pass in an arbitrary datalog query. The authentication means that not just anyone can execute a query, but once authenticated, that user should also be limited.
For example, if this were an e-commerce site and a customer logs in, they should be able to query for things in their shopping cart, but not another customer's info.
I'm thinking some sort of Access Control Lists associated with roles that indicate what entity types can be retrieved and what constraints are required.#2016-09-2719:47stuartsierra@mlimotte: This is difficult. Datomic datalog supports arbitrary code execution, in the form of predicate functions.#2016-09-2719:48stuartsierraIf you're facing the public Internet, I would recommend developing your own query format — perhaps a subset of Datomic datalog — which supports only the features the client will need.#2016-09-2719:49stuartsierraAlso consider alternative query formats, such as GraphQL or Om.next, which allow a lot of flexibility in the client but are still less open-ended than Datomic datalog.#2016-09-2719:51mlimotteHave been considering Om.next. Don't fully understand the query format there-- kind of like datomic pull syntax I think. Even in the Om.next case, though, I think I would still need some additional ACL limits.#2016-09-2719:52mlimottefyi, the idea for this open /query interface comes from this Bobby Calderwood video on ES+CQRS: https://www.youtube.com/watch?v=qDNPQo9UmJA&noredirect=1#2016-09-2721:00kenny@mlimotte I have been thinking about that for a while now and I think it is an incredibly powerful idea (I also got the idea from the video you linked). As @stuartsierra said, arbitrary code execution is allowed so your first step would be to sanitize the query by having a whitelist of allowed functions. Next, run the query against the database and filter the results down based on the user's role (I recommend using Clojure's isa? and derive to create your user roles). I have defined my user's roles as a map with the key being the role (e.g.
:role/admin) and the value being a map of various whitelists, blacklists, and authorization functions which are used to create the aforementioned query result filter.#2016-09-2722:19misha@kenny you did it in code, not in datomic, right?#2016-09-2722:19kennyYes, in code#2016-09-2808:12yonatanel@mlimotte That talk describes a solution done for an internal company application. This open ended /query endpoint shouldn't be public. There's another talk about "datomic superpowers" where they sanitize the query and add constraints (can't remember if in a database function or by appending :where clauses) according to user roles: https://www.youtube.com/watch?v=7lm3K8zVOdY#2016-09-2811:18kurt-yagramShould lookup refs in transactions work fine? - I thought they would, but I do get :db.error/not-an-entity Unable to resolve entity: ....
[...
{:db/id #db/id[:db.part/user]
:some/tag "whatever"}
{:db/id #db/id[:db.part/user]
:ref-to-some [:some/tag "whatever"]}
...
]
:some/tag is a unique string (unique-value or unique-identity)#2016-09-2811:18robert-stuttafordin map txes, you must wrap with {:db/id lr}#2016-09-2811:20kurt-yagramso {:db/id [:some/tag "whatever"]}? What if I want to put it in a different partition?#2016-09-2811:21robert-stuttaford:ref-to-some {:db/id [:some/tag "whatever"]}#2016-09-2811:21kurt-yagramaah, sorry.#2016-09-2811:21robert-stuttafordat this point, it's already in a partition#2016-09-2811:21kurt-yagramyeah, right. get it, thx.#2016-09-2811:24kurt-yagram... or not. It doesn't seem to work: still same error.#2016-09-2811:30kurt-yagramjava.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity: [:some/tag "whatever"] in datom [#db/id[:db.part/user -1] :ref-to-some [:some/tag "whatever"]]
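The error above is because a lookup ref only resolves entities that already exist in storage; a ref to an entity created in the same transaction needs a shared tempid instead. A sketch of both cases, reusing the attribute names from the snippet (assumes a live connection `conn`):

```clojure
;; Within one transaction: share a tempid. A lookup ref would fail here,
;; because [:some/tag "whatever"] is not in storage yet.
(let [tag-id (d/tempid :db.part/user)]
  @(d/transact conn [{:db/id    tag-id
                      :some/tag "whatever"}
                     {:db/id       (d/tempid :db.part/user)
                      :ref-to-some tag-id}]))

;; Across transactions: once :some/tag "whatever" is in storage,
;; the lookup ref resolves fine.
@(d/transact conn [{:db/id       (d/tempid :db.part/user)
                    :ref-to-some [:some/tag "whatever"]}])
```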
#2016-09-2811:49kurt-yagramSo, no lookup refs in transactions. Switching back to #db/id[db.part/user -1] and stuff. Too bad.#2016-09-2812:08robert-stuttaford@kurt-yagram lookup refs can only find entities that are already in storage. you need to use temp ids to refer to temp ids#2016-09-2812:11kurt-yagramallright, thanks!#2016-09-2813:04kurt-yagramIf I query a db, can I return a default value if a many-cardinality attribute doesn't contain any value? (something like get-else, but for many-cardinality)#2016-09-2813:15marshall@kurt-yagram You could probably write something using missing? and ground to do so,#2016-09-2813:15kurt-yagramaha, missing seems to be something...#2016-09-2813:33kurt-yagramthis does the trick, there may be no attrs in :some/attr :
[(missing? $ ?t :some/attr) ?no-attr]
(or-join [?no-tag]
[?t :some/attr ?attrs]
[(when ?no-attr []) ?attrs])]
#2016-09-2813:35kurt-yagramor not... it's not right.#2016-09-2813:44marshallI figured out a way to do this in the past at one point.
I think it was a missing and a ground clause both inside an or#2016-09-2814:20mlimotte@kenny thanks for the feedback. That is in-line with what I was planning. Was hoping there was already some published example, but not sure how generic vs. application-specific the result will be.#2016-09-2814:53marshallDatomic 0.9.5404 is now available https://groups.google.com/d/topic/datomic/JKbyd8VFYm8/discussion#2016-09-2816:18mlimotte@yonatanel I'm not sure that's true. The speaker at one point refers to the web service layer handle edge concerns to protect from the "Scary internet". In any case if you support authentication and authorization, and sufficiently sanitize the input, I could see doing the same thing for an internet facing application.#2016-09-2817:05potetmCurious about the switch away from HornetQ.#2016-09-2817:07potetmPurely performance?#2016-09-2817:10marshall@potetm The HornetQ codebase was donated to Apache and became part of Artemis#2016-09-2817:10potetmAh. So basically just a HornetQ update.#2016-09-2817:11marshallhttps://hornetq.blogspot.com/2015/06/hornetq-apache-donation-and-apache.html#2016-09-2819:32leovhihi. can I ask maybe a stupid question, however I am new to data structures and algorithms#2016-09-2819:33leovso I was trying to google that out, but still: is datomic um.. indexes sort of persistent data structures, right?#2016-09-2819:33leovif so, what are in the leaves?#2016-09-2819:33leovand what is stored in the upper level of the tree#2016-09-2819:40leov(the guide says that datomic stores segments, not individual datoms in the tree. so this means those have to be periodically rebuilt and written anew somewhere on the disk)#2016-09-2819:45bkamphaus@leov yes and no. 
It does store segments, and the indexes are periodically rewritten (see http://docs.datomic.com/capacity.html#indexing for the performance impact), but the segments themselves are immutable.#2016-09-2819:46arohner@leov persistent datastructures means “when I ‘update’ the structure, the old one is untouched, and I have a new structure representing the change’. kind of like a git fork#2016-09-2819:49bkamphausif you’re interested in implementation details, etc. Rich’s Writing Datomic in Clojure talk is probably the best resource: https://www.infoq.com/presentations/Datomic (though note that some details may have changed since then of course).#2016-09-2820:00kennyHow can I get a pull pattern to be recursive at an arbitrary depth? For example:
[:db/id
:some-value
:nested {:foo '...}]
I want :foo to use the entire pull pattern defined above. The result I get is :nested has no data pulled:
{:db/id 1, :some-value 6, :nested #:db{:id 2}}#2016-09-2821:31leovthank you#2016-09-2912:56misha@kenny
map-spec = { ((attr-name | limit-expr) (pattern | recursion-limit))+ }
recursion-limit = positive-number | '...'
#2016-09-2913:01mishabasically, you need to replace ... with a depth number:
[:db/id
:some-value
:nested {:foo 3}]
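For reference, both recursion limits in the Pull grammar quoted above are valid (`'...'` for unlimited depth, a positive number for bounded depth); the map-spec key needs to be the ref attribute being recursed, which then re-applies the enclosing pattern. A sketch (assumes `:nested` is a ref attribute and `eid` is bound):

```clojure
;; Unlimited-depth recursion: '...' re-applies the whole enclosing pattern.
(d/pull db '[:db/id :some-value {:nested ...}] eid)

;; Bounded recursion: recurse at most 3 levels deep.
(d/pull db '[:db/id :some-value {:nested 3}] eid)
```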
#2016-09-2913:01misha(as per Recursive Specifications from http://docs.datomic.com/pull.html)#2016-09-3008:52pesterhazyAny experiences with allowing non-programmers to query analytics data? Our main db is datomic but we can't expect our marketing/business people to learn datalog. They need simple queries but giving them the ability to write/tweak those themselves (as they could with an SQL db) would be useful.#2016-09-3011:56jethroksy@pesterhazy you'll probably get closest to this with https://github.com/zcaudate/adi#2016-09-3012:00pesterhazy@jethroksy that seems to try to do many things (we already have a schema etc.)#2016-09-3012:01jethroksyright, it wasn't designed for your use case#2016-09-3012:01jethroksyadi feels to me like a graphql-like wrapper for datomic#2016-09-3012:02jethroksyknowledge of datalog here is not necessary, so you probably could architect something along the lines of adi#2016-09-3012:05pesterhazyI see#2016-09-3012:06pesterhazyI was thinking more along the lines of
- excel export based on pre-defined queries
- hooking up queries to an SQL db (sqlite? mysql?)#2016-09-3012:14jethroksyhaven't seen anything like that#2016-09-3014:21bkamphaus@pesterhazy because the default return for a query is a set of tuples, it’s fairly trivial to write a function to wrap query results that exports to a comma/tab/arbitrarily delimited text file, then import it into whatever db or system you like. I did this myself several times to get stuff into a SQL store or other tabular data analysis tool (I use pandas in Python, others use R, Tableau, etc.). Of course there are also two common feature requests that would make this more trivial: (1) some kind of SQL mechanism to query Datomic (SQL->Datalog or otherwise) or (2) ability to download tuples from a query in console as CSV.#2016-09-3014:23bkamphausThe team was aware of these when I was there, but there’s also a high bar of quality that has to be cleared especially re: supporting SQL queries (because a lot of people want the ability to plug in their analysis tool of choice, which means support for all kinds of arbitrary crazy generated SQL queries no sane human would ever write).#2016-09-3014:31pesterhazy@bkamphaus I empathize and understand a generic SQL query layer would be hard to get right#2016-09-3014:33pesterhazyfeature request (2) you mentioned would get us pretty far as well#2016-09-3014:33pesterhazyI suppose I could easily build something like that myself#2016-09-3014:35pesterhazythe limitation is that it would have to be a long-running process as just booting up and connecting to datomic and loading segments brings query times up to 3min (long feedback loop compared to a quick "SELECT * from orders")#2016-09-3014:38bkamphausI don’t follow all the logic there. 
The Datomic system isn’t otherwise up?#2016-09-3014:44pesterhazythe transactor is running#2016-09-3014:44bkamphausoh booting up a new peer to do this?#2016-09-3014:44pesterhazyyes#2016-09-3014:44pesterhazyas in a Jenkins job#2016-09-3014:45pesterhazythat's what we've been doing (and what I would do in a typical SQL setting); it works fine but it's just not a great experience for interactive use because of the startup time#2016-09-3014:46bkamphauswhat’s the time sensitivity of these queries? could you just periodically store/cache the query results elsewhere?#2016-09-3014:46pesterhazyyes, we also do that (push a CSV to S3)#2016-09-3014:47pesterhazyall of that works to some extent, I'm just thinking out loud if there's an elegant possibility I've missed#2016-09-3015:55annataberskiI’m trying to retract an entity. But when I run @(datomic.api/transact conn [[:db.fn/retractEntity [:db/ident :item/ident-name]]]), I get an error saying IllegalArgumentExceptionInfo :db.error/invalid-attribute Schema change must be followed by :db.install/attribute or :db.alter/attribute. Any thoughts on why this is happening?#2016-09-3016:15bkamphaus@annataberski it looks like what you’re trying to retract is an attribute. You can rename things and alter some aspects of an attribute ( http://docs.datomic.com/schema.html#Schema-Alteration ) - but can’t remove it outright.#2016-09-3021:06annataberskigotcha. thanks @bkamphaus#2016-09-3021:58bmaysBeen puzzling over a weird bug for some time now, curious to understand more about how Datomic history databases work and whether this is actually expected. What we’re seeing is multiple [e a v t] facts share the same e a t but have different v values. How would this be possible?#2016-09-3022:03marshallWhen you assert a new fact (i.e. Marshall now loves ice cream) against a cardinality one attribute, that will expand into 2 datoms in that transaction. One is the retraction of the current value and the other is assertion of the new value. 
Both will have the same E, A, and T, but different V and Op (or added) values#2016-09-3022:03marshall@bmays ^#2016-09-3022:03bmaysgod bless 🙏#2016-09-3022:04marshallBind [e a v t op] and you'll see the difference #2016-09-3022:05bmaysMakes sense now, didn’t know retractions are automatically added#2016-09-3022:05marshallThey will for cardinality one attributes #2016-10-0121:17arohnerIs the hornet connection encrypted?#2016-10-0121:18marshall@arohner yes by default it is. You can disable it via an option in the transactor properties file #2016-10-0121:18arohnerthanks#2016-10-0121:35arohnerclojure.lang.ExceptionInfo: Error communicating with HOST 0.0.0.0 or ALT_HOST my.dns.name on PORT 4334
java.lang.NoClassDefFoundError: Could not initialize class org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl
#2016-10-0121:35arohneris that a ‘normal’ connection problem, or is that artemis weirdness?#2016-10-0121:35arohnerI can successfully telnet to my.dns.name 4334#2016-10-0121:42marshallWhat storage?#2016-10-0121:42arohnerDDB#2016-10-0121:42arohnerI moved the transactor & peer back to 5394, and everything works#2016-10-0121:43marshallYou only get that error when using both txor and peer at 5404?#2016-10-0121:44arohneryep#2016-10-0121:44marshallAnd you're seeing it on the peer?#2016-10-0121:44arohneryes#2016-10-0121:45marshallAre you sure the transactor is up and healthy? You can look at cloud watch for heartbeat msec#2016-10-0121:46arohnerreasonably sure. The transactor was running, and no suspicious messages in stdout#2016-10-0121:46marshallOk. Looking at the error again that looks like a class path issue#2016-10-0121:46arohnerI’m using boot, with datomic-pro 5404 in :dependencies#2016-10-0121:48arohnerand (require '[datomic.api :as d]) loaded just fine#2016-10-0121:48marshallI have to admit a bit of ignorance of boot as of yet. I'm guessing it has some equivalent of lein deps?#2016-10-0121:48marshallI suspect some other dep in your project may be pulling in an incompatible dep#2016-10-0121:49arohneryeah, my suspicion as well#2016-10-0121:49arohnerbut 5394 is working fine, so I probably won’t spend too much time chasing it down#2016-10-0121:51marshallFair enough. However, it looks like it may be Artemis related and releases of Datomic from now on will all be on Artemis #2016-10-0121:51marshallYou should be able to use a 5394 peer with a 5404 transactor#2016-10-0121:51marshallWhich would let you figure out the peer dep issue separately #2016-10-0121:52arohnerinteresting#2016-10-0312:55robert-stuttaford@jaret @marshall one of the things that worries me is how easily someone can delete a database in Datomic. 
what protections do we have, other than code review and preventing the use of remote repls entirely?#2016-10-0312:57danielstocktonIt's pretty easy to drop most databases (postgres, mysql...) if you have root access. Are backups not enough protection against that?#2016-10-0312:58robert-stuttafordwe have hourly backups. but, prevention is better than cure. also, other dbs allow you differing levels of access - ie, no DELETE commands, whereas due to Datomic’s design, the peer has full privileges#2016-10-0312:58robert-stuttafordi don’t want to have to deal with any data loss at all, and this is one of the gaps i’m aware of#2016-10-0312:59robert-stuttafordwe love our remote repls — they afford us a lot of power. but that datomic.api/delete-database is so near to hand 😱#2016-10-0313:00robert-stuttafordwe already only allow our transactor DDB writes, but if the txor does the delete (which i’m pretty sure it does), that doesn’t help us#2016-10-0313:00robert-stuttafordso i’m wondering what we can do if it is deleted to restore it (i know it sticks around in storage for a while), or what we can do to prevent deletion#2016-10-0317:57jaret@robert-stuttaford at present there is no configurable option on these api calls. Your approach of preventing remote REPLs and reviewing code is the current recommendation. We’ll register the request for more configurable control over the api.#2016-10-0318:07robert-stuttafordthanks, @jaret. i think it’s definitely worth adding some protection, but i have no idea how best to actually do that.#2016-10-0319:01vipacaAre there any recommendations for pagination or limit with offset for datomic?#2016-10-0319:05potetm@vipaca AFAIK datomic doesn't have any inherent notion of order. You can add an :item/index attribute and query using limit+offset.#2016-10-0319:05vipacaI was just looking at this http://docs.datomic.com/pull.html#limit-expressions#2016-10-0319:06potetmYeah. 
That's what came to my mind as well.#2016-10-0319:07potetmThat + predicate expressions: http://docs.datomic.com/query.html#sec-5-11-1#2016-10-0319:07vipacathanks potetm#2016-10-0319:09potetmNo prob!#2016-10-0319:27timgilbertHey, this kind of feels like a newbie question, but I have a company with a list of members, and I’m using the pull syntax to get them out like this:
(d/pull db '[:company/members] [:company/slug slug])
=> #:company{:members
[#:db{:id 285873023222825}
#:db{:id 285873023222826}
#:db{:id 285873023222827}]}
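If only the member eids are wanted, one alternative to post-processing the pull result is a collection find spec in a query (a sketch, using the attributes from the snippet above):

```clojure
;; Returns a flat vector of member eids instead of a nested map.
(d/q '[:find [?m ...]
       :in $ ?slug
       :where [?c :company/slug ?slug]
              [?c :company/members ?m]]
     db slug)
```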
#2016-10-0319:28timgilbert..is there a way to tell the pull API that I just want the list of members so I don’t have to (-> (d/pull …) :company/members)?#2016-10-0319:28timgilbertI know (d/query) has some options for that in the pattern syntax#2016-10-0319:29marshallAre you using clojure 1.9 ?#2016-10-0319:29timgilbertYep#2016-10-0319:29marshallI think that's the cause here#2016-10-0319:29marshallTry it with 1.8#2016-10-0319:29timgilbertNo, the namespace syntax is fine, what I want to do is control the nesting#2016-10-0319:29marshallOh, sorry. Misunderstood #2016-10-0319:30timgilbertNo prob… I mean like with query I can say (d/query [attribute …]) vs (d/query attribute) to control the shape of the data coming back#2016-10-0319:30marshallRight#2016-10-0319:31timgilbertJust wondering if there’s an analogous control for the pull API#2016-10-0319:32marshallI don't believe so. The idea is you get the nested hierarchy of the data structure as specified in the pull patter #2016-10-0319:32marshallPattern#2016-10-0319:33marshallAnd since this is a multi cardinality attribute, you'll get the map of attribute name to set of values#2016-10-0319:33timgilbertHmm, yeah, I guess since I’m giving it an EID to start with that makes some sense.#2016-10-0319:33timgilbertOk, cool. Thanks!#2016-10-0319:34marshallSure#2016-10-0408:21robert-stuttaford@marshall @jaret — hey folks. we sometimes get a delay from this codepath: https://github.com/onyx-platform/onyx-datomic/blob/0.9.x/src/onyx/plugin/datomic.clj#L309. 
what could cause tx-range to return a delay rather than [] or nil?#2016-10-0408:21robert-stuttafordthe delay takes the form of "#object[clojure.lang.Delay 0x5201a90 {:status :pending, :val nil}]”#2016-10-0408:21robert-stuttafordit may be that onyx-datomic’s use of seq over the result of d/tx-range is leaking a delay out#2016-10-0408:22robert-stuttafordcan you give any insight, please?#2016-10-0413:39marshall@robert-stuttaford the use of seq would be my guess as well#2016-10-0413:56marshall@robert-stuttaford looking again - there’s also a take read-size which could be doing it#2016-10-0414:02robert-stuttafordthank you @marshall, as always!#2016-10-0414:12zamaterianHi whats the default query consistency level (com.datastax.driver.core.ConsistencyLevel ) when using datomic on top of cassandra ?,
and is there any way of seeing the defaults the datomic uses when configuring the datastax driver ?#2016-10-0415:12robert-stuttafordis that not perhaps a bug, @marshall? i can’t imagine a situation where getting a delay like this is a worthy outcome, given the documented behaviour#2016-10-0415:13marshall@robert-stuttaford yeah, i’m not sure - it might be worth asking @alexmiller and/or the onyx team in the clojure and/or onyx rooms; If it is indeed related to Datomic itself I’m happy to elevate it, but I’m not sure whether it’s Datomic or not#2016-10-0415:16alexmillerneither the seq nor take should introduce a delay#2016-10-0415:16alexmillerso I would look at Datomic#2016-10-0415:16marshallok Thanks @alexmiller 🙂#2016-10-0415:29robert-stuttafordgreat, thanks @alexmiller and @marshall — let me know if you need me to participate in some way#2016-10-0416:05bkamphaus@marshall and @robert-stuttaford I believe the premise in Onyx documented as: "relies on the fact that tx-range is lazy” is not true.#2016-10-0416:05marshalli think you’re right#2016-10-0416:20lucasbradstreet@bkamphaus: that is useful to know that it is not lazy, and we may rewrite how it's used. However, should that affect whether a delay is returned or not? I would expect it to always return maps or always return delays#2016-10-0416:39malcolmsparksHere's a quick question re. idioms - I know that it's common to have attributes namespaced with the type of the entity that 'belong' to, e.g. :customer/id, :department/id. I find this is a bit limiting in a few situations. For example, you might have a number of 'nouns' in your system and want to express a general relationship between them, like :like, :has-favorite. Is it considered OK to have a single universal attribute for public uuids, e.g. 
:global/id for this use-case?#2016-10-0416:39malcolmsparksI've read there are storage downsides but I'm not sure what they could be#2016-10-0416:42malcolmsparksI'm sort-of referring to http://docs.datomic.com/best-practices.html#group-related-attributes but I can't remember where I read that there's some advantage to grouping wrt. storage and index compaction#2016-10-0416:42malcolmsparksDoes anyone here use 'global' attributes for a few things?#2016-10-0416:49bkamphaus@malcolmsparks the main impact is query, where e.g. you can’t use :person/has-favorite to only consider people, so [?p :person/has-favorite ?f] becomes [?p :person/id][?p :global/has-favorite ?f] (you have to join a clause that filters people instead of limiting to entities of interest with one clause).#2016-10-0416:51bkamphausmay be others as well, but I’ve found query paths in large dbs that walk through a globally namespaced attribute tend to be performance bottlenecks in many cases (though sometimes restructuring the query or re-ordering it can significantly reduce the impact).#2016-10-0416:57malcolmsparks@bkamphaus ah I see, that helps. In this case there aren't going to be many datoms with :global/has-favorite; the problem we've found is locating entities via lookup-refs to create transactions. Without a global key it complicates the code to create a generic 'has-favorite' relationship.#2016-10-0416:58malcolmsparksI guess in our case with relatively few :global/has-favorite datoms the attribute index would help a lot with performance. Plus, with unique attributes of course there's an index on uuid values.#2016-10-0419:31misha@malcolmsparks: same here: I went with a global :g/id for the ease of creating stuff +, maybe, sync with datascript.
But the system is not in production, so I have no regrets yet.#2016-10-0419:57jfntnIs it possible to specify that an eid should resolve to its ident with the pull syntax?#2016-10-0421:16bhagany@jfntn: no, unfortunately, if you want the ident you’ll need to specify it in the pull expression#2016-10-0421:18jfntnhmm that actually sounds like what I want, not sure I understand?#2016-10-0421:20bhaganywell, it’s a nesting level down from what I understand you to want - you’d want to get {:ref/to-ident :your/ident}, but what you’ll get is {:ref/to-ident {:db/ident :your/ident}}#2016-10-0421:21jfntngot it, and you’re right#2016-10-0513:48danstoneIf I wanted to implement a retry loop for failing cas operations, what aspect of the thrown exception could I rely on to identify unambiguously a cas failure?#2016-10-0514:23pesterhazy@danstone, I'd be interested in this as well#2016-10-0514:30marshall@danstone The string :db.error/cas-failed Compare failed indicates the failure is in the comparison#2016-10-0514:30marshalland the “Compare failed” part only gets thrown when the failure is in the comparison itself#2016-10-0514:32danstonehmm, it feels a bit like relying on undocumented details to me - it may be better to implement a generic transaction fn that throws a known type on not=#2016-10-0514:32danstoneI was hoping there was some ex-info meta or something#2016-10-0514:33danstoneI guess with this sort of remote CAS spin loop, a rather short timeout is appropriate anyway, in which case I can just catch, and if I get false positives it doesn't really matter too much#2016-10-0514:33danstoneThe data I would use this on would obviously have very low (read non-existent in normal usage) contention anyway#2016-10-0514:35danstonethanks for the info though @marshall 🙂#2016-10-0515:40cap10morganfor a cardinality-many keyword attribute, can I write a query that says "the attribute should have all of these values"?
(versus one that just says "this value should be among them")#2016-10-0516:02pesterhazycan't you just add multiple where clauses?#2016-10-0516:03pesterhazy@cap10morgan ^^#2016-10-0516:03pesterhazy[e :attr v1] [e :attr v2] ...#2016-10-0516:04cap10morgan@pesterhazy yeah, but I was hoping there was a way to do it where I could bind a set w/o knowing its size first or having to dynamically build where and in clauses based on it#2016-10-0516:04pesterhazyI see#2016-10-0516:05pesterhazyyou could work around it by adding, say, 3 where clauses and then adding a filter so you don't over-count results#2016-10-0516:05pesterhazyjust a thought 🙂#2016-10-0516:07cap10morganin theory datalog is just data and that's great, in practice it is data that is much simpler to write as literals in a text editor than it is to manipulate programmatically 🙂#2016-10-0516:10pesterhazyoften you can just filter results in clojure, rather than trying to express everything in datalog#2016-10-0516:10pesterhazywhich is much more straightforward#2016-10-0516:11pesterhazybut obviously this has performance implications, depending on the result set#2016-10-0516:45djjolicoeurperhaps someone might be able to help me out here, I just migrated a datomic DB from one dynamoDB table to another, restoring it to a differently named URL in the process, e.g. datomic:ddb://<zone>/foo/name1 -> datomic:ddb://<zone>/bar/name2. I seem to be able to connect to the DB but I am getting invalid-entity on attempting to transact against it, and I can query the value given in the exception just fine. could the renaming via URL be the cause of this behavior?#2016-10-0516:46djjolicoeurthe :db-id in the connection still reflects name1#2016-10-0516:48robert-stuttafordis it on the same transactor, @djjolicoeur ?#2016-10-0516:48djjolicoeurnew transactor#2016-10-0516:49robert-stuttaford@jaret @marshall 🙂#2016-10-0516:54djjolicoeur@robert-stuttaford is the fact that it is not on the same transactor an issue?
we are in the process of restoring to the original name on the new table#2016-10-0517:00tony.kayOur local Datomic expert @rwtnorton wrote a great command-line operations tool for people to do simple queries and data fixes against Datomic using an SQL 92 syntax. Think psql for Datomic. We just open-sourced that at https://github.com/untangled-web/sql-datomic#2016-10-0517:50djjolicoeurit would appear that the renaming via new URL was the culprit. restore to the original DB name on the new table seems to have fixed the issue.#2016-10-0609:53pesterhazy@tony.kay that's pretty awesome!#2016-10-0609:55pesterhazy"Assumes that the Datomic schema makes use of the :entity/attribute convention for all attributes." -- that's a pretty strong assumption though#2016-10-0610:12robert-stuttafordit is a strong assumption. as a data point, we have over 600 attrs and they all follow that convention#2016-10-0610:20val_waeselynckas another data point, we do use the entity-type/attribute convention, but we sometimes mix-in several entity types for the same entity, i.e {:animal/sound "woof" :dog/breed "labrador"}#2016-10-0610:21val_waeselynckso not sure it complies to said constraint#2016-10-0612:29pesterhazy@val_waeselynck are you working at a pet shop?#2016-10-0612:30val_waeselynck@pesterhazy not at all, but mentioning attributes we actually use would be leaking IP#2016-10-0612:30pesterhazysorry was just a joke#2016-10-0612:30val_waeselynck(just kidding, really it's just that it seemed more clear)#2016-10-0612:30pesterhazyFWIW, we also use the :entity-type/attr convention#2016-10-0613:41mishaany examples of other conventions?#2016-10-0613:41mishaor is it about entities having :non-entity-type/attrs, like :db/id?#2016-10-0613:44misha@robert-stuttaford greetings! how do you handle ordinal :db/isComponent false entities? I end up wrapping those in
{:ordinal/idx 2
:ordinal/ref [:user/email "
are there any sound alternatives to this?#2016-10-0615:46pesterhazyone example is cross-cutting attributes, like a created-at attribute#2016-10-0615:47rwtnorton@val_waeselynck the mixed attr case you listed above would be fine as far as the sql-datomic tool is concerned. it is mainly the case where there is no namespace for an attribute that the tool might not take into account.#2016-10-0615:49pesterhazydoes the tool use pull syntax or datalog to pull in attributes?#2016-10-0616:00robert-stuttaford@misha i’m not sure i understand your question#2016-10-0616:00rwtnorton@pesterhazy It grabs the entities with non-pull syntax and then filters out attrs.#2016-10-0616:01pesterhazyit uses d/entity? I see#2016-10-0616:02pesterhazyit's important for the handling of missing attributes (NULL columns)#2016-10-0622:59afhammadwhat's with the negative temp ids?#2016-10-0623:49bvulpes@afhammad they become positive once transacted#2016-10-0701:56djjolicoeuranyone ever have an issue with a datomic connection being null when trying to call transact after a long running query?#2016-10-0702:02djjolicoeurthe transactor appears to be healthy, other peers are transacting. It sometimes recovers and reconnects, other times it does not. I thought it may be due to transactor load, but I can reproduce it locally with no other peers hitting my transactor#2016-10-0711:03pesterhazy@djjolicoeur what exceptions are you seeing?#2016-10-0715:18djjolicoeur@pesterhazy it appears this is due to running several large, concurrent queries. Temporary fix was to bump up Datomic's TTL, but we will likely be rewriting that section of code to be a bit smarter about how those queries are built and run.#2016-10-0715:19bmaysI have a pretty simple question about the query api on a history database — is there any way to take the max txid for a single attribute lookup in the where clause?
(d/q '[:find ?e ?state ?tx
:in $ ?eid
:where
[?e :someEntity/id ?eid]
[?e :someEntity/state ?state ?tx] ;; want to only choose the latest tx value
[?tx :db/txInstant ?inst]] hist-db "<valid id>")
=>
[[17592186484088 :someEntity.state/created 13194139995319] ;; want to choose this value in the query
[17592186484088 :someEntity.state/patientUnresponsive 13194139978521]
[17592186484088 :someEntity.state/created 13194139972983]]
#2016-10-0715:26pesterhazyI'd try filtering in clojure code after the query completes @bmays#2016-10-0715:27bmaysYup, that’s what I’ve been doing but wasn’t sure if there was an easier way#2016-10-0715:27bmaysI was curious if the single tuple syntax exposed a fn to select the tuple it returned#2016-10-0715:28bkamphaus(max ?tx) in the :find clause produces a different grouping behavior than you want?#2016-10-0715:35bmaysI haven’t had success; the grouping isn’t what I expected:
user=> (d/q '[:find ?e ?state-name (max ?tx)
:in $ ?eid
:where
[?e :someEntity/quartetId ?eid]
[?e :someEntity/state ?state ?tx true] ;; want to only choose the latest tx value
[?state :db/ident ?state-name]
[?tx :db/txInstant ?inst]] hist-db #uuid "<totally valid uuid>")
[[17592186484088 :someEntity.state/created 13194139995319]
[17592186484088 :someEntity.state/patientUnresponsive 13194139978521]]
#2016-10-0715:36djjolicoeur@bmays I have no idea if this is a good idea or supported or not, but you can bind ?tx in a query evaluated within the query. I will whip up an example real quick.#2016-10-0715:41bmaysokay thanks, that sounds promising#2016-10-0715:42bmays@bkamphaus — intuitively I would expect this to return the max-tx for all the txs of this entity but it doesn’t seem to be grouping that way:
(d/q '[:find ?e (max ?tx) ?inst
:in $ ?e
:where
[?e _ _ ?tx] ;; want to only choose the latest tx value
[?tx :db/txInstant ?inst]] hist-db 17592186484088)
=>
[[17592186484088 13194139972983 #inst "2016-09-08T18:39:04.471-00:00”]
[17592186484088 13194139974019 #inst "2016-09-08T19:31:56.518-00:00"]
[17592186484088 13194139974022 #inst "2016-09-08T19:32:00.594-00:00"]
[17592186484088 13194139974058 #inst "2016-09-08T19:35:20.979-00:00"]
[17592186484088 13194139974062 #inst "2016-09-08T19:35:48.438-00:00"]
[17592186484088 13194139976273 #inst "2016-09-09T15:23:52.788-00:00"]
[17592186484088 13194139976274 #inst "2016-09-09T15:24:00.969-00:00"]
[17592186484088 13194139978521 #inst "2016-09-12T14:42:47.598-00:00"]
[17592186484088 13194139978522 #inst "2016-09-12T14:43:49.859-00:00"]
[17592186484088 13194139995319 #inst "2016-09-16T17:34:59.882-00:00"]
[17592186484088 13194139995394 #inst "2016-09-16T17:46:15.081-00:00"]
[17592186484088 13194139995605 #inst "2016-09-16T18:01:02.649-00:00"]
[17592186484088 13194139995659 #inst "2016-09-16T18:05:33.927-00:00"]
[17592186484088 13194139995660 #inst "2016-09-16T18:05:34.005-00:00"]
[17592186484088 13194139995668 #inst "2016-09-16T18:05:40.830-00:00"]
[17592186484088 13194140023845 #inst "2016-09-23T15:54:30.171-00:00"]
[17592186484088 13194140037787 #inst "2016-09-28T21:34:13.794-00:00"]
[17592186484088 13194140040978 #inst "2016-09-29T20:59:27.388-00:00"]]
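(Editor's note: bkamphaus' advice just below — take the instant out of the :find clause and fetch it afterwards — can be sketched roughly as follows. This is a sketch only, reusing hist-db, the entity id, and the attribute names from the conversation, plus an assumed current db value db for the txInstant lookup; it is not the participants' code.)

```clojure
(require '[datomic.api :as d])

;; Query the history db for [state tx] tuples only, pick the newest
;; tuple in plain Clojure, then look up :db/txInstant afterwards.
;; Assumes at least one matching datom exists.
(let [tuples (d/q '[:find ?state ?tx
                    :in $ ?e
                    :where [?e :someEntity/state ?state ?tx true]]
                  hist-db 17592186484088)
      [state tx] (apply max-key second tuples)]
  {:state      state
   :tx-instant (:db/txInstant (d/entity db tx))})
```

Keeping the instant out of the :find clause avoids the per-[entity, instant] grouping that defeats (max ?tx) in the result set above.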
#2016-10-0715:45bmaysI suppose what I really want is to group all these txs by attribute and take the max/min of each of those to get created/updated times#2016-10-0715:45bmaysinteresting, didn’t know you could nest#2016-10-0715:46djjolicoeurneither did I, found out a while ago by accident, really. And again, I have no idea if it's a good idea or not!#2016-10-0715:46bmaysI guess we’ll see, I’ll give it a run — thanks#2016-10-0715:47djjolicoeurI didn’t run that so it might need a few tweaks to run, but I think you get the idea#2016-10-0715:47djjolicoeura :with clause might also get you what you need here, instead of a nested query#2016-10-0715:50bkamphaus@bmays I believe the reason it doesn’t is because of the instant. You’re returning the max-tx for all unique [entity, instant] pairs, which w/roughly 1->1 instant correspondence to tx means no meaningful max.#2016-10-0715:50bkamphausI would pull the instant using the tx after the query and take that out of the find clause.#2016-10-0715:51bmaysinteresting#2016-10-0715:52bmaysthat did indeed work#2016-10-0715:53bkamphausin general I try to structure queries that way — provide just enough information to limit your results to the tuples of interest, then use pull or entity to retrieve more information. Roughly splitting what would be the select and project portions of a query in the relational algebra world.#2016-10-0715:59bmaysDefinitely. It’s making sense — I was binding the ?state value and getting back two groupings#2016-10-0716:15bmays@djjolicoeur the subquery did the job, here is the working version:
(d/q '[:find ?e ?state-name ?state-tx-max-inst
:in $ ?e
:where
;; get the max-tx for the attribute :serviceRequest/state by running a subquery for all
;; txs affecting the attribute and taking the max
[(datomic.api/q '[:find (max ?state-tx) .
:in $ ?e
:where
[?e :serviceRequest/state _ ?state-tx true]] $ ?e) ?state-tx-max]
;; get the value of the :serviceRequest/state at the latest tx
[?e :serviceRequest/state ?state ?state-tx-max true]
[?state :db/ident ?state-name]
;; get the last updated time for the :serviceRequest/state attribute
[?state-tx-max :db/txInstant ?state-tx-max-inst]]
hist-db 17592186484088)
#2016-10-0716:15bmaysthanks for your help#2016-10-0716:16djjolicoeurno prob, glad it worked out for you!#2016-10-1007:11val_waeselynckpublished a guide to help get started building Datomic apps: https://vvvvalvalval.github.io/posts/2016-07-24-datomic-web-app-a-practical-guide.html#2016-10-1009:54tengWhat's the best practice for handling concurrent changes to an entity in Datomic?
For example, if both user A and user B retrieve entity X with transaction id 1 and then user A changes attribute X1 and user B changes attribute X2 and then user A updates the entity with transaction id 2, followed by user B updates the entity with transaction id 3.
If the client, by convenience, wants to send all the attributes for the entity to the transactor, how can we make sure that user B doesn't overwrite user A's changes?
Our idea is to pass the transaction id with the entity to the client and back again so that the server can do a diff with that transaction and only update the changed attributes. Is there a better approach?#2016-10-1010:04robert-stuttaford@teng :db.fn/cas#2016-10-1010:04val_waeselynck@teng if you have no automated way of resolving conflicts, there's a transaction function :db.fn/cas (compare-and-set) which would cause one of the updates (in your case B) to fail. To minimize the risk, I suggest you perform a diff#2016-10-1010:05val_waeselynck@robert-stuttaford you beat me to it 🙂#2016-10-1010:05tengThanks!#2016-10-1013:10yonatanelIf I'm using DynamoDB for storage and a datomic transaction completes, is the data already in DynamoDB or is there a period where it's on local disk? Is it safe to destroy the transactor machine and disk immediately when the transaction completes without losing data?#2016-10-1013:13marshall@bmays @djjolicoeur Because of the caching and local execution of query, nested query and/or sequential query are both good options that don’t pay the same kind of penalty sequential query might pay in a more traditional client/server RDB#2016-10-1013:14marshall@yonatanel When a transaction completes successfully the data are guaranteed to be written to durable storage (whatever your backend is) in the log#2016-10-1013:15marshall@yonatanel if your transactor dies immediately after a transaction completes your data will not be lost.#2016-10-1022:32hueypis there a way to mark a ref as cardinality one in both directions?#2016-10-1022:43kenny@hueyp you always have reverse lookups on any ref: http://docs.datomic.com/pull.html#reverse-lookup#2016-10-1023:10hueypthe reverse returns a sequence for a one ref — which makes sense, it's one-to-many … just wondering if there is a way in the schema to tell datomic to enforce one-to-one and then the reverse lookup would not be a sequence?#2016-10-1023:11hueyper many-to-one#2016-10-1023:12hueype.g.
right now I do (-> entity :thing/_foo first) all over the place 😜#2016-10-1104:05robert-stuttafordthink about it, @hueyp — it’ll always be a coll. several things may have entity alpha as its one thing. having datomic enforce that only one thing has entity alpha adds a lot of busy-work to quite a lot of the api for not much gain.#2016-10-1104:11hueypwasn’t sure if there was some unique constraint magic I didn’t know about is all (you can get the behavior with components)#2016-10-1108:48odinodinI’m having trouble using Clojure spec to validate Datomic entities, specifically using clojure.spec/keys since that requires the input to be a map. Any thoughts on that?#2016-10-1108:49odinodinHere’s a simple example:
(s/def ::age number?)
(s/def ::name string?)
(s/def ::person (s/keys :req [::name ::age]))
(def person-entity (d/entity (d/db conn) [:person/name "Mr entity"]))
(s/explain ::person person-entity)
val: #:db{:id 17592186069950} fails spec: :some-ns/person predicate: map?
#2016-10-1108:54danielstocktonYou could call d/touch on the entity first#2016-10-1108:55odinodin@danielstockton that is still not a map#2016-10-1108:56odinodinI’ve made some progress with using a conformer together with and#2016-10-1108:57odinodinbasically this:
(defn to-map [input]
(if (= (type input) EntityMap)
(d/pull (d/db conn) '[*] (:db/id input))
input))
(s/def ::person2 (s/and (s/conformer to-map) ::person))
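(Editor's note: for validation more than one level deep, a naive recursive walk along the lines danielstockton suggests below might look like the following. This is a sketch only, under two assumptions: that the entity class is datomic.query.EntityMap, and that the entity graph has no cycles — this walk would recurse forever on a cyclic ref. entity->map is an illustrative helper, not part of the Datomic API.)

```clojure
(require '[clojure.spec :as s]
         '[datomic.api :as d])

;; Recursively convert an entity (and any nested ref entities) into
;; plain maps so s/keys specs apply at any depth. Cardinality-many
;; refs come back as sets, hence the set? branch.
(defn entity->map [e]
  (cond
    (instance? datomic.query.EntityMap e)
    (reduce (fn [m [k v]] (assoc m k (entity->map v)))
            {:db/id (:db/id e)}
            (d/touch e))

    (set? e) (into #{} (map entity->map) e)
    :else e))

;; Same conformer pattern as ::person2 above, but depth-independent.
(s/def ::person-deep (s/and (s/conformer entity->map) ::person))
```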
#2016-10-1109:00danielstocktonThat looks OK, what's missing?#2016-10-1109:02odinodinthe trouble with pull * is that it only goes one level down#2016-10-1109:06danielstocktontouch works more than one level deep, right? so that might work, if you can convert the touched EntityMap into a clojure map#2016-10-1109:11danielstocktoni thought that pull also worked for nested entities#2016-10-1109:17odinodinI'm looking for a generic solution for how to use spec with datomic, and I'm thinking my current solution is not the best way to go about it#2016-10-1109:18danielstocktoncount me also interested#2016-10-1109:38kristian@odinodin this “trick” will solve your problem#2016-10-1109:41borkdude@odinodin what about https://stackoverflow.com/a/39973918/6264#2016-10-1109:51odinodin@borkdude: interesting. However, I also want it to work with fdef#2016-10-1113:57martinklepschIs there a way to have a persistent datomic database without a separate transactor process? (for development)#2016-10-1114:27jaret@martinklepsch Persistent requires you to use a separate txor via DEV or FREE protocol. If persistent is not a requirement you can use MEM protocol.#2016-10-1114:29martinklepschok. hoped there might be something that doesn't require me to start another process#2016-10-1114:29martinklepschbut it's a database after all so I probably should just accept it 🙂 thanks @jaret#2016-10-1114:33pesterhazywelcome to the wonderful world of datomic, @martinklepsch !#2016-10-1119:50robert-stuttafordyeah, so if you update to the latest version of datomic, make very sure to update your aws-sdk too 🙈#2016-10-1211:33caspercI am wondering, does Datomic do efficient queries using <, >, >= and <= or does it end up doing full scans? We are seeing some bad query performance, compared to an SQL base, so wondering what can be expected.#2016-10-1211:55caspercThe query in question is something like this, which uses coordinates to get addresses inside a bounding box:
'[:find [?a ...]
:in $ ?xmin ?ymin ?xmax ?ymax
:where
[?a :adresse/etrs89koordinat-oest ?x]
[(< ?xmin ?x)]
[(>= ?xmax ?x)]
[?a :adresse/etrs89koordinat-nord ?y]
[(< ?ymin ?y)]
[(>= ?ymax ?y)]]
#2016-10-1212:54marshall@casperc Release 0.9.5130 and later include optimization of range predicates in query (see http://blog.datomic.com/2015/01/datalog-enhancements.html). Are you running on that version or newer?#2016-10-1213:04casperc@marshall: We are running 0.9.5394, so we should.#2016-10-1213:06caspercBasically, I am comparing a query against a datomic base with 3M entities of the type we are looking for and an sql query against a base with the same data.#2016-10-1213:08caspercThe result is about 500k entities and takes about 5s in Datomic and 1.2s in the SQL base. I am just wondering why the big difference.#2016-10-1213:08caspercAnd a count in the SQL base is 60ms and still 5s in datomic#2016-10-1213:09caspercThe :adresse/etrs89koordinat-oest field is a float if that matters#2016-10-1213:09marshallYou might want to review https://github.com/Datomic/day-of-datomic/blob/master/tutorial/decomposing_a_query.clj
In general, you may be able to reorder the datalog clauses to improve the efficiency#2016-10-1213:10marshallWithout knowing more about your dataset specifically it’s hard to recommend which clauses should be moved, but I would start with the two non-range clauses at the top#2016-10-1213:15caspercWell it is basically coordinates and the query is for a bounding box.#2016-10-1213:17caspercSo I don’t think there are any clause that reduces the result more than other#2016-10-1213:20marshallLooking again, since you supply min and max values to the query, you might actually see better performance by moving the non-range statements down. Again, it depends heavily on your dataset and the best approach may simply be a bit of testing.#2016-10-1213:24casperc@marshall: Its about the same with all permutations i can think of, including moving them up and down 😞#2016-10-1213:25caspercIs there any way to debug what the time is being spent on?#2016-10-1213:26marshallThat example I provided from Day of Datomic shows a way to do that#2016-10-1213:26caspercTo see cache hits/misses and that sort?#2016-10-1213:26marshallyou omit clauses from the query and look at how many results are returned#2016-10-1213:26marshallyou could also time individual sections/clauses and groups of clauses#2016-10-1213:27marshallthe peer metrics report cache and memcached hit/miss data#2016-10-1213:27caspercHow would I go about timing individual clauses?#2016-10-1213:28marshalltime the query (with whatever tooling you want) with only individual clauses included#2016-10-1213:29marshallthat may or may not be particularly enlightening, but you can also add additional clauses and groups of clauses and determine which sets and joins are expensive#2016-10-1213:32bkamphaus@casperc you might also test the results of the intersection of the two equivalent calls to index range: http://docs.datomic.com/clojure/#datomic.api/index-range#2016-10-1213:32caspercok, well first off removing the y-coordinate reduces the time it 
takes somewhat#2016-10-1213:33caspercDoes that mean it is doing a full scan for the remaining, and not the index?#2016-10-1213:34casperc@bkamphaus: Ah, will do.#2016-10-1213:36bkamphausto some extent there will just be limits on the performance of this shape of query against Datomic’s indexes vs. an R-tree or something.#2016-10-1213:44caspercWhat would be the best way to make the intersection from an index-range call? Is it possible (and performant) to do inside the query?#2016-10-1213:45caspercEach are very fast (sub-milisecond)#2016-10-1213:51caspercIs something like this possible:
'[:find ?a
:in $ ?xmin ?ymin ?xmax ?ymax
:where
[(datomic.api/index-range $ :adresse/etrs89koordinat-nord ?ymin ?ymax) [?a]]
[(datomic.api/index-range $ :adresse/etrs89koordinat-oest ?xmin ?xmax) [?a]]
]
#2016-10-1213:57bkamphausDidn’t end up testing that myself, you may just be encountering the laziness (re: timing test), so make sure to realize, e.g. with into. I probably wouldn’t make the API call in the query, it’s all going to be realized in memory anyways, just (into #{} (map :e (seq results))) the results or something and clojure.set/intersection the two calls.#2016-10-1214:02casperc(count (time (clojure.set/intersection (into #{} (map :e (d/index-range (d/db (get-conn :kildedata)) :adresse/etrs89koordinat-oest 718333.6321944933 731542.4349335412)))
(into #{} (map :e (d/index-range (d/db (get-conn :kildedata)) :adresse/etrs89koordinat-nord 6170381.489927892 6181147.04591752))))))
"Elapsed time: 1656.835585 msecs"#2016-10-1214:02caspercWhich is an improvement for sure#2016-10-1214:03caspercI tried this
'[:find ?a
:in $ ?xmin ?ymin ?xmax ?ymax
:where
[(datomic.api/index-range $ :adresse/etrs89koordinat-nord ?ymin ?ymax) [[?a]]]
[(datomic.api/index-range $ :adresse/etrs89koordinat-oest ?xmin ?xmax) [[?a]]]
]
But it never finished.#2016-10-1214:08bkamphausCan’t comment on the never finishing query, apart from saying that I’d in general keep index-range, datoms etc. calls out of query (basically you’re doing something from the primitives instead of querying). My guess is the time difference w/index range comes from the necessary structure of the query which starts with what’s probably an aevt lookup to limit to only entities with x and y values of interest, whereas the index-range call makes one pass per constraint just with the avet query.#2016-10-1214:11caspercAlright, cool. Well again it is an improvement for sure, so I hope it will be enough for our use case.#2016-10-1214:12caspercThanks a lot to both of you @bkamphaus and @marshall :+1::skin-tone-2:#2016-10-1214:15drankardThere are options in the transactor config to push metrics to CloudWatch; is there any way of doing this for peers?#2016-10-1214:19bkamphausyou can wire up w/e reporting to peer w/the metrics callback: http://docs.datomic.com/monitoring.html#sec-2-2#2016-10-1214:35drankardI see, but I then have to implement the AWS PutMetricDataRequest, MetricDatum units etc. myself; I was wondering, as this is implemented in the transactor, whether there might be a shortcut. 😉#2016-10-1222:17kennyIs there a way to tell if clojure.lang.ExceptionInfo: Error communicating with HOST is due to not being able to communicate with the host or going over your process limit?#2016-10-1222:37kennyAlso, what is the easiest way to update a license file on a running Datomic transactor deployed with CF?#2016-10-1223:46afhammadis it possible to use pull against the entire db instead of a specific entity?#2016-10-1300:09marshall@kenny I believe the transactor log includes a message when you attempt to exceed your peer limit.
As for updating your license, if you have a paid license the easiest way is to start up a new stack against the ddb table with the new license key (i.e. run the ensure cf script with the new properties file followed by the create stack command). The new transactor(s) will be in standby and you can then kill the old stack. The new transactor will take over via HA#2016-10-1300:10marshallWith a starter license you’d need to take the active one down and start the new one, as starter license doesn’t include ha#2016-10-1300:12marshall@afhammad pull is only against a specific entity. You can use pull-many for multiple entities. If you need all data across the db and can’t specify your entities, you’d want to use query#2016-10-1300:13kennyWe have a pro license. I tried updating the CF stack and it didn’t seem to work. So then, as you suggested, I made a new stack, deleted the old one and that did the trick. Is this the recommended way of updating the transactor properties - creating a new stack and deleting the old one, not using the update stack feature?#2016-10-1300:13marshallhttp://docs.datomic.com/deployment.html#upgrading-live-system yes, that is the recommended approach#2016-10-1300:15marshallProperties are only set on startup so you’d need to restart the transactor either way and the provided cf and ami don’t provide a method to restart the transactor within an active instance#2016-10-1300:16marshallUtilizing HA failover for this purpose will have substantially less downtime than trying to do a restart anyway#2016-10-1300:21afhammad@marshall thanks#2016-10-1300:22afhammadcan a query return a collection of maps instead of vectors where key is the attribute?#2016-10-1300:23marshallNo, query will return tuples.
You’d need to put them in maps yourself#2016-10-1300:23afhammadgot it, thanks#2016-10-1300:23kenny@marshall thanks!#2016-10-1300:25marshallYou can use something like zipmap against a query result to get a map collection#2016-10-1300:26marshall@afhammad ^#2016-10-1308:29yonatanel@afhammad @marshall You can use a pull expression in a query: http://docs.datomic.com/query.html#pull-expressions#2016-10-1308:30yonatanelI do it all the time. I hope it's not too horrible :)#2016-10-1312:16synzvatoWhat's odd is that the ensure-transactor command is able to create the required DynamoDB table. I also have tried to ping DynamoDB Local from the Datomic container, and that worked fine as well. Any help would be greatly appreciated 🙂#2016-10-1312:17synzvatoI'm also not sure why it's trying to reach random ports on 127.0.0.1#2016-10-1312:27marshall@yonatanel definitely using pull in the find specification is great. I couldn't remember if it returned maps, but of course it does and is definitely the right approach for the use case @afhammad mentioned#2016-10-1312:30marshallhttp://docs.datomic.com/query.html#pull-expressions#2016-10-1312:31afhammad@yonatanel @marshall thanks guys#2016-10-1312:37afhammadHow can I run a query that returns all :card/number ids where :card/tags (cardinality/many) has at least one of :tags/selected (cardinality/many), OR return ids for ALL :card/number if :tags/selected is empty.#2016-10-1312:40afhammadthis satisfies the first part, but obviously returns nothing if ?tag-ids is empty
[:find [?e ...]
:where
[_ :tags/selected ?tag-ids]
[?e :card/number]
[?e :card/tags ?tag-ids]]
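(Editor's note: one hedged way to also cover the empty case is to branch on whether any :tags/selected exist and run one of two queries. A sketch only, using the attribute names from the query above; binding the selected tags as a collection also sidesteps having to know the set's size up front.)

```clojure
(require '[datomic.api :as d])

(defn card-ids
  "Ids of cards tagged with at least one selected tag, or all cards
  when no tags are selected. Illustrative helper, not from the thread."
  [db]
  (let [selected (d/q '[:find [?t ...] :where [_ :tags/selected ?t]] db)]
    (if (seq selected)
      ;; restrict to cards whose :card/tags intersects the selection
      (d/q '[:find [?e ...]
             :in $ [?tag ...]
             :where
             [?e :card/number]
             [?e :card/tags ?tag]]
           db selected)
      ;; no selection: every card qualifies
      (d/q '[:find [?e ...] :where [?e :card/number]] db))))
```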
#2016-10-1312:55marshallYou may want to either use an OR clause or consider using two separate queries. The nice thing about the peer model is you don’t pay the same cost for multiple queries (i.e. roundtrip on the wire) that you do with more traditional client/server systems#2016-10-1316:57misha@marshall that is worth being reminded about like once a week :+1:#2016-10-1321:38potetmIs there any way to stop a gc? (e.g. in the event it starts taking up too much write capacity)#2016-10-1321:59potetmMy only guess would be to stop the transactor and let it fail over to the HA. But I'm not sure it would actually stop the job permanently.#2016-10-1405:30robert-stuttaford@marshall and @jaret, one small complaint about the newest Datomic — something about Artemis in either the transactor or peer no longer survives laptop sleep mode. i have to restart everything after waking my laptop, which is annoying when you’re used to running it permanently#2016-10-1405:31robert-stuttaford2016-10-14 07:25:20.128 ERROR - o.a.activemq.artemis.core.client - AMQ214002: Failed to execute failure listener
java.lang.AbstractMethodError: datomic.artemis_client$wrap_as_failure_listener$reify__6542.connectionFailed(Lorg/apache/activemq/artemis/api/core/ActiveMQException;ZLjava/lang/String;)V
#2016-10-1408:44tengWhen I retrieve an entity with the entity function, I get all the attributes, but I also want to know the "current transaction id" that the database is in, so that I can use as-of (later) to check if any changes have been made to the entity.#2016-10-1410:43thegeez@teng (d/basis-t (d/entity-db entity))?#2016-10-1411:30thegeez@teng in datomic there's tx and t and the conversion between the two with d/t->tx & d/tx->t#2016-10-1411:31teng@thegeez Ok, thanks (just figured this out, 10 sec ago!) 🙂#2016-10-1411:34teng@thegeez So there is a 1-to-1 relation between t and tx? Is there any particular reason for this that you know of?#2016-10-1412:05thegeezI don't know for sure. I think t is an increasing int to number the transactions and tx is the entity id for a transaction and there's probably a reason why both are needed 🙂#2016-10-1412:37danielstocktonI know the tx is basically bit-shifted t xor-ed with the entid of :db.part/tx (which is 3)#2016-10-1412:38danielstocktonbut also can't think why both are needed right now#2016-10-1413:35teng@danielstockton maybe a performance optimization.#2016-10-1413:37danielstocktonHave a feeling it's more obvious than that#2016-10-1502:36bhaganyI've been putting some hammock time into how one might spec a datomic entity, and right now I'm pondering what spec's conformance might mean for an entity. My first impulse was that they would conform to maps, which might work. But then, I'm unsure if s/unform from a map to an entity is a valid concept, because for example, the resulting entity wouldn't have anything to return from d/entity-db. I'm also not sure if there's even a way to create an entity outside of a database context. Anyone have any thoughts on any of this?#2016-10-1520:01arohner@bhagany my bigger issue w/ speccing datomic is that the thing you hold in-memory might not be the whole entity.
In spec if you say (s/keys :req [::foo ::bar]), you’re saying in-memory this must have :foo, :bar, which is different from saying the entity must have :foo, :bar. So IMO spec is fine for specific DB queries, but not for general purpose validation#2016-10-1520:01arohnerthough you might be able to do something with transactor functions#2016-10-1520:02bhaganyin my head, the spec would realize keys in memory as it checked for them#2016-10-1520:02bhaganyjust like normal key access for entities#2016-10-1520:04bhaganyI think that isn’t too different from what you’d expect from validating other lazily evaluated things#2016-10-1520:31arohnerso then you have an implicit pull on every validate?#2016-10-1520:31arohnerthat will work in some applications, but not all#2016-10-1522:02bhaganyMore like an implicit touch, but yes, it would potentially be the entire database#2016-10-1522:03bhaganyBut if you spec the whole db, that's kind of expected#2016-10-1605:36achesnaisSo I’m building up a spec to validate an input map, and one key of that map is a datomic database.
For this reason I’ve written a spec that checks for the datomic.db.Db type.
Because I want to be able to exercise etc. my spec, I’ve also written it with a generator:
(s/def :datomic/db
  (s/with-gen
    #(instance? datomic.db.Db %)
    #(sgen/return (let [uri "datomic:"
                        conn (partial d/connect uri)]
                    (d/create-database uri)
                    (d/db (conn))))))
#2016-10-1605:37achesnaisIs this a good way to go?
And how would garbage collection work on the memory databases generated via exercise? Since the memory db doesn't seem linked to a specific var but is created indirectly via a uri, is there a risk that I may fill up my memory with this?#2016-10-1605:38achesnaisOr would the fact that I'm reusing the same uri prevent multiple dbs from being created?#2016-10-1607:51robert-stuttaford@achesnais datomic connections cache, so you'd only pay for one in that instance. i believe it won't gc until you d/release it#2016-10-1607:59achesnaisMakes sense 🙂#2016-10-1814:48dominicmhttp://docs.datomic.com/best-practices.html#use-pull-to-retrieve-attribute-values are there any performance hits/gains from using pull instead of {:find [?every ?attribute ?i ?am ?interested ?in] :where [[?e :blah/is 1] [?e :every ?every] ...]}? I have a lot of queries using this syntax.
I'm guessing I might see some improvement simply because I can more easily re-order clauses (http://docs.datomic.com/best-practices.html#most-selective-clauses-first)#2016-10-1814:51robert-stuttafordi don't think you'll see a massive perf gain by doing that, but you will see cleaner code by using pull#2016-10-1814:51yonatanel@dominicm How does using pull expression make it easier to re-order where clauses?#2016-10-1814:51robert-stuttafordq/q to find which things, and d/pull to look at them#2016-10-1814:52robert-stuttafordeither way, the query cache will almost certainly have your data in local memory by the time :find or d/pull operate#2016-10-1814:52dominicm@robert-stuttaford As long as there isn't a significant hit, I'm happy. I'm trying to optimise a 20ms query that we currently run 2k times…
Hopefully I can get it to run once — when we actually look up the entities. But I can't understand the queries to begin with, so this seems like a good first step.#2016-10-1814:52robert-stuttafordthat sounds like fun 🙂 what are you doing for measurement?#2016-10-1814:53robert-stuttafordi’ve found https://github.com/ptaoussanis/tufte to be simply lovely#2016-10-1814:54dominicmThat's exactly what I am using! I have been wondering if this slipping under the radar isn't an excuse to have some kind of profiling for every request in dev, with significant spikes causing the speaker to notify at you.#2016-10-1814:55robert-stuttafordi love that idea#2016-10-1814:56robert-stuttafordi found us using a datalog query as a sort-by function using tufte -grin-#2016-10-1814:56dominicmhttps://blog.codinghorror.com/performance-is-a-feature/ Stack Overflow have this MVC mini profiler, which is just awesome. It'd be pretty cool to generate one for every request.#2016-10-1814:59robert-stuttafordoh man i’d love that#2016-10-1814:59dominicmhttps://github.com/yeller/clojure-miniprofiler I did play with this a little bit, it was quite fun 🙂#2016-10-1815:00robert-stuttaford😮#2016-10-1815:00robert-stuttafordyou just put something on the very top of my list, sir#2016-10-1815:01robert-stuttafordit has datomic measurements? i wonder how it does that...#2016-10-1815:01dominicmIt has some really neat ways to "label" parts of your code. I guess it's just that#2016-10-1815:02dominicmIt has a custom-timing function, which could be used for datomic:
(custom-timing "sql" "query" my-query-string
  (execute-sql-query my-query))
#2016-10-1815:02robert-stuttafordah, so it cheats 🙂#2016-10-1815:02robert-stuttafordand makes you do the work#2016-10-1815:02robert-stuttafordfair enough#2016-10-1815:02dominicmYep, same as Tufte.#2016-10-1815:05robert-stuttaforddoes it deal with laziness sanely? or do you have to doall the things yourself?#2016-10-1815:34yonatanelI know datomic can support a "tiny" number of databases with the same transactor. Is it also true for abandoned databases that are never connected to anymore? A case might be migrating the data to a new database and leaving the old one as is.#2016-10-1815:48jaret@yonatanel Every database running against a transactor will have to maintain a small amount of overhead. You can delete the abandoned database by calling d/delete database. To be clear, you could also leave abandoned databases out there as it is a very small amount of overhead.#2016-10-1816:58dominicmOkay, I'm a little confused still, I have this query:
'{:find [...]
  :in [$ ?input]
  :where [[?input :e/name ?ea]
          [?aaa :aaa/name ?ea ?tx]
          [?aaa :aaa/locations ?location]
          [?location :location/addresses ?address]]}
and it was performing OK (21ms mean avg) but I needed faster as it's running 2k+ times (If I can optimize that part, I'd love to, but can't see how right now...)
I figured the problem might be that I am doing an AVET lookup from ?ea on the :aaa/name value, so I added an index there, but haven't seen any performance improvement (I did sync-schema).
In case it's relevant, this query is being done on an as-of db.#2016-10-1816:59dominicmI'd like to know how to most efficiently figure out what the expensive lookup is, and how to optimize that part.
I might be getting into difficult territory, in which case I think it's better to re-evaluate if I need to do this, but that'll require me cleaning up an even larger query...#2016-10-1817:02jaret@dominicm Datalog is super nice for query de-composing. I recommend dropping each clause and confirming you have the optimized clause order.#2016-10-1817:02jaretThis example walks through the steps#2016-10-1817:02jarethttps://github.com/Datomic/day-of-datomic/blob/master/tutorial/decomposing_a_query.clj#2016-10-1817:03jaretThis will also tell you the biggest cost clause#2016-10-1817:09dominicm@jaret Thanks, I'll have a look through that tutorial now#2016-10-1817:12dominicmThe biggest hit was 15ms from the second clause. I believe it's due to https://github.com/Datomic/day-of-datomic/blob/master/tutorial/decomposing_a_query.clj#L81
I tried to get round it by adding an index.#2016-10-1817:14dominicmI'm finding a relationship through a shared, non-normalised value. I guess that's the hit, but I thought that was the purpose of an index. Or have I misunderstood?#2016-10-1817:22dominicm{:db/ident :aaa/name
 :db/index false
 :db/id #db/id [:db.part/db]
 :db.alter/_attribute :db.part/db}
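The fix that follows in the conversation (flipping :db/index to true) is a schema alteration; a hedged sketch of the transaction (untested; `conn` is a hypothetical connection, the attribute name is taken from the paste above):

```clojure
;; Editor's sketch, untested: enable the AVET index on the existing
;; attribute, then block until the alteration has been applied.
;; `conn` is a hypothetical connection.
@(d/transact conn [{:db/id :aaa/name
                    :db/index true
                    :db.alter/_attribute :db.part/db}])
@(d/sync-schema conn (d/basis-t (d/db conn)))
```

The d/sync-schema call is what "I did sync-schema" above refers to: it waits until the peer has seen schema changes through the given t.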
#2016-10-1817:23dominicmI'm so stupid#2016-10-1817:23jaretThe false?#2016-10-1817:24dominicmYeah#2016-10-1817:24jaret🙂#2016-10-1817:25dominicmLet's see if that offers any performance improvement 🙂#2016-10-1817:27dominicmMean time: 6.84ms. Excellent!#2016-10-1818:35robert-stuttafordlol 🙂#2016-10-1822:57whitecoopDoes anyone know if there are any other ports that I need to open up besides 4334-4336 to connect peers to a Datomic dev transactor (4336 is open to see the h2 database)?
I've got a transactor running on one box and when trying to connect from a peer on another box I can see how many databases I have from the peer (`(d/get-database-names "datomic:dev://<transactor-box-ip>:4334/*")`) – so the peer is connected – but if I try to create a new db (`(d/create-database "datomic:dev://<transactor-box-ip>:4334/test-db-two")`) I get an exception (`ActiveMQNotConnectedException AMQ119007: Cannot connect to server(s). Tried with all available servers. org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:799)`).
However, running the exact same create-database call on the transactor box succeeds. So it's only an issue with the peer.
I'm wondering if it has something to do with a port I haven't opened that should be and that's why the peer doesn't succeed. Couldn't find anyone with a similar issue on Google. Thanks in advance for any help.#2016-10-1823:07whitecoopWell, it's not ports. Temporarily disabling the firewall didn't fix it.#2016-10-1900:07marshall@whitecoop are you using the same version of Datomic on both peer and the transactor?#2016-10-1900:08whitecoopYep. 0.9.5404#2016-10-1900:08marshallAlso, you need to set the host property in the transactor properties file to an IP both the peer and transactor can resolve as the transactor machine #2016-10-1900:08marshallI suspect that's the issue#2016-10-1900:16whitecoopThat was it. Fixed once I changed the host property from localhost to the actual IP. Thanks#2016-10-1900:17marshall:+1: #2016-10-1913:15mlimotteThe transactor.properties file for my ddb storage contains two AWS role properties: aws-transactor-role and aws-peer-role. Are these properties actually used at runtime by the Transactor? I suspect not, since I don't see any AssumeRole priv granted, so I'm guessing these properties are only for the ensure-transactor. Is that correct?#2016-10-1915:38arohnerI'm not certain, but that sounds right. datomic by itself doesn't use or care about AWS roles, the roles should only be used for cloudformation stuff#2016-10-1915:44marshallRoles also provide permissions for things like DDB and S3 access#2016-10-1916:58val_waeselynckDatalog question: are namespaced symbols officially supported for rule names ?#2016-10-2008:51Matt ButlerQuestion about not clause and cardinality many refs. If an entity has 2 favourites, one where the type is :like and another where it is not (let's say :thumb-up) will this query return the entity or not? In my testing it appears to return the entity.
In English, I think what I'm asking for is a "never satisfies" clause.
[:find ?e
 :where [?e :favourites ?f]
 (not [?f :type :like])]
#2016-10-2009:23teng@teng I think I have the answer. The retrieved attributes are those that can change. The :db/id never changes; it's the actual timeline for the entity!#2016-10-2009:31rauh@teng It's also because .keySet doesn't return :db/id, which is because :db/id is technically not an attribute of the entity.#2016-10-2009:31teng@rauh ok!#2016-10-2010:33val_waeselynck@mbutler not sure what you're trying to achieve, could you reformulate?#2016-10-2010:36dominicmI think the question is, given that :favourites could have many ?f results, and the desire is that if any of them match a clause, then it's not a matching ?e.
Does that make sense?#2016-10-2010:37val_waeselynck@dominicm yeah totally, just wanted to be sure#2016-10-2010:43Matt ButlerSeems to work, not entirely sure why though 😄#2016-10-2010:44Matt Butler@val_waeselynck Thanks so much 🙂, I thought not-join was about specifying which variables unified. Why in this case does it work for what I need?#2016-10-2010:46val_waeselynck@mbutler basically, your previous not clause did not work because it filtered out [?e ?f] pairs, not [?e] singletons#2016-10-2010:48val_waeselynckI'm not sure not-join is entirely necessary, maybe the following would work:#2016-10-2010:48val_waeselynck[:find ?e
:where [?e :favourites ?f]
(not
[?e :favourites ?f1]
[?f1 :type :like])]#2016-10-2010:48val_waeselynckI just prefer not-join because I find it easier to understand what it does at a glance#2016-10-2010:52Matt Butler@val_waeselynck I think I understand, could you help explain why it works on all of the favourites rather than match on any favourite that isn’t the one specified in the not block?#2016-10-2010:53Matt ButlerBecause there is a favourite that belongs to ?e that doesn’t have a :type :like#2016-10-2010:54Matt ButlerSo why does datomic think that satisfies the query, I’m happy it doesn’t but am a little unclear on the logic.#2016-10-2010:55val_waeselynckI'm sorry I don't get what you wrote 🙂 probably too used to Datalog now. Maybe you could show the results with example data and show me what surprises you ?#2016-10-2010:58Matt Butler?e has 2 favourites {:type :like} {:type :dislike} The reading of that datalog query seems to say match where ?e has a favourite where its :type isnt :like. which is correct for one of the favourites, bother of which belong to ?e so why doesn’t that return ?e.#2016-10-2010:59Matt Butlerone of e?s favourites is :type :dislike which isn’t :like so why does the not work across all the favourites#2016-10-2010:59Matt ButlerI agree that is does, and its good that it does just surprising.#2016-10-2011:00val_waeselynckNo, the query says to match ?e where ?e has a favourite, and ?e does not have a favourite which type is :like#2016-10-2011:03val_waeselynckthe key difference is that the two mentioned favourites don't have to be the same#2016-10-2011:04Matt ButlerOkay, so not works in the way I had hoped I just needed to structure my query differently 🙂 When applied across cardinality many refs it works like saying none? Is this correct?#2016-10-2011:06Matt Butler(not
[?e :favourites ?f1]
[?f1 :type :like]) Is saying that ?e does not have a favourite that has a type :like#2016-10-2011:08val_waeselynck@mbutler yes and no, it depends if ?f1 is externally bound#2016-10-2011:08val_waeselynck(not-join [?e]
[?e :favourites ?f1]
[?f1 :type :like])
#2016-10-2011:08val_waeselynck^ this query definitely says so#2016-10-2011:11Matt ButlerI think I'm now on the same level; one final bit leads me to wondering why this doesn't work
(not
[?e :favourites ?f]
[?f :type :like])
Why does this give db.error/insufficient-binding [?i ?f]? And is the [?e :favourites ?f] that you put outside the not just to prevent this error?#2016-10-2011:12Matt ButlerThe [?e :favourites ?f] is because we want to say that the [?e :favourites ?f] is a relation we want?#2016-10-2011:13val_waeselynckI don't know exactly if this is a theoretical limitation or a practical one. The Datomic team will be of more help than me for that#2016-10-2011:16Matt ButlerI'm going to say that [?e :favourites ?f] gets you an entity with favourites, then the
(not-join
[?e :favourites ?f]
[?f :type :like])
Says only return this entity if it doesn't satisfy this "internal query/clause". Treat them as separate like you were saying. @val_waeselynck Thanks so much for your help 👍 !!#2016-10-2012:23caspercI am wondering, are connections cached by datomic itself, or does it make sense to cache the connections in an atom in the application?#2016-10-2012:29val_waeselynck> Connections are
> cached such that calling datomic.api/connect multiple times with
> the same database value will return the same connection object.#2016-10-2012:29val_waeselynckfrom the doc of datomic.api/connect#2016-10-2012:30caspercAh thanks 🙂#2016-10-2013:16marshall@mbutler It helps me to think of not clauses along the lines of: consider everything in the not in ‘isolation’ - it will match a set of datoms; remove that set of datoms from the set matched by everything else in the query.
in other words, the stuff matched by the 'not' is "subtracted" from the result set#2016-10-2013:54val_waeselynckDatalog question: are namespaced symbols officially supported for rule names ?#2016-10-2013:54val_waeselynck(already asked above but not sure people saw it)#2016-10-2013:56Matt Butler@marshall Absolutely, that's how I began to think of it, like 2 queries where the return of one was removed from the other 🙂#2016-10-2015:24Matt Butler(d/q '[:find ?e
       :in $ ?a
       :where
       [?e :favourite ?i]
       (not-join [?e]
         [?e :favourite ?i]
         [?i :size ?x]
         [(> ?x ?a)])]
     db 10)
This code returns an insufficient-binding error for ?a. However, in a similar query where you pass in ?a and use it as the value in a data clause as per below, rather than inside an expression clause, there is no error.
(d/q '[:find ?e
       :in $ ?a
       :where
       [?e :favourite ?i]
       (not-join [?e]
         [?e :favourite ?i]
         [?i :size ?a])]
     db 10)
To get the query in the first snippet to work, you need to specify ?a as one of the variables to unify, despite not wanting to use it outside the not-join.
(d/q '[:find ?e
       :in $ ?a
       :where
       [?e :favourite ?i]
       (not-join [?e ?a]
         [?e :favourite ?i]
         [?i :size ?x]
         [(> ?x ?a)])]
     db 10)
Any explanation for this behaviour? Thanks again 🙂#2016-10-2015:30marshallThe comparator (>) requires both arguments to be bound before it can execute
The second example will match all possible values to ?a#2016-10-2015:30marshallif you don’t have ?a in the list of bound variables it is effectively a different variable inside the not clause#2016-10-2015:31marshallso to bind it to 10, you must include it in the not-join join list#2016-10-2015:34Matt Butler@marshall Aha yes, thanks, turns out i was just matching on any val of ?a in the second example it was just giving the expected behaviour anyway. That makes much more sense and is consistent 🙂 Thanks again 🙂#2016-10-2015:34marshallno problem 👍#2016-10-2015:37Matt Butler@marshall Is a query such as the 3rd example capable of being expressed as a rule before I try to do so?#2016-10-2015:40marshallsure. pretty much any set of datalog clauses can be in a rule
from the docs:
"rules can contain any type of clause: data, expression, or even other rule invocations."#2016-10-2015:41Matt ButlerYep I read that and had concluded it was possible but thanks for the reassurance 🙂#2016-10-2017:10marshall@hunter Yes, if you’re restarting the transactor the peer will report a lost connection (what you’re seeing)#2016-10-2017:11marshallyou should see retry and reconnection once your transactor is replaced or becomes available again#2016-10-2017:35hunter@marshall All but one peer did not retry/reconnect, however it's transaction queue was still operational#2016-10-2017:56hunterIf it helps, the peer that is not reconnecting is using an asOf database filter, however the t in the log is increasing with each message from the transaction queue#2016-10-2017:59marshallif it’s getting new novelty from the replacement transactor, it has reconnected. that transition may have occurred seamlessly without reporting an error#2016-10-2018:00marshallbut if the original transactor is gone and a peer is getting new transactions, it must be connected to the newly active transactor#2016-10-2018:22potetmHere's a question: Is there a way to make something like this work?
(d/q '[:find ?out
       :in $
       :where
       (or-join [?e ?out]
         (and [?e :a1]
              [(unify ?e ?out)])
         (and [?e :a2]
              [?out :subcomponent ?e]))]
     [[123 :a1 "foo3"]
      [456 :a1 "foo4"]
      [456 :subcomponent 789]
      [789 :a2 "bar"]])
#2016-10-2018:23potetmunify is obviously not a thing. What I'm kind of wanting to do there is say (== ?e ?d)#2016-10-2018:23marshall(ground)#2016-10-2018:23potetmPoint being: I've found some entity that may be a top-level type, or it may be a subcomponent. But I want to return the top-level type.#2016-10-2018:24marshallerm. ground requires a const tho#2016-10-2018:24potetmyeah.... 😕#2016-10-2018:26marshall[(identity ?e) ?out]#2016-10-2018:26marshalli think that might do it, although i’m not crazy about it#2016-10-2018:26potetmI guess the obvious thing to do is just run separate queries. Just wondering if there's some functionality I'm missing.#2016-10-2018:27potetmHmmm, if you're not crazy about it, I'm gonna avoid that then.#2016-10-2018:27marshalli.e. use a built in clojure function that returns the value right back to you and assign that value to a variable#2016-10-2018:27marshallwell, it’s not the use, it’s more the putting conditional logic in a query#2016-10-2018:27potetmgotcha#2016-10-2018:27marshalli’d probably tend toward separate queries#2016-10-2018:28marshallwith conditional logic in your application code#2016-10-2018:36bkamphaus@potetm I’m not sure I follow what you’re doing in that query.#2016-10-2018:37bkamphausis:
(d/q '[:find ?out
       :in $
       :where
       (or-join [?out]
         [?out :a1]
         (and [?e :a2]
              [?out :subcomponent ?e]))]
     [[123 :a1 "foo3"]
      [456 :a1 "foo4"]
      [456 :subcomponent 789]
      [789 :a2 "bar"]])
not equivalent?#2016-10-2018:43potetm@bkamphaus Yeah that is. I apparently oversimplified the example.#2016-10-2018:43potetmSomething like this:
(d/q '[:find ?out
       :in $ [?e ?a]
       :where
       [?e ?a]
       (or-join [?e ?out]
         (and [?e :a1]
              [(identity ?e) ?out])
         (and [?e :a2]
              [?out :subcomponent ?e]))]
     [[123 :a1 "foo3"]
      [456 :a1 "foo4"]
      [456 :subcomponent 789]
      [789 :a2 "bar"]]
     [789 :a2])
#2016-10-2018:44potetm(Actual use case is we found some tx-data in the log by attr, and want to find the top-level entity.)#2016-10-2019:15bkamphaushmm, something somewhere feels off to me, but not entirely sure. I guess it’s just the fact that as Marshall indicated it feels more like branching logic than a set union, which is why identity seems like a hack there. But is the contrived example a match, where you only care about :a1 and :a2 on ?e — and ?e ?a are passed from tx data? Why provide ?a — aren’t you only passing in ?e values you know you care about without additionally limiting by some other attribute which occurs with the datom?#2016-10-2019:20potetmSo, in the real case, I'm passing in ?a to scan the logs for changes to certain attributes. In the end, I want all ?e that have ?a, or that has children that have ?a, where ?a is bound to a collection of attrs.#2016-10-2019:20potetmSo yeah, having typed that out, that sounds a lot more like branching logic than set unions.#2016-10-2019:36bkamphausthat description seems a bit of a mismatch for the queries you showed, maybe I’m just missing something. I.e. the last description sounds like:
(d/q '[:find ?out
       :in $ [?a ...]
       :where
       (or-join [?out ?a]
         [?out ?a]
         (and [?e ?a]
              [?out :subcomponent ?e]))]
     [[123 :a1 "foo3"]
      [456 :a1 "foo4"]
      [456 :subcomponent 789]
      [512 :subcomponent 332]
      [332 :idontcare "snafu"]
      [500 :subcomponent 123]
      [789 :a2 "bar"]]
     [:a2 :a1])
But it sounds as though you're passing in the log and binding tx-data to values. Are you then actually hard-coding the attributes that identify a parent versus a child like in the example or-join? The interactions between them are where I'm lost.#2016-10-2019:46bkamphausif you just want to drop the subcomponents with that kind of exhaustive logic, something like:
(d/q '[:find ?out
       :in $ [?a ...]
       :where
       (or-join [?out ?a]
         [?out ?a]
         (and [?e ?a]
              [?out :subcomponent ?e]))
       (not [_ :subcomponent ?out])]
     [[123 :a1 "foo3"]
      [456 :a1 "foo4"]
      [456 :subcomponent 789]
      [512 :subcomponent 332]
      [332 :idontcare "snafu"]
      [500 :subcomponent 123]
      [789 :a2 "bar"]]
     [:a2 :a1])
#2016-10-2019:48potetmYeah dropping attrs is probably too much. Not worth it just to avoid doing separate queries. I'm only interested in a small number (4) attrs in the tree.#2016-10-2019:48potetmHmm... I'm doing very poorly at this. How about I start with the top-level use case?#2016-10-2019:49potetmSo I'm looking through the log for 4 attrs that have changed. Two of those attrs are on sub-entities, two are on the top-level entity.#2016-10-2019:49potetmBut I'm only interested in getting the top-level entity.#2016-10-2019:54potetmThis example might be more realistic? I have 2 data sources, one that represents the db, one that represents the log.#2016-10-2019:54potetm(d/q '[:find [?out ...]
       :in $d1 $log
       :where
       ($log or
         [?e :parent/attr]
         [?e :child/attr])
       ($d1 or-join [?e ?out]
         [?out :parent/attr]
         (and [?e :child/attr]
              [?out :parent/child ?e]))]
     [[123 :parent/attr "foo3"]
      [456 :parent/attr "foo4"]
      [999 :parent/attr "shouldn't show"]
      [456 :parent/child 789]
      [789 :child/attr "bar"]]
     [[123 :parent/attr "foo3"]
      [123 :parent/attr "foo2"]
      [789 :child/attr "bar0"]
      [789 :child/attr "bar"]
      [000 :dont-care "blerg"]])
#2016-10-2019:55potetm=> [123 456 999]
#2016-10-2019:57potetmI'm probably missing something, but it seems like I need to say (== ?out ?e) in that first or-join.#2016-10-2019:59bkamphausit’s because you don’t bind the :parent/attr and :child/attr distinctly.#2016-10-2020:01potetmAh, so ?p ?c instead of both as ?e#2016-10-2020:02bkamphausstill thinking through query structure.#2016-10-2020:11bkamphaussorry, don’t think that fixes the issue but it seems incorrect to treat them the same initially then try to split them later. In that case it seems to come down to not being able to use different data sources within one or.#2016-10-2020:13potetmI agree that it seems incorrect. Yeah, multiple sources in an or might fix it, but I agree that it's kind of a ridiculous ask.#2016-10-2111:37Matt ButlerHi again 🙂
(d/q '[:find ?e
       :where
       [?e :likes ?i]
       [(< (count ?i) 5)]]
     db)
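One hedged alternative (an editor's sketch, untested, assuming the :likes attribute above): aggregate the count per entity in Datalog, then do the comparison in application code:

```clojure
;; Editor's sketch, untested: count ?i per ?e with an aggregate,
;; then filter the [entity count] pairs in ordinary Clojure.
(->> (d/q '[:find ?e (count ?i)
            :where [?e :likes ?i]]
          db)
     (filter (fn [[_e n]] (< n 5)))
     (map first))
```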
:likes is a cardinality many ref; is there a way for me to filter/use in query clauses the number of ?i entities an ?e has?#2016-10-2112:43val_waeselynck@mbutler you'll have to use 2 queries for that#2016-10-2113:13jonpitherHi - using Cloud Formation Templates to set up in AWS#2016-10-2113:13jonpitherI've done it successfully with 2 other transactors, but for this new one I'm seeing this in the system logs:
user-data: ./startup.sh: line 26: kill: (1886) - No such process
/dev/fd/11: line 1: /sbin/plymouthd: No such file or directory#2016-10-2122:03matthaveneris it possible to restore a datomic backup to a memory database? or will I have to start a local dev transactor, etc?#2016-10-2122:06marshallYou can't restore into mem#2016-10-2222:25dominicmWith datomic, is there any optimization for "the most recent" or "the lowest"? To avoid doing a sort over a large dataset#2016-10-2222:28bkamphaus@dominicm if your query can be phrased in a form that makes use of the log: http://docs.datomic.com/best-practices.html#use-log-api that would work to the most recent case. For your own indexed attr/values of interest, there are many cases with comparison clauses where the :avet index is leverages, and you can also manually make use of index-range ( http://docs.datomic.com/clojure/#datomic.api/index-range ) when it makes sense to invoke the lower level API calls directly.#2016-10-2222:34dominicm@bkamphaus I'll have to dig into those apis. I was a bit scared of touching the log.
Are there any good links on making sure I don't abuse them? Or is it covered in the best practices doc?#2016-10-2222:38bkamphausI’d say it’s mostly covered in that doc. I would always start with query and only change if you need to optimize (and have measured differences, etc. of course). For Log API, the starting point would to be to make use of log in query: http://docs.datomic.com/log.html (last section)#2016-10-2300:27marshall@dominicm Little bit of shameless self promotion regarding using the log: http://blog.datomic.com/2016/08/log-api-for-memory-databases.html?m=1#2016-10-2416:16timgilbertSay, if I want to include another jar in the datomic transactor, do I just copy it into the resources directory? Specifically, I'm trying to get this logback appender working: https://github.com/logzio/logzio-logback-appender#2016-10-2416:24marshall@timgilbert Yes, the resources dir should be on the classpath if youre using the bin/transactor script to launch#2016-10-2416:24timgilbertGreat, thanks#2016-10-2418:56timgilbertHmm, actually it appears that doesn't work, the server gets started with java -server -cp lib/*:datomic-transactor-pro-0.9.5404.jar:samples/clj:bin:resources#2016-10-2418:57timgilbert...but just sticking a jar file inside resources doesn't work, I think the jar needs to be explicitly named#2016-10-2418:58timgilbertI tried sticking it in lib but the glob lib/* is never actually expanded#2016-10-2419:03timgilbertMaking this change to bin/classpath seems to do the trick:
$ diff /tmp/cp bin/classpath
6c6
< s="`echo lib/*`:`echo *transactor*.jar`"
---
> s="lib/*:`echo *transactor*.jar`"
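The diff above works by forcing the shell to expand the glob; a small demonstration of the difference (the `/tmp/cpdemo` path and jar names are made up for the demo — note java itself only expands a classpath entry of the exact form `dir/*` when it reaches java unexpanded):

```shell
# Hypothetical demo directory: two jars under lib/
mkdir -p /tmp/cpdemo/lib
touch /tmp/cpdemo/lib/a.jar /tmp/cpdemo/lib/b.jar
cd /tmp/cpdemo
echo lib/*     # unquoted: expanded by the shell -> lib/a.jar lib/b.jar
echo 'lib/*'   # quoted: passed through literally -> lib/*
```

This is why `\`echo lib/*\`` in the patched `bin/classpath` yields an explicit jar list, while the original `lib/*` reached java as a literal wildcard.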
#2016-10-2419:09timgilbertHmm, on further inspection it looks like some kind of dependency problem with the appender, will poke around more.
Oct 24 19:05:19 ip-172-31-14-11 transactor[25414]: Reported exception:
Oct 24 19:05:19 ip-172-31-14-11 transactor[25414]: java.lang.NoSuchMethodError: ch.qos.logback.core.Context.getScheduledExecutorService()Ljava/util/concurrent/ScheduledExecutorService;
Oct 24 19:05:19 ip-172-31-14-11 transactor[25414]: #011at io.logz.logback.LogzioLogbackAppender.start(LogzioLogbackAppender.java:166)
Oct 24 19:05:19 ip-172-31-14-11 transactor[25414]: #011at ch.qos.logback.core.joran.action.AppenderAction.end(AppenderAction.java:96)
#2016-10-2419:16timgilbertAh, and I just realized that Java itself supports the foo/* glob syntax, so the shell has nothing to do with it#2016-10-2419:25timgilbertOk, peering at the code more closely, it seems as though the problem is that the appender uses logback-classic 1.1.7, but the transactor itself uses 1.0.13#2016-10-2419:39joshgDoes Datomic still have a 10 billion datom limitation?#2016-10-2419:48joshgI was looking into using it for a data warehousing application instead of data vault.#2016-10-2419:53robert-stuttafordit’s not a hard limit @joshg, but an anticipation of the likely max size that a peer process can deal with#2016-10-2419:53robert-stuttafordbecause peers need to hold onto all the roots of the index and still have space for actual data#2016-10-2419:53robert-stuttafordtheoretical limit. if you have biiiig instances, your limit is higher#2016-10-2419:54robert-stuttafordafaik nothing in the code imposes a limit#2016-10-2419:54robert-stuttafordi’m not sure about the entity id space limitations though, perhaps @stuarthalloway can share 🙂#2016-10-2419:57joshgThank you for the clarification, that makes sense. So it’s possible to use Datomic with more datoms than 10 billion, but perhaps that wouldn’t be its best use case.#2016-10-2509:57robert-stuttafordwell, it’s all up to your hardware and your overall performance requirements 🙂#2016-10-2509:58robert-stuttaford… and how well you’ve partitioned things#2016-10-2510:03val_waeselynck@joshg have you seen this thread ? https://groups.google.com/forum/#!topic/datomic/iZHvQfamirI#2016-10-2513:04marshallkey point from that thread: https://groups.google.com/d/msg/datomic/iZHvQfamirI/RANYkrUjAEwJ#2016-10-2513:04marshall"10 billion datoms is not a hard limit, but all of the reasons above you should think twice before putting significantly more than 10 billion datoms in a single database.
"#2016-10-2513:40kirill.salykinIs there a way to rotate dbs and move outdated datoms to “archive” db?#2016-10-2513:40kirill.salykinkinda log rotate#2016-10-2513:44potetm@kirill.salykin You can "pour" select datoms into a new db, and then archive the old db.#2016-10-2513:44potetmI don't believe there's a tool or anything for it, since it's highly dependent on your data structure.#2016-10-2513:44kirill.salykinI guess there is not built-in#2016-10-2513:44kirill.salykinmakes sens#2016-10-2513:44kirill.salykinthanks#2016-10-2513:45potetmAll of the tools are at your disposal though. The log, the history db, the current db.#2016-10-2620:54jdkealyis it possible to change an attribute's type in datomic if I don't yet have any data associated with it ?#2016-10-2620:54jdkealyi accidentally made an attribute an instant, but i want it to be a keyword...#2016-10-2620:57jdkealycould i just find the attribute's ID and excise it ?#2016-10-2622:47luposlipYou cannot alter the :db/valueType.
But you should be able to delete it, and recreate it with the correct type.#2016-10-2705:30robert-stuttafordyou can’t delete, but you can rename#2016-10-2705:30robert-stuttafordhttp://docs.datomic.com/schema.html#sec-6 @jdkealy @luposlip#2016-10-2706:18luposlip@robert-stuttaford I was thinking about Excision. But I realize excision is only for data @jdkealy.#2016-10-2711:43Matt ButlerAre there any recommended JVM settings/args for an application running a Datomic peer? heapsize etc.#2016-10-2712:40jaret@mbutler http://docs.datomic.com/capacity.html#peer-memory#2016-10-2712:41marshall@jdkealy The recommended approach would be as suggested by @robert-stuttaford : to rename the incorrect attribute (i.e. myname-DEPRECATED) and create a new, correctly typed attribute for use using the now-freed original name#2016-10-2715:10Matt Butler@jaret Thanks for this, super useful 🙂#2016-10-2715:15mishaWill deprecated attribute show up in pulls? Or I'd need to retract the datoms too?#2016-10-2715:19Matt ButlerOn a unrelated note If I have a query that is concerned about the existence of datums that were transacted around a certain time period (say 2 weeks ago from whenever the query is run) how is it best to test this query? Do I add datums and overide the :tx/instant on the transaction to be 2 weeks ago or do I allow my query to pretend now is 2 weeks in the future and transact the data to query as normal?#2016-10-2715:26marshall@misha you mean if you pull * ?#2016-10-2715:27marshallin the case mentioned above, there were no datoms yet transacted against the attribute
if you pull everything, then yes the deprecated ones will show up, so you’d need to retract them#2016-10-2715:31mishaThanks, @marshall #2016-10-2715:31marshall👍#2016-10-2716:11karol.adamieci am going to use peer as http rest for now. are there any resources discussing how to deploy such peer on AWS?#2016-10-2721:59jfntnI’m trying to add aleph to a project that uses datomic-pro, but I’m running into deps conflicts over netty#2016-10-2721:59marshall@jfntn what version of Datomic?#2016-10-2722:00jfntnUsing the latest version of both i.e. [com.datomic/datomic-pro “0.9.5404”]#2016-10-2722:01marshallthere was some discussion of this here: https://groups.google.com/d/topic/datomic/pZombLbp-tQ/discussion#2016-10-2722:01marshalli believe that thread is discussing an older version of Datomic#2016-10-2722:02marshallthe latest uses a much more recent netty#2016-10-2722:02jfntnLooks like aleph depends on Netty 4.1.0, while datomic pro -> activemq ->Netty 4.0.39#2016-10-2722:03jfntnI tried to use aleph’s version of netty and that breaks datomic#2016-10-2722:19jfntnI just tried to run aleph’s test suite with netty 4.0.39 but it won’t compile, looks like a catch22 here?#2016-10-2722:20jfntn@marshall thanks saw this thread but the solution at the end didn’t work for me#2016-10-2809:51andersdatomic throws exception when i'm (speculatively) applying new attributes with a database as-of some point in time / txid, e.g.:
(d/with (d/as-of db #inst "2016-10-10")
        [{:db/id #db/id[:db.part/db]
          :db/ident :some/attribute
          :db/valueType :db.type/instant
          :db/cardinality :db.cardinality/one
          :db.install/_attribute :db.part/db}])
is this unsupported by datomic or am i missing something here?#2016-10-2809:52andersworks fine when applying to db returned from d/db, i.e (d/with (d/db conn) [...tx data...])#2016-10-2810:18robert-stuttafordyou can’t as-of AND with, @anders ; as-of is a filter and will filter out all the with-provided data too#2016-10-2810:20robert-stuttaford.... at least, that’s what should happen. d/with basically ignores d/as-of and uses the latest db to build on, but then as-of filters it#2016-10-2810:20andersi certainly works when the data provided isn't attribute definitions#2016-10-2810:21robert-stuttafordit has weird interactions. basically, as-of + with don’t compose as you’d expect#2016-10-2810:21andershmm i see#2016-10-2810:21robert-stuttafordmay i ask what the idea behind speculative schema is?#2016-10-2810:22robert-stuttafordcurious what problem it’s solving 🙂#2016-10-2810:24anderswe're applying inferred facts to the db, so that much of our business logic can assume (non-durable / inferred) facts actually is part of the databaase#2016-10-2810:24robert-stuttafordah so you’re modelling temporary data#2016-10-2810:24andersyeah#2016-10-2810:24robert-stuttafordyou could make in-memory dbs and datalog-query across it and durable db#2016-10-2810:25andersinferred i would say#2016-10-2810:25robert-stuttafordit’s a bit extra work because now you have to add representations of durable entities to mem db#2016-10-2810:25andersto solve the problem in question, we will just add the inferred attributes to the stored schema#2016-10-2810:26andersattributes used for the inferred facts i mean#2016-10-2810:26robert-stuttafordyeah#2016-10-2810:27anderswe're heavily relying on entity refs, so multiple databases would not work#2016-10-2810:29andersthanks for the insight, @robert-stuttaford#2016-10-2810:42andersobviously adding the inferred attributes to the stored schema won't help; as you say, d/with ignores d/as-of. 
what a shame 😞#2016-10-2810:53magnars@anders @robert-stuttaford Would it work to "reimplement" as-of by doing a since and transacting the reverse of all those datoms using with?#2016-10-2811:59robert-stuttafordgoing to have to digest that for a bit, @magnars 🙂#2016-10-2811:59robert-stuttafordbeen a long week#2016-10-2812:55bkamphausI will comment that much of the goal of having a feature like 'with' is prospective. What if I did X. The X is then something you can do. If you apply 'with' to a past db state with as-of you won't be able to then transact datoms to persist that resulting database. You could, kind of as a cleaner suggestion in the same vein as @magnars , just retract all of the since datoms with one 'with' and then consider the resulting db value with the assertions in another 'with'.#2016-10-2812:56bkamphausThat then corresponds to a database value that could be persisted/made the canonical db with those transactions.#2016-10-2814:25marshallDatomic 0.9.5407 is now available https://groups.google.com/d/topic/datomic/OU-EasU3i2A/discussion#2016-10-2814:31karol.adamiecis there a well supported method of deploying a REST peer for datomic?#2016-10-2814:31karol.adamiecAMI image?#2016-10-2815:14jaret@karol.adamiec there is no automatic option. However, the command is 1 line and could be worked into a CF script or your own AMI. http://docs.datomic.com/rest.html#2016-10-2815:31karol.adamiec@jaret so any generic amazon ami should work. 
I copy over the /bin folder to s3 and set the download then start command in user data?#2016-10-2815:33marshall@karol.adamiec there are a number of approaches you could take, but yes, you should be able to use an Amazon linux AMI, get the datomic distribution (either by wget from my-datomic or via an s3 of your own config), unzip it, and start the REST peer#2016-10-2815:40karol.adamiecok, got it.#2016-10-2815:40karol.adamiec./rest -p 8001 dev "datomic:<ddb://eu-west-1/dev-datomic>"#2016-10-2815:41karol.adamiecso start command like aobve gives me acess to any db on dev-datomic table ?#2016-10-2815:42marshallyes#2016-10-2815:43marshallyou’ll want to control access to the REST endpoint via IAM or IP config or whatever your preferred method#2016-10-2815:44karol.adamieci was thinking that rest endpoint is absolutely internal inside VPC, with access only from my backend bridge#2016-10-2815:44marshallthat would be fine too as long as you’ve configured your instance to only allow access from with the VPC#2016-10-2815:47karol.adamiecfor sure, will lock that down :+1:#2016-10-3010:48caspercI am looking at the tx-report-queue and I am wondering if there is any way to know if the queue has been removed, closed or lost?#2016-10-3113:20marshall@casperc per: https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html
“ A BlockingQueue does not intrinsically support any kind of "close" or "shutdown" operation to indicate that no more items will be added. The needs and usage of such features tend to be implementation-dependent. For example, a common tactic is for producers to insert special end-of-stream or poison objects, that are interpreted accordingly when taken by consumers."#2016-10-3115:11yonatanel@marshall If you lose the reference to the queue will it be deallocated?#2016-10-3116:08colindresjIf I update some of the values on a component entity as part of a transaction on the parent entity, I’ve noticed that a new component entity is created, instead of it’s values being updated. Is there some way to update component attribute values as part of a transaction on the parent?#2016-10-3116:16colindresjIE:
;; What I’m currently seeing:
(:person/address (d/pull db '[{:person/address [*]}] tony-soprano))
;; => {:db/id 1234 :address/city "Newark" :address/state "NJ"}
(d/transact
 conn
 [{:db/id (:db/id tony-soprano)
   :person/first-name "Tony"
   :person/address {:address/state "New Jersey"}}])
;; => #object
(:person/address (d/pull db '[{:person/address [*]}] tony-soprano))
;; => {:db/id 5678 :address/state "New Jersey"}
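For reference, supplying the component entity's own :db/id inside the nested map makes the transaction assert against the existing component instead of minting a new one; a sketch using the illustrative ids from the snippet above:

```clojure
;; sketch: :db/id 1234 is the existing component entity, so the nested
;; map updates it in place rather than creating a fresh address entity
(d/transact
 conn
 [{:db/id (:db/id tony-soprano)
   :person/address {:db/id 1234
                    :address/state "New Jersey"}}])
```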
#2016-10-3116:31colindresjDiscovered that adding in :db/id as part of the nested component prevents that from being automatically assigned a new one#2016-10-3119:16timgilbertHi all... Is there an easy way to see whether a datomic transactor is the primary or failover transactor (in a set of 2)? I'm thinking about trying to run automated datomic backups from the failover one only#2016-10-3119:29jaretThe log will report a standby metric#2016-10-3119:36jaretThe active transactor writes its location into storage for peers to find it. The passive or standby transactors do not so you would have to look into the transactor logs for the metric#2016-10-3120:11domkmWhy doesn't datomic.query.EntityMap implement Clojure's metadata interfaces?#2016-10-3120:14timgilbertThanks @jaret, will see what I can do with that#2016-11-0114:03Matt ButlerIs there a simple way to tell pull api to return all attributes on the entity but no component entities? Wildcard with no depth I guess.#2016-11-0114:26jaret@mbutler there is no way to limit the component entities from the pull api.#2016-11-0114:26Matt ButlerSo the best/only way is to specify all the attributes you want in the vector?#2016-11-0114:27kirill.salykinor you can load them lazily via Entity#2016-11-0114:29kirill.salykinhttp://docs.datomic.com/entities.html#2016-11-0114:29Matt Butler@kirill.salykin cheers 🙂#2016-11-0117:41karol.adamiecis there a literal syntax for uri in datomic? i get errors when trying to insert facts?#2016-11-0117:41karol.adamiec:bracket/thumbnail "assets/thumbs/gt-c1.jpg”#2016-11-0117:41karol.adamiecthat one fails saying that it is not a valid uri#2016-11-0117:49karol.adamiechow does one insert an URI using REST api 🤔#2016-11-0117:53jaret@karol.adamiec :db.type/uri maps to java.net.URI.#2016-11-0117:54jaretso... @(d/transact conn [[:db/add #db/id[:db.part/user] :some/image (java.net.URI. "")]])
#2016-11-0117:58jaretshould be something like... curl -X POST -H "Content-Type: application/edn" -d
"{:tx-data [{:db/add #db/id[:db.part/user] :foo (java.net.URI. "")}]}"#2016-11-0118:18Matt ButlerWhen querying :tx-data return from a transaction, is there a way to refer to attributes by their db:ident rather than their :db/id?#2016-11-0119:01georgekI haven’t made any account or permission changes in the meantime. I also have a dev server that hits the same store if that makes any difference#2016-11-0119:01georgekBasically it requires me to completely rebuild my environment to get working again. Thoughts?#2016-11-0119:08robert-stuttaford@karol.adamie #uri “”#2016-11-0119:09robert-stuttafordeasy way to see is to make one with Java ctor and (prn) it 🙂#2016-11-0123:37psavine42@georgek, same thing just happened to me. I was using this install for 3 weeks, and now this happened. java.lang.ClassNotFoundException: org.jboss.netty.channel.socket.nio.BossPool
java.lang.NoClassDefFoundError: org/jboss/netty/channel/socket/nio/BossPool#2016-11-0123:37psavine42followed by your exception#2016-11-0123:39georgek@psavine42 Wierd, glad to know it’s not a completely isolated incident! Have also had to rebuild your env or did you find something less time-consuming? (assuming you have enough of a similar setup of course)#2016-11-0123:41psavine42@georgek no, this just happend at the end of my day, and I am just now getting back to it and figured i would come on here... and wouldn't you know. Is env rebuild what solves it for you?#2016-11-0123:42georgekYep, that tends to reset everything. I think it is correlated to load, as if some instances die or are born corrupt(?)….#2016-11-0123:42georgekI’m still investigating#2016-11-0123:42georgek😛#2016-11-0123:44psavine42@georgek I am thinking that I have netty dep in .m2 repository, and maybe clojure is picking that up. I had just added enlive and a few other things, thought it was that. going to go dig.#2016-11-0123:46georgekInteresting. Dunno. I’m not using anything that’s not in clojars. I thought it was something to do with the fact that I’m using nginx-clojure but that seems unlikely#2016-11-0201:05psavine42deleted my m2, same problem. when checking netstat, i do not see the datomic ports in use... however i can still do (get-database-names "...") . some part of transactor is just not happening 😞#2016-11-0201:37marshallClassNotFound definitely suggests class#2016-11-0201:37marshallA classpath issue#2016-11-0201:38marshallThe get-database-names call doesn't require communication with the transactor#2016-11-0201:41marshallI'd look at the transactor logs, possible you may find underlying issue there#2016-11-0201:42marshall@georgek ditto ^ do you see anything in transactor logs?#2016-11-0209:43karol.adamiec@robert-stuttaford #uri complains RuntimeException No reader function for tag uri clojure.lang.LispReader$CtorReader.readTagged#2016-11-0209:52karol.adamiec@jaret (java.net.URI. 
") works from datomic REPL, but 500 for curl ;(#2016-11-0209:53karol.adamiecBTW: is there a way to get more meaningfull errors than 500 to anything that is wrong with REST service? it is rather annoying 😐#2016-11-0209:53robert-stuttafordcoulda sworn there was one for java.net.URI#2016-11-0209:53karol.adamiecwill check quickly how about curling #uri#2016-11-0209:54karol.adamiecnope, 500 😞#2016-11-0209:55karol.adamiec@robert-stuttaford i tried #uri from datomic repl#2016-11-0209:55karol.adamiecmaybe it is available in a full env...#2016-11-0209:56karol.adamiecbut i need to use REST endpoint, at least for now….#2016-11-0209:56karol.adamiecalso found Stuart Halloway mentioning that URI is a memory hog… so maybe best practice is to actually avoid that and use strings?#2016-11-0209:57robert-stuttafordwe use strings#2016-11-0209:57karol.adamiec🙂#2016-11-0209:57karol.adamiecthat was my first instinct when modelling, but then noticed that uri type and hey! looks cool 🙂#2016-11-0209:57karol.adamiecso i was baited into it 😆#2016-11-0209:58karol.adamiecstrings then. But still i do consider it to be actually a real PITA that docs do not mention that one can not insert URI datatype using rest. OR if one can, how to do it. 😞#2016-11-0210:03robert-stuttaforda short-coming, to be sure#2016-11-0216:16Matt ButlerIs there a simple way to get datomic to return me all the attributes under a certain ns? user/### or do I have to get all the attribute idents and do some pattern matching on them 🙂#2016-11-0216:32yonatanelIs a where clause like [?e :entity/status _] equivalent to not adding it at all?#2016-11-0216:39yonatanelI just tried and the attribute must exist.#2016-11-0216:40Matt ButlerIt should match on any entity has that attribute but any value @yonatanel#2016-11-0216:52yonatanel@mbutler Thanks. 
I wanted to parameterize the query with the desired status or not checking status at all, but without concat.#2016-11-0217:02Matt ButlerSorry, not sure I 100% follow, is your problem solved? Or are you saying you want it to match on either a status value you pass in or ignore it? Code example might help 🙂#2016-11-0217:06yonatanel@mbutler Well, I use concat to build the query according to some parameter. I either include a clause for the status or not include it at all (matching either on a value or ignoring it). I wanted to avoid concat but not sure it's possible or if there's any benefit to it like the query will somehow be optimized.#2016-11-0217:07Matt Butlerwhen you dont include it do you want it to match on any value of :entity/status or match even if there isn’t an :entity/status attribute on that entity#2016-11-0217:10yonatanelI want to match even if the attribute is missing#2016-11-0217:13yonatanelMaybe [(get-else $ ?e :entity/status :none) _]#2016-11-0217:13Matt ButlerThat should work#2016-11-0217:13Matt ButlerAn or should work as well#2016-11-0217:17yonatanelActually the whole thing is pointless since I need to alter the query according to some variable. Never mind.#2016-11-0217:17Matt ButlerNo problem 🙂#2016-11-0217:17Matt ButlerGood luck 🙂#2016-11-0218:41Matt ButlerFor anyone curious here is the query I ended up with to find all attributes of a certain ns
(d/q '[:find ?ident
       :in $ ?input-ns
       :where
       [?e :db/cardinality _]
       [?e :db/ident ?ident]
       [(namespace ?ident) ?ns]
       [(= ?ns ?input-ns)]] database input-ns)
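As yonatanel notes further down, the install check runs in the forward direction, with the partition as the entity; a variant of the query above using it as the safety check (a sketch against the same hypothetical db and inputs):

```clojure
;; sketch: the clause reads "the db partition has ?e installed as an
;; attribute", which restricts ?e to installed attribute entities
(d/q '[:find ?ident
       :in $ ?input-ns
       :where
       [:db.part/db :db.install/attribute ?e]
       [?e :db/ident ?ident]
       [(namespace ?ident) ?ns]
       [(= ?ns ?input-ns)]]
     database input-ns)
```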
Interestingly I couldn’t use :db.install/_attribute :db.part/db as a safety check that I was talking about an attribute, I assume because that attribute entity lives in the db.part/db and not user. Anyone know if there is another check I could do or is cardinality safe enough?#2016-11-0309:51yonatanel@mbutler You probably need to flip attribute and entity when using :db.install/_attribute and actually use :db.install/attribute. See here: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/schema_queries.clj#2016-11-0311:25Matt Butler@yonatanel Awesome thanks 🙂#2016-11-0317:53robert-stuttaford@jaret or @marshall, could you confirm that this is sufficient S3 permissions for a restore process, please?#2016-11-0317:54robert-stuttafordhttp://docs.datomic.com/backup.html only shows ”s3:*"#2016-11-0317:55robert-stuttafordi want to restore from one AWS account’s backup to another AWS account, and don’t want to give write capability to the reading account#2016-11-0317:56marshallread only should be sufficient, but I’m not positive what specific perms that would be (other than get and list objects required for the full nested s3 bucket)#2016-11-0317:57robert-stuttafordok, great. i guess i will soon find out the Zen way 🙂#2016-11-0317:58karol.adamiec@marshall can one use pull query over a REST endpoint?#2016-11-0317:58marshall@robert-stuttaford sorry 🙂#2016-11-0317:59marshall@karol.adamiec I believe you can use pull in a find specification via REST, but I’d have to double check#2016-11-0318:00karol.adamiecif you could throw an example at me that would be great, i am having issues trying to google that#2016-11-0318:02karol.adamiecah#2016-11-0318:02karol.adamiecnevermind#2016-11-0318:02karol.adamiecmade it work 🙂#2016-11-0318:04karol.adamiec[:find [(pull ?e [*]) ...] 
:in $ :where [?e :bracket/name "GT-C1"] ]#2016-11-0318:06karol.adamiec@marshall so i q like above, how do i get all entities that do have attr :bracket/name?#2016-11-0318:07robert-stuttaford:where [?e :bracket/name] is valid datalog#2016-11-0318:07karol.adamiec:+1:#2016-11-0318:07karol.adamieci tried putting in _ :slightly_smiling_face:#2016-11-0318:10karol.adamiec_ works as well. lost a bracket ]. Argh.#2016-11-0318:10karol.adamiecthanks guys#2016-11-0318:33robert-stuttaford@marshall or @jaret; curious: does there need to be a running transactor while restoring?#2016-11-0318:35jaret@robert-stuttaford :dev and :free storages currently require a running transactor during restore, because storage resides inside the transactor process.#2016-11-0318:35jareton other storages#2016-11-0318:36jaretall system processes can be down, run restore, then start transactor and peers#2016-11-0318:37robert-stuttafordthanks!#2016-11-0406:35tengIs this the best way of doing sorting/pagination in Datomic?: https://groups.google.com/forum/?nomobile=true#!msg/datomic/NgVviV9Sw8g/nXm3Lvkk2dwJ#2016-11-0411:05pesterhazyIf I see a huge spike in DynamoDB WriteCapacity units used (20,000 instead of the usual 200), could that be the transactor re-indexing/re-partitioning/doing some other kind of maintenance work?#2016-11-0411:05pesterhazy@teng, I think it depends on your requirements#2016-11-0411:06pesterhazyif you return modest number of results (<10,000 ?) I'd do this:#2016-11-0411:07pesterhazy- d/q [:find ?e] (only query the entity id)
- sort
- subvec using the offset/limit provided
- then do further transformations (another query, or d/pull or map d/entity(#2016-11-0411:08pesterhazyif you want to use d/q, to my knowledge that's the best you can do as it always returns results unsorted#2016-11-0411:08pesterhazy(looking at my list, you probably also need to include some sort of sort key in the query)#2016-11-0411:10tengOk thanks @pesterhazy - looks very similar to what Francis Avila wrote.#2016-11-0413:01marshall@pesterhazy Spikes in DDB write usage are definitely present during indexing jobs. You can look at memory index metrics and/or the logs to get an idea if there is an indexing job running#2016-11-0413:13pesterhazy@marshall if usage exceeds provisioned capacity, does datomic work gracefully in that case?#2016-11-0413:15marshallyou will get backoff and exponential retry for a period of time. too much throttling will result in transactor termination#2016-11-0413:15pesterhazymakes sense#2016-11-0413:16pesterhazywe haven't seen transactor failures due to indexing spikes yet, so the retry mechanism seems to be working#2016-11-0413:16pesterhazy@marshall thanks!#2016-11-0413:20marshallsure!#2016-11-0414:43jdkealyHi, I'm trying to back up a datomic db and am running into an obscure error that ends with java.lang.NoClassDefFoundError: javax/xml/bind/DatatypeConverter... not sure where to go from here. Anyone have any suggestions?
https://gist.github.com/jdkealy/948068e40a47861c5d84b097bcbf39d5#2016-11-0414:46marshall@jdkealy can you post the backup command you’re running (elide any sensitive info like s3 paths or db names if necessary)#2016-11-0414:47jdkealybin/datomic -Xmx4g -Xms4g backup-db datomic:<ddb://us-east-1/my-datomic-new/test-table> <s3://datomic-backups/backup1>#2016-11-0414:48marshallare you running this from a clean unzip of the Datomic distribution?#2016-11-0414:53jdkealyi went through the steps to set up my transactor#2016-11-0414:53jdkealythis is on a ubuntu server that's currently running the transactor#2016-11-0415:01marshallMy suspicion is a classpath issue. Can you try downloading and unzipping a fresh distribution somewhere else on the box and running from there?#2016-11-0415:03tengIs it possible to get the last transaction id for an entity when using the pull syntax?#2016-11-0415:04jdkealyok will do thanks marshall#2016-11-0415:57jdkealy@marshall just tried on a fresh unzip, same thing... how is the read / write being authenticated in AWS#2016-11-0417:17marshallit should be using IAM#2016-11-0417:17marshallif you have roles configured for that instance#2016-11-0417:17marshallif not you’d need to have creds exported into the env#2016-11-0417:17marshallhttp://docs.datomic.com/aws-access-control.html#2016-11-0419:27jdkealygot it thanks#2016-11-0419:30jdkealyIt looks like the roles were configured using the configure script#2016-11-0705:50rnandan273any notes on using postgress on heroku with datomic. pointers can also help#2016-11-0705:51matthavenerrnandan273: there are buildpacks out there you can use #2016-11-0705:51matthavenerHandles most of it#2016-11-0707:38pesterhazy@teng, honestly I'm now just adding attributes, created-at and updated-at#2016-11-0707:38pesterhazySaves you a lot of hassle, and is faster to query#2016-11-0707:40teng@pesterhazy An idea I had was to store the transaction id instead and then “join” it with the transaction entity to get the date/time. 
That felt more right to me, but maybe there are downsides to that?#2016-11-0707:42pesterhazyI wouldn't rely on that#2016-11-0707:42tengWhy?#2016-11-0707:42pesterhazyWhen making db dumps, entity ids get changed#2016-11-0707:42pesterhazyNot normal backups but selective db dumps#2016-11-0707:43pesterhazyAlso you'd have to update the entity each time you update as the transaction changes#2016-11-0707:44pesterhazySo you might as well store the date directly#2016-11-0707:45pesterhazyRelying on transaction metadata makes it harder to re-transact data without losing information#2016-11-0707:45tengThey should have a similar thing as d/basis-t for entities (the current t for an entity), then you could just join in the transaction and get the date/time.#2016-11-0707:46pesterhazyI think it's best to avoid relying on metadata except for audit and historical analysis#2016-11-0707:47pesterhazyFor example transact the user name that initiated the tx#2016-11-0707:47pesterhazyFor that kind of thing it's awesome#2016-11-0707:49tengSo some type of dumps recreates all the transaction id’s, which makes it a bad idea to store transaction ids as facts?!#2016-11-0707:51pesterhazyThat's just my opinion#2016-11-0707:52pesterhazyBut think of the use case of recreating a db selectively#2016-11-0707:52pesterhazyOr developer db dumps for testing#2016-11-0707:52tengMakes sense.#2016-11-0707:55tengCan I ask the database to give me the current date/time? 
Otherwise, it can be a problem if I’m running more than one transactor on different machines (if their clock is not in sync).#2016-11-0708:00pesterhazyYes, in a db.fn that should work#2016-11-0708:01pesterhazyYou could create a touch fn that updates the timestamp for you#2016-11-0708:02pesterhazyNote that the fn will run on the transactor#2016-11-0708:02pesterhazyBut only one transactor is ever active so there shouldn't be contention there#2016-11-0717:11timgilbertSay is there a predicate I can use that will tell me whether a value is an entity returned from (d/entity) or not?#2016-11-0717:18timgilbertHmm, looks like #(instance? datomic.Entity %) does it. I do peculiarly get a NullPointerException doing #(satisfies? datomic.Entity %) for some reason though#2016-11-0717:20timgilbertAh, because it's a class. The wording of this in the docs led me to believe it was an attribute. http://docs.datomic.com/clojure/index.html#datomic.api/entity#2016-11-0807:54isaacCan I assign value to :db/id with (d/tempid :some/part unique-positive-long)#2016-11-0811:12yonatanel@isaac tempid in transactions is resolved to a different id later. It's only for referencing in a transaction.
(d/tempid :db.part/user)
=> #db/id[:db.part/user -1001047]
(d/tempid :db.part/user)
=> #db/id[:db.part/user -1001048]
(d/tempid :db.part/user)
=> #db/id[:db.part/user -1001049]
(d/tempid :db.part/user)
=> #db/id[:db.part/user -1001050]
(d/tempid :db.part/user -1)
=> #db/id[:db.part/user -1]
(d/tempid :db.part/user -1)
=> #db/id[:db.part/user -1]
(d/tempid :db.part/user -1)
=> #db/id[:db.part/user -1]
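In other words, the negative number is only a placeholder; after the transaction, the result's :tempids map gives the real, transactor-assigned id. A sketch (the connection and :user/name attribute are illustrative):

```clojure
(let [tid (d/tempid :db.part/user -1)
      {:keys [db-after tempids]} @(d/transact conn
                                              [[:db/add tid :user/name "example"]])]
  ;; resolve the negative placeholder to the permanent entity id
  (d/resolve-tempid db-after tempids tid))
```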
#2016-11-0818:53Matt ButlerDo Map Specifications (pull api) work when using the pull function inside the :find of a query?
[:find (pull ?e [:name {:friends [:first-name :second-name]}])]#2016-11-0820:20stuartsierra@mbutler yes#2016-11-0820:23Matt Butler@stuartsierra Interesting, I’ll keep trying then, thanks 🙂#2016-11-0905:55robert-stuttaford@jaret and @marshall — hi 🙂 is there any chance that Datomic’s backup/restore tooling, when using the s3:// destination, can use the AWS credential profile facility .. e.g. AWS_PROFILE=<my-profile-name> $DATOMIC_PATH/runtime/bin/datomic restore-db “$DATOMIC_PRODUCTION_BACKUPS_S3_BUCKET/<database>" $DATOMIC_URI/<database> with no AWS_ACCESS_KEY or secret set will currently error: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain however AWS_PROFILE=<my-profile-name> aws s3 ls lists the bucket just fine#2016-11-0905:55robert-stuttafordif it’s not supported, i humbly request that it be added. if it is supported, i humbly request guidance on how to use it correctly. thank you 🙂#2016-11-0917:09karol.adamiecguys, need a pointer. has anyone some nice way to work with datomic style identifiers in javascript? what do you do? const name = product[':part/name’]; is unnaceptable in long run. just trying to get some opinions.#2016-11-0917:10karol.adamieci can map all keys using heuristics so :part/name becomes partName, or write DTO’s (sigh..) by hand.. or?#2016-11-0917:10karol.adamiecfishing for opinions! 😛#2016-11-0918:00lellisI donno if its a good approach but here we change / for _ when this data goes to our response.#2016-11-0918:15robert-stuttaford@karol.adamiec maybe look at how datascript does it (it has a JS api)#2016-11-0922:17timgilbert@robert-stuttaford: I agree this is annoying. 
FWIW, I've worked around it by having a script that sets AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id) so at least I just have to manage the key in one place#2016-11-0922:19timgilbertThough I guess that's not exactly the same use-case you've got#2016-11-0922:20timgilbert@lellis: I solved that by using EDN as a transport layer and ClojureScript as a consumer 😛#2016-11-0922:22timgilbert...but when I was consuming datomic output with plain JS, I tended to use a JSON translation layer on the server (Clojure) side which converted the EDN to JSON, generally by dropping the namespaces from the keys.#2016-11-0922:22timgilbertOh,, sorry, that response was for @karol.adamiec#2016-11-0922:24timgilbertIMHO it depends on what you're using on the server side. One option might be using [transit](https://github.com/cognitect/transit-format) which is a performant serialization layer for EDN, but I've never tried using its JS API.#2016-11-0922:28timgilbertI have a completely unrelated question. I'm doing a bunch of queries where I start from one object and then navigate fairly far away from it traversing refs, and I'm wondering if I can get the pull syntax or something else to just retrieve some data at the end instead of needing to reach deep into a nested tree.#2016-11-0922:30timgilbertSo for example, I have a project, which has an ID and a floorplan, and the floorplan has an area which has an ID. I've got the area ID and I want the project ID. What I have now is the following but it doesn't seem very elegant:#2016-11-0922:31timgilbert(let [p (d/pull
         db
         [{:area/floorplan [{:floorplan/project [:project/id]}]}]
         [:area/id my-area-id])]
  (get-in p [:area/floorplan :floorplan/project :project/id]))
#2016-11-0922:32timgilbertI mean, I can wrap it up in a function, but I'd like to not need to have to specify the key sequence twice#2016-11-0922:33timgilbertAnyways, looking for any more pleasant ways to express this#2016-11-1001:08csmis query/transact supported for :db.type/bytes through the REST API? I’m getting back a #object["[B" 0x43d8762c “[#2016-11-1009:37karol.adamiec@timgilbert well for now i need to stick to javascript. Using datomic as backend though gives a nice migration path 🙂 , what is kind of a goal that i am going towards. transit looks exactly right, will try to use it to translate from EDN into json 🙂#2016-11-1010:03karol.adamieclooked over transit-js tutorial. There is no answer for wierd keys there really, unless a customized handler for a tag is used. Could piggyback on that…#2016-11-1013:05Matt ButlerOn a slightly related note to Tim, can you get the Pull API to return a just the value similar to how you can with the query API (scalar) instead of storing the result in a map.
(d/pull db [:name] Jeff's Entity)
=> "Jeff"
rather than:
=> {:name "Jeff"}
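As far as I know the pull API always returns a map, so the bare value has to be extracted afterwards; two sketches (`jeff-eid` is a stand-in for the entity id in the example):

```clojure
;; Unwrap the single-key map returned by pull:
(:name (d/pull db [:name] jeff-eid))

;; Or use a scalar find spec (the trailing `.`) with a query:
(d/q '[:find ?n . :in $ ?e :where [?e :name ?n]] db jeff-eid)
```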
#2016-11-1014:21karol.adamiec@mbutler thanks, will explore that as well#2016-11-1016:01devthhi all. is there any interest or has anyone worked on a lib that operates over pull patterns? considering set operations like difference, intersection, and union.
my particular need is for an ACL implementation. access rules are specified as pull patterns. these need to be combined into a single pattern to determine the set of all attributes a subject entity can access.
when the subject submits a pull query it needs to be validated against that access pattern - essentially you take the intersection of the requested pull query and the access pull. also open to feedback on whether this is a good/bad approach or alternative ideas, of course.#2016-11-1017:33misha@mbutler (:name (d/pull db [:name] Jeff's Entity)) kappa#2016-11-1017:37misha@timgilbert why not just d/q? might be faster as well#2016-11-1017:38misha@mbutler if you need a single value, d/q will work for you too.#2016-11-1017:39timgilbertHmm, yeah, maybe I should just build up my (d/q) query dynamically.#2016-11-1017:41timgilbertWhat I am currently working on is this:#2016-11-1017:41timgilbert(defn- make-pull-navigate-pattern
  "Turn a seq [:a :b :c ... :z] into a pull pattern [{:a [{:b [{:c ... [:z]}]}]}]"
  [path]
  (loop [[head & tail] path]
    (if (empty? tail)
      [head]
      [{head (recur rest)}])))
(defn navigate
  "Given a path of references from one datomic entity, walk the chain of them and return
  the final attribute in the chain."
  [db item path]
  (let [pull-pattern (make-pull-navigate-pattern path)
        results (d/pull db pull-pattern item)]
    (get-in results path)))
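A note on `make-pull-navigate-pattern`: as written it won't compile, because `recur` appears in non-tail position and `rest` refers to `clojure.core/rest` rather than the destructured tail. One way to sidestep the tail-recursion question entirely is to build the pattern inside-out with `reduce`:

```clojure
(defn- make-pull-navigate-pattern
  "Turn a seq [:a :b :c] into the pull pattern [{:a [{:b [:c]}]}]."
  [path]
  ;; Start from the innermost attribute and wrap each preceding
  ;; attribute around it as a nested map.
  (reduce (fn [inner attr] [{attr inner}])
          [(last path)]
          (reverse (butlast path))))

(make-pull-navigate-pattern [:area/floorplan :floorplan/project :project/id])
;; => [{:area/floorplan [{:floorplan/project [:project/id]}]}]
```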
#2016-11-1017:41timgilbert…but I need to get the first function into tail-recursive form#2016-11-1017:47timgilbert@devth: we have talked about doing stuff like that but haven’t got much progress to report. Some of our APIs let the client send a pull pattern over, and it doesn’t seem obvious how to implement a security system on top of that.#2016-11-1017:48mishadid you compare its d/pull execution time to generated d/q one?#2016-11-1017:49timgilbertYou can use the datomic filter operations to limit data at an attribute level, but more semantic “user x needs to be in group y to access data set z” operations don’t seem to be easy to do without some vetting at the server level#2016-11-1017:49devth@timgilbert that's what I'm aiming to do too - let the client send a pull pattern over - then intersect that with access control pull patterns to limit what the client actually retrieves.#2016-11-1017:49timgilbert@misha: no, will try a d/q implementation out and report back though#2016-11-1017:51mishaI think, comparing 3-5 attribute-long pull and query should be enough to get an idea, before jumping to implementation#2016-11-1017:52timgilbertYeah @devth, it seems tricky to do. One thing we speculated about but did not try was using the “as-if” facilities to restrict a database value to only the set of what a given user could see. It didn’t seem likely to be performant, but we never really tried it out.#2016-11-1017:53devthour first idea was to use filtering but it was hard to do in a general and performant way#2016-11-1017:53timgilbertYeah. The pull patterns themselves are arbitrarily nested too, which makes validating them potentially tricky#2016-11-1018:59devthif you had a way to reliably do (intersection access-rules-pattern client-request-pattern) then you're good. 
as long as they are rooted at the same subject entity the algo to do that isn't overly complex.#2016-11-1021:46timgilbertI look forward to checking out your library 😉#2016-11-1022:23devth🙂 will definitely post if/when i figure it out#2016-11-1111:54tengRealized that the replace function doesn’t work, need to use com.rpl/specter or something similar to get it work.#2016-11-1112:05rauh@teng I've sometimes assemble them by not quoting everything. So [:find (list 'pull '?user-id your-pull-pattern)]#2016-11-1112:05rauhNote you can also use maps instead of vectors for a query. That sometimes simplifies things#2016-11-1112:14teng@rauh Feels like I should create a macro that can both substitute and quote the rest of the content!#2016-11-1112:17tengThe quoting reduces the readability, but it’s probably ok for now. Sounds like it should be a solved problem already ?!#2016-11-1115:06timgilbertYou might look into quoting with backtick and tilde, maybe: `[:find [(pull ...) ...] :where ~where-clauses]#2016-11-1115:10timgilbertI agree that this seems like the type of thing specter would be good at#2016-11-1115:41rauh@teng https://github.com/gfredericks/misquote#2016-11-1116:09yonatanel@rauh @teng Wouldn't it be easier to pass the pull pattern as a :in query parameter? (third code sample at http://docs.datomic.com/query.html#pull-expressions)#2016-11-1116:16rauh@yonatanel Yes, but IIRC the queries get cached, and if you pass in the query as a parameter than it won't be. So it might be slower#2016-11-1116:23yonatanel@rauh I'm referring to parameterize the query itself, and so making the query data structure constant and cachable while the pull pattern changes. (http://docs.datomic.com/best-practices.html#parameterize-queries)#2016-11-1116:28rauh@yonatanel Yes I know, I just don't know if --for example-- datomic caches some kind of query plan if you pass in a pull with constant arguments. 
I also don't like to make something dynamic when it's really actually known at compile time.#2016-11-1116:39yonatanelOK. Would be interesting to know if datomic cares about pull patterns when it caches queries. I was under the impression that @teng's use case is to provide a pattern remotely at runtime.#2016-11-1204:26juliobarrosDo the instructions for deploying to aws with cloud formation still work? I#2016-11-1409:13jonpitherHi @robert-stuttaford diving into https://libraries.io/github/robert-stuttaford/terraform-example#2016-11-1409:29robert-stuttafordenjoy 🙂 you know where to find me if you have questions!#2016-11-1409:30robert-stuttaford@jonpither we went live with our closed-source version of that code two Saturdays ago. it’s doing great.#2016-11-1409:44jonpitherthanks for the link to the getting started series#2016-11-1411:57jonpitherhas anyone found a good approach for load testing a cloud based datomic install?#2016-11-1412:16robert-stuttaford@jonpither you mean artificially - by directly generating transactions, or by simulation, through exercising a website’s pages?#2016-11-1412:16robert-stuttafordwe’ve done the second with https://github.com/mhjort/clojider#2016-11-1412:17robert-stuttafordit’s pretty straightforward to do the first; ship a lein project with the peer library and an nrepl service included and connect emacs and generate transactions 🙂#2016-11-1412:17jonpitherThat looks good 🙂 I've primarily thinking of load-testing the transactor#2016-11-1412:17jonpither@robert-stuttaford that intro to Terraform resource you sent me is excellent#2016-11-1412:17robert-stuttafordyeah, so that’d be sustained transaction input#2016-11-1412:18jonpitherI'm skipping Terragrunt, but getting a lot from it#2016-11-1412:18robert-stuttaforduntil you reach indexing thresholds#2016-11-1412:18jonpitherCool#2016-11-1412:18robert-stuttafordterragrunt, neat! 
never heard of that before#2016-11-1412:19jonpitherWill check out - https://github.com/mhjort/clojider#2016-11-1412:19robert-stuttaford@mhjort is around if you have questions, too 🙂#2016-11-1412:20jonpithernice. You guys are being very helpful 🙂#2016-11-1412:20jonpitherthanks#2016-11-1412:20robert-stuttafordthat’s how we do it around here buddy 🙂#2016-11-1412:21robert-stuttafordgood luck!#2016-11-1415:00marshall@juliobarros If you mean these: http://docs.datomic.com/storage.html#provisioning-dynamo
http://docs.datomic.com/aws.html
Then, yes, they are up to date#2016-11-1420:52josh.freckletonI'm interested in working Datomic into my workflow, but it's rather unclear to me what the cost structure would be, and I don't have a profitable project to bolt it onto right now.
Do I need to maintain a separate database, and Datomic sits ontop of it?
And then I pay an extra licensing fee atop that?
I'd love to get started if I could deploy it for free (for now) to AWS or Heroku, possible?#2016-11-1420:58marshall@josh.freckleton Datomic Starter provides a no-cost way (from the perspective of Datomic license cost anyway) to deploy a Datomic application#2016-11-1421:01josh.freckleton@marshall I can get pretty far on a $0 budget for launching/testing an MVP on heroku/AWS. How much room do I have to grow with Datomic while staying at the free tier? IE how many users/MBs/other pertinent variables?#2016-11-1421:02marshallDatomic Starter is quite powerful. the scaling limit will come in if you want to add numerous peers. a Starter license is limited to 2 concurrent peers#2016-11-1421:02marshallsome details are available here: http://www.datomic.com/pricing.html#2016-11-1421:14josh.freckletonthanks @marshall !#2016-11-1509:01magnarsI would be very interested to see some information from the Datomic crew about using Spec with entities. It seems not to work right now, since entities aren't maps.#2016-11-1515:24alexmillerThere is a jira ticket about this FYI#2016-11-1515:25alexmillerCLJ-2041#2016-11-1515:41jonpitherAnyone using Terraform and Datomic?#2016-11-1515:42jonpitherRoughly following https://github.com/mrmcc3/tf_aws_datomic/blob/master/scripts/bootstrap-transactor.sh but my transactor EC2 nodes are coming up - just getting stuck installing package etc#2016-11-1516:08karol.adamiec@jonpither i have of fork of that repo that also sets up a HTTP rest peer. 
Most problems come from a malformed license key.#2016-11-1516:09pesterhazy@jonpither also check out https://libraries.io/github/robert-stuttaford/terraform-example#2016-11-1516:09jonpitherdoes the AMI it uses for the transactor cater for SSH-ing on to see the state of it?#2016-11-1516:09jonpither@pesterhazy am working with @robert-stuttaford's repo now#2016-11-1516:09jonpitherbut when the transactor EC2 doesn't come up, I can't find any possible way to debug it#2016-11-1516:09karol.adamiec@jonpither is your transactor cycling? i.e. starting up and shutting down repeatedly?#2016-11-1516:10karol.adamiecthe EC2 instance i mean#2016-11-1516:10jonpithertwice earlier it just never finished - the third time it seems to work, but without any logs or SSH access hard to determine if actually working#2016-11-1516:11jonpitherpresumably with the AMI used in the terraform-example, if you specify a key-name you should be able to ssh on?#2016-11-1516:11karol.adamiectake a peek into dynamo table#2016-11-1516:12karol.adamiecif it connects it will put data there with its own ip address#2016-11-1516:13jonpitherok, doesn't seem to#2016-11-1516:15jonpitherDoes it usually put logs in s3?#2016-11-1516:17karol.adamiecnever looked there. Had a lot of headaches with spinning up a transactor and i checked the table to see if it is up.#2016-11-1516:29jonpitherWhat is the default username for this AMI image?#2016-11-1517:30jonpitherDoes the transactor not log by default?#2016-11-1518:35stuartsierraIf the Transactor seems to never "start" but it doesn't immediately terminate, you can sometimes infer some things by getting the EC2 Console log.#2016-11-1518:36stuartsierraFor example, I've seen images fail to start because the base image is configured to update core packages on boot, and it can time out hitting package repositories.#2016-11-1518:51jonpither@stuartsierra thanks. 
My transactor is working, but there's some issue in my setup that means I can't easily ssh on to check out logs - I think a fresh brain tomorrow will help 🙂 Is this SSH'ing on the transactor something you ever do?#2016-11-1518:52stuartsierraNo, you cannot SSH into the default Transactor AMI.#2016-11-1518:52marshall@jonpither If you’re using our provide AMI, SSH is not enabled#2016-11-1518:52marshallprovided*#2016-11-1518:52jonpitheryes I am using the AMI#2016-11-1518:52marshallyeah, you can’t ssh to it#2016-11-1518:52marshallit will rotate logs to S3 if you’ve configured the transactor properties file to do so#2016-11-1518:53jonpitherthere's no logs in S3 other than a probe - it's possible I simple haven't started using it yet - logs shall appear#2016-11-1518:54jonpitherdiff Q - do you have any resources / examples for setting up Console in AWS?#2016-11-1518:55marshalllogs are only rotated once a day or on shutdown#2016-11-1518:56marshallof course if you just kill the ec2 instance you won’t get log rotation#2016-11-1518:56jonpitherso logs will only appear in s3 at the end of the day. What do you advise for seeing real time transactor logs?#2016-11-1519:03marshallgenerally metrics are what i use to monitor live transactors#2016-11-1519:13donaldballThe map form of datomic queries seems easier to transform e.g. when looking to add shared complex conditions, but the list form is ubiquitous. Is there a datomic fn I’ve overlooked that transforms the list form into the map form?#2016-11-1519:29jonpither@marshall thanks#2016-11-1519:35donaldballPerhaps more to the point, when composing queries from complex shared criteria, do folk tend to pass the queries to fns that add the requisite criteria, or conj the shared criteria from whatever registry they live in into the query being constructed?#2016-11-1604:33georgekI’m not sure if this is the right place for this question but I’m trying to have an AWS Lambda make a datomic transaction that is backed by dynamodb.
This is using clojure/aws-java-sdk libraries which work fine using ec2, etc.
I’ve got the lambda wired properly and I’ve followed the various docs on giving the right permissions to the associated role. Here’s mine:
AWSLambdaDynamoDBExecutionRole
AWSLambdaVPCAccessExecutionRole
AWSLambdaExecute
AmazonVPCFullAccess
I’ve configured the lambda to be part of the default security group and set it to run in the VPC.
The problem is that I get an exception-less timeout when trying to connect to the store using the uri.
Thoughts?#2016-11-1604:45georgekOn a related note I see that most of the libraries and discussion is around using cljs over node. The one cljs datomic library requires that there be a running peer that provides the REST api access it requires.
There isn’t a lot of documentation on how to ensure that your lambda creates a peer upon invocation. Am I missing something? I deployed via cloudfront following the basic install approach but the datomic docs on the REST api show a manual start of an api-ready peer. Is there some way to configure this on the transactor instance?
Thanks!#2016-11-1605:31kenny@georgek Datomic + Lambda will not work well because you would need to launch a peer for each request (see https://groups.google.com/forum/#!topic/datomic/OYJ4ghelmF0). As suggested in the Google Group post, you could setup the Datomic REST API and transact from the Lambda function by sending a request to the REST API. To prevent outside access to the REST API, you would probably need to setup the REST server inside an isolated VPC and only give your Lambda function access to that VPC.#2016-11-1610:30pesterhazyyeah, in my experience connecting a new peer can take up to 90 seconds, kind of defeating the purpose of lambda#2016-11-1610:38val_waeselynckJust checking, is it safe to migrate to AWS longer resource ids (https://aws.amazon.com/fr/blogs/aws/theyre-here-longer-ec2-resource-ids-now-available/) when using the Datomic Cloudformation stack ?#2016-11-1615:10georgek@kenny @pesterhazy That makes so much sense I’m a bit floored. Thanks for the reality check and pointer to how to approach this!#2016-11-1615:50conanI'm trying to do a restore-db. Whatever I try, I get this message:
clojure.lang.ExceptionInfo: :restore/roots-missing No database root for next-t 2568953 {:db/error :restore/roots-missing}
or this message:
clojure.lang.ExceptionInfo: :restore/no-roots No restore points available at file:/cn-catalog/db-backup/datomic/beta/ {:uri "file:/cn-catalog/db-backup/datomic/beta/", :db/error :restore/no-roots}
Here are some examples of the commands I'm running:
datomic/bin/datomic restore-db file:/cn-catalog/db-backup/datomic/beta datomic: `ls -1 ~/dev/cn-catalog/db-backup/datomic/beta/roots | sort -n | tail -n 1`
datomic/bin/datomic restore-db file:/cn-catalog/db-backup/datomic/beta datomic:
datomic/bin/datomic restore-db file:/cn-catalog/db-backup/datomic/beta datomic: 6591640
datomic/bin/datomic restore-db file:/cn-catalog/db-backup/datomic/beta datomic: -t 6591640
Any ideas? I'm on Windows.#2016-11-1615:54tengHow do I retract a value from an attribute with the cardinality many and with the type ref? For example, :user/role-id has references to the role entity. If the :user/role-id contains [1 2 3] and I want to retract 2 so that the result is [1 3], how do I do that?#2016-11-1615:55danstone@teng [:db/retract eid :user/role-id 2]#2016-11-1615:56danstonesame as a cardinality one retraction#2016-11-1615:56teng@danstone So maybe the problem was that we tried to pass a vector of values.#2016-11-1615:57danstoneyeah, if you want to retract multiple values you have to submit individual retractions for each value#2016-11-1615:57tengOk, thanks!#2016-11-1615:59tengNow it works!#2016-11-1616:23pesterhazywhen using the map form of transactions, it looks like using "reverse notation" is not supported: {:db/id .... :country/name "Germany" :city/_country #{[:city/name "Frankfurt"] [:city/name "Stuttgart"]}}#2016-11-1616:23pesterhazybut I don't see why this wouldn't work in principle; it looks like a useful enhancement#2016-11-1616:24pesterhazyor am I missing something?#2016-11-1617:19Matt ButlerJust to confirm, does setting an attribute to :db/unique :db.unique/value implicitly index the value? as implied by
To maintain AVET for an attribute, specify :db/index true (or some value for :db/unique) when installing or altering the attribute
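A sketch illustrating that doc quote, with a hypothetical `:account/number` attribute: declaring any `:db/unique` setting maintains the AVET index for the attribute, so it can be scanned with `d/datoms` without also specifying `:db/index true`.

```clojure
;; Hypothetical schema: unique value, no explicit :db/index.
{:db/id #db/id[:db.part/db]
 :db/ident :account/number
 :db/valueType :db.type/string
 :db/cardinality :db.cardinality/one
 :db/unique :db.unique/value
 :db.install/_attribute :db.part/db}

;; Works because uniqueness implies AVET indexing:
(first (d/datoms db :avet :account/number "12345"))
```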
#2016-11-1619:08stuartsierra@pesterhazy Yes, that's a known limitation. Reversed attributes aren't supported in transactions. I don't know if/when it will be.#2016-11-1619:09stuartsierra@mbutler Yes, any kind of uniqueness implies indexing values.#2016-11-1619:09Matt ButlerAwesome thanks 🙂#2016-11-1619:27jonpitherHi - do you have any resources on understanding Datomics relationship with Dynamo write capacity?#2016-11-1619:27jonpithercurrently getting com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException in the transactor - would want to avoid this#2016-11-1619:28jonpitherI can up the provisioned write capacity - but would like to understand how Datomic does it's writes, i.e. presumably it's a transaction per unit?#2016-11-1619:31jonpitheror one "write capacity unit" per datom - think it could be this?#2016-11-1621:06marshall@jonpither Datomic’s use of DDB writes doesn’t correlate exactly to transactions or datoms
For every transaction, Datomic will write durably to the transaction log in DDB, but the transactor also writes a heartbeat to storage and, most importantly, will write large amounts of data during indexing jobs.#2016-11-1621:06marshallBecause of this, you need to provision ddb throughput based on the need during indexing jobs, not ongoing transactional load#2016-11-1621:07jonpitherGreat, thanks @marshall #2016-11-1621:07marshalla bit more info can also be found here http://docs.datomic.com/capacity.html#dynamodb#2016-11-1621:09jonpitherGreat my next Q is answered there about capturing the throttles#2016-11-1713:31kvltSo I know everybody has their own version of this… But I have just “finished" my first iteration of my “datomic helpers" library. Any feedback would be greatly appreciated: https://github.com/petergarbers/molecule#2016-11-1715:23Matt ButlerI want to do a case insensitive search on a string value, from googling I’ve gathered its possible but haven’t found any info on how to implement it, any pointers? 🙂#2016-11-1715:26kvlt@mbutler: [(.equalsIgnoreCase ^String ?db-val ?val)]#2016-11-1715:28Matt Butlerah awesome @petr , I seem to remember something about needing to require a function into the file if it wasn't in clj.core that the case here?#2016-11-1715:28kvltNot as far as I know#2016-11-1715:28kvltOr not anywhere I’ve done it#2016-11-1715:29Matt ButlerOk cool, well thanks again 👍#2016-11-1716:01timgilbertHi all... I've figured out how to use reified transactions such that I'm attaching a :request/person ref to each transaction pointing to the logged-in user, but I'm a little baffled by how I query the data to get the reified data out. Anyone have examples of that?#2016-11-1716:01alexmiller@mbutler that’s a function on java.lang.String. 
all java.lang classes are imported by default.#2016-11-1716:35marshall@timgilbert This blog post: http://blog.datomic.com/2016/08/log-api-for-memory-databases.html discusses how to use the log API to pull out data about transactions#2016-11-1716:35marshallI’d also suggest the reified transactions video here: http://www.datomic.com/videos.html#2016-11-1716:39timgilbertCool, thanks @marshall, that looks helpful. I did watch that video some time back and thought it was really good, but I've found it difficult to grep through 😉#2016-11-1716:39marshallthere’s your unicorn company idea - grep for videos#2016-11-1718:26jarppe@timgilbert I was just wondering the same. I made this to test just that: https://gist.github.com/jarppe/7a7b3234b6ce0b704df8046c67aad988#2016-11-1718:26jarppehope it helps#2016-11-1719:07timgilbertThanks @jarppe, that is helpful#2016-11-1809:39robert-stuttafordhey @marshall and @jaret, just a quick FYI that @geoffk is on Cognician’s infrastructure team and may have some questions around transactors at some point 🙂#2016-11-1813:59jaret@robert-stuttaford sounds good. He can drag us into a private chat or we can arrange a call. Let us know what works best.#2016-11-1923:08zaneWhat are best practices regarding pagination?#2016-11-1923:08zaneI'm searching the Google Group and there doesn't appear to be much consensus.#2016-11-2112:22tengI have a 2000 lines long script with datoms that I read into the database (slurp + transact). Is there a way to get better error messages, for example telling me which fact or line in the file that contains the error (I know what the problem is but need to scope it down)? Now I just get:
CompilerException java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: :db.error/tempid-not-an-entity tempid used only as value in transaction#2016-11-2113:02gravI get Critical failure, cannot continue: Heartbeat failed when doing an restore to an empty datomic:dev database. What to do?#2016-11-2113:23val_waeselynck@teng not really a direct answer, but maybe you can transact speculatively (datomic.api/with) only small segments on your file to know where the problem is ?#2016-11-2113:26teng@val_waeselynck I found the error by commenting out parts of the script and ran the script again and again. It works, but a better error message would be preferable.#2016-11-2115:12Matt ButlerWhen retracting an entity what is the best practice for finding out if that entity existed/was retracted. Should you query the :db-before and :db-after or is it ok to interpret it based on the :tx-data (are there datums present that suggest the removal of an entity)?#2016-11-2115:23val_waeselynck@mbutler yes, that's the 5th element of a Datom#2016-11-2115:24val_waeselynckhttp://docs.datomic.com/javadoc/datomic/Datom.html#added--#2016-11-2115:25marshall@grav do you have more details of your failure? what OS, what is the restore command you’re running? Any exceptions or errors in the transactor log?#2016-11-2115:26Matt Butler@val_waeselynck yes, so you could look for a datum in the tx-data that says that some attribute on the entity you want removing (probably the one you did the lookup using) has a 5 element of false#2016-11-2115:27val_waeselynck@mbutler yes#2016-11-2115:28Matt Butlercool cool 🙂#2016-11-2115:28grav@marshall: I'm away from the machine, but I'll get the details tomorrow and post here.#2016-11-2123:04bbloomare offline docs available? 
i’d like to be able to grep locally#2016-11-2123:23bbloomwget -r 2 did the trick nicely#2016-11-2213:52Matt ButlerIf there are 2 datums in the same transaction setting a :db.unique/value attribute to the same value do either go through/are they merged or is the tx thrown out completely returning a db.error/unique-conflict?
When processing a large number of transactions for about 1/5 I get value: x already held by: 17592186703132 asserted for: 17592186703135 It seems always to be 3 ids apart. Can this be happening within a transaction or is it that there are 2 transactions being created with the same datum value x.#2016-11-2214:09tengI found myself normalizing ten statuses (e.g. APPLICATION_COMPLETED) into its own entity, so that instead of storing e.g. :user/status with the value ACCEPTED, I have the entity 'user-status' with all the valid statuses, and :user/status-id referring to that entity with an id. Is this idiomatic in Datomic (I often model it like this in traditional relational databases, but sometimes I don’t because of the improved readability to store it as a plain value).#2016-11-2214:38tengI changed back to using values. Felt like incidental complexity otherwise.#2016-11-2214:40karol.adamieci have an interesting problem. I want to build a basket with line items in it. I assign the basket to user using identity ref. So subsequent transactions for same user result in only one basket always. So far so good. Problem is now that my Line items collection is actually growing. Is there a way to model cardinality many on a ref in sucha way that it is not duplicating the line items but replaces them?
;Line item
{:db/id #db/id[:db.part/db]
 :db/ident :item/quantity
 :db/valueType :db.type/long
 :db/cardinality :db.cardinality/one
 :db/doc "Quantity of item"
 :db.install/_attribute :db.part/db}
{:db/id #db/id[:db.part/db]
 :db/ident :item/part
 :db/valueType :db.type/ref
 :db/cardinality :db.cardinality/one
 :db/doc "Line item ref to product"
 :db.install/_attribute :db.part/db}
;Basket
{:db/id #db/id[:db.part/db]
 :db/ident :basket/owner
 :db/valueType :db.type/ref
 :db/unique :db.unique/identity
 :db/cardinality :db.cardinality/one
 :db/doc "the owner of the basket"
 :db.install/_attribute :db.part/db}
{:db/id #db/id[:db.part/db]
 :db/ident :basket/items
 :db/valueType :db.type/ref
 :db/isComponent true
 :db/cardinality :db.cardinality/many
 :db/doc "Items in the basket"
 :db.install/_attribute :db.part/db}
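Given that schema, one way to get replace semantics for the cardinality-many `:basket/items` is to pair explicit retractions of the current items with assertions of the new ones in a single transaction. A sketch, where `basket-eid`, `old-item-eids` and `new-item-eids` are hypothetical stand-ins:

```clojure
;; Retract the items currently on the basket and add the new set,
;; atomically, as plain tx data in one call to d/transact.
(d/transact conn
  (concat
   (for [item old-item-eids] [:db/retract basket-eid :basket/items item])
   (for [item new-item-eids] [:db/add basket-eid :basket/items item])))
```

This is essentially what the `assertWithRetracts` transaction function discussed below does, but computed peer-side instead of inside the transactor.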
#2016-11-2214:42karol.adamiec{
:db/id #db/id[:db.part/user]
;notice the lookup ref usage to obtain references. Lovely.
:basket/owner [:user/email "#2016-11-2214:43karol.adamiecso a transaction above executed multiple times always edits the same basket, but line items grows instead of being totlly replaced ….#2016-11-2214:48marshall@karol.adamiec I believe there was a google group post some time back about implementing a transaction function called assertWithRetractions or something like that#2016-11-2214:48marshallhttps://groups.google.com/forum/#!topic/datomic/_wIgitHKo6A#2016-11-2214:49karol.adamiec@marshall so in general the upsert niceness ends when cardinality many comes into play? is that correct understanding?#2016-11-2214:49marshallupsert behavior only occurs on cardinality one attributes. the semantic of cardinality many requires that assertions be additive, not replacing#2016-11-2214:50karol.adamiecmakes perfect sense, thanks! will check out the thread.#2016-11-2214:50marshall👍#2016-11-2216:48karol.adamiec(d/transact conn [{:db/id #db/id [:db.part/user]
                    :db/ident :assertWithRetracts
                    :db/fn #db/fn {:lang "clojure"
                                   :params [db e a vs]
                                   :code "(vals (into (into {} (map (comp #(vector % [:db/retract e a %]) first) (datomic.api/q [:find '?v :where [e a '?v]] db))) (into {} (map #(vector % [:db/add e a %]) vs))))"}}])
CompilerException java.lang.RuntimeException: Can't embed object in code, maybe print-dup not defined: #2016-11-2216:49karol.adamieci picked the function from one of the threads, but no idea why it is not installing?#2016-11-2216:52karol.adamiechmm, worked through REST api transaction...#2016-11-2217:01karol.adamiecsucess. i can retract cardinality many with that little function, and assert new stuff in.#2016-11-2217:01karol.adamiecso back to my original query...#2016-11-2217:02karol.adamiec{
:db/id #db/id[:db.part/user]
;notice the lookup ref usage to obtain references. Lovely.
:basket/owner [:user/email “#2016-11-2217:03karol.adamiechow can i glue that together with transaction
[[:assertWithRetracts 17592186045431 :basket/items []]] like that?#2016-11-2217:04karol.adamiectwo separate calls or is there a way to wire the two together?#2016-11-2217:10karol.adamiecah, i think i can just put both into a transact vector and it will run in one transaction?#2016-11-2217:33lellis@karol.adamiec, yes it will!#2016-11-2217:36karol.adamiec@lellis i am struggling wityh how to connect the two. I need the eid of a basket to match a thing created or retrieved by first transaction. the lookup ref will not work in the same transaction (per docs) and i am unsure if they nest anyway. temp id maybe?#2016-11-2217:39karol.adamiec[
{
:db/id #db/id[:db.part/user -1]
;notice the lookup ref usage to obtain references. Lovely.
:basket/owner [:user/email “#2016-11-2217:40karol.adamieci would like that transaction data to tie in nicely, regardless whether the basket just got created or it was there already, but it seems to not work ;/#2016-11-2218:44lellisU want create and retract data in same transact right? @karol.adamiec ?#2016-11-2220:46zaneIs there something I can read to better understand query caching and how to optimize queries?#2016-11-2221:34geoffs@zane have you read these in the datomic docs? http://docs.datomic.com/query.html#clause-order#2016-11-2221:34geoffshas some information about both topics#2016-11-2221:35geoffsit's not a ton of info, but it has the basics#2016-11-2222:01karol.adamiec@lellis well i want to create basket entity or get ahold of it if exists. that entity has a collection of items that need to be reset, thet is why i need the custom dbfn. Default behaviour for cardinality many is adding stuff in. I need to replace the items collection instead.#2016-11-2222:31zane@geoffs: Thanks! Yeah, I'm aware of clause order and reducing the result set upfront.#2016-11-2223:07zaneWhat's the most efficient way to retrieve the most recent transaction id for a given entity?#2016-11-2223:07zaneIf I have the entity id.#2016-11-2223:07zaned/log?#2016-11-2223:07zaned/datoms with :eavt?#2016-11-2223:11zaned/history?#2016-11-2223:17zaned/entity-db?#2016-11-2223:22zaneFeels like definitely not d/history.#2016-11-2307:15robert-stuttaford@zane, if you want the latest transaction for an entity, you’d need to query all its current attributes#2016-11-2307:16robert-stuttaford[:find (max ?tx) :in $ ?e :where [?e _ _ ?tx]] is one approach#2016-11-2307:29grav@marshall Ok, so regarding the Critical failure, cannot continue: Heartbeat failed error:
- OS: Mac OS X
- transactor command: ./bin/transactor -Xmx4g -Xms4g -Ddatomic.peerConnectionTTLMsec=20000 -Ddatomic.txTimeoutMsec=20000 config/samples/dev-transactor-template.properties
- restore command: ./bin/datomic -Xmx4g -Xms4g restore-db file:/Users/mgn/Downloads/import-2016-11-08 datomic:
- transactor log:
2016-11-21 10:51:24.134 INFO default datomic.kv-cluster - {:event :kv-cluster/create-val, :val-key "5821b80f-e2af-4e7d-a02e-8fb9838bfd56", :bufsize 15561, :phase :begin, :pid 36803, :tid 64}
2016-11-21 10:51:24.153 WARN default datomic.backup - {:message "error executing future", :pid 36803, :tid 10}
java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.ConnectException: Connection refused (Connection refused): localhost:4335" [90067-171]
    at java.util.concurrent.FutureTask.report(FutureTask.java:122) [na:1.8.0_111]
    at java.util.concurrent.FutureTask.get(FutureTask.java:192) [na:1.8.0_111]
    at datomic.common$pfuture$reify__319.deref(common.clj:587) ~[datomic-transactor-pro-0.9.5407.jar:na]
    at clojure.core$deref.invokeStatic(core.clj:2228) ~[clojure-1.8.0.jar:na]
    at clojure.core$deref.invoke(core.clj:2214) ~[clojure-1.8.0.jar:na]
    at datomic.backup.ValueRestore.restore_node(backup.clj:446) ~[datomic-transactor-pro-0.9.5407.jar:na]
    at datomic.backup.ValueRestore.restore_node(backup.clj:437) ~[datomic-transactor-pro-0.9.5407.jar:na]
    at datomic.backup$restore_db$fn__9032$fn__9035.invoke(backup.clj:660) ~[datomic-transactor-pro-0.9.5407.jar:na]
    at datomic.backup$restore_db$fn__9032.invoke(backup.clj:656) ~[datomic-transactor-pro-0.9.5407.jar:na]
    at clojure.core$binding_conveyor_fn$fn__4676.invoke(core.clj:1938) [clojure-1.8.0.jar:na]
    at clojure.lang.AFn.call(AFn.java:18) [clojure-1.8.0.jar:na]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_111]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.util.concurrent.ExecutionException: org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.ConnectException: Connection refused (Connection refused): localhost:4335" [90067-171]
    at java.util.concurrent.FutureTask.report(FutureTask.java:122) [na:1.8.0_111]
    at java.util.concurrent.FutureTask.get(FutureTask.java:192) [na:1.8.0_111]
    at datomic.common$pfuture$reify__319.deref(common.clj:587) ~[datomic-transactor-pro-0.9.5407.jar:na]
    at clojure.core$deref.invokeStatic(core.clj:2228) ~[clojure-1.8.0.jar:na]
    at clojure.core$deref.invoke(core.clj:2214) ~[clojure-1.8.0.jar:na]
    at datomic.backup.ValueRestore$fn__8956.invoke(backup.clj:422) ~[datomic-transactor-pro-0.9.5407.jar:na]
    at datomic.backup.ValueRestore.restore_val(backup.clj:419) ~[datomic-transactor-pro-0.9.5407.jar:na]
    at datomic.backup.ValueRestore$fn__8966$fn__8967.invoke(backup.clj:444) ~[datomic-transactor-pro-0.9.5407.jar:na]
    ... 6 common frames omitted
Caused by: org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.ConnectException: Connection refused (Connection refused): localhost:4335" [90067-171]
    at org.h2.message.DbException.getJdbcSQLException(DbException.java:329) ~[h2-1.3.171.jar:1.3.171]
    at org.h2.message.DbException.get(DbException.java:158) ~[h2-1.3.171.jar:1.3.171]
    at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:399) ~[h2-1.3.171.jar:1.3.171]
#2016-11-2308:12gravOh, I get some exceptions before that, eg:
2016-11-23 09:07:43.231 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 0, :attempts 0, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:43.231 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 0, :attempts 0, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:43.283 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 50.0, :attempts 1, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:43.306 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 50.0, :attempts 1, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:43.385 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 100.0, :attempts 2, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:43.408 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 100.0, :attempts 2, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:43.586 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 200.0, :attempts 3, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:43.613 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 200.0, :attempts 3, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:43.987 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 400.0, :attempts 4, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:44.018 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 400.0, :attempts 4, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:44.310 INFO default datomic.process-monitor - {:tid 11, :StoragePutMsec {:lo 0, :hi 18500, :sum 134937, :count 1898}, :AvailableMB 3190.0, :StorageGetMsec {:lo 0, :hi 3370, :sum 22493, :count 1917}, :pid 47395, :event :metrics, :StoragePutBytes {:lo 5641, :hi 19880, :sum 29298291, :count 1903}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :StoragePutBackoffMsec {:lo 0, :hi 400, :sum 750, :count 5}, :StorageGetBackoffMsec {:lo 0, :hi 400, :sum 750, :co
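The kv-cluster retry lines above show a doubling backoff schedule: 0ms on the first attempt, then 50ms doubling on each subsequent retry (50, 100, 200, 400...). A minimal sketch of that schedule, reconstructed from the logged numbers only (not from Datomic's actual retry code):

```javascript
// Doubling backoff matching the StorageGet/PutBackoffMsec values logged above:
// attempt 0 -> 0ms, 1 -> 50ms, 2 -> 100ms, 3 -> 200ms, 4 -> 400ms, ...
function backoffMsec(attempt) {
  return attempt === 0 ? 0 : 50 * Math.pow(2, attempt - 1);
}
```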
#2016-11-2309:23jonpitherHi - Anyone setup logstash with Datomic (am using the AMI at present) - any tips appreciated!#2016-11-2310:22karol.adamiecmorning, i will shamelessly repeat my question in case anyone missed it 🙂.
[
{
:db/id #db/id[:db.part/user -1]
;notice the lookup ref usage to obtain references. Lovely.
:basket/owner [:user/email "
Why is the above not working? is #db/id[:db.part/user -1] working only inside an ‘expression’ and not in a whole transaction? Is there any way to tie that in so i do not have to write logic in code? 🤓#2016-11-2311:48karol.adamiecBTW: what is transactional semantics in transact? all forms from the vector are part of one transaction i assume?#2016-11-2312:02drankardDoes anyone have an example of how to run gc-storage from the REST API ?#2016-11-2312:12jonpitherfollowing the datomic SQL script to create the Datomic DB and I get ERROR: permission denied for tablespace pg_default in RDS#2016-11-2314:01zane@robert-stuttaford: Yeah, that's our current implementation. It's not particularly performant so we were looking for something faster. One option is to have an explicit attribute for updatedAt, but we're trying to avoid that.#2016-11-2314:38pheuterWe’d like to upgrade the process count on our current Datomic Pro License, how can we do so? Doesn’t seem like there’s a way via http://my.datomic.com#2016-11-2315:56alexmillercontact [email elided] with any questions#2016-11-2409:48karol.adamiechi, have a question about UUID types. Docs recommend 'Squuids are valid UUIDs, but unlike purely random UUIDs, they include both a random component and a time component.’
Q1: Is there a way to ask Peer over REST for a Squuid?
Q2: I can generate v1 (time based) or v4 (RNG based). Is there a preference in case i am unable to use Squuids ?#2016-11-2409:52rauh@karol.adamiec The way they're created is no secret. It's a few lines of code. There are implementations in java, javascript/cljs and lua and possibly more.#2016-11-2409:54karol.adamiecabsolutely right. my google fu got weak in the morning 😄#2016-11-2410:10karol.adamiecso basically it is replacing the first part of uuid v4 with a timestamp. :+1:#2016-11-2410:16rauh@karol.adamiec I did lua for openresty: https://gist.github.com/rauhs/b93bcf0d676f0335fd483d7c7c77303d#2016-11-2410:16seantempestaHow could I get the minimum diff (in datoms) between two databases? (same database, different points in time)#2016-11-2410:32seantempestaah, never mind. I found this article. https://blog.jayway.com/2012/06/27/finding-out-who-changed-what-with-datomic/#2016-11-2410:55karol.adamiecJavascript ES6 impl of Squuids, if anyone fancies 🙂 Thanks for hints @rauh
// assumes uuid() returns a v4 UUID string; swap its first 8 hex chars for the Unix time in seconds
const rest = uuid().slice(8);
Math.round(Date.now() / 1000).toString(16) + rest;
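Going the other way, since the first 8 hex digits of a squuid hold the creation time in Unix seconds (per the discussion above), the timestamp can be recovered from the id itself. A sketch, assuming that layout:

```javascript
// Recover the (approximate) creation time from a squuid whose first 8 hex
// characters hold seconds since the Unix epoch.
function squuidTime(squuid) {
  const seconds = parseInt(squuid.slice(0, 8), 16);
  return new Date(seconds * 1000);
}
```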
#2016-11-2412:07karol.adamiecdo lookup refs nest? it would be nice to be able to use that as navigation on identities. IE i have uniqe identity :user/email. i can always grab the user entity using [:user/email “. But then i have a linked entity that has a ref type that is also marked as uniqe identity that is binding to user entity. I would like to grab that entity by saying [:entity/owner [:user/email “#2016-11-2412:09karol.adamieci can upsert the :entity/, but i need its id to issue [:db.fn/retractEntity id-of-janes-basket]#2016-11-2412:12karol.adamiecit goes agaist the grain a bit though. I would never need that if not in the REST land ;/#2016-11-2413:41karol.adamiechow does one retrieve a SCALAR value over REST??
[:find ?eid . :in $ :where [?eid :basket/owner 17592186045429]] is not working. Same Q on clojure repl returns scalar value.#2016-11-2413:41karol.adamiecworks without the dot .#2016-11-2413:41karol.adamiecover rest, but i want scalar 😞#2016-11-2415:33karol.adamiec"At present, find specifications other than relations (and also pull specifications) are not supported via Datomic's provided REST API. "#2016-11-2415:36karol.adamiecguys seriously PLEASE PLEASE PLEASE!!! make a fix for REST API to return FULL errors instead of 500 status blackbox. This is a timesink of immense proportions and a hair pulling , head bashing ocean of frustration 😱#2016-11-2416:38bhagany@karol.adamiec fwiw, I do [?eid] and then unpack the result in cases like this#2016-11-2416:39karol.adamiec@bhagany over REST?#2016-11-2416:39bhaganyyes#2016-11-2416:39karol.adamiecfor me it throws#2016-11-2416:39bhaganyhrm, I’m a few versions back, maybe they changed it#2016-11-2416:39karol.adamiec[:find [?eid] …#2016-11-2416:39bhaganythat’s unfortunate#2016-11-2416:40karol.adamiecbasically i can make basic one, but the response is [[234123412]]#2016-11-2416:40karol.adamiecso i unpack it ;(#2016-11-2416:41karol.adamiec@bhagany would you say +1 to having errors from REST endpoint? or is it only me that gets constantly frustrated?#2016-11-2416:41bhaganyoh, I’m definitely with you there. endlessly frustrating.#2016-11-2417:23leovhi. quick question - datomic-free says " No matching ctor found for class clj_logging_config.log4j.proxy$org.apache.log4j.WriterAppender$ff19274a"#2016-11-2417:23leovdo I miss a library?#2016-11-2506:52robert-stuttaford@zane, why isn’t it performant? 
i can also think of a way to combine d/datom scans of eavt + vaet for an e, looking for the highest t#2016-11-2511:56jonpitherHow does the Memcached setup work - can peers have their own memcached instances that would become warmed up to their needs, or is it a secondary level distributed cache that all peers would use the same way?#2016-11-2512:43mitchelkuijpersAre there any people here who have some experience with saving money values in datomic? I am leaning towards bigint and then simply saving dollar values#2016-11-2513:07karol.adamiec@mitchelkuijpers i went with using long/bigint and saving cents. Then for display just divide by 100 and attach currency symbol.#2016-11-2513:08mitchelkuijpersYeah that was also my plan, Not sure is a long is alway big enough that is the reason I am leaning towards bigint#2016-11-2513:09karol.adamiecbut i did what i did because of javascript compatibility. I would use decimal otherwise#2016-11-2513:55robert-stuttaford@jonpither memcached as you need it. we connect everything to one cluster right now, but eventually backend webservers would have their own vs end-user webservers#2016-11-2513:56robert-stuttafordadvantage of one is that transactor pushes live index into the one it’s connected to#2016-11-2513:56jonpitherok - let's say you did that, does the transactor still need to be aware of all the various memcacheds out there?#2016-11-2513:56robert-stuttafordwhich turns memcached into the primary datastore and ddb the near-line backup 🙂#2016-11-2513:57robert-stuttafordi know you can only give txor one. i presume that any other url given to a peer but not txor would essentially be a private 2nd tier for all who share it. e.g. @stuarthalloway mentioned having a memcached on his computer for a production transactor (so remote repl queries are faster)#2016-11-2514:05jonpitherso you can give a peer a memcached and not tell the transactor?#2016-11-2514:06robert-stuttafordyes. 
you can give different memcached to different peers#2016-11-2514:07robert-stuttafordtransactor uses it as its own 2nd-tier cache (for the queries it does) and also writes live-index segments there. all other peers will use which ever they connect to as 2nd-tier cache. if it’s the same one for everyone, you obviously get leverage#2016-11-2514:28jonpithercool - thanks#2016-11-2516:05pesterhazyI'm working with a cardinality many ref. Does anyone have a transaction fn at hand that replaces all current refs with a new set?#2016-11-2516:06karol.adamiec@pesterhazy haha, i had the same days ago#2016-11-2516:06karol.adamiecin general i turned away from that solution#2016-11-2516:06karol.adamiecand i do a built in fn for that :db.fn/retractEntity#2016-11-2516:08karol.adamiecthe function i tried to use before had issues ;/ but here it is:
;; DB function that replaces a collection with a new (or empty) collection
{:db/id #db/id [:db.part/user]
:db/ident :assertWithRetracts
:db/fn #db/fn {:lang "clojure"
:params [db e a vs]
:code "(vals (into (into {} (map (comp #(vector % [:db/retract e a %]) first) (datomic.api/q [:find '?v :where [e a '?v]] db))) (into {} (map #(vector % [:db/add e a %]) vs))))"}}
#2016-11-2516:08karol.adamieccredit to google groups user….#2016-11-2516:08marshall@jonpither you can give the transactor multiple memcached instances and it will push segments to all of them#2016-11-2516:08pesterhazy@karol.adamiec interesting#2016-11-2516:09pesterhazymy use case is I want to re-create all dependent entities and remove old ones in a single tx#2016-11-2516:09pesterhazyideally the caller shouldn't need to know the entids of the dependent entities#2016-11-2516:10pesterhazybut obv not sure that this is the right approach#2016-11-2516:10karol.adamiecif you mark dependants as isComponent they will be retracted#2016-11-2516:10pesterhazywell I don't want to delete the original entity, only update it!#2016-11-2516:11karol.adamiecwell, i compromised on that 🙂. the origianl entity in my use case is not linked to anything other than owner, so it is ok for me#2016-11-2516:12karol.adamiectry the fn i pasted, it works if you pass ID . I had problems with it working with tempids or lookup refs#2016-11-2516:14rauh@pesterhazy https://gist.github.com/rauhs/0704f6492674ea79e935a9e01ac3a483#2016-11-2516:20pesterhazy@rauh, that looks great#2016-11-2516:27pesterhazycode looks a bit scary#2016-11-2516:28pesterhazy@rauh, could you give an example of how to use it?#2016-11-2516:29pesterhazythe negative numbers in the gist refer to tempids?#2016-11-2516:30pesterhazyand why do you have to supply a tempid and actual id for each value?#2016-11-2516:35pesterhazyhere's a simpler version: (defn replace-refs [db e attr vs]
  (->> e
       (d/q [:find '[?v ...] :in '$ '?e :where ['?e attr '?v]] db)
       (map (fn [v] [:db/retract e attr v]))
       (concat (map (fn [v] [:db/add e attr v]) vs))))#2016-11-2517:09rauh@pesterhazy There is an example in the doc string and just right afterwards is another one as a #_ comment#2016-11-2517:09rauhWhich you can run on the db (it will return you the transaction generated by the fn)#2016-11-2517:10rauhIf you already have all entities in your db, then you can nil the tempids, they won't be touched#2016-11-2517:10rauhBut it's flexible enough that you can add new entities at the same time.#2016-11-2517:18jonpithermanaged to crash DT on a load-test - Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "hornetq-expiry-reaper-thread"#2016-11-2517:26pesterhazy@rauh, thanks for the explanation
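For illustration, the retract-then-assert idea behind both versions is just a data transformation. This hypothetical sketch (JavaScript mirror of the Clojure above, not Datomic API code) also filters out values present in both sets, so the same datom is never retracted and re-asserted within one transaction:

```javascript
// Build tx-data that replaces the current values of a cardinality-many
// attribute with a new set: retract what's no longer wanted, add what's new.
// Values in both sets are left untouched, avoiding a retract + add of the
// same datom in a single transaction.
function replaceRefsTx(entity, attr, currentVals, newVals) {
  const want = new Set(newVals);
  const have = new Set(currentVals);
  const retracts = currentVals
    .filter(v => !want.has(v))
    .map(v => [":db/retract", entity, attr, v]);
  const adds = newVals
    .filter(v => !have.has(v))
    .map(v => [":db/add", entity, attr, v]);
  return retracts.concat(adds);
}
```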
I did just simplify my gist a little bit.#2016-11-2607:25robert-stuttafordrad @marshall good to know about multiple memcached for txor#2016-11-2717:53jarppeWhat does it mean when during stress testing Datomic logs "Critical failure, cannot continue: Heartbeat failed"?#2016-11-2717:55jarppeI'm making a lot of transactions and using DynamoDB#2016-11-2717:58jarppeOn AWS console I see a "critical alert" and "Consumed write capacity >= 40 for 5 minutes", so it seems that DynamoDB does not like me anymore, but should that kill transactor completely?#2016-11-2815:00marshallDatomic 0.9.5530 is now available https://groups.google.com/d/topic/datomic/xeo9my3gC88/discussion#2016-11-2815:18robert-stuttafordhttp://blog.datomic.com/2016/11/datomic-update-client-api-unlimited.html#2016-11-2815:19robert-stuttafordholy wow#2016-11-2815:19robert-stuttafordsuddenly a lot of old information riding around in my head 🙂#2016-11-2815:19robert-stuttafordthis is hugely exciting, @marshall !#2016-11-2815:19marshall@robert-stuttaford We certainly think so#2016-11-2815:19marshall😄#2016-11-2815:20karol.adamiecyay. Client API :+1: . Please add javascript soon 😄#2016-11-2815:20robert-stuttaford@marshall the post mentions the client library is open source. on github? or is that coming soon?#2016-11-2815:20robert-stuttaford… java release shortly. nm!#2016-11-2815:22marshallCurrently the source for the clients is provided via source jar.
The clients are currently in alpha, but we are working to move the APIs to their final state and provide the additional documentation, tooling, etc. required to build/fork/modify them. We wanted to get clients into our customers’ hands and start getting feedback as soon as possible.#2016-11-2815:22marshallThe source jars are in maven central#2016-11-2815:25robert-stuttafordok, great. i’ll have a look soon#2016-11-2815:30robert-stuttaford@marshall, i haven’t read the comparison doc, yet, but will clients support a permissions model akin to sql GRANT USER READ?#2016-11-2815:31robert-stuttafordone thing that’s worried me about peers is the ease with which d/delete-database can be invoked at a repl#2016-11-2815:31marshallThere will be some controls around those sorts of administrative capabilities. As of now, clients can’t create or delete databases#2016-11-2815:32robert-stuttafordok. that’s great. i can immediately see some of our apps dropping to pure client#2016-11-2815:33robert-stuttafordwhat’s the wire protocol for clients? also tcp + fressian things?#2016-11-2815:33robert-stuttafordlooks like it, if it also has the cache!#2016-11-2815:34robert-stuttafordwow. this is fantastic. great work!#2016-11-2815:35robert-stuttafordclients give you a way to work with many large databases by working with multiple peer servers#2016-11-2815:36robert-stuttafordi was wondering how one would work with multiple 10bn-datom dbs in a single app, looks like Clients gives us one way#2016-11-2815:36robert-stuttafordthinking far into the future now#2016-11-2815:36marshallcertainly the hope is that these additions will enable more architectural flexibility#2016-11-2815:37robert-stuttafordare non JVM clients planned?#2016-11-2815:38robert-stuttafordlike, say, JavaScript? :-)))#2016-11-2815:39robert-stuttafordof course, permissions model vital for this, which may mean lots of work still to do.
i don’t see anything in the comparison that excludes javascript#2016-11-2815:39marshallit is absolutely something we want to support#2016-11-2815:39marshalltimeline TBD#2016-11-2815:40robert-stuttafordi think it’s a great testament to Datomic’s design that so little of the model had to change to support Clients. i totally get the tempid variant#2016-11-2815:40robert-stuttafordexcellent 🙂#2016-11-2815:41ckarlsenam I dreaming?#2016-11-2815:42robert-stuttafordvery lucidly 🙂#2016-11-2815:44val_waeselynck@marshall you guys rock!#2016-11-2815:44ckarlsenbest christmas present ever 🙂#2016-11-2815:46marshall🙂#2016-11-2815:46curtosisexcellent news!#2016-11-2815:58curtosishmm… have to think about how the new Starter license terms change things. I think they make sense, but it’s a model I don’t think I’ve seen before.#2016-11-2815:59ljosathe starter terms seem great for evaluating, for smuggling a system into production so it can prove its worth before buying a license, or for startups.#2016-11-2816:00curtosisthough it does break the “build a small cheap app to run on DynamoDB and leave it alone” model.#2016-11-2816:01kirill.salykinwow!#2016-11-2816:02val_waeselynck@ljosa I can relate to that 🙂#2016-11-2816:02ljosaWe started with the old "two free peers" starter license, but that was too limiting even before it was in production. Then we bought 22 licenses, thinking that we would have a bunch of peers. As it turns out, we have fewer than 10 peers, for it was more convenient to write a server that answers queries (datomic or ADI) on behalf of most of our clients. Even for clients that are not microservices, it was good to avoid the cold object cache at startup.#2016-11-2816:07curtosismy model for this particular kind of app is really just a single peer - more like embedded, but with non-filesystem storage (either Dynamo or Postgresql). 
It’s a bit of a weird use case, perhaps, but there’s something to be said for using the existing storage backup mechanisms rather than managing file-based backups.#2016-11-2816:08curtosisBut that’s maybe just pushing complexity around without really making any real difference.#2016-11-2816:39drewverleeIf i just want to play with Datomic for learning purposes, whats the best route? Datomic Starter from http://www.datomic.com/get-datomic.html?#2016-11-2816:47alexmilleryes, you can do everything you need to with that with no initial cost#2016-11-2816:54dm3quite a significant release there!
Does anyone know how to interpret the "Maintenance/Updates Limited to 1 Year" under the new Datomic Starter licence terms?
Does this mean that if you get the licence for 0.9.5530 now and when another version is released after 1+ years you need to get a new (paid?) licence?#2016-11-2816:56luposlipHi @alexmiller, I have a license on Starter that expired the 11th of November. The transactor and peer I use is version 0.9.5344. Can that be upgraded?#2016-11-2816:56alexmillerI’m not the right person to answer that - check with @marshall or email [email elided]#2016-11-2816:58luposlipAlright - @marshall, I have a license on Starter that expired the 11th of November. The transactor and peer I use is version 0.9.5344. Can that be upgraded?#2016-11-2816:59alexmiller@dm3 I defer to someone from the Datomic team for anything official. but my understanding is that after 1 year, you can continue using the versions you have with your license (that’s the “perpetual” part) but that if you want to upgrade past that point, you have to acquire a paid license.#2016-11-2817:03marshallAlex’s interpretation is correct#2016-11-2817:03bhaganywhatttttttttttttttttttttttttttttttttttttttttttttttttttttttttt this is amazing#2016-11-2817:03marshallThe renewal only covers the ability to use newer versions of the software.#2016-11-2817:03marshallStarter was always intended as a path for customers to explore and use Datomic in a low-risk, low-cost approach as they developed their applications and moved toward production. We feel that a year is generally sufficient time to evaluate a product and develop a business application around that product.
If you feel that you require a longer period to evaluate or develop against Datomic, please contact us.#2016-11-2817:04marshallThat includes if your Starter license maintenance has recently expired and you’d like to discuss the option to evaluate the latest release(s).#2016-11-2817:04marshallYou can always email me at [email elided]#2016-11-2817:05bhaganymy boss is so happy that we paid for a $3k license last week#2016-11-2817:05bhagany(sorry, datomic team)#2016-11-2817:28chadhskind of a bummer that the free starter pro tier went away for small projects. i suppose the idea is the free tier can suffice for that and needs beyond that you should pay for.#2016-11-2817:28alexmillerstarter pro is still free#2016-11-2817:28chadhsfor one year only i thought?#2016-11-2817:29alexmilleryou can use it forever, just can’t keep upgrading after a year#2016-11-2817:32chadhsright, i understand that alex thnx. it just seems like an odd limitation. i liked the idea of being able to upgrade / renew that so you had a way to mock a production setup easily. then if you were going to roll something out for “real” for yourself or a client you could buy the appropriate licensing. Just my 2c#2016-11-2817:34pesterhazypersonally my experience has been that old datomic versions don't just stop working; I only update for new features/significant bug fixes#2016-11-2817:36chadhsalso, people that registered in the past when it was renewable won’t get to download the latest version and play with things like memcache integration etc without creating a new account.#2016-11-2817:37chadhsi mean all this as helpful critique. not at all “whining” about the idea of having to buy datomic; i think people should pay for it in production and not try to “get by” with starter pro personally.#2016-11-2817:37jonas> With today’s release, we are making available the alpha version of the open source Client library for Clojure
Is the source available somewhere?#2016-11-2817:37chadhshappy to move this to mailing list as well to leave room for Qs#2016-11-2817:37andyparsons@marshall piling on- this is all great news. One question: what is the definition of "system" for the new pro pricing? As in, "ongoing maintenance fees of $5,000 per year per system"#2016-11-2817:37marshallGreat question#2016-11-2817:38marshallSystem is a production Transactor and it’s connected peers/clients#2016-11-2817:38andyparsonsgot it#2016-11-2817:38marshallso you can still run unlimited dev/staging/test instances#2016-11-2817:38marshallbut if you need 2 separate live production transactors, then that’s two licenses#2016-11-2817:39marshall@jonas The source is provided as a source jar. it is in Maven Central#2016-11-2817:39chadhs@marshall would a typical deployment then be ~$10k for primary and backup transactor?#2016-11-2817:39marshallno, sorry, HA doesn’t count#2016-11-2817:39jonas@marshall thanks#2016-11-2817:40marshallif you have a single transactor (+HA) + your peers/clients - that’s $5000 per year#2016-11-2817:40jonasare you planning to push it to https://github.com/datomic?#2016-11-2817:40chadhsso “system" is transactor, ha, and connected peers?#2016-11-2817:40chadhsoh just saw your answer; thanks @marshall !#2016-11-2817:40marshall👍#2016-11-2817:41marshall@jonas The clients are currently in alpha, but we are working to move the APIs to their final state and provide the additional documentation, tooling, etc. required to build/fork/modify them. We wanted to get clients into our customer's hands and start getting feedback as soon as possible. We would also love to have feature requests/feedback/etc on clients (and on all parts of Datomic). We've recently set up a system with http://Receptive.io to gather customer feedback. 
You can access it from the top nav bar of your http://my.datomic.com dashboard via the "Suggest Features" link#2016-11-2817:41andyparsonscongrats @marshall and team, this is a big deal for us (and for my ability to recommend Datomic without reservation to other teams)#2016-11-2817:41marshall@andyparsons Thanks! We’re really excited and glad you are too.#2016-11-2817:46bbloomGlad to see the string-based tempids thing! I noted the comment about the underutilization of partitions. I’m curious: Does anybody actually make good use of partitions? What’s the impact of using them?#2016-11-2817:48bbloomand then i guess the question is how are they automatic now? Just everything in one big default partition? Or dynamically partitioned somehow?#2016-11-2817:59jonas@marshall I can’t see the “Suggested Features” link at http://my.datomic.com for some reason#2016-11-2818:00marshall@jonas You need to have a license in the account. Do you have a Starter license on the account you’ve logged in with?#2016-11-2818:00jonasOk, I don’t have that yet#2016-11-2818:00marshallYep, the link will show up once you have a license in the account#2016-11-2818:18timgilbertThis is awesome, thanks guys. Definitely looking forward to more info about the tempid changes.#2016-11-2818:37dpsuttonthere's a hacker news article about datomic right now if you're interested in reading the comments: https://news.ycombinator.com/item?id=13055961#2016-11-2819:17weijust saw the article, thanks for the licensing change! am also a fan of tempid improvements#2016-11-2819:19haywoodwow, I just started a project with datomic and these changes are amazing!#2016-11-2820:06ljosaOne of our developers on the javascript/golang side just expressed dismay over the announcement that "with the introduction of client libraries, the REST server is no longer supported for new development." 
Why not keep supporting an HTTP API?#2016-11-2820:29marshallThe REST api will continue to ship with Datomic; our development focus will be on clients - including clients for other non-jvm languages#2016-11-2822:26danielcomptonhttps://danielcompton.net/2016/11/29/guide-to-datomic-licensing-changes#2016-11-2822:26danielcomptonI wrote a guide to the licensing changes, let me know if I've made any mistakes#2016-11-2822:37sparkofreasonIs anybody using Datomic with AWS Lambda? I had read somewhere that this wouldn't work, but seems like the new chunked query API makes it a nice fit for processing large query result sets.#2016-11-2823:50Matt ButlerHi, Is there a max number of datums for a single transaction?#2016-11-2912:57ustunozgurThe licensing changes seem to be 2 steps forward, 1 step back. However, from a business perspective, I do sympathize with Cognitect.#2016-11-2913:02kardanYes, I fully understand that there need to be a path to pay. But I do find myself asking myself if I think my next project will become serious enough in a year to make me want to pay that yearly fee.#2016-11-2913:03kardanBut I know too little about Datomic to really say much. Maybe running on the same version is totally fine#2016-11-2913:30robert-stuttafordconsidering what hassles using Datomic has saved us, the price is very cheap.#2016-11-2913:32kardanI can image that. Reading back I might have sounded a bit harsh. But still something that one needs to be convinced about to take that path#2016-11-2913:33robert-stuttafordof course. i suppose i didn’t take much convincing. don’t regret the decision for a second. 
it’ll be 4 years in production in Jan#2016-11-2913:42robert-stuttafordhttps://twitter.com/RobStuttaford/status/803594325405868032#2016-11-2913:48robert-stuttaford@jaret @marshall typo on the tutorial Notice that /:db.cardinality/many captures ...#2016-11-2914:20jaret@robert-stuttaford thanks!#2016-11-2914:59staskis there a way to automagically create a database when running peer-server if it doesn’t exist yet?
i’m trying to build environment consisting of dev transactor and peer-server using docker-compose#2016-11-2915:19marshallPeer Server can ‘create’ memory DBs, but you’ll need to use a Peer to create dev (or other storage) databases#2016-11-2915:51jdubiei doesn’t seem like there is an index for this but is there anyway to get a vector or lazy-seq of all entity ids in a datomic database?
these both throw exceptions
(datomic.api/index-range db :db/id nil nil)
(datomic.api/q '[:find ?e
:in $
:where [?e :db/id]]
db)
CompilerException java.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity: :db/id, compiling: ...
...
datomic.api/index-range api.clj: 178
datomic.db.Db/indexRange db.clj: 1747
datomic.db/attr-index-range db.clj: 799
datomic.db/require-id db.clj: 555
datomic.error/arg error.clj: 55
datomic.error/arg error.clj: 57
datomic.impl.Exceptions$IllegalArgumentExceptionInfo: :db.error/not-an-entity Unable to resolve entity: :db/id
data: {:db/error :db.error/not-an-entity}
clojure.lang.Compiler$CompilerException: java.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity: :db/id, compiling: …
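[Editor's note: jdubie's question doesn't get an answer in the log. A common workaround — a sketch, assuming the peer API — is to walk the EAVT index, since :db/id is not a stored attribute (hence the :db.error/not-an-entity above) and cannot be used with index-range or in a :where clause:]

```clojure
(require '[datomic.api :as d])

(defn all-entity-ids
  "Lazily yields the distinct entity ids in db by walking the EAVT index.
  EAVT is sorted by entity id, so dedupe is enough to drop repeats."
  [db]
  (->> (d/datoms db :eavt)
       (map :e)
       (dedupe)))
```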
#2016-11-2915:55stask@marshal thanks, i was hoping to be able to create a full environment (transactor, peer-server, applications using client api) using docker-compose, will probably add some utility that uses peer library and just creates new db after transactor starts and before the peer-server starts#2016-11-2916:08marshall@stask Yep - you should be able to ‘script’ that via a Peer#2016-11-2916:45timgilbertHey, looks like the Clojure API docs here are out of date: http://docs.datomic.com/clojure/index.html#datomic.api/log
...given that the memory database does now support the log API: http://blog.datomic.com/2016/08/log-api-for-memory-databases.html#2016-11-2916:46marshall@timgilbert Thanks - i’ll fix it!#2016-11-2916:47timgilbertThanks @marshall! Also, can you point me to any docs for (d/history) apart from the docstring in the API docs?#2016-11-2916:48marshall@timgilbert http://docs.datomic.com/filters.html#history and http://docs.datomic.com/best-practices.html#use-history#2016-11-2916:48marshallalso some discussion here: http://blog.datomic.com/2014/08/stuff-happens-fixing-bad-data-in-datomic.html#2016-11-2916:49timgilbertAwesome, thanks#2016-11-2916:50marshall👍#2016-11-2918:35shaunxcodeare there any published details on the implementation of the peer server/client e.g. what network protocol is it using etc?#2016-11-2919:39alexmillerThere will be a lot more info released on that in the future#2016-11-3010:16ovanDoes anyone have experience using Amazon Aurora as a storage backend with Datomic? They claim it's MySQL compatible, whatever that means, and what I understand about how datomic uses storage services it doesn't need much (or anything) in terms of database implementation specific features.#2016-11-3010:40karol.adamiecout of pure interest if on AWS then why not use DynamoDB? can you share some rationale? asking because in similiar position i have not even considered other storage so i am curious about what i might have missed 🙂#2016-11-3010:58ovanDynamoDB is definitely the other option we're considering (actually we're not sure we're going to go forward with Datomic at all but right now it feels promising). Taking Datomic into use means a lot of new stuff to learn from operational perspective. We have a lot of experience running with RDS MySQL but nobody in our team has tried DynamoDB yet. So basically if Aurora would work nicely as a backend that might be one less new thing to take into use right now. 
I would like to understand the options in general and how the storage affects things so we can make at least somewhat informed decisions. I couldn't easily find a whole lot of information about how to choose the storage backend for Datomic and what are the tradeoffs there.#2016-11-3011:00ovanPrice is another consideration. My hunch is that Aurora might be a cheaper option to start with. That said, we haven't done any calculations yet so I might be totally off base here.#2016-11-3011:01pesterhazy@ovan, aurora is a modified version of mysql, so in all probability it'll work just like mysql (plus datomic's sql needs are likely not to be very sophisticated as it uses it as a k/v store)#2016-11-3011:04ovan@pesterhazy, thanks. That matches with my understanding.#2016-11-3013:20mitchelkuijpers@ovan From a pricing perspective I can recommend dynamodb. Using it feels a bit like cheating because datomic caches almost everything#2016-11-3013:44jonpitherAny recommended JVM args for the transactor beyond the documented memory settings?#2016-11-3013:45jonpitherAnd the peer for that matter#2016-11-3014:25jonpitherAnyone have any helper code for converting entity maps into a sequence of datoms..?#2016-11-3014:43robert-stuttafordnot hard to write 🙂#2016-11-3015:21jonpithersubmaps etc, can get slightly tricky, but am doing it#2016-11-3015:27dominicmApparently there's a gist somewhere with the code already written#2016-11-3015:28rauh@jonpither Datascript does this as well in its implementation. Yoou could also just do a with on some temp db. Thoough needs a schema#2016-11-3015:29dominicmhttps://clojurians-log.clojureverse.org/datomic/2016-06-01.html#inst-2016-06-01T15:30:22.001012Z#2016-11-3016:23pesterhazyis map-form-tx->vec-form-txs a mechanical, pure fn, or does it require looking at the existing db?#2016-11-3016:38karol.adamiechow can i replace a ref of cardinality one that is a composite??
1) find the id
2) retract id (no loose datoms)
3) assert new entity
i reeeaaallyy want something simpler … any ideas?#2016-11-3016:42karol.adamieci can easily update the main entity (i have unique on it). But then each update creates a new referenced entity, even though the thing is marked as a composite… ;/#2016-11-3016:45karol.adamiec;User
{:db/id #db/id[:db.part/db]
:db/ident :user/email
:db/valueType :db.type/string
:db/unique :db.unique/value
:db/cardinality :db.cardinality/one
:db/doc "Email"
:db.install/_attribute :db.part/db}
{:db/id #db/id[:db.part/db]
:db/ident :user/shipping
:db/valueType :db.type/ref
:db/isComponent true
:db/cardinality :db.cardinality/one
:db/doc "Shipping address"
:db.install/_attribute :db.part/db}
#2016-11-3016:46karol.adamiec{:db/id [:user/email “#2016-11-3016:47karol.adamiecso on a schema like the above, the transaction is creating a NEW address entity each time 😞#2016-11-3016:49jcf@karol.adamiec you need to either merge the existing component entity with the new attributes, or retract it, and add a new entity. It can't be done with the map form of a transaction. You need :db/add and :db/retract.#2016-11-3016:50jcfHas anyone here hit the 10 billion datom limit recently? Wondering if the Datomic team are testing with larger databases these days.
I can partition data into separate databases, and maintain multiple connections, but I'd like to avoid that complexity for a while.#2016-11-3016:50jcfRelevant discussion from 2015: https://groups.google.com/forum/#!topic/datomic/iZHvQfamirI#2016-11-3016:50zaneHey all. I'm trying to introduce some memoization to some functions I've written that take datomic database values as arguments. Is there any way to uniquely identify the connection a given database value came from?#2016-11-3016:52jcf@zane: you can't use the URI you used to create the connection?#2016-11-3016:53zaneThe functions in question don't take the URI.#2016-11-3016:54zaneI could have them take extra arguments and cache based on those, but I'd rather not if I can avoid it.#2016-11-3016:54jcfYou could memoize outside of the query functions, but I guess you don't want that.#2016-11-3016:54zaneYeah.#2016-11-3016:54marshall@jcf Just yesterday @stuarthalloway mentioned 100B in the Datomic Workshop at Conj.#2016-11-3016:54marshallIf you think you’re building a system that will need 10-100B datoms, you should email me#2016-11-3016:54marshall<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>#2016-11-3016:55marshalland we’ll talk about the specific details/challenges with administering a database of that size#2016-11-3016:55marshallbut it’s definitely not a “limit"#2016-11-3016:55karol.adamiec@jcf how can i get the id of :user/shipping entity to wrap it all up in one transaction?#2016-11-3016:56jcf@marshall my client has a paid support agreement in place. I can see if I can get them to add me to the ZenHub account (I'm assuming you guys are still using that?) and go through official channels if you want… we've already been sold - I just need to see if I can do this without Cassandra.#2016-11-3016:58marshallSure - Just send an email to support @ cognitect and let us know what client you’re working with#2016-11-3016:59jcf@karol.adamiec something like this:
(let [user-id (d/entity (d/db conn) [:user/email "
#2016-11-3016:59jcfYou might want/need to diff the components however. That's beyond what I can type up in Slack however.#2016-11-3017:00karol.adamiecthanks#2016-11-3017:00jcfThanks @marshall!#2016-11-3017:00marshallsure#2016-11-3017:00karol.adamiec@jcf but the real issue for me is how do i get shipping-id-to-remove?#2016-11-3017:00karol.adamiechaving only the email?#2016-11-3017:02jcfLoad the user, and you'll get the shipping IDs back from Datomic. (map :db/id (-> conn d/db (d/entity [:user/email " will give you a list of all the shipping IDs.#2016-11-3017:02jcfThat assumes email is unique. If not you'll need to query, and pick the right user.#2016-11-3017:02karol.adamieccan i do that inside of a transaction?#2016-11-3017:03karol.adamieci am fighting uneven battle trying to use REST API 😞#2016-11-3017:03jcfNo. Do the work in the peer, and send the transaction to the transactor.#2016-11-3017:03jcfOh. I'm using Datomic from Clojure. I don't know anything about the REST API.#2016-11-3017:03jcfMaybe the new client stuff will make your life easier. It was announced in the last couple of days.#2016-11-3017:04karol.adamiecoh yes. i am waiting. Ehh. Thanks. Will fire couple https req at the db then. Tried to avoid that 😞#2016-11-3017:06karol.adamiecon a realted note do lookup refs nest?#2016-11-3017:06karol.adamiec😆#2016-11-3017:07jcf@karol.adamiec not sure I follow. A lookup ref is of the form [attribute value], and you can't do something like [attribute [attribute value]].#2016-11-3017:07karol.adamiecyeah, i tried and failed, but that is exactly what i would liek to do 🙂#2016-11-3017:07marshallAlso, it was mentioned in the Blog post, but we now have a Feature Request & Feedback portal available - if you log into my.datomic there is a link to “Suggest Features” in the top nav; go there and vote for/suggest improvements and/or clients in your language of choice#2016-11-3017:08jcf@marshall open source. 
troll#2016-11-3017:08marshall…dont feed the trolls 😉#2016-11-3017:09karol.adamiec@marshall is a nested lookup ref a technical possibility or am i deeply misunderstanding how datomic works?#2016-11-3017:09jcfI'd love to see support for one of Google's Cloud storage engines.#2016-11-3017:09jcf@karol.adamiec you more than likely should be using a query.#2016-11-3017:09karol.adamiecyeah! but i need to transact! 🙂#2016-11-3017:10karol.adamiecthat means query first, transact next#2016-11-3017:10karol.adamieci clojure it is almost the same#2016-11-3017:10jcfDo the query, and then transact. That's fundamental to how Datomic works.#2016-11-3017:10karol.adamiecover rest you feel the pain#2016-11-3017:10jcfYou almost always want to offload work to your peers, and only transact simple additions and retractions.#2016-11-3017:10jcfOtherwise you hit timeouts doing full index scans etc.#2016-11-3017:11karol.adamiecyeah, i think i try to constantly abuse datomic 😮#2016-11-3017:11jcfIt sounds like it! 😉#2016-11-3017:12karol.adamiecit is the rest trap. However i try to convince myself that firing off requests is fine …. i always end up trying to minimize the amount of traffic, which is a datomic antipattern surely.#2016-11-3017:15jcf@karol.adamiec are you sending requests from the browser or some backend service?#2016-11-3017:15karol.adamiecbackend#2016-11-3017:15karol.adamiecnodejs 🙂#2016-11-3017:15jcfCan you not keep an HTTP connection open with the REST API?#2016-11-3017:15karol.adamiecyes#2016-11-3017:16jcfIf you keep connections alive, then it doesn't matter so much. It's the cost of establishing a connection I'd worry about.#2016-11-3017:16jcfSend the requests! 
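[Editor's note: the query-then-transact flow discussed above can be sketched roughly as follows, reusing the :user/email and :user/shipping attributes from the schema earlier in this log. The email value, the address attributes, and the conn binding are illustrative assumptions, and the user is assumed to exist:]

```clojure
(require '[datomic.api :as d])

;; Look up the current shipping component (if any) on the peer, then retract
;; it and assert the replacement in a single transaction.
(let [email       "user@example.com"                ; hypothetical
      db          (d/db conn)
      shipping-id (get-in (d/pull db [:user/shipping] [:user/email email])
                          [:user/shipping :db/id])]
  @(d/transact conn
     (cond-> [{:db/id [:user/email email]
               :user/shipping {:address/street "1 New Street"}}] ; hypothetical attrs
       ;; :db.fn/retractEntity removes the old component entity and the
       ;; :user/shipping ref pointing at it, so no dangling datoms remain
       shipping-id (conj [:db.fn/retractEntity shipping-id]))))
```

[As marshall notes below, the lookup and the transact are two steps, so if atomicity matters the same logic belongs in a transaction function, or a cas-based strategy can be used instead.]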
😄#2016-11-3017:16karol.adamieci think i am fine anywya#2016-11-3017:16karol.adamiecit is a small evcommerce shop#2016-11-3017:16marshallYou can’t nest lookup refs - in this case I think query is the right approach#2016-11-3017:17marshallif you need atomicity of the lookup and transact, you can either use a transaction function or use a more ‘optimistic’ concurrency strategy and use cas#2016-11-3017:18jcfCAS works really nicely. Transaction functions are a last resort for me because they can end up being slow (at least when I've abused them in the past).#2016-11-3017:18jcfFor efficient functions where you use a fast index it can be all good.#2016-11-3017:19karol.adamieci think query is right thing to do. get the id. if exists, retract, if not do nothing. then assert full user enity again with address.#2016-11-3017:19karol.adamiecbut navigating lookup refs like pulls would be nice 🙂#2016-11-3017:19karol.adamieci could abuse datomic longer 😄#2016-11-3017:20jcfIf you're using Clojurescript you can use clojure.set to work out what you need to retract etc. From JS I guess you have to write it all yourself. 😉#2016-11-3017:20jcfES6?? http://www.2ality.com/2015/01/es6-set-operations.html#2016-11-3017:20karol.adamiecyep es6#2016-11-3017:21jcfI haven't written vanilla JS in years. Before React was released at least.#2016-11-3017:22karol.adamiecwell anyway the right thing to do is query->transact. I do not need CAS semantics per se. All i wanted to do is to be lazy and fire off one, maybe a bit tricky transaction and have it do it for me 🙂#2016-11-3020:11curtosisfor the simple, “embedded” app case, is the recommended best practice still using the Peer library, correct? 
Rather than starting up a transactor AND a peer-server AND the app+client?#2016-11-3020:20robert-stuttafordi wonder if you can have a process be its own peer-server#2016-11-3020:21robert-stuttafordallowing you to code with client but only have one jvm run#2016-11-3020:21robert-stuttafordis that a possibility @jaret ?#2016-11-3020:21curtosisI thought peer-server still needs to talk to a transactor (& storage)#2016-11-3020:21robert-stuttafordat least keeps the code portable#2016-11-3020:21robert-stuttafordyes, you always need a transactor#2016-11-3020:21robert-stuttafordfor durable storage#2016-11-3020:22curtosisoh I see what you mean#2016-11-3020:22curtosisjvm:(client + peer-server) + jvm:(transactor)#2016-11-3020:22robert-stuttafordyes#2016-11-3020:23curtosisthe benefit would be primarily sticking to the client API, right?#2016-11-3020:23robert-stuttafordyes#2016-11-3020:23robert-stuttafordmeans you can make the decision to go separate peer-server later on when need be#2016-11-3020:23curtosisbut I’m still learning the peer API! 😉#2016-11-3020:24robert-stuttaford-grin-#2016-11-3020:24robert-stuttafordthen use the peer! 🙂#2016-11-3020:24curtosisif a client process can’t be its own peer-server, then it’s moot#2016-11-3020:25robert-stuttafordit’s verrrry early days yet. we’ll figure it out 🙂#2016-11-3020:28jaret@robert-stuttaford you cannot have a process be its own peer-server.#2016-11-3020:28curtosisI think from my quick read, for most of the “embedded” use cases I can think of, the peer library is a much better fit.#2016-11-3020:28curtosisjust missing string tempids 😛#2016-11-3020:29ovan@robert-stuttaford, just listened to a defn podcast where you talk about Datomic. In the light of recent changes it was fun to hear the part about the problems with peer-based licensing model. 
🙂 Anyway, thanks for doing the podcast, really helpful information for our team as we're considering Datomic for our next project.#2016-11-3020:29marshallString tempi tempids are in the peer too#2016-11-3020:30curtosisah - that’s not in the summary table: http://docs.datomic.com/clients-and-peers.html#2016-11-3020:30curtosisbut it is in the text later: "Peers continue to support tempid structures, and in addition they also support the new client tempid capabilities."#2016-11-3020:30timgilbert@zane: better late than never, I hope, but database values have a :id attribute which generally points to the URL of the connection they came from#2016-11-3020:31robert-stuttafordthought so. so, if you want to use peer-server and a durable db in dev, you’re starting 3 processes now#2016-11-3020:32robert-stuttaford@ovan, yeah 🙂 how quickly our discussion became legacy! so totally happy about the changes this week. if you have any questions in aid of your decision, let’s have em. i love learning about other contexts#2016-11-3020:44ovan@robert-stuttaford, Thanks. I do have a couple of question if you don't mind. You mentioned in the podcast that you ran the first year or so with Postgres as a storage backend and only later moved to DynamoDB. Would you do the same again or just start directly with dynamo? I'm mainly concerned about operational aspects like tuning the capacity. Also, what's you experience with operating the transactors. Any surprises that were hard to debug or fix?#2016-11-3023:18ljosaIs there any reason not to use the dev storage in production in a situation where persisting to the local file system is okay? 
Is it named dev to discourage production use?#2016-11-3023:18ljosaAlso, is there any different between the dev storage and the free storage that is in the free version of Datomic?#2016-12-0101:17gdeer81@ljosa because if you have an issue in the middle of the night and you call the support team and they ask you what storage you are using and you say "dev" they hang up on you and go back to sleep#2016-12-0101:22gdeer81I'm not even being snarky, I literally heard that yesterday at the day of datomic training#2016-12-0101:23marshallFrom the mouth of Stu Halloway#2016-12-0101:24gdeer81I could have sworn it was Justin who said it, but either way, that's coming straight from the top#2016-12-0105:15robert-stuttaford@ovan we used postgres at first because, quite frankly, we understood it. I’d use Dynamo from the start, if i did it again. as for transactors, i recommend using the official AMI, and not trying to do anything fancy to it (we ran a Datadog agent on ours and i think it interfered with clean instance self-termination). learn to read the cloudwatch metrics. @jaret @marshall i actually think there’s massive scope for some screencast training videos around that whole topic — how to understand what your transactor is telling you through metrics#2016-12-0108:57robert-stuttaford@jaret typo a temporary cannot begin with a colon (:) character on http://docs.datomic.com/transactions.html#creating-temp-id#2016-12-0108:58robert-stuttaford@jaret or @marshall, how interoperable is 0.9.5530 with older peers / transactors? do we have to upgrade them all together, or could we e.g. upgrade peers now and transactor later or vice versa?#2016-12-0109:47jarppe@robert-stuttaford Just out of curiosity, why would you choose DynamoDB over Postgres now? 
I'm leaning towards Postgres at the moment for exactly the same reason.#2016-12-0109:49jonpitherwith PG you don't have to worry about throttling and limits#2016-12-0109:49jonpither(as much)#2016-12-0110:04robert-stuttaford@jarppe don’t want to have to worry about RDS instances, basically#2016-12-0111:12jarppeOk#2016-12-0112:46caspercFrom the docs, when doing an import you need to request index and then gc-storage. After that typically, you want to do a backup (at least I do). But how does one wait for the gc-storage to complete? Is there an event in the log, or is there a better way? And is waiting for the gc to complete needed in the first place?
:in '$ 'selector (if id '?id '_)
:where
'[?e :job/job-id ?id]]
(d/db conn)
'[*]
id)#2016-12-0113:54pleasetrythisathomewords but seems like there should be a more elegant way to handle nil than having to do all that quoting every time#2016-12-0113:55pleasetrythisathomebut the following doesn’t work. assuming because ?id is actually bound to ‘_ instead of being unbound#2016-12-0113:55pleasetrythisathome(d/q '[:find [(pull ?e expr) ...]
:in $ expr ?id
:where
[?e :job/job-id ?id]]
(as-db conn)
'[*]
‘_)#2016-12-0113:57robert-stuttafordbetter to just write two queries#2016-12-0113:58robert-stuttafordthey’re semantically different. one is; find me X where Y. the other is find me X.#2016-12-0114:34pleasetrythisathomealright, i get that argument, seems redundant though#2016-12-0116:15robert-stuttaford@pleasetrythisathome datalog caches query compilation based on that first arg so there’s a perf benefit 🙂#2016-12-0116:16patwhen i try to connect to my running peer-server I get :cognitect.anomalies{:category :cognitect.anomalies/fault} , anybody have an idea what's going on?#2016-12-0116:22timgilbertSay, given a possibly erroneous lookup reference like [:user/id 42], is there an easy way to validate that it points to a real entity short of something like (try (d/entity lookup-ref) (catch ....))?#2016-12-0116:26timgilbertAnswering my own question, looks like (d/entity) returns nil if :user/id is a valid attribute, and throws IllegalArgumentExceptionInfo if it is not#2016-12-0116:36pleasetrythisathome@robert-stuttaford thanks!#2016-12-0116:37jaret@robert-stuttaford Thanks for catching the typo. In terms of interoperability you can upgrade piecemeal as described in the docs. http://docs.datomic.com/deployment.html#upgrading-live-system#2016-12-0116:37jaretWe still have the same compatibility breaking releases:#2016-12-0116:37jaret0.9.4609 (2014-03-13)
0.9.4470 (2014-02-17)
0.8.3705 (2013-01-09#2016-12-0116:38jaretSo you can upgrade the peers and transactors in any order you choose as long as its not on one of those ^ releases#2016-12-0116:49robert-stuttafordexcellent thank you @jaret!#2016-12-0121:26timgilbertSay, does the in-memory database operate in a different thread or something? I'm seeing some seemingly non-deterministic behavior where in a test I create two entities and then validate that some metadata I attached to the transactions is present#2016-12-0121:29timgilbertMost of the time it works, but every now and then my new transactions don't appear in the history. Adding a (Thread/sleep 1000) to the test seems to have stabilized it, though#2016-12-0122:22timgilbertOn an unrelated note, looks like there's a typo in the docstring here: http://docs.datomic.com/clojure-client/index.html#datomic.client.admin/list-databases Async, sell alos datomic.client namespace doc.#2016-12-0122:23timgilbertThink it's meant to be see also datomic.client...#2016-12-0213:01karol.adamiecis a separate partition for audit logs a good idea? I can easily store that on S3 as well, but why wyould i want to do that? if in db i can easily retrieve, query etc interesting audit infromation. Separate partition would also help in log rotation? And as one can see i do not work on high volume system by todays standards 🙂#2016-12-0214:24robert-stuttafordwhat would go in your audit logs that you couldn’t just put on your transaction entities directly, @karol.adamiec ?#2016-12-0214:24karol.adamiechmm, say user A tried to log in and failed on auth checks.#2016-12-0214:25robert-stuttafordok, so logging events for user gestures#2016-12-0214:25robert-stuttafordthen yes, a separate partition is a good idea#2016-12-0214:26robert-stuttafordwe do this, but not the partition. planning to re-build our db and put them into its own partition#2016-12-0214:27karol.adamiecok. 
if i ever need to rotate the logs (keep a year only) does partition give me any advantage?#2016-12-0214:27robert-stuttafordin that case, you’ll want a separate database, and you’d make a new database every time you rotate#2016-12-0214:28robert-stuttafordyou don’t want to put lots of data in that you plan to take out later. datomic isn’t really designed for that#2016-12-0214:28karol.adamiecyeah, most likely i will never need to rotate that anyway. It is not a huge system….#2016-12-0214:29karol.adamiecseparate partition seems like a good compromise for now…#2016-12-0214:29karol.adamiecat least will keep audit/log noise out of user space.#2016-12-0214:30karol.adamiecindexing is per partition as far as i understood#2016-12-0214:43jaret@timgilbert Thanks for the catch! I will update#2016-12-0216:03noogaIs it a bad idea to create identities while adding data? I have a case where I query external API that returns things like {:id “foo” :badges_collected [”badge_bar” “badge_baz"]}#2016-12-0216:06noogaI know that the set of badges doesn’t change very often but there’s a lot of them and I don’t have a decent way of extracting them all. I want to express the badges as enum so I’d transact something like: [{:db/id … :entity/id “foo” :entity/badges [:badge/bar :badge/baz]}]#2016-12-0216:07noogaAnd I would need to transact [{:db/id … :db/ident :badge/bar} {:db/id … :db/ident :badge/baz}] first for every entity I’m about to save.#2016-12-0216:07noogais that a bad idea?#2016-12-0219:48bhagany@nooga I’m going from memory here, but I believe idents are cached on each peer, so the more you have, the more memory you use. 
That might limit you.#2016-12-0220:01robert-stuttafordyep, and i think there’s a 32k limit on the overall count, @nooga and @bhagany#2016-12-0220:01robert-stuttaford(i think)#2016-12-0220:02noogayep, I know about the limit#2016-12-0220:02noogabut I expect no more than 40 kinds of badges#2016-12-0220:02bhaganyah, I wouldn’t worry about it then#2016-12-0220:02noogaI just don’t have a good way of listing them beforehand#2016-12-0220:02noogaso I wanted my DB to “learn them"#2016-12-0220:03bhaganyyeah, I think your transact-as-you-encounter strategy is fine for that low of a number#2016-12-0220:03noogathanks!#2016-12-0220:04bhaganynp 🙂#2016-12-0221:17azHi all, really want to get started with Datomic. Any good tutorials or books out that that I’m missing? What’s the best way to start learning this?#2016-12-0221:22sparkofreasonhttp://www.learndatalogtoday.org/#2016-12-0221:29azthank you#2016-12-0222:41glowzillaThis is probably a n00b question… but I’m trying to use the new client and I’m having some issues. If I use the REPL packaged with datomic, the connection works great. However, if I use the REPL in my own project… It gives me the
#:cognitect.anomalies{:category :cognitect.anomalies/unavailable, :message "java.net.ConnectException: Connection refused”} Anyone have a hint to what might be wrong?#2016-12-0222:43gdeer81@limix it wouldn't hurt to get a mentor for a few months to make sure you're on the right path. what things do you know about Datomic right now?#2016-12-0222:43az@gdeer81 would love to find a mentor#2016-12-0222:44azI have gone through the training videos on http://datomic.com#2016-12-0222:44azgoing to go through the learndatalogtoday#2016-12-0222:45azSo I feel I have a grasp on the idea and basics, but it’s still rough for me on how to work this into the stack#2016-12-0222:46gdeer81so the dev ops aspect of it?#2016-12-0223:21glowzillaFigured out my issue. Client uses jetty9…. which means if you’re using a ring server it does not play nicely since it’s a non jetty9 version 😞#2016-12-0300:24az@gdeer81, yes, I think that as well but also just how to work with it from the app#2016-12-0302:20gdeer81@limix well if you're talking about a Clojure app, you're in luck because Datomic queries just return data and Clojure is great with data. If you're using Clojurescript as the frontend then you're also in luck because it too likes data. I have a Datomic project I started and abandoned three years ago that might give you just enough of a jumping off point to get you feet wet. Fork https://github.com/gdeer81/functional-pharmacy and look at the schema and the different namespaces and the queries that are in there and see if you can build a UI for it or enhance the back end. When I demo'd that app I was literally just evaluating forms inside of emacs to show people at work how Datomic worked. If you want to take it to the next level you can, or you can just use it as a poor man's reference app#2016-12-0315:35val_waeselynckdatalog-rules: utilities for managing Datalog rulesets in Clojure. 
https://github.com/vvvvalvalval/datalog-rules#2016-12-0318:06az@gdeer81 thank you, going to checkout the project. Have you been working with datomic since that project?#2016-12-0318:59wilcovI'm sorry if this is too basic of a question but i've just recently gotten into Clojure (and looking into datomic). I understand that datomic focuses on data over time among others. But could somebody give a brief description of why they chose datomic over i.e. postgres?#2016-12-0319:04robert-stuttafordreally recommend any of Rich Hickeys talks on that, @wilcov#2016-12-0319:04robert-stuttafordhe says it best#2016-12-0319:06wilcovThanks @robert-stuttaford I've found 'intro to datomic' by him, i'll have a look at that#2016-12-0319:34robert-stuttaford@wilcov watch em all 🙂 http://docs.datomic.com/videos.html#2016-12-0319:35robert-stuttafordthis one may answer the questions you have http://channel9.msdn.com/posts/Rich-Hickey-The-Database-as-a-Value#2016-12-0319:35robert-stuttafordand if you’re going all the way down the rabbit hole, http://www.datomic.com/training.html#2016-12-0320:42ustunozgur@wilcov: datomic can use postgres as the underlying storage. Some trivia though: if you read the original postgres paper (or maybe ingres paper, can't recall), one of the main aims was tracking changes over time. And they had it, though no idea how it worked. But they removed it at a certain point in time. #2016-12-0320:45ustunozgurhttp://db.cs.berkeley.edu/papers/ERL-M91-62.pdf#2016-12-0320:45ustunozgurSection 4#2016-12-0320:46ustunozgurI guess if you turn off vacuuming, you can get some time travel possibility within Postgres. #2016-12-0320:48noogaLooks like the console doesn’t update its view of the db until I refresh the page. Is this normal?#2016-12-0320:49ustunozgurhttps://eng.uber.com/mysql-migration/ this outlines how Postgres internals is append only too. 
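[Editor's note on the badge-idents thread above: nooga's "transact idents as you encounter them" strategy can be sketched as below. The :entity/* attribute names, the badge→keyword mapping, and the use of :db.part/user are illustrative assumptions:]

```clojure
(require '[datomic.api :as d])

(defn badge-tx-data
  "Tx-data asserting the badge idents plus the entity referring to them.
  :db/ident is a unique identity, so the tempids upsert onto existing
  badge entities and re-asserting a known ident is a no-op (idempotent)."
  [{:keys [id badges]}]                       ; e.g. {:id "foo" :badges ["bar" "baz"]}
  (let [badge-kws (map #(keyword "badge" %) badges)]
    (concat
     (for [kw badge-kws]
       {:db/id (d/tempid :db.part/user) :db/ident kw})
     [{:db/id         (d/tempid :db.part/user)
       :entity/id     id                      ; assumed :db.unique/identity
       :entity/badges badge-kws}])))          ; assumed cardinality-many ref
```

[The keywords in :entity/badges resolve against the idents asserted in the same transaction, so one d/transact call per API record suffices.]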
#2016-12-0406:23gary@nooga yes :snowman_without_snow: #2016-12-0406:26gary@limix I've played around with codeq and simulant but I don't have anything in production 🔥 :snowman_without_snow: 🔥 #2016-12-0410:31wilcov@robert-stuttaford @ustunozgur thanks, this should be able to keep me busy for a while!#2016-12-0410:39seantempestaDumb question, but it’s driving me crazy. How can I stop the [Datomic Metrics Reporter] DEBUG datomic.process-monitor messages in the repl?#2016-12-0416:58azHi all, could anyone give me some techniques for building a realtime clojurescript app with datomic?#2016-12-0417:43nonrecursivehey yall, I’m trying to use a database function and running into an issue#2016-12-0417:43nonrecursivethis function is
{:db/ident :watch/upsert
:db/id #db/id[:db.part/user]
:db/fn #db/fn
{:lang "clojure"
:params [db m]
:code (if-let [id (ffirst (d/q '[:find ?e
:in $ ?u ?w
:where
[?e :watch/user ?u]
[?e :watch/watched ?w]]
db (:watch/user m) (:watch/watched m)))]
[(assoc m :db/id id)]
[(assoc m :db/id (d/tempid :db.part/user))])}}
#2016-12-0417:44nonrecursiveI run (d/transact conn [[:watch/upsert {:watch/user 17592186045424 :watch/watched 17592186045442 :watch/scope :new-topic :watch/level watch}]])#2016-12-0417:44nonrecursiveand get an exception, "Cannot write #2016-12-0417:44nonrecursivehas anyone else run into this? any advice?#2016-12-0417:45nonrecursivei’m using [com.datomic/datomic-free “0.9.5344”]#2016-12-0417:45nonrecursiveoh shoot I found it#2016-12-0417:46nonrecursiveI was using an unbound symbol in the transact data#2016-12-0417:46nonrecursived’oh!#2016-12-0417:46nonrecursivethank you all for bearing witness to my struggles#2016-12-0419:06robert-stuttaford@nonrecursive quack 🙂#2016-12-0422:19noogaI’m getting :db.error/tempid-not-an-entity tempid used only as value in transaction error and I can’t spot anything off in my transaction#2016-12-0422:20noogais there a way to see the transaction transformed into datoms before transact throws this?#2016-12-0501:21jaret@nooga are you using 5530?#2016-12-0501:21jaret0.9.5530?#2016-12-0501:26jaret@nooga if you are on 5530 see this post https://groups.google.com/forum/#!topic/datomic/m6vSa6CjqjQ#2016-12-0508:42val_waeselynck@haywood for things like ~name, you can use query inputs#2016-12-0508:44val_waeselynckfor the dynamic pull expressions, I think you do have to make it programmatically, but you can minimize the variable area by using a map data structure#2016-12-0508:47val_waeselynckNote that you can also use datomic.api/pull-many#2016-12-0508:49val_waeselynckThat's actually the best approach for your problem IMO.#2016-12-0509:31robert-stuttafordyou’ll need to quote that datalog vector 🙂#2016-12-0511:09val_waeselynck@robert-stuttaford corrected, thanks#2016-12-0512:49nooga@jaret I’m running datomic-free-0.9.5407#2016-12-0513:59marshall@nooga Can you post your transaction data ?#2016-12-0514:02nooga@marshall sure, give me few minutes#2016-12-0514:48tengIf we have an entity with, let’s say five possible string values (about 5-10 letters), 
like an enum, should we model the type as string or ref (having another entity with the accepted values)? We use the pull syntax a lot, so having an extra entity doesn’t add much complexity when reading, but to have a string is simpler and more readable, but the other approach feels a little bit more solid. Pros and cons as always.#2016-12-0514:49haywoodthank you @val_waeselynck#2016-12-0515:21ovan@teng Sounds like a use case for Enums like described in the schema documentation: http://docs.datomic.com/schema.html. So basically you create entities with db/ident. This makes Pull API return them as values (and the Peer caches them).#2016-12-0515:25teng@ovan I played around with that, but it didn’t help us much actually (the way you use enums in Datomic). Firstly it was storing an id (referring to an Enum entity) any way, so it actually did things harder than to store them as a “real” entity, so I changed back from enums actually. It is nice to use enums in scripts and so (improved readability) but when sending them to client systems (and back again), then it made things harder in my opinion.#2016-12-0515:38robert-stuttaford@teng you get benefits when you want to e.g. find all things with enum value 1#2016-12-0515:38robert-stuttafordbecause it’s an entity you can walk references from#2016-12-0516:07teng@robert-stuttaford ok, I'll give it a second try 🙂#2016-12-0516:58gdeer81@limix the slack channels are for specific questions that can be answered in a sentence or two, I'll send you a DM about getting started on your journey 🙂#2016-12-0517:05timgilbertI agree with @teng, we found :db/ident values to be more of a hassle than a help because they just resolve to {:db/id 23} in pull requests. We went back to just using :db.type/keyword for our user roles and related small groups of enums#2016-12-0517:30pesterhazyI have a query that retrieves all users, pretty much this:
(d/q '[:find [(pull ?u [:db/id
                        :crm.user/email
                        :crm.user/guid
                        :fifteen.more/attributes]) ...]
       :in $
       :where
       [?u :crm.user/email]]
     db)
The query is simple and returns only 70,000 results, but it takes >500 seconds to complete.#2016-12-0517:31pesterhazyThe time is in a cold peer but excludes DynamoDB connection time.#2016-12-0517:32pesterhazyThe query is running on a peer inside an aws data center, so latency to dynamodb is low. It's the kind of query that mysql would return in a second or so.#2016-12-0517:33pesterhazyThis is an ETL job by the way. Any ways to improve the performance?#2016-12-0518:47Matt ButlerI have a query where I only want to return a single solution per entity, in short all solutions for a single entity get unified into a single return value. I have been using the collection find spec to do this [entity …] which seems to work great, is this the correct way to do what i want?
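For an ETL job like pesterhazy's above, one common alternative to a big `:find (pull ...)` query is to walk the AEVT index directly and batch the pulls with `pull-many`. A sketch (attribute names taken from the question; the batch size of 1000 is an arbitrary assumption):

```clojure
;; Walk the AEVT index for the attribute to get all user entity ids.
(def user-eids
  (map :e (d/datoms db :aevt :crm.user/email)))

;; Batch the pulls rather than pulling inside the query.
(def users
  (->> user-eids
       (partition-all 1000)
       (mapcat #(d/pull-many db [:db/id :crm.user/email :crm.user/guid] %))))
```

`d/datoms` is lazy, so this also lets the job stream results instead of materializing all 70,000 pull maps at once.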
However this doesn’t work anymore when I want to return multiple values/variables from a query, is there a way to do this?
[:find [(pull ?entity) ...]
 :where (QUERY LOGIC)]

[:find (pull ?entity) ?someothervar
 :where (QUERY LOGIC)]
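As Matt notes later, collapsing the tuples outside of Datomic with `group-by` is one straightforward answer. A minimal sketch of that post-processing on `[?user ?someothervar]` tuples (the tuple values here are invented):

```clojure
;; Collapse [?user ?val] tuples to one entry per user:
(->> [[:u1 :a] [:u1 :b] [:u2 :a]]
     (group-by first)
     (map (fn [[user tuples]] [user (mapv second tuples)])))
;; => ([:u1 [:a :b]] [:u2 [:a]])
```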
#2016-12-0519:25pesterhazy@mbutler, what would you want the query to return?#2016-12-0519:31Matt Butler@pesterhazy a unique list of entities (in this case users) that satisfy the datalog (but also in this case the a variable from that datalog) [user var] would work#2016-12-0519:33Matt Butlerat the moment if a user satisfies the datalog in multiple cases (due to a cardinality many relationship) then 2 results will get returned for that user ?user1 ?someothervar1 and ?user1 ?someothervar2#2016-12-0519:33Matt ButlerIs the answer just use group-by or something outside of datomic to unify the results under a single user?#2016-12-0611:30karol.adamiec@timgilbert can you elaborate on the enum/keyword? can i easily query the keywords ? why would it be better than a string in a language that has no keywords? Seems to me that docs are directing people to use enums, but actually it is just painful ...#2016-12-0614:03karol.adamiecswapped to keywords, seems a lot better but still interested in opinions!#2016-12-0617:06marshallDatomic 0.9.5544 is now available https://groups.google.com/d/topic/datomic/N2hMneocI-0/discussion#2016-12-0617:25robert-stuttafordthis fixes that issue with tempids, presume, @marshall ?#2016-12-0617:28marshallYep#2016-12-0617:29robert-stuttafordnice!#2016-12-0702:39csmhow do you connect the peer server to the dev storage? datomic: fails because ‘foo’ isn’t in the catalog, but leaving off the db name causes it to not match predicate “database-uri?"#2016-12-0702:54marshall@csm you'll need to create the database with a peer first. 
You can use the bin/repl script included with the Datomic distribution to start a repl with datomic in the classpath#2016-12-0702:55marshallThen (require '[datomic.api :as d])#2016-12-0702:56marshall(d/create-database "datomic:)#2016-12-0702:57marshall(Assuming you're running a dev transactor locally on 4334)#2016-12-0702:58marshallAfter that you can serve the foo database using peerserver#2016-12-0708:18ovanI'm a little surprised about this. For some reason I just assumed idents would be returned as keywords in pull requests. I can see how it's a hassle to convert it every time you want to push those on wire as keywords/strings. I wonder what the use case here. Can't you just use the ident anywhere you could use entity id?#2016-12-0708:19robert-stuttafordit comes back as a map in pull because you may want to put other metadata on those entities#2016-12-0708:21ovanOk, didn't think about that.#2016-12-0708:22robert-stuttafordit is a bit fussy to use full entities for enums, but it’s really not that much extra code to deal with, and the benefits are worth it#2016-12-0708:23robert-stuttaford210,437 entities in our database are active#2016-12-0708:24ovanIs any of that information coming from metadata on enum entity?#2016-12-0708:24robert-stuttafordall of it is#2016-12-0708:24robert-stuttafordthe only context this code has is the query string#2016-12-0708:25robert-stuttafordit queries to find the dates and datom count, and the text at the bottom will display the :db/doc value once i have one#2016-12-0708:27robert-stuttafordhaving it as an entity allows you to e.g. mark it deprecated later, with a deprecated boolean schema attr you make for yourself#2016-12-0708:28robert-stuttafordit boils down to this, for me: enum values have semantic meaning to which we can attach behaviour and metadata#2016-12-0708:28robert-stuttafordmaking it a keyword forces you to keep that meaning / metadata outside of your database#2016-12-0708:30ovanMakes sense. Thanks for clearing that up. 
This idea about metadata on enums would perhaps be useful to have in datomic docs too.#2016-12-0708:30robert-stuttafordagreed 🙂#2016-12-0709:50karol.adamiec@robert-stuttaford thx for enum stuff. one question. How do i write query that returns something constant to clients? i have db on each environment, ids are different. Need to translate them to constants so clients dont have to. Is there any way to do that other than extra query and merge?#2016-12-0709:50karol.adamiecand on a related note when should one use keyword then?#2016-12-0709:56robert-stuttafordfor the case where i want to flatten idents into parent maps:#2016-12-0709:56robert-stuttafordclojure
(defn flatten-idents [m]
  (walk/postwalk
   (fn [item]
     (if (and (map? item) (:db/ident item) (check-its-not-an-attr-here))
       (:db/ident item)
       item))
m))#2016-12-0709:56robert-stuttafordother than that, i haven’t really had to deal with it differently#2016-12-0709:58robert-stuttafordi haven’t ever used :db.type/keyword; in all cases i prefered an ident#2016-12-0709:58robert-stuttafordi realise this is a personal preference thing 🙂#2016-12-0711:20jarppeThe d/entity works differently when using "real" db vs. in-memory db? On real db it returns nil if entity does not exist. with memory db it always returns a datomic.query.EntityMap. Is this a bug or feature?#2016-12-0711:22jarppeoh, sorry, my mistake, it always returns EntityMap#2016-12-0711:23jarppeBut what is an idiomatic way to test that entity exists?#2016-12-0711:30robert-stuttaford(seq (d/datoms db :eavt id-or-lookup-ref)) is my goto, @jarppe#2016-12-0711:39karol.adamiec@robert-stuttaford nice bit of clojure. damn i do need to switch away from the REST endpoint 🤓#2016-12-0711:40robert-stuttafordlol you’re on your own buddy#2016-12-0711:47karol.adamiecyeah, good news is that having datomic as db i can spread clojure inside out 🙂#2016-12-0711:49mgrbyteMy "goto" is always: (d/entity db [:my/attr id]) or (d/entity db id)#2016-12-0712:04jarppe@robert-stuttaford thanks, I'll go with that. Although it would be great to be able to check that entity exists and get the entity in one step.#2016-12-0712:04jarppe@mgrbyte d/entity would be great, but afaik you cant use that to test that entity esists#2016-12-0712:05jarppe(d/entity (d/db conn) 42) returns {:db/id 42}#2016-12-0712:09rauhWhy not use entid?#2016-12-0712:10mgrbyte@jarppe Yeh, it seems that if you use a lookup ref you can (d/entity will return nil if it can't find) - but using a plain long id just gives you a lazy entity#2016-12-0712:11mgrbyte@rauh entid returns the id associated with a keyword, or if an id was passed instead of an ident keyword, it just returns the id#2016-12-0712:22jarppe@mgrbyte that explains it, I was so sure that d/entity returned nil, but I was using lookup ref at that time. 
Now I have actual id and must use the d/datoms as @robert-stuttaford suggested.#2016-12-0713:37mgrbyte@jarppe yep. (-> (d/datoms db :eavt id) first nil?) would be the way to go in the id case. I'd use this instead of seq, since you wouldn't want to realise all the datoms for a given id if they did exist (i.e you're just asking if an entity exists for a given id as opposed to all datoms related to that that entity id.#2016-12-0714:04leovhello. quick question: can I combine two entities under one pull inside d/q query?#2016-12-0714:04leov(can't say I fully understand pull syntax)#2016-12-0714:09leovnevermind. I think I just did it, however output is ungrouped#2016-12-0714:09danielstockton@leov in a query the pull expression takes an ?entity-var like ?e which can be bound to several entities#2016-12-0714:09leovso parent entity goes with one child entity, then goes next parent entity#2016-12-0714:10leovummm. several entities in the sense that they are of the same umm type?#2016-12-0714:11danielstocktonseveral entities in the sense that ?e will be bound to different values in your resultset#2016-12-0714:11leov(d/q '[:find (pull ?app [:heroku.app.name :heroku.app.id])
(pull ?config-key [:heroku.app.config.key :heroku.app.config.value :heroku.app.config.app-id])
:where [?app :heroku.app.id ?app-id]
[?config-key :heroku.app.config.app-id ?app-id]] @db/main)
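leov's schema joins app and config by a shared value (`:heroku.app.id`), so a pull map specification can't nest the children directly. If the config entries instead referenced the app through a ref attribute (here a hypothetical `:heroku.app.config/app`), a single pull with a reverse reference could return the nested shape in one query. A sketch under that assumption:

```clojure
;; Assumes :heroku.app.config/app is a :db.type/ref pointing at the app entity.
(d/q '[:find [(pull ?app [:heroku.app.name
                          {:heroku.app.config/_app [:heroku.app.config.key
                                                    :heroku.app.config.value]}]) ...]
       :where [?app :heroku.app.id]]
     @db/main)
```

The `_app` in the pull pattern is Datomic's reverse-reference syntax: it gathers every config entity pointing at `?app` into a vector under that key.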
#2016-12-0714:13leovwell. I have parent-children relation and trying to get {...parent entity fields... :children [{...} {..}]} if possible in one query#2016-12-0714:20danielstocktonyou might want to look at 'map specifications' from http://docs.datomic.com/pull.html#2016-12-0714:26leovthnx#2016-12-0714:32mishaWhy does everyone pull inside query lately?#2016-12-0714:45danielstocktonI suppose its a toss up between re-usability and terseness?#2016-12-0714:46danielstocktonHaven't used datomic enough to know much about the practical benefits of either approach.#2016-12-0714:53potetmWell one obvious answer is you want a named map instead of a set of tuples 🙂#2016-12-0714:54danielstocktonRight, ofc#2016-12-0714:54potetmI usually do it if I want something like limit, which afaik you can't get outside of pull.#2016-12-0714:59danielstocktonYou can't use limit expressions with both pull and in pull expressions within a query?#2016-12-0715:01danielstocktonThe other way would be using the datoms API i suppose, since this gives you a lazy sequence to iterate over?#2016-12-0715:09potetmYeah I meant you have to use the pull api in some capacity.#2016-12-0715:09potetmYeah, datoms would work if that's what you're into.#2016-12-0715:17leov@misha, well, I understood it that pull has magic powers#2016-12-0717:11tjtoltonHaving some issues getting datomic running per the tutorial...
I downloaded the zip,
brew installed maven,
ran bin/maven-install
however, still hitting this:#2016-12-0717:12tjtoltonSuggestions?#2016-12-0717:12tjtolton😮#2016-12-0717:14tjtolton@mgrbyte Moved per your suggestion#2016-12-0717:14marshallthe bin/maven-install script will install the peer library in your local maven#2016-12-0717:15tjtoltondoesn't lein automatically look in my local maven?#2016-12-0717:15marshallif you want to use the client library you should follow this:
http://docs.datomic.com/project-setup.html#2016-12-0717:15marshalland add [com.datomic/clj-client “0.8.606”] to your dependencies#2016-12-0717:16tjtoltonhah! oh man#2016-12-0717:16tjtoltonThanks. Boy did I overlook the obvious problem#2016-12-0717:16marshallno problem#2016-12-0717:16tjtoltonthere it goes. successful require#2016-12-0720:26timgilbertFollowing up on the entity thing a bit belatedly, I can definitely see the utility of having a distinct entity for enums that you can attach stuff to. For our application it didn't seem to outweigh the hassle of needing to do something like @robert-stuttaford's walk/postwalk code every time we needed to return some code to the client#2016-12-0720:29timgilbertWe generally have clients sending us pull syntax, so it can be a little hard to determine when we would need to execute that kind of post-processing, and doing it every time seemed possibly wasteful (plus wind up driving our client and server code farther apart. potentially)#2016-12-0720:30timgilbertIt seemed easier just to use keywords, which can then be just pulled in like any other attribute value in the pull syntax. If (d/pull) and friends auto-resolved idents in the output we would probably have gone with them.#2016-12-0811:00Matt ButlerHow do you pass a inputn created via collection binding to a rule
(myRule ?artist ?artist-name
  [?artist :artist/name ?artist-name])

[:find ?release-name
 :in $ [?artist-name ...]
 :where
 (myRule ?artist ?artist-name)
 [?release :release/artists ?artist]
 [?release :release/name ?release-name]]
#2016-12-0811:01Matt ButlerIt seems that it loses its orness when passed as a plain old variable
same question applies when passing it into a not-join (the query I wish to use it on is a not-join wrapped in a rule), so I'm not sure how to construct the args to the not-join
(myRule ?artist ?artist-name
  (not-join [?artist ?artist-name]
    [?artist :artist/name ?artist-name]))
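One detail the snippets above omit: rules are themselves a query input, conventionally bound to `%` in the `:in` clause. With the rule set passed in, a collection binding simply runs the rule once per bound value. A sketch using the mbrainz-style attributes from the question (the artist names are invented):

```clojure
(def rules
  '[[(myRule ?artist ?artist-name)
     [?artist :artist/name ?artist-name]]])

(d/q '[:find [?release-name ...]
       :in $ % [?artist-name ...]
       :where
       (myRule ?artist ?artist-name)
       [?release :release/artists ?artist]
       [?release :release/name ?release-name]]
     db rules ["Led Zeppelin" "Deep Purple"])
```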
#2016-12-0816:27erichmondCan anyone recommend a good tool for visualizing a schema?#2016-12-0816:27erichmondIt doesn't need to be automated or datomic specific#2016-12-0816:27erichmondThe last time I used one, I was using ERWin and writing EJBs#2016-12-0818:07tjtoltonin the return value of a pull from datomic, what kind of data is indicated by the #:keyword symbols?
e.g.#2016-12-0818:07tjtoltonwhat is #:inv#2016-12-0818:08tjtoltonand #:db#2016-12-0818:08tjtoltonis that a normal clojure datatype?#2016-12-0818:08tjtoltoni dont think ive seen that before#2016-12-0818:08pesterhazyneither have I#2016-12-0818:09rauhThat's the new namespaced keywords in 1.9#2016-12-0818:09tjtoltonwas two colons not good enough?#2016-12-0818:09rauhhttp://dev.clojure.org/jira/browse/CLJ-1910#2016-12-0818:10tjtoltonhuh, thanks @rauh#2016-12-0818:12tjtoltonso that resolves to
{:inv/color {:db/ident :db/blue},
:inv/size {:db/ident :db/large},
:inv/type {:db/ident :db/dress}}
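The `#:ns{...}` form is just reader sugar introduced in Clojure 1.9: it applies one namespace to every key of a map literal, and reads to an ordinary map. For example:

```clojure
;; These two literals read as the same map:
(= #:inv{:color :blue, :size :large}
   {:inv/color :blue, :inv/size :large})
;; => true
```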
#2016-12-0818:13rauhYes, see the last comment in the ticket on how to disable it#2016-12-0818:13tjtoltonerr#2016-12-0818:13tjtoltonno, what I just said is wrong, only the keys are expanded#2016-12-0818:14tjtolton{:inv/color {:db/ident :blue},
:inv/size {:db/ident :large},
:inv/type {:db/ident :dress}}
#2016-12-0818:14tjtoltonthat's what it expands to#2016-12-0818:19rafaelzlisboashould :db.type/instant fields be indexed if i’m going to query for a time period?
e.g.
[:find ?action
:where [?action :action/started-at ?started-at] [(> ?started-at ?last-week)]]
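For range scans like this, setting `:db/index true` on the attribute makes it available in the AVET index, which `d/index-range` can walk directly instead of filtering every datom through a predicate. A sketch (attribute name from the question; note that clojure.core's numeric `>` doesn't apply to `java.util.Date`, so the predicate form uses `.after`):

```clojure
;; Schema sketch: AVET-index the instant attribute.
{:db/ident       :action/started-at
 :db/valueType   :db.type/instant
 :db/cardinality :db.cardinality/one
 :db/index       true}

;; Direct range scan over the AVET index, from last-week onward:
(d/index-range db :action/started-at last-week nil)

;; Or inside a query, with the boundary passed as an input:
(d/q '[:find [?action ...]
       :in $ ?last-week
       :where
       [?action :action/started-at ?started-at]
       [(.after ^java.util.Date ?started-at ?last-week)]]
     db last-week)
```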
#2016-12-0818:59camdez@rauh @tjtolton: thanks for explaining that. Even if the syntax hurts my brain a little (hoping that will change with time).#2016-12-0819:14timgilbert@tjtolton: I think it's actually this:
{:inv/color {:db/ident :db/blue},
:inv/size {:db/ident :db/large},
:inv/type {:db/ident :db/dress}}
...the #:db{ ... } syntax means the namespace will be applied to every key in the map#2016-12-0819:15timgilbertOh oops, never mind, you were right the second time, just the keys, not the values#2016-12-0823:01christianromneyJust bumped the Datomic Pro Starter Docker container (https://hub.docker.com/r/pointslope/datomic-pro-starter/) to version 0.9.5544#2016-12-0900:19danielcompton@christianromney does the docker image comply with the license? I thought redistribution of binaries wasn't allowed#2016-12-0900:20christianromney@danielcompton no redistribution. We use an ONBUILD approach to automate some download steps.#2016-12-0900:20christianromneybest example is here: https://github.com/pointslope/docker-datomic-example#2016-12-0900:21christianromneyyou’ll need to register for a DPS license and then add your key. we just automate curl, etc#2016-12-0900:21danielcomptongotcha#2016-12-0900:48shaunxcodeis it safe to assume the datomic peer server/client are not using websockets?#2016-12-0900:51shaunxcodehmm wait nm I just saw http://docs.datomic.com/architecture.html which does indicate "http + transit" (which means it could be websocket? or rest couple with SSE?)#2016-12-0912:16karol.adamiechow does one represent a 3d vector [x y z] in Datomic? is db.type/float cardinality many good enough? i need positions to be fixed, so it works. Going all way out and defining separate entities for x,y,z seems like a lot… ?#2016-12-0912:20karol.adamiecboils down to: does cardinality many guarantee the order of items?#2016-12-0912:20karol.adamieci can live with not being able to limit it to 3#2016-12-0912:35rauh@karol.adamiec If you don't query with it: Why not just use (float-array 3 [1. 2. 3.])? And store as bytes#2016-12-0912:42karol.adamiec@rauh sounds good. What would be literal syntax for that? i mean in edn file with just data for db seeding?#2016-12-0912:43karol.adamiecand to wrap up prev q, cardinality has set semantics.#2016-12-0912:57karol.adamiec@rauh
(d/transact conn [{:db/id #db/id[:db.part/user] :part.spec/position (float-array 3 [1. 2. 3.])}]) throws "… is not a valid :bytes for attribute :part.spec/position"#2016-12-0912:58karol.adamieci need to convert somehow#2016-12-0913:11karol.adamiecaww, i think bytearray will not work over REST#2016-12-0913:12karol.adamieci get back [:db/id 17592186122653] [:part.spec/position #object["[B" 0x166ad03 "[#2016-12-0915:29leovhihi. quick question - can I do range queries on attributes in datomic?#2016-12-0915:30leovsay I have [1 :process/name "ls"] [1 :process.env-var/LOCALE ".."] [1 :process.env-var/HOME "/home/.."]#2016-12-0915:30leovcan I query all the env-var keys?#2016-12-0915:30leovor I should not model env-vars of a process this way?#2016-12-0915:44tjtoltonSo, is there a good tutorial anywhere on how to sell datomic to your company? key questions to answer being
1) why do we need to change?
2) what happens to all of our existing data?#2016-12-0915:54danielstocktonIf you need to track history, that helps 1). There isn't really a comparable alternative and if you've tried to do it with another solution, you probably understand the pain.#2016-12-0915:56tjtoltoneveryone gets along fine when they don't have a choice. I need to convince people that they have pain that they dont even realize they have.#2016-12-0915:57danielstocktondon't you have a lot of nasty code and extra tables to track history that you can point to and say 'let's get rid of this?'#2016-12-0915:58danielstocktonmaybe for 2), you can load the data into datomic and demo something working#2016-12-0915:58tjtoltonhow does one load relational data into datomic?#2016-12-0915:59danielstocktonThere was a talk on this subject at the recent conj but it's quite high level: https://www.youtube.com/watch?v=oOON--g1PyU#2016-12-0915:59nrakoUsing the onyx platform example might help - https://github.com/onyx-platform/onyx-examples/tree/0.9.x/datomic-mysql-transfer#2016-12-0915:59tjtoltonawesome, ill definitely take a look at that#2016-12-0916:00tjtolton@nrako was that in response to me?#2016-12-0916:00nrakoYes.#2016-12-0916:00tjtoltonoh, a mysql transfer, neat!#2016-12-0916:01tjtoltonthe hell is onyx?#2016-12-0916:01tjtoltonive seen that name popping up lately#2016-12-0916:03nrakoConceptually had the same question, and that example really simplified the data import/export process for me#2016-12-0916:04nrakoOnxy is primarily for stream processing#2016-12-0916:06tjtolton@nrako stream processing? like a kafka pipeline?#2016-12-0916:10nrakoYou can read in data from Kafka, process, and send it along to where it needs to go (https://github.com/onyx-platform/onyx-kafka). Not sure if this answers your question...#2016-12-0916:10nrakoSo it does not replace Kafka#2016-12-0916:11tjtoltonright, didn't mean replace. 
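Back on karol.adamiec's float-array error above: a `:db.type/bytes` attribute needs an actual `byte[]`, so the floats have to be packed first, e.g. through `java.nio.ByteBuffer`. A sketch (`:part.spec/position` as in the question):

```clojure
(import '(java.nio ByteBuffer))

;; Pack a sequence of floats into a byte array for a :db.type/bytes attribute.
(defn floats->bytes ^bytes [fs]
  (let [bb (ByteBuffer/allocate (* 4 (count fs)))]
    (doseq [f fs] (.putFloat bb (float f)))
    (.array bb)))

;; Unpack on the way out.
(defn bytes->floats [^bytes bs]
  (let [bb (ByteBuffer/wrap bs)]
    (vec (repeatedly (/ (alength bs) 4) #(.getFloat bb)))))

(bytes->floats (floats->bytes [1.0 2.0 3.0]))
;; => [1.0 2.0 3.0]
```

As noted in the thread, this round-trips fine through the peer library but the raw `byte[]` won't serialize usefully over the REST API.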
I more meant that it's well suited for a kafka data stream system#2016-12-0916:12nrakoI don't have much experience with it, but everything is data instead of dsl#2016-12-0917:39gdeer81introduced the intern to Datomic this morning and he kept saying he was doing some "Datalogging" when he was writing queries and that a "Datalogger" is someone who writes datalog queries. kids say the darndest things#2016-12-1212:23conanis there a way to restore a db on windows? I'm having no luck whatsoever, it can't recognise the roots. here are some details: https://groups.google.com/forum/#!topic/datomic/YPBFdRl_fI0#2016-12-1213:47jamesWe’d like to store audit trail data each time someone updates the value of an attribute on a particular entity (e.g. store the user and datetime for the update).
http://docs.datomic.com/best-practices.html#add-facts-about-transaction-entity looks promising for this use-case, but I wanted to double check if that's the recommended best-practice for audit trails. I also wondered if anyone has a more elaborate example of adding and retrieving facts from transactions?
We want to render the audit data in the UI when a user hovers over the attribute’s text-field. Is there a way to merge the entity data, and the transaction audit trail data into one query? Or would we have to do it as multiple queries, and merge the results ourselves?#2016-12-1213:49marshall@james You might want to watch the Reified Transactions video here http://www.datomic.com/videos.html#2016-12-1213:49marshallIt includes discussion of using transaction metadata for audit trail#2016-12-1213:49jamesI’ll take a look, thank you!#2016-12-1214:27donaldballIn datomic’s datalog, is it possible to query for entities with an attribute that is a set of references to other entities, where all such entities satisfy some criterion? E.g.... um… all parents whose children all have red hair?#2016-12-1214:34marshall@donaldball Sure. Oh, @val_waeselynck beat me to it 🙂#2016-12-1214:37val_waeselynck@donaldball note the logical pattern here: conjunction = negation of the disjunction of the negations#2016-12-1214:38donaldballI think I follow, nice, thanks.#2016-12-1215:15misha@james remember, you don't actually need to get every bit of info from a single query, as with sql dbs#2016-12-1219:12weiis there a good way to import data from datomic to redshift? the problem I’m solving is allowing people with sql experience to query data#2016-12-1219:12weiI talked to Rich about it a while back and he suggested Redshift for exposing data to people who only know SQL (vs trying to write a translator from SQL->datalog)#2016-12-1219:14gdeer81@wei I was thinking about this issue as well. I didn't get very far writing a SQL to datalog DSL. Then I saw Stu's ETL talk and was wondering if you could just run that in reverse#2016-12-1219:17weipossibly. need to watch that talk#2016-12-1219:32gdeer81I don't have any experience with Redshift though, so I was just speaking generally. it would be an interesting exercise#2016-12-1219:37jarppeAny ideas how to write datomic.Entity with transit? 
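The double negation @val_waeselynck describes for donaldball's question ("conjunction = negation of the disjunction of the negations") can be written with `not-join`. A sketch for the red-hair example, with invented attribute names:

```clojure
;; Parents all of whose children have red hair: keep ?parent unless
;; there exists some child whose hair color is NOT red.
(d/q '[:find [?parent ...]
       :where
       [?parent :person/child _]
       (not-join [?parent]
         [?parent :person/child ?c]
         (not [?c :person/hair-color :hair-color/red]))]
     db)
```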
I would like to send entity attributes to a client with transit, but now I have to manually convert it to a map before giving it to transit writer.#2016-12-1219:38jarppeI know how to make custome transit/write-handler instances, but I have no idea how to write something so that the receiver can read it as a map without custome read handler.#2016-12-1219:52jarppeI'm willing to admit that this might be a terrible idea...#2016-12-1221:40caspercWhen we are importing data, we mark the transaction entity with an import id to allow us to see which import a given datum came from. But what is the correct way to find entities with a given transaction associated with them? Everything I have come up with so far takes ages on a large base.#2016-12-1221:53caspercTagging @marshall for when he gets back.#2016-12-1311:45tengWe are using pull queries to read nested data structures from the database. Now we plan to add extra information to the :db/txInstant entity about who made the change. Is it possible to use the pull syntax to read all the attributes as before but also “join” in :db/txInstsant with the extra information for every attribute in the pull query?#2016-12-1311:46karol.adamiecwrapping a clojure peer library in a docker causes it to throw com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain . Works if i fire up standalone uberjar locally, but wrapped in docker fails. Any ideas how to make it talk?#2016-12-1312:19karol.adamieccome on guys, surely people do put clojure peer library into docker ? no?#2016-12-1312:32stijnsure, but not on AWS#2016-12-1312:33karol.adamieci try to run it locally for now#2016-12-1312:33karol.adamiecwith dynamo db transactor on aws#2016-12-1312:33karol.adamieci am clearly missing sth stupid#2016-12-1312:36karol.adamieci do not understand what credentials it talks about even. i am using iam roles anyway….#2016-12-1312:45robert-stuttaford@karol.adamiec do your docker containers receive iam creds? 
can you ssh into one and use the aws cli to connect to stuff?#2016-12-1312:45robert-stuttaford@teng d/pull can’t reach T values for EAVs. think about it. where would it go?#2016-12-1312:46robert-stuttafordonly if you put a tx id in as your eid will you get db/txInstant. same for d/entity.#2016-12-1312:48karol.adamiec@robert-stuttaford there is no aws cli inside of the container...#2016-12-1312:48robert-stuttafordmy point: debug the issue like you would any other bug 🙂#2016-12-1312:48robert-stuttafordpoke it with a stick until it twitches#2016-12-1312:49karol.adamieci am headbanging that for some time now#2016-12-1312:49karol.adamiecare there any magic ports that need to be open for the peer library?#2016-12-1312:51robert-stuttafordwhat storage are you using?#2016-12-1312:51karol.adamiecdynamo#2016-12-1312:52robert-stuttafordfirst hit on http://docs.datomic.com/search.html?q=ports#2016-12-1312:52robert-stuttafordpretty sure there’s a reference for this#2016-12-1312:53robert-stuttafordseems my terraform example has only 4334 https://github.com/robert-stuttaford/terraform-example/blob/master/modules/vpc/sg_internal.tf#L68#2016-12-1312:55karol.adamiecyeah i tried mapping 4334, nothing changes#2016-12-1312:56karol.adamiecwill set up the AWS_ keys hardcoded for now, maybe that will help#2016-12-1313:01robert-stuttaforddoes the container have ddb access via the iam instance profile?#2016-12-1313:01karol.adamiecit is local on my dev machine#2016-12-1313:01robert-stuttafordwhat does the full stacktrace show @karol.adamiec — what part of datomic is trying to do AWS?#2016-12-1313:01karol.adamiecso dont know ;/#2016-12-1313:01robert-stuttafordoh. then you need to set explicit aws keys always#2016-12-1313:02karol.adamiecawww#2016-12-1313:02robert-stuttafordyou said you’re using iam roles. i took that to mean you’re working on an instance. 
what did you mean by iam roles?#2016-12-1313:02robert-stuttaforddid you mean aws cli profiles?#2016-12-1313:02karol.adamiecwell, all my aws setup is on roles#2016-12-1313:02karol.adamieci do have aws cli installed locally#2016-12-1313:03robert-stuttafordafaik the peer library doesn’t use aws profiles at all#2016-12-1313:03karol.adamiecthat is why maybe it works when i run the jar#2016-12-1313:03robert-stuttafordi added it as a feature request via the new system#2016-12-1313:03karol.adamieci seee#2016-12-1313:03robert-stuttafordaws key + secret need to be in your env or otherwise provided as java props#2016-12-1313:03karol.adamieccool#2016-12-1313:03robert-stuttafordhttp://docs.datomic.com/aws-access-control.html#2016-12-1313:03karol.adamiecwell, the only mystery is how it works then i run java -jar#2016-12-1313:04karol.adamiecmust pick up stuff from aws-cli#2016-12-1313:04robert-stuttaforddo you have aws key/secret in your env? env | grep AWS#2016-12-1313:04karol.adamiecnope#2016-12-1313:05robert-stuttafordare you connecting to ddb in aws from your local machine?#2016-12-1313:06karol.adamiecyes#2016-12-1313:06karol.adamiecrepl or my little project i made that integrates peer lib#2016-12-1313:07robert-stuttafordit must be doing something fancy then. datomic doesn’t support profiles afaik#2016-12-1313:07robert-stuttafordanyway. set keys in your env and it’ll work#2016-12-1313:07karol.adamiecyeah, thanx Robert 🙂#2016-12-1313:07karol.adamieci got sidetracked by not seeing keys in my env...#2016-12-1313:08robert-stuttafordprogramming 🙂#2016-12-1313:08karol.adamiecbtw: are aws access keys needed even when container runs on ecs?#2016-12-1313:08karol.adamiecor only for outsiders?#2016-12-1313:09robert-stuttafordno idea. never run docker once in my life#2016-12-1313:09karol.adamiechah#2016-12-1313:12karol.adamiecworks. jeeesuss.. 
🙂#2016-12-1313:13robert-stuttaford-fireworks-#2016-12-1313:14karol.adamiecon the good news front, no more javascript datomic silliness 😄#2016-12-1315:17marshall@casperc Can you expound on “ages” ?
My general first approach would be to query the log to get the set of EIDs for a specific transaction (based on the import id)#2016-12-1319:17uwonot really a datomic question but, is there a way to get pprint to print tempids as tagged literals instead of like {:part :db.part/user :idx -123}#2016-12-1320:59matystlHi Is there posibility to update some parts of nested component if i know only its parrent id?
;; Schema (note: the component flag is :db/isComponent)
{:db/ident :someAttribute
 :db/isComponent true}
;; Data
{:db/id 11 :someAttribute 12}
{:db/id 12
 :nestedAttributeA "aa"
 :nestedAttributeB "bb"}
;; Transaction that will replace the whole subcomponent
(d/transact conn [{:db/id 11 :someAttribute {:nestedAttributeA "cc"}}])
Result of this is i lost information about :nestedAttributeB and i would like to just change value of :nestedAttributeA#2016-12-1323:13weidoes datomic support joint uniqueness? i.e. constrain that user/prop1 and user/prop2 together are unique#2016-12-1323:38val_waeselynck@wei see this mailing list discussion: https://groups.google.com/forum/#!searchin/datomic/key%7Csort:relevance/datomic/4cjJxiH9Lkw/N7lyVfPMAAAJ#2016-12-1323:39val_waeselynckbasically, you can either enforce this via transaction functions, or encode the 2 attributes into a scalar "compound" :db/unique attribute#2016-12-1323:57tjtoltonOkay, guys, so help me strategize. We're building out a new service that keeps historical data, but I'm a new hire, and the effort to get datomic into our tech stack is going to take some time. HOWEVER, I think I have some pull in how we structure our data in Postgres#2016-12-1323:57tjtoltonwhat's the best way to structure data so that it can be easily forward compatible with datomic?#2016-12-1323:59val_waeselynck@tjtolton from what I heard translating a relational schema to a Datomic schema is usually pretty mechanical. I think the most important thing to do in the first place is avoid relying on overwriting stuff.#2016-12-1400:00val_waeselynckFancy stuff like JSON queries and indexing may be difficult too. Same thing for triggers.#2016-12-1400:00tjtoltonso, don't have 30 fields that are blank for 60 percent of the records?#2016-12-1400:01val_waeselynckthat won't be a problem with Datomic#2016-12-1400:01val_waeselynckit might be in Postgres, in which case you will have one more argument for migrating 😉#2016-12-1400:01tjtoltontruth.#2016-12-1400:02tjtoltoninteresting. So, the events coming in are json documents. I take it what you're saying is to avoid storing the events as JSON fields, but rather store them as proper records#2016-12-1400:03tjtoltonlike, dont have a table thats just an autoincrementing id and a json doc#2016-12-1400:03val_waeselynckit depends. 
Some portions of those documents may be just "raw data" that you never query against#2016-12-1400:03val_waeselynckin which case you would store them as strings or bytes in Datomic anyway#2016-12-1400:03tjtoltonah, I see.#2016-12-1400:04val_waeselynckOh yes, also, you may have issues with ids that are unique at the table-level#2016-12-1400:04val_waeselynckyou should make them unique globally if possible#2016-12-1400:05tjtoltoninteresting. Does datomic even have a concept of multiple tables?#2016-12-1400:05tjtoltonor any kind of data segregation#2016-12-1400:05tjtoltonother than schema#2016-12-1400:05val_waeselynckno, everything is an entity. I find it easier to think of the Datomic model as a graph#2016-12-1400:06val_waeselynckwell, not even that. Everything is a Datom. Entities are a mental representation that is superimposed on Datoms#2016-12-1400:07tjtoltonlol. If I could, I’d get you to write a whole blog post on how datomic is like a graph. The business side of my company got hold of the concept of graph based data models, and they sell it as part of the product pitch#2016-12-1400:08tjtoltonIf I could prepare a solid presentation on why datomic is a better fit for a graph model than relational, I’d be 80% of the way there.#2016-12-1400:08val_waeselynckBasically, datoms are edges, and entities and scalars are the vertices. Datalog is pattern matching in the graph.
Really, visually, it's literally what it does.#2016-12-1400:08tjtoltonhuh, interesting!#2016-12-1400:09val_waeselynck@tjtolton maybe my previous blog post can help 🙂 https://vvvvalvalval.github.io/posts/2016-07-24-datomic-web-app-a-practical-guide.html#business_logic#2016-12-1400:09tjtoltonhadnt thought of it like that, but youre right#2016-12-1400:09tjtoltonyessss, nice#2016-12-1400:09val_waeselynckThe fact that querying is not remote lets you program graph traversal from your application code, in your favorite programming language#2016-12-1400:10val_waeselynckin that sense, the Entity objects are like cursors in the graph that is the DB.#2016-12-1400:11val_waeselynckFinally, regarding business logic, don't forget to mention Datalog rules (noun, not verb) as a means of business abstraction.#2016-12-1400:11tjtoltonhahahaha#2016-12-1400:11val_waeselynckThe last thing I would add is programmability. Generating Datalog is way easier than generating SQL, trust me.#2016-12-1400:12tjtoltonits going to be a tough sell nomatter how I do it. Our senior data architects are SQL engineers through and through.#2016-12-1400:12val_waeselynckTell them they can use Postgres as Datomic's storage service 😉#2016-12-1400:13tjtoltonHahahaha, I'm sure they'll tell me "we have bigger priorities than adding new things to our tech stack"#2016-12-1400:13tjtoltonbut hey, thats software#2016-12-1400:14val_waeselynckTell them that of all their issues, having to learn a new tool is the only one that is guaranteed not to get worse#2016-12-1400:15tjtoltonI might be able to spin that, yeah. 
Introducing it as a tool is the way to go.#2016-12-1400:15tjtoltonI shouldn't say that I want to change the system, I just want to add a new tool.#2016-12-1400:15tjtoltonI like that.#2016-12-1400:16val_waeselynckit depends how much you rely on stuff like ORMs I'd say.#2016-12-1400:17tjtoltonSo, you're probably going to double take when I say this, but that's not a problem, because our code base is all written in clojure so there are no objects.#2016-12-1400:17tjtoltonYes, that's right, I am in the strange position of fighting an uphill battle trying to sell datomic to a clojure shop.#2016-12-1400:18val_waeselynckstrange indeed#2016-12-1400:19val_waeselynckPersonally, Datomic was maybe the main reason I moved our stack to Clojure.#2016-12-1400:19val_waeselynckClojure is pretty cool on its own, but Datomic gives it a lot of leverage.#2016-12-1400:20tjtoltonIts a very strange thing. The company has used clojure since 2010#2016-12-1400:20tjtoltonin production mind you#2016-12-1400:21tjtoltonso a lot of the system was architected before some of clojure's interesting ecosystem had evolved around it#2016-12-1400:23val_waeselynckWell, one thing that is sure is that for them to add Datomic to the stack, the case for Datomic needs to be compelling, otherwise the additional complexity may just not be worth it. 
You should ask yourself this question honestly.#2016-12-1400:24val_waeselynckMaybe you can start with Postgresql, and watch out for situations where Datomic makes things much easier (N+1 problem, sparse data, graph-type queries, ...)#2016-12-1400:25val_waeselynckthen importing the data into Datomic and demoing how it deals better with these issues should not be too hard, maybe a couple of days#2016-12-1400:26tjtoltonI'll definitely be examining this question for a while before I try to make the sell#2016-12-1400:26tjtoltonI'll be catching up on your blog 🙂#2016-12-1400:27val_waeselynck@tjtolton thanks 🙂 be critical!#2016-12-1400:27val_waeselynckNeed to go to sleep now, but don't hesitate to let me know how it goes.#2016-12-1400:28tjtoltonThanks for the help @val_waeselynck! Goodnight.#2016-12-1401:45weiworking on a schema here, is it beneficial to have each entity type have its own uuid property? e.g. :user/uuid and :team/uuid vs plain :uuid? and what would be the benefit?#2016-12-1401:54weialso for one-one relationships what’s the tradeoff in pulling everything into one entity vs splitting it into a (component) entity? e.g. a team has a set of credentials. so I might have the following properties: :team/id, :team/members, :team.creds/email, :team.creds/token. should those go into one entity or two separate ones?#2016-12-1403:12bhagany@wei I’m no expert, but fwiw, I would use the same uuid attribute for entities that I expected to query across. 
So, using your example, if I thought I was going to do something like :where [?user-or-team :uuid ?uuid], I’d use the general one, but if it’s always like :where [?user :user/uuid ?uuid], then the more specific ones are fine.#2016-12-1403:12bhaganyFor my own use case, I use a general attribute, with a semantically broad namespace#2016-12-1403:13bhagany@wei for your second question, I would put all of those attributes in the same entity#2016-12-1403:15bhaganyI think components make the most sense for :cardinality/many references#2016-12-1409:54Matt ButlerWhen you pass a java.util.date into a query is it cast to a long so it can be compared against a :db/instant like so (> ?dbintant ?javautildate)?#2016-12-1417:23wei@mbutler I believe they are compatible (= (type (java.util.Date.)) (type #inst “2016”)) => true#2016-12-1417:24weiI find myself repeating (map (partial d/entity db)) a lot in my helper functions, e.g. (defn all [db]
(->> (d/q '[:find [?e ...]
:where [?e :team/uuid]]
db)
(map (partial d/entity db)))) is there a better abstraction that avoids this duplication?#2016-12-1417:25Matt Butler@wei Right, (> java.util.Date java.util.Date) is not valid clojure code as > needs integers. If you store a util date as a :db/instant it becomes a long so you can use < on it. However it also seems if you pass a util date into a query it is coerced into a long. Wanted to check I wasn’t incorrectly making a huge presumption.#2016-12-1417:25wei@mbutler also found this old thread https://groups.google.com/forum/#!topic/datomic/iWFKMItXWkM#2016-12-1417:26Matt Butler(> :db/instant java.util.Date) inside a query works which is super interesting and how I’ve been doing it 😄#2016-12-1419:13rnandan273Hi, Followed instructions using heroku datomic build as specified here https://elements.heroku.com/buildpacks/opengrail/heroku-buildpack-datomic but i keep getting error stating#2016-12-1419:13rnandan273Device "eth1" does not exist.
2016-12-14T18:56:43.386579+00:00 app[datomic.1]: sed: can't read /app/scripts/transactor.properties: No such file or directory
2016-12-14T18:56:43.398163+00:00 app[datomic.1]: Picked up JAVA_TOOL_OPTIONS: -Djava.rmi.server.useCodebaseOnly=true
2016-12-14T18:56:43.389335+00:00 app[datomic.1]: Launching with Java options -server -Xms256m -Xmx2g -Ddatomic.printConnectionInfo=false
2016-12-14T18:56:47.326896+00:00 app[datomic.1]: Critical failure, cannot continue: Error starting transactor
2016-12-14T18:56:47.327568+00:00 app[datomic.1]: java.lang.Exception: 'protocol' property not set
2016-12-14T18:56:47.327669+00:00 app[datomic.1]: at datomic.transactor$ensure_args.invokeSt#2016-12-1419:14rnandan273Any ideas? i have heroku postgress up and running#2016-12-1419:17weiechoing in case anyone has a good answer:#2016-12-1419:18djjolicoeurcould use the pull api inside the query, but that assumes you know the shape of the data you want up front and don’t want to leverage the cursor-like nature of entities#2016-12-1419:19robert-stuttaford@wei, basically, no. 🙂#2016-12-1419:21robert-stuttaford@wei well, that’s not strictly true. i wrote some transducer backed helpers:#2016-12-1419:22robert-stuttaford(defn to-entity-for-db [db]
  (partial d/entity db))

(defn ids-as-entities
  ;; transducer
  ([db] (map (to-entity-for-db db)))
  ;; actually do the work
  ([db ids] (sequence (ids-as-entities db) ids)))

(defn datoms-as-entities
  ;; transducer
  ([db] (comp (map :e) (ids-as-entities db)))
  ;; actually do the work
  ([db datoms] (sequence (datoms-as-entities db) datoms)))
#2016-12-1419:22robert-stuttafordso your example would change to (ids-as-entities db (d/q '[...] db))#2016-12-1419:23robert-stuttafordyou can see i have one for use with d/datoms too#2016-12-1419:36wei@djjolicoeur @robert-stuttaford ah, thanks for the ideas#2016-12-1419:40wei@robert-stuttaford i’m finding your helpers useful, do you have a lib or gist of any more useful ones?#2016-12-1419:44robert-stuttaford@wei https://gist.github.com/robert-stuttaford/39d43c011e498542bcf8#2016-12-1419:45robert-stuttafordi’m toying with the idea of releasing a library with this stuff 🙂#2016-12-1419:48weithanks for sharing! I’ve also tried building helper layers in various forms, but am now of the opinion that small convenience functions work better than a substantial wrapper library. eventually I run into cases where those wrappers fail and I need to drop down into base api#2016-12-1419:49robert-stuttafordthe trick is not to wrap, i think, but to make useful shortcuts to compose the base api with#2016-12-1419:50robert-stuttafordif you try to hide the base api away, you’re losing (i have this t-shirt 🙈 )#2016-12-1500:32weidoes datomic have an assert if not exists mode? for example, in an upsert I want to generate a new :user/uuid if it’s a new user, but I wouldn’t want to replace an existing user’s uuid#2016-12-1504:11potetm@wei Unique identities: http://docs.datomic.com/identity.html#sec-4#2016-12-1504:11potetmWait, that doesn't quite fit the bill.#2016-12-1504:14potetmYeah that's probably something you'd have to do in a transactor fn. (Though I'm having trouble coming up w/ a use case for it.)#2016-12-1504:17potetmHow do you know it's the same user if it doesn't have the same id?#2016-12-1505:49mx2000How do I implement a database constraint, that all my emails have to be lower-case?#2016-12-1505:49mx2000I did not find any result on google.#2016-12-1507:42tengAre there any tools for making diagrams of a Datomic database? 
I’m used to have that for relational databases where you can see all the entities/attributes. And maybe also relations, if that can be figured out by ‘ref’ type + naming conventions.#2016-12-1508:09robert-stuttafordnot afaik, but not hard to produce a dataset that e.g. graphviz or mermaid can consume#2016-12-1508:23tengok, thanks.#2016-12-1510:15karol.adamieclatest datomic release breaks HTTP rest endpoint#2016-12-1510:15karol.adamiecjava.lang.UnsupportedClassVersionError: org/eclipse/jetty/server/Server : Unsupported major.minor version 52.0#2016-12-1510:27val_waeselynck@mx2000 validation is the kind of thing you want to do in the Peer, using any library available to your application language#2016-12-1510:27val_waeselynckthere is no implicit way to enforce this kind of constraint.#2016-12-1512:32magnarsOkay, maybe I'm overthinking this, but bear with me: Let's say I have a :db.type/keyword field, and 10 000 entities all have the same keyword. Let's say that keyword is 100 bytes long. Will this take up 1MB of space in datomic? Or do they all reference the same keyword somehow? Would I save lots of space by making it a :db.type/ref and point to an entity with a :db/ident instead?#2016-12-1512:53robert-stuttaford@magnars i think you’d save space, because now it’s 10,000 Longs and one keyword. @marshall or @jaret may correct me#2016-12-1515:00jaret@karol.adamiec I just used REST to transact and got no error. Where are you running into this? On transactions or on launching the REST service?#2016-12-1515:06karol.adamiec@jaret trying to run the service#2016-12-1515:14jaretAnd you're using 0.9.5544?#2016-12-1515:15jaretjbin at Jarets-MacBook-Pro in ~/Desktop/Jaret/Tools/releasetest/5544/pro/datomic-pro-0.9.5544
$ bin/rest -p 8001 dev datomic:
REST API started on port: 8001
dev = datomic:
#2016-12-1515:37karol.adamiecyes but on amazon ami#2016-12-1515:46karol.adamiecchecked local machine and is fine, must be sth with aws image then.#2016-12-1517:45dominicmQuick google didn't turn up anything, anybody got any experience with periodically dumping datomic queries into something like redshift or postgres?#2016-12-1517:49dominicmKinda figuring I'll have to build something with since and translate those to insert/updates?#2016-12-1517:52robert-stuttafordyou could use d/log and d/tx-range#2016-12-1517:52dominicmThat actually makes more sense, yes.#2016-12-1518:37kennyIs there a way to restart a Datomic restore from a specific “segment”? My large import failed due to exceeded throughput and the last thing it printed out was "Copied 414355 segments." I don’t really want to restart the process from the very beginning because even though it skips segments it has already imported, it still takes a long time to get to the place it died at.#2016-12-1518:47jaret@kenny No you can't restore to a specific segment.#2016-12-1518:48kenny@jaret Okay. What is the provisioned throughput my Dynamo table should have for an import? I had it set at 300 which seemed to work well.#2016-12-1519:16wei@potetm the problem arises when there are multiple identity properties (e.g. uuid and email). if an update comes in with {:user/email “ I’d want to be smart enough to assign a :user/uuid if we can’t find {:tag :mailto:meemail.commeemail.com, :attrs nil, :content nil}, but not reassign a uuid if an existing user is found. would be cool if there were a simple to do it without a db fn.#2016-12-1519:59jaret@kenny for fastest imports you will want DDB writes set to 1000 or more#2016-12-1520:02kenny@jaret What about read capacity?#2016-12-1520:04jaretSo I got a bit confused. You're running restore right?#2016-12-1520:04jaretBefore, I assumed you were asking about import specifically#2016-12-1520:04kennyYes. 
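The d/log approach robert-stuttaford suggests for periodic export can be sketched like this (`write-to-warehouse!` and `save-checkpoint!` are hypothetical helpers; the datom-to-SQL translation is left out):

```clojure
;; Read every transaction committed since the last checkpoint.
;; Each tx-range entry is a map with :t (basis t) and :data (the
;; datoms asserted/retracted in that transaction).
(defn transactions-since [conn last-t]
  (d/tx-range (d/log conn) (inc last-t) nil))

(doseq [{:keys [t data]} (transactions-since conn last-exported-t)]
  (write-to-warehouse! data) ; hypothetical: translate datoms to upserts
  (save-checkpoint! t))      ; hypothetical: persist the new basis t
```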
I am running a restore-db#2016-12-1520:05kennyDatabase has ~450m datoms#2016-12-1520:05marshallhttp://docs.datomic.com/system-properties.html#backup-properties#2016-12-1520:06jaretSo the recommendations are still to bump your provisioning for write if you are being throttled, but you might not need 1000#2016-12-1520:08kennyWell 300 did not work so 800? I just don’t want this restore to fail and need to start over again 😛#2016-12-1520:09jaretWell you know its a one time thing so mise well bump 1000#2016-12-1520:09kennyAlso there seems to be some read reqs as well. It looks like the restore is using ~100 read capacity#2016-12-1520:10kennyYeah I’ll leave it 1000, just to be safe.#2016-12-1520:10jaretYou can change the number of concurrent reads and writes using the properties marshall mentioned#2016-12-1520:10kennyBut would those need to be changed before the restore has started?#2016-12-1520:11jaretyes#2016-12-1520:11jaretI think you will be ok just bumping the provisioning temporarily#2016-12-1520:12kennyYeah I’m trying 150 and 1000.#2016-12-1522:27gdeer81@jonas 👏:skin-tone-2: http://learndatalogtoday.org is still my favorite place to go to refresh my datalog skills#2016-12-1522:28gdeer814 years later and there still isn't anything like it#2016-12-1522:31weiit’s great for evangelizing datalog#2016-12-1522:31gdeer81I'm giving a 15 minute demo of Datomic tomorrow and that movie schema is one that my practice audience caught on to the quickest#2016-12-1522:36adamfreylet’s say I have a Datomic database with a cardinality/many :user/aka attribute. How can I write a parameterized query to return entities that match all supplied akas? 
“Give me all users known as “The Dude” AND “El Duderino”.#2016-12-1522:38adamfreyI can make it work easily in the inline, non-parameterized version of the query, but I can’t figure out the query when I have to pass in the akas as input#2016-12-1522:39marshall@adamfrey http://docs.datomic.com/query.html#collection-binding#2016-12-1522:39marshallOh, all. Maybe look just above that section#2016-12-1522:39marshallTuple binding form#2016-12-1522:40adamfreyI noticed with tuple binding that the results depend on the order of the elements in the input vector#2016-12-1522:42adamfreyso if an entity just has aka “A” but not “B” if I pass in [”A” “B”] as input it returns, but not if I pass [”B” “A”], unless I’m mistaken#2016-12-1522:47adamfrey(da/q '{:find [?e]
       :in [$ [?aka]]
       :where [[?e :aka ?aka]]}
      (-> (da/db conn)
          (da/with [{:db/id (d/tempid :db.part/db)
                     :db/ident :aka
                     :db/valueType :db.type/string
                     :db/cardinality :db.cardinality/many
                     :db.install/_attribute :db.part/db}])
          :db-after
          (da/with [[:db/add 1 :aka "A"]
                    [:db/add 1 :aka "B"]
                    [:db/add 2 :aka "A"]])
          :db-after)
      ["A" "B"])
=> #{[1] [2]}
But If I change it the input [”B” “A”] I get #{[1]}#2016-12-1522:59mitchelkuijpers@adamfrey you need to use [?aka ...] you currently only use the first value which explains your current behavior#2016-12-1522:59mitchelkuijpersIn the :in part#2016-12-1523:02adamfreyok. But if I change my in clause to :in [$ [?aka …]] then it becomes a collection binding and it does OR matching instead of AND#2016-12-1523:03mitchelkuijpersYes that's correct, hmm#2016-12-1523:03mitchelkuijpersI am on my phone not sure how to do that from the top of my head#2016-12-1523:06adamfreyso I guess tuple binding can give me AND, but only if I know the amount of things I want to AND ahead of time, and can destructure with :in $ [?aka1 ?aka2]. But it’s still unclear to me how/if I can AND with a list of values that variable at runtime#2016-12-1523:07adamfreyI’m going to play around with relation binding, because it’s the only option left#2016-12-1523:07marshallYou may be able to use a rule#2016-12-1523:08marshallI'm also on my phone so I can't try currently. I can look tomorrow morning #2016-12-1523:08adamfreyok. Thanks for your help#2016-12-1606:02wei@robert-stuttaford I made an edit to your datomic helpers gist https://gist.github.com/robert-stuttaford/39d43c011e498542bcf8#2016-12-1615:14tjtoltonSo, what are the disadvantages to using datomic? What are the pain points and how are they managed? What kind of scaling problems does it have? When I attempt to sell datomic to my company, these are questions they will want to hear addressed#2016-12-1615:22dexterthe biggest limitation I found was that you can't push down processing to the nodes so with terrascale datasets you have to ship a lot of data. This can be mitigated by denormalizing so you just keep pre-filtered datasets around#2016-12-1615:22pesterhazysome operations are definitely not fast by default#2016-12-1615:27tjtoltonterrascale datasets#2016-12-1615:28tjtoltonGood. 
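For the variable-length AND matching adamfrey is after, one approach (a sketch, not the only option) is to join against a collection binding and then keep only the entities that matched every supplied value:

```clojure
;; AND semantics over a cardinality-many attribute: an entity
;; qualifies only if it matched as many distinct akas as we supplied.
(defn users-with-all-akas [db akas]
  (->> (d/q '[:find ?e (count-distinct ?aka)
              :in $ [?aka ...]
              :where [?e :user/aka ?aka]]
            db akas)
       (keep (fn [[e n]] (when (= n (count akas)) e)))))
```

This stays a single query for any number of inputs, at the cost of a post-filter on the aggregate in the peer.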
that one is pretty far off, I believe.#2016-12-1615:29tjtoltonAlso, can someone give me a link to a primer or a blog post on what the hell impedance mismatch is? This is one of those buzzwords that I keep hearing and only understand inductively#2016-12-1615:36pesterhazyone downside is that you can't use your typical SQL-based analytics tools with datomic; if you're planning to use such tools, you need an etl job that exports data to an SQL database#2016-12-1616:58tjtoltonahh, that's a really important one.#2016-12-1618:56erichmondIs anyone from datomic sales / support here in the channel atm?#2016-12-1618:58stuartsierra@tjtolton: when talking about databases, "impedance mismatch" usually refers to the fact that typical SQL + OOP systems have two different ways of representing information: relations (SQL tables) and graphs (objects).#2016-12-1618:59stuartsierraWhen using a SQL database from an object-oriented language, there's almost always some translation that needs to happen between the database and your code.#2016-12-1619:01stuartsierraAs a consequence, you can end up with object-oriented code that breaks some assumptions about how objects should work (such as: calling a get method should be a fast, efficient operation) and also fails to take advantage of the features of the SQL database (joins, views, optimized queries).#2016-12-1619:05stuartsierraDatomic tries to avoid the impedance mismatch by using a data model (tuples) that has a perfect one-to-one correspondence with regular programming-language data structures (maps and sets). The translation is automatic and loss-less.#2016-12-1619:07stuartsierraOn the other hand, this means that applications using Datomic typically stay close to the Datomic data model through much of the application code. It is unusual to wrap "objects" around Datomic transactions or queries, as is commonly done with ORM frameworks in object-oriented languages.#2016-12-1619:08jaret@erichmond we're here. 
@marshall and I are here.#2016-12-1619:37tjtolton@stuartsierra okay, that makes sense.#2016-12-1620:23tjtoltonso, the advantages of datomic in a read heavy system seem pretty obvious, because of local querying. Are there advantages to datomic in a write heavy system compared to, say, postgres? one that reacts to n million changes per day?#2016-12-1620:42val_waeselynck@tjtolton your transactor limits the write throughput in a way that cannot be scaled horizontally (well, nor can postgres...). In addition, things can get challenging when your datasets get big (the famous "10 billion datoms" limit).#2016-12-1620:43val_waeselynck@tjtolton on the other hand, reads don't slow down writes#2016-12-1620:44val_waeselynck@tjtolton also, re: the impedance mismatch, I recommend this talk: https://www.infoq.com/presentations/Impedance-Mismatch#2016-12-1621:28weishould you be able to use lookup refs for other entities within the same transaction? from my experiments it seems that’s not possible, and I’m wondering why not.#2016-12-1622:10stuartsierra@wei Lookup refs in transactions are only supported for entities which existed in the database before the transaction.#2016-12-1701:55idiomancyOkay, so, it seems like datomic a is pretty well suited for replacement for append only relational models. Is there anywhere online that details a) how to build a relational, append-only database model and b) where that strategy falls apart. I need to be able to answer the question "what problem are we trying to solve here?" when I sell this as our database system for our next service#2016-12-1706:10thedavidmeister@idiomancy “what problem are we trying to solve” doesn’t seem related to questions a. and b.?#2016-12-1706:10thedavidmeisterthis is an interesting talk on that topic though https://www.infoq.com/presentations/Impedance-Mismatch#2016-12-1711:56idiomancyIn my particular case, its highly related. 
We'll be doing a change history table in cassandra or sql if I cant make the case for datomic#2016-12-1713:59thedavidmeister@idiomancy ok well i’ve worked with revision history tables before in sql#2016-12-1714:00thedavidmeisterwith version ids, etc.#2016-12-1714:01thedavidmeisterit’s not as powerful or elegant as being able to run queries “as at” a time, or being able to apply “what if” diffs to a db#2016-12-1714:04thedavidmeistercertainly trying to emulate exactly what datomic does by hand in sql sounds like a bad idea#2016-12-1714:45idiomancyrevision history might not be the right way to put it. we're actually maintaining a record of all the changes that happen to customers. (so when they change their birthday in the system or opt in to other features, or what have you)#2016-12-1715:03lvhIs the Datomic DynamoDB schema public and considered stable API? I’m building a dataset that I think datomic will be useful for at some point — I’d like to make it as easy as possible to sprinkle some sample Datomic on it down the line.#2016-12-1715:03lvh(It would already be EAVT-ish.)#2016-12-1716:25stuartsierra@lvh: Datomic uses all storage backends as simple key-value stores to hold compressed chunks of its indexes. 
You cannot mix storage-native queries or data with Datomic data.#2016-12-1716:25lvhah; too bad#2016-12-1716:25lvhETL it is/shall be then#2016-12-1717:22thedavidmeister@idiomancy even if you maintain a record of all the changes for a customer, how do you build functionality on top of that?#2016-12-1717:24thedavidmeisterhow does it tie into an orm?#2016-12-1717:24thedavidmeisterhow does sql help you replay/snapshot history?#2016-12-1717:28thedavidmeisterif all you want is conceptually just a log, then sql is probably fine#2016-12-1717:30thedavidmeisterbut trying to ask historical questions of a relational db, even if it is append only, sounds like a lot of gnarly code to write#2016-12-1805:54kwladykaThe Datomic Free transactor is limited to 2 simultaneous peers and embedded storage and does not support Datomic Clients. <- what embedded storage exactly mean in this case? Is it mean only dev or everything expect legacy storages?#2016-12-1806:02robert-stuttafordthe free storage; literally an H2 database stored alongside the transactor binaries on disk; requiring the transactor be running to connect to the storage. all other storage types require pro starter or pro#2016-12-1815:50kwladykathx#2016-12-1815:53kwladyka> Datomic free configuration - playtime only!!
Is it mean it is impossible to use datomic free on Heroku?#2016-12-1819:26tjtoltonSo, datomic has an upper limit of 10 billion datoms... is that just for one db? (i.e. once one DB has filled, can you just switch and start using another DB?)#2016-12-1819:27kwladykaSomebody can share good article or video to understand how to use partitions. It is confuse for me.#2016-12-1819:27tjtolton+1 to that#2016-12-1819:28kwladyka@tjtolton i am learning datomic, but i guess you can clear history in DB first to free space. If it is not enough i guess it is limit by each DB separately.#2016-12-1819:33kwladykaas i understand :db.part/db and :db.part/tx i don’t use it manually, but i can use :db.part/user or install my own partition… ? [:db/add #db/id[:db.part/user] :db/ident :community.orgtype/community] - what #db/id doing here? When to use :db.part/user and when install new one? Never use manually :db.part/db and :db.part/tx? So :db.part/tx is created during transaction… :db.part/db is defined by schema what mean :db/ident values :foo/bar and :foo/baz are in the same partition because of namespace :foo?#2016-12-1819:41kwladyka[[:db/add "foo" :db/ident :green] [:db/add "foo" :db/ident :red] will be in :db.part/txpartition? Also :db.part/db?#2016-12-1820:09kwladykaor maybe i can think about that in that way… :db.part/tx is about Tx value in this table http://docs.datomic.com/entities.html and db.part/db is about.... about what? about E?#2016-12-1820:14robert-stuttaford@tjtolton 10billion datoms is a practical limit based on ram needed by peer. no limitation in the code.#2016-12-1820:15robert-stuttafordso using two 10bn dbs from a peer would mean needing twice the ram to hold all the roots in memory for both#2016-12-1820:18robert-stuttaford@kwladyka @tjtolton partitions:
1. db partition. holds partitions themselves, schema, and all db primitives (try eval (seq (d/datoms db :eavt)) on a brand new in memory database!)
2. tx partition. transactions.
3. user partitions. one installed for you. holds your own data, your db functions, your enums. you can make as many as you like.
why make multiple user partitions? it affects the sort order of entity ids. partitions sort together. this means that index segments hold co-partitioned data together. basically, it improves peer cache performance at scale because fewer index segments need to be read to satisfy a given query, because the stuff is packed together in storage.#2016-12-1820:18robert-stuttafordpartitions are expressed in entity ids as the high bits of the Long. the entity’s birthday plus a counter for that transaction make up the low bits.#2016-12-1820:20robert-stuttaford@kwladyka impossible to use transactor on heroku: no. painful: yes, because ephemeral storage. you’d need to be very good at keeping backups.#2016-12-1820:21kwladykado you know why heroku has this policy about datomic free?#2016-12-1820:22kwladyka@robert-stuttaford thank you for your explanation, unfortunately still i don’t get it in 100% 😞#2016-12-1820:22kwladykahttp://docs.datomic.com/entities.html Tx is :db.part/tx?#2016-12-1820:23tjtoltonhow much memory is 10b datoms?#2016-12-1820:23kwladykaE is.... :db.part/db ?#2016-12-1820:24tjtoltonprobably not a small amount, but I want to use the figure to make a point during my demo 😉#2016-12-1820:24robert-stuttaford@kwladyka partitions describe a numerical range for entity ids to live in#2016-12-1820:25robert-stuttaford
for your entities and for transactions, E is a Long
Tx is a reference to a transaction E elsewhere in the database#2016-12-1820:25robert-stuttaford@tjtolton you could always generate a 10bn datom database 🙂#2016-12-1820:25tjtoltonI suppose I could at that!#2016-12-1820:26robert-stuttafordbut basically, it’ll be big servers with > 10 gigs of ram i would imagine#2016-12-1820:27robert-stuttaford@kwladyka you can forget all about partitions, quite honestly. you only need to think about them if you plan to have a BIG database and you can clearly define arbitrary co-queried sections. e.g. a lot of stuff for client A and ditto for client B or what-have-you.#2016-12-1820:27kwladykaoh so :db.part/tx is for example Tx in range 0-1000 ?#2016-12-1820:27robert-stuttafordthe latest release allows you to use simpler string temporary ids#2016-12-1820:28robert-stuttafordthe actual range values are much bigger than 1000, of course#2016-12-1820:31robert-stuttafordlooking at your question again, @kwladyka;
1. use :db.part/db when installing schema
2. use :db.part/user when adding your own data
3. use :db.part/tx when annotating the transaction you’re processing#2016-12-1820:31kwladykaech i don’t get it.... but maybe answer how use it will help me… so… https://www.refheap.com/124290 how should i create this :product/label “enum” values in the right way?#2016-12-1820:31robert-stuttafordon 3, an example: (d/transact (d/connect …) [{:db/id (d/tempid :db.part/tx) :transacting-user-entity [:user/email “#2016-12-1820:32kwladykaWhy in examples about “enum” values i don’t see :db/isComponent parameter in schema?#2016-12-1820:32robert-stuttafordthis would put :transacting-user-entity on the transaction entity itself, which allows you to later see who made that change to the database#2016-12-1820:32kwladykaok so… 1. and 3. is done by automate yes? I don’t have to use it anywhere?#2016-12-1820:33kwladykahmm#2016-12-1820:33robert-stuttafordwhat is your goal, here? understand when to use them, or have an understanding of what partitions are and do?#2016-12-1820:33robert-stuttafordif it’s merely use, you could just use the simpler string tempid system#2016-12-1820:34robert-stuttafordhttp://blog.datomic.com/2016/11/datomic-update-client-api-unlimited.html has a section on this at the end#2016-12-1820:34kwladykai wanted understand what partitions are and do to know how to use… but i see it is too hard for my today so i can satisfy to know how to use it today 🙂#2016-12-1820:35robert-stuttafordok. 
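Installing and using a custom partition, with the old-style explicit tempids (a sketch; the partition name `:db.part/client-a` and the `:user/email` attribute are hypothetical):

```clojure
;; Install a partition, then mint tempids in it so that client-a's
;; entities get adjacent entity ids and sort together in the indexes.
@(d/transact conn [{:db/id (d/tempid :db.part/db)
                    :db/ident :db.part/client-a
                    :db.install/_partition :db.part/db}])

@(d/transact conn [{:db/id (d/tempid :db.part/client-a)
                    :user/email "a@example.com"}])
```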
so if you’re using the newest version of datomic, you don’t need to ever declare a partition any more.#2016-12-1820:35kwladykayupi, you simplified my life 🙂#2016-12-1820:35robert-stuttafordhonestly, i think you’d be in a great place if you work through the http://docs.datomic.com/tutorial.html#2016-12-1820:35robert-stuttafordit uses the new simpler system#2016-12-1820:35kwladykai read whole this doc#2016-12-1820:36robert-stuttaford[:db/add "foo" :db/ident :green] is using db.part/user in the background#2016-12-1820:36robert-stuttaford[:db/add "datomic.tx" :db/doc "remove incorrect assertion”] is using db.part/tx in the background#2016-12-1820:36kwladykabut this doc use client library which is in alpha version. In Clojure probably the best is use peer?#2016-12-1820:37robert-stuttaford{:db/ident :inv/sku
:db/valueType :db.type/string
:db/unique :db.unique/identity
:db/cardinality :db.cardinality/one}
is using db.part/db in the background#2016-12-1820:37kwladykaoh now i start to understand (i hope)#2016-12-1820:37robert-stuttafordthese are transactions; both the peer and the client lib are identical in this aspect because they all just send stuff to the transactor#2016-12-1820:38robert-stuttafordthe old system had you explicitly declaring temp ids with a partition#2016-12-1820:38robert-stuttaford[:db/add (d/tempid :db.part/user) :db/ident :green]#2016-12-1820:38robert-stuttaford[:db/add (d/tempid :db.part/tx) :db/doc "remove incorrect assertion"]#2016-12-1820:39robert-stuttaford{:db/ident :inv/sku
:db/valueType :db.type/string
:db/unique :db.unique/identity
:db/cardinality :db.cardinality/one
:db/id (d/tempid :db.part/db)
:db.install/_attribute :db.part/db} ;; schema had extra ceremony!
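The new-vs-old contrast above can be condensed into one side-by-side sketch. This is not verbatim from the thread: `conn`, the `:transacting-user-entity` attribute, and the email value are assumptions standing in for the truncated example earlier in the conversation.

```clojure
;; Sketch only: assumes `conn` is an open peer connection and that
;; :transacting-user-entity has been installed as a :db.type/ref attribute.
(require '[datomic.api :as d])

;; New style: the reserved string "datomic.tx" resolves to the current
;; transaction entity, so the annotation lands on the transaction itself.
(d/transact conn
  [{:db/id "datomic.tx"
    :transacting-user-entity [:user/email "jane@example.com"]}])

;; Old style: the same annotation, naming the partition explicitly.
(d/transact conn
  [{:db/id (d/tempid :db.part/tx)
    :transacting-user-entity [:user/email "jane@example.com"]}])
```

Either form leaves an audit trail: you can later query the transaction entity to see which user made the change.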
#2016-12-1820:39robert-stuttafordso there’s the new vs the old#2016-12-1820:40robert-stuttafordi suggest you just start doing stuff and see how it feels 🙂#2016-12-1820:40robert-stuttafordhttps://github.com/clojure-cookbook/clojure-cookbook/tree/master/06_databases 6.10 to 6.15 has stuff for you to follow along with#2016-12-1820:41robert-stuttafordit’s older though, and uses the older temp id syntax#2016-12-1820:41robert-stuttafordgood luck!#2016-12-1820:42kwladyka@robert-stuttaford thank you so much for this explanation. Probably information about Datomic from sources about old and new versions made me so confused.#2016-12-1820:43robert-stuttafordkey to note that both tempid systems are still fully applicable#2016-12-1820:43robert-stuttafordyou can learn it all and apply it all. no breaking changes in any of the APIs.#2016-12-1820:46kwladykaso… if i have enum values for something how should i declare schema? https://www.refheap.com/124293#2016-12-1821:15kwladykahm… ok ok :db/isComponent doesn’t work in that way#2016-12-1821:33vinnyataidewhat is :tx-data from the docs and why the <!! (<!! (client/transact conn {:tx-data [{:db/doc "hello world"}]}))#2016-12-1822:17marshall<!! Is blocking take from a channel. Look at the core.async docs for more info on that#2016-12-1822:17marshallthe new client api is all asynchronous so everything returns a channel#2016-12-1822:19marshall:tx-data is a key to a map.
The transact function takes a map as an argument, and the map needs to contain the transaction data as a value associated with the :tx-data key#2016-12-1822:20marshall@vinnyataide ^^#2016-12-1822:21vinnyataide👏 👏 👏 👏 👏 #2016-12-1822:22kwladykahttps://www.refheap.com/124298 - how do you deal with transforming data between datomic and a clojure spec structure?#2016-12-1822:22vinnyataideAs good as an explanation can get, thanks @marshall #2016-12-1822:22marshallNp#2016-12-1822:25tjtoltonin this post https://martintrojer.github.io/clojure/2015/06/03/datomic-dos-and-donts
about the DO's and DONT's of datomic, Martin Trojer says:
> Do your
>Datomic brings in lots of maven dependencies. Make sure you don’t suffer from clashes, spend time solving the ‘:exclusions puzzle’.
what does he mean by "the exclusions puzzle"?#2016-12-1822:26marshallTry running lein deps :tree #2016-12-1822:27marshallHe's referring to managing exclusions on various dependencies to avoid conflicts #2016-12-1901:23tjtoltonHmm, interesting#2016-12-1901:23tjtoltonanother from that post:
>Keep metrics on your query times
>Datomic lacks query planning. Queries that look harmless can be real hogs. The solution is usually blindly swapping lines in your query until you get an order of magnitude speedup.
what does he mean by "query planning"? is this a feature of databases that datomic lacks, or is this a comment on how datomic is used?#2016-12-1901:38marshallI'd take a look at https://github.com/Datomic/day-of-datomic/blob/master/tutorial/decomposing_a_query.clj#2016-12-1901:39marshallYou can usually do much better than "blindly swapping" clauses #2016-12-1904:13xk05getting ready to do a CloudFormation install#2016-12-1904:31robert-stuttaford@xk05 may be able to save you some pain longer term https://github.com/robert-stuttaford/terraform-example#2016-12-1904:32robert-stuttafordthis has turn-key datomic transactor + apps + memcached + ddb#2016-12-1904:33xk05reading...#2016-12-1904:37xk05this may be overkill for my purposes#2016-12-1904:38xk05it's good to see there is scaffolding in place, though#2016-12-1904:38xk05the clojure community is always leaving presents under the tree for me in this regard#2016-12-1904:47xk05I just listened to the Cognitect podcast with Paula? Gearon. Man, the things she was saying! Many of the very same things that have bugged me for years. What a joy to come back to this topic now, and a RDF layer in the works. I look forward to developing some ideas on this platform.#2016-12-1904:54xk05I made an attempt to write a functional triplestore with owl2 rules a few years back, after spending a winter studying the topic at Univ. of Washington. It was just a sort of moonlighting hobby thing to sketch out ideas of where I might want to go later. My implementation was in elisp with ttl flatfiles with allegrograph and gruff support. It's a boneyard of broken dreams. 😄 Well, really though, I look at it from a distance now, and it really does appear that this database and the support repositories growing up around it are a mature application of those very ideas. I guess this is a long-winded way of saying, "Wow! 
Somebody actually made this work!"#2016-12-1904:56xk05https://github.com/donlindsay/mx-org-rl#2016-12-1904:58xk05When my fire started to peter out I just sort of dumped it into a bucket here. I was sort of planning on organizing it, then I stumbled onto this. I may just convert the little bits I want to keep to clojure and work on this format instead.#2016-12-1906:08robert-stuttafordi highly recommend spending some time with Datomic, then @xk05 - i think you’ll find the more you use it, the more you see the beauty of its overall design.#2016-12-1912:35kennethkalmerI just bumped to 1.9-alpha14 and datomic 0.9.5544 and I’m suddenly seeing different variations on the same exception (still debugging)#2016-12-1912:35kennethkalmerjava.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: :db.error/not-a-keyword Cannot interpret as a keyword: restricted, no leading :#2016-12-1912:36kennethkalmerwent through the datomic google group and google itself, but this one seems ungoogleable 😞#2016-12-1912:36kennethkalmerhas anyone else seen this happen yet?#2016-12-1912:40misha@kennethkalmer: when?#2016-12-1912:41kennethkalmerfacepalm facepalm facepalm#2016-12-1912:41kennethkalmerfound it, I’m passing a string value to a keyword attribute#2016-12-1912:41kennethkalmerguess in the past it would have just cast it#2016-12-1912:42misha:duck:#2016-12-1914:46adamfrey@marshall did you, by chance, test out the runtime-varying AND query I was trying to build on Thursday? https://clojurians.slack.com/archives/datomic/p1481841384000992#2016-12-1915:13adamfreyI’ve found I can generate the where query at runtime like this
{:find '[?e]
 :where (for [aka akas]
          ['?e :aka aka])}
and that might be the idiomatic way, I just want to make sure I’m not missing anything#2016-12-1915:13marshall@adamfrey I didn’t get to it over the weekend, but if you have a good programmatically generated solution, I’d go with that.#2016-12-1915:14adamfreyok, thanks#2016-12-1920:29unbalancedalright lovely folks, maybe you guys can help me convince my company to adopt Datomic (finally) 😄
Have this SQL problem. T1 has about 2.5 million rows, about 20% are duplicates. T2 is a near clone of T1 with no duplicates. However, T2 has more recent entries than T1. I need to get all unique new entries from T1->T2. Seems to me like it should be a simple set operation but in SQL it is enormously computationally expensive. Is this something on which Datomic can save me or am I still screwed? 😛#2016-12-1920:31unbalancedI mean as the axis of time progresses I'm surely screwed in general but I'm hoping in this instance there may be some light 😂#2016-12-1920:38gdeer81well from a scalable architecture point of view you could argue that you could run a computationally expensive read query like that on a peer without bringing down the entire system because other peers will be able to make writes and do queries without being impacted#2016-12-1920:50tjtoltonin a related question, does the horizontal read scaling of datomic have any advantages over, say, read slaves?#2016-12-1920:51matthavenertjtolton: unlike read slaves, datomic’s “slave lag” is reified and you can determine which basis your current database is at#2016-12-1920:52matthavenerwith regular read slaves (at least in my limited experience), you cannot determine if the database you’re reading from has changes from some arbitrary transaction#2016-12-1920:52tjtoltonthats pretty significant#2016-12-1920:53matthaveneryeah, I saw a nasty bug once due to a race with read lag.
it’s worse because you only see the read lag bugs when load is high#2016-12-1920:53matthavener(or during some network partition)#2016-12-1920:54tjtoltonso, with datomic, i suppose you would use the sync operation to solve that?#2016-12-1920:56unbalanced@gdeer81 you are correct, however, I need to run this update ~ every 5 minutes#2016-12-1920:57unbalancedwould be very nice if i could get something going like T3 := set(T1) - set(T2) (pseudo-code)#2016-12-1920:58tjtolton@goomba you'd spin up a new app process to do it each time, I think is what he's saying.#2016-12-1920:58unbalanced@tjtolton correct, only issue is that the above operation on SQL takes several hours to complete#2016-12-1920:58tjtoltonahh#2016-12-1920:59unbalancedand since I need to run it every few minutes...#2016-12-1920:59tjtoltonYeah.#2016-12-1920:59unbalancedthere are, I'm sure, other ways to do it a la "well why don't you just..." ... but it would be REALLY nice if we could just leverage immutable data structures 😂 😂 😂#2016-12-1921:08gdeer81@goomba first you would need to convert your sql schema to datomic schema then migrate all your current data over, then rewrite your app to use datomic. doing all that might prevent the original problem you were having#2016-12-1921:08unbalanced@gdeer81 more than happy to do that, this is more of a feasibility question.
And yes preventing the original problem would certainly count for me as solving it!#2016-12-1921:09unbalancedThis whole situation came about because trying to prevent duplicate entries on insert means an insert speed of about 20-25 records per second while we generate about 30-35 records/second#2016-12-1921:10unbalancedso to prevent this we created a table where we don't check for duplicates, giving us an insert speed of 4k records/second, but now we have duplicates#2016-12-1921:14gdeer81I'm not sure what Datomic's write throughput max is, but assuming that one record would break out into at least 10 datoms, you're talking about being able to transact 40k datoms per second.#2016-12-1921:14unbalancedwell. That would certainly work.#2016-12-1921:14gdeer81I'm saying I don't know if Datomic was designed for that kind of churn#2016-12-1921:14unbalancedohh I see#2016-12-1921:15unbalanced😢#2016-12-1921:15gdeer81but don't let me discourage you from trying it#2016-12-1921:15unbalancedwell the good news is we use mongoDB as a buffer to handle the inserts#2016-12-1921:16unbalancedand I can drip feed with some buffer the info from mongo into Datomic#2016-12-1921:16unbalancedand it doesn't have to be up-to-the-second accurate#2016-12-1921:16gdeer81so you'd be using mongoDB as a queue?#2016-12-1921:17unbalancedit's our long term storage#2016-12-1921:17unbalancedbut as you know not great for queries#2016-12-1921:22gdeer81I really hate to discourage anyone from trying Datomic but your original problem could probably be solved with Informatica or writing your own de-dup process in Clojure#2016-12-1921:23unbalancedNot discouraged, just looking for any excuse to push for Datomic right meow 😄#2016-12-1921:25unbalanceddamn, I might literally have to do this in Clojure. It just takes up too much damn memory in Python#2016-12-1921:42kennyWhen trying to require the datomic clj-client I get this error:
java.lang.UnsupportedClassVersionError: org/eclipse/jetty/client/HttpClient : Unsupported major.minor version 52.0
clojure.lang.Compiler$CompilerException: java.lang.UnsupportedClassVersionError: org/eclipse/jetty/client/HttpClient : Unsupported major.minor version 52.0, compiling:(cognitect/http_client.clj:1:1)
I don't have any dependency conflicts with jetty. How do I use the clj-client?#2016-12-1921:52gdeer81what jdk are you using?#2016-12-1922:19kennyI was using 7 and I just switched to 8 and it's working. If it's not already documented, that should be added 🙂#2016-12-1922:30kennyWhat params do I need to pass to connect in the client API if I have a database running on Dynamo? For :db-name I put my db uri "datomic:". It's not clear what I am supposed to put for :account-id and :access-key. Also, do I need to specify any more of the optional keys?#2016-12-1922:31kennyFrom the docs:
> :account-id - must be set to datomic.client/PRO_ACCOUNT
Does this mean I don't need to specify this value? Or should it be set to my AWS access ID? Or maybe my my.datomic account id?#2016-12-2000:41ckarlsenit's literally datomic.client/PRO_ACCOUNT#2016-12-2000:49kennyI tried that and I get Incomplete or invalid connection config:#2016-12-2001:33marshall@kenny you need to run a peerserver against your ddb database#2016-12-2001:33marshallThen connect clients to that peer server#2016-12-2001:34marshallDb name is the "alias" you provide to the peer server when you start it against your ddb uri#2016-12-2001:35marshallhttp://docs.datomic.com/peer-server.html#2016-12-2001:35marshall@kenny ^ running peer server documented there. Info on connecting a client to it is lower on the page #2016-12-2001:37kennyAh, that makes sense. The exact steps you need to follow to use the client API aren't clear in the docs 🙂 Thanks for the info#2016-12-2001:38marshallSure. Let me know if you have suggestions as to how/where that could be specified better#2016-12-2001:43kennyI think the confusion for me was actually needing to go through the "Getting Started" docs. I assumed that I didn't need to read "Getting Started" because we have been using Datomic for a while now and I know all the basics. Maybe it should be called "Getting Started with Clients"?#2016-12-2001:44marshallAh, yeah I think the new getting started does cover how to run and connect with peer server#2016-12-2001:44marshallI'll see where/ how I might clarify that#2016-12-2002:56xk05What does this mean?#2016-12-2003:02xk05oh, n/m, i confused <system-name> with <table-name>, i guess#2016-12-2004:06drewverleeIm trying to setup datomic on postgres in heroku. Im on this step:
> open a SQL window on that server and paste in the bin/sql/postgres-table script
From http://docs.datomic.com/storage.html
I’m not sure what a SQL window is referring to. Googling “SQL window” doesn’t get me close to what i imagine. I imagine somewhere there is a script to setup the tables the way datomic wants them, and i need to run that in this SQL window. But i dont know where either is.#2016-12-2004:24drewverleethis postgres file is probably in my project directory somewhere...#2016-12-2004:31drewverleelooks like i probably need to download datomic to get it 🙂#2016-12-2004:31drewverleehazaa#2016-12-2005:14drewverleesimilar issue on this step:
> If you are using a Heroku hosted PostgreSQL instance, edit your sql-transactor-template.properties as follows:
Which is simply that i dont see a ‘sql-transactor-template.properties’ file anywhere. I see a config/samples/free-transactor-template.properties which i could copy and change the file name. But im not sure thats right, especially as the next step suggests there should be a sql-url variable in the file (which there isn’t). I could write it down, but that feels odd.#2016-12-2005:22gdeer81@drewverlee the free transactor is only for in-memory, you'll have to set up a starter license if you want to use postgres for your storage layer#2016-12-2005:24drewverlee@gdeer81 Thanks again.#2016-12-2018:03drewverleeHmm ok, so i’m a tad stuck. I want to deploy a full stack clojure app to heroku using datomic on top of postgres.
I assume i should include my datomic-pro dep in my project file:
[com.datomic/datomic-pro "0.9.5544" :exclusions [com.google.guava/guava org.slf4j/log4j-over-slf4j org.slf4j/slf4j-nop]]
Which seems to work, as lein uberjar reports success. But then, as i would expect, it fails when i try to deploy to heroku. I assume because it cant verify my license?
This walkthrough (https://github.com/jamesmartin/datomic-tutorial) seems to suggest i should add:
:repositories {"" {:url ""
:creds :gpg}}
to my project.clj. But i dont see any official docs on the subject.#2016-12-2018:04drewverleeMaybe a better question would be, whats the simplest way to get something deployed 🙂#2016-12-2018:05marshall@drewverlee what specifically fails when you try to deploy?
launching the transactor is what will require a valid license; I’m not sure whether details have changed, but I believe there used to be some hoops to jump through to get a transactor running using heroku#2016-12-2018:05marshalldue to IP and inter-instance communication configurations#2016-12-2018:07marshallfor details: https://groups.google.com/d/msg/datomic/KwF6fjE8Msc/hUpc17gW35MJ#2016-12-2018:07marshalldo you have a transactor up and running? or are you attempting to deploy both a peer and a transactor?#2016-12-2018:11drewverlee@marshall Thanks for the help.
the failure message when i run git push heroku is:
remote: Could not find artifact com.datomic:datomic-pro:jar:0.9.5544 in central ()
remote: Could not find artifact com.datomic:datomic-pro:jar:0.9.5544 in clojars ()
remote: This could be due to a typo in :dependencies or network issues.
remote: If you are behind a proxy, try setting the 'http_proxy' environment variable.
remote: Uberjar aborting because jar failed: Could not resolve dependencies
remote: ! Failed to build.
> do you have a transactor up and running?
Not yet. Im not sure what the order should be, this is what im currently doing:
build clojure project e.g lein new luminus vending_machine_app +re-frame +datomic
edit my project.clj to use datomic pro
(x) deploy to heroku
follow setup datomic database instructions (which involves setting up the transactor)#2016-12-2018:12marshallgotcha.
So that error does indicate your app is unable to get the datomic peer library from the private maven repo#2016-12-2018:12marshallyou’ll need to configure that via the directions provided in your http://my.datomic.com dashboard#2016-12-2018:12drewverlee@marshall Its probably important to know that im going headfirst at this, with very little background 🙂#2016-12-2018:13marshallhowever, You’ll also need a running transactor up for your peer to connect to before it can “do” anything#2016-12-2018:14marshallin general, the ‘startup’ order for a Datomic system would be:
1) provision/configure storage (i.e. Postgres or Dynamo)
2) start your transactor instance
3) start your app, which uses the Datomic peer library#2016-12-2018:14drewverlee@marshall ok, great. I see the instructions i missed on http://my.datomic.com.#2016-12-2018:16marshallmore details on storage setup can be found here: http://docs.datomic.com/storage.html#2016-12-2018:17marshallthe writeup on running Datomic in AWS (http://docs.datomic.com/aws.html) might also be helpful, although of course the AWS-specific parts won’t work for Heroku#2016-12-2018:24drewverlee@marshall of the two, paths (heroku vs AWS) which is the smoothest. I’m doing a barebones toy application just to get this stuff live#2016-12-2018:24marshallin general the AWS path is the fastest for getting the storage & transactor up and running#2016-12-2018:24marshallwe provide a set of scripts that automate most of it#2016-12-2018:25marshallbasically, follow the docs here: http://docs.datomic.com/storage.html#provisioning-dynamo#2016-12-2018:25marshallthen here: http://docs.datomic.com/aws.html#2016-12-2018:25marshalland you’ll have a transactor up and running#2016-12-2018:25marshallafter that, just deploy your application that uses the Peer library to EC2 however you like#2016-12-2018:27robert-stuttaford@drewverlee you can also use terraform; @mrmcc3 put together a codebase for transactor + peers, you can use the transactor bit https://github.com/mrmcc3/tf_aws_datomic#2016-12-2018:58matthavenerdrewverlee: FWIW, if you are going with heroku, there are a few heroku datomic buildpacks that make setting up the transactor a little easier#2016-12-2018:58matthavenereg https://elements.heroku.com/buildpacks/opengrail/heroku-buildpack-datomic#2016-12-2019:31weiis there a general way to implement composite attributes on top of datomic? by that I mean attributes which are unique when taken together. my current method is using a special key where the values are concatenated, e.g. 
:composite/team.id|user.id but this adds a lot of extra complexity around upserting entities.#2016-12-2019:35gdeer81@wei I think there is a feature request for that, so go to your http://my.datomic.com profile and upvote it#2016-12-2019:38weiupvoted, thanks. was wondering what others were doing in the meantime#2016-12-2019:42robert-stuttaford@wei you’re talking about what SQL calls composite primary keys, and Datomic does not support them#2016-12-2019:42robert-stuttafordyou can do your own with transactor functions#2016-12-2019:44robert-stuttafordhere’s a snippet. we eventually caved in and had our tx fn make a string value out of the parts so that we had a simple ‘primary key'#2016-12-2019:45robert-stuttafordif you think it through, such a feature would add quite some complexity to the system, which i don’t think would be worth it#2016-12-2019:45robert-stuttaford… but i’m not Rich Hickey, so it’s very possible i’m ignorant in several key ways#2016-12-2019:46weithanks for sharing. yes, I was reading some previous literature around using tx fns for composite keys, and it looks like every solution so far has some major limitations (e.g. not being able to add multiple entities in the same tx). was wondering if I was missing something critical as well#2016-12-2019:47robert-stuttafordthe current system is based on a few solid rules. adding this would complicate things quite a bit i think#2016-12-2019:49weiin your schema, is :meta/tag-slug the only unique attr?#2016-12-2019:50robert-stuttafordcorrect#2016-12-2019:51robert-stuttafordwe have a helper function too#2016-12-2019:52weithink I’m not following completely. 
if you wanted to put a composite key on several entities, would you reuse this db function or create a separate function for each entity?#2016-12-2019:52robert-stuttafordso this is a concrete implementation for one entity kind#2016-12-2019:53robert-stuttaforda fully generic implementation would be so indirect and meta, it’d be useless (that’s an intuition)#2016-12-2019:53weithat’s what I’m thinking too. but we have many composite keys in our system and it’s getting tedious to re-implement this for each#2016-12-2019:54robert-stuttaforddo your comp-keys follow any sort of a pattern? e.g. 2-3 strings#2016-12-2019:56weinot an uncommon requirement in my opinion- for example, user ids in some external system are unique only within a group (team)#2016-12-2019:58weiyes, it’s usually 2 strings. but sometimes one part of the composite string is based on an attribute from a joined entity, which complicates things so I’m debating whether that’s necessary#2016-12-2020:00robert-stuttafordperhaps it may be useful to think about how you’d model this data in a new database, from first principles, rather than as an import from a different data modelling paradigm#2016-12-2020:09weitrue. currently we have things like :composite/team.slack-id|user.slack-id (need a user’s slack id and a team’s slack id to uniquely identify a user) and :composite/team.uuid|tag.name (need the team and tag name to uniquely identify a tag). I do think composite keys are necessary in these cases, but maybe splitting it along a different dimension would make things easier.#2016-12-2020:10weiat any rate, thanks for the tips. if you wouldn’t mind posting your helper function, that would help me as well#2016-12-2022:04kwladykahmm i think i found some kind of bug: when i have [com.datomic/clj-client “0.8.606”]in dependency it spoils jetty ring. 
When i run lein repl and then (use ‘ring.adapter.jetty) i have an error CompilerException java.lang.NoClassDefFoundError: org/eclipse/jetty/http/HttpParser$ProxyHandler, compiling:(ring/adapter/jetty.clj:27:11) - it makes this bug even on this simple example https://github.com/ring-clojure/ring/wiki/Getting-Started#2016-12-2023:32dominicmCan I do pull inside a query on a sequence based db (like the [fred :likes blah] example in the query docs)? By default I get an exception, but I might be doing it wrong#2016-12-2100:10weimy transaction function works correctly with the in-memory db but I get a java.lang.ClassNotFoundException when I use a dev transactor. am I missing something?#2016-12-2101:29weiwhen using :requires in a db function can I use my own code, or only standard libs? currently getting an error that it can’t locate the file on the classpath#2016-12-2101:34weiok, turns out I need to add a jar to the lib directory of the transactor, and then I can use :requires#2016-12-2109:28geoffkhey guys, wondering if there's any documentation anywhere about the full set of values that get passed to the metrics callback function when provided with -Ddatomic.metricsCallback=my.ns/handler ? This page is incomplete: http://docs.datomic.com/monitoring.html#2016-12-2109:44robert-stuttaford@jaret @marshall ^#2016-12-2110:23pesterhazyToday I learned that d/q can return d/entitys: (d/q '[:find [?ent ...] :in $db :where [$db ?e :bar _] [(datomic.api/entity $db ?e) ?ent]] db)
#2016-12-2110:24pesterhazyHow did I not know this?#2016-12-2110:30pesterhazywe should do an advent calendar of cool things you can do with d/q#2016-12-2110:30pesterhazya bit late now I guess 🙂#2016-12-2113:50marshall@kwladyka sounds like there is a dep conflict between versions of jetty in ring and in datomic client. can you resolve it with exclusions?#2016-12-2113:51marshall@geoffk the metric callback return value is described here: http://docs.datomic.com/monitoring.html#callback-argument-format#2016-12-2113:52marshallit can contain any/all of the keys listed in this table: http://docs.datomic.com/monitoring.html#sec-5#2016-12-2114:03geoffkthank you @marshall i've just found that the metrics that are actually received by the callback handler, if I enumerate through all the keys in that map include some keys that aren't documented on either of those pages - There are a bunch of Peer* metrics in the map for example when specifying the callback from a peer#2016-12-2114:09marshall@geoffk Which specific metrics are showing up? That table specifically includes the transactor metrics.#2016-12-2114:16geoffkthat makes sense then - is there somewhere in the docs that enumerates peer metrics ? I'm specifically trying to get peer metrics logged to CloudWatch via this handler and some custom code - a good example of a key that doesn't seem to be documented anywhere is PeerFulltextBatch#2016-12-2114:16geoffkPeerAcceptNewMsec is another#2016-12-2114:17marshallRight. No, there is not currently a list of peer-specific metrics in the docs; I can log a request to provide one#2016-12-2114:17geoffkthat would be awesome 🙂 Yeah I'd just really like to understand what additional metrics we can surface about peers specifically#2016-12-2118:36xsyno7#2016-12-2119:52geoffko7#2016-12-2207:27gravIt seems that datomic.api/delete-db does not free up the disk space. 
Does the API support freeing up the disk space, or do I need to delete the db directory by hand?#2016-12-2213:57marshall@grav What storage are you using?
You’re correct that delete-db does not reclaim the disk space. You need to run gc-deleted-dbs : http://docs.datomic.com/capacity.html#storage-size (very bottom)
Then depending on your storage you’ll need to run compaction/vacuum/etc#2016-12-2213:57marshallGenerally if you’re not in production it is easier to backup, delete the table entirely, recreate and restore#2016-12-2213:58marshallThe other thing to keep in mind about disk space usage is that you need to run gcStorage periodically http://docs.datomic.com/capacity.html#garbage-collection on live databases to remove garbage segments#2016-12-2215:55robert-stuttafordremember to use get-database-list to see what other dbs you have. i’ve deleted the http://www.stuttaford.me/codex database more than once doing that 😞#2016-12-2220:57eggsyntaxAnyone aware of a predicate that’ll return true for ordinary maps and EntityMaps, but not for vectors?#2016-12-2220:58eggsyntax(map? my-entity-map) is false; (associative? vector) is true.#2016-12-2220:58eggsyntaxI can create a custom pred, of course, but I hate to do that on something intended as a general-purpose fn.#2016-12-2221:17alexmiller#(instance? clojure.lang.ILookup %) will tell you whether it can do keyword lookup#2016-12-2221:18eggsyntaxOoh, very nice.#2016-12-2221:18alexmillerif that’s not exactly what you need, I would consider writing a predicate that describes exactly the traits you expect to use rather than trying to figure out some common parent#2016-12-2221:18alexmiller(as there likely isn’t one)#2016-12-2221:19eggsyntaxNo, keyword lookup was exactly what I was after. Thanks @alexmiller!#2016-12-2221:19alexmillermap? corresponds to IPersistentMap (which implies things like persistence)#2016-12-2221:20alexmillerassociative? 
corresponds to Associative which is more about being able to look up val by key#2016-12-2221:20eggsyntaxAh, and since you can look up items in vectors by index...#2016-12-2221:20alexmilleryeah#2016-12-2221:21alexmillerif you have a datomic entity in hand, you can also look at what it implements with (supers (class entity))#2016-12-2221:21eggsyntaxCool, wasn't aware of that one, thanks!#2016-12-2221:21alexmillerkind of interesting to compare across different types#2016-12-2221:21alexmillerthere’s also some interesting class diagrams out there#2016-12-2221:22eggsyntaxYeah, I'm going to end up doing that just to satisfy some curiosity 🙂#2016-12-2221:22alexmillerhttps://github.com/Chouser/clojure-classes/tree/master#2016-12-2221:22alexmillerand Stuart has something similar somewhere#2016-12-2221:22eggsyntaxCool!#2016-12-2221:23alexmillerhttps://github.com/stuartsierra/class-diagram#2016-12-2221:23alexmillerI found the latter particularly helpful before I had internalized a lot of that stuff#2016-12-2221:23eggsyntaxMy new wallpaper 😉#2016-12-2222:57jsimonWhere can I find docs for older versions of datomic? Looking specifically for 0.9.5344#2016-12-2305:07robert-stuttafordwow, that’s awesome @alexmiller ; til about supers and those two class graph libs#2016-12-2305:25weiis there a way to use spec that works with both maps and datomic's entitymaps?#2016-12-2311:42geoffkWe wrote a metrics callback handler for capturing Datomic peer metrics in CloudWatch which I hope might come in handy for someone else too. https://gist.github.com/geoff-kruss/4504cdcf7e017d289862ab75fc856720#2016-12-2313:34alexmiller@wei no - s/keys relies on being able to iterate the map and you can't do that with entity maps. There is a ticket about this. One option is to put the entity into a map first by either into or select-keys#2016-12-2315:03kwladykahttps://www.refheap.com/49bc0c1dfa442e26d4d2af766 - how you deal with data transformation datomic <-> spec? 
I mean what is your data structure in situation like that and what do you use to transform it from datomic -> spec and from spec -> datomic. I can use :product/id keys in app, but then i will feel like i am doing structure especially for datomic in the app.#2016-12-2318:00ghaskinsI currently run the transactor with "-Xms384m -Xmx384m" and ...#2016-12-2318:00ghaskinsbut no matter what i try, I get that error on start up on the peer side#2016-12-2318:01ghaskinsany help understanding what is wrong would be appreciated#2016-12-2318:01ghaskinsthis is free-0.9.5407 in case it matters#2016-12-2318:04ghaskinshmm, one thing I am noticing is I get this error even without the transactor running#2016-12-2318:07ghaskinsok, so setting -Xm*512m in the peer seems to have fixed it: its like the peer needs its own transactor.properties too?#2016-12-2318:55robert-stuttaford@ghaskins http://docs.datomic.com/capacity.html has a pretty comprehensive explanation of all the knobs and dials on this topic#2016-12-2318:56ghaskins@robert-stuttaford ty#2016-12-2322:21drewverleehas anyone had success using this buildpack to deploy an app + datomic to heroku? It looks so nice and then i hit a: Error: Could not find or load main class clojure.main
which suggests its not building the jar i think.#2016-12-2322:34drewverleehttp://blog.opengrail.com/datomic/heroku/postgres/2015/11/19/datomic-heroku-spaces.html <-- the builder of the buildpack i used seems to suggest using the openspace isn't just for security, but is necessary. That might be my issue?#2016-12-2322:36drewverleeI guess ill pivot back to the AWS methods 🙂#2016-12-2322:39wei@alexmiller re: spec and entity maps - I’ve been doing this but it’s not composable and you can’t validate a ref, for example. do you have any suggestions?
(defmacro valid? [spec m]
  `(clojure.spec/valid? ~spec (into {} ~m)))

(defmacro assert [spec m]
  `(clojure.spec/assert ~spec (into {} ~m)))
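A side note on why the `(into {} ~m)` step matters: a Datomic entity map is not an ordinary map (it can't be iterated by `s/keys`, and `:db/id` is not in its key set, so the coercion silently drops it). A minimal pure-Clojure sketch of the `entity->map` helper idea — `entity->map` is a hypothetical name, and the entity is simulated here with a plain map plus a separate id, since a real EntityMap needs a live connection (modern `clojure.spec.alpha` is used in place of the original `clojure.spec`):

```clojure
(require '[clojure.spec.alpha :as s])

(s/def :product/name string?)
(s/def ::product (s/keys :req [:product/name]))

;; Hypothetical helper: coerce an entity-like map to a plain map,
;; re-attaching the id under :db/id (which `into {}` alone would drop).
(defn entity->map [id m]
  (assoc (into {} m) :db/id id))

(def m (entity->map 17592186045420 {:product/name "Widget"}))

(s/valid? ::product m) ;; => true, extra keys like :db/id are allowed by s/keys
(:db/id m)             ;; => 17592186045420
```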
#2016-12-2322:54bhaganyWe finally and officially have Datomic in production wooooooooo #2016-12-2323:06weicongrats @bhagany! we’re running datomic in production as well 🙂#2016-12-2422:07dottedmagI'm reading "Identity and Uniqueness" page of Datomic docs, and it says the following about idents: "When you navigate the entity API to a reference that has an ident, the lookup will return the ident, not another entity." -- what does it mean? I can't wrap my head around "navigate the entity API to a reference".#2016-12-2422:50pesterhazy A reference, or ref, is a value type#2016-12-2422:51pesterhazyA car has a model#2016-12-2422:52pesterhazy:car/model, the attribute, is a ref#2016-12-2422:53pesterhazyIf you navigate from the car to the model entity and the model happens to have an ident, the entity map will return the ident#2016-12-2423:02dottedmagAh, I see. It is to implement enums.#2016-12-2423:02dottedmag@pesterhazy Thanks#2016-12-2423:19dottedmagInteresting. I have created an entity with a reference to non-existing entity, and it worked: (datomic.api/transact conn [[:db/add "jdoe" :db/lang 44444444]]) -- is that expected?#2016-12-2423:20dottedmagBoth this entity and entity with id 44444444 were created as a result.#2016-12-2423:21dottedmagAh, I see the rationale here: https://groups.google.com/forum/#!topic/datomic/jZYXqtB4ycY#2016-12-2423:31dottedmagIt's interesting that there is no type for a "closed" enum, unless one counts :db.type/boolean as one.#2016-12-2509:03pesterhazy@dottedmag I find that interesting too, though I personally never use closed enums in relational dbs either#2016-12-2509:04pesterhazythe hickey-halloway philosophy prefers open systems#2016-12-2510:02dottedmag@pesterhazy Every time one adds a transaction one uses a closed enum: added is a boolean, not a ref, and I doubt Datomic would handle anything else than true and false properly.#2016-12-2510:03dottedmagToo bad we can only model two-element enums directly :)#2016-12-2711:31kwladykaI want add 
data like [{:pl “a” :en “b”}{:pl “c” :en “d”}] what choice of schema i have? Not sure how to operate on vector of maps in datomic 🙂#2016-12-2715:00pesterhazy@kwladyka, at my company we came up with a pattern#2016-12-2715:01pesterhazy:product/name, :product/description are in English#2016-12-2715:01pesterhazy:product/translations is a ref and points to a "translation" entity with all the same fields#2016-12-2715:02pesterhazy{:db/id ... : :pl, :product/name "Polish name", :product/description "Polish descr"}#2016-12-2715:02pesterhazythat's of course assuming that there's a primary language (in our case, English)#2016-12-2715:04pesterhazyit works for us, but I'm curious how others work with i18n in Datomic#2016-12-2715:04kwladykahmm i wanted make it simple on the beginning, but maybe it is not possible 🙂#2016-12-2715:04pesterhazyI think our pattern is actually relatively simple#2016-12-2715:05pesterhazythe problem of course is that you need a relation with a primary key consisting of product_id and language (in SQL terms)#2016-12-2715:05kwladykawhy not refer the :product/name :product/description etc. instead of :product/translation?#2016-12-2715:06pesterhazythat would work too, but it's convenient to have the English name attache directly to the entoty#2016-12-2715:07kwladykai prefer to not stick to any language, because my native language and country doesn’t have english language as primary#2016-12-2715:08pesterhazysame for me, but our company does have a primary lang#2016-12-2715:08kwladykaanyway… is it possible to just save vector of maps in datomic in any way? 
Just to start from scratch?#2016-12-2715:09kwladykaor i have to choose this pattern or another?#2016-12-2715:09pesterhazyyou can only store EAV tuples#2016-12-2715:09pesterhazyyou cannot store a vector, but you can store a set of refs#2016-12-2715:10kwladykabut i need the right order#2016-12-2715:10pesterhazythen you need to store a position attribute too#2016-12-2715:10pesterhazyAFAIK, though maybe there's some other smart way#2016-12-2715:10kwladykaso as i understand i can’t use :db.cardinality/many#2016-12-2715:10kwladykabecause i have to store ingredients etc.#2016-12-2715:11kwladykammm i really don’t want make complexity by position#2016-12-2715:12kwladykabut if i have to....#2016-12-2715:13pesterhazyit's not that bad, but you need to do the sorting yourself (in clojure code, outside of d/q)#2016-12-2715:14kwladykai mean i can add this, but it will be nice to write/read in datomic in the same order without adding additional position value#2016-12-2715:17kwladykathx for share your solution#2016-12-2715:19pesterhazynp#2016-12-2715:20pesterhazylet me know if you discover a better way#2016-12-2715:21marshallThe other option for ordering is to store things as a ‘linked list’ (i.e. each entity has a ‘next’ ref) instead of storing an index explicitly with each individual entity#2016-12-2715:23pesterhazyso you'd use a recursive query to get all the elements?#2016-12-2715:23marshallyep. Since all the work of query is local to the peer and caching provides local access, you don’t have to worry about the n+1 problem the same way as you would with a traditional DB#2016-12-2715:24marshallof course, the decision is similar to that of array vs linked list for a data structure - each is better at some things and worse for other things#2016-12-2715:24marshalli.e. insert in the middle vs random access, etc#2016-12-2715:26pesterhazyinteresting. 
at first glance it sounds scary, because of the danger of circular lists#2016-12-2715:30dottedmagIf order is an "insertion order", then one does not need a separate attribute and might use transaction id for sorting.#2016-12-2715:31dottedmagor entity id.#2016-12-2715:31marshall@dottedmag also correct - if you don’t need to “update” it later#2016-12-2715:31dottedmagretract and re-insert everything :)#2016-12-2715:31marshall@pesterhazy You can always limit the recursion (i.e. with pull)#2016-12-2715:32marshall@dottedmag true, if you’re using tx id - that also mandates that you transact each entity in a separate transaction#2016-12-2715:32marshallsince transactions themselves are atomic there’s no order within them#2016-12-2715:34pesterhazyrelying on txid works but can be limiting if you want to dump/restore portions of dbs#2016-12-2715:35pesterhazye.g. a development snapshot of the prod db, or just copying a few products from one, say, supplier to another#2016-12-2715:44jdkealyis there some best practice for stringing together datomic queries... I have a situation where i'd like to provide the customer with a variable number of filters. I have tried passing some DIY syntax to the backend and reducing the client inputs into one large datomic query, and it works with some success however the code looks quite brittle and I've yet to apply any appropriate order of executing queries (biggest diffs to smallest diffs)... So I was just wondering if anyone had any recommendations basically about how to abstract a datomic query into modular filters.#2016-12-2716:09jdkealylike the basic problem is... let's say i wanted someone to search for User entities. I provide a search by email, name, city inputs. On the backend the combinations of email,name,city could result in 8 or 9 defined query functions. ['name', 'city', 'email', 'name,city', 'city,email'] etc etc#2016-12-2720:28kwladyka@pesterhazy can you show me shema and how you operate on this data structure? 
For now i have https://www.refheap.com/9347805f7fb2e7604e132f761#2016-12-2720:29kwladykai am learning datomic and after thinking what you said me.... i am confuse how it work#2016-12-2720:30kwladykai am confuse how to create schema, add data, read data for solution what you showed me#2016-12-2720:31kwladykait probably means i totally miss same important part of datomic 😕#2016-12-2805:56xsynIf I do a count inside a datalog query: (q/q '[:find (count ?x)… and ?x is nil, does the count return nil or 0 ?#2016-12-2805:57xsynI mean, does the query return nil or 0 ?#2016-12-2806:01xsynTurns out I can check for myself......#2016-12-2819:08val_waeselynck@xsyn if there is no match for the where clause, it will return nil, which can be annoying.#2016-12-2820:34xsynYeah, I was running into a null pointer because I expected it to work differently. But it’s cool, now that I know#2016-12-2923:09wilkerluciohey, does someone can help me deploying Datomic to Heroku? I'm following this tutorial: https://elements.heroku.com/buildpacks/opengrail/heroku-buildpack-datomic#2016-12-2923:09wilkerluciobut I can't get the datomic dependency to be found when deploying my app#2016-12-3004:02drewverlee@wilkerlucio One thing to note is you need a heroku private space in order for that to work. At least thats what i concluded when i tried that.#2016-12-3004:03drewverlee@wilkerlucio you also need the Datomic Pro version.#2016-12-3004:07drewverleeI haven’t gotten that to work. When i deployed it along side a luminus plus reframe setup the procfile didn’t seem to trigger a build of the clojure app, so i ended up with a clojure.main not found. Which lead me to believei had several more steps to go in order to deploy an app with heroku on datomic.
Its possible that buildpack deploys only datomic on heroku and you have to somehow deploy a separate app to heroku and tell them how to communicate. It wasn’t clear to me, and at that point i figure it might be easier to just use AWS where i have more control...#2016-12-3015:55wilkerluciothanks @drewverlee , I was able to build the clojure but it's failing to install the datomic dependency, I guess I have to setup a private repository for it... I'm starting to think about aborting Heroku, the reason to pick it was easy of use, since it's not being very easy maybe I'm better on AWS like you said#2016-12-3016:12drewverlee@wilkerlucio
> I'm starting to think about aborting Heroku, the reason to pick it was easy of use, since it's not being very easy maybe I'm better on AWS like you said
Those thoughts echo my own. But take what i say with a grain of salt as this is a new realm for me.#2016-12-3016:20wilkerluciomaybe we can have the transactor in AWS and the Clojure app on Heroku, deploying a non-datomic app seems easy there#2016-12-3017:25curtosis@drewverlee I don’t know about your dependency problem, but yes - you need to deploy two apps (transactor + peer/client). IIRC getting them to communicate is why you need access to Heroku Spaces, which is not currently a standard part of the service.#2016-12-3018:24drewverlee@curtosis Makes perfect sense, i dont think thats explicitly stated in the buildpack i was looking at. But it probably assumes you can put two and two together. Sadly Heroku Spaces seems to require heroku enterprise which costs thousands of dollars. Not ideal for setting up a for fun personal app.#2016-12-3019:45curtosisI think when that article was posted originally, Spaces was in limited beta so you could get it enabled on request.#2016-12-3121:52mrgThis may be the wrong place to ask, since I'm doing this in datascript, but maybe the solution is similar: I am wondering how to check for set membership of a value, like you would do with some in plain clojure#2016-12-3121:52mrgI would expect something like this to return the eid of max#2017-01-0103:01notanonwhats the cost of the AWS resources that are created by this guide? http://docs.datomic.com/aws.html#2017-01-0119:53mrgfwiw this was the answer#2017-01-0218:08thegeezIf transaction A was done before tx B is it then the case that the 'tx entity id for A' < 'tx entity id for B'? It seems like it is, but I can only find that documented for the t value of a transaction.#2017-01-0220:02notanontransaction ids always increment.#2017-01-0303:07weiwhen you call (into {} some-entity-map), why isn’t :db/id part of the resulting map? 
just wondering if there’s a good reason, before I make an entity->map helper that does copy the :db/id#2017-01-0309:18val_waeselynck@wei probably because :db/id is not part of the keySet(): http://docs.datomic.com/javadoc/datomic/Entity.html#keySet--#2017-01-0309:21kwladykaWhat do you use to store timestamp with timezone?#2017-01-0311:27lenjava.util.Date does not contain timezone info, so if you are using :db.type/instant you will need to store the timezone in a different datom#2017-01-0311:36pesterhazyI'm looking into set intersection query ("give me all products with features a, b AND c")#2017-01-0311:37pesterhazyHere's what I could come up with: https://gist.github.com/pesterhazy/1e0ce9f18035b1693ee9e55241b23be6#2017-01-0311:37pesterhazyIs there a more elegant way than programmatically generating queries?#2017-01-0311:53robert-stuttaford@pesterhazy entity ref collections are sets; i’d try filtering over all entities with :cat.product/features and doing set intersections with clojure.set/intersection on the set of entities for the slugs#2017-01-0311:58robert-stuttafordsomething like this#2017-01-0311:58robert-stuttaford(let [slugs ...
      db ...
      ent-fn #(d/entity db %)
      ;; resolve each slug to its feature entity via lookup ref; collect
      ;; into a set so clojure.set/intersection works (a vector would not)
      features (into #{}
                     (comp (map (fn [slug] [:cat.feature/slug slug]))
                           (map ent-fn))
                     slugs)]
  (into []
        (comp (map :e)   ; d/datoms yields datoms, not entity ids
              (distinct)  ; an entity appears once per feature datom
              (map ent-fn)
              ;; matches ANY wanted feature; for strict AND use clojure.set/subset?
              (filter #(seq (clojure.set/intersection (:cat.product/features %) features))))
        (d/datoms db :aevt :cat.product/features)))
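The set test at the heart of this approach can be exercised on plain data, outside Datomic. A sketch with hypothetical in-memory product maps, using the strict AND semantics of the original question ("all products with features a, b AND c"), where a subset check is the operation you want:

```clojure
(require '[clojure.set :as set])

;; Hypothetical stand-ins for product entities with a cardinality-many ref
(def products
  [{:name "A" :features #{:wifi :gps}}
   {:name "B" :features #{:gps}}
   {:name "C" :features #{:wifi :gps :nfc}}])

;; AND semantics: keep products whose feature set contains every wanted feature
(defn with-all-features [wanted products]
  (filter #(set/subset? wanted (:features %)) products))

(map :name (with-all-features #{:wifi :gps} products))
;; => ("A" "C")
```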
#2017-01-0312:00robert-stuttafordyou can make dynamic datalog queries, and it should cache its own pre-processing on the result of your make-intersection-q for each of the n values for future use. you might instead want to produce a value for http://docs.datomic.com/clojure/#datomic.api/query, which puts args and datalog syntax into a single structure#2017-01-0312:08thegeez@pesterhazy perhaps build up a rule for the AND-ing of the slugs?#2017-01-0312:09thegeez(let [rule [(into '[(slug-and [?e])] (map #(vector '?e :cat.feature/slug %) feature-slugs))]]
  (d/q
   '{:find  [?e]
     :in    [$ %]
     :where [(slug-and ?e)]}
   db
   rule))
never did CS -pout-#2017-01-0312:52robert-stuttafordneat!#2017-01-0312:52val_waeselynckbut i don't think this one will work out of the box#2017-01-0312:53val_waeselynckbecause of the fact that not-join works only on one data source#2017-01-0312:53pesterhazyI'm in the same boat Robert#2017-01-0312:54val_waeselynckthe following may work#2017-01-0312:55val_waeselyncknot sure about the performance though 🙂#2017-01-0313:00pesterhazyNice!!#2017-01-0313:01val_waeselynckThis is not the first time I see something like this come up - should probably be part of some best practices / tips and tricks section of the Datomic docs#2017-01-0313:06val_waeselynckIf this does not work-out performance-wise, then yeah, probably either generate a query or rule, or call a function in-query which performs the EAVT traversal, or a combination of both 🙂#2017-01-0313:18pesterhazymind if I write a blog post summarising your approach?#2017-01-0313:29val_waeselynck@pesterhazy no problem whatsoever 🙂#2017-01-0315:59uwohas anyone written a library for using pull syntax on local data structures?#2017-01-0319:57stuartsierra@uwo Sort of https://github.com/stuartsierra/mapgraph#2017-01-0319:57uwo@stuartsierra thanks!#2017-01-0320:55weithanks, makes sense. wondering about the justification for not including it.#2017-01-0321:04pesterhazy@wei, the db/id is not really an attribute of the entity#2017-01-0321:05pesterhazyalthough it is shown when you print the entity#2017-01-0321:05pesterhazyI suppose you could (-> entity pr-str read-string)? 🙂#2017-01-0417:07zmaril@uwo datascript can do that#2017-01-0417:09uwo@zmaril thanks. I’ve used datascript before and enjoyed it very much. 
However, I was thinking of working with data structures that are not indexed or stored in a ref.#2017-01-0417:09zmarilahhh okay#2017-01-0417:10uwoI’m also familiar with Specter, but I prefer pull syntax to that#2017-01-0417:10zmarilsame#2017-01-0420:39ghaskinsThis is probably a noob fail, but I am struggling to return all attributes for specific entity for which I know the index value of#2017-01-0420:40marshallby know the index value, you mean you have the entity ID?#2017-01-0420:40ghaskinsno, but :refdata/symbol is indexed#2017-01-0420:40ghaskinsno matter what I try, the pull is not happy with the attribute spec#2017-01-0420:42marshall@ghaskins as indicated here: http://docs.datomic.com/query.html#pull-expressions
you’d want something like:#2017-01-0420:42marshall[:find (pull ?e `[*])
#2017-01-0420:42marshallthen your in clause#2017-01-0420:43ghaskinsah, the backtick might be the thing I was missing#2017-01-0420:43marshallyou actually shouldnt need the backtick#2017-01-0420:43marshallsorry, that was a typo#2017-01-0420:43marshallsee the example here:
[:find (pull ?e [:release/name]) (pull ?a [*])
 :in $ ?artist-name
 :where [?e :release/artists ?a]
        [?a :artist/name ?artist-name]]
#2017-01-0420:44ghaskinshmm, yeah, ive been trying that but it doesnt work#2017-01-0420:44marshallyour example shows your syntax differing#2017-01-0420:44marshallno need for the [(datomic/pull#2017-01-0420:44marshalljust (pull#2017-01-0420:45ghaskinsI am getting this: "java.lang.IllegalArgumentException: Argument [*] in :find is not a variable"#2017-01-0420:45ghaskinsfor#2017-01-0420:45ghaskinsand I just cant spy my error, nor bash it into submission, heh#2017-01-0420:46ghaskinsyeah, the vector notation was an act of desperation#2017-01-0420:46marshalltry this:
(datomic/q '[:find (pull ?e [*])
             :in $ ?symbol
             :where [?e :refdata/symbol ?symbol]]
           db
           symbol)
#2017-01-0420:46ghaskinsin this case, the cardinality is 1:1 with the symbol but I was just trying that#2017-01-0420:46ghaskinsoh, i see#2017-01-0420:46ghaskinsno ns/#2017-01-0420:47ghaskinsyes!#2017-01-0420:47ghaskinsthank you @marshall#2017-01-0420:47marshallnp
👍#2017-01-0420:47ghaskinsthat was the opposite of obvious 😉#2017-01-0420:47ghaskinsthank you very much#2017-01-0420:50ghaskinsout of curiosity, is the (pull) form some kind of special form in this context, or is it merely that the datomic/ ns was not valid in the transactor context and datomic is implicitly :refer :all ?#2017-01-0420:51ghaskinse.g. can I run arbitrary clj in the :from as long as its ns-resolved properly?#2017-01-0420:51ghaskinsor is the language constrained to specific keywords#2017-01-0420:52ghaskinsi suppose I can experiment to find out#2017-01-0420:53ghaskinsnope...i tried (datomic.api/pull) form and it wont take it#2017-01-0420:53ghaskinsit seems (pull) is special case#2017-01-0421:32tjtoltonso, trying to import all my company's data into datomic to get an idea of how many datoms it will look like, and so that I can demo using datomic.
Is there a "day of datomic" section about importing data?#2017-01-0421:36marshall@ghaskins Sorry - I got pulled away. No pun intended.
Correct, you can’t run arbitrary code in the find specification. You can see the allowed forms in the grammar here: http://docs.datomic.com/query.html#grammar#2017-01-0421:38marshallYou can define a custom aggregate function, but it will require some specific parameters#2017-01-0421:38marshallhttp://docs.datomic.com/query.html#custom-aggregates#2017-01-0421:41jaret@tjtolton https://stackoverflow.com/questions/27778339/migrate-to-datomic-from-postgres specifically check out the video Ben links in edit to his answer. We don't have a day of datomic like tutorial, but hopefully that answer and the video will help.#2017-01-0421:41tjtoltonawesome! Thanks!#2017-01-0500:27eoliphantI have a noob question lol. I’m working with the ‘dev’ mode database. I’ve got a REPL running and the console, it seems like when i transact some stuff in, the console doesn’t ‘see’ it unless i refresh the page entirely. I’ve also noticed that my repl session doesn’t seem to always have everything I’ve transacted in as well unless, I again, create a new connection, etc. I’m using the peer api in my repl session#2017-01-0500:42eoliphantok duh, nvm 'A database value is immutable and calls to query, pull, or entity on the same database value will return consistent results.’ some of this takes some getting used to#2017-01-0500:43eoliphantbut it still seems a little awkward from the perspective of the console. 
The “reload” button doesn’t seem to help#2017-01-0508:04robert-stuttafordfurther to your #clojure discussion, @val_waeselynck and @pesterhazy, you can do fun stuff like this with entities https://gist.github.com/robert-stuttaford/50acaa23986a52281f15982baa4922c9#2017-01-0508:04robert-stuttafordi18n wrapper#2017-01-0508:07val_waeselynckfunny indeed!#2017-01-0508:08pesterhazyCool#2017-01-0508:09val_waeselynckI'm thinking too of writing my own version of entities to add more features#2017-01-0508:09pesterhazyYou came up with the same pattern to store translations as we did#2017-01-0508:10robert-stuttafordwhat features, @val_waeselynck ?#2017-01-0508:10pesterhazyWhat kind of features could you add?#2017-01-0508:10val_waeselynckderived data essentially#2017-01-0508:10robert-stuttafordcan you share what a sample invocation would look like?#2017-01-0508:12robert-stuttafordyou’d hide computation behind key lookups?#2017-01-0508:12robert-stuttafordlike accessors in OO#2017-01-0508:13val_waeselynck(select-keys my-entity [:person/children :person.derived/n_children]) => {:person/children [{:db/id ...} {:db/id ...}] :person/n_children 2}#2017-01-0508:13val_waeselynckyeah exactly#2017-01-0508:13val_waeselynckI'm still a bit worried about this aspect to be honest 🙂#2017-01-0508:13robert-stuttafordworth mucking around with, i reckon 🙂#2017-01-0508:13val_waeselynckI may instead choose to make this kind of call explicit#2017-01-0508:13robert-stuttafordmain thing to worry about would be caching behaviour#2017-01-0508:14val_waeselynckbut still work on an entity if you know what I mean#2017-01-0508:14pesterhazyOr you could use supdate to do this :)#2017-01-0508:15val_waeselyncksupdate won't work on entities out of the box, you need to support assoc 🙂#2017-01-0508:15robert-stuttafordyeah, supdate + pull is a better fit#2017-01-0508:15pesterhazyHmm but select-keys does#2017-01-0508:17val_waeselynckactually, I think I'll make my own version of pull instead of making my own version of 
entities#2017-01-0508:17val_waeselynckthanks guys for helping me brainstorm this 🙂#2017-01-0508:18pesterhazyAlways happy to rubber-duck#2017-01-0508:19val_waeselyncknever heard that expression, just looked it up, funny ^^#2017-01-0508:22val_waeselynckalthough I do consider you guys a more educated audience than a rubber duck#2017-01-0509:18robert-stuttaford:duck:#2017-01-0517:39cap10morganAnyone else gotten the "Could not read transactor location from storage" error from a peer pointed at a dev transactor when the peer tries to connect very soon after the dev transactor was started (i.e. in a script or docker-compose stack)?#2017-01-0518:05jaretIs the peer eventually able to connect?#2017-01-0518:06jaretWhen transactors launch, they write their location to storage. They do this also on every heartbeat message. If you see this when the peer tries to connect, its likely because no transactor location has been written.#2017-01-0518:06jaretOr that the peer itself is unable read from storage.#2017-01-0518:07jaretSo its possible if you are getting this initially that the transactor has yet to write the location to storage and if your subsequent peers (post initial startup or heartbeat) are able to connect then its just a matter of a peer trying to connect too soon.#2017-01-0518:13jaret@cap10morgan ^#2017-01-0518:13cap10morgan@jaret yes peers that wait longer can connect#2017-01-0518:14cap10morganjust trying to figure out a reliable way to wait until datomic is truly ready for connections#2017-01-0518:14jaretthe metric for HeartBeat is written to the logs. Not sure if you can wait until you see that metric#2017-01-0518:15cap10morgannot easily in a docker-compose stack, I don't think#2017-01-0518:15jaretAnd the interval between heartbeats is defined in your transactor properties file#2017-01-0518:15jaretusually its 5 seconds by default#2017-01-0519:53eoliphanthi, i have yet another datomic graph question. 
I’ve modeled some attributes for a simple hierarchy say :ent/name, and :ent/parent as a ref. I’m trying to figure out say assuming a few levels of nesting how to, after matching say [? :ent/name “Foo"], how to return Foo and its parents to any level#2017-01-0520:02marshallYou could use a recursive pull pattern#2017-01-0520:02marshallhttp://docs.datomic.com/pull.html#recursive-specifications#2017-01-0520:03marshall@eoliphant ^#2017-01-0520:04eoliphantsweet, totally missed that thank you will give it a whirl#2017-01-0520:05marshallsure#2017-01-0520:41domkmAre there any best practices when you need to alter an attribute to become fulltext searchable?#2017-01-0520:41domkmRelated, any chance Datomic will support this in the near future?#2017-01-0521:01jaretThe best approach is to rename your old attribute something like attribute-old and then re-create the attribute with db/fulltext. You will then want to pour in or import the values from the old attribute as necessary. You cannot alter :db/valueType or :db/fulltext.#2017-01-0522:30eoliphantHi, question. can a collection binding be matched against a cardinality/many attribute? I have a many attribute, that I need to check against a list of inputs. 
It’s not working as I expected so just wondering if it’s possible at all#2017-01-0522:34potetm@eoliphant So you're wanting to ensure set equality between input and db values?#2017-01-0522:34eoliphantactually i guess i’m actually looking for more of an intersection#2017-01-0522:35eoliphantif the db has [1,2,3] and my collection binding has [3,4,5] then it should ‘pass'#2017-01-0522:35potetmAFAIK there's that's not syntax for that, but you could do it w/ predicates, or you could just query the values out and do set/intersection#2017-01-0522:35potetmAh, just a normal query should work.#2017-01-0522:36eoliphantso say [?e :many/attribute [id1,id2,id3]]#2017-01-0522:36potetmcollection binding on the inputs, then just [?e :my/attr ?v] will match any entity that has any of those values in the collection. I believe.#2017-01-0522:37eoliphantok will play around with it some more#2017-01-0522:37eoliphantbecause it seemed to be working#2017-01-0522:37potetmhttp://docs.datomic.com/query.html#sec-5-7-2#2017-01-0522:38potetmJust like that ^#2017-01-0522:38eoliphantbut then i added a negative case, passed in some values that shouldn’t have matched and the query still returned results#2017-01-0522:38eoliphantyeah that’s what I’m working form#2017-01-0522:38eoliphantfrom#2017-01-0522:38potetmHmm... gimme one sec to confirm what I thinking#2017-01-0522:39eoliphantbut that example is against what looks like a cardinality/one attr#2017-01-0522:49potetmSo something like this is what I was thinking @eoliphant:
(d/q '[:find ?n
       :in $ [?alb-name ...]
       :where
       [?alb :album/name ?alb-name]
       [?a :artist/albums ?alb]
       [?a :artist/name ?n]]
     (d/db (d/connect uri))
     ["Stadtaffe"
      "The Clown"])
=> #{["Peter Fox"] ["Charles Mingus"]}
(d/q '[:find ?n
       :in $ [?alb-name ...]
       :where
       [?alb :album/name ?alb-name]
       [?a :artist/albums ?alb]
       [?a :artist/name ?n]]
     (d/db (d/connect uri))
     ["FooBar"])
=> #{}
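A collection binding like `[?alb-name ...]` behaves as if the query ran once per input element with the results unioned — an OR over the inputs. That behavior can be emulated on plain data (the ids and names here are illustrative, taken from the example above):

```clojure
;; Albums as a plain map: entity id -> album name
(def albums {17592186045420 "Stadtaffe"
             17592186045422 "The Clown"})

;; Emulate binding ?alb-name to each input in turn, unioning the matches
(defn ids-for-names [names]
  (set (for [[id n] albums
             :when (contains? (set names) n)]
         id)))

(ids-for-names ["Stadtaffe" "The Clown"])
;; => #{17592186045420 17592186045422}
(ids-for-names ["FooBar"])
;; => #{}
```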
#2017-01-0522:54eoliphantsure and I ahve that working with cardinality/one attrs .#2017-01-0522:55eoliphantbut i think the possible complication is with /many. For instance, if you were binding to :artist/albums#2017-01-0522:55potetmAh, wait, I messed it up. Yeah one more sec.#2017-01-0522:57potetm(d/q '[:find ?n
       :in $ [?alb ...]
       :where
       [?a :artist/albums ?alb]
       [?a :artist/name ?n]]
     (d/db (d/connect uri))
     [17592186045420
      17592186045422])
=> #{["Peter Fox"] ["Charles Mingus"]}
(d/q '[:find ?n
       :in $ [?alb ...]
       :where
       [?a :artist/albums ?alb]
       [?a :artist/name ?n]]
     (d/db (d/connect uri))
     [17592186045420])
=> #{["Peter Fox"]}
#2017-01-0522:58eoliphanthmm ok so it works#2017-01-0522:58eoliphantthat means i’m hosing something on my side lol#2017-01-0523:00eoliphantok cool thanks a mil#2017-01-0523:00potetmFrom what I saw you were trying to collection-bind inside the where clauses, which isn't supported#2017-01-0523:00potetmBut you don't need it. You basically get ORing of inputs on individual datoms.#2017-01-0523:00potetmNo prob!#2017-01-0523:01eoliphantright that’s what i understood from the docs. that it’s effectively an ‘or'#2017-01-0523:01eoliphantbut like i said, I’ve obviously manage to jack something up#2017-01-0523:01eoliphantlol thanks again#2017-01-0523:04eoliphantah crap lol, i figured it out, ugh. I did [?e …] on the find not the :in. I’m relatively new to this, so it all runs together sometimes#2017-01-0523:04potetmlol#2017-01-0523:04potetmhappens 🙂#2017-01-0523:49eoliphant`
(d/q '[:find (pull ?e [*])
       :in $ [$scopes ...]
       :where
       [?e :assignment/subject [:subject/identifier "Andre Awarder"]]
       [?e :assignment/roles ?roles]
       [?e :assignment/contexts ?scopes]
       [?roles :role/privileges ?privs]
       [?privs :privilege/action :action/View]
       [?privs :privilege/object :object/Award]]
     db [17592186045446])
=>
[[{:db/id 17592186075478,
   :assignment/subject {:db/id 17592186075476},
   :assignment/contexts [{:db/id 17592186045444}],
   :assignment/roles [{:db/id 17592186075470}],
   :assignment/allow [true]}]]
`#2017-01-0523:50eoliphantUgh it’s still not working. so as you can see, I’m passing in ..5446 to match against :assignment/contexts, the only value in there at the moment is …5444 but this still ‘passes'#2017-01-0523:53marshallYour in clause names the 2nd input variable $scopes, but you use it is ?scopes#2017-01-0523:54marshallAs*#2017-01-0523:54eoliphanttypo.. ugh.. lol.. thanks#2017-01-0523:54eoliphantand of course it works now#2017-01-0523:54marshall:+1: #2017-01-0523:54eoliphant20 yrs of sql… 2 weeks of datalog lol#2017-01-0523:55potetm🙂 We call @marshall The Fixer.#2017-01-0523:55marshallRoflmao#2017-01-0523:55eoliphantlol#2017-01-0523:55marshallI'm totally taking a screenshot#2017-01-0601:18eoliphanthey guys quick question. I’ve got some datomic ‘enums’ i’ve created. do you know of any way to say list all of them with a certain prefix. say I’ve db/ident’ed :action/View, :action/Edit, etc and I want to get back a list of everything “:action"#2017-01-0601:23potetm@eoliphant Not built-in AFAIK. Could probably use (comp #{"action"} namespace) as a predicate in a query or just a plain query+`filter`.#2017-01-0601:26potetm(d/q '[:find [?i ...]
:in $
:where
[?a :db/ident ?i]
[((comp #{"artist"} namespace) ?i)]]
(d/db (d/connect uri)))
=> [:artist/albums :artist/name]
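To make the namespace trick above concrete outside of a query, here is a small pure-Clojure sketch of the "plain query + `filter`" route potetm mentions; the ident vector is hypothetical sample data standing in for a query result:

```clojure
;; Post-filter idents by keyword namespace, outside the query.
;; The ident vector below is hypothetical sample data.
(defn idents-in-ns
  "Keep only the idents whose keyword namespace equals ns-name."
  [ns-name idents]
  (filterv #(= ns-name (namespace %)) idents))

(idents-in-ns "action" [:action/View :action/Edit :object/Award])
;; => [:action/View :action/Edit]
```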
#2017-01-0601:26potetmI'm a sucker for toy problems.#2017-01-0601:34eoliphantlol i’ll bear that in mind thanks#2017-01-0602:57eoliphantif any of you guys are still around @potetm , now running into a problem, i was using the collection binding to get going, but i need to pass in other variables, so I’ve switched over to a relation binding, but i’m getting a weird error.
(d/q '[:find ?e
:in $ [[?subject ?scopes ]]
:where
[?s :subject/identifier ?subject]
[?e :assignment/subject ?s]
[?e :assignment/contexts ?scopes]
[?e :assignment/roles ?roles]
[?roles :role/privileges ?privs]
[?privs :privilege/action :action/View]
[?privs :privilege/object :object/Award]
] db [["Andre Awarder"] scopeids] )
IndexOutOfBoundsException clojure.lang.PersistentVector.arrayFor (PersistentVector.java:158)
I’ve got the 2 vars in the in clause and a list of 2 lists as the variable. But no idea why it’s complaining here#2017-01-0603:02marshallTry :in $ ?subject [?scopes ...] #2017-01-0603:03marshallAnd pass db "Andre Awarder" scopeids as args#2017-01-0603:04marshallThe form you have specified there indicates that the input should be a list of 2-tuples#2017-01-0603:06eoliphantah so you can mix and match bindings? I was wondering about that#2017-01-0603:07eoliphantThe Fixer strikes again.. Thanks!#2017-01-0609:00tengIs there a function, like temp-id?, that checks if an id is a temp-id or not?#2017-01-0609:11tengI did this:
(-> id last val neg?)#2017-01-0610:14jonpitheranyone used Packer to build DT AMIs?#2017-01-0610:24karol.adamiectransactor or peers? have not done any, by the way 🙂, but could be interested...#2017-01-0610:27karol.adamiecspecifically for peers. i wrap them in docker atm, but sliding level down to packer and using systemd for lifecycle could be beneficial at some point 🤔#2017-01-0610:33jonpitherBuilding a packer setup for the transactor... blog on its way#2017-01-0610:33jonpitherthen you can fine tune things like SSH access, log handling etc#2017-01-0610:33karol.adamiec@jonpither fantastic. cant help but will read blog for sure 😄#2017-01-0610:34jonpitherit'll be short and sweet 🙂#2017-01-0610:34karol.adamiecyeah, setting up a transactor is a bit kludgy atm. Even with terraform 😕#2017-01-0610:48jonpitherI think at least the packer bit can remove the fetching and installing of Datomic, if not the running with precise args (that last bit you'd still want Terraform)#2017-01-0610:48jonpitherAlso with Packer you'd then have complete control of your AMI, so can choose to open up SSH etc#2017-01-0610:49jonpither(you get complete control with Terraform also, but ideally the AMI should prebake as much as is sensible)#2017-01-0610:50robert-stuttafordwe’re using the stock AMI with TF#2017-01-0610:51robert-stuttafordnow that we’re no longer injecting the Datadog agent into the instance, it’s working great. for some reason it interfered with self-termination#2017-01-0613:48souenzzoHello, I'm having problems with some characters in the fulltext
(d/q '[:find ?e ?name
:in $ ?search
:where [?e :user/name]
[(fulltext $ :user/name ?search) [[?e ?name]]]]
db search)
When the search contains a !(and some other chars, in some positions), an exception occurs.#2017-01-0613:54souenzzoIs this foreseen (I did not find anything in the docs)? Is there a blacklist of characters?#2017-01-0613:57magnarsWhen looking at :tx-data coming off the txReportQueue, it seems that :db/txInstant is always the first datom in the list. This makes it easy to find it. Is this part of the API proper, or should I not rely on it being like this?#2017-01-0614:02magnarsRight now I'm doing this:
(some (fn [[e a v]]
(when (= :db/txInstant (:db/ident (d/entity db a))) v))
tx-data)
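As an aside on the :tx-data question above, here is a sketch that avoids relying on datom order entirely: the :db/txInstant datom is asserted on the transaction entity itself, so its :e equals its :tx slot. Plain maps with made-up ids stand in for real datom records; if a transaction carries extra reified metadata, combine this with the attribute check in the snippet above.

```clojure
;; Order-independent lookup of the transaction datom: :db/txInstant is
;; asserted on the transaction entity, so its :e equals its :tx.
;; Plain maps with made-up ids stand in for real datom records.
(defn tx-entity-datom
  "Find the datom asserted on the transaction entity itself."
  [tx-data]
  (first (filter #(= (:e %) (:tx %)) tx-data)))

(def sample-tx-data
  [{:e 13194139534312 :a 50 :v #inst "2017-01-06" :tx 13194139534312}
   {:e 17592186045418 :a 63 :v "magnars" :tx 13194139534312}])

(:v (tx-entity-datom sample-tx-data)) ; the transaction's :db/txInstant value
```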
#2017-01-0614:22marshall@magnars I don’t believe the API makes any guarantees about order#2017-01-0614:22marshallI’d be wary of relying on it#2017-01-0614:22magnarsThanks, @marshall - I shall keep the existing code then.#2017-01-0710:08ezmillerhyper basic question: where is the transactor properties file where I should paste my license key? this isn’t really explained in the getting started docs that I can see…#2017-01-0710:32pesterhazy@ezmiller http://docs.datomic.com/storage.html#install-license#2017-01-0710:32pesterhazyah you mean where the file is?#2017-01-0710:33pesterhazy> Once your Storage Service is configured, you can start the Transactor locally by running bin/transactor. It takes a properties file as input.#2017-01-0710:41ezmillerHuh.#2017-01-0710:43ezmillerActually, I think that that’s not what I need. I’m trying to go through the getting started docs. They say to start a db like so:
bin/run -m datomic.peer-server -p 8998 -a myaccesskey,mysecret -d firstdb,datomic:#2017-01-0710:44ezmillerI’m guessing myaccesskey and mysecret need to be provided in this command. But not sure where to get those.#2017-01-0710:48ezmillerhttp://docs.datomic.com/first-db.html#2017-01-0710:52pesterhazyA Peer Server is a new concept, I haven't worked with those yet#2017-01-0710:53ezmillerAhhh…#2017-01-0710:54ezmillerThinking of following along with this as a start instead: https://www.youtube.com/watch?v=ao7xEwCjrWQ#2017-01-0710:54ezmillerSeems to install datomic in the context of a Leiningen app...#2017-01-0711:24ezmillerI think that vid is too old. Getting all sorts of weird errors.#2017-01-0711:25ezmillerThe [docs on the peer-server](http://docs.datomic.com/peer-server.html) say this with regard to the access key/secret:
> The access-key and secret are opaque strings. Clients must provide a matching access-key and secret to connect.#2017-01-0711:31ezmillerSo, I guess maybe in the command above the myaccesskey is actually just meant to be the access key, and the same for the secret.#2017-01-0723:01eoliphanthi guys, is there something like an ‘explain plan’ for datomic? I have a query that’s not doing what I told it to do lol, but it ‘appears’ to be correct. Just wondering if there’s a way to dump/log what it’s doing in terms of evaluation#2017-01-0801:08eoliphantanyone running datomic on AWS? I’m trying to set it up, but the docs, and the video they reference (which is pretty old) don’t seem to line up. The video has you run an ‘ensure-transactor’ subcommand which isn’t enumerated in the text; just wondering what’s actually authoritative#2017-01-0801:12eoliphantnvm, RTFM lol
i suspect it was due to a security privilege (although i gave it full access to s3:* or w/e the security format is), if i try again i'm just going to give the user full access to everything (scary).#2017-01-0818:07notanoni really wish there was a DBaaS option available on a cloud#2017-01-0818:08notanonseems like a good small business idea 🙂#2017-01-0819:19kvltHey y’all. I have written a simple datomic library to make your lives easier. If someone could give me feedback regarding anything, I’d really appreciate it: https://github.com/petergarbers/molecule#2017-01-0819:22kvltIt’s a simple datomic wrapper that removes a lot of the cruft around writing queries. I’ve found it to be extremely useful#2017-01-0819:33eoliphantyeah I had the starting - running - stopping - terminated problem as well#2017-01-0820:13pesterhazy@petr, your first example looks like you're re-inventing lookup refs#2017-01-0820:15kvlt@pesterhazy Hmm interesting. That’s not what I was trying to show. That could be any value. I just chose to use a ref there. Do you think a different example would be better?#2017-01-0820:16pesterhazynot sure#2017-01-0820:17pesterhazythe other thing is that hiding the datomic connection (as an implicit argument) is frowned upon by some (unless you only intend the library as a convenience for the repl)#2017-01-0820:17eoliphant@chris_johnson i’d not configured the bucket name, in the transactor config, so I redid everything and recreated the stack. So the bucket has been created, but whatever’s failing is doing so before it manages to dump any logs#2017-01-0820:19kvlt@pesterhazy I’m assuming you’re referring to the database value? It’s actually an optional argument. I should add that to the documentation. 
In my usage of datomic I have usually wanted the latest value of the database#2017-01-0820:31pesterhazystill it's better to explicitly call d/db once (at the beginning of a request handler, say) and then use the same value for the same piece of work#2017-01-0820:31pesterhazythat way you get a consistent "view of the world"#2017-01-0820:34kvlt@pesterhazy I agree with that. A lot of the time I have found would be a rest endpoint where I’m retrieving a value and just returning that object in JSON format. Using this library there is nothing stopping you from explicitly calling d/db at the beginning of the request and passing it as such (m/entities <db from wherever> {:something/else “moo”})#2017-01-0820:35kvltThanks for that. I’ll add it to the documentation#2017-01-0820:35pesterhazythe point is, that should be the default IMO, unless you're exploring from the repl#2017-01-0820:36kvltDo you mean in the documentation?#2017-01-0820:36kvltBecause there is nothing stopping anybody from doing that as their way of doing things in their own app#2017-01-0820:37kvltYou have the option to do either#2017-01-0821:42notanon@eoliphant thats what happened to me. and why i gave up. p.s. the docs say that if you leave the bucket name empty it will create one for you (it never did for me, i then tried manually and it still didnt log)#2017-01-0821:44eoliphantyeah this is annoying lol, it’s probably something simple, was trying to avoid having to stand one up by hand#2017-01-0821:44notanonhonestly think that would be easier#2017-01-0821:44notanoni dont know anything about aws or cloudformation#2017-01-0821:45notanonbut it's not hard to start an empty ec2 instance and just ssh into the box and configure it all myself#2017-01-0821:45eoliphantyeah, gonna give it a whirl, we actually do a lot of aws stuff. CF is great when everything works lol. 
We actually use another tool called Terraform as well for setting stuff up on AWS.#2017-01-0821:45eoliphantyeah I think that’s what I’m going to do#2017-01-0821:45eoliphantor just roll my own AWS image#2017-01-0821:46eoliphanti left a note in the google group, but will just give it a manual whirl in the meantime#2017-01-0821:46notanonbest of luck#2017-01-0821:46eoliphantthanks!#2017-01-0902:36chris_johnson@eoliphant @notanon can you share what Datomic version you’re using? I’d be happy to plug that in to my known-good CF template and spin one up for five minutes to see if it works on my machine#2017-01-0902:36notanoni used the latest version as of a couple days ago#2017-01-0902:37notanonand followed the instructions here: http://docs.datomic.com/aws.html#2017-01-0902:38notanon[com.datomic/datomic-pro "0.9.5544"]#2017-01-0904:48tcarlsAre the acceptable values for :tx-data different between client and peer APIs? I get "unable to resolve entity" errors trying to do a schema creation via the client API, whereas the peer API works fine, and I'm trying to determine whether it's my code at fault.#2017-01-0910:45val_waeselynck@petr Thanks for sharing this with us. Here is my 2c on molecule:
1. I would not use the word 'simplified' for describing the value proposition of molecule, as the word 'simple' has a very precise connotation in the Clojure/Datomic community. I would rather say convenient or practical or ergonomic (which is valuable too!)
2. I also disapprove of the (m/init) and (m/db) api. This introduces lifecycle and global state with little practical value in return. IMO, you should leave this kind of program organization up to the client.#2017-01-0912:45marshall@tcarls can you share what you're trying to transact using the client ?#2017-01-0912:47marshall@eoliphant did you follow the instructions for setting up ddb storage first (ensure-transactor etc?)#2017-01-0912:48marshall@eoliphant if so, you can try to start a local transactor first to make sure all your settings are correct then start the CF transactor#2017-01-0912:50marshall@ezmiller you're correct, the secret and access key are to be provided by you. They're used to secure the peer server client interaction#2017-01-0912:52marshallIn a "real" system you'd probably want to use Uuids or something like that. For the tutorial you can just use the strings "myaccesskey" and "mysecret" as provided in the example#2017-01-0912:55marshallFor the getting started tutorial, you just use the mem db, so no need to start a transactor (and no need for license key in properties file). When you want to have a persistent db then you'll need to run a transactor (with key in properties file)#2017-01-0912:56marshallIf you're using clients and peer server a persistent system would be the transactor, the peer server, and your client#2017-01-0913:15robert-stuttafordhappy to be running on our newly paid-for license renewal; now with unlimited peers 🙂#2017-01-0914:14eoliphanthi @chris_johnson i was using 0.9.5530 didn’t realize there was a new release, will try 0.9.5544 as well in the meantime#2017-01-0914:44kvlt@val_waeselynck I understand your concern. It is possible to use without using either the init or db functions. I have just found them to be convenient for me. If you liked you could simply pass your own database value to the functions and they would work without the atom being assigned a value. 
Would this address your concern or am I missing something?#2017-01-0914:50val_waeselynckI agree that I could simply not use init and db, i.e. their mere presence in the API would not deter me from using the lib. I'm more worried about other libraries that would potentially build upon or take inspiration from molecule on that point. This is why, if I were you, I would not consider providing these functions until users start begging for them 🙂.#2017-01-0914:55pesterhazyI use something similar https://gist.github.com/pesterhazy/5b428176f84b06d1d68bdc97ea7823f8#2017-01-0914:55pesterhazySo I can easily type (rdq '[:find ...]) from the REPL etc#2017-01-0914:56pesterhazyrdoit! is particularly useful 🙂#2017-01-0914:57kvlt@val_waeselynck Ok. What is missing, in your eyes aside from the connection atom?#2017-01-0914:57kvltIt seems everyone has variations of what I have written but nobody is sharing them. It’s silly#2017-01-0914:58val_waeselynckAs for me, I don't use many variations 🙂 quite happy with the Datalog and Entity APIs!#2017-01-0914:58val_waeselynckOne thing I would love is to have an Entity-like navigational API for Datalog rules. Sadly the current performance of the Datalog engine makes this a bit prohibitive#2017-01-0915:02val_waeselynck@petr this pattern-matching API is quite reminiscent of MongoDB to me. Maybe you'll want to extend it with more stuff inspired by MongoDB, such as range predicates etc.#2017-01-0915:21kvlt@val_waeselynck Thank you for the feedback#2017-01-0915:23kvltI am probably not going to extend it that much. I like the datalog and entity apis. 
I just found that they seemed to have a lot of setup involved even for simple queries, which is why this library was born.#2017-01-0915:23val_waeselynck@petr I agree there's a pain point here, and molecule makes a valuable contribution to easing it!#2017-01-0915:24val_waeselynckI would recommend focusing on that and dropping the lifecycle stuff, which can be easily handled by the consumers, but that's just my opinion of course 🙂#2017-01-0916:59eoliphanthas anyone had issues with data loss with the dev transactor, it looks like it uses hsql under the covers. I had to hard reboot my system and i’m missing a bunch of datoms#2017-01-0917:00marshall@eoliphant Are they datoms that were durably committed prior to your reboot?#2017-01-0917:00marshalli.e. successfully transacted#2017-01-0917:01eoliphantyes.. AFAIK. I’d been working with it for probably a week or so#2017-01-0917:01eoliphanthad been running queries against them etc etc#2017-01-0917:02eoliphantand just to be sure, a ‘good’ (d/transact ..) means they are committed or am i missing something#2017-01-0917:02marshallThat’s fairly surprising, although the dev protocol is not intended for production usage, so no guarantees.
Your database is still available and has other info in it but some things are missing?#2017-01-0917:02marshallyes, a successful transaction#2017-01-0917:02eoliphantyes, as far as I can tell, it looks ‘old'#2017-01-0917:02eoliphantlike things that I’ve transacted in over the past few days#2017-01-0917:02eoliphantare missing#2017-01-0917:07eoliphantso do people like you typically use the dev transactor for local development? Just wondering if I should point it to a local postgres or something in case this is going to be a problem#2017-01-0917:10marshallYou’re using dev or mem?#2017-01-0917:12marshallat any rate, yes I use dev quite a bit - it’s surprising that your DB is still consistent; Have you started a transactor from a different absolute path with a properties file that might be referring to a different local disk location for the H2 storage?#2017-01-0917:13marshallalso, you’re not using an as-of or anything like that?#2017-01-0917:14marshallfinally, you can always run backups (http://docs.datomic.com/backup.html) from your dev database#2017-01-0917:18eoliphantdefinitely dev#2017-01-0917:19eoliphantyeah no as-ofs or anything like that it’s been pretty vanilla stuff. 
The only ‘weird’ thing i can think of is that i just upgraded to the latest version#2017-01-0917:19eoliphanti shut down the transactor and copied the data directory over#2017-01-0917:19eoliphantas well as my config files#2017-01-0917:19marshallinteresting#2017-01-0917:19eoliphanti noticed the issue after starting up on 5544#2017-01-0917:20eoliphantonce i saw it#2017-01-0917:20eoliphanti shut it down, then restarted on 5530#2017-01-0917:20marshallif you run the older version again can you still ‘see’ the other data?#2017-01-0917:20eoliphantbut saw the same issue#2017-01-0917:20eoliphantno#2017-01-0917:20eoliphantso it would appear that the problem was present prior to copying it over#2017-01-0917:20marshallah#2017-01-0917:20marshalli suspect i know what happened#2017-01-0917:21marshalldid you use implicit db.install/attribute ?#2017-01-0917:21eoliphantyeah i’m a new user lol, so have been using the shorthand syntax for my attribute defs#2017-01-0917:22marshallyep, that’s it;
Bugfix in 5544 :
* indexing jobs correctly handle the implicit use of
:db.install/attribute and :db.alter/attribute.#2017-01-0917:22marshallthere was a bug that could cause implicit use of those to be ignored by a subsequent indexing job#2017-01-0917:22marshallit was fixed in 5544#2017-01-0917:22eoliphantbut would that cause the loss of ‘old’ datoms#2017-01-0917:22marshalltry re-asserting your schema now that you have the newest version#2017-01-0917:22eoliphantalso like i was saying in 5530#2017-01-0917:22eoliphantwith the separate data dir#2017-01-0917:22eoliphantthey’re still missing#2017-01-0917:23eoliphantbut i’ll give that a try#2017-01-0917:23marshallyeah, it only appears after an indexing job#2017-01-0917:23eoliphantah#2017-01-0917:23eoliphanti see#2017-01-0917:23marshallso i suspect you eventually got enough data in there to cause indexing#2017-01-0917:23eoliphantso the restart#2017-01-0917:23eoliphantmight have forced this#2017-01-0917:23marshallor you requested an indexing job explicitly#2017-01-0917:23eoliphantand the associated loss#2017-01-0917:23marshallyeah, it’s not actually lost#2017-01-0917:23marshallyou just can't see it#2017-01-0917:23eoliphantyeah didn’t do an explicit one#2017-01-0917:23eoliphantright the ‘apparent’ loss#2017-01-0917:24marshalltry re-asserting your schema, it should all be fine then and the issue has been fixed#2017-01-0917:24eoliphantok will try that#2017-01-0917:29eoliphantcrap that didn’t work 😞 ‘re-transacted’ all my schema defs#2017-01-0917:29eoliphantlooks the same#2017-01-0917:29marshallyou may have to re-assert schema with the explicit :db.install/_attribute#2017-01-0917:29marshallone second#2017-01-0917:30eoliphantand just as an FYI#2017-01-0917:30marshalltry adding :db.install/_attribute :db.part/db to your transaction data map for your schema elements#2017-01-0917:30eoliphantI have some entities of the form :scope/type, :scope/name#2017-01-0917:30eoliphantwell I see some of them, but not all of them#2017-01-0917:30eoliphantjust the ‘older’ ones#2017-01-0917:31eoliphantok will try that 
as well#2017-01-0917:31marshalland just to be sure, you can call request-index http://docs.datomic.com/clojure/#datomic.api/request-index#2017-01-0917:32marshall(i.e. (d/request-index conn) (d/sync-index conn (d/basis-t (d/db conn)))#2017-01-0917:32marshallto make sure you’ve indexed up to “now"#2017-01-0917:33eoliphantok doing that too#2017-01-0919:25eoliphanthey @marshall tried all that stuff, still no joy. I ended up just transacting all the stuff back in, it was mainly setup data. But will keep an eye on things to see if it happens again#2017-01-0920:04marshall@eoliphant OK. I’m still a bit surprised, but glad it wasn’t anything critical#2017-01-0920:26dominicmWhat's the best way to filter for matching entities from the d/datom function? Are there any performance gotchas to consider?#2017-01-0921:00eoliphantbeyond matching on the index and its components?#2017-01-0921:02chris_johnson@eoliphant I just now spend a few minutes following the docs to stand up a fresh 0.9.5544 CloudFormation stack and the EC2 instance converged and ran happily - no cycles of endless termination and recreation#2017-01-0921:03chris_johnsonI don’t know how that information is supposed to help, since it doesn’t provide a smoking gun for your setup, but I said I would do and I did 🙂#2017-01-0921:03eoliphantok, i’ll try it ‘fresh’. 
walk through the process again#2017-01-0921:03eoliphantlol thanks good to know it’s at least supposed to work#2017-01-0921:03eoliphantquestion are you doing the manual or automatic setup?#2017-01-0921:05chris_johnsonautomatic - bin/datomic ensure-transactor aws/ddb-sample.properties aws/ddb-sample.properties.ensured, likewise with cf-sample.properties, roll those two files into bin/datomic create-cf-template and finally bin/datomic create-cf-stack us-east-1 MyExampleTransactor aws/cf-template.json#2017-01-0921:05chris_johnsonthe only thing I did “by hand” was paste in my license key#2017-01-0921:06chris_johnsonI didn’t even change any dynamo settings, so my table was named my-system-name hehe#2017-01-0921:27SoV4Hey everyone, I just wanted to say thanks for working on amazing stuff. I made a Datomic-based project over a year ago and after I re-added my credentials to my ~/.lein/credentials.clj everything works just as smoothly. So, kudos! Not often you can revisit software a year later on a different machine and have it just work.#2017-01-0921:56dominicm@eoliphant yeah, was wondering if there might be some tricks to ensuring that the large sequence you get doesn't kill the jvm#2017-01-0921:56dominicmI'll probably just look into core.reduce, transducers, and maybe some other stuff too.#2017-01-0922:00dominicmI've actually found a blog post with a eerily similar name#2017-01-0923:04eoliphantok yeah @chris_johnson I ‘think’ i pretty much did that. I did make a few more changes the s3 bucket name, etc. 
Will try your minimalist approach 😉 If it blows up again, i’m wondering if there’s perhaps some subtle issue with roles/policies or something#2017-01-0923:07eoliphantWell, I’m a newbie @dominicm lol, but I got the impression that ‘datoms’ was more for rare, low level type stuff, it looks like the client api version does paging or something though would that help?#2017-01-0923:11marshallYou should definitely look into transducers + clients#2017-01-0923:12marshallBe aware, however that the peer server still needs to be able to handle the final query result#2017-01-0923:14marshall@dominicm is there a particular reason you want to use datoms instead of query?#2017-01-1001:24eoliphantanyone here done much with CQRS/ES in datomic? I’m still getting my head around many of datomic’s ‘weird’ but good features (like I’ve finally gotten out of subconsciously ‘reducing server round trips’ lol) I’ve played around with using it to store projections, but the ‘time sense’ makes me think it might make sense to store the events themselves in datomic. I’ve seen a couple bits around the net, Bobby Calderwood’s talk about datomic, kafka, etc, then there’s that Yuppiechef deal (it’s a little out of date but seems to have some cool idea) that uses datomic, kafka and onyx. Just wondering about any ‘in the trenches’ experience folks might have had#2017-01-1005:22robert-stuttafordDatomic is a very natural fit for storing both events and their aggregations#2017-01-1009:14val_waeselynck@robert-stuttaford do you store derived data in Datomic? 
How does it work?#2017-01-1009:21robert-stuttafordas noHistory attrs on (what would, in sql terms, be) join entities dedicated to keeping stats#2017-01-1009:21robert-stuttafordwe have an Onyx system which watches the tx log and does the calc and writes the results back to Datomic#2017-01-1009:22robert-stuttafordallowing us to query across both events and aggregates as needed at read-time#2017-01-1009:23val_waeselynck@robert-stuttaford interesting!#2017-01-1009:25robert-stuttaforda pragmatic approach, taken by generating events, writing queries over those events to satisfy views, and then improving the perf of those queries by pre-calculating and storing intermediate results - whether as actual derived values, or as short-cut collections that embody multiple ref jumps and maybe entity status as well - e.g. a user group caching a collection of all active documents generated by all of its users, allowing a join from group straight to active documents#2017-01-1009:37val_waeselynck@robert-stuttaford do you have a nice way of handling cache misses in such cases ?#2017-01-1010:07robert-stuttafordnot at the moment, unfortunately#2017-01-1010:21dominicm@marshall Trying to lazily consume an extremely large sequence. We have to generate JSON/CSV reports which is essentially "give me 90% of the entities in the database" and seeing memory consumption go very high when doing so.
The idea is to consume the sequence lazily so that we don't blow through all our memory. Then do streamed encoding of JSON and pipe it out over HTTP immediately so there's no large string in memory either.#2017-01-1011:20robert-stuttafordthat’s a perfect use case for d/datoms @dominicm#2017-01-1011:21dominicm@robert-stuttaford I thought so. Wanted to make sure I didn't end up wasting the advantage by doing d/datoms and then filtering in a slow way.#2017-01-1011:22robert-stuttafordi’ve found transducers + sequence play very nicely with d/datoms#2017-01-1011:24robert-stuttafordi guess it depends how complex your filtering is?#2017-01-1013:27eoliphant@robert-stuttaford thanks for the input. So is your approach similar to that yuppiechef POC? though you seem to be using datomic as the event store as well. Any issues with scale?#2017-01-1013:27eoliphanton the event store aspect that is#2017-01-1013:28robert-stuttafordyes, we use Datomic for events too. it’s similar in principle to YC’s, but simpler because it’s really just Datomic and some Clojure apps, some of which are web facing, and some not#2017-01-1013:28eoliphantgotcha#2017-01-1013:28robert-stuttafordwell, you’re really scaling the storage. we use DynamoDB#2017-01-1013:28robert-stuttafordwhich is all-you-can-eat#2017-01-1013:28eoliphantgood point, and that’s our target prod storage#2017-01-1013:28robert-stuttafordthere is a comfort boundary at about 10bn datoms, but we’re so far away from that right now#2017-01-1013:30eoliphantok, will keep that in mind, what we’re working on is by no means ‘web scale’ lol. 
It’s more a new look on some traditional data processing type stuff; the biggest challenge is probably getting the years of legacy data in#2017-01-1013:31robert-stuttaford@stuarthalloway did a recent talk on writing ETL stuff with Datomic#2017-01-1013:31eoliphantyeah I actually think that’s in my ‘watch later’ 🙂#2017-01-1013:32eoliphantwill check it out today#2017-01-1014:23dominicm@robert-stuttaford I think depending on the filtering, it might be a job for reducers. As reducers can parallelize. Then in a final reducer I can write to a stream, I think. This is mostly me just planning ahead, so not deep in the code yet#2017-01-1014:43robert-stuttaford100%#2017-01-1015:04karol.adamiecwhen making a query with lookup ref, if datomic cannot find the entity it throws quite nasty errors. any ways to handle that? a regular query with :where just returns no results, but if done through lookup ref it bombs…#2017-01-1016:03isaacWhy doesn’t Datomic provide reverse index access, like: d/reverse-datoms?
the (first (reverse (d/datoms ...))) is slower than (first (d/datoms …))#2017-01-1016:08rauh@isaac I thought seek-datoms with a nil could do that? But I haven't tried it myself#2017-01-1016:34isaacI read the doc http://docs.datomic.com/clojure/#datomic.api/seek-datoms, it seems it doesn't have this feature?#2017-01-1016:45rauh@isaac What are your datoms parameters?#2017-01-1016:48isaac(first (datoms :avet :some/attr)) => min-value#2017-01-1016:48rauh@isaac Have you tried if datomic is smart enough about last?#2017-01-1016:51rauhWhat type is your attr?#2017-01-1016:52isaacreverse implemented by reduce1, both last & reduce1 implemented via recur#2017-01-1016:53lellisHello, I'm having problems with some characters in the fulltext
(d/q '[:find ?e ?name
:in $ ?search
:where [?e :user/name]
[(fulltext $ :user/name ?search) [[?e ?name]]]]
db search)
When the search contains a !(and some other chars, in some positions), an exception occurs. Is this foreseen (I did not find anything in the docs)? Is there a blacklist of characters?#2017-01-1016:53isaacbigdec#2017-01-1016:55rauh@isaac Why not use max function? I'd expect that to be optimized#2017-01-1016:55isaacreverse & last need to traverse the entire seq#2017-01-1016:55rauhYeah you're right#2017-01-1016:56isaacI tried, max more slower last datoms#2017-01-1016:59isaacs/more slower/more slower than#2017-01-1016:59isaac😀#2017-01-1019:01marshallI believe there is a feature request for reverse iteration of indexes via datoms API#2017-01-1019:01marshallI’d suggest you login to the feedback portal and vote for it if you're interested in that functionality#2017-01-1019:04jaretYou can get to the feedback portal from your my-datomic account page. It's in the top right, "suggest feature"#2017-01-1019:15jaret@lellis could you try escaping the characters '+ - && || ! ( ) { } [ ] ^ " ~ * ? : \ /' with a '\' so '\(1\+1\)\:2'#2017-01-1019:15jaretugh formatting#2017-01-1019:16jaretcould you try escaping the characters + - && || ! ( ) { } [ ] ^ " ~ * ? : \ / with a \ so \(1\+1\)\:2?#2017-01-1020:40souenzzo@jaret, That's exactly what the problem is!
Is there some "right" way to do that? I need to do it on all my queries?#2017-01-1021:02jaret@souenzzo you can check out https://clojuredocs.org/clojure.string/escape#2017-01-1023:39favilaI am remembering there used to be a problem with tempid issuance where you could potentially get an id in the transactor which collided with some tempid the peer created and sent in the tx#2017-01-1023:39favilawas this ever resolved or a workaround found?#2017-01-1106:00robert-stuttaford@favila, may i suggest you scan through the Datomic changelog?#2017-01-1106:00robert-stuttafordhttp://my.datomic.com/downloads/free > click Changes in the first row in the table#2017-01-1109:51pesterhazyI've found the following Datomic db maintenance helper update-all useful: https://gist.github.com/pesterhazy/479303224559cf7fa372c5af3c992768#file-datomic_maintenance-clj-L28#2017-01-1109:52pesterhazywhat does everyone else use in terms of quick 'n dirty db manipulation/transformation helpers?#2017-01-1109:58robert-stuttaford@pesterhazy every-val is (into #{} (map :v (seq (d/datoms db :aevt attr)))) 🙂#2017-01-1109:58pesterhazyindeed it is#2017-01-1109:59robert-stuttafordcan do the same with (map :e) for every-eid#2017-01-1110:00robert-stuttafordand you can switch every-entity to use a transducer (sequence (map #(d/entity db %)) (every-eid db attr))#2017-01-1110:01robert-stuttafordoh, you want a set; (into #{} (map #(d/entity db %)) (every-eid db attr))#2017-01-1110:02robert-stuttafordyou could do the transducer thing in every-val too of course#2017-01-1110:17pesterhazygood thing about datoms is that it's lazy#2017-01-1110:18pesterhazyso maybe not do the set thing...#2017-01-1110:22kurt-yagramIn previous versions of datomic, one would use :db/id #db/id[:db.part/user -X] to add entity references in a transaction. In the newer versions, one can use :db/id "someid", which is really cool. However, what if the entity is in another part of the database? 
(If one has an entity reference like :db/id #db/id[:db.part/custom -X], how to use the newer version of datomic to reference this? :db/id "someid" will write it in the db.part/user, but I want it in another part.)#2017-01-1111:05robert-stuttaford@kurt-yagram the v1 syntax is still supported#2017-01-1111:07kurt-yagramyeah, I know, question is, can we use the 2nd syntax together with the 'named' references somehow?#2017-01-1111:58robert-stuttafordno#2017-01-1112:40kurt-yagramok, thanks!#2017-01-1114:19zalkyHi all. I've been trying to understand the performance implication of datomic's partitions. Specifically, the datomic docs state that "entities you'll often query across... should be in the same partition to increase query performance". It just so happened that I ended up with a group of entities with a similar ":entity/type" attribute across two partitions that were being returned on a query result, and when I fixed it, there was no effect on performance. The database is relatively small. My thinking was that maybe it had to do with how the db is being cached in memory, but I'm really not sure. Can anyone shed more detailed light on how poor use of partitions might decrease performance? Specifically, a simple test I could run where I would see the performance effect of partitions? Thanks!#2017-01-1114:35robert-stuttaford@zalky fewer index segments need to be read into peer memory from storage if all the datoms necessary for a query are partitioned together#2017-01-1114:36robert-stuttafordthis means less read-side pressure — on storage itself, in the peer cache (when it fills up) and in the 2nd-tier cache (memcached). for small databases (or really any database that fits in peer ram) this doesn’t really matter at all. 
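For reference, a minimal sketch of the v1 tempid syntax robert-stuttaford says is still supported, targeting a custom partition; the partition and attribute names here are hypothetical, and the partition must be installed first:

```clojure
(require '[datomic.api :as d])

;; one-time: install a custom partition
@(d/transact conn
  [{:db/id                 (d/tempid :db.part/db)
    :db/ident              :db.part/custom
    :db.install/_partition :db.part/db}])

;; v1 tempid syntax: this entity is allocated in :db.part/custom
@(d/transact conn
  [{:db/id     (d/tempid :db.part/custom)
    :item/name "widget"}])

;; the string-tempid syntax ({:db/id "someid" ...}) always allocates
;; from the default user partition, per the answer above
```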
large databases can suffer read performance if they have to read a lot of segments in for normal queries and so cause the cache to churn#2017-01-1114:36robert-stuttafordbasically, unless you think you’re going to have a large database, i wouldn’t worry about it 🙂#2017-01-1114:37robert-stuttafordthe fact that the newest release gives us a simpler partitioning scheme (essentially just sticking to one user partition) shows that it’s not such a worry for most users#2017-01-1114:55zalkyThanks @robert-stuttaford for the response. So to test out the performance effects, do you think it would be sufficient to generate a db of sufficient size, then run a query across partitions? A second question: are there any performance drawbacks of just using a single partition? Or is it just the flip side of the coin: not much to worry about for most use cases.#2017-01-1114:58robert-stuttafordyou’d have to generate a substantial database, and be watching your peer and storage metrics pretty carefully to notice a difference#2017-01-1114:59robert-stuttafordunless you’re planning a big database and you have strict read-side SLAs to conform to, i wouldn’t worry for now#2017-01-1115:10zalkyGotcha, thanks again for the response.#2017-01-1119:17SoV4What's the preferred way of keeping track of an ordered sequence in a datomic store? say i have lots of elements that users of my site vote on, and from these votes I derive a ranking... how can I keep track of a ranking? :item/rank ? seems a bit funky to try and keep uniqueness.#2017-01-1119:39robert-stuttafordthis is a great question, @sova.#2017-01-1119:40robert-stuttafordyou could use a linked list - 2 links to 1, 3 links to 2, which makes it so small changes don’t require renumbering everything#2017-01-1119:40robert-stuttafordbut this does make determining position costly to do, due to having to traverse the list#2017-01-1119:41eoliphantquestion, datomic/datalog doesn’t seem to have a concept of paging, offset, etc? 
How do folks usually implement something like this ? Just walk back and forth through the list of keys?#2017-01-1119:41robert-stuttafordor you could store a vector of entity ids in a string as a pr-str edn blob#2017-01-1119:41zalkyrobert,if you modified your suggestion to also have :item/rank, reindexing would be a matter of swapping :item/rank, no?#2017-01-1119:42robert-stuttafordbut that means managing a large string for large collections#2017-01-1119:42robert-stuttafordif you do rank, then (as with linked list) items can only participate in a single ranking list#2017-01-1119:43robert-stuttafordif you do rank, you’d have to re-calc all the items between the lower and upper rank for any given change#2017-01-1119:43zalkyyes indeed#2017-01-1119:43robert-stuttaforde.g. something moving from 72 to 45 means altering all of 45 through 72#2017-01-1119:43robert-stuttafordwhich may be totally ok#2017-01-1119:44marshallIt's the linked list vs array CS question. Depends on your use pattern#2017-01-1119:45robert-stuttaforddo you know of anyone actually doing either at large scale, @marshall ? e.g. 1000s or 10,000s or gulp 1,000,000s?#2017-01-1119:47marshallNot off the top of my head#2017-01-1119:54robert-stuttafordcool 🙂#2017-01-1120:14SoV4Well the rankings will be changing rather rapidly...#2017-01-1120:14SoV4As people vote on things#2017-01-1120:16SoV4Maybe it's better to calculate them on the fly. But I like the idea of persisting :item/rank in storage. Then I can do some excellent time-travel and see how the entity went down or up in rank over time.#2017-01-1120:18SoV4As you said, Robert, "or you could store a vector of entity ids in a string as a pr-str edn blob" .. this is an interesting suggestion. Just keeping a file of all the entity-ids in their rank-order.... Hmm.. I'll have to consider my options a bit.#2017-01-1120:31SoV4@eoliphant i'm curious about pagination as well. Based on some googling... 
http://docs.datomic.com/clojure/#datomic.api/seek-datoms seems very useful#2017-01-1120:33SoV4In my own use case I think I'm set on every item having its own unique :item/rank ... so I'll have to make sure they are unique and that they get iteratively updated when the rankings change... I'm curious how this will function on the order of thousands of elements. will let y'all know eventually 🙂#2017-01-1120:48jdkealyIf I wanted to import data into a local DB and then push it to prod, can I be using datomic:free locally and restore to datomic:pro ?#2017-01-1120:49eoliphantYeah i’d looked at that @sova , but i’m needing to do it with query results sometimes#2017-01-1121:12jaret@jdkealy Yes you can backup a free and restore into a pro. The only restriction is that backup to S3 is only available in Datomic Pro#2017-01-1121:23jdkealyah cool thanks... so when you back up... it makes many directories. would you then tar it, scp onto your transactor server and then backup from local?#2017-01-1121:54marshallYou can do that. Alternatively, if you're using pro or starter locally you can backup directly to s3 and restore in production from s3#2017-01-1122:13timgilbertFWIW, in a past project I just kept an :item/rank integer and recalculated it on move. It worked fine for me. I was always using pretty small lists (on the order of 10-30 elements or so), and reading the lists was much more common than reordering the lists#2017-01-1122:23jdkealyi can never seem to get the export to s3 command right#2017-01-1122:25jdkealynot sure what i'm doing wrong but i actually just tried on a fresh install after running ensure transactor#2017-01-1122:25jdkealyhttps://gist.github.com/jdkealy/7b84243cbef95228677f16f0c16a3d5a#2017-01-1122:29timgilbertSay, I have a general question. I want to produce a filtered database value representing the universe of data that a specific user can access (based on security rules). 
I'm thinking about doing this by using (d/with) to nuke a bunch of entities out of the database value before passing it down to code that will run client-provided pull patterns against it. Is this likely to be performant, or is there a better way to do it?#2017-01-1122:31timgilbertAnother option I've considered is using (d/filter), but my rules are too general to filter at the datom level (eg, Bob can see Alice if and only if Bob's project has the same company as Alice's project), so it doesn't seem applicable#2017-01-1122:32favila@timgilbert surely you can express that as a query?#2017-01-1122:32timgilbertYeah, and to a limited extent I've been doing it with rules#2017-01-1122:33timgilbertBut my dream system does this at the top level and then I don't need to add the same boilerplate logic into every single one of my queries#2017-01-1122:35stuartsierra@timgilbert It is difficult to do this kind of access filtering while still supporting the full syntax of datalog queries or pull expressions. It is usually easier to define your own query syntax, perhaps as a subset of datalog or pull expressions, and enforce the access rules in your query evaluator.#2017-01-1122:37stuartsierraRetracting a large number (thousands?) of entities with d/with is unlikely to perform well. d/filter with complex queries will also not be "fast."#2017-01-1122:38timgilbertHmm, ok, I guess I'll reevaluate my approach#2017-01-1122:41favila@timgilbert d/filter may be fast enough. 
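A minimal sketch of the d/filter shape favila is pointing at: the predicate receives the unfiltered db as well as each datom, so it can consult other entities. The :user/company access rule here is hypothetical and deliberately naive:

```clojure
(require '[datomic.api :as d])

(defn visible-db
  "Filtered db exposing only datoms on entities that share a company
  with viewer-eid. Naive sketch: a real version must also whitelist
  schema datoms and entities with no :user/company at all."
  [db viewer-eid]
  (let [company (:db/id (:user/company (d/entity db viewer-eid)))]
    (d/filter db
              (fn [udb datom]
                (= company
                   (:db/id (:user/company (d/entity udb (:e datom)))))))))

;; queries and pulls run against (visible-db db bob-eid) only see
;; datoms that pass the predicate
```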
It doesn't sound like you have tried it yet?#2017-01-1122:42timgilbertThe problem we're trying to address is that if our clients send us straight-up pull patterns, they can traverse from entities that they should be able to access to entities they shouldn't be, like Alice is an admin for two different companies, and suddenly Bob from BobCo can see all the data in CarlCo by back-navigating through :company/_admin or whatnot#2017-01-1122:43timgilbert@favila, I'll think about it some more but since the argument to the filter predicate is a single datom I don't think it will work#2017-01-1122:43timgilbertLike I have a semantic context that I'm trying to enforce#2017-01-1122:43favila@timgilbert it's a db and also a single datom#2017-01-1122:44timgilbertOh, hmm, didn't realize that, thanks!#2017-01-1122:45favila@timgilbert that said, sounds like you could also either whitelist pull attrs (preprocess the pull expr) or completely cover over the search/retrieval "ops" the users are allowed, so that you know they are safe#2017-01-1122:45stuartsierraIf you can't trust the client, don't accept raw d/pull patterns. It will be hard to ensure you've covered all the cases for restricting access.#2017-01-1122:46timgilbertYeah, that's what I've been learning. 😉#2017-01-1122:46stuartsierraThink of it like SQL: you wouldn't let your clients send in raw SQL queries.#2017-01-1122:47timgilbertI mean, I wouldn't accept raw SQL for a postgres-backed service... haha jinx#2017-01-1122:47stuartsierra🙂#2017-01-1122:49timgilbertOk, well I'll think about this some more. Thanks for the advice @stuartsierra and @favila#2017-01-1207:58SoV4Would it be appropriate to call Datomic a graph database?#2017-01-1208:00rauh@sova Yes absolutely, you can walk along the edges to other entities from an entity object.#2017-01-1208:02SoV4@rauh cool. thanks.#2017-01-1208:02SoV4I never thought to call it that, but you're totally right#2017-01-1208:04rauh@sova FWIW: That's how I use datascript on the client. 
Get an entry point somewhere and then let my components walk along the graph (with entites) and let them decide what they need. Almost no queries necessary this way.#2017-01-1208:04SoV4Could you tell me more about that?#2017-01-1208:08rauhWell I just get an entity out at some point, let's say (d/entity-by-av :post/id post-id), then pass this to a react component. I can then get out anything it wans :post/title etc, or pass it to children (mapv display-comments (:post/comments post) which again could pass it on to child components (display-user-small (:comment/user comment)) etc etc.#2017-01-1208:10SoV4oh very cool! @rauh are you using Reagent?#2017-01-1208:10rauhNo, rum.#2017-01-1208:11SoV4Okay. Maybe it's time I take another look at rum.#2017-01-1208:11SoV4I just wrote some pseudo-code for what I would want for ideal component+query handling on the ui side.. and that looks pretty close to what i've got going on#2017-01-1208:13SoV4dayum github just went down x.x#2017-01-1208:13SoV4at least in my neck of the woods#2017-01-1208:14rauhWorks fine here. The graph walking should also work for reagent the same way#2017-01-1210:33pesterhazyWhat do people do to return paginated, sorted large result sets?#2017-01-1210:34pesterhazyFor example, suppose the SQL statement select order_number, product_name, price from orders sort by order_date desc limit 50
#2017-01-1210:35pesterhazyshould be translated to Datomic. Also suppose there are 200,000 orders in the database.#2017-01-1210:37pesterhazyThe first approach would be [:find ?order ?date :where [?order :order/date ?date]]
, followed by (->> results (sort-by second) (map first) (take 50) (d/pull-many db '[my-pull-spec]))
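An alternative sketch using d/datoms over the :avet index (the approach suggested a few messages further down), which walks a single attribute's index instead of realizing the whole Datalog result; it assumes :order/date is :db/index true, and the attribute names are from the example above:

```clojure
(require '[datomic.api :as d])

(defn latest-orders
  "Last n orders by :order/date. :avet is sorted ascending, so without
  reverse index iteration we still walk to the end of the index, but we
  only ever hold n entity ids at a time."
  [db n]
  (->> (d/datoms db :avet :order/date)
       (map :e)
       (take-last n)
       reverse
       (d/pull-many db '[:order/number :order/date])))
```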
#2017-01-1210:39pesterhazyHowever, with 200,000 results, this is already relatively slow at >1000ms, with the equivalent SQL query taking <10ms.#2017-01-1210:40pesterhazyWhat's more, query time will grow quickly along with the size of the result set.#2017-01-1210:40pesterhazyHas anyone developed any patterns for this sort of use case?#2017-01-1211:43robert-stuttafordi wouldn’t use Datalog, if you can generate the initial set on a single attribute. i’d use d/datoms, which at least would only cause me to seek to the end of the intended page, rather than realise the whole set#2017-01-1211:44robert-stuttafordthis assumes you can lean on the prevalent sort of the index for your order. if you need an alternate sort, you’d have to get the full set. there’s an open feature request to allow performant index traversal in reverse order, which will help with this sort of thing#2017-01-1211:54pesterhazyprevalant sort for java.util.Date would be ascending I assume?#2017-01-1211:55pesterhazyso I'd need to come up with some sort of "reverse date" (e.g. negative Unix timestamp)#2017-01-1211:56pesterhazytwo other problems#2017-01-1211:56pesterhazy- I'd need a separate attribute for each entity type (`:order/sort-key`, :product/sort-key)#2017-01-1211:58pesterhazy- there's no easy way to further restrict the search (e.g. 
give me the last 50 orders that have ":order/status :shipped")#2017-01-1211:59pesterhazya separate "pagination" key for each entity isn't too bad I guess#2017-01-1211:59pesterhazyany way to do filtering though?#2017-01-1211:59rauh@pesterhazy Please also vote for reverse seek on https://my.datomic.com/account -> "Suggest features" if you want this#2017-01-1212:02robert-stuttafordall prevalent sorting is ascending, yes, @pesterhazy#2017-01-1212:02robert-stuttafordyou’d have to put filtering into your d/datoms processing pipeline ahead of drop + take#2017-01-1212:03robert-stuttafordstill going to be faster than realising the whole set with Datalog#2017-01-1212:05pesterhazycan you elaborate on how to do filtering in the datoms pipeline, @robert-stuttaford ?#2017-01-1212:07robert-stuttaford(->> (d/datoms) (filter (fn [[e _ v]] <go to pull or entity or datalog with e and/or v>)) (drop) (take))#2017-01-1212:07rauh@pesterhazy One pragmatic approach is to keep a client side datastructure that stores the date of the (last - 100), (last - 200) etc date. This would allow you to seek quicker at the end. Looks like order_date is append only and immutable? And then just iterate to the end and take the last n datoms.#2017-01-1212:08rauhThen refresh that data structure when you iterated above (or below if they can be removed) a threshold#2017-01-1212:10robert-stuttafordyeah, this is taking the linked-list approach. you may be able to use d/seek-datoms to iterate through the raw index from some mid point#2017-01-1212:10robert-stuttafordhttp://docs.datomic.com/clojure/#datomic.api/seek-datoms#2017-01-1212:10robert-stuttaford> Note that, unlike the datoms function, there need not be an exact match on the supplied components. The iteration will begin at or after the point in the index where the components would reside. 
…#2017-01-1212:10pesterhazy@rauh, I voted for the "reverse index" feature#2017-01-1212:11pesterhazythanks for the pointer#2017-01-1212:12pesterhazy@robert-stuttaford, ah I see, using entity or pull makes sense#2017-01-1212:13pesterhazyI could even grab batches of, say, 1024 and run d/q on each 🙂#2017-01-1212:13robert-stuttafordyou could 🙂#2017-01-1212:14robert-stuttafordthis is very much a do-it-yourself part of Datomic, though (which is great, because you’re in control) but i agree it would be good to establish some patterns. it’s very similar to yesterday’s discussion about arbitrary sort; the linked-list vs array CS question#2017-01-1212:14pesterhazy@rauh, so basically the idea would be to have the api send back a next token, rather than a page number?#2017-01-1212:18rauhI'd call it a sketch of the index. A sorted-map like {100 "some-date" 200 "some-date" 300 "some-date" ...} which "approximatedly" seeks into the datoms a (d/datoms :avet :order/date (get sketch 100)) and then seek to the end. The result, unless you removed orders, should be >= 100 datoms. Then just take the last 50 for your pagination. Then update the 100 key of the map to (nth datom-result (- (count datoms) 100))#2017-01-1212:18rauhObviously, lots of details missing here. Rounding etc.#2017-01-1212:19robert-stuttafordkinda like google maps does when you zoom in. starts with a rough zoomed out blurred view, and fills boxes in with detail as you focus in#2017-01-1212:20rauhFirst time you do the seek, you won't have any info, so you have to iterate all datoms. Then keep that "approximate sketch" in a map.#2017-01-1212:20robert-stuttafordi said that without thinking about it too much, i may be way off 😊#2017-01-1212:21rauhObviously don't store all 200,000 / 100 but only {last-10,000 ... 
last} since people seldom paginate more than that#2017-01-1212:21rauhThat way your memory usage is bounded above.#2017-01-1212:21rauhThough, come to think of it, a map with 2k entries is probably tiny.#2017-01-1212:22rauhThe whole thing becomes much more complicated (== breaks down) if the dates are edited/removed a lot and queried very infrequently.#2017-01-1212:23rauhIf you end up implementing it, make sure to share some code 🙂#2017-01-1212:26rauhAnd if you add a lot of new orders between the querying, then the datoms call will also be more expensive and might be very large. Though, you could listen on the datomic transactions and even keep it really updated all the time... Lots of room for optimizations.#2017-01-1212:27robert-stuttafordso, if i understand correctly, you’re using the initial seek to set up bookmarks in the dataset to seek from via e.g. seek-datoms, for the data that users paginate infrequently#2017-01-1212:36pesterhazyYeah, order/date is immutable#2017-01-1212:36pesterhazyCan't say I understand the approach completely but it sounds intriguing#2017-01-1212:37pesterhazyI think there's room for a datomic patterns library that provides auxiliary indexes and other helpers as functions#2017-01-1212:38robert-stuttafordi agree, but we’d have to preface it with a big disclaimer: warning—here be opinions 🙂#2017-01-1212:39pesterhazyTrue#2017-01-1212:39pesterhazyIt seems there's room for general solutions here#2017-01-1212:39robert-stuttafordtotally agree#2017-01-1212:40pesterhazyI like the idea of bookmarks for pagination#2017-01-1213:13rauhYeah I think a general lib could be useful to create such bookmarks/sketches. 
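A rough sketch of the bookmark idea described above (every name here is hypothetical; rounding, refresh thresholds, and concurrency are all elided):

```clojure
(require '[datomic.api :as d])

;; offset-from-end -> the :order/date value seen roughly that far
;; from the end of the index on a previous call
(defonce bookmarks (atom (sorted-map)))

(defn last-n-orders
  "Seek near the tail of the :order/date index via a bookmark instead
  of walking all datoms from the start, then keep the last n."
  [db n]
  (let [start  (get @bookmarks n)
        datoms (if start
                 ;; seek-datoms needs no exact match on components
                 (d/seek-datoms db :avet :order/date start)
                 ;; cold start: full walk of the attribute's index
                 (d/datoms db :avet :order/date))
        tail   (vec (take-last n datoms))]
    ;; refresh the bookmark for the next call
    (when-let [d (first tail)]
      (swap! bookmarks assoc n (:v d)))
    (map :e tail)))
```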
If the data is immutable you can even cover the entire index and just append to it at the end as data grows.#2017-01-1213:13rauhThough, if things get deleted only START and END bookmarks become usable.#2017-01-1213:54rauh@pesterhazy @robert-stuttaford : Brainstorming: https://gist.github.com/rauhs/aa58d748abf851543d57ef3403f23edb#2017-01-1213:57pesterhazygreat!#2017-01-1213:57pesterhazy"We keep a datastructure on the client code" -- what do you mean by "client" here?#2017-01-1213:58rauhJust a (def histogram (sorted-map))#2017-01-1213:59rauhOr maybe clojure/data.int-map#2017-01-1213:59rauhOr maybe java.util.HashMap/etc#2017-01-1214:00rauhIn an atom.#2017-01-1214:23jdkealyTurns out my datomic backup db script was failing because of wrong java version. Oh well...#2017-01-1214:24jdkealyhow are you supposed to backup to s3 from localhost? I thought the S3 backup used AMI#2017-01-1214:35pesterhazyyou can access S3 from your local machine, no problem#2017-01-1214:35pesterhazy@rauh, I see. Can't we store the index itself in datomic?#2017-01-1214:36rauh@pesterhazy That's a great idea! That would get rid of a lot of the problems from the last section#2017-01-1214:43rauhNow I wonder how :nohistory behaves when having multiple db's from different timepoints of the connection#2017-01-1214:56stuartsierranoHistory means no history, meaning there are no guarantees that you will ever see anything but the most recent value of the attribute, even if you're looking in the "past"#2017-01-1214:56plexushi everyone, I'm trying to get datomic running, and I'm completely stuck#2017-01-1214:57plexusI registered and downloaded the "Datomic Pro Starter Edition", and unzipped it#2017-01-1214:57plexusnow according to the docs I should do#2017-01-1214:57plexusbin/maven-install
bin/run -m datomic.peer-server -p 8998 -a myaccesskey,mysecret -d firstdb,datomic:
#2017-01-1214:58plexuswhich results in#2017-01-1214:58plexus% bin/run -m datomic.peer-server -p 8998 -a myaccesskey,mysecret -d firstdb,datomic:
Exception in thread "main" java.io.FileNotFoundException: Could not locate datomic/peer_server__init.class or datomic/peer_server.clj on classpath. Please check that namespaces with dashes use underscores in the Clojure file name.
at clojure.lang.RT.load(RT.java:456)
at clojure.lang.RT.load(RT.java:419)
at clojure.core$load$fn__5677.invoke(core.clj:5893)
at clojure.core$load.invokeStatic(core.clj:5892)
at clojure.core$load.doInvoke(core.clj:5876)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invokeStatic(core.clj:5697)
at clojure.core$load_one.invoke(core.clj:5692)
at clojure.core$load_lib$fn__5626.invoke(core.clj:5737)
at clojure.core$load_lib.invokeStatic(core.clj:5736)
at clojure.core$load_lib.doInvoke(core.clj:5717)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invokeStatic(core.clj:648)
at clojure.core$load_libs.invokeStatic(core.clj:5774)
at clojure.core$load_libs.doInvoke(core.clj:5758)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invokeStatic(core.clj:648)
at clojure.core$require.invokeStatic(core.clj:5796)
at clojure.main$main_opt.invokeStatic(main.clj:314)
at clojure.main$main_opt.invoke(main.clj:310)
at clojure.main$main.invokeStatic(main.clj:421)
at clojure.main$main.doInvoke(main.clj:384)
at clojure.lang.RestFn.invoke(RestFn.java:805)
at clojure.lang.Var.invoke(Var.java:455)
at clojure.lang.AFn.applyToHelper(AFn.java:216)
at clojure.lang.Var.applyTo(Var.java:700)
at clojure.main.main(main.java:37)
#2017-01-1214:59plexusok never mind... now it seems to be running#2017-01-1215:00plexusI just spent a couple hours on this... but ok 😆#2017-01-1215:01rauh@stuartsierra Just to clarify: So a
(let [db (d/db conn)]
  (change-no-hist-attr!)
  (:acces-no-hist-attr (d/entity db some-ent)))
might see the new value?#2017-01-1215:12stuartsierra@rauh I'm not certain, but I would not be surprised if you saw the new value in that case.#2017-01-1215:14rauh@stuartsierra Could you find out? That would change some things for me.#2017-01-1215:15stuartsierra@rauh I do not have a way to prove it without running a lot of tests. But I do know that noHistory is defined to mean "I only ever care about the most recent value of this attribute."#2017-01-1216:05marshall@rauh you’re calling d/entity on a value of the DB from before you transacted a change#2017-01-1216:06marshallthat database value is immutable#2017-01-1216:07rauh@marshall Well the use-case was from the above sketches (not sure if you read the conversation?). So it might be not like the code above#2017-01-1216:07rauhBut more nested... The db value might be few hundred ms old.#2017-01-1216:18marshallThe DB value is immutable#2017-01-1219:19tjtoltonso, here's a question I get when I explain datomic's peer/transactor model:
"So you mean that as our data grows, we have to scale the memory on every single one of our running application nodes, instead of just our data nodes?"#2017-01-1219:20tjtoltonPart of the problem is that I haven't actually done the work of ETLing our domain data into datomic to be able to intelligently respond to the "that's too much data in memory" objections. But in general, is that statement accurate?#2017-01-1219:21tjtoltonThat application nodes have to scale in memory as the database grows#2017-01-1219:21tjtoltoni guess logically it is#2017-01-1219:37jaret@tjtolton Your peer memory needs to account for its copy of the memory index, its own object cache, and whatever it needs to run its application. This does not necessarily scale as your data grows, but if you have a peer running a huge full system report and your data grows then you need to account for that in peer memory, but you would have to make that consideration in any system. Granted I am a little biased as I work at Cognitect, but being able to scale your peer memory is where you get bang for your buck specific to the peers. Being forced to scale your entire system for one use case is lame. So I am not sure if that helps you when you get this question, but I see this as a strength of Datomic's model.#2017-01-1219:37jarethttp://docs.datomic.com/capacity.html#peer-memory#2017-01-1219:45tjtoltonInteresting. the memory index. right, i occasionally forget that datomic isn't just storing a naive list of datoms (or several of them)#2017-01-1219:46tjtoltonI'll take a look at that info, thanks!#2017-01-1219:47marshall@tjtolton @jaret is totally right - your peers (application nodes) have to scale if your queries scale (i.e. if you say give me 20% of the db and your DB is growing), but that’s arguably true of any database#2017-01-1219:48marshallone cool thing about the peer model, though, is that you can horizontally scale instead
so add a peer - you’ve just increased both your system cache size and your compute (query) power#2017-01-1219:49marshalland if you want to further optimize, you can shard traffic to specific peers.
So users 1-5 always hit peer A and users 6-10 always hit peer B
This means that peer A’s cache is ‘hotter’ for users 1-5#2017-01-1219:49marshallor use Peer A for the web app and Peer B for an analytics process#2017-01-1219:50marshalleach cache automatically tunes itself for that workload#2017-01-1219:53tjtoltonThat indicates that I don't fully understand the way datomic memory works. I thought that the entire database was locally cached on each peer#2017-01-1219:54marshallah. no, datomic is a “larger than memory” database#2017-01-1219:54tjtoltonalso, @marshall, isnt what you just said only applicable to peer severs, and not applications that are themselves peers?#2017-01-1219:54marshallif you happen to be running a small database that fits completely in your peer’s memory, then you can effectively have it all cached locally, but that isnt required#2017-01-1219:55marshallDatomic caches segments of the DB that it uses to answer queries#2017-01-1219:55marshallwhen a peer is answering a query it first looks in local cache, then in memcached, then goes to backend storage#2017-01-1219:56marshallthe peer needs to have enough JVM heap space to hold the final query result in memory (as would any application realizing a full query result from any DB), but that’s the only ‘requirement'#2017-01-1219:56tjtoltoninteresting. and the transactor streams changes to only the parts of the data that are cached on each peer?#2017-01-1219:56marshallalmost. it’s a bit subtle. 
The transactor streams novelty to all connected peers#2017-01-1219:56marshallspecifically, that’s the memory index#2017-01-1219:57marshallperiodically, the transactor incorporates that novelty into the persistent disk index#2017-01-1219:57marshallvia an indexing job#2017-01-1219:57marshallwhich happens as a separate process#2017-01-1219:57marshalland when it finishes that, it notifies the peers that there is a new disk index#2017-01-1219:57marshallso they know where to go if they need to retrieve segments#2017-01-1219:58tjtoltonthat's pretty slick#2017-01-1219:58marshallbut the segments that are cached are never updated#2017-01-1219:58marshallall segments are immutable#2017-01-1219:58marshallso once a segment is cached on a peer it’s always that same value#2017-01-1219:58tjtoltonright, update wasn't the right word#2017-01-1219:58marshallif a datom in that segment is updated via a transaction, the resulting “new value” is either in mem index or a new segment#2017-01-1219:59tjtoltoni suppose upsert#2017-01-1219:59tjtoltonis the new canonical term#2017-01-1219:59marshalland like clojure data structures datomic uses structural sharing for all of this stuff, so a lot of the “new” index tree may still be the same segments you’ve already cached#2017-01-1220:02tjtoltonhuh, interesting. So, to review:
* peer starts off "cold", doesn't have much of the database locally cached
* queries ask the peer for info that it doesn't have, it pulls info from the storage service (or memecached) and caches the new info
* subsequent queries for the same data are served from that peers warm cache.
* the transactor knows about new information, and pushes it to all peers
* subsequent queries will fold in that knowledge when serving from their cache#2017-01-1220:03marshallyep#2017-01-1220:03marshalland caching / retrieval from storage happens at the segment level#2017-01-1220:03marshallso it doesnt just fetch the datom you ask for#2017-01-1220:03marshallit’s the whole segment that contains that datom#2017-01-1220:04marshallwhere segments contain 100s or 1000s of datoms#2017-01-1220:04marshall(i.e. a chunk of the index tree)#2017-01-1220:04marshallso a lot of use cases might ask for some data, then based on that ask about some related data and that second query may only need to use the same segment#2017-01-1220:05marshallthe idea is that you effectively amortize the n+1 problem away#2017-01-1220:05marshallso you don’t have to do everything in a single query the way you would with a traditional client/server db#2017-01-1220:12tjtoltongotcha. The n+1 problem is a word I've heard many times#2017-01-1300:48jdkealycan i restore to an existing database? I've been trying to seed my database for my clients and running and re-running my import scripts, I get the error "The name 'restore1' is already in use by a different database"... The problem is it takes over 30 mins to build and deploy my code, which i need to do when i want to create a new database#2017-01-1301:10SoV4total beginner's question: what's the difference between an ident and a lookup ref ?#2017-01-1307:44luposlip@sova ident is usually a keyword you use instead of an attribute ID. Such as :user/likes, :user/email or :object/type.
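A minimal schema sketch showing such an ident in use (the attribute names and values are hypothetical):

```clojure
(require '[datomic.api :as d])

;; installing an attribute gives it an ident, :user/email, which then
;; stands in for the numeric attribute id in datoms and queries
@(d/transact conn
  [{:db/id                 (d/tempid :db.part/db)
    :db/ident              :user/email
    :db/valueType          :db.type/string
    :db/cardinality        :db.cardinality/one
    :db/unique             :db.unique/identity  ; also enables lookup refs
    :db.install/_attribute :db.part/db}])

;; the ident used as an attribute in a query:
(d/q '[:find ?e ?email :where [?e :user/email ?email]] (d/db conn))
```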
A lookup ref is used to identify a specific entity. You can look up an entity using :db/id 124312345123254 (using the entity ID), or with a lookup ref such as :db/id [:user/email "jane@example.com"]. For this to work you need to have the attribute (i.e. :user/email) defined as :db/unique :db.unique/identity in your schema.#2017-01-1310:37karol.adamiecon top of that i would add that lookup ref throws when no entity is found. if one does :db/id 124312345123254 that assumes that id got queried before, and in case it is not in db you get back nil. so you can check ahead of time instead of catching exception...#2017-01-1314:05jdkealyWhere can i find more info on paginating using (d/datoms) ? I read you can return lazy sequences this way.#2017-01-1315:10jdkealyn.m looked at mbrainz sample... this is really an underdocumented way to assist with pagination. very nice#2017-01-1315:13mitchelkuijpers@jdkealy Do you have a link to that example?#2017-01-1315:17jdkealyhere's an example of something i just put together... i import all my entities to elasticsearch#2017-01-1315:18jdkealy
it only seems to work on indexed attributes#2017-01-1315:18mitchelkuijpersAha we do almost the same#2017-01-1315:18mitchelkuijpersBut we do a simple query for, for example, only the db/id and the company/name in our case#2017-01-1315:19mitchelkuijpersand then sort by company/name and then the drop and take#2017-01-1315:19jdkealyjust realizing i should be calling _db and not my function that returns a new db in the function.#2017-01-1315:19jdkealyare you using seek-datoms?#2017-01-1315:19mitchelkuijpersAnd then do a pull on the entities you actually want to show#2017-01-1315:19mitchelkuijpersNo#2017-01-1315:19jdkealyin my case with 3M records, i run out of java heap space when i use the query api#2017-01-1315:20mitchelkuijpersAha that seems like a problem, if you have them in elasticsearch why not serve them from elasticsearch?#2017-01-1315:20jdkealybecause it needs to pull the whole collection... with 2M records (and enough RAM) it would take like a minute per page#2017-01-1315:20jdkealythis is how i import them to elasticsearch#2017-01-1315:20mitchelkuijpersAh ok#2017-01-1315:20jdkealyon create / update, i have a hook to import to ES, but this is for my initial import#2017-01-1315:21jdkealyalso...
probably will run daily just in case something gets out of sync#2017-01-1315:21mitchelkuijpersAh ok, we will also have to reach out to ES in the future, but currently our collections are not bigger than 10,000 luckily..#2017-01-1315:21mitchelkuijpersReally a shame that datomic does not fix this somehow 😞#2017-01-1315:22mitchelkuijpersThank you for the example, seek-datoms seems like something I might need in the future#2017-01-1315:25jdkealyi love ES#2017-01-1315:25jdkealyi wish there was one DB that could do it all though#2017-01-1316:06pesterhazy@jdkealy if you scroll up a bit there was a lengthy discussion on pagination the other day#2017-01-1319:39bballantinePerhaps a dumb question: trying to simply determine if two entity references refer to the same entity in a datomic query. This non-working code shows what I’m trying to do:
(defn same-person? [db p1 p2]
  (d/q '[:find ?p2 .
         :in $ ?p1 ?p2
         :where (= ?p1 ?p2)]
       db p1 p2))
Of course I get: IllegalArgumentException Cannot resolve key: =#2017-01-1319:47bballantine.. and the function can be called like (same-person? ddb [:person/slug "romeo-montague"] [:person/email "<email>"])#2017-01-1319:49marshalltry equal? instead of =#2017-01-1319:52favila= is not a rule. use [(= ?p1 ?p2)] @bballantine#2017-01-1319:52potetmTIL equal? is part of datalog#2017-01-1319:53marshallmy bad; I think I mis-spoke 🙂#2017-01-1319:53potetmAh yeah. It parses, but doesn't do what you would hope.#2017-01-1319:54favila@bballantine be aware that if you do not know ?p1 and ?p2 are both entity ids you may be led astray#2017-01-1319:54potetmYeah I was thinking what @favila said.#2017-01-1319:54marshallindeed, @favila said what I thought 🙂#2017-01-1319:54favila@bballantine If necessary, use datomic.api/entid to normalize, inside or outside of the query#2017-01-1319:55bballantine@favila yeah.. been messing with that.. seems a bit clunky, but this works currently:
(defn same-person? [db p1 p2]
  (d/q '[:find ?p .
         :in $ ?p1 ?p2
         :where [(datomic.api/entid $ ?p1) ?p1-id]
                [(datomic.api/entid $ ?p2) ?p2-id]
                [?p1-id :person/slug ?p]
                [?p2-id :person/slug ?p]]
       db p1 p2))
#2017-01-1319:55marshallhttp://docs.datomic.com/query.html#built-in-expressions#2017-01-1319:56favilathat looks good to me#2017-01-1319:56favilaor you can just do this:#2017-01-1319:56favila(I think, try with lookup refs)#2017-01-1319:57favila[?p1 :person/slug ?p][?p2 :person/slug ?p]
#2017-01-1319:57favilasince you only care about ?p#2017-01-1319:57bballantineYeah.. right, so this works:
(defn same-person? [db p1 p2]
  (d/q '[:find ?p .
         :in $ ?p1 ?p2
         :where [?p1 :person/slug ?p]
                [?p2 :person/slug ?p]]
       db p1 p2))
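The slug join can also be packaged as a reusable rule; a sketch (the rule name and shape are illustrative, assuming :person/slug is a unique attribute):

```clojure
;; a hypothetical rule set; unifying ?slug makes the rule hold only
;; when both entities carry the same (unique) slug value
(def person-rules
  '[[(same-person? ?p1 ?p2)
     [?p1 :person/slug ?slug]
     [?p2 :person/slug ?slug]]])

;; usage: pass the rules in as the % input
(d/q '[:find ?p1 .
       :in $ % ?p1 ?p2
       :where (same-person? ?p1 ?p2)]
     db person-rules p1 p2)
```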
#2017-01-1319:58favila(and I assume slug is unique)#2017-01-1319:58bballantineright#2017-01-1319:59favilaclojure idiom is functions that end in ? return boolean, so you may want to coerce at the end, but that's besides the point#2017-01-1320:00bballantineah, good point#2017-01-1320:00bballantineActually this works and seems more.. direct:
(defn same-person? [db p1 p2]
  (d/q '[:find ?p1-id .
         :in $ ?p1 ?p2
         :where [(datomic.api/entid $ ?p1) ?p1-id]
                [(datomic.api/entid $ ?p2) ?p2-id]
                [(= ?p1-id ?p2-id)]]
       db p1 p2))
#2017-01-1320:00faviladoes not guarantee types, right? these two ids may not be people#2017-01-1320:01bballantineI guess = only works in certain contexts of primitive types?#2017-01-1320:01favila(also this doesn't need a query at all)#2017-01-1320:01favilayour earlier example checked for :person/slug#2017-01-1320:01favilathat guarantees that the entity id represents a person entity, no?#2017-01-1320:02favila(in your domain model)#2017-01-1320:02favilathis is trivially true for e.g. (same-person? 0 0) => true#2017-01-1320:02bballantinere needing a query… it’s actually a part of a bigger query#2017-01-1320:02favilagotcha#2017-01-1320:02favilai figured#2017-01-1320:03favilaI'm still a bit fuzzy on how datalog handles lookup refs and keywords#2017-01-1320:03bballantineI guess to answer your other question, it could be generalized (if it was stand-alone) to be called same-entity.#2017-01-1320:05favila[?p1][?p2][(= ?p1 ?p2)] may work as such a rule, assuming datalog normalizes idents and lookups to entids#2017-01-1320:05favilabut I would test that first#2017-01-1320:06favilaI know it will understand them in the E or A slot (not the V slot ever!) of datalog match clauses#2017-01-1320:07favilabut = treats its arguments as values not relations, so not sure what the actual values will be#2017-01-1320:14bballantine@favila - thanks again. Was just experimenting with the last suggestion. Unless I get the entity-ids out and compare them, the entity refs don’t pass the equivalency check. As you said, it seems = is just treating the entity refs as values. i.e.
[:person/slug "romeo-montague"] is not equal to [:person/email "<email>"]#2017-01-1320:16favilaA good habit is to call d/entid on inputs to queries or at the top of a query on its entity-id-typed arguments in :in that the query will use#2017-01-1320:16favilathen you can write the rest of the query forgetting about this subtlety#2017-01-1320:17bballantineIn the end, Imma go with something like.. Actually might turn it into a rule.
(defn same-entity? [db p1 p2]
  (some?
    (d/q '[:find ?p1-id .
           :in $ ?p1 ?p2
           :where [(datomic.api/entid $ ?p1) ?p1-id]
                  [(datomic.api/entid $ ?p2) ?p2-id]
                  [(= ?p1-id ?p2-id)]]
         db p1 p2)))
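favila's point above that this doesn't need a query at all could be sketched as a plain function over d/entid (a hypothetical helper, not from the thread):

```clojure
;; normalize both inputs (entity id, ident, or lookup ref) to entity
;; ids and compare directly; d/entid returns nil for an unresolvable
;; input, so guard against two nils comparing as equal
(defn same-entity? [db a b]
  (let [ia (d/entid db a)
        ib (d/entid db b)]
    (and (some? ia) (= ia ib))))
```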
#2017-01-1320:17bballantine@favila you can call datomic.api/entid in the :in part?#2017-01-1320:17favilano, I just mean call it on :in args to normalize them#2017-01-1320:18favilalike you do here#2017-01-1320:18favilaso the rest of the query can assume that they are actual ids#2017-01-1320:18bballantineright#2017-01-1320:19favilae.g. of a gotcha where this matters [?e :person/friend ?p1], if ?p1 is not an entid it will never match#2017-01-1320:19bballantinecool#2017-01-1320:19favilabecause datomic won't resolve entity identifiers in the V slot#2017-01-1320:20bballantineok thanks again for the help and info#2017-01-1321:18SoV4in Datomic is it possible to get the "max 9 " of a db? Instead of just the absolute max?#2017-01-1323:00SoV4Man, I could really use some more Pull syntax examples.#2017-01-1323:35SoV4(defn get-blurb-info [bid]
(d/q '[:find [(pull (d/db conn) [*] bid) ...]
:in $
:where
[?bid :blurb/title ?title]
[?bid :blurb/link ?link]
[?bid :author/email ?author]
[?bid :blurb/content ?content]] (d/db conn)))
... is there something really wrong with my Pull syntax?#2017-01-1323:37SoV4I'm just experimenting trying to track down what I've got mixed up... Ideally I'd just like to get all the results possible in this case (without supplying an entity ID)#2017-01-1323:38dominicmYes. There is.#2017-01-1323:38dominicmYou don't need to give (d/db conn) to the pull in the :find#2017-01-1323:39SoV4Hmm. Okay.. but removing that still leaves (pull [*] bid) which is an "invalid-pull" ...#2017-01-1323:42dominicmI think you want (pull [*] ?bid)#2017-01-1323:49favila@sova @dominicm (pull $ ?bid [*])#2017-01-1323:50dominicm@favila I don't think that is right#2017-01-1323:50favilaThe pull in :find clauses uses a different argument order from datomic.api/pull#2017-01-1323:50dominicmYou shouldn't need $#2017-01-1323:50favilawhich is *maddening*#2017-01-1323:50favilayou don't need it, but it accepts it#2017-01-1323:50dominicmBut you're right, (pull [*] ?bid)#2017-01-1323:50dominicmoh really? interesting.#2017-01-1323:51favilaI only just now realized that is not documented#2017-01-1323:52favilaWhat I really want to know is whose bright idea it was to make the arg order inconsistent#2017-01-1323:52favilaI get them mixed up several times a day#2017-01-1400:14SoV4Hmm.. Maybe my setup is at fault, but even conj'ing $ to the arguments list didn't do it.#2017-01-1400:14SoV4Oh!#2017-01-1400:14SoV4Pattern goes Last?#2017-01-1400:15SoV4Wow.#2017-01-1400:15SoV4Thank you very much @favila#2017-01-1400:16SoV4Man. Mind blown.#2017-01-1419:19seantempestaSo, I'd like to monitor the tx-report-queue for changes and react to them, but I'm noticing the attribute keywords are not present. It appears they are being represented by the entity ids that were transacted when creating the schema. Is that correct?
(d/transact conn* [{:db/id (tempid), :test/attr “Abcde”}])
{:db-before datomic.db.Db,
@234dd5c2 :db-after,
datomic.db.Db @d16ab2c7,
:tx-data [#datom[13194139534314 50 #inst"2017-01-14T19:15:46.413-00:00" 13194139534314 true]
#datom[277076930200555 64 "Abcde" 13194139534314 true]],
:tempids {-9223090561879065153 277076930200555}}
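A sketch of consuming the report queue and resolving those numeric attribute ids back to keywords with d/ident (assuming conn is an existing peer connection):

```clojure
(let [queue (d/tx-report-queue conn)]
  (future
    (while true
      ;; each report carries :db-after plus the raw :tx-data datoms
      (let [{:keys [db-after tx-data]} (.take queue)]
        (doseq [[e a v tx added?] tx-data]
          ;; (d/ident db-after a) turns e.g. 64 back into :test/attr
          (println [e (d/ident db-after a) v tx added?]))))))
```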
#2017-01-1419:20seantempestaIn the above example it appears that :test/attr is being represented as 64#2017-01-1419:23potetm@seantempesta Attributes are entities as well!#2017-01-1419:24potetmThose are the entity ids for each attribute.#2017-01-1419:24seantempestaIs there an easy way to look them up? I was hoping calling (.a datom) would do it, but it just returns 64.#2017-01-1419:24potetmYou can access the corresponding keyword for an attribute id via d/ident.#2017-01-1419:25seantempestarad! thanks!#2017-01-1419:25potetmnp!#2017-01-1421:57tcarlsSilly question here -- I'm trying to cut over from Datomic Pro to Datomic Free on a project, to let other developers spin up with less overhead, I'm having some trouble documenting how to get services started up -- with datomic-free-0.9.5544, bin/run -m datomic.peer-server fails saying it can't find datomic/peer_server__init.class or datomic/peer_server.clj, and indeed, I don't see them around in the distributed jars -- what am I missing?#2017-01-1422:04tcarls...hmm; datomic-transactor-pro-0.9.5544.jar contains AOT'd datomic/peer_server, but datomic-transactor-free-0.9.5544.jar doesn't.#2017-01-1422:05tcarls...it's not that new of a release, so I can't see this being an undetected bug...#2017-01-1422:17tcarls...ahh.#2017-01-1422:17tcarlsdidn't grok that the peer is what the client connects to. Now this makes sense. 🙂#2017-01-1423:03SoV4@tcarls did you figure out your issue?#2017-01-1423:03tcarlsYup.#2017-01-1500:16marshall@tcarls Peer server is not included in Datomic Free#2017-01-1500:16tcarlsindeed -- I misunderstood its role.#2017-01-1500:16marshallDatomic Free doesn't support clients#2017-01-1500:17tcarlss/exist/support/#2017-01-1500:17marshallGotcha#2017-01-1500:18tcarls(perhaps "exist to support" would actually have been the better wording? Regardless).#2017-01-1615:55favila@seantempesta yes, :tx-data of tx report queue entries gives you the raw datoms asserted/retracted in the transaction#2017-01-1616:45lellisHi all,
I'm having problems with accents in my fulltext queries (ex: Pelé not found when searching for Pele).
Is it possible to use ASCIIFoldingFilter when the search is fulltext? The API docs say "Fulltext search is constrained by several defaults (which cannot be altered)". Is adding an ASCIIFoldingFilter to the Lucene Analyzer one of these constraints? ty#2017-01-1617:59robert-stuttaford@lellis i think you’re out of luck; that Lucene is used for fulltext search is an implementation detail. the only api you get is the one in datalog.#2017-01-1618:04lellis@robert-stuttaford Yeah, i found some emails in our group where you clarify this! I'm looking for an elasticsearch datomic integration to do these accent-insensitive searches. You know anything that is better? tks#2017-01-1618:12devthsomewhere inside Datomic Map transactions are converted to List form. is this conversion fn accessible?#2017-01-1618:43jdkealyhow can i see how many datoms my database has#2017-01-1618:44jdkealywould this be accurate??? (count (vec (d/seek-datoms (db/_d) :avet )))#2017-01-1618:46eoliphanthey guys question. I have yet another ‘tree walk’ question lol. I have a tree structure, built around say a :ent/child ref attribute. Given an entity, I want to grab children to say any depth. I can do this with the pull api’s recursive specs. But in this case I'd really just want the ids/entities in a list as opposed to the nested maps i’ll get back from pull in this case. and I also need to potentially do some filtering on them. So say, all the :ent/child matches for ?e where :ent/foo equals bar as well. Is there any way to do this in datalog?#2017-01-1618:59favila@eoliphant You are aware of this post where I replied to your original question about tree walking a while ago? https://groups.google.com/d/msg/datomic/_YiBRnBkeOs/0Gd-6lJmDwAJ#2017-01-1619:00eoliphantah crap you did didn’t you lol. I’ve been covering a lot of ground over the past few weeks.
thanks again 😉#2017-01-1619:00favila@eoliphant Your new scenario is different in its particulars but the approach is the same: use a recursive rule#2017-01-1619:01eoliphantyeah and actually rules are where i’m a bit weak and focusing more at the moment so this is perfect#2017-01-1619:02favilaalso notice that you need to know whether you intend to filter children during or after your recursive walk#2017-01-1619:02favilait makes a difference#2017-01-1619:03favilai.e., should the children of non-:ent/foo="bar" be included or not?#2017-01-1619:03eoliphantright, in my case at the moment it would be during#2017-01-1619:03eoliphantright#2017-01-1619:03eoliphantdoes the match failure of a node automatically exclude its children or not#2017-01-1619:04favilato filter during I'm pretty sure you need to include the filter inside the rule#2017-01-1619:04eoliphantand i do have another scenario where it will need to be done ‘after'#2017-01-1619:04eoliphantright#2017-01-1619:04eoliphantthat makes sense#2017-01-1619:05favilato filter after is just another datalog clause appearing after the recursive rule#2017-01-1619:05favila(children ?parent ?child)[?child :ent/foo "bar"]#2017-01-1619:05eoliphantexactly. the results of the tree walk rule just ‘fall’ into the subsequent clauses#2017-01-1619:06eoliphantok gonna give this a whirl 😉#2017-01-1620:45asolovyovI wonder what's the story with fulltext predicate - is it possible to specify tokenizer? If something I want to search is not in English?#2017-01-1620:49stuartsierra@asolovyov: No, fulltext indexing in Datomic is only supported with the default Lucene tokenizer/analyzer. If you have more specific fulltext search requirements, it is recommended to run your own Lucene/Solr/... for fulltext search.#2017-01-1620:50asolovyov@stuartsierra I see..
eh, too bad, I just grew tired of ElasticSearch (because of unpredictable performance and convoluted queries) and thought maybe I should look elsewhere 🙂#2017-01-1620:52stuartsierraDatomic's fulltext is just Lucene under the hood, so the end result is not much different from running Lucene yourself.#2017-01-1620:54asolovyovwell... I do faceted search on top of that, so either I'm doing that by hand, or I should use something which does it for me - and that's ES right now. But could be Datomic, which has a little bit more predictable performance when you're just filtering by field.#2017-01-1621:26pesterhazyalso consider algolia if you can use a saas#2017-01-1621:35andyparsonsI know this has been asked before (apologies): how can I stop the [Datomic Metrics Reporter] DEBUG datomic.process-monitor messages in the repl?#2017-01-1621:54jaret@andyparsons You need to edit logback.xml and turn down the logging level. If it's set to DEBUG it's likely this was changed in the past as I believe logback.xml defaults to INFO. " <logger name="datomic" level="warn" />"#2017-01-1621:55andyparsonsah yes @jaret that makes sense. thanks!#2017-01-1700:48eoliphanthey has anyone looked at datomic as a chat backend? We’re looking at some chat features that need to be pretty tightly integrated with an in-house app. I was thinking that streaming the tx log would make it effectively ‘reactive'
Launching the console, I get Exception in thread "main" java.lang.IncompatibleClassChangeError: Implementing class.
Is it no longer possible to run the console with the free edition ?#2017-01-1714:14mitchelkuijpers@pseud I don't think that was ever possible because the free version does not have a transactor, which is the thing you have to connect the console too#2017-01-1714:20jaret@pseud @mitchelkuijpers You should be able to use console with free and you do have a transactor with free. You have to download console separately and install it into the datomic install directory. I just tested and got the same error so I am thinking we might have broken something with standalone console. I am currently investigating to figure out what exactly broke#2017-01-1714:21jaretFor free you run a free protocol transactor with:
bin/transactor config/samples/free-transactor-template.properties
#2017-01-1714:24mitchelkuijpers@jaret Did not know that, sorry#2017-01-1714:26jaretIt can definitely be confusing because of the free protocol, but as long as you aren't using an in MEM db you can connect console to it#2017-01-1714:26jaretor should I say, you should be able to connect console to it 😉#2017-01-1714:29pseudYea, that was my understanding from reading the guides too.
I'm guessing the problem now stems from some compiled code which console depends on being different between free & pro bundles.
I wonder if console shouldn't just be bundled in with free these days, given its availability to free users also? Why the extra hoops to jump? I'm already committing to learning a new query language, using a proprietary closed-source, one-off database - seems like that's enough commitment.
I know that comes off as a bit mean, but understand that leadership adopts a technology, and other employees (such as I) are forced to come to terms with it. Jumping through hoops to approach feature-parity (in terms of dev tooling) only breeds resentment.#2017-01-1714:36jaretI understand where you are coming from. I would suggest that you register for Starter edition (it's free for registered users). It's identical to PRO in all respects and comes bundled with console. I think that would get you going and working.#2017-01-1715:05Matt ButlerIs there a simple way to clear the object cache for benchmarking sake?#2017-01-1715:11Matt ButlerAnd on an unrelated note is it possible to limit a many cardinality ref relationship in datalog, so that the where clause only applies to the newest?
[?entity :likes ?items]
[?items :attr val]
Does the limit expression in the pull API guarantee any kind of order newest/oldest first? If so, can this be manipulated?#2017-01-1717:03jfntnFor integration purposes we're looking at using squuids as unique identities for all our entities (about 20 now), but I'm not fully grasping the respective trade-offs of reusing a global :db/uuid attribute vs. a per-entity :<entity>/uuid attribute. Concerned about impact on performance and indexing I guess?#2017-01-1717:37seantempestaIs there a way to get all datoms associated with an entity? I’m trying to send a filtered version of my database down to a datascript client. My schema has a clear separation of data at the :organization entity level (all datoms pointing from there can be sent to the client).#2017-01-1717:41lellis@seantempesta Dont know if is your case, take a look at touch method? http://docs.datomic.com/entities.html#2017-01-1717:49seantempesta@lellis: Interesting. So (d/entity (d/db conn) 17592186047972) and then traverse that for a list of :db/id, calling (d/datoms (d/db conn) :eavt %) on each entity id?#2017-01-1717:51rauh@seantempesta I usually just do pull and then send that data to the client, datascript will also be fine with nested maps, given a proper schema. So pretty simple overall#2017-01-1717:52seantempesta@rauh: Sorry. Not sure if I follow. Do you have the relevant code snippet?#2017-01-1717:52rauhBut if you want more magic, then I agree with lellis: Walk the entity along and send that map.#2017-01-1717:53rauh(d/pull eid [:post/id :post/title {:post/comments [:comment/id]}] [:post/id post-id]) e.g.#2017-01-1717:54rauhbut you have to manually opt in for every entity, no magic involved. Which might be what you want in case you store secrets in the db in the future#2017-01-1717:54lellis@seantempesta no, i mean (d/touch (d/entity (d/db conn) 17592186047972))
will initialize all entity attributes.#2017-01-1717:54rauhOf course you can also just do * for the pull and also do recursion in the pattern#2017-01-1717:55seantempestaDon’t I need to get the individual datoms to transact on the datascript side?#2017-01-1717:55rauhNope, nested maps is just fine.#2017-01-1717:56rauhGiven the right schema of course.#2017-01-1717:57seantempestaOkay. I’ll try that. Thanks!#2017-01-1718:22seantempesta@rauh: Okay, so transit wasn’t able to encode the entity map. java.lang.Exception: Not supported: class datomic.query.EntityMap. Am I missing a step?#2017-01-1718:23rauh@seantempesta A pull shouldn't return an entity#2017-01-1718:24seantempestaOh right. I was still using the d/entity#2017-01-1718:25rauh@seantempesta You can try this code: https://github.com/zalf-lsa/berest-castra-service/blob/997023a9c4000ef4870c0f59802afbc990bb3c2d/src/de/zalf/berest/web/castra/api.clj#2017-01-1719:40jdkealyhow would i do something like the following: I have a permission that might be tied to a user and a user has an email, or a permission is "pending" and would have an email associated directly.
https://gist.github.com/jdkealy/50b1e64f27f5973054b0ce99c313de95#2017-01-1719:41jdkealyTrying (or ) gives me the error
Assert failed: All clauses in 'or' must use same set of vars, had
[#{?email ?user ?e} #{?email ?user}] (apply = uvs)#2017-01-1720:27favilaI think you meant to put [?e :permission/user ?user] outside the or? @jdkealy#2017-01-1720:40jdkealythat still doesn't seem to work either... same error
(d/q '[:find ?e ?email
:in $ ?ent
:where
[?e :permission/ent ?ent]
[?e :permission/user ?user]
(or
[?user :user/email ?email]
[?e :permission/email ?email])]
_db id)#2017-01-1720:41jdkealyalso i think even if that did work, none of the permission/email would return because they don't have a permission/user @favila#2017-01-1720:42favilaDon't you mean [?user :permission/email ?email]? that's what you had before#2017-01-1720:43jdkealysorry, it's... either [?permission :permission/email ?email] or [?user :user/email ?email]#2017-01-1720:44jdkealybut i was just calling ?permission ?e just to make things confusing 🙂#2017-01-1720:45favila[:find ?e ?email
:in $ ?ent
:where
[?perm :permission/ent ?ent]
(or-join [?perm ?email]
[?perm :permission/email ?email]
(and
[?perm :permission/user ?user]
[?user :user/email ?email]))]#2017-01-1720:46favilaIs that what you mean?#2017-01-1720:49jdkealyhmm maybe... lemme try#2017-01-1720:50jdkealylooks right#2017-01-1720:50favilathe critical thing is or-join, resolves the problem that was giving you an error#2017-01-1720:51favilaby default or tries to unify all its vars with the surroundings, but you had different vars in each half of the or, so it could not do that#2017-01-1720:52favilaor-join restricts the unification with what is outside the clause to only the vars mentioned, essentially allowing ?user to act as a kind of private variable only unified within its clause#2017-01-1721:02jdkealyawesome that works thanks @favila#2017-01-1721:29zaneIs there a reason why or-join clauses only allow you to pass in a single source?#2017-01-1721:42tmortenHello all! I have an interesting problem...I'm using Pedestal/Datomic combination and when I use retractEntity function to retract an entity all of my other data clears on the site until (I believe) I run a pull at a very top level entity. I get a new "db" for every request so I'm not sure why it actually takes "pull" on a top level entity to get the data to come back...#2017-01-1722:38SoV4@tmorten hmm... not knowing much about pedestal, how does it merge in new data when you get it?#2017-01-1722:43tmorten@sova: I have an interceptor wrapper per request that injects a new "(d/db conn)" into my request chain. So essentially, every page refresh I would get a new datomic db...what is odd is that, I experience this at the REPL too... I retract the entity...get a new db per (d/db conn) and begin to query the db...which I am left with ONLY entity IDs and no other associations. Doesn't come back until I (d/pull ... a top level entity). For what it is worth, I am retracting an entity that was linked to that entity...#2017-01-1722:45SoV4Hmm.. Would you mind posting the lines you are using to retract? I'm interested in getting to the root of your issue... 
will read a bit about pedestal#2017-01-1722:47SoV4@tmorten ... follow-up question, So when you say "top level entity" what do you mean?#2017-01-1722:47SoV4because in my use with Datomic all the entities just kinda live in a big ... entity ocean#2017-01-1722:57tmortenYeah, sorry. By top level I mean it has association to entity I am retracting. In a one many assoc#2017-01-1723:03tmortenhttps://bitbucket.org/snippets/tylermorten/M7997#2017-01-1723:12SoV4looks good to me.. let me check it against my typical code...#2017-01-1723:12SoV4Hmm so for my retracts#2017-01-1723:12SoV4they look like this#2017-01-1723:13SoV4(defn remove-blurb [bid]
  (let [blurb-info (get-blurb-by-bid bid)
        b-title (:title (first blurb-info))
        b-content (:content (first blurb-info))
        b-author (:publisher (first (get-publisher-email bid)))]
    (d/transact conn [[:db/retract bid :blurb/title b-title]
                      [:db/retract bid :blurb/content b-content]
                      [:db/retract bid :author/email b-author]])))
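For comparison, :db.fn/retractEntity retracts the whole entity in one step, including datoms in other entities that reference it; a sketch:

```clojure
;; retracts every datom whose E or V is this entity, so a parent
;; ref such as :event/bids pointing at bid is retracted as well
(d/transact conn [[:db.fn/retractEntity bid]])
```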
#2017-01-1723:16SoV4So maybe you are losing attributes because you're not specifying on which attribute+val it should remove. Do you know more about the entity you want to retract? If you do, you could supply the :werds/ident and the "value" @tmorten#2017-01-1723:42tmortenPerhaps I need to remove the entity ID from the association also? Does datomic automatically remove that ID from the list if you use the retractEntity FN?#2017-01-1800:00tmortenIt would I think, as it would also retract the link => event/_bids for instance...#2017-01-1800:00tmortenwhen I say remove, I mean "retract" 🙂#2017-01-1800:55SoV4@tmorten Ah are you providing the entity IDs yourself? I have not done it that way. I have simply provided the name and the value and I let Datomic resolve which EID it is.#2017-01-1801:14tmortenNo, I'm not. I'm using the db.fn/retractEntity however.#2017-01-1801:14tmortenJust wondering if that will break the "ref" association from the other entity...#2017-01-1802:01tmortenI found a bug in some of my logic that was causing the issue. Thanks for your help, though, @sova#2017-01-1802:11peterromfeldhi, is it possible to query when a :db/id was created? == tx of :db/id#2017-01-1803:06SoV4anytime @tmorten . glad you got it resolved. so db.fn/retractEntity just needs the eid? that is nice.#2017-01-1803:15tmortenCorrect. It is very nice#2017-01-1805:28peterromfeldi guess not since :db/id is just one part of the index (entity) and you can't transact a db/id on its own i think, so only with the rest A/V/op you have a tx.. well i can always use [?e _ _ ?tx]#2017-01-1809:16andrethehunterIs it possible to update multiple entities in a single transaction? something like:
(d/transact conn {:db/id [123 126] :my-entity/value "YAY"})
#2017-01-1810:16robert-stuttaford@andrethehunter not the way you’ve declared it. (d/transact conn [{:db/id 123 :my-entity/value "YAY"} {:db/id 126 :my-entity/value "YAY"}])#2017-01-1811:37andrethehunter@robert-stuttaford what if the entity ids are not known and need to be found? a "nested" query. something like: (d/transact conn {:db/id [[:find ?e :where [?e :my-entity/value "Nay"]] :my-entity/value "YAY"})#2017-01-1811:38andrethehunterthe only way I've found was to create a datomic function via db/fn and then use that in two transactions#2017-01-1811:38andrethehunterthis seemed overly complicated and verbose#2017-01-1811:38andrethehunteris there a simpler way?#2017-01-1811:57robert-stuttafordwhat problem are you trying to solve by doing it this way, @andrethehunter ?#2017-01-1811:57robert-stuttaforddo you need to find and update it inside a consistent transaction?#2017-01-1811:58robert-stuttafordif not, surely (let [id (d/q …)] @(d/transact conn [{:db/id id …}])) is sufficient?#2017-01-1814:46tmorten@andrethehunter: if you don't know the entity id, then you could also use the lookup ref (hopefully you have another identifier in your entity). #2017-01-1815:01pesterhazy@andrethehunter I'd also be curious what you mean by "need to be found". Maybe you could give an example?#2017-01-1815:12tmortenMy approach has been to generally ignore entity IDs and use lookup refs in their place. I'm guessing most people do the same?#2017-01-1815:15stuartsierraLookup refs are generally easy to work with.
There's one place you can't use a lookup ref: In the same transaction that created the entity you're looking up.#2017-01-1815:16karol.adamiec@tmorten i sway to the opposite, lookup refs throw, finding entity id myself gives me finer control over what happens#2017-01-1815:17karol.adamiecbut maybe someone could elaborate on pros and cons?#2017-01-1815:21pesterhazyI agree with @tmorten, I always use lookup refs#2017-01-1815:21pesterhazyand treat entity ids as a sort of implementation detail#2017-01-1815:23stuartsierraBoth have their uses. Most Datomic API functions will accept either an Entity ID or a lookup ref.#2017-01-1816:33Matt ButlerSorry to repeat my questions, unable to find any answers online 🙂
Is there a simple way to clear the object cache for benchmarking sake? (Does releasing the connection work?)
And on an unrelated note is it possible to limit a many cardinality ref relationship in datalog, so that the where clause only applies to the newest for example?#2017-01-1816:36marshall@mbutler to clear the object cache i would recommend restarting your JVM#2017-01-1816:37marshall@mbutler cardinality/many attributes don’t have any sense of ‘order’, they’re based on set semantics, so if you need to identify one element of the set you’d need to do it based on a secondary attribute of the referenced entity#2017-01-1816:39favila@mbutler (d/shutdown false) may also work#2017-01-1816:39Matt ButlerAwesome, thanks 👍#2017-01-1816:42Matt ButlerI’ll have to think more about how to construct my query then @marshall maybe ask for the :db/ids of them all and get the first largest somehow. Alternatively may be better to flip the query on its head and traverse the ref in the other direction.#2017-01-1816:42Matt ButlerThanks again 🙂#2017-01-1821:34erichmondhey everyone, order of magnitude performance question. if we wanted to update a couple of hundred entities, modifying 2-3 attributes per entity, I assume that update would happen in the ms range, not the s range, right?#2017-01-1907:08val_waeselynck@erichmond are you worried about latency or throughput? Keep in mind that there's a part of latency that does not depend on the number of updates, that is the Peer-Transactor roundtrip#2017-01-1907:09val_waeselynckThis one should be between 10 and 100 ms I'd say, depending on your network I guess#2017-01-1912:01dominicmIf I'm only interested in all entities with a certain attribute (and not its value), will AVET be quicker, or AEVT? Or will they be exactly the same?#2017-01-1912:38erichmond@val_waeselynck Yeah, I meant non-network latency.
Thanks for the answer!#2017-01-1913:42karol.adamiechmm, as i work on seed data for the DB, i realized it is just data, so instead of manually crafting, amending the dataset or contorting regex… i can just define it in repl and then map over it and grab result and put into conformity norm file. works great with two caveats:
1) ordering and formatting is disturbed, but i can live with that…
2) reading in #db/id[:db.part/user] evals to #db/id[:db.part/user -1020063]. And that one worries me.
Any ideas how to get around reader macro expansion?#2017-01-1913:45val_waeselynck@karol.adamiec not sure I understand what you're trying to do, but regarding your questions:
1) disturbed compared to what?
2) what's the problem with that?#2017-01-1913:46karol.adamiecignore 1, just my keys are out of order, and formatting is not nice like handcrafted.#2017-01-1913:46karol.adamiecabout 2 well, i have a lot of conformity norms#2017-01-1913:46karol.adamiecand i do process them independently#2017-01-1913:46karol.adamiecit worries me that there might be conflicts?#2017-01-1913:47karol.adamiecbut saying that aloud i realize it is tempids, scoped per transaction#2017-01-1913:47karol.adamiecso it is probably fine?#2017-01-1913:47val_waeselynckas long as you don't have more than 1M tempids per transaction, you should be okay 🙂#2017-01-1913:48karol.adamiecso just to put to bed my worries, the magical numbers in conformity norms are absolutely fine, no risks whatsoever… ?#2017-01-1913:49val_waeselynckmagical number likes -1020063 you mean?#2017-01-1913:49karol.adamieci am 99% sure of that, but a confirmation would be cool 😄#2017-01-1913:49karol.adamiecyes#2017-01-1913:50val_waeselynckoh, you mean that they appear in your edn file, correct?#2017-01-1913:50karol.adamiecyeah, handcrafted norms are nice and tidy, automated ones do include nasty -10234 identifers#2017-01-1913:51karol.adamiecnice one is
{:db/id #db/id[:db.part/user]
:price/currency 1000
:price/country :GB}
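As an aside on the tempid discussion: newer Datomic releases also accept plain string tempids, which sidestep the #db/id reader-macro expansion entirely, so REPL-generated seed files print cleanly. A minimal sketch (attribute names reused from the example above; peer API assumed):

```clojure
;; classic reader-macro style: #db/id expands to a negative tempid at read time
;; {:db/id #db/id[:db.part/user]
;;  :price/currency 1000
;;  :price/country  :GB}

;; string-tempid style: the string survives printing and re-reading unchanged,
;; so a mapped-over transaction round-trips without picking up -1020063-style ids
(def seed-tx
  [{:db/id          "price-gb" ; any string, scoped to this transaction
    :price/currency 1000
    :price/country  :GB}])
```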
#2017-01-1913:51val_waeselynckI see, weird indeed#2017-01-1913:51karol.adamiecafter going through repl mapping :
[{:db/id #db/id[:db.part/user -1020062], :price/currency 1000, :price/country :GB}]#2017-01-1913:52karol.adamieci then grab the value and paste into file#2017-01-1913:52stuartsierraJust edit out the numbers in #db/id, as long as they aren't used twice.#2017-01-1913:53karol.adamiec@stuartsierra but am i right assuming that as long as the numbers are unique in a transaction they will not do harm? other than visual nastiness?#2017-01-1913:54stuartsierraI can't see how they would hurt.#2017-01-1913:54karol.adamiec:+1:#2017-01-1913:56stuartsierraIn fact, with the latest Datomic releases, you don't even need :db/id.#2017-01-1913:58stuartsierrahttp://blog.datomic.com/2016/11/datomic-update-client-api-unlimited.html#2017-01-1914:00karol.adamiecha#2017-01-1914:00karol.adamiecneed to upgrade then 🙂#2017-01-1916:13jfntnIs it ok to configure the transactor’s host to its external ip and the alt-host to the loopback interface?#2017-01-1916:14jfntnHad it the other way around at first, but the peer was printing error messages when starting, I figured by swapping them it’d try the publicly accessible one first, which worked#2017-01-1916:42karol.adamiechow can i find what is version of transactor running? i suspect i have older version on AWS than what i specified in my automation scripts… ;/#2017-01-1916:45karol.adamiecActiveMQNotConnectedException AMQ119007: Cannot connect to server(s). Tried with all available servers. 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:799)#2017-01-1916:46karol.adamiecdowngrading to peer library 0.9.5394 helps to alleviate the problem#2017-01-1917:07marshallThe transactor logs show version during startup#2017-01-1917:08marshallwhen you upgrade the transactor (assuming you’re using our cloudformation scripts), you need to specify the new version in the template CF file then run ensure-cf again before starting the stack#2017-01-1917:08marshallto regenerate the configuration json with the updated version ##2017-01-1917:18karol.adamiecyes, i located the s3 logs. version is not what i expected 🙂. thx#2017-01-1917:24karol.adamiecwhole problem was stemming from the fact that update to autoscaling group did not bring down old transactor… once i bring it down by hand , new one started correctly facepalm#2017-01-1917:47marshallah#2017-01-1917:48marshallwhen I upgrade I tend to stand up a whole new stack (2 transactors) with a new name (i.e. alternate between stackName-left and stackName-right)#2017-01-1917:48marshallthen once the new set are up and i see metrics from them i remove the entire old stack#2017-01-1917:48marshallso then you end up with 2 new txor instances running upgraded version#2017-01-1918:14souenzzoThere is some how to seek datoms by a (pull) pattern?#2017-01-1918:24stuartsierra@souenzzo No, d/pull only supports navigation among entities through :db.type/ref attributes.#2017-01-1918:29souenzzoSo @stuartsierra, is there any (simple) way to turn (nested)maps into EAV?#2017-01-1918:32stuartsierra@souenzzo It's not part of the Datomic API. It's not that hard to write a recursive function to do it. If you want to use Datomic query (`d/q`) over collections of maps, you could transact them into a temporary, in-memory Datomic database.#2017-01-1918:33devthsurely there's a fn somewhere inside of Datomic that does it. is it not possible to run internal fns? 
(or is there some licensing issue?)#2017-01-1918:33devth(I also need to do this - convert map form tx to list)#2017-01-1919:48favila@devth @souenzzo There are some subtleties here. E.g., should tx function invocations be expanded; you need the db to access attribute schema; are you ok that the db might change; do you want to eagerly resolve lookup refs or not; should string tempids be converted to numeric tempids; should we auto-create tempids if they're missing (with partition inference)#2017-01-1919:49favilaso there's unlikely to be one function that meets all needs#2017-01-1919:49favilaYou need more knobs#2017-01-1919:50devthso datomic is internally making those decisions when you call transact with a list of maps right?#2017-01-1919:51favilayes, but it only cares about the final expansion, and lots of that can remain internal impl details#2017-01-1919:51favilaand it does it atomically--so no worry about schema changing (for e.g.)#2017-01-1919:51devthah, right#2017-01-1919:52favilaso the input contract of this function is set, but the output is not#2017-01-1919:52favilamaking it a proper public api function would require setting the output contract, and there can be significant variation in what people want#2017-01-1919:52favilathis is probably why it is not exposed#2017-01-1919:54favilathat said, you can write a very simple (but naive) map to datom expander if you don't need to handle the full range of possible input#2017-01-1919:54favilaimplicit and string ids are harder cases#2017-01-1919:55favilaand if you are disciplined about value types you can even do it without a db#2017-01-1919:55favilae.g.
assume a vector value is a lookup ref not multiple cardinality-many values#2017-01-1919:56devth(or assume the opposite)#2017-01-1919:57devthpossibly a need for an open src lib that exposes and implements all the knobs#2017-01-1919:58favilachallenge accepted 🙂#2017-01-1919:58favilaI've written many variations that just did part of it just to get work done#2017-01-1919:58favilaI suppose I should try to merge them all together into a do-all fn with knobs#2017-01-1919:59favilaI've never dealt with the new string-id and implicit-id features though#2017-01-1920:00devthsounds awesome. out of curiosity, what are potential use cases for needing tx to be in List form rather than Map? mine is authorization of transactions.#2017-01-1920:01favilawe wanted to break up big txes semi-automatically, but that requires extra cross-tx tempid tracking#2017-01-1920:02favilalist form facilitates this but does not provide it by itself#2017-01-1920:02stuartsierra@devn @favila For example, https://github.com/stuartsierra/mapgraph is a trivial "database" that flattens nested maps. https://github.com/tonsky/datascript must contain a similar procedure.#2017-01-1920:52jfntn@stuartsierra that looks great, been meaning to implement something like mapgraph for a while now#2017-01-1920:56jfntnHave you considered an api where the graph implements the associative interfaces so you could just use clojure.core/update-in et al and it’d automatically follow the links?#2017-01-1921:00stuartsierra@jfntn No. That would greatly increase the scope of the library. The point was to make something simple. 
update-in isn't really needed — you can pull the entity you want, update it like an ordinary map, and add it back into the graph.#2017-01-1921:02stuartsierraI considered implementing an interface similar to Datomic's d/entity, but even that is probably more complexity than I want to deal with.#2017-01-1921:09jfntnRight I was mentioning update-in because we have use cases where we want to perform updates at a path deep in the graph without incurring the cost of a normalization round-trip#2017-01-1921:13stuartsierraI'm sure you could write something for that specific use case. It wouldn't work generally because entity references may be inside collections.#2017-01-2011:54limistWhen running datomic/bin/maven-install I saw this error message toward the end; is it anything to worry about?
[WARNING] Some problems were encountered while building the effective model for com.datomic:datomic-pro:jar:0.9.5544
[WARNING] 'dependencies.dependency.exclusions.exclusion.artifactId' for com.datastax.cassandra:cassandra-driver-core:jar with value '*' does not match a valid id pattern. @ line 125, column 18
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
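Returning to the earlier thread on expanding map-form transactions into list form: a deliberately naive sketch of the kind of expander favila describes, with no db access and none of the hard cases (nested maps, lookup refs, reverse or cardinality-many attributes); map->adds is a made-up name, not a Datomic API:

```clojure
(defn map->adds
  "Naively expand one map-form tx entry into [:db/add e a v] tuples.
  Assumes :db/id is present and every value is a plain scalar."
  [m]
  (let [e (:db/id m)]
    (for [[a v] (dissoc m :db/id)]
      [:db/add e a v])))

(map->adds {:db/id "u1" :user/name "Ada" :user/email "ada@example.com"})
;; => ([:db/add "u1" :user/name "Ada"]
;;     [:db/add "u1" :user/email "ada@example.com"])
```

All of the knobs from the discussion above (tx-fn expansion, tempid creation, lookup-ref resolution) would layer on top of something like this.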
#2017-01-2021:10bbssI'm getting namespace 'datomic.client.config' not found trying to require datomic.client. My dependency is [com.datomic/clj-client "0.8.606"] and can't really find any results on Google.#2017-01-2021:14jaretWhat does your require look like?#2017-01-2021:14jaret(require '[clojure.core.async :refer (<!!)]
'[datomic.client :as client])
worked for me#2017-01-2021:14bbss(ns scraper.core
(:gen-class)
(:require [alandipert.enduro :as e]
[mount.core :refer [defstate start stop]]
[perseverance.core :as p]
[clojure.core.async :as async]
[datomic.client :as client]
[scraper.cljc.configurations :refer [get-actions-creating-fn]]
[scraper.crawling.core :refer [get-driver handle-actions]]
[scraper.reporting :refer [report send-report]]
[scraper.xiaomi.core :as xiaomi]
[scraper.apple-appstore :as apple]
[scraper.common.api-query :refer [run-through-store]]
))
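For reference, jaret's working require extended into a minimal channel-based query sketch; the connect parameters here are setup-specific assumptions (they must match the -a and -p flags passed to the peer server), not canonical values:

```clojure
(require '[clojure.core.async :refer [<!!]]
         '[datomic.client :as client])

;; access-key/secret are whatever was given to the peer server's -a flag;
;; the values below are placeholders
(def conn
  (<!! (client/connect {:db-name    "hello"
                        :endpoint   "localhost:8998"
                        :access-key "myaccesskey"
                        :secret     "mysecret"})))

;; client fns return core.async channels, hence <!!; client/db is assumed
;; to behave like the other channel-returning client fns here
(<!! (client/q conn {:query '[:find ?e :where [?e :db/ident :db/doc]]
                     :args  [(<!! (client/db conn))]}))
```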
#2017-01-2021:15bbssBut just trying that out doesn't work either.#2017-01-2021:16jaretAre you using boot or leiningen?#2017-01-2021:16bbsslein#2017-01-2021:16marshallhave you tried it without AOT?#2017-01-2021:16bbsslet me try#2017-01-2021:17bbssRemoving :gen-class didn't seem to work either, no 😞.#2017-01-2021:21bbssI tried removing ~/.m2/com/datomic too.#2017-01-2021:40matthavenerHas transacting a map with a reverse attribute always worked? E.g. {:db/id “somethingnew” :parent/_children [:parent/name “foo”]} ?#2017-01-2110:21ezmiller77Hello, I’m facing a tech choice problem that I can’t solve and wonder if people here have some thoughts. I want to build a server that can serve text (markdown) files edited in a text editor. These files would represent individual research notes or some kind of longer text, essay or “blog”. But I want the server to also be able to serve versioning information about these texts. I started by trying to use git as a database for this, but it felt like I was using a tool really meant for something else. Trying to write a sql-like language for git seemed wrong. Then I thought about datomic, which has versioning, and is just really interesting on any number of levels. It’s almost perfect. BUT: I would need to write some sort of editor interface to save changes etc, which is something that I’m really trying to avoid. Does anyone here have any thoughts about what tech I should choose for this? I’ve wondered if there is a way to automatically port updates tracked by git into datomic.
If anyone has any thoughts, I’d be much obliged.#2017-01-2119:01ezmiller77Is there a way to put the datomic peer server command in the system path, or what is the appropriate way to run the peer-server when using it for a project?#2017-01-2119:01ezmiller77It seems odd to link the datomic /bin directory to the path, given that the command to run it is just the very general run.#2017-01-2211:06dominicmThere is a importer of git to datomic, by the datomic team, might be of interest#2017-01-2211:45ezmiller77@dominicm interesting. do you know where I can find some info about it?#2017-01-2211:47pesterhazyhttps://github.com/Datomic/codeq#2017-01-2214:33ezmiller77Wow. What are the use cases for this kind of thing?#2017-01-2216:39witekHi, I have a conceptual question. When I want to change an attribute, why do I have to db/retract and then db/add? Why isn't there a db/set?#2017-01-2217:25pesterhazyyou don't have to retract for :db.cardinality/one attribute#2017-01-2217:25pesterhazyif you add a new value, the old value is automatically retracted#2017-01-2218:15zalkyAnyone have any idea what would be the most efficient way to return, given a cardinality many attribute, only those entities for which there is actually more than one value?#2017-01-2218:29zalkyThe best I can think of is something like:
(d/q '[:find ?a ?b
:where
[?a :some/attribute ?b]
[?a :some/attribute ?c]
[(not= ?b ?c)]]
db)
Is there anything better?#2017-01-2218:31pesterhazyuse (count) and filter out those a count less than two?#2017-01-2219:18ezmiller77When I try to run ensure-transactor with an aws ddb configuration like this datomic ensure-transactor transactor.properties ensured-transactor.properties, I get a bizarre error:
> java.io.FileNotFoundException: transactor.properties (No such file or directory)#2017-01-2219:18ezmiller77I even opened permissions on the config file all the way up in case for some reason this was a permissions problem.#2017-01-2219:18ezmiller77No cigar.#2017-01-2219:36marshall@ezmiller77 you need to run it from the root of the Datomic distribution. So something like bin/datomic ensure-transactor config/my.properties #2017-01-2219:37ezmiller77Huh, interesting. That connects to a larger confusion i have about where datomic is supposed to sit in my system.#2017-01-2219:37marshallWhere config/my.properties is the path to your actual properties file#2017-01-2219:38ezmiller77So when running datomic commands they need to be run from within the root directory of the unzipped package?#2017-01-2219:39marshallYep#2017-01-2219:39marshallFor the most part anyway#2017-01-2219:39ezmiller77Does that include commands that run, say, a datomic transactor?#2017-01-2219:39marshallYes#2017-01-2219:40ezmiller77Hmmm#2017-01-2219:40marshallTransactor is usually started via bin/transactor -yourJVMFlags path/to/properties.file#2017-01-2219:41ezmiller77Do most people place a unique instance of datomic then directly in their project path?#2017-01-2219:41marshallNone of this is strictly required, but the included startup scripts that do things like set your classpath and such more or less assume that is what you're doing #2017-01-2219:42marshallNot really. You only need the distribution for running the transactor. You use the peer or client dependency via maven or lein or whatever for your actual app#2017-01-2219:49ezmiller77Huh. Okay. I’m still trying to understand all these different “services” if that’s the right word.#2017-01-2219:50ezmiller77I’m just trying to build out a little project that would serve “notes”, like research notes, from a datomic server. 
Can you suggest which service I should try to use?#2017-01-2219:50ezmiller77I was under the impression that in order to setup a peer server that used the file system locally, I need to configure a transactor, and then it seemed necessary to configure a storage service as well…#2017-01-2219:51marshallIf you want to use clients then yes you'll need a transactor and a peer server#2017-01-2219:51marshallIf you're running it all locally you can use the dev storage protocol for local disk storage#2017-01-2219:52ezmiller77“clients” here refers to what exactly?#2017-01-2219:52marshallDev isn't supported for production use, but is fine for running locally#2017-01-2219:52marshallThe Datomic Client library#2017-01-2219:52marshallAs opposed to the Datomic Peer library#2017-01-2219:53marshallhttp://docs.datomic.com/clients-and-peers.html
#2017-01-2219:53ezmiller77Right okay. Saw that doc.#2017-01-2219:53marshallReviewing this might help too http://docs.datomic.com/architecture.html#2017-01-2219:55ezmiller77Okay thanks.#2017-01-2220:45val_waeselynck@ezmiller77 regarding your tech choice question: I would advise you not to expect too much of the 'time travel' features of Datomic as a tool for versioning. In my experience, you usually don't want to query history in your application code, because you have very little control over history (example: imagine you want to import several dated revisions of a document at once. You could not do that if the way you store revisions is using Datomic history).#2017-01-2220:47val_waeselynckIf I were you, I'd just store all the revisions on cheap storage like s3. If you really need to save space, you could store a base document along with a list of edits, and recombine them using a diffing library.#2017-01-2220:50val_waeselynckIn summary: there are tons of very good reasons for choosing Datomic (one of them being that it's fundamentally more sound than most alternatives) but your versioning use case isn't one IMHO.#2017-01-2221:02val_waeselynck@zalky if you're after efficiency, you'll probably want to perform an AEVT traversal instead of a Datalog query#2017-01-2221:07val_waeselynck(->> (d/datoms db :aevt :my/attribute) seq (partition-by :e) (filter #(> (count %) 1)) (map #(-> % first :e)))#2017-01-2221:11val_waeselynck(Have not tried this code, but you get the idea)#2017-01-2221:23ezmiller77@val_waeselynck I'm less interested in loading multiple versions of a document at once, than I am in having access to those versions, and perhaps also in the diff that I think datomic provides…#2017-01-2221:26ezmiller77I’m interested, though, in what you mean by one having “very little control over history”. 
Is it not possible to query the revision history of an entity?#2017-01-2221:26ezmiller77In any case, what’s proving most frustrating about datomic is the challenge of setup and the inscrutability of the documentation.#2017-01-2221:27ezmiller77I’ve been trying to get a setup working all day with very little progress. Quite frustrating.#2017-01-2221:29val_waeselynckDatomic does not give you diffs. You provide writes, and it gives you an aggregated view of those writes. It's the opposite of Git.#2017-01-2221:49ezmiller77@val_waeselynck huh. Well I hope to have a sense of what you mean when I can get it running.#2017-01-2221:56val_waeselynck@ezmiller77 what kind of setup are you aiming for exactly? Setting up Datomic for production can be a bit of a hassle, but setting up for development should not be too hard#2017-01-2221:56val_waeselynckYou can even start coding with just the in-memory storage#2017-01-2222:00ezmiller77Yeah, the only thing I’ve been able to get running is the in memory peer-server.#2017-01-2222:02ezmiller77I think the main thing that may be confusing me is the variety of services and combinations thereof, combined with a confusion about what a basic (even dev) project setup might look like in, say, leiningen with clojure (which is what I’m working with).#2017-01-2222:04ezmiller77What I think I understand is that one runs some sort of datomic database using a peer-server or a client, and that process runs outside the project. Then, depending on what kind of db process one runs, you connect to it/interface with it in different ways.#2017-01-2222:05zalky@val_waeselynck , that's an interesting approach, I've never really tried traversing the index directly that way. I will give that a shot thanks!#2017-01-2222:06ezmiller77I tried to get a dynamo-db local service running by: 1) installing dynamodb-local, 2) running ensure-transactor on a properties file setup for ddb-local, and then 3) running a transactor on that properties file. 
But then I started getting an error “Unable to load license key”.#2017-01-2222:07ezmiller77That’s as far as I was able to get, and had to waste a bunch of time just to get there because it wasn’t apparent that the ensure-transactor needed to be run in the datomic root.#2017-01-2222:09marshall@ezmiller77 for dev work and exploration I'd definitely suggest using a dev transactor.
Put your starter license key in the dev transactor properties sample included under config/samples and start the transactor with something like bin/transactor config/samples/dev.properties#2017-01-2222:10marshallThen you can connect via peer with a uri like: datomic:dev://localhost:4334/mydbname#2017-01-2222:11marshallYou could also start a peer server against the dev transactor with the approach here: http://docs.datomic.com/peer-server.html
#2017-01-2222:12marshallDetails on starting dev transactor can be found here http://docs.datomic.com/run-transactor.html
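Pulling marshall's steps together as one sketch (peer API; the URI and db name are placeholders for a dev transactor on its default port):

```clojure
;; 1. from the Datomic distribution root (shell):
;;      bin/transactor config/samples/dev.properties
;; 2. then from a peer, e.g. bin/repl:
(require '[datomic.api :as d])

(def uri "datomic:dev://localhost:4334/mydbname") ; placeholder db name

(d/create-database uri)    ; true if the db was newly created
(def conn (d/connect uri))
(d/db conn)                ; the current database value
```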
#2017-01-2222:19ezmiller77@marshall i will give it a shot. i guess i shied away from dev transactor for not knowing what that really means. what is it using as storage etc… but at this point if i can get up and running that’d be great.#2017-01-2222:20marshallIt uses an internal H2 db for local disk storage#2017-01-2222:20marshallSo it is persistent on your hard drive#2017-01-2222:21marshallNot for production work, but totally appropriate for development work. You can also backup and restore to and from dev just like any other storage#2017-01-2222:23ezmiller77@marshall when i run with that file i also get this “Unable to load license key” message.#2017-01-2222:24marshallDid you put your Starter license key in the file?#2017-01-2222:24ezmiller77I put the key there that I have from my http://my.datomic.com page.#2017-01-2222:24marshallHrm. Directly copied from the email? #2017-01-2222:24marshallShould be like 6 lines#2017-01-2222:25ezmiller77Oh no.#2017-01-2222:25marshallYeah the license key comes via email#2017-01-2222:25marshallThe key on the site is a download key for getting the distro via curl#2017-01-2222:25ezmiller77That must be it. Thanks!#2017-01-2222:25ezmiller77Stupid me.#2017-01-2222:26marshallSure#2017-01-2222:26marshallNah, lots of keys and creds. You're definitely not the first to miss it#2017-01-2222:29ezmiller77Nevertheless, much appreciated.#2017-01-2222:38ezmiller77@marshall so I’ve run the transactor. the next step I think is to run a peer-server. i tried that using this: bin/run -m datomic.peer-server -p 8998 -a datemo,datemo -d datemo,datomic:. Now I get an error regarding the db itself I think:
> Exception in thread "main" java.lang.RuntimeException: Could not find datemo in catalog#2017-01-2222:38marshallYeah, you need to create the db from a peer first#2017-01-2222:39ezmiller77Huh.#2017-01-2222:39marshallRun bin/repl#2017-01-2222:39ezmiller77But isn’t this server the peer?#2017-01-2222:39ezmiller77I mean this peer-server?#2017-01-2222:39marshallHeh. This is admittedly a bit confusing#2017-01-2222:39marshallPeer server can only serve existing databases#2017-01-2222:40marshallIt doesn't create them #2017-01-2222:40ezmiller77That makes sense I guess.#2017-01-2222:40marshallYou should run bin/repl#2017-01-2222:40ezmiller77ahhh so the repl is kind of like running mysql...#2017-01-2222:40ezmiller77and the manually creating a db#2017-01-2222:41marshallThen:
(require '[datomic.api :as d])
(d/create-database "datomic:dev://localhost:4334/datemo")
#2017-01-2222:42marshallafter that you should be able to close the repl and connect your peer server#2017-01-2222:43ezmiller77bingo!#2017-01-2222:44ezmiller77Much much thanks!#2017-01-2222:44marshallNp#2017-01-2222:44marshallI'll see about improving the docs around this stuff #2017-01-2222:50ezmiller77I might write up a blog post or something to sum up this process, maybe that can help fill in some of the blanks. I find it’s usually just missing steps especially where stuff isn’t intuitive. In this case, I think the potential for confusion is higher because there are so many different pieces (by design of course).#2017-01-2222:52ezmiller77I watched the day-of-datomic training videos and one had this picture with the db services all decomposed, and that looked cool on a high level, but I realized that that was also why I hadn’t gotten anything running yet.#2017-01-2222:53marshallWell it also doesn't help that those videos were from before clients and peer server existed:)#2017-01-2222:54ezmiller77Yes, that too.#2017-01-2222:55ezmiller77Yeah, I partly got into this mess because I wanted to be able to play with the console to get a feel for the query language and the structure of the db visually. 
But discovered that that wouldn't work with the in memory db peer-server.#2017-01-2222:56marshallRight, console requires a storage backed db#2017-01-2223:02ezmiller77@val_waeselynck is there an easy test that you can think of whereby I could see the limitation vis-à-vis history to which you were referring?#2017-01-2223:05val_waeselynckTry to edit history in any way :)#2017-01-2223:05ezmiller77I’m not sure what that means…#2017-01-2223:06ezmiller77I thought that in datomic you don’t edit history because you accumulate new information.#2017-01-2223:07val_waeselynckSure, but that's not really the same as having several versions of a document.#2017-01-2223:08val_waeselynckBut the fact that history cannot be edited is the reason that it should not be used to implement first class notions of your domain model#2017-01-2223:10val_waeselynckIf that's not clear, you should go ahead and try implementing versions using history, the limitations of that approach will soon become evident to you#2017-01-2223:12ezmiller77Well. That sounds ominous!#2017-01-2223:12ezmiller77Haha#2017-01-2223:13val_waeselynckOne situation where querying past dbs is especially problematic is when you need to evolve your schema#2017-01-2223:13ezmiller77Admittedly, I can’t really follow the meaning of what you wrote. I think I’m not keyed into the lingo well enough yet. So that “first class notions of your domain model” sounds like a foreign language for the most part.#2017-01-2223:14val_waeselynckThere has to be a mailing list thread where someone has expressed this more clearly :)#2017-01-2223:15ezmiller77My assumption/hope was that I could simply provide text (which would just be markdown) and then be able to rewind somehow to see previous versions.#2017-01-2223:16ezmiller77And have the power of a db to be able to query at the same time. 
Git gives me much of what I need, but it’s not queryable really.#2017-01-2223:17ezmiller77My understanding was that with datomic you can go back in time, and also query a history.#2017-01-2223:17ezmiller77Something perhaps like this example in the tutorial code in the docs:
(require '[clojure.pprint :as pp])
(def db-hist (client/history db))
(->> (<!! (client/q
conn
{:query '[:find ?tx ?sku ?val ?op
:where
[?inv :inv/count ?val ?tx ?op]
[?inv :inv/sku ?sku]]
:args [db-hist]}))
(sort-by first)
(pp/pprint))
=> ([13194139534399 "SKU-21" 7 true]
[13194139534399 "SKU-42" 100 true]
[13194139534399 "SKU-22" 7 true]
[13194139534400 "SKU-22" 7 false]
[13194139534402 "SKU-42" 1000 true]
[13194139534402 "SKU-42" 100 false])
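The same idea with the peer API, for comparison with the client-API snippet above (db is assumed to come from an existing connection; attribute names reused from the tutorial):

```clojure
(require '[datomic.api :as d])

;; db is a current database value, e.g. (d/db conn)
(def hist (d/history db)) ; includes retractions as well as assertions

;; every value :inv/count has ever held, with tx id and added? flag
(d/q '[:find ?tx ?val ?op
       :where [?inv :inv/count ?val ?tx ?op]]
     hist)

;; a point-in-time view instead of the full history:
;; (d/q some-query (d/as-of db tx-id-or-instant))
```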
#2017-01-2223:19val_waeselynckThe way to do that would be to simply store all the versions as different entities with a timestamp attribute#2017-01-2223:21ezmiller77Hmmm.#2017-01-2302:56drewverleecan an entity have attributes with a different namespace?
[<e-id> <attribute> ...]
[1 :person/name]
[1 :movie/title]#2017-01-2302:57drewverleedoes the "/" in attribute carry any meaning?#2017-01-2304:41podviaznikovI started getting strange error while deleting entity. Message says "java.lang.IllegalArgumentException: :db.error/reset-tx-instant You can set :db/txInstant only on the current transaction.”
My delete function looks like this: (defn delete-entity [conn entity-id person-id]
(t/info "delete-entity" entity-id person-id)
(d/transact conn [[:db.fn/retractEntity entity-id]]))
Not sure what is going on and what that error means. Any tips? I was able to find only one google search with this error and that one didn’t help#2017-01-2305:12andrethehunter@drewverlee yes, an entity can have attributes from multiple namespaces, it has no special meaning#2017-01-2305:12andrethehunternamespaces do help keep things organised so I’d recommend you don’t mix them#2017-01-2305:17andrethehunterDoes anyone know if there’s a way for a database function to return/execute multiple transactions?#2017-01-2305:21andrethehunterI’m doing a data migration in a database function and it needs to do a :db/add for the entity and have a transaction#2017-01-2305:23andrethehunterso (datomic/transact conn [[:db/add id :attribute value]{:db/id #db/id[:db.part/tx] :tx-attribute value2}]) for each entity#2017-01-2312:15Matt ButlerIs there a way of running a 'migration function', with either vanilla datomic or conformity? In conformity you might add a migration that says all my users have :user/first-name attributes. But what is the best way to then iterate over all my users and transact that new data for each user?#2017-01-2313:19pesterhazy@mbutler I've needed a "data migration" like that as well#2017-01-2313:20pesterhazyI'd just build my own "migrate" tool that does both schema migrations and data migrations#2017-01-2313:20pesterhazyno need to make a transactor fn for that IMO#2017-01-2313:36Matt Butler@pesterhazy Thanks, that does appear the best answer atm. If anyone else has some experience of this let me know 🙂#2017-01-2313:50marshall@podviaznikov Can you provide more info about the entity ID you’re passing to the function? 
I would expect that error if, for instance, you tried to call retractEntity on a transaction eid#2017-01-2313:54pesterhazy@mbutler I do this all the time with sql migrations - I don't like any of the existing frameworks and it's so simple to build#2017-01-2316:11marshallDatomic 0.9.5554 is now available https://groups.google.com/d/topic/datomic/d74excL8JaI/discussion#2017-01-2320:47drewverleeis it possible to use datalog without datomic?#2017-01-2320:53podviaznikov@marshall I’ll double check if it was regular entity id. I think it was, but will check. Thanks!#2017-01-2321:14ejelomehi, can I get some enlightenment on how datomic sends and receives data (esp. does it store data or is it just a layer over a database)? for example, my old understanding of Client-Server is:
Client (e.g. mobile) -> sends data (e.g. JSON) over HTTP -> Server (e.g. Liberator/Pedestal?) -> stores it to Database (e.g. Datomic)
... is this wrong? or something is missing in the flow?#2017-01-2322:16favilaIn datomic, there are three concepts: storage, transactor, peer#2017-01-2322:17favilastorage is where all the data is, mostly key-value of blob values#2017-01-2322:17favilapeers read out of storage directly#2017-01-2322:18favilain storage is the address of a transactor (which is the only process allowed to write to storage)#2017-01-2322:19favilapeers see this address and connect to a transactor. The transactor gives peers a live stream of writes to datomic (the tx-queue), and also accepts transactions to write.#2017-01-2322:21favila@ejelome That is a rough summary of datomic's system architecture#2017-01-2322:26ejelomehey, awesome summary @favila, so in that case, datomic has its own storage#2017-01-2322:27favila@ejelome well, it has its own expectation for how its keys and values are laid out, but it does not have its own storage service#2017-01-2322:27favila@ejelome it is parasitic on some other tech for storage, e.g. dynamodb, a sql database, cassandra, etc#2017-01-2322:28favilabut the only thing it needs is transactional update, blob storage, and retrieval by a key#2017-01-2322:29favilathe dev/free transactor uses an embedded h2 database (sql)#2017-01-2322:30ejelomeahhh, so it just says how to store and retrieve data but the actual storage is a separate layer#2017-01-2322:31favilanot only storage but networked retrieval#2017-01-2322:31favilae.g. if storage is sql, peers make a sql query (select * from datomic_kvs where id="xxx") against storage to perform "reads"#2017-01-2322:32favilatransactors use an insert/update to perform writes#2017-01-2322:32ejelomeoh, so it kind of has pre-made way to deal with kind of database you'll store/retrieve the data#2017-01-2322:33favilaIt's not something you control#2017-01-2322:34ejelomeI would assume it's differently handled on diff databases, e.g. rdbms is diff. to nosql is diff. 
to <insert another kind of db>#2017-01-2322:35favilayes, that is true. but they're all used as key-value stores, there's not that much difference#2017-01-2322:35ejelomeok, now I understand better, but I want to ask two more important questions#2017-01-2322:36ejelome? <-> datomic <-> ?#2017-01-2322:36ejelomebasically, the in between#2017-01-2322:37favilahttp://docs.datomic.com/architecture.html#2017-01-2322:37favilaAre you aware of the diagram on this page?#2017-01-2322:37ejelomeoh, no XD#2017-01-2322:38ejelomewait a sec I'll read#2017-01-2322:43ejelomegreat! this answered one of the two question (storage):
http://docs.datomic.com/storage.html#outline-container-1#2017-01-2322:46ejelomeit's a bit daunting to understand tbh#2017-01-2322:47ejelomeso then, knowing that it just provisions a database, does datomic also need some sort of way to handle incoming traffic? e.g. liberator/pedestal?#2017-01-2322:48ejelomeor i can handle it on its own#2017-01-2322:49favilawhat kind of traffic?#2017-01-2322:49ejelomelooking at the chart: app <-> client <-> http+transit <-> server
the client is liberator/pedestal?#2017-01-2322:49favilano#2017-01-2322:50favilathese are the arch of datomic itself#2017-01-2322:50favilaeverything else is in the circle "App"#2017-01-2322:50favilathat's where your code is that "uses" datomic as a db and which doesn't care about datomic's arch#2017-01-2322:52ejelomeI'm actually confused with the app and client because I usually think of the app as the client itself#2017-01-2322:52favilae.g. if you have a traditional CRUD webapp, you have client, server, and db. swap out DB with something else (e.g. mysql vs datomic), the arch remains the same#2017-01-2322:52favilaYour web server is a client to the db#2017-01-2322:52favilathe browser is a client to the web server#2017-01-2322:52favila"client" and "server" are relative/relational, not absolute terms#2017-01-2322:53favilawhere the "client" is depends on your perspective#2017-01-2322:55ejelomeso something like: cljs <-> liberator/pedestal <-> datomic#2017-01-2322:56favilafor a full web app, sure#2017-01-2322:56ejelomesorry favilla, too dumb about this stack#2017-01-2322:56favilajs/html <-> php <-> mysql is an exact analogy#2017-01-2322:57favilathere are two clients and two servers#2017-01-2322:57ejelomeoh, I was limiting myself with the jargon (e.g. client as strictly front-end), that's why#2017-01-2323:00ejelomeok, going with the big picture: client 1: app (cljs) <-> client 2: server (liberator/pedestal) <-> db (datomic)#2017-01-2323:01ejelomeoh nevermind, I'm crammed up#2017-01-2323:02ejelomethat should be better I think#2017-01-2323:11ejelomehey @favila, thanks for the patience, really appreciate it, a bit slow to learn the architecture and its flow but I'm finally getting it,
have a good dawn 😄#2017-01-2409:21abhir00pI am new to datomic and I was trying to understand how altering schema works in datomic?#2017-01-2409:21abhir00pI came across this in the documentation
Because Datomic maintains a single set of physical indexes, and supports query across time, a db value utilizes the single schema associated with its basis, i.e. before any call to asOf/since/history, for all views of that db (including asOf/since/history). Thus traveling back in time does not take the working schema back in time, as the infrastructure to support it may no longer exist. Many alterations are backwards compatible - any nuances are detailed separately below.
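For reference, the backwards-compatible alteration the quoted docs have in mind is simply transacting a new attribute. A minimal sketch, assuming an existing connection `conn` and using map-form schema (Datomic 0.9.5530+); the attribute name is hypothetical:

```clojure
;; sketch: growing the schema by adding a new attribute
;; :person/nickname is a hypothetical attribute, conn an existing connection
(require '[datomic.api :as d])

@(d/transact conn
   [{:db/ident       :person/nickname
     :db/valueType   :db.type/string
     :db/cardinality :db.cardinality/one
     :db/doc         "Added later; older entities simply lack this attribute."}])
```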
#2017-01-2409:22abhir00pGiven this Thus traveling back in time does not take the working schema back in time, as the infrastructure to support it may no longer exist.#2017-01-2409:22abhir00pHow do people using it handle schema changes in production?#2017-01-2409:25pesterhazywell what kind of schema changes are you thinking of?#2017-01-2409:26pesterhazyusually what you'll do is to add an attribute or add an index to an existing attribute#2017-01-2409:26pesterhazyremember the datomic schema is very lightweight#2017-01-2409:28chrisblom@abhir00p generally by avoiding non backward compatible schema changes#2017-01-2409:30chrisblomin my experience most schema changes are are just adding new attributes#2017-01-2409:30chrisblomfor breaking changes is often a good idea to use a new attribute, and convert the old data to the use the new attribute#2017-01-2409:31chrisblomDoes anyone know id its possible to specify the partition when using string temp ids?#2017-01-2409:32abhir00pNeed to read a bit more to understand this.#2017-01-2411:29karol.adamiechello, what is the idiomatic way in datomic go get a monotonically increasing number? i need it for constructing order ids and stuff like that. Transaction number?#2017-01-2411:30pesterhazy@karol.adamiec you need a transactor fn#2017-01-2411:32karol.adamiechoped to avoid that 🙂#2017-01-2411:34karol.adamieci am looking for something that is in datomic already that i can piggyback off. the numbers do not need to be in order, just uniqe and legible ie on the phone… guids out 🙂#2017-01-2411:34pesterhazyhere's something that you can use#2017-01-2411:34pesterhazy{:db/id (d/tempid :db.part/user),
:db/fn (d/function '{:lang :clojure, :imports [], :requires [], :params [db eid],
:code "(let [new-number (d/q ' [:find (max ?number) . :in $ :where [_ :my.order/number ?number]] db)] [{:db/id eid, :my.order/number (inc (or new-number 0))}])"})
:db/ident :my.order/new-order}#2017-01-2411:34pesterhazyassuming your order number attr is :my.order/number, you can generate a new order using [:my.order/new-order]#2017-01-2411:35karol.adamiecwell looks exactly like a thing i need#2017-01-2411:35karol.adamiec🙂#2017-01-2411:35karol.adamiecthank You 😄#2017-01-2411:35pesterhazynp#2017-01-2411:35pesterhazyit's not ideal btw#2017-01-2411:36pesterhazyI wish there were default db.fn's like that, e.g. :db.fn/auto-increment#2017-01-2411:37karol.adamiecbtw: hos does one manage clojure db functions? is that an issue?#2017-01-2411:37pesterhazymigrations#2017-01-2411:37karol.adamieci can put a function in my conformity schema, but are there any practices, workflow if that code needs to evolve?#2017-01-2411:37pesterhazyif the code evolves, a new migration 🙂#2017-01-2411:38karol.adamiecyeah so the flow is develop a new version in repl, when happy do the migration#2017-01-2411:38karol.adamiec?#2017-01-2411:38pesterhazyright#2017-01-2411:38karol.adamiecsounds good enough 🙂#2017-01-2411:38pesterhazyit's just a function really#2017-01-2411:39karol.adamiecwell yes, but passed in as string ==> which means copy and paste 😄, triggers alarms 😄#2017-01-2411:40karol.adamiecanyway good enough for today 🙂#2017-01-2411:40pesterhazyyou can just pass it as a quoted data structure#2017-01-2411:40pesterhazyinstead of a string#2017-01-2411:40karol.adamiecha, that is way nicer#2017-01-2411:41karol.adamiec:+1:#2017-01-2411:41pesterhazyit's a string here because I got it from my db 🙂#2017-01-2411:41karol.adamiecperfect, will brush up on the api today 🙂#2017-01-2411:44pesterhazyfor exploration you can also call a fn installed in your db: http://docs.datomic.com/clojure/#datomic.api/invoke#2017-01-2411:45karol.adamiecha, looks like a great way for an ‘integration' test#2017-01-2411:47pesterhazy(d/invoke (rdb) :my.order/new-order (rdb) (d/tempid :db.part/user))
[{:db/id #db/id[:db.part/user -1000004], :my.order/number 16256}]#2017-01-2413:23val_waeselynck@abhir00p re: migrations, please tell me if this helps https://vvvvalvalval.github.io/posts/2016-07-24-datomic-web-app-a-practical-guide.html#2017-01-2414:13jaret@Chrisbloom To your question on specifying the partition when using string temp ids. The string temp id will use the partition that is configured as the default. So you can specify what partition all string temp ids will use, but not on a per transaction basis.
http://docs.datomic.com/transactions.html#default-partition#2017-01-2414:14jaret@chrisblom ^#2017-01-2414:14chrisblom@jaret thanks, looks like i'll be using good 'ol tempids then#2017-01-2416:26alqvistWhat am I doing wrong here? https://pastebin.com/28vSPhTd#2017-01-2416:26alqvistThe query works in datomic console#2017-01-2416:28alqvistAnd I can use the same repl to transact successfully#2017-01-2416:36marshall@alqvist are you passing a database that has had that schema transacted?#2017-01-2416:45alqvist@marshall Yes, tried it just before#2017-01-2416:46alqvist(d/transact conn (read-string (slurp "schema.edn")))#2017-01-2416:46alqvistand console reflects the schema#2017-01-2416:48marshallcan you provide the full query you’re issuing?#2017-01-2416:49alqvistIt is in the previous link#2017-01-2416:49alqvistBut this also fails `(d/q '[:find ?e ?doc
:where
[?e :db/doc ?doc]])`#2017-01-2416:50alqvistwell..#2017-01-2416:50alqvistthat is by design..#2017-01-2416:50marshallwhat args are you passing?#2017-01-2416:50alqvistmy last query is working#2017-01-2416:50alqvistbut not the linked one#2017-01-2416:51marshallthe last one ends after the query string - it needs to be passed a database#2017-01-2416:51alqvistYea, caught that#2017-01-2416:51alqvistthis is from my link (d/q '[:find ?org-name
:where
[?e :alarm/org-name ?org-name]]#2017-01-2416:52alqvistoh, sorry#2017-01-2416:52alqvist(d/q '[:find ?org-name
:where
[?e :alarm/org-name ?org-name]]
db)#2017-01-2416:52marshalltry this:#2017-01-2416:52marshall(d/q '[:find ?org-name
:where
[?e :alarm/org-name ?org-name]] (d/db conn))#2017-01-2416:52marshallI’m guessing you def’d db before transacting your schema#2017-01-2416:53marshallso it is an immutable value of the database from before schema was added#2017-01-2416:53marshallyou need to get a new value of the db (with (d/db conn))#2017-01-2416:53marshallto pass to your query#2017-01-2416:54alqvistAhhhh.... That makes perfect sense#2017-01-2416:54alqvistThanks for your help!#2017-01-2417:02marshallno problem#2017-01-2420:48curtosiswhat are folks currently using for schema/migrations etc these days?#2017-01-2420:54jaret@curtosis I think Conformity is what I see most often around schema/migration discussions.#2017-01-2420:54jarethttp://yellerapp.com/posts/2015-03-09-datomic-migrations.html#2017-01-2420:54jaretThat's an older experience report from Yeller#2017-01-2420:55jaretThere are others out there. Generations to name one#2017-01-2420:55jarethttps://github.com/ilshad/generations#2017-01-2420:56apseyHi, regarding s3 log rotation, what does cognitect mean with time-status-reached on:
* Better naming convention for logrotation:
`{bucket}/{system-root}/{status}/{time-status-reached}`,
where status is "active" or "standby".
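Returning to alqvist's empty-result query above: the fix marshall describes is to query a database value captured *after* the schema transaction. A minimal sketch, assuming `uri` and a `schema` edn file:

```clojure
;; sketch: a db value is an immutable snapshot; re-acquire it after
;; transacting schema (uri and schema.edn are assumptions)
(require '[datomic.api :as d])

(def conn (d/connect uri))
@(d/transact conn (read-string (slurp "schema.edn")))

(d/q '[:find ?org-name
       :where [?e :alarm/org-name ?org-name]]
     (d/db conn)) ; fresh value of the db, includes the new schema
```

A `db` def'd before the transact would still answer queries against the pre-schema basis, which is exactly the empty result alqvist saw.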
#2017-01-2420:56curtosisthanks… yeah, I remembered Conformity but it was a while ago.#2017-01-2420:57curtosis(cue typical clojure complaint: not sure if library abandoned or just very stable)#2017-01-2421:09robert-stuttafordhappy to announce that https://github.com/Cognician/datomic-doc is now open source!#2017-01-2421:11jaret@robert-stuttaford awesome! This looks great!#2017-01-2421:12robert-stuttafordrad 🙂 hope others find it useful!#2017-01-2421:17robert-stuttafordi must admit that spec is probably overkill for parsing options, but it was just so much fun to do https://github.com/Cognician/datomic-doc/blob/master/src/cognician/datomic_doc/options.clj#2017-01-2509:45val_waeselynckWhat EC2 instance type would you recommend for running a Datomic Peer ? (use case: a 4G-RAM, OLTP web server)#2017-01-2509:46val_waeselynckI've been running mine on t2.medium so far but I don't know how to balance compute, memory and I/O specifically for a Peer#2017-01-2509:46val_waeselyncknote that I do mean a Peer, not a Peer Server#2017-01-2510:09robert-stuttafordwe’re on c4.xlarge#2017-01-2517:01karol.adamieci have a query like :
[:find (pull ?tx [:db/txInstant]) (pull ?e [* {:order/items [* {:item/part [*]}]}])
:in $
:where [?e :order/number _ ?tx true]]
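The query above returns `[tx-pull order-pull]` tuples; one way to get the txInstant "merged" into each entity (the map-after-querying approach pesterhazy mentions) is plain post-processing on the peer. A sketch; `:order/tx-instant` is a hypothetical in-memory key, not a schema attribute:

```clojure
;; sketch: fold each [tx entity] tuple from the query above into one map
(defn merge-tx-instant [tuples]
  (for [[tx order] tuples]
    (assoc order :order/tx-instant (:db/txInstant tx))))
```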
#2017-01-2517:02karol.adamiecworks fine but returns array of tuples [instant, order-object]#2017-01-2517:03karol.adamiechow can i rewrite the query so the txinstant is ‘merged’ into the entity?#2017-01-2517:11pesterhazyyou can pull in a where clause#2017-01-2517:12pesterhazybut not sure if that's worth the trouble (instead I just map after querying)#2017-01-2517:16karol.adamiechow would a pull in where look like? sounds new to me...#2017-01-2517:17karol.adamiecmap definitely sounds easier though 🙂#2017-01-2519:29marshall@ezmiller77 I’ve added a new page to the docs that covers getting a local dev setup running:
http://docs.datomic.com/dev-setup.html#2017-01-2520:45marshallGiven the recent discussion about schema lifecycle in here, I wanted to mention the blog post from Stu Halloway from today:
http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html#2017-01-2521:39dominicmI'd love to know, under Stu's recommendations, how I can break up a field?
I have a field whose values are currently stored as edn strings and converted as the entity is read. Now those attributes need querying, so I want to deprecate that field. But how can I ever (from a query position) trust that field is entirely gone? Currently we do things like get all X, read their edn and filter (goodbye, optimizations from datomic!)
Must I always keep this code around, which is extremely unperformant, in order to never break backwards compatibility?
https://gist.github.com/jdkealy/174741e33b88b09b66e6f0281e3cd6ca
I saw this google groups question which was pretty similar to mine and the answer seemed pretty complex, showing yes, you can have optional arguments, like optional attribute names in a datomic query https://groups.google.com/forum/#!topic/datomic/20hHmzXK3PE but my query above has a certain structure to it and yes, the gist above works, but it's really unnecessarily duplicating code there.#2017-01-2603:28val_waeselynck@dominicm why do you want the field to be gone?#2017-01-2607:35dominicm@val_waeselynck it's incredibly unperformant to query details about that field.#2017-01-2609:05caspercAnyone know what the best way to count the total number of datoms in a database is?#2017-01-2609:06caspercIs it output via some metric in the transactor logs?#2017-01-2609:08tengWe are using the pull syntax to retrieve nested data structures (quite big ones sometimes). Now we want to add auditing, so that for every value that pull returns we also want to retrieve who was adding the fact and when. We are free to model the schema to solve this problem, because we are not in production yet. We need a simple solution and we own all the code that is writing to the database. It seems like the pull syntax does not have support to also retrieve the tx-id for every attribute, for example not only first-name and last-name but also first-name-tx-id and last-name-tx-id, which would otherwise solve our problem, because then we could use the log database.#2017-01-2609:50val_waeselynck@dominicm so why do you query it? You can just stop referring to this attribute in your queries, right ?#2017-01-2609:51dominicm@val_waeselynck it seems to me under the terms of Stu's recommendations (never destroy a field), queries must maintain references to deprecated attributes. Or am I misreading?#2017-01-2609:52val_waeselynckI don't think that's what he meant. 
You don't delete a deprecated attribute from the schema, but you can totally stop writing it and querying it.#2017-01-2609:54val_waeselynck@robert-stuttaford may I ask what drove your choice to compute-optimized vs general-purpose or memory-optimized?#2017-01-2610:14pesterhazyWe see this occasionally when a nightly job tries to open a datomic connection: HornetQException[errorType=INTERNAL_ERROR message=HQ119001: Failed to create session#2017-01-2610:15pesterhazyWhat does this message mean?#2017-01-2610:16robert-stuttaford@val_waeselynck we don’t do any of our own caching; we generate html afresh for every request. thank you Datomic peer library.#2017-01-2610:27dominicm@val_waeselynck if the intention is to grow schema & never break, then no longer writing to the old attribute seems like it would break existing queries which are looking for new data based on that field?#2017-01-2612:00erichmondDoes anyone have a good solid example of adding additional data to transactions?#2017-01-2612:00erichmondAll the examples online (including datomic docs) seem to have murky or incomplete examples (another acceptable answer is that I am a moron :D)#2017-01-2612:10danielstockton@erichmond Have you seen the reified transactions talk? http://www.datomic.com/videos.html#2017-01-2612:11erichmondwatching now! thanks !#2017-01-2612:43robert-stuttaford@erichmond tl;dr: {:db/id “datomic.tx” :your/thing “here”}#2017-01-2614:09jdkealy@casperc i have the same question. 
let me know if you find the answer!#2017-01-2614:22val_waeselynck@dominicm, well I'm assuming that if it's deprecated then at some point you actually stop using it 🙂#2017-01-2614:24dominicm@val_waeselynck I think I'm struggling to tease out when "stop using it" is different from removing it, in cases where you're no longer adding entities with that field, but you will still be breaking old programs, as they can no longer get-latest-xyz#2017-01-2614:24robert-stuttaford@casperc have you tried simply (count (seq (d/datoms db :eavt))) ?#2017-01-2614:26val_waeselynck@dominicm ah, so there are some clients that you don't control which are accessing a field, and you'd like to change the implementation of reading from that field?#2017-01-2614:27dominicm@val_waeselynck This is slightly into territory of theory now I'll admit. But if you control all clients, is the only reason to "grow" schema is to minimize impact of refactorings?#2017-01-2614:27jaret@casperc In addition to Roberts approach you can get a count of datoms in the datoms metric line in the log.#2017-01-2614:27val_waeselynck@dominicm ease of deployment and developer environment management is also a concern.#2017-01-2614:29dominicm@val_waeselynck not sure how growth plays into those?#2017-01-2614:29val_waeselynck@robert-stuttaford I'm sorry, I think I'm missing a step in your reasoning here.#2017-01-2614:30val_waeselynckcould you elaborate please?#2017-01-2614:31robert-stuttaford@val_waeselynck well, remember that we have mature apps that Do Lots Of Stuff for a fair number of users. we’re using java apps that love to make threads, so we decided to give the jvm plenty of compute to handle that#2017-01-2614:31robert-stuttafordhaving said that, we were on t2.mediums for a good long while#2017-01-2614:32val_waeselynck@robert-stuttaford I see. 
Funny, I would have thought a Datomic Peer would have more pressure on memory and IO, what with the Object Cache and the loading of segments#2017-01-2614:32robert-stuttafordwe haven’t got any of our own caching code (e.g. for html view fragments; we’ve been leaning on the fact that Datomic caches datoms in the peer). so, trading paying for cpu over writing and maintaining and working with custom caching code#2017-01-2614:33robert-stuttafordkeeps our code flexible and our ability to reason less burdened#2017-01-2614:33robert-stuttafordat the cost of more AWS bill 🙂#2017-01-2614:33val_waeselynckWe do use our own caching for aggregations, but I guess you guys have Onyx for that 🙂#2017-01-2614:33robert-stuttafordyeah#2017-01-2614:38casperc@jaret: I have looked in the transactor log, but I am not finding it. Can you give an example of what I should be looking for?#2017-01-2616:06jaret@casperc Datoms gets reported when a new index is adopted. It will look like this:#2017-01-2616:06jaret2017-01-12 16:44:24.642 INFO default datomic.update - {:IndexSegments 477, :FulltextSegments 71, :Datoms 1113239, :IndexDatoms 2228416, :event :update/adopted-index, :id "my-test-db", :pid 8009, :tid 12}
#2017-01-2616:09drewverleeDoes datalog support any notion of GroupBY?#2017-01-2616:19casperc@jaret: And Datoms is the one I want to watch out for, so that it is not bigger than 10 billion?#2017-01-2616:19caspercOr is it IndexDatoms?#2017-01-2616:21jaretYeah you'll want to watch Datoms. If you think you are approaching 10 billion we should arrange a call to discuss.#2017-01-2616:22jaret@drewverlee Datalog does not have internal support for aggregation. However in datomic you can control grouping via :with#2017-01-2616:22jarethttp://docs.datomic.com/query.html#with#2017-01-2616:25casperc@jaret: Thanks!#2017-01-2616:27drewverlee@jaret thanks!#2017-01-2616:28casperc@jaret: Last question: I have been looking for documentation on the events in the datomic log, but have come up with nothing. Do you have some around that I am not finding?#2017-01-2616:28pesterhazySorry to repeat this, does anyone know under what conditions this message is shown (datomic pro)? HornetQException[errorType=INTERNAL_ERROR message=HQ119001: Failed to create session#2017-01-2616:31jaret@casperc I am sure you found the monitoring section of the docs. http://docs.datomic.com/monitoring.html What events are you specifically talking about?#2017-01-2616:31jaret@pesterhazy is this in the peer?#2017-01-2616:31pesterhazy@jaret, yes, we get this occasionally when it tries to connect#2017-01-2616:32pesterhazyin periodic nightly jenkins jobs#2017-01-2616:32casperc@jaret: Well taking the :update/adopted-index event as an example. Some explanation of what it means would be useful.#2017-01-2616:38jaret@pesterhazy I would want to see what is going on with the transactor at the time of these exceptions. 
A metrics report before and after if nothing stands out around the time stamp#2017-01-2616:38jaretHQ exceptions are a bit general and might not indicate a real problem as long as HQ eventually connects#2017-01-2616:38jaretfor instance you would get these exceptions if you ran out of memory#2017-01-2616:40pesterhazygood point, I'll check the transactor's log on s3#2017-01-2616:42erichmond@robert-stuttaford haha thanks!#2017-01-2616:44jaret@casperc I agree I think this would be a useful addition to the documentation. I am going to look through the events we report and see if we can create a table in the docs with a basic definition. I know the rule of thumb was to make the name as self evident as possible so the adopted-index event for instance, represents the adoption of a new index after indexing has completed.#2017-01-2616:55pesterhazy@jaret, in the transactor log I see a couple of warnings like this around the time of the connection failure#2017-01-2617:02abhir00p@val_waeselynck Nice post. Wanted to ask
As we've seen, adding an attribute (the equivalent of adding a column or table in SQL) is straightforward. You can just reinstall your whole schema at deployment time. Same thing for database functions.
When we add an attribute, how do we ensure type safety when querying the same attribute at an older time instant, when the attribute did not exist? I assume the application code must be full of null checks? Am I right in thinking so? And
Modifying an attribute (e.g changing the type of :person/id from :db.type/uuid to :db.type/string) is more problematic, and I suggest you do your best to avoid it. Try to get your schema right in the first place; experiment with it in the in-memory connection before committing it to durable storage. If you have committed it already, consider versioning the attribute (e.g :person.v2/id).
Isn’t there any better approach to this rather than just trying to avoid it?#2017-01-2618:04vinnyataidebin\run -m datomic.peer-server -p 8998 -a obviouslynotmykey,mysecret -d firstdb,datomic:
is there anything wrong in this command?
there's this error happening
In: [:db "firstdb" 1] val: nil fails spec: :datomic.peer-server/non-empty-string at: [:db 1] predicate: string?
In: [:auth "obviouslynotmykey" 1] val: nil fails spec: :datomic.peer-server/non-empty-string at: [:auth 1] predicate: string?
#2017-01-2618:21jaret@vinnyataide I just checked your command on Windows OS and on iterm on mac. It worked in both cases#2017-01-2618:21jaretjbin at Jarets-MacBook-Pro in ~/Desktop/Jaret/Tools/releasetest/5554/datomic-pro-0.9.5554
$ bin/run -m datomic.peer-server -p 8998 -a obviouslynotmykey,mysecret -d firstdb,datomic:
Serving datomic: as firstdb
#2017-01-2618:21vinnyataideI ran inside a datomic-pro folder#2017-01-2618:22jareton Windows OS?#2017-01-2618:22vinnyataideyeah#2017-01-2618:22jaretwhich version of Datomic?#2017-01-2618:22vinnyataidelatest 0.9.5554#2017-01-2618:22jaretok let me try that#2017-01-2618:23jaretI am on a virtual machine so it takes me a bit to get it over there#2017-01-2618:23vinnyataidelol, it worked with this string#2017-01-2618:24vinnyataidebut not with my actual key#2017-01-2618:24vinnyataidewtheck#2017-01-2618:24jaretSo the key you use when starting the peer is not the license key you get from http://my-datomic.com#2017-01-2618:25jaretThe key is used to connect to the peer from the client#2017-01-2618:25jaretit has to be a non-empty string and there may be other requirements which caused you to have an error#2017-01-2618:26vinnyataideok the issue appears when using powershell only#2017-01-2618:27marshallit may be that powershell does some kind of non-standard character escaping#2017-01-2618:27vinnyataidesorry for bothering#2017-01-2618:27vinnyataideyeah#2017-01-2618:27marshallthat causes what you’re putting in for the key get sent as something that fails the “must be a string” check#2017-01-2618:27vinnyataideit is resolved now, and I didn't know yet the architecture#2017-01-2618:28vinnyataideyeah#2017-01-2618:28vinnyataideprobably#2017-01-2618:36jaretno bother. We're all in the same boat 🙂#2017-01-2619:16jfntnWe switched to using a single :db/uuid attribute for all our entities' identity. For integration purposes I’m now considering using distinct :<entity-type>/uuid attrs instead. However I’m not grasping the relative trade-offs of each approach. Is anyone familiar with this?#2017-01-2619:21drewverleeSo Datomic can use Cassandra, which gets used frequently for iot data. Due to it being a column store. But i don’t see much “buzz” around Datomic as a “timeseries” database. 
Why is that?#2017-01-2619:23drewverleeIt seems possible to express similar queries through Datomic to Cassandra as i could envision using the Karios Client (https://kairosdb.github.io/). But i dont know where to begin comparing these two.#2017-01-2619:24jaret@seantempesta need to see the query to figure out whats going on.#2017-01-2619:24seantempesta(regarding my previous question) ah never mind. looks like it was just a sample data issue.#2017-01-2619:24jaretah cool#2017-01-2620:35limistBumping a question from last week: When running datomic/bin/maven-install I saw this error message toward the end; is it anything to worry about?
[WARNING] Some problems were encountered while building the effective model for com.datomic:datomic-pro:jar:0.9.5544
[WARNING] 'dependencies.dependency.exclusions.exclusion.artifactId' for com.datastax.cassandra:cassandra-driver-core:jar with value '*' does not match a valid id pattern. @ line 125, column 18
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
#2017-01-2621:06val_waeselynck@abhir00p the issue you raised is one of the reasons why querying historical dbs in datomic is much more limited than people think. I recommend to simply not do it in application code.#2017-01-2621:19vinnyataidewhat happened to datomic.api?#2017-01-2621:28vinnyataideclojure.lang.ExceptionInfo: Could not locate datomic/api__init.class or datomic/api.clj on cl
asspath.
file: "om_learn\\db.clj"
line: 1
#2017-01-2622:07bballantineHi. I have a question that might be simple for someone who knows more than me… How do you bind an entity to itself in a datalog where clause?
Here’s a contrived example. I want a query that, given a parent, returns the whole nuclear family (including the passed-in parent):
:find ?p2
:in $ ?p1
:where
(or
[?p1 :person/children ?p2]
[?p1 :person/partner ?p2]
[(= ?p1 ?p2)])
This throws an exception: :db.error/insufficient-binding [?p2] …#2017-01-2622:08favila@bballantine do you mean "or"?#2017-01-2622:08bballantineI think I mean or… either ?p2 is a child or a spouse or the parent herself#2017-01-2622:10favilato "rename" a var (for binding purposes) use identity, example here: https://groups.google.com/d/msg/datomic/_YiBRnBkeOs/0Gd-6lJmDwAJ#2017-01-2622:11favilaso change [(= ?p1 ?p2)] to [(identity ?p1) ?p2] at least#2017-01-2622:12favilato (and [?p1] [(identity ?p1) ?p2]) to be safe#2017-01-2622:12bballantinealright, let me play with that.. thanks @favila#2017-01-2622:25bballantine@favilla it works, thanks — although I have to admit I don’t quite understand it.#2017-01-2622:27bballantineIf I just have [?p1 ?p2], it also works.#2017-01-2622:34bballantineAfter chatting about it with my colleague, I get why your example works. Thanks @favila#2017-01-2623:00spieden@bballantine care to elaborate? not sure i do.. =)#2017-01-2623:06bballantine@spieden - someone should correct me if I’m wrong, but [(identity ?p1) ?p2] is the syntax to bind something to the result of a function. So this binds ?p2 to the value of (identity ?p1).#2017-01-2623:07spiedenah ok, so it differentiates the syntax i guess#2017-01-2623:07bballantineIn this snippet (and [?p1] [(identity ?p1) ?p2]), the [?p1] part ensures that ?p1 isnt’ nil#2017-01-2623:08bballantine(I think)#2017-01-2623:08bballantineyeah @spieden I think that’s right#2017-01-2623:08spiedencool thanks#2017-01-2714:17stuartsierra@bballantine and in Datomic datalog query isn't precisely the same thing as clojure.core/and. As I read it, [?p1] would bind ?p1 to the entity position in a datom.#2017-01-2716:40teng@favila Thanks, now both works! It was hard to figure out when reading the documentation. Still waiting for that book that Stu is going to write 😉#2017-01-2717:18apseyHi, regarding s3 log rotation, what does cognitect mean with time-status-reached on:
* Better naming convention for logrotation:
`{bucket}/{system-root}/{status}/{time-status-reached}`,
where status is "active" or "standby".
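Putting favila's identity-binding fix together, bballantine's family query becomes something like this (a sketch; `db` and the parent's entity id are assumed):

```clojure
;; sketch: bind ?p2 to ?p1 itself via identity inside the or-branch,
;; since [(= ?p1 ?p2)] cannot bind the unbound ?p2
(d/q '[:find ?p2
       :in $ ?p1
       :where
       (or [?p1 :person/children ?p2]
           [?p1 :person/partner ?p2]
           (and [?p1] [(identity ?p1) ?p2]))]
     db parent-eid)
```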
#2017-01-2717:34potetmDoes anyone happen to know how to interpret the GarbageSegments CloudWatch metric? I assumed it would be an accumulation of unreferenced segments over time, but it appears to go up and down.#2017-01-2717:34potetmAnd I don't see it on the http://docs.datomic.com/monitoring.html page#2017-01-2717:34potetmI'm guessing it's the garbage a particular indexing job creates?#2017-01-2717:53bmaysAnyone else seeing lein deps failing on org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar?? I believe it’s a dependency of com.datomic/datomic-pro “0.9.5372"#2017-01-2717:54potetmI am!#2017-01-2717:54bmaysAWESOME.#2017-01-2717:54bmaysit’s not just me losing my mind.#2017-01-2717:54potetmIt literally just started.#2017-01-2717:55potetmObviously not a datomic thing. There's a checksum mismatch.#2017-01-2717:55potetmFor us anyways.#2017-01-2717:56bmaysRetrieving com/datomic/datomic-pro/0.9.5372/datomic-pro-0.9.5372.jar from
Could not transfer artifact org.javassist:javassist:jar:3.18.1-GA from/to central (): Checksum validation failed, expected d9a09f7732226af26bf99f19e2cffe0ae219db5b but is 1153878fa3db0c164318521e8e77106f9099f4e5
#2017-01-2717:56bmaysI noticed it resolves for my transit dependency though#2017-01-2717:59bmaysI’m fairly new to all this so feel free to redirect if this isn’t the appropriate channel. this is quite strange though, a routine build began to fail and now it’s affecting all developers#2017-01-2718:00potetmI don't know where the proper place to ask would be 🙂#2017-01-2718:00potetmThe maven authorities?#2017-01-2718:00potetmI'm getting this:
Could not transfer artifact org.javassist:javassist:jar:3.18.1-GA from/to central (): Checksum validation failed, expected d9a09f7732226af26bf99f19e2cffe0ae219db5b but is 1153878fa3db0c164318521e8e77106f9099f4e5
#2017-01-2718:05bmaysAdding this exclusion fixed the deps but possibly broke ring:
[ring-middleware-format "0.7.0"
:exclusions [org.clojure/test.check com.cognitect/transit-clj]]
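For readers following along, the :exclusions vector above is standard Leiningen syntax, and the same mechanism could be aimed directly at the failing javassist artifact: exclude the transitive copy and pin it as a direct dependency. A hedged sketch (myapp is hypothetical, and this assumes the pinned artifact then resolves with a good checksum):

```clojure
;; hypothetical project.clj fragment: exclude the transitive javassist
;; dependency that fails checksum validation, then pin it explicitly
(defproject myapp "0.1.0-SNAPSHOT"
  :dependencies [[com.datomic/datomic-pro "0.9.5372"
                  :exclusions [org.javassist/javassist]]
                 [org.javassist/javassist "3.18.1-GA"]])
```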
#2017-01-2718:48potetmLooks like @r0man has got our backs: https://github.com/jboss-javassist/javassist/issues/120#2017-01-2718:48potetm🙂#2017-01-2718:52akjetma❤️ awesome#2017-01-2720:39akjetma^ checksum passes now#2017-01-2721:35jdkealyis it normal for a query like this to take up to 10 mins with datomic ?
2 organizations, 10k collections, 2.5 million "content"
https://gist.github.com/jdkealy/b9d62366c46ba078051bad849e82c200#2017-01-2721:49stuartsierra@jdkealy You can learn a bit about the behavior of a query by executing each :where clause separately.#2017-01-2721:51stuartsierraIn your example, start with [:find ?collection :in $ ?org :where [?collection :collection/organization ?org]], then add the next clause, and so on.#2017-01-2721:57stuartsierraYou can also think about how indexes will be used to resolve each clause in the query. Since you start with a binding for ?org, it's a single lookup in VAET to get ?collection, then another VAET lookup for ?content, finally a separate EAVT lookup for each ?content to get ?fk, then an iteration to find the max.#2017-01-2721:57laujensenEvening gents. Is there a way to start a datomic transactor from within clojure?#2017-01-2721:58stuartsierra@jdkealy Most of those lookups will probably be handled from the Peer's local cache, after warmup. But if ?content entities are large or randomly distributed there might be a lot of EAVT segments to fetch to answer that query.#2017-01-2722:24jdkealygot it thanks @stuartsierra i realized i didn't have an index on content/collection... that appears to be making it extra slow#2017-01-2722:40jdkealyhmm now that i have indexes my query runs out of java heap space#2017-01-2722:40jdkealyif i were to jack up memory locally... would i do so on the transactor or when starting my server#2017-01-2722:42pesterhazynot on the transactor, on the peer#2017-01-2722:48jdkealythis creates a peer right? "(d/db (d/connect uri))" ?#2017-01-2802:47shaun-mahoodHas anyone worked on a Datomic system that links to something like PostGIS? 
I’m thinking about keeping all the important info in Datomic and linking to the location data, but I’m wondering if anyone else has given any thought to how to keep that link as immutable as possible or any other real life gotchas that I haven’t considered yet.#2017-01-2814:48ezmiller77If anyone can set me right re this that would be great: https://stackoverflow.com/questions/41910405/unable-to-resolve-entity-error-when-trying-to-transact-a-dataomic-taxonomy#2017-01-2814:48ezmiller77Where above it says taxonomy read “schema"#2017-01-2815:56alqvistIf I have an attribute :event/order declared as :db.unique/value is there any way of asking datomic for a set of all attribute values?#2017-01-2815:56alqvistWith good performance. Want to use it for autocompletion#2017-01-2815:59alqvistThere are magnitudes more events than orders#2017-01-2816:01alqvisterr#2017-01-2816:03alqvistNot :db.unique/value since that wouldn't work... The question is more if I can get the set of values in a fast way#2017-01-2816:05alqvistI could make another entity :order and do references, but it seems kind of wasteful#2017-01-2817:15robert-stuttaford@alqvist Datalog? [:find ?order :where [_ :event/order ?order]] or transducer + datoms index? (into #{} (comp (map :v) (distinct)) (seq (d/datoms db :avet :event/order)))#2017-01-2817:15robert-stuttaforddoubt you’d find a faster way than one of those two#2017-01-2817:16robert-stuttafordif you have a LOT of usages, you could maintain a set separately as you discover new ones. so, move the work to transactor functions or the like.#2017-01-2817:17robert-stuttafordmy code assumes you’re :db/index true or :db/unique#2017-01-2817:22pesterhazy@ezmiller77 well what is the transaction?#2017-01-2817:24ezmiller77@pesterhazy you mean what was the actual transact call? It looked like this:
(def arb-tx (-> (io/resource "schemas/arb.edn")
                (read-all)
                (first)))
(pprint (<!! (client/transact conn {:tx-data arb-tx})))
where “schemas/arb.edn” is the schema listed in that stackoverflow question. I guess I should have included that in the Q.#2017-01-2817:42pesterhazyTry calling the transactions manually from the repl#2017-01-2817:46pesterhazywhy do you call first there?#2017-01-2817:47pesterhazytx-data is supposed to be a collection of txs#2017-01-2817:54ezmiller77just because (read-all) spits out a vector#2017-01-2817:57ezmiller77so arb-tx looked like: [[ <taxonomy definition maps> ]]#2017-01-2817:57ezmiller77instead of [ <taxonomy definition maps> ]#2017-01-2817:58ezmiller77read-all looks like:#2017-01-2817:58ezmiller77(defn read-all
  "Read all forms in f, where f is any resource that can
  be opened by io/reader"
  [f]
  (Util/readAll (io/reader f)))
#2017-01-2817:58ezmiller77(copied from day-of-datomic)#2017-01-2818:01pesterhazydo you see the same error when you run the code from the repl?#2017-01-2818:01ezmiller77although removing first, I get this error, also totally inscrutable to me:
{:datomic.client-spi/request-id "9c59aa1c-6127-45fd-87b2-afce72223ce1",
:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message
":db.error/not-a-data-function Unable to resolve data function: {:db/id {:idx -1000000, :part :db.part/db}, :db/ident :arb/title, :db/unique :db.unique/identity, :db/valueType :db.type/string, :db/cardinality :db.cardinality/one, :db/fulltext true, :db/index true, :db.install/_attribute :db.part/db}",
:dbs
[{:database-id "datomic:",
:t 1004,
:next-t 1009,
:history false}]}
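As pesterhazy works out further down the thread, the client API of this era accepts string tempids only, so the #db/id literal inside the map form is what trips this error. A hedged sketch of the same attribute install rewritten for the client API, assuming the conn and client aliases from the surrounding code:

```clojure
;; sketch: install :arb/title via the client api with a string tempid
;; instead of a #db/id reader literal
(require '[clojure.core.async :refer [<!!]])

(<!! (client/transact conn
       {:tx-data [{:db/id          "arb-title" ;; string tempid
                   :db/ident       :arb/title
                   :db/valueType   :db.type/string
                   :db/cardinality :db.cardinality/one
                   :db/unique      :db.unique/identity
                   :db/fulltext    true
                   :db/index       true}]}))
```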
#2017-01-2818:01ezmiller77i haven’t been able to figure out how to run it from the repl yet b/c I haven’t figured out how to do the edn file import from there#2017-01-2818:02pesterhazywhy don't you try transacting a simple map first?#2017-01-2818:02pesterhazythen add stuff one by one#2017-01-2818:03ezmiller77not sure what you mean by a simple map?#2017-01-2818:04pesterhazy@(d/transact (rdc) [{:db/id (d/tempid :db.part/user) :db/doc "hello"}])#2017-01-2818:05pesterhazyI guess from the client api it looks a bit different#2017-01-2818:06pesterhazyas shown here: http://docs.datomic.com/first-transaction.html#2017-01-2818:07ezmiller77oh well that i’ve done before#2017-01-2818:07ezmiller77but i can try it again if you think it’s helpful.#2017-01-2818:09pesterhazywell if you start with that and then add things bit by bit, I think you may be able to narrow in on the problem#2017-01-2818:11ezmiller77the command you have there is using the datomic api right?#2017-01-2818:11ezmiller77as opposed to the client?#2017-01-2818:23ezmiller77damn, can’t even manage to enter that edn structure into the repl#2017-01-2818:23ezmiller77gonna have to pick this up later.#2017-01-2818:23ezmiller77thanks for trying to help @pesterhazy#2017-01-2818:27ezmiller77the first line there in the schema defn :db/id #db/id [:db.part/db], is being interpreted as a function, which now that i look at it makes sense, so I get this error:
user=> (def arb-title [{:db/id #db/id [:db/part/db]}])
IllegalStateException Attempting to call unbound fn: #'datomic.db/id-literal clojure.lang.Var$Unbound.throwArity (Var.java:43)
RuntimeException Unmatched delimiter: } clojure.lang.Util.runtimeException (Util.java:221)
RuntimeException Unmatched delimiter: ] clojure.lang.Util.runtimeException (Util.java:221)
RuntimeException Unmatched delimiter: ) clojure.lang.Util.runtimeException (Util.java:221)
#2017-01-2818:32alqvist@robert-stuttaford Datalog. Probably going with maintaining a separate set since there might be millions of events.#2017-01-2818:32alqvistthanks for the answer#2017-01-2819:09pesterhazy@ezmiller77 that ain't right: :db/part/db#2017-01-2910:50ezmiller77@pesterhazy, sorry typo there. the problem I was having is that the syntax i was using there was in edn; in clojure, it needs to be different. so i ended up doing this in the repl:
(def id (datomic.api/tempid :db.part/db))
(def tx [[:db/add id :db/ident :arb/title]
         [:db/add id :db/unique :db.unique/identity]
         [:db/add id :db/valueType :db.type/string]
         [:db/add id :db/cardinality :db.cardinality/one]
         [:db/add id :db/fulltext true]
         [:db/add id :db/index true]
         [:db/add :db.part/db :db.install/attribute id]])
(pprint (<!! (client/transact conn {:tx-data tx})))
#2017-01-2910:51ezmiller77I ended up getting the same error as before, which I just can’t make sense of:
> Unable to resolve entity: {:idx -1000000, :part :db.part/db} in datom [{:idx -1000000, :part :db.part/db} :db/ident :arb/title]"#2017-01-2910:56pesterhazy@ezmiller77 hm that's strange, it works for me using d/transact#2017-01-2910:57pesterhazyah I see#2017-01-2910:57pesterhazythe problem is you're using the client api with regular tempids#2017-01-2910:57pesterhazyClients will support string tempids only. http://blog.datomic.com/2016/11/datomic-update-client-api-unlimited.html#2017-01-2910:57pesterhazyif you use the regular peer api, it should work as expected#2017-01-2911:01pesterhazyit would be nice if that was documented here: http://docs.datomic.com/clojure-client/index.html#2017-01-2913:44ezmiller77okay, i got the impression at some point that the client peer was the more basic/easier-to-use.#2017-01-2914:05ezmiller77Oh yes it was this paragraph:
> There are two ways to consume Datomic: with an in-process peer library, or with a client library. If you are trying Datomic for the first time, we recommend that you begin with a client library. When you are ready to plan a system, you can return to this page for guidance on whether to use peers, clients, or both.#2017-01-2914:05ezmiller77http://docs.datomic.com/clients-and-peers.html#2017-01-2914:45ezmiller77Yep @pesterhazy that was the key! I just excluded these lines: :db/id #db/id[:db.part/db] and :db.install/_attribute :db.part/db} b/c apparently tempids are also optional.#2017-01-2916:18noogaI’m trying to assess if using datomic for an open source IoT hub project makes sense#2017-01-2916:19noogaI see that there’s datomic starter license now#2017-01-2916:21nooga> The Datomic Starter license is perpetual, but is limited to 1 year of maintenance and updates
does that mean that after a year, I won’t be able to upgrade to a newer version any more?#2017-01-2919:50ezmiller77Is it possible to describe a schema entity that is both stand-alone and embedded, i.e. a component and also not a component?#2017-01-3005:31robert-stuttaford@ezmiller77 yes. define two attributes 🙂#2017-01-3011:22biscuitpantscould someone help me with altering the :db/fulltext attribute, please?#2017-01-3011:22biscuitpantsi have this tx:#2017-01-3011:22biscuitpants[{:db/id :content-item/description
  :db/fulltext true
  :db.alter/_attribute :db.part/db}]
#2017-01-3011:22biscuitpantsand datomic gives me an error saying:#2017-01-3011:23biscuitpants:db.error/invalid-alter-attribute Error: {:db/error
:db.error/unsupported-alter-schema, :attribute :db/fulltext, :from
:disabled, :to true}
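This error is the transactor refusing the alteration: :db/fulltext is fixed at install time, as jaret confirms below. A hedged sketch of the usual workaround, installing a replacement attribute with fulltext enabled and copying the old values across (the :content-item/description2 name and the copy query are illustrative, and the map form assumes a Datomic version where tempids are optional):

```clojure
;; sketch: :db/fulltext cannot be altered, so install a new
;; fulltext-enabled attribute and pour the old values into it
(require '[datomic.api :as d])

@(d/transact conn
   [{:db/ident       :content-item/description2 ;; illustrative name
     :db/valueType   :db.type/string
     :db/cardinality :db.cardinality/one
     :db/fulltext    true}])

@(d/transact conn
   (for [[e v] (d/q '[:find ?e ?v
                      :where [?e :content-item/description ?v]]
                    (d/db conn))]
     [:db/add e :content-item/description2 v]))
```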
#2017-01-3011:24biscuitpantswait nvm#2017-01-3011:24biscuitpantsyou cannot alter fulltext#2017-01-3014:09jaret@nooga That is essentially correct. With a starter license you will get 1 year of maintenance and upgrades. After which, you will still be able to use your license/Datomic but will not be able to upgrade to any new releases.#2017-01-3014:09noogathanks @jaret#2017-01-3014:12jaret@biscuitpants :db/fulltext cannot be altered. So you cannot add full text search to an attribute after its creation. You will need to create a new attribute, pouring in or importing values from the old attribute. http://docs.datomic.com/schema.html#2017-01-3014:12jaretOh looks like you realized that#2017-01-3014:12jaretnvm#2017-01-3014:14biscuitpantsyes, thank you @jaret! maybe the docs should be updated to give an example of how to do it?#2017-01-3014:14biscuitpants(it’s not a super straightforward process)#2017-01-3014:15biscuitpantsor, it is, but it’s not completely evident 🙂#2017-01-3014:44pesterhazyyou do, however, have a disconcertingly robert-stuttaford-like avatar photo, @biscuitpants#2017-01-3014:45biscuitpantsha, we have a discussion about this often at work#2017-01-3014:45pesterhazyeven the position of the head relative to the frame#2017-01-3014:54biscuitpantspossibility that we planned it#2017-01-3014:54biscuitpantsbut @robert-stuttaford would have to confirm#2017-01-3014:57pesterhazyhah#2017-01-3015:01robert-stuttafordi ain’t dun nuffin#2017-01-3015:07jaretI can't tell who’s talking#2017-01-3016:33robert-stuttafordi’m the handsome one 😁#2017-01-3017:46bmaddyDoes anyone know where I would find info on how to query for transaction annotations? My Google searching has yielded nothing (also, I’m a newbie at datomic). 
I’ve added an annotation to a transaction and want to write a test to ensure it added the annotation.#2017-01-3018:08pesterhazyhere's an example: https://github.com/Datomic/day-of-datomic/blob/59186b4b39c124e2d9d0e79243f3e373b0a0b9d9/tutorial/provenance.clj#2017-01-3018:09favila@bmaddy Transactions are normal entities. You need to get either a t or a tx somehow, then just access it as you would any entity#2017-01-3018:09pesterhazyor this one: https://github.com/Datomic/day-of-datomic/blob/59186b4b39c124e2d9d0e79243f3e373b0a0b9d9/tutorial/log.clj#L31#2017-01-3020:17limist@shaun-mahood I was wondering similarly recently, how to handle geo data with/from Datomic. Besides PostGIS, I wonder if ElasticSearch could do spatial searches on properly structured Datomic data. Found anything so far?#2017-01-3020:27bmaddyHmm, I’ll try that out. Thanks @pesterhazy & @favila.#2017-01-3020:28shaun-mahood@limist: I've done a very brief amount of research and it looks like it would be a lot of work to implement anything performantly - essentially, implementing r-trees and adding some sort of indexing that PostGIS and other GIS systems already have built in. I can't see it being anything easy or quick, so in all likelihood it makes more sense to keep the location info outside of datomic and figure out a good way to link things together. I've only just started looking into GIS stuff in general so I could be wildly off base on anything at this point though. 🙂#2017-01-3020:29noogaI’m gathering data from a bunch of sensors, they send me timestamps with their readings. I can’t trust their clocks and I can’t trust they they will send their readings from in order. On the other hand, I wand these timestamps in my DB since reading times can significantly differ from acquisition times. How should I approach this?#2017-01-3020:41favila@nooga As a normal data-modelling problem. Datomic's time/history features concern "meta" time (i.e. time of record) not data-model time. 
Think git commit or file-modified times.#2017-01-3020:43favila@nooga Sometimes you can get these to line up and you can use txInstant to store all time info, but it sounds like you can't in your application#2017-01-3020:44noogaI'd like my entities to reflect the sensor state at any given time, I can store the timestamp as an attribute there, but this doesn’t guard me from out of order scenarios#2017-01-3020:44eoliphanthi, i need to model a ‘composite key’. Any best practices for that? I was thinking about using a transaction function to create the ‘real’ key from the associated fields#2017-01-3020:45favila@eoliphant https://groups.google.com/d/msg/datomic/4cjJxiH9Lkw/dAvGM7XTAAAJ#2017-01-3020:46favila@nooga but that's exactly what you need, right? you need to store sensor data as a domain time so you don't confuse it with the time you recorded the sensor data#2017-01-3020:46favilait may be wrong, out of order, corrected in the future, etc#2017-01-3020:46eoliphantthanks ! @favila also, you helped me with a tree walking approach a while back over in google groups. I’m trying to generalize it to search ‘up and down’, but i’m having some issues, I realize what I did wrong in what I posted over there, but still not sure how to make it work lol#2017-01-3020:48favila@nooga if you want to use datomic time-travel abilities to explore domain time, consider using two dbs: one for raw data, and a projected one where you transact sensor data in domain-time order (asserting txInstant on each tx)#2017-01-3020:51favila@eoliphant the "ancestors" query is inefficient when working backwards, think about what var is bound when evaluating the recursive rule with your change and what the query eval would have to do to fix it#2017-01-3020:51favila@eoliphant you can use the [?var] syntax to force a var to be bound, so you won't let the query engine try to eval rules in the "wrong" direction#2017-01-3020:52eoliphantyeah, that’s what I’d kind of figured out. 
Will try tweaking#2017-01-3020:52favila(in fact ancestors is badly named without it, since rules are inherently bidirectional)#2017-01-3020:53eoliphantyeah that’s where I messed up initially, I just tried to be cute and switch the internal call, but the [?var] wasn’t switched#2017-01-3020:53eoliphantso it went all over the place lol#2017-01-3020:53noogaI guess I can ask datomic about values changed between T1 and T2 and then sort results by domain time?#2017-01-3020:53eoliphantok i think i know what I need to do#2017-01-3020:53favila@eoliphant try to write a blood-relations rule with three impls#2017-01-3020:54favilaone that handles parent<->child (in any direction), one that follows ancestors, and one that follows progeny#2017-01-3020:54favilause forced-binding on the latter two#2017-01-3020:55favila@nooga depends on what you are after. If you just want to sort all sensor data by the time it reported, a simple indexed date field is enough#2017-01-3020:57favilain what way does it matter that the sensor time is not trustworthy? you want to rely on transactor's time? you want to correct sensor time in a second pass?#2017-01-3020:59favilaDo you want to use datomic as-of/since to explore a reconstructed view of the world as seen by the sensors? or you don't care about that and have some application on top which does it? or you just want to store it, not process/query at all#2017-01-3021:00noogaThe sensors sometimes don’t have realtime clocks on board, sometimes they don’t have any NTP configured etc. 
I might want to prefer transactor time over sensor time in some cases.#2017-01-3021:02noogaMy idea was to simply update facts about sensors rather than treat each observation as a new entity related to the sensor.#2017-01-3021:02noogadomain time is another reading, in some sense#2017-01-3021:03noogaMy transactions would look like this: [sensorid :reading/time 123456] [sensorid :reading/temp 19.5] [sensorid :reading/pressure 1012]#2017-01-3021:04noogathis would make me use as-of/since to see the history#2017-01-3021:05favilaYou could do that, but keep in mind the history you see is always the history sensor data was recorded, not time it happened in the world#2017-01-3021:06favilaThis is why I suggested generating a second db as a projection, where :reading/time is corrected and used as the txInstant#2017-01-3021:06favilawithout each observation as a separate entity, you can't correct old readings#2017-01-3021:07favilaif you don't care about that, it's fine to just use tx history#2017-01-3021:07noogathis is mostly for acquisition#2017-01-3021:07noogareading will be less frequent and heavier with computation#2017-01-3021:08favilakeep in mind if you go this route, as-of/since is really just useful to debug your applications (e.g. whatever collects sensor data), not so much to scrub through old sensor readings#2017-01-3021:09favilaby confusing transaction time and reading-time you may make your life harder later (depending on what you plan to do)#2017-01-3021:09favilabut it is still possible to use the history database and pull out the individual observations if you need#2017-01-3021:11favilaalso keep in mind that you may need to batch writes for performance, which will further confuse the sensor-time vs. 
record-time#2017-01-3021:11favila(datomic is designed to scale reads, not writes)#2017-01-3021:56noogaI see#2017-01-3021:56nooga@favila then maybe I should reconsider using datomic for this task#2017-01-3021:58favila@nooga Likely there are better sensor data db packages out there. It may still be useful to create a projection in datomic, as I said#2017-01-3021:59favilawhat is the scale you are looking at?#2017-01-3022:00noogait’s an open source, community driven air quality aggregator#2017-01-3022:00noogaso, depending on the success, we may have thousands of facts incoming per minute#2017-01-3022:01noogaI was looking at datomic because its fact orientation and time travel abilities seem really nice for this case#2017-01-3022:03noogawe’re collecting facts about the world, from multiple sources, without making any hard assumptions about their validity and reliability#2017-01-3022:03noogaand then, we want to view the state of the world at any given time or time range#2017-01-3022:04favilaI'm not sure how you would do this at scale with a single storage technology#2017-01-3022:05favilaseems like you need a write silo (kafka queue, time-series db, etc) to store observations, and an aggregation silo for reads and exploration#2017-01-3022:07noogaoh yes, I was placing datomic on the aggregation end#2017-01-3022:10favilaThat would still work for near-real-time but I would use datomic to make throwaway projections#2017-01-3022:11faviladepends on how much processing occurs from writes to aggregation#2017-01-3022:12favilaI have more experience with heterogeneous data (records, documents, human artifacts, etc) not homogeneous data so I can't give much more advice#2017-01-3022:13favilaDatomic is a life-changer for that kind of data, not so sure about time-series data#2017-01-3022:14favilathere were many long discussions about people trying to use datomic for time-series data (financial mostly) on the google group. 
Perhaps that would help#2017-01-3022:14noogaI used datomic for graph models coming from web scraping/mapping#2017-01-3022:14noogaand it was awesome#2017-01-3023:25weiusing a heroku postgres backend, and my (d/create-database ..) call is taking a long time to run (on the order of 2-3 minutes). after connecting, querying is relatively fast. any suggestions for pinpointing the bottleneck for the initial connection?#2017-01-3101:31mruzekwHas anyone thought of using the Pull API like a GraphQL query for client-side systems?#2017-01-3102:21jeff.terrell@mruzekw - Are you familiar with DataScript? https://github.com/tonsky/datascript#2017-01-3108:08pesterhazy@wei, that seems long but connecting to datomic does take a while#2017-01-3108:08pesterhazyWith dynamo, a minute or so#2017-01-3108:09pesterhazyRemember that datomic peers have a more active role than jdbc clients#2017-01-3108:10pesterhazyI assume they need to pull still some segments to get started#2017-01-3109:22robert-stuttafordthey need to download the database roots, all idents, and the live index#2017-01-3111:13dominicmI'm struggling a little bit with figuring out how to use conformity with transactor functions. Particularly as it seems that edn parses ' in a way I wouldn't expect:
(edn/read-string "['[db]]") returns [' [db]] instead of core/read-string which returns ['[db]], so the code (particularly for the datomic query in my tx function) will not read correctly into edn.#2017-01-3111:44dominicmHits me after I ask, (quote [:find ...]) is the solution to the ' problem#2017-01-3111:50apseyHi, I have two questions related to datomic’s infrastructure and provisioning:
1) Cognitect’s AMI: the used AMI dates back to May of 2014. Are there plans to release a new AMI with security updates and newer Amazon Linux version?
2) Logback config: To change logback.xml, should I simply replace it via the AWS EC2 Userdata?#2017-01-3112:22tengAfter two weeks of work we now have a generic save function that can “update”, create and retract arbitrary nested data structures, originally returned from pull queries (and then modified by e.g. a web client). Is there any similar libraries available already for Datomic? Otherwise it could be an idea to open source it.#2017-01-3113:04stijn@teng: we have built something similar for comparing data coming in from external sources. curious how you approached it#2017-01-3113:08danielstockton@mruzekw Yes, om.next is basically that.#2017-01-3113:08mruzekw@jeff.terrell I have. I’ve been using it client side. I’m currently investigating best ways to communicate with the server, which is why I ask about Datomic#2017-01-3113:08mruzekw@danielstockton I like some parts of Om but am more a fan of Rum. Does om.next have isolate into different libraries?#2017-01-3113:11danielstocktonNot sure I understand the question, sorry.#2017-01-3113:11mruzekwI wondered if I could take the reconciler part and use it with Rum instead of Om#2017-01-3113:12danielstocktonSorry, I don't have experience with rum at all. It might be possible to combine the two somehow. What do you think rum provides that om is missing?#2017-01-3113:17tonskySimplicity :) and full control#2017-01-3113:19danielstocktonI find om (next) to be quite simple. Underneath, it's just plain react components, there isn't too much left when you take away the reconciler. For example, you can use a different templating library if you wish, although I prefer plain functions myself.#2017-01-3113:20danielstocktonI haven't tried rum, perhaps I don't know the simplicity and control I am missing.#2017-01-3113:24danielstocktonI can understand why it might seem overkill for simple applications with uncomplicated state. 
I also would like to see much tighter integration with datascript, instead of atoms + normalization helpers.#2017-01-3113:36teng@stijn We define every “foreign key”, how the entities are related to each other + update/retraction/creation rights. Then we can perform all CRUD operations recursively to arbitrary data structures. Very neat. Took a week to write the 40+ tests!#2017-01-3113:38teng@stijn How did you solve it?#2017-01-3113:48stijn@teng: the idea is that data coming in from e.g. XML files that needs to be updated in the database has a 'scope' which is basically a pull spec + a query that identifies the entities. If an xml file contains entities for all users with property x=a, the query will select the entities in the db that match the xml file scope. It might be that some files contain different details about the users, so the second thing needed is the pull spec to compare to. With that info we can generate adds and retracts by walking both the data that comes from the db and the data from the xml file.#2017-01-3113:49stijnso it's a bit different from your use case i guess#2017-01-3114:05teng@stijn Interesting! Another thing. We are adding auditing now where we store when and who that updated the db, like this: {:db/id #db/id[:db.part/tx] :auditing/changed-by “system-x”}. I retrieve the tx information in separate queries (with help from @favila, so thanks again!) by using find queries that have “sub” pull queries. I couldn’t figure out how to retrieve the auditing data in the same pull query as the original one. Do you know if it’s possible to also retrieve tx related data (like auditing) for all attributes in a nested pull query, so that we don’t need to do several queries?#2017-01-3114:19stuartsierra@teng Pull queries navigate relationships between entities. Transactions are entities just like any other. 
If you wanted to get both transactions and other entities in a pull query, you would need to have a :db.type/ref attribute linking the transaction entity to those other entities.#2017-01-3114:24teng@stuartsierra Ok, thanks, I will try that!#2017-01-3114:50karol.adamiec@stuartsierra how would that look? one needs to explicitly define that in schema and then in every assertion/retraction provide that data? or can i grab txInstant from any entity by default?#2017-01-3114:51teng@stuartsierra I’m not sure if that solves our problem. The tx id can vary between attributes for an entity. We can’t add an extra attribute for every attribute?!#2017-01-3114:52karol.adamiecatm i am having `[:find (pull ?tx [:db/txInstant]) (pull ?e [*])
:in $
:where [?e :order/identifier “asdf" ?tx]]`#2017-01-3114:52karol.adamiecand process that collection later to merge txInstant into order record.#2017-01-3114:56stuartsierraTransactions are collections of datoms. A datom links an Entity, Attribute, Value, and Transaction. The pull API only knows about Entities.#2017-01-3114:58stuartsierraIf you wanted to navigate associations between Entities in your data and Transaction entities, you would need to add those associations in your data when you transact it.#2017-01-3115:00teng@stuartsierra …by doubling the number of attributes in every entity?#2017-01-3115:02stuartsierraThat was not the use case I had in mind. What I have seen in the past is associations between Transaction entities and the top-level entities that are known to be "affected" by that Transaction. If you need the association at the granularity of individual attributes — i.e., individual Datoms — then you will need to use the query API.#2017-01-3115:05teng@stuartsierra Ok. We need the granularity to be at the attribute level, so then I will just continue with the query API as you suggested. Thanks.#2017-02-0116:12dominicmReading the docs on keywords, it's mentioned that they are "interned" — does this make them closer in efficiency to a number rather than a string?#2017-02-0116:22pesterhazyI think it refers to the fact that (identical? (clojure.lang.Keyword/intern "a") (clojure.lang.Keyword/intern "a"))
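pesterhazy's point above can be checked in any Clojure REPL: because keywords are interned, equality checks degenerate to a pointer comparison, which is what makes them number-like in cost:

```clojure
;; keywords are interned: equal keywords are the same object in memory
(identical? :a (keyword "a"))   ;; => true
;; equal strings need not be identical objects
(identical? "a" (String. "a"))  ;; => false
```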
#2017-02-0116:24pesterhazyhttps://stackoverflow.com/a/10579062/239678#2017-02-0117:11dominicmAh, I wasn't familiar with that terminology. So I guess the efficiency is more like a number than a string, neat#2017-02-0119:20weiwhat’s a good way to store a value that only has one instance in the entire db? e.g. configuration or global values#2017-02-0119:21weior another example, a running list of the last five created entities#2017-02-0119:42favila@wei Special "TOC" entity with an ident, put the changing things on attributes.#2017-02-0119:44weinot sure what a TOC entity means, can you link to any info?#2017-02-0119:46weido you mean something like this? https://support.cognitect.com/hc/en-us/articles/215581428-When-to-Use-Idents#2017-02-0119:52favila@wei "TOC"= table of contents#2017-02-0119:54favila@wei example {:db/ident : :database-owner "..."}#2017-02-0119:54favilaYour "last five created entities" example is just a query, no?#2017-02-0119:56favilaIn any case, we often have some entities which describe something about the database itself or reference other entities which are special in some way. A way to deal with this is to have a single well-known named entity in the db (named with an ident), and on it are attributes+values which assert those special things.#2017-02-0119:57favilaDatomic itself uses this technique for the :db.part/db partition (entity 0), where all the schema is held#2017-02-0120:33weii see, so : is a global entity#2017-02-0120:45favila@wei Yes, but it's better to have the ident remain unchanging than to move the ident to a different entity#2017-02-0122:02eoliphantHi, i’m having a weird issue with the console. 
All my data are still in the db, I can run queries from both my app with an embedded peer and the query page just fine#2017-02-0122:04eoliphanthowever, the ‘schema’ area doesn’t show my attribute defs, and when I go to the indexes tab, and try to run say an :avet query, the attribute dropdown doesn’t list the ones I’ve added to my schema, even though again, I can run queries that use them just fine#2017-02-0216:45devthare db/ids guaranteed to never change? e.g. during migration from one underlying storage to another, or some maintenance event that i'm not aware of. trying to determine if i should rely on them for things like URLs#2017-02-0216:47pesterhazyI think the conventional wisdom is that you shouldn't rely on entity ids#2017-02-0216:48potetm^^^#2017-02-0216:48pesterhazyThey might change in future versions of datomic, perhaps due to repartitioning (but that's not happening currently)#2017-02-0216:48pesterhazyEven now, they make it hard to re-import a dataset#2017-02-0216:49potetmIt's very much a "datomic internally managed id".#2017-02-0216:49pesterhazyEntity ids don't change when restoring a backup, but when you build your own backup/restore solution (for a partial database), you'll end up with a new set of entity ids#2017-02-0216:49potetmRight.#2017-02-0216:50pesterhazyA good strategy is to pick a primary key and to use lookup refs#2017-02-0216:50potetmThey don't allow you to set it. So anything shared externally that needs to move from Database A to Database B should have another id assigned.#2017-02-0216:50pesterhazy[:user/guid #uuid "..."] is a drop-in replacement for entids almost everywhere#2017-02-0216:51devthright. this all makes sense. thanks!#2017-02-0216:51pesterhazyyou're welcome!#2017-02-0223:48SoV4I have a beginners' question... 
how can I do a d/transact and also get the entity ID as a result / return val?#2017-02-0223:53favila@sova http://docs.datomic.com/clojure/#datomic.api/resolve-tempid#2017-02-0223:54favilaMake a tempid, keep it, transact, keep the :tempids result, then use that function to get a real entid from the two#2017-02-0223:56favila(note I rarely do this anymore, using upserting attrs and refetching by lookup ref is what you want to do most of the time)#2017-02-0223:56weiis there a good answer yet for using spec on EntityMaps? s/keys fails because an EntityMap is not a map?#2017-02-0223:58SoV4@favila thanks! I was just coding up the same thing (d/transact and then do a query to get the entity id)#2017-02-0223:59SoV4the joy of clojure is real ! 😃#2017-02-0309:09pesterhazy@sova, I've found, by the way, that I usually don't need to get the entity id when the code is written in idiomatic way#2017-02-0309:09pesterhazyalthough of course that depends on what you're transacting 🙂#2017-02-0313:35reitzensteinmhas anyone come across an :db.error/invalid-entity-id on valid entities before?#2017-02-0313:36reitzensteinm(d/transact (conn) [{:db/id 17592186076008, :account/owner-phone-number "047041****"}])
=> #object[datomic.promise$settable_future$reify__6480 0x49b7d935 {:status :failed, :val #error {
:cause ":db.error/invalid-entity-id Invalid entity id: 17592186076008"#2017-02-0313:36reitzensteinm(d/touch (d/entity (cal.utils.database-connection/db) 17592186076008))
=> {:account/owner-phone-number "<snip>", :account/base-commission 0.0, ...}#2017-02-0313:37reitzensteinmI'm using the dev db, deleted the data and restored a backup of prod, where we seemed to be having a bit of trouble#2017-02-0315:05favila@reitzensteinm You cannot transact arbitrary entity ids explicitly#2017-02-0315:06favilaAre you sure the entity existed before?#2017-02-0315:06favilathe only way to "mint" a new entity id is to transact with a temp id and let the db assign an actual id#2017-02-0315:14reitzensteinm@favila yes I'm sure it existed (see the second output)#2017-02-0315:19favilawhat is the basis-t of the db? @reitzensteinm#2017-02-0315:21reitzensteinmit says 540270#2017-02-0315:21reitzensteinmis that a transaction count?#2017-02-0315:23favilait's the "t" part of the last transaction#2017-02-0315:24favilathere is one counter inside the db which is incremented for every minted entity id. its value goes in the "t" part of an entity id (the lower bits)#2017-02-0315:25reitzensteinmah, I see#2017-02-0315:25favilathe higher bits of an entity id contain the t of a partition id#2017-02-0315:25favilaso 17592186076008 has a t of 31592, and a partition of 4#2017-02-0315:25favila(the user partition)#2017-02-0315:25reitzensteinmmakes sense. it's a very old entity#2017-02-0315:26reitzensteinmbeen in the db for nearly a year#2017-02-0315:26favilaI was just sanity-checking that the t value had advanced beyond this entity's t value#2017-02-0315:26reitzensteinmi'm suspecting corruption of the db. I was investigating an issue where, on prod, a transaction with a tempid overwrote an existing entity#2017-02-0315:27reitzensteinmthe error occurs on a backup of the prod db, restored locally#2017-02-0315:27reitzensteinm(not the same entity)#2017-02-0315:27reitzensteinmbut the restored db is apparently very broken#2017-02-0315:28favilaother sanity checks I can think of: are you sure that db is actually from the correct connection? 
Are you connecting to both transactors from the same process with an older datomic version? (older datomic versions would reuse the connection)#2017-02-0315:29favilado you see any 0-size files in the values/ dir of your backup? (this happened to us--a corrupted backup)#2017-02-0315:29reitzensteinmthere's just the one transactor (the same process does not connect to prod)#2017-02-0315:30reitzensteinmwill check the backup dir#2017-02-0315:30reitzensteinmi've deleted and rebacked up several times#2017-02-0315:30favilaif all of this is ruled out, I would file a support ticket#2017-02-0315:30favilathis does sound like corruption#2017-02-0315:30favilawhat is your prod storage backend?#2017-02-0315:30reitzensteinmit's on postgresql, hosted on RDS#2017-02-0315:31favilayou can also query the db directly to look for corruption#2017-02-0315:31favila(if it manifests in prod too)#2017-02-0315:31reitzensteinmthe same error (writing to existing entity) does not, but I'm thinking that the other shoe is going to drop pretty soon#2017-02-0315:32reitzensteinmjust the one error on prod indicating corruption (tempid reusing an existing entity)#2017-02-0315:32reitzensteinmdo you mean using the datomic integrity diagnostics fn?#2017-02-0315:33favilaI'm not aware of that? I meant the blob column in the sql db will be size 0 or start with 0x00#2017-02-0315:33reitzensteinmah I see#2017-02-0315:33favila(for the value rows, their id value looks uuid-ish)#2017-02-0315:34reitzensteinmthanks for your help! I'll try to look for corruption and file a ticket#2017-02-0315:37favilaselect * from datomic.datomic_kvs where id like 'pod%' will get you the (mutable) roots, may have some interesting things in it#2017-02-0315:39reitzensteinmno 0 sized files in the backups dir, lowest are 60#2017-02-0316:10jdkealyis there some sort of trick for manual indexing to help me get around a terribly slow query ?#2017-02-0316:11jdkealyphotos belong to a collection, collection belongs to organizations. 
I have 2M photos and I can't get the highest "foreign key" from my photos because I'm running out of RAM.#2017-02-0316:12marshallhighest?#2017-02-0316:12marshallas in largest numerically?#2017-02-0316:12jdkealyyes#2017-02-0316:12marshallfor a specific attribute?#2017-02-0316:12jdkealyyes, but it's not necessarily unique#2017-02-0316:13jdkealybecause each organization will have their own FK#2017-02-0316:13jdkealyorg 1 has 10 photos with ids 1-10, org 2 has 20 photos ids 1-20#2017-02-0316:13marshallas long as the attribute is index/true you could simply walk (d/datoms :avet :foreign/key) until you get to the largest#2017-02-0316:14marshall^ that is lazy so it should be pretty memory efficient#2017-02-0316:14jdkealythere's no way to isolate those datoms to just an org's datoms is there?#2017-02-0316:14marshallif you want to realize the whole set you could (seq ) it#2017-02-0316:14marshallhow are orgs defined?#2017-02-0316:15jdkealyit's basically a user. firstname / lastname... they create a collection :collection/organization and then you upload photos to your collection :collection/photo#2017-02-0316:15marshallthat’s all in a single entity?#2017-02-0316:15jdkealyno#2017-02-0316:15jdkealy3 entities#2017-02-0316:16jdkealy:user/name :db/type :string
:collection/organization :db/type :ref
:photo/collection :db/type :ref#2017-02-0316:16marshallhave you tried optimizing the ordering of clauses in your query?#2017-02-0316:16jdkealyyes#2017-02-0316:16jdkealythere's only about 4 orgs now#2017-02-0316:16marshallah#2017-02-0316:16jdkealyso i go org -> collection -> photos#2017-02-0316:16jdkealybut in my big org, it still has to filter through 2M photos#2017-02-0316:17jdkealyso... i guess one thing that would help would be being able to quickly iterate through an org's photos#2017-02-0316:17jdkealywould it make sense ( i'd hate to do it ) but to add an attribute :photo/organization#2017-02-0316:17jdkealywhich bypasses the collection ?#2017-02-0316:18marshallyou could do a combination of query and datoms#2017-02-0316:19marshallquery to get the entity IDs of the collections#2017-02-0316:19marshallthen use datoms on :eavt or :aevt#2017-02-0316:19marshallfor each of the collections#2017-02-0316:19jdkealyas one query or 20k queries ( i have 20k collections )#2017-02-0316:19marshallah#2017-02-0316:20jdkealyeach collection averages like 100 photos#2017-02-0316:20marshalland you’re looking to find the ‘largest’ collection?#2017-02-0316:22jdkealyi'm trying to find the single photo that has the highest foreign-key#2017-02-0316:22jdkealyi have a counter function in datomic#2017-02-0316:22jdkealyso it doesn't need to happen every time i create a new photo#2017-02-0316:22marshallis the highest foreign key the most recently transacted?#2017-02-0316:23jdkealybut i'm importing old data. so after import i'm trying to set the counter to the highest fk#2017-02-0316:23jdkealysure... the highest foreign key for that organization recently transacted#2017-02-0316:23jdkealyi mean... it's recent if the thing isn't broken#2017-02-0316:23marshallyou could use the log#2017-02-0316:24marshallwalk backwards through transactions until you find it#2017-02-0316:24jdkealyright... 
i guess i'm trying to find ways though to offset some of my performance anxiety about these kinds of queries in the future#2017-02-0316:25jdkealyis the log really a sustainable solution#2017-02-0316:25marshallit’s a tree just like the indexes#2017-02-0316:25marshallit just happens to be indexed by t first#2017-02-0316:26jdkealyoh interesting#2017-02-0316:26jdkealyi don't think though that i could trust the last attr to be the highest#2017-02-0316:26marshallhttp://blog.datomic.com/2016/08/log-api-for-memory-databases.html#2017-02-0316:26marshallah. well, that might not work then#2017-02-0316:27jdkealydo you think I should think about duplicating the attr :collection/organization to :photo/organization ?#2017-02-0316:27marshalli don’t love it, but it might work best#2017-02-0316:27jdkealythat way i could query the datoms#2017-02-0316:28marshallyeah, having the intermediate entity means you have a join required#2017-02-0316:29jdkealyright... i guess it's like an index, it would be cool if you could have these kinds of indexes happen in the background#2017-02-0316:29jdkealylike... if an entity belongs to another ent which belongs to another X levels deep#2017-02-0316:30marshallbasically compound keys#2017-02-0316:30jdkealysince the datoms api seems to be the only way to query big sets quickly#2017-02-0316:31jdkealyis it in the roadmap to expand on d/datoms to allow you to add multiple parameters ?#2017-02-0316:47favila@jdkealy what do you mean?#2017-02-0316:52favila@jdkealy you could use d/datoms within the query too#2017-02-0316:59jdkealyoh?#2017-02-0317:01favilayou can call arbitrary functions in a query. 
Remember the query is run on the client#2017-02-0317:02favilayou're trying to avoid realizing a large intermediate set you only want an aggregation for#2017-02-0317:03favilayou can do that by performing the aggregation in the where of the query via a function call instead of using datalog to aggregate (which will realize the intermediate set)#2017-02-0317:05favilae.g. (defn last-datom [db idx & matches] (last (apply d/datoms db idx matches)))#2017-02-0317:06favila':where [(my.ns/last-datom $ :eavt ?org ?whatever) [?e ?a ?v ?t]]#2017-02-0317:07favilaAlthough I'm not sure how relevant this is from your discussion because I'm not sure how the entities are connected. I would have to see the original query#2017-02-0317:07faviladatoms is still an index scan though#2017-02-0317:08favilawhy are you interested in the highest entity id anyway? that seems like a strange thing to care about?#2017-02-0317:09jdkealythe highest foreign key... i'm migrating data from databases that used an RDBMS and i want to keep the foreign key counter going seamlessly when they migrate to new system#2017-02-0317:09favilaah, its not an entity id, it's a long#2017-02-0317:10jdkealyyeah#2017-02-0317:10favilawhat if you put that clause first?#2017-02-0317:11jdkealywhich#2017-02-0317:11favilareverse the order of your :where clauses#2017-02-0317:11favilaso you start with all fk, then filter by org#2017-02-0317:11favilarather than starting with org and finding all its fk#2017-02-0317:11jdkealywould that make it faster ?#2017-02-0317:12favilamaybe#2017-02-0317:12jdkealyi have 4 orgs, 20k collections, 2M photos#2017-02-0317:12jdkealyor "content"#2017-02-0317:12jdkealyi thought the idea was to go from lowest to highest#2017-02-0317:13favilanormally, but there's also benefit to traversing in a sorted order#2017-02-0317:13favilaperhaps query engine is smart enough to dispose of some intermediate sets#2017-02-0317:14favilaprobably I am wrong though and the size of the intermediate set is what 
dominates#2017-02-0317:14marshallits definitely worth testing#2017-02-0317:15marshalli also suspect what @favila said - the join is dominating#2017-02-0317:16marshallif you have some prior knowledge of the values you’re looking for you might be able to leverage http://docs.datomic.com/clojure/#datomic.api/index-range#2017-02-0317:16marshalli.e. start at your last known largest value#2017-02-0317:16marshallthat would reduce the size of the required index scan#2017-02-0317:17favilayes, avoiding a full datoms seq for the index segment#2017-02-0317:17favila(think array bisection)#2017-02-0317:17jdkealycool thanks guys#2017-02-0317:17jdkealyi'll give it a try now#2017-02-0317:17favilahonestly if orgs is small I would just write a reduction function#2017-02-0317:18marshallagreed#2017-02-0317:18marshalluse an async transducer if you’re using clients#2017-02-0317:18jdkealyare there some examples somewhere of something like this ?#2017-02-0317:19favilaI'll write one up for this#2017-02-0317:19jdkealywow thanks 🙂#2017-02-0317:22marshallhttps://github.com/Datomic/client-examples/blob/master/examples/crud.clj#L63-L72#2017-02-0317:23marshallI believe that ^ is an example using a transducer across an async query#2017-02-0317:24marshallah, i lied#2017-02-0317:24marshallit could be easily made into one#2017-02-0317:29favila@jdkealy something like this https://gist.github.com/favila/a93662a47952c5eda6708a1e5fddd791#2017-02-0317:30favila(As a side note, you can avoid need for ffirst with :find (max ?x) .)#2017-02-0317:31jdkealyoh got it#2017-02-0317:31jdkealywouldn't you want to make sure the new fk is higher than the previous seen ?#2017-02-0317:31jdkealyif we could assume that we could just grab the last one, no ?#2017-02-0317:31favilahttp://docs.datomic.com/query.html#find-specifications#2017-02-0317:32favilais type of :content/fk a :long?#2017-02-0317:33jdkealyyes#2017-02-0317:33faviladatoms are stored in sorted order, so the :avet index will sort by a, then v, then e, then 
t#2017-02-0317:33jdkealyah got it#2017-02-0317:33favilav here is :content/fk, so it is always increasing#2017-02-0317:33jdkealyoh ok... so this creates a map of all the orgs and their fk's#2017-02-0317:33favilayes#2017-02-0317:34favilait seqs over all content/fk, for each one gets its org, then writes that to a map#2017-02-0317:34jdkealycool great#2017-02-0317:34jdkealyat what point would this not be sustainable ? what if i have 100M photos ?#2017-02-0317:34favilalots of orgs#2017-02-0317:34jdkealyi can deal with slow...#2017-02-0317:34jdkealyjust not timing out#2017-02-0317:34favilathis should never time out#2017-02-0317:34jdkealycool great#2017-02-0317:35favilathis is the smallest amount of mem you can possibly use#2017-02-0317:35jdkealyhonestly i'm using elasticsearch for most of my queries... datomic i'm just using to give me the facts#2017-02-0317:35favilaso it may not be fast if there are a lot of datoms to seq over, but they will never all be in memory at the same time#2017-02-0317:35jdkealycool awesome#2017-02-0317:36marshallneat side effect - you’ll populate your local cache with all the segments about your orgs and photos by running it#2017-02-0317:36marshalldepending on the size of your local (and/or memcached) instances#2017-02-0317:37jdkealyinteresting#2017-02-0317:37jdkealydo you populate the cache by calling by-id ?#2017-02-0317:37jdkealyi mean... 
(d/entity _db id)#2017-02-0317:37marshallthe datoms call technically#2017-02-0317:37marshallalthough entity probably would too#2017-02-0317:38marshallbasically anything that fetches data will cache#2017-02-0317:38marshallso query, datoms, etc#2017-02-0317:38marshallmy point was only that if you ran that reducer on a “cold” peer by the time it ran you’d have cached potentially most of your DB#2017-02-0317:38marshallobviously depending on the size of the db and size of your cache#2017-02-0317:38favilaso a repeated run on the same peer would likely execute much more quickly#2017-02-0317:39jdkealycool, got it#2017-02-0317:39jdkealywow holy shit that was pretty fast#2017-02-0317:39jdkealyi mean like 10 seconds fast ... but definitely beats timing out!#2017-02-0317:40favilahah I was not expecting so large a difference#2017-02-0317:40marshalllol#2017-02-0317:40marshallawesome#2017-02-0317:40marshallthose big set to small set joins can really be a bear#2017-02-0317:40jdkealywithout indexes, it ran for like an hour and i gave up, with indexes GC overhead, and this way like 10 seconds#2017-02-0317:40marshallmost of the time query is great and preferred, but there are a few cases where you definitely can’t beat direct datoms#2017-02-0317:40favilayeah that access pattern is a weakness of queries. Queries can't do the aggregation until the entire set you are aggregating over is realized#2017-02-0317:41favilaso if that set is very big, you may OOM#2017-02-0317:41jdkealyright that makes sense#2017-02-0317:50jeff.terrellQuestion: what is a Datomic database value, really? Is it just an entity id or something?
I'm asking because I used an exception service today to diagnose a production exception (in a Rails app), and I thought, man, it'd be so nice to be able to query the database at that point in time. I got to wondering whether Datomic database values were lightweight enough to include in the exception information, so that I could copy the value from an exception report, then go and paste it into a REPL session connected to the production database for forensic investigation.#2017-02-0317:55favila@jeff.terrell You can recreate a db value from its basis-t, as-of-t and since-t (both usually nil). We log the t in situations like you describe#2017-02-0317:56favila(d/basis-t db) => t to get it, (d/as-of db t) to recreate it#2017-02-0317:59favilaIf you have a timestamp on your log you can also grab a db value by time#2017-02-0318:30jeff.terrell@favila - Thanks! That makes sense. Filing this little fact away for future Datomic advocacy. :-)#2017-02-0321:25stuartsierra@jeff.terrell More technically, a database value is a pointer into one set of "root" nodes in the immutable persistent trees that make up Datomic's indexes in storage. New versions of these roots are created each time the transactor does an indexing job; old roots are discarded when you run gc-storage. The d/as-of call gives you a "filtered" view of the database as it existed at time t.#2017-02-0322:08jeff.terrell:+1: neat. Thanks for the explanation, @stuartsierra.#2017-02-0415:00nottmeyHi, is it intentional, that the first is not working and second is? Is there a better/clearer workaround?#2017-02-0415:02nottmey[:find ?e ?v
:where [?e :db/ident ?v] [(not (clojure.string/starts-with? (str ?v) ":db"))]]
...Exception: Unable to resolve symbol: ?v in this context
[:find ?e ?v
:where [?e :db/ident ?v] [(-> ?v str (clojure.string/starts-with? ":db") not)]]
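For reference, the difference nottmey is asking about comes down to where ?v appears: Datomic substitutes bound variables only at the first level of a predicate expression (favila explains this further down and suggests namespace instead of string ops). A minimal, untested sketch of the idiomatic form:

```clojure
;; Fails: ?v sits nested inside (str ?v), so it is never substituted:
;;   [(not (clojure.string/starts-with? (str ?v) ":db"))]
;; Works: bind intermediate values, then test them at the first level:
[:find ?e ?v
 :where
 [?e :db/ident ?v]
 [(namespace ?v) ?v-ns]   ; bind the keyword's namespace
 [(!= ?v-ns "db")]]       ; compare the bound value directly
```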
#2017-02-0417:33jarppeI'm implementing a "recent changes" view, where I'm supposed to show who did
what, starting from most recent change and going back in time. I need to filter
out some changes based on users rights and active filters etc. I'm thinking
something like this:
(->> (d/tx-range (d/log conn) nil nil)
     (reverse)
     (filter interesting-to-user?)
     (map tx->changes)
     (take enough-to-fill-screen))
I'm worried about the performance of the (reverse). Is this sensible approach,
or is there some better way to traverse from now (or some given point of time)
to back in time?#2017-02-0419:50amarTrying to run through the getting started guide using datomic-free-0.9.5554. Running ./bin/run -m datomic.peer-server -p 8998 -a myaccesskey,mysecret -d firstdb,datomic: gets me java.io.FileNotFoundException: Could not locate datomic/peer_server__init.class or datomic/peer_server.clj#2017-02-0419:50amaram i missing something?#2017-02-0419:59thegeezthe peer server doesn't work with datomic free, see the bottom of: http://www.datomic.com/get-datomic.html#2017-02-0421:11amar@thegeez thanks. Was just following the tutorial on http://datomic.com Is there another tutorial for getting started?#2017-02-0421:13amarok came across http://gigasquidsoftware.com/blog/2015/08/15/conversations-with-datomic/ hopefully that'll work#2017-02-0514:31favila@nottmey bindings are only replaced with values at the first level#2017-02-0514:31favilathat is why the second works and the first does not, in the first the ?v is nested#2017-02-0514:33favila@nottmey this is a better way: [(namespace ?v) ?v-ns] [(!= ?v-ns "db")]#2017-02-0514:34favilasee clojure name and namespace functions, don't use string ops to examine parts of symbols or keywords#2017-02-0515:44nottmey@favila ahh, one needs to leverage the binding nature of datalog, clever. Thanks!
I was confused, how you would negate your predicates then... (seems like a common thing)
But I found, instead of [(not (predicate ...))], you use (not [(predicate ...)]). Also very handy.#2017-02-0516:34pesterhazyyou can also write your own predicate fn and use it using a fully-qualified name#2017-02-0516:35pesterhazyif you need anything more complex#2017-02-0603:56podviaznikovI tried to retract entity with transacted payload [[:db.fn/retractEntity 13194139534407]]
and got java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: :db.error/reset-tx-instant You can set :db/txInstant only on the current transaction.
error. Not really sure what I did wrong. Any tips?#2017-02-0605:19akjetmais that entity id a transaction id?#2017-02-0611:07pesterhazyhttps://juxt.pro/blog/posts/datomic-packer-terraform.html#2017-02-0613:27favila@podviaznikov You are attempting to retract a transaction id, but you are not allowed to alter the :db/txInstant assertion on transaction entities.#2017-02-0617:09podviaznikovfavila: that id is entity id. Unless they can be the same. But there is definitely an entity behind the id#2017-02-0617:10favilatransactions are entities#2017-02-0617:10favilathey are entities in the :db.part/tx partition#2017-02-0615:52donaldballWe’re beginning to write web service handlers around datomic, and I’m grappling with the question of where and how to enforce authorization. Do folk tend to e.g. write a predicate fn for a user and an arbitrary datomic transaction, or check authorization for specific mutations?#2017-02-0616:07favila@donaldball I know some people use db filter with user-based predicate to enforce visiblity, but that is belt-and-suspenders#2017-02-0616:08favilayou really do need to design specific ops to be safe#2017-02-0616:08favila(at least that is what we discovered)#2017-02-0616:09favilaan idea we had was post-validation: run the tx with a db/with, then validate no constraints were violated (data or security), then transact with a conditional to ensure integrity#2017-02-0616:10favilawe have not tried it yet at scale though#2017-02-0616:12pesterhazyI've used a list of (prismatic) schemas to validate incoming transactions#2017-02-0616:13pesterhazyit worked well but wasn't fine-grained (all admins can transact all transactions matching any whitelisted schema)#2017-02-0616:23val_waeselynck@donaldball we handle authorization on a per-operation basis.#2017-02-0616:25val_waeselynck(I should add that we don't provide our clients an expressive language à la GraphQL / Datomic Pull)#2017-02-0616:26donaldballWe’re anticipating using om.next, but as I understand it, the 
common path even there is for the client to send the server a named mutation operation with some arguments, so y’all’s advice is well taken. Thanks.#2017-02-0616:44devthwe handle authz by sanitizing incoming tx vectors and pull queries. (still WIP)#2017-02-0616:45devthcompletely agnostic to the datomic schema / model, supports any number of roles, access groups and access rules.#2017-02-0617:10marshallTransactions are entities#2017-02-0617:10marshallEvery transaction creates an entity#2017-02-0617:11marshallthat is the ‘reified’ transaction itself#2017-02-0617:11marshallat a minimum, it contains the txInstant of that transaction#2017-02-0617:11marshallit can also have other attributes (transaction metadata)#2017-02-0617:11marshallas Francis mentioned, you can’t retract Transactions#2017-02-0617:12favilaif (d/part <entity-id>) is 3 (= :db.part/tx), then entity-id is a transaction#2017-02-0617:12favilathis is true of the entity id you posted#2017-02-0617:40d._.bHey @marshall or someone who can fix it: http://www.datomic.com/videos.html#2017-02-0617:41d._.bThe videos from Datomic Conf are not showing on that page#2017-02-0617:41d._.b> Sorry, Because of its privacy settings, this video cannot be played here.#2017-02-0617:42d._.b@marshall bah, it was due to my use of ghostery/privacy badger/adblock#2017-02-0617:42marshallah. 
glad you got it figured out#2017-02-0617:43marshallthe video content is stored elsewhere from the website#2017-02-0617:43marshalla lot of ad blockers prevent embedded video when it’s not from the same host#2017-02-0617:44d._.b@marshall by any chance do you have links to the videos themselves (non-embedded)#2017-02-0617:44d._.bi was unable to inspect and snag them from the page#2017-02-0617:45marshalli don’t think they’re available for local download#2017-02-0617:45d._.bi was interested in linking to tim's video directly in another slack channel#2017-02-0617:45d._.bbut no big deal#2017-02-0617:46marshallyeah, looks like they’re only available on that page ATM#2017-02-0622:03devthyou know how a query for a set of entities will return them in form #{[eid1] [eid2]}? can the query be changed so that the result would simply be #{eid1 eid2}?#2017-02-0622:17favila@devth http://docs.datomic.com/query.html#find-specifications#2017-02-0622:18devth@favila thanks! was only familiar with the ?a . spec#2017-02-0711:17jwkoelewijnHi there, I have a question regarding Starter Edition & Memcached, the documentation on http://docs.datomic.com/caching.html does not seem to be very clear: the first paragraph in the Memcached chapter states: "Memcached is an optional addition to a paid Datomic Pro system,......” while in the second paragraph "This setting will only take effect on transactors configured with a Datomic Pro or Datomic Starter license key,......"#2017-02-0711:17jwkoelewijnthe latter sentence leads me to believe we can use Memcached on starter licenses as well, is this correct?#2017-02-0711:26robert-stuttafordyou can#2017-02-0711:26robert-stuttafordonly free can’t use it#2017-02-0713:11jwkoelewijnthanks!#2017-02-0713:11jwkoelewijn@robert-stuttaford#2017-02-0714:04tengDoes Datomic have support for counters, like an alternative id that is increased by one for a given new entity, or do I need to implement it myself like this:
https://github.com/jcrossley3/datomic-counter/blob/master/src/counter/db.clj#2017-02-0715:08marshall@jwkoelewijn Since version 0.9.5530 Starter has provided HA and memcached. Thanks for catching that - I’ll update the text;#2017-02-0715:10karol.adamiec@teng afaik one should use dbfn for that.#2017-02-0715:10karol.adamiecno built in stuff atm#2017-02-0715:11tengOk, thanks @karol.adamiec.#2017-02-0715:40jwkoelewijn@marshall cool, thought something like that, but just wanted to be sure, thanks!#2017-02-0716:06tjgA coworker informally mentioned that Datomic’s crashy on his laptop. (Mac: blows up RAM. Debian VM: freezes & leaves a lot of orphaned processes.)#2017-02-0716:06tjgA couple years ago when I used Datomic, I recall it would become slow, IIRC explained in google groups as a sleep issue.#2017-02-0716:06tjgIs this expected?#2017-02-0716:07tjg(And therefore I should set up dev DBs on a server somewhere?)#2017-02-0716:35pesterhazythe sleep issue was resolved a while ago I think, according to release notes#2017-02-0716:35pesterhazyDatomic's dev transactor has never crashed on me, on mac or linux#2017-02-0716:36pesterhazydo you have a ton of data?#2017-02-0716:41favilaif you don't have much ram, using -Xmx startup args on both client and server processes is pretty important#2017-02-0716:42favilaIt's possible client or server by itself is fine, but both together leads to OOM#2017-02-0716:42favila(sorry, peer or transactor)#2017-02-0720:11tjg@pesterhazy (Sorry for the slow response.) I’m pretty sure my coworker’s using very little data, not nearly enough to stress the 8GB RAM. 
Asking now for verification...#2017-02-0720:12tjgIf no one else has any weirdness running it locally for dev, I’ll try to replicate on my machine...#2017-02-0720:14favila@tjg just make sure transactor + peer are started with -Xmx2G or something, to limit max ram they can use#2017-02-0720:21tjg@favila Thanks, will do!#2017-02-0803:24rabbitthirtyeightI'm trying to use the new string temp id feature but keep getting this error: :db.error/not-a-keyword Cannot interpret as a keyword: userid, no leading :
{:db/error :db.error/not-a-keyword}
The transaction: [{:db/id "userid"
:user/email "
The reftype is defined as: :widget/purchased-by {:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
Does anyone see something obvious that I'm missing?#2017-02-0805:04favila@rabbitthirtyeight what is the db/id for your second map?#2017-02-0805:12rabbitthirtyeight@favila you mean in the transaction? At this point I've actually got the temporary string id working on the widget, so it's id is being set as [{:db/id "userid"
:user/email "
And I've gotten it to where "widget-id" is creating a temp id, but for some reason I still can't get it working with "userid" (I eventually just got the transaction working by reverting to the macro syntax for the user temp id)#2017-02-0805:16rabbitthirtyeightSo now I can use a string temp id and use it with this attribute::thing/has-widget {:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
But not this one: :widget/purchased-by {:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
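For comparison, a transaction in which every entity map carries an explicit :db/id string tempid would look roughly like this; the attribute names come from the discussion and the values are hypothetical stand-ins for the truncated ones above (sketch only, assumes a Datomic version with string tempid support):

```clojure
(d/transact conn
  [{:db/id "userid"
    :user/email "someone@example.com"}   ; hypothetical value
   {:db/id "widget-id"
    :widget/purchased-by "userid"}])     ; string tempid used as a ref
```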
#2017-02-0805:24favila@rabbitthirtyeight is it any different if every map includes a :db/id pseudo-attr?#2017-02-0805:24rabbitthirtyeightHmm... good question.#2017-02-0805:24favilaso the third map in your last example, second one in your first example#2017-02-0805:27rabbitthirtyeightThat... fixed it!#2017-02-0805:27rabbitthirtyeightThanks @favila#2017-02-0805:28rabbitthirtyeightDo you have any idea why that was the issue?#2017-02-0805:28favilaseems like a bug to me#2017-02-0805:29favilabut my hunch was that without a db/id to anchor the assertions it was not invoking the string-tempid-as-value semantics#2017-02-0805:30favilaand trying to parse as ref would normally be, i.e., a map, a tempid object, a lookup ref, or a keyword#2017-02-0805:32rabbitthirtyeightYeah that does look like what was happening. In the morning I'll see if I can reduce it to a simple reproducible case (my actual transaction had more noise in it)#2017-02-0805:33favilaDefinitely seems like a bug to me, so file an issue on cognitect's zendesk support page if you can make a small test case#2017-02-0805:34favilathere were a number of really silly bugs related to string tempids and auto-tempids fixed in the last two releases, would not be surprised if there are more#2017-02-0805:34rabbitthirtyeightWill do. Now that that's settled I'm going to bed. Thanks again for your help!#2017-02-0808:30pesterhazyIt used to be that you would always need to specify :db/id. Is that changed in the new datomic version?#2017-02-0809:47karol.adamiec@pesterhazy yes, it is inferred now#2017-02-0809:48karol.adamiecsame with db.install/alter#2017-02-0809:49karol.adamiechttp://blog.datomic.com/2016/11/datomic-update-client-api-unlimited.html#2017-02-0809:49karol.adamiecsection about tempids#2017-02-0809:56pesterhazythanks#2017-02-0811:51souenzzohow to compare instants?
Example: I want all :user/name changed after login1.
Something like [:find ?e :in $ ?login :where [?e :user/name _ ?tx] [?tx :db/txInstant ?inst] [?login _ _ ?txLogin] [?txLogin :db/txInstant ?loginInst] [(> ?inst ?loginInst)]]
But not sure on (> ?inst ?loginInst)#2017-02-0811:52pesterhazy.after#2017-02-0811:52pesterhazy?#2017-02-0812:01val_waeselynckWould like some feedback here: https://stackoverflow.com/questions/42112557/datomic-schema-for-a-to-many-relationship-with-a-reset-operation#2017-02-0812:12rauh@val_waeselynck I wouldn't recommend doing this. With a proper transaction function you'll see exactly what has changed and have a nice transaction, so you have a nice paper trail.#2017-02-0812:12rauhI implemented this a while ago: #2017-02-0812:13rauhIt's for refs but can be modified for values.#2017-02-0812:24karol.adamiec@val_waeselynck have the same. I need/want to semantically have a collection, but instead of managing the items I always set a new collection. I do that using a ref attr that is removed before update with ':db.fn/retractEntity', then the new one is inserted. With careful management of isComponent it works fine and no leftovers are in the db, and retract entity does not remove too much either 🙂. But if there is a nicer pattern i am all ears as well.#2017-02-0812:26pesterhazyI've had the same issue in the past#2017-02-0812:28pesterhazyin an admin interface, a product has 0..n features. A user can add and remove features. When she clicks Save, I compute the diff to the previous state, retract the obsolete ones and assert the novel ones.#2017-02-0812:29pesterhazyI don't actually use a transactor fn for this, though using that would be safer I suppose#2017-02-0812:32karol.adamiecthink it depends on what the domain interest is. If one cares about data in the collection keeping some kind of references to other entities (that are linked through entity id instead of domain keys like e.g. email) then one has to calculate the diffs and do a minimal update. If the data in the collection are not “linked” directly to any other part of the system, calculating the diff seems like accidental complexity. 
Then the delete/create seems more appropriate.#2017-02-0812:33karol.adamiecbut no idea about wider performance etc. implications of this#2017-02-0812:34rauh@karol.adamiec That's exactly what my db-fn above does. It even allows newly transacted items to be ref'd. Removing/re-adding means your history is messed up, which may or may not be bad depending on what your application is interested in.#2017-02-0812:35karol.adamiec@rauh yeah. i bookmarked that gist 😄. thanks. but still not a whizz in clojure enough to just swallow that bit of code 😄.#2017-02-0812:35rauh@val_waeselynck To add why I would not do it like you suggest: You'll have non-used entities (the collection) after a while unless you carefully garbage collect.#2017-02-0812:36karol.adamiecthat is why isComponent in the schema is crucial, and deleting using the built-in fn.#2017-02-0812:43karol.adamiecbut yeah, history will most likely be messed up#2017-02-0812:47karol.adamiec@pesterhazy i think a dbfn is a must for that. you are definitely at risk. when loads get high, mess will follow imo.#2017-02-0812:49val_waeselynckthanks guys 🙂 don't hesitate to add these comments on SO for the future generations 😉#2017-02-0812:49pesterhazy@karol.adamiec diffing is not unnecessary complexity in this particular case as you cannot retract and assert the same fact in the same tx#2017-02-0812:50val_waeselynck@rauh I have implemented this fn too, and still find it limited. Could you elaborate on the drawbacks of the proposed approach?#2017-02-0812:50val_waeselynck@rauh#2017-02-0812:51val_waeselynckthanks hadn't seen your previous comment.
Not sure garbage is that big an issue.#2017-02-0812:52rauh@val_waeselynck I'd say your version is ok if you're fine with "quick 'n dirty", but the proper way (ie you have a proper history) is to do all this in a single transaction with all the proper minimal changes#2017-02-0812:52rauhYou do open yourself up to race conditions too.#2017-02-0812:52rauhIn what sense do you find the fn limiting?#2017-02-0812:53val_waeselynckThe fn kinda breaks the 'nested maps' writes for one thing#2017-02-0812:54val_waeselynckit also has the limitation of running on the transactor, although I'm okay with that in this particular case.#2017-02-0812:54val_waeselynckI don't see the race conditions you're mentioning.#2017-02-0812:56rauhWhat do you mean with nested maps breaking? Can you give an example?#2017-02-0812:57val_waeselynck@rauh going further with my example code:#2017-02-0812:59pesterhazybut there could be an extended version#2017-02-0812:59pesterhazythat accepts a vector of txs#2017-02-0813:00rauh@val_waeselynck My version is for refs only, but there you def can use nested maps as much as you like, you just have to assign them temp-ids and pass them into the vector#2017-02-0813:00val_waeselynck@pesterhazy which you'd have to parse in order to find the entity ids on which to perform the diffs... still not completely easy 🙂#2017-02-0813:00pesterhazyand uses some special syntax marker to replace #{a b c} with #{:only! a b c}#2017-02-0813:01pesterhazynot easy but it should be possible to implement this as a general reusable tx fn#2017-02-0813:02pesterhazythat would be neat#2017-02-0813:02pesterhazynot sure what to use as the syntax extension mechanism though#2017-02-0813:06val_waeselynckStill, the tx-fn approach still feels a bit hacky to me. 
For some reason, I feel this is a contrived way of using a cardinality-many attribute - in my view cardinality-many attributes are designed to model facts that don't collide with each other, not to model cohesive sets of entities#2017-02-0813:06pesterhazywhat would you use cardinality-many for then?#2017-02-0813:07rauh@val_waeselynck Datomic very much encourages using tx-fns. It's not hacky at all IMO. This isn't pg-sql.#2017-02-0813:08val_waeselynck@rauh the db fn is not what disturbs me most#2017-02-0813:08rauhIf a user removes a favorite movie from his/her list then the proper transaction is to [:db/retract...] and not a complete rewrite IMO#2017-02-0813:09val_waeselynck@rauh well it's not especially harder to retract from the list. That's what I like with this approach: that absolute and incremental operations are equally easy.#2017-02-0813:09rauhMaybe down the road in a year you want to display the history of the user's favorite movies... You'll have to manually do the work with your approach.#2017-02-0813:12val_waeselynck@rauh on the contrary, I would argue that the intermediate-entity approach is more sound for that use case, because you don't have to resort to querying history, which has many limitations. The notion of "version" of your favorite movies list becomes a first-class citizen of your model (maybe at the cost of adding a couple more attributes).#2017-02-0813:14val_waeselynck@pesterhazy In my view, there's a difference between "Stu likes pizza (among other things)" (for which cardinality-many attributes are well suited) and "There's a list of the things that Stu likes, which contains ..."#2017-02-0813:17pesterhazynot sure I get the distinction#2017-02-0816:41favila@rabbitthirtyeight I think this is the root of your problem last night: (d/entid (d/db (d/connect "datomic:")) "str")
IllegalArgumentExceptionInfo :db.error/not-a-keyword Cannot interpret as a keyword: str, no leading : datomic.error/arg (error.clj:57)
d/entid does not understand string tempids.#2017-02-0816:42favilaIt should work the same way as tempids probably: (d/entid (d/db (d/connect "datomic:")) (d/tempid :db.part/user)) ;=> -9223350046623220288#2017-02-0816:57rabbitthirtyeightHuh. Yep that's the error message.#2017-02-0816:58favilaBreaks one of my tx functions#2017-02-0817:32nottmeyUsing the pull api with pattern [:release/country] I get {:release/country {:db/id 17592186045550}}.
How do I get "enums" to automatically become either/or
1. {:release/country {:db/id :country/DE}}
2. {:release/country {:db/id 17592186045550 :db/ident :country/DE}
3. {:release/country :country/DE}
I know that I could prefetch the schema and apply transformations on the pull result, but that seems counterintuitive.#2017-02-0817:33favila@nottmey That is the only way#2017-02-0817:34favilaI retrieve :db/ident on the leaves and do a prewalk transformation#2017-02-0817:39favila(->> (d/pull db '[{:release/country [:db/ident]}] eid)
     (clojure.walk/prewalk
      (fn [x]
        (if (and (map? x) (:db/ident x))
          (:db/ident x)
x))))#2017-02-0817:44nottmeyok ty, I see. There is no pattern to say “display all :db/ident regardless of which level of nesting” right?#2017-02-0817:47nottmeyHmm I guess one could also walk over the finished pattern and insert :db/ident at each stage, before calling pull. Seems good enough.#2017-02-0817:52favilayes, that's true#2017-02-0817:53favilathat is a schema-ignorant way of doing it (meaning you don't need to know which attrs you expect to have enum values)#2017-02-0817:53favilaI had been putting them in manually#2017-02-0817:53favilabased on which I expected to have enum values#2017-02-0817:56nottmeyI still need to know, which attributes are refs and * is still hiding ref attributes 😕 (but thats ok, I guess I need to do it the right way)#2017-02-0817:56robert-stuttafordthe reason it’s like this is because you can use :db/ident on any entity. it’s just useful as enums. pull allows you to work with idents for other-than-enums.#2017-02-0817:57favilaYou can also include the db in the post-walk#2017-02-0817:57favilathat way all you need is a db id#2017-02-0817:59favila(defn pull-enums [db pat eid]
  (->> (d/pull db pat eid)
       (clojure.walk/prewalk
        (fn [x]
          (cond
            (and (map? x) (:db/ident x)) (:db/ident x)
            (and (map? x) (:db/id x)) (or (d/ident db (:db/id x)) x)
:else x)))))#2017-02-0818:00favilano need to change the pattern in this case#2017-02-0818:02favilaI think ident is atemporal though (looks at ident cache, not db snapshot)#2017-02-0818:02nottmey@robert-stuttaford yea resolving anything else than an enum might not be a good idea, I exclude that corner case#2017-02-0818:02favilamaybe safer would be (:db/ident (d/entity db (:db/id x)))#2017-02-0818:05nottmeythe usecase is more like “pull entity tree with resolved enums”
I like the “ask db again for ident” approach when it’s about very generic pulls. Would calling this for every leaf map be a bad idea?#2017-02-0818:10favilaif you include :db/ident in the pull, there is hardly any cost#2017-02-0818:10favilaif you don't you need a lookup which is likely to be in the object cache anyway#2017-02-0818:10favila(and so fast)#2017-02-0818:10favilaso I doubt it makes a difference practically#2017-02-0818:15robert-stuttafordas Rich says, why worry, when you can measure 🙂#2017-02-0818:17favila(defn entity-pull
"Like pull, but returns values consistent with d/entity, i.e.,
   entities with :db/ident are represented as keywords and sets are
   used instead of vectors."
  [db pat eid]
  (->> (d/pull db pat eid)
       (clojure.walk/prewalk
        (fn [x]
          (cond
            (and (not (map-entry? x)) (vector? x)) (set x)
            (and (map? x) (:db/ident x)) (:db/ident x)
            (and (map? x) (:db/id x)) (or (:db/ident (d/entity db (:db/id x))) x)
:else x)))))#2017-02-0818:17favilaI've wanted that for other things, never thought about getting the ident after-the-fact#2017-02-0818:17favilathat technique makes it much easier to write#2017-02-0820:09oscarI'm using the Datomic Client library and when I try adding the expression [(namespace ?ident) ?ns] as a where clause I get the error message "The following forms do not name predicates or fns: (namespace)". I tried namespacing the symbol but got the same error. When I run the same query in the Peer library, it works just fine. Does anyone have an idea of what I'm doing wrong?#2017-02-0820:19favilaclojure.core/namespace also does not work?#2017-02-0820:32oscarYeah. By the way Datomic Peer Server is a version old in case that has some significance.#2017-02-0821:52azHi all, can anyone point me in the right direction when it comes to building realtime web apps with clojure and datomic? Are there any frameworks I should look at?#2017-02-0823:11zaneI feel like Cognitect just announced something that fits this description. I'd check out their blog.#2017-02-0911:16nottmeydon't see anything fitting on their blog in the last months#2017-02-0919:32zane@nottmey: https://github.com/cognitect-labs/vase#2017-02-0919:33nottmeyahh I know vase, is it supposed to cater to “realtime” apps?#2017-02-0919:34zaneI'm not sure! I haven't looked into it yet.#2017-02-0823:10zaneHmm. datomic.api/pull-many is NPE-ing at me in one of my tests. Are there common gotchas I should be aware of when using it?#2017-02-0823:19zaneOh damn. I think I might be hitting this issue:#2017-02-0823:19zanehttp://docs.datomic.com/release-notices.html#2017-02-0823:20zane> Datomic versions prior to 0.9.4699 cannot read adaptive indexes, and will fail with the following stacktrace:
> java.lang.NullPointerException
>   at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:191)
>   at com.google.common.cache.LocalCache.getIfPresent(LocalCache.java:3988)
>   at com.google.common.cache.LocalCache$LocalManualCache.getIfPresent(LocalCache.java:4783)
>   at datomic.cache$eval2673$fn__2674.invoke(cache.clj:65)
#2017-02-0823:21zaneOr something similar.#2017-02-0823:21zaneOur Datomic version is significantly newer than the one listed.#2017-02-0901:59anmonteirocan someone point me to docs explaining the new tempid behavior?#2017-02-0902:00anmonteirospecifically, I'm trying to figure out which values are allowed in :db/id when annotating transactions#2017-02-0902:01anmonteirooops, after 30 min looking for it, writing here is what made me find it 🙂
http://docs.datomic.com/transactions.html#outline-container-2-1-1#2017-02-0909:27stijnis there a way to reliably catch 'Non-existing look-up ref' exceptions?#2017-02-0909:27stijn(try
  (d/q
   '[:find ?e
     :in $ ?some
     :where [?some :other/attr ?e]]
   db
   [:some/id (d/squuid)])
  (catch Exception e
(some-> e .getCause .getCause .getMessage (.startsWith "Cannot resolve key"))))#2017-02-0909:27stijndoes not seem like the best way 🙂#2017-02-0909:36rauh@stijn Call d/entid beforehand.#2017-02-0909:44stijnthanks @rauh!#2017-02-0914:02jcfAnyone come across the following error before?
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: Can't let qualified name: db/id, compiling:(NO_SOURCE_PATH:0:0)
  at datomic.promise$throw_executionexception_if_throwable.invoke(promise.clj:10)
  at datomic.promise$settable_future$reify__6755.deref(promise.clj:54)
  at clojure.core$deref.invokeStatic(core.clj:2228)
  at clojure.core$deref.invoke(core.clj:2214)
I'm not using db/id in my code anywhere. This is inside a Manifold deferred BTW.#2017-02-0914:03favilasomeone's trying to do (let [db/id x]). Never seen before#2017-02-0914:04jcf@favila pretty sure it's not me. 🙂#2017-02-0914:04favilalook for macros?#2017-02-0914:04favilaor maybe a map needs some wrapping?#2017-02-0914:04val_waeselynck@favila couldn't it be a {:keys [db/id]} destructuring ?#2017-02-0914:05jcfI have a macro for measuring time taken, but removed it to make sure that's not it.#2017-02-0914:05jcf@val_waeselynck that would make sense if it were in a macro.#2017-02-0914:05favila@val_waeselynck ah, it could be, but on an older clojure#2017-02-0914:05val_waeselynckyeah that's what I was thinking#2017-02-0914:05jcfBut I'm not doing
(defmacro blah []
`(let [{:keys [db/id]} (d/transact conn ...)))
anywhere.#2017-02-0914:06val_waeselyncksearch Datomic's source code... oh wait, you can't.#2017-02-0914:06favilaare you transacting?#2017-02-0914:06favilawhen you hit this error?#2017-02-0914:06jcf@val_waeselynck 🙂#2017-02-0914:06jcf@favila yep.#2017-02-0914:07favilacould you perchance be doing something like (d/transact conn {:my "map"})#2017-02-0914:07favilai.e. forgetting to wrap in vector?#2017-02-0914:07jcf(if (seq txes)
  (let [{:keys [db-after]} @(datomic/transact datomic txes)]
    ;; ...
    (datomic/entity datomic db-after (:db/id task)))
  ;; ...
  )
#2017-02-0914:08jcfIt's something like that with irrelevant bits removed. I'll sniff around the txes to make sure they're valid.#2017-02-0914:09jcf[{:db/id 17592186045447, :task/state :task.state/failed,
:task/next-run #inst "2017-02-10T17:09:02.353-00:00"}
[:inc-attr 17592186045447 :task/retries]]
#2017-02-0914:09jcfTxes are a vector of a map and a db/fn.#2017-02-0914:10favilaperhaps there's something in the impl of :inc-attr#2017-02-0914:10jcfOh! That could be it!!#2017-02-0914:10favilatry to invoke it directly#2017-02-0914:11jcf{:db/id #db/id [:db.part/user]
 :db/ident :inc-attr
 :db/fn #db/fn
 {:lang "clojure"
  :params [db lookup-ref attr]
  :code
  (let [{:keys [db/id] :as ent} (d/entity db lookup-ref)]
    [[:db/add id attr (-> ent (get attr 0) inc)]])}}
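As the thread goes on to work out, the `{:keys [db/id]}` namespaced-keyword destructuring here is what trips up the older Clojure running on the transactor. A sketch of the same fn rewritten without destructuring (not jcf's exact final code, just the obvious variant):

```clojure
;; Same :inc-attr fn, but reading :db/id directly instead of using
;; namespaced-keyword destructuring, so it also runs on an older
;; Clojure on the transactor.
{:db/id #db/id [:db.part/user]
 :db/ident :inc-attr
 :db/fn #db/fn
 {:lang "clojure"
  :params [db lookup-ref attr]
  :code
  (let [ent (d/entity db lookup-ref)]
    [[:db/add (:db/id ent) attr (-> ent (get attr 0) inc)]])}}
```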
#2017-02-0914:11jcfI have a test for that db/fn mind.#2017-02-0914:11favilaah there it is#2017-02-0914:11jcfOh yeah. Parens wrong.#2017-02-0914:11jcfWait…#2017-02-0914:12jcf@favila I must be missing something. Where's the problem with that db/fn?#2017-02-0914:12favilalooks good to me, unless it runs on an older clojure#2017-02-0914:12jcf1.8.#2017-02-0914:13favilabut it's unlikely the transactor's classpath is being messed with#2017-02-0914:13val_waeselynck@jcf does it work when you run the function locally ?#2017-02-0914:14val_waeselynck(e.g in a d/with)#2017-02-0914:14favila(d/invoke db :inc-attr db 17592186045447 :task/retries)#2017-02-0914:14favilawill be more direct#2017-02-0914:14jcf@val_waeselynck I have a couple of tests that use the function, but with an in-memory database.#2017-02-0914:15val_waeselynck@jcf and so, do those work ?#2017-02-0914:15jcfYes 😄#2017-02-0914:15val_waeselynckor do they throw the error#2017-02-0914:15val_waeselynckok that narrows the scope#2017-02-0914:15val_waeselynckI'm guessing either the transactor runs an older version of Clojure, or there's a problem in the serialization of the function code#2017-02-0914:19jcfJust written this test, and it passes:
(deftest ^:integration t-inc-attr
  (t/with-system [{:keys [datomic]} (test.datomic/new-datomic-system)]
    (let [db (sut/db datomic)]
      (is (= [[:db/add 17592186045447 :task/retries 1]]
             (d/invoke db :inc-attr db 17592186045447 :task/retries))))))
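Since the peer and transactor classpaths can differ, one hedged way to see which Clojure the transactor itself is actually running is a throwaway transaction function that throws the version back in an exception. A sketch (the `:debug/clojure-version` ident is made up for illustration):

```clojure
;; Hypothetical debugging fn: after transacting it, running
;; @(d/transact conn [[:debug/clojure-version]]) aborts the transaction
;; with the transactor's (clojure-version) in the error message.
{:db/id #db/id [:db.part/user]
 :db/ident :debug/clojure-version
 :db/fn #db/fn
 {:lang "clojure"
  :params [db]
  :code (throw (ex-info (str "transactor clojure: " (clojure-version)) {}))}}
```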
#2017-02-0914:19jcfI'll double check the version of Clojure on the transactor, but pretty sure it'll be the same everywhere.#2017-02-0914:21jcfTransactor's running Clojure 1.8 too.#2017-02-0914:22val_waeselynck@jcf does it work if you rewrie the function to not use destructuring ?#2017-02-0914:24jcf@val_waeselynck let's see…#2017-02-0914:33jcfRemoving the destructuring from the Transactor fixed it.#2017-02-0914:34jcfStrange. We must not be running the version of Clojure I thought, or maybe there's a bug fix we're missing.#2017-02-0914:34jcfThanks for the help chaps.#2017-02-0915:02val_waeselynck@jcf you can write a transaction function which returns the current Clojure version (in an exception)#2017-02-0915:03val_waeselynckI mean the one running in the Transactor.#2017-02-0915:03jcfThat's one way to do it. I checked the transaction-pom.xml and found Clojure 1.6. I've recommended an upgrade.#2017-02-0915:04favila@jcf look at the files in lib/ in the txor distribution#2017-02-0915:05favilaunless you are adding to that classpath, those are what you are getting#2017-02-0915:05favilaon the latest transactor, clojure-1.9.0-alpha14 is in use#2017-02-0915:06favilaare you using a very old transactor? or including jars on the transactor's classpath which may overwrite the clojure libs?#2017-02-0919:31zaneWhat's the right way in Datalog to express that the set of all values bound to a given variable must be equal to the set of all the values bound to another variable?#2017-02-0919:32favilagiven set is strict subset of other set?#2017-02-0919:33favilaor the two sets are exactly equal?#2017-02-0923:06zane@favila: The latter.#2017-02-0923:07zaneAlso, what is the most appropriate way to ETL data into datomic?#2017-02-1000:24zaneSeems like this: http://docs.datomic.com/best-practices.html#pipeline-transactions#2017-02-1001:26alexmillerThere was a great talk on this by Stuart Halloway at the Clojure Conj last fall #2017-02-1014:52zaneYeah! 
Jaret just recommended it to me.#2017-02-1008:15grounded_sageHow does everyone normally run Datomic? What's the smallest possible instance I could run 2gb.. 4gb..? I'm exploring what would be involved with an architecture involving AWS Lambda for a super lightweight server side database for basic websites. Im assuming that a Reserved EC2 instance paid in advance is the best way. I've also been looking at the Client Library and I am just wondering where further documentation is regarding this "Can support non-JVM languages" http://docs.datomic.com/architecture.html#2017-02-1010:44robert-stuttafordnon-JVM languages is not a thing yet @grounded_sage. super WIP. if you use the client library from lambda (totally a thing) you’ll need to stand up a Peer Server on EC2 as well#2017-02-1010:45robert-stuttafordour transactors are c4.large#2017-02-1011:05karol.adamieci have nicely working transactors on m3.medium, but would gladly move down so if anyone runs lesser instances without problems do tell! 🙂#2017-02-1011:15pesterhazythat kind of information is really useful#2017-02-1011:28grounded_sageThanks for that info @karol.adamiec that's the ball park I was looking at but would be fantastic if it could be pushed down that extra tier. @robert-stuttaford thanks for letting me know. I'm not needing it right now. Still learning and building my front end. Eventually want to build JAMstack style websites for clients and if they need anything more custom I would prefer to use Datomic over learning another database. It seems so much better than anything else. I have barely touched serverside programming and databases yet so if I can keep myself firmly in Clojure land I will be a happy man. A Clojurescript Client Library on lambda would be the bees knees!#2017-02-1013:25val_waeselynckMy transactor is on c3.large, my Peers on t2.medium#2017-02-1013:28robert-stuttafordt2.mediums are going to choke as soon as you give it any sort of load. java wants cores. 
lots of em.#2017-02-1013:39val_waeselynckcan't claim we've been under heavy loads indeed (maybe 1000ish concurrent users) but we've been OK so far#2017-02-1013:40val_waeselyncknever exceeded 30% CPU usage, at least not from traffic#2017-02-1020:12timgilbertSay, is there anything built-in to datomic that will recursively walk the graph from one entity to all of its :db.type/ref attributes and grab the whole set? I don't need it to be performant, just looking for auditing / development aids#2017-02-1020:12timgilbertI guess I'm looking for a recursive (d/touch)#2017-02-1020:40azHi all, has anyone worked on building a realtime web or mobile app backed by clojure and datomic? Would love some general guidance#2017-02-1020:57azBasically, is there a way for the peers to track the queries from clients, and treat them as live queries. Then each peer can listen for transactions, and then somehow invalidate those live queries when pertinent data changes in the system. If we look at systems like firebase, they are essentially holding live queries on each asked for key. Just want to see how we would do something like this without having to create many custom endpoints and special handling. Rather, use the single endpoint philosophy. Imagine if a datalog query could somehow be used as the query key, and if there was some equality check or some check to see “did this transaction affect this query?"#2017-02-1022:57timgilbert@limix: I would think you could roll such a system yourself but I'm not aware of anything that already exists#2017-02-1022:58timgilbertIn totally unrelated news, I have a question about database filters...#2017-02-1022:59timgilbertLooking at this page: http://docs.datomic.com/filters.html#joining-filters
...it seems as though this could only be performant if (d/filter) is applied lazily, can anyone confirm whether that is the case?#2017-02-1023:00timgilbertOtherwise, passing a filtered db into a function would need to scan every atom in the source db#2017-02-1023:04timgilbertAlso, FWIW m3.medium EC2 instances have been working great for my org, but we don't have much load in general#2017-02-1107:39codxseHi, I'm not sure to ask this question here. But I didn't find the answer on the net.. what is datomic valueType equivalent to sql text type (I want store blog post for example). Is that string? Thanks#2017-02-1108:29stijn@limix you could subscribe to the transaction queue, get the db-before and db-after from the transaction result, apply the query to both and see if there's any difference#2017-02-1115:17val_waeselynck@codxse you could use string or bytes, but keep in mind that Datomic is not well suited to store blobs. You may want to store this e.g in S3 and store only the key or URL in Datomic.#2017-02-1116:46codxse@val_waeselynck thanks for the answer, let say if the text that I want to store is not that large (not as large as image), it roughly 100 500 characters (perhaps not so much change), is that good practise to store it in string? I don't know the limitation string in datomic.#2017-02-1116:48val_waeselynckShould be okay#2017-02-1121:19robert-stuttaford@codxse there’s no limit; it’s just that many large strings put pressure on the read side of the database, slowing it down as your database grows#2017-02-1123:50SoV4@limix a single endpoint philosophy is the whole backing to Om.next which is Clojurescript and UI smashed together with a graph query on the backend. I'm doing a realtime app that plays over sockets with Sente. subscriptions to queries as you say, is a great way to go. There are several frameworks if you are interested in building a website / webapp. Firstly, you can try Rum and Datascript. 
These are awesome and basically #datascript is a mini datomic on the clientside and when you have a pertinent transaction all you do is push the transacted delta to subscribed peers. #rum is html/reactjs-event-cycle templating (so there are components, and they get different signals like "i'm about to update/re-render, and some other "lifecycle" events) @tonsky has an incredible cat chat app up that is very elegant. It would be easy to extrapolate this to particular subsets users of users for particular updates. Depends on what you want to do overall, but Rum + Datascript is a great way to break into the realtime app world. Currently, I'm using Om.next and I have had the fortune to have already made a working version of my application in "stop-time" and now I am making a new version in "real-time" For all the things I want, it's changed the stack a bit: there is datomic on the backend, there is an http-kit server that runs sente, there is a clojurescript app that is loaded in the html, and once the app is loaded it communicates over the wire via sente / sockets to know when database changes happen. It sounds complex but it's not, I promise, it's just a lot of aggregated understanding. I would love to eventually streamline all the aspects I need into some sort of generic "app template" that I could use for all sorts of projects, and actually greater minds are already on the issue and there is #untangled which is an open source framework for building applications using om.next +clojurescript on the front-end and some sort of storage solution on the backend, datomic fits just fine.#2017-02-1123:50SoV4Wow that's a lot of words. That's like a mind-dump 2 months in the making#2017-02-1123:51SoV4@limix anyway, the main thing you were asking about "can i send just a tx delta to the subscribed users?" answer: yes totally and as far as I know that's the most elegant way.#2017-02-1202:20az@sova: thank you for this. 
Really happy to hear it.#2017-02-1317:39uwoI know this is kinda silly, but is there a short cut to update a value on a nested ref? e.g.
@(d/transact
  conn
  [{:root-entity/_ref-attr <eid-on-hand>
    :nested-ref/value-attr "updated-value"}])
#2017-02-1317:40uwolike that, but obviously that clobbers the entire ref. I’m looking for more of an assoc-in#2017-02-1317:41uwo(sorry, edited: was missing reverse reference)#2017-02-1318:10pesterhazyReverse refs don't work in transactions afaik#2017-02-1318:11uwothey work fine 🙂#2017-02-1318:14uwobut here’s another example for clarity. Again this doesn’t do what I’m looking to do, because it clobbers the entire ref instead of updating a value. And it’s a silly request because I know I can just use the root entity to query the entity id of the nested value and then use that to transact:
@(d/transact conn
   [{:db/id <the-db-id-on-hand>
     :nested/ref {:ref/attr <new-value>}}])
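One way to get assoc-in-like behavior without querying first on the peer is a transaction function that does the read itself. A hedged sketch (the `:assoc-in-ref` ident, the cardinality-one assumption, and the error handling are mine, not a Datomic built-in):

```clojure
;; Hypothetical tx fn: follow a cardinality-one ref attr from a parent
;; entity, then assert attr/value on the nested entity it points at.
{:db/id #db/id [:db.part/user]
 :db/ident :assoc-in-ref
 :db/fn #db/fn
 {:lang "clojure"
  :params [db parent ref-attr attr value]
  :code
  (if-let [nested (get (d/entity db parent) ref-attr)]
    [[:db/add (:db/id nested) attr value]]
    (throw (ex-info "no nested entity" {:parent parent :ref ref-attr})))}}
```

Invoked in a transaction as [:assoc-in-ref <parent-eid> :nested/ref :ref/attr <new-value>].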
#2017-02-1318:34favila@uwo you need the id of the nested ref. Refs have identity semantics, not value semantics#2017-02-1318:34uworight, I know that if I already have the eid or a :db.unique/identity attribute I can use that. was just wondering if there was a way to navigate from parent without first looking up#2017-02-1318:35favilaThere is no way to do what you want using just transaction maps. You could write a tx function that did what you want, but it would have to work by reading#2017-02-1318:36uwothanks! exactly what I wanted to know#2017-02-1318:36favilathought experiment: tx maps work by expanding to the equiv db/add. How would you expand your assoc-in to a db/add without reading the nested id?#2017-02-1318:37favilatx-maps have no more or less power than db/add and db/retract, it's purely syntax#2017-02-1318:37uwoheh. I don’t know, but I could imagine expanding a reverse reference with the parent id in the same way that lookup refs are expanded#2017-02-1318:38uwoerrr… guess I should say lookup refs are ‘resolved’ not ‘expanded'#2017-02-1318:38favilayou also have to explicitly include them#2017-02-1318:39favilabut like I said, you could write a tx function that does what you want#2017-02-1318:39favilaan assoc-in, essentially. Reusing existing refs#2017-02-1318:39uwoyeah, that makes sense. just do the read inside the tx fn#2017-02-1318:39favilacardinality-many attrs would break the model#2017-02-1318:39uwoyeah#2017-02-1318:40uwothanks for your time!#2017-02-1318:48pesterhazyis using reverse refs in txs a new feature?#2017-02-1318:49favilano#2017-02-1318:49favilaIn fact, it was the old default way of installing attrs and partitions#2017-02-1318:50favila{... 
:db.install/_attribute :db.part/db} remember that?#2017-02-1318:51favilareverse refs are no challenge, the :db/id of the map just gets put in the value slot of the :db/add instead of the entity slot#2017-02-1318:52pesterhazyah yeah#2017-02-1318:52pesterhazyI always wondered why (as I though) it wasn't working because I figured the implementation would be trivial#2017-02-1318:53pesterhazynow I know why - it did work all along 🙂#2017-02-1319:22marshallDatomic 0.9.5561 is now available https://groups.google.com/d/topic/datomic/hHfr_N6m5Q8/discussion#2017-02-1319:58curtosisis anyone integrating e.g. conformity with component to manage schemas?#2017-02-1320:01curtosisthe idea I’m looking at is allowing certain components to depend on certain schema bits being loaded. I don’t think the component model is right for transacting the schema(s), but maybe something along the lines of a component providing the conformity ids that other components can then validate and decide what to do (rather than fail in a query).#2017-02-1320:01curtosisis that crazy? being done? 
being done better differently?#2017-02-1403:47devthanyone generating squuids in clojurescript?#2017-02-1403:49devthi see https://github.com/lbradstreet/cljs-uuid-utils – is this compatible with datomic.api/squuid?#2017-02-1407:36pesterhazyThere is an implementation in datascript#2017-02-1410:06chrisblomis there a way to get the t value of the transaction within a transaction?#2017-02-1410:07val_waeselynck@chrisblom (datomic.api/next-t db)#2017-02-1410:07chrisblomcool thanks, i'll need to create a tx function for this right?#2017-02-1410:07val_waeselynckWell I assume that's what it does.#2017-02-1410:08val_waeselynck@chrisblom yeah, probably#2017-02-1410:08val_waeselynckwhy do you need this though ?#2017-02-1410:09chrisblomfor syncing with datascript, i want to store the t of latest transaction so i can use it to generate diffs#2017-02-1410:10chrisblomi could use the transaction id, but the t value is a more readable#2017-02-1410:10val_waeselynckbut is that not what d/basis-t gives you?#2017-02-1410:12chrisblomyes, normally it would, but only some of my transactions are relevant for the diffing#2017-02-1410:13chrisblomi have transaction that update some data model, and transactions that update metrics#2017-02-1410:13chrisblomand only the data model needs to be synced with datascript#2017-02-1410:14chrisblomi was tagging each transaction with a type#2017-02-1410:15chrisblomand using a query to find the latest datamodel transaction#2017-02-1410:51chrisblomok thanks, i got it working with the transaction function#2017-02-1410:53chrisblommaybe i should use separate db's instead#2017-02-1410:53val_waeselynckyeah storing derived data is currently a bit hacky in Datomic IMHO#2017-02-1410:54chrisblomis querying across 2 databases easy? 
is there a performance penalty?#2017-02-1410:55val_waeselynckI don't think there's a performance penalty, but there may be an expressivity penalty, especially when using Datalog rules, and not / not-join / or / or-join clauses#2017-02-1410:56val_waeselynckHaving said that, you may be able to circumvent those limitations using db functions in query#2017-02-1410:57chrisblomyeah, sounds good, i will try that, i don't need complicated queries for my use case#2017-02-1410:57val_waeselynckWell there may be a performance penalty too, you may need to go through 2 scalar indexes instead of 1 ref index at some point.#2017-02-1410:58val_waeselynckIf you run into the expressivity penalties, don't forget that you can use the secondary db as a dumb key-value store using db functions (a dumb key-value store with awesome local caching and time-travel features 😉 )#2017-02-1411:00chrisblom:+1:#2017-02-1416:06timgilbert@curtosis: we use conformity with mount, but we just load all norms when we initialize the datomic connection#2017-02-1416:08timgilbertI think your idea is mildly crazy, but if you want to do it, the result of running conformity does give you a list of the norms that were transacted, and conceivably you could stash those somewhere and hook them into the component lifecycle#2017-02-1416:10timgilbertFor my use case it's much easier just to load everything all at once so as to ensure the latest code runs against the latest schema. I could see needing to mess around with it in a more finicky manner if you have more complex deployments though#2017-02-1416:49curtosis@timgilbert thanks, that’s helpful. Realistically, the codebase isn’t likely to be that disjoint from the schema. The main use case I’m working toward is “build this version using these 4 of 7 available schema chunks”, but I should probably resist the temptation to overabstractify it.#2017-02-1419:33b2berryAnyone know of a lib for visualizing datomic schema.
Something reminiscent of a generated ERD of sorts, even if the relationships themselves were missing? ED instead of ERD, heh.#2017-02-1420:06wei@b2berry there’s https://github.com/shaunxcode/datomicism#2017-02-1420:10weianyone have a solution for being able to directly upsert entitymaps? would be nice to do something like (d/transact conn [(s/assert ::some-entity (assoc some-entity :prop :val))]) for example#2017-02-1420:11b2berryThanks a lot @wei#2017-02-1420:14weireason for doing this over just using :db/add is to validate the entity with spec before putting it in the db#2017-02-1503:33seantempestaI’m writing an update function that takes user supplied data from a form and updates their user entity. Can someone look over this and tell me if I’m doing it in an idiomatic way? Or if I have any security problems?#2017-02-1503:57podviaznikovis it possible to retract entity and specify additional arg in query? Like delete in SQL with where clause. What I have [:db.fn/retractEntity entity-id] but I also want to say where [user-id :book/creator-id]. Wasn’t able to find such an example in the docs#2017-02-1507:19weifor those on AWS, what are the recommended cloudwatch alarms? for starters, I’d like to know if a transactor is low on memory or goes down#2017-02-1508:17seantempesta@val_waeselynck: Woops! So used to creating a db value for querying I musta just tossed that in. And you’re right, I don’t need to merge anything except the :db/id. Thanks! Much appreciated.
Exception in thread "main" java.lang.IncompatibleClassChangeError: Implementing class...
#2017-02-1520:56jaret@baptiste-from-paris Are you using datomic FREE? There is a known issue with Datomic FREE and console. We're working on a fix. If you register for Datomic starter (which is free to use) the Datomic PRO edition has no classpath issue with using console.#2017-02-1520:58baptiste-from-parisOh ok#2017-02-1520:58baptiste-from-parisThx #2017-02-1520:58baptiste-from-parisDid not know that#2017-02-1521:00jaretI believe we broke this on 5544 so 5407 FREE has a working console#2017-02-1521:00nottmey@baptiste-from-paris you need to use the original version the console was intended for as free user#2017-02-1521:01nottmeytry datomic-free-0.9.5372 and datomic-console-0.1.214 - that worked fine for me (they were released on the same day)#2017-02-1521:01jaret@nottmey is correct any Free version up to and including 5407 will work. But on 5544 and after the console is currently broken.#2017-02-1521:02jaretAlso an important side note. Free does not include Clients. You'll need to use Datomic Pro if you're looking to test out Clients#2017-02-1521:02nottmeyyou only need to do this for the bin/console command though, the transactor can run on the newest version#2017-02-1522:08baptiste-from-parisThx guys#2017-02-1522:34timgilbert@dazld: you might start with https://github.com/Datomic/day-of-datomic. I'm not aware of any other tutorial-type materials. I also found the videos at http://www.datomic.com/videos.html to be super helpful when I was just learning datomic.#2017-02-1522:35timgilbertAlso these ones: http://www.datomic.com/training.html#2017-02-1605:32kardanI know there is a feature request for "Google Cloud Datastore” support. I guess now when Spanner became public this changes things a bit on how one could run Datomic in the Google cloud. Anyone know if Spanner would work out of the box as a SQL backend? 
I admit to not having thought enough about this, but as I said, I’m a little curious.#2017-02-1608:08pesterhazyI think spanner doesn't support INSERT statements#2017-02-1608:09pesterhazyFrom what I've read on hacker news, may not be accurate#2017-02-1612:50caspercI am just wondering, given that :db/retract takes [:db/retract entity-id attribute value], what happens when I retract a value that is not the current value of the attribute on the entity?#2017-02-1612:51lucascsno-op#2017-02-1612:52lucascsit asserts that that fact is not valid#2017-02-1612:52lucascssince it wasn't, nothing happens#2017-02-1613:04nicolaHi, i’m a newbie to datomic - could you explain the current state of the planner & statistics in datomic?#2017-02-1613:36stuartsierra@nicola Not sure, what exactly do you mean by "planner & statistics"?#2017-02-1613:36nicolaI mean, is there a planner in datomic or not yet 🙂#2017-02-1613:37nicolai’ve seen “most selective clause” in docs - which means - i’m the planner as developer?#2017-02-1613:38stuartsierra@nicola That is correct. Datomic datalog queries execute clauses in the order they are written. To ensure your query performs well, you should order the clauses so as to keep the "working set" of results as small as possible.#2017-02-1613:38nicolaBut what if the effective plan depends on data distribution?#2017-02-1613:39stuartsierraThe datalog query engine has some optimizations built-in, such as range queries and predicates, but it does not attempt to analyze the distribution of your data.#2017-02-1613:40nicolathis means if my data is not the same shape - some query runs will be fast and some extremely slow?#2017-02-1613:40baptiste-from-paris@nottmey working indeed for 0.9.5372 with the free version#2017-02-1613:41stuartsierra@nicola It means that you are responsible for monitoring and optimizing the performance of your queries based on your data.
Future releases of Datomic may be bundled with tools to assist in this analysis, but they are not part of the current release.#2017-02-1613:42nicola@stuartsierra ok, thanks#2017-02-1613:45nicolaThere are two edges of this problem - black magic of planners or manual optimisation - i hope clojurians will come up with something interesting in this area!#2017-02-1613:46stuartsierraYes, the query-order execution in Datomic’s datalog engine is by design: query behavior is completely deterministic.#2017-02-1613:47nicolai.e. - now it’s isomorphic to the execution plan, maybe build the next, more declarative level of abstraction on top of it#2017-02-1616:28karol.adamiechow can one extract a pull pattern outside of a query?
(d/q '[:find (pull ?tx [:db/txInstant]) (pull ?e order-pattern) where order-pattern is e.g. [*] ?#2017-02-1616:29karol.adamieci have some more-or-less hairy pull expressions that i would like to fold into one, and reuse, but it seems to escape me 😕#2017-02-1616:39misha@karol.adamiec what do you want to achieve with this query? (I am having a hard time understanding what it'd actually do)#2017-02-1616:40karol.adamieci want to def a pull pattern like [*] outside of the query#2017-02-1616:41mishawhy do you even use pull inside a query?#2017-02-1616:42karol.adamiec(d/q '[:find (pull ?e [*]) i want to be (d/q '[:find (pull ?e my-def)#2017-02-1616:42karol.adamiecwell i need to tell it to pull stuff that is not a component#2017-02-1616:44karol.adamiecof course IRL the [*] pattern is more complicated 🙂#2017-02-1616:45mishawhy not:?
(let [ids (d/q [:find [?e ...] ...])
      entities (d/pull-many db ppattern ids)])
#2017-02-1616:45karol.adamiecwell that definitely escapes the quoting issue#2017-02-1616:45misha(and map/reduce afterwards, if you need interleave/group/etc.)#2017-02-1616:46mishaanyway, did you try (pull ?e ?my-def)?#2017-02-1616:47karol.adamiechmm, no, that has not occured to me. in that case ?my-def is argument to query that must also be referenced in the :in clause right?#2017-02-1616:48mishayes#2017-02-1616:49karol.adamiec:db.error/invalid-pull Invalid pull expression (pull ?e ?pattern)#2017-02-1616:49karol.adamiecseems not to work#2017-02-1616:51karol.adamiecfull q:
(d/q '[:find (pull ?tx [:db/txInstant]) (pull ?e ?pattern)
:in $ ?pattern
:where [?e :order/number _ ?tx]]
db
[*])
#2017-02-1616:52mishaI'd like to hear from someone "official" pros/cons of pull within query, compared to (->> q pull-many (map/reduce))#2017-02-1616:52karol.adamieci have none to offer, maybe my style of writing queries is a takeover from REST endpoint…. 🙂#2017-02-1616:54mishayou are still doing both on the client, there is no shame in doing grouping outside the query - it's the same machine, and no extra round trips to "db" or leveraging external CPU power#2017-02-1616:55karol.adamiecon one hand yes, on the other hand we move from declarative query into code territory 🙂#2017-02-1616:57mishasomething like
(let [my-pp '[*]]
  (->> db
       (d/q '[:find ?i ?e
              :where
              [?e :order/number _ ?tx]
              [?tx :db/txInstant ?i]])
       (map (fn [[t id]]
              [t (d/pull db my-pp id)]))))
#2017-02-1617:00mishawhat does declarative actually do for you in this case?#2017-02-1617:01karol.adamieci see how that would work, but still would like to make the unquoting work. the query is a ‘[], there must be a way to unroll a datastructure inside somehow using ~#2017-02-1617:02mishahttps://clojure.org/reference/reader#syntax-quote#2017-02-1617:04mishayou might not need to specify pp in :in then (can't test it now, don't have datomic nearby)#2017-02-1617:05mishathis might be useful for you too:
As people ponder experimenting with ahead-of-time planners, I'll remind you that:
'[:find ?desc
:in $ %
:where
[?root :num 100]
[descendant ?root ?desc]
[?desc :num 4]]
is just sugar for:
'{:find [?desc]
:in [$ %]
:where
[[?root :num 100]
[descendant ?root ?desc]
[?desc :num 4]]}
#2017-02-1617:08karol.adamieclooks good, thx. had no luck with that today. I am missing some fundamental thing here…#2017-02-1617:14mishahttp://docs.datomic.com/query.html#sec-4-2
Query
find-elem = (variable | pull-expr | aggregate)
pull-expr = ['pull' variable pattern]
pattern = (input-name | pattern-data-literal)
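Per that grammar, a pattern can be an input-name rather than a literal. In Datomic releases that added support for binding pull patterns as query inputs (later than the version failing above), the pattern is bound to a plain symbol, without the ? prefix; a sketch, assuming such a version:

```clojure
;; Sketch: pull pattern supplied as a query input. Note the plain
;; symbol `pattern` in both :find and :in (not `?pattern`).
(d/q '[:find (pull ?tx [:db/txInstant]) (pull ?e pattern)
       :in $ pattern
       :where [?e :order/number _ ?tx]]
     db
     '[*])
```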
#2017-02-1620:18timgilbert@dazld: forgot to mention another tutorial resource: http://www.learndatalogtoday.org/#2017-02-1620:20dazldthanks tim#2017-02-1620:20dazldthat site is down though...?#2017-02-1620:20dazldnevermind! it's back :)#2017-02-1620:21timgilbertSweet. I haven't actually looked at it much, but I've heard people mention it and tutorials are pretty thin on the ground#2017-02-1622:09gardnervickersWe’re currently trying to stand up a Datomic transactor in the cn-north-1 AWS region. The DynamoDB endpoint for that region is . I understand the transactor.properties file allows for setting an override endpoint for things like ddb-local. Doing that won’t allow me to set an aws-dynamodb-region, and removing the region won’t let me use the ddb protocol.#2017-02-1622:09gardnervickersAre there any workarounds for this? ddb-local used against real Dynamo causes the transactor to crash.#2017-02-1622:17jdkealyis it possible to get an entity's created time from an entity itself ( not from a query )? e.g. (:tx-time entity) or something like that?#2017-02-1623:24pesterhazy@jdkealy I don't think it's possible using d/entity or d/pull#2017-02-1623:57jdkealycool thanks#2017-02-1710:30raymcdermottlooking to parse / edit datomic schemas .. does anybody know if there has been some previous work on this? ( maybe my google-fu is off but web search has not helped! )#2017-02-1712:06nottmey@raymcdermott please specify, do you want to alter your schema? parsing the edn schema format? what’s the case?#2017-02-1712:08raymcdermott@nottmey parsing the edn schema - use case is to make an editor that will assist the generation of correct datomic schemas#2017-02-1712:13nottmeySince the schema that you are transacting is essentially in a [{…} …] edn format, you can use standard edn tools.
If your editor is written in ClojureScript, you can for example use https://clojuredocs.org/clojure.edn/read-string for parsing and https://clojuredocs.org/clojure.pprint/pprint for displaying the schema.#2017-02-1712:15nottmeyThere may already be specs (https://clojure.org/about/spec) for validation, or you write your own.#2017-02-1713:02stijn@raymcdermott Something like https://github.com/shaunxcode/datomicism ? It is apparently written in Coffeescript though...#2017-02-1713:20raymcdermott@nottmey I know you're trying to be helpful and I know the tech you describe - I wanted to know if there actually are specs or if there is a BNF for use in CLJ/S. Don't know is also a good answer ;-)#2017-02-1713:25raymcdermott@stijn I don't like that editor but yes the basic material to parse the BNF of the schema (not just the EDN) is what I was looking for before I start working on it ... happy to reinvent the wheel or at least put a new type of tire on it but wondered if I also have to reinvent the rim, hub and spokes too#2017-02-1713:40nottmey@raymcdermott well, at least we now know what you are searching for. 🙂#2017-02-1713:41raymcdermottLOL 😉#2017-02-1714:03misha@raymcdermott I think vase has to have something in it:
https://github.com/cognitect-labs/vase/blob/master/src/com/cognitect/vase.clj#L26#2017-02-1722:14uwoI’ve deployed my peer and transactor, however when my peer attempts to connect to the transactor I get this error. Any ideas?
org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.UnknownHostException: datomic: Temporary failure in name resolution: datomic:4335" [90067-171]
#2017-02-1722:37uwoI think i’ve resolved this#2017-02-1800:30weiare there any idiomatic ways to represent an ordered list in datomic? more specifically, a queue#2017-02-1800:33pesterhazyeither by adding a "position" attribute, or by using a linked list#2017-02-1801:26misha@pesterhazy linked list? like here?
http://jmhofer.johoop.de/datomic/2012/08/18/linked-lists-in-datomic.html
[{:db/id #db/id[:db.part/db], :db/ident :content/name, ...
{:db/id #db/id[:db.part/db], :db/ident :linkedList/head, ...
{:db/id #db/id[:db.part/db], :db/ident :linkedList/tail, ...
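pesterhazy's other suggestion, a "position" attribute, can be sketched as follows; the attribute names are hypothetical, using Datomic's newer schema map shorthand:

```clojure
;; A sketch of the "position attribute" alternative (hypothetical
;; attribute names). Each item records its own position explicitly.
[{:db/ident       :queue/items
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/many}
 {:db/ident       :item/position
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one}]

;; Reading the queue back in order is then a client-side sort:
(sort-by :item/position (:queue/items (d/entity db queue-eid)))
```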
#2017-02-1802:06weithere’s also this, evaluating it now https://github.com/dwhjames/datomic-linklist#2017-02-1802:06weihttps://github.com/dwhjames/datomic-linklist/blob/master/doc/api.md#2017-02-1802:16mishaI keep indexes in a list in a separate attribute.
{:x/ys #{1 2 3}
:x/ys-order #{{:o/id 1 :o/idx 0}
{:o/id 2 :o/idx 2}
{:o/id 3 :o/idx 1}}}
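Restoring the order from that shape is a small join on read; a sketch in plain Clojure, assuming exactly the structure shown above:

```clojure
;; Build an id->idx map from the order entities, then sort the set by it.
(defn in-order [{:keys [x/ys x/ys-order]}]
  (let [idx (into {} (map (juxt :o/id :o/idx)) ys-order)]
    (sort-by idx ys)))

(in-order {:x/ys #{1 2 3}
           :x/ys-order #{{:o/id 1 :o/idx 0}
                         {:o/id 2 :o/idx 2}
                         {:o/id 3 :o/idx 1}}})
;; => (1 3 2)
```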
#2017-02-1802:18misha(ys in my case are not components of x, and can have multiple "parent" collections)#2017-02-1802:18weionly annoying thing is popping off the queue requires reshuffling all idxs#2017-02-1802:19mishayou can just grow idxs and pop the lowest one#2017-02-1802:21mishabut for queues linked list might be more suitable.
I need to be able to swap/shuffle elements, so this :x/ys-order works for me#2017-02-1802:22weimakes sense#2017-02-1802:22mishait also helps with recursive pull patterns, if you happen to have parent/child of the same "type".#2017-02-1802:23mishapreviously I had a wrapper object instead, so it looked like :x -> :w -> :y#2017-02-1802:23mishapull patterns and queries were nightmare unpleasant#2017-02-1802:28weithat’s a good consideration. wrappers do make queries more annoying#2017-02-1802:29weialso thought about serializing to EDN a vector of uuids. makes some things easier, but then it’s not queryable#2017-02-1802:30mishatrue, + extra special step#2017-02-1809:56pesterhazyalso storing blobs is not efficient in Datomic#2017-02-1809:57pesterhazyonce they're getting a bit bigger#2017-02-1908:59ezmiller77anyone know why trying to delete a database like so (with the client api): (delete-database "datomic:) might yield the following error:
> ClassCastException [trace missing]#2017-02-1909:19ezmiller77Figured out that the syntax for the client is different, of course. So for list-databases it looks like this:
(pprint (<!! (list-databases {:account-id client/PRO_ACCOUNT, :secret "datemo", :region "none", :endpoint "localhost:8998", :service "peer-server", :access-key "datemo"})))#2017-02-1909:19ezmiller77The map is like what you provide to connect.#2017-02-1909:32ezmiller77But still can’t get the delete-database command to work using this syntax, and providing in addition a :db-name symbol.#2017-02-1909:33ezmiller77{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:datomic.client/http-error {:cause "Invalid Op"}}
#2017-02-1909:34ezmiller77http-error made me think that maybe the peer-server wasn’t running, but it is. then, though, i wondered if it doesn’t make sense in the first place to delete a database through a peer-server connection since the peer-server start-up syntax seems to require you to specify a particular db by name.#2017-02-1909:35ezmiller77i.e. you’d be deleting the database to which the peer-server is connected. logically, it seems that that should invalidate the peer server connection…#2017-02-2014:09marshall@ezmiller77 Peerserver can’t delete or create databases. Since a single peerserver could be connected to multiple transactors/backend storages at once the semantics aren’t clear what ‘create’ would mean - which backend should it use for example?#2017-02-2015:46ezmiller77@marshall thanks, I was starting to suspect that that was the case.#2017-02-2015:49ezmiller77Question, though, the client api has these functions available in datomic.client.admin. Why would these methods exist if they can’t be used. Is there another way t ouse the client library other than in combination with the peer-server?#2017-02-2016:10marshallNot at present#2017-02-2017:04nottmey“composite keys” seems to be a heavily discussed topic, I read the threads from 2015-16 on how to solve individual issues with custom implementation.
Whats the current status on that? Are they considered or even possible as a feature? I would love to see “composite key upsert” and “composite lookup refs”.#2017-02-2018:26pesterhazyyou could make your own composite key attribute#2017-02-2018:27pesterhazylookup ref: [:my/key (pr-str [:foo :bar])]#2017-02-2018:27pesterhazyyou'd have to make sure you always update the composite key when you update the individual attributes though#2017-02-2018:29lucascsplus the storage requirements for the string can be way higher than each key's#2017-02-2019:05taylor.sandoI am trying to set the datomic.txTimeoutMsec property. Can you directly set it before you create the connection using, (System/setProperty "datomic.txTimeoutMsec" "1") I want to set it really low so that it will cause a timeout, but it doesn't seem to be changing the transaction timeout time.#2017-02-2019:39nottmey@pesterhazy yea, currently I will need to funnel all composite key parts through one self-chosen encoded additional key attribute, with tx-fns and so on.#2017-02-2019:41nottmey@lucascs what do you mean?#2017-02-2019:49lucascsnottmey: if your composite keys are two longs, you can fit them in 4 bytes. 
(pr-str [a b]) however can be a string with 30 bytes, for example#2017-02-2019:49lucascsusually that's not a problem, thou#2017-02-2019:50nottmeyahh I see, yea it is indeed overhead.#2017-02-2020:07favila@taylor.sando I think the datomic.txTimeoutMsec value is captured at static initialization time, so I don't think you can change it other than at the command line and have datomic see the new value.#2017-02-2020:08favila@taylor.sando however, you can achieve the same effect with (deref (d/transact-async conn [,,,]) 1 nil) and check for nill#2017-02-2020:09favila@taylor.sando Or write a helper fn to deref+throw exception for you#2017-02-2109:54dazldhey, any suggestions for a node client for datomic..?#2017-02-2112:40dazldnevermind, was a silly idea#2017-02-2117:57devthpull queries can't invoke db functions, correct?#2017-02-2118:05miltCorrect#2017-02-2118:05devththanks. seems like it'd be useful feature.#2017-02-2118:09miltIt would be, the syntax would be a bit tricky, given db-fns are identified by keyword (note that they used symbols for the built-in expressions like limit, etc)#2017-02-2118:11devthah, right. 
i was only thinking about built-in fns.#2017-02-2118:15milt@dazld I might be mistaken, but I believe the Datomic Client Protocol is going to be documented and available before too long, so a node client could totally be a thing.#2017-02-2118:17dazldthanks @milt - we had a little datomic hackathon this morning, and some of the team didn’t know clojure#2017-02-2118:17dazldthat was the background to the question#2017-02-2118:17miltah, got it#2017-02-2200:01bbloomi’m curious how folks are handling joining/merging data from systems that aren’t quite as, ahem, time-friendly, as datomic#2017-02-2200:03bbloomgraphql endpoints for example tend to just give you some inconsistent interleaving of whatever backend services happen to get queried at the time - which seems like the best you can do for distributed systems w/o shared clocks (like the transaction id number) - and it also seems like you’re just out of luck for merging db history in many cases#2017-02-2200:03bbloomi realize i’m rambling a bit, but wondering if any of the folks here have spent time thinking about this#2017-02-2209:49nottmeyNow I’m curious how you would do it even with only datomic databases/transactors. Using the asOf filter with the request time for each? This still leaves you with inconsistencies, when the data is not transacted atomically over both databases/systems. But how does datomic help with this? 
It looks like you still need the same techniques you mentioned (whether you have datomic or not).#2017-02-2211:27pesterhazy@bbloom, when I encounter such a problem, I stare it in a face and then walk away#2017-02-2211:27pesterhazy"ignoring the problem" usually works#2017-02-2214:32stijnis there a way to convert an Entity to a map recursively, but only include the data that's in the entity cache?#2017-02-2214:32stijn(other than using the pull api)#2017-02-2214:48pesterhazy(into {} entity)#2017-02-2214:48pesterhazyoh well that's not recursive but you can extend that pretty easily#2017-02-2214:49pesterhazyif you need a quick and dirty way, (-> entity pr-str read-string) may work 🙂#2017-02-2214:49stijn🙂#2017-02-2214:49rauh@stijn In reflection i can see the (.cache ..) method#2017-02-2214:51stijn@rauh: that's exactly what I would need!#2017-02-2214:52stijnit is probably not a public api then#2017-02-2214:53rauhWell it's not "java-private" 🙂#2017-02-2219:08bbloomHaving re-read some stuff about transaction processing and database isolation, I think I can better formulate my question from above ^^ but also formulate my il-conceived desires#2017-02-2219:09bbloomDatomic’s entity API provides snapshot isolation, which is awesome. However, most GraphQL resolvers provide repeatable read committed isolation in that it lets you read from disparate backend services and memoizes all of the results.#2017-02-2219:10bbloomI almost want an entity API that offers that behavior inherently, such that i can effectively join up data from multiple sources with the nice object-oriented-ish navigatable entity graph#2017-02-2219:10bbloomSo my question then is: Has anybody here tried anything like that? Curious to see solutions.#2017-02-2219:25ddellacostaapologies if this is a FAQ but is there more in-depth documentation for how indexes work in Datomic past what is here? 
http://docs.datomic.com/indexes.html#2017-02-2219:25ddellacostaI’d also love to have something like explain for PostgreSQL for datomic queries, but guessing that’s just a dream#2017-02-2219:27favila@ddellacosta query explain: https://github.com/dwhjames/datomic-q-explain#2017-02-2219:27favila@ddellacosta datomic internals: http://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2017-02-2219:27ddellacostaI saw that but it’s kinda old, and seems like one dude’s conception of what is actually going on in a query…is it actually a good thing?#2017-02-2219:28ddellacostawill poke at the second one, thanks#2017-02-2219:29ddellacostathat unofficial guide already seems better than the official docs#2017-02-2219:29favilaq-explain is a very well educated guess. It doesn't always match up because of bugs in datomic datalog, but there is very little query optimization that goes on#2017-02-2219:29ddellacostabut why they don’t include this kind of info in the official docs makes no sense#2017-02-2219:30ddellacosta@favila I see…so some of this arises naturally based on how datalog works, is that right? That is helpful to understand if so#2017-02-2219:30ddellacostain the sense of, queries can only really work in certain ways#2017-02-2219:32favila@ddellacosta We know for a fact that datomic does not reorder where clauses, does not have any index statistics, etc. 
So mostly what the query plan can do just follows what indexes are available#2017-02-2219:32favilaI think he made a good effort to discern which datomic actually does#2017-02-2219:33ddellacostagotcha, thanks for the context…I’m assuming you know that because of how datomic datalog works (insofar as it is documented publicly) and based on other statements Cognitect folks have made…?#2017-02-2219:33favilaand I remember he got the query grammar docs corrected in a few places during is work, so it's not a half-assed effort#2017-02-2219:33favila(that said, I don't use it)#2017-02-2219:34ddellacostayeah to be clear I didn’t think it was half-assed or anything, it’s just barely documented and old-ish, and figuring out its worth is, in and of itself, a bit of an effort#2017-02-2219:34ddellacostabut will check it out further#2017-02-2219:34favilagoogle groups, talks, etc#2017-02-2219:34favilasame sources as for the internals guide#2017-02-2219:34ddellacostak#2017-02-2219:34ddellacostathanks @favila, all of this is super helpful, much appreciated#2017-02-2219:35favilaq-explain looks like it is useful to give someone who is unfamiliar with datalog some idea of how a query will perform#2017-02-2219:35ddellacostawell, it could be helpful to me then because I feel like I’m cargo culting it much of the time, even if I can generally make it do what I want at this point#2017-02-2219:35favilaor who is unfamiliar with the data in their db (e.g. 
selectivity of attributes, cardinality of values, etc)#2017-02-2219:35ddellacostabut performance is still a bit of a mystery to me at times#2017-02-2219:36ddellacostacompared to how I used to construct SQL queries in PostgreSQL#2017-02-2219:36ddellacostaanyways, I digress#2017-02-2219:36ddellacostathanks again#2017-02-2219:36favilasql optimizers are generally much, much more complicated and aggressive than datomic#2017-02-2219:36ddellacostagotcha#2017-02-2219:37ddellacostaeven that I didn’t realize#2017-02-2219:37favilaglad to help#2017-02-2310:32val_waeselynck@marshall just wanted to make sure you'd seen this question: https://groups.google.com/forum/#!topic/datomic/d74excL8JaI#2017-02-2310:33alex-glvHello, datomic-ers!#2017-02-2310:36alex-glvI have a question. Fooling around with Datomic tx-es and trying to translate SQLish concepts.. I have a :job/tasks , that is cardinality - many, references to “task” entity. When my front-end updates, I want to completely wipe current references and update with new ones. (if I just transact new references they add up, instead of replace). Any suggestion how to do it? I tried :db/retract, but don’t want to remove existing references knowing what they are, basically, I don’t care. Thanks!#2017-02-2310:37pesterhazyyeah that's a common pattern but I don't think there's a consensus answer#2017-02-2310:40pesterhazythings you can do:
- remember previous values and retract them (easy, may lead to race conditions)
- create a transactor fn that replaces all values for a cardinality/many attribute atomically (safer)#2017-02-2310:41pesterhazy- store the set of tasks in a separate entity (task-set) and store a ref to the task-set in the job, then simply replace the ref and recreate a taskset every time#2017-02-2310:41pesterhazyif you go with 2 (seems cleanest), you'll need to do a diff as you can't retract and add the same fact in a single transaction#2017-02-2310:43val_waeselynck@alex-glv related: https://stackoverflow.com/questions/42112557/datomic-schema-for-a-to-many-relationship-with-a-reset-operation#2017-02-2310:44val_waeselynck(will answer it when I have better explored all the options)#2017-02-2310:45alex-glvDidn’t realise it wasn’t very trivial! Will read up. @pesterhazy didn’t know about transactor fns, thanks.#2017-02-2310:48pesterhazy@val_waeselynck could you make a gist out of that? Slack is so forgetful#2017-02-2310:54val_waeselynck@pesterhazy sure https://gist.github.com/vvvvalvalval/4f1736fab9b4ab0e3e03b805ad35a78c#2017-02-2310:56robert-stuttaford@val_waeselynck @pesterhazy i took a simpler approach#2017-02-2310:58pesterhazyif you don't wrap this in a transactor fn, it can lead to race conditions though, no?#2017-02-2310:58robert-stuttafordthis works with any values, but assumes you’re providing ids for refs#2017-02-2310:59robert-stuttafordyes#2017-02-2310:59pesterhazywhich is okay IMO in many circumstances but not all#2017-02-2310:59robert-stuttafordindeed#2017-02-2310:59val_waeselynck@pesterhazy race conditions don't matter much with "reset" semantics#2017-02-2311:00val_waeselynckoh well I see what you mean#2017-02-2311:00val_waeselynckalright#2017-02-2311:01pesterhazyuser A could set jobs to 1 2 and user B could set it to 3 4, but it could end up 1 2 3#2017-02-2311:01pesterhazysomething like that anyway...#2017-02-2311:02val_waeselynckyeah totally, what I mean is when you implement reset semantics you usually expect that users won't access the resource
concurrently 🙂#2017-02-2311:02pesterhazywhen you add/remove tags to a product, that may not matter really#2017-02-2311:03alex-glvactually atomic retract and add seems even better, more network calls but oh well#2017-02-2311:04val_waeselynck@alex-glv yeah if your architecture is compatible with that, it's better to do it this way.#2017-02-2311:04alex-glvawesome, thanks!#2017-02-2311:05rauh@alex-glv Here is another version: #2017-02-2322:36jdkealyis it possible to save a hashmap in datomic ?#2017-02-2322:37devthif you serialize it#2017-02-2322:37devthor if all the keys in the map already existed in the schema#2017-02-2322:37jdkealyso save it as text ?#2017-02-2322:38devthyeah, (pr-str your-map)#2017-02-2322:39jdkealyand what if all the keys in the map existed in the schema#2017-02-2322:39jdkealywould it have to be a component ?#2017-02-2322:39devththen you could transact it as-is#2017-02-2322:39jdkealyok got it#2017-02-2322:40devthcomponent only applies if it's nested, but even then i think it'd be optional#2017-02-2322:40devth(i'm also curious if there are ideas for better ways of storing arbitrary data structures)#2017-02-2322:40jdkealymy data looks like :user/permissions {:can_edit_galleries {:global? true}, :can_create_galleries {:photo_credit "jdkealy"}}#2017-02-2322:41devthif your structure is well-known ahead of time i'd represent it with a schema#2017-02-2322:41jdkealywhat i have been doing is mapping that and transforming into :user/permissions [{:permission/type :can_edit_galleries, :permission/is_global? true}]#2017-02-2322:42jdkealyright yeah that makes sense... for all its failings, i did enjoy this feature of mongo#2017-02-2322:44jdkealyup until now.... i had been saving permissions as keywords in cardinality many, and now client needs metadata on the permissions, so i gotta make them refs...
the nested refs are starting to make my head spin#2017-02-2322:45devthi assume you've already read Nested maps in transactions in http://docs.datomic.com/transactions.html#2017-02-2322:46devthdo the work upfront to define the schema, then you can transact deeply nested maps#2017-02-2414:50tengWe recognised a strange behaviour of the as-of function when sending big values of t. We had a database with t-basis 1019, and by mistake sent a t that was an id, e.g. (d/as-of db 277076930200565). It returned this for different values of t:
t = 277076930200565 => asOfT=1013
t = 27707693020056 => asOfT=1319413953432
t = 2770769302005 => asOfT=-1627277209099
t = 277076930200 => asOfT=277076930200
(basisT was 1019 in all cases).
We are currently running datomic-pro-0.9.5372.#2017-02-2415:49jonpitherDatomic case-study.. https://twitter.com/juxtpro/status/835039132812455936#2017-02-2416:56favila@teng A transaction id (t) is mechanically derived from a long by extracting the bottom 42 bits#2017-02-2416:56favila@teng (map (juxt identity d/part d/tx->t)
[277076930200565 27707693020056 2770769302005 277076930200])
=>
([277076930200565 63 1013]
[27707693020056 6 1319413953432]
[2770769302005 0 -1627277209099]
[277076930200 0 277076930200])
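(Editorial aside: the extraction favila describes can be sketched in pure Clojure. tx->t* and part* are hypothetical stand-ins for d/part and d/tx->t, not Datomic API names; they keep the bottom 42 bits and treat bit 41 as a sign bit.)

```clojure
;; Sketch: the bottom 42 bits of an entity id are the t; the high bits are the
;; partition. Bit 41 of the extracted value acts as a sign bit (hence the flips
;; seen above). tx->t* and part* are made-up names for illustration.
(defn tx->t* [^long eid]
  (let [low42 (bit-and eid 0x3FFFFFFFFFF)]     ; keep the bottom 42 bits
    (if (bit-test low42 41)                    ; bit 41 promoted to sign bit
      (- low42 (bit-shift-left 1 42))
      low42)))

(defn part* [^long eid]
  (unsigned-bit-shift-right eid 42))           ; partition lives in the high bits

(map (juxt part* tx->t*) [277076930200565 27707693020056 2770769302005])
;; → ([63 1013] [6 1319413953432] [0 -1627277209099])
```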
#2017-02-2416:57favilaThe sign flip is the 42nd bit being promoted to the sign bit. (seems like a bug to me, because it means only 41 bits are actually useful)#2017-02-2420:15stijn> Datomic supports 2.0.X, 2.1.X, and 3.0.X versions of Cassandra.#2017-02-2420:15stijnDoes that mean that it doesn't support 3.10 or are the docs out of date?#2017-02-2420:20stijnOK I see, they have changed their versioning schemes. This is kind of confusing.#2017-02-2421:00csmI’m seeing an error {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :datomic.client/http-error “Throttled”} connecting the clojure client to a peer server, but see zero read/write throttles in DynamoDB. Does the peer server throttle requests on its own?#2017-02-2423:08csmis there a possible issue using datomic against DynamoDB in a private subnet; that is, going through a NAT gateway to reach DynamoDB?#2017-02-2502:52shaunxcodeI am trying to figure out if it is a bug or feature that inside of a transaction function my vector is being turned into an Arrays$ArrayList#2017-02-2502:52shaunxcodeit is definitely problematic as it makes testing a txfn directly give different results then when it is called inside of a transaction#2017-02-2502:52shaunxcodee.g. (sequential? x) fails for said ArrayList#2017-02-2502:53shaunxcodeand (or (sequential? val) (instance? java.util.AbstractList val))#2017-02-2502:53shaunxcodefeels... gross#2017-02-2502:55shaunxcodeto be fair I am on a slightly older version of datomic (0.9.5372)#2017-02-2502:55shaunxcodeI will test with latest and provide a minimal test case, was just wondering if this is a known thing#2017-02-2622:34SoV4How would I query for elements that were transacted in the last 24 hours?#2017-02-2623:23SoV4Aha! http://docs.datomic.com/log.html ()#2017-02-2623:23SoV4=)*#2017-02-2623:37SoV4Okay, Got a question for the experts!#2017-02-2623:37SoV4So, I have a bunch of things (entities). Some of them are blurbs. Some are ratings. Some are comments. 
I do a time query on $log and get all the entities, but all I want to know is which blurbs have occurred between time1 & time2.#2017-02-2623:37SoV4So my question is, does it make sense to:#2017-02-2623:38SoV41) time query on $log, because fast.
2) sequentially iterate-query through result set, find out which eids are blurbs
3) use those matching eids to get a result set for later.#2017-02-2623:39SoV4"Last 24 hours of blurbs" is pretty much my mark / aim here, so I think this approach is good, but I'm wondering what the best way is to do step 2 without duplicated efforts#2017-02-2701:55beppu> Datomic Pro is issued with a perpetual license, with ongoing maintenance fees of $5,000 per year per system.
(source: http://www.datomic.com/get-datomic.html)
What is the definition of "system" in this context?#2017-02-2705:42robert-stuttaford@sova
(let [db (d/db conn)
blurb-ids (d/q '[:find [?blurb ...] :in $ :where
[?blurb :blurb/attr-only-asserted-when-created]]
(d/since db #inst "24 hours ago"))]
(d/pull-many db '[*] blurb-ids))
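(Note: #inst "24 hours ago" above is illustrative shorthand, not a literal the reader accepts; in real code you would compute the instant. A minimal sketch — hours-ago is a hypothetical helper, not part of the Datomic API:)

```clojure
;; d/since accepts a java.util.Date (or a t); compute "24 hours ago" explicitly.
;; hours-ago is a made-up helper name for illustration.
(defn hours-ago ^java.util.Date [h]
  (java.util.Date. (- (System/currentTimeMillis) (* h 60 60 1000))))

;; usage sketch: (d/since db (hours-ago 24))
```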
#2017-02-2705:43robert-stuttafordno need to use d/log at all. just query a database that only has datoms from the period you care about.#2017-02-2705:47robert-stuttafordyou don’t need to use datalog either:
(->> :blurb/attr-only-asserted-when-created
(d/datoms (d/since db #inst "24 hours ago") :aevt)
seq
(map :e)
(d/pull-many db '[*]))
#2017-02-2705:48robert-stuttafordnote that we only use the time-constrained database for the initial seek; the ‘now’ db for the rest, because you may want to query for things asserted in the past (e.g. relationships to previously existing entities, like author etc)#2017-02-2708:32thedavidmeisterpotentially dumb question, but how would i construct a :where so that i can pass a sequence of potential values something can match?#2017-02-2708:32thedavidmeistere.g. a list of entity ids#2017-02-2708:35misha@thedavidmeister
(q '[:find [?e ...] :in $ [?x ...] :where [?e :foo/bar ?x]]
db xs)
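(For reference, [?x ...] is one of four :in binding forms in the query grammar; a sketch with placeholder attributes — :foo/bar and :foo/baz are made up:)

```clojure
;; The four :in binding forms (scalar, tuple, collection, relation) as data.
;; Attribute names are placeholders for illustration.
(def scalar-binding     '[:find ?e :in $ ?x       :where [?e :foo/bar ?x]])
(def tuple-binding      '[:find ?e :in $ [?x ?y]  :where [?e :foo/bar ?x] [?e :foo/baz ?y]])
(def collection-binding '[:find ?e :in $ [?x ...] :where [?e :foo/bar ?x]])   ; any of xs
(def relation-binding   '[:find ?e :in $ [[?x ?y]] :where [?e :foo/bar ?x] [?e :foo/baz ?y]])
```

The collection form is what makes "match any of these entity ids" a one-liner: pass the seq as the query input and each element is unified with ?x in turn.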
#2017-02-2708:36thedavidmeister@misha with the ...?#2017-02-2708:36misha[?x ...] instead of just ?x#2017-02-2708:36thedavidmeisterkk 1 sec, i'll try that out 🙂#2017-02-2708:42thedavidmeistercool#2017-02-2708:42thedavidmeister@misha thanks#2017-02-2708:43mishahttp://docs.datomic.com/query.html#sec-5-7-2#2017-02-2708:43thedavidmeisterso is ... some clojure syntax i haven't learned yet?#2017-02-2708:43mishait is a part of a query grammar#2017-02-2708:44thedavidmeisterah ok, so specific to datomic?#2017-02-2708:45mishaI'd say yes#2017-02-2708:45thedavidmeistercool, makes sense#2017-02-2708:46thedavidmeisternot sure why i couldn't find this before#2017-02-2708:46thedavidmeisterprobably putting the wrong words into google 😛#2017-02-2708:46mishadatomic documentation is a good read#2017-02-2708:49thedavidmeistertbh i've read over it a few times on different occasions, but it sinks in better when i have a real problem to solve#2017-02-2709:12robert-stuttaford@thedavidmeister a slow, careful read of http://docs.datomic.com/query.html will save you a lot of time#2017-02-2714:17marshall@beppu A single production transactor + all associated peers and clients#2017-02-2716:36SoV4@robert-stuttaford Beautiful. Thank you kindly. So the first step asks the query over a specific time-constrained db, and the second uses that info against the Now version to get all the good stuff, if I'm understanding correctly.#2017-02-2717:45robert-stuttafordprecisely @sova#2017-02-2720:32dm3@marshall that still covers several transactors working in an HA setup, right?#2017-02-2720:33marshallYes. single production transactor
You can, of course, also have an HA transactor for that prod system and as many dev/staging/testing instances as you require#2017-02-2722:16jfntnIs it possible with datomic/q to “map” a pull expression over multiple entities in a single query?#2017-02-2722:18favila@jfntn yes: http://docs.datomic.com/query.html#pull-expressions#2017-02-2722:18favilathere is also d/pull-many for use outside queries#2017-02-2722:19favila(I am not sure why this exists vs just d/pull with map. Maybe it can be more efficient?)#2017-02-2822:44dominicmfavila: significantly more efficient ime#2017-02-2722:21favilaAlso not documented for some reason is that the pull find expression can take a database, (pull $ ?e [:pull-expr])#2017-02-2722:25jfntnOk that worked, thank you @favila#2017-02-2722:26jfntnSince you mentioned it, I don't understand the need for the explicit db binding in declarative queries?#2017-02-2722:31favilaif you have more than one database @jfntn#2017-02-2722:34favila@jfntn Example: (d/q '[:find [(pull $b ?e [*]) ...] :in $a $b :where [$a _ :some-attr ?e-id][$b ?e :id-attr ?e-id]] db-a db-b)#2017-02-2808:11tengI can list all the attributes and their type in the database with this query:
(d/q '[:find ?i ?t
:where [?e :db/ident ?i]
[?e :db/valueType ?t]]
(d/db conn))
Is there a entity that I can "join in” to get the types in text instead of a number? I can of course add that as a source with hardcoded values, but if there is an entity for that already, then that’s better.#2017-02-2810:46val_waeselynck@marshall should I expect a memory leak if I hold on to a Log instance and perform repeated calls to .txRange() ?#2017-02-2810:47val_waeselynckin other words: can I rely on the tx ranges being garbage-collected even if I hold on to the Log ?#2017-02-2820:29devthare datomic docs by any chance served from s3? http://docs.datomic.com/ down for me#2017-02-2820:30devthhttp://isitdownorjust.me/docs-datomic-com/ confirms it's down.#2017-02-2820:32stuartsierrayes, it's related to S3 issues.#2017-02-2823:17bmaddyI tried finding the answer to this in the docs and by searching, but had no luck. Why would this query
(take 2 (query '[:find (pull ?id [*]) :where [?id :account/name]]))
give me something like this?
([{:db/id …}] [{:db/id …}])
I would expect something like this:
({:db/id …} {:db/id …})
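(Editorial aside on the shape of the result: the default find spec returns a relation, a set of tuples, so each pull map comes back wrapped in a one-element tuple. A pure-data sketch — relation-result is a made-up sample:)

```clojure
;; What a relation find spec like [:find (pull ?id [*]) ...] returns: tuples.
(def relation-result [[{:db/id 1}] [{:db/id 2}]])

;; Unwrapping each 1-tuple gives the expected shape; the collection find spec
;; [:find [(pull ?id [*]) ...] ...] does this for you.
(map first relation-result)
;; → ({:db/id 1} {:db/id 2})
```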
#2017-02-2823:18bmaddy(Oh, sorry, query there just does a (d/q % (d/db conn)))#2017-02-2823:22favila@bmaddy [:find ?a ?b] returns [[:a1 :b1] [:a2 :b2] ...]#2017-02-2823:23favilaso [:find (pull ...)] is no different#2017-02-2823:23favila(find returns relations by default)#2017-02-2823:23favilayou probably want [:find [(pull ?id [*]) ...] :where ,,,]#2017-02-2823:24favilahttp://docs.datomic.com/query.html#find-specifications#2017-02-2823:24bmaddyYeah, ok. That makes sense.
Hmm, I’ve never seen that syntax, I’ll check it out. Thanks!#2017-02-2823:26bmaddyWorked like a charm! Thanks a ton @favila!#2017-03-0108:02val_waeselynckreleasing Datomock 0.2.0, with support for the Log API: https://github.com/vvvvalvalval/datomock#2017-03-0115:39dm3I’m trying to reuse the rules and failing to produce the most basic working example:
(def rules '[[(doc-is [?x] ?e) [?e :db/doc ?x]]])
(d/q '[:find ?e :in $ % ?x :where [(doc-is ?x ?e)]] db rules "test-doc")
The query compiler throws Unable to resolve symbol: doc-is in this context - what am I doing wrong?#2017-03-0115:51dm3ok, you’re not supposed to put the rule invocation inside square brackets - problem solved!#2017-03-0116:03misha@dm3 paste full solution here, please#2017-03-0116:24dm3(d/q '[:find ?e :in $ % ?x :where (doc-is ?x ?e)] db rules "test-doc")#2017-03-0120:16stijnI have a question connecting datomic console to a production database#2017-03-0120:17stijnWe are running on cassandra and I was trying to forward port 9042 to a local port and have console connect to datomic:#2017-03-0120:17stijnit seems this connection works, but as soon as you select a database Artemis will throw a ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ119007: Cannot connect to server(s). Tried with all available servers.]#2017-03-0120:18donaldballIs there a simple way to restrict a variable in a datalog query to all entities that are (or are not) referenced by any other entities, via any attribute?#2017-03-0120:19stijnHow could I possibly solve this? Which connection is failing? From the transactor to the localhost? Or the other way around?#2017-03-0120:20stijnI assume it's from localhost to the transactor, but how does cassandra store the transactor's address?#2017-03-0120:21stijnis it an IP?
a hostname?#2017-03-0120:22favila@stijn The peer connects to cassandra storage#2017-03-0120:22favilaCassandra storage has written into it two transactor addresses, as provided by your transactor.properties file#2017-03-0120:22favilaThe peer then pulls those and uses them to connect to the transactor#2017-03-0120:22favilaSo you need to make sure those hosts are correctly resolvable from wherever you are#2017-03-0120:22stijnthe ‘host’ and ‘alt-host’ field?#2017-03-0120:22favila(wherever your peer is)#2017-03-0120:23favilayes#2017-03-0120:23stijncool, that should be doable with /etc/hosts 🙂#2017-03-0120:23stijnthanks a lot @favila#2017-03-0120:24marshall@val_waeselynck sorry i missed your message yesterday. Do you have a small repro?#2017-03-0207:08val_waeselynckmarshall: not seeing a leak at all, I just want to know if I can count on that to be the case for future versions - I don't want to be relying on an implementation detail#2017-03-0207:11val_waeselynckMy use case is the implementation of Datomock forked connections, in which forked connections keep a reference to the Log of the original connection.#2017-03-0213:56marshallHrm. I’m actually not sure. It may depend on Clojure’s handling on GC actually#2017-03-0120:24marshallare you seeing a potential leak?#2017-03-0120:27favila@donaldball (d/q '[:find ?e
:where
[?vt :db/ident :db.type/ref]
[?a :db/valueType ?vt]
[_ ?a ?e]])#2017-03-0120:27marshall@donaldball you could put in a clause with the entity ID of your entity of interest in the v position#2017-03-0120:28marshallah. welp, @favila beat me to it 🙂#2017-03-0120:28favila@donaldball or use (into #{} (map :v) (d/seek-datoms :vaet)) to get all referenced entities#2017-03-0120:29favilaGetting the opposite (not-reference entities) is harder#2017-03-0120:29favilamostly because entities don't exist#2017-03-0120:29marshallnot-join with your 3 clauses above ?#2017-03-0120:30marshalloh, sorry, all entities#2017-03-0120:30marshallyeah that’s harder#2017-03-0120:30favilaYou need to refine what you mean by "not referenced entities"#2017-03-0120:30favilabecause theoretically that is any integer#2017-03-0120:31favilaMaybe you mean "any entity which has other assertions on it, but which is not referenced by other assertions"#2017-03-0120:32favilaI think this would do it, but I'm not sure it would run: [:find ?e
:where
[?vt :db/ident :db.type/ref]
[?a :db/valueType ?vt]
(not [_ ?a ?e])
[?e]]#2017-03-0120:34donaldballI suppose I mean “there do not exist any datoms in the current database which have attributes which are references to the datom in question"#2017-03-0120:34favilaattributes? you mean values?#2017-03-0120:35donaldball… yes#2017-03-0120:35donaldballSorry, I haven’t quite gotten the datomic language fully integrated yet 😛#2017-03-0120:43favila[?vt :db/ident :db.type/ref]
[?a :db/valueType ?vt]
(not [_ ?a ?e]) will work, if you are able to bind ?e earlier in the query#2017-03-0120:44favilaIf you want all unreferenced entities, you may not be able to query in a sensible amount of time in a query (you will need a table scan, and the query engine will not allow that)#2017-03-0121:21favila@donaldball OK I think this will do it efficiently: (let [partition-key-by (fn [f]
                        (fn [xf]
                          (let [last (volatile! (Object.))]
                            (fn
                              ([] (xf))
                              ([r] (vreset! last nil) (xf r))
                              ([r x]
                               (let [k (f x)]
                                 (if (= k @last)
                                   r
                                   (do
                                     (vreset! last k)
                                     (xf r k)))))))))]
  (->> (sequence
         (comp
           (partition-key-by :e)
           (remove #(first (d/seek-datoms db :vaet %))))
         (d/datoms db :eavt))
       ;;; just so we don't explode
       (take 10)))
I’m working on onyx-datomic’s log reader, and I recently changed it so that it wouldn’t just call (d/tx-range last-tx nil) with an open end range, since I was a bit worried it might not read the txes lazily. The problem is that I don’t have a good way to ask “what’s the max tx id for the next 10 transactions”. Is this a job for a query?#2017-03-0221:22favilatx-range is supposed to be lazy#2017-03-0221:25favilaI'm trying to think of another way to answer your question that isn't just (->> tx-range (drop 9) first :db-after d/basis-t)#2017-03-0222:28lucasbradstreetI had originally thought so, but it came up in the past https://clojurians-log.clojureverse.org/datomic/2016-10-04.html#2017-03-0222:28lucasbradstreetbkamphaus 19:05:40
#2017-03-0222:35favilahm#2017-03-0222:52favilaI'm pretty sure it's lazy. (tx-range log nil nil) on a large db does not act like it's realizing everything.#2017-03-0222:52favilaI have an alternative though#2017-03-0222:55lucasbradstreetYeah, that’s what I was originally assuming. That was quick for me too.#2017-03-0223:05favila@lucasbradstreet https://gist.github.com/favila/62276cdb479060b782158e808e1113aa#2017-03-0223:05favilaThat is my attempt#2017-03-0223:22favila@lucasbradstreet I improved it, added some more paranoia, dropped the need for partition-key-by#2017-03-0223:23favilaverified it gives same results as tx-range#2017-03-0223:23favila(but I still think tx-range is lazy)#2017-03-0302:04lucasbradstreetThanks @favila! I'll give it a try. #2017-03-0315:27timgilbertSay, can someone tell me why this query is giving me an error Query is referencing unbound variables: #{?prj-prd}?
(d/q '[:find ?prj-prd .
:in $ ?prj ?prd
:where [?prj :project/purchase-list ?li]
[?li :project-line-item/project-product ?prd-prj]
[?prd-prj :project-product/global-product ?prd]]
db project-ref product-ref))
#2017-03-0315:30jgdaveyYour find clause doesn’t match what’s in the where#2017-03-0315:31timgilbertArgh, I see what you mean now, mistyped the variable#2017-03-0315:32timgilbertThanks!#2017-03-0315:32jgdavey👍#2017-03-0315:32timgilbertslaps forehead#2017-03-0317:21lellisHi everyone, is there a recipe for doing a migration from a datomic database to a new datomic database? My schemas have some errors that can not be fixed, such as fulltext true | false, dbType, etc .... Has anyone ever experienced this?#2017-03-0317:25favila@lellis for attribute-at-a-time problems, I make a new attribute, move all assertions from the old attribute to the new attribute, then rename the new attribute to the old name#2017-03-0317:25favilaI don't recopy the db#2017-03-0317:33lellis@favila Nice, but i read something about these "hard" dump and restore. I want to clean my database and my schemas.#2017-03-0317:36favila@lellis When I just want to reproduce the state of a db, I use transact-backup-datoms in this: https://gist.github.com/favila/785070fc35afb71d46c9#2017-03-0317:36favila(written before memory dbs had a log)#2017-03-0317:37favilaBut this only reconstructs the datoms visible from a db value, not all datoms in the db#2017-03-0317:38favilaThe approach would be similar, instead of reading d/datoms grouped by tx, you read tx-log and turn those into transactions#2017-03-0317:38favilaI haven't personally ever done that, but it wouldn't be hard#2017-03-0516:47ezmiller77If I submit a transaction, and then immediately thereafter get the entity id for the entity that I created using d/resolve-tempid; and if I then (in the repl) attempt to pull that entity using d/pull, the value that i get back includes only the :db/id, like so:
{:db/id 17592186045517}
And not the whole set of nested datoms. If i then close out the repl, restart the repl, and reinitiate the db, and then do the pull, then I get the whole thing:
{:db/id 17592186045517,
:arb/value
[{:db/id 17592186045519,
:arb/value [{:db/id 17592186045521, :content/text "Title"}],
:arb/metadata [{:db/id 17592186045520, :metadata/html-tag :h1}]}
{:db/id 17592186045522,
:arb/value [{:db/id 17592186045524, :content/text "Paragraph"}],
:arb/metadata [{:db/id 17592186045523, :metadata/html-tag :p}]}],
:arb/metadata [{:db/id 17592186045518, :metadata/html-tag :div}]}
Can someone clue me into why that might be?#2017-03-0516:48rauh@ezmiller77 Are you using db-after for the db?#2017-03-0516:49ezmiller77@rauh I believe so. I get the entity id after the transaction with this fn:
(defn transact-and-get-id
"Transact tx and return entity id."
[conn tx]
(let [tempid (:db/id tx)
post-tx @(d/transact conn [tx])
db (:db-after post-tx)
entid (d/resolve-tempid db (:tempids post-tx) tempid)]
entid))
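(Editorial aside: a variant that returns the post-transaction db alongside the id avoids ever pulling against a stale db value. post-tx-id is a hypothetical name; with the peer API the map lookup below would be d/resolve-tempid on :db-after.)

```clojure
;; Sketch: return [db-after entid] so callers pull against the post-tx db.
;; The tx-result map stands in for @(d/transact conn [tx]); with the peer API
;; you'd use (d/resolve-tempid db-after tempids tempid) instead of get.
(defn post-tx-id [{:keys [db-after tempids]} tempid]
  [db-after (get tempids tempid)])

;; usage sketch:
;; (let [[db eid] (post-tx-id @(d/transact conn [tx]) (:db/id tx))]
;;   (d/pull db '[*] eid))
```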
#2017-03-0516:50rauhAnd the pull?#2017-03-0516:50ezmiller77Ahh!#2017-03-0516:50rauhThat would explain everything 🙂#2017-03-0516:50ezmiller77That must be it. I’ve saved db before the transaction!#2017-03-0516:51ezmiller77Thanks much. Still very much getting used to thinking datomically.#2017-03-0516:51rauhWell pretty much everything in clojure is immutable, now you have a db that is too! 🙂#2017-03-0516:52rauhI'd probably return [db entid] in your fn.#2017-03-0605:11mx2000Hi, why do I get “All clauses in 'or' must use same set of vars” in a datomic ‘OR’ query?#2017-03-0605:11mx2000I want to make a different query based on a boolean input.#2017-03-0607:44lowl4tencyHi guys#2017-03-0607:45lowl4tencyI'm reading the docs and seeing 3 SQL engines: Psql, mysql and oracle. In my understanding sqlite is missing, so could I configure sqlite as a backend engine for a transactor?#2017-03-0607:46lowl4tencycc @bkamphaus ^#2017-03-0614:15favila@lowl4tency I have done this just for fun, but why not use dev transactor?#2017-03-0617:31wilkerluciohello, is it possible to increase the transaction timeout for a specific transaction on datomic?#2017-03-0617:31wilkerluciowe are having an issue that a particular transaction (that has to be atomic, we can't break it) needs more time to process, we are wondering if it's possible to change the timeout while running this specific tx, without having to change the general configuration, is that possible?#2017-03-0617:33marshall@wilkerlucio Unfortunately the timeout is system-wide#2017-03-0621:08Matt ButlerGetting the following error when querying a datomic db frequently, mapping over d/datoms (lazily I believe)
2017-03-06 20:43:49,580[ISO8601] [clojure-agent-send-off-pool-56] WARN datomic.slf4j - {:message "Caught exception", :pid 1297, :tid 202}
org.hornetq.api.core.HornetQNotConnectedException: HQ119010: Connection is destroyed
at org.hornetq.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:296)
at org.hornetq.core.client.impl.ClientSessionImpl.deleteQueue(ClientSessionImpl.java:365)
at org.hornetq.core.client.impl.ClientSessionImpl.deleteQueue(ClientSessionImpl.java:375)
at org.hornetq.core.client.impl.DelegatingSession.deleteQueue(DelegatingSession.java:326)
at datomic.hornet$delete_queue.invokeStatic(hornet.clj:256)
at datomic.hornet$delete_queue.invoke(hornet.clj:252)
at datomic.connector$create_hornet_notifier$fn__8108$fn__8111.invoke(connector.clj:210)
at datomic.connector$create_hornet_notifier$fn__8108.invoke(connector.clj:206)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invokeStatic(core.clj:657)
at clojure.core$apply.invoke(core.clj:652)
at datomic.error$runonce$fn__48.doInvoke(error.clj:148)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at datomic.connector$create_hornet_notifier$fn__8085$fn__8086$fn__8089$fn__8090.invoke(connector.clj:204)
at datomic.connector$create_hornet_notifier$fn__8085$fn__8086$fn__8089.invoke(connector.clj:192)
at datomic.connector$create_hornet_notifier$fn__8085$fn__8086.invoke(connector.clj:190)
at clojure.core$binding_conveyor_fn$fn__6772.invoke(core.clj:2020)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Code looks something like this, causing roughly 20 datomic queries a second against a single db snapshot.
(let [db (d/db (d/connect uri))]
  (doseq [element (->> (d/datoms db)
                       (map #(.e %))
                       (filter #(some-query db %))
                       (map #(some-other-query db %))
                       (map #(another-query db %)))]
    (try
      (transact-and-http-io element)
      (catch Exception e
        (log-error e)))))
Do i need to look at refactoring my code to reduce the frequency of queries? Happens at different times during processing the datoms.
Thanks 🙂#2017-03-0621:20favilaqueries should not impact this at all#2017-03-0621:20favilaare you creating and destroying connections frequently, maybe inadvertently?#2017-03-0621:21favilaor network problems?#2017-03-0621:27Matt ButlerThis single function call uses the same db snapshot created from a single connection#2017-03-0621:27timgilbertSay, is there a (prev-t) function to get the t value of the transaction immediately before (basis-t), or should I just grab it out of (d/tx-range)?#2017-03-0621:30Matt ButlerAdded db binding for clarity in case it its doing something i dont understand fully#2017-03-0621:45favila@mbutler the d/datoms call is pseudocode right? you are including index and segment etc (I would expect a different error)#2017-03-0621:46Matt Butlersorry, yes. The code works perfectly in localdev#2017-03-0621:47favilaand so this works only sometimes in prod? I am still suspecting a network problem#2017-03-0621:47favilais it any different if you d/connect + d/db only once at startup?#2017-03-0621:47Matt ButlerI would think maybe im accidentally realising the sequence (which is ~700k datoms) and maybe running out of memory, but the jvm doesnt die#2017-03-0621:48Matt ButlerI dont understand, if i d/db once at startup my entire app would use the same snapshot no?#2017-03-0621:48Matt Butler@favila#2017-03-0621:48favilayou say you are running this code many times a seecond?#2017-03-0621:49Matt Butlerno this code runs once, and maps over a seq of datoms once calling datomic queries on the sequence as i go.#2017-03-0621:50favila(-> (d/connect uri) (d/db) (d/datoms :eavt 1000) (->> (take 5))) does that run?#2017-03-0621:50favila(I am seeing if you ever connect at all)#2017-03-0621:51Matt ButlerI should update the code, snippet as i think its not quite clear#2017-03-0621:52favilaI understand. 
But the stacktrace is related to connection failures with hornetq, which is transactor communication#2017-03-0621:52favilaqueries do not communicate with the transactor#2017-03-0621:53favilaSo I suspect you are not even connecting#2017-03-0621:53favilaor you connect, but it fails later#2017-03-0621:53Matt ButlerI think that error might actually be unrelated#2017-03-0621:54Matt ButlerSo in short i see the seq being consumed and performing some-oi but that after a random amount of time the code just stops running#2017-03-0621:55Matt ButlerThat error appeared at the same time in my logs and was datomic related, if its transactor related then its likely something else it my app not happy that it cant transact.#2017-03-0621:56Matt ButlerHowever the above code stopped running again but there was no hornet error this time (in the logs).#2017-03-0621:56Matt ButlerWhat would happen if there was a network issue when querying a datomic db?#2017-03-0621:57favilaI am not sure if queries continue when connection dies#2017-03-0621:57favilathe transactor connection uses hornetq, keepallives, reconnects, etc#2017-03-0621:57favilait is used to send transactions and receive the transaction queue#2017-03-0621:58favilathe queries use the storage connection#2017-03-0621:58favilaan error there would look storage-engine specific#2017-03-0621:58favilae.g. if sql is engine, would see jdbc in the stacktrace#2017-03-0621:59Matt ButlerBut one would expect an error? this is run in a java thread surrounded with a try/catch and my system has an uncaughtExceptionHandler#2017-03-0622:00stuartsierraIf there is network disruption between the Peer and Transactor, you will see an exception only when connecting or transacting. 
If there is network disruption between the Peer and Storage, you will see exceptions when reading (querying).#2017-03-0622:00favilathe exception you paste is probably in one of datomic's threads, not yours#2017-03-0622:01Matt ButlerYes, I am now assuming that the stack trace was an anomaly#2017-03-0622:01stuartsierraIn addition, the Peer's background threads will log connection errors with the Transactor, which may show up as HornetQ / Artemis exceptions.#2017-03-0622:01Matt ButlerMaybe suggesting that there is a connection issue, but not in the code above which is the code that fails.#2017-03-0622:03favilahow does it fail? not by exception?#2017-03-0622:03Matt ButlerSo i am left with no error but code that silently dies#2017-03-0622:04Matt ButlerThe cpu usage on the box drops to idle levels and the (some/io) that happens in the doseq stops being logged#2017-03-0622:04weihas anyone written a way to serialize EntityMaps by db/id and entity-t? or is that a bad idea?#2017-03-0622:04favilano return value from the expression?#2017-03-0622:05favilahow do you know it is not finished rather than dead?#2017-03-0622:06Matt Butlerso due to the doseq nil is returned. Not sure if anywhere id expect this to end up.#2017-03-0622:06Matt ButlerReasoning for the code not being finished is that if i call that function again it starts up again and carries on a little further#2017-03-0622:07Matt Butlerbased on the i/o at the end elements would be filtered out at the filter stage#2017-03-0622:09faviladoes it work if you replace some-io with something trivial? like printing progress?#2017-03-0622:09favilaor can you log that the doseq actually finished?#2017-03-0622:11Matt ButlerYes, those might be good steps. One of those unfortunate prod only bugs, that happens when dealing with large (d/datoms)#2017-03-0622:14Matt ButlerI forgot something that might matter, the (some-io) does transact, but again it only produced the hornet error when it ran/failed the first time. 
Subsequent times no error was logged.#2017-03-0622:14faviladoes some-io apply backpressure?#2017-03-0622:15favilamaybe you overwhelmed a downstream system#2017-03-0622:15favila(consistent with larger input)#2017-03-0622:15Matt ButlerIs it possible something is caching the error#2017-03-0622:16favilai don't know what's in some-io. behavior of things when flooded is not always sensible#2017-03-0622:17favilacould be just hanging, retrying forever, whatever#2017-03-0622:17favilano exception in that case#2017-03-0622:18Matt Butlerconstructs a map, does another query and performs a http request#2017-03-0622:19Matt ButlerI think that maybe removing the i/o part would be a good idea. Maybe my http request (sync) is hanging forever#2017-03-0622:19favilaYeah, my first instinct is to simplify this and get solid confirmation that the process is hanging#2017-03-0622:20Matt ButlerIm using clj-http which you'd expect to time out.#2017-03-0622:20favilaif you suspect datomic, take everything non-datomic out#2017-03-0622:20Matt Butler😄#2017-03-0622:20Matt Butlerprobably a wise bit of advice#2017-03-0622:20favilaand get some observable indicator of progress or doneness#2017-03-0622:21Matt ButlerThe some i/o + doseq logs but none of the map/filter stages#2017-03-0622:21Matt Butlerwhich is probably a mistake#2017-03-0622:21favilado you know the last datom that should appear after all filtering?#2017-03-0622:22favilaor approx how many datoms to expect?#2017-03-0622:22Matt Butleryes#2017-03-0622:22Matt Butlerthe latter#2017-03-0622:22Matt Butleraround 701k#2017-03-0622:23Matt Butleractually i can take a specific number of datoms off the front for now#2017-03-0622:23Matt Butlerso i can say exactly which datom + how many#2017-03-0622:24favilacould storage be throttling? 
(just a crazy idea)#2017-03-0622:24Matt Butlertotally#2017-03-0622:24favilawould explain low cpu, queries waiting for their io requests to come back#2017-03-0622:24favilaI would expect timeout eventually though#2017-03-0622:25Matt Butlerthis is between an ec2 node and dynamodb#2017-03-0622:25Matt Butlerand the storage metrics seem super low vs my provisioned capacity#2017-03-0622:26favilawell good luck figuring this one out#2017-03-0622:26Matt Butlerthanks 🙂#2017-03-0622:26Matt Butlersplitting it into generating the seq and consuming it was a good idea so thanks 🙂#2017-03-0622:43marshall@mbutler check ddb metrics and alarms. Ddb throttling on reads could be responsible #2017-03-0623:23Matt Butleryeah seems to be super low, single digit % of provisioned read. Getting late here, going to do some further testing tomorrow and report back 🙂#2017-03-0710:36Matt ButlerHi @favila
Done some thinking. Yesterday you asked
> are you creating and destroying connections frequently, maybe inadvertently?
It was my understanding that Datomic connections are cached: calling (d/connect) on the same URI multiple times just returns the same connection, and it is never destroyed.
Could you elaborate on what you meant by creating/destroying a connection?
Let's say my code does something similar to this, for example:
(doseq [n (range 10000)]
  (d/transact (d/connect uri) tx))
Is calling d/connect in quick succession not advised and should i be passing a connection whenever doing frequent transactions? Could this be the cause?#2017-03-0713:41jonpitherHi. Anyone deployed Datomic onto Docker + OpenShift?#2017-03-0713:57stijnonto docker yes, openshift no#2017-03-0713:58stijndocker on kubernetes#2017-03-0713:59jonpitherany learnings?#2017-03-0714:03favila@mbutler I was talking about a d/connect d/release lifecycle pair. I was speculating you had some lifecycle management in your app. I think release is async, or else maybe you exposed a race in datomic. All pure speculation, none of it ended up applying to your case#2017-03-0714:09Matt ButlerOkay, no problem. So in theory calling (d/connect uri) is functionally identical to passing around an established connection. I performed the refactor anyway as im at a loss 🙂#2017-03-0714:33stijn@jonpither very easy to get running, but we haven't tested a production workload yet and are not using a high-availability transactor yet#2017-03-0714:34stijnthe backend is a cassandra cluster and is also running inside kubernetes#2017-03-0714:42jonpitherthanks @stijn#2017-03-0715:07stuartsierrad/release is only necessary if you know you are not going to use that connection again, and the java process is going to stick around. For processes which keep a database connection for their entire lifetime, d/release is not needed.#2017-03-0715:09stuartsierrad/connect does cache connections based on the URI, so calling d/connect repeatedly should have no effect.#2017-03-0812:03dominicm@robert-stuttaford I saw this https://gist.github.com/robert-stuttaford/3bd5240c988f05092504 and your comment about a few issues. 
Wanted to know if they were with onyx integration or with that strategy as an input?#2017-03-0812:05robert-stuttaford@dominicm can’t remember the issues; but i can tell you that we are happily using this strategy in production to this day 🙂#2017-03-0812:05dominicm@robert-stuttaford so you're doing it with that gist instead of onyx-datomic?#2017-03-0812:06robert-stuttafordno, we’re using onyx-datomic#2017-03-0812:08dominicm@robert-stuttaford ah okay, so you're using tx-range and not the tx-report-queue then#2017-03-0814:59robert-stuttafordyes 🙂#2017-03-0818:08robert-stuttafordTIL in Datomic you can alias :db/idents (the most recently assigned alias becomes the default ident). effect: rename it, but support all past names.#2017-03-0820:41timgilbertYeah @robert-stuttaford, we've been renaming like :product/name to :deprecated/product.name and the like as we evolve our schema#2017-03-0822:43csmhow are t and next-t updated in the client API? I think I’m seeing an issue where one query gets an older t than expected when using (datomic.client/db conn)#2017-03-0822:51csmI’m seeing an issue where one service transacts some facts, and another service performs a query (five minutes later), but the query gets the t from just before the transaction#2017-03-0822:53csmliterally the t just before the transaction: the db-before is :t 1343698, :next-t 1343699, the db-after is :t 1343699, :next-t 1343700, and the query side has :t 1343698, :next-t 1343699#2017-03-0822:54csmis (d/as-of db (:next-t db)) ever a good idea? or (d/as-of db (Date.))?#2017-03-0914:00marshall@csm do you have a small repro case? Also, what version of datomic?#2017-03-0915:29dominicmI'm doing some profiling, and I'm seeing some threads: query-1, query-2, query-3, query-4. They're holding the bulk of our allocations right now. Wanted to know if they're from datomic & what they do, should I be concerned about their usage.#2017-03-0915:30dominicmThey aren't actually growing, so it may be caches or something. 
But thought I'd ask.#2017-03-0915:31favila@dominicm I believe they are the threads that actually execute queries#2017-03-0915:32favilaI always see them light up (and no other threads) during a long-running query#2017-03-0915:32dominicmI see… Particularly for d/q or for all queries e.g. d/datoms? @favila#2017-03-0915:32favilathere are no other queries#2017-03-0915:32favilaor am I missing something?#2017-03-0915:33dominicmSorry, do they get used for d/datoms do you know?#2017-03-0915:33favilaI believe not#2017-03-0915:33favilad/datoms has hardly any cpu impact#2017-03-0915:33favilaso I can't be sure#2017-03-0915:34dominicmNo, I thought it shouldn't.#2017-03-0915:34dominicmBit surprised at what would be querying right now is all.#2017-03-0915:34dominicmAnything big should have been converted to d/datoms already#2017-03-0915:35favilaI think you get a # equal to number of cores#2017-03-0915:36favilaand a single query will run on many of those threads at once#2017-03-0915:54dominicmHmm, I'm only seeing a single query thread light up allocations at once. Also, all 4 threads (before my job started) were sat at ~25% usage of the memory.#2017-03-0915:54dominicmNow they're at 20% (big job running)#2017-03-0915:54favilamaybe there's no parallelism to exploit?#2017-03-0915:54dominicmThe big job isn't supposed to allocate much (lazy sequence), so I suspect we're doing something wrong there.#2017-03-0915:54dominicmAh, only when parallelism can happen. That makes sense. 🙂#2017-03-0915:55dominicmI kinda thought 99% of datomic was parallelisable.#2017-03-0915:55favilaonly large intermediate sets benefit#2017-03-0915:56favilaif a clause is very selective, there isn't anything to parallelize over#2017-03-0915:57dominicmMakes sense. I think we're doing enrichment of data from other entities. 
Inferring certain properties, I'm not overly familiar with the intimate details of the queries we use running over these datoms though#2017-03-0917:25devthwatching google next keynote live stream – spanner looks really cool. i know it's been discussed before, but my next thought was immediately "i wonder if this would work as a datomic backend"#2017-03-0918:43csm@marshall I don’t have a repro case yet (this was provoked by two µ-services, one writing, one reading) though I would like to put one together. This is with 0.9.5561.#2017-03-0918:46csmI actually tried (dissoc (d/db conn) :t :next-t), and at least couldn’t reproduce the issue#2017-03-0918:50shaun-mahoodHas anyone integrated Datomic as part of a project that includes a separate GIS system? I'm going to building a PostGIS backed system and I want to see if I can use Datomic as well for as much of the non-spatial data as possible, but I'm not sure exactly where I should be putting those boundaries and if there are any real-life issues that I'll run into that I haven't considered yet.#2017-03-0920:43stuartsierra@shaun-mahood I don't know about GIS specifically, but I do know people have used Datomic alongside other storage/indexing systems.#2017-03-0920:44stuartsierraThe general rule is to store things in the system that's optimized for storing that kind of data: small, relational, transactional values in Datomic; geo-spatial data in a geo-spatial system; binary blobs in a blob store; etc.#2017-03-0923:21shaun-mahood@stuartsierra: Thanks, that's kind of what I'm leaning towards - good to know there is some success out there for other areas and it's not just a plain old bad idea. In unrelated news, the Lambda Island video on component finally got me over the hump to start using it 🙂#2017-03-1017:13dominicmWhat are the trade-offs to adding an index to an attribute? Is it just storage?#2017-03-1017:13dominicmThinking a string e.g. 
emails#2017-03-1017:18stuartsierraStorage & indexing costs.#2017-03-1017:19stuartsierraGenerally we recommend starting with db/index=true, since it's easier to remove an index than add one later.#2017-03-1017:28dominicmOh really? Interesting!#2017-03-1018:57phillipcIn a project I'm currently involved in, I'm using datalog to pull a subset of data from a datomic database into an edn file. An issue arises when pulling anything typed bigint. It seems to render as type long in the edn file (no trailing N). The project errs when attempting to load the edn file as it expects a bigint but is getting a long. Is there anything in datalog that would allow me to force the bigint typing on export to the edn file?#2017-03-1019:06alexandergunnarsonHey! So I just got this fun error: ("heartbeat failed")
2017-03-10 18:27:30.006 INFO default datomic.lifecycle - {:event :transactor/heartbeat-failed, :cause :timeout, :pid 7943, :tid 15}
2017-03-10 18:27:30.009 ERROR default datomic.process - {:message "Critical failure, cannot continue: Heartbeat failed", :pid 7943, :tid 1722}
2017-03-10 18:27:30.010 INFO default datomic.process-monitor - {:tid 1722, :AlarmHeartbeatFailed {:lo 1, :hi 1, :sum 1, :count 1}, :MemoryIndexMB {:lo 1, :hi 1, :sum 1, :count 1}, :AvailableMB 428.0, :RemotePeers {:lo 1, :hi 1, :sum 1, :count 1}, :HeartbeatMsec {:lo 5000, :hi 9692, :sum 44694, :count 8}, :Alarm {:lo 1, :hi 1, :sum 1, :count 1}, :pid 7943, :event :metrics, :SelfDestruct {:lo 1, :hi 1, :sum 1, :count 1}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}}
#2017-03-1019:07alexandergunnarsonIt’s not immediately apparent to me what caused the heartbeat failure. How should I read the log?#2017-03-1019:07marshallmost likely storage unavalabl#2017-03-1019:08alexandergunnarsonThanks for responding so quickly @marshall! I was thinking it might be a memory insufficiency, because the transactor is simply transacting to DynamoDB#2017-03-1019:09alexandergunnarsonBut I suppose DynamoDB might be read/write throttling, or maybe some latency was introduced#2017-03-1019:09marshalltransactor on aws or local?#2017-03-1019:10alexandergunnarsonSo I’m following bad practice and co-locating the transactor with the server JVM on the same AWS (EC2) instance.#2017-03-1019:10marshallalso, look around/just before that in the transactor log#2017-03-1019:10marshallif ddb is throttling you’ll see that#2017-03-1019:10alexandergunnarsonMemory creeps down steadily but not steeply#2017-03-1019:11alexandergunnarsonWhat should I look for if it’s throttling? It’s just a bunch of heartbeats#2017-03-1019:11marshallyou’d see throttling notifications#2017-03-1019:11marshallor storage backoff#2017-03-1019:11alexandergunnarsonAh okay#2017-03-1019:11alexandergunnarsonYeah, not seeing any notifications of that kind#2017-03-1019:11marshallcould also just be a transient network failure#2017-03-1019:12marshallare you running HA?#2017-03-1019:12alexandergunnarsonNo, for cost reasons#2017-03-1019:12alexandergunnarsonThis is a small project#2017-03-1019:12marshallyeah, so that is the case for HA#2017-03-1019:12alexandergunnarsonBut times like this make me wish we had HA...#2017-03-1019:12marshallif your primary goes down#2017-03-1019:12marshallfor somethign like a network hiccup#2017-03-1019:12marshallHA takes over#2017-03-1019:13alexandergunnarsonRight#2017-03-1019:13marshallbut you really only get the advantage if you’re running the pieces on separate instances#2017-03-1019:13alexandergunnarsonRight — which is more expensive 😕 Though I suppose not by 
much#2017-03-1019:13alexandergunnarsonIt’s just myself and another person on the team and we have no revenue yet haha so we’re trying to minimize costs#2017-03-1019:14alexandergunnarsonOtherwise I’d say, pay the extra few dollars a month and save ourselves the headache because developer time is much more expensive#2017-03-1019:14alexandergunnarsonI’ll look around for some other feasible AWS options#2017-03-1019:15alexandergunnarsonMight have to buckle down and get two super-small AWS instances large enough for the two transactors (one active, one failover/backup) as well as the server instance#2017-03-1019:15marshallwhat size instances are you running now?#2017-03-1019:15alexandergunnarsonLet me check — it’s the 4GB memory with 2 cores#2017-03-1019:17alexandergunnarsonA t2.medium#2017-03-1019:18alexandergunnarsonWe don’t have autoscaling or replication either so if that server goes down, I’m on duty 😅 Oh, to have money so permanently crossing fingers wouldn't be necessary#2017-03-1019:18marshallyeah, you can get away with a smaller instance for the transactor, but you have to be careful about load#2017-03-1019:19marshallwell, worst case shoestring you could put the transactor up on a single instance ASG#2017-03-1019:19marshallthen if it goes down you have temporary outage while the ASG brings it back up#2017-03-1019:19marshallit’s not HA failover, but at least it’s automatic#2017-03-1019:20alexandergunnarsonHere’s my transactor memory settings — pretty minimal
memory-index-threshold=32m
memory-index-max=128m
object-cache-max=64m
#2017-03-1019:20zaneWhat kind of performance characteristics does datomic.api/since have?#2017-03-1019:21alexandergunnarsonASG? Auto-scaling … G?#2017-03-1019:21marshallgroup#2017-03-1019:21alexandergunnarsonAh right#2017-03-1019:21alexandergunnarsonYeah that’s true#2017-03-1019:21marshallhttps://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html#2017-03-1019:21alexandergunnarsonNot HA, but a smart idea#2017-03-1019:22marshalljust make sure if you’re not using our provided CF/launch scripts that you terminate the instance when the transactor goes down#2017-03-1019:22marshallthat way the ASG replaces it#2017-03-1019:23alexandergunnarsonI used the CF/launch scripts at a previous startup when we had HA, which was nice, but we don’t have that luxury (really, necessity) now sadly#2017-03-1019:24alexandergunnarsonThanks for the tip about termination!#2017-03-1019:24marshallyep. and yeah, the CF/AMI we have will work that way too - just set the ASG size to 1#2017-03-1019:24alexandergunnarsonSo I can use the CF/launch scripts to create an ASG for the transactor without having to use HA?#2017-03-1019:24alexandergunnarsonOh I get what you’re saying now — you just answered that#2017-03-1019:25alexandergunnarsonSounds good — thanks so much!#2017-03-1019:26alexandergunnarsonDo you think a t2.micro instance is big enough? 
We might swing HA if we had two t2.nanos...#2017-03-1019:26marshalli think people have done it, but you definitely can’t push much data through ti#2017-03-1019:26marshallit#2017-03-1019:26alexandergunnarsonWith the t2.micro it’s hard too?#2017-03-1019:27alexandergunnarsonI guess the bandwidth is really small#2017-03-1019:27marshall1gb isn’t much memory#2017-03-1019:27marshallyou figure you need at least a few hundred mb for OS#2017-03-1019:27alexandergunnarsonThat’s true, especially since that’s including the OS#2017-03-1019:27alexandergunnarsonRight#2017-03-1019:27marshallso you can’t even have a 1gb heap#2017-03-1019:28alexandergunnarsonYeah so t2.nano is infeasible I suppose#2017-03-1019:28alexandergunnarsonI remember trying to get a transactor running while setting the max heap size to something like 384MB and it crashed right away#2017-03-1019:28alexandergunnarsonHmm, well this gives me something to think about — thanks!#2017-03-1022:23timgilbertI was never able to get a transactor up with less than 4GB (t2.medium), personally. It still relatively cheap though, $34 / mo according to http://www.ec2instances.info/#2017-03-1102:17nathansmutzCan Datalog pull groups of things satisfying some predicate about the group?
For instance: sets of store-items whose prices sum to $20.#2017-03-1109:26yonatanel@nathansmutz I can only think of a solution for a constant size sets which is easy but not for different sizes in the same query#2017-03-1110:58robert-stuttaford@nathansmutz you can do that with sub-queries, where the sub does (sum ..) and the super does the <= 20
pseudo:
[:find ?item
 :where
 [(datomic.api/q '[:find ?item (sum ...) :where ...] $) [[?item ?val]]]
 [(<= ?val 20)]]
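A slightly more concrete sketch of this nested-query idea, with the inner query passed in as an ordinary input (the attributes :group/items and :item/price are hypothetical, and a live db value is assumed):

```clojure
;; Inner query aggregates a price sum per group; the outer query
;; binds the relation it returns and filters on the total.
;; :group/items and :item/price are illustrative attributes only.
(let [inner '[:find ?group (sum ?price)
              :with ?i                      ; keep duplicate prices in the sum
              :where
              [?group :group/items ?i]
              [?i :item/price ?price]]]
  (d/q '[:find [?group ...]
         :in $ ?inner-q
         :where
         [(datomic.api/q ?inner-q $) [[?group ?total]]]
         [(<= ?total 20)]]
       db
       inner))
```

Passing the inner query through :in avoids quoting a query literal inside the outer query form.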
#2017-03-1110:59robert-stuttafordyou’d have to figure out the right way to pass the vals from sub to super, using the appropriate :find expression etc#2017-03-1111:26yonatanel@robert-stuttaford Will the subquery generate all subsets of items? I don't understand the pseudo code.#2017-03-1111:51robert-stuttafordapologies, i was really just showing how you’d assess aggregations in datalog by nesting queries. as for finding sets, ¯\(ツ)/¯#2017-03-1120:34devthnoticing some unexpected behavior with string tempids. this appears to be happening intermittently.
when i create a tx that references another entity in the same transaction like this:
(d/transact! conn
  [{:db/id "user-tempid" :user/username "mitnick"}
   {:db/id "team-tempid"
    :is-kind/team? true
    :team/name "Test team"
    :team/users #{"user-tempid"}}])
about half the time it will resolve to a ref (expected), and
the other half it will result in a literal string "user-tempid" (unexpected) in the collection of :team/users
anyone else experienced this?
peer is 0.9.5561 and transactor is 0.9.5561#2017-03-1120:38devthi'm deleting and re-creating database across every test run.#2017-03-1120:40devthexample unexpected result:
#:team{:name "Test team", :users ["user-tempid"]}
of query:
(d/q '[:find (pull ?team [:team/name :team/users]) .
       :where [?team :is-kind/team? true]]
     db)
#2017-03-1120:44nathansmutzThanks @robert-stuttaford and @yonatanel . That lets me know I'm probably barking up the wrong tree with Datalog for this part of my problem. I was looking into core.logic; but it currently has trouble with arbitrarily sized sets too. It's Hammock Time#2017-03-1123:17yonatanel@nathansmutz Maybe with recursive rules?#2017-03-1200:01nathansmutz@yonatanel I might look into that more. My larger project involves returning a set satisfying lots of aggregate predicates (is that a good term?) like that.
I'm generating (or proving the non-feasibility of) education plans for finishing a Bachelor's degree within 4 years. There are requirements like "no more than 15 credits per academic term/semester" and "must contain at least 6 credits worth of this subset of courses"#2017-03-1213:19ezmiller77Does anyone know if there’s a tutorial out there that explains how to update nested entities, ie. entities that are components? Or explain here?#2017-03-1213:54ezmiller77Let’s say I have nested structure like this:
[:x/value [{:x/value [{:text "test"}]}]]
where :x/value is defined in the schema as a component. When transacted, I get the entity id for this entity 12345. Later, I want to update the nested value. How would I do that?#2017-03-1219:15favilaGet the db/id of the component entity, and add/retract on that entity like any other @ezmiller77#2017-03-1219:20ezmiller77@favila i think add is the piece i’ve been missing. since I’ve just been using (d/transact) so far.#2017-03-1219:20ezmiller77thx#2017-03-1219:21favila@ezmiller77 no this is not a function. Db/add, db/retract assertions etc#2017-03-1219:22favilaOr a map with db/id on it or some unique attr value #2017-03-1219:23favila@ezmiller77 component entities are not special, it's the component attribute that is special#2017-03-1219:51ezmiller77@favila I think I follow.#2017-03-1219:54ezmiller77I think I have two separate problems that I’ve been treating as one. The first is just how to do that update, which I think you answered. The second is more related to my particular problem: that if the value of :text has changed than I won’t know which entity to update unless I keep track of the :db/id.#2017-03-1309:24rc1140hi all , is it possible to return a map when using find with values , i..e (datomic.api/q '[:find ?name ?age ?email .......etc#2017-03-1309:24rc1140atm that returns an array of values#2017-03-1309:24rc1140from what i understand if i want a map with :user/name etc , i need to use the pull api#2017-03-1309:55misha@rc1140
;; query
[:find (pull ?e [:release/name])
 :in $ ?artist
 :where [?e :release/artists ?artist]]
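For contrast, a sketch of the two result shapes (hypothetical :user/name and :user/age attributes; a live db value is assumed):

```clojure
;; :find with bare variables returns tuples (vectors) of values,
;; in :find-clause order:
(d/q '[:find ?name ?age
       :where
       [?e :user/name ?name]
       [?e :user/age ?age]]
     db)
;; result shape: #{["ann" 34] ["bob" 42] ...}

;; :find with a pull expression returns maps keyed by attribute:
(d/q '[:find (pull ?e [:user/name :user/age])
       :where [?e :user/name ?name]]
     db)
;; result shape: #{[{:user/name "ann", :user/age 34}] ...}
```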
#2017-03-1309:56rc1140correct thats thats what i said , if i want a map i have to use the pull api as you showed not the :find ?name ?age (dont know what that format is called)#2017-03-1309:56mishahttp://docs.datomic.com/query.html#pull-expressions#2017-03-1309:57mishaor just get the ids, and call d/pull-many on them afterwards#2017-03-1310:58rc1140thanks#2017-03-1414:00chrisblomWhat can I expect when not running datomic.api/gc-storage regularly?#2017-03-1414:00chrisblomWill it degrade performance, fill up storage etc?#2017-03-1415:25marshall@chrisblom it will result in accumulation of garbage segments in storage#2017-03-1415:26marshallunlikely to affect performance substantially, although that may depend on the specific storage#2017-03-1415:26marshallwill take up storage space.#2017-03-1415:27chrisblom@marshall thanks, we're using DynamoDB, does it affect performance there?#2017-03-1415:28marshalli wouldn’t expect it to have much if any perf effect#2017-03-1415:28marshallbut why are you hesitant to run gcStorage?#2017-03-1415:31chrisblomoh, i want to run it, but we didn't run it for a long time as our gc task was misconfigured#2017-03-1415:32marshallah. you should be fine - just go ahead and start running it again#2017-03-1415:33chrisblomwhen I ran it manually, i got a AlarmGCStorageFailed, which kills the transactor#2017-03-1415:34chrisblomshould it just try with an older timestamp?#2017-03-1415:34chrisblombtw, we're using a backup transactor, so a transactor going down is not a problem#2017-03-1415:35marshallhow far back did you try running it?#2017-03-1415:35chrisblom1 month#2017-03-1415:37marshalldid you have anything in the log? 
an exception possibly?#2017-03-1415:38kirill.salykinWhy doesn't Datomic run gc-storage on a regular basis?#2017-03-1415:38kirill.salykinWhy does it need to be scheduled externally?#2017-03-1415:38marshallmost likely you are getting throttled by dynamo#2017-03-1415:48chrisblom@marshall i found the exception, it was indeed throttling by dynamodb#2017-03-1415:48chrisblom14:19:39 2017-03-13 14:19:38.972 INFO default datomic.garbage - {:event :garbage/collect, :dbid "production-2017-01-09-c-cf03505f-7aac-4653-acab-913f4b3a086b", :older-than #inst "2017-01-13T10:31:35.781-00:00", :msec 1.3699999999999998E7, :phase :end, :threw java.util.concurrent.ExecutionException, :pid 9, :tid 1121}
14:19:39 2017-03-13 14:19:38.974 WARN default datomic.garbage - {:message "Cluster gc failed", :pid 9, :tid 1121}
14:19:39 java.util.concurrent.ExecutionException: com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: STKU1ATRLK677TKLOARC6BVP
14:19:39 Caused by: com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: STKU1ATRLK677TKLOARC6BVPBBVV4KQNSO5AEMVJF66Q9ASUAAJG)
14:19:39 2017-03-13 14:19:38.975 WARN default datomic.garbage - ... caused by ...
14:19:39 com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: STKU1ATRLK677TKLOARC6BVPBBVV4KQNSO5AEMVJF66Q9ASUAAJG) at com.ama
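The throttling above can be mitigated by pacing storage GC writes; a sketch of setting the datomic.gcStoragePaceMsec system property at transactor launch (the paths and value here are illustrative, not a recommendation):

```sh
# Slow gc-storage's write rate so it stays under DynamoDB's
# provisioned throughput (higher msec value = slower pace).
bin/transactor -Ddatomic.gcStoragePaceMsec=50 config/ddb-transactor.properties
```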
#2017-03-1415:49marshallyou may want to turn down your gcStorage throughput settings#2017-03-1415:49marshalland/or up ddb provisioning#2017-03-1415:50marshalldatomic.gcStoragePaceMsec#2017-03-1415:50marshallas a command line argument when you launch the transactor#2017-03-1415:51marshallalso discussed near the bottom of this: http://docs.datomic.com/capacity.html#garbage-collection-deleted-production#2017-03-1415:53chrisblomthats for deleting db's right? i was using datomic.api/gc-storage for live databases#2017-03-1416:35marshallyes, that region of the docs is about deleted databases, but the gcStoragePaceMsec applies to both#2017-03-1515:18caspercI am wondering about this case I am encountering when setting txInstant in a unit test:
(let [t1 #inst "2001-01-01"
      t2 #inst "2002-01-01"
      conn (scratch-conn)
      _ (init-schemas conn)]
  (d/transact conn [{:db/id (d/tempid :db.part/tx) :db/txInstant t1} {:db/id (d/tempid :db.part/user) :bygning/id 1 :bygning/attr1 "1"}])
  (d/transact conn [{:db/id (d/tempid :db.part/tx) :db/txInstant t2} {:db/id (d/tempid :db.part/user) :bygning/id 2 :bygning/attr1 "2-1"}])
  (d/transact conn [{:db/id (d/tempid :db.part/tx) :db/txInstant t2} {:db/id (d/tempid :db.part/user) :bygning/id 1 :bygning/attr1 "2"}])
  (prn "without as-of" (d/q '[:find ?v ?tx
                              :where
                              [_ :bygning/attr1 ?v ?tx]
                              [?tx :db/txInstant ?txInst]]
                            (d/history (d/db conn))))
  (prn "with as-of" (d/q '[:find ?v ?tx
                           :where
                           [_ :bygning/attr1 ?v ?tx]
                           [?tx :db/txInstant ?txInst]]
                         (d/as-of (d/history (d/db conn)) t2))))
prints
"without as-of" #{["2-1" 13194139534315] ["1" 13194139534313] ["2" 13194139534317] ["1" 13194139534317]}
"with as-of" #{["2-1" 13194139534315] ["1" 13194139534313]}
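One way to sidestep the ambiguity of two transactions sharing a :db/txInstant (a sketch, not from the thread; tx-data is a placeholder) is to as-of the database's basis t instead of a wall-clock instant:

```clojure
;; d/as-of accepts a t (or tx entity id) as well as a java.util.Date;
;; the basis-t of db-after identifies this exact transaction, so its
;; datoms are always visible in the as-of view.
(let [{:keys [db-after]} @(d/transact conn tx-data)
      t (d/basis-t db-after)]
  (d/q '[:find ?v ?tx
         :where [_ :bygning/attr1 ?v ?tx]]
       (d/as-of (d/history (d/db conn)) t)))
```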
#2017-03-1515:20caspercThe second and third transaction are both at :db/txInstant t2, but when doing an as of, I don’t get the one on :bygning/id 1 and :bygning/attr1 “2”.#2017-03-1515:21caspercIs it because datomic is adding “some extra” time, so all txInstants are unique?#2017-03-1515:28favila@casperc as-of with an instance is resolved to a tx value as with (-> (d/datoms :avet :db/txInstant the-instant) first :tx)#2017-03-1515:29favilaSo the as-of point is precisely 13194139534315, because that is the first match for that instant#2017-03-1515:29favilaso the one after that is not seen#2017-03-1515:30caspercAh, that explains it. Thanks!#2017-03-1515:30favilad/datoms is actually wrong, it's more like seek-index#2017-03-1515:30favilabecause inexact matches are allowed#2017-03-1515:41a.espolovGuys this query [:find (count ?e) :where [?e :а-entity/а-attribute]]
returns an OutOfMemoryError
How can I count all the entities?#2017-03-1516:03favilayou need to do it lazily with d/datoms#2017-03-1516:05dominicmI had assumed that dereffing a transaction would be a form of backpressure, but I'm starting to question it. In the docstring the "completion" of the transaction is mentioned, but I'm not sure what it means in this context#2017-03-1516:06favilaCompletion means transaction committed#2017-03-1516:07faviladeref is backpressure only if you wait for it before issuing new txes#2017-03-1516:07danielstocktonI think total datom count can also be monitored from the transactor:http://docs.datomic.com/monitoring.html#sec-3#2017-03-1516:08dominicm@favila I am doing a bulk lot of transactions (divided up into chunks). In some cases transacting millions of datoms. I want to avoid overwhelming the transactor#2017-03-1516:13favilaFor example, (run! #(deref (d/transact-async conn %)) txes) would ensure only one tx is in flight at a time#2017-03-1516:13favilaas long as the individual txes are small, txor will not be overwhelmed#2017-03-1516:13favilawhen indexing kicks in, tx rate will slow#2017-03-1516:13favilahowever one-at-a-time is very slow, so cognitect recommends pipelining#2017-03-1516:14favilakeep a few uncompleted txes in the air at a time#2017-03-1516:14favilahttp://docs.datomic.com/best-practices.html#pipeline-transactions#2017-03-1516:15dominicm@favila We've just increased the size of the individual txes, to increase the throughput of our queries to generate the txes.#2017-03-1516:16dominicmWill take a look into pipelining.
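The pipelining idea can be sketched as a bounded window of in-flight transaction futures (a sketch only; no error handling, and the window size is illustrative):

```clojure
;; Keep up to `window` transactions in flight at once; deref the
;; oldest future (i.e. wait for that tx to commit) before submitting
;; another, then drain whatever is left at the end.
(defn pipeline-transact [conn window txes]
  (let [in-flight (java.util.ArrayDeque.)]
    (doseq [tx txes]
      (when (>= (.size in-flight) window)
        @(.poll in-flight))               ; block on the oldest tx
      (.add in-flight (d/transact-async conn tx)))
    (run! deref in-flight)))              ; wait for remaining txes
```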
I had wondered how you'd keep a few in flight at a time.#2017-03-1516:16favilalarge tx sizes are generally bad#2017-03-1516:16favilabetter more txes more frequently than larger txes less frequently#2017-03-1516:17favilaI think the guideline they gave is ~1000 datoms per tx#2017-03-1516:17favilaalthough we have done larger just fine#2017-03-1516:17favilabut tens-of-thousands and tx timeouts become a problem#2017-03-1516:17favilaand the jitter upsets other txes from other peers (on an active db)#2017-03-1516:18favilaThere was some docs somewhere on tuning for a bulk import job too, but I can't find them now#2017-03-1516:18dominicmI'd been given the 10k datoms number. Hmm. We've just increased from ~10/tx to 10*1000 (chunking the job into 1000s)#2017-03-1516:19favilathere were some transactor tuneables to set, as well as temporarily raising storage if you use e.g. dynamo#2017-03-1516:21favilae.g. raising the memory-index-threshold to avoid indexes as long as possible, then doing an explicit requestIndex at the end#2017-03-1516:22favilaanother technique is to do the bulk index locally (dev storage) on a big machine, then backup+restore to remote storage#2017-03-1516:22favilahttps://hashrocket.com/blog/posts/bulk-imports-with-datomic#2017-03-1516:22favilabut that doesn't talk about memory-index-threshold#2017-03-1518:37eraserhdAm I correct in assuming that, in pull expressions, defaults can't be supplied for reverse lookups?#2017-03-1518:38eraserhde.g. [{(default :foo/_bar []) [:foo/uuid]}]#2017-03-1518:38favilaMaybe it can be if it is a component entity (where reverse-lookup is a scalar)#2017-03-1518:39favilacomponent attribute rather#2017-03-1518:39eraserhdOh, good lord.. 
There's some spec in my own app rejecting it.#2017-03-1518:40favilaI'm not sure you can default cardinality-many attributes is the thing#2017-03-1518:49eraserhd@favila verified (not possible)#2017-03-1520:03djjolicoeursetting the object-cache size on the transactor controls the object-cache size of the transactor, right? If I wanted to set that on a peer I would do so via a java option on the peer, is that right?#2017-03-1520:44marshall@djjolicoeur correct#2017-03-1520:44marshallMemory index setting on the transactor is used by all peers
Object cache size is set independently on each, but defaults to 50% of the heap#2017-03-1520:45djjolicoeur@marshall thanks, that what I though but wanted to make sure#2017-03-1521:36djjolicoeur@marshall is there a good way to ensure that a transactor is running, i.e. something we would monitor? we have an HA setup, and we want to have some monitoring on both the primary transactor and the failover to ensure we get alerted if either dies#2017-03-1521:37marshallHeartbeatMsec and HeartMonitorMsec#2017-03-1521:37marshallfor the primary and standby, respectively#2017-03-1521:37marshall@djjolicoeur ^#2017-03-1521:42djjolicoeur@marshall those are settings internal to both the primary and standby, right? I was looking for something we might be able to monitor externally#2017-03-1521:46marshallare you using CloudWatch?#2017-03-1521:46marshallfor metrics#2017-03-1521:47marshallor some other custom metrics callback?#2017-03-1521:59djjolicoeurour transactors don’t run on AWS, so we don’t use CloudWatch. we track metrics in riemann with the yeller /datomic-riemann-reporter. but what I’m looking for is actually less around metrics and more around telling our automation framework that the transactor is up and ready. a port that is open or something along those lines. and if a call to that fails, or a number of calls to that fails, restart the transactor.#2017-03-1522:00djjolicoeurif no such thing exists off the top of your head, that is fine, we will figure something out.#2017-03-1522:19marshallthe heartbeat would definitely serve that purpose#2017-03-1522:21marshalli can’t think of much else other than trying to connect#2017-03-1522:24djjolicoeurthanks, I’ll look into what we can do with the heartbeat#2017-03-1621:18uwoI have a set of eids I know I want to exclude from my query results. what’s a good way to achieve this. 
(here’s a pseudo-code query I know doesn’t work):#2017-03-1621:20uwo(just as clarification, ?gid is a uuid and ?excluded-bills is a set of entity ids)#2017-03-1621:24favila'[:find [?b ...]
  :in $ ?gid [?excluded-invoice-group-id ...]
  :where
  [?g :invoice-group/id ?gid]
  (not [(= ?excluded-invoice-group-id ?g)])
  [?g :invoice-group/bills ?b]]
#2017-03-1621:24favilaBut it'd be even better probably to just test set membership directly#2017-03-1621:25favila'[:find [?b ...]
  :in $ ?gid ?excluded-invoice-group-id-set
  :where
  [?g :invoice-group/id ?gid]
  (not [(contains? ?excluded-invoice-group-id-set ?g)])
  [?g :invoice-group/bills ?b]]
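favila's second form passes the exclusion set in as a plain Clojure value, and a set works directly as a membership test, so the filtering logic can be exercised without a running database. A minimal plain-Clojure sketch over a hypothetical in-memory relation of [group-eid bill-eid] tuples (all ids invented for illustration):

```clojure
;; Hypothetical in-memory relation of [group-eid bill-eid] tuples.
(def group->bills
  [[100 1] [100 2] [200 3] [300 4]])

;; Mirrors the query: drop tuples whose group eid is in the exclusion
;; set, then collect the bill eids.
(defn bills-excluding [rel excluded-group-eids]
  (->> rel
       (remove (fn [[g _bill]] (contains? excluded-group-eids g)))
       (mapv second)))

(bills-excluding group->bills #{200})
;; => [1 2 4]
```

Because the exclusion set is just a value, the very same set can be handed to the query's :in clause as favila shows above.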
#2017-03-1621:26uwoahh, yeah. that is simpler.#2017-03-1621:26uwo@favila thanks!#2017-03-1621:26favilaprobably faster also#2017-03-1621:26favilanp#2017-03-1712:39thomashi, I have been running into the problem where Datomic Entities are not maps....#2017-03-1712:40thomasand I have found this bit of code that https://gist.github.com/jtmarmon/0a644fbca15a1742964c seems to solve it (Almost)#2017-03-1712:40thomasbut now I get NPE's#2017-03-1712:41thomasI added a (if (nil? e)... to my code, but then it looks like I get way too much data.#2017-03-1712:41thomaswhat is the best way of dealing with this?#2017-03-1713:02stuartsierra@thomas What are you trying to accomplish? If you want maps to start with, d/pull may be an easier API to use, since it does return normal Clojure maps.#2017-03-1713:04stuartsierraAs a side note, (-> e class .toString (clojure.string/split #" ") second (= "datomic.query.EntityMap")) is a complicated way to write instance?.#2017-03-1713:06thomasI am using a rather old version of datomic... so can't really use d/pull 😞#2017-03-1713:07thomasand yes... re the second item I think you are correct.#2017-03-1713:14thomasand I am trying to do a postwalk on the EntityMap. and that was causing problems.#2017-03-1713:15thomasthen @dominicm pointed out that EntityMaps are not Clojure maps... there is a (into {} entity) in the code, but from what I understand that won't work on nested maps#2017-03-1713:53thomasSo what is the best way of turning EntityMaps into Clojure maps? With (into {} entity) I can't do a postwalk and with the above mentioned function I am getting different results it seems and those result in failure higher up the stack.#2017-03-1714:00dominicm(clojure.walk/postwalk (fn [x] (if (instance? datomic.query.EntityMap x) (into {} x) x)) input)
Does this work @thomas?#2017-03-1714:06thomaslet me try#2017-03-1714:08thomasno, that gives me a java.lang.AbstractMethodError error again 😞#2017-03-1714:08thomasfrom what I understand a prewalk should do the trick....#2017-03-1714:08thomaslet me try that#2017-03-1714:09thomasthat looks like it...#2017-03-1714:10thomasprewalk instead of postwalk... let me try that in our slightly bigger env#2017-03-1714:13thomasthat looks ok!#2017-03-1714:14thomasprewalk it is...#2017-03-1715:42thomasThank you @stuartsierra and @dominicm !!!#2017-03-1715:42thomasNow it is working as expected.#2017-03-1715:48dominicmoh, of course. Makes sense.#2017-03-1716:02thomasmost things make sense after you have fixed them 😉#2017-03-1716:16a.espolovGuys for query
[:find (max 5 ?e) (pull ?r [*])
 :with ?player
 :where [?player :player/region ?r]
 [?player :player/attr.stats.wins ?e]]#2017-03-1716:16a.espolovI understand that in order to get all of the attributes for the :player entity I'll have to write my own aggregation function?#2017-03-1717:41favilawhat?#2017-03-1717:42favilaI don't understand the question#2017-03-1717:43favilawhat is ?e here? why are you calling max on it?#2017-03-1717:45a.espolov@favila I want to choose the 5 entities which have the maximum value of the attribute :player/attr.stats.wins#2017-03-1717:48a.espolovthe query works
it's just that instead of one attribute (:player/attr.stats.wins) I'm interested in the whole :player entity#2017-03-1717:48favilawhat is the type of :player/attr.stats.wins?#2017-03-1717:48a.espolovlong#2017-03-1717:49favilacardinality-one?#2017-03-1717:49a.espolovyes#2017-03-1717:51favilaso you want (d/pull db [* {:player/region [*]}] ?player) for the 5 players with the most wins?#2017-03-1717:54a.espolov@favila i'm sorry))
(d/q '[:find (max 5 ?e) (pull ?r [:region/key])
       :with ?player
       :where [?player :player/attr.stats.wins ?e]
       [?player :player/region ?r]]
     (d/db conn))#2017-03-1717:54favilatop 5 players per region?#2017-03-1717:55a.espolovyes#2017-03-1717:57favila[:find (max 5 ?n-wins) ?r (pull [*] ?player)
 :where [?player :player/attr.stats.wins ?n-wins]
 [?player :player/region ?r]]#2017-03-1717:58favilasorry hit enter too soon#2017-03-1717:58favilais that what you are thinking?#2017-03-1717:58favilasince you are aggregating two different ways I'm not sure you can do it in a single query#2017-03-1718:00a.espolov@favila the query above does not work:
'invalid expression pull'#2017-03-1718:01faviladid you remove the :with?#2017-03-1718:01a.espolovyes#2017-03-1718:01favilaI don't think that query gives you what you want anyway#2017-03-1718:01favilaoh, you have to reverse arg order#2017-03-1718:01favilaman I hate that so much, I always get that wrong#2017-03-1718:01a.espolovactually I'd settle for just getting the :db/id of each :player#2017-03-1718:01favilad/pull and pull reverse arg order#2017-03-1718:04a.espolov@favila I don't quite understand the use case, would it be difficult to show a minimal sample?#2017-03-1718:05favilaanything in :with is kept for the result set, but removed for aggregation in :find#2017-03-1718:06favilaso you can't both aggregate by a subset and also keep the full tuple#2017-03-1718:06a.espolovoh(#2017-03-1718:07favilaSo you must either query twice, or do the aggregation yourself afterwards#2017-03-1718:13a.espolov@favila Thank you, but I still don't understand how with two queries you can get even the :db/id of the :player with the highest number of victories#2017-03-1718:13favilaIn first query, you get [?region #{?max-five-wins}]#2017-03-1718:14favilain second query, you find players whose regions and wins match#2017-03-1718:14favilayou may get more than 5 results per region if multiple players have the same number of wins#2017-03-1718:14favilabut you can sort+limit that yourself#2017-03-1718:14favilaI will write an example#2017-03-1718:17favila(let [max-wins-per-region (d/q '[:find ?r (max 5 ?n-wins)
                                 ;; No point to :with ?player
                                 :where
                                 [?player :player/attr.stats.wins ?n-wins]
                                 [?player :player/region ?r]] db)
      max-wins-per-region-relations (into []
                                          (mapcat (fn [[region-id n-wins]]
                                                    (map #(do [region-id %]) n-wins)))
                                          max-wins-per-region)]
  (d/q '[:find ?r-key (pull ?player [*])
         :in $ [[?r ?n-wins]]
         :where
         [?player :player/region ?r]
         [?r :region/key ?r-key]
         [?player :player/attr.stats.wins ?n-wins]]
       db
       max-wins-per-region-relations))#2017-03-1718:18favila(I assume :player/region is cardinality-one? players have only one region?)#2017-03-1718:18a.espolovthe answer to both questions is yes#2017-03-1718:19favilaso here is another approach#2017-03-1718:20a.espolov@favila thanks)
in a short time I've discovered a lot of new things#2017-03-1718:20favilaYou can always do the aggregation yourself, too, if you can hold the intermediate result set#2017-03-1718:21favilaquery that returns [?player-id ?region-id n-wins], then do a reduce over it#2017-03-1718:22favilain that case the player cutoff will be arbitrary. If 6 players in a region all have 10 wins, which player is not listed?#2017-03-1718:23favilaif that doesn't fit in memory, you can use (d/datoms db :aevt :player/region), and reduce over it, grabbing player data and aggregating at the same time#2017-03-1718:25favilaRemember, queries run on the peers. You can do everything with code instead of queries if you want. The client is going to download every intermediate value either way.#2017-03-1718:25favilaThere's no need to fit everything into queries, certainly not into one single query#2017-03-1718:56a.espolov@favila with 305 000 players in the database, the query to get the top players takes a total of 2.2 minutes#2017-03-1718:58a.espolovthx)#2017-03-1719:14favila@a.espolov if you have an index on :player/attr.stats.wins, it may be faster to put that clause first#2017-03-1719:15favila(depends on the selectivity of the index)#2017-03-1719:16a.espolov@favila can you add an index to an attribute if instances already exist in the database?#2017-03-1719:17favilaIf :db/index = false, you can add an index#2017-03-1719:18favilaBy selectivity I just mean how many records you will get for a given value#2017-03-1719:19favilae.g., if number-of-wins clusters around a few common values, it may touch fewer datoms to look up the region first, then the number of wins.
But if the number of wins tends to vary widely, the index is more selective, so getting by number of wins first, then the region, will touch fewer datoms#2017-03-1719:45a.espolov@favila la index i can disable attribute have no problems with existing records in the database?#2017-03-1719:56favila@a.espolov I don't understand?#2017-03-1720:04a.espolov:db/index = true => :db/index = false
can such an operation be executed on existing data in the database without problems?#2017-03-1720:05favilaThese are the permitted schema alterations: http://docs.datomic.com/schema.html#altering-schema-attributes#2017-03-1720:06favilayou can change :db/index by itself at will#2017-03-1720:07favilaBut removing an index in your scenario will not help query performance. The reasons to remove an index are: less storage for the index, less indexing time.#2017-03-1720:08favilaI think it's usually better to index everything, then turn it off later if either of those becomes a problem#2017-03-1720:12a.espolov@favila thanks#2017-03-1800:49SoV4Hi, how can I get the latest 9 entries from my datomic data store. I want the latest 9 :author/email .. how would I go about doing that?#2017-03-1807:46val_waeselynck@sova you can iterate on the Log and search for the corresponding datoms... but oftentimes its best to have an explicit time attribute on the entity.#2017-03-1815:47ezmiller77Is it possible to add entities to the db without a schema, say for testing simple db utility functions?#2017-03-1815:59camdez@ezmiller77: I don't think so. But, depending what kind of utility functions you're testing, it might be worth noting that datomic.api/q can be used against a static list of facts rather than a Datomic DB. This way you could conjure up whatever data you wanted without touching a schema. #2017-03-1816:00ezmiller77Yeah… was thinking about that. I might be testing for no good reason, since I was basically just testing (d/pull)#2017-03-1816:02camdezFair enough. :)#2017-03-1818:38yonatanelHow suitable is datomic-free as a private in-memory database of a stream processing node, or an actor, agent etc?#2017-03-1822:39riadvargasHi! I want a case insensitive where clause. Does anyone know if it's possible and how can I do that? I already tried this https://stackoverflow.com/questions/32164131/parameterized-and-case-insensitive-query-in-datalog-datomic but it didn't work for me.
It worked partially, but if I use "you" as input and have the records "you" and "Youtube", it'll return both of them. How can I just retrieve the exact match?#2017-03-1901:26yonatanel@riadvargas What about using clojure.string/lower-case instead of regex?#2017-03-1903:05SoV4@val_waeselynck thanks val, I think I have a timestamp attrib on my data, that will come in handy.#2017-03-1911:43isaacwhy does the datomic client query not support bindings? e.g.:
[:find .... :where [(quot ?time 60) ?minute]]#2017-03-1911:45isaacgot this error:
{:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "The following forms do not name predicates or fns: (quot?)", :dbs [{:database-id "58ce6a9d-287a-434a-b282-0107237fc4f9", :t 1003, :next-t 1008, :history false}]}
#2017-03-1913:12favila@isaac Error message is about quot not binding. Try fully qualifying it: clojure.core/quot#2017-03-1913:33isaac@favila I tried, but still got the same error#2017-03-1913:36favilaExactly the same?#2017-03-1913:38favilaDo you maybe have a typo? It says it can't find quot? (note the question mark)#2017-03-1913:38isaacyeah,
{:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "The following forms do not name predicates or fns: (clojure.core/quot?)", :dbs [{:database-id "58ce6a9d-287a-434a-b282-0107237fc4f9", :t 1003, :next-t 1008, :history false}]}
#2017-03-1913:40isaacyeah, I also tried mod, neither is working#2017-03-1913:46isaacit is strange that the same query works on the peer#2017-03-1913:46isaac@favila#2017-03-1914:34favilaCould you have copy+pasted a control character or something? The question mark is still suspicious.#2017-03-1914:44isaac(<!!
 (client/q conn {:query '[:find ?e ?tx
                          :where
                          [?e :db/txInstant]
                          [(quot ?e 2) ?tx]]
                 :args [(client/db conn)]}))
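While the client API rejects the function clause, one workaround (not suggested in the thread, just a hedged sketch) is to leave the computed binding out of the :where clause and derive it afterwards in ordinary Clojure over the raw result tuples. Mirroring the earlier [(quot ?time 60) ?minute] clause, with invented values:

```clojure
;; Hypothetical raw result tuples [?e ?time], as the query would
;; return them with the function clause removed.
(def rows [[17 125] [42 60] [99 59]])

;; Apply the (quot ?time 60) derivation on the peer/client side.
(defn with-minutes [rows]
  (mapv (fn [[e t]] [e (quot t 60)]) rows))

(with-minutes rows)
;; => [[17 2] [42 1] [99 0]]
```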
#2017-03-1914:44isaac@favila#2017-03-1918:18nickikHas anybody explored datomic transactor in kubernetes?#2017-03-1918:19nickikCould kubernetes handle the hot failover?#2017-03-1918:20devthi've played with it but haven't fully productionalized yet. HA would simply be a Deployment with 2 replicas, right?#2017-03-1918:21nickikThats what I am hoping.#2017-03-1918:21devthi'm not totally sure what the readiness and liveness probes would look like#2017-03-1918:23nickikIm a real beginner with all this kubernetes stuff so I have no clue. I think for starters I will put the transactor on AWS directly and only deploy the peers in the kubernetes. That should be easy.#2017-03-1918:23devthyes, peers on k8s should be straightforward#2017-03-1918:24nickikMaybe put memcached in the cluster to reduce the pressure on dynamodb#2017-03-1918:24nickikPutting everything into the cluster seems hard, all the stateful stuff in kubernetes is confusing as hell.#2017-03-1918:24devthstateful stuff only recently became supported with StatefulSets#2017-03-1918:25devththere's a (very active) k8s slack btw#2017-03-1918:25nickikThe question is with what storage backend it would be best used.#2017-03-1918:26devththe transactor isn't actually very stateful (besides caches i assume) - mainly just the storage#2017-03-1918:26devthcould work with a variety of storages#2017-03-1918:27devthif on aws dynamo makes sense. if gcp, cloud sql (mysql or postgres) maybe#2017-03-1918:27nickikWork yes. Easy to set up and run is the question.#2017-03-1918:28nickikI will use whats native to the cloud right now, but I think it would be much cooler to have a setup that is all in kubernetes and can be moved to different places.#2017-03-1918:29devthyou mean including storage?#2017-03-1918:29devthi'm running a postgres + transactor + peers all in a k8s cluster but like i said, not productionalized.
been a few months since i played with it.#2017-03-1918:29nickikYeah, that would be fun.#2017-03-1918:30devthpostgres is a statefulset with attached ssd persistent disks#2017-03-1918:30nickikIs that opensource? I have been thinking about that setup as well.#2017-03-1918:30devthi haven't open sourced it .. just stuff i was playing with#2017-03-1918:31nickikCould you show me the container img for the transactor?#2017-03-1918:31nickikI am also just playing around, so it does not need to be production quality.#2017-03-1918:32devthi used https://github.com/pointslope/docker-datomic to build a datomic docker image#2017-03-1918:32devthmy Dockerfile:
FROM pointslope/datomic-pro-starter:0.9.5390
ADD restore-db.sh /usr/local/bin/restore-db
ENV MYSQL_CONNECTOR_URL
# download the mysql connector () from $MYSQL_CONNECTOR_URL
# and put it in datomic's lib dir
RUN \
    wget $MYSQL_CONNECTOR_URL -O mysql-connector-java-5.1.24.zip \
    && unzip mysql-connector-java-5.1.24.zip \
    && mv mysql-connector-java-5.1.24/mysql-connector-java-5.1.24-bin.jar $DATOMIC_HOME/lib \
    && rm -fr mysql-connector-java-5.1.24 mysql-connector-java-5.1.24.zip
ENTRYPOINT ["bin/transactor", "-Xmx4g", "-Xms4g"]
#2017-03-1918:33devthoh right, i remember now i switched to Cloud SQL (on GCP)#2017-03-1918:33devthso scratch that postgres part.. still - it'd be relatively straightforward#2017-03-1918:35devthif you want to get into K8S you should check out Helm Charts for canonical ways of running standard open src projects https://helm.sh/#2017-03-1918:35nickikCool. I have tried that docker img, but I had some problems with it. Maybe I did something wrong. I guess I will try again.#2017-03-1918:36nickikI have heard of helm, but I was not sure what it was doing for me.#2017-03-1918:36devthbasically you can helm install postgres or helm install zookeeper (those aren't the exact commands)#2017-03-1918:36nickikWhat I would like is a one-click deployment thing that gives me storage with datomic and all I have to put in is my clojure program.#2017-03-1918:37devthit packages up the deployment(s), service(s), ingress, persistent disks, etc into a single logical unit#2017-03-1918:37nickikSo that I get a whole cluster with storage, monitoring around my Clojure server.#2017-03-1918:37devthit could do exactly that#2017-03-1918:37devthbut no Datomic chart exists#2017-03-1918:37devthanyone could make one though#2017-03-1918:37nickikNice. I need to get into that. There are just so many different new developments in this space.#2017-03-1918:38devthyes. very active, but k8s seems to be beating everyone 🙂#2017-03-1918:38nickikThats why I am jumping on the bandwagon as well.
Im trying to set up a cluster in AWS and doing my own projects inside of it.#2017-03-1918:39devthhere's the site for all the stable Helm charts https://kubeapps.com/#2017-03-1918:39devthincludes both mysql and postgres#2017-03-1918:40nickikThose do persistence as well?#2017-03-1918:40devthhaven't checked but i doubt they'd be recommended if they didn't#2017-03-1918:41devthsource for the postgres chart includes https://github.com/kubernetes/charts/blob/master/stable/postgresql/templates/pvc.yaml#2017-03-1918:41devththat gives you a persistent volume#2017-03-1918:41devthcloud agnostic (this is why k8s is awesome)#2017-03-1918:42nickikI was considering this: https://github.com/sorintlab/stolon#2017-03-1918:42nickikbut its probably overkill.#2017-03-1918:43devthlooks cool. not familiar#2017-03-1918:43nickikI thought if everything is so awesome and automatic, why can't I get a full setup for my 5 line clojure ring web server 🙂#2017-03-1918:44nickikThanks for your help.#2017-03-1918:44devthsounds like a good blog post 🙂#2017-03-1918:44devthnp#2017-03-1918:44nickikWell, I first need to setup the cluster to deploy a blog 🙂#2017-03-1918:44devthjust use a managed cluster#2017-03-1918:44nickikI don't actually have one#2017-03-1918:45nickiklike on google cloud?#2017-03-1918:45devthaws: EC2 Container Service
gcp: Google Container Engine#2017-03-1918:45devtheven Azure has one#2017-03-1918:45nickikYeah, I have started with that as well. But I also wanted to explore doing more low level stuff myself.#2017-03-1918:46devthi haven't tried setting up my own cluster but there's a lot of automation i've seen around it#2017-03-1918:46devthcheck CoreOS' Tectonic#2017-03-1918:46nickikThats what I am setting up.#2017-03-1918:46devthgood luck 🙂#2017-03-1918:47nickikA friend of mine is hosting DNS for me, but I cant continue until he makes some changes. So in the meantime I am looking into deploying datomic once I have it running.#2017-03-1919:29nickik@devth Another question. In the transactor properties file, there are host and alt-host. How did you configure those while running in kubernetes?#2017-03-1919:54devthhost: 0.0.0.0
alt_host: datomic
where datomic was the name of the service i.e. dns#2017-03-1919:56nickik@devth So that is the kubernetes service name? Is that configured in the kubernetes yaml file?#2017-03-1919:56devthyep#2017-03-1919:57nickikok. that makes sense. The service is the one that routes to the transactor pod.#2017-03-1919:57devthe.g. apiVersion: v1
kind: Service
metadata:
  name: datomic
  namespace: {{.Values.global.namespace}}
  labels:
    app: datomic
    tier: db
    release: {{ .Release.Name | quote }}
spec:
  ports:
  - port: {{.Values.datomic_port_1}}
    name: datomic1
  - port: {{.Values.datomic_port_2}}
    name: datomic2
  - port: {{.Values.datomic_port_3}}
    name: datomic3
  selector:
    app: datomic
    tier: db
(the template syntax is because i'm doing it as a Helm Chart template - replace with real values)#2017-03-1919:58devthright#2017-03-1919:58nickikSo you actually have a Helm package?#2017-03-1919:58devthyeah just a local one. they're very easy to create#2017-03-1919:58nickikThanks for the file. I think Im slowly getting the hang of how everything connects.#2017-03-1919:59devthbasically:
Chart.yaml with some metadata
values.yaml with any vars you want
templates/k8s-yaml-files-here.yaml
#2017-03-1919:59nickikCool. Makes sense.#2017-03-1920:00devthmaybe i'll write it up in a blog post if there's interest.#2017-03-1920:00nickikWhats the namespace usually?#2017-03-1920:00devthdepends on your conventions. i was using env as my namespaces, e.g. dev, prod, staging ...#2017-03-1920:00nickikI am definitely interested in all things "clojure stack" on kubernetes.#2017-03-1920:00devthif you don't care just use default#2017-03-1920:01devthothers use teams as the namespace#2017-03-1920:01nickikWhy are there 3 ports?#2017-03-1920:01devthstandard datomic ports#2017-03-1920:01devthdon't remember their purposes offhand#2017-03-1920:01nickikah, I did not remember there were 3.#2017-03-1920:01devthin my values.yaml i'm defining them as:
datomic_port_1: 4334
datomic_port_2: 4335
datomic_port_3: 4336
#2017-03-1920:02nickikWhats your opinion on using peer or client api.#2017-03-1920:02nickikI am still a little unsure on what to use when.#2017-03-1920:02devthalways prefer peer when possible (i.e. when your app is on the jvm)#2017-03-1920:03devthclient api won't be as performant iiuc (i haven't used it)#2017-03-1920:03devth# The dev: and free: protocols typically use three ports
# starting with the selected :port, but you can specify the
# other ports explicitly, e.g. for virtualization environs
# that do not issue contiguous ports.
# h2-port=4335
# h2-web-port=4336
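Putting the thread's values together (host 0.0.0.0, alt_host pointing at the k8s Service named datomic, the three ports 4334-4336), a transactor properties fragment for the dev-protocol case that this comment describes might look like the sketch below. This is an assumption-laden reconstruction, not devth's actual file; note that the properties file spells the setting alt-host:

```properties
protocol=dev
host=0.0.0.0
alt-host=datomic
port=4334
h2-port=4335
h2-web-port=4336
```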
#2017-03-1920:03devththere's some docs on ports in the transactor props sample#2017-03-1920:03nickikOk. I can figure that stuff out. I just thought there might be another reason.#2017-03-1920:05devthkey to understanding clj on k8s is really just docker. you need to understand two mostly-orthogonal pieces:
1. how to containerize a clojure app
2. how to run containers on k8s
there's nothing really clojure-specific about #2#2017-03-1920:06devthdockerizing clojure resources.
- http://www.rkn.io/2014/09/13/clojure-docker/
- https://hub.docker.com/_/clojure/#2017-03-1920:07nickikSure. Datomic was more of a question for me. Clojure itself should be straight forward (I might try with OSv but that is besides the point)#2017-03-1920:07devthcool#2017-03-1920:08nickikhttps://github.com/Quantisan/docker-clojure#2017-03-1920:08nickikI had my eye on that.#2017-03-1920:08nickikBut pedestal already provides a docker file#2017-03-1920:08nickikand one for osv#2017-03-1920:09devththat repo is what produces the official image i linked (https://hub.docker.com/_/clojure/)#2017-03-1920:09devthbut yeah, many ways to do it. should be straightforward any way you choose#2017-03-1920:11nickikI really want to have a kubernetes and datomic setup, so that I can have quick devlopment process where I can automatically deploy the apps into that cluster.#2017-03-1920:11devththat's what i wanted too 🙂 i got it working with GitLab repo + GitLab CI#2017-03-1920:12devthyou can also develop locally against a remote transactor in a k8s cluster, if you choose#2017-03-1920:12nickikI am sort of betting on the CoreOS stuff and I am using Quay, but I have only just registered.#2017-03-1920:12devthquay's been good IME#2017-03-1920:13nickikIt strange, I tried to hock a repo up to a github repo and it did not work.#2017-03-1920:13nickikWith this repo: https://github.com/nickik/blackjack#2017-03-1920:14nickikBut I will figure it out 🙂 Nothing as easy as you hope it is.#2017-03-1920:15devthit's only easy after you go through the pain of figuring it out the first few times#2017-03-1920:16nickikWith the Quay thing its just strange because there is little you can do wrong "create new repo" -> "add github hook" done.#2017-03-1920:16nickikBut it fails to add the github hook. Authentication has worked.#2017-03-1920:16devthoh, weird. maybe a bug#2017-03-1920:17nickikYeah, I was thinking about contacting support, but I did not want to seem like a idiot. 
So I will just wait until tomorrow and try again.#2017-03-1920:18devthcan also use your cloud's docker registry#2017-03-1920:18devthbut you'd have to push the image there manually or setup your own CI#2017-03-1920:25nickikI want to try this security scanning stuff, that sounds cool.#2017-03-1920:26nickikAlso I sort of like how CoreOS approaches a lot of stuff, so if I really like quay I would not mind paying for it.#2017-03-1920:26devthi think gcp's registry does that#2017-03-1920:26devthyeah, i like their stuff#2017-03-1920:26nickikI just love that they work on trusted boot stuff.#2017-03-1920:27nickikI think even docker has added something as well.#2017-03-1920:29devthonly on paid plans 😪#2017-03-1921:32fentonHere is a datomic tutorial I worked on... Aimed at the beginner... Feedback appreciated: https://www.reddit.com/r/Clojure/comments/5zu1oc/my_datomic_tutorial_feedback_sought/#2017-03-2007:39nickik@fenton Looks really cool. Thank you, an introduction of that size was a bit missing.#2017-03-2007:40nickikI hope i will remember to give feedback when I read it.#2017-03-2008:00richardwongI also tried similar code here, probably due to a documentation error? It works fine on the peer side, but gives me an error once I switch to the client. Pretty strange. Or, does it mean that I should use a peer for stability (internal fns)?#2017-03-2008:37yonatanel@fenton Maybe since this is a tutorial you should stick to the recommended fully qualified keywords (:user/cars instead of :cars)#2017-03-2012:20dominicmIn the docstring for tx-range, tx numbers are referenced. What are they? A count of transactions? (0, 1, 2…)#2017-03-2022:28timgilbert@fenton: I think this is pretty great, and a much needed resource. One thing you might want to consider is that as of recently, you can use string temp-ids or in many cases omit the :db/id field altogether, which might make some of your sample data a little more precise.
See "tempids" here: http://blog.datomic.com/2016/11/datomic-update-client-api-unlimited.html#2017-03-2023:11fentonThose are great points Tim, I'll update the tutorial soon!#2017-03-2023:32fenton@nickik @yonatanel thank you guys for your suggestions and feedback... I'll include.#2017-03-2023:36fenton@yonatanel I never really understood exactly what one gained by namespacing. Is there more to it than just keeping things a bit more orderly? Are they a kind of stand-in for RDBMS tables? Any pointers to commentary would help me explain the topic to others. Thanks. Keep in mind I'm rather daft 😉#2017-03-2100:16yonatanel@fenton Sometimes I find it easier to read e.g. enums like {:reject/reason :reject.reason/unknown-user}#2017-03-2100:18fenton@yonatanel ok sounds good.#2017-03-2100:19yonatanelAnother reason might be aliases, where you still have the context of the keyword but it isn't disturbing, like ::s/invalid instead of :clojure.spec/invalid#2017-03-2100:20thedavidmeister@fenton can make refactors easier too#2017-03-2100:21thedavidmeisteri didn't follow this advice and used :min and :max for a lot of different things#2017-03-2100:21thedavidmeisteri had some bugs that i could only fix by going back and being careful about exactly what this was a min/max of#2017-03-2100:22thedavidmeistere.g. some needed to be strings and some needed to be numbers, which was screwing up my sorting in places#2017-03-2100:22thedavidmeisternamespacing would have made the causes of the bugs a lot more obvious, or avoided them#2017-03-2117:11timgilbertSay, if I have a set #{:a :b :c}, is there an easy way to find entities that have a :cardinality/many attribute :e/attr that contain exactly those values?#2017-03-2117:11timgilbert…or do I need to query for the attributes and construct my own set outside of the query?#2017-03-2117:15devthcould you simply use an (= #{:a :b :c} ?attr) expression in a where clause? (i haven't tried this)#2017-03-2117:17timgilbertHmm, I’ll try that.
But I suspect that in the where clause I’m only looking at a single attribute value at a time, eg it would be checking for :a, then :b, then :c#2017-03-2120:22devthdid it work? (curious)#2017-03-2220:38timgilbertSorry, didn't see this. I wound up approaching the problem in a different way that worked, so I didn't get around to trying it.#2017-03-2220:38devthcool, np#2017-03-2118:33djjolicoeurdoes anyone have experience running a high throughput transactor? specifically regarding memory settings and increasing from the recommended prod settings?#2017-03-2201:04djjolicoeurthis is probably a silly question but if I have a fn that transacts, then derefs the future of the transaction, and then calls datomic.api/db on the db connection, is that call to datomic.api/db guaranteed to reflect a db value consistent with the tx, or would I need to use the db-after from the transaction to ensure the tx is reflected in the db value? for instance (defn test-tx [tx-data] @(d/transact (:conn db) tx-data) (d/db (:conn db)))#2017-03-2201:07djjolicoeurI generally use db-after, but wondering if that is strictly necessary#2017-03-2201:44favilaDb-after is the db immediately after the tx. D/db is some db after the tx (could possibly be after other later txs). The fact that the deref completed means that tx is now visible to the peer and any d/db on that peer will now include it @djjolicoeur #2017-03-2201:53marshall@djjolicoeur assuming the txn is successful. If it fails or times out your call to d/db might still succeed (depending on how you handle, or don't handle, the failure) #2017-03-2201:54marshallTimed out or failed txn will throw when you dereference the future http://docs.datomic.com/clojure/#datomic.api/transact#2017-03-2201:56djjolicoeurthanks @favila @marshall wasn’t sure, assuming a successful tx, whether the peer was guaranteed to reflect that tx when a value was requested.
I generally use db-after to isolate from other tx’s in the system anyways, but we were having a discussion in the office today and I wasn’t sure on whether it was strictly necessary.#2017-03-2210:14dominicmhttps://github.com/Datomic/day-of-datomic/blob/master/tutorial/schema_queries.clj#L18-L20 :db/id doesn't show up in this query, is it considered "special"?#2017-03-2210:22rauh@dominicm You mean in the result of this query? Yes, there is no such attribute, it's just the e in the eavt indices.#2017-03-2210:25rauh@dominicm Also count an entity, and you won't see :db/id counted, or seq it. It's a special field in the entity.#2017-03-2211:38rc1140hi all , is it possible to rename a field when using the pull api , i am fetching deeply nested data and would like to alias it , the nested value is returned something like {:person {:company {:location "somewhere"}}} , when accessing the pull'd data i would prefer to access person-location instead of (:location (:company (:person data))) , hope that makes sense#2017-03-2215:24favila@rc1140 pull can only do two transformations: limit cardinality-many results, and default cardinality-one values#2017-03-2215:24rc1140thanks , saw that in the docs , just wanted to make sure didnt miss something obvious#2017-03-2215:25favila@rc1140 you can assert :person-location as another ident for the same attribute#2017-03-2215:25favilathat may work#2017-03-2215:25favilabut it's a really bad idea#2017-03-2215:25rc1140not sure i follow what you mean ?#2017-03-2215:26favilawhen you assign an ident, the old ident still works#2017-03-2215:26favila(until the old ident is assigned to something else)#2017-03-2215:26rc1140right , i get why that would be a bad idea as well , im happy doing a long form extraction out of the map for now#2017-03-2215:26favilaah, nevermind, I misunderstood the transform you want#2017-03-2215:27favilayou actually want to collapse levels#2017-03-2215:27rc1140correct#2017-03-2215:27rc1140using threading makes it a
little less painful atm#2017-03-2215:28favilaIf you are unfamiliar with the clojure.walk namespace, it is very handy for post-processing pull expression results#2017-03-2215:29rc1140not familiar with it , but will go read up on it , any links you have on hand that you think would be a good intro#2017-03-2215:29favilajust the docs: https://clojure.github.io/clojure/clojure.walk-api.html#2017-03-2215:29favilaAn example: https://gist.github.com/favila/6366516f2bef6b77b07f7349d4ff009e#2017-03-2215:30rc1140ta#2017-03-2215:30rc1140wow#2017-03-2215:30rc1140have never seen that style of using the pull api before#2017-03-2215:31favila?#2017-03-2215:56rc1140using d/pull and then passing it to the walk stuff , i have only been using the pull api by calling d/q and then passing in a query vector#2017-03-2215:57favilaoh#2017-03-2219:35uwovague question here. using the tx-pipeline from best practices (http://docs.datomic.com/best-practices.html#pipeline-transactions) i’m getting clojure.lang.PersistentArrayMap cannot be cast to java.util.List. I can see that I’m passing a map to transactAsync within the pipeline, but I would expect that to work. Does this sound familiar to anyone?#2017-03-2219:38uwonvm. I see what I did wrong !#2017-03-2219:39favilatransactions can never be maps#2017-03-2219:39favilaan item in a transaction can be a map#2017-03-2219:41uwoyeah, I’m wishing I would delete that embarrassing question … but it’s good to humble oneself periodically#2017-03-2220:24souenzzoHi, is there some way to save datomic's :db/txInstant in another attribute?
(Example: I want to save {:db/id (d/tempid :db.part/user) :msg/txt "Hello" :msg/inst})#2017-03-2220:25souenzzoYes, I know that I can make a query where [?msg-id :msg/txt _ ?tx][?tx :db/txInstant ?v], but is it the only way?#2017-03-2220:26favila@souenzzo You can't know the time a transaction completes until it completes?#2017-03-2220:27timgilbertYou can get the current datomic t value (before your transaction) by calling (d/basis-t db)#2017-03-2220:27timgilbert...which would give you "transaction before this one" as a potential (d/as-of) target for historical data#2017-03-2220:30timgilbert(Assuming that no other new data is transacted from a different process in between the time your call to (d/db conn) completes and when your {:msg/txt "Hello"} transaction completes)#2017-03-2220:35souenzzo(clj-time.core/now) will do for now#2017-03-2220:35souenzzo🙂#2017-03-2220:35timgilbertGood idea 😉#2017-03-2313:23dm3hello, is the datom seq obtained from (datoms db :aevt :attr) guaranteed to be ordered by datom transaction time?#2017-03-2313:53uwoduring imports (using async tx-pipeline), at a certain point the transactor became unavailable and never recovered. Is there a likely culprit?#2017-03-2313:53dm3uwo: did the transactor die?#2017-03-2313:55uwothat’s a good question. Unfortunately, I was pairing with someone and didn’t think to look at the transactor log at the time on his computer#2017-03-2313:56uwothe process itself didn’t fall over#2017-03-2313:56dm3I’d guess OOM on the transactor then if the peer didn’t recover but the import process was still working#2017-03-2313:59uwoshould that be expected when attempting to run large imports on a dev computer (you know, potentially limited resources)?
or are we not configured correctly?#2017-03-2314:00dm3it should be expected if your working set is larger than the configured available memory 🙂#2017-03-2314:01dm3but we are guessing here as you don’t even know if an OOM happened#2017-03-2314:04marshall@dm3 the order of datoms is determined by the particular index you’re using. the only one that is fully ordered by t is the log#2017-03-2314:04marshallhttp://docs.datomic.com/indexes.html#2017-03-2314:07dm3marshall: thanks#2017-03-2314:11dm3what’s the best way to find the time of the latest assertion for a given :attr?#2017-03-2314:13dm3I guess (max-by :tx (datoms :aevt :attr)) isn’t the worst option#2017-03-2314:19dm3actually I’ll just annotate the transaction with a marker#2017-03-2314:31marshallNew Datomic Training Videos and Getting Started Documentation:
http://blog.datomic.com/2017/03/new-datomic-training-videos-and-getting.html#2017-03-2314:55uwo@dm3 thanks for the help!#2017-03-2316:36dominicmAm I right in thinking that there is only 1 tx-report-queue per client? And if I want to subscribe in multiple places I'll need a pubsub implementation?#2017-03-2316:48favilaThere is only one tx report queue per peer per connection. You don't need pubsub necessarily, you could just have one listener that multiplexes#2017-03-2316:48dominicmSure. Multiplex would probably have been the correct term, yeah.#2017-03-2316:48spieden@dominicm just curious: what’s your use case?#2017-03-2316:50dominicm@spieden I'm ETL'ing datomic -> sql. But in the near future we need to start reacting in a streaming manner to transactions too (I've admittedly forgotten what for)#2017-03-2316:50spiedenah ok, cool so a real time ETL type thing#2017-03-2316:50dominicmYeah.#2017-03-2316:50dominicmNeed it for Tableau / Business Intelligence software. That wants ODBC/SQL/etc.#2017-03-2316:51spiedenmakes sense#2017-03-2316:52spiedenany ideas for how you’ll do checkpointing?#2017-03-2316:57juliobarrosHi, quick question. I'm looking to deploy a transactor in an enterprise setting and ops is asking what ports are required and why. Are all three, 4334, 4335 and 4336, required for peers, and is there a short description of what they are used for? Thanks. #2017-03-2317:00dominicm@spieden writing to a side table for every tx, within a sql transaction. Should let me reliably catch up & resume.#2017-03-2317:01spieden@dominicm nice, checkpoint goes with each chunk of data#2017-03-2317:01dominicm@spieden Yup. If a write ever fails, the checkpoint never goes in. So there's no problem.
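The single-listener multiplexing favila suggests above could be sketched like this (hypothetical names; assumes a peer connection and that handlers are kept in an atom):

```clojure
(require '[datomic.api :as d])

;; Sketch: one thread consumes the per-connection tx-report-queue and
;; fans each report out to all registered handler fns. d/tx-report-queue
;; returns a java.util.concurrent BlockingQueue, so .take blocks until
;; the next transaction report arrives.
(defn start-multiplexer!
  [conn handlers] ; handlers: atom of fns taking a tx report
  (doto (Thread.
         (fn []
           (let [queue (d/tx-report-queue conn)]
             (loop []
               (let [report (.take queue)]
                 (doseq [h @handlers] (h report))
                 (recur))))))
    (.setDaemon true)
    (.start)))
```

Each report map carries `:db-before`, `:db-after`, and `:tx-data`, so a handler can also use it purely as a "wake up" signal as dominicm describes.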
Datomic's log is ordered, so no problem there either.#2017-03-2317:02dominicmI'm probably not going to use the data from the transaction queue, just use it as a "wake up"#2017-03-2317:02dominicmBut I'm undecided on that thus far.#2017-03-2317:15favila@juliobarros port 4334 is the transactor port (peer to transactor communication). On non-dev storage that is all you need#2017-03-2317:17favila@juliobarros 4335 is for dev storage only. It's an sql endpoint to the embedded h2 database in the transactor that runs when you use dev storage#2017-03-2317:18favila@juliobarros 4336 is for dev storage only. It is a browser-based sql management interface to the embedded h2 db#2017-03-2317:27juliobarrosThanks @favila do you know why host is specified in the transactor properties file? What it is used for? We'll be using docker in an env where host names may be fluid. #2017-03-2317:28favilaThe host and alt-host are written into storage as names the peer should resolve to contact the transactor#2017-03-2317:29favilathe peer connection process is 1) connect to storage 2) read magic heartbeat record written into storage by the active transactor for that storage (includes host and alt-host) 3) connect to host or alt-host (it tries both)#2017-03-2317:35juliobarros@favila hmmm ok. I get the heartbeat part. But how does this all relate to the host in the connection URL? Thanks again btw. #2017-03-2317:36favilaThe host in the connection url is the STORAGE host. The host in the transactor.properties is the TRANSACTOR host#2017-03-2317:36favilatransactors are more fluid than storage#2017-03-2317:36juliobarrosAh. Ok. Thanks. #2017-03-2318:40erichmondsorry. dumb question, but is there a similar idea to materialized views in datomic?#2017-03-2320:29stuartsierra@erichmond no#2017-03-2321:47juliobarros@favila or anyone who knows. I have to proxy the peer transactor connections through a load balancer. 
But setting the host to be the loadbalancer (so that the peer can find it by reading the magic record in the db) causes activemq not connected errors. #2017-03-2321:53favila@juliobarros I don't have any advice except verify the load balancer is working correctly#2017-03-2321:53favilaright port, right layer (not an http balancer, for eg), forwarding to right host, etc#2017-03-2321:55favilayou can't actually balance these connections, you are aware of that right?#2017-03-2321:55favilaYou are just using it to forward?#2017-03-2321:58juliobarrosYeah just to forward. It's working fine. It looks like a host alt-host ('internal' vs 'external' up) issue. Going to try and swap their settings. #2017-03-2321:58favila@juliobarros that should not make a difference, both are tried#2017-03-2321:58favilapeer will use whichever one succeeds#2017-03-2322:00juliobarrosBy the peer. But the current issue is with activemq. #2017-03-2322:02favilaactivemq (artemis/hornetq) is how the peer communicates with the transactor#2017-03-2322:02favilathat is the connection#2017-03-2322:04favilatransactor.properties host=foo.org is so peer can connect to transactor via foo.org#2017-03-2322:04favilausing an artemis connection#2017-03-2322:05favilaalt-host is just so you can give a different ip/dns by which you can connect to the same transactor#2017-03-2322:05favilapeer will try both and use whichever one works#2017-03-2322:06juliobarrosAh. Ok. So maybe a socket issue at the LB. (like you suggested) let me investigate. Thanks. #2017-03-2322:19erichmond@stuartsierra thx!#2017-03-2402:49wistbhi, just started learning datomic. Is the following picture a correct representation of usage?#2017-03-2403:01wistbAnd this#2017-03-2403:11wistbAnd this#2017-03-2412:55stuartsierra@wistb That is broadly correct, yes. In addition, both the Transactor and the Peer will communicate with Storage.
There isn't really any "transactor" for the in-memory database, it's just the Peer library providing a local version of the d/transact API.#2017-03-2414:58timgilbertIs there an easy way to see what version of Datomic a transactor I'm connecting to is running?#2017-03-2414:59timgilbert(Like, get it out of the connection object, ideally)#2017-03-2415:14jeremyare tx ids guaranteed to be sequential, or is that an implementation detail?#2017-03-2415:39favilaIt is really hard to imagine a datomic with non-sequential tx#2017-03-2415:39favilaTx is just the t with the tx-partition bits added#2017-03-2416:03jeremyyeah, i understand. though, my question still stands 😉#2017-03-2419:39stuartsierra@jeremy Yes, transaction entity IDs increase monotonically over time. The same is true of any entity ID within its partition.#2017-03-2419:39jeremyok, thank you!#2017-03-2419:57uwohow would one go about overriding the print-method for db/ids?#2017-03-2420:07stuartsierra@uwo Call class on a dbid, then defmethod print-method#2017-03-2420:07favila@uwo you mean for tempids?#2017-03-2420:08uwoyes. I think maybe what I’m looking to do is more along the line of (. clojure.pprint/simple-dispatch addMethod datomic.db.DbId pprint-myrecord)… I may have been mistaken about print-method..?#2017-03-2420:08uwo@favila yes#2017-03-2420:08uwobasically aiming to pretty print some data that has tempids, but I want to keep the tempids in the format that prn would produce#2017-03-2420:16uwothis was my solution
(defn pprint-myrecord [b] (.write *out* (str b)))
(def my-print (. clojure.pprint/simple-dispatch addMethod datomic.db.DbId pprint-myrecord))
(clojure.pprint/with-pprint-dispatch my-print
(clojure.pprint/pprint
etc..
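For comparison, a sketch of the defmethod print-method route stuartsierra mentioned (assumes the peer library is on the classpath; note that clojure.pprint uses its own dispatch table, which is why the addMethod call above is still needed for pprint output):

```clojure
;; Sketch: extend print-method for Datomic tempids so prn/pr-str render
;; them via their toString, mirroring the pprint dispatch above.
;; datomic.db.DbId is the class returned by (class (d/tempid :db.part/user)).
(defmethod print-method datomic.db.DbId
  [dbid ^java.io.Writer w]
  (.write w (str dbid)))
```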
#2017-03-2420:38val_waeselynck@jeremy note that the ts don't necessarily increase by one from one transaction to the next.#2017-03-2420:43jeremyval_waeselynck: good to know ,thanks#2017-03-2501:14uwoI’ve noticed some of my colleagues writing {:unique/id “some-val”} where I would expect to see a lookup ref i.e. vector. Is this fine/not problematic?#2017-03-2508:53val_waeselynck@uwo what you've seen was probably a map form inside a transaction (http://docs.datomic.com/transactions.html)#2017-03-2508:53val_waeselynckthe advantage of using those compared to lookup refs is that the target entity does not need to exist yet.#2017-03-2715:51juliobarrosHi, I’m looking for a way to manage ‘migrations’ with Datomic. I found https://github.com/rkneufeld/conformity by Ryan Kneufeld. Is this still a good way to go or do people have different/better strategies? Thanks in advance.#2017-03-2716:10marshall@juliobarros You might want to read http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html#2017-03-2716:11juliobarros@marshall yeah but I still need a way to manage growth. I didn’t necessarily mean changing or breaking the schema.#2017-03-2716:20marshallgotcha#2017-03-2716:49pesterhazy@juliobarros I prefer a simple .clj file with migrations#2017-03-2716:49pesterhazyI do that for SQL as well as for Datomic#2017-03-2716:50pesterhazymigration frameworks are frequently overengineered#2017-03-2716:50pesterhazybesides the Datomic schema is often idempotent#2017-03-2716:51pesterhazyI suspect re-asserting the schema on each peer start would not be harmful#2017-03-2716:53juliobarrosThanks @pesterhazy … I haven’t kept up with Datomic lately but I thought I remembered the guideline to not reassert datoms (including schema). Anyway, conformity looks lightweight enough so I’ll go with that. Thanks again.#2017-03-2720:25brettHi, I’m trying to load some data into datomic. The entities I’m trying to load have a reference to entities already in the db. 
Do I have to look up those entity references beforehand, or is there a way to do it all in one go?#2017-03-2720:28brettExisting:
{:db/id 1234 :location "New York"}
New:
{:name "John" :location <entity ref>}#2017-03-2720:30brettCan you do this
{:name "John" :location [:find ?e :where [?e :location "New York"]]}#2017-03-2720:33marshall@brett http://blog.datomic.com/2014/02/datomic-lookup-refs.html#2017-03-2720:33marshallyou’ll need a unique attribute on the ‘target'#2017-03-2720:33marshalli.e. if your location entity has a :location/name attribute that is db.unique/identity#2017-03-2720:34marshallyou could use [:location/name "New York"]#2017-03-2720:34marshallinstead of having to look up the entity ID#2017-03-2720:34brettok, I was looking at that, but thought it was only for updating same entity. Let me give that a try. thx.#2017-03-2720:34marshall^this would be the preferred approach in fact - generally you should avoid using entity IDs directly#2017-03-2800:26weiis there a version of db.fn/cas that is a nop instead of throwing an exception? my use case is for upserting entities, I’d like to add a uuid if the entity is new, but not change the existing uuid if an entity exists. looking for an elegant way to do this#2017-03-2800:37zaneThat'd be pretty easy to write!#2017-03-2801:04zaneSee "Create a function". http://docs.datomic.com/database-functions.html#2017-03-2809:45augustlwithout an exception, your entire tx would have to return nil in order to be no-op, so afaik you would need to write a db function that wraps your entire transaction#2017-03-2809:45augustli.e. [:myCas e a v [list of other facts]] so that if your fact e a v is a no-op you won't return the list of other facts either#2017-03-2809:46augustlexceptions for control flow in functional programs? 🙂#2017-03-2813:59brett@marshall I figured out my problem. My :tx-data wasn’t in a vector, it was just a map. The error message wasn’t helpful
:cognitect.anomalies/message “Server Error”
It’d be nice if the spec caught this and returned a helpful message telling me a vector was required.#2017-03-2814:06marshallglad you figured it out. I’ll pass along the feedback about the error#2017-03-2816:39timgilbertHey, just noticed a typo on the datomic training videos page, under "part II":
> Stu also covers the topics of Datomic databses, entities, and schema
http://www.datomic.com/training.html#2017-03-2817:15marshall@timgilbert Thanks - Ill fix it!#2017-03-2817:53wei@augustl what if it just adds the same fact again if it exists? so more like upsert than cas#2017-03-2818:18augustlWell, if you have N facts and one call to your own cas fn, if your own fn just returns nil it won't affect the other facts#2017-03-2908:39dm3is there an easy way to do the following - if the param is present, bind it as in [:find ?x :in $ ?param :where [?x :x/param ?param] ...]; if it’s nil - effectively skip the clause: [:find ?x :in $ ?param :where …]?#2017-03-2910:56dm3also - what’s the best way to get the time of the transaction from the tx report queue? I understand the relevant attribute is :db/txInstant, but getting to the tx entity seems too onerous, e.g. (:db/txInstant (d/entity (:db-after tx) (d/t->tx (d/basis-t (:db-after tx)))) - is there a simpler way?#2017-03-2910:59dm3This also works:
(let [a (:id (d/attribute (:db-after tx) :db/txInstant))]
(:v (m/find-first #(= a (:a %)) (:tx-data tx))))
I guess I could cache the attribute id...#2017-03-2915:31favila@dm3 when I have "optional" params in a query I add/remove query clauses and params using cond->#2017-03-2915:31favilaYou can't really use the same query#2017-03-2920:33dm3favila: thx, that’s what I thought#2017-03-3020:55timgilbertSay, if I have a transaction that includes :db.fn/cas and some other data, like [[:db.fn/cas [:some/id 234] :my/attr 1 2] {:db/id 123 :something/else 45}], will the entire transaction be aborted if the compare-and-swap fails?#2017-03-3021:03marshallyep#2017-03-3021:03marshallhttp://docs.datomic.com/database-functions.html#processing-transaction-functions#2017-03-3021:10timgilbertAwesome, thanks @marshall#2017-03-3021:56jdkealywhere can i find more info about using multiple databases?#2017-03-3021:57jdkealyI can't really find much information about it. I want to start preparing for 10-20 billion datoms and don't want any major mistakes in my architecture.#2017-03-3022:02jdkealyLet's say for example i had a photos application. If I say every photo has ~ 10 attributes that get edited on average 10 times each, that's 100 datoms which means i can store 100M photos before the DB starts to choke. Eventually I'd have to start putting photos in a second database. Then when I do a lookup for a photo, I'd need to know which database it's in before I try to retrieve it. It's sounding pretty complicated.#2017-03-3102:00devthis there a reference for datomic connection strings?#2017-03-3102:42favilaThe docstring for datomic.api/connect#2017-03-3116:01eraserhdSo.. I'm seeing a weird case where d/pull isn't resolving a :db/ident. It works when I use d/entity.#2017-03-3116:01eraserhdI'm pretty sure this had to have worked before.#2017-03-3116:03eraserhddev=> (d/pull (db) [:taskexec/status] [:taskexec/uuid taskexec-uuid])
#:taskexec{:status #:db{:id 17592186045419}}
dev=> (:taskexec/status (d/entity (db) [:taskexec/uuid taskexec-uuid]))
:taskexec.status/available#2017-03-3116:07pesterhazy@eraserhd that's a known difference between pull and entity I believe#2017-03-3116:07eraserhdAh... I'm seeing some code to fix this up in post. 🙂#2017-03-3116:08eraserhd@pesterhazy Thanks#2017-03-3116:08pesterhazyI swear it was documented somewhere but I can't find the section in the docs#2017-03-3116:08pesterhazy@reitzensteinm ^^#2017-03-3118:01devthcan datomic peers with a newer version of datomic connect to transactors running an older version of datomic? is there a limit to how many versions out of sync they can be?#2017-03-3118:04devthi'm debugging Cannot connect to server(s). Tried with all available servers. with both peer and transactor running in my staging env. verified everything can hit underlying sql storage. verified the certificates are valid. verified peer can hit the transactor endpoint.#2017-03-3118:05marshalldepends on the versions @devth#2017-03-3118:05marshallcheck the CHANGES.md#2017-03-3118:05marshallin the datomic distro#2017-03-3118:05reitzensteinm@eraserhd yeah, I found that quite frustrating. We were using an ident as essentially an enum#2017-03-3118:06devthpeer on 0.9.5561, transactor on 0.9.5390. thanks @marshall will do.#2017-03-3118:06marshallsome of the somewhat recent updates aren’t backwards compatible#2017-03-3118:07devthwould that manifest during connection or during transactions?#2017-03-3118:07marshallolder peers to newer txors are largely OK#2017-03-3118:07marshallduring connection#2017-03-3118:07devthah, good to know. so i need to be careful about coordinating upgrades#2017-03-3118:07marshallUpgrade: the transactor and the peer now use ActiveMQ Artemis
1.4.0. Peers from this release forward cannot connect to older
transactors. When upgrading to (or past) this release, you must
upgrade transactors first.#2017-03-3118:07marshallThat’s in 5404 ^^#2017-03-3118:08marshallso, yes, your issue is likely version#2017-03-3118:08devthgot it. thanks. we're not in prod yet but preparing to be.#2017-03-3119:14denikIs this possible?
(d/q '[:find ?un (pull ?u [*])
:where
[_ :app/user ?u]
[?u :user/first-name ?f]
[?u :user/last-name ?l]
[(str ?f " " ?l) ?un]]
@conn)
;; result
(["Alyssa Hacker"
{:db/id 1,
:user/first-name "Alyssa",
:user/last-name "Hacker",}])
;; desired result:
{:db/id 1
:user/first-name "Alyssa"
:user/last-name "Hacker"
:user/full-name "Alyssa Hacker"}#2017-03-3119:21pesterhazyYou're missing the middle initial, P#2017-03-3120:07denik@pesterhazy it’s there just not querying for it 😉#2017-03-3123:00weiis there any way to set the transactor timeout at run-time?#2017-03-3123:05marshallWhich timeout? And do you mean at launch or to change it while running?#2017-03-3123:56weithe transaction timeout. and actually, to restate my question, is there an easy way to set the timeout if I’m using the default cloudformation template?#2017-03-3123:56weiI assumed while running would be easier, if it’s supported#2017-04-0100:12marshallIt needs to be specified at txor startup
I can't remember off the top of my head if the provided CF template has a Java options parameter #2017-04-0100:14marshallYou might look at the json generated by the ensure scripts before you run the create stack script and see if there is a place to add java opts. If you'll shoot me a quick email to remind me, I'll try and take a look myself this weekend#2017-04-0100:14marshall@wei ^^#2017-04-0116:13weithanks @marshall, I’ll take a look as well#2017-04-0304:23csmany ideas what this means? {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :datomic.client/http-error {:error :cluster.error/with-db-not-found, :dbs [{:database-id datomic:, :t 6614071, :next-t 6614072, :history false}]}}#2017-04-0312:04joseph@devth you can check this release notice: http://docs.datomic.com/release-notices.html#2017-04-0312:06josephhas anyone met this error message:
Critical failure, cannot continue: License not valid for this release of Datomic
When I use my license in the latest datomic, 0.9.5561#2017-04-0312:54jaret@joseph That error message indicates that your license has expired. We have perpetual licensing so you can continue to use your license on versions that came out while it was valid, but you cannot upgrade to any version past your license expiration date. If you want to DM @marshall or myself, we can confirm that your license is expired.#2017-04-0314:07joseph@jaret thanks#2017-04-0318:01csmis it possible to query with the result of calling with? I’m using the client API#2017-04-0319:05marshall@csm you should be able to use the value returned in :db-after for subsequent queries#2017-04-0319:16csmto also get things “transacted” with the with call? or no#2017-04-0319:16marshallyep#2017-04-0319:17marshallpass the db-after to your next query#2017-04-0319:17marshalland it will query against the db value that contains the things transacted by the with call#2017-04-0319:18csmhmm, I don’t seem to get what I’m writing back; let me make sure I’m doing it right#2017-04-0321:17favila@csm make sure you are using 0.9.5544 or later#2017-04-0321:17favilathis was broken in prior versions#2017-04-0321:17favilawell, the prior version#2017-04-0321:18csmhah, yeah, I’m running 0.9.5530 locally#2017-04-0321:19csmthanks!#2017-04-0323:22csmanother question: does the clojure client handle multiple/changing DNS resolutions? i.e., if an endpoint resolves to IP address a.a.a.a when we connect, and then later that endpoint resolves to IP addresses a.a.a.a and b.b.b.b, does the client pick up both addresses?#2017-04-0410:20karol.adamiechow does one usually store config values in datomic? lets say i need to define a margin value for a broad category of products. 
What is the most convenient way to do that with regards to easily manipulating that value, resetting it, and having only one instance of it per db?#2017-04-0410:46robert-stuttafordwe have a ‘system’ entity on which we put global values#2017-04-0410:49karol.adamiechmm#2017-04-0410:50karol.adamiecso you define a system entity that has a known unique name#2017-04-0410:50robert-stuttafordactually just a particular attr#2017-04-0410:50karol.adamiecso one can easily grab that anytime?#2017-04-0410:50karol.adamieccould you paste an example declaration and usage pattern? 🙂#2017-04-0410:51karol.adamieci think it would be easiest to understand the pattern#2017-04-0410:52robert-stuttafordliterally just :db/ident :system with a doc string and a bool type, and a function which finds the first entity with :system true. all other code paths use that function. we manually transacted the entity’s creation#2017-04-0410:52robert-stuttafordafter that, it’s a normal entity with schema for whatever#2017-04-0410:56karol.adamiecah i see. one could also slap a unique on that to always have enforced only one :system true …. and could also retrieve it by lookup refs… correct?#2017-04-0410:56robert-stuttafordsi 🙂#2017-04-0410:56karol.adamiecgracias 😄#2017-04-0410:58robert-stuttafordde nada!#2017-04-0411:02karol.adamiecand upserting works as well…. sweet 😄#2017-04-0420:09jdkealyI'm running into a strange error i can't really reason about. When I try to do a specific transaction, I get a stackoverflow error:
https://gist.github.com/jdkealy/6a31372d05327c29df80c19a6180eac1
I have an attribute called user/org_touches ... which is cardinality = many.
I have 2 user ids.... let's say 1 and 2
If i transact {:db/id 1 :user/org_touches [2] } it works fine, but if i do {:db/id 2 :user/org_touches [1] } it errors out with the above error#2017-04-0420:15jdkealyahh sorry nm.... was actually not the root of the error#2017-04-0511:15rnandan273Is it an option to use amazon s3 as a storage option?#2017-04-0511:15rnandan273as a file storage?#2017-04-0512:51luke@rnandan273 It’s not built into Datomic as one of Datomic’s supported storages. However, it’s pretty common to write an application layer which stuffs larger blobs into S3 and stores content-addressed URIs in Datomic.#2017-04-0512:55rnandan273@luke thanks for your response. I had a customer who is biased towards simpleDB from amazon, so was wondering if i can push datomic with s3 as file storage#2017-04-0512:55lukeYou’ll still need one of Datomic's official storages as well. But Amazon DynamoDB sounds like a solid choice for them, then?#2017-04-0512:56rnandan273for them simpleDB is cost effective compared to DynamoDB#2017-04-0512:57lukewell all those equations will change when using Datomic, since Datomic stores index segments that are opaque to the underlying DB. It might be more or less expensive than “natively” using whatever storage you chose.#2017-04-0512:58rnandan273the fact is many people don't know about datomic and sometimes i have to do an alternate solution to show the datomic version working better#2017-04-0513:16misha@robert-stuttaford greetings! do you by any chance know how to solve this? https://clojurians.slack.com/archives/C07V8N22C/p1491395330746925#2017-04-0513:17robert-stuttafordi believe they are tested in the order they are declared (in the vector)#2017-04-0513:17robert-stuttafordi can’t speak to possible disparity between Datomic and datascript#2017-04-0513:19mishaif there is a hit on dynamic rule - it returns both result and default (there is a snippet in #datascript next to message above)#2017-04-0513:20robert-stuttafordah, i see. i’m not sure.
perhaps one of the Datomic officials can tell you, @marshall or @jaret#2017-04-0513:21mishathinking of it: if there are 2 results on the rule - both are returned as well, so reducing the query result outside a query is somewhat inevitable and is ok...#2017-04-0513:23misha#2017-04-0518:36camilleAny guidance on how I can search for email addresses where I only have the domain name?#2017-04-0520:43robert-stuttaford@camille: (d/q '[:find ?e :in $ ?pattern :where [?e :attr ?v] [(re-find ?pattern ?v)]] db (re-pattern "@gmail.com"))#2017-04-0520:43camillethanks @robert-stuttaford !#2017-04-0520:44robert-stuttafordyou can use re-pattern to build a regex dynamically from an input string, which is why i include it here despite it being unnecessary in this case#2017-04-0520:44robert-stuttafordbecause you could just use a regex literal #"@gmail.com"#2017-04-0520:44robert-stuttafordthis leverages datalog’s ability to call out to arbitrary functions#2017-04-0520:45camilleoh great. i think i was looking for that literal#2017-04-0520:53lvhIs there an equivalent of a gensym inside a rule? Like I have a rule that expands to a few statements and I care that it uses the same value for all matching datoms, but I want that to be invisible externally#2017-04-0520:53lvhmaybe that’s the default 🙂#2017-04-0617:47ddellacostasooo I’m sure this has come up before but my google-fu is failing me.
I’d like to test inserting values at different times and ensure that I get different values out of datomic for those date ranges--is there any way for me to insert values in the past (or some other approach that I could take advantage of here)?#2017-04-0617:49favilaYou can commit transactions in the past (set :db/txInstant on the transaction entity), but you can't commit transactions before other transactions#2017-04-0618:27marshall@ddellacosta see: http://docs.datomic.com/best-practices.html#set-txinstant-on-imports#2017-04-0618:28ddellacostajeez, I didn’t think to look in the official docs 😕#2017-04-0618:28ddellacostathanks @favila and @marshall#2017-04-0621:16csmIf I’m understanding correctly, the datomic.client api caches by database, and not by peer host?#2017-04-0700:24devthso when i try to connect using a sql connection string, the peer:
- hits storage
- determines the configured host and alt-host
- peer tries to hit host or alt-host to talk to the transactor
is that right? wondering how i'm going to connect to my in-cluster datomic from my dev machine when its dns isn't accessible from outside.#2017-04-0701:11favila@devth yes that is right. There is always /etc/hosts, vpn, ssh-tunnel, etc#2017-04-0712:04devththanks. yep i'm sure i can make it work.#2017-04-0713:30eraserhdIs there a way to extract all current (non-retracted) datoms without getting all of them, comparing timestamps and added? flags, and building a model of the database?#2017-04-0713:32eraserhd(essentially a current-state dump to EDN)#2017-04-0713:32augustlhave you looked into the index access APIs?#2017-04-0713:33eraserhdAlso, is there a difference between :eavt and :aevt?#2017-04-0713:33eraserhdDo you mean like d/datoms or is there something else?#2017-04-0713:33augustlyeah, that one#2017-04-0713:35eraserhdOhh.. perhaps you just don’t see those on a non-`history` database?#2017-04-0713:36augustlyeah exactly#2017-04-0713:36augustlif you use a normal db, eavt is exactly what you're looking for#2017-04-0713:37augustlref "building a model for the database" - not sure what you're planning there, but I would just dump plain facts, no structured entities etc#2017-04-0713:40eraserhdYeah, that works.#2017-04-0713:42augustlthe difference between eavt and aevt is just the access pattern#2017-04-0713:43eraserhdIt looks like the first few are system facts that I don’t need to copy out. Is there an easy way to detect that?#2017-04-0713:43augustlyou can look at the partition they're in#2017-04-0713:44eraserhdUser attributes are added to the db partition, too, right? So that doesn’t really tell me.#2017-04-0713:44eraserhdOh, wait. Maybe not.#2017-04-0713:44augustlhmm, good point, not sure#2017-04-0713:46eraserhdI could create an empty database and remove common facts. 
Assuming eids of the system things never change, I should then be able to transact that to a different database (in a test environment).#2017-04-0713:52eraserhdOh, the db.install stuff makes that harder.#2017-04-0713:53augustlyou could look up the transaction that wrote those entities and see if it has something interesting on it#2017-04-0713:53augustlperhaps they're all from the same transaction, and that transaction has something you can use to identify that it is the "bootstrap" tx#2017-04-0713:53augustland then ignore all facts from that tx#2017-04-0713:54eraserhdIt seems there’s more than one.#2017-04-0713:54augustlI would guess that they have something in common. But maybe not#2017-04-0713:55eraserhdOh, perhaps it’s two. The first seems to make the :db.part/db partition.#2017-04-0714:08eraserhdWell, it’s greater than two, which is effectively unbounded 😄#2017-04-0714:39favila@eraserhd https://gist.github.com/favila/785070fc35afb71d46c9#2017-04-0714:40favilaThis is a bit old now, but some of the gotchas about the "dump a restorable db to edn" are still there#2017-04-0714:40favilai.e. are explained and accounted for#2017-04-0714:41favilae.g. the one you mentioned, that there are bootstrap datoms you should not keep/restore#2017-04-0714:41favilaalso this is tuned for memory dbs back when they did not have logs. Nowadays if you want to keep full history it's better to use the tx log directly#2017-04-0714:43eraserhdnice.#2017-04-0714:44eraserhdso far… the bootstrap txs I have are > 1000.#2017-04-0714:44favilathat is the rule#2017-04-0714:45favilaif id < 1000, it is a bootstrap datom#2017-04-0714:46favilamore precisely, if the t of an id is < 1000#2017-04-0714:46eraserhdI mean, it's not true for me: (first (d/datoms (db) :eavt))
#datom[0 10 :db.part/db 13194139533312 true]
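The "t < 1000" rule favila mentions can be applied directly when dumping. A minimal sketch (assuming the Datomic peer API required as `d` and a database value `db`; names are illustrative, following the linked gist):

```clojure
;; Sketch: dump all current datoms while skipping bootstrap datoms.
;; Datoms whose transaction t is below 1000 belong to bootstrap
;; transactions and should not be re-transacted elsewhere.
(def min-user-tx (d/t->tx 1000))

(defn user-datoms [db]
  (remove (fn [datom] (< (:tx datom) min-user-tx))
          (seq (d/datoms db :eavt))))
```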
#2017-04-0714:47favilatx of that datom is < 1000#2017-04-0714:48favila(d/tx->t 13194139533312) => 0#2017-04-0714:48eraserhdOh…#2017-04-0714:48favilain that gist I compute the max tx#2017-04-0714:48favila(def ^long ^:const min-user-tx (long (d/t->tx 1000)))#2017-04-0714:49favilaThen filter out all datoms with a tx lower than that#2017-04-0714:49favila(defn bootstrap-datom? [{^long tx :tx}]
;; Datoms in a transaction with t < 1000 are from a bootstrap transaction and
;; cannot be reissued.
(< tx min-user-tx))#2017-04-0714:50eraserhdgot it#2017-04-0714:51eraserhdnot sure what I’m going to do, but this is useful. Thanks!#2017-04-0714:51favilanp#2017-04-0905:04isaac(d/connect uri) got this error
ActiveMQSecurityException AMQ119031: Unable to validate user org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking
#2017-04-1010:39cjmurphy@kwladyka (you asked a jetty clj-client deps question late last year, seems similar to this) or others - possible to solve this SO question? https://stackoverflow.com/questions/43291069/lein-ring-server-headless-fails-when-including-datomic-dependency#comment73705091_43291069#2017-04-1010:41kwladyka@cjmurphy i think i asked… year ago 😉 I found solution.#2017-04-1010:42kwladykabut not really remember… it was something about broken dependency i think..#2017-04-1010:46kwladykaanyway i am using http-kit now instead of jetty#2017-04-1010:46cjmurphySeems similar - the asking person might come back here. I'm not really a datomic person and never had to solve a triangular dependency problem myself so going to have to give up. Seems strange there's no easy was to go back to a prior version of clj-client for instance.#2017-04-1010:47kwladyka[com.datomic/datomic-free "0.9.5544"] [ring/ring-core "1.5.0"] [http-kit "2.2.0"] - it works for me#2017-04-1010:48cjmurphyThanks - I'll relay 🙂#2017-04-1010:48kwladykaheh ok 🙂#2017-04-1011:34dominicmI notice that d/datoms doesn't have any guarantees about laziness in the docstring, is that intentional?#2017-04-1015:11isaacshould datomic use datasource for sql schema?#2017-04-1019:28djjolicoeurdoes anyone have experience running a higher than normal throughput transactor? We are currently running on the 4G set up from the docs and appear to be outgrowing it. wondering if there are tradeoffs associated with going above the 4G recommended settings and increasing the object-cache size?#2017-04-1019:39marshall@djjolicoeur yes I’ve run higher heaps fairly consistently for load testing/etc
What in particular indicates to you you’re outgrowing 4g?#2017-04-1019:40djjolicoeur@marshall you mind if I DM you specifics?#2017-04-1019:40marshallsure#2017-04-1019:53pbostromHow do folks handle failover in a HA transactor configuration on AWS? I understand that the standby transactor is going take over if the primary transactor does not send a heartbeat. But how do you signal to kill the instance of the failed transactor? My first instinct is to just have something that checks port 4334 and if it's not listening just kill the instance, but I'm wondering if anyone has any other ideas#2017-04-1019:58marshall@pbostrom The provided Datomic AMI launch script calls shutdown -h now when it detects that the transactor process terminates#2017-04-1019:58marshallchoose your favorite bash-fu method of process monitoring 🙂#2017-04-1019:59marshallthe problem with checking 4334 would be that it wouldn’t work for the standby transactor; i believe the port is only opened once the transactor becomes active#2017-04-1020:01pbostrommakes sense, thanks @marshall. is the launch script available anywhere? I decided to build my own AMI#2017-04-1020:03marshalli’m not sure if it’s publicly accessible or not
it really doesn’t do much. basically downloads the bits, unzips them, and runs bin/transactor#2017-04-1022:53csmone thing I’ve done is register a health callback with datomic, that just dumps the stats it gets to a file#2017-04-1022:54csmI then have a simple web server that responds OK if that file has been written to within a timeout, so it works as an ELB health check#2017-04-1110:51foobarI want to use a copy of my postgres db. I don't understand how datomic decides what storage (and transactor) it should use. At the moment it appears to ignore the db details in the url that I give it and just uses the datomic db name. This means that it uses the original db not the copy#2017-04-1110:54foobarFor example this: (def uri "datomic:<sql://live?jdbc:postgresql://postgres:5430/datomic_dev?user=datomic_dev_admin&password=pass%22|sql://live?jdbc:postgresql://postgres:5430/datomic_dev?user=datomic_dev_admin&password=pass">) seems to happily use a db called datomic_live#2017-04-1111:19foobarOk, I guess I need to overwrite the pod-coord entry#2017-04-1116:18Matt ButlerIs there a way to get/calculate the total segments in a db? (potentially via cloudwatch?) trying to calculate the % progress of a backup.#2017-04-1213:09isaacDatomic transact-async use pipeline throws java.util.ConcurrentModificationException#2017-04-1213:57kurt-yagramHow to do a 'group by'-like query? Having a dataset like:
1 yes
2 no
3 no
4 yes
...
I'd like to count how many times there's a yes and a no, so resulting in:
[["yes" 23]
["no" 19]]
#2017-04-1213:58marshall@kurt-yagram http://docs.datomic.com/query.html#with#2017-04-1214:09kurt-yagram@marshall got it, thx!#2017-04-1214:16marshallnp#2017-04-1215:09eggsyntaxHey folks, I have a colleague who's really stuck on Java setup of Datomic. If it were Clojure, I could help and/or point him here, but I have zero idea where folks go for help with Java Datomic. Can anyone clue me in on that?#2017-04-1215:10eggsyntaxHe's got it installed; he's just having problems with setup in his Java project.#2017-04-1218:09weiI’m thinking of serializing entitymaps as an id and a t-value to store in some external system (e.g. Redis). has anyone taken this approach, or have already written an EDN tag-reader?#2017-04-1218:20alexmiller@eggsyntax the mailing list would probably be good https://groups.google.com/forum/#!forum/datomic#2017-04-1218:20eggsyntaxThanks Alex 🙂#2017-04-1220:29bmaddyI’m trying to specify a time for a transaction as described here: http://docs.datomic.com/best-practices.html#set-txinstant-on-imports
Does anyone see what I’m doing wrong here?
@(d/transact conn [{:db/id (d/tempid :db.part/user) :ti.location/name "somewhere"} {:db/id "datomic.tx" :db/txInstant #inst "2017-04-12T20:26Z"}])
datomic.impl.Exceptions$IllegalArgumentExceptionInfo: :db.error/not-an-entity Unable to resolve entity: datomic.tx in datom ["datomic.tx" :db/txInstant #inst "2017-04-12T20:26:00.000-00:00"]
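On peer versions before 0.9.5530, string tempids such as "datomic.tx" are not yet supported, so the transaction entity needs an explicit tempid in the `:db.part/tx` partition. A sketch of the same transaction in the older style:

```clojure
;; Sketch: pre-0.9.5530 form, using d/tempid for the tx entity
;; instead of the string tempid "datomic.tx".
@(d/transact conn
   [{:db/id (d/tempid :db.part/user) :ti.location/name "somewhere"}
    {:db/id (d/tempid :db.part/tx)
     :db/txInstant #inst "2017-04-12T20:26Z"}])
```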
#2017-04-1220:47favila@bmaddy what version of datomic?#2017-04-1220:51bmaddy@favila My project.clj has this: com.datomic/datomic-pro "0.9.5359"I’m trying to figure out what the db version is…#2017-04-1220:54favila@bmaddy string tempids were not introduced until 0.9.5530#2017-04-1220:55favilayou can use the old method: {:db/id (d/tempid :db.part/tx) :db/txInstant #inst"..."}#2017-04-1220:57bmaddyYep, that did it. Thanks @favila!#2017-04-1314:07dm3If a Transactor serves multiple databases, does it only send the datoms for the dbs the Peer is connected to into the memory index of the Peer? I assume there’s only one memory index on the Transactor, shared by all the databases.#2017-04-1414:01stuartsierra@dm3 some background here https://groups.google.com/d/topic/datomic/t0TqLSV-Mf4/discussion#2017-04-1414:04stuartsierra@dm3 and here https://groups.google.com/d/msg/datomic/TpH2UKRXFd4/jLwU4xc330gJ#2017-04-1416:40dm3@stuartsierra thx, this confirms my assumptions. I have a situation where a bunch of peers are using a subset of databases with an extremely low write activity in read-only mode (+ tx subscriptions) and I’m trying to come up with an upper bound on the memory requirements. I guess I’ll try to estimate the actual memory index usage empirically.#2017-04-1714:12celldeeCan anyone point me to an example that explains how to obtain all of the results from a Datomic client api query that returns a significant number of results?#2017-04-1714:16celldeeI'm running a query and can get the first chunk of results but I'm not sure how to get subsequent chunks until I have the whole result set.#2017-04-1714:38matthavenercelldee: looks like you just pass the :offset param#2017-04-1714:47celldeematthavener: do you have an example that you could show?#2017-04-1714:48matthavenercelldee: sorry, I don’t. I haven’t used the client API but I was curious so I read through the docs.#2017-04-1714:51celldeematthavener: no worries. This is my first attempt to use it for querying. 
The client/q function seems to return a channel that I can take from, but it chunks results. I read the :offset parameter to mean that you want to skip a number of results.#2017-04-1714:52celldeeMy query looks like this -
(def first-query
{:query '[:find ?e ?id
:where
[?e :active-chance/id]
[?e :active-chance/lastName]
[?e :active-chance/id ?id]]
:args [db]
:limit -1
:chunk 10000})#2017-04-1714:54celldeeIf I leave out the :limit argument then I only get 1000 results returned and with the :chunk argument I get 10000 results back. I'm expecting over 90000 items in the result set, which is what I get when I run the query in the Datomic console.#2017-04-1715:00celldeeAlso, if I leave out the :chunk argument then I get 1000 results#2017-04-1715:03celldeeI'm imagining a mechanism where I keep taking from the channel until the result set is exhausted. Just not sure how to do that. I suppose if I was more conversant with core.async this would be obvious, however, I'm a bit of a Clojure novice.#2017-04-1715:14uwoWhile running an importer I’ll intermittently get :db.error/transactor-unavailable. We are pipelining our transactions, as in http://docs.datomic.com/best-practices.html#pipeline-transactions, with some logic to attempt a few retries with a 2 second timeout, but that doesn’t appear to be sufficient. What’s the right way to handle back pressure, or is this another issue?#2017-04-1715:19uwoalso, any advice on what to do if this occurs during an import? Critical failure, cannot continue: Heartbeat failed#2017-04-1715:26uwo(this is with a dev: db)#2017-04-1715:40uwowhen the reference material says “be willing to wait for several minutes for a transaction to complete”, should I just use an exponential backoff retry when I get transactor unavailable?#2017-04-1715:46marshall@celldee you can just repeatedly 'take' from the channel with the <!! Operator#2017-04-1715:46marshallOnce it's empty it will return nil#2017-04-1715:47marshall@uwo heartbeat failure means the transactor is unable to write to storage. It will self destruct as a result. #2017-04-1715:48celldee@marshall Thanks. What would that look like please? 
I'm still trying to learn Clojure.#2017-04-1715:49marshallhttps://stackoverflow.com/questions/33497426/idiomatic-clojure-taking-multiple-items-off-a-channel#2017-04-1715:49marshallThat might help ^#2017-04-1715:49celldee@marshall Thanks very much!#2017-04-1715:50marshallAlso helpful : https://cognitect.com/events/training-sessions/core-async-clojurewest-2015#2017-04-1715:50marshallLuke does a good job of covering the concepts of core async in those videos#2017-04-1718:06alexandergunnarsonQuick question — is it possible to query Datomic backed by DynamoDB without a transactor running? It doesn't look like it.#2017-04-1718:07alexandergunnarson(datomic.api/connect "datomic:ddb://...")
org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException: AMQ119007: Cannot connect to server(s). Tried with all available servers.
type: #object[org.apache.activemq.artemis.api.core.ActiveMQExceptionType$3 0x2512bf7b "NOT_CONNECTED"]
clojure.lang.ExceptionInfo: Error communicating with HOST localhost on PORT 4334
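On the earlier question of collecting all chunks from a client query channel, the "repeatedly take with `<!!`" suggestion can be sketched like this (requires org.clojure/core.async; the chunk shape is an assumption):

```clojure
(require '[clojure.core.async :refer [<!!]])

;; Sketch: drain a chunked result channel. <!! returns nil once the
;; channel is closed, which ends the loop.
(defn take-all [ch]
  (loop [acc []]
    (if-let [chunk (<!! ch)]
      (recur (into acc chunk))
      acc)))
```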
#2017-04-1718:08alexandergunnarsonI know transactions aren't possible without one of course but I figured that queries might be, given that in the case of a transactor failure, queries are still possible#2017-04-1720:46luke@alexandergunnarson no, a transactor is required for queries. The the fact that it continues to work (for some amount of time) after a transactor fails is an implementation detail, not something that should be depended upon.#2017-04-1720:46alexandergunnarsonAh okay, makes sense. Thanks!#2017-04-1720:46lukeDatomic’s architecture could in theory support read-only models but it doesn’t right now#2017-04-1720:48seancorfieldDoes that mean that in the time between the primary transactor failing and the standby transactor coming into play, you can’t run queries, even against the peers? So you (temporarily) lose reads as well as writes?#2017-04-1721:03luke@seancorfield I’ve never seen a read fail in practice. I’d have to check if that’s a guarantee or just the way it always works (eventually, a peer will “notice” that it’s transactor is dead and stop working, but that is longer than the failover window)#2017-04-1721:04lukeGood question for @jaret, he probably knows the answer.#2017-04-1721:05lukeactually @seancorfield I should have read the docs before answering: Datomic does indeed guarantee that reads are available during a failover: http://docs.datomic.com/ha.html#2017-04-1721:08seancorfieldOK, glad to hear that. I would have been (unpleasantly) surprised if that wasn’t the case 🙂#2017-04-1816:15val_waeselynckReleased Datofu, a library of common Datomic utilities https://github.com/vvvvalvalval/datofu#2017-04-1816:26dominicm@val_waeselynck The idempotency of the db functions, is that implemented on top of your migration system? (conformity style basically)#2017-04-1816:29val_waeselynck@dominicm no, it's just that installing the schema for a transaction fn is idempotent#2017-04-1816:29uwoI’m getting a failed heartbeat on a datomic:dev:// database. 
Incidentally, it keeps happening after running a large import, though I’m not suggesting that’s related. Any ideas?#2017-04-1816:40dominicm@val_waeselynck the ordering, do you know how well that performs out of interest?#2017-04-1816:42val_waeselynck@dominicm never tested it in production with more than a few dozen items, in which case it showed no performance issue#2017-04-1816:43val_waeselynckbut that question is a bit difficult to answer - it depends on the data distribution and the queries you run on it#2017-04-1816:44dominicm@val_waeselynck I agree, thought as much 😛 #2017-04-1818:44jaret@uwo heartbeat fails when the transactor can no longer write to storage. I know this is just DEV but have you configured HA ? http://docs.datomic.com/ha.html#2017-04-1818:45jaretits kind of hacky, but you might also be able to up the heartbeat-interval-msec in your transactor properties file to give you some more breathing room before failover#2017-04-1818:46jaretbut if its failing routinely that likely won't matter and you need to figure out what might be causing storage to be unreachable#2017-04-1819:08uwo@jaret thanks, that’s very useful info! I suspect our culprit was a deployed artifact with a mismatched peer version number, though I can’t be sure. But I haven’t run into the issue since ensuring that we were using the right version of the peer library. lame explanation, sorry!#2017-04-1901:37kschradervia @whitchapman today if people have thoughts: https://blog.clubhouse.io/auditing-with-reified-transactions-in-datomic-f1ea30610285#2017-04-1911:14souenzzokschrader: on a pedestal app, have a :tx-data key on request, that hold arbitrary tx info generated by interceptors, and a audit interceptor ;)#2017-04-1901:37kschraderfeedback welcome 🙂#2017-04-1911:31bherrmann@kschrader Humm. I thought with the newer datomic, you dont need to use “d/tempid” - you can just use strings. 
A slight simplification.#2017-04-1911:54foobarI get connection refused when trying to connect to my peer server, any thoughts?#2017-04-1911:59souenzzoNo tempid -> :db.part/user.
He wants to install this data in :db.part/tx
@bherrmann#2017-04-1912:23bherrmannOk, makes sense - thanks#2017-04-1912:48maleghastHello everyone... Does anyone have a quick answer to this:
What data type should I use to store lattitude and longitude in Datomic?#2017-04-1912:54stuartsierra@maleghast Depends on how you want to query it. Two decimal attributes seems like the most obvious. Datomic doesn't have any spatial-indexing capability, so if you're doing something like looking for entities within a range of latitude/longitude, it will be 2 index hits plus a join.#2017-04-1912:54maleghast@stuartsierra Thanks that's very helpful 🙂#2017-04-1912:54stuartsierraIf you need true geospatial indexing, you'll need to maintain a separate index in a tool that supports that.#2017-04-1912:58maleghast*nods* I am building a postgis database to handle geospatial data, but I want to associate the data I am putting in Datomic with as many attributes as possible, so that the data is richer beyond the bare minimum.
I am using Datomic to store daily meteorological observations from a number of measuring stations, and so in defining the stations I am including their location and elevation as attributes.#2017-04-1914:06dominicmWhat's the best way to get a one-off number of transactions in the txor?#2017-04-1914:15robert-stuttaford(count (seq (d/datoms db :aevt :db/txInstant))) ?#2017-04-1914:16dominicm@robert-stuttaford If I'm honest, wanted to make sure that wouldn't blow up my box 😛#2017-04-1914:17dominicmBut actually, I imagine count is pretty good#2017-04-1914:17robert-stuttafordi can do this fine and we have in excess of 40 million#2017-04-1914:18dominicmGood enough for me!#2017-04-1914:20robert-stuttafordthe problem is t values are sparse, so its not a valid representation of the count at all#2017-04-1914:25dominicmYeah, I noticed that. I also tried txid, but that gave me a different value.#2017-04-1916:02eraserhdI have two databases, named “dev” and “prod”. I’ve backed up “prod”, and now I want to restore it over “dev”. I get the error “The name ‘dev’ is already in use by a different database.” This is using Dynamo. Is it sufficient to delete and recreate the Dynamo DB, then restore to it?#2017-04-1916:05dominicm@eraserhd I believe so#2017-04-1916:22lvhLet’s say I have a bunch of rules. They normally use an implicit database. However, I may want to query against a database and a seq of facts. It’s a little annoying to always have to type the $ arg as an argument to the rule. 
Is there a convenient way to do like, (with-db $2 ...)#2017-04-1916:48eraserhdApparently, if I d/delete-database then d/create-database, I still can’t restore the database because it’s already in use by the other database.#2017-04-1916:52eraserhdSo, it seems if I want to dump and restore different databases, they have to have the same ending segment in the database URI.#2017-04-1916:57marshallJust delete and restore#2017-04-1916:57marshallThe restore will create it#2017-04-1916:57marshall@eraserhd ^#2017-04-1917:05eraserhdahhh#2017-04-1917:05eraserhdthis worked#2017-04-1917:36Petrus TheronAny known issues running Datomic in a Docker container on OSX via docker run my-image based on Pointslope image? (https://github.com/pointslope/docker-datomic).
Exception on d/connect:
java.util.concurrent.ExecutionException: org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.ConnectException: Connection refused: localhost:4335" [90067-171]
Datomic seems to be running:
docker run 8f4db5ba0824
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:, storing data in: dev-data ...
System started datomic:, storing data in: dev-data
Exposed ports: 4334, 4335, 4336#2017-04-1917:47marshallYou'll need to set alt-host to the ip of the docker machine @petrus#2017-04-1917:51Petrus TheronManaged to connect by mapping ports explicitly: docker run -p 4334-4336:4334-4336 0ac10e837e84
Now I'm getting Could not find mydb in catalog#2017-04-1917:55marshallSounds like you need to create-database#2017-04-1919:08csmso, I’m trying to initialize a database with a schema, but I’m getting errors about duplicate datoms in the transaction, but the ones it complains about are not duplicates.#2017-04-1919:09csm{:d1 [63 :db/ident :user/guid 13194139534312 true], :d2 [63 :db/ident :user/email 13194139534312 true]}#2017-04-1919:12favila"duplicate" meaning conflict @csm#2017-04-1919:12favilaboth these datoms cannot be true#2017-04-1919:13favilaentity 63 cannot have both ident :user/guid and :user/email#2017-04-1919:13csmI’m not specifying 63 anywhere#2017-04-1919:13favilaare you using the same tempid for two different schema maps?#2017-04-1919:14favilathis is the minimum that reproduces?#2017-04-1919:14csmit is not, no. It is part of a larger schema#2017-04-1919:15csmwhich I can’t share#2017-04-1919:15favilaI mean, if I issue this tx in isolation, will it cause an error?#2017-04-1919:15favilaI guess I'll just try#2017-04-1919:16csmI wasn’t able to reproduce it on my machine, it’s only happening on my colleague’s system. Same version of datomic#2017-04-1919:16favilaworks for me with test db#2017-04-1919:16favilait is suspicious that :user/email lacks a db/id but the other one has#2017-04-1919:16csmthis happened before with two unrelated attributes; I added the :db/id "user/guid" to :user/guid and that worked#2017-04-1919:17csmso, do I need to specify a temporary :db/id for every attribute?#2017-04-1919:17favilain theory you don't but that's a new feature#2017-04-1919:18favilaBut this sometimes happens when the transactor itself creates a tempid, and the client-set and server-set tempids collide#2017-04-1919:18favilaare you sure this is exactly what hits the d/transact call? no other transformations (e.g. 
adding tempids) beforehand?#2017-04-1919:19favilaor using a tx function#2017-04-1919:19csmno tx functions, and I’m reasonably sure nothing adds tempids#2017-04-1919:20csmwe are also going through the peer server#2017-04-1919:20favilaah, so this is sent from client api#2017-04-1919:21favilaI didn't test that, I just issued it directly#2017-04-1919:21favilasomething is fishy#2017-04-1919:21favilahaving explicit :db/id is likely going to resolve it#2017-04-1919:21favila(as a workaround)#2017-04-1919:21favilabut this needs more investigating#2017-04-1919:30csmyeah, I’m just going to add very unique :db/ids to everything#2017-04-1919:34csm..which did indeed work#2017-04-1919:36favilaif you are paying for datomic support you should open a ticket @csm#2017-04-1919:37favila(i.e. if you are licensed)#2017-04-1919:53csmwe have a starter license, and are planning on paying later (we are still only in development now)#2017-04-1920:18jaret@csm You should still open a ticket with us. We'd like to take a look at any repro you could provide.#2017-04-1920:21jaret<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>#2017-04-1922:05foobarCan you install db fns using the datomic client api?#2017-04-2004:07jimmyhi guys, is there any macro to convert current clojure function to datomic function ? And btw, is it the right way to use datomic function as migration function ?#2017-04-2006:56isaacWhat is the best way to pull last nth entities (`order by some/field desc`)?#2017-04-2013:56stuartsierra@isaac With a typical schema & query, you have to query for all entities of that type and sort them in the Peer. If you need to do this kind of operation frequently, it may be more efficient to create a dedicated attribute with "time" in reverse order. Then you can use d/index-range to scan for the latest.#2017-04-2016:19jdkealyAre there any examples i can use for inspiration regarding using multiple databases? 
I have one type of entity which I know will exceed 10BN datoms and I imagine I'll have to be adding databases and transactors maybe once every 6 months to a year. I frequently google the topic but always seem to come up blank for an example or ideas on how to model your application.#2017-04-2020:58stuartsierra@jdkealy This tends to be specific to each use case. I recommend contacting Cognitect for support and recommendations.#2017-04-2021:00jdkealyhmm yeah i don't really have budget to hire the big guns#2017-04-2021:03stuartsierra@jdkealy The Datomic Pro license includes support; http://www.datomic.com/get-datomic.html#2017-04-2021:04jdkealyahhh cool not bad#2017-04-2102:55isaac@stuartsierra Does coginect has a plan to support reverse index? eg. (rseq (d/datoms ....))#2017-04-2114:31matthavenerisaac: (d/datoms :vaet) ?#2017-04-2114:32matthavenerohh.. nevermind, I see, reading it in reverse#2017-04-2114:33marshallI believe that may be a current feature request in our customer feedback portal - I’d suggest adding your vote!#2017-04-2114:56karol.adamieci am trying to find a nice way of determining the last transaction time for a given query. I can get the whole DB value like so:
(:db/txInstant (d/entity db (d/t->tx (d/basis-t db))))
what i really want/need though is a txInstant that relates to an arbitrary query. Let's say my Q is:
(d/q ’[:find [(pull ?eid [*]) ...]
:where [?eid :part/type :solution]]
db)
How can i merge the two in a manner that is composable and efficient?? 🤔#2017-04-2115:01favila@karol.adamiec You know about the ?tx segment?#2017-04-2115:01favila(d/q ’[:find (pull ?eid [*]) ?txinst
:where [?eid :part/type :solution ?tx]
[?tx :db/txInstant ?txinst]]
db)#2017-04-2115:02favilawhew that was hard to edit#2017-04-2115:03favilaso that gives you the txinst of the [?eid :part/type :solution] datom#2017-04-2115:04favilabut you can do more elaborate things, like e.g. find the max tx of all datoms on an entity, or all datoms on an entity and any entity reachable via isComponent attrs (what (pull ?eid '[*]) would get)#2017-04-2115:05karol.adamiechmmm#2017-04-2115:06karol.adamiecyes so i can get a list of entities with their txinstant#2017-04-2115:06karol.adamieclike in first example of yours#2017-04-2115:06karol.adamiecbut… my query might be arbitrarily complex . pull a lot of things…#2017-04-2115:07karol.adamiecmetaphorically i want to pour my arbitrary d/q into a ‘new db’ object, and then ask that ‘new’ db what is its latest transaction….#2017-04-2115:08karol.adamiecpreferably in a general way so i do not need to amend any queries in the system…#2017-04-2115:08karol.adamiec😐#2017-04-2115:10favilapull loses the tx of its datoms, so you either have to keep that info in parallel or write your own pull which takes datom-like input#2017-04-2115:11favilathe latter might not be so bad. You could return [?e ?a ?v ?tx] from query, and then build maps from that directly. (no pull expression, you just mapify what you get)#2017-04-2115:13favilathe first approach looks more like this:(d/q
'[:find (pull ?eid [*]) (max ?tx)
:in $ %
:where
[?eid :part/type :solution]
(component-reachable ?eid _ _ _ ?tx)
]
db
'[[(component-reachable [?se] ?e ?a ?v ?tx)
[(identity ?se) ?e]
[?e ?a ?v ?tx]]
[(component-reachable [?se] ?e ?a ?v ?tx)
[?se ?sa ?sv ?tx]
[?sa :db/isComponent true]
[?sa :db/valueType ?type]
[?type :db/ident :db.type/ref]
(component-reachable ?sv ?e ?a ?v ?tx)]])#2017-04-2115:17karol.adamiecoh, that is a wall of datalog that is way beyond me 🙂#2017-04-2115:22karol.adamiecthx @favila , i have to mull that over and experiment in order to grasp that 🙂#2017-04-2215:07ezmiller77What's the standard way of doing something limit and offset queries in datomic? -- specifically with the datomic api (not client api).#2017-04-2216:10robert-stuttaford@ezmiller77 drop and take on the results of datomic.api/q or datomic.api/datoms. datomic.api/pull has some syntax around limits when traversing relationships.#2017-04-2218:50ezmiller77okay, thanks.#2017-04-2303:26jimmyhi guys, is using #db/fn with conformity the correct way to run data migration in datomic ?#2017-04-2308:20val_waeselynck@nxqd I cannot speak for "the correct way", but yes, it's a pretty effective one. There's nothing magical to it, this blog post may help you understand how it all works: https://vvvvalvalval.github.io/posts/2016-07-24-datomic-web-app-a-practical-guide.html#data_migrations. Shameless plug: equivalent functionality is also provided in Datofu, with some built-in generic db functions to help you write migrations, see here (https://github.com/vvvvalvalval/datofu#managing-data-schema-evolutions).#2017-04-2312:56degI want to move my toy Vase app to a permanent DB, rather than the default in-memory. I've installed the Datomic Starter version, but am not sure what to do next. I've found a variety of docs that each seem to have 80% of what I need, but there are enough redundancies and contradictions that I don't see a clear path forward.
I'm initially interested in doing a dev configuration, but also want to be able to configure for production pretty soon.#2017-04-2314:25marshall@deg this might help http://docs.datomic.com/dev-setup.html
#2017-04-2314:27marshallAfter that, prod storage setup is covered here http://docs.datomic.com/storage.html
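With dev storage set up as those docs describe, connecting a peer looks roughly like the following sketch (host, port, and database name are illustrative):

```clojure
;; Sketch: peer connecting to a dev-storage transactor.
(require '[datomic.api :as d])

(def uri "datomic:dev://localhost:4334/my-db")
(d/create-database uri) ; returns false if the db already exists
(def conn (d/connect uri))
```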
#2017-04-2314:30degThanks! I'll go through that carefully in a bit. I see one question already: that doc describes specifying the access key and secret as parameters to client/connect. But, Vase uses a datomic-Url instead. What is the syntax for passing in the parameters there? I found major hints in http://docs.datomic.com/javadoc/datomic/Peer.html#connect-java.lang.Object-, but none that seemed to match this case exactly. 😞#2017-04-2314:35marshallSecret and access key are used by peer server and client, not by peers#2017-04-2314:36marshallPeer access is secured via access to storage#2017-04-2314:36marshallI.e. sql user and pw or IAM role access control to ddb#2017-04-2315:28deg@marshall Ok, thanks. I'll work through the docs and will moan back if I hit any snags. Probably won't get to this until tomorrow.#2017-04-2408:54maleghastOK, so I have a weird one... (I am new to Datomic and may be just making a fool of myself, but hey-ho).
I created a Datomic, in-memory DB from a vector of 23 maps, having defined the following schema:
(def meteorological-observation-stations-schema
[{:db/ident :station/id
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "The Unique Identifier for the monitoring station"}
{:db/ident :station/name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "Long name for the monitoring station"}
{:db/ident :station/elevation
:db/valueType :db.type/double
:db/cardinality :db.cardinality/one
:db/doc "The Elevation at which the monitoring station is placed in metres."}
{:db/ident :station/latitude
:db/valueType :db.type/double
:db/cardinality :db.cardinality/one
:db/doc "Latitude for the monitoring station"}
{:db/ident :station/longitude
:db/valueType :db.type/double
:db/cardinality :db.cardinality/one
:db/doc "Longitude for the monitoring station"}])
but, as far as I can tell the 23 maps in that ^^ format have created 161 nodes. I wrote a simple datalog query to check that the nodes had been created correctly and it comes back with 161 results and almost none of them are correct 😞#2017-04-2408:55maleghastI've visually checked my input (CIDER + Emacs ctl+x-e) so I know that I am putting the correct data into the d/transact...#2017-04-2408:55maleghast *confused *#2017-04-2408:58dominicm@maleghast when you say "nodes" do you mean "entities"? What query did you use?#2017-04-2408:59maleghastHold on, will paste:
(def stations-query '[:find ?station-name ?elevation
:where [_ :station/name ?station-name]
[_ :station/elevation ?elevation]
[(> ?elevation 200)]])
#2017-04-2408:59maleghastand yes, I mean entities, with attributes#2017-04-2408:59maleghastI did wonder if the schema was creating an entity for each definition, but that would be 115, not 161 - 161 is (* 23 7)#2017-04-2409:01maleghastWhat I am getting back is 7 "answers" per station, and 6 of the values for elevation are wrong.#2017-04-2409:01maleghastLike, they are numbers that I have not put into the DB at all#2017-04-2409:05maleghastI am fairly certain that I am simply missing something about how to write the query, tbh, but after a day's worth of banging my head against it I thought that I ought to just ask... 😉#2017-04-2409:16maleghastHere's the data I am pushing in after I've added / created the schema:
https://pastebin.com/0QCLBj9b
(Side-note, when did refheap.com stop working..?)#2017-04-2410:18kirill.salykin:where [_ :station/name ?station-name]
[_ :station/elevation ?elevation]
you want names for stations with elevations > 200, is this correct?
I think you should join them like this
:where [?id :station/name ?station-name]
[?id :station/elevation ?elevation]
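[Editor's note] For reference, the joined query with the original > 200 filter restored might look like this sketch. Unifying ?id across the two data clauses ensures name and elevation come from the same station, and the predicate call must sit inside its own clause vector:

```clojure
;; Sketch: join name and elevation on the same entity, then filter.
(def stations-query
  '[:find ?station-name ?elevation
    :where [?id :station/name ?station-name]
           [?id :station/elevation ?elevation]
           [(> ?elevation 200)]])
```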
#2017-04-2410:18kirill.salykin@maleghast#2017-04-2410:19maleghast@kirill.salykin - OK, thanks, I will try that.#2017-04-2410:23dominicm@maleghast To expand on what @kirill.salykin said. I think you want to make sure that the elevation & the station entity come from the same entity. If you don't you will get station^2 number of entities (- a few where the elevations aren't high enough)#2017-04-2410:24dominicmSorry, I worded that poorly, hopefully my meaning comes across 😛#2017-04-2410:24maleghast@kirill.salykin - That works, but I need to understand how to write a query that does the above but then also only returns the entities with an elevation greater-than 200#2017-04-2410:25maleghast@dominicm - Yeah, I think that I see what you mean#2017-04-2410:25dominicmThat's the query that @kirill.salykin suggested I think#2017-04-2410:25dominicmYou can include your [(> ?elevation 200)] still#2017-04-2410:26maleghastNope, if I do include it I get an error about not being able to resolve the symbol ?elevation#2017-04-2410:26maleghastHold on I will put it back in and get the actual error...#2017-04-2410:27maleghastNope, I was being an idiot - forgot to put the s-expression for the predicate inside a vector.
*facepalm *#2017-04-2410:40maleghastThanks both 🙂#2017-04-2410:43maleghastDo either of you have a strong recommendation for learning Datalog - I found the Cognitect docs / tutorial to be less than optimal... 😉#2017-04-2410:50kirill.salykinhttp://www.learndatalogtoday.org/#2017-04-2410:50kirill.salykinthis is pretty good#2017-04-2410:51kirill.salykin@maleghast#2017-04-2410:54maleghast@kirill.salykin - Thanks very much; will have a look now(ish)#2017-04-2414:42dominicmhttp://docs.datomic.com/best-practices.html#sec-14 I think the pre-processor got confused here. Not sure who to ping about it? 😄#2017-04-2414:46jaretAh I see#2017-04-2414:46jaretThe link is broken#2017-04-2414:46jaretI can fix it#2017-04-2414:47jaretThanks @dominicm for the report#2017-04-2414:53kschraderCan anyone from Cognitect comment on the following line Note that large caches can cause GC issues, so there is a tradeoff here. (from http://docs.datomic.com/capacity.html)#2017-04-2414:54kschraderwe’re thinking about trying a big Object Cache to keep most of our data locally on our peers#2017-04-2414:54kschraderbut I’m wondering what that’s going to do to as far as adding a lot of GC overhead#2017-04-2414:55kschraderwe can profile this ourselves, I’m just wondering if there’s anything obvious that I should be thinking about here#2017-04-2414:59marshall@kschrader how big is big?#2017-04-2415:00marshallOne thing to consider is you could also use some proportion of the box memory for a local memcached instance#2017-04-2415:01kschraderright now we have 8GB allocated, and 4GB goes to the ObjectCache#2017-04-2415:02kschraderso we were just thinking about making it ⬆️#2017-04-2415:07marshallwell, I definitely know folks running 16G in prod. 
You just want to make sure to keep an eye on your system and ensure you’re not getting into GC hell
you can run memcached on the same instance#2017-04-2415:56marshalljust install and launch it with whatever excess memory you have#2017-04-2415:57marshallconfigure both your peer and your txor to use it#2017-04-2415:57marshallsince it’s local to the peer you’ll get really great read throughput and latency#2017-04-2415:58marshallfor example, I’m running a test right now on an m4.2xlarge, which has 32G of memory (IIRC); My jvm has an 8g heap and i’ve got a 22GB memcached instance running on it#2017-04-2415:59marshallthen i configure the memcached endpoint in both the peer and transactor to that instance address & the memcached port#2017-04-2415:59kschraderI see, that’s an interesting idea#2017-04-2416:01kschraderso the transactor would be configured to point at several memcached instances?#2017-04-2416:01marshallsure#2017-04-2416:01marshallif you have a separate system-wide memcached instance#2017-04-2416:01kschraderthe problem is that our peer boxes scale up and down during the day#2017-04-2416:01kschraderauto-scale#2017-04-2416:02kschraderand I don’t think that there’s a dynamic way to update the memcached config on the transactor#2017-04-2416:02marshallah.#2017-04-2416:02marshallno, there isnt#2017-04-2416:02marshallyou could still do it#2017-04-2416:02kschraderwe have a memcached cluster though, I don’t think that that’s our problem#2017-04-2416:02marshallyou just wouldn’t get the txor pushing new segments to your peer local memcached instance#2017-04-2416:04kschraderI think that our problem is churning segments during some of our queries#2017-04-2416:04kschraderwhich seems to be a high-overhead activity#2017-04-2416:07kschraderwhen I profile locally in a memory constrained environment I see a lot of time spent in org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse and com.fasterxml.jackson.core.json.UTF8StreamJsonParser.getText#2017-04-2416:07kschraderif I increase the memory allocation then those hotspots go away#2017-04-2416:08kschraderand once the 
cache is warmed the response time is about 40x faster for the queries that I’m profiling with#2017-04-2416:09kschrader(which obviously isn’t the same load as our production infrastructure, but it’s clearly something)#2017-04-2416:10kschraderwe also see a bunch of time spent in java.io.BufferedInputStream.read, java.io.DataInputStream.readFully, and java.io.DataInputStream.readInt#2017-04-2416:11kschraderwhich all feels like cache churn to me, but I could be misunderstanding something#2017-04-2416:20robert-stuttaford@marshall @jaret any way we can configure a longer timeout for S3 restores?
Copied 0 segments, skipped 128128 segments.
Copied 0 segments, skipped 128128 segments.
Copied 0 segments, skipped 128128 segments.
java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Read timed out
#2017-04-2416:20robert-stuttafordthese are becoming tiresome to retry-a-thon our way through#2017-04-2416:20robert-stuttafordthis is when we restore to a dev machine#2017-04-2416:34kschrader☝️ we also have this problem on a regular basis#2017-04-2416:37marshall@kschrader that may or may not be cache ‘churn’ - it is potentially just the cost of reading a ton of data#2017-04-2416:37marshallwhich may also be alleviated by local memcached#2017-04-2416:38marshall@robert-stuttaford and @kschrader I don’t believe that is configurable currently - I’d suggest adding it as a feature request#2017-04-2416:38marshalli will also pass it along#2017-04-2416:38kschraderok, but if I increase the heap size then the problem goes away#2017-04-2416:39marshallhow do your object cache hit rate metrics look?#2017-04-2416:39marshallin the prod env#2017-04-2416:39kschraderclose to 1#2017-04-2416:39marshallunlikely to be churn then
is it possible you’re under memory pressure from your app?#2017-04-2416:40kschraderthat’s also a possibility#2017-04-2416:40marshallthere’s nothing horrible about running a 12 or 16 heap#2017-04-2416:40marshalljust make sure you keep an eye on it at first#2017-04-2416:40kschradergot it#2017-04-2416:40kschraderI think that we’ll try to bring up another cluster with 16GB heaps and see what happens#2017-04-2416:41marshalland have a reasonably scaled compute with it - you don’t want to have a huge heap with i.e. a 1 or 2 core processor#2017-04-2416:41marshallyou can get in a situation where G1 can’t keep up#2017-04-2416:41marshallwithout more compute#2017-04-2416:41kschraderyeah, we’d move to 8 CPUs along with it#2017-04-2416:41marshall👍#2017-04-2417:09robert-stuttaford@kschrader please upvote 🙂 https://receptive.io/app/#/case/26143#2017-04-2417:11robert-stuttaford@marshall i hope you give substantial weight to each vote on http://Receptive.io because each one counts for an organisation which represents many people 🙂#2017-04-2417:33jaret@robert-stuttaford We are absolutely considering organizational weight and power users when looking at our feature request feedback#2017-04-2421:53erichmondthis isn’t technically datomic support, I know, but can we ask questions here?#2017-04-2503:07jaret@erichmond feel free. thats the primary purpose of this channel. Marshall and I (datomic support) watch slack and the slack/general community is made up of helpful, knowledgable users who do most of the answering.#2017-04-2518:18kvltWhen adding :db/unique :db.unique/identity to an attribute. How long should I expect to wait until those attr’s are queryable using that index?#2017-04-2518:19marshall@petr after the next indexing job#2017-04-2518:19marshallhow long that takes will depend on how many attr/val of that attribute are in the db#2017-04-2518:20kvltI just tested it on my local in memory db. 
There is one entity transacted.#2017-04-2518:21marshallshould be immediate in mem#2017-04-2518:21marshallbut there are some known limitations to schema changes with mem dbs#2017-04-2518:21marshallas they dont’ have persistence#2017-04-2518:21kvltThat’s what I figured#2017-04-2518:21kvltThanks#2017-04-2519:11kvlt@marshall If I remember correctly, there is no way to remove and attribute, is there?#2017-04-2519:11kvltI can’t excise on :db/ident can I?#2017-04-2519:12marshallno, you cannot excise schema#2017-04-2519:12marshallyou can ‘deprecate’ unused/old schema elements#2017-04-2519:13marshalland/or rename them as part of that#2017-04-2613:07jonpitherhi all, Anyone used Datomic with Gemfire/Apache-Geode rather than Memcached?#2017-04-2614:06baptiste-from-parishello friends, I need some help to sell Datomic to my client. What are the best arguments that I can give for selling Datomic to a business profile (and not technical at all) ? Thanks a lot.#2017-04-2614:27jimmy@baptiste-from-paris good question, I would love to hear this from ones have done this before as well 🙂#2017-04-2615:18val_waeselynck@baptiste-from-paris I definitely need to make a blog post for this, as it was a tough sell to the non-technical founder of my startup as well (which later acknowledged it was a tremendously good choice)#2017-04-2615:19val_waeselynckof course it's hard to justify a technical choice with a non-technical vocabulary, but here are some ideas.#2017-04-2615:20val_waeselynck1. high query power (thanks to Datalog and the fact that querying is effectively non-remote) means I can get new features done more quickly.#2017-04-2615:23val_waeselynck2. you never lose data -> means you recover easily from human or programmatic errors. Example: once my Operations director calls me while I'm out buying lunch "Hey Val, I just accidently deleted one of our customers organization along with 100s of user profiles". Me: "OK, don't worry, I'll put it back in 30 mins."#2017-04-2615:24val_waeselynck3. 
high schema flexibility -> means it's easy to modify features.#2017-04-2615:24baptiste-from-paris@val_waeselynck that’s what my slide looks like lol#2017-04-2615:24baptiste-from-parisgreat synthesis skill 😉#2017-04-2615:26val_waeselynck4. Very effective testing at a low cost thanks to speculative writes, a feature unique to Datomic with far-reaching consequences -> means I don't have to do quick and dirty, I can just do quick and clean, with huge benefits once your codebase is a few months old.#2017-04-2615:27val_waeselynck5. The hardest technical problems of traditional databases are just gone : Impedance Mismatch, Cache Invalidation, N+1 problem, Concurrency / Distributed systems issues.#2017-04-2615:31val_waeselynck6. Easy modeling thanks to the universal schema -> means you don't spend time anticipating how you'll query your data when you're storing it#2017-04-2615:36val_waeselynck7. Trivial Change Data Capture thanks to the Log, which means Datomic is an excellent data source to integrate to other data systems (BI, derived data, materialized views etc.) -> means Datomic will be an asset, not a liability, when you need to integrate other data engines as your system grows.#2017-04-2615:40val_waeselynckNote that I haven't included the 'time travel' features of Datomic - I think most people who use Datomic to keep several versions of some piece of data end up disappointed#2017-04-2615:41jeff.terrellI just totally shared that list with my team, as part of my efforts to evangelize Datomic. Thanks @val_waeselynck!#2017-04-2615:41val_waeselynck(Personally, my main reason for choosing Clojure for my startup was idiomatic access to Datomic. 
You can usually work around the deficiencies of your programming language - I'm looking at you, JavaScript! - but not those of your database system)#2017-04-2615:43val_waeselynckOK putting the blog post on my todo list.#2017-04-2616:15val_waeselynck@baptiste-from-paris also, I just discovered this page on http://datomic.com http://www.datomic.com/benefits.html#2017-04-2616:15val_waeselynckGuess it wasn't here a few months ago, you may have missed it too.#2017-04-2617:45sineerHow can I log transactions using log4j? I've enabled <logger name="datomic" level="DEBUG" /> and I see lots of things like peer connections but I don't see any trace for my queries#2017-04-2618:10souenzzo(fn equal? [entity1 entity2] (= (:db/id entity1) (:db/id entity2))) Is there a "smarter" way to compare entities?#2017-04-2618:12favila@souenzzo entities have identity semantics#2017-04-2618:12favilacomparing them is application-specific#2017-04-2618:16souenzzoCan I safely compare datomic.query.EntityMap with =?#2017-04-2618:16favilaI mean datomic entities, not the EntityMap#2017-04-2618:17favilaI don't know if EntityMap implements iequiv, or what it does#2017-04-2618:18favilayou also have to consider databases, and whether they have filters applied#2017-04-2618:19souenzzoI will check :db/id. Working for now.#2017-04-2618:22alexmillerif you are trying to compare whether two entity maps are for the same entity that seems like what you should do#2017-04-2618:23alexmilleran entity map is a collection of attributes from a point in time so you can have different entity maps for the same id with different things in them#2017-04-2618:24alexmillerand then they also have lazy read semantics so that is also worth consideration#2017-04-2618:29souenzzoI have a single db that contains all entities.
Sometimes I'm "walking" on entities and sometimes I need to compare#2017-04-2618:31alexmiller“compare” is too weak a word to describe the operation you mean :)#2017-04-2618:31alexmillerit can mean either “have the same contents” or “refer to the same entity”?#2017-04-2618:36souenzzorefer to same entity... Entity has same db/id... (fn equal? [entity1 entity2] (= (:db/id entity1) (:db/id entity2))) < it's working.#2017-04-2618:40alexmilleryes, that makes sense#2017-04-2618:45souenzzoSo, with these semantics, will (= entity1 entity2) work, or is it better to get the ids?#2017-04-2618:48favila@souenzzo If that isn't just java object identity, the semantics are so ambiguous and unclear that you are better off being more specific#2017-04-2618:48favilathe problem of entity equality is the same as of Java object equality, except worse because of time travel or db filtering#2017-04-2618:50favilaso you need to be specific about what constitutes equality for you. If it means "same db/id", that's fine. But the entity maps may have different attributes or values on them, be for a different db t, or be from completely different dbs#2017-04-2618:51favilathey may even have the same visible assertions, but be from different dbs; or from the same db with different filters, as-of-t, or since-t#2017-04-2618:51favilaand that's not even considering shallow vs deep equality#2017-04-2620:29marshall@sineer Datomic doesn’t provide logging of the queries themselves. What specifically are you wanting to record/identify?#2017-04-2620:57sineer@marshall I'm just beginning to learn how to use datomic and my pedestal app does a simple query and I guess I was expecting to see the query being logged having turned on full logging. I'm surprised that it doesn't provide that debug feature?#2017-04-2621:16marshall@sineer all the work of query occurs on the peer.
If you enable peer logging you will see some reports of what the peer is doing#2017-04-2621:41sineerI enable log ALL for datomic and all I see for peer is this:
2017-04-26 17:39:30 INFO datomic.peer - {:event :peer/connect-transactor, :host "0.0.0.0", :alt-host "192.168.99.100", :port 4334, :version "0.9.5561", :pid 80946, :tid 87}
followed later by:
2017-04-26 17:39:30 INFO datomic.peer - {:event :peer/cache-connection, :protocol :dev, :db-name "xxx-dev", :system-root "192.168.99.100:4334", :host "192.168.99.100", :port 4334, :db-id "xxx-dev-8c7fdb4b-86ec-46c5-95cb-a9e6932b99fd", :pid 80946, :tid 87}#2017-04-2621:42sineerI suppose it does tell me :tid 87 and I must learn how to access the transaction log if I care to see the query?#2017-04-2622:46marshall@sineer it depends what you mean by “see the query”
Queries themselves aren’t logged by Datomic
Transactions do result in a line in the transactor logs#2017-04-2623:18sineerYeah I believe it's the transaction log that I expected to see when I enabled logging on ALL. I haven't figured that one out yet, I'll just read more doc 🙂 Thanks!#2017-04-2709:14dimovichhello fellow clojurians#2017-04-2709:15dimovichwhat would be a good place to start learning about datomic?#2017-04-2709:15dimovichsome tutorial, or maybe a specific book?#2017-04-2709:15dimovichthanks!#2017-04-2709:18jimmy@dimovich You should check this first
http://www.datomic.com/training.html#2017-04-2709:21dimovich@nxqd thanks!#2017-04-2709:26val_waeselynck@dimovich see also http://www.learndatalogtoday.org/#2017-04-2709:29dimovich@val_waeselynck :+1:#2017-04-2710:19kirill.salykin@dimovich maybe you also want to look at the cookbook (Datomic-related chapters)
https://github.com/clojure-cookbook/clojure-cookbook/tree/master/06_databases#2017-04-2710:22karol.adamiec@dimovich or a lightweight contender http://gigasquidsoftware.com/blog/2015/08/15/conversations-with-datomic/ 🙂#2017-04-2714:01degWhat is the cheapest/smallest AWS EC2 instance type that can run Datomic? (This is for tiny toy apps; my data size is negligible).#2017-04-2714:15nottmey@deg my dev instance runs just fine on a free tier ec2 instance with 1gb ram. Using bin/transactor -Xmx512m -Xms512m mytransactor.properties and I think I changed the following properties:
memory-index-threshold=32m
memory-index-max=256m
object-cache-max=128m
Runs for a few months now, so it should be ok. ¯\(ツ)/¯#2017-04-2714:22deg@nottmey Great, thanks. I had tried 0.5gb (the smallest lightsail tier) and OOM'd at startup. I then found https://github.com/WormBase/db-prototypes/wiki/Installing-Datomic, which suggested that much more was needed, and kind of freaked. But 1gb is not too bad. Means that I can't play for free, but still a pretty reasonable cost.#2017-04-2714:23degBtw, are you running with Docker, or bare on the machine? My current plan is to run with Docker, which I imagine will add a tiny bit more overhead too.#2017-04-2714:33nottmeyNot using docker for this one, but some other processes are also running. Since the DB process itself only takes 54% of the memory there is some room for other processes (e.g. the datomic console - java process - with 16%). Maybe scale down my numbers a bit and try the 0.5gb again.#2017-04-2714:47degWill do, thanks.#2017-04-2714:50jeff.terrell@deg - I remember having the same experience at some point. But it's been a while since I tried. Maybe a more recent version of Datomic can work with lower memory requirements? I don't know.#2017-04-2714:52degThanks. I'm playing with a bunch of moving pieces now (mostly https://github.com/pointslope/docker-datomic-example), and a pile of outside world interrupts. But, I'll report back when I figure out what is the smallest size that works.#2017-04-2714:57degHmm. Is there any way to specify the memory usage in the .properties file, rather than on the cmdline?#2017-04-2714:59degThe tooling that I'm trying (https://github.com/pointslope/docker-datomic) has the call to bin/transactor hardcoded with no parameters. Changing it will mean forking two repos and maybe adding images to the Docker cloud; actions I'd just as soon avoid for this simple playing around.#2017-04-2715:00mgrbyte@deg You can set the mem options via environment variables (e.g. XMX)#2017-04-2715:02degcool, thanks!#2017-04-2715:03degNo, actually, not so cool.
The environment is in the docker image, so that still requires forking the repos.#2017-04-2715:06mgrbyte@deg the Dockerfile in the repo you link to above is quite small. There may be other things in there you want to change, perhaps rolling your own is not too bad?#2017-04-2715:07degAgreed. I almost certainly will do so soon. But, had wanted to play quickly before I had to relearn dockerfile syntax. Ah well; I'm just trying too hard to avoid work. 🙂#2017-04-2715:10mgrbyteNote that in their Dockerfile, at the bottom, apparently you can augment the ENTRYPOINT command (Provide a CMD argument with the relative path to the
# transactor.properties file it will supplement the ENTRYPOINT)#2017-04-2715:17karol.adamiec@deg do not know your use case but fiddling with datomic inside docker seems like making it hard. You mentioned you want to play. is memory/filesystem local thing not enough? and if you need to deploy is official way on AWS prohibitively expensive? it is a latte once a week no??#2017-04-2715:20deg🙂 All good points. Mostly trying to use Docker out of philosophy (not polluting the machine; reproducibility of porting to new envs; etc.). Then, I descended down a yak-shaving path here, wanting to test if I could run on a smaller machine.#2017-04-2715:21karol.adamieci do have docker for my peers (jvm clients for datomic). Datomic transactor itself is vanilla one on AWS (not vanilla, terraformed one but still close to the metal)#2017-04-2715:22karol.adamiecm3.medium works absolutely fine#2017-04-2715:22karol.adamiec$0.073 per Hour#2017-04-2715:22degI have decided to let the yak roam free. Thank you for that wake-up call.#2017-04-2715:23karol.adamiec👍#2017-04-2715:24karol.adamiecjust remember that when yak is satisfied and you have to scale production workloads it will be more messy to get help/diagnose etc when transactor is inside docker. i see all the time docker/transactor related stuff here 😄#2017-04-2715:24karol.adamiechave fun 😄#2017-04-2718:59slpssmHi, I’m running into this error: :db.error/not-an-entity Unable to resolve entity: {:part \"db.part/user\", :idx -1001669} in datom [{:part \"db.part/user\", :idx -1001669} :field/docId \"44d3a3b6670dfc39bef95993\"] and I’m not sure how to fix it. Using version 0.9.5302 of datomic. (Yes, it’s a bit old)#2017-04-2719:00slpssmThe :db/id specification looks like: :db/id #db/id[:db.part/user] if that helps.#2017-04-2719:01marshalltry replacing the reader macro with a call to tempid:
:db/id (d/tempid :db.part/user) #2017-04-2719:02marshall@slpssm ^#2017-04-2719:03slpssmSame error. The :idx value is different.#2017-04-2719:03marshallcan you share the entire transaction#2017-04-2719:07slpssmSure: [{:data/docId "44d3a3b6670dfc39bef95993"
:data/state "disabled"
:data/repo "1e9b"
:data/dimensions "728x90"
:data/name "name"
:db/id (d/tempid :db.part/user)
}] I should mention that this is massaged a bit and expands like: [({:data/docId "44d3a3b6670dfc39bef95993", :data/state "disabled", :data/repo "1e9b", :data/dimensions "728x90", :data/name "name", :db/id {:part "db.part/user", :idx -1001669}})]#2017-04-2719:09marshallyou do the expansion to the nested :db/id map?#2017-04-2719:10slpssmI shouldn’t?#2017-04-2719:10marshallwhat happens if you submit first thing as the tx data?#2017-04-2719:11slpssmEasier said than done. 🙂#2017-04-2719:12weianyone have ideas for representing a spreadsheet (i.e. tabular data) in datomic? main functions I want to support are looking up rows by column value, and updating a spreadsheet entity with a new one (with rows potentially added and deleted)#2017-04-2719:13slpssm@marshall That gives me a good clue. Let me go poke at this a bit more. Thanks!#2017-04-2719:34bmaddyThe Datomic docs say to “prefer a collection binding in a parameterized query over running multiple queries” (http://docs.datomic.com/best-practices.html#collections-as-inputs). Does anyone know how a collection binding compares to a logical or like this?
(d/q '[:find (pull ?a [:artist/name])
:in $
:where [?a :artist/country ?country]
(or [?country :country/name "Canada"]
[?country :country/name "Japan"])]
db)
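[Editor's note] For comparison, the collection-binding form that the best-practices page recommends would supply the country names as an input, so the query string stays constant and its plan can be cached. A sketch (same hypothetical artist/country schema as above):

```clojure
;; Sketch: the countries arrive as a collection input bound with [?c ...],
;; rather than being inlined into the :where clauses.
(d/q '[:find (pull ?a [:artist/name])
       :in $ [?c ...]
       :where [?a :artist/country ?country]
              [?country :country/name ?c]]
     db ["Canada" "Japan"])
```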
#2017-04-2719:38bmaddyAlso, how would those relate to using ground?
(d/q '[:find (pull ?a [:artist/name])
:in $
:where [?a :artist/country ?country]
[?country :country/name ?c]
[(ground ["Canada" "Japan"]) [?c ...]]]
db)
#2017-04-2720:09favila@bmaddy The point of that advice is that query plans can be reused if they don't change. If you inline dynamic parameters into the query, a query plan needs to be rebuilt each time. If you supply them as input to an unchanging query, the query plan is reused and the input changes#2017-04-2720:10favilaif your collection is truly a static part of the query itself, then inlining it is fine#2017-04-2720:10favilaI don't know if there's any measurable difference between ground and or in your example#2017-04-2720:21drewverleeThis might sound wanky, but are there any automation tools that leverage Datomic out there? I’m just starting to learn ansible and I feel like it’s super brittle. Like grouping things together and searching for things is a huge afterthought. I have the same feeling about Artifactory…#2017-04-2720:29bmaddyCool, thanks @favila.#2017-04-2808:17ezmiller77Can anyone tell me how I'd dynamically create a transaction spec? I want to do something like:
(defn latest-query
([] '[:find (pull ?e [*]) :where [?e :arb/doctype]])
([doctype]
(let [doctype-attr (keyword "doctype" doctype)]
`[:find (pull ?e [*])
:where [?e :arb/doctype ~doctype-attr]])))
But this doesn't work since the syntax quote creates a mess of stuff that doesn't belong due to the namespacing:
[:find
(datemo.routes/pull datemo.routes/?e [clojure.core/*])
:where
[datemo.routes/?e :arb/doctype :doctype/note]]
On the other hand, using a regular ' won't work because it doesn't allow for unquoting...
(defn latest-query
([] '[:find (pull ?e [*]) :where [?e :arb/doctype]])
([doctype]
(let [doctype-attr (keyword "doctype" doctype)]
[:find '(pull ?e [*])
:where ['?e ':arb/doctype doctype-attr]])))
#2017-04-2808:19ezmiller77Maybe there's a nicer way to do this?#2017-04-2808:30val_waeselynck@ezmiller77 what do you mean, 'a transaction spec' ?#2017-04-2808:31ezmiller77@val_waeselynck I just mean the edn for a transaction...#2017-04-2808:32augustlyou mean a query, I guess? 🙂#2017-04-2808:32ezmiller77I guess the docs call it "tx-data"#2017-04-2808:32augustlyou could always just pass it in as a parameter instead of building a data structure#2017-04-2808:32val_waeselynck@ezmiller77 it does look like a query 🙂#2017-04-2808:33ezmiller77Well @val_waeselynck I dont' really care to quibble about symantics.#2017-04-2808:33augustl[:find (pull ?e [*]) :in [$ ?doctype] :where [?e :arb/doctype ?doctype]]#2017-04-2808:33ezmiller77@augustl that's a nice soln!#2017-04-2808:33augustl@ezmiller77 only mentioning it because a query and a transaction are fundamentally very different things#2017-04-2808:33augustlif something takes tx-data it certainly does not expect a query 🙂#2017-04-2808:34val_waeselynck@ezmiller77 I don't think it's fair to call it quibbling, you should know the difference just so you can choose between d/q and d/transact#2017-04-2808:34augustlas in, Datomic is not like SQL where you use the same system for reading and writing, they are pretty different#2017-04-2808:34val_waeselynckbut hey, sorry to have interrupted you with my quibbling. good day!#2017-04-2808:34augustl@ezmiller77 the advantage of parameterizing the query is that datomic will cache the query as well#2017-04-2808:35augustlit means that db and doctype needs to be passed in to latest-query of course#2017-04-2808:35augustlthat makes more sense to me at least#2017-04-2808:36augustlI'd also rename doctype-attr to doctype-val 🙂#2017-04-2808:36augustlI would say that the attr here is :arb/doctype#2017-04-2808:44ezmiller77right you are.#2017-04-2815:18kjothenDoes anybody know if it's possible to backup/restore and rotate log files to an S3-compatible store? 
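[Editor's note] If you do want to keep syntax quote for inlining a value, one known Clojure workaround is ~' (unquote a quoted symbol), which stops the reader from namespace-qualifying the query symbols. A sketch of the function above using that trick:

```clojure
;; Sketch: ~'sym emits the bare symbol inside a syntax-quoted form,
;; so pull, ?e and * are not resolved into the current namespace.
(defn latest-query [doctype]
  (let [doctype-attr (keyword "doctype" doctype)]
    `[:find (~'pull ~'?e [~'*])
      :where [~'?e :arb/doctype ~doctype-attr]]))
```

That said, the parameterized `:in` form suggested above is usually preferable, since the query data structure stays constant.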
I don't see a way of specifying an endpoint URL in the command line tools, and all attempts so far have resulted in the default http://s3.amazonaws.com endpoint appearing in the logged exceptions.#2017-04-2815:19dominicmI think the aws build for datomic does this / can do this.#2017-04-2815:20dominicmWe seem to have a aws-s3-log-bucket-id in aws.properties#2017-04-2815:20dominicmafaik it only runs every 24h#2017-04-2815:25kjothen@dominicm Yes, I've specified a custom address in the aws-s3-log-bucket-id property but it's ignored. The implementation seems fairly determined that the s3 host name is http://s3.amazonaws.com, whereas I need to specify an alternative host name for a compatible s3 store that we run internally.#2017-04-2815:26dominicmah, I missed s3-compatible.#2017-04-2815:27marshall@kjothen It isn’t currently possible to specify a non-s3 but s3-compatible endpoint for log rotation
I’d recommend that you request it as a feature in the customer feedback portal (Suggest Features link in the top nav of my.datomic dashboard)#2017-04-2815:32kjothen@marshall I figured this was the case, but thought I'd check nonetheless. AWS is not an option at my workplace just now, but figured I could take advantage of our internal S3 store for backups in particular. I'll raise some feedback - thanks!#2017-04-2815:33marshall@kjothen yep, totally understand the desire/feature - definitely think it’s one to put on the portal#2017-04-2816:24kjothen@marshall I followed your suggestion and raised the feature in the portal, thanks again for your prompt response#2017-04-2815:51robert-stuttaford@marshall have any of the portal requests become released features yet?#2017-04-2815:51marshallyep#2017-04-2815:52marshalli believe you can click on ‘released’ or ‘releases’ to see which ones#2017-04-2815:52marshalllog in mem db was one#2017-04-2820:03robert-stuttafordnice#2017-04-2903:39cauli.tomazHey! How are you doing?
If I have an identifier for a :db/ident (e.g.: 17592186045423), is there a way to check/query its corresponding keyword?
I'm using these :db/idents as enumerated types, and when I grab the ref type in a query they are retrieved as :db/ids, and I want to know their meaning 🙂#2017-04-2903:53cauli.tomazWoot! Got it.
(defn query-schema-idents! []
  (let [idents (<!! (d/q conn {:query '[:find ?id ?ident
                                        :where [?id :db/ident ?ident]]
                               :args [db]}))]
    (println idents)))
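(Editor's aside: a minimal sketch of the shortcut suggested just below for the Peer API — `datomic.api/ident` resolves an entity id to its `:db/ident` keyword directly, and it can also be called as a query function. `db` and the entity id are assumed to exist.)

```clojure
;; Sketch only: `db` is an assumed Peer-API database value; the entity id
;; is the hypothetical enum id from the question above.
(require '[datomic.api :as d])

;; Direct resolution of an entity id to its :db/ident keyword:
(d/ident db 17592186045423)

;; Or, inside datalog, as a fully-qualified query function:
(d/q '[:find ?kw .
       :in $ ?e
       :where [(datomic.api/ident $ ?e) ?kw]]
     db 17592186045423)
```

The scalar find spec (`?kw .`) returns the keyword itself rather than a set of tuples.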
#2017-04-2908:36val_waeselynck@cauli.tomaz just use datomic.api/ident :) #2017-04-2909:02cauli.tomazThat is nice to know, @val_waeselynck, thanks. But I'm using the Client API and ident is not available, I forgot to mention that.#2017-04-2916:14celldeeHi, I'm trying to include the Datomic Peer library in my leiningen project. The documentation says that I can include something like this - [com.datomic/datomic-EDITION "VERSION"] - in my project.clj. I put [com.datomic/datomic-pro "0.9.5561"] in but got an error saying:
Could not find artifact com.datomic:datomic-pro:jar:0.9.5561 in central (https://repo1.maven.org/maven2/)
Could not find artifact com.datomic:datomic-pro:jar:0.9.5561 in clojars (https://clojars.org/repo/)
Can anyone tell me how to include the Peer library in my project correctly?#2017-04-2916:44favilaYou also need to add the datomic maven repository with credentials (which are specific to you). See https://my.datomic.com after logging in#2017-04-2916:45favilaIt's right above the order history#2017-04-2916:48celldee@favila Thanks very much.#2017-05-0111:15noogaIs there a way to change ids into keywords while querying? so I don’t have to [?e :blah/enumattr ?x] [?x :db/ident ?xi] ?#2017-05-0111:17augustlenumattr, as in something with cardinality "many"?#2017-05-0111:19noogahttp://docs.datomic.com/best-practices.html#sec-9#2017-05-0111:19noogasomething like that#2017-05-0111:21noogaso when I have [?e :artist/country ?c] i want to get ?c as :country/GB from the query instead of 108#2017-05-0112:21favila[(datomic.api/ident $ ?x) ?xi] is equivalent (and you can do it outside the query too), but it's shorter and clearer to just do what you are doing#2017-05-0112:52noogaooh#2017-05-0112:52noogai was trying (ident ?x)#2017-05-0112:52noogathat’s why it didn’t work :F#2017-05-0115:47souenzzoIn my experience with datomic: it was almost always "simpler than expected"#2017-05-0211:47Matt ButlerHi 🙂, How are people doing Datomic Backups on AWS? Anyone managed to get it to work with Lambda?#2017-05-0213:14stuartsierra@mbutler I had a successful proof-of-concept of a scheduled AWS Lambda launching a short-lived EC2 instance to run the Datomic Backup.#2017-05-0213:29Matt Butler@stuartsierra This was what i assumed would be necessary (rather than being able to run the backup directly in a Lambda), how were you letting Lambda know that the backup was complete (http req maybe)? Any pitfalls I should be aware of?
Or is it as simple as Trigger Lambda to bring up box with S3 access, make sure datomic files present on instance, run backup, trigger lambda to kill box.#2017-05-0213:30robert-stuttaford@mbutler we have a small ec2 instance running backups continuously#2017-05-0213:31pesterhazyI have Jenkins task that makes the backup#2017-05-0213:31pesterhazyRemember backups are strictly incremental so can be run very frequently with little harm#2017-05-0213:32Matt ButlerYes, had also considered running from an application server, just seemed that Lambda might be a nice solution. however the extra step of launching and managing an instance makes it a little less appetising.#2017-05-0213:32robert-stuttafordhttps://github.com/robert-stuttaford/terraform-example#2017-05-0213:32robert-stuttaford-whistle-#2017-05-0213:33robert-stuttaforddon’t run backups from an app server. it’s a java process, it’ll eat all your ram 🙂#2017-05-0213:34Matt ButlerAllowances had been made for the extra RAM usage but I agree its a messier solution. 🙂#2017-05-0213:34robert-stuttafordi’m fairly certain that the cost of an ec2 instance doing continuous backups will be less than the DDB read spikes you’d see by doing bigger infrequent backups. 
i have no actual facts to back this up#2017-05-0213:36pesterhazybecause of ec2's hourly pricing, spinning up ec2 instances doesn't make a lot of sense to my mind#2017-05-0213:37pesterhazyyou might as well keep one going continuously#2017-05-0213:38Matt ButlerI confess from a practical standpoint I'm inclined to agree @robert-stuttaford, Lambda just seemed like the right solution for a "backup" job.#2017-05-0213:39pesterhazy(now if only aws introduced by-second billing for long-running tasks, that'd be a game changer)#2017-05-0213:39Matt Butler@pesterhazy Doesn't hourly pricing incentivize the launching/killing of EC2 instances?#2017-05-0213:39pesterhazyonly if you back up less than once an hour 🙂#2017-05-0213:41Matt ButlerI suppose with the incremental backup, there's an argument for smaller windows, point well made 🙂#2017-05-0213:42robert-stuttafordwe do it continuously because AWS. don’t want to be at the mercy of DDB being down in that region and our business coming to a halt#2017-05-0213:42robert-stuttafordwe’re not in US-EAST-1 but still#2017-05-0213:55Matt ButlerGreat points, thanks 🙂#2017-05-0214:11stuartsierra@mbutler I think I had the Lambda launch the instance, then the instance destroy itself when it was done.#2017-05-0214:13Matt ButlerGotcha 🙂#2017-05-0217:26eraserhdI have a weird problem. I'm trying to transact with a single, pretty simple datom, and am getting :db.error/not-a-data-function.#2017-05-0217:27eraserhdI first tried with (d/transact (conn) [[<eid> :my/attr "value"]]), then with (d/transact (conn) [[<lookup-ref> :my/attr "value"]])#2017-05-0217:27eraserhdBoth times it complains that the value which is not a data function is the eid of the thing I'm updating.#2017-05-0217:27eraserhd(so it resolved the lookup ref).#2017-05-0217:28eraserhdOh, god. :db/add.#2017-05-0217:28eraserhdhahaha. Ignore me.#2017-05-0217:35pesterhazyThat one has bitten me a few times as well#2017-05-0313:10rb1719Hi all.
If I have a transactor set up, how would I create a Free transactor integrated storage to use that transactor?#2017-05-0313:11augustlnot sure what you mean with a transactor using another transactor#2017-05-0313:16rb1719Maybe I am complicating it#2017-05-0313:16rb1719I want to set up a Free transactor integrated storage#2017-05-0313:17rb1719I tried doing datomic: but couldn't create a connection as I didn't have a transactor#2017-05-0313:17augustlthat's the default mode of the free transactor#2017-05-0313:17augustlyou need to run the transactor - that's a separate process from your app#2017-05-0313:18augustlthe peers/clients talks to the transactor to do writes#2017-05-0313:19rb1719If I want to test on my local machine first, do I need to create a transactor?#2017-05-0313:20augustlyeah, or you can use the in-memory storage that just has everything directly in the peer/client#2017-05-0313:22rb1719I have an in-memory atm#2017-05-0313:26marshall@rb1719 http://docs.datomic.com/dev-setup.html#sec-1 covers running a transactor#2017-05-0313:26marshallit is written for Starter, which requires a license key#2017-05-0313:26marshallbut you can do the same with free, using the free protocol instead of dev#2017-05-0313:29rb1719@marshall thanks. I'll have a look through that 😃#2017-05-0322:36slpssmA few days ago I asked about an error in relation to tempids. The final conclusion was we needed to update datomic. So we updated from 0.9.5302 to 0.9.5561 and while we were at it clojure from 1.7 to 1.8. We're finding that places where we could submit a string like "param" in a query we now need :param or ":param". 
Did something change or are we missing a setting for newer versions?#2017-05-0322:43slpssmI forgot to mention that param is specified as a keyword type.#2017-05-0323:43favila@slpssm your older practice sounds wrong, and may have worked by accident, but I can't tell without a concrete example#2017-05-0400:32slpssmSimplest example: :entity/state is a keyword [:find ?e :where [?e :entity/state :pending]]
works [:find ?e :where [?e :entity/state ":pending"]]
also works. [:find ?e :where [?e :entity/state "pending"]]
used to work in the old version but no longer works in the new version.#2017-05-0401:40jeff.terrellI have a question. Say I want to express the equivalent of this (pseudo?) SQL query in Datomic/datalog:
SELECT author.name, count(*) AS num_books
FROM author
JOIN book
ON author.id = book.author_id
GROUP BY author.id
HAVING count(*) >= 5
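(Editor's aside: the GROUP BY/HAVING part of this SQL maps to an aggregate in the datalog :find clause — like the (count ?bid) query shown later in the thread — followed by an ordinary Clojure filter over the result tuples. A sketch with made-up data:)

```clojure
;; Datalog aggregation such as (count ?bid) yields [author-name book-count]
;; tuples; the HAVING step is then a plain filter in application code.
;; `rows` is made-up example data standing in for a query result.
(def rows [["Ann" 7] ["Bob" 3] ["Cid" 5]])

(filter (fn [[_ n]] (>= n 5)) rows)
;; => (["Ann" 7] ["Cid" 5])
```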
I think that in the peer library, you could just fetch all the author/book combinations then group (`GROUP BY` above) and filter (`HAVING` above) in good ol' Clojure without any waste of bandwidth or anything, since you essentially have all the data locally anyway. (If this is not correct, let me know!)
If instead you were using the client library, and you took the same approach of fetching all the author/book combinations, I'm guessing you'd transfer data (potentially a lot of data) that you didn't end up using. Is that right? How would you avoid that in the client library?#2017-05-0403:42favila@slpssm third one if it ever worked was a bug. Second one is I think a concession to Java users of the API #2017-05-0405:24favila[:find (pull ?aid [:author/name]) (count ?bid) :where [?aid :author/books ?bid]] @jeff.terrell #2017-05-0409:26isaacIs there any way to specify :db/txInstant while importing data with pipelining? Datomic txes must ensure (:db/txInstant early-tx) <= (:db/txInstant later-tx); however, pipelining can’t ensure the ordering of imports.#2017-05-0411:33augustl@jeff.terrell you're correct about the peer library. All query engines need their working set in memory 🙂#2017-05-0411:33augustlI somehow managed to miss the existence of the client library#2017-05-0412:55marshall@jeff.terrell The approach would be essentially the same with the client - the difference is that the work would happen on the peer server instead of in your process#2017-05-0412:56marshallYou’d still do the “limit by > 5” in your app#2017-05-0412:56marshallusing something like the query @favila posted above#2017-05-0412:58marshallIncidentally, since the client will return results in a channel, I’d do the filtering with a transducer#2017-05-0413:06jeff.terrell@marshall - OK. I think that's about what I expected. Thanks for confirming. Now, if there was a long tail, with lots of results < 5, I'd be wasting a lot of bandwidth, right? Is there a simple way to avoid that? Maybe ship the filter up to the server side, somehow?#2017-05-0413:06jeff.terrellIf this were SQL and I was trying to limit results somehow in a way that wasn't supported by SQL, I'd reach for a database function.#2017-05-0413:07marshallTBD 🙂
Sort in the peer server + limit/offset - which I believe is a request in our customer feedback portal#2017-05-0413:07marshallyou should vote for it 🙂
or if it’s not there, you should add it#2017-05-0413:07jeff.terrellOK! :-)#2017-05-0413:12favila@isaac pipelining does not necessitate out of order#2017-05-0413:12jeff.terrell@marshall - I think the feature I want to suggest is "custom server-side database functions to limit or transform results server-side when using the client library". Does that sound like a reasonable request to you, or would that be a bad idea for any reason that you can see?#2017-05-0413:37jeff.terrellI went ahead and added this.#2017-05-0413:16kirill.salykinWhen I am trying to suggest features - it stays on the account page and just adds “#”#2017-05-0413:16kirill.salykinhttp://api.eu-west-1.receptive.io/widget/ping
Failed to load resource: the server responded with a status of 400 (Bad Request)
Also I have this error, is this connected?#2017-05-0413:17kirill.salykin@marshall#2017-05-0413:18jeff.terrellIt worked for me, @kirill.salykin.#2017-05-0413:19kirill.salykinbut doesn't work for me#2017-05-0413:20marshall@kirill.salykin Are you running any ad blockers or ghostery or anything like that?#2017-05-0413:21kirill.salykinI disabled AdBlock#2017-05-0413:21kirill.salykinstill same#2017-05-0413:21marshalldepending on config they can prevent external redirects#2017-05-0413:21marshallhrm. what browser?#2017-05-0413:21kirill.salykinchrome and safari#2017-05-0413:21kirill.salykinwill try Firefox#2017-05-0413:21marshallweird. i wonder if there’s a system firewall issue#2017-05-0413:21kirill.salykinpossible…#2017-05-0413:21marshallsounds like you’re not getting content from receptive#2017-05-0413:23kirill.salykinreceptive responds with {"message":"invalid user"}#2017-05-0413:23marshallyeah, i just verified it works ok for me in chrome#2017-05-0413:23marshalllet’s go to a private chat#2017-05-0413:23kirill.salykinsure#2017-05-0413:31marshallheh. for those of you awaiting updates with bated breath about Kirill’s plight in accessing the feature portal - software is hard, integrating multiple softwares is harder 🙂#2017-05-0414:28isaac@favila It just keeps the ordering of the returns; it doesn’t guarantee the order of execution.#2017-05-0414:37karol.adamiechow to modify this query:
'[:find ?name ?filter
  :where [?e :part/name ?name]
  [?e :part/filter ?filter]]
so it returns […] for ?filter? :part/filter is a cardinality-many keyword attribute, but the above query returns only the first keyword from the collection …#2017-05-0414:41karol.adamiecgot it somehow working like that:
(d/q '[:find ?name (distinct ?filter)
       :where [?e :part/name ?name]
       [?e :part/filter ?filter]]
     (d/db (d/connect database-uri)))#2017-05-0414:41karol.adamiecbut i feel i am missing something… ;/#2017-05-0414:44favila@isaac using the impl of pipelining in the docs, yes. But within a process you are guaranteed that order of d/transact call is order of execution#2017-05-0414:45favilaSo with an implementation of pipeline that calls d/transact in correct order you know they will run in correct order#2017-05-0414:45favila@isaac eg https://gist.github.com/favila/3bc6fae005228a3290d5509c088e2f11#2017-05-0414:46favilaI also wrote one (less elegant) that didn't use core async, used reduction, kept inflights in an accumulating vector, and did "gc" of inflight futures when the vector reached capacity#2017-05-0414:47favilathe advantage of it was it didn't run any additional threads#2017-05-0415:37isaacYou mean, d/transact-async keeps the ordering of executions the same as the ordering of invocations?#2017-05-0415:37isaac@favila#2017-05-0415:37favila@isaac yes#2017-05-0415:38favilathe problem with the pipelining code in the docs is core.async/pipeline does not preserve order of invocation because it uses a threadpool#2017-05-0415:38favila(it's not immediately obvious that it does this)#2017-05-0415:39isaacdatomic.api is not open source, and the docs have no instructions about this#2017-05-0415:40isaacAnyway, it’s a good feature, thanks, @favila#2017-05-0415:40favilaYou are talking about this pipeline code?
http://docs.datomic.com/best-practices.html#pipeline-transactions#2017-05-0415:41isaacyeah#2017-05-0415:42favilacore.async/pipeline-blocking doesn't run input "tasks" in-order because it runs them in parallel on a threadpool#2017-05-0415:42favilaso which runs first is up to chance#2017-05-0415:42favila(this code is open)#2017-05-0415:43favilabut d/transact-async etc send transactions to the transactor in the order they are invoked#2017-05-0415:44favilaso you could just (doseq [tx txs] @(d/transact-async conn tx))#2017-05-0415:44isaac“but d/transact-async etc send transactions to the transactor in the order they are invoked” that is the key point#2017-05-0415:44favilaexcept that the flood of transactions would kill the transactor#2017-05-0415:44favilawell, not kill it, but the transactors would be unresponsive#2017-05-0415:44favilait would eventually churn through them all#2017-05-0415:45favilathe pipelining code is just to ensure queue depth#2017-05-0415:45favilais reasonable#2017-05-0415:49isaacwell, thanks for your hints, let me try your code. 🙂#2017-05-0421:42spiedenis there any work on client api libraries for non jvm languages taking place yet?#2017-05-0423:30jdkealycan a transactor support multiple databases? If not, how would you test using multiple databases on a localhost environment?#2017-05-0506:14robert-stuttaford@jdkealy absolutely yes it can#2017-05-0511:39stijnhas anyone ever written a (db) function that takes the nested map form of an entity in transaction and also generates :db/retract's (based on e.g. 'nil' values for fields or missing values in a many relationship)?#2017-05-0511:40stijnor how are you all dealing with this when arbitrarily nested data structures need to be updated?#2017-05-0512:39michaelrHi#2017-05-0512:40augustl@stijn Datomic sure likes flat data/facts 🙂 Usually ended up with something non-generic and/or tried to flatten my data as much as i can#2017-05-0512:45michaelrI have a growing number of orders in the DB.
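(Editor's aside: a sketch of the ordered-pipelining idea favila describes above — call d/transact-async in submission order, which fixes execution order, while bounding the number of in-flight transactions so the transactor isn't flooded. `submit!` stands in for `#(d/transact-async conn %)` so the shape is testable without Datomic; all names here are made up.)

```clojure
;; Calls (submit! tx) for each tx in order; at most `window` submissions
;; are outstanding before the oldest one is deref'ed. Returns the number
;; of transactions submitted.
(defn pipeline-ordered [submit! window txes]
  (loop [in-flight clojure.lang.PersistentQueue/EMPTY
         txes      txes
         n         0]
    (cond
      ;; room in the window and work left: submit the next tx, in order
      (and (seq txes) (< (count in-flight) window))
      (recur (conj in-flight (submit! (first txes))) (rest txes) (inc n))

      ;; window full (or input exhausted): wait on the oldest in-flight
      (seq in-flight)
      (do (deref (peek in-flight))
          (recur (pop in-flight) txes n))

      :else n)))

;; With Datomic this would be called as (names assumed):
;; (pipeline-ordered #(d/transact-async conn %) 20 txes)
```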
I would like to expose an API which would return these orders between dates, as given in a parameter. The simplest thing to do would be to make a query and then iterate over the returned entities. But what is the best way to implement pagination of results?#2017-05-0512:49stijn@michaelr use the :avet index on :order/date and return a 'next-page' token to the client based on either the date, or if you want a fixed set of results the date + the entity id#2017-05-0512:50stijn@augustl yes, but I was looking for something generic because i'm fed up implementing the non-generic 🙂#2017-05-0512:52michaelr@stijn Thanks! Will check the docs on how to query an index#2017-05-0512:52stijn@michaelr you can do that with d/datoms#2017-05-0513:01michaelr@stijn Checking.. thanks#2017-05-0513:26michaelr@stijn No way to specify (> some-date) for the value? Should I be iterating over the whole index?#2017-05-0513:31stijn(d/datoms db :avet (d/entid db :order/date) from-date) will give you all entities as of from-date#2017-05-0513:32michaelrWow interesting#2017-05-0513:32michaelrThanks#2017-05-0513:34favila@stijn that will give you entities with exactly from-date only#2017-05-0513:35favilaBut @michaelr see d/index-range#2017-05-0513:35michaelr@favila Thanks. Will check it now#2017-05-0513:37michaelr@favila This one works! 🙂#2017-05-0513:44stijnoh yes 🙂#2017-05-0611:56ezmiller77I have a schema that includes an attribute :a/metadata, which in turn can hold an attribute :b/tags. And :b/tags takes a :tag. :a/metadata and :b/tags are refs with the component option set to true. If I pull an entity containing these attributes, however, I am getting some incongruent results between an in-memory db and a db setup using the transactor. If I do it using an in-memory db, the tags are pulled completely, but if I do it with the dev db using the transactor, I only get the :db/id for the tags. I'm trying to understand why this might be happening.
One thought I had is that the dev db is slower, and that the transaction has somehow not yet completed, so that even though I'm pulling with the version of the db that is returned by the transaction, that those value aren't yet saved... Anyone have any thoughts?#2017-05-0612:36val_waeselynck@ezmiller77 can you show us the pattern you use for pulling?#2017-05-0612:42ezmiller77Sure. Basically, I do the transaction. Then from the result of the transaction I get :db-after so that I'm sure to be using the correct version of the db. I then just do use the pull api:#2017-05-0612:43ezmiller77(d/pull db-after '[*] [:arb/id id])
#2017-05-0612:44ezmiller77(The attribute shown there has a different name than waht `i wrote above, as I was just trying to simplify...)#2017-05-0612:45val_waeselynckI see. I'm not sure I can picture the entity diagram though. Could you show some example data (i.e a few datoms with these attributes) ?#2017-05-0612:46val_waeselynckand also what the output of the pull looks like ?#2017-05-0612:47ezmiller77So this is correct pull output that shows what you are asking for I think:
{:db/id 17592186045421,
:arb/id #uuid "590dc5a2-00e1-4a5b-8901-1fbc62c0dbc9",
:arb/value
[{:db/id 17592186045428,
:arb/value [{:db/id 17592186045430, :content/text "Title"}],
:arb/metadata [{:db/id 17592186045429, :metadata/html-tag :h1}]}
{:db/id 17592186045431,
:arb/value [{:db/id 17592186045433, :content/text "Paragraph"}],
:arb/metadata [{:db/id 17592186045432, :metadata/html-tag :p}]}],
:arb/metadata
[{:db/id 17592186045422, :metadata/html-tag :div}
{:db/id 17592186045423, :metadata/title "Untitled"}
{:db/id 17592186045424, :metadata/doctype {:db/id 17592186045418}}
{:db/id 17592186045425,
:metadata/tags
[{:db/id 17592186045426, :metadata/tag :tag1}
{:db/id 17592186045427, :metadata/tag :tag2}]}]}
#2017-05-0612:50ezmiller77This is an example of the pull result where the tags aren't recovered:
{:db/id 17592186046255,
:arb/id #uuid "590dc664-0ee1-4d47-b1ec-eeb332eeace9",
:arb/value
[{:db/id 17592186046260,
:arb/value [{:db/id 17592186046262, :content/text "A title "}],
:arb/metadata [{:db/id 17592186046261, :metadata/html-tag :p}]}
{:db/id 17592186046263,
:arb/value [{:db/id 17592186046265, :content/text "Paragraph"}],
:arb/metadata [{:db/id 17592186046264, :metadata/html-tag :p}]}],
:arb/metadata
[{:db/id 17592186046256, :metadata/html-tag :div}
{:db/id 17592186046257, :metadata/title "A title"}
{:db/id 17592186046258, :metadata/doctype {:db/id 17592186045418}}
{:db/id 17592186046259, :metadata/tags [{:db/id 17592186045417}]}]}
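(Editor's aside: when a ref points at an ident entity — the enum pattern — pull can return only the bare {:db/id ...} seen above. The keyword can be recovered after the fact; a sketch assuming the Peer API and the unresolved tag id from the output above:)

```clojure
;; Sketch only: `db` is an assumed database value; 17592186045417 is the
;; unresolved :metadata/tags ref from the pull result above.
(require '[datomic.api :as d])

;; Resolve the entity id to its :db/ident keyword directly:
(d/ident db 17592186045417)

;; Or pull the ident explicitly:
(d/pull db [:db/ident] 17592186045417)
```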
#2017-05-0612:51val_waeselynckhmm I see. Weird#2017-05-0612:53val_waeselynckFirst thing I would check is that the attributes are actually the same in both dbs - by calling e.g (datomic.api/attribute db :metadata/tag) for both dbs#2017-05-0612:59ezmiller77That makes sense. Will try that. Thanks.#2017-05-0615:31ezmiller77@val_waeselynck I suspect the problem I am having is related to a limitation of pull related to enums: https://groups.google.com/forum/#!topic/datomic/WBLEqNH6RtE#2017-05-0713:36bherrmannSorry to lob a noob question: but is there a tool which takes an existing SQL Database Schema and converts it into a datomic schema? Isn’t such a transformation possible?#2017-05-0713:50rauh@bherrmann Not automatic by any means, but it might help you a little bit: https://gist.github.com/rauhs/309bbd6c270bd8608f0689d8a826c223#2017-05-0713:59bherrmannThanks @rauh … I think I should probably try converting one of my toy projects from sql to datomic - that would probably help me grasp the issues.#2017-05-0714:01rauh@bherrmann Sure, let me know if you have any questions. It's not a super idiomatic conversion since it won't use ref-many attributes. It just maps a row to an entity.#2017-05-0716:32degI don't really understand maven, so this question will be a bit vague.... but there seems to be something wrong with the way datomic-pro is being included in my project.
I use https://github.com/trptcolin/versioneer to show the versions of libraries in my project. It works correctly on all my other dependencies, but not on datomic_pro.
A bit of spelunking shows that versioneer gets the info from the dependency's pom.properties file. Datomic_pro lacks this file, though I can see it has the version info in pom.xml.
I'm guessing that Datomic's maven_install script may lack some step, maybe?#2017-05-0721:46nikkiwould it be weird to just use http://docs.datomic.com/clojure/#datomic.api/function plus other stuff to store all your code and not do 'git' or such#2017-05-0721:46nikkiand so just have this shared image that people dev on#2017-05-0721:47nikkia database of all the code#2017-05-0721:47nikkiand functions require other stuff they need by querying that db#2017-05-0722:35lorenlarsenSo I'm trying to build a development environment using Docker. I can launch the transactor, peer and my webapp just fine except the peer needs to have the database created. I use docker-compose to launch everything and I'm looking for suggestions on the best way to create the database after my transactor starts and before my peer container starts up. I'm sure I'm just looking at the problem the wrong way but would appreciate any pointers.#2017-05-0801:46favila@lorenlarsen why can't the peer create the database?#2017-05-0802:21lorenlarsen@favila Well the peer could create the database but my thinking was that this isn't such a good idea long term when I have multiple peers. I can use that as a workaround for now though.#2017-05-0815:28dm3I’ve read the docs a few times, but I’m still confused about Datomic objectCacheMax on the Peer. I understand the cache consists of uncompressed segments on-heap and compressed segments off-heap. If so, does objectCacheMax specify the max size for the off-heap+on-heap cache? If I set e.g. Xmx500m, objectCacheMax=250m and MaxDirectMemorySize=32m, is the Peer smart enough to not blow the direct memory limit (if it even uses direct byte buffers)?#2017-05-0817:40stuartsierra@dm3 The Datomic Peer does not currently use any off-heap memory. 
objectCacheMax is the size of the in-heap cache space.#2017-05-0817:41stuartsierraCompressed segments are kept in the Storage Service and in Memcached.#2017-05-0818:07val_waeselynck@nikki I don't think that's viable, because Datomic doesn't qualify as a source control system (no forking and merging, no line/expression-level diffing) - for good reasons.#2017-05-0818:09val_waeselynckDatabases and codebases are inherentenly different (despite all the parallels drawn between them), because in databases the 'snapshots' are derived from incremental changes (transactions), whereas in codebases the incremental changes (patches) are derived from the snapshot (i.e a consistent codebase)#2017-05-0818:10val_waeselynckThe reason for that is that we reason about writing data in terms of incremental events, whereas we reason about writing code in terms of a coherent whole#2017-05-0818:10val_waeselynckat least that's what I do 🙂#2017-05-0818:12nikkii feel like diffing should be AST-level (CST-level) in any case#2017-05-0818:13val_waeselynck@nikki yeah would love that especially for lisps 🙂#2017-05-0818:13val_waeselynckBut I don't know if there are diffing algorithms for tree structures#2017-05-0818:13val_waeselyncknot an expert at all#2017-05-0818:14val_waeselynckThe thing is, we do like our indentation, so a diffing/merging algorithm must preserve more structure than just the CST#2017-05-0822:14nikkilol i'm actually thinking about an AST editor#2017-05-0822:14nikkiThat visualizes the code in a touch interface or something#2017-05-0822:39mac01021Hello everyone.
I watched the “Day of Datomic” videos with the presentations by Stuart Halloway.
In the section on query, he briefly discusses a query that looks something like this.
[:find ?item ?price
:where [?item :product/costs ?price]
[(> 256 ?price)]]
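(Editor's aside on the discussion that follows: when the attribute carries a value index, a range like this can be served from the sorted AVET index rather than by evaluating the predicate on every datom — e.g. via datomic.api/index-range, which favila points to below. A sketch; `db` is assumed, and :product/costs must have :db/index true:)

```clojure
;; Sketch only: returns the datoms whose :product/costs value lies in
;; [start, end), read straight from the AVET index; a nil start means
;; "from the beginning", so this covers prices below 256 without a scan.
(require '[datomic.api :as d])

(d/index-range db :product/costs nil 256)
```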
Now, it seems clear to me that, if you have a billion products in your DB, then several gigabytes of data need to flow over the wire to the peer in order to process this query (assume a cold cache) even if the result set is only going to contain a few items.
Am I wrong? If so, how is this avoided?
Otherwise, what is the typical approach to solving this problem when using Datomic?#2017-05-0822:54favila@mac01021 Mac you are correct, but this is by design.#2017-05-0822:55favilaConsider: 1) network is faster than disk, 2) only the peer running a query feels the load of the query#2017-05-0822:56favilaavoiding network traffic with your db on the fast datacenter network is rarely the scalability problem#2017-05-0823:07mac01021#2 is a clear advantage.
But with most databases that I might use, the data are indexed in such a way that I could run this query without ever inspecting the vast majority of the records (via network, disk, or any other mechanism). So the time to arrive at your result set, assuming it is sufficiently small, will be logarithmic in the number of records. (Hurray for B-Trees!)
Does Datomic really provide no way to perform a range query without evaluating a predicate on every single item in the database?#2017-05-0823:15favilarange queries do not examine every item#2017-05-0823:15favila(necessarily)#2017-05-0823:15favilae.g. your example will array-bisect#2017-05-0823:15favilaassuming :product/costs has a value index#2017-05-0823:16favilasee also the datomic.api/index-range function#2017-05-0823:17mac01021ok awesome. Thank you#2017-05-0909:25tomtauIs there a way to set "Access-Control-Allow-Headers" in the bundled REST server?#2017-05-0914:59souenzzoHello
(d/transact conn (some-fn)) throws: clojure.lang.ExceptionInfo: Interceptor Exception: clojure.lang.LazySeq cannot be cast to java.util.Map
but
(d/transact conn (vec (some-fn))) works. Is this expected? Is it a bug? I'm using lazy seqs in many other places with no problem
some-fn is (concat [{:map :form}] (function-that-returns-array-of-db-add))#2017-05-0915:02favila@souenzzo I would inspect the output of (some-fn) carefully#2017-05-0915:02souenzzoHow should I inspect?#2017-05-0915:02favilaprint it#2017-05-0915:03favilathe error is consistent with some inner item not being a vector or map#2017-05-0915:03favilanot a problem with the outer level#2017-05-0915:04favilalazy seqs are not a problem on the top level#2017-05-0915:04souenzzo(into [{}] (function...)) works. But I will debug 😉#2017-05-0915:05favilaso I am asking you to verify your assumption that (some-fn) is truly a lazy-seq of vectors or maps#2017-05-0915:05favilareally that (function-that-returns-array-of-db-add) doesn't have a lazyseq as one of its items#2017-05-0915:07favilaoh, could also be that d in d/transact is not peer api?#2017-05-0915:07favilamaybe client api is more restrictive. (I have no client api experience)#2017-05-0915:16souenzzod/transact was "traditional" datomic.
(let [a (concat ..)
      b (vec a)]
  (println "-----")
  (println (type a))
  (println (class a))
  (println (mapv type a))
  (println (type b))
  (println (class b))
  (println (mapv type b))
  (println "-----")
  a)
Outputs:
-----
clojure.lang.LazySeq
clojure.lang.LazySeq
[clojure.lang.PersistentArrayMap clojure.lang.PersistentVector clojure.lang.PersistentVector clojure.lang.PersistentVector]
clojure.lang.PersistentVector
clojure.lang.PersistentVector
[clojure.lang.PersistentArrayMap clojure.lang.PersistentVector clojure.lang.PersistentVector clojure.lang.PersistentVector]
-----
#2017-05-0915:20favila@souenzzo have you looked at lower depths? what is the problem with just printing it for yourself and looking?#2017-05-0915:20souenzzo(defn function-that-returns-array-of-db-add
[x]
(mapcat (fn [a] [[:db/add a b c]]) x)
)
(internal mapcat fn was really returning a tx-data with a single, write as [[:db/add]]..)#2017-05-0915:21favilaok I'm going to try it too#2017-05-0915:23favila(d/transact c (concat [{:db/doc "1"}] [[:db/add "2" :db/doc "2"]]))
=>
#object[datomic.promise$settable_future$reify__7008
0x503b2908
{:status :ready,
:val {:db-before datomic.db.Db @67615b66,
:db-after datomic.db.Db @a034e1ff,
:tx-data [#datom[13194139534312
50
#inst"2017-05-09T15:23:29.077-00:00"
13194139534312
true]
#datom[17592186045417 62 "1" 13194139534312 true]
#datom[17592186045418 62 "2" 13194139534312 true]],
:tempids {-9223301668109598144 17592186045417,
"2" 17592186045418}}}]
#2017-05-0915:23favilathat matches my experience#2017-05-0915:24favila(although usually I'm using map or partition)#2017-05-0915:25favila@souenzzo does the above give the same result for you?#2017-05-0915:36souenzzoI'm trying to reproduce it in a minimal/isolated scenario and can't reproduce it either =/#2017-05-0915:45souenzzoI am out of time 😞 sorry.
I will use into for now#2017-05-0915:49favila@souenzzo The reason I say look at the output carefully is I am worried that (vec) is masking some problem with your tx nested deep somewhere#2017-05-0915:49favilai.e. that "not getting an exception" != "works"#2017-05-0915:50favilathe fact that you can't repro with toy examples makes me nervous for you#2017-05-0915:51souenzzotx was working (On integration test, everything ok)#2017-05-0920:04csmis anyone running peers in a different AWS region than DynamoDB? Or is that a non-starter?#2017-05-0920:16stuartsierra@csm I would not recommend that. Behavior would be unpredictable with respect to caching and query behavior.#2017-05-0920:17csmI thought as much. Thanks!#2017-05-0921:42erichmondcan you restore a database to datomic:ddb-local ?#2017-05-1012:52jaret@erichmond Yes you can restore your DB to local dev etc or to any protocol.#2017-05-1107:48jamesDoes anyone know the best practice for naming booleans in Datomic? Is it possible to follow the Clojure practice of using “?” at the end, or are you using an “is-” prefix, or is some other approach preferred?#2017-05-1121:43timgilbertjames: I don't know if it's a best practice or not, but we've been using :user/disabled? and the like for boolean attributes and I like the way it looks. It also destructures nicely into local bindings named disabled? which seems like a plus: (let [{:keys [:user/disabled?]} query-result] ... (if disabled? foo bar))#2017-05-1206:53james@U08QZ7Y5S Thanks for letting me know, and I like your approach.#2017-05-1114:35tjtoltongood question, I've always kind of wondered that.#2017-05-1116:15bmaddyI have some related entities in datomic:
{:db/id 1
:related
({:db/id 2 :name "B"})}
I’d like to do two different things to it. Replace the related entities (and retract the old ones):
{:db/id 1
:related
({:db/id 3 :name "C"}
{:db/id 4 :name "D"})}
and remove all related entities:
{:db/id 1
:related ()} ;; yes, I'm aware this wouldn't technically exist anymore. I'm just printing it here to help describe what I'm looking for.
I tried replacing them like this:
(d/transact conn [{:db/id 1
                   :related updates}])
but that just appended new ones.
What’s the best way to go about doing these in datomic? Do I really need to query the db again to find all the related items and retract them manually? If so, can I do that in a transaction somehow so that I can be sure another process doesn’t add another related entity at the same time?#2017-05-1116:18favilaDatomic only transacts changes (add/retract), not replacements (make the entity look like this). You have to emit retractions if you want to remove items#2017-05-1116:19favilaYou can do this by query, compare, retract in the peer, but that is subject to race conditions (by the time transaction arrives at the txor, something else may have been added)#2017-05-1116:20favilaYou can also use a db function like this: https://gist.github.com/favila/8ce31de4b2cb04cf202687c6a8fa4c94 (There are other ones out there which are a little smarter)#2017-05-1116:21favilathat eliminates the race since the work happens in the transactor. But you still need to think about who "wins" if different peers assert different things independently#2017-05-1116:21favilae.g., do you really always want "last writer wins" semantics at entity+attribute granularity?#2017-05-1116:22favilathese are questions the app needs to answer for itself#2017-05-1116:23favilaWhatever policy you finally choose you can write transaction function interfaces to enforce them#2017-05-1116:27bmaddyAre there any of these functions built in or do you always need to write them yourself? (a quick search yielded no results)#2017-05-1116:27favilano, none are built-in#2017-05-1116:28favilaonly :db.fn/cas and :db.fn/retractEntity are built-in#2017-05-1116:28bmaddyOk, thanks for the help--I appreciate it!#2017-05-1118:49erichmondSorry guys, one other question. I am trying to do a restore-db to a ddb-local instance, and the writes to the transactor are happening so fast I am losing the heartbeat. I am experimenting with the -Ddatomic.s3BackupConcurrency and -Ddatomic.backupPaceMsec system properties. 
Are these the right things to be tinkering with?#2017-05-1118:52marshallrestore doesn’t require a transactor#2017-05-1118:52marshall(except for dev)#2017-05-1118:52marshallthe process running the restore writes directly to storage#2017-05-1121:31val_waeselynck@bmaddy you may be interested in Datofu https://github.com/vvvvalvalval/datofu#resetting-to-many-relationships#2017-05-1121:31val_waeselynck(disclaimer: I'm the author)#2017-05-1121:37bmaddy@val_waeselynck Thanks--I’ll check that out!#2017-05-1201:09wistbhi. Our application uses a sql db (postgress/oracle) and mongo. One of the concerns we hear from customers is, they dont want to deal with multiple databases (different database admins).. we store json documents into Mongo. We are thinking of datomic for other considerations. But, question remains whether we can remove Mongo from the application and use datomic for that purpose too. Any suggestions ?#2017-05-1207:08val_waeselynck> and use datomic for that purpose too
@wistb I don't understand, what purpose is this?#2017-05-1207:21dominicm@wistb it depends. Datomic isn't a document store where you can invent new keys on the fly. Datomic is a graph database where any entity can have any attribute that's already defined up front in the schema.
The other drawback for your scenario is that you're turning a setup that looks like:
http://www.plantuml.com/plantuml/png/IqaiIKnAB4vLyCtFIy_dIe5n0_ABIzABKekvkA8T2mfoCfCJIpBpys8LT5Foo_DqxQ3AiSl1z080
and turning it into:
http://www.plantuml.com/plantuml/png/IqaiIKnAB4vLS4aioS_DJEPAWGa4v1UNf1Ub5dDnHJiM5EHa9YUMPERdnIhefkINv-dQmJL0QRWuJ1y0
which is still the same number of nodes, even if in a different hierarchy. So it may not solve your customers' problems.#2017-05-1218:14wistb@dominicm You are right about "the number of database boxes is still 2", but, is it true that there is a 'datomic admin' role (in the same meaning of 'oracle admin')?#2017-05-1218:15dominicm@wistb Hmm, I'm not sure. In my experience, the dbas are the developers.#2017-05-1218:17wistbalso, the role of 'Oracle admin' is different from traditional, right? in that, there is no schema, triggers, procedures that need admin's attention/approval#2017-05-1218:17wistbthe role is limited to 'operations' only.#2017-05-1218:17dominicm@wistb there will be an operations role on datomic#2017-05-1218:19wistbright. And, it is most likely handled by the product company (rather than the customer who bought the product)#2017-05-1218:19wistbIn that sense, the number of "admin" resources the customer needs to worry about is only one.#2017-05-1218:21wistbbut, because datomic is not a document store, I probably cannot eliminate mongo.#2017-05-1219:08souenzzo(d/transact conn [{:my-attr/type-ref-to-may [{:other-entity 123}]}]) now works? Are there docs about that?#2017-05-1314:22ezmiller77@souenzzo I think you might be talking about nested component entities? If so, see here for one: http://blog.datomic.com/2013/06/component-entities.html#2017-05-1314:26ezmiller77I'm trying to connect to a transactor deployed to AWS (e.g. "datomic:ddb://us-east-1/<system>/<db>..."). When I do this in the repl located in my datomic folder it works, but when I do it in my local project folder in the repl I get this error:
CompilerException java.lang.NoClassDefFoundError: Could not initialize class datomic.ddb_cluster__init, compiling:(form-init1336530578825920965.clj:1:11)
My first thought was that there was something wrong with the datomic project imported into my project, but connecting to a dev transactor works just fine.#2017-05-1314:27ezmiller77Anyone have any thoughts?#2017-05-1316:06favilaI think you now need to manually include the lib for the storage you require @ezmiller77 #2017-05-1316:18ezmiller77much thanks @favila. that was it! http://docs.datomic.com/storage.html#sec-6-1#2017-05-1321:55souenzzoawesome that cas allow from nil to :value!!!!#2017-05-1323:54devnI don't suppose anyone from the Datomic team is around?#2017-05-1400:29devn--#2017-05-1400:29devnWhere did I read that some attributes which used to be required are now optional?#2017-05-1400:37marshall@devn probably here http://blog.datomic.com/2016/11/datomic-update-client-api-unlimited.html?m=1#2017-05-1400:37devn@marshall thanks#2017-05-1400:38marshallImplicit install and update schema as well as string tempids#2017-05-1401:12souenzzo:db.fn/cas don't suport lookup ref's????
[:db.fn/cas eid :my-ref-attr from to] fail
[:db.fn/cas eid :my-ref-attr [:db/ident from] to] fail
[:db.fn/cas eid :my-ref-attr (d/entid db from) to] ok
Is there some reason why? How do I open a "feature request"? 😄#2017-05-1518:35souenzzoIf someone else is interested:
https://receptive.io/app/#/case/26858#2017-05-1509:32josephHi, does anyone have an idea about how to create a database with the datomic client api, since the client has to connect to a peer server first, which requires a database name to be started...#2017-05-1509:32josephor I have to create the database in other ways before using the client api#2017-05-1512:29anders@joseph (d/create-database "datomic:) will create the db if it doesn't already exist.#2017-05-1513:03josephso still use the peer api to create the database...#2017-05-1514:27pesterhazyyou can probably use the datomic shell (can't remember exactly what it's called)#2017-05-1514:27pesterhazyi.e. a shell script that (I think) uses the peer library#2017-05-1518:25jaret@souenzzo To open a feature request navigate to https://my.datomic.com/account and once you are logged in you should see the "suggest feature" option on the top right, next to log out.#2017-05-1603:12wistbis datomic a good fit for managing trees of data (predominantly)? There are relations between the data at different levels too.#2017-05-1605:28val_waeselynck@wistb so DAGs actually :)#2017-05-1605:29val_waeselynckI'd say definitely yes for reading#2017-05-1605:29val_waeselynckTo be seen for writing#2017-05-1605:30val_waeselynckWould be easier if you gave us examples of data, reads and writes#2017-05-1606:15wistbhave you seen anyone replacing a hibernate layer with datomic? Our situation is like that: we use spring/jpa/hibernate/postgres/oracle. In the interest of keeping the business logic intact, we are wondering if we can replace the ORM with datomic ....#2017-05-1607:43val_waeselynck@wistb AFAIK there are no Datomic ORMs (Datomic is not 'R', and its community is not too fond of 'O' 😉 ). Reimplementing all of JPA on top of Datomic would probably be a huge endeavour IMO. Out of curiosity, what leverage do you expect from Datomic if you're planning on 'hiding' it behind a JPA-like interface? 
Application logic is where the specifics of Datomic usually shine (datalog, rules, pull, entity API, non-remote querying etc.)#2017-05-1607:43val_waeselynckAlthough I do understand the desire to make a smooth transition#2017-05-1607:49augustlfacts map to objects better than tables I suppose, so it should definitely be doable#2017-05-1607:58val_waeselynckthe read interface shouldn't be too hard, I guess you can have each Entity class have a private datomic.Entity field and implement the getters on top of that#2017-05-1607:58val_waeselynckwriting is probably more problematic#2017-05-1610:53Petrus Theron(Datalog beginner) Is there a reason why Datomic/Datomic does not support hash-map return values, e.g. [:find {:id: ?e :title ?title} :where [?e :product/title ?title]] => [{:id 1789... :title "Title 1"} ...] My first guess would be to do with subquerying and set uniqueness#2017-05-1610:55karol.adamiecuse [:find [(pull ?e [:id :title]) …]#2017-05-1611:05Petrus TheronThanks @karol.adamiec! 🙂#2017-05-1612:32Petrus TheronDate/instant attribute naming best practice: :thing/arrival-date vs :thing/arrived-on vs :thing/arrived-at?#2017-05-1613:05souenzzo#subscribe this doubt#2017-05-1614:01val_waeselynckI have a slight preference for :thing/arrival-date which is more informative re: type.#2017-05-1614:02val_waeselynckThe noun vs verb debate is not a big deal IMHO - clarity and searchability are the important concepts#2017-05-1616:55devthI intermittently get:
Exception in thread "main" clojure.lang.ExceptionInfo: Error communicating with HOST 0.0.0.0 or ALT_HOST datomic on PORT 4334
when a peer / kubernetes replica starts up and attempts to connect, using SQL storage with pro license. sometimes kubernetes will restart it up to 4 times after it crashes, but it always eventually connects. is this typically thrown if it can't establish network or are there other reasons? should my peer retry a few times before crashing?#2017-05-1616:59favilaThat means peer connected to storage, but could not connect to transactor via the provided "host=" or "alt-host=" config values set in the transactor's transactor.properties file @devth#2017-05-1616:59devthgot it. strange, since i know the transactor is up.#2017-05-1616:59favilathe host=0.0.0.0 sounds just bad#2017-05-1617:00favilaalt-host=datomic relies on "datomic" resolving correctly#2017-05-1617:00favila(from the peer's perspective)#2017-05-1617:00devthi think 0.0.0.0 is because i don't know the IP it will get#2017-05-1617:00devthand datomic does resolve from the peer#2017-05-1617:00devth(kubernetes dns)#2017-05-1617:00favila0.0.0.0 is only for peers#2017-05-1617:00favilaso it's not really useful#2017-05-1617:00favilaunless the peer is on the same machien#2017-05-1617:01devthso i guess it's not used at all#2017-05-1617:01favilanm, I take that back#2017-05-1617:01favilait might be used for the bind address#2017-05-1617:02devthi thought i needed it to be 0.0.0.0 in order to work, but it's been awhile since i first set it up on k8s#2017-05-1617:02devthmight try wrapping it in a retry#2017-05-1617:02devthdon't understand why it intermittently fails#2017-05-1617:03favilado you know that datomic is resolveable right away?#2017-05-1617:03favilathe name "datomic" I mean#2017-05-1617:03favilaresolveable and routable#2017-05-1617:03devthit should be but i should verify that#2017-05-1617:04favilahow is storage resolved? 
also a kubernetes dns name?#2017-05-1617:05devthkubernetes pods are configured to use an internal dns server#2017-05-1617:05devthstorage (sql) goes through a proxy (google cloud sql proxy) on localhost to the cloud mysql instance#2017-05-1617:06devthdns is configured automatically on google container engine. not something i touch#2017-05-1617:06favilaso the connection string is always something like datomic:?#2017-05-1617:07devthright#2017-05-1617:07favilaand the proxy has an IP address, or a name?#2017-05-1617:08devthdatomic:
that's what i'm using#2017-05-1617:08devthMYSQL_HOST is always localhost:3306#2017-05-1617:08devthwhich is a process running inside the pod using a docker image provided by google#2017-05-1617:08devthit uses credentials to find and access the cloud sql instance#2017-05-1617:08devthgiven a resource name#2017-05-1617:09favilawhat I mean is, how does the proxy get the destination address? I am seeing if these are determined via different systems, which would make it possible that one system is up and the other is not yet#2017-05-1617:10devthyeah two separate systems#2017-05-1617:10favilae.g. if proxy forwarded to hardcoded IP (likely for google cloud mysql), then maybe dns is just not up yet#2017-05-1617:10devthi don't know how google's cloud sql proxy works internally#2017-05-1617:11devthif anything i would suspect kubernetes dns resolution#2017-05-1617:11devththe proxy has worked flawlessly for other apps#2017-05-1617:12devthi can ping datomic on startup to see if it resolves to an IP#2017-05-1617:13devthi wish there was an http endpoint on the transactor i could curl though#2017-05-1617:13favilayes the proxy is definitely working#2017-05-1617:13favilathat's how the peer got the "datomic" name in the first place#2017-05-1617:13favilainteresting I didn't know about this proxy#2017-05-1617:13devthhttps://github.com/GoogleCloudPlatform/cloudsql-proxy#2017-05-1617:14favilawe use ssl auth directly with could mysql#2017-05-1617:14devthah#2017-05-1617:14devthalso https://cloud.google.com/sql/docs/mysql/sql-proxy#2017-05-1617:14favilayeah I looked it up#2017-05-1617:15favilaI suspect it uses the gcloud http apis#2017-05-1617:15favilabut no idea how those resolve#2017-05-1617:15devthright. 
might be internal dns inside google network#2017-05-1617:15favilayeah#2017-05-1617:15favilamagic *.internal names even#2017-05-1618:18wistb@val_waeselynck I liked this : "Application logic is where the specifics of Datomic usually shine (datalog, rules, pull, entity API, non-remote querying etc.)" . It is nice take-away point .#2017-05-1619:25eoliphantHi, I have an architecture question. I’ve a Datomic/Clojure based microservice. Some of its transactions are ‘domain events’ that I want to shoot off to kafka. I identify them with an attribute on the relevant transaction entity I’d been playing around with Onyx, it’s cool, but I’m starting think it might be overkill for this use case which is really just grabbing relevant transactions, mapping their attributes to the event structure and sending them on their way to kafka.
I’ve been looking at datomic’s TX-report-queue, as it looks like a listener on that guy would pretty much meet my needs. But i’m not clear on some of its semantics. it seems like each peer gets it’s own queue? if so then I’d potentially be processing (num peers) copies of the same transaction/event.#2017-05-1620:50kennyIf every entity needs a globally unique identifier, do you guys prefer to add it under an entity namespaced attribute (e.g. for your product entity :product/id or organization entity :organization/id) or globally understood name (e.g. every entity have the attribute`:entity/id`)?#2017-05-1620:51augustlI prefer to have separate ID attributes per "type" of entity, because it allows me to query for them more easily#2017-05-1620:51augustlI don't have to "duck type", I can just look specifically for :product/id 123. Otherwise I'd have to query for :generic-id 123 and then later figure out if facts for that entity is of the type I expect#2017-05-1620:58kenny@augustl Wouldn't your query effectively be the same either way?
'[:find ?name
  :in $ ?id
  :where
  [?e :product/id ?id]
  [?e :product/name ?name]]
'[:find ?name
  :in $ ?id
  :where
  [?e :entity/id ?id]
  [?e :product/name ?name]]
#2017-05-1621:01augustlI guess it depends on how you like to structure your queries, yeah#2017-05-1621:01augustlI like to just query for the id and then build entity objects, not pull out attributes in the query#2017-05-1621:02kennyStill seems pretty similar 🙂
(:product/name (d/entity db [:product/id my-id]))
(:product/name (d/entity db [:entity/id my-id]))
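With either attribute choice, resolving the lookup ref first makes a missing entity explicit: d/entid returns nil when no entity has that attribute value, whereas d/entity on a bare eid always returns an entity map. A sketch reusing the names from this discussion (not runnable without a Datomic peer on the classpath):

```clojure
(require '[datomic.api :as d])

;; Sketch: return the entity only when the lookup ref actually resolves.
;; The same pattern works with [:entity/id my-id]; here the attribute
;; keyword itself carries the "type" information augustl is after.
(defn find-product [db my-id]
  (when-let [eid (d/entid db [:product/id my-id])]
    (d/entity db eid)))
```
A nil result can then be mapped straight to a 404 without inspecting individual attributes.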
#2017-05-1621:02augustlwith the latter, you wouldn't know what "type" of entity you would get though#2017-05-1621:03augustlso your URL could be /people/5 and that could be an ID of a product, not a person, and you'd still get data from the query#2017-05-1621:03kennyIsn't the type implicit with the entity attributes? i.e. Because the entity has the :product/name attribute it is therefore a Product entity.#2017-05-1621:04kennyIt wouldn't make much sense if a Person had a :product/name attribute.#2017-05-1621:04augustlin that case, where you only ask for the product name, that makes sense#2017-05-1621:04augustlbut if you want a full collection of attributes you'd just end up with a sloppy system, I'd say#2017-05-1621:05augustlwhere you would get a person with the id 5 (even though 5 is a product) and all the attributes would be nil#2017-05-1621:05augustlmakes more sense to 404 in that case#2017-05-1621:08kennyI think you'd still be covered tho':
(let [{:product/keys [id] :as e} (d/pull db '[*] [:product/id my-id])]
  (if id
    {:status 200 :body e}
    {:status 404}))
(let [{:product/keys [name] :as e} (d/pull db '[*] [:entity/id my-id])]
  (if name
    {:status 200 :body e}
    {:status 404}))
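The :product/keys destructuring in the snippet above is plain namespaced-key destructuring (Clojure 1.9+), runnable without Datomic:

```clojure
;; {:product/keys [id name]} binds the name part of each namespaced key.
(let [{:product/keys [id name]} {:product/id 7 :product/name "Widget"}]
  [id name])
;; => [7 "Widget"]
```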
#2017-05-1621:09augustlif all your "product" entities has a name attribute, that would work, yeah#2017-05-1621:10kennyAnd you wouldn't end up with a map of nils. From the docs:
> attribute specifications that do not match an entity are omitted from that entity's result map#2017-05-1621:11augustlI guess it's just a matter of taste#2017-05-1621:11augustlchecking for a "name" feels like duck typing, and I'd prefer an "actual" type#2017-05-1621:13kennyI think I see what you're getting at: you just want a globally consistent key to check for an entity's type?#2017-05-1621:13augustlyeah#2017-05-1621:14augustland you can use lookup refs more easily#2017-05-1621:15augustl(d/entity db [:product/id "123"])#2017-05-1621:15kennyHow's that different than (d/entity db [:entity/id my-id])?#2017-05-1621:15augustlthe built in type check 🙂#2017-05-1621:16augustlno additional check needed to verify that it's actually a product#2017-05-1621:17kennyIIRC, d/entity will always return an entity even if the entity does not exist. So, you'd still need the if.#2017-05-1621:20kennyYeah, you'd still need to check if pulling :product/id off the entity was nil before proceeding.#2017-05-1621:31souenzzo(transact conn (into #{[:db/add (d/tempid :db.part/tx) :audit/user x] [:db/add (d/tempid :db.part/tx) :audit/ns y]} other-tx)) resolve both tempid to the same "transact-id". It's a feature? Should I use it??#2017-05-1621:39favila@souenzzo there can only be one tx tempid id per tx. 
It's handled specially#2017-05-1621:40favila@souenzzo you can also use the string "datomic.tx" in recent datomics#2017-05-1621:42souenzzo"datomic.tx" is a special string that generates :db.part/tx tempid?#2017-05-1621:54favilayou can now use a string as a tempid, normally the tempid is in the :db.part/user partition, except "datomic.tx", which is in the :db.part/tx partition#2017-05-1621:55favilait is a tempid which always resolves to the currently executing transaction entity#2017-05-1622:44spiedenwow cool#2017-05-1702:17kennyIs it possible to use a partition in the same transaction the partition is created in?#2017-05-1702:28jaret@kenny no, you must create the partition prior to using the partition#2017-05-1702:28kenny@jaret thought so. Thanks.#2017-05-1709:35ezmiller77Anyone know what's a good, cheaper, ec2 instance that works for a AWS deployed datomic transactor? The default is c3.large which is a bit expensive for my little project.#2017-05-1709:36ezmiller77I looked around and it sound as though some ec2 instances won't work, but I couldn't quite determine what the requirements are.#2017-05-1712:47val_waeselynckezmiller77: I think it's about access to the local filesystem#2017-05-1713:07stuartsierra@ezmiller77 It's possible to run a Transactor on an EC2 micro, though only for tiny workloads. A 'small' instance works for light load, you just have to set the heap and cache sizes correctly on the transactor.#2017-05-1805:45ezmiller77stuartsierra: Thanks for the response. My little project just serves results to a single blog (my own), which is not exactly getting hit by a lot of requests 😉. I suppose that qualifies as a tiny workload, no?#2017-05-1714:21devthhey @marshall + @jaret – were you able to find out any more info on the issue where tx-range can potentially produce a clojure.lang.Delay? referring to: https://clojurians-log.clojureverse.org/datomic/2016-10-04.html#inst-2016-10-04T18:16:46.002099Z
i'm seeing this consistently using onyx-datomic on a fresh datomic-pro 0.9.5561 db with schema installed and about 500 generated entities.#2017-05-1720:06yedihey @all, I work at Indaba Media, and we're customers of Datomic.#2017-05-1720:07yediwe just had an issue where we forgot to set some attributes to be fulltext indexed, and apparently you can't alter fulltext-indexing status through a migration.#2017-05-1720:07yediSo our current plan is to rename the ident for that attribute to something else. then reinstall that ident with fulltext-indexing set. Then port the data we currently have over from the old renamed ident to this new one.#2017-05-1720:07yedia couple questions: 1. is this a decent strategy?#2017-05-1722:44val_waeselynckyedi: why not simply create a new attribute, migrate the data over to it, and change the client code to use that one? Seems easier operationally#2017-05-1814:01yedi@U06GS6P1N we're only in alpha currently and don't have much data for this attribute. So i'm thinking preserving the name is worth it as long as there's no major operational issues with this method#2017-05-1816:39val_waeselynckwell I'm not sure what you suggest is feasible without an interruption of service#2017-05-1816:42val_waeselynckand a breaking change in your database e.g you won't be able to go back to previous versions of the code.#2017-05-1720:07yediand 2. we have other fields that we might want to make fulltext indexed, but we're not currently sure about them yet. Since it isn't trivial to make them fulltext-indexed down the line, we're considering just making them indexed from the get go. What are the performance trade-offs for having a bunch of fulltext-indexed attributes?#2017-05-1805:36val_waeselynckI'm about to start coding a library which reimplements the Entity and Pull APIs to support derived data (derived attributes & getters). 
Before I dive in, is anyone working on this already?#2017-05-1808:58laujensenI'm running a Datomic/MySQL service, which in the course of 6 months has produced an inno-db file weighing in at 26gb. That seems excessive to me. Is MySQL a bad fit or is this to be expected?#2017-05-1809:08val_waeselynck@laujensen do you gcStorage on a regular basis?#2017-05-1809:09laujensen@val_waeselynck Never. I understand it to remove history beyond a certain point.#2017-05-1809:11val_waeselynckNo, it doesn't delete data. It can only mess around with Peers which hold an old db value. But if you gcStorage from 1 week ago you should be safe (unless you have some process which holds on to some db value for longer than a week, which seems unlikely 🙂 )#2017-05-1809:11laujensen@val_waeselynck Aha... I'll give it a try. Sounds like something you would run weekly then?#2017-05-1809:12val_waeselynckyes that seems like a sane default.#2017-05-1809:13val_waeselynckyou should try that and see if your inno-db file gets smaller. Having said that, one of the design choices of Datomic is to store a lot of things redundantly, the underlying assumption being that storage is cheap.#2017-05-1809:14laujensen@val_waeselynck And that makes sense, but I still want to retain some control over storage consumption. Right now it looks like it's growing exponentially#2017-05-1809:15val_waeselynckmaybe your business is too 🙂#2017-05-1809:15val_waeselynckI don't know that there are any knobs for that. Maybe you store more things than you intend, or have a lot of unneeded updates for the same data?#2017-05-1809:17val_waeselynckOne thing you can do is avoid unnecessary indexes (see the :db/fulltext and :db/index options) but it's best to anticipate that ahead of time#2017-05-1809:17val_waeselynckhttp://docs.datomic.com/schema.html#2017-05-1809:18laujensen@val_waeselynck Well, yeah, I guess the business is too. 
But dumping the entire DB without history is just 5% of the total data consumed now.#2017-05-1809:18laujensenThe gc is running now, is there a way to monitor its progress?#2017-05-1809:18val_waeselynckyou can also set :db/noHistory on some high-churn attributes#2017-05-1809:18val_waeselyncknot expert enough for that sorry#2017-05-1809:19val_waeselynckwhat I would do is look at the Transactor and Storage metrics, the activity of gcStorage may be visible there#2017-05-1809:21laujensenIt's only run for a couple of minutes, but it's already consumed 1gb of disk space 🙂#2017-05-1809:46laujensenOddly enough, checking the log the gc cycle completes in less than a minute#2017-05-1809:46laujensenAnd instead of freeing up disk space, it consumed it#2017-05-1810:15val_waeselynckweird#2017-05-1810:15val_waeselynckmaybe you need to run additional MySQL-specific gc#2017-05-1810:15val_waeselyncksadly I'm really no help in that regard#2017-05-1813:01marshall@laujensen You can determine the “real” amount of space required for a Datomic DB by running a backup and calculating the size of the resulting backup dir on disk
The difference between that and used storage space will be made up of recoverable storage garbage, unrecoverable storage garbage, and storage-specific overhead#2017-05-1813:01marshallthe first of those can be resolved with gcStorage#2017-05-1813:02marshallthe storage-specific overhead has to be reclaimed via the storage with something like Postgresql VACUUM or MySQL OPTIMIZE TABLE#2017-05-1813:04marshallalternatively if you can tolerate the downtime you can backup your DB, restore into a NEW backend storage instance and switch over your system. This approach will remove all types of garbage#2017-05-1813:05robert-stuttafordi turned my local from 30gb to 6gb by backing up, deleting, and restoring my local 🙂#2017-05-1813:05robert-stuttafordthat’s about 8 months’ accretion of garbage segments#2017-05-1813:05marshall@robert-stuttaford do you run gcStorage regularly?#2017-05-1813:05robert-stuttafordnot at all#2017-05-1813:05robert-stuttaford… we really should#2017-05-1813:05marshallindeed 🙂#2017-05-1813:06robert-stuttafordthis is my local machine, with multiple successive restores. so all the previous restores’ garbage is now unreachable by a gcStorage#2017-05-1813:06robert-stuttafordbut it’d be worth doing on our production storage for sure#2017-05-1813:06marshallah. yeah it’s probably not worth gcStorage on a local restore#2017-05-1813:06marshalljust blow away the data dir (if you’re using dev)#2017-05-1813:06marshallbut, yes, you should run it in prod#2017-05-1813:59jfntnCan db.type/uuid somehow be used as a valid :db/id value or do I need to create a new attribute?#2017-05-1814:21matthavener@jfntn: new attribute, :db/id’s are provided by the transactor.. you can’t choose them#2017-05-1814:22matthavenerbut you can use a stringized uuid as a tempid if you just need to generate a unique :db/id for adding facts#2017-05-1814:38laujensen@marshall thanks for weighing in. 
Im on the trail now but need to migrate to another host before I can kick it off.#2017-05-1814:46jfntn@matthavener makes sense, thanks#2017-05-1821:18unbalanceddoes this look right to anyone? trying to get a transactor/peer/etc running locally#2017-05-1821:18unbalanceddatomic:sql://<DB-NAME>?jdbc:#2017-05-1821:18unbalancedI feel like it shouldn't say <DB-NAME> there 😅#2017-05-1821:27favila@goomba that is the right pattern for a datomic.api/connect call#2017-05-1821:27favilareplace <DB-NAME> with the name of the datomic database you want#2017-05-1821:28unbalancedokay, so that looks normal for the transactor to print as the URI when you run it?#2017-05-1821:28favilayes#2017-05-1821:28unbalancedok, phew#2017-05-1821:43unbalancedalright making some progress... so close... trying to run the peer and getting this#2017-05-1821:43unbalancedAccess denied for user 'datomic'@'localhost'#2017-05-1821:43unbalancedrunning the following command#2017-05-1821:43unbalancedbin/run -m datomic.peer-server\
-h localhost \
-p 8998\
-a myaccesskey,mysecret \
-d datomic,datomic:
#2017-05-1821:52unbalancedI can connect just fine if I run mysql -udatomic -pdatomic datomic#2017-05-1822:26unbalancedwait, is <DB-NAME> the name of the mysql-database name or should it be mysql?#2017-05-1822:35shaun-mahoodI've been going through the Datomic docs looking for any specifics on requirements for running it on PostgreSQL. I haven't been able to find anything specific on requirements - am I safe to use Datomic on any reasonable installation of PostgreSQL or are there specific flags or settings I should be using that I missed in the documentation? This is going to be for experimentation to start but ideally will move into a real project soon.#2017-05-1823:05favila@goomba the mysql database name is "datomic"#2017-05-1823:05favilathat's at the end#2017-05-1823:05favilathat string is a template#2017-05-1823:06favilareplace <DB-NAME> with your datomic database name, not your mysql table name#2017-05-1823:06unbalanced😮#2017-05-1823:06favilaall datomic databases store data in the same mysql table#2017-05-1823:06favilamysql is being used as a key-value store for blobs#2017-05-1823:07favilanothing more#2017-05-1823:07unbalancedoh snap, how do I found out what I named my datomic database?#2017-05-1823:09unbalanced😅#2017-05-1823:09favilaAt some point you called (d/create-database "datomic:) or restored from a backup with a similar looking uri#2017-05-1823:10favilayou can also use http://docs.datomic.com/clojure/#datomic.api/get-database-names#2017-05-1823:10favilato list them all#2017-05-1823:12unbalancedfascinating#2017-05-1823:12unbalancedwell... 😕 seems like everything so far is correct then, must be some other error or SQL setting I'm missing#2017-05-1823:13unbalancedappreciate it 🙂 @favila#2017-05-1900:09favila@goomba ehat problem specifically are you having?#2017-05-1900:27unbalanced#2017-05-1900:27unbalanced:(#2017-05-1900:27erichmondQuick question. 
When I am trying to restore a database from a remote dynamo table to a local dynamodb-local table, I keep getting these sorts of errors Caused by: java.util.concurrent.ExecutionException: clojure.lang.ExceptionInfo: :restore/read-failed Unable to read 58de91f6-6e59-4284-ad6b-94c262caf019 {:key "58de91f6-6e59-4284-ad6b-94c262caf019", :db/error :restore/read-failed}#2017-05-1900:29erichmond2017-05-19 00:25:57.981 WARN default datomic.backup - {:message "error executing future", :pid 76, :tid 10}
java.util.concurrent.ExecutionException: clojure.lang.ExceptionInfo: :restore/read-failed Unable to read 58de91f6-6e59-4284-ad6b-94c262caf019 {:key "58de91f6-6e59-4284-ad6b-94c262caf019", :db/error :restore/read-failed}
#2017-05-1900:29erichmondetc#2017-05-1900:31erichmondI just have no idea what this error actually means, so it's hard to troubleshoot, any insight would be greatly appreciated#2017-05-1901:35favila@goomba does your MySQL give SELECT rights to user datomic and table datomic? Seems like it may not#2017-05-1901:36favilaTransactor needs select insert update delete#2017-05-1901:36favilaPeers just need select#2017-05-1901:57unbalancedIt does, strangely #2017-05-1901:59unbalancedI can login to the datomic db with mysql -udatomic -pdatomic datomic and select * from datomic_kv just fine #2017-05-1902:00unbalanced😅#2017-05-1902:00unbalancedIt's quite the mystery to me #2017-05-1912:35unbalancedokay what I've figured out so far is that the password parameter isn't being passed to MySQL for some reason#2017-05-1912:36unbalancedwhen I do tail -f /var/log/mysql/error.log I see the messages 2017-05-19T12:33:51.825580Z 316 [Note] Access denied for user 'datomic'@'localhost' (using password: NO)#2017-05-1912:38mgrbyte@goomba What works for us here is adding &useLocalSessionState=true - i forget the where and why this was needed.#2017-05-1912:39unbalancedworth a shot!#2017-05-1912:39unbalancedeh... what still seems to be the case strangely is that it's only picking up the first parameter I pass in for some reason#2017-05-1912:39unbalancedit seems to ignore the rest of them#2017-05-1912:40unbalanceddatomic: is what I just ran and it gives the same error 😅#2017-05-1912:42unbalancedIf I switch the order of parameters around, it reads the password but not the username facepalm#2017-05-1912:44mgrbytegoomba Is this the peer URI or in the transactor?#2017-05-1912:44unbalancedthis is a modification of what was given to me by the transactor that I'm trying to use to initialize the peer#2017-05-1912:45unbalancedso, peer URI 😄#2017-05-1912:46mgrbyteI wonder. In your URL you use "datomic" as the host, but the error is mentioning "localhost". 
Perhaps the hosts entries in the mysql auth tables are not matching?#2017-05-1912:48unbalancedhmm#2017-05-1912:48unbalancedwell it does say localhost:3306 in the string#2017-05-1912:49unbalancedbut I'm gonna glance at the grant table anyway#2017-05-1912:54mgrbytesorry. just saw your original message with the -m datomic.peer-server command#2017-05-1912:54unbalancedmarshall got me figured out#2017-05-1912:54mgrbyteHave you tried enclosing the value to -d in double quotes?#2017-05-1912:54unbalancedI wasn't escaping the ampersand#2017-05-1912:54mgrbyteah, cool#2017-05-1912:55mgrbyte👍#2017-05-1913:09unbalancedWoohoo! Got it sorted! Need to escape that ampersand! Everyone can rest easy now 🙂 I may have gotten a little hint from cognitect sales team about being a shell scripting noob hehehe 😅#2017-05-1916:19jfntnHas anyone tried to generate pull queries from s/keys specs?#2017-05-1916:33miikkajfntn: nope, but the basic (s/keys :req [::foo] :opt [::bar]) → [::foo ::bar] would be pretty easy with spec-tools (https://github.com/metosin/spec-tools)#2017-05-1916:33miikkauser=> (into [] (:keys (st/spec (s/keys :req [::foo] :opt [::bar]))))
[:user/foo :user/bar]
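Building on miikka's REPL session above, a minimal sketch of the flat case (spec-tools usage and key names as in the example; nested specs would need spec-tools' visit for recursive traversal):

```clojure
(require '[clojure.spec.alpha :as s]
         '[spec-tools.core :as st])

(s/def ::foo string?)
(s/def ::bar int?)

(defn keys-spec->pull-pattern
  "Top-level keys of a flat s/keys spec, as a Datomic pull pattern."
  [spec]
  (vec (:keys (st/spec spec))))

;; (keys-spec->pull-pattern (s/keys :req [::foo] :opt [::bar]))
;; => [::foo ::bar], per the session above
```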
#2017-05-1917:44jfntn@U1NDXLDUG actually tried that but I was getting nil keys, suspect it was because I was using 0.2 with alpha15 but not sure#2017-05-1917:44jfntnAnd you'd use visit to traverse recursively right?#2017-05-1918:06miikkaYeah, 0.2 needs alpha16, and yeah, visit for recursive traversing.#2017-05-1918:16jfntnSounds good, I’ll try again#2017-05-1916:29unbalancedalso would like to throw in the question queue has anyone used the datomic console with a sql based storage?#2017-05-1916:30unbalancedstruggling to figure out the correct command#2017-05-1916:31unbalanceddatomic: is what I'm trying but no luck so far#2017-05-1916:33favilayou forgot ? @goomba#2017-05-1916:34unbalancedokay wasn't sure if that was important or not#2017-05-1916:35unbalancedyayyyy 😄 👍#2017-05-1916:41unbalancedokay so if I make an entity of some sort and I decide I want it to have more attributes later, that's doable yes?#2017-05-1916:41unbalancedlike this was presented in one of the examples
[{:db/ident :story/title
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one}
 {:db/ident :story/url
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one
  :db/unique :db.unique/identity}
 {:db/ident :story/slug
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one}]
but if later I wanted to make that entity also have, say, an author, that's doable yes?#2017-05-1916:49favilathese are not a single entity, these are three entities. This transaction creates the :story/title :story/url and :story/slug attributes @goomba#2017-05-1916:50favila@goomba are you following along with a tutorial?#2017-05-1916:51unbalancedah, yeah I'm trying to follow along with the "Day of Datomic 2016" videos#2017-05-1916:51unbalancedok I see so my confusion was what an entity is#2017-05-1917:30unbalancedwait wait, just for clarification... aren't those three schema that collectively form an entity?#2017-05-1917:30unbalancedor do I have that backwards?#2017-05-1917:31favilaeach schema attribute is itself also an entity#2017-05-1917:31favilathey don't collectively form anything#2017-05-1917:32favilaat the bottom, datomic is a set of assertions [entity attribute value transaction] (called "datoms")#2017-05-1917:32favilaan "entity" is just a number in the entity slot of a datom#2017-05-1917:32unbalancedoh fascinating#2017-05-1917:32favilathe map view of a datom is gathering together all datoms with a common entity#2017-05-1917:33unbalancedokay so if I wanted to (conceptually) add some information about a story, like an author, what would be the idiomatic way to do that?#2017-05-1917:33unbalancedquery by some attribute of the story and then do some kind of association with the entity id and the new information?#2017-05-1917:35unbalancedby the way @favila I owe you at least one beer for all the help you've been! 🍻#2017-05-1917:35favilafirst you need a reference to the story. This can be via direct entity id, or an attribute+value unique to the entity#2017-05-1917:35favilae.g. 
[:db/ident :story/slug] or just :story/slug is the slug attribute entity.#2017-05-1917:36favilaso it looks like here :story/url is your unique attribute#2017-05-1917:37favilaso (d/transact conn [{:db/id [:story/url "the-url"] :story/author [:author/id #uuid"some-uuid"]}]) for example, would add a story author assertion to a story entity#2017-05-1917:38favilathe map form of the transaction is sugar for [:db/add [:story/url "the-url"] :story/author [:author/id #uuid"some-uuid"]]#2017-05-1917:41unbalancedso in one shot (d/transact conn [{:db/id [:story/url "the-url"] :story/author [:author/id #uuid"some-uuid"]}]) does a logical lookup on the story that matches "the-url" on the attribute :story/url and associates it with the :story/author specified... magical#2017-05-1917:41unbalancednow this presupposes that I've made a schema for :story/author first, yes?#2017-05-1917:56favilayes#2017-05-1918:12unbalancedI was blind but now I see 😄 😄 😄#2017-05-1919:23azimpelhi guys, im learning datomic using the official tutorial
but unfortunately there is a query on the page http://docs.datomic.com/getting-started/see-historic-data.html which produces error
can someone give me a hint what is wrong here?
query:
(<!! (client/transact conn {:tx-data [{:db/id commando-id :movie/genre "future governor"}]}))
result:
{:datomic.client-spi/request-id "b7ca85b3-b3d4-4f65-8f33-4b9d8b103fb5", :cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message ":db.error/invalid-lookup-ref Invalid list form: [[17592186045419]]", :dbs [{:database-id "591e005d-745c-492b-8f7b-ff406aaabda5", :t 1001, :next-t 1005, :history false}]}#2017-05-1919:59favila@azimpel did you forget to ffirst the previous query result?#2017-05-1919:59favilacommando-id is [[123]] instead of 123#2017-05-1919:59favilauser=> (def commando-id
  (ffirst (<!! (client/q conn {:query '[:find ?e
                                        :where [?e :movie/title "Commando"]]
                               :args [db]}))))
#user/commando-id
user=>#2017-05-1919:59azimpelyes, you're the best#2017-05-1920:00azimpel@favila , thank you, my dumb mistake#2017-05-1920:18kennyIs it guaranteed that the first datom in :tx-data off of a transaction report is the datom with the tx entity?#2017-05-1920:42favilaI doubt it#2017-05-1920:43favilajust look at the tx of any datom#2017-05-1920:43favilathey should all be the same for a given transaction#2017-05-1920:43favila(-> tx-result :tx-data first :tx)#2017-05-1921:20toshanyone knows what it could mean if (d/create-database db-uri) yields
IllegalArgumentExceptionInfo :db.error/read-transactor-location-failed Could not read transactor location from storage datomic.error/arg (error.clj:57)
#2017-05-1921:24toshissuing (d/get-database-names db-uri) with a version of the db-uri that has a * in place of the datomic db name gives me nil
which isn't surprising because when i look at the datomic_kvs table in postgres it is still empty#2017-05-1921:33favila@tosh transactor is either not running or not pointed at storage correctly#2017-05-1921:34favilaor there are permission problems etc#2017-05-1921:34favilaInvestigate transactor logs#2017-05-1921:55toshah of course, i haven't had a transactor running#2017-05-1922:02toshthanks @favila worked like a charm.
my first error was not starting the transactor (for some reason i thought the peer will write directly into storage and that starting the transactor only makes sense later once the datomic db is created)
my second error was that in the properties file of the transactor I had the name of the datomic db that I wanted to work with instead of the name of the postgres db that i use for storage => so many db names and cryptic db urls 🙂#2017-05-1922:20jfntnIs there a way to use transact-async without blocking on the future later?#2017-05-1922:31favilaYou can block with a deref. Is your goal to call a callback when it's done (push not pull)?#2017-05-1922:32favilaSorry should be "you can block with a timeout"#2017-05-1922:34jfntn@favila yes I’d like it if it were fully async, i.e. callback based#2017-05-1923:33favilaOnly client API has that. You would have to emulate on peer#2017-05-1923:40unbalancedis there a better/more-idiomatic way to do this?
(let [my-id (<!! (client/q conn {:query '[:find ?x
                                          :where
                                          [?x :story/title "Teach Yourself Programming in Ten Years"]]
                                 :args [db]}))]
  (println (<!! (client/transact conn {:tx-data
                                       [{:db/id (ffirst my-id)
                                         :story/author-first "Peter"
                                         :story/author-last "Norvig"}]}))))
i.e., inlining the query? I assume there is but when I try
(<!! (client/transact conn {:tx-data
                            [{:db/id [:story/title "Teach Yourself Programming in Ten Years"]
                              :story/author-first "Peter"
                              :story/author-last "Norvig"}]}))
I'm told that :story/title is not unique (which is true)#2017-05-1923:44marshall@goomba unless you have a unique attribute (identity or value) you can include in the transaction map, the query for ID followed by transact is pretty standard#2017-05-1923:45unbalancedgotcha! Well I'm cool with that as long as I'm not writing code like a barbarian over here. Well, not a complete barbarian#2017-05-1923:45marshallIt's often a good idea to have an externally unique identifier (ie domain ID of some kind) for this reason (and some other reasons)#2017-05-1923:46marshallIn this case something like the library of Congress identifier or a UPC might be appropriate#2017-05-1923:47marshallTitle prob not, since there are plenty of duplicate titles for different books#2017-05-1923:47unbalancedmhm. Unfortunately my weak human memory was having trouble recalling the UPC exactly so I tried to fudge with whatever I could remember 😂#2017-05-1923:47unbalancedThanks again! 🎩's off!#2017-05-1923:47marshallNp#2017-05-1923:48unbalancedhey, how cool is our life that we get to do this? eh? eh?#2017-05-1923:49unbalancedstill coding at almost 8pm on a Friday cause it's the 🐝's knees!#2017-05-1923:50unbalanced(if I was still stuck with JavaScript I would've hit the bar at 11am I think)#2017-05-1923:51marshallLol#2017-05-1923:53unbalanced(11am pacific time, and I work on the east coast)#2017-05-1923:55unbalancedIs the backend of the site in datomic?#2017-05-1923:55unbalancedThis is a roundabout way of asking if datomic is appropriate for serving videos#2017-05-1923:55unbalancedOh wait just noticed they're using vimeo#2017-05-1923:55unbalancedBut still, "Is datomic appropriate for serving videos?"#2017-05-1923:55marshallDefinitely wouldn't store media in datomic#2017-05-1923:56marshallNot intended as a blob store#2017-05-1923:56unbalancedgotcha. 
not even close to my primary use case just curious#2017-05-1923:56unbalancedwe have polyglot persistence for that sort of thing so not a big deal#2017-05-1923:56marshallGood for metadata for blobs stored in a secondary store like s3#2017-05-1923:57unbalancedyeah I was thinking something along those lines#2017-05-1923:57unbalancedblob store... I don't know where we come up with these words but another reason I love this industry 😂#2017-05-2000:31marshallBinary Large OBject #2017-05-2000:31marshall;)#2017-05-2000:31marshallAs opposed to a Character Large OBject (CLOB)#2017-05-2000:39unbalancedlol you KNOW they came up with that after "blob"#2017-05-2019:53alex-dixonWhen I upsert a unique/identity attribute, what’s removed exactly? Is the existing entity identifiable by that attribute-value combination removed? i.e. Insert [[1 :unique-attr “foo”] … bunch of other attributes associated with 1]. Insert [2 :unique-attr “foo”]. Are all attributes associated with 1 removed, or just [1 :unique-attr “foo”]?#2017-05-2020:42favila@alex-dixon nothing is removed? I may be misunderstanding you#2017-05-2020:43favilaMaybe you can show us a sample transaction?#2017-05-2020:52alex-dixonThanks. So I don’t have one unfortunately but could try to create one. I’m really just not clear on the basics of what ends up happening when I transact an attribute that is marked :db.unique/identity and I haven’t been able to find an answer. I’ll try again! E.g., if :username is :unique/identity, and I transact [eid2 :username “foo”] when [eid1 :username “foo”] is already in the db, does Datomic remove everything associated with eid1?#2017-05-2020:54alex-dixon@favila#2017-05-2020:56favilaIf eid2 is not a tempid and is different from eid1, you will get an error#2017-05-2020:57favilaDatomic never silently retracts anything#2017-05-2020:58favilaIf eid2 IS a tempid, the tempid will resolve to same eid1, and you will assert on the same entity#2017-05-2020:58alex-dixonAh. Ok. 
Yes I should have specified it wouldn’t be a temp id. I guess I’m interested in the behavior of both#2017-05-2020:58alex-dixonOhhh#2017-05-2020:58favilaThat is the special behavior of upsetting#2017-05-2020:58favila*upserting #2017-05-2020:59alex-dixonOk. Thank you. That’s a huge help#2017-05-2021:00favilaA simple unique (vs identity) will error out in that case too#2017-05-2021:00favilaSo db unique identity affects tempid resolution, fundamentally#2017-05-2021:00alex-dixonSo if I wanted that behavior, would I just write a database function?#2017-05-2021:00favilaWhat behavior?#2017-05-2021:02alex-dixonSorry. If I wanted to remove everything about e.g. a “user” entity when I remove :username, or some arbitrary attribute for the entity#2017-05-2021:04favilaYou mean db.fn/retractEntity?#2017-05-2021:06alex-dixonYyyes. Kind of. Like I would want to basically declare “nothing about a user exists if ever there is a user without a username”#2017-05-2021:06alex-dixonI know there’s component but don’t see how it could be used at that level (within an entity)#2017-05-2021:07favilaThe invariant you describe is not enforceable by datomic alone#2017-05-2021:08favilaDatomic only keeps attr level invariants#2017-05-2021:08alex-dixonAh hah. Ok. I’ve had that impression#2017-05-2021:09favilaSo you need a distinct "delete user" op that your app always goes through#2017-05-2021:10favilaYou can't have [[:db/retract eid :username "foo"]] automatically result in all eid being retracted#2017-05-2021:11alex-dixonOk. I thought about whether that behavior could be emulated by having :user, then all attributes associated to that (:username, etc). Then if I retract :user and the attributes are components…#2017-05-2021:12favilaIf your notion of delete user is merely "ensure there are no references to this eid in the db anymore" db.fn/retractEntity is enough#2017-05-2021:13favilaBut sometimes other types become invalid by the loss of their reference.
You need to know what those are and deal with it#2017-05-2021:14favilaE.g. If you had a user-prefs entity with a required user reference attr, and you del the user, what should happen to this entity?#2017-05-2021:15favilaShould you error? (Force deleting the user prefs explicitly) should you transitively delete it automatically? Etc#2017-05-2021:16favilaSo any entity or cross-entity invariants are your responsibility #2017-05-2021:21alex-dixonOk. Thank you so much. That makes a lot of sense. It seems like Datomic itself will never put me in the situation if I program correctly. Unfortunately I’m working on a framework (borrowing partially on ideas in Datomic) where I think I’ve introduced this problem as part of the implementation 😅#2017-05-2021:24alex-dixonSeems like a situation where an error very much should be raised#2017-05-2021:25favilaTransaction fns corresponding to your "operations" can help you, both organizing your code and keeping always in sync with data, and removing possible race conditions #2017-05-2021:25favilaBut you face same advantages and disadvantages as db-level triggers in a traditional sql db#2017-05-2021:26alex-dixonNevertheless it’s tempting to have something like an “essential property” for an entity…otherwise it seems the maintenance surrounding such a case falls to users#2017-05-2021:26favila(Except you also have full history)#2017-05-2021:29favilaThere are some libs out there that try to do some entity-level enforcement#2017-05-2021:30favilaThere's also a talk about structuring datomic attribute schema such that entity and type invariants in a db are self-describing#2017-05-2021:31favila(Meaning you could potentially use a smaller handful of generic tx fns for CRUD ops)#2017-05-2021:32favilaBut most of the time you want an app layer on top of datomic anyway. E.g.
A rest interface#2017-05-2021:33favilaAt the very least it's for authorization checks#2017-05-2021:34favilaSorry I am on a phone and am autocorrecting hard#2017-05-2021:34alex-dixonMakes sense. And interesting, thought I was encountering something isolated but sounds like other libs are trying to solve related issues?#2017-05-2021:34alex-dixonLol. Honestly just grateful for the responses. Thanks again#2017-05-2021:35favilaYes sure. However your life will be easier if you shed as many entity level invariants as possible#2017-05-2021:37favilaERD style design (e.g. For a relational db) tends to produce closed types with rigid entity invariants#2017-05-2021:39favilaYou can be more open in a graph style db#2017-05-2021:40favilaUse namespaces on attrs to resolve ambiguity. Express reference attr invariants in terms of required attrs on the destination entity instead of on a strict entity "type"#2017-05-2209:14dm3has anyone got a datomic -> prometheus metric exporter?#2017-05-2220:15ghadiI had a datomic -> statsd thing a couple years ago, extending to prometheus would be nice. (I ❤️ prometheus)#2017-05-2220:19ghadihttp://docs.datomic.com/monitoring.html#sec-2#2017-05-2220:19ghadi@dm3 ^#2017-05-2221:41kennyWhy is it that if I am transacting data to update an existing entity and I use a lookup ref to uniquely identify the entity, the transaction report contains a :tempids map containing a tempid that maps to the :db/id that I updated? If I perform the same transaction to the same existing entity but this time use a :db/id to uniquely identify it, the transaction report's :tempids map is empty. Why is there this inconsistency?#2017-05-2221:44kennyHere's an example that uses the lookup ref :entity/id which is marked as :db/unique :db.unique/identity.
@(d/transact conn [{:entity/id #uuid"591b634f-0c58-4739-9d15-ddc524eaabd2"
:user/name "Cynthia"}])
=>
{:db-before datomic.db.Db@ffa0ca67,
 :db-after datomic.db.Db@b4fe535a,
:tx-data [#datom[13194139534336 50 #inst"2017-05-22T21:43:01.218-00:00" 13194139534336 true]],
:tempids {-9223301668109598102 312261302289389}}
And an example performing the same transaction using :db/id instead:
@(d/transact conn [{:db/id 312261302289389
:user/name "Cynthia"}])
=>
{:db-before datomic.db.Db@b4fe535a,
 :db-after datomic.db.Db@bafb05cb,
:tx-data [#datom[13194139534337 50 #inst"2017-05-22T21:43:12.516-00:00" 13194139534337 true]],
:tempids {}}
#2017-05-2222:04kennyHowever, if I transact using the normal lookup ref syntax, I get the expected behavior:
@(d/transact conn [{:db/id [:entity/id #uuid"591b634f-0c58-4739-9d15-ddc524eaabd2"]
:user/name "Cynthia"}])
=>
{:db-before datomic.db.Db@1f813257,
 :db-after datomic.db.Db@b1cfceb4,
:tx-data [#datom[13194139534344 50 #inst"2017-05-22T22:04:15.797-00:00" 13194139534344 true]],
:tempids {}}
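An editorial summary of kenny's three examples above (entity ids and the uuid are copied from the session; the behavior follows from how tempids resolve against unique identities):

```clojure
;; 1. Map with a unique attr but no :db/id: Datomic assigns an implicit
;;    tempid, which upserts onto the existing entity -- so the report's
;;    :tempids has an entry mapping that tempid to the resolved eid.
[{:entity/id #uuid "591b634f-0c58-4739-9d15-ddc524eaabd2"
  :user/name "Cynthia"}]

;; 2. Explicit entity id: no tempid involved, so :tempids is empty.
[{:db/id 312261302289389
  :user/name "Cynthia"}]

;; 3. Lookup ref as :db/id: resolved directly to the entity id before
;;    tempid allocation, so again :tempids is empty.
[{:db/id [:entity/id #uuid "591b634f-0c58-4739-9d15-ddc524eaabd2"]
  :user/name "Cynthia"}]
```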
#2017-05-2222:11kennyAh, I think I found my answer: http://docs.datomic.com/identity.html#sec-4 TIL:
> If a transaction specifies a unique identity for a temporary id, and that unique identity already exists in the database, then that temporary id will resolve to the existing entity in the system. This upsert behavior makes it possible for transactions to work with domain identities, without ever having to specify Datomic entity ids.#2017-05-2303:49devn@ghadi what's Prometheus?#2017-05-2303:49devnI am not familiar #2017-05-2304:01ghadiA superb monitoring system#2017-05-2304:02ghadihttps://prometheus.io/#2017-05-2312:23val_waeselynck@hendriklouw your query has O(n^2) complexity#2017-05-2312:24val_waeselynckyou're scanning every (person, person) pair#2017-05-2312:24val_waeselyncktry this instead#2017-05-2312:25val_waeselynckthis one has O(n) complexity#2017-05-2312:26hendriklouwnice, that works perfectly. Thank you @val_waeselynck#2017-05-2315:49erichmondI have Datomic Pro, how does the 2 days support work?#2017-05-2315:49erichmondwho/what do I contact#2017-05-2315:58devthhttps://support.cognitect.com/hc/en-us @erichmond#2017-05-2316:41erichmondthanks!#2017-05-2317:13unbalancedcould anyone possibly explain a use case for db.type/ref?#2017-05-2317:16devthan attribute whose value references another entity#2017-05-2317:16devthmight make sense in the context of a cardinality many attribute, e.g. a person might have multiple addresses#2017-05-2317:16unbalancedokay... 
so the "value" of such a datom would be another entity?#2017-05-2317:17devththe value is a reference to another entity#2017-05-2317:17devthwhen you transact a value you can use the other entity ID directly, or a lookup ref (see http://blog.datomic.com/2014/02/datomic-lookup-refs.html)#2017-05-2317:17unbalancedlike if I had a user and I had a project and I wanted to make a schema for the project owner, that would probably be a :db.type/ref with the user as the owner?#2017-05-2317:17devthyep#2017-05-2317:18unbalancedgotcha, thank you!#2017-05-2317:18devthnp#2017-05-2317:28unbalancedokay let's say I have a bunch of schema {:db/ident user/attr ...} ... is it possible to query all attributes that are available for user/*?#2017-05-2317:29devthif you're getting all user/* attributes for a single entity use a pull query#2017-05-2317:30unbalancedah okay#2017-05-2317:31devthor if for multiple entities you can use pull-many#2017-05-2317:31unbalancedmaybe I should be more clear I'm not looking for the attributes for any specific entity just all the attribute fields that relate to user#2017-05-2317:32unbalancedI mean I can clearly see them in the console but I'm wondering if there's a programmatic way to get them#2017-05-2317:32devthan attribute can belong to any entity - it's not like a traditional relational table#2017-05-2317:33devthyou could probably do something with database functions in a datalog query, e.g. starts-with?#2017-05-2317:33unbalancedokay let me rephrase... is there a way to query over all the attributes and just get the ones that have :user as the beginning of their keyword?#2017-05-2317:33unbalancedah I see#2017-05-2317:33devthnot something you'd typically need to do#2017-05-2317:36unbalancedagreed -- just feeling my way around#2017-05-2317:44favila[:find [?attr-ident ...]
:where
[:db.part/db :db.install/attribute ?attr]
[?attr :db/ident ?attr-ident]
[(namespace ?attr-ident) ?attr-ns]
[(= ?attr-ns "user")]]
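A sketch of running favila's query above from a peer (assumes an existing peer connection conn; d/q and d/db are from datomic.api):

```clojure
(require '[datomic.api :as d])

;; The collection find spec [?attr-ident ...] returns a flat list of
;; idents rather than a set of one-element tuples.
(d/q '[:find [?attr-ident ...]
       :where
       [:db.part/db :db.install/attribute ?attr]
       [?attr :db/ident ?attr-ident]
       [(namespace ?attr-ident) ?attr-ns]
       [(= ?attr-ns "user")]]
     (d/db conn))
;; => e.g. [:user/name :user/email], depending on the installed schema
```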
#2017-05-2317:47favila@goomba note the namespace function#2017-05-2317:48unbalancedinteresting... is the ... significant?#2017-05-2317:49faviladestructures so you get a list of items instead of a list of relations#2017-05-2317:50favilahttp://docs.datomic.com/query.html#sec-5-8#2017-05-2317:50unbalancedokay, awesome 😄#2017-05-2317:52unbalancednow when I run this I'm getting Don't know how to create ISeq from: clojure.lang.Keyword ... could that just be because I haven't populated any data yet?#2017-05-2317:55unbalancedI'm looking up the docs I'm just trying to figure out which part of that query is supposed to be literal and which I'm supposed to fill in values for... e.g. in the :db.part/db am I supposed to put in the name of the db or is that a literal?#2017-05-2317:55unbalancedalso @favila what are you up to like 5 beers that I owe you now? 😂#2017-05-2317:56favilathe query is all literal#2017-05-2318:06erichmondhas anyone seen this kind of error when trying to restore-db? java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: com.amazonaws.AmazonClientException: Unable to execute HTTP request: <bucket>.#2017-05-2401:37jeff.terrell@erichmond - No, but I'm wondering if somebody accidentally put a literal <bucket> instead of the actual bucket name. Maybe search your code for the literal <bucket> string?#2017-05-2402:21erichmond@jeff.terrell sorry, that was a real bucket name, I removed it tho, so you guys didn't see where our production backups are ;D#2017-05-2402:30jeff.terrell😆 d'oh! 
Makes sense…#2017-05-2402:59uwoforgive me for asking, I feel like the answer should be obvious to me: why is it important to ensure that schema changes are applied once and only once?#2017-05-2408:09pesterhazyno, afaik you can retransact the schema on each connection without any harm#2017-05-2408:41val_waeselynck@uwo: most of the time (e.g when installing attributes that don't change and database functions), you don't need to ensure schema installation runs only once.#2017-05-2408:42val_waeselynckSee https://github.com/vvvvalvalval/datofu#managing-data-schema-evolutions for a bit more details#2017-05-2412:36uwo@pesterhazy @val_waeselynck thanks! why is that the focus of conformity then?#2017-05-2412:42pesterhazyfor some migrations, like altering attributes, installing the schema only once may be necessary#2017-05-2414:34val_waeselynckAnother common use case is creating a new attribute and populating it with default data for existing entities#2017-05-2416:17uwo@pesterhazy @val_waeselynck thanks!#2017-05-2416:18uwoIs there any reason I might be getting db.error/transactor-unavailable besides overwhelming the transactor?#2017-05-2416:19uwoI’m not in an import situation. And there should be very few writes going on (unless of course something in our system is going haywire)#2017-05-2416:22uwowe’re using the dev store in staging at the moment. Could it be that??#2017-05-2417:28favilayou get this error when connecting or when transacting? @uwo#2017-05-2417:28devthside note: it'd be amazingly useful if there was a doc of all/most of Datomic error messages explaining what they actually mean#2017-05-2417:33uwo@favila ah, yes. sorry. when transacting#2017-05-2417:34uwowe handle db.error/transactor-unavailable during imports with exponential backoff to allow it to recover, but I wasn’t anticipating it during normal use#2017-05-2417:40marshall@uwo could definitely be due to dev storage
Dev is an integrated storage engine that shares resources with the transactor process. It’s intended strictly as a development convenience#2017-05-2417:40marshallnot intended for any kind of production or scale use#2017-05-2417:41marshallfor several reasons, including the fact that it doesn’t have its own independent resources to handle peers and transactor concurrently connected and using read and write bandwidth#2017-05-2417:41uwothanks. that sounds about right. we were certainly not intending to use it for prod, just been using it in staging for a little bit because of timelines I guess#2017-05-2417:42uwoI’ll make it a priority to add real storage then#2017-05-2417:42marshall👍#2017-05-2417:44uwoIf I might add a question. we’re bringing our own storage (mssql). Silly question I know, but should we put storage on a separate box from the transactor?#2017-05-2417:51uwodopey question. sorry. they’re separate 😄#2017-05-2418:05marshallyea, it’s definitely recommended to be on separate instances#2017-05-2418:05marshallfor the same kinds of reasons#2017-05-2419:18timgilbertIs there a good reason why EIDs come back as #uuid from (d/q) but as java.lang.Long from (d/entity)?#2017-05-2419:20favila@timgilbert Are you sure you are not confusing your domain-specific id (the uuid) with the internal entity id (the long?) What attribute are you reading in each case?#2017-05-2419:21favilaentity ids (from :db/id, or the :e field in a datom) are always longs#2017-05-2419:22timgilbertAh, yes, you're right, my mistake#2017-05-2419:23timgilbertI was looking at an external domain UUID, as you suspected. Thanks!#2017-05-2420:02unbalancedanybody here using datomic in production?#2017-05-2420:03unbalancedI understand that it's designed so that transactor, peer server, storage, etc are usually on different machines (virtual or physical), curious what setups folks are using#2017-05-2420:07stuartsierra@goomba Yes, definitely those should all be on different machines.
Most production installations are set up this way.#2017-05-2420:07potetm:raised_hand: Yeah that's the setup we use.#2017-05-2420:07unbalancedare you guys doing physical servers or VMs? or microservices?#2017-05-2420:08marshall@goomba are you likely to be on AWS?#2017-05-2420:08unbalancedgoogle cloud#2017-05-2420:09unbalancedor local hosting#2017-05-2420:10unbalancedwell, ultimately both#2017-05-2420:10unbalanceddepending on the project/company#2017-05-2420:10marshallso you’ll likely be limited to whatever infrastructure is available at the on-prem sites; in general that is something that Datomic supports fairly well#2017-05-2420:11potetmAWS#2017-05-2420:11marshallyou’ll want independent instances, either real or virtual, for transactor and peers (or peer server)#2017-05-2420:11marshallalso separate storage#2017-05-2420:11marshallbut you can use different storages in the different deployments#2017-05-2420:11marshalldepending on what is available#2017-05-2420:12marshalli.e. you might use Oracle for one customer and cassandra for another if that is what they already have available#2017-05-2420:15marshalli’d highly recommend a memcached instance for production as well#2017-05-2420:15marshallit will significantly improve performance across the board#2017-05-2420:17potetm^^^#2017-05-2420:19unbalancedokay, got all that. Is the primary way of communicating server information with jdbc strings etc? or is there a config file I should be looking at?#2017-05-2420:20unbalancedalthough I don't need it now ultimately I'd like to be doing some sort of microservices thing where the whole shebang can be treated as a bunch of ephemeral instances and would like to avoid hardcoding resource locations or at least be able to do it programmatically#2017-05-2420:24potetmThe call to d/connect requires the storage location.
So in the case of jdbc, yeah it requires a jdbc url.#2017-05-2420:24potetmhttp://docs.datomic.com/clojure/#datomic.api/connect#2017-05-2420:25potetmThe transactor location is looked up from storage.#2017-05-2420:25favila@goomba we use instance or project metadata#2017-05-2420:26favilathen the instance pulls metadata, spits into app-specific config (eg an edn or properties file, or systemd environments file, whatever), and it's accessible to app on startup#2017-05-2420:26unbalancedokay, gotcha. So the storage at least needs to be fairly static#2017-05-2420:26potetmYeah, or rigged up yourself.#2017-05-2420:27unbalancedcool, thanks guys#2017-05-2500:23unbalancedanyone have any tips for documenting schema or perhaps querying schema? like if I bring on a new dev how are they going to know what schema are available short of digging through where they were initially transacted?#2017-05-2500:25unbalancedalso does anyone happen to know if bin/console runs as a client or peer?#2017-05-2501:17marshall@goomba console is a peer#2017-05-2501:19unbalancedgotcha, thank you#2017-05-2501:32gavanitratehey everyone. just curious about any way of building queries programmatically. anyone tried this?#2017-05-2501:37unbalancedthat's a good question, also something I'm looking into#2017-05-2501:38unbalancedright now I'm wrestling with just building them 😂#2017-05-2501:39gavanitratehaha yea. just coming to notice that i have a few queries, with only slight changes.
been messing with syntax-quotes all day trying to find a nice way to go about it.#2017-05-2501:40jeff.terrell@gavanitrate - check out the query reference documentation starting here: http://docs.datomic.com/query.html#sec-5-6#2017-05-2501:40jeff.terrell(Or maybe a bit before.)#2017-05-2501:40jeff.terrellThe idea is that your query can be parameterized with "inputs".#2017-05-2501:54unbalancedOkay I finally understand now that this needs to be executed on the peer and will not work on the client#2017-05-2501:55unbalanced@jeff.terrell I'm not sure if @gavanitrate is trying to do something similar to what I'm trying to do or not, but viewing the database as a state-space, say, for an AI agent to navigate through, could potentially require more direct syntactic manipulation of the queries than just manipulating the inputs#2017-05-2501:56unbalancedthen again, maybe not#2017-05-2501:56unbalancedbut part of the appeal of clojure in general (for me (for AI)) is the interesting concept of programmatic code construction#2017-05-2501:56unbalancedand this is a perfect example#2017-05-2501:57jeff.terrellI mean, you could certainly go crazy with macros that wrap the query interface, but I think the usefulness of that over just simple functional inputs to the query is…not that high.#2017-05-2501:57unbalancedyou could be 100% correct#2017-05-2501:58jeff.terrellThere's also the pull API, in case that is better for what you're trying to do.#2017-05-2501:58unbalancedstep 1 is I need to figure out what I'm trying to do 😂#2017-05-2502:02unbalancedI don't know about anyone else but I absolutely programmatically create schema#2017-05-2502:02unbalancedone of my favorite things about datomic is giving you the ability to do that#2017-05-2502:03unbalancedsince it's so chock full of data driven goodness#2017-05-2502:13gavanitratei'm mostly just trying to allow toggling of certain where clauses. e.g.
(defn commercial-properties
[cnx {:keys [postcode]}]
(d/q
'[:find ?p .
:in $ ?postcode
:where
[?p :property/type :commercial]
(when ?postcode [?p :property/postcode ?postcode])]
(d/db cnx) postcode))
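The when form above isn't legal datalog, but because queries are just data the toggle can happen in ordinary Clojure before d/q runs. A sketch using the same hypothetical :property attributes, with a map-form query and a collection find spec so all matches come back:

```clojure
(require '[datomic.api :as d])

;; Add the optional input and clause only when a postcode is supplied.
(defn commercial-properties
  [db {:keys [postcode]}]
  (let [query (cond-> {:find  '[[?p ...]]
                       :in    '[$]
                       :where '[[?p :property/type :commercial]]}
                postcode (-> (update :in conj '?postcode)
                             (update :where conj '[?p :property/postcode ?postcode])))]
    ;; With no postcode, the query takes only the db argument.
    (apply d/q query db (when postcode [postcode]))))
```

Called as (commercial-properties (d/db cnx) {}) this should return every commercial property; passing {:postcode "2000"} adds the extra constraint.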
ideally, if the postcode is not provided, all commercial properties will be returned. this is probably not idiomatic at all though 😆#2017-05-2502:15unbalancedoh, neat. Does that work? 😮#2017-05-2502:16unbalancedalso, is it possible to retract schema? Or do you just rename them?#2017-05-2502:17gavanitratenah, that's just pseudocode.#2017-05-2502:21gavanitratethink schema retraction is not possible. as your history still requires it. excision might be worth looking into though.#2017-05-2502:22jeff.terrellOf course, you could have separate functions, commercial-properties and commercial-properties-for-postcode. But I see what you mean. That's a good use case for your question.#2017-05-2502:22jeff.terrellI'd try asking again in the morning if you don't get an answer by then @gavanitrate. I'm interested in hearing the answer too.#2017-05-2502:26unbalanced@gavanitrate yeah you're probably right unfortunately I polluted one of my namespaces with a schema name I didn't intend 😛#2017-05-2502:47unbalancedyeah it looks like even excising doesn't work to get rid of a schema... I think my solution will be just to create a namespace called :dead that I move typos to 😛#2017-05-2503:43favila@goomba why does my query only work on a peer?#2017-05-2503:47unbalanced@favila it seems like the predicates are really restricted on a client#2017-05-2503:48favilaHuh. Does clojure.core/namespace work?#2017-05-2503:51unbalanced{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message
"The following forms do not name predicates or fns: (namespace)",
:dbs
[{:database-id
"datomic:",
:t 1207,
:next-t 1208,
:history false}]}#2017-05-2503:52unbalancedis a result of#2017-05-2503:52unbalanced(pp/pprint (<!! (client/q conn {:query '[:find [?attr-ident ...]
:where
[?attr :db/ident ?attr-ident]
[(namespace ?attr-ident) ?attr-ns]
[(= ?attr-ns "user")]
]
:args [(client/db conn)]})))#2017-05-2503:52unbalancedbut it works just fine with#2017-05-2503:53unbalanced(d/q '[:find [?attr-ident ...]
:where
[?attr :db/ident ?attr-ident]
[(namespace ?attr-ident) ?attr-ns]
[(= ?attr-ns "user")]
]
db)#2017-05-2503:56unbalancedmagic! ¯\(ツ)/¯#2017-05-2503:57unbalancedclojure.core/namespace is also a no go @favila#2017-05-2503:58unbalanced{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message
"The following forms do not name predicates or fns: (clojure.core/namespace)",
:dbs
[{:database-id
"datomic:",
:t 1207,
:next-t 1208,
:history false}]}#2017-05-2512:49erichmondDoes excision also clear transaction and associated metadata?#2017-05-2512:57marshallhttp://blog.datomic.com/2013/05/excision.html#2017-05-2515:00erichmond@marshall thanks!#2017-05-2518:56gwshey all! I was noticing that pull syntax in queries (e.g. [:find [(pull $src-var ?foo [*]) ...]]) works for queries with multiple data sources. note $src-var. without $src-var, the pull syntax appears to pick the "first" data source in :in. I can't find a reference to the above src-var syntax in the documentation - is this simply undocumented, did I miss it somewhere, or am I getting into "this is undefined, and we may remove this in the future without warning" territory?#2017-05-2520:39unbalancedI don't know if it's the same as for query but in the "day of Datomic Training" they mention that the $ source var is implicit unless specified to allow you to join over multiple dbs#2017-05-2520:42gwsI'm joining over multiple DBs in my query (the :where clause), but when pulling the attributes of an entity, I need to specify which database those attributes come out of. I've managed to figure out how to do that (above), but I don't see where it's documented (check the definition of pull-expr)#2017-05-2520:43favila@gws this is undocumented, I discovered it by accident (actually I didn't know it was optional for a while, was doing by analogy of d/pull) and I don't know if there's a reason it's undocumented (other that simple oversight) or if it's safe to depend on it in the future#2017-05-2520:44favila@gws I was hoping a Cognitect would chime in during the day and answer that#2017-05-2520:45gwsI sort of suspected it was undocumented, but before I move this query out to production I'd like to know more. 
thank you!#2017-05-2520:46favilaI have been using this for a while and I can't conceive of a reason it would be unsafe#2017-05-2520:46favilaverify that two pulls of the same eid from different dbs yield different results, and keep that as a regression test somewhere#2017-05-2520:49favilaso [:find (pull $a ?e [:db/doc]) (pull $b ?e [:db/doc]) :in $a $b :where [(ground 1001) ?e]], after transacting {:db/id 1001 :db/doc "a"} and {:db/id 1001 :db/doc "b"} in two fresh dbs#2017-05-2520:51gwsexcellent, thank you for the head start too#2017-05-2521:28unbalanceddoes anyone know of good tools for importing SQL data? I'm toying around with writing some automatic schema generation stuff but if there's some stuff out there already would rather leverage that#2017-05-2522:45timgilbertSay, if I have an attribute that is :db.type/ref and :db.cardinality/many, does it act more like a set or a bag? Eg, will it hold more than one reference to the same entity?#2017-05-2522:47favila@timgilbert everything in datomic is set semantics#2017-05-2522:47favilathe indexes themselves are ordered sets#2017-05-2522:47favila(the datom indexes)#2017-05-2522:49timgilbertOk. So then if I had :db.type/string, I wouldn't be able to store ["a" "a"] as the value, I take it#2017-05-2522:49favilacorrect#2017-05-2522:50timgilbertOk, cool. Thanks for the second time today @favila! 🍻#2017-05-2523:56marshallThe only exception is that you can get a 'bag' using the :with grouping #2017-05-2523:57marshallFor aggregations#2017-05-2601:05unbalancedhaving trouble grokking the "set" "unset" in the documentation for altering a schema to be non-unique#2017-05-2601:06unbalanced
:db/unique unset}]) not working like I would hope it would 😅#2017-05-2601:13marshall@goomba http://docs.datomic.com/schema.html#sec-5-3-4#2017-05-2601:14marshallTake a look at that section and the next few subsections #2017-05-2601:14unbalancedahhh hahaha#2017-05-2601:14unbalanceddarn users ... can't they read? 🤓#2017-05-2601:28unbalanceddoes anyone have any idea why these might be conflicting?
{:d1 [17592186045678 :user/id 52 13194139534544 true],
:d2 [17592186045678 :user/id 71 13194139534544 true]}
#2017-05-2601:29unbalancedor, what questions would you ask to determine if they are conflicting?#2017-05-2601:32unbalancedthis is the schema:
#:db{:ident :user/id,
:cardinality :db.cardinality/one,
:valueType :db.type/long}
#2017-05-2601:39marshallYou can't assert multiple values against the same EA pair in a single transaction if the attribute is cardinality one#2017-05-2601:39unbalancedto further the mystery... I attempt to transact a bunch at once (d/transact con list-of-maps) and I was getting lots of conflicts, but when I do
(doseq [m list-of-maps] (d/transact conn [m]))
it works fine#2017-05-2601:41marshallBecause the entirety of a transaction is atomic (ie it all happens at exactly the same time) how would you know which is the value to assert?#2017-05-2601:41unbalancedohhh... so, all those maps were being assigned the same entity?#2017-05-2601:42marshallAnd your 2nd example is doing 1 transaction vs many#2017-05-2601:42unbalancedI really should've gone to the g*dd*mn day of Datomic training 😅 do you know if they're doing another one at the conj this fall?#2017-05-2601:42marshallDon't know yet#2017-05-2601:42unbalanceddooo ittttt#2017-05-2601:43marshall:)#2017-05-2601:43unbalanced😄#2017-05-2601:43unbalancedso is the second way of doing it preferred for a bunch of separate entities?#2017-05-2601:43marshallNot necessarily #2017-05-2601:44marshallBut I suspect you have multiple txns against the same entity in your maps#2017-05-2601:45marshallLooking at your original 2 conflicting datoms - you're saying the same entity has both 52 and 71 as id#2017-05-2601:45marshallNotice both have the same value in the E position#2017-05-2601:45unbalancedi.e. for a more complete example
(d/transact conn [#:user{:disable false,
:email "i********@****.net",
:authenticated false,
:pwdhash
"pbkdf2:sha1:1000$mIrT****************",
:lastname "B***",
:hawk "ib******",
:username "ibi*****",
:firstname "I*****",
:id 53,
:group_name "client",
:count 0,
:last 1485361260000}
#:user{:disable false,
:email "robe*****@***.net",
:authenticated false,
:pwdhash
"pbkdf2:sha1:1000$****************",
:lastname "F",
:hawk "r*****",
:username "r****",
:firstname "R****",
:group_name "admin",
:count 0,
:id 77,
:last 1491327360000}])#2017-05-2601:46unbalancedthere were a bunch more of those in the transaction but I just pulled two#2017-05-2601:46unbalancedthose are somehow all being regarded as the same entity?#2017-05-2601:48marshallWhere is the user/id from your original conflicting datoms example#2017-05-2601:48unbalancedoh, I filtered those out in desperation 😅#2017-05-2601:49marshallAh. Yes it seems that you have multiple maps referring to the same entity. Do you have a unique identity or value attribute?#2017-05-2601:49unbalancedno I didn't make any of them unique 😛#2017-05-2601:50unbalancedif I had made at least one of the attributes unique would a bulk-add have worked?#2017-05-2601:51marshallYou could add an explicit db/id to each to be sure, but the behavior you describe is unexpected#2017-05-2601:51unbalancedI thought it implicitly created a db/id#2017-05-2601:51marshallUnless there's a unique attribute or you have the same temp ids in more than one map#2017-05-2601:52marshallIt does. But you can use an arbitrary string temp id for instance to refer to other entities in the same txn#2017-05-2601:53marshallWhat version of datomic#2017-05-2601:53unbalancedah I see ... let me check#2017-05-2601:53unbalanceddatomic-pro-0.9.5561#2017-05-2601:56marshallIf you'll email me a repro (schema and txn that fail) I'll have a look tomorrow morning. #2017-05-2601:56unbalancedsure thing#2017-05-2610:45onetom@val_waeselynck i just came across your datofu project (via a slack log archive)
im wondering what happened to the idea of defining the schema using helper :db/fns as you suggested in https://stackoverflow.com/a/31480922
that time u said u haven't tried it because you are happy with generating the schema via code.
have you tried it since or do u know anyone who tried it?#2017-05-2611:51val_waeselynckonetom: haven't tried it, I've been moving more in the opposite direction. Regarding modeling, I see the Datomic schema as a derived thing rather than a source of truth; my approach for http://bandsquare.com is to store model metadata in a DataScript database from which installation transactions are derived.#2017-05-2611:52val_waeselynckI still believe that for most projects, datofu's approach will be the most reasonable one, at least for getting started. Datomic's transactions being data doesn't mean it has to be written in data literals#2017-05-2611:52val_waeselynckI should add this to the SO question#2017-05-2611:54val_waeselynckI now believe even less in database functions approach than I did at the time - they'd just be an un-portable DSL disguised as data#2017-05-2612:02onetominteresting... however what does porting mean?
you would expect that some other system might want to read the same EDN data which describes the schema as transaction function calls?
it will see a vector of lists which contain a symbol a few keywords and a string.
if you would just use the data literal, then you would get a vector of namespaced-keyword-keyed maps.
then what would be the next step that other system would do with this data?
it would still need to interpret it somehow.
the db/fn approach means it would just deal with positional parameters as opposed to named ones...
and if that other system would understand datomic schema attribute names already, then it should just receive the output of a datomic query which returns a schema as maps using pull... 😕#2017-05-2612:04onetomnot that i don't like the functional approach, just the person im working with at the moment insists on using .edn files for the schema and similar seed data.
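For what it's worth, one middle ground between raw .edn files and full codegen is a tiny helper that expands to plain schema maps. A sketch; the attr helper and the attribute names here are invented for illustration:

```clojure
;; Expand a compact spec into an ordinary Datomic schema map.
;; The result is still plain data, so it can be printed back out as EDN.
(defn attr [ident value-type cardinality & {:as opts}]
  (merge {:db/ident       ident
          :db/valueType   (keyword "db.type" (name value-type))
          :db/cardinality (keyword "db.cardinality" (name cardinality))}
         opts))

(def schema
  [(attr :user/email :string :one :db/unique :db.unique/identity)
   (attr :user/friends :ref :many :db/doc "Other users this user follows")])

;; The first entry expands to
;; {:db/ident :user/email, :db/valueType :db.type/string,
;;  :db/cardinality :db.cardinality/one, :db/unique :db.unique/identity}
```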
and it works for now, so instead of resisting, i'd like to trick him towards a more concise solution 🙂#2017-05-2612:22val_waeselynckIn this case, trick him by using custom EDN tagged literals 😛#2017-05-2616:31unbalancedanyone have any tips for muscling through large sql imports?#2017-05-2616:31unbalancedmy transactor keeps timing out#2017-05-2616:32unbalanceddo smaller imports you say? that's a solid idea. I'm glad we had this talk 😂#2017-05-2616:32favilasmaller queue depth?#2017-05-2616:33favilaare you doing transact-async without derefing?#2017-05-2616:33unbalancedyeah I think part of the problem is I'm using jdbc and pulling the whole table into memory which isn't great either#2017-05-2616:33unbalancedI need to figure out a way to do a lazy-seq on the rows#2017-05-2616:34unbalancedand I am doing transact-async and I'm assuming it's without derefing b/c I didn't know derefing was a technique you could employ 😳#2017-05-2616:34favilaif you don't deref at some point, you are just overwhelming the transactor#2017-05-2616:34unbalanced(defn import-table! [conn db table-name tx-fn]
(do (import-schema! conn db table-name)
(d/transact-async conn (import-table conn db table-name tx-fn))))#2017-05-2616:34favilaoh, you have one giant transaction#2017-05-2616:34favilaalso not good#2017-05-2616:34unbalanced😂#2017-05-2616:35unbalancedwhat's a better practice?#2017-05-2616:35unbalancedloop over the rows and transact them one at a time or in chunks?#2017-05-2616:36favilaI'm surprised I'm not finding something that puts all bulk import advice on one page#2017-05-2616:37unbalancedto be fair this stuff is fairly cutting edge as far as tech goes#2017-05-2616:37unbalancedit's really nice we have a nice community (aka (== @favila 'community))#2017-05-2616:38unbalancedyou know it's also interesting ... part of the brilliance of this all is the prolog has been around for a long time and so have databases and it's so awesome that someone finally put them together#2017-05-2616:42favilain order of importance: 1) transact in chunks of 1000 ish datoms 2) use pipelining 3) do it with a separate amped-up transactor with no other load, or on a local machine (or whatever) and get it into production with a backup/restore 4) dial up memoryIndexThreshold and memoryIndexMax (to avoid indexing as long as possible)#2017-05-2616:42favilahttp://docs.datomic.com/capacity.html#sec-2#2017-05-2616:43unbalancedballer... that oughta be pinned#2017-05-2616:43favilahttp://docs.datomic.com/best-practices.html#sec-3#2017-05-2616:43unbalancedthanks again 😄#2017-05-2616:43favilahttps://hashrocket.com/blog/posts/bulk-imports-with-datomic#2017-05-2616:44favilapipelining and smaller transactions are the most critical#2017-05-2616:44favilathe rest you can often ignore. 
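A single-threaded sketch of points 1 and 2 together, keeping a bounded queue of in-flight futures. The chunk-size and depth values are arbitrary, and tx-data is assumed to be a seq of transaction maps:

```clojure
(require '[datomic.api :as d])

(defn pipeline-import!
  "Transact tx-data in chunks, keeping up to `depth` transactions in
  flight. When the queue fills, deref only the oldest future, so the
  pipeline never fully drains until the end."
  [conn tx-data {:keys [chunk-size depth] :or {chunk-size 100 depth 20}}]
  (loop [chunks    (partition-all chunk-size tx-data)
         in-flight clojure.lang.PersistentQueue/EMPTY]
    (if-let [[chunk & more] (seq chunks)]
      (let [in-flight (conj in-flight (d/transact-async conn chunk))]
        (if (>= (count in-flight) depth)
          (do @(peek in-flight)           ; wait on (and error-check) the oldest tx
              (recur more (pop in-flight)))
          (recur more in-flight)))
      (doseq [fut in-flight] @fut))))     ; drain whatever is still in flight
```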
since your entire import fits in memory anyway it's unlikely the other stuff matters much#2017-05-2616:45unbalancedwell 😅 some tables#2017-05-2616:45favilaI aim for 1000 datoms per transaction#2017-05-2616:45unbalancedcool#2017-05-2616:45favilaand deref after every tx (no pipelining)#2017-05-2616:45favilaif that's too slow, I add pipelining#2017-05-2616:45favilaif that's still too slow, I do it offline#2017-05-2616:46unbalancedokay so what would that look like?
(doseq [chunk chunks]
@(d/transact conn chunk))?#2017-05-2616:46favilause transact-async#2017-05-2616:46favilabut yes, essentially#2017-05-2616:46unbalancedI'm not sure I understand the significance of derefing the transaction#2017-05-2616:47favilad/transact and d/transact-async return futures#2017-05-2616:47favilad/transact waits-with-timeout for the future to resolve, then returns the future#2017-05-2616:47unbalancedah I see#2017-05-2616:47favilad/transact-async does not wait#2017-05-2616:47unbalancedbut if you deref it you get the benefits of async and sync#2017-05-2616:48favila(d/transact) is really just for repl use#2017-05-2616:48favilait automatically adds a timeout, and does not return until the future is either done or throws because it timed out#2017-05-2616:48unbalancedahhhh interesting#2017-05-2616:49favilabut really long waits are not abnormal on a bulk import job so you don't want the timeout#2017-05-2616:49favilayou want transact-async#2017-05-2616:49favilahowever, that doesn't wait at all, so if you call it over and over without deref you are just overwhelming the transactor with potentially thousands of tx requests#2017-05-2616:50favila(and not checking for errors either--transactions may legitimately fail but you won't see the error and won't stop issuing txes)#2017-05-2616:51favilahowever immediately derefing is slow: it means tx is sent, and no new tx is sent until response is received#2017-05-2616:51favilathat's where pipelining comes in: you send maybe 10-20 d/transact-async at a time and deref later or in another thread#2017-05-2616:51unbalancedyeah that's fine, slow is not an issue#2017-05-2616:51favilakeeping a bunch of txs in the air#2017-05-2616:51unbalancedjust steady and reliable is important#2017-05-2616:51favilabut still derefing somewhere#2017-05-2616:52unbalanced🤔#2017-05-2616:53unbalancedso you're saying send 10-20 chunks sized 1000 and then deref somewhere?#2017-05-2616:53favilapipelining relies on not waiting for a deref to finish before sending
another d/transact-async#2017-05-2616:54favilabut with a limited number of in-flight requests#2017-05-2616:54favilaso as not to overwhelm the transactor#2017-05-2616:55favilae.g. http://docs.datomic.com/best-practices.html#sec-20#2017-05-2616:55favilathis does not preserve transaction order#2017-05-2616:56favilathis one does (but I rarely use it, not sure how bug-free it is) https://gist.github.com/favila/3bc6fae005228a3290d5509c088e2f11#2017-05-2616:56favilaand you can do it without threads too, using reduction or a loop#2017-05-2616:57unbalancedin-order isn't so important#2017-05-2616:57favilagather the in-flight futures into a vector. when it reaches the desired size, start derefing them and removing the derefed ones from the vector, then keep going#2017-05-2616:57unbalancedoooo okay fantastic#2017-05-2616:58unbalancedI think it would probably be doable to use channels for that#2017-05-2616:58faviladon't deref them all at once, that will flush the pipeline#2017-05-2616:58unbalanceddon't deref all the in-flights at once?#2017-05-2616:59favilawell yes, that would mean you wait until all in-flights are finished#2017-05-2616:59favilaso flushing the pipeline#2017-05-2616:59unbalancedwhy is flushing the pipeline bad?#2017-05-2617:00unbalancedor is it just inefficient?#2017-05-2617:00favilathat means while you are derefing all the inflights, you have 0 in flight#2017-05-2617:00unbalancedhmm is that true even if you are derefing them on a separate thread/channel?#2017-05-2617:00favilaso you go from e.g. 20 inflight, then your depth is reached you start derefing them all, so then you have 0 in flight, then you issue 20 inflight all again#2017-05-2617:01favilano, I'm explaining a caveat of a single-thread impl#2017-05-2617:01unbalancedahhhh#2017-05-2617:01favilawhen your inflights fill, be careful to deref only some, not all of your backlog, or else you will empty your pipeline#2017-05-2617:02unbalancedI see#2017-05-2617:02unbalancedso...
fill up 20, deref 10 or so, bring on 10 more, deref 10 or so, etc?#2017-05-2617:03unbalancedIf I had time for such things this would make for an interesting study#2017-05-2617:03favilayeah, so your effective inflight variance is going to depend on jitter#2017-05-2617:03favilai.e. difference in time-to-complete of each tx#2017-05-2617:04favilaI don't know how much depth matters, just as long as the txor never has to wait for another tx from you#2017-05-2617:05unbalancedgotcha. I did a NoSQL -> MySQL transfer pipeline once that did a similar thing that attempted to optimize its write speed by varying chunk size, but honestly I'm not sure it was worth the effort, the gains were fairly marginal#2017-05-2617:05favilait should always have another tx waiting in its queue after it finishes#2017-05-2617:05unbalancedgotcha. Excellent, gives me a great place to start#2017-05-2617:06favilaand if your depth is a little too deep, txor will apply backpressure anyway so its safe#2017-05-2617:06favilaas long as you eventually listen to the backpressure by derefing somewhere#2017-05-2902:25heptahedronHi! It is such a major pain to find any documentation on Datomic Free. I'm just trying to do the Getting Started tutorial on the site, but clojure.core.async isn't on the classpath when I run bin/repl. Am I supposed to include it manually? The guide mentions nothing about it#2017-05-2902:56favilaThe getting started tutorial uses the datomic client API (vs peer API) which uses core.async. Datomic free does not support client API so there isn't any point#2017-05-2902:56heptahedronOh thanks so much I was gonna waste hours trying to figure that out lol#2017-05-2902:56favilaThey really want you to start with datomic starter#2017-05-2902:56favilaNot free#2017-05-2902:57heptahedronI know, I hesitate to say this but it feels obnoxious at this point#2017-05-2902:57favilaOlder materials (e.g. 
Day of datomic) which start with the peer API will work with free#2017-05-2902:58heptahedroneugh, maybe I should just start with the starter though#2017-05-2902:58heptahedronAnyway thanks for that, spared me a lot of wasted effort#2017-05-2903:01favilaFree is so limited I wouldn't really use it unless I already knew datomic and had a particular app in mind which benefited from its licensing#2017-05-2903:01favila(But I also don't use the client API)#2017-05-2903:51eoliphantHi, I have a kind of broad modeling question. I’m trying to figure out the best way to support a dynamic data model. The simplest analogy would be something like what undergirds say, Google Forms or JIRA, where essentially, at runtime I can define a new field/attribute, that can then be used elsewhere. Datomic seems particularly suited to this, especially since apps that do this in a relational DB tend to use some sort of EAV anyway. This talk seems like a good approach (https://www.youtube.com/watch?v=sQCoTu5v1Mo), but was just wondering what other folks thought, and/or if there’s more recent ‘art’ on the subject#2017-05-2904:34favilaIIRC this talk talks about building self-describing databases with entity-level schemes. That's all well and good, but your problem is user-created attributes. 
Just like with a normal db, you probably don't want users creating their own schema--you need to reify their "attributes" as values: https://groups.google.com/d/msg/datomic/4clQqidRYJk/9hceY1hrno8J#2017-05-2904:35favilaI think we had a deeper discussion of this on this channel a long while ago but I can't find it
very basic, or rather, fundamental question:
what are the tradeoffs of using "type as an attribute" vs. "type as an enum value (ident) of an attribute"?
{:some.event/subtype-foo? true
:some.event/subtype-bar? false}
;; vs.
{:some.event/subtype :some.event.subtype/foo}
#2017-05-3011:10robert-stuttafordi prefer enum entities, @misha, because you can leverage datomic’s VAET index to find items-with-that-value very quickly#2017-05-3011:11robert-stuttaford(map :e (d/datoms db :vaet (d/entid db [:db/ident :some.event.subtype/foo])))#2017-05-3011:11robert-stuttaforddisadvantage is with d/pull; you get {:db/ident :some.event.subtype/foo} instead of simply :some.event.subtype/foo as a value#2017-05-3011:12robert-stuttafordhowever, a little extra code is worth paying for that perf benefit#2017-05-3011:12robert-stuttafordimportant to note that Datalog et al will leverage VAET in this way as well#2017-05-3011:13robert-stuttafordsemantically, it boils down to the idea that that enum value is reified as an entity in its own right, and you can leverage that#2017-05-3011:17mishaso far "a little extra code" is all I see in my project, and started to question whether I should prefer "flat is better than nested".#2017-05-3011:18mishacan't I "leverage datomic’s VAET AVET index to find items-with-that- value attribute very quickly" though?#2017-05-3011:19mishain my first example, :some.event/subtype-bar? false would actually be :some.event/subtype-bar? nil, and pretty much absent, resulting in just
{:some.event/subtype-foo? true}#2017-05-3011:22misha(although, absence of other "subtype" attributes will not be enforced as in :db/ident case, which might yield false-positives in AVET-results)#2017-05-3011:30mishaalso saving precious datoms out of that 100B datoms limit :)#2017-05-3012:29robert-stuttafordi think you know enough to make a value judgment for yourself 🙂#2017-05-3012:30robert-stuttafordactually, with idents, you’re storing N longs and 1 value. with flat values, you’re re-storing that value N times, which is probably ok for bools, but maybe not so ok for keyword or string values?#2017-05-3012:31robert-stuttafordi suppose Rich’s advice holds true here : “why worry, when you can measure”#2017-05-3013:36kschraderis the current limit 100B datoms or 10B datoms?#2017-05-3013:39mpenetI think 10, but I believe you could shard using multiple dbs#2017-05-3014:21dominicmNo, it's 100B#2017-05-3014:21dominicm100B with planning.#2017-05-3014:21dominicmIf you're going over 10B, call @marshall#2017-05-3014:21dominicmsee http://www.datomic.com/day-of-datomic-2016-part-5.html#2017-05-3014:21dominicmIf you're putting more than 100B datoms in datomic, just don't
- Stu Halloway
^^ my favourite quote from the day of datomic#2017-05-3014:36misha@robert-stuttaford ofc, datoms comment is absolutely irrelevant.#2017-05-3015:02robert-stuttaford😄#2017-05-3015:28Petrus TheronHow to pull :db/txInstant in Datalog query so I can sort entities by transaction insertion date?#2017-05-3015:32Petrus TheronThis seems to work: [:find ?tz (pull ?e [*]) :where [?e :variant/name _ ?tx] [?tx :db/txInstant ?tz]], but is there a way to make it part of the (pull...) result?#2017-05-3015:36robert-stuttaford@petrus nope. think about it. entities are not related to transactions via an “A”#2017-05-3015:36robert-stuttafordpull walks EAV, it doesn’t walk T#2017-05-3015:36Petrus TheronIsn't :db/txInstant an A?#2017-05-3015:37robert-stuttafordof the transaction entity, yes#2017-05-3015:37robert-stuttafordbut the relation between the datom and the transaction doesn’t follow an A#2017-05-3015:37robert-stuttafordagain, think about it. where would you put it? you have different T values for each EAV combination on the same E#2017-05-3015:37robert-stuttafordhow would you model that in a pull result, which is nested maps?#2017-05-3015:38Petrus TheronI see - and pull only operates on one entity? So this works: :find (pull ?e [*]) (pull ?tx [:db/txInstant]) ... but they are from separate entities#2017-05-3015:38robert-stuttafordyou can do that yes#2017-05-3015:38Petrus Theron(but not necessary, can just :find ?tz)#2017-05-3015:39Petrus TheronI find it awkward to work with vectors coming from Datomic. Order is complex. Wish I could get back a hash-map, i.e :find {:entity (pull ?e [*]) :time ?tz} :where ...#2017-05-3015:39favilapull can only follow attribute references to other entities. the link between an e+a+v and its tx does not follow an attribute reference#2017-05-3015:40robert-stuttaford(defn entity-attribute-date [db e attr]
(let [eavt (comp first #(d/datoms db :eavt %))]
(-> (eavt (id e) (d/entid db attr))
:tx
(eavt (d/entid db :db/txInstant))
:v)))#2017-05-3015:40favilatx is out-of-band info, as it were#2017-05-3015:40robert-stuttafordthis code assumes a cardinality/one attr#2017-05-3015:41robert-stuttafordgood luck!#2017-05-3015:43Petrus Theron@robert-stuttaford what is symbol id in that snippet?#2017-05-3015:53Petrus Theron@robert-stuttaford is that (comp first #(d/datoms ...)) right? I'm getting clojure.lang.ArityException: Wrong number of args (2) passed to: handler/entity-attribute-date/fn--43720 when I try your entity-attribute-date fn in the REPL#2017-05-3017:04robert-stuttaford@petrus https://gist.github.com/robert-stuttaford/39d43c011e498542bcf8#2017-05-3017:05robert-stuttafordit’s right; we use it all the time#2017-05-3021:26unbalancedI'm doing some back of the envelope math about whether or not a particular problem will "fit" into a datomic DB (without having to call marshall (as in, am I going over 10B datoms))... hypothetically if I put in, say, 1B datoms (let's say I use appropriate pipelining, respect transactor limits, etc) is that what it means by 1B datoms or does it mean X datoms + Y datoms created by datomic for internal purposes == 1B datoms?#2017-05-3021:27unbalancedanother way of phrasing the question would be, if I transact X datoms, is there a rule about how many Y datoms are created by datomic (and is that something I should worry about when considering my calculations)?#2017-05-3021:40eggsyntaxSilly question: does Datomic need to be installed and/or running in order to run an app that uses Datomic with an in-mem DB? Trying to do some troubleshooting, and I've never actually tried running it on a box that didn't have Datomic installed.#2017-05-3021:41unbalanced@eggsyntax are you talking about proper datomic or DataScript (for front end clojurescript apps?)#2017-05-3021:42eggsyntaxActual Datomic.
We typically run in staging with a connection to another server that provides the DB, but we're just doing a quick experiment with running in-mem on the staging box.#2017-05-3021:42unbalancedI love those "quick" experiments 😂#2017-05-3021:42eggsyntaxHeh. Part of the troubleshooting process...#2017-05-3021:44misha#2017-05-3021:44eggsyntaxI know. Thanks @misha, appreciate it.#2017-05-3021:45unbalancedI feel like I've run some demo apps that have somehow created an in-memory datomic DB without me starting my transactor, but I might be mistaken#2017-05-3021:45unbalancedI think the catalysis demo does it (maybe) but I'm still picking through the codebase to figure out how it works#2017-05-3021:46eggsyntaxCool, I'll look into that a bit.#2017-05-3021:46unbalancedfair warning: it won't make your "quick" experiment any quicker 😂#2017-05-3021:46eggsyntaxPossibly not 🙂#2017-05-3021:46misha#2017-05-3021:47unbalanced EDIT: (whoops put wrong one fixed)#2017-05-3021:51unbalancedhas anyone here done this? "Web after tomorrow" style syncing client-side datascript-style datomic-db with server-side datomic-db?#2017-05-3021:54mishaI am doing something with datascript-cache/datascript/datomic, but there is so much on my plate, that it is fair to say "I just started"#2017-05-3021:54mishayou might want to go through precursor-app's source, as guys use datomic/datascript. 
I haven't :)#2017-05-3021:55favila@misha @eggsyntax in memory datomic DBs do not require a transactor#2017-05-3021:55eggsyntax@favila cool, thanks 🙂#2017-05-3021:55favila@eggsyntax @misha you may be thinking of "dev" or "free" storages, which do require transactors#2017-05-3021:55favila"mem" requires nothing#2017-05-3021:56mishamost certainly, yes#2017-05-3021:56eggsyntax@favila and just to get total clarification: Datomic doesn't even need to be installed to run an app with an in-mem DB?#2017-05-3021:58favilaI don't know what "installed" means#2017-05-3021:58favilaYou need datomic peer api in your classpath#2017-05-3021:58favilaof whatever process wants an in-mem db#2017-05-3021:58misha@goomba catalysis is too scary for me at the moment. At this point I am much more comfortable reinventing my own wheel#2017-05-3022:01unbalanced@misha I feel you on that. It's a pretty clever piece of engineering but unfortunately I don't understand enough of the API and the alpha state of it means not suitable for production unless I make my own changes#2017-05-3022:03unbalancedanother way of phrasing the question (that I re-spammed below) would be, if I transact X datoms, is there a rule about how many Y datoms are created by datomic (and is that something I should worry about when considering my calculations)?
(sorry to spam this just afraid might have gotten buried)#2017-05-3022:03mishaI am not even sure what would be declared "suitable for production" at this point. Therefore I can't even evaluate it to a certain degree#2017-05-3022:04unbalanced"suitable for production" means I'm not going to have to worry about getting a call in the middle of the night that our servers are down because API changed/dependencies broke etc etc 😅#2017-05-3022:13favila@goomba I think "10 billion datoms" refers to number of unique datoms, not number of datoms counted by all indexes#2017-05-3022:13favilahttps://groups.google.com/d/msg/datomic/iZHvQfamirI/RANYkrUjAEwJ#2017-05-3022:14favilaThe soft limit of 10 billion is because the index structure segments get big#2017-05-3022:14favilaso having a datom in multiple indexes doesn't matter#2017-05-3022:14favilait's how many are in a single index that matters#2017-05-3022:16favilaI'm sure partitioning and/or creating entities in a specific order that increases read locality would significantly aid performance at large numbers of datoms#2017-05-3022:23unbalancedokay... so when doing capacity planning I shouldn't worry about the indexes datomic creates and just worry about the ones I create?#2017-05-3022:32favilawell, not exactly#2017-05-3022:32favilaeavt and aevt are created for every datom#2017-05-3022:32favilavaet is created for datoms with ref attributes#2017-05-3022:33favilaavet is created for datoms with index=true#2017-05-3022:33favilaand a fulltext index is created for fulltext=true#2017-05-3022:34favilathe presence/absence of these indexes affects storage space and query speed#2017-05-3022:37favilanoHistory=true is another consideration, since changes to those e+a datoms are not stored#2017-05-3022:37unbalancedah I see#2017-05-3022:38unbalancedwell...
might have to do some experimenting and see how it goes#2017-05-3022:39unbalancedI know for my use case the data will never change and don't need full text but query power is a must#2017-05-3022:39favilaimport a % of your db, backup-db, measure bytes and multiply#2017-05-3022:39favilathat is the smallest amount of storage you will need#2017-05-3022:40unbalancedstorage isn't an issue, just performance#2017-05-3022:40unbalancedand also development ease. don't want to shard the hell out of the DB if I don't have to#2017-05-3022:40favilaThe biggest predictor of perf in my exp is datom read locality#2017-05-3022:41unbalancedhas cognitect gotten you a job application yet @favila ? 😄#2017-05-3022:41favilaif you have a static dataset and can predict locality, you can customize your import process to maximize locality#2017-05-3022:41favilathis will improve peer performance significantly#2017-05-3022:41unbalancedlocality as in data locality or geographic locality?#2017-05-3022:42faviladata locality#2017-05-3022:42favilai.e., when you read from an index, you want your reads to cluster together#2017-05-3022:43favilathis means the peer doesn't need to pull+decompress as many index segments#2017-05-3022:43unbalancedah I see#2017-05-3022:45favilapartitions were a feature designed to aid locality#2017-05-3104:03ezmiller77Is it possible to pass a _ as an input in a parameterized query?#2017-05-3107:30favila@ezmiller77 can you be more concrete? You want [:in $ _ ]? or a symbol _ as a value?#2017-05-3107:31favilafirst is probably no, but I'm not sure why you would want to do that. Second is definitely yes.#2017-05-3107:44ezmiller77@favila: More like :in $ ?type, where ?type is a wildcard _ from the input arg passed to the query fn when I want not to filter the results. Because the only other way I know to get all is to construct a separate query.#2017-05-3112:46souenzzoezmiller77: (d/q '[:find ?e :in $ param :where [?e :user/id param]] db (quote _))#2017-05-3113:44favilaAh. 
No that is essentially the "nil" case. You must construct a separate query. I use cond-> when I create the clauses#2017-05-3113:54souenzzoMy attempt does not work 😞#2017-05-3114:25favila_ as data is not the same as _ as syntax#2017-05-3114:25favilayour query looks for a user-id equal to the symbol _#2017-05-3114:25favilawhich nothing will be#2017-05-3114:26favilaeven using an empty collection for param won't work#2017-05-3114:32favila(defn q-users [db user-id]
{:query {:find '[[?e ...]]
:in (cond-> '[$]
user-id (conj '?id))
:where (cond-> []
user-id (conj '[?e :user/id ?id])
(not user-id) (conj '[?e :user/id]))}
:args (cond-> [db]
user-id (conj user-id))})#2017-05-3114:32favila(for example)#2017-05-3114:32favila(d/query (q-users db nil)) or (d/query (q-users db "some-id"))#2017-05-3110:20misha@ezmiller77 what's wrong with separate query?#2017-05-3111:38mishais there a way to sort entities by creation date w/o storing some extra :e/created-date attribute value?
is sorting by id a good enough proxy for this?
say, I have event, with a collection of subevents. actual dates of subevents are not important, but order of creation is (e.g. for further reduction).#2017-05-3111:42mishause case example would be order of operations: 100% -10 -20% is not the same as 100% -20% -10 (72 vs 70)#2017-05-3112:42robert-stuttafordentity ids will always be larger if they are newer @misha#2017-05-3112:50misha@robert-stuttaford will ids (order) be preserved during built-in backup/restore?#2017-05-3113:16robert-stuttaford@marshall or @jaret can confirm for you @misha#2017-05-3113:17jaretI don't believe so. One sec let me confirm.#2017-05-3113:21jaretNevermind they are wholly preserved#2017-05-3113:23jaretSo the order of entity IDs is preserved during backup/restore#2017-05-3116:05robert-stuttafordthanks @jaret! fyi ^ @misha#2017-05-3118:08kschrader@jaret is that just current behavior or is that a design decision that can be relied on going forward?#2017-05-3118:08jaretrelied upon#2017-06-0109:37degI'm having trouble writing a seemingly simple query...
I have a bunch of tuples
[<id1> :category :x]
[<id1> :value 100]
[<id2> :category :y]
[<id2> :value 200]
[<id3> :category :x]
[<id3> :value 300]
I want to count how many ids I have of each category. So I want to get back something like {:x 2 :y 1}. How do I write this query?#2017-06-0109:39robert-stuttaford[:find ?category (count ?categorised) :where [?categorised :category ?category]]#2017-06-0109:41degPerfect, thanks.#2017-06-0109:43degCan I do this on multiple attributes simultaneously? That is, a query that would return {:category {:x 2 :y 1} :color {:blue 3 :red 5} :shape {:square 2 :ellipse 6}}? Or is that better done in multiple queries?#2017-06-0110:30misha@U3JJ35GUT: (ds/q
'[:find ?a ?v (count ?e)
:in $ [?a ...]
:where [?e ?a ?v]]
@conn [:exchange/to-unit :exchange/from-unit])
=>
([:exchange/from-unit 11 36]
[:exchange/to-unit 6 3]
[:exchange/to-unit 11 40]
[:exchange/from-unit 35 1]
[:exchange/to-unit 4 1]
[:exchange/from-unit 101 1]
[:exchange/from-unit 9 4]
[:exchange/from-unit 105 1]
[:exchange/from-unit 4 9]
[:exchange/to-unit 35 12]
[:exchange/from-unit 31 2]
[:exchange/from-unit 57 1]
[:exchange/from-unit 103 1]
[:exchange/to-unit 72 1]
[:exchange/to-unit 24 8]
[:exchange/to-unit 57 6]
[:exchange/from-unit 24 1])#2017-06-0110:32mishain this example ?v is an id (instead of value :x)#2017-06-0110:35mishanext, just reduce over the results#2017-06-0110:51degthanks#2017-06-0109:45robert-stuttafordwhy not give it a try? if you don’t get it right, multiple queries is fine. i find i prefer multiple smaller, focused queries#2017-06-0109:48degYup. I just gave it a few tries, but got back various cross products. I think I'll stick to the multiple smaller queries. Thanks.#2017-06-0109:50robert-stuttafordyep, that’s one of the reasons why i prefer what i do - i can reason about them, heh#2017-06-0115:53pedroteixeirahello, for a "on premise" setup, I was considering postgresql as storage for datomic.. but then just read this post https://blog.clubhouse.io/using-datomic-heres-how-to-move-from-postgresql-to-dynamodb-local-79ad32dae97f about moving from postgres. Are there other opinions/docs about recommendations for on premise / small (few users, one host) datomic setup? latest/stable postgresql versions would probably be ok, right? or some other storage is more "native/optimized" to datomic implementation? it would be usefull to know current statistics "on premise" choices, if those could be shared some how.. perhaps by a poll?#2017-06-0119:08devthis it sufficient to backup underlying SQL storage for production, or should the datomic backup-db from-db-uri to-backup-uri feature be used?#2017-06-0119:11favila@devth Backing up the storage backs up everything (i.e. all datomic databases in the table and all segments, including possible garbage segments) and cannot be moved to a different kind of storage. backup-db backs up a datomic db at a time, does not backup unused nodes (if not an incremental backup), and can be restored to arbitrary storages#2017-06-0119:12favilaso they have different abilities and tradeoffs. 
Whether storage-only backup is sufficient for you depends on what you need#2017-06-0119:13devthbasic guarantees about not losing data is really all i need at this point. we use Cloud SQL, which has an Automatic Backups feature. ideally i enable this and not have to worry about doing backup-db.#2017-06-0119:13favilaAh, we use cloud sql too. we also run backup-db periodically#2017-06-0119:14devthjust in case? or do you have specific use cases?#2017-06-0119:14favilaisolating dbs or moving to different storages#2017-06-0119:15favilawe have a problem with CloudSQL where we can't seem to reclaim mysql space#2017-06-0119:15favilamy dim memories of mysql admin were that innodb never gave up filespace, even if it was optimized and compacted#2017-06-0119:15devthah. have you guys considered Cloud SQL Postgres (currently beta)?#2017-06-0119:16favilasometimes when we do a big import, or delete a bunch of large datomic dbs, we will drop the table and restoredb everything#2017-06-0119:16favilano, we need a BAA#2017-06-0119:16favilaonly mysql is covered#2017-06-0119:17devthah, ok. cool, good to know. i think i'll start with Automated Backups then later maybe setup backup-db if and when I need it#2017-06-0119:17devthdo you use Minio for S3-compatible interface in GCP?#2017-06-0119:19favilaI didn't think it was possible to intercept the hostname construction done by backup/restore#2017-06-0119:19devthoh, i'm not sure as I haven't tried it yet#2017-06-0119:19favilawe would really like to just backup to google cloud storage directly. We've asked for that feature for years#2017-06-0119:19favilathey have an s3 compatible interface, too#2017-06-0119:19devthwould make sense 🙂#2017-06-0119:19devthGCP cloud storage does?#2017-06-0119:19favilayes#2017-06-0119:20devthnice. i did not know that.#2017-06-0119:20favilabut you need control of the hostname#2017-06-0119:20favilai.e. 
s3:// urls only take buckets#2017-06-0119:20favilaI guess with some /etc/hosts trickery it might work#2017-06-0119:21favilaanyway, we use to use gcsfuse to mount the gcs bucket, and then read/write from that#2017-06-0119:21devthi see#2017-06-0119:21favilabut we had some zero-length segments once that caused silent db corruption, so we stopped#2017-06-0119:21favilanow we mount a drive onto an instance#2017-06-0119:21favilalike animals#2017-06-0119:22devth😱#2017-06-0119:23devthI don't see a feature request for Google Cloud Storage backups on http://receptive.io#2017-06-0119:24devthadding it#2017-06-0119:24devthBackup to Google Cloud Storage#2017-06-0119:25devthadded#2017-06-0119:25devthand put most of my available priority on it#2017-06-0119:25favilahttps://receptive.io/app/#/case/26351#2017-06-0119:25devththere's also an existing feature Allow use of S3 "compatible" storage alternatives#2017-06-0119:26favilayes#2017-06-0119:26favilaah, the one I thought was for cloud storage I misread#2017-06-0119:26favilait's for cloud datastore#2017-06-0119:26favilawhich would be nice but is not essential. cloudsql works fine#2017-06-0119:28devthyep#2017-06-0121:05jdkealyif i have multiple instances listen to the same transaction queue, will the message be duplicated across instances or does reading it take it off the queue#2017-06-0121:08ddellacostasorry, meant to ask in here: https://clojurians.slack.com/archives/C03S1KBA2/p1496351133966888#2017-06-0204:37robert-stuttaford@jdkealy each peer gets one queue.#2017-06-0204:38robert-stuttaford@ddellacosta yes, i believe you can#2017-06-0213:04stuartsierra@jdkealy Each peer process gets one queue: Every peer will see every message. Within a peer process, there is only one queue, so if there are multiple threads reading from the queue each thread will see only some of the messages.#2017-06-0213:56ddellacosta@robert-stuttaford Thanks!#2017-06-0222:08rustam.gilaztdinovhello, guys! i’m little bit of frustrated
trying to launch the server with postgres; I executed all the sql scripts in bin/sql/
postgres-db.sql ->> postgres-table.sql ->> postgres-user.sql
After that I run the transactor
bin/transactor my-transactor.properties
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver ...
System started datomic:sql://<DB-NAME>?jdbc:, you ...
In my-transactor.properties
protocol=sql
host=localhost
port=4334
license-key=+-A...
sql-url=jdbc:
sql-user=datomic
sql-password=datomic
I don’t understand — what is <DB-NAME>?
Where is the place in the template which builds this db-uri and <DB-NAME> directly?
datomic:sql://<DB-NAME>?jdbc:
Many thx!#2017-06-0222:13favila@rustam.gilaztdinov <DB-NAME> is a placeholder for whatever datomic database you create#2017-06-0222:14favila(d/create-database "datomic:) for example#2017-06-0222:14favilacreates a datomic database called my-database#2017-06-0222:15rustam.gilaztdinovSo, first the database needs to be created in the repl
And then start transactor?#2017-06-0222:15favilano#2017-06-0222:15favilayou started the transactor#2017-06-0222:16favilathe transactor is telling you the uri "template" by which you can create databases on the storage the transactor has control over#2017-06-0222:16favilayou need a transactor to create a database#2017-06-0222:17favilathis template may not be correct if your peer is somewhere else on the network, uses a jdbc arg, etc#2017-06-0222:17favilabut in the common case you just substitute <DB-NAME> with a name and it works#2017-06-0222:18rustam.gilaztdinovAnd where I can substitute <DB-NAME> ? In which file?#2017-06-0222:23favilathere is no file#2017-06-0222:23favilain your code#2017-06-0222:23favila(d/create-database "datomic:) is an eg of clojure code#2017-06-0222:24rustam.gilaztdinovBut how can I execute this code, if I can not to run my transactor? 😃#2017-06-0222:24favilayou execute it in a peer#2017-06-0222:26favila(let [conn-uri "datomic:]
(when (d/create-database conn-uri)
(d/connect conn-uri)))#2017-06-0222:27favilaeg, that will create the database "my-database", and if it is newly created return the connection object#2017-06-0222:29faviladatomic peers connect to storage first, and an entry in storage (written by the transactor) tells them how to connect to the transactor#2017-06-0222:31favilaso the connection uri for a database is always datomic:<STORAGE-TYPE>://<DATOMIC-DATABASE-NAME>?<STORAGE-SPECIFIC-STUFF>#2017-06-0222:32favilaand transactor.properties host= and alt-host= are hostnames the peer should be able to use to get to the transactor (not storage)#2017-06-0222:33favilahost=localhost means essentially that peers and transactors are always on the same box#2017-06-0222:46rustam.gilaztdinovyup, thx#2017-06-0506:26nadejdeHi everyone! I’m having a problem with datomic client. I’m playing with a small app to get some play time with datomic. But the moment I try to add the datomic client dependency my project breaks
this is my project file:
(defproject rail-cards-api "0.1.0-SNAPSHOT"
:description "FIXME: write description"
:dependencies [[org.clojure/clojure "1.8.0"]
[metosin/compojure-api "1.1.10"]
[buddy/buddy-core "1.2.0"]
[com.datomic/clj-client "0.8.606"]
[org.clojure/core.async "0.3.443"]]
:ring {:handler rail-cards-api.handler/app}
:uberjar-name "server.jar"
:profiles {:dev [:project/dev :profiles/dev]
:project/dev {:dependencies [[javax.servlet/javax.servlet-api "3.1.0"]
[cheshire "5.5.0"]
[ring/ring-mock "0.3.0"]]
:plugins [[lein-ring "0.10.0"]]}
:profiles/dev {}})
and this is the error I get:
lein ring server
2017-06-04 23:44:35.300:INFO::main: Logging initialized @1619ms
Exception in thread "main" java.lang.NoClassDefFoundError: org/eclipse/jetty/http/HttpParser$ProxyHandler, compiling:(ring/adapter/jetty.clj:27:11)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6875)
at clojure.lang.Compiler.analyze(Compiler.java:6669)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6856)
at clojure.lang.Compiler.analyze(Compiler.java:6669)
at clojure.lang.Compiler.analyze(Compiler.java:6625)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:6001)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6868)
at clojure.lang.Compiler.analyze(Compiler.java:6669)
at clojure.lang.Compiler.analyze(Compiler.java:6625)
at clojure.lang.Compiler$IfExpr$Parser.parse(Compiler.java:2797)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6868)
at clojure.lang.Compiler.analyze(Compiler.java:6669)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6856)
at clojure.lang.Compiler.analyze(Compiler.java:6669)
it seems to have a problem with the ring plugin? This happens the second I add the [com.datomic/clj-client “0.8.606”] dependency. Tests still run fine though…
Am I doing it wrong?#2017-06-0506:27nadejdeI’m thinking that perhaps I need to include the datomic dependency differently? The one on Maven seems to be quite old.#2017-06-0506:36isaacI have got this error early time, both ring & clj-client depends on jetty, you need fix the conflicts of dependencies. use lein deps :tree show the confilict.#2017-06-0506:36isaac@nadejde#2017-06-0506:39nadejdeaaaa#2017-06-0506:40nadejdeThank you @isaac will do that. New to clojure didn’t think to check the dependency tree:)#2017-06-0507:36nadejdeGot it to build like this:
(defproject rail-cards-api "0.1.0-SNAPSHOT"
:description "FIXME: write description"
:dependencies [[org.clojure/clojure "1.8.0"]
[metosin/compojure-api "1.1.10"]
[buddy/buddy-core "1.2.0"]
[com.datomic/clj-client "0.8.606" :exclusions [org.eclipse.jetty/jetty-client
org.eclipse.jetty/jetty-http
org.eclipse.jetty/jetty-util]]
[org.clojure/core.async "0.3.443"]
[org.eclipse.jetty/jetty-util "9.2.21.v20170120"]
[org.eclipse.jetty/jetty-http "9.2.21.v20170120"]
[org.eclipse.jetty/jetty-client "9.2.21.v20170120"]]
:ring {:handler rail-cards-api.handler/app}
:uberjar-name "server.jar"
:profiles {:dev [:project/dev :profiles/dev]
:project/dev {:dependencies [[javax.servlet/javax.servlet-api "3.1.0"]
[cheshire "5.7.1"]
[ring/ring-mock "0.3.0"]]
:plugins [[lein-ring "0.12.0"]]}
:profiles/dev {}})
#2017-06-0507:37nadejdeis there a way to apply the exclusions for datomic client just in the dev profile while loading them in a production scenario? The conflict seems to be with lein-ring plugin. I will probably not be using that in a production environment right?#2017-06-0507:38nadejdeI guess what I’m asking is if a dependency added in the profiles overrides the one in the project?#2017-06-0515:27faviladependencies declared higher (closer to your project) in the dep tree override ones declared lower @nadejde#2017-06-0515:29favila@nadejde https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html The sections "transitive dependencies" and "dependency scope" apply#2017-06-0515:30favila"Dependency mediation" describes how deps are chosen#2017-06-0515:31nadejdeHi. Thank you. Ended up with this:
(defproject rail-cards-api "0.1.0-SNAPSHOT"
:description "FIXME: write description"
:dependencies [[org.clojure/clojure "1.8.0"]
[metosin/compojure-api "1.1.10"]
[buddy/buddy-core "1.2.0"]
[com.datomic/clj-client "0.8.606" ]
[org.clojure/core.async "0.3.443"]]
:ring {:handler rail-cards-api.handler/app}
:uberjar-name "server.jar"
:profiles {:dev [:project/dev :profiles/dev]
:project/dev {:dependencies [[javax.servlet/javax.servlet-api "3.1.0"]
[cheshire "5.7.1"]
[ring/ring-mock "0.3.0"]
[com.datomic/clj-client "0.8.606" :exclusions [org.eclipse.jetty/jetty-client
org.eclipse.jetty/jetty-http
org.eclipse.jetty/jetty-util]]]
:plugins [[lein-ring "0.12.0"]]}
:profiles/dev {}})
seems to work#2017-06-0515:31favilalein also has a "pedantic" mode which rejects ambiguous deps#2017-06-0515:33favilahttps://github.com/technomancy/leiningen/blob/e1d27074e5d85d2bb7ae83719442e14ac4331396/sample.project.clj#L77-L83#2017-06-0515:33mgrbyteIs there standard way to model hierarchical keywords? I'm thinking along the lines of: {:type my-type :item some-entity} , where my-type can be derived as per clojure.core/derive. The use case is to query all entities that have a sub-type.#2017-06-0515:34robert-stuttaford@mgrbyte https://github.com/jimrthy/substratum#2017-06-0515:35mgrbytethanks @robert-stuttaford , reading#2017-06-0515:35robert-stuttafordbasically, though, Datomic’s data model is so flexible, you can do what you describe quite comfortably#2017-06-0515:37robert-stuttafordthere was a talk given at 2013 Conj about this but i can’t find it#2017-06-0515:38robert-stuttafordfound it! https://www.youtube.com/watch?v=sQCoTu5v1Mo&list=PLZdCLR02grLpFFB54kq3FU5A0_C94l2WF&index=8#2017-06-0515:39mgrbyteI meant keywords not "collections"#2017-06-0515:40robert-stuttafordoh, heh#2017-06-0515:41robert-stuttaforddon’t think i’ve considered that before. generally speaking, i find it’s better to be explicit about what you’re asking for. there’s a running joke (at least in the parts of the Clojure community i subscribe to) that there’s no good use for derive yet#2017-06-0515:41mgrbytecontext: I'm modelling interactions between two or more biological entities, which can have an ontological type#2017-06-0515:42mgrbyteIn pure clojure, using clojure.core/derive would work as per https://clojure.org/reference/multimethods#2017-06-0515:42mgrbyteor a java class hierarchy would work with an embeded isa? 
in the query#2017-06-0515:43robert-stuttafordthis is interesting 🙂#2017-06-0515:43mgrbyteI was wondering if anyone has had the need for such a construct....#2017-06-0515:45robert-stuttafordmy gut says that derive shenanigans probably won’t work, because attrs are actually just db/idents that resolve to integer ids for the schema that they represent#2017-06-0515:46mgrbyteyep, that makes sense. i.e idents are "concrete" keywords#2017-06-0515:49mgrbyteI'll have to think on this some more, perhaps there's an alternate way to achieve the same thing: datalog query for: find all entities who's type t matches (ancestor t) or (child t) (made up notation)#2017-06-0515:49robert-stuttafordyou could model these relationships explicitly, and write Datalog rules that encompass these lookups#2017-06-0515:49robert-stuttafordand then use those wherever#2017-06-0515:50robert-stuttafordhttps://github.com/Datomic/mbrainz-sample has nice examples of rules and even recursive rules#2017-06-0515:54mgrbyteI could do it with string prefix matching, since the (namespaced) keywords will follow the same pattern. e.g:
:interaction.type.genetic/supressing
:interaction.type.genetic.supressing/sub#2017-06-0515:55mgrbytedatalog clause: [(= (namespace ?a) "interaction.type.genetic.supressing")]#2017-06-0515:56mgrbytenot sure about the efficiency of this however#2017-06-0515:57robert-stuttafordefficiency all depends on how often you do it and how much data you do it on#2017-06-0516:06mgrbyteoften: not very often, possible to cache results. volume: ~1/2 a million entities#2017-06-0516:07mgrbytesomewhere in the region of 200 distinct (but ontological) types#2017-06-0516:09mgrbyteanyhow, thanks for sounding me out @robert-stuttaford 🙂#2017-06-0519:30nadejdeHi. Does anyone have any idea what version of ring is compatible with [com.datomic/clj-client “0.8.606”]?#2017-06-0519:31nadejdeI get a conflict in the jetty dependency. datomic works with 9.2.21.v20170120 but ring does not#2017-06-0519:31nadejdeIf I try with a higher version ring works but datomic client does not:(#2017-06-0519:32nadejdehow do I deal with this?
My steps:
1. dockerfile with postgres
2. dockerfile with datomic-transactor, which depends on 1
3. dockerfile with datomic-peer, which depends on 2
4. dockerfile with datomic-console
This is my docker-compose.yml, 2 first steps
version: "2"
services:
db:
build:
context: ./postgres
dockerfile: Dockerfile
environment:
- POSTGRES_USER=datomic
- POSTGRES_PASSWORD=datomic
- POSTGRES_DB=datomic
ports:
- "5432:5432"
datomic-transactor:
build:
context: ./datomic-transactor
dockerfile: Dockerfile
ports:
- "4336:4336"
- "4335:4335"
- "4334:4334"
volumes:
- "/data"
- "/log"
depends_on:
- db
In step 1 I add sql-snippets from bin/sql to image and execute it
In step 2, in config/sql.templates I add this
protocol=sql
host=0.0.0.0 (???)
port=4334
alt-host=datomic-transactor
license-key=...
sql-url=jdbc: (db, not localhost!!, see docker-compose, where service name is db and this is default name for network)
sql-user=datomic
sql-password=datomic
When I execute docker-compose build, this happens
Starting datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver ...
System started datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver
Critical failure, cannot continue: Lifecycle thread failed
java.util.concurrent.ExecutionException: org.postgresql.util.PSQLException: Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
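A common cause of exactly this "Connection refused" on startup is ordering: compose's depends_on only waits for the db container to start, not for postgres to actually accept connections, so the transactor can race the database. A hedged sketch of a readiness check, reusing the service names from the compose file above (the healthcheck/condition syntax requires compose file format 2.1+):

```yaml
# Sketch only: make the transactor wait until postgres accepts connections,
# not merely until the container process has started.
version: "2.1"
services:
  db:
    # ... image/environment as in the compose file above ...
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U datomic -d datomic"]
      interval: 5s
      timeout: 5s
      retries: 10
  datomic-transactor:
    # ... build/ports as in the compose file above ...
    depends_on:
      db:
        condition: service_healthy
```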
#2017-06-0522:35rustam.gilaztdinovWhen I go to the container with postgres, everything works fine, I can execute psql -h localhost -d datomic -U datomic#2017-06-0522:40rustam.gilaztdinovI searched all across github, but nobody shared a solution =(
Everything works with dev properties#2017-06-0523:05rustam.gilaztdinovthisisfine#2017-06-0523:09favilacan you connect to the postgresql container from outside the docker container? could the postgresql user restrict permissions by ip?#2017-06-0523:09rustam.gilaztdinovyes, I can!#2017-06-0523:10favilacan you run a transactor from outside?#2017-06-0523:10rustam.gilaztdinovNope, my build fails#2017-06-0523:10rustam.gilaztdinovAnd the container isn't created#2017-06-0523:15favilaThis is really a docker question. I don't know why the transactor container can't connect to the postgres container. My advice is only general: simplify the problem. Verify that a simple container (no transactor) can connect, work from there#2017-06-0523:15rustam.gilaztdinovNow I can't run just datomic with postgres storage locally, not in Docker
My steps — launch bin/transactor with sql properties, then bin/repl or bin/shell, where I define the db-uri
datomic % uri = "datomic:"
datomic % Peer.createDatabase(uri);
// Error: // Uncaught Exception: bsh.TargetError: Method Invocation Peer.createDatabase : at Line: 2 : in file: <unknown file> : Peer .createDatabase ( uri )
Target exception: java.util.concurrent.ExecutionException: org.postgresql.util.PSQLException:
#2017-06-0523:18faviladatabase is "demos"?#2017-06-0523:18favila(psql db)#2017-06-0523:18faviladid you get that backwards?#2017-06-0523:19favila"datomic:"#2017-06-0523:20rustam.gilaztdinovYes, database, which I want to create is “demos”
I tried your approach, still the same error#2017-06-0523:25favilamissing username and password#2017-06-0523:25favilawhat is the exception precisely?#2017-06-0523:26favilathe connection uri = I would expect on the peer for a db named demos, given your transactor startup, is datomic:#2017-06-0523:26favilayou may still get an error about not being able to connect to the transactor, but not a postgresql connection error#2017-06-0523:33rustam.gilaztdinovno, this is a psql error
// Error: // Uncaught Exception: bsh.TargetError: Method Invocation Peer.createDatabase : at Line: 2 : in file: <unknown file> : Peer .createDatabase ( uri )
Target exception: java.util.concurrent.ExecutionException: org.postgresql.util.PSQLException: The connection attempt failed.
java.util.concurrent.ExecutionException: org.postgresql.util.PSQLException: The connection attempt failed.
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at clojure.core$deref_future.invokeStatic(core.clj:2290)
at clojure.core$future_call$reify__9377.deref(core.clj:6849)
at clojure.core$deref.invokeStatic(core.clj:2310)
at clojure.core$deref.invoke(core.clj:2296)
Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:233)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:144)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:29)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:21)
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:31)
at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24)
at org.postgresql.Driver.makeConnection(Driver.java:410)
at org.postgresql.Driver.connect(Driver.java:280)
#2017-06-0523:35favilafor the new uri?#2017-06-0607:44joseph@rustam.gilaztdinov in my experience, maybe you need to add an extra parameter useSSL=false in the url, and have you installet postsql library in the project.clj and in the transactor's lib folder?#2017-06-0608:26rustam.gilaztdinov@joseph I just try to up docker-compose, not using in clojure project.
> in the transactor’s lib folder?
Can you clarify what that means?#2017-06-0608:29josephI mean when you use a specific database like postgresql behind datomic, you also need to provide its library to both the transactor and the project.clj#2017-06-0608:30josephfor example, I use mysql, so I need to download the mysql-java connector and move it to datomic's lib folder#2017-06-0608:30josephthis is the part of my Dockerfile:#2017-06-0608:31josephENV DATOMIC_VERSION 0.9.5561
ENV DATOMIC_HOME /opt/datomic-pro-$DATOMIC_VERSION
ENV DATOMIC_DATA $DATOMIC_HOME/data
ENV MYSQL_CONNECT_VERSION 5.1.41
RUN apk add --no-cache unzip curl wget
RUN mkdir -p $DATOMIC_HOME/lib
RUN mkdir -p /usr/datomic/config
RUN wget -O /tmp/mysql-connector-java.zip \
&& unzip /tmp/mysql-connector-java.zip -d /tmp \
&& cp /tmp/mysql-connector-java-$MYSQL_CONNECT_VERSION/mysql-connector-java-$MYSQL_CONNECT_VERSION-bin.jar $DATOMIC_HOME/lib \
&& rm -fr /tmp/mysql-connector-java.zip /tmp/mysql-connector-java*
#2017-06-0608:33josephand to connect to the postsql from client, you also need to add its library in the project.clj, for example, i am using mysql, so I need to add this [mysql/mysql-connector-java "5.1.41"] in the dependencies in the project.clj#2017-06-0608:35rustam.gilaztdinovAbout connector in Dockerfile — that’s interesting
In doc about setting up storage — http://docs.datomic.com/storage.html — nobody mention that#2017-06-0608:36joseph@rustam.gilaztdinov hmm... it seems datomic has already contained the postsql library in the lib folder#2017-06-0608:36joseph@rustam.gilaztdinov it's mentioned here: http://docs.datomic.com/storage.html#sec-5-1#2017-06-0608:38rustam.gilaztdinovYes, I see
But when I want just build storage with peer and transactor — no need for this, right?#2017-06-0608:40josephbuild storage with peer and transactor? I think it's for it or maybe I misunderstand you...#2017-06-0608:42josephwhen transactor connects to the postsql, it needs the postsql connector in its lib folder#2017-06-0610:25Matt ButlerIs it possible to do negation over a collection binding
(d/q '[:find [?e ...]
:in $ [?ages ...]
:where
[?e :age _]
(not-join
[?e ?ages]
[?e :age ?ages])] db [1 2 3 4])
the inverse of this query, so that it returns any ?e whose age is not in that collection.
(d/q '[:find [?e ...]
:in $ [?ages ...]
:where
[?e :age ?ages]] db [1 2 3 4])
#2017-06-0612:48uwo@mbutler will this work for you?
(d/q '[:find [?e ...]
:in $ ?excluded-ages
:where
[?e :age ?age]
(not [(contains? ?excluded-ages ?age)])]
db #{1 2 3 4})
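(A self-contained sketch of both directions against an in-memory db, assuming a Datomic peer library is on the classpath; the `:age` schema and sample data below are invented for illustration:)

```clojure
(require '[datomic.api :as d])

(def uri "datomic:mem://ages-demo")
(d/create-database uri)
(def conn (d/connect uri))

;; minimal schema + sample data (illustrative only)
@(d/transact conn [{:db/ident       :age
                    :db/valueType   :db.type/long
                    :db/cardinality :db.cardinality/one}])
@(d/transact conn [{:db/id "a" :age 2}
                   {:db/id "b" :age 9}])

;; ages IN the collection: bind each element of the input
(d/q '[:find [?e ...] :in $ [?ages ...]
       :where [?e :age ?ages]]
     (d/db conn) [1 2 3 4])     ; the :age 2 entity

;; ages NOT in the collection: bind the whole set, negate contains?
(d/q '[:find [?e ...] :in $ ?excluded
       :where [?e :age ?age]
              (not [(contains? ?excluded ?age)])]
     (d/db conn) #{1 2 3 4})    ; the :age 9 entity
```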
#2017-06-0613:20Matt Butler@uwo had considered that, only hesitant that the inverse query would be so vastly different, dont know the perf implications of datomics set comparison vs contains? maybe there is a lesson in there about using clojure in datalog.#2017-06-0613:21Matt ButlerSomething in me wants it to work with the collection binding.#2017-06-0614:14daveliepmannWhere would I report a typo in a Datomic doc page? Specifically the description of parameters to tx-data http://docs.datomic.com/query.html seems to be garbled: "Given a log and a database, tx-data returns a collection of the datoms added by a transaction." In fact (and according to the example on that page) tx-data is given a database log and a transaction.#2017-06-0616:07robert-stuttaford@jaret @marshall ^ 🙂#2017-06-0616:12uwoWhen handling references on the front end, do most of you work with the :db/id from datomic? I have a colleague that thinks datomic ids shouldn’t leave the server, and if you need to refer to a ref within datomic you should use a uuid. I don’t personally see a need to use uuids until I’m communicating with a separate system. (of course, there is the question of where that line begins)#2017-06-0616:22Matt Butler@uwo have done it both ways, creating uuid with squids and using them externally and just using the :db/id everywhere. Your IDs and by extension potentially your urls (in the case of a rest api) become very predictable. Don't know if you would consider that a concern?#2017-06-0616:23uwo@mbutler definitely something to weigh. good point#2017-06-0616:54uwoI guess I would put that in the category of authorization#2017-06-0618:56favila@uwo I never expose db/id to users or other systems, but will use them internally for short-lived uses#2017-06-0618:56favilanot so extreme as never transact anything by id#2017-06-0618:57favila(which is in fact impossible in many cases, e.g. 
updating a component id)#2017-06-0620:10hcarvalhoaves:db/id will not survive across migrations too, that's why datomic supports :db/unique and lookup refs#2017-06-0712:49uwo@favila thanks#2017-06-0713:01marshall@daveliepmann Thanks for catching that - I’ve fixed it in the docs. http://docs.datomic.com/query.html#sec-5-12-7#2017-06-0713:20marshallDatomic 0.9.5561.50 is now available https://groups.google.com/d/topic/datomic/m03v0hNENr8/discussion#2017-06-0714:21souenzzoThere is some how to store datalog inside datomic?
I can think of two ways:
- Save a datomic function that returns a datalog query
- Save a string and parse it.
But neither looks like it would have good performance#2017-06-0714:28matthaveneryou could store it as bytes and use fressian if parsing is a perf problem#2017-06-0716:39anmonteiro@marshall can you expand on the new healthcheck endpoint?#2017-06-0717:27marshalltransactor : http://docs.datomic.com/transactor.html#sec-1-1
peer server: http://docs.datomic.com/peer-server.html#sec-2-1
@anmonteiro#2017-06-0717:28anmonteirothanks! I was looking here: http://docs.datomic.com/monitoring.html but didn’t find anything 🙂#2017-06-0717:28anmonteiroalso searched the docs but they’re probably not indexed yet#2017-06-0719:21souenzzoIs there something to worry about when making pmap queryies?? (pmap #(d/q % db) datalogs)#2017-06-0719:48favilano, but why not one query?#2017-06-0721:22souenzzofavila: I agreed that merge my datalogs was a good idea. But I tryied for hours and fail 😞#2017-06-0719:49favilaNot sure pmap will give any benefit over mapv#2017-06-0722:45devthgot an error: {:db/error :db.error/unsupported-alter-schema, :attribute :db/valueType, :from :db.type/string, :to :db.type/ref} while attempting to transact a large schema. the error doesn't indicate which field is being attempted to alter. hints?#2017-06-0722:47devthactually i do see a :norm-name :schema_852284445. maybe it's obfuscated by conformity#2017-06-0800:00bbqbaron#punkbandnames#2017-06-0804:50souenzzo@devth you can't change :db/valueType http://docs.datomic.com/schema.html#sec-5-3
I saw somewhere, though it's not recommended: "If you really need to alter valueType, rename the attribute to a new ident, then create a new one with correct type"#2017-06-0813:54pbostromdoes anyone use conformity? https://github.com/rkneufeld/conformity I have a question about its purpose:
In a more general sense, conformity allows you to declare expectations (in the form of norms) about the state of your database, and enforce those idempotently without repeatedly transacting schema, required data, etc.
Is there any harm in repeatedly transacting the schema? Isn't that operation already idempotent?#2017-06-0817:08souenzzowhen you have a big schema, is good to know all old migrations before do a new one. (yes, you can do it with git history, transaction log... but plain text is a good way 😉 )#2017-06-0813:57bbqbaronooh! i do in fact use it#2017-06-0813:57bbqbarontechnically yes, but any transaction has side effects, even if they may be negligible or acceptable#2017-06-0813:58bbqbaroni haven’t tested the ultimate effects, but you will leave a swath of extra transaction datoms behind if you just run your schema file on startup#2017-06-0813:58bbqbaronit may also be true that some forms of schema migrations could be destructive and should never be repeated, although i have none of that type#2017-06-0813:59bbqbaronmaybe something like “ensure that this schema has exactly six entities with :user/admin” or something similarly contrived that you may not always want to be true, but may wish to ensure that a db starts with?#2017-06-0814:00pbostromthanks @bbqbaron#2017-06-0814:01bbqbaronno problem#2017-06-0815:00val_waeselynck@pbostrom shameless plug: you may want to check out the README of datofu (https://github.com/vvvvalvalval/datofu#managing-data-schema-evolutions) for a guide on migrations. HTH, feedback welcome#2017-06-0815:00pbostromcool, thanks#2017-06-0815:00val_waeselynckThe internals are essentially the same as conformity#2017-06-0819:15uwois it possible to give a peer read only access?#2017-06-0819:19bbqbaroni don’t think so; datomic didn’t have any permissioning notion last i checked#2017-06-0819:23azHi all, trying to start up the console but I am getting: java.lang.IncompatibleClassChangeError. Any ideas? Using the demo pro version. The transactor is running System started datomic:#2017-06-0819:36marshall@limix Are you using datomic FREE? There is a known issue with Datomic FREE and console. We’re working on a fix. 
If you register for Datomic starter (which is free to use) the Datomic PRO edition has no classpath issue with using console.#2017-06-0819:36azI was using the starter pro#2017-06-0819:36marshallwhat version?#2017-06-0819:37azdatomic-pro-0.9.5561.50#2017-06-0819:38marshallhrm. I tested that yesterday and it started fine#2017-06-0819:38marshallwhat command are you running to start console?#2017-06-0819:39azas per the console readme, I’ve tried both: bin/console -p 8080 alias transactor-uri-no-db and bin/console -p 8080 mbrainz datomic:#2017-06-0819:39marshallhrm#2017-06-0819:40marshallmight be worth trying a clean download/unzip of the distribution#2017-06-0819:41azdoes the pro already come with the console? I downloaded the console separately and ran the install-console#2017-06-0819:41marshallah, yes#2017-06-0819:41marshallyou’ll want to use the one packaged with pro#2017-06-0819:41marshallit is already in there#2017-06-0819:41azah ok got it#2017-06-0819:41azthank you#2017-06-0819:41marshallsure#2017-06-0819:44azworks!#2017-06-0819:44marshall👍#2017-06-0820:01val_waeselynck@uwo maybe connect to the transactor then kill it :p#2017-06-0820:02uwo😄 @val_waeselynck#2017-06-0901:41azDo you no longer need to use: :db/id #db/id[:db.part/person]? Seeing the docs now reference: [{:db/ident :person/name…#2017-06-0908:54chelseyHey anyone have a solution to combine pull expressions with aggregation in a single query?#2017-06-0908:55chelseyFor example, I know that pull expressions don’t support count, but is there a way to get a count included in the pull result?#2017-06-0909:08robert-stuttaford@chelsey not as far as i can tell. Datalog’s pull expressions really just compose with d/pull directly. other than the argument order, i don’t think there’s any difference between the implementations#2017-06-0909:09robert-stuttafordhappily, though, you can just use multiple queries#2017-06-0909:16chelseyThanks Robert! Can you explain what you mean by multiple queries? 
Do you mean running d/q multiple times?#2017-06-0909:25chelseyLet me give you a bit more context: Essentially, I have a Reddit-like tree structure, where there is no distinction between posts and comments. On top of this, “likes” are themselves entities, which I can retrieve easily with pull, but also need to return a count. I realize I can transform the data afterwards but was hoping there was a cleaner way of doing this.#2017-06-0909:35karol.adamiec@chelsey well, i know your pain. it takes a lot to internalize that unique treat of datomic. Running multiple queries and post processing them IS the clean way in some sense (or to put it differently it definitely is not dirty!). we are not in kansas anymore 🙂#2017-06-0909:36karol.adamiecon top of that multiple q could be more reusable as well….#2017-06-0909:55robert-stuttaford@chelsey yes. or any combination of d/q d/entity d/pull. once you have that db value, you can query against it as much as you like, and your view on the data will be consistent#2017-06-0910:01chelsey@karol.adamiec ahh it is painful, but I’ll accept it ¯\(ツ)/¯#2017-06-0910:02chelsey@robert-stuttaford @karol.adamiec thanks to both of you 🙂#2017-06-0910:03robert-stuttaford@chelsey think of Datomic less like a store of data ‘over there’ and more like a neat way to talk about data ‘over here’#2017-06-0910:03robert-stuttafordsometimes you use map, filter, etc, and sometimes you use d/q, d/pull, etc. and very often you combine them#2017-06-0910:03robert-stuttafordDatomic handles the ‘over-there’ness for you#2017-06-0913:29uwoOur infrastructure team has configured our sql storage to use “Always on HA” for read-scale (https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server). 
They assured me that it should be transparent to readers, but I’m curious if that sounds off.#2017-06-1123:36azIs there a way to listen for changes on a peer?#2017-06-1200:03bbqbaronyes, i believe you can convert datoms into a stream of some sort. one moment…#2017-06-1200:03bbqbaronsomething like http://docs.datomic.com/clojure/index.html#datomic.api/tx-report-queue?#2017-06-1200:03bbqbaroni’ve never had to use it, but something like that?#2017-06-1202:17azGreat thank you, any ideas on a realtime strategy for datomic?#2017-06-1207:59misha@chelsey how is "not requiring to pack every-possible query-result modification into query itself" painful? kappa
I'd rather ->>/`map`/`filter` all the things, than building monstrous queries (there is no execution-planners to optimize your queries btw.)#2017-06-1208:02mishafind ids, pull some attributes, aggregate or whatever, :beach_with_umbrella:#2017-06-1211:31bbqbaron@limix what did you mean by “realtime strategy?” assuming you’re not referring to Starcraft, of course 😛#2017-06-1218:07joshgIf you’re in the DFW area tomorrow, we’re talking about Om Next and Datomic at the Clojure Meetup: https://www.meetup.com/DFW-Clojure/events/239721041/#2017-06-1218:49az@bbqbaron haha sorry, yes that is correct, I meant to say a strategy for building realtime apps with datomic.#2017-06-1218:52azIs there someway to have the notion of change streams like rethinkdb? Would there be a reasonable way to create a map of queries that get looked up on new queries, and invalidated when a new query will alter the result?#2017-06-1218:55azBasically, how could we know if the query: [?e :person/name ?name] would have a new result after adding a new person to the database?#2017-06-1218:56azThen theoretically, could we have peers spin up, and manage connections and streaming changes for a list of client queries?#2017-06-1219:02azOr would a naive approach work better to simply just rerun each live query on each peer. Do I understand correctly that those will likely run against the cache anyway?#2017-06-1219:11jdkealyare there any examples on how to call tx-pipeline ? I see the function in the docs, confused on how to use it.#2017-06-1219:26uwowhen y’all write data migrations, do you stick with the typical approach of always having a revert procedure for each migration?#2017-06-1219:30stuartsierra@limix Datomic does not have "streaming queries" in that sense. Each peer receives a stream of all new transactions in the database, but it is up to the application to figure out what to do with that. For a small number of simple queries, it might be OK to re-run them all on each transaction. 
For more complex scenarios, you will need to write custom code to examine each transaction and decide if it affects the data you are interested in.#2017-06-1219:32stuartsierra@uwo Generally, no. The "revert migration" doesn't really make any sense in Datomic: Schema is added, but never removed (in production). See https://github.com/rkneufeld/conformity as an example of a Datomic schema migration strategy.#2017-06-1219:32azthanks @stuartsierra#2017-06-1219:33uwo@stuarthalloway does the answer change if it’s only about data migrations not schema migrations? I know I don’t need to remove attributes#2017-06-1219:35stuartsierra@uwo Still no. You can never truly "go back to the way it was" before the migration. You can only add new data, which might include retractions of data added in previous migrations.#2017-06-1219:36stuartsierraThere is a difference in how we tend to work with Datomic in development versus production. In development, it is common to delete and recreate a database many times until you are satisfied with the schema. But once you have real data in production, you can only add.#2017-06-1219:36uwoare there no such thing as data migrations forward?#2017-06-1219:37uworight, we’re trying to figure out how data migrations will work in production right now#2017-06-1219:37stuartsierraIn production, you only ever add new information (which may include retractions or schema alterations).#2017-06-1219:38stuartsierraEvery "migration" is just transacting new things into the database. Conformity is one way to manage this.#2017-06-1219:47uwothanks @stuartsierra we’re wrestling with some misconceptions#2017-06-1220:21nwjsmith@uwo#2017-06-1220:21nwjsmithI think you've got the wrong stu#2017-06-1220:26uwoha! @nwjsmith thanks#2017-06-1323:41devthidents can't be used as lookup refs in the same tx they are created in, right?#2017-06-1323:45devthhm. 
that sorta complicates a large-ish graph of potentially inter-referencing entities to be transacted.#2017-06-1323:46devthguess that's what tempids are for#2017-06-1402:07souenzzodevth: tempid's are awesome#2017-06-1406:50val_waeselynck@devth they can't, and you can definitely benefit from the client itself having some notion of a tempid. Note than you can also use unique-identity attribute for upsert, e.g {:a/id "some-id" :a/b {:b/id "some other id"}} instead of {:a/id "some-id" :a/b [:b/id "some other id"]}#2017-06-1406:50val_waeselynckBe careful of the security issues that can be caused by sending that from the client though#2017-06-1414:20devthval_waeselynck: thanks. you mean client as in browser? in my case i'm sending them from the server side of a peer.#2017-06-1420:02val_waeselynckYes as in browser#2017-06-1420:57devthgot it. thanks#2017-06-1414:56cap10morganHow would one query Datomic for any entities where attr1 has value a OR attr2 has value b? I can't use an or clause because the attrs are different. I feel like I might be missing something simple here.#2017-06-1415:01jeff.terrellNot sure if this is the best solution, but you could always do two queries and combine the results afterwards.#2017-06-1415:01cap10morgan@jeff.terrell Yeah. That's my fallback option. I'm just surprised that there is (apparently) no way to do this in one query.#2017-06-1415:06karol.adamiec@cap10morgan are you absolutely sure or clause is not what you need? i do use it with different attrs….. ??#2017-06-1415:06karol.adamiecit seems to me like it is is EXACTLY for that reason. on one attr you can have logical or by binding a collection#2017-06-1415:06cap10morgan@karol.adamiec I got an error when I tried and the docs say: "All clauses used in an or clause must use the same set of variables, which will unify with the surrounding query. 
This includes both the arguments to nested expression clauses as well as any bindings made by nested function expressions."#2017-06-1415:07karol.adamiecahh, yes#2017-06-1415:07karol.adamiecso the issue is in binding ;/#2017-06-1415:07cap10morganbut maybe I'm doing it wrong 🙂#2017-06-1415:08karol.adamiecso OR claus works for :attr1 :attr2 only when it is bound to the same variable a#2017-06-1415:09karol.adamiecseems like two queries then 😞#2017-06-1415:12favila@cap10morgan show us the clauses. You probably need or-join#2017-06-1415:13favilathe key is "same set of variables" (for unification), NOT same attributes#2017-06-1415:14cap10morganah, so maybe (or-join [?user] ...?#2017-06-1415:15cap10morganhey, that seems to work. thanks @favila!#2017-06-1416:16cap10morganoh, huh. it looks like the or-join version does not work. it finds every user b/c the ?email-search and ?phone vars don't get unified with their inputs I guess?#2017-06-1416:19cap10morganso the relation binding version might be the way to go#2017-06-1416:36favila(d/q '{:find [[(pull ?user [*]) ...]]
:in [$ ?email-search ?phone]
:where [(or-join [?user ?email-search ?phone]
[?user :user/email-search ?email-search]
[?user :user/phone ?phone])]}
(d/db c) "#2017-06-1416:37favila@cap10morgan ^#2017-06-1416:37cap10morganhuh, I would have thought that was what it was doing w/o or-join#2017-06-1416:37cap10morganinteresting#2017-06-1416:37cap10morganthanks again @favila#2017-06-1416:39favilaor adds the additional restriction of all clauses using the same vars#2017-06-1416:41favilaI'm not sure if that's just for safety or for an optimization#2017-06-1416:45cap10morganNow I'm wondering which one I prefer. Do you prefer the or-join over the relation binding, @favila?#2017-06-1416:46favilaexplicit or-join is the only thing you can do if you have to check more than one pattern to match a condition#2017-06-1416:47favilae.g., if you grow a "contact" entity and move all phone and email into those#2017-06-1416:47favilaor-join is probably also faster, since the query compiler can see what attrs it needs to look up#2017-06-1416:47favila(although I don't know if it's fast enough to make a difference)#2017-06-1416:47favilaI would only use the relation biding form if I needed its generality#2017-06-1416:48favilai.e., I had a set of different attributes I could match on, all attrs are on the same entity, and my searches could have one or many in any combination#2017-06-1416:49cap10morganyeah, makes sense. thanks!#2017-06-1416:49favilaso, my decision for one or the other is driven by interface more than anything#2017-06-1419:42souenzzoWhat (performance) concerns should I have when using ground?#2017-06-1419:44favilahow are you using it?#2017-06-1419:50souenzzoTwo uses:
- {:find [?name-e ?e] :where [[(ground (quote ?e)) ?name-e] ...(stuff)...]} - To know the "name" of each match
- {:find [?e] :where [[(ground 12313) ?e] ...]} - To force the match to occur in a specific entit(ies)
In general, these 2 uses at the same time. Queries, with ground at the beginning of :where, are taking ~100ms on an "empty" (very small), mem db.#2017-06-1419:51favilahow does that first one work?#2017-06-1419:53favilaground is supposed to take literals, so the value of ?name-e is either (quote ?e) (a list) or ?e (a symbol)#2017-06-1419:53favilaI am interested in how you use it in (stuff) because I can't think how that would work#2017-06-1419:54favilathe second case you could make ?e input to the query?#2017-06-1419:55favilabut that is a legit use and I don't think any other way of getting ?e and its value at the same time would be faster#2017-06-1420:00favila(d/q '[:find ?xv . :where [(ground (quote ?x)) ?xv]]) ;=> ?x @souenzzo#2017-06-1420:02souenzzo{:find [?e688269 ?e
?stage1688270 ?stage1
?stage2688271 ?stage2
?date-close688272 ?date-close
?date-open688273 ?date-open]
:where [[(ground 17592186046417) ?stage2]
[(ground #inst"2017-11-29T20:44:21.770-00:00") ?date-close] ;;**
[(ground (quote ?e)) ?e688269]
[(ground (quote ?stage1)) ?stage1688270]
[(ground (quote ?stage2)) ?stage2688271]
[(ground (quote ?date-close)) ?date-close688272]
[(ground (quote ?date-open)) ?date-open688273]
[?e :stage-container/calendar ?stage1]
[?e :stage-container/calendar ?stage2]
[?stage1 :calendar/stage-type :stage-type/a]
[?stage2 :calendar/stage-type :stage-type/b]
[?stage1 :calendar/close ?date-close]
[?stage2 :calendar/open ?date-open]]}
It's automatically generated....#2017-06-1420:04souenzzoOn ;;**, it's auto generated basead on other query. I dont know what results are entities-id, what are values...#2017-06-1420:05favilawell, it seems a little weird, but it does what you intend#2017-06-1420:06souenzzofavila: works! 😄
But it's slow 😞#2017-06-1420:06favilaso if you remove the grounds, and leave all else the same, it runs faster?#2017-06-1420:08favila'{:find [?e
?stage1
?stage2
?date-close
?date-open]
:where [[(ground 17592186046417) ?stage2]
[(ground #inst"2017-11-29T20:44:21.770-00:00") ?date-close]
[?e :stage-container/calendar ?stage1]
[?e :stage-container/calendar ?stage2]
[?stage1 :calendar/stage-type :stage-type/a]
[?stage2 :calendar/stage-type :stage-type/b]
[?stage1 :calendar/close ?date-close]
[?stage2 :calendar/open ?date-open]]} for example#2017-06-1420:09favilaIt's probably slow because of clause ordering, not grounds#2017-06-1420:09favilayou are looking at all ?e which have :stage-container/calendar attribute#2017-06-1420:10souenzzoRemoving "label grounds", (dotimes [i 1000]), goes from 3s to 2s.
A single query on the repl is 10x faster than "on integration test" 😕#2017-06-1420:11souenzzoI'm doing it on a d/with db-after ("on code")... But I think that should not be relevant..#2017-06-1420:12favilasomething is fishy with your tests#2017-06-1420:12favilayou are either redoing work, or your data makes the bad clause ordering more painful#2017-06-1420:13favilathis is closer to optimal:#2017-06-1420:13favila[?e :stage-container/calendar ?stage2]
[?stage2 :calendar/stage-type :stage-type/b]
[?e :stage-container/calendar ?stage1]
[?stage1 :calendar/stage-type :stage-type/a]
[?stage1 :calendar/close ?date-close]
[?stage2 :calendar/open ?date-open]#2017-06-1420:16favilaif date-close is even more selective, that might be even more optimal#2017-06-1420:16favilato put it higher#2017-06-1420:20favilawriting it by hand I would do this. (I might reorder the clauses if I knew more about the schema):'{:find [[(pull ?e [:db/id
{:stage-container/calendar
[:calendar/open
:calendar/close
{:calendar/stage-type [:db/ident]}]}]) ...]]
:in [$ ?stage2 ?date-close]
:where [[?stage2 :calendar/stage-type :stage-type/b]
[?stage1 :calendar/close ?date-close]
[?stage1 :calendar/stage-type :stage-type/a]
[?e :stage-container/calendar ?stage2]
[?e :stage-container/calendar ?stage1]]}#2017-06-1423:45weiI’m trying to write a query that checks whether a condition is sufficient for the last 4 out of 7 days. any suggestions on how to structure the datalog?#2017-06-1500:07souenzzowei:
'{:find [(count ?day) .]
:in [$ ?e]
:where [[?e :all/days ?day]
[?day :my/attr ?attr]
[(> ?attr 10)]
]}
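(To get "at least 4 of the last 7 days" from a sketch like that, one hedged option is to put a date lower bound inside the query and do the ≥ 4 check outside it; `:day/date`, `db`, and `entity-id` are invented names for illustration:)

```clojure
(let [week-ago (java.util.Date.
                 (- (System/currentTimeMillis)
                    (* 7 24 60 60 1000)))        ; 7 days ago
      n (d/q '[:find (count ?day) .
               :in $ ?e ?since
               :where [?e :all/days ?day]
                      [?day :day/date ?date]     ; hypothetical attr
                      [(.after ?date ?since)]
                      [?day :my/attr ?attr]
                      [(> ?attr 10)]]
             db entity-id week-ago)]
  ;; the scalar find returns nil when nothing matches
  (>= (or n 0) 4))
```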
#2017-06-1500:08souenzzohttp://docs.datomic.com/query.html#sec-5-17#2017-06-1500:59weithanks for the example. guess the problem was a bit underspecified but your link helped#2017-06-1423:50weiI have a workaround.. I can just run my condition query 7 times. but I’d appreciate any suggestions for a more efficient solution!#2017-06-1507:13isaacdatomic use cassandra, peer got this error
java.lang.IllegalStateException: Attempting to call unbound fn: #'datomic.kv-cassandra/kv-cassandra
#2017-06-1513:25chrisblom@isaac you need to add a dependency to you project to use cassandra#2017-06-1513:25chrisblomsee http://docs.datomic.com/storage.html#sec-7#2017-06-1513:26isaacyour mean this?#2017-06-1513:28chrisblomyes#2017-06-1513:32isaacClearly, It is a java lib. Has no Clojure functions#2017-06-1513:33isaacThe error told me — can not find a Clojure function, and actually I added this dependencies already.#2017-06-1513:40chrisblomThe error you got indicates the datomic.kv-cassandra ns could not be loaded#2017-06-1513:41chrisblomprobably because something related to cassandra is missing, but if you added the dep. already it should be able to load. Are you using the correct version?#2017-06-1513:43isaacNo, chrisblom. I check the lib again. As I said before. this is an pure java lib, there is no possible contains Clojure functions#2017-06-1513:43isaacBut I create this function myself. It’s worked#2017-06-1513:45chrisblomYes i know, but i'm guessing that datomic.kv-cassandra depends on the lib. being present and the right version. So if something is wrong, datomic.kv-cassandra cannot be loaded, and hence you get the Attempting to call unbound fn: #'datomic.kv-cassandra/kv-cassandra error#2017-06-1513:48marshall@isaac what version of Datomic are you using? what does your datomic depedency declaration look like?#2017-06-1513:51isaacchirsblom, thanks for you hint. 
this function defined at datomic peer lib.#2017-06-1513:51isaacI just need require the namespace#2017-06-1513:52isaac@marshall I use the version of 0.9.5561#2017-06-1513:52marshallDatomic Pro?#2017-06-1513:55isaacyes#2017-06-1513:55marshalli would expect that error if you were using datomic-free peer depedency#2017-06-1513:57isaacfree not support Cassandra#2017-06-1513:57marshallcorrect#2017-06-1520:01jfntnAdding com.amazonaws/aws-java-sdk-dynamodb to our project causes a lot of aws log messages to end up in our logback output, does anyone know how to exclude this?#2017-06-1520:29stuartsierra@jfntn Edit your logback configuration file, logback.xml to set a different level on the <logger> for the prefix of the AWS library.#2017-06-1609:58claudiuHi, Does datomic help with problems like multiregion. Ex: having a server in australia & one in europe. When a user from australia creates an "article" (transactor would be in europe), is it different vs mysql single master with multi-region read-replicas ?#2017-06-1614:33jaret@claudiu You can have peers from multiple regions performing reads but there is no cross-region support for writes with the transactor.#2017-06-1614:34jaretThose peers can also all write to the same transactor, but you cannot have multiple transactors in different regions working against the same DB#2017-06-1618:00eraserhdSomeone posted a script here for exporting a datomic db to txs that could be committed against an empty db. I don't remember who. Anyone remember this?#2017-06-1618:16devthhas anyone figured out a clean/sound way to specify ordering on a set of many cardinality refs? or are people just using :pos-like solutions as already discussed in google groups threads? (e.g. 
https://groups.google.com/forum/#!topic/datomic/uq9vBspB3zk)#2017-06-1619:47unbalancedhappy friday everyone 🙂#2017-06-1619:47unbalancedanyone know if there's an efficient way to do a "diff" between history versions of a db?#2017-06-1619:48bbqbaronsorry, dumb question: isn’t that just the atoms in the log between their t values?#2017-06-1619:53unbalanced@bbqbaron I couldn't tell if it's a dumb question or not 😅 the use case is I have a client side datascript DB that I want to sync with the server side datomic db... the very course-grained idea is that if the client-db historical # is behind the server-db historical # to do a diff and update the client-db with the missing data#2017-06-1619:54bbqbaronso, i haven’t used datascript, but assuming it’s a non-permanent datomic equivalent for the browser, then i’d probably just grab all the datoms in the server’s log greater than the client-side db’s basis-t (i think that’s the name) and transact them on the datascript db#2017-06-1619:54bbqbaronmomentarily ignoring questions of performance, etc#2017-06-1619:54unbalancedright#2017-06-1619:55unbalancedyeah that sounds like a fantastic approach, I'll look into that#2017-06-1619:55bbqbaron(i guess you can’t prove that the list of datoms is the actual diff, now that i think about it. it could contain cul-de-sacs of transactions that undo themselves)#2017-06-1619:55bbqbaron(such that sending them would be a waste)#2017-06-1619:55bbqbaronbut otherwise, great!#2017-06-1619:56unbalancedI'll work on that now and let you know how it goes 🙂#2017-06-1620:20souenzzoDatomic has some trigger to do something like:
(let [connection (d/connect db-uri)
      connection-with-middleware (d/on-transact connection (fn [db tx-data] ,,,))]
  (reset! conn connection-with-middleware))
Then every time anyone transacts on this conn, the tx-data will be processed by this fn.#2017-06-1620:26unbalancedah, yeah, transactor functions... haven't gotten into those yet 🙂 @bbqbaron this seems to be getting pretty close to what you were talking about, dug this up from the day of datomic materials (`https://github.com/Datomic/day-of-datomic/blob/master/tutorial/basis_t_and_log.clj`):
(def conn (d/connect uri))
(def db (d/db conn))
(def basis-t (d/basis-t db))
(def basis-tx (d/t->tx basis-t))
(def log (d/log conn))
(-> (d/tx-range log basis-tx (inc basis-tx))
    seq first :data count)
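For reference, souenzzo's wished-for `d/on-transact` above isn't part of the peer API, but the peer library does ship a mechanism for reacting to every transaction: `d/tx-report-queue`. A minimal sketch (requires a live connection; `db-uri` and `process-tx` are placeholders):

```clojure
(require '[datomic.api :as d])

(def conn (d/connect db-uri))            ; db-uri: placeholder

;; tx-report-queue returns a java.util.concurrent.BlockingQueue that
;; receives a report map for every transaction against this connection
(def tx-queue (d/tx-report-queue conn))

(future
  (loop []
    (let [{:keys [db-after tx-data]} (.take tx-queue)] ; blocks until next tx
      (process-tx db-after tx-data))                   ; hypothetical handler
    (recur)))

;; when done listening: (d/remove-tx-report-queue conn)
```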
#2017-06-1620:27bbqbaronthat does seem to get the log from a given tx range for a given db. i haven’t done this particular operation myself, but it definitely looks related#2017-06-1620:28unbalancedI'm debating whether or not this is worth the effort compared to traditional APIs#2017-06-1620:30unbalancedthe idea of having a client side db and saying goodbye to APIs seems great but... really it's just a lot of APIs going on behind the scenes#2017-06-1620:31unbalancedI'm actually kind of surprised datomic corp hasn't jumped in on this yet#2017-06-1620:33bbqbaronthat is, on the idea of having some kind of self-maintaining shared client/browser state that uses sockets or whatnot?#2017-06-1621:57unbalancedmhm. Or just the client side db at all#2017-06-1621:58unbalancedI really do think that once this tech matures it could change the game for front end#2017-06-1621:58unbalancedwhat is currently most attractive to me about the clojure is the ecosystem ... database, server, AND client all handled with the same language#2017-06-1621:59unbalancedif pallet works out it would be my devops solution too#2017-06-1621:59unbalancedI mean cognitect created clojurescript... why not bring their flagship product to cljs too?#2017-06-1622:00unbalancedI'm curious if the reason why they haven't has more to do with dev resource limitations or if there are strategic reasons why they choose not to#2017-06-1720:43val_waeselynck@goomba have you looked into Datsync? https://github.com/metasoarous/datsync#2017-06-1720:55unbalanced@val_waeselynck I have but I have to confess I'm not smart enough to get it to work#2017-06-1720:55unbalancedAlso unfortunately it's very much married to the view for some reason -- I'm only interested in the data transfer part of it#2017-06-1801:09eoliphantis there any way to retract without the value? 
I saw a discussion about this in the group from ’14 or so, but was wondering if anything’s been added since.#2017-06-1802:38favilaOnly one "intrinsic" way: assert new fact on cardinality-one e+a#2017-06-1802:39favilaOne indirect way: :db.fn/retractEntity tx fn will retract a whole tree of things on an entity without knowing their value#2017-06-1802:40favilaOtherwise you need to write a transaction function which reads the current value first so it can retract it#2017-06-1802:41favilaKeep in mind that retraction without a value is essentially saying "last writer wins". Be sure that's the semantics you want#2017-06-1802:41eoliphantok, thx, yeah in this case, it’s just a single attribute, will look at using a tx fn.#2017-06-1802:41eoliphantyeah that’s a good point as well#2017-06-1802:42favilaDatomic has set semantics on card-many, often you can reconcile different writes safely#2017-06-1802:42favila(Not always though)#2017-06-1802:43eoliphantyeah and in this case it’s actually a card-one#2017-06-1802:43eoliphantgoing to rethink it a bit#2017-06-1912:15degI'm trying to setup Datomic to use DynamoDB, per the instructions in http://docs.datomic.com/storage.html. Running ensure-transactor, I get an error ... is not authorized to perform: iam:GetUser on resource .... What permission do I need to set?#2017-06-1915:22val_waeselynck@baptiste-from-paris Chose promise chose dûe: the blog post for selling Datomic to business stakeholders https://medium.com/@val.vvalval/what-datomic-brings-to-businesses-e2238a568e1c#2017-06-1916:53timgilbertHey, how do people generally handle automated backups for datomic? I was running cron scripts on the transactor but they've been failing from OOM errors#2017-06-1916:54timgilbertSo I'm considering firing up lambda or something to do them, or just having a periodic instance that spins up, does the backup (to S3) and then shuts down#2017-06-1916:59kwladykaDo you know production systems based on free version of datomic? 
I want to do some small things when price of datomic is totally too high, but still i want to be familiar with datomic and use it, because it is the closest to Clojure. But maybe it doesn’t make sense, maybe i just should use postgresql.#2017-06-1917:57marshall@kwladyka Datomic Starter is licensed for production usage. The only ‘limit’ with Starter compared to a full paid license is that Starter only comes with 1 year of updates/maintenance#2017-06-1918:09spieden@timgilbert that’s roughly how we’re doing it. lambda running an ECS task though, as we’re all docker#2017-06-1918:09spieden@timgilbert i can share our dockerfile and shell script we use for it if you’re interested#2017-06-1918:10spiedendatomic tools like to be called from the CLI, so wrapping in lambda might be tricky#2017-06-1918:25timgilbertThat would be great @spieden, would like to see them. I had some trouble dockerizing the datomic transactor in the past, but maybe if I just need the CLI stuff it will be easier#2017-06-1918:27spiedenwe have the transactor dockerized and running stably on a t2.medium too if you want to see our stuff for that#2017-06-1918:27spiedenFROM java:8
ARG VERSION=0.9.5344
RUN wget --http-user=xxxxx --http-password=xxxxx -O datomic-pro.zip
RUN unzip /datomic-pro.zip
RUN apt-get update && apt-get install -y awscli jq && apt-get clean
COPY entrypoint.sh /
RUN chmod 755 /entrypoint.sh
CMD /entrypoint.sh
#2017-06-1918:27spieden#!/bin/bash -ex
DDB_TABLE=$(aws cloudformation --region ${REGION} describe-stack-resource --stack-name ${STACK_NAME} --logical-resource-id ${LOGICAL_RESOURCE_ID} | jq -r .StackResourceDetail.PhysicalResourceId)
/datomic-pro-*/bin/datomic backup-db datomic:ddb://${REGION}/${DDB_TABLE}/${DATOMIC_DB_NAME} s3://${BUCKET}/${STACK_NAME}/$(date +%s)
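On the incremental-backup question raised just after this script: `backup-db` resumes from whatever is already at the destination URI, so a fixed S3 path keeps backups incremental, while the `$(date +%s)` suffix above forces a full backup on every run. A sketch using the same environment variables (the `backup` path segment is illustrative):

```shell
# Same variables as above, but a fixed destination URI: datomic backup-db
# only uploads segments not already present at the target, so repeated
# runs against the same URI are incremental.
/datomic-pro-*/bin/datomic backup-db \
  "datomic:ddb://${REGION}/${DDB_TABLE}/${DATOMIC_DB_NAME}" \
  "s3://${BUCKET}/${STACK_NAME}/backup"
```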
#2017-06-1918:28timgilbertAwesome, thanks#2017-06-1918:29timgilbertDon't you lose the incremental backup stuff if you're using a timestamp-based S3 URL though?#2017-06-1918:30spiedenyes i can’t remember exactly why we did it that way#2017-06-1918:30spiedenour db is pretty small and we use a bucket policy to remove old backups#2017-06-1918:30spiedenmaybe we’re doing fulls because of the latter#2017-06-1918:33timgilbertOk, well thanks, I owe you a beer#2017-06-1918:33timgilbert🍻#2017-06-1918:34spiedencheers =)#2017-06-1918:35kwladyka@marshall but i think free version is not time limited?#2017-06-1918:42kwladykaah but it use only memory…#2017-06-1918:45kwladykaCan you confirm i understand it correctly?#2017-06-1918:49marshall@kwladyka Yes, Datomic Free only supports ‘mem’ and ‘free’ storages; Datomic Starter supports all storages as well as memcached, HA, PeerServer & Clients#2017-06-1919:14marshall@val_waeselynck Thanks for sharing the blog post - fantastic!#2017-06-1919:37kwladykaWhich exactly are “free” storages in datomic?#2017-06-1919:37marshall“free” is local storage used by the free transactor#2017-06-1919:38kwladykalocal storage mean file?#2017-06-1919:38marshallhttp://docs.datomic.com/storage.html#sec-4 << Similar to “dev” use in the Starter license#2017-06-1919:39marshallyes, it uses local disk storage#2017-06-1919:41kwladykaso anybody did something on production with free ver.?#2017-06-1919:44marshallYes, I believe we do have customers running in production with Free. Definitely with Starter#2017-06-2123:05alexisgallaghermarshall: yup. FYI, we run free datomic in production at http://topologyeyewear.com. This is mostly for operational simplicity at the moment, rather than b/c of economics. Our needs are light at the moment.#2017-06-1920:01eraserhdNot sure if this is a good place to ask, but I'm wondering if there's any prior art - papers, etc, on pull expressions. 
Or if there's a correspondence with some kind of CS or mathematical object.#2017-06-1920:01eraserhdThis question is weird because I'm not really even sure what I'm asking for, obvs.#2017-06-1920:02eraserhdBasically, curious about information lost when a graph gets dag-ified?#2017-06-1920:03val_waeselynck@eraserhd probably not very academic, but you should definitely look into GraphQL#2017-06-1920:03val_waeselynckhttp://graphql.org/learn/thinking-in-graphs/#2017-06-1920:06val_waeselynck@kwladyka happily running with Pro Starter, though I'm a bit embarassed to admit it 🙂#2017-06-1920:21spiedenalso on pro starter but have a line item for a real license in next year’s budget =)#2017-06-1922:11eraserhd(for reference, I found this, with references http://www3.cs.stonybrook.edu/~liu/papers/GraphQL-PADL06.pdf )#2017-06-1922:56csmdoes the client api support transacting database functions?#2017-06-1922:57csmI’m working on some edn files that are shared between apps that use the peer API, and some that use the client API, and my guess is that trying to read #db/fn literals is going to fail on the latter#2017-06-2001:39steveb8nI’m designing a schema where ordered to-many relationships are a requirement. I’ve seen the various blog posts etc and that seems to suggest either 1/ component wrapper entities with :position attr, 2/ referenced/child entities with :position attribute or 3/ referenced/child entities with linked-list attributes#2017-06-2001:40steveb8nDoes anybody have any experience or war stories from using these various techniques?#2017-06-2001:40steveb8nOr even better, is there anything on the Datomic roadmap that will help with this?#2017-06-2005:55robert-stuttaford@steveb8n Datomic can’t really help because it can’t know which of these is most suited to your use-case. few vs many items, items-have-one-parent-list vs items-have-many-parent-lists both have an impact on what you do. 
by providing for one case, Datomic could mislead people on thinking about these concerns#2017-06-2005:59val_waeselynck@steveb8n you should ask on Stackoverflow or the mailing list, and we'll try and give you the tradeoffs#2017-06-2006:00steveb8ngood point. I’m starting to play with this tomorrow so that helps#2017-06-2006:00steveb8nthanks both, I will post on SO#2017-06-2006:59steveb8n@val_waeselynck I’ve posted the question. Really appreciate any input. https://stackoverflow.com/questions/44645938/how-to-implemented-sorted-to-many-relationships-in-datomic#2017-06-2013:05shaneprinceHas anyone successfully created a leiningen project with dependencies for both the datomic client and ring? I've found this unresolved issue on SO relating to the exact same problem I have and can't figure out a way around it: https://stackoverflow.com/questions/43291069/lein-ring-server-headless-fails-when-including-datomic-dependency - appears to be caused by ring and the datomic client relying on different but incompatible versions of Jetty?#2017-06-2013:10karol.adamiec@shaneprince i had issues with some AWS libs conflicting and had to add ‘[com.amazonaws/aws-java-sdk-dynamodb “1.11.0”]’ by hand to resolve it.#2017-06-2013:10karol.adamiectry running lein deps :tree to find offensive lib#2017-06-2013:11karol.adamiecin your case try to include a version of jetty that will satisfy different libs#2017-06-2013:29shaneprinceThanks @karol.adamiec, running lein deps :tree shows the datomic client depends on jetty-util 9.3.7.v20160115 but from my investigation ring core depends on jetty-util 7.6.8.v20121106. I'm on ring 1.6.1 which I believe is the latest version. 
Is it even possible to include both dependencies that each require different versions of jetty?#2017-06-2013:44karol.adamiecdont know… maybe someone else can answer that ;(#2017-06-2015:29shaneprinceFor anyone else that comes across this issue: from what I understand it looks like ring-core is using an older jetty adapter for legacy purposes. I've found https://github.com/sunng87/ring-jetty9-adapter a worthy replacement so far.#2017-06-2017:46matthaveneris the ability to compare datomic :db.type/instant with < documented anywhere?#2017-06-2018:06robert-stuttaford@matthavener clj-time has comparison functions e.g. https://clj-time.github.io/clj-time/doc/clj-time.core.html#var-latest#2017-06-2018:08matthavener@robert-stuttaford i’m more asking if the ability to use a predicate like [(< ?inst1 ?inst2)] is documented#2017-06-2018:46souenzzo@matthavener and I think that is really odd that inside datalog we can compare #inst, and on clojure not.#2017-06-2018:56favilaall datomic valueTypes must be comparable#2017-06-2018:57favilaotherwise sorted index would be impossible#2017-06-2018:58favila(except bytes compare on something other than contents? not sure what)#2017-06-2020:09souenzzoso it's more a clojure "problem", that cant compare #inst, and datomic has a workaround to make it work internally ?#2017-06-2020:15favilamost likely they have a custom comparator for whatever sorted-datom-set data structure they made#2017-06-2020:16favilathe < <= >= > datalog ops are not clojure functions, they are magic#2017-06-2020:55souenzzoI'm trying to
`[(my-ns/function-that-returns-function ~'?foo) ~bar]
[(~bar ~'?var1 ~'?var2) ~'?foobar]
But I'm getting
com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalArgumentException: Parameter declaration ?var2 should be a vector
Am I missing something?#2017-06-2020:59favilawhat does ~bar expand to?#2017-06-2021:02souenzzo~'bar both*#2017-06-2021:03souenzzocom.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: Unable to resolve symbol: ?f in this context
This is the real exception. The other exception was because I'm using fn(on ~'bar) keyword and it's trying to evaluate a function 😜#2017-06-2021:06souenzzo`[(my-ns/function-that-returns-function ~'?foo) ~'bar]
[(~'bar ~'?var1 ~'?var2) ~'?foobar]
=> Unable to resolve symbol: bar in this context
`[((my-ns/function-that-returns-function ~'?foo) ~'?var1 ~'?var2) ~'?foobar]
=> Unable to resolve symbol: ?foo in this context#2017-06-2021:12souenzzoFunctions aren't first class inside datalog?#2017-06-2021:26souenzzo`[(my-ns/function-that-returns-function ~'?foo) ~'?bar]
[(.invoke ^clojure.lang.Var ~'?bar ~'?var1 ~'?var2) ~'?foobar]
Works for me 😄#2017-06-2119:27stijnis there a way to specify multiple cassandra hosts in the transactor properties?#2017-06-2119:40rnewmanHello folks. I have a question about aggregates. Datomic automatically groups by all non-aggregate find elements. Given :foo/name and :foo/age, how would you write a query to return the name and age of the oldest person in the store? [:find ?name (max ?age) :in $ :where [?x :foo/name ?name] [?x :foo/age ?age]] isn't it, AIUI — that's the oldest age of each distinct name.#2017-06-2119:56favilaI think add :with ?x#2017-06-2119:56favilaor include ?x in the find#2017-06-2119:56favila@rnewman http://docs.datomic.com/query.html#sec-5-17-1#2017-06-2120:16ddellacostafolks, I've run out of ideas for how to retract a specific value in a component attribute#2017-06-2120:16ddellacostais there a way to :db/retract a specific value in a component? For example#2017-06-2120:17ddellacosta;; I just want to remove "bar"
{:db/id 1234, :some/component ["foo" "bar"]}
#2017-06-2120:18ddellacostaI can get ahold of the entity at :db/id of course, and transact on that, and it's easy to add values to that component, like
(d/transact conn [[:db/add 1234 :some/component "whatever"]])#2017-06-2120:18ddellacostabut I'm totally stumped at what the magic incantation is to remove a particular value from :some/component#2017-06-2120:19ddellacostaany ideas would be most appreciated#2017-06-2120:20ddellacostaseems like I'm missing something basic here#2017-06-2120:21danielstocktonDatascript or cardinality/many attr?#2017-06-2120:23ddellacostaoh you were asking me? sorry#2017-06-2120:23ddellacostayeah a component with cardinality/many#2017-06-2120:23ddellacostawhich I guess I thought was what you called that ("component")#2017-06-2120:25rnewman@favila: :with is equivalent to including the column without projection, and including ?x actually makes the problem worse: it now groups on both ?x and ?name#2017-06-2120:26ddellacostaoh, huh, guess it was just (d/transact conn [[:db/retract 1234 :some/component "bar"]]) ? I thought I tried that#2017-06-2120:26ddellacostabut, anyways, sorry for the noise--thanks anyways!#2017-06-2120:26ddellacostamaybe this will help someone else…#2017-06-2120:41timgilbert@shaneprince: late response, but I've found it to be a best practice to include :pedantic? :warn in my project.clj and then ruthlessly exclude anything that gives me warnings until there are none (having been bitten by order-of-class-loading bugs in the past)#2017-06-2120:49uwoIf we create user for our backend storage that has readonly permissions and connect with that user in the connection string, will that peer have readonly access, or will it just fall over and break?#2017-06-2213:01favila@U09QBCNBY you can do this. Peers don't write, so you an give them read only access to storage and all is well. We do this with Sql storage: peers get SELECT-only access, and transactor gets only SELECT INSERT UPDATE DELETE#2017-06-2208:46stijnHow are you all structuring your code for database functions? 
I assume: have small interfaces in datomic and put the rest of the code on the classpath of the transactor?#2017-06-2209:01shaneprinceCheers @timgilbert! Very useful 🙂#2017-06-2209:38robert-stuttaford@stijn i really dislike that approach because it means regular transactor downtime (every time you have code to deploy)#2017-06-2209:51stijn@robert-stuttaford good point.#2017-06-2211:15pesterhazytransactors should be as disposable and hands-off as possible#2017-06-2211:15pesterhazye.g. using cognitect's AMIs, where it's hard to pull in extra dependencies#2017-06-2213:02uwo@favila excellent. thanks!#2017-06-2213:13marshall@uwo Note that those peers are still able to write to the Datomic instance (i.e. submit transactions), as that work is handled through peer->transactor->storage#2017-06-2213:22uwoah, of course. so, that configuration would be somewhat pointless, then? @marshall#2017-06-2213:28marshallit probably isn’t a terrible idea just in the sense that your peer user creds are less open#2017-06-2213:28marshallbut it wouldn’t result in a ‘read only’ peer#2017-06-2213:40favilaAh I misunderstood what you were after @uwo. With respect to storage, all peers are read-only. But there's no way to turn off their ability to submit txs to the transactor#2017-06-2213:41uwoyeah. I wasn’t thinking through clearly.#2017-06-2213:43marshallincidentally, I believe ‘read only peer’ would be appropriate as a feature request in our portal - I’d suggest you add it there if it’s not already in there#2017-06-2213:58uwothanks, request made.#2017-06-2217:01haroldIt looks to me like com.datomic/datomic-pro "0.9.5561" depends on org.apache.httpcomponents/httpclient "4.5.2" which depends on commons-codec "1.9".
But com.datomic/datomic-pro "0.9.5561" also depends on commons-codec "1.5" directly, overriding httpcomponents' dependency and forcing the older 1.5 version of commons-codec.
Is this right? Is it good? What is the reasoning? If the code in datomic-pro is compatible with commons-codec 1.9, then I think removing the direct dependency on 1.5 is a good idea.#2017-06-2220:01kjothenAre there any performance improvements to be gained by using enums for attributes that are in some sense bounded? For example, I have a lot of data that needs to be retrieved by day (in the bi-temporal sense, not the t). Indexing millions of entities by day seems wasteful if the universal set of days is bounded to a few years, e.g. :as-of/+20170622 is not too cumbersome to use. Basically, will I get the same query performance swapping out indexes for enums?#2017-06-2220:13timgilbert@harold: I've just added a [commons-codec "1.10"] top-level dependency to my project.clj in order to pin the dependency, personally (I'm extra-paranoid about class-loading conflicts)#2017-06-2220:15harold@timgilbert - that sounds dicey too, if httpcomponents depends on 1.9 specifics the change you suggest could break it (same for datomic and commons-codec 1.5, and datomic's explicit direct dependency suggests to me that it does depend on 1.5 specifics).#2017-06-2220:17timgilbertI agree, but to me that diceyness is a lot more palatable than the diceyness of "include two copies of these classes in the classpath and let the fates decide which version the JVM will load on each invocation"#2017-06-2220:19timgilbertFWIW, from the changelog of commons-codec it looks like largely just bugfixes in more recent versions#2017-06-2220:20harold@timgilbert - If you use lein it's not as bad as that. On a related note, have you seen https://github.com/walmartlabs/vizdeps ?#2017-06-2220:20alexmillerlein or mvn will pick one, not include both (but that one might not be the one you want)#2017-06-2220:21timgilbertOh right. Well, build-time uncertainty is better than run-time uncertainty, but not by that much#2017-06-2220:25harold@alexmiller - any thoughts on this conflict? 
Is there a better place to report/post it?#2017-06-2220:25alexmillerif you have a Datomic support account, those channels are fine#2017-06-2220:25alexmilleror @marshall here#2017-06-2220:26alexmiller(I’m not it :)#2017-06-2220:26harold@alexmiller - Oh, didn't think of that. I don't interact much with the datomic licensing stuff we do. Thank you.#2017-06-2223:24jaret@harold we'll look into that. I'll get back to you with findings#2017-06-2316:17haroldjaret: Sounds good. I'm not on this slack much, feel free to email me at <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> if I can be of service.#2017-06-2303:01danielcomptonWe're looking at Datomic data modelling and have seen a bunch of different libraries that provide abstractions over raw Datomic schema syntax. Some are light syntax, some are full-on ORMs. This is our first Datomic project, so I'm leaning towards minimal layers on top of Datomic if any, but wondered what people with experience would recommend?
https://github.com/vvvvalvalval/datofu
https://github.com/Provisdom/spectomic
https://github.com/nrakochy/schmad
https://github.com/vvvvalvalval/onodrim#2017-06-2304:36akjetmaThis might be heavier than what you’re looking for, but zcaudate’s Spirit library seems really cool, thought i’d throw it up for consideration: http://docs.caudate.me/spirit/spirit-datomic.html#2017-06-2305:31robert-stuttaford@danielcompton i highly recommend you start with the Datomic api directly. those libs, as good as they are, were oriented towards whatever those people were building. using one of them shades your thinking with that direction, and also teaches you that API instead of Datomic’s own. after nearly 5 years with it, and several attempts to use abstractions like this, i can tell you that the Datomic API is incredibly well designed 🙂#2017-06-2305:33robert-stuttafordrecent changes in Datomic also removes the helper data that the transactor used to need when declaring schema, so it really is nice, clean, declarative data now. like Rich says, don’t be scared of a little typing, and you’ll be fine!#2017-06-2305:33danielcomptonThanks, that's what I suspected but good to hear it confirmed#2017-06-2305:35danielcomptonThe one thing I think we might use is https://github.com/cognitect-labs/vase/blob/master/docs/design.md#schema-tx-reader-literal#2017-06-2305:35danielcomptonas that seems to cut down on the verbosity of the schemas#2017-06-2306:08robert-stuttafordquite honestly, schema verbosity is a non-issue. it lives in your database. the code that puts it there is transitory. once done, you should be looking at your database for schema, not your code#2017-06-2312:46val_waeselynckrobert-stuttaford: I have a different experience, verbosity has been an issue for us, and writing the schema in code has its benefits. On contrary, I've found it viable to think of the database as "catching up" with the code, not the other way around. 
My point being it's not obvious there's one right way to think about this 🙂#2017-06-2312:56uwoour team generates schema from code and, in my opinion, it has led to a number of pitfalls. (Our shortcomings could simply be attributed to us as a team of developers of course) I’ve noticed that we’ve ignored a lot of best practices when our model code doesn’t bend the right direction, like we’ll end up changing the semantics of attributes, essentially “reusing” a name. Recently our team started thinking about migrations as rollback or roll forward, which just completely blows my mind. I think if they weren’t looking at code that they edit in place, they’d be forced to have the right mental model.#2017-06-2313:08val_waeselynckyou do need the discipline to avoid breaking changes with the "code as the source of truth" approach, but I think it's not too contraining with Datomic.#2017-06-2313:08uwoyeah, I think our issues are probably lack of discipline or injured sensibilities#2017-06-2317:09akjetmahow do you guys handle synchronizing schema across local/staging/production environments?#2017-07-0820:23val_waeselynck@U0518FWJV periodic backups from production, plus forking locally#2017-07-0916:06robert-stuttafordyep#2017-06-2306:08robert-stuttafordthis becomes very apparent when you have multiple apps on a single database 🙂#2017-06-2306:12robert-stuttaford@danielcompton you may find the Datomic console or https://github.com/Cognician/datomic-doc#search useful for inspecting your database over time#2017-06-2306:14danielcomptonNice I just saw that#2017-06-2306:14danielcomptonLooks handy #2017-06-2306:25kwladykaWhat are downsides of embedded Datomic Datalog (local storage) vs SQL storage or others? What are differences?#2017-06-2306:27robert-stuttafordfor what environment, @kwladyka ? dev? 
production?#2017-06-2306:27kwladykaproduction#2017-06-2306:30robert-stuttafordso, dev + transactor-local are really not for production settings: they co-locate the data with the transactor process. this loses you some of the benefits of Datomic’s deconstructed design; to scale either, you need to scale both — but having said that, it’s really not scalable for storage.#2017-06-2306:31robert-stuttafordbut, having said all that, i know some folks use it anyway. it all depends what your needs are 🙂#2017-06-2306:33kwladykanow i mainly still consider datomic free vs postgresql, i want do things like for example generate labels for label printer for orders in e-store etc. These are too small things to pay high price for datomic at that moment.#2017-06-2306:33robert-stuttafordwhy not start with free, and switch when you feel it’s time to?#2017-06-2306:34kwladykaMaybe, just still thinking about it.#2017-06-2306:34kwladykaBut the main problem is it wouldn’t be SaaS probably, so each client has to have own database and it makes it very costly.#2017-06-2306:35robert-stuttafordah, you’re speaking of distribution. you’ll want to check the EULA for that.#2017-06-2306:36kwladykahmm is it work in different way than normally?#2017-06-2306:37robert-stuttaford@marshall will be able to tell you#2017-06-2309:25danielcomptonI think datomic free is fine for distribution#2017-06-2309:36karol.adamiechttps://my.datomic.com/downloads/free “Datomic Free is suitable for open-source projects requiring distribution, but is limited to 2 simultaneous peers and transactor-local storage only.”#2017-06-2312:38val_waeselynck@danielcompton @robert-stuttaford I'm biased as the author of course, but with Datofu I've really attempted to be close to the metal without imposing any opinions or additional concept - the only things Datofu attempts to achieve regarding schema declaration are readability and concision without losing any expressive power. 
Having said that, I also recommend you start without any library and consider adding it when you feel things are becoming tedious.#2017-06-2312:39val_waeselynck@danielcompton Also, please don't use Onodrim yet, it's not even alpha-ready - apologies if I miscommunicated that in the README. I may have something usable by the end of this Summer.#2017-06-2312:56naomarik@val_waeselynck starting bare was going to be my approach. Your recent blog post was helpful in redirecting my attention to Datomic and the new less restrictive licensing options makes it very appealing to use now. It would be cool to to see some information on why you would want to use something on top of bare Datomic and the problems they solve so newcomers have something to reference when they start hitting some friction. That and some best practices, perhaps in the form of another blog post hint hint 🙂#2017-06-2312:58val_waeselynck@naomarik you mean like this one? 🙂 https://vvvvalvalval.github.io/posts/2016-07-24-datomic-web-app-a-practical-guide.html#2017-06-2312:58naomarikyes 😉#2017-06-2312:58naomarikthank you!#2017-06-2312:59val_waeselynckMaybe we need to improve the discoverability of such community resources. @marshall, maybe a new section on the Datomic website?#2017-06-2313:00val_waeselynckI'm constantly discovering awesome blog posts and gists that have been around for months#2017-06-2313:15uwo@robert-stuttaford what do you like to use to visualize or reckon about installed schema?#2017-06-2313:35robert-stuttafordnice, @val_waeselynck :+1:#2017-06-2313:35robert-stuttaford@uwo i built a thing over December to help with that https://github.com/Cognician/datomic-doc#search , which helps, but mostly i query the database at the repl 🙂#2017-06-2313:53uwo@robert-stuttaford thanks!#2017-06-2316:11val_waeselynck@matthewdaniel I cannot speak as to whether the code you posted should work, but I think if you just put the lookup refs in the values (instead of a map with :db/id) it will work. 
You can also just write {:vendor/number "..."} and rely on upsert behaviour#2017-06-2316:21matthewdanielisn't :db/ident [:vendor/number "123] a lookup ref? the docs don't seem to have another option#2017-06-2316:21matthewdanielhttp://docs.datomic.com/identity.html#lookup-refs#2017-06-2316:21matthewdanieli tried removing the db/id stuff and just writing vendor/number '..." but that doesn't work either#2017-06-2316:44matthewdanielis there something i can do to get better error messages than "cognitect.anomalies "Server Error"#2017-06-2316:53matthewdanielwell i think i had 2 problems but my initial thing will work. I needed to put my transactions into a vector and I forgot to make vendor/number unique. Thanks for the pointers @val_waeselynck#2017-06-2317:55hmaurerHi! I am completely new to Datomic and I have two quick questions: (1) I just read http://docs.datomic.com/backup.html but from what I understand Datomic can work on top of various datastores (postgres, dynamodb, etc). Would a backup of the underlying datastore suffice?#2017-06-2317:56hmaurer(2) if I run into issues using datomic, how responsive is the community? (I guess this question will answer itself 😄)#2017-06-2318:07ghadi@hmaurer for support -- there are support contracts and there's also a lot of free resources in http://www.datomic.com/support.html#2017-06-2318:50timgilbert@hmaurer: 1. I've found the datomic backups to be super straightforward and painless so I've never tried just backing up the underlying storage (also the datomic backups don't take down your storage while they run in case that's an issue). 2. I've found the community to be responsive and friendly, if a little small. 
In addition to this channel, the mailing list is also worth checking: https://groups.google.com/forum/#!forum/datomic#2017-06-2319:01eriktjacobsenHave you had to restore from backup after catastrophic failure @timgilbert ?#2017-06-2319:03timgilbertI think we did so one time and it went ok, I wasn't super involved with it though#2017-06-2319:04timgilbertIt involves taking down all the transactors, which is unfortunate but makes some sense. The nice thing about having all the history in there is that the need to do it seems to come up a lot less (in my experience so far)#2017-06-2320:02hmaurer@timgilbert thank you!#2017-06-2320:03hmaurerI am trying to set up Datomic Starter locally (following the tutorial) but getting this error: https://gist.github.com/hmaurer/b9d303055c7a9b7c4e4827b5b79e2acc#2017-06-2320:28matthewdanielhad the same issue myself, you have to make sure you create the database before you connect the peer server to it#2017-06-2320:28matthewdanieli found this helpful http://docs.datomic.com/dev-setup.html#sec-2#2017-06-2320:29hmaurer@matthewdaniel are you sure you had the same issue? this error seems to be about class loading, which create the database shouldn’t impact?#2017-06-2320:30hmaurerI had this error in the first step of the getting started guide#2017-06-2320:30hmaureri won’t be able to create a databse if the server isn’t running I think?#2017-06-2320:30hmaurerhttp://docs.datomic.com/getting-started/connect-to-a-database.html#2017-06-2320:30matthewdanielah, your right, sorry#2017-06-2320:31hmaurerno worries, thanks for trying to help#2017-06-2320:31hmaurerdoes it work for you? 
can you run that first line from the tutorial?#2017-06-2320:31matthewdanieli start with this#2017-06-2320:31matthewdaniel bin\transactor.cmd c:\Users\mmeisberger\dev-transactor-template.properties#2017-06-2320:32matthewdaniel^ slightly different for windows here#2017-06-2320:32matthewdanielthen in a new shell i run the stuff to create the db in the repl then run the peer#2017-06-2320:34matthewdanielyeah, i mostly followed the dev-setup link i guess#2017-06-2320:36hmaurerTrying this now. It’s strange, the instructions are quite different from the getting started guide#2017-06-2320:36hmaurera bit confusing#2017-06-2320:37matthewdanielyeah, same here.#2017-06-2320:38hmaurer@matthewdaniel Everything works until I get to the “http://docs.datomic.com/dev-setup.html#peer-server” section#2017-06-2320:38hmaurerwhich gives the same command as in the “getting started” guide#2017-06-2320:39hmaurerand spits out the same error#2017-06-2320:39hmaurer“Exception in thread “main” java.io.FileNotFoundException: Could not locate datomic/peer_server__init.class or datomic/peer_server.clj on classpath. Please check that namespaces with dashes use underscores in the Clojure file name.”#2017-06-2320:42hmaurerOooh my bad#2017-06-2320:42hmaurerI didn’t realise there was a distinction between “datomic free” and “datomic starter”#2017-06-2320:42hmaurerc.f. 
https://groups.google.com/forum/#!topic/datomic/P69c3q__5gw#2017-06-2320:44hmaurerNot to excuse my own silliness, but maybe the tutorial should be a bit clearer#2017-06-2320:44hmaurerThe “Starter” edition is free, so after creating my account I went to the “Downloads” menu and downloaded the latest version#2017-06-2320:45hmaurerthe file was called datomic-free but it didn’t attract my attention and it’s what I was expecting#2017-06-2320:45hmaurerWhen in fact I needed Datomic Pro with a Starter key#2017-06-2320:46hmaurerQuoting the google groups thread:#2017-06-2320:46hmaurer> The primary use case for Free is not new users, but rather people who need a license that includes redistribution rights.#2017-06-2320:46hmaurerIn that case the “Download” section on the dashboard should probably indicate that#2017-06-2321:20matthewdanielcool. glad to hear you got it sorted out#2017-06-2323:31hmaurerHi! Another quick question: are “asOf” queries expensive? For example, can I use this feature to let the user view an old state of a document and all its associations?#2017-06-2323:32hmaurerAt an arbitrary point in time#2017-06-2323:32hmaurerWill those queries be expensive if different users try to see their documents at different points in time?#2017-06-2402:29eoliphantHi, how do folks handle database migration, walking schema changes through environments, etc with Datomic? I’ve hand-rolled something using conformity, but was just wondering if there’s a “Datomic way”. After reading this: http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html, Item 3 speaks to your DB being the source of truth, etc. But it’s not entirely clear in terms of taking that truth and walking it forward.
Would it be something like dumping the schema entities, versioning those then applying to subsequent environments?#2017-06-2407:00val_waeselynck@eoliphant my spin on it: https://github.com/vvvvalvalval/datofu#managing-data-schema-evolutions not claiming that is the right and only way#2017-06-2413:40eoliphantah thanks @val_waeselynck checking it out#2017-06-2414:24daemianmackeoliphant: FWIW conformity has always been more than sufficient for my personal/client projects.#2017-06-2414:26daemianmacki wouldn't interpret Item 3 of that blogpost to suggest adding machinery around versioning datoms. my read of it is more an observation that, working with a large/shared database, not to assume my local software description of the database is necessarily a trustworthy representation of the actual state of that database, and to defensively assume drift.#2017-06-2419:52eoliphantSure @daemianmack I can see that too, but it did seem, to me at least, a little confusing around what that means from a best practice, etc perspective#2017-06-2419:59eoliphantI have a question about isComponent, well I think I know the answer already based on my experimentation lol, but I just want to confirm. It appears to be ‘recursively greedy’ in that if I do a retractEntity, an isComponent-tagged attribute will have its refs retracted, and refs that those refs reference, etc?
I’m playing around with a dynamic schema/forms deal, so, say, I define field types, then I define a form type. The form has a ref to a collection of field defs, that include some form specific stuff (am I optional, etc) and a ref back to the field type (I’m a string, etc), so my :form/fields was tagged as a component and after a delete of a form type the entire graph had been retracted. Is there any way to short-circuit this, such that, say, subsequent non-component refs are left alone?#2017-06-2420:25favilaAn entity reachable via an isComponent attr should only ever be reachable via one datom assertion#2017-06-2420:26favilaSounds like in your example the ref from field to field type should not be via an isComponent attr#2017-06-2420:32favilaPractical rules for use of isComponent (but not enforced by datomic): an entity reachable via an isComponent attr is wholly owned by its parent, and unreachable except through the parent, and the parent has only one reference to it#2017-06-2501:33eoliphantyeah, I need to recheck it, but I’m pretty certain my field -> fieldtype wasn’t; that’s what threw me. But will check it again#2017-06-2420:00daemianmackeoliphant: i agree, it's a bit philosophical, and it's not clear what practical steps might be in sync#2017-06-2501:43eoliphantHi, i’m making some changes to an app, and I’ve added a ‘gid’ attribute that I’m setting to a squuid. do people generally set it to the datomic string or uuid type? I’ve used the uuid type and queries, entity refs, etc don’t seem to match against raw uuid strings.#2017-06-2501:46favilaWe use the uuid type. You are correct, datomic does not match different types, you need to make a real uuid object in lookup refs, queries etc#2017-06-2501:48eoliphantOk, so just java’s UUID.fromString(), etc?#2017-06-2501:50favilaYes. There is a reader literal too, #uuid#2017-06-2502:04eoliphantah right.
But not sure how to ‘apply’ to a variable, I just added something like (let [myuuid (UUID/fromString myid)]).. Not clear on how to use #uuid in this context#2017-06-2502:07favilaThat's only for literals. If you want a string in a var to be uuid-ed, do what you are doing#2017-06-2502:12eoliphantok gotcha, that’s what I thought, but didn’t know if there was some clojure trickery that I was unaware of lol. thanks#2017-06-2502:17eoliphantBTW, I’m using a variant of your dynamic schema approach from the mailing list. Very cool, gets very meta, very quickly lol, but Datomic makes everything so much easier. Working on replacing a homegrown forms system that smushes EAV and forms behavior into an unholy marriage of Oracle tables, PL-SQL and Java. It does a lot of stuff, but I’ve gotten a fair representation of their core model worked out with a couple days’ thinking and an afternoon of coding#2017-06-2510:35hmaurerHi! The Datomic transactor requires ports 4335 and 4336 to be accessible in addition to port 4334? I was trying to run it inside a docker container and had a connection timeout error until I exposed & forwarded ports 4335-4336#2017-06-2510:40hmaurerGot my answer: https://groups.google.com/forum/#!topic/datomic/wBRZNyHm03o#2017-06-2511:12hmaurerHi! Is there anyone around who could help me debug a connection issue to a transactor? I am trying to run a datomic-pro-starter transactor on a server (in a docker container) but getting a connection timeout error when I try to connect to it from my machine#2017-06-2511:12hmaurerI exposed ports 4334, 4335 and 4336 (properly I think) and even set alt-host to the public IP assigned to the container, but no luck yet#2017-06-2511:13hmaurerjava.util.concurrent.ExecutionException: org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.SocketTimeoutException: connect timed out: 209.177.90.158:4335" [90067-171]
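For the docker setup being debugged here, the relevant knobs are the transactor's `host`, `alt-host`, and `port` properties; with dev storage the transactor also uses the two ports above the base port (4335/4336 for the embedded H2 storage). A sketch of a transactor.properties fragment, reusing the IP from the error above (values are illustrative, not a known-good config):

```properties
# transactor.properties fragment (dev protocol / embedded H2 storage).
# host is the bind address inside the container; alt-host is the address
# external peers should try; dev storage also uses port+1 and port+2.
protocol=dev
host=0.0.0.0
alt-host=209.177.90.158
port=4334
```

with the container started along the lines of `docker run -p 4334-4336:4334-4336 …` so all three ports are forwarded.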
#2017-06-2512:57hmaurerProblem solved!#2017-06-2515:23hmaurerHello again. Quick question: could anyone tell me how performant “asOf” queries are if the timestamp may vary widely between queries? (e.g. building a web app where users can see the state of their document at any point in time). I suspect it might cause performance issues if the indices are not built for this type of use#2017-06-2516:16robert-stuttaford@hmaurer this is precisely the sort of thing Datomic is built to be good at. go forth 🙂#2017-06-2517:08hmaurer@robert-stuttaford thanks! just wanted to check. There was the possibility that datomic was built to keep a full history for auditing purposes but not for frequent querying at various points in time#2017-06-2518:27robert-stuttafordwell, it’s that too 🙂 we certainly use it both ways!#2017-06-2610:03mishagreetings! Again about datoms limit: does it include history datoms as well, or is it just about current db "snapshot" size?#2017-06-2610:08robert-stuttafordit’s about how big the roots of the tree get, @misha, which means history too. eventually they’ll get so big that peer ram size can’t contain it + space for queries#2017-06-2610:13mishathanks, Robert. Is there anything to read about working around this?
I have a 2-fold use case:
1. classic system of record, e.g. brands/food/nutrition info.
2. consumption log of the above
trying to assess how to deal with the 2 keeping it connected with the 1 at the same time.#2017-06-2610:14robert-stuttaford@jaret or @marshall or @stuarthalloway may be able to direct you to some literature. all i have is anecdotes from here 🙂#2017-06-2610:16mishaDoes that limit include all partitions within the same db? or is it per partition? or even per db within a "server" (transactor?)?#2017-06-2610:17robert-stuttafordall partitions (partitions merely control overall sort order). if you have two 10bn datom databases, you’ll need twice the ram as with one 10bn datom database - in all peers, of which the transactor is one#2017-06-2610:19robert-stuttafordbecause the peer is considered part of the database — i.e. it’s ‘inside’, unlike a client, which is ‘outside’#2017-06-2610:20mishasame for databases, right?#2017-06-2610:22mishaSo if I'd want to keep sys of record in one db, and log in another to "save the datoms" – it would need to be 2 different transactors, not 2 dbs served by single transactor, right?#2017-06-2610:41robert-stuttafordyes — but if you have a peer that connects to both databases, it’ll need capacity for both#2017-06-2610:51mishaoh, that's true harold#2017-06-2614:33GalauxHi everyone!#2017-06-2614:35GalauxI am adding Datomic to one of our applications but so far my :find+`pull` queries run in an average of 20ms which is not exactly what I was expecting#2017-06-2614:37GalauxThe application has 7 CPUs, 8.5G RAM (that's for the whole app, not just for the peer obviously)#2017-06-2614:38GalauxWe use Cassandra for storage but everything looks normal here: queries on the table for Datomic are fast#2017-06-2614:38GalauxWe haven't configured a Memcached as storage does not seem to be the bottleneck#2017-06-2614:40GalauxI have had a look at the queries to datomic: a simple :find is around 1ms but any pull on the result adds from 15ms to 20ms#2017-06-2614:40GalauxI guess the :find manages to use one of the indexes, which is expected, but I guess the pull part does
not#2017-06-2614:41GalauxLast thing is: metrics for the transactor show that queries hit cache at quite a good rate 75%+#2017-06-2614:41Galaux(I just can't get cache metrics for the peer unfortunately)#2017-06-2614:43hmaurer@gax I started reading about Datomic yesterday so I can’t really help you, but in a talk I watched I heard Datomic attempts to cache data that is “close to your query”. The speaker mentioned “pull” as an example, and said relations marked as “components” would be fetched as well (iirc)
(defn find-model [db subject-type subject-id optimization]
(let [query '[:find ?e .
:in $ [?type ?id ?optim]
:where [?e :model/subject-id ?id]
[?e :model/subject-type ?type]
[?e :model/optimization ?optim]
eid (d/q query db [subject-type subject-id optimization])]
(when eid
(let [pull-res (d/pull db "[*]" eid)
entity (resolve-model-enums db pull-res)]
entity))]))
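As pasted, the query vector and the let bindings are missing closing brackets (which comes up just below in the thread); a balanced version of the same function, keeping the poster's own resolve-model-enums helper and using the vector pattern '[*] for the pull, would look like:

```clojure
(defn find-model [db subject-type subject-id optimization]
  (let [query '[:find ?e .
                :in $ [?type ?id ?optim]
                :where
                [?e :model/subject-id ?id]
                [?e :model/subject-type ?type]
                [?e :model/optimization ?optim]]
        eid   (d/q query db [subject-type subject-id optimization])]
    (when eid
      (resolve-model-enums db (d/pull db '[*] eid)))))
```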
#2017-06-2614:50GalauxPretty standard :find it seems…#2017-06-2614:51hmaurer@danielstockton how so?#2017-06-2614:51GalauxThe dataset must be quite small also#2017-06-2614:51Galauxeven though metrics say I have 20M datoms#2017-06-2614:52Galaux(was a bit worried by this "[*]" as it sounds like a SELECT * … 🙂 )#2017-06-2614:52marshall@gax is that a direct copy paste? you don’t seem to have a close bracket on your query#2017-06-2614:52Galauxoups… no: edited it but the query runs in PROD#2017-06-2614:53Galauxmust have deleted something…#2017-06-2614:53danielstockton@hmaurer It sounded like you were conflating the two ideas. Caching data 'that is close to your query' just means that whole segments are cached (which contain 1000s of datoms, possibly more than your query requires).#2017-06-2614:53Galaux(ah yes: read about the segments being cached)#2017-06-2614:53danielstocktonIt's always on and shouldn't get in the way of performance.#2017-06-2614:53marshallwhy not do the pull in the :find ?
Also, are you sure the part taking a while is the pull and not the query or the resolve-model-enums call?#2017-06-2614:54hmaurer@danielstockton Oh. I don’t know, I was just quoting (possibly misquoting) a talk which mentioned that Datomic tried its best to cache data that you might want to access after running your query, and iirc he mentioned components being part of that heuristic#2017-06-2614:54Galaux@marshall yup: I timed the :find part separately from the pull and the pull is really the culprit here#2017-06-2614:55Galauxand I have also tried including the pull inside the :find. On my machine – yes I know this is not perfect – it takes up to 30ms#2017-06-2614:55marshalli would want to look at the cache metrics on the peer#2017-06-2614:55marshallhow many attributes are you pulling?#2017-06-2614:56GalauxI pull a "model" that has 6 attributes but the 6th is a many that has usually 150 children with say 5 to 10 attributes each#2017-06-2614:57marshallcomponent?#2017-06-2614:57GalauxYes#2017-06-2614:57marshallso you’re pulling 1500 values#2017-06-2614:57GalauxThe children are components#2017-06-2614:57Galauxhes#2017-06-2614:57Galauxyes#2017-06-2614:57marshallin 20ms#2017-06-2614:57Galauxyes…#2017-06-2614:58marshallnormally a “simple” pull is very very fast#2017-06-2614:58marshallbut having pull+component entities will take a bit longer#2017-06-2614:59marshallsince it will have to traverse those links and pull their attributes#2017-06-2614:59Galauxso I thought maybe I could directly :find the children#2017-06-2615:00GalauxI am currently implementing a version where I :find the parent model and a second :find on the returned children ids#2017-06-2615:00GalauxI expect this to use indexes#2017-06-2615:01marshalleverything in Datomic uses indexes#2017-06-2615:02GalauxWell… not exactly *everything* if I understand correctly …?#2017-06-2615:03GalauxFor instance the AVET indexes only index :index and :unique datoms#2017-06-2615:04marshallEAVT and AEVT contain all datoms;
VAET is reference types only
But any query or pull is going to use an index#2017-06-2615:04Galauxok#2017-06-2615:05Galaux(was about to use q-explain to check that)#2017-06-2615:07marshallin a real sense, Datomic is a set of indexes#2017-06-2615:07marshallthere’s no way to get something out of it other than to use an index#2017-06-2615:07Galaux@marshall what is the cache useful for then?#2017-06-2615:07marshallindex segments are immutable#2017-06-2615:08marshallso every ‘chunk’ of the index is a value that can be cached#2017-06-2615:09hmaurerQuestion unrelated to the current discussion: is it possible to get multiple Datomic Pro “Starter” licenses? (for multiple systems)#2017-06-2615:10Galaux@marshall do I understand it correctly that the cache is actually tried against before indexes are?#2017-06-2615:10marshall@gax parts of the index are in the cache; the query engine knows where in the index ‘tree’ it needs to look. it first looks for those segments in the local cache, then memcached, then finally storage#2017-06-2615:11marshall@hmaurer Yes that is possible. Alternatively we have Enterprise licensing options that may make sense for use cases with multiple system requirements#2017-06-2615:12marshall@hmaurer are they related systems (i.e. sharding) or totally independent?#2017-06-2615:12hmaurer@marshall Thanks for the quick reply! In my case we are considering using Datomic at my company (in which case we would get a Pro license), but I have a few non-profit projects on the side that could make use of Datomic but don’t have the budget for a 5k/year license#2017-06-2615:12hmaurerWhich is why I was asking 🙂#2017-06-2615:13Galaux@marshall ok thanks for that clarification!#2017-06-2615:13marshallGotcha. Yes, you can certainly get a Starter license for a non-profit side project.
As far as multiple individual licenses, it might be best to have a call to discuss - you can shoot me an email and we can set something up#2017-06-2615:13hmaurer@marshall related question: let’s say I want to write an infrastructure test which spins up a Datomic system, runs some tests against it, and tears it down. I assume I can use the same license as the “prod” system?#2017-06-2615:14marshallYes, all licenses provide unlimited testing/staging/dev instances#2017-06-2615:14hmaurerThanks! I’ll definitely shoot you an email at some point!#2017-06-2615:14hmaurerOk, awesome#2017-06-2615:14marshall👍#2017-06-2615:16hmaurer@marshall Also since you are around, I asked a question earlier about backups. I know Datomic has a utility to store backups incrementally to S3 or similar, but I was wondering if backing up the underlying storage would also work
to provide users with a feature to see the state of a document at any point in the past#2017-06-2615:19hmaureror show them a changelog for that document#2017-06-2615:19marshallyep; depending on how ‘deep’ your history is they may or may not be more expensive than “current”, but generally the performance is quite good and lots of customers use it for exactly that purpose#2017-06-2615:20hmaurerAwesome, thank you! 🙂#2017-06-2615:20Galaux@marshall earlier you mentioned you would be curious to have a look at the cache metrics on the peer. Given what we said, would you still be looking towards the cache?#2017-06-2615:20marshallhttp://docs.datomic.com/filters.html#sec-6-1#2017-06-2615:21GalauxI came up with some code using callback to send metrics from the peers so I have some metrics but unfortunately nothing about the cache – though I have this metric for the transactor.#2017-06-2615:22marshall@gax it might be somewhat illustrative, but those numbers indicate ~ 0.01msec per value retrieved#2017-06-2615:22hmaurer@marshall Thanks. Last but not least: would you recommend the Client API or the Peer API for a new application? From what I understand the Client API cannot do cross-database (or cross-points-in-time) joins, which seems like a big feature-loss, but I am not quite sure since I haven’t used it yet#2017-06-2615:22marshall@hmaurer Depends on your needs; Your system overall could use both, mixing and matching as necessary : http://docs.datomic.com/clients-and-peers.html#2017-06-2615:24hmaurer@marshall I see. I should read the doc fully before bothering you again. Thanks!#2017-06-2615:30marshallno problem 🙂#2017-06-2615:30Galaux@marshall do you think performing a :find directly on the children would speed up the query?#2017-06-2615:33marshallit might; worth a test certainly.
The other option to try would be to get all the children’s entity IDs directly in query then doing a pull-many on them#2017-06-2615:33marshallnot sure whether that’ll be faster or not#2017-06-2615:34GalauxOk! I will try this.#2017-06-2615:35GalauxAlso : as said, I came up with some code to get metrics out of the peer but it won't show this ObjectCache metric that looks veeeery interesting: is it normal the peer won't show this metric? Should it?#2017-06-2615:42marshallit should yes#2017-06-2615:43robert-stuttaford@gax you mentioned a q-explain earlier. what did you mean by this?#2017-06-2615:43marshallalso, are you using memcached?#2017-06-2615:44Galaux@marshall no as Cassandra did not seem to be the bottleneck#2017-06-2615:44Galaux@robert-stuttaford I was referring to this project : https://github.com/dwhjames/datomic-q-explain#2017-06-2615:45robert-stuttafordneat, hadn’t seen that before, thanks!#2017-06-2615:45marshall@gax hard to assess whether it’s storage latency and/or whether memcached would help without some metrics (i.e. storageGetMsec numbers, cache numbers)#2017-06-2615:45robert-stuttafordhmm. abandoned#2017-06-2615:46Galaux@marshall would these metrics from the transactor be ok?#2017-06-2615:56marshall@gax they wouldn’t provide info about the query/pull of interest. all that work happens on the peer#2017-06-2615:57Galauxyep#2017-06-2615:57Galauxwill try to fix my reporter then#2017-06-2616:13hmaurer@marshall Hi! Another question… I read that it is highly recommended not to make “breaking” changes in the schema or change the semantics of an attribute. However, it seems you cannot completely exclude the possiblity that a poor design decision was made in the past and all the facts of some type X in the database history need to be updated to match a new schema. In those rare cases, is it doable?#2017-06-2616:14hmaurerE.g. 
change the type of an attribute and migrate all the data accordingly, etc#2017-06-2616:15hmaurerRoughly speaking this would mean traverse the whole log and make arbitrary edits to any transaction#2017-06-2616:15hmaurerand have Datomic update all its indices accordingly#2017-06-2616:16hmaurerActually now that I think of it, this could be done by re-building a new database and copying everything over, setting “txInstant” manually to keep the timeline#2017-06-2616:18hmaurerJust wondering if there is a less “nuclear” option for those scenarios#2017-06-2616:31marshallhave you read this: http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html#2017-06-2616:32hmaurer(p.s. I read and understood http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html ; only talking about rare scenarios here)#2017-06-2616:32hmaurerah, you were faster than me#2017-06-2616:32marshallyes, you can definitely rebuild a database if necessary#2017-06-2616:32hmaurer😄#2017-06-2616:32marshallthe way you indicate#2017-06-2616:32marshalla ‘less drastic’ option would be something like you suggest with creating a new attribute (of a different type say) and migrating the data over#2017-06-2616:32marshalland Datomic does allow you to rename attributes if necessary for that sort of thing#2017-06-2616:33hmaurerOk, great, thank you#2017-06-2616:33hmaurerYet another question…: is it possible to follow the transaction log from a remote service?
For example keep an Elasticsearch instance in sync#2017-06-2616:34hmaurer@marshall ^#2017-06-2616:36hmaurerAh, actually I guess this can be built on top of the Log API: http://docs.datomic.com/log.html#2017-06-2616:37marshall@hmaurer http://blog.datomic.com/2013/10/the-transaction-report-queue.html#2017-06-2616:37marshallcombination of the log and the tx report queue would be what you want#2017-06-2616:38hmaurer@marshall I am thoroughly impressed#2017-06-2616:38robert-stuttaford@hmaurer http://www.stuttaford.me/2016/01/15/how-cognician-uses-onyx/#2017-06-2616:39robert-stuttafordhidden in this post is the fact that we use the tx-report-queue to loosely couple our web services to our worker services. no need for a separate queue at all#2017-06-2616:39robert-stuttafordeverything just talks to / watches storage#2017-06-2616:39hmaurer@robert-stuttaford this is awesome. Sounds like event sourcing without the pain of implementing an event-sourced system from scratch#2017-06-2616:40robert-stuttafordyep 🙂#2017-06-2616:40robert-stuttafordthat’s certainly how we use it#2017-06-2616:41robert-stuttafordjust the other day i had to find out why something went missing. turns out someone wrote an overzealous hand-written transaction and cut 4000ish important datoms from ‘now’. had a ‘revert’ transaction transacted in 10 minutes, via remote repl#2017-06-2616:42robert-stuttafordimmutability is the gift that keeps on giving. it’s actually astonishing how it’s such a given that we should all use source control, when source is actually mostly a liability. but most folks use a forget-by-default database for their data, which is undeniably an asset. no one talks about Big Source, after all 🙂#2017-06-2616:46hmaurer@robert-stuttaford Yes, immutability is (mostly) a blessing to work with.
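The tx-report-queue coupling described above can be sketched as follows (peer API; watch-transactions and the handler are hypothetical names, not the poster's actual code):

```clojure
;; a minimal consumer loop over the transaction report queue (peer API).
;; each report is a map with :db-before, :db-after and :tx-data for one transaction.
(require '[datomic.api :as d])

(defn watch-transactions [conn handle-report]
  (let [queue (d/tx-report-queue conn)] ; a java.util.concurrent.BlockingQueue
    (future
      (loop []
        (handle-report (.take queue))   ; blocks until the next transaction lands
        (recur)))))

;; e.g. (watch-transactions conn #(println "datoms:" (:tx-data %)))
```

When the consumer is done, d/remove-tx-report-queue detaches the queue from the connection.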
I can’t complain so far 🙂#2017-06-2618:12spieden@robert-stuttaford either forget-by-default, or try to implement a broken subset of immutability via log tables at great cost!#2017-06-2620:22souenzzoThere is something like :db.type/edn (propose, workaround, future plans... )?
I have two key cases (store graph and queries) and I don't know exactly how to handle...#2017-06-2620:25robert-stuttaford@souenzzo : use string + pr-str / clojure.edn/read-string. works just fine#2017-06-2620:28souenzzoYeah, I'm planning on using this. But wanted to know if there were more people with the same problem and if there is any expectation of having an edn-like type in datomic#2017-06-2620:30favilathey've promised custom types from the beginning, and fressian is extensible enough to support it, but nothing has materialized#2017-06-2620:30favilastring or binary blob is how we handle it now, or for smaller types encode them into existing types somehow#2017-06-2623:14hmaurer> publicly display or communicate the results of internal performance testing or other benchmarking or performance evaluation of the Software;#2017-06-2623:14hmaurerMay I ask why this is a clause in the T&Cs?#2017-06-2702:36matthavenerhmaurer: (imho) perf tests for DBs are notoriously bad at providing real information because so many factors are at play in DB usage#2017-06-2703:08souenzzohow to query [:find ?u :where [?u :user/games ?g] [?g :game/finish? true]], but I want just users that have more than 5 games finished?#2017-06-2704:29robert-stuttaford@souenzzo have you tried adding clauses [(count ?g) ?gc] [(< 5 ?gc)] ?#2017-06-2713:46souenzzorobert-stuttaford: [:find ?u ?gc :where [?u :user/games ?g] [?g :game/finish? true] [(count ?g) ?gc]]
=>
UnsupportedOperationException count not supported on this type: Long clojure.lang.RT.countFrom (RT.java:646)#2017-06-2706:13isaacWhat would cause :db.error/transactor-unavailable?#2017-06-2709:46Galaux@marshall it seems my performance issue is fixed by this rewrite of my :find method. Instead of a :find+`pull` I do :find+`:find`#2017-06-2709:47GalauxI have something around 2ms median for min queries#2017-06-2709:59hmaurer@matthewdaniel I get that and completely agree, but does it warrant forbidding people from doing it in the T&Cs?#2017-06-2712:05matthewdaniel@hmaurer i think you mean @matthavener#2017-06-2712:06hmaurerOops, yes, sorry#2017-06-2714:10matthavenershrug, it’s much easier to forbid someone from doing X than to forbid them from doing X “badly” 😄#2017-06-2714:30dm3what’s the best way to get the datoms transacted in the transaction I have a ref to within a db function?#2017-06-2714:32dm3should I get the database as-of (:db/txInstant tx) and query from there?#2017-06-2715:06marshall@dm3 the transaction id is the entity ID of that reified transaction
You can use the log API (specifically the tx-data function) with that entity ID to get all the datoms: http://docs.datomic.com/log.html#sec-2#2017-06-2715:06marshallsee the last example there ^#2017-06-2715:08marshallif you prefer not to use query, you can also do it from the log API directly#2017-06-2715:08marshallhttp://docs.datomic.com/clojure/index.html#datomic.api/tx-range#2017-06-2812:18souenzzomarshall: How to get all transactions between 2 db's?
My scenario:
(let [db-before @(d/sync conn)
      result    (response-for pedestal-integration-test)
      db-after  @(d/sync conn)]
  )
#2017-06-2812:20marshallyou could do a tx-range using the basis t from the two databases (before & after)#2017-06-2812:26souenzzo(let [_ @(d/transact conn [{:user/name "should not see"}])
      before @(d/sync conn)
      _ @(d/transact conn [{:user/name "should see"}])
      after @(d/sync conn)]
  (d/tx-range (d/log conn) (d/next-t before) (d/next-t after)))
Using basis-t, I get just the unwanted tx.
With next-t works!#2017-06-2812:27marshall👍#2017-06-2716:44hmaurerI assume it is a common/best practice not to rely on Datomic’s internal entity ID’s and instead add an “uuid” attribute?#2017-06-2717:23dm3@marshall for the log I have to access connection from db tx fn, right?#2017-06-2717:37marshall@dm3 not sure i follow the question#2017-06-2717:38marshall@hmaurer Correct#2017-06-2718:37hmaurer@marshall sorry, I’ve got another question. How do peers obtain the connection details to the underlying storage? From the transactor?#2017-06-2718:40marshallthe URI is the address of storage; they get connection details to the active transactor from storage#2017-06-2718:41marshallhttp://docs.datomic.com/deployment.html#sec-1#2017-06-2718:42hmaurerOh I see. I had only used the “dev” storage, thus the confusion#2017-06-2803:48yusupHi, is it possible to rollback Datomic to certain timestamp?#2017-06-2810:02danielstockton@yusup I believe there are two options, using log API to get assertions/retractions since t:
1) Add/retract respectively (keep history of the rollback)
2) Excise them http://docs.datomic.com/excision.html (history is gone)#2017-06-2812:21marshall@yusup it depends a little bit on what you mean by ‘rollback’#2017-06-2812:21marshallAs @danielstockton said, you can issue a set of compensating transactions#2017-06-2812:21marshallbut the basis-t of the resulting database will not be a ‘rollback’#2017-06-2812:23marshallt will continue to increase in either scenario. Unless you have a backup of the db at the specific t you want there isn’t a way to revert the entirety of a db (including t values)#2017-06-2812:23marshallunless you’re just wanting to ‘revert’ for reading (i.e. “show me what the database looked like yesterday”)#2017-06-2812:24marshallif that’s what you’re looking for you can use asOf to get a value of the database exactly as it was at a given time in the past#2017-06-2813:30val_waeselynck@yusup note that you won't be able to use db.with() on the asOf db
go back in time and wonder “what would have happened if those operations were applied instead”#2017-06-2814:30hmaurerit might be an edge-case though…#2017-06-2814:31val_waeselynckI couldn't agree more#2017-06-2814:33val_waeselynckMaybe @marshall can give us a technical reason if there is one?#2017-06-2821:20danielcomptonWe're really interested in using that kind of feature for post-mortem debugging, to exactly reproduce errors#2017-06-2821:32spieden@danielcompton you could do something similar on a transient basis using datomic.api/with#2017-06-2821:33danielcompton@spieden I can query the db as of that time, but I can't issue transactions on the db as of that time#2017-06-2821:34spieden@danielcompton using d/with i believe you could, in a way. you’d have to pass the dbval returned around to other code for it to see your changes#2017-06-2821:35danielcomptonhttps://clojurians.slack.com/archives/C03RZMDSH/p1498656635400811#2017-06-2821:35danielcomptonI don't think that's possible at the moment#2017-06-2821:35spiedenah ok, sorry didn’t see that#2017-06-2821:37danielcomptonnp, we're pretty keen on this kind of feature 🙂#2017-06-2821:41spiedeni used d/with for the first time in production code recently. handy for accessing all the data related to an entity you’ll transact later if everything works out#2017-06-2821:50val_waeselynckIndeed, it's not possible. It will not throw an exception, but it will return an incorrect result.#2017-06-2821:51val_waeselynck(I've tried)#2017-06-2822:23timgilbertIs there a way to see what the result of calling a transactor function would be without actually calling the function? 
I have a retractEntity that's acting a little squirrelly and I want to check my math#2017-06-2822:27favilad/invoke#2017-06-2822:27favilahttp://docs.datomic.com/clojure/#datomic.api/invoke#2017-06-2822:27favila(d/invoke the-db :db.fn/retractEntity the-db the-eid)#2017-06-2822:28favilathis will return a vector of :db/retracts#2017-06-2822:28spiedenwow cool#2017-06-2822:28favilayou can also use d/with, but this is less direct#2017-06-2822:30timgilbertAwesome, thanks @favila#2017-06-2912:11linussHey guys. I want to query some data that is filtered by a value, but ONLY if that value is not nil. I currently have:
'[:find ?label ?value
:where
[?i :functiontype/functiontype ?label]
[?i :functiontype/functiontypeid ?value]
[?i :functiontype/blocked false]
[_ :offer/function ?fid]
[?j :functions/functionid ?fid]
[?j :functions/functiontypeid ?ftid]
[(or (nil? ?fid) (= ?ftid ?value))]]
but that doesn't seem to work, and also doesn't seem very idiomatic... Could anybody suggest an alternative?#2017-06-2912:26hmaurer@linuss maybe you could use a disjunction? http://blog.datomic.com/2015/01/datalog-enhancements.html#2017-06-2913:09uwo@linuss there will never be a nil value in datomic. Instead use missing? http://docs.datomic.com/query.html#sec-5-12-5#2017-06-2913:13linussAh, right!#2017-06-2913:26linussHm, even when I replace the nil? with missing?, the query returns all :functiontypes#2017-06-2913:28favilaquery predicates only do query var replacement at the first level#2017-06-2913:30favila[(or (nil? ?fid) (= ?ftid ?value))] is always true, because those inner items of or are literal lists#2017-06-2913:30favilawhat you want to do cannot be done without changing the query#2017-06-2913:31favilaor at least without branching#2017-06-2913:32favilayou could have a top-level or (or a rule) with two branches, one of which asserts value is nil, and this is the nil-case query, another one starts with [?i :functiontype/functiontypeid ?value] and is the non-nil case#2017-06-2913:32favilaPersonally I almost always use cond-> to create different queries dynamically#2017-06-2913:33linussOkay, thanks! I'll try that 🙂#2017-06-2913:34hmaurer@favila what do you mean by cond->?#2017-06-2913:34linussBut the or in datomic doesn't short-circuit, right?#2017-06-2913:34linussSo, it will return datoms that match either set of predicates#2017-06-2913:35linussso I'll always get all the functiontypes#2017-06-2913:35favilayes, but one branch will assert that ?value is nil, and the other one will match on nil, and both of these can't be true at the same time#2017-06-2913:35linussah! clever!#2017-06-2913:38favila@hmaurer eg (defn query-with-optional-filters [name]
  {:query {:find '[?e]
           :in (cond-> '[$]
                 name (conj '?name))
           :where
           (cond-> '[[?e :foo :bar]]
             name (conj '[?e :name ?name]))}})#2017-06-2913:41hmaurer@favila oh, so you build up the query dynamically based on the presence of a filter. Neat#2017-06-2913:48linussOkay, so this is my new query:
[:find ?label ?value
 :where
 (or (and [?i :functiontypes/functiontype ?label]
          [?i :functiontypes/functiontypeid ?value]
          [(missing $ ?e :offer/function)])
     (and [_ offer/function ?fid]
          [?i :function/functionid ?fid]
          [?i :function/functiontypeid ?value]
          [?j :functiontypes/functiontypeid ?value]
          [?j :functiontypes/functiontype ?label]))]
but this won't run, saying Cannot parse clause, expected (data-pattern | pred-expr | fn-expr | rule-expr | not-clause | not-join-clause | or-clause | or-join-clause)#2017-06-2913:50linussoh#2017-06-2913:50linussno and clause#2017-06-2914:14hmaurer@linuss isn’t a vector the way to model a and clause?#2017-06-2914:18linussThat results in Join variables should not be empty#2017-06-2918:49hmaurer@marshall did you answer the earlier question on why d/with cannot be used with d/as-of? If you did, sorry, I missed it#2017-06-3008:39linuss#2017-06-3013:19favila[(missing? $ _ :offer/function ?fid)] works? I'm surprised#2017-06-3013:25favilawhat is :offer/function?#2017-06-3013:27favila:functions/functiontypeid and :functiontype/functiontypeid are meant to be joined by value?#2017-06-3013:29favilaI think you may want (not [_ :offer/function ?fid]) instead of missing?, but it's not clear to me what the relationships are and what you want do to#2017-06-3013:30favilamissing? has only 3 arguments: db, entity, and attribute#2017-06-3013:30favilausing it with 4 makes no sense#2017-06-3013:30favilaThat's why I wouldn't expect this to work at all#2017-06-3015:15timgilbertHey all, I recall seeing an open-source project mentioned here that would save and restore a datomic database in memory (for use in writing tests). I've forgotten the name of it and my google-fu is failing me, can anyone help me?#2017-06-3015:17timgilbertAha, I think I found it: https://github.com/vvvvalvalval/datomock#2017-06-3020:44matanHi, I am considering using datomic for the first time (hurray!)#2017-06-3020:46matanCan someone please remind me whether using datomic, my code needs to have a say about how data is stored, or whether it only needs to suffice with pushing and querying data (datalog?) letting datomic figure how to "best" store the data on the given storage backend being used?#2017-06-3021:25hmaurer@matan I am new to Datomic too but I’ll try to answer your question. 
Your code does not “have a say in how the data is stored” on a low level. When you commit a transaction, the Datomic transactor will store that transaction in the backend storage, and periodically update the indexes. There are, if I recall correctly, 4 types of indexes, each of which optimises querying the data in specific ways.#2017-06-3021:25hmaurerAs far as I understand, your data is stored in the exact same way on all backend storages: as binary blobs of “segments” of the index#2017-06-3021:26hmaurerSo Datomic will not try to take advantage of Postgresql’s or DynamoDB’s query features. Instead it will store its data in raw, compressed binary blobs. When you do a query, it will fetch the relevant blobs from the index and execute your query locally, in the peer#2017-06-3021:27hmaurerNow, you can define some attributes as “indexed”. You will have to read on exactly what this does as I am not familiar with it, but roughly speaking I think it indicates that those attributes should be stored in the AVET index (one of the 4 indexes), and access to those attributes will therefore have better performance#2017-06-3021:27hmaurerThat’s about it in terms of control on how the data is stored, as far as I know#2017-06-3021:28hmaurerCheck out http://docs.datomic.com/indexes.html#2017-06-3021:29hmaurerAlso http://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2017-06-3021:58favila@matan Backend storage is (from datomic's POV) nearly a pure key+value store of string keys to binary blobs. The store must support atomic updates. No store-specific features are used beyond that.#2017-07-0108:05dimovichhello all#2017-07-0108:05dimovichhow can I get the eids of newly added entities?#2017-07-0109:58val_waeselynck@dimovich if said entities have a unique attribute, you can typically use the t of the corresponding datom#2017-07-0109:58val_waeselynckIt's hard to help you more without additional context#2017-07-0112:22dimovich@val_waeselynck I'll look for the moment at tempids. 
Thanks for the info#2017-07-0112:25dimovichbtw, I have an {:id "someid"
:items [1 2 3 4]}#2017-07-0112:26dimovichif I transact an entity with same id, the new items will be conjoined to the original :items#2017-07-0112:26dimovichhow can I replace them instead?#2017-07-0113:23val_waeselynck@dimovich You need a transaction function for that. See for instance the one in Datofu
https://github.com/vvvvalvalval/datofu#resetting-to-many-relationships#2017-07-0115:35dimovich@val_waeselynck thanks!#2017-07-0115:35dimovichisn't there a built-in solution?#2017-07-0115:50val_waeselynckNo, but when you think about it, there isnt either for sql databases.#2017-07-0118:12dimovichbtw, whenever I make a transaction, if I do a query afterwards (against the new db value) I don't get the new entities#2017-07-0118:12dimovichI get them if I repeat the query#2017-07-0118:12dimovichmaybe it's related to caching...#2017-07-0118:35dimovichor memoization#2017-07-0118:47souenzzo@dimovich db is immutable. Once you get a db (let [db (d/db conn)] ), it will NEVER change. If you want a new db, you can make a new (d/db conn), or in case of a transaction, a transaction returns a future, that you can deref and get the db-after and db-before.#2017-07-0118:49souenzzo(defn handler
[req]
(let [db (get req :db)
conn (get req :conn)
params (get req :params)
tx-data (do-stuff db params)
tx (d/transact conn tx-data)
db-after (get @tx :db-after)]
(my-query-stuff db-after params)))
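Circling back to dimovich's earlier question about getting the eids of newly added entities: besides :db-after, the dereffed transaction result also carries a :tempids map. A minimal sketch, assuming the classic peer API (datomic.api as d), an existing conn, and a hypothetical :user/name attribute:

```clojure
(require '[datomic.api :as d])

;; Create an explicit tempid so we can resolve it after the transaction.
(let [tempid (d/tempid :db.part/user)
      ;; :user/name is a hypothetical attribute; any schema attribute works
      tx     @(d/transact conn [{:db/id tempid :user/name "someone"}])]
  ;; :tempids maps the tempids in tx-data to the real entity ids
  (d/resolve-tempid (:db-after tx) (:tempids tx) tempid))
```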
#2017-07-0118:57souenzzoAbout the "to-many" problem: [{:id "foo" :items [1 2]}] is sugared syntax for [[:db/add [:id "foo"] :items 1] [:db/add [:id "foo"] :items 2]]. If you want to remove a value from a to-many attribute, you have to do so explicitly.
In my app there is a generic (update-to-many db x) that walks across all maps, identifies them, checks the to-many attributes, compares with the database, and generates the retracts...#2017-07-0121:48dimovich@souenzzo thanks!#2017-07-0203:45souenzzoIs there an add-listener example?
I can't figure out what each of the arguments [fut f executor] is:
- What is fut?
- What is the signature of f?
- executor? No mention of this word in any other Datomic doc..
- Where do I put conn?#2017-07-0213:54favila@souenzzo these are Java terms, and this fn is not specific to datomic. fut is a java future (e.g. from d/transact-async), f gets one arg -- the value of the future. Executor is a java executor; it provides the threadpool that will execute the function f when it is called#2017-07-0213:59souenzzoThis is not what I was thinking.
There is some trigger on datomic to inspect each committed change on a connection?#2017-07-0214:04favilatx-report-queue @souenzzo #2017-07-0315:53djjolicoeuris d/transact-async bound by datomic.txTimeoutMsec as d/transact is?#2017-07-0315:56potetm@djjolicoeur No.#2017-07-0315:56potetmtransact-async has no timeout.#2017-07-0315:56djjolicoeurgotcha.#2017-07-0315:59djjolicoeurthanks @potetm#2017-07-0315:59potetmnp 🙂#2017-07-0318:24hmaurerHi! Quick question on datomic data modelling: how would you store a large sequence of items (e.g. a news feed) in such a way that the N latest items can be retrieved efficiently?#2017-07-0318:31jeff.terrell@hmaurer - I think this kind of thing might not be easy in Datomic. (I hope somebody more knowledgeable about Datomic corrects me though.)#2017-07-0318:32jeff.terrellOr, wait…unless you mean the N most-recently-transacted items. In which case maybe the Log API would be what you need?#2017-07-0318:32jeff.terrellDisclaimer: speaking from semi-ignorance here…#2017-07-0318:33hmaurer@jeff.terrell haha no problem, that’s what I do most of the time. The N most-recently transacted items could work but I was looking for a more general solution. From what I understand about Datomic I think you are right though, it might not be possible without building an additional service…#2017-07-0318:35jeff.terrellOne possibility would be to subscribe to the stream of transactions (I think this is possible in Datomic), then maintain a data structure with the information you need.#2017-07-0318:39robert-stuttaford@hmaurer @jeff.terrell the disadvantage with walking the log backwards is you have to keep reading chunks of N txes until you’ve filled however many items you need. in a database with lots of other stuff, this could mean traversing a lot of non-news-item txes. 
you could use the linked-list approach, where the latest item points to the next-latest item which points to the next-next-latest item which …, which then gives you a pretty straightforward path to discovery.#2017-07-0318:49jeff.terrellrobert-stuttaford: Ah, yeah…that's familiar, now that you say it. Thanks for clarifying that. simple_smile#2017-07-0318:41robert-stuttafordi believe Datomic’s feature request thing has a request for traversing indexes in reverse, which would give you a very clear and direct path: traverse the “primary key” attribute in reverse.#2017-07-0318:50jeff.terrellHere's a link to that feature request (might have to log in through http://my.datomic.com first, I dunno):
https://receptive.io/app/#/case/17927#2017-07-0319:05hmaurer@robert-stuttaford hi and thank you! Is there an efficient Datalog query to get the top N items from a linked list?#2017-07-0319:06hmaurer@robert-stuttaford Also, is there a way for a service to follow the transaction log while ensuring that it doesn’t “miss” any transaction? You mentioned using Onyx so you might know about this#2017-07-0400:06devthis there a simple mechanism to ping a transactor from a peer to make sure it's still up? i want to catch :db.error/connection-released The connection has been released errors that sometimes occur when we recreate the database in a non-prod environment#2017-07-0400:06devthmaybe sync?#2017-07-0400:10marshallThe latest release includes a transactor health check #2017-07-0400:11marshallYou could also use something like sync#2017-07-0400:11devthoh, awesome#2017-07-0400:12marshallhttp://docs.datomic.com/transactor.html#sec-1-1#2017-07-0400:12marshall@devth ^^#2017-07-0400:12devththanks!#2017-07-0404:30danielcomptonWhen naming attributes, should cardinality many attributes be plurally named? e.g. for cardinality many, should I use :client/address, or :client/addresses?#2017-07-0407:36pesterhazySingular I think, you transact [client :client/address address]#2017-07-0407:37pesterhazyI'd check the musicbrainz example for inspiration#2017-07-0409:00pesterhazymusicbrainz uses plural: https://github.com/Datomic/mbrainz-sample/blob/master/schema.edn#L264 so ignore me. Maybe it's just a Datomic version of the old "singular vs plural relations" debate in rdbms#2017-07-0410:49henrikIs it possible/advisable to use with at scale to do optimistic “writes”? Can this be reconciled with reality, when the actual result of the write comes knocking on the door?#2017-07-0411:10hmaurer@henrik newbie to Datomic here, can’t answer your question. 
However I am curious: in which cases would you feel the need to do optimistic writes?#2017-07-0411:13henrik@hmaurer It’s entirely hypothetical at this point, but the thought struck me. Some people have expressed fears about write performance, so I thought, what if writes are assumed to have succeeded until proven otherwise?#2017-07-0411:14henrikOr does that lead down into the black pit of eventual consistency?#2017-07-0411:15hmaurer@henrik it seems to me that if your write-load is so high that Datomic can’t process them quickly enough and you need optimistic writes to maintain decent user-facing performance, you are in trouble. (disclaimer: uninformed opinion)#2017-07-0411:15hmaurerThere might be cases in which you would want optimistic writes (e.g. let a user preview the effects of a change, or testing), but write performance doesn’t seem to be one of them#2017-07-0411:17hmaurerI am curious as to whether what you suggest is doable though#2017-07-0411:18hmaurerIf you really wanted to I think you could implement it yourself with the log api (http://docs.datomic.com/log.html)#2017-07-0411:19hmaurerNot sure though…#2017-07-0411:20henrikIt might not be entirely necessary, but even so it might be interesting since the vast majority of transactions in a well designed app should succeed in any case. So whatever speed up you get would be gravy, if the effort is small enough.#2017-07-0411:23henrikAnd then there’s probably the odd transaction where you really want to wait for the round trip, so ideally you should be able to decide per transaction whether to use it or not.#2017-07-0411:27hmaurerTrue. Let’s see if someone has an answer to this#2017-07-0411:41val_waeselynck@henrik where would you call db.with() ? Peer? Transactor?#2017-07-0411:51henrik@val_waeselynck Peer, right? Call with while simultaneously sending the same thing to the transactor. 
And once the transactor comes back with a :+1: or :-1:, the optimistic aspect of the state would have to be thrown away in lieu of the realistic ditto.#2017-07-0414:20GalauxHi everyone. I can't get my head around whether I should only transact the parts of my entities that have actually been modified or if I can just pass the whole entity and Datomic manages to only persist datoms that were changed#2017-07-0414:21GalauxI guess the result of a :find will be the same anyway but if Datomic does not remove unmodified datoms then I am going to clutter my storage…#2017-07-0414:25potetm@henrik I mean, that's do-able, assuming the semantics are okay for your use case.#2017-07-0414:27potetmAt that point, though, it's not even eventually consistent. It's, "I won't usually lose your data" consistent.#2017-07-0414:27karol.adamiec@gax datomic will diff the transaction and only touch things that it needs to. You can see that easily doing a transaction in repl and then doing it again. The return value will say nothing changed on second run.#2017-07-0414:27potetmUnless you have some kind of scheme to ensure there will be no data loss.#2017-07-0414:39Galaux@karol.adamiec ok sweet!#2017-07-0414:39GalauxThanks#2017-07-0414:39GalauxI will try that just for the sake of experimenting#2017-07-0416:23andreiI am trying out datomic for some of our use-cases and we have the following query:
[:find [(max ?revision) ?id]
:where [_ :appRegistry/revisionNumber ?revision]
[_ :appRegistry/id ?id]]
we try to return all Apps with the highest revision for a given ID,
however we only get the app with the highest revision#2017-07-0416:26pesterhazytwo queries, one to find the highest revision, one to retrieve all the apps?#2017-07-0416:27andreiso this cannot be done in 1 go ?#2017-07-0416:27pesterhazyI can't say definitely that it cannot be done#2017-07-0416:28pesterhazybut I will say that often the answer with Datomic is to use multiple queries. Remember that the data is often local inside the peer, so the penalty is not as great at with an SQL db#2017-07-0416:31andreior we could use a subquery#2017-07-0416:31andreialthough I did not find a good example on how that works#2017-07-0416:33pesterhazythen you might as well use d/q twice... you'll find it frustrating to try to express complex queries in a single d/q call, especially involving aggregation#2017-07-0416:35andreigot it#2017-07-0416:36andrei@pesterhazy thanks for the clarification#2017-07-0416:57andreiin the java api, if one wants to do a nested query, how do you refer to datomic.api/q?
e.g. this use-case
https://groups.google.com/forum/#!searchin/datomic/Kieran/datomic/5849yVrza2M/nHe6QZQ7CGYJ
(d/q '[:find (max ?count)
       :where [(datomic.api/q '[:find ?track (count ?artist)
                                :where [?track :track/artists ?artist]]
                              $)
               [[?track ?count]]]]
     (d/db conn))
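For completeness, the two-query approach pesterhazy suggested above sidesteps nesting entirely; a sketch against the mbrainz-style schema used in that thread, assuming an existing conn:

```clojure
(require '[datomic.api :as d])

(let [db (d/db conn)
      ;; first query: artist count per track
      counts (d/q '[:find ?track (count ?artist)
                    :where [?track :track/artists ?artist]]
                  db)]
  ;; second step: take the max count in ordinary Clojure
  (apply max (map second counts)))
```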
#2017-07-0416:58andreiright now one app uses Peer.query("datalog string")#2017-07-0417:46msshey there, extreme newbie q that I’d appreciate some help on. trying to establish a unique attribute (user’s email) and use it as an ident in a pull query. code roughly looks like the following:
(d/transact conn
            [{:db/ident :user/email
              :db/valueType :db.type/string
              :db/unique :db.unique/identity
              :db/cardinality :db.cardinality/one
              :db/doc "A user's email address"}])
(d/transact conn
[{:user/email "
am I doing something wrong here? I’d think that pulling based on a unique ident would return the attributes for that specific user, but maybe I’m mistaken about that#2017-07-0417:51mssfigured it out. realized that {:user-email " should be a vec, as in [:user-email "#2017-07-0419:22henrik@potetm Is it really data loss if it would have been rejected by the db? I guess that's more of a philosophical question than a practical one.
Yeah, potentially some kind of "error handling" scheme could be triggered on the delta between the temporary fantasy and the more permanent reality. #2017-07-0419:32val_waeselynck@henrik this could be done, but what would be the point? What reads would the with'ed db serve? To me the point of using with ahead of transact is to ensure the transaction wouldn't violate some invariant.#2017-07-0419:33potetm@henrik assuming you're storing the "temporary fantasy" somewhere stable, and that all transactions are reconcilable, there might not be data loss.#2017-07-0419:34potetmOtherwise there absolutely will be. And not in a philosophical sense. If I understand correctly that is.#2017-07-0503:10matanHi, I am looking into trying out datomic.#2017-07-0503:10matanDoes datomic help maintaining any kind of schema or relationships among one's data, or does it simply store and retrieve keyed maps to the underlying storage, leaving any data connections to user code?#2017-07-0503:11matanCould you kindly comment and/or point me in the right direction about that?#2017-07-0503:45gws@matan have you had a chance to read http://docs.datomic.com/schema.html yet?#2017-07-0505:17henrik@val_waeselynck The point would be to make it seem like writes complete faster than they actually do. Like I said, it’s mostly a thought exercise than anything else. Optimistic updates are sometimes used in UI design to make it seem as if things are faster than they are. You show the update as completed to the user, even as it is being sent to the backend.#2017-07-0505:18henrikFacebook appears to do something akin to this. Your post is more or less immediately visible to whatever users happen to be provisioned to the same server/cluster as you, while it can take some time before the post actually propagates to other servers/clusters.#2017-07-0505:23henrik@potetm So, let’s say that we’re running a forum where a user tries to create a new thread. 
Checks are made to make sure that the user is posting somewhere where they are allowed to post, and then the data is sent off to the transactor. What could make the transactor reject the data, and how would we normally deal with it?#2017-07-0506:07val_waeselynck> Does datomic help maintaining any kind of schema or relationships
@matan yes, Datomic is very good at that. The schema is explicit, although more flexible than SQL databases. See http://docs.datomic.com/schema.html#2017-07-0506:10val_waeselynck@matan http://www.learndatalogtoday.org/ is good at showing how expressive Datomic querying is.#2017-07-0506:13val_waeselynck@henrik I see; well this approach definitely has a lot of challenges, both in managing the state location and in orchestrating subsequent writes#2017-07-0508:26matanthanks @gws @val_waeselynck for pointing me in all the right directions!#2017-07-0508:38matanokay, so with regard to schema, right, datomic uses a concept it calls schema, to describe the allowed attributes in the database (and this schema may leave dead data behind, when altered). Hopy I've followed the right lingo so far.#2017-07-0508:41matanBut what about relationships within the data? as I understand the unit of storage is a datom:
> Each datom is an addition or retraction of a relation between an entity, an attribute, a value, and a transaction.#2017-07-0508:42matanAnd as I understand, unlike e.g. a (legacy) RDBMS, datomic incorporates no enforcement of constraints on relations between datoms, am I correct insofar?#2017-07-0509:05asierHi, quick question. Can I get the :db/txInstant from (d/entity db 17592186046014) or it's only possible with datalog?#2017-07-0509:10mgrbyte@aiser you can't reach the transaction directly through an entity. If you don't want to use a datalog query for some reason, you can use the datoms api to get tx entity, then get the tx instant from that: (->> (d/datoms db :eavt 17592186046014) first :t ((partial d/entity db)) :db/txInstant)#2017-07-0510:51mgrbyteoops, sorry about the typo (:t, should of been :tx in the above)#2017-07-0509:17asierCheers - I have this error message IllegalArgumentException No matching clause: :t datomic.db.Datum (db.clj:326) - I'll dig into it#2017-07-0510:14isaac@asier use :tx instead of :t#2017-07-0510:15isaac5 elements tuple, [:e :a :v :tx :added]#2017-07-0510:19asier@isaac aha - yes!#2017-07-0510:34hmaurerHello! Quick question: why is there a distinction between “t values” and transaction IDs in Datomic?#2017-07-0510:36hmaurerAlso, is it possible to keep the transaction IDs and/or t values the same after a complete backup restore?#2017-07-0510:42hmaurere.g. regarding my second question, I would like to know if I can safely store transaction IDs and/or t values outside the system to “point at particular point in times”, and whether those references will be broken in case of disaster recovery#2017-07-0510:42hmaurerObviously I could also use txInstants, but it seems a txId / t value would be better#2017-07-0511:14andrei@pesterhazy we managed to solve our datomic query in 1 go using :with
[:find (max ?revision)
 :with ?id
 :in $ ?id
 :where [?m :appRegistry/id ?id]
        [?m :appRegistry/revisionNumber ?revision]]
this returns what we wanted, the app with the max revision for a given id.#2017-07-0511:17pesterhazyCool!#2017-07-0511:18pesterhazyHaven't used with yet#2017-07-0511:20andreiwith allows to consider additional vars for an aggregation#2017-07-0511:20andreias far as I understand#2017-07-0511:20andreihttp://docs.datomic.com/query.html#sec-5-17-1#2017-07-0512:24igrishaev#2017-07-0512:47potetm@henrik What happens when you get a network blip between the peer and the txor, and your transaction never makes it to the txor?#2017-07-0512:48henrik@potetm That’s a good example! So I guess, silently retrying behind the scenes in that case?#2017-07-0512:49potetmAnd if the server that is trying the request happens to die while retrying?#2017-07-0512:51henrik@potetm We’d be in trouble I bet.#2017-07-0512:52potetmLong story short, without some intermediary persistence there would be data loss.#2017-07-0512:52henrikYou’d have the same amount of data loss without optimistic updates though.#2017-07-0512:52henrikThe problem lies in user expectations.#2017-07-0512:52laujensenIm looking for an intuitive way to update something like a user record. The user can fill out 50 fields which are submitted. If something has changed, it flies straight through without issue. But if something has been emptied, I cant just submit a map with a nil value, I need to make a separate retract transaction. Is there some idiomatic tool for simplifying that workflow?#2017-07-0512:53potetmNot if you wait for the txor to respond that it's processed your tx.#2017-07-0512:53henrikAnd what if the server waiting for the response dies in the meantime?#2017-07-0512:53potetmYou respond to the user that their request failed.#2017-07-0512:53potetmBecause you know that.#2017-07-0512:54henrikIn both cases we can know that the process failed, but in one case we made it seem like everything was a-OK before we were entirely sure.#2017-07-0512:55henrikEqualing potential confusion for the user.#2017-07-0512:56potetmRight. 
You would have to establish an understanding that things "aren't quite done until I get the green check mark"#2017-07-0512:57laujensenIm looking for an intuitive way to update something like a user record. The user can fill out 50 fields which are submitted. If something has changed, it flies straight through without issue. But if something has been emptied, I cant just submit a map with a nil value, I need to make a separate retract transaction. Is there some idiomatic tool for simplifying that workflow? - Addendum: Something akin to nil making an automatic retraction#2017-07-0512:58henrik@potetm I think you’re right, and for things that are very essential, you would want to be pessimistic rather than optimistic. Optimistic writes would only make sense in the case where we know that we will be correct 99.99% of the time or better.#2017-07-0512:58henrikIf the success rate is very, very high, we could assume that it approaches 100% and declare victory “prematurely”#2017-07-0513:01potetmI mean, I think you hit the nail on the head before. If the user has a good mental model for what's going on, you can do whatever you want.#2017-07-0513:03potetmBut you don't get that for free by just asynchronously writing to the db. You have to build lots of things, most importantly you must build user understanding.#2017-07-0513:21hmaurer@henrik quick comment on the earlier discussion on optimistic writes: you mentioned that UI applications do it, but it seems like they have different concerns. A user-facing app has to be very responsive, and network delays can be pretty large. A backend application can usually afford slightly longer delays, but most importantly network delays are very low since your database will usually be running in the same local network#2017-07-0513:29hmaurer@laujensen this was asked earlier in this channel. 
Hang on, I’ll try to find the message#2017-07-0513:30hmaurer@laujensen https://clojurians.slack.com/archives/C03RZMDSH/p1498911745673823#2017-07-0513:31hmaurerRoughly speaking I think the summary is that you would need to write your own transaction function which retracts the missing attributes.#2017-07-0513:31hmaurerDon’t take my word on it though#2017-07-0513:33hmaurer@laujensen actually, couldn’t you just write a helper function which, given a map of attributes (with potential nil values), generate the right set of assertions and retractions?#2017-07-0513:34laujensenThanks buddy! And yeah I could, but it just feels like fixing datomic instead of using it. But looks like it needs some fixing#2017-07-0513:35hmaurerWhat exactly would you like Datomic to do? If you would like to “replace” the entity (e.g. only keep the new attributes you are transacting, and retract all others) then you will need to write a transaction function from what I understand.#2017-07-0513:36hmaurerBut if you know, at the point where you make your new assertions, exactly which attributes need to be retracted, then I don’t see an issue generating those retractions in your app and transacting the assertions and retractions all at once#2017-07-0513:38hmaurerDoes this make any sense?#2017-07-0513:38laujensenIn my mind datomic should automatically retract anything thats assigned a nil value. That would simplify the interface#2017-07-0513:39hmaurerOh, I see. Yeah, I am not sure what the implications of this would be but it could be neat. As I said you can implement that very easily though#2017-07-0513:39hmaurerThe “map” syntax of transact is just a convenience for writing a vector of assertions#2017-07-0513:41laujensenhttp://www.matthewboston.com/blog/setting-null-values-in-datomic/#2017-07-0513:41laujensenIve modelled a small wrapper after this principle of just joining retracted/edited fields in a generic sense.#2017-07-0513:42hmaurer@laujensen oh I hadn’t read this blog post. 
Yes, that’s pretty much what I was suggesting#2017-07-0513:44laujensenGood, then Im on the right trail#2017-07-0513:44laujensenThanks for weighing in#2017-07-0513:47hmaurer@laujensen Happy to help. Good luck 🙂#2017-07-0521:45matanThanks for the answers yesterday. Last newb question I guess:
Is it fair to say that any constraints between datoms are left to transactions to explicitly maintain, or is there any other mechanism which is more declarative? I am pretty sure it's the former not the latter case.#2017-07-0522:08hmaurer@matan the only constraint Datomic can keep track of is uniqueness I think#2017-07-0522:08hmaurerAll other constraints are indeed left to the transactions to explicitly maintain#2017-07-0522:12hmaurerThis makes me wonder however: is it possible to define a transaction function in Datomic that should be executed on every transaction? As a way to maintain an arbitrary invariant#2017-07-0522:13hmaurer@val_waeselynck maybe you would know? ^#2017-07-0522:22val_waeselynck@hmaurer what you can do (at some performance cost of course) is define a transaction function which would wrap a transaction request, db.with() it, check the invariant, then transact it or throw an error.#2017-07-0522:33hmaurer@val_waeselynck Oh I see, but what I meant is: is it possible to execute that function on every transaction, not upon request with the :db/fn attribute#2017-07-0522:33hmaureras a way to ensure that bad data can never get transacted#2017-07-0522:33hmaurerIt was more out of curiosity; I’m not sure I would want to do it in a production system#2017-07-0522:33val_waeselynck@hmaurer no, there is no trigger-like mechanism in Datomic#2017-07-0522:36val_waeselynck@hmaurer @matan I'm curious if there's a particular feature of other database systems you're trying to find here ?#2017-07-0522:37hmaurer@val_waeselynck no.
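val_waeselynck's wrap-and-check idea above could look something like this (a sketch; `:myapp/checked-tx` and `invariant-holds?` are hypothetical names, not Datomic APIs): speculatively apply the wrapped tx-data with `d/with`, check an invariant on the resulting db value, and either expand to the tx-data or throw, aborting the whole transaction.

```clojure
(require '[datomic.api :as d])

;; Sketch of a transaction function that guards an invariant.
(def checked-tx
  {:db/ident :myapp/checked-tx
   :db/fn (d/function
           {:lang   "clojure"
            :params '[db tx-data]
            :code   '(let [db-after (:db-after (datomic.api/with db tx-data))]
                       (if (invariant-holds? db-after) ; hypothetical predicate
                         tx-data                       ; expand to the real tx-data
                         (throw (ex-info "invariant violated" {}))))})})

;; Peers then opt in by wrapping their transactions:
;; @(d/transact conn [[:myapp/checked-tx [{:user/name "x"}]]])
```

As noted in the discussion, this only protects transactions that go through the wrapper; a peer can still bypass it.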
I was just curious if you could forcibly maintain an invariant in this way#2017-07-0522:38spiedenwill the transactor process multiple transactions in parallel if they’re against different databases within the same storage?#2017-07-0522:38val_waeselynck@hmaurer well just for performance reasons you'd probably want to be explicit about where to look for invariant violations, so having to wrap the tx in a function call does not seem like an additional cost to me#2017-07-0522:38souenzzo@hmaurer you can use https://github.com/MichaelDrogalis/dire to wrap datomic.api/transact#2017-07-0522:39hmaurer@souenzzo oh interesting, I’ll take a look, thank you#2017-07-0522:40souenzzo(not sure if it's a best practice) 😅#2017-07-0522:42val_waeselynck@souenzzo not sure this solves the same problem; error handling is about dealing with bad stuff after it happens, maintaining invariants is about preventing bad stuff from happening :)#2017-07-0522:44val_waeselynck(Having said that, preconditions could do the trick)#2017-07-0522:44hmaurerOff the top of my head, I guess a “soft” option would be to hope your code doesn’t mess up and respect the invariants (test it properly, etc), but just in case have a service watch the transactions through the Log API and report any infraction#2017-07-0522:45souenzzoThe other day I thought of using dire to transform the tx-data of d/transact by adding my db/fn ... But reviewing it now, I do not know if it is able to do this#2017-07-0522:45hmaurerI am not sure that would be a very judicious thing to do, it’s late, but I’ll throw it out here 😄#2017-07-0522:45val_waeselynckOr a batch job which inspects the whole db periodically#2017-07-0522:46hmaurerOr that, yes. So you get the peace of mind without the extra cost on each write#2017-07-0522:47souenzzoAt best, dire can inspect all tx-data and log it if any one is without your db/fn ...
😕#2017-07-0522:47val_waeselynck@hmaurer Datomic has opened a world of new possibilities, we need all the crazy ideas we can get to explore it :)#2017-07-0522:48hmaurer@val_waeselynck it’s great. It has a lot of the benefits of event sourcing without the pain of implementing it yourself#2017-07-0522:48hmaurerI just started exploring it but I am going to have a lot of fun over the coming months 🙂#2017-07-0522:49hmaurerAh by the way, I have another question which you might know about @val_waeselynck :#2017-07-0522:49hmaurerI understand there is no “order” clause with Datomic, but will the order of n-tuples returned by a Datalog query always be the same for a given db value?#2017-07-0522:49hmaurere.g. can I rely on it to do cursor pagination based on index in the result array, etc#2017-07-0522:50val_waeselynck@hmaurer no, I don't believe so#2017-07-0522:50hmaurerI suspect that the “re-indexing” step run periodically by the transactor might mess this up#2017-07-0522:50hmaurerAh 😕#2017-07-0522:51val_waeselynckYou'll need to sort the whole result yourself then truncate. But you can pull most of the data downstream of that#2017-07-0522:52hmaurerYeah so long as the data isn’t huge in-memory sorting should be fine#2017-07-0522:52hmaurerAlso, do you know if I can rely on tx ids or “t values” to reference points in time in the database, and store those externally?#2017-07-0606:54val_waeselynckhmaurer: as you suspected, relying on Datomic eids remaining stable on the long term is generally discouraged, because that's not robust to log rewriting (having said that, I have yet to see a complete story about log rewriting with Datomic). Same goes for t values IMO. If :db/txInstant is not good enough for you, I suggest you annotate each transaction with a UUID-type attribute using datomic.api/squuid.#2017-07-0522:53hmaurere.g.
if I ever need to restore the DB from a backup, will I be able to keep the same tx ids or “t values”?#2017-07-0522:53hmaurerso as not to break those references#2017-07-0522:53val_waeselynckThese questions will have to wait until tomorrow - good night everyone, have fun!#2017-07-0522:54hmaurerval_waeselynck: Good night!#2017-07-0522:57spiedeni’m looking at introducing a new database for handling some high latency (700k+ average datoms) transactions versus adding them to my current db of low latency ones (~30 average datoms). if i just do d/create-database using the same DDB table will the transactor process low latency transactions and high latency ones at the same time? (so former aren’t held up by latter?)#2017-07-0614:09msshey all, newbie question regarding the pull api.
when resolving a ref entity in the pull api, the value returned is a map of attributes (e.g. pull [*] -> {:my-ref-attribute {:db/id 12345} ...}, pull [* {:my-ref-attribute [*]}] -> {:my-ref-attribute {:db/id 12345 :db/ident :my-ns/my-ident} ...}).
when resolving a ref entity in the query api, the actual ident is returned. (e.g. find ?eid ?my-ref-attribute -> [12345 :my-ns/my-ident]).
is there any way to return the actual ident for a ref from a pull (as in the query api), as opposed to a map of attributes? so the return value would be {:my-ref-attribute :my-ns/my-ident}.
this thread – https://groups.google.com/forum/#!topic/datomic/xxoYiko2muY – on the datomic user mailing list gets at what I’m trying to do. just wanted to check if that’s still something unsupported by the api
thanks for any input or advice!#2017-07-0615:27jjttjjdo people usually default to include project names in their :db/ident keywords? for example, do you use :myapp.user/first-name or just :user/first-name#2017-07-0615:33favila@mss The query api does not automatically change ids to idents. I am not sure how you got that impression#2017-07-0615:33favila@mss The only api that has the behavior you describe is the entity api.#2017-07-0615:33mssyou’re right, my mistake. meant to say that there’s an easy syntax for specifying how to return an ident#2017-07-0615:34mssand what shape to return it in#2017-07-0615:34msspull can return an ident easily, but it’s always in a map of attrs#2017-07-0615:34mssafaict#2017-07-0615:34favilain a query you need to explicitly follow :db/ident to the ident#2017-07-0615:35favilathe only api that implicitly treats entities-with-idents as keywords is the entity api#2017-07-0615:35favilain every other case, if you want an ident you need to ask for it#2017-07-0615:36favilawith pull results, a common strategy is to walk the results with clojure.walk/prewalk and replace any maps with :db/ident in them with the ident itself#2017-07-0615:37favilathen ensure you pull :db/ident in your pull pattern#2017-07-0615:37favilapull [* {:my-ref-attribute [:db/ident]}] in your example#2017-07-0615:46mssmakes sense, really appreciate the help#2017-07-0618:55kurt-o-sysI'm a bit confused about datomic starter and free. (I'm developing an app for a small ngo, so the pro version is certainly a no-go due to the pricing. small ngo's can't afford it). I do understand the Client not available on the free version and limited to 2 peers etc. But, it's about the updates: the free version can be updated at any time (meaning: install a new version and start using it), while using the starter version, this is not possible? Am I right?#2017-07-0619:02spiedeni believe starter is free upgrades for a year but then you need to pay. 
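The prewalk strategy favila describes above might look like this (a sketch; `flatten-idents` is a made-up name): pull with `{:my-ref-attribute [:db/ident]}`, then collapse every `{:db/ident ...}` map in the result down to the keyword itself.

```clojure
(require '[clojure.walk :as walk])

;; Collapse {:db/ident :some/kw} maps in a pull result to :some/kw.
(defn flatten-idents [pull-result]
  (walk/prewalk
   (fn [x]
     (if (and (map? x) (contains? x :db/ident))
       (:db/ident x)
       x))
   pull-result))

;; (flatten-idents {:db/id 12345
;;                  :my-ref-attribute {:db/ident :my-ns/my-ident}})
;; => {:db/id 12345, :my-ref-attribute :my-ns/my-ident}
```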
not sure if you get a perpetual license for the latest version at the end of that year but it seems that way#2017-07-0619:13kurt-o-sysSo, free is unlimited updates, starter only for 1 year and then you need to upgrade to pro to receive updates... Too bad.#2017-07-0620:26jaret@kurt-o-sys @spieden starter has perpetual licensing so you can continue using versions released prior to the expiration of your starter license indefinitely.#2017-07-0620:28kurt-o-sys@jaret Right, but no updates, meaning, after a few years, you'll probably be lacking some features/security updates/ ... . Not a good idea, experience tells me.#2017-07-0620:43spiedenYeah, thankfully for any reasonably sized money making enterprise the license cost is pretty affordable#2017-07-0620:54kurt-o-sysright... that's it:
> any reasonably sized money making enterprise the license cost is pretty affordable
I'm talking about small non-profit organizations/associations, working with 1-3 FTE and a lot of volunteers (from 30 to >100). It's really not affordable for them. They are financially very small, but they are not small at all in terms of tech needs. Their 'business' is often pretty complex compared to usual businesses and they need to fall-back to poorer tech, which isn't really helping them either. (Yes, I know, the free edition, but in this use case, the client library makes much more sense.)#2017-07-0621:30spieden@kurt-o-sys maybe talk to them and they’ll cut you a deal(?)#2017-07-0621:30spiedencognitect that is#2017-07-0623:05matanA late thank you for the discussion about enforcing constraints in transactions. I'm not sure what the practical conclusion might be, and my question was for the sake of understanding, not a particular use case.#2017-07-0623:06matanI can't help but wonder though, how people make sure their data doesn't get corrupt, when constraints are expected in the data.
I guess only smart and proactive testing might allow smooth sailing....#2017-07-0623:07matanBut the cost of an error would be eternally corrupt data, given the write-only nature of datomic, which is a bit more than with an RDBMS, where the cost of an error is more transient (in the non-pathological case). Just a thought.#2017-07-0623:14hmaurer@matan Hi! There was a discussion on precisely this topic yesterday (or the day before yesterday?) on this channel#2017-07-0623:15hmaurerThe gist of it was that you cannot enforce constraints directly. If enforcing them in your application code is not enough, you can enforce them in a “transaction function” which runs on the transactor, but this relies on your application code using that transaction function (it could bypass it)#2017-07-0623:15hmaurerAnother approach could be to either inspect backups or inspect the transaction log live on the lookout for constraint violations
non-fixable way#2017-07-0623:19hmaureryou could restore to yesterday’s backup, then apply every transaction that ran between yesterday and today#2017-07-0623:19hmaurerfiltering out the transactions that caused the data corruption#2017-07-0623:19hmaureror something like that#2017-07-0623:20matanany good book covering such processes? not that online docs aren't good enough...#2017-07-0623:20hmaurerYou’ll have to wait for someone else’s answer, I just started learning about Datomic#2017-07-0623:20hmaurerBut the online doc does talk about the “Log API”#2017-07-0623:20hmaurerwhich could be used in conjunction with backups to do this kind of recovery#2017-07-0623:21hmaurerAre you worried about a particular kind of data corruption?#2017-07-0623:22matanNot really, I was just assessing the pros and cons of using datomic#2017-07-0623:22hmaurerDo you have a kind of constraint in mind that would be enforced by a relational DB and, if not enforced, would lead to “unrecoverable” data corruption?#2017-07-0623:28matanhmaurer: unrecoverable is subjective... so, not really#2017-07-0623:23matanBy the way, in clojure core, we do have validators built-in to the api, for the state handling constructs (atoms, agents...)
I wonder why this concept was not carried over to the flagship database being datomic#2017-07-0706:52val_waeselynckmatan: well, generally speaking, Datomic's authors' line of conduct has always been to be minimalistic about adding new features in order to keep things simple, and I think it's a good strategy. Now, specifically, you can emulate validators using a transaction function as shown above - it does require the Peer to be cooperative about it, but that's not too much of a hassle IMO#2017-07-0623:23hmaurer@matan unrelated but if you are assessing datomic, check this out: https://medium.com/@val.vvalval/what-datomic-brings-to-businesses-e2238a568e1c#2017-07-0623:25matanhmaurer: Thanks, will review it! hopefully it's not marketing material though#2017-07-0623:29hmaurer@matan for you to judge. It was written by someone using Datomic in his startup, and the points are well justified imo#2017-07-0706:39val_waeselynck@matan I wrote it, and I confirm it's not marketing material. FWIW, I've also tried to assess the limitations of Datomic in the post.#2017-07-0706:40val_waeselynck(of course, do feel free to challenge the content, it can only help me make it better)#2017-07-0820:13matan@U06GS6P1N thanks again for all the assistance#2017-07-0623:25hmaurerMmh, I would be curious to hear about that too#2017-07-0623:26matanWell, maybe someone will chime in ..#2017-07-0623:26matanCalling it a day#2017-07-0706:37val_waeselynck> the cost of an error would be eternally corrupt data, given the write-only nature of datomic
@matan @hmaurer Not true in my experience. That would be the case if you relied on past versions of your database in online code, which you don't want to do anyway in the vast majority of cases, for even other reasons than recovering from data corruption (schema evolutions, migrations, offline imports etc.) - it's a common misunderstanding of beginners (myself included) to think that they're going to use stuff like db.asOf() extensively in online code, but it usually turns out it's not very viable. Then the situation is the same as mutable DBs such as RDBMS, except that when data corruption does occur, you have all the historical and speculative features of Datomic to track down how it happened and reproduce it, which gives you an advantage in solving the problem.#2017-07-0710:15hmaurerval_waeselynck: why do you think db-asOf in online code wouldn’t be viable?#2017-07-0714:44val_waeselynck@hmaurer @U08715BSS I definitely need to write a blog post about that, but think about what happens when you need to make a data migration and / or expand your schema; you won't be able to consume these changes in an asOf db. Features giving users access to several versions of an object (à la google docs for instance) should generally not be implemented using asOf(), but by reifying versions into version entities - keep asOf() for auditing and debugging.#2017-07-0714:53hmaurer@val_waeselynck True… Would you copy over to the “version” entity all the attributes of the entity? I guess this would be a good use-case for a transaction function#2017-07-0820:21val_waeselynckIt's hard to give a general answer unfortunately#2017-07-0820:33hmaurer@val_waeselynck Are you doing this in production? (reifying versions)#2017-07-0709:19matan@val_waeselynck thanks for these comments, I really appreciate that!!#2017-07-0709:50mgrbyte@val_waeselynck Can you expand on why using (:as-of db) didn't turn out to be viable?
we're not using it yet, but have thought of doing so#2017-07-0713:05mssnew to datomic and having trouble figuring out how to model something in it, would love some insight:
a user (unique identity on email, let’s say) can be a part of multiple organizations. within each organization, a user has a role, which can be an enum of a few different options.
I started by trying to model this with a schema of 1) an organization having :organization/users, a ref to a user which is :cardinality/many 2) each user is associated with an organization via that ref. unfortunately, I couldn’t figure out a way to slide a :user/role or similar property into the schema easily considering that for each organization that a user is a part of, they might have a different role.
the second thing I thought of doing was creating a different user for every association with an organization. so user A can be a part of organization A with role ABC, and user A can also be a part of organization B with role XYZ, but there’s a different record for each user within the db for their participation in each organization. that feels slightly heavyweight, though, because I’d have a bunch of duplicate user records laying around.
what I’m struggling with is that it seems in datomic there’s not an easy way to say “this thing is related to this other thing, and the relation itself has a property”.
if anyone has any insight about how I might go about this, would love some input. I’m new to datomic, so apologies if I’m missing something obvious.
thanks in advance for any help!#2017-07-0713:14robert-stuttaford@mss :user/memberships many ref, :membership/org one ref, :membership/roles many enum. If you often need to go from user direct to org or vice versa, you can also model that directly: :user/orgs many ref, but you’ll have to take care of keeping both paths aligned, or at least make your code resilient to misaligned collections#2017-07-0713:14mssyep that makes sense. appreciate the help#2017-07-0713:16robert-stuttafordin SQL this would have been done with a join table, but in Datomic, we simply reify the relationship as an entity with its own name#2017-07-0713:17msspretty straightforward. just such a different way of modeling data/relationships, have to relearn how to use a database 😝#2017-07-0713:17mssthanks again#2017-07-0713:18robert-stuttaford:+1:#2017-07-0717:32rnewman@mss this is a common situation with tuple stores (including RDF stores and graph stores). There are two generic ways to do it: with an additional column in the tuple, or via explicit reification. In the Datomic case, explicit reification is as robert described: you introduce a new entity with new labeled links.
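robert-stuttaford's reified-membership modeling could be written out as schema roughly like this (a sketch; the `:db/isComponent` flag and the example role idents are my assumptions, not part of his answer):

```clojure
[{:db/ident       :user/memberships
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/many
  :db/isComponent true}                 ; assumption: memberships die with the user
 {:db/ident       :membership/org
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :membership/roles
  :db/valueType   :db.type/ref          ; pointing at role enum idents
  :db/cardinality :db.cardinality/many}
 ;; example role enums
 {:db/ident :role/admin}
 {:db/ident :role/member}]
```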
Datomic has a tx field in its e a v tuples, so in some cases it’s appropriate to use that as the reified entity.#2017-07-0717:33rnewmanFor example, there are two ways to represent a person joining an organization on a given date: either [x :foo/join y] [y :foo/org :myorg] [y :foo/joinedOn date], or [x :foo/joinedOrg :myorg tx] [tx :db/txInstant date]#2017-07-0717:34rnewmanIn your case you probably wouldn’t use metadata about the transaction to describe a role, but you could, and it might be appropriate in other situations — particularly those in which you’re trying to model some information about how the data was generated or recorded.#2017-07-0717:35rnewmanOther tuple stores like AllegroGraph have five fields, not four — AG lets you make assertions about a tuple’s ID, and also use a ‘graph’ field for any purpose you like.#2017-07-0717:35mssI hear that, def makes a lot of sense when the additional component of the tuple is time based or otherwise representable by the transaction metadata itself#2017-07-0721:26hmaurerhttp://docs.datomic.com/transactions.html#sec-4-2#2017-07-0721:26hmaurerI quote:
> The :db.fn/cas (compare-and-swap) function takes four arguments: an entity id, an attribute, an old value, and a new value. The attribute must be :db.cardinality/one. If the entity has the old value for attribute, then the new value will be asserted. Otherwise, the transaction will abort and throw an exception.#2017-07-0721:26hmaurerDoes that mean that the whole transaction will abort? (not just the operation containing the cas?)#2017-07-0721:27hmaurerand does the position of the :db.fn/cas vector in the transaction (e.g. if the transaction has multiple operations) matter?#2017-07-0721:28hmaurerActually nevermind, my question makes no sense. Of course it has to abort the whole transaction, since it’s a “transaction”. It couldn’t reject just the :db.fn/cas operation and apply the others, as it would break the atomicity guarantee.#2017-07-0722:23favila@hmaurer also in general, order of items in a transaction does not matter#2017-07-0817:41joelsanchezdoes the fulltext function need :db/fulltext true attributes?#2017-07-0817:41joelsanchez(because i can't seem to make it work)#2017-07-0818:09joelsancheznevermind, seems to be the case 🙂#2017-07-0819:19ezmiller77I'm trying to set up a tagging system in my db, I have added two attrs to handle this:
{:db/ident :metadata/tags
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many
:db/isComponent true}
{:db/ident :metadata/tag
:db/valueType :db.type/keyword
:db/cardinality :db.cardinality/one
:db/index true}
This works more or less in terms of assigning tags, but now I am encountering some problems in filtering for tags. I think when doing this I assumed that I'd be able to reference the tags the way one would idents, as a keyword, e.g. :tag1. But it's proving a bit more challenging. My first approach has been to use a query filter expression, something like this:
'[:find (pull ?doc [*])
:in $ ?tags
:where [?doc :arb/metadata ?meta]
[?doc :metadata/tags ?tagset]
[(every? (set ?tags) ?tagset)]]
where ?tags is a supplied list of tag keywords, e.g. [:tag1 :tag2 ...]. But the result of the pull doesn't quite work here because for each tag the pull returns a map including the :db/id as well as the tag e.g.:
[#:arb{:metadata
[{:db/id 17592186045497,
:metadata/tags
[{:db/id 17592186045498, :metadata/tag :tag1}
{:db/id 17592186045499, :metadata/tag :tag2}],
I thought that I could still work with this by mapping over the ?tagset to extract just the tag keywords, but inside the map expression ?tagset ends up being out of scope. So I'm a bit stumped about how I should approach this problem, which I feel must be somewhat common... Any tips?#2017-07-0819:26val_waeselynck@ezmiller77 I don't think the :where part of the query does what you think it does. Regarding the pull result, I don't think there's anything you can do except post-process it - pull just can't give you the data in the shape you want#2017-07-0820:36ezmiller77actually, i think these transformation functions, or even just a custom predicate function, might be the answer: http://www.learndatalogtoday.org/chapter/6#2017-07-0821:17ezmiller77Nah that won't work either since all that's available at the time of the query is the eid of the value of :metadata/tags. So post-processing it is maybe. Shame.#2017-07-0821:28val_waeselynck@ezmiller77 it's not too hard to implement your own version of pull on top of entities, if you're not too worried about performance#2017-07-0900:19hmaurerWhat would be the “cheapest” infrastructure to run Datomic on? e.g. on a very low load system, what is the minimum amount of RAM required for the transactor and the peers to operate?#2017-07-0900:19hmaurer(I have read the recommendations on the doc, but I am interested in the bare minimum to operate, not the optimal setup)#2017-07-0900:21hmaurerI am considering using Datomic for a non-profit application which should initially have low traffic, but will also generate no revenue so we cannot afford the luxury of running the transactor on a 2GB ram EC2 instance and each peer on their own 2GB instance.#2017-07-0900:31gwsI can run a transactor in dev for a project with a small amount of data in a container with a memory limit of 384MB, and a peer in a similarly sized container.
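Coming back to ezmiller77's tag-filtering question above, the post-processing route might look like this (a sketch against the :arb/metadata schema shown earlier; `docs-with-tags` is a made-up name): pull the tags along with each doc, turn them into a keyword set, and filter in plain Clojure.

```clojure
(require '[datomic.api :as d]
         '[clojure.set :as set])

;; Keep only docs whose tag set contains all of wanted-tags.
(defn docs-with-tags [db wanted-tags]
  (let [wanted (set wanted-tags)
        docs   (d/q '[:find [(pull ?doc [* {:arb/metadata
                                            [{:metadata/tags [:metadata/tag]}]}]) ...]
                      :where [?doc :arb/metadata]]
                    db)]
    (filter (fn [doc]
              (let [tags (set (for [m (:arb/metadata doc)
                                    t (:metadata/tags m)]
                                (:metadata/tag t)))]
                (set/subset? wanted tags)))
            docs)))

;; usage: (docs-with-tags db [:tag1 :tag2])
```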
I'm sure it's possible to go even smaller.#2017-07-0900:33gwsI can easily fit the transactor and a peer in an AWS t2.micro#2017-07-0908:34val_waeselynckSome caveats about Datomic's time-travel features: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2017-07-0908:35val_waeselynck@U5ZAJ15P0 @U08715BSS @U1YTUBH53#2017-07-0909:47matanOh looking forward to read that#2017-07-0909:47matanWill surely help in better understanding datomic!#2017-07-0910:31hmaurer@val_waeselynck great, thanks!#2017-07-0912:32schmeeNice read, thanks!#2017-07-0922:39danielcompton@val_waeselynck when you say "The problem is, ... this won't work!" do you mean that it will return an error, return nil, or return a partial result with missing tags?#2017-07-0919:38ezmiller77@val_waeselynck I ended up just doing post-processing, as you suggested. it was just hard to forego the convenience of being able to do it in a query. it seems as though it is almost possible in those predicate and transform functions -- the problem being that for an attribute the value type of which is a reference, all that's available to those functions is the eid. at least that is I think what was going on, and it appeared not possible and a bit awkward to try to pull the values assocaited with that eid in the middle of the query process.#2017-07-1005:10bedersI'm pretty green with datomic still, so forgive me if this is a dumb question:
Is there some set semantics I can enforce on an entity level?
Here's the example: Parsing e-mail addresses and storing them as :email/personal and :email/address where :email/address is unique, however, personal might differ. That is ok, unless I'm trying to transact this:
[{ :email/address "
{ :email/address "
Any way to get around that?#2017-07-1008:30misha@ezmiller77
1. you can programmatically http://docs.datomic.com/query.html#sec-6
construct query which would look like:
[:find (pull ?doc [*])
:in $ ?t1 ?t2
:where
[?t1 :metadata/tag :tag1]
[?t2 :metadata/tag :tag2]
[?doc :metadata/tags ?t1]
[?doc :metadata/tags ?t2]]
not sure whether it would be more readable/performant/maintainable/etc. than post processing, though.
2. I'd say, your attribute names look confusing. I'd use :metadata.tag/name for individual tags, or even, if you need those only as enum values, – {:db/ident :metadata.tag/tag1}
Your schema makes it look (to me) like both :metadata/tag and :metadata/tags belong to the same entity, and do not represent relationship.
3. Also, if you supply tag ids to the query instead of actual tag keywords – you might be able to use your initial implementation. That'd be "preprocessing" with something like http://docs.datomic.com/clojure/#datomic.api/entid (which can be done inline btw.), I guess.#2017-07-1010:33hmaurer@beders is your :email/personal attribute marked as cardinality many? It seems that you are trying to attach multiple personal names to your entity#2017-07-1017:32bedershmaurer: If I marked them as many, I don't get the set semantic I'd like, i.e. adding an entity with the same email/name combo twice leads to two copies of the same name.
I guess I want this:
email: e1
name: n1, n2, n3, n4
where nx are the different names being used for the same e-mail address.
I can achieve that by declaring cardinality of /personal to many, but then I get duplicates.
I.e. I want n1, n2, n3, n4 to be unique as well#2017-07-1017:41favilaWhat is it precisely you want unique? entity+attribute+value is always unique, so you can't have duplicate names for the same email address if you make :email/personal cardinality-many.#2017-07-1017:43favilai.e {:email/address " is literally impossible.#2017-07-1017:56bedersSo the correct way is to look up the entity first for the e-mail address, then assert additional facts.
I wanted to avoid the extra lookup, but it seems it is unavoidable.#2017-07-1017:56bedersthanks for the help#2017-07-1018:01favilayou could make email address upserting#2017-07-1018:01favilaif that semantic makes sense for your application#2017-07-1018:01favilahttp://docs.datomic.com/identity.html#sec-4#2017-07-1018:02favilaCompare section "Unique Identities" with the following section "Unique Values"#2017-07-1018:02favilaseems like you have :email/address as a unique value, maybe you want unique identity#2017-07-1018:12bedersit's unique identity.
I still will not be able to insert the example I gave above in one go, due to :db.error/datoms-conflict Two datoms in the same transaction conflict#2017-07-1018:13bedersI'll do the extra round-trip to get the e-mail's e before inserting the actual e-mail entity (which contains attributes for :sender, :recipients, etc. )#2017-07-1018:13bedersthanks again#2017-07-1018:21favilaMy point is that is not necessary with unique-identity#2017-07-1018:31favila@beders See this example:#2017-07-1018:31bedersok, I found the error in my original schema: personal was not set to many (and address was set to 'identity')
With
{:db/ident :email/address
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
{:db/ident :email/personal-m
:db/valueType :db.type/string
:db/cardinality :db.cardinality/many}
I can now do transact:
[{:email/address "
and it works as expected!#2017-07-1018:31favila(def uri "datomic:")
;=> #'user/uri
(d/create-database uri)
;=> true
(def c (d/connect uri))
;=> #'user/c
@(d/transact c [{:db/ident :email/address
:db/cardinality :db.cardinality/one
:db/valueType :db.type/string
:db/unique :db.unique/identity}
{:db/ident :email/personal
:db/cardinality :db.cardinality/many
:db/valueType :db.type/string}])
;=>
;{:db-before datomic.db.Db @5c351827,
; :db-after datomic.db.Db @ecb17425,
; :tx-data [#datom[13194139534312
; 50
; #inst"2017-07-10T18:29:14.565-00:00"
; 13194139534312
; true]
; #datom[63 10 :email/address 13194139534312 true]
; #datom[63 41 35 13194139534312 true]
; #datom[63 40 23 13194139534312 true]
; #datom[63 42 38 13194139534312 true]
; #datom[64 10 :email/personal 13194139534312 true]
; #datom[64 41 36 13194139534312 true]
; #datom[64 40 23 13194139534312 true]
; #datom[0 13 64 13194139534312 true]
; #datom[0 13 63 13194139534312 true]],
; :tempids {-9223301668109598144 63, -9223301668109598143 64}}
(d/transact c [{:email/address "#2017-07-1018:31bedersthank you!#2017-07-1018:31bedersyou rock#2017-07-1018:32favilaYour transaction error tells me that you definitely do not have :db.unique/identity set#2017-07-1018:32favila(on :email/address)#2017-07-1011:04isaacWhy doesn't datomic support resolving partitions from string tempids?#2017-07-1013:51ezmiller77@misha thanks for the helpful feedback. i think you are right that the attributes could use some simplifying/clarifying. i can't use enums for these tags because i want the tags to be definable by the end-user. what my schema describes is a simple entity, an arb, that has three things: an id, a value, and metadata. the metadata is a ref with a cardinality of many. its ident is :arb/metadata. then i've defined a set of attributes that can be included as ref-ed values for :arb/metadata, including the one we were discussing: :metadata/tags. all the attrs meant to be refs for :arb/metadata start with :metadata. your comment is helpful because it makes me think that 1) i was misunderstanding the conventions for the . and the / in datomic, and 2) that i could have done this more simply by simply associating a series of attributes using the style you suggested :metadata.tag, where the first part clearly indicates that it's metadata and the second indicates what kind of metadata. then i could skip the whole ref thing. did i get you right? regarding the meaning of the . and / and their conventional use, is this documented somewhere? or discussed in a blog post perhaps?
in the name part#2017-07-1014:22favilaI usually put some indicator of the entity type the attribute appears on in the namespace part#2017-07-1014:25favila@ezmiller77 I'm guessing you want your final result to look like {:db/id ... :metadata/tags [:tag1 :tag2]}? You can't both reify tags and get this result directly from a pull expression. Just accept that you will post-process the result#2017-07-1014:26favilaIf you don't reify tags (i.e. if metadata/tags is cardinality-many scalar type, no data on tags), you can do this.#2017-07-1014:27hmaurerHi! Could someone email me or link a good article on the performance characteristics of Datomic’s “filter” function? It takes the current db and an arbitrary predicate on datoms. How can that be executed efficiently?#2017-07-1014:27favilaOr you can weak-reference the tags with another entity (but from a data-modeling perspective, this is not a good idea)#2017-07-1014:27favila@hmaurer the predicate is run on each and every candidate datom#2017-07-1014:28favilaso make sure your predicate is fast 😀#2017-07-1014:28hmaurer@favila are there any predicates that cannot be used? for example, can I do a datalog query in the predicate to check an ACL or something?#2017-07-1014:28favilayou can do literally whatever you want, as long as it's synchronous#2017-07-1014:29hmaurerright, but if I do a datalog query in the predicate I assume that will become very very slow on a large DB?#2017-07-1014:29favilabut again, speed is important#2017-07-1014:29favilanot necessarily#2017-07-1014:29hmaurerI am not quite sure how many “candidate datoms” there are for a given query#2017-07-1014:29favilaAh. You can determine that clause-by-clause#2017-07-1014:29hmaurerso candidate datoms are datoms that match all clauses?#2017-07-1014:30favilano, datoms are fetched lazily as needed#2017-07-1014:30favilaso the index segment that would be visible (without filtering) for a given clause is put through the filter#2017-07-1014:30favilae.g. 
first clause of a query is [?e :some-attr "some-value"]#2017-07-1014:30hmaurerSo it is not crazy performance-wise to use filters for access control?#2017-07-1014:31favilaso the candidate datoms are (d/datoms :avet :some-attr "some-value")#2017-07-1014:31hmaurerSorry, keep going with your explanation on lazy fetching, it’s interesting 🙂#2017-07-1014:31hmaureroh ok#2017-07-1014:31hmaurerso it will proceed and fetch clause by clause#2017-07-1014:31hmaurerand it will also apply the security filter clause by clause#2017-07-1014:31favilaand then the datoms from the next clause are necessarily limited by whatever could bind to ?e#2017-07-1014:31favilayes#2017-07-1014:32favilathis is a simplification, because there is some parallelism going on#2017-07-1014:32hmaurerso I guess in some cases it could even make a query more performant?#2017-07-1014:32favilaBut clauses are not reordered#2017-07-1014:32hmaurersince you are narrowing down the sets#2017-07-1014:32favilayeah, good point, that could be possible#2017-07-1014:32favilait really hinges on how fast and selective your filter function is#2017-07-1014:32favilayou want that to be as fast as possible always#2017-07-1014:33favilaMany people do use db filters for access control#2017-07-1014:33favila(not really access, visibility)#2017-07-1014:33favilait's simple and brute force, but it gets the job done#2017-07-1014:33hmaurerwith your explanation on lazy fetching it makes a lot more sense#2017-07-1014:33hmaurerthanks#2017-07-1014:34hmaurera bit brute-force, but conceptually it’s really neat to be able to filter the database based on security rules#2017-07-1014:34hmaurerand let the user query that filtered db arbitrarily#2017-07-1014:35favilajust make sure your filter function is fast is all#2017-07-1014:35favilatune the heck out of those#2017-07-1014:35favilaand try to use the same query object with parameters, so that query plan doesn't get regenerated every time#2017-07-1014:35hmaurerI guess I could even pre-load the ACL for 
every user in memory so there are no remote calls in the filter predicate#2017-07-1014:36hmaureralthough Datomic’s peer caching should do the job too I guess#2017-07-1014:36favilayes it often works just as well#2017-07-1014:37favilayou just need to try it#2017-07-1014:37favilasee what happens#2017-07-1014:37hmaurerout of curiosity, is there any way to control the peer cache?#2017-07-1014:37hmaurere.g. force to keep some segments in memory, view what’s current in memory, etc#2017-07-1014:37favilayou can control how big it is, but that's it#2017-07-1014:37hmaurerand in general, are there tools to debug datomic’s internals?#2017-07-1014:37hmaurerobserve what the peer is doing, etc#2017-07-1014:38favilalogs only#2017-07-1014:38favilamaybe there is a metrics callback for peers? don't remember#2017-07-1014:38favilabut there are no tools#2017-07-1014:38favilainternals are pretty black-box#2017-07-1015:13misha@ezmiller77 you can think about attribute's namespace – as an sql table name, and attribute's name – as a column name.
So :metadata.tags would be equivalent to tags column in metadata table.
And :metadata.tag/name – name column in metadata.tag table.
(as opposed to your name column in the same metadata table)#2017-07-1015:16mishasuch thinking is a bit limiting though, because you can have attributes with different namespaces on the same entity (e.g. {:db/id 1 :foo/bar 2 :baz/quux 3}), but it explains my earlier comment well enough.#2017-07-1017:46hmaurer@favila does datomic re-assert existing facts?#2017-07-1017:46hmaurere.g. if I transact a fact, then transact the same fact again later on#2017-07-1017:46favilait used to, but not anymore#2017-07-1017:46hmaurerwill it ignore the transaction or double assert it?#2017-07-1017:46hmaureroh ok#2017-07-1017:46hmaurerthanks#2017-07-1017:47hmaurerso transactions are always idempotent? (excluding possibly tx functions)#2017-07-1017:47favilaI want to say yes, but not sure about new rules for schema#2017-07-1017:47favilaI think still yes though#2017-07-1017:48favilahowever a new :db/txInstant is still asserted#2017-07-1017:48favilaso it's not completely idempotent#2017-07-1017:49favila(d/transact conn []) always asserts at least one datom--the tx-instant#2017-07-1017:49hmaureroh, that was my main question/concern#2017-07-1017:50hmaurerso if you transact the same fact two times, the txInstant will be the latest one?#2017-07-1017:50favilano it will be the earlier one#2017-07-1017:50hmaureroh so you mean it will create an “empty” transaction#2017-07-1017:50favilaif an E+A+V assertion would be redundant, it is not reasserted#2017-07-1017:50hmaurerwith a txInstant#2017-07-1017:51favilawell, it's not an empty transaction#2017-07-1017:51favilait has a txInstant assertion#2017-07-1017:51favilaWhat I mean is every transaction has at least one assertion, the assertion of the time the transaction occurred#2017-07-1017:51favilaeven if you assert nothing else, calling d/transact will assert that#2017-07-1017:51hmaurer“It will create a transaction entity, but no datom will be associated with that transaction”#2017-07-1017:52hmaureractually this makes me wonder#2017-07-1017:52favilano, 
you misunderstand?#2017-07-1017:52hmaurerare attributes of a transaction (e.g. txInstant) associated with their own transaction?#2017-07-1017:52favilayes#2017-07-1017:53faviladatom looks like [tx-id :db/txInstant #inst "..." tx-id true]#2017-07-1017:53favilathat is in fact the index used by tx-log to determine tx ids#2017-07-1017:53hmaurerI see#2017-07-1017:54hmaurerI get it now; thanks for your patience 🙂#2017-07-1021:45weiis anyone validating data going into datomic using spec? in general, how are folks approaching data validation?#2017-07-1107:40val_waeselynck@wei I think this depends more on the external communication architecture of the surrounding system than on the fact that it uses Datomic. We have an RPC-style architecture, in which each endpoint has its Plumatic schema - I wish I could use spec for that, but I need coercion, which is an unfortunate consequence of using JSON as a format. A GraphQL-style architecture may have a spec per write handler - and maybe some of it is either derived from the database schema or colocated with the schema definition.#2017-07-1109:13danielstockton@val_waeselynck Can you not use custom conformers for coercion?#2017-07-1113:40val_waeselynckInteresting, didn't know that was possible!#2017-07-1109:14danielstocktone.g. https://gist.github.com/Deraen/6c686358da62e8ed4b3d19c82f3e786a#2017-07-1109:54hmaurer@val_waeselynck would you use clojure.spec instead of Plumatic if you were starting your app today?#2017-07-1113:43val_waeselynckI'd need to study spec more before I can answer that :)#2017-07-1114:49hmaurer@val_waeselynck let me turn the question differently then: are you happy with Plumatic?#2017-07-1111:44nooga@gws any special configuration needed? 
I’m trying to cram a free transactor onto a raspberry pi#2017-07-1114:11gwsYeah, I set memory-index-max to 1/4 the JVM heap size and object-cache-max to 1/2 the JVM heap size#2017-07-1114:17gwsnooga: I've run one with a 256MB heap, and set a hard memory limit of 384MB for the entire transactor, not sure if you can go much lower but it may be possible#2017-07-1114:22noogaAnd how was it? Did it grind to a halt eventually? I’m running a small sensor hub, tx-ing 20-200 datoms every minute and started thinking about moving this to a spare raspberry pi I have lying around#2017-07-1115:11gwsIt didn't grind to a halt, but I didn't even put that much load through it - I'd be interested to see your findings!#2017-07-1120:30noogathanks, I’ll report back#2017-07-1120:30nooga🙂#2017-07-1112:03gphilipp@val_waeselynck Hi Val, do you know if there’s a tool to visualize a Datomic schema?#2017-07-1112:35hmaurer@gphilipp out of curiosity, what sort of visualisation would you be looking for?#2017-07-1112:35hmaurerthe Datomic console gives you a tree for the schema#2017-07-1112:44gphilippSomething like this : https://github.com/Datomic/mbrainz-sample/blob/master/relationships.png#2017-07-1112:45gphilippIt would probably rely on parsing key names because you can't enforce ref types. #2017-07-1113:40val_waeselynck@gphilipp I don't use one (controversially, I derive schema from code) but I think @robert-stuttaford has some tools for that#2017-07-1113:52robert-stuttaford@gphilipp i built https://github.com/Cognician/datomic-doc to help with exploring and documenting schema. no pretty diagrams, i’m afraid!#2017-07-1113:59gphilipp@robert-stuttaford This looks nice, I’ll take a look#2017-07-1114:00hmaurer@gphilipp interesting. 
Datomic doesn’t have a way to define which attributes belong to which entity (which would be required to auto-generate a diagram like mbrainz), BUT the namespace in attribute keywords kind of plays that role#2017-07-1114:01hmaurerSo if you are strict on always using the same convention for attribute names, namely :entity-name/property-name, then I guess you could auto-generate this diagram#2017-07-1114:02hmaurerSounds like a good opportunity for a PR to Cognician/datomic-doc if @robert-stuttaford is keen 😛#2017-07-1114:02hmaurerI am not sure whether :entity-name/property-name for attributes is a convention with Datomic though#2017-07-1114:03hmaurerIt just seems to be from what I’ve read/seen thus far#2017-07-1114:03favila@hmaurer it's a loose convention. But it will never be a solid convention because the openness of entities is desirable#2017-07-1114:04hmaurer@favila That’s what I thought. So long as it is somewhat widespread I guess a visualisation based on that convention could make sense#2017-07-1114:05favilaBetter would be to think of the namespaces as corresponding to problem domain (i.e. attributes that are meant to be together) or even revision (e.g. :x.y.v2)#2017-07-1114:05favilait just happens that most of the time these correspond to entity instances also#2017-07-1114:06val_waeselynck@hmaurer you will probably want to share some attributes across several entity types#2017-07-1114:07favilaunless there's a higher-level schema encoded somewhere I don't think those visualizations can be automatic#2017-07-1114:07favilaanother wrinkle is that you can have multiple, independent schemas overlaying each other#2017-07-1114:08val_waeselynck@hmaurer note that attributes are entities themselves, and you can use Datomic to annotate them with type information
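[editor's sketch] The suggestion above (attributes are entities, so declare their traits/type info explicitly in the database) might look like the following; :myapp/entity-type and the other domain names here are hypothetical, not part of Datomic, and two separate transactions are used because an attribute must be installed before it can be used:

```clojure
(require '[datomic.api :as d])

;; Hypothetical annotation attribute (not built into Datomic):
@(d/transact conn
   [{:db/ident       :myapp/entity-type
     :db/valueType   :db.type/keyword
     :db/cardinality :db.cardinality/many}])

;; Annotate a domain attribute at install time:
@(d/transact conn
   [{:db/ident          :recipe/name
     :db/valueType      :db.type/string
     :db/cardinality    :db.cardinality/one
     :myapp/entity-type :recipe}])

;; A schema-diagram generator could then query the annotations
;; instead of parsing keyword namespaces:
(d/q '[:find ?ident ?type
       :where
       [?attr :myapp/entity-type ?type]
       [?attr :db/ident ?ident]]
     (d/db conn))
```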
#2017-07-1114:22gphilippTo give some context, I’m trying to grasp the Simulant schema for a POC at work https://github.com/Datomic/simulant/blob/master/resources/simulant/schema.edn. I just found out that @stuarthalloway had made a diagram out of it https://github.com/Datomic/simulant/wiki/Schema-diagram !#2017-07-1114:26pbostromis [com.datomic/clj-client "0.8.606"] still the latest version for the client library? it was released on 27-Nov-2016, seems a bit old#2017-07-1114:43robert-stuttaford@hmaurer @gphilipp PRs most welcome 🙂#2017-07-1115:04hmaurer@val_waeselynck do you think it would make sense to then think of namespaces in Datomic attributes as denoting “traits”?#2017-07-1317:26val_waeselynckGrouping attributes into traits/mixins definitely makes sense: however, if you're going to build something programmatically on top of it, don't base it on keyword-namespace. You're better off declaring the traits of the attributes explicitly in the database - it won't be that much effort.#2017-07-1115:05hmaurerIn the sense that they group attributes relating to a particular “trait” of an entity#2017-07-1115:05hmaurerif that makes any sense#2017-07-1115:10hmaurer@favila oh, I see.#2017-07-1116:39matanAre there special limitations to using datomic from Java?#2017-07-1118:44matanJust wondering if the Java API exposes "all there is to it"#2017-07-1118:48dobladezDoes anybody know if there are plans from Cognitect to offer "Datomic as a Service"?#2017-07-1123:07hmaurerHi! Is it possible to configure the path to storage files when running the transactor in “dev” mode?#2017-07-1123:47favilaYes. It's not in transactor.properties?#2017-07-1203:14cjmurphyI put this in SO: https://stackoverflow.com/questions/45046582/create-entity-that-refers-to-an-existing-entity#2017-07-1206:54val_waeselynckcjmurphy: answered!#2017-07-1203:14cjmurphyIf anyone can help or point me to where the documentation is. 
Thanks..#2017-07-1206:11pesterhazy@cjmurphy I think you need to do first twice in read-account#2017-07-1206:11pesterhazyd/q returns a sequence of tuples. You want the first element of the first tuple I think#2017-07-1206:25cjmurphy@pesterhazy Interesting - so it really is just supposed to be just a number?? I've been thinking of using (d/entity db bank-acct-id) - to really put an entity there. I can't test either approach (`ffirst` or d/entity) right now as there seem to be other problems I need to sort out. Thanks and I'll be back..#2017-07-1206:25pesterhazyan entity id is just a long, yes#2017-07-1206:26cjmurphyAnd d/entity will work as well because of upserting, but is kind of pointless??#2017-07-1206:26pesterhazyyou can also find a unique identifier for your entity and use that in a lookup ref like so: [:my.entity/name "CJ Murphy"]#2017-07-1206:27rauh@cjmurphy I agree with Peter, or you can use .... :find ?a . (notice the trailing dot)#2017-07-1206:27rauhThat'll return a single value#2017-07-1206:27cjmurphyMake it a scalar#2017-07-1206:27pesterhazy*Paulus 🙂#2017-07-1206:27rauhOoopsie, sorry Paulus. 🙂#2017-07-1206:28pesterhazynot sure what you mean about d/entity#2017-07-1206:28cjmurphyThat's a function you pass an eid and it returns the data.#2017-07-1206:29cjmurphyI went for that idea when I thought just putting a number there was wrong. But you guys have corrected me now. thanks.#2017-07-1206:30pesterhazymy rule of thumb is not to expose entity ids to the outside#2017-07-1206:30pesterhazybut it's fine to use them within your code#2017-07-1206:30cjmurphySo put a db/id there in preference to an eid?#2017-07-1206:31cjmurphyDefinitely within the code.#2017-07-1206:31pesterhazy:db/id is the entity id#2017-07-1206:32cjmurphyOh right. So when I pull on [:db/id] I'm doing something senseless.#2017-07-1210:21isaacDo Datomic database functions support multiple arities? (fn [arg1] [arg1 arg2])#2017-07-1214:25mike_ananevHi! 
We are doing a PoC of a global data registry, where we're trying to specify the shape of our data from various sources using clojure.spec. We're considering using Datomic as spec storage. Is it a good idea to store clojure.spec expressions as strings in Datomic? Spec is used at the application level, where we do ETL, validation, transformation, sample generation.#2017-07-1218:23tomcIs it possible to change the owner of an entity referenced via an isComponent attribute in a single transaction?#2017-07-1218:32tomcAh, I should specify that the original owner is deleted in that same transaction.#2017-07-1218:34tomcExample: in a tree, replace a node with one of its children, deleting the replaced node and any of its children that are not the one that replaced it.#2017-07-1219:41favila@tomc [:db.fn/retractEntity the-parent] will retract the is-component children. So you can do it only if you don't use retractEntity#2017-07-1219:59tomcSo if I explicitly retract all attributes of the parent it'll have the same effect, except I'll be able to move the child to another parent?#2017-07-1220:25favilaYou can always move the child; the question is whether there will be any assertions left on the child.#2017-07-1220:25favilaYes you can do what you propose. But it sounds like maybe your attribute should not be is-component?#2017-07-1220:26tomcIn my case I think isComponent is the right choice, and using explicit attr retractions on the parent will probably solve my problem.#2017-07-1220:28favilais-component attrs have a (not-enforced) invariant that the :v of every datom with that attribute is not the :v of any other datom#2017-07-1220:29favilaIOW, the "child" entity (value of the is-component attr) is not reachable except from a single parent and that one attribute.#2017-07-1220:48tomcUnderstood, but if I make that change in a single transaction, that invariant is not violated, correct? 
This seems like something I could write a transactor function to handle...#2017-07-1221:39favila@tomc true, it is not. it's just strange for the lifetime of the child to outlive the parent. If you keep the invariant you will be fine though.#2017-07-1221:40tomcOk, thank you for the help.#2017-07-1314:35souenzzoEvery question about datomic that I google, I arrive at a @val_waeselynck github repo
https://github.com/vvvvalvalval/datalog-rules#2017-07-1315:03hmaurerSounds like a new law. Similar to how you eventually end up on Wikipedia’s “Philosophy” page if you keep following the first link of every Wikipedia article#2017-07-1317:19val_waeselynck@U2J4FRT2T and do you find the answer there? :)#2017-07-1317:20souenzzoAlmost always parrot#2017-07-1318:59matanIs this an okay place for newb questions?#2017-07-1319:01hmaureryes#2017-07-1319:08matanthanks 🙂#2017-07-1319:08matanI probably do not understand in what way datomic is consistent, as I find the description at http://docs.datomic.com/acid.html#sec-2 well, cryptic. Is it simple enough to explain the consistency model of datomic in a few lines of plain English? or would you recommend some other resource for it?#2017-07-1319:09matanHow is a peer's time basis determined by datomic?#2017-07-1319:09hmaurer@matan which parts of the explanation given on that page do you not understand?#2017-07-1319:09hmaurerHappy to try to clarify (but I’m a newb too :p)#2017-07-1319:10matansection 2, where the above link points to#2017-07-1319:10matanI think the key to my understanding would be answering ―
> How is a peer's time basis managed by datomic?#2017-07-1319:12matanConceptually, if all peers need to have the same consistent view of the database, then either all peers have to synchronize with some network element before arriving at that consistent view, or something out of the box governs the peer nodes seeing the same consistent view of the data (?!)#2017-07-1319:13matanI would like to learn what kind of "consistency" does datomic guarantee, and also, at a high level how is it internally accomplished#2017-07-1319:13matan🤔#2017-07-1319:15val_waeselynck@matan For writes, Datomic is serializable. For reads, the system that constitutes a set of peers (viewed as a database server) can have various degrees of consistency and/or availability based on your own decisions#2017-07-1319:17val_waeselynckFor instance, if you use datomic.api/sync, you can always read your writes#2017-07-1319:17val_waeselynckif not, you may have stale reads#2017-07-1319:17val_waeselynck(speaking under control of the cognitect guys of course)#2017-07-1319:22matanI'm not sure I see how this translates to being consistent. It's easy to see that writes can be consistent, if at all a sentence like this isn't by definition void. 
I'm still stumped by the notion of consistency there.#2017-07-1319:25val_waeselynck@matan well in the sense of the CAP theorem, consistent means linearizable, and a set of peers + a transactor behind a load balancer is typically not linearizable when implemented in the most naive way, since they allow for stale reads (updates are pushed asynchronously to peers, and peers remain available in case of a network partition which separates them from the transactor)#2017-07-1319:27val_waeselynckNow, interestingly, you can return the new t to the client after each write and read, and call d/sync for each read, which I believe makes the system closer to CP#2017-07-1319:27val_waeselynckso the client is kind of in control of consistency here#2017-07-1319:31val_waeselynckAgain, not an expert at all, so take what I say with a grain of salt 🙂#2017-07-1319:41val_waeselyncksee also: https://stackoverflow.com/questions/44580596/in-which-position-does-datomic-lies-in-the-cap-triangle/44592282#2017-07-1320:25mssapologies for the newbie question, still wrapping my head around datomic:
if you have an attribute that’s a ref to multiple entities, is there a way to enforce order? if not, how do people usually get around that?
as a practical example, let’s say I’m making a recipe app. I have a recipe entity which has a ref with cardinality/many to possibly multiple recipe-step entities, each of which may be edited/updated at any time
would I want to put a recipe-step/step-number attribute on that entity and sort that way? any other idiomatic ideas?#2017-07-1321:41hmaurer@mss no way to enforce order (there is no order) afaik#2017-07-1321:41hmaurerYou can add an “index” attribute to your entities, or store your data as a linked-list#2017-07-1321:42hmaurerso yes, something like a step-number would be necessary I think#2017-07-1321:55mssinteresting. any intuition on if there’s a way to ease the pain of updating each affected step-number attribute as steps are inserted or removed?#2017-07-1321:56mssobv if I have a list of 1-10 and step 2 gets removed, I can just update 3-10 to have a different value for that step-number attribute. just wondering if there’s a more idiomatic/convenient way#2017-07-1322:01mssthat example is pretty simple, but doing that on a collection of 10k elements seems painful#2017-07-1322:05favila@mss If you are deleting an item, you do not need to renumber#2017-07-1322:05favilaonly if you are inserting an item between two others do you need to renumber#2017-07-1322:05mssyep, you’re right. let’s say inserting, then#2017-07-1322:06favilathat's exactly what you would do. For small lists this is the simplest thing#2017-07-1322:06favilause :db.fn/cas on the sequence items you update to ensure there is no trouble#2017-07-1322:08favilaif it's important for you to renumber as few as possible, you can renumber from the insertion point to the top of the list (decrementing as you go)#2017-07-1322:09favilai.e. with list [0 a 1 c 2 d], inserting b between a and c, it touches less to do [-1 a 0 b 1 c 2 d]#2017-07-1322:13mssmakes sense, really appreciate the help#2017-07-1322:25bedersyou could use fractions (double/floats) as step numbers. 
For a reasonable amount of inserts, the order would stay the same if you use (atIndex*2 + 1) / 2 for the new step number#2017-07-1414:40olii was going to reply with choosing a suitably large integer step between each item.#2017-07-1414:40oli10 PRINT "HELLO WORLD"#2017-07-1414:40oli20 GOTO 10#2017-07-1414:41olifor the old school#2017-07-1322:26bedersyou'll renumber when showing it to the end-user on the fly#2017-07-1322:37hmaurer@favila do you think representing a linked list in Datomic makes sense? It seems neat conceptually, but I’m not sure how it would play with querying etc#2017-07-1402:32favilaMaybe for larger lists. I've never used it. I never need order over more than ~10 items at a time#2017-07-1411:47isaacDatomic can’t install attributes & functions in one transaction?#2017-07-1413:31cjmurphyIf you do a pull on a :db/id and get a number back, then enter that number in the "Entities" tab of the Datomic Console, you would expect to get something back every time. Trouble is I get no response. It seems there's something I don't understand.#2017-07-1414:11jaret@isaac No, you cannot create and then use the attribute in the same transaction.#2017-07-1414:12jaret@cjmurphy can you call d/entity on the id?#2017-07-1414:35isaacI have functions & schemas, I want d/transact in the same transaction, and that throws an error.
this code throws:
@(d/transact conn (concat schemas functions))
but it works when I separate them:
@(d/transact conn schemas)
@(d/transact conn functions)
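[editor's sketch] The two-transaction workaround above can be packaged as a helper (the helper name is made up); derefing the future returned by d/transact blocks until the schema transaction has been applied, so the attributes exist before the functions that reference them are transacted:

```clojure
(require '[datomic.api :as d])

(defn install-schema-then-fns!
  "Install schema, then database functions, in that order.
  Attributes must exist before any transaction data uses them."
  [conn schemas functions]
  @(d/transact conn schemas)      ; blocks until schema is committed
  @(d/transact conn functions))   ; fns may now reference the attributes
```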
#2017-07-1414:35isaac@jaret#2017-07-1414:37jaret@isaac Yep, that's exactly what I expect. You cannot create and use the attributes (schema) in the same transaction.#2017-07-1414:38jaretthe attributes have to exist to be used.#2017-07-1415:15pbostromit would be nice if there was an API to start a peer server in-memory/in-process to simplify testing setup/teardown#2017-07-1415:18hmaurer@pbostrom like this? https://github.com/vvvvalvalval/datomock#2017-07-1415:19hmaurerthere is also the in-memory datomic connection, e.g. (d/connect "datomic:")#2017-07-1415:19pbostromno, that uses the peer API, I want to write some tests using the client API without having to kick off a separate peer server process#2017-07-1415:19hmaureroh#2017-07-1415:24cjmurphy@jaret: When I call d/entity on it I get back: #:db{:id 17592186045473}#2017-07-1416:06cjmurphyCalling (d/pull db '[*] id), where id is 17592186045473, which is of course the original number, gives me a hash-map that has all the attributes I would expect in it.#2017-07-1416:07isaac@jaret thanks, I resolved it#2017-07-1416:07cjmurphyOh shivers - I did it again and it worked. Just needed to refresh the browser.#2017-07-1417:04cjmurphyI am trying to work with idents, doing this: (d/pull db '[*] [:account/name "bank-fee"]), and get the error message "Attribute values not unique: :account/name". Not sure what that means. There is only one :account/name called "bank-fee", and in the schema I marked :account/name to be :indexed. Is there something else I can do to make my little pull statement work, or am I going about it wrong?#2017-07-1417:07schmee@cjmurphy your entity needs to be unique for lookup refs to work: http://docs.datomic.com/identity.html#lookup-refs#2017-07-1417:08schmeei.e. :db/unique instead of :db/index on the schema attribute#2017-07-1417:14cjmurphyThanks @schmee (and @jaret from before). 
I'm using the YuppieChef thing, so I think that translates to :unique-identity - will give it a try.#2017-07-1417:16favila:unique-value and :unique-identity are different @cjmurphy but both will allow lookup refs#2017-07-1417:17cjmurphyI think I'll go for :unique-value then...#2017-07-1417:19cjmurphyHmm - they both somehow seem to stop upserting, which I'm relying on when importing data.#2017-07-1417:19cjmurphy"Two datoms in the same transaction conflict".#2017-07-1417:37schmeehttp://docs.datomic.com/identity.html#unique-identities#2017-07-1417:40schmeehint: you want :db.unique/identity for upserts#2017-07-1418:02cjmurphyYes :unique-identity (using Yuppiechef) gets me past that error on that attribute. And furthermore allows me to d/pull using an ident. Thanks @schmee simple_smile#2017-07-1420:28hmaurerIs it necessary to backup an S3 Datomic backup file? e.g. is it possible for datomic’s backup process to corrupt a backup file due to malfunction?#2017-07-1422:28schmeecan you not call Java functions in a query from a Datomic client?#2017-07-1422:46schmeeor Clojure fns…?#2017-07-1422:46schmeekleinheit.datomic.impl=> (a/<!! (c/q conn {:query '[:find ?e :where [?e :advertiser/advertiser ?v] [(subs ?v 0 1) "a"]] :args [(c/db conn)]}))
{:dbs [{:database-id "datomic:"
:history false
:next-t 2002
:t 1001}]
:cognitect.anomalies/category :cognitect.anomalies/incorrect
:cognitect.anomalies/message "The following forms do not name predicates or fns: (subs)"}
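[editor's sketch] As the thread goes on to establish, a function-expression clause should bind its result to a variable, with a separate predicate clause testing it; this is the shape schmee later reports working with the peer API (`db` here is an assumed database value):

```clojure
(require '[datomic.api :as d])

(d/q '[:find ?e
       :where
       [?e :advertiser/advertiser ?v]
       [(subs ?v 0 1) ?first]   ; bind the fn result...
       [(= ?first "a")]]        ; ...then compare with a predicate
     db)
```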
#2017-07-1422:49hcarvalhoavesyou can, as long as the symbol is available inside the namespace where you evaluate the query#2017-07-1422:49hcarvalhoavesah sorry... Datomic client#2017-07-1422:49hcarvalhoavesI see now. It might not be supported because of the above (the peer is not the same runtime evaluating the query)#2017-07-1422:50hcarvalhoavesin this case I believe you need to use the peer API#2017-07-1422:51schmeeI mean, it makes sense, but it’s crazy that this is not mentioned in the docs#2017-07-1422:56favilatry clojure.core/subs#2017-07-1422:56schmeesame thing#2017-07-1422:56favilawait that clause is wrong#2017-07-1422:57favila[(subs ?v 0 1) "a"] works?#2017-07-1422:57faviladon't you have to bind it then compare?#2017-07-1422:58favila[(subs ?v 0 1) ?a][(= ?a "a")]?#2017-07-1422:58favilayou may still not be able to use subs via the client api, but even on the peer api this seems wrong#2017-07-1423:05schmee@U09R86PA4 you are correct!#2017-07-1423:05schmeethe modified query works just fine with the Peer API#2017-07-1423:05schmeeagain, is this not documented or have I just missed it?#2017-07-1503:24souenzzo(d/transact @conn [{:db/ident :foo/bar
;:db/cardinality :db.cardinality/one
:db/valueType :db/ident}])
=> ":db.error/invalid-install-attribute Error: {:db/error :db.error/schema-attribute-missing, :attribute :db/cardinality, :entity #:db{:id 65, :ident :foo/bar, :valueType 10}}",
(d/transact @conn [{:db/ident :foo/bar
:db/cardinality :db.cardinality/one
:db/valueType :db/ident}])
=> ":db.error/not-a-value-type Not a value type: :db/ident",
- How does datomic know that an attribute is missing?
- How does datomic know that :db/ident is not a valueType?
Is there some tool like schema/`spec`?#2017-07-1504:22favilaLook at entity 0#2017-07-1504:22favila(The :db.part/db entity)#2017-07-1512:45msshello all, new to datomic and considering trying to use it as a datastore on a project.
for my specific use case, there’s a set of attributes I’d like to store that might exhaust the 10b datom soft limit relatively quickly. jamming that into another datastore obviously is un-ergonomic
I was looking for some clarification around how db/noHistory works. it seems from my tests that facts are still accumulated (I’m testing off the memory version of datomic), as opposed to something resembling an update-in-place operation when a new value is transacted for an attribute. is that actually the case?
beyond that, is excision an option if I don’t have a particularly critical retention window? anecdotally, it seems to put tremendous pressure on the db and doesn’t seem like a long-term solution either
appreciate any input or suggestions#2017-07-1513:51hmaurer@mss I assume you read this bit from the doc:
> The purpose of :db/noHistory is to conserve storage, not to make semantic guarantees about removing information. The effect of :db/noHistory happens in the background, and some amount of history may be visible even for attributes with :db/noHistory set to true.
?#2017-07-1513:52hmaurerBeyond that I am too much of a newb to help you at the moment#2017-07-1513:52mssyep, that’s what I’m looking at#2017-07-1513:53mssseems to suggest that facts don’t accrete, and only some amount of mostly recent facts are stored.
my experience using the mem transactor/storage was that all the facts were retained.
wondering whether that’s actually the case in a production setup, and if so if there’s another solution I might be missing#2017-07-1514:02val_waeselynck@mss I believe db/noHistory takes effect when recent datoms are compacted into an index segment: it does not change the fact that db values are immutable#2017-07-1514:03val_waeselynck@mss if you have too much data for Datomic, I suggest you try and figure out if some of the data could go to a complementary data store (e.g S3 or a KV store)#2017-07-1514:03mssyep that def makes sense#2017-07-1514:04mssand I’m leaning away from datomic for that specific set of attrs, just wanted to make sure I wasn’t missing something obvious. still wrapping my head around the tech#2017-07-1514:04val_waeselynckWe typically have 5% of our data and 95% of our schema in Datomic#2017-07-1516:07hmaurer@val_waeselynck so datomic is essentially a database of pointers to external storage for you?#2017-07-1516:07hmaurerI assume you then have to ensure that this external storage is also immutable?#2017-07-1516:07hmaurerI mean, that you interact with it in an immutable fashion#2017-07-1516:09hmaurerIn that sense Datomic is an index for your data, and you only store in it attributes that you might want to query/filter upon#2017-07-1516:11val_waeselynck@hmaurer no, it is mainly a regular database to me - the use of external storage is marginal, it just happens to cover a lot of bytes. And yes, the external storage is treated immutably#2017-07-1516:12hmaurer@val_waeselynck what do you use as your external storage? and do you enforce its immutability through permissions? (e.g. if you use S3 there might be a way to make it insert-only with IAM permissions)#2017-07-1516:17hmaurer(out of curiosity)#2017-07-1516:18val_waeselynckS3 with public but secure object names, and no I don't believe so#2017-07-1516:51schmeeI want to find the campaign with the highest number of creatives, this is what I got so far:
(let [db (d/db conn)]
  (->> (d/q '[:find ?campaign (count ?creative)
              :where
              [?campaign :campaign/id]
              [?creative :creative/campaign ?campaign]]
            db)
       (d/q '[:find ?campaign (max ?count)
              :in $ [[?campaign ?count]]]
            db)))
#2017-07-1516:51schmeebut this gives me back every campaign and its count#2017-07-1516:51schmeewhat am I missing?#2017-07-1517:03val_waeselynckThere is no 'max-by' aggregation in Datomic unfortunately#2017-07-1603:07souenzzoI want to find every entity that keys the atribute :my/keys.. How to datalog it?
(d/q [:find ?e :in $ [?keys ...] :where [?e :my/keys ????]] db [:foo/bar :bar/quiux])#2017-07-1603:32favilaWhat do you mean by "entity that keys the attribute"?#2017-07-1603:33favilaWhat is the valueType of :my/keys?#2017-07-1608:41robert-stuttaford@U2J4FRT2T ???? -> ?keys#2017-07-1613:09favilaNot if its type is ref and the input keywords are idents
Each of its values should have a :db/ident.
If I pass :a :b :c, I want just the entities that have these keys.#2017-07-1712:34favilaYou have to convert ?keys from idents to entity ids, using datomic.api/entid or with another clause#2017-07-1712:35favilaThe v slot of query match clauses is never interpreted because its interpretation depends on the attribute#2017-07-1712:36favilaOnly the e and a slots understand lookup refs and idents because their types are known structurally#2017-07-1618:42matanHow does datomic handle a query that needs to use more data than fits in memory?#2017-07-1618:43hmaurer@matan it can’t#2017-07-1618:43hmaurerwell, depending on what you are asking:#2017-07-1618:43hmaurerhttp://docs.datomic.com/query.html#memory-usage#2017-07-1618:46hmaurerI am a newb so don’t take my word for it, but Datomic executes Datalog queries by steps (I think @favila is the one who explained this to me, so he might be able to correct me / explain it to you)#2017-07-1618:46hmaurerThe data queried for each of these steps should fit in memory#2017-07-1618:47hmaurerbut this does not mean you cannot query a dataset larger than what fits in memory of course#2017-07-1618:47hmaurerjust that the data returned by the query (and the data used in each of the intermediate steps) should fit in memory#2017-07-1618:52hmaurerI suspect this is also the reason Datomic threw the following exception at me:
> Exception Insufficient bindings, will cause db scan#2017-07-1618:54hmaurerIt’s basically a degenerate case of the “query step data does not fit in memory”. If clauses are not specific enough then Datomic cannot use the indexes to narrow down the data to get from storage, and so it would have to scan the whole database, which in most applications would not fit in RAM#2017-07-1618:54hmaurerThere might be other reasons, but since you asked the question and I just had this error 5min ago I thought it might be partially related#2017-07-1619:15hmaurerUnrelated question: are datomic backups storage-agnostic? e.g. if I use “dev” mode for a while and then decide to move to Dynamo or SQL later, will I be able to smoothly transition by populating the new storage from a backup?#2017-07-1619:55val_waeselynckSure you can#2017-07-1620:08hmaurer@val_waeselynck thanks!#2017-07-1619:15hmaurer@val_waeselynck maybe ^#2017-07-1619:34cjmurphyWith dates is it common practice to store them as java dates and coerce them to clj-time/joda dates each time you query? Or is there some better way - such as for instance just keeping them as clj-time/joda dates in datomic, so everywhere they are always clj-time/joda dates? The 'clj-time/joda everywhere' makes sense to me, but all the examples I've seen have java dates being stored.#2017-07-1700:12danielcomptonThere is https://receptive.io/app/#/case/17713 to request support for java.time Instants
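[Editor's note: cjmurphy's date question above usually comes down to coercing at the edges. Datomic's :db.type/instant is stored as java.util.Date, so one common pattern is to convert to and from joda time with clj-time.coerce when writing and reading. A sketch only; the :event/at attribute name is hypothetical:]

```clojure
(require '[clj-time.core :as t]
         '[clj-time.coerce :as c])

;; writing: joda DateTime -> java.util.Date before transacting
;; (:event/at is assumed to be a :db.type/instant attribute)
(def tx-data [{:db/id "new-event"
               :event/at (c/to-date (t/now))}])

;; reading: java.util.Date -> joda DateTime after pulling
(defn at->joda [pulled-entity]
  (update pulled-entity :event/at c/from-date))
```

[This keeps Datomic's native instant type intact for index ordering and range queries, while application code only ever sees joda dates.]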
http://www.datomic.com/nubanks-story.html#2017-07-1619:45matanBut definitely the weak spot of the datomic architecture, even if most queries in a given system don't hit this wall.#2017-07-1619:46matanBy the way, while the docs still say that memory is cheap enough to fit all the data in memory, this is not a reality with enterprise data center memory prices (even if it is for your desktop machine).#2017-07-1619:51matanThe problem here is scalability and reliability, as you can simply one day find out that your queries no longer fit in memory just because data accumulation had persisted over time; which is quite terrible a situation unless you can plug in more memory by demand across each machine in your cluster in emergency mode, which is well, a terrible scenario..#2017-07-1620:17val_waeselynckI've been through the process of moving all our aggregations from Datomic to Elasticsearch and it went quite well. I see a lot of people who use a relational store and end up in a much worse situation when they hit that wall - because mutable databases simply arent well suited to feeding derived data stores, as they cant answer 'what changed' queries out of the box#2017-07-1620:18hmaurer@val_waeselynck are you using the log api to keep ES in sync? Or do you follow another approach? out of curiosity#2017-07-1620:18val_waeselynckYes#2017-07-1620:19hmaurerYeah it’s much easier to do on Datomic… There is bottledwater for postgres but it’s much more complex: https://github.com/confluentinc/bottledwater-pg#2017-07-1620:20matanIs the log log api simply "the way" to sync all data changes to an external target, such as ES or even HDFS?#2017-07-1620:21hmaurer@matan you can do it in whatever way you want, but even when using a SQL datastore usually you want to sync data changes from a flux of events that describe all changes in your main data store#2017-07-1620:21hmaurerand the log API provides you with that#2017-07-1620:22hmaurerOn your earlier message, that’s not quite true. 
I don’t know Datomic’s internals in details but I am pretty sure that standard queries on the “present” will not degrade in performance / memory consumed for a large database#2017-07-1620:26matanI only commented on not scaling by query data size, not the size of the database... some queries will grow with the size of the database, and then ......#2017-07-1620:28hmaurer@matan I was just commenting on the part ” as you can simply one day find out that your queries no longer fit in memory just because data accumulation had persisted over time”#2017-07-1709:40matan@hmaurer I know 🙂 and it still holds. Some queries grow with the database, so my statement holds 😉#2017-07-1620:05hmaurer@matan possibly, but Datomic’s target market isn’t big data. Also if you have a query which would need to go over extremely large amounts of data you are likely better of denormalising in another datastore#2017-07-1620:05hmaurerwhich seems fairly straightforward to do with Datomic’s log API#2017-07-1620:06hmaurerYeah, basically what Nubanks is doing with Spark.#2017-07-1620:12matan@hmaurer 👍#2017-07-1620:13matanKind of odd though, not aiming at being scalable in the size of the data, in this way.#2017-07-1620:15hmaurerThere are always tradeoffs. Datomic can handle pretty large amounts of data, but their goal clearly wasn’t to build a database to process huge volumes of data/writes. They favoured other properties#2017-07-1620:18matanClear#2017-07-1620:27matanIs there a tool or monitor for query memory utilization then?#2017-07-1700:16danielcompton@matan http://docs.datomic.com/monitoring.html#transactor-metrics peer metrics (a little further down the page) have some metrics for object cache hit rate and a few others. That might be an OK proxy for memory use of a single query#2017-07-1707:25val_waeselynck@matan If I wanted to get a precise measurement, I'd just use any tool which can monitor the memory usage of a JVM method call#2017-07-1709:36matanthanks @danielcompton! 
@val_waeselynck these tools are kind of notorious for being hard to configure, expensive or unsafe in production, but thanks anyway, I guess this is a general JVM thing, datomic just uses plain objects rather than manage its memory like e.g. spark does. I guess I'd spin up an extra peer node to do this kind of instrumentation on.#2017-07-1710:39daveliepmannCount me as a +1 on the "Retract with no value supplied" feature request, nearly old enough for kindergarten: https://groups.google.com/d/msg/datomic/MgsbeFzBilQ/NBXjQEDRzk4J#2017-07-1717:18devthtrying to figure out what do you do when you need a lookup ref that's based on 2 attributes instead of 1.. not supported of course but need to work around.#2017-07-1717:19devthcan't make either :db.unique/value because the uniqueness is composite#2017-07-1717:22devthadd an extra composite attribute to set :db.unique/value that's the combination of the other 2 i guess.#2017-07-1717:31matthaveneris it possible to use :db/excise on an in-memory db?#2017-07-1717:34marshall@devth https://receptive.io/app/#/case/17932#2017-07-1717:35marshall@matthavener http://docs.datomic.com/excision.html#limitations nope. There is no persistent index, so no effect#2017-07-1717:36matthavenerah, totally missed that, thank you#2017-07-1717:36marshallnp#2017-07-1719:33matanA twofold question about caching and pushing data: 1. Am in trouble if my database no longer fits in memory? will the peers constantly thrash or is it just a normal situation where one would rely on the most recent data being the most relevant data and the majority of the data seldom needing to sit in peers' memory?#2017-07-1719:34matanAnd 2) I'm not sure I follow on the optional role of memcached, I mean datomic already caches data on the peers so why would it help much? 
what am I missing?#2017-07-1719:37marshall@matan 1: generally you don’t have to worry about thrashing of the peer cache, most use cases don’t rely on having the full dataset in memory; you can also treat multiple peers as a heterogeneous set. if you segment your requests to various peers, each peer’s individual cache will then reflect the workloads it handles#2017-07-1719:38marshallso, for instance, you can have one peer for ‘back office’ analytics, a separate peer (or set of peers) for your web app, and maybe a third set for batch processing#2017-07-1719:39marshallyou can get even fancier if you want to, for example, route customer traffic through a smart LB that can segment your incoming traffic to multiple peers (either by load or, even better, by something like customer ID)#2017-07-1719:39marshallquestion 2: reads from memcached are an order of magnitude faster than reads from storage (in general)#2017-07-1719:40marshallso, if a read is satisfied by a segment in memory you’re looking at ns to fetch it. memcached would be order of 1ms to fetch. 
a storage read is order of 10ms#2017-07-1719:41marshalland if you end up having to do a storage read, you’re effectively now at the “same” latency as a traditional RDB, which always has to do a roundtrip#2017-07-1719:41marshall@matan ^#2017-07-1719:41marshall🙂#2017-07-1720:53apseyDoes anyone know about issues regarding storage when running Datomic backed by Postgres?#2017-07-1720:54apseyWe suddenly used our last 87gb of storage in only 4 days#2017-07-1720:55marshall@apsey are you running gc-storage regularly?#2017-07-1720:55apseyPeers didnt change significantly, but I was wondering if someone doing lots of queries could explain that?#2017-07-1720:55marshallqueries will not affect storage use at all#2017-07-1720:56marshallonly transactions#2017-07-1720:56apseyNo temporary tables are created?#2017-07-1720:56apseyAFAIK, we run gc periodically (every week or every month, I will have to double-check this)#2017-07-1720:56marshallno. Datomic only creates and uses a single table in postgres - the one you create with the setup scripts#2017-07-1720:56marshallyou may have to use postgres-level vacuum to reclaim space that is released by gc-storage#2017-07-1720:57marshalllikely depends on the version of PG you’re running and what your autovacuum settings are#2017-07-1720:57apseyso, I checked if there was anything different regarding the amount of datoms, but the slope is the same#2017-07-1720:57apseythe weirdes thing is this storage claim all out of sudden#2017-07-1720:58marshallhow many datoms?#2017-07-1720:58marshalloverall#2017-07-1720:58apseyaround 1 bi#2017-07-1720:59marshalland is it possible another system is using the same postgres instance? you might want to check pg-level metrics (i.e. 
the stuff in the postgres internal catalog tables)#2017-07-1720:59apseywrite iops look stable in rds#2017-07-1721:26wistbtest#2017-07-1722:43danielcomptonHow do people here deal with schema migrations in dev, where you may want to choose a different schema approach after already transacting one? I'm using conformity which is nice, but doesn't have any way to "roll-back" the schema (because Datomic doesn't have this feature either).#2017-07-1805:31val_waeselynckWe do it all in memory, including using a fork of a dev-hosted production backup, so we pretty much never need to retract#2017-07-1808:47danielcomptonCan you explain this setup a bit more? If you make a bad schema migration, do you just restore from a backup?#2017-07-1822:42val_waeselynck@danielcompton maybe this post can help a bit: https://vvvvalvalval.github.io/posts/2016-07-24-datomic-web-app-a-practical-guide.html#testing_and_development_workflow#2017-07-1822:45val_waeselynckessentially, we're pretty-much always developing against either an mem conn (we call it 'lab'), or an in-memory fork (using Datomock) of a dev conn which is a recent enough backup of our production database ('dev-fork'); sometimes, in order to get the freshest data, we just use a fork of our production database directly ('prod-fork')#2017-07-1822:46val_waeselynckSo the connections we use for development are not durable; the only time we commit a migration durably is to production.#2017-07-1822:46danielcomptonhmm, I think the part I was missing was the in memory fork of a dev conn#2017-07-1822:46val_waeselynck(The exception is to our staging environment, which is obtained by restoring backups of our production environment periodically)#2017-07-1822:47val_waeselynck@danielcompton preaching for my own church here, but I believe it is a powerful tool indeed 🙂 https://github.com/vvvvalvalval/datomock#2017-07-1822:50val_waeselynckFinal tip: use a local memcache to have all these environments share a common cache on your machine, you may 
see pretty good speedups when going from one conn to the other#2017-07-1722:44devthwe drop and recreate the db in staging fairly often#2017-07-1722:44danielcomptonah, I thought that might have been the case#2017-07-1722:45danielcomptonwhen you recreate, are you restoring from a backup, or from fixtures, or?#2017-07-1722:45devthgenerators or importers#2017-07-1722:46devthwe use clojure spec to infer datomic schema via generators, so it's easy to use those generators to create test entities to play with#2017-07-1722:46devthand importers are workers that are loading data in hourly, so just depends on what data we need for a feature that's being worked on#2017-07-1722:47danielcomptonimporters are loading data in from the prod db?#2017-07-1722:47devthno, external data sources (apis) in this case#2017-07-1722:47danielcomptonah#2017-07-1723:08hmaurer@marshall Are there any plans to allow for community-developed storage and backup implementations? (to support arbitrary storages and arbitrary backup targets)#2017-07-1803:25danielcompton@devth do you have any open source code showing your spec->schema->generators stuff?#2017-07-1813:55devthwe're planning to open source it but not yet. i'll see if i can get something out in the next week or two. will post here when we have something out.#2017-07-1803:25danielcomptonthat sounds really useful#2017-07-1806:57wistbhi, (we are analyzing datomic for our situation) : say, we divide our data among n number of datomics. Our logic needs to break down a unit of change among these datomics. Our understanding is, you have the peer library that application can use to "transact" the data. Is it possible for a code of the following nature ? 
or is there a different way to do it ?#2017-07-1806:57wistbsave () {#2017-07-1806:58wistbtry { tx1.save(stuff1); tx2.save(stuff2); } catch (Exception e) { rollback();} }#2017-07-1811:52val_waeselynckI don't believe so, Datomic provides transactions only in the context of one transactor.#2017-07-1811:54val_waeselynckApproaches to mitigating this issue can include 1 - using db.with() on both connections prior to transacting to ensure some invariants speculatively 2 - issuing compensating transactions in case of an error. Both are full of caveats IMO.#2017-07-1807:02schmee@wistb you can do (almost) arbitrary Clojure code in a transaction: http://docs.datomic.com/database-functions.html#transaction-functions#2017-07-1811:55val_waeselynckI don't think this solves @wistb's issue, as transaction functions can't be used to provide coordination across several Datomic deployments#2017-07-1807:03schmeehere’s a video explaining it: https://www.youtube.com/watch?v=8fY687k7DMA#2017-07-1819:33pbostromare transaction fns not supported when transacting a schema via the client API?#2017-07-1819:38pbostromI'm running into the same problem described here: https://groups.google.com/forum/#!topic/datomic/YKsnB_z1YHs#2017-07-1820:05schmeeI also had trouble a couple of days ago trying to use clojure.core fns with the client api#2017-07-1820:59bedersyou need to make the code available to the transactor as far as I understand, since with the client library the query are not running locally#2017-07-1914:07andreiIs there a good practice around queries that return a lot of data? Querries that one would use pagination in a sql db#2017-07-1914:14stuartsierra@wistb Datomic has no built-in support for transactions that span multiple databases.#2017-07-1914:40ckaratza@andrei some nice suggestions https://stackoverflow.com/questions/27162566/equivalent-of-sql-limit-clause-in-datomic#2017-07-1919:14souenzzoIs possible to do recursive queries?
category(root) has :category/subcategory or :category/items
:category/subcategory can be another :category/subcategory or :category/items#2017-07-1919:14robert-stuttaford@souenzzo look at the rules examples in the mbrainz sample repo. they do stuff like this i think#2017-07-1919:18souenzzoI tryied some mutual recursion/simple recursion on rules, but no erros and no result... 😞#2017-07-2012:53ckaratzahttps://gist.github.com/taylorSando/36e7f6593e503a38bb10#2017-07-1923:26potetmI'm curious if anyone's ever tried using datomic for coordination (e.g. as a ZooKeeper replacement)#2017-07-1923:29potetmOn the surface it seems like basically all the semantics you need are there (minus callbacks, which you could replicate with polling or the tx queue).#2017-07-1923:32potetmThe only thing that's obviously missing is ephemeral nodes. But I'm pretty sure you could ape that with a heartbeat as well.#2017-07-2002:50wistbthank you @schmee @val_waeselynck @stuartsierra . We are thinking of microservices. personally, I am extremely concerned about the transactions, latency, consistency issues surrounding a kafka/micro-se/materialized-views approach. One thought is , use kafka as log but not as event source. instead use datomic as the database and as coordination scheme for intra-service coordination. In this model, the external commnds are logged to kafka and will be handled by some entry service. after that point, the mutations done by all services are over datomic to which all the services have visibility to. all the mutations and internal commands between services can be logged to kafka but only as a 'FYI'. In this mode, it is not a 'micro-service' per say, but, 'modular services'. if all our data needs can be supported by one datomic instance, there is no problem, but, looks like it is not the case. so .. do you think, the pain of 'distributed tx over multiple datomic' is more manageable than the pain of 'distributed tx in a event sourced design' ? any thoughts are appreciated.#2017-07-2005:54val_waeselynckI tend to agree with you. 
An event sourcing system as you describe can only accept events (I.e definitive information, as opposed to commands, which call for decision) as an input, and is essentially asynchronous. As soon as your system must process commands or write synchronously, you need some sort of transactional front-end, for which Datomic is an excellent choice. If data size is a concern, you should consider making a hybrid system in which Datomic manages only a subset of the data - it may be more viable than sharding.#2017-07-2008:43wistbthank you. The moment you have a hybrid system , you will be required to handle the distributed transaction, right ? and if you are dealing with DT, then, I could as well have another datomic instance than any other database. And the question of DT is becoming real no matter what database I use over a "kafka/Event-sourcing/micro-ser" architecture.#2017-07-2020:59hmaurerHi! Quick question about Datomic data modelling: it seems that relying on attributes present on entities to determine what “type” of entities they are is a bit messy. Would it make sense to tag entities with something like :node/label, defining their type?#2017-07-2020:59hmaurere.g. {:node/squuid ... :node/label :node.label/person ...}#2017-07-2021:00favilathat is a common technique#2017-07-2021:03hmaurerCool, thanks @favila 🙂#2017-07-2117:24matanWhat are the procedure and "costs" of migrating from one storage backend to another?
e.g. between postgres and Cassandra or vice versa?#2017-07-2117:30hmaurer@matan you can restore from a backup on a new storage backend afaik#2017-07-2117:46matanYep, I guess so. I assume all functionality will be maintained, except that moving from a consistent (postgress) storage to an eventually consistent one (cassandra default configuration) will introduce application bugs if the application assumed strict consistency before.#2017-07-2117:51favila@matan What is your reasoning? datomic hides the inconsistency#2017-07-2117:51favila@matan application should be completely unchanged by the choice of storage for datomic#2017-07-2117:53favila@matan Datomic only mutates a tiny number of records. everything else is immutable, so no opportunity for inconsistency#2017-07-2117:55matan@favila last I asked on the mailing list, it was my understanding of the response I got, that query results will be different based on which cassandra node answered to datomic ― and as I recall the default setup scenario with cassandra is to "commit" a change before all cassandra nodes have updated (maybe I am wrong there) ― I think that with high throughput/activity, I might get different query results depending on which cassandra node answered to datomic#2017-07-2117:56favilaI see that mailing list thread#2017-07-2117:56favilathe results will not be different#2017-07-2117:56matanTo be honest I am not sure I got the correct bottom line on that thread#2017-07-2117:56favilathey may merely be not there yet#2017-07-2117:57favilano quorum is needed#2017-07-2117:57favilaonly one server with a record#2017-07-2117:57favilawhat may happen is that NO server available to you has the record (in case of network partition)#2017-07-2117:58favila(you = peer)#2017-07-2117:58favilathat would be some kind of failure or retry, but the application would not keep going silently with different results#2017-07-2118:01matanSo from an application point of view, say I query for all movies (borrowing from the tutorial 
minimalist scenario), and a movie was added, and it is not there yet when datomic runs the query, on the cassandra node that was used by datomic for this query.#2017-07-2118:01matanSay the movie was added by a user on a front-end part of the overall application#2017-07-2118:02matanIf the application can’t retrieve the movie, this will reflect a wrong "world" from the user’s point of view#2017-07-2118:02favilait would not be an alternate world#2017-07-2118:02matanMaybe I should revisit after finishing with the tutorial, which I’m halfway through#2017-07-2118:03matan(agreed, not alternate, only stale in a confusing way for the user)#2017-07-2118:03favilathat’s not a function of storage#2017-07-2118:03favilathat’s just speed of light#2017-07-2118:04favilaif one peer writes while another peer queries, the querying peer may not know about the latest transaction#2017-07-2118:04favilabut its view of the world will not split#2017-07-2118:05favilait will have an exact perfect snapshot of the world at the moment the query started#2017-07-2118:05favilajust a few ms behind or whatever#2017-07-2118:06matanOkay, right, I easily flow with the timeline metaphor#2017-07-2118:06matanAs an aside, I should de-complect what the user tells the app, and what the app agrees to enter into the world#2017-07-2118:07favilathat’s not really it#2017-07-2118:08favilathe key is there is only one transactor. the transactor writes and informs peers of the latest t#2017-07-2118:08favilathe latest t is like a pointer to a snapshot#2017-07-2118:09matanoh, right, so a query coming after the data transacting, would not need to go all the way down to the storage layer, it will get the latest as long as the transactor already finished updating it about the transaction?
is that it?#2017-07-2118:09favilayes but that's an optimization (index-merging on the peer)#2017-07-2118:10favilathe point is once the peer knows about a T, that T is guaranteed to exist in storage#2017-07-2118:10favilaif the peer asks for it and it isn't there, it knows that there is a read failure#2017-07-2118:14matanThough, if the peer forgets about T, because it has been taken out of its cache in order to satisfy some larger query?#2017-07-2118:15matanT being what here?#2017-07-2118:21favilathe transaction id#2017-07-2118:22favilathe peer does not forget them#2017-07-2118:22favilaare you familiar with Clojure?#2017-07-2118:24favilaImagine the entire database is an atom containing {:current-t T :transaction-log [...] :indexes {:eavt [...] :avet [...] :aevt [...] ...}#2017-07-2118:25favilainside the atom is everything, including all history#2017-07-2118:25favilathe transactor swap!s into this atom, and shares the returned value with peers#2017-07-2118:26favilaso the result of every transaction is an immutable database value with access to all history too#2017-07-2118:26favilainconsistency is impossible#2017-07-2118:27favilathe only thing that can happen is the peer doesn't know about the latest db value#2017-07-2118:27favilathat just means that its view of the world is behind, but it is still consistent#2017-07-2118:27favilaimplementation-wise of course this is not how it's done, but operationally that is the experience you get#2017-07-2120:55matanYes, I write clojure code, e.g. https://github.com/Boteval/compare-classifiers#2017-07-2118:10favilafurthermore, that T never changes, so there is no chance that an eventually-consistent storage has different values for the same T#2017-07-2118:13matanI should revisit after being done with the tutorial and having a complete grasp of datums, attributes etc.. the data model. 
Will be back at that point...#2017-07-2118:13matan@favila many thanks so far#2017-07-2118:30favila@matan this may interest you if you want to know more about internals: http://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2017-07-2121:04matanWell I think I get it. So reading the most current data boils down to chasing the latest time handle (if searching for an entity) or having the right transaction id at hand to begin with, or waiting for them to arrive courtesy of the transactor.
Framed as such, the notion of "consistency" is reduced to an easy to satisfy definition, one that is closer to the ACID definition than the CAS one.#2017-07-2121:07matanWhich does not make an application using datomic behave consistently without some effort in the form of judicious use of datomic api. I can live with that, maybe, in exchange for not using SQL nor a lame NoSQL database.#2017-07-2121:08hmaurerMongoDB scales though troll#2017-07-2121:09matanlol#2017-07-2121:14matthavenermatan: if you’re worried about a client reading their own writes, you can use d/sync to make sure you read the basis t of the last write from that user#2017-07-2121:15hmaurer@matthavener with postgres as a backend this won’t be an issue, right?#2017-07-2121:16hmaureronly with an eventually consistent store such as dynamo?#2017-07-2121:16matan@matthavener yep, so I came to gather, thanks.#2017-07-2121:16matthavenerafaik, you can lag reads on any type of storage#2017-07-2121:17matthavenerif you write with peer B and read from peer A, there’s no guarantee that the transactor has updated A with the basis T that was just transacted from B#2017-07-2121:17matthavener(hence, d/sync, which would allow you to force A to sync to some T)#2017-07-2121:18hmaurer@matthavener but from within the same peer, there is guarantee, right?#2017-07-2121:18matthavenerI don’t know if that’s a guarantee datomic makes, but it would seem reasonable to me 🙂#2017-07-2121:19matthavenerif its the same peer, you can read the value of d/transact‘s future to get the db-after which has that guarantee#2017-07-2121:20hmaurerBy the way, quick unrelated question:#2017-07-2121:20hmaurerIf I were to write a service whose purpose is to follow the transaction log through the log API and update some internal state#2017-07-2121:21hmaurerhow can I make sure that it doesn’t miss a transaction?#2017-07-2121:21hmaurerI don’t think I can rely on transaciton IDs being sequential or anything like that#2017-07-2121:21hmaureris it possible 
to get, at every point, the transaction ID that was immediately preceding it#2017-07-2121:22hmaurerso my service can keep track of the “latest transaction ID it processed”, and before processing a new transaction it can check that the “latest tx ID” it has recorded matches the “previous transaction ID” of the transaction he’s about to process#2017-07-2121:22hmaurerand if not, it would request the log again from the “latest tx id” it has#2017-07-2121:24hmaurerif that makes any sense#2017-07-2121:25hmaurerTL;DR; how do I reliably write a service that follows the transaction log, ensuring that it doesn’t miss any transaction#2017-07-2121:35favilaread the tx-report-queue, it never skips a transaction#2017-07-2121:38favilayou can get all tx-ids in order by reading (d/datoms db :aevt :db/txInstant)#2017-07-2121:38favilathere is unfortunately no cheap way to get the previous tx, because there's no way to walk indexes backward#2017-07-2121:49hmaurer@favila ok, thanks. I wish there was a way for me to manually check it though, e.g. when getting a new transaction to process be able to get a reference to the transaction that immediately precedes it#2017-07-2121:51hmaurer@favila oh wait; records from the report queue contain “db-before”.
Would it be expensive to call basis-t on that, then t->tx?#2017-07-2121:52favilano, it's just pulling a long out and masking some bits#2017-07-2121:53favilaso if you have a log entry, you can do that#2017-07-2121:53favilaif you just have a t, you can't easily get the previous t#2017-07-2121:53hmaurer@favila masking some bits?#2017-07-2121:54favilawell, adding some bits in#2017-07-2121:54favilaa tx is just a T with the transaction partition bits added#2017-07-2121:54hmaurerwhat’s the distinction between t and tx id by the way?#2017-07-2121:54hmaureroh#2017-07-2121:54favilanotice that t->tx and tx->t don't need a database#2017-07-2121:55favilatx->t masks partition bits to 0; t->tx does the opposite#2017-07-2121:55favilait's a special case of d/entid-at#2017-07-2123:36val_waeselynck@hmaurer you can also use the Log API, for an architecture where the consumer decides when it catches up instead of being notified by the tx-report-queue in real-time#2017-07-2121:29matan@hmaurer have you looked at the docs, e.g. http://docs.datomic.com/log.html ?#2017-07-2121:46hmaurerI have; I am still unsure how to check that my process hasn’t missed a transaction#2017-07-2121:33favila@hmaurer @matan Again, I don't see how choice of storage affects anything#2017-07-2121:35faviladatomic acts ACID even with eventually-consistent storage#2017-07-2121:37matanThanks, that's clear by now. The ACID definition of consistent is easy to accomplish in this architecture and the paradigm implied by the time-oriented API.#2017-07-2121:42matanGoing through the unofficial internals doc suggested above, my thoughts are twofold:
1. Most of these things should have been in the official docs, right after the introductory parts. It gives a sense of what performance to expect in different scenarios, and thus whether or how to use datomic for a scenario. Maybe they are already mentioned there.
2. I think datomic has the upper hand on data modelling compared to what else we have out there, but possibly at a dire cost of being prohibitively slower than other options for some standard scenarios, due to all the translation taking place for weaving transactions into non-transactional storage, the unoptimized (?) nature of datalog vs. SQL, and involving external storage layers which act as databases not just storage, thus incurring additional overhead. I'd be really happy to see some intelligent benchmarks or refutations of the assumptions sprinkled in this quick note.#2017-07-2122:34hmaurerDatomic’s terms and conditions prohibit the publishing of benchmarks#2017-07-2122:30stuarthalloway“prohibitively slower” and “standard” both would need definition#2017-07-2122:31stuarthallowaythere is no doubt that SQL query engines have decades of clever performance optimizations#2017-07-2122:32stuarthallowayand it almost feels like cheating to win some read scenarios via the architectural advantage of immutability + multi-tier caching#2017-07-2122:33stuarthalloway@matan in Datomic, external storages act as block stores, not as databases#2017-07-2122:35stuarthalloway@matan Datomic happily accepts the write overhead needed for ACID transactions, and the application fits and misfits this implies#2017-07-2122:35stuarthalloway@matan I think the important thing missing from your note is the horizontal scaling and caching advantages enabled by peers + immutability#2017-07-2122:38hmaurer@stuarthalloway from your experience, have there ever been cases where the single-writer process becomes a performance problem? And if yes, what would be the recommended way to deal with this?#2017-07-2122:39hmaurerOff the top of my head I thought the application could then be split into multiple databases, which might allow the transactor to operate in parallel#2017-07-2122:39hmaureralthough I am not sure it does#2017-07-2122:40stuarthalloway@hmaurer sure!
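favila explains above that a tx id is just a t with the transaction partition bits added, and that t->tx and tx->t don't need a database. A hedged sketch of that bit arithmetic in plain Clojure; the partition constant below assumes :db.part/tx is partition 3 packed above bit 41, which matches commonly observed Datomic tx ids but is an assumption, not taken from this log:

```clojure
;; Sketch of what d/t->tx and d/tx->t do, per favila's description above.
;; Assumes :db.part/tx is partition 3, stored in the bits above bit 41.
(def tx-partition-base (bit-shift-left 3 42)) ; 13194139533312

(defn t->tx
  "Add the transaction-partition bits to a basis t."
  [t]
  (bit-or t tx-partition-base))

(defn tx->t
  "Mask the partition bits back to 0, recovering the t."
  [tx]
  (bit-and tx (dec (bit-shift-left 1 42))))

(t->tx 1000)         ; => 13194139534312
(tx->t (t->tx 1000)) ; => 1000
```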
Datomic is not a fit for write scale, as the FAQ states: http://www.datomic.com/faq.html#2017-07-2122:41hmaurer@stuarthalloway Yep I saw that, and I wouldn’t hold Datomic guilty for not dealing with large volumes of writes. I am just wondering if there are potential workarounds#2017-07-2122:41stuarthalloway@hmaurer You can split the application into multiple databases with a separate transactor for each. Remember that peers can query (with join!) across databases, and peers do not care what transactor they come from#2017-07-2122:42hmaurerYep that was my feeling when I discovered peers can join across databases#2017-07-2122:42hmaurergreat 🙂#2017-07-2122:42hmaurerThanks for Datomic by the way, it’s making me enjoy databases again#2017-07-2214:24matanSame here (only the scalability concerns)#2017-07-2122:43hmaurerAlso I might just be delusional but I feel I understand its internals much better than other databases I have used before, which is comforting#2017-07-2122:43stuarthallowayIf you are looking at that kind of scaling trick, please stay in touch here and/or on the mailing list. Happy to help vet and bench your ideas.#2017-07-2122:43hmaurerOr that, even if I do not understand its internals, I won’t need 10 years of experience to understand them if explained properly#2017-07-2122:44hmaurerGreat, thanks! I don’t have any practical application for which I would need to scale writes though; it was just out of curiosity#2017-07-2122:45stuarthallowayCheers!#2017-07-2122:47hmaurerAh, quick other question while you are here @stuarthalloway 🙂#2017-07-2122:48hmaurerIs there a technical reason why Datomic doesn’t accommodate custom/community-built backup and storage drivers?#2017-07-2201:13danielcompton@matan, following up on my post on the mailing list about consistency.
At any point, given a database point t that you got from the transactor you can go to your storage to get the actual transactions (ignore the transactor holding them in memory for now). There are two possible outcomes when reading from Cassandra:
1. The data is there, and it is returned
2. The data is not there (perhaps because of eventual consistency and network partitions, perhaps because the node has failed) and an error is thrown
It is never possible to get an inconsistent view of the world, just an unavailable one#2017-07-2214:22matanwould you not think that from an application point of view, getting an error is not much better than inconsistent state? I mean it is a little less bad, but how would you code your application if for many reads you need to loop on arbitrary retry intervals?#2017-07-2304:40danielcomptonThey are just different tradeoffs of availability or consistency. Which one you choose depends on your application.#2017-07-2304:41danielcompton> I mean it is a little less bad, but how would you code your application if for many reads you need to loop on arbitrary retry intervals?
It depends on your requirements, but returning an error to a client that the service isn't available is always an option#2017-07-2212:44stuarthalloway@hmaurer quality control#2017-07-2214:25matan@stuarthalloway @hmaurer thanks for the discussion, it sure improved and corrected my understanding. In particular with regard to the natural ability to join across databases (but not atomically transact across different databases?!)#2017-07-2214:27matanMaybe it would have been nice to auto-shard by time, given that most (?) of the designated scenarios would not regularly peek into history but rather use one of the latest time points? or maybe joining from same-time shards is typically more straightforward than with other sharding schemes? just asking 🙄#2017-07-2309:57ckaratzaHello, I am currently evaluating datomic and using the java API 😮. One use case I am trying to solve is to retrieve a specific set of transactions for an entity. After getting the transaction ids, which are non-contiguous, I clumsily retrieve the actual tx data via the log api by applying a range of one based on the transaction id (txRange(x, x+1)). Is there a more elegant way of achieving the same result?#2017-07-2313:34robert-stuttaford@ckaratza in Clojure, it’s not clumsy 🙂 (first (d/tx-range (d/log (d/connect "URI")) t (inc t)))#2017-07-2314:26hmaurer@robert-stuttaford isn’t it possible to retrieve the tx data directly in the datalog query anyway?#2017-07-2314:46robert-stuttaford@hmaurer yes you can#2017-07-2316:45ckaratza@hmaurer how could you change this to include the datoms
`[:find ?tx ?e
:in $ ?id ?owner ?revision
:where
[?e :appRegistry/id ?id]
[?e :appRegistry/owner ?owner]
[?e :appRegistry/revisionNumber ?revision]
[?e ?tx]]`#2017-07-2316:50hmaurer@ckaratza ah wait, you want all the datoms for a particular transaction?#2017-07-2316:51ckaratza🙂 yes#2017-07-2316:51hmaurerin that case I’m not sure you can do it efficiently in Datalog. @robert-stuttaford can you please confirm?#2017-07-2316:51hmaurerThere is no index helping you for this#2017-07-2316:52hmaurerThe log is probably the way to go, as you did#2017-07-2316:53ckaratzalet me try this:
`[:find ?tx ?a ?v ?e
:in $ ?id ?owner ?revision
:where
[?e :appRegistry/id ?id]
[?e :appRegistry/owner ?owner]
[?e :appRegistry/revisionNumber ?revision]
[?e ?a ?v ?tx]]`#2017-07-2316:54hmaurerAh, this should work, because you have extra filters on ?e#2017-07-2316:54hmaurerbut if you tried [?e ?a ?v ?tx] alone I think it would warn you that the query would lead to a full DB scan#2017-07-2316:56hmaurerWhat are you trying to do exactly?#2017-07-2316:56hmaurerAs in, what is the problem you are trying to solve / question you are trying to answer?#2017-07-2316:57hmaurer@ckaratza ^#2017-07-2317:00ckaratzaI want to reconstruct the commands executed to build an entity#2017-07-2317:00hmaurerAre you familiar with Datomic’s “history” database?#2017-07-2317:00ckaratzalets say I have an entity revision and I want to display its chronicle#2017-07-2317:01ckaratzayes I execute the query in this db and as i saw now it returns the datoms#2017-07-2317:01ckaratzathey are not grouped by as I want but thy come back#2017-07-2317:02ckaratzaso what happens now is I get all datoms flat and I need to group them by txid#2017-07-2317:09hmaurer@ckaratza you can probably group them after getting back the result from the query then#2017-07-2317:10ckaratzayeap that's the easy part 🙂 thanx a lot for the directions!#2017-07-2317:23ckaratza@hmaurer I also added the ?added part to find out if its an assertion or retraction#2017-07-2317:23hmaurer@ckaratza happy I could be helpful 🙂#2017-07-2417:57marshallDatomic 0.9.5561.54 is now available https://groups.google.com/d/topic/datomic/oHBXJkE7cdA/discussion#2017-07-2421:31gworley3we're seeing indexing alarms and what looks to me like transactions causing the transactor to timeout and miss heartbeat, causing a failover to the standby transactor. search google and looking at our stats, it appears it's related to our high sustained write volumes (over 500MB during some minutes). would increasing writeConcurrency help with this? 
if so, what are safe values to try?#2017-07-2422:07hmaurerI am creating a new datomic in-memory database on every test run, then deleting it, but there seems to be some memory overhead to doing this numerous times#2017-07-2422:07hmaurerIs anything being kept in memory? e.g. an open connection, or else? And can I release it?#2017-07-2422:48erichmondWe are seeing this error when upgrading our dynamo-based datomic peer to 0.9.5561.54 or 0.9.5561.50. Has anyone seen this / can advise?
ClassCastException com.amazonaws.services.dynamodbv2.model.GetItemResult cannot be cast to com.amazonaws.AmazonWebServiceResult datomic.ddb/fn--9598 (ddb.clj:47)
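On hmaurer's question just above about the memory overhead of creating and deleting an in-memory database on every test run: a hedged sketch of a per-test fixture using a unique URI per run, releasing the connection and deleting the database afterwards. d/create-database, d/connect, d/release, and d/delete-database are the standard peer API; the fixture shape itself is an assumption, and whether this frees all peer-side memory is exactly the open question in the log:

```clojure
;; Sketch only: a fresh in-memory database per test, cleaned up afterwards.
(require '[datomic.api :as d])

(defn with-fresh-db
  "Run f with a connection to a brand-new mem database, then clean up."
  [f]
  (let [uri (str "datomic:mem://test-" (java.util.UUID/randomUUID))]
    (d/create-database uri)
    (let [conn (d/connect uri)]
      (try
        (f conn)
        (finally
          ;; release the connection's resources and drop the database
          (d/release conn)
          (d/delete-database uri))))))
```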
#2017-07-2422:51danielcompton@erichmond what are you upgrading from? What other AWS related dependencies do you have in your project?#2017-07-2422:53erichmondfrom 0.9.5561 and aha#2017-07-2422:53erichmondgood idea. what is that command? lein deps --tree ?#2017-07-2422:55erichmondlein deps :tree, got it, thanks#2017-07-2422:55apseyHi Guys, I am interested in running Datomic with Finagle (https://github.com/twitter/finagle). Why?
We are using it on most of our stack (http, thrift, kafka, and iso8583)
We benefit from using finagle Futures chaining;
In our microservices architecture, pretty much every port (from ports and adapters) has its own component.
When we implement our components using finagle, we get circuit-breakers, expo backoff, and self-healing almost for free.
We currently have an issue that when a Peer fails to reach the transactor for a long enough time period, we get a hard trip of our circuit breaker and we need to restart the peer application;
We could implement self-healing and expo-backoff in our own datomic wrapper library, but it would be better if we just had a finagle library wrapping the datomic component;
To do that, I basically need:
a way to get the transactor host from the connection
encode and decode data that is being transacted or queried (I believe we use fressian + edn on datomic, but I couldn't confirm)
Have you heard of anyone doing this, or could you tell me if you think that doesn't make any sense?#2017-07-2502:21danielcompton@apsey, that's very interesting.
> a way to get the transactor host from the connection
I don't know if this is part of the public API
> encode and decode data that is being transacted or queried
I'm not sure what this would get for you?#2017-07-2502:21danielcomptonDatomic has a new HTTP endpoint you can query to get some status info, but I'm not sure if that's what you need here#2017-07-2502:23danielcomptonDatomic has a lot of the HA stuff built-in, as well as connection retries I think#2017-07-2502:24danielcomptonBut it's all internal, I'm not sure how well it's exposed to the user#2017-07-2502:25danielcomptonIs it possible to just monitor the queries and transactions, and fail if they fail, but in the background retry to try and let Finagle handle the self-healing and backoff?#2017-07-2503:00nonrecursivehey y’all, what are the best practices around creating datomic connections? I’m using datomic for a web app, and is it ok to call d/connect once per API request?#2017-07-2506:39danielcompton> Datomic connections do not adhere to an acquire/use/release pattern. They are thread-safe and long lived. Connections are cached such that calling datomic.api/connect multiple times with the same database value will return the same connection object.#2017-07-2506:39danielcomptonfrom d/connect docstring#2017-07-2506:40danielcomptonhowever as a matter of code architecture, I would suggest maybe using the Component architecture that does use a long-lived connection#2017-07-2506:41danielcomptonIt makes reloading easy, and helps you easily use a memory database for testing#2017-07-2506:41danielcomptonYou can do it either way, but if you need to inject a URI to connect to, then you may as well inject a full Datomic Component#2017-07-2507:32val_waeselynck@U0AQ1R7FG not relying on URIs also gives you the opportunity to do things like forking etc.#2017-07-2510:57nonrecursive@danielcompton @U06GS6P1N Thanks for the help!#2017-07-2522:09danielcomptonHere's a fully worked component that might be useful as a starting point:
(ns app.system.datomic
  (:require [com.stuartsierra.component :as component]
            [datomic.api :as d]
            [app.datomic.schema :as schema]
            [clojure.spec :as s]
            [clojure.string :as str]
            [suspendable.core :as suspendable]
            [clojure.tools.logging :as log])
  (:import (datomic Connection)))

(s/def :datomic/conn (partial instance? Connection))

(s/def :datomic/uri (s/and string?
                           #(str/starts-with? % "datomic:")))

(defrecord Datomic [uri conn]
  component/Lifecycle
  (start [component]
    (let [created? (d/create-database uri)
          conn (d/connect uri)]
      (when created?
        (log/info "Creating a new datomic database:" uri))
      (schema/ensure-schema conn)
      (assoc component :conn conn)))
  (stop [component]
    (when conn (d/release conn))
    (assoc component :conn nil))

  suspendable/Suspendable
  (suspend [component]
    component)
  (resume [component old-component]
    (if (and (= (dissoc component :conn) (dissoc old-component :conn))
             (some? (:conn old-component))
             ;; Try and sync the db, to ensure that we are still connected.
             ;; If not, we shut down the component and try to reconnect.
             (try
               (deref (d/sync (:conn old-component)) 500 false)
               (catch Exception e
                 false)))
      (assoc component :conn (:conn old-component))
      (do (component/stop old-component)
          (component/start component)))))

(defn new-datomic [{:keys [uri] :as config}]
  (map->Datomic {:uri uri}))

(s/fdef new-datomic
  :args (s/cat :config (s/keys :req-un [:datomic/uri])))
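A possible usage sketch of the component above; the system-map wiring and the mem URI are assumptions for illustration, not taken from the log:

```clojure
;; Hypothetical wiring of the Datomic component above into a system map.
(def system
  (component/system-map
   :datomic (new-datomic {:uri "datomic:mem://dev"})))

;; Starting the system creates/connects the database and ensures the schema:
(def started (component/start system))

(:conn (:datomic started)) ; the shared, long-lived connection
```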
YMMV ofc#2017-07-2514:02ckaratzaHello! Question regarding txReportQueue. So if I subscribe from a Peer to txReportQueue at Time=t0 where maxTxId=XXX and keep getting messages until Time=t1 where maxTxId=YYY, if for some reason connection to datomic breaks from t1 to Time=t3 where maxTxId=ZZZ, when my Peer recovers will it get all the txIds YYY-ZZZ or it will start getting after ZZZ?#2017-07-2516:15val_waeselynckNo, txReportQueue is a realtime notification system, use the Log API for catching up#2017-07-2515:06chrjsHey all. Noob question. Is it possible to start datomic in local (in memory) mode, and then migrate to a backing store, without losing all the in memory datoms?#2017-07-2515:07chrjshttp://docs.datomic.com/backup.html seems to suggest swapping out the backing store is possible.#2017-07-2515:16favilaIn memory databases can't be backed up, so they can't be restored#2017-07-2515:16favilaYou need to replay the tx-log of the mem db and manually copy#2017-07-2515:16chrjsAh, ok. Thought that might be the case. Thanks!#2017-07-2516:53apsey@danielcompton can you guide me to the new HTTP endpoint status info API?
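val_waeselynck's answer above (catch up via the Log API after a disconnect, rather than relying on txReportQueue replaying anything) might look roughly like this; d/log and d/tx-range are the peer Log API, while storing and reloading the last-processed t is an assumed application detail:

```clojure
;; Hedged sketch: on reconnect, replay every transaction after the last t
;; we processed. Each entry in the tx-range has :t and :data (its datoms).
(require '[datomic.api :as d])

(defn catch-up!
  "Replay all transactions after last-processed-t through handle-tx!."
  [conn last-processed-t handle-tx!]
  (doseq [tx (d/tx-range (d/log conn) (inc last-processed-t) nil)]
    (handle-tx! tx)))
```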
I am still researching, but I don't think it is possible to activate finagle only in the failing cases, if that is what you are suggesting#2017-07-2517:15marshall@apsey http://docs.datomic.com/transactor.html#health-check-endpoint#2017-07-2517:15marshalland http://docs.datomic.com/peer-server.html#sec-2-1#2017-07-2517:57robert-stuttaford@bbloom https://github.com/tonsky/datascript/blob/master/src/datascript/btset.cljc perhaps?#2017-07-2517:58bbloomthat’s probably worth studying - thx @robert-stuttaford#2017-07-2518:11bbloomhttps://gist.github.com/tonsky/c5605058f29c620242eb7e0130234a8c <- does look promising for my needs#2017-07-2518:13hmaurerCan I call functions from my application’s namespace / libraries in Datalog queries? I tried but got an error. I suspected it’s because the quoted query could not reference the functions I was using, so I tried syntax quoting (which auto-prefixes every symbol), but then the variables (e.g. ?trete) got prefixed too and everything broke#2017-07-2518:13hmaurerI would like to use some functions from the clj-time package in my query#2017-07-2518:15hmaurerAh, I found the relevant passage of the doc#2017-07-2518:16hmaurer> Function names outside clojure.core need to be fully qualified, and their namespaces must be loaded before use in query.#2017-07-2518:16hmaurerI tried that and still got an error, but will try again#2017-07-2518:18hmaurernevermind, full qualification does fix it. I must have made a mistake earlier#2017-07-2518:24bedersare you doing it using the client library?#2017-07-2518:24hmaurerpeer library#2017-07-2522:35gonewest818Can datomic-console-0.1.214 be installed and run with datomic-free-0.9.5561.50 ?#2017-07-2522:37gonewest818could be I blew the installation, but for me it's crashing at startup with
console_1 | Exception in thread "main" java.lang.IncompatibleClassChangeError: Implementing class
console_1 | at java.lang.ClassLoader.defineClass1(Native Method)
console_1 | at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
console_1 | at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
...
#2017-07-2522:49gonewest818same issue with datomic-free-0.9.5561.54 and each version back to 0.9.5544#2017-07-2523:02gonewest818ah, nevermind. I found this comment in the Google group https://groups.google.com/forum/#!topic/datomic/TiRcDBqs9cM#2017-07-2611:10isaacI want split my database to multiple(sharding), What the best practice to alter my type of :db.type/ref attributes to type :db.type/long? According the [doc](http://docs.datomic.com/schema.html#Schema-Alteration), I must install a new attributes.
e.g.
I have User & Account entities; now, if I put User and Account in different dbs, the attribute :user/accounts can no longer be of type :db.type/ref#2017-07-2615:33mgrbyteHas anyone experienced difficulty getting a transactor running on AWS w/dynamo db with version 0.9.5561.50 ?#2017-07-2617:39erichmond@mgrbyte We had no issues with our transactor, we had issues with our peer, but it was a library conflict issue#2017-07-2617:39erichmondwhat error are you seeing?#2017-07-2619:20devthcan a transaction function return txs as list-of-maps in addition to list-of-lists?#2017-07-2619:22favilaYes#2017-07-2619:23favilaTransaction functions have macro-like evaluation#2017-07-2619:23favilaTransactor will keep expanding results until it reaches a fixed point#2017-07-2619:24devthawesome.#2017-07-2619:53uwodid I mishear, or did someone say you should index all attributes by default?#2017-07-2620:26favilaThe only cost is space and indexing time (when the transactor compacts the index), and you can always remove them later#2017-07-2620:26favilaso, that seems reasonable#2017-07-2620:36hmaurer@favila you can also add an index later on an attribute, right?#2017-07-2620:37favilayes#2017-07-2620:46timgilbertSay, is there anything like a $DATOMIC/bin/transactor status command?
Trying to script some dev tools around the dev transactor and can't see an obvious way to check whether it's running or not#2017-07-2620:47timgilbertI mean, there's always ps | grep datomic but I'm hoping for something slightly less janky#2017-07-2620:47timgilbertI do see a pidFile system property, but I can't seem to find the pid file in the dev transactor#2017-07-2620:52favila@timgilbert why not just ask your process manager?#2017-07-2620:53favilae.g. for systemd, systemctl status datomic.service#2017-07-2620:54favilaor even is-active#2017-07-2620:54favila(which gives a useful 0 exit code)#2017-07-2620:54timgilbertI would do that on my actual transactor server, but locally I'm just running a script to start it (OS X)#2017-07-2620:54favilaah#2017-07-2620:55timgilbertMaybe I just need to add a pidfile argument in the config for local, though then I'll need to get the engineers to agree on a path#2017-07-2620:56favilaI don't think the pid is removed on death#2017-07-2620:56favilaalthough I guess a wrapper script could do it#2017-07-2620:56timgilbertHmm, was afraid of that. I suppose for catastrophic failure it couldn't be anyways#2017-07-2620:57timgilbertWell, I'll mess around with it some#2017-07-2620:57favilayou could make a launchd agent too#2017-07-2620:57favilaalthough I don't know the details#2017-07-2620:57timgilbertI suppose so, but I'd need to get over my burning hatred of launchd first#2017-07-2620:59favilahttps://tgk.github.io/2012/11/installing-datomic-transactor-with-launchd.html#2017-07-2621:01favilathat's for a daemon (system-wide) not an agent (user-specific)#2017-07-2621:01favilabut probably not very different#2017-07-2621:03timgilbertYeah, I can enthusiastically state that I won't be editing any XML to get this little script working.
😉#2017-07-2621:03timgilbertBut I do appreciate the suggestion#2017-07-2621:05favilayou can use the plist editor#2017-07-2621:07timgilbertI think I would probably attempt to get the dev transactor running in docker before going down that path#2017-07-2621:07favilayour call#2017-07-2621:07timgilbertI am enjoying my hatred of launchd too much to let go of it, unfortunately#2017-07-2621:07favilaan agreed-upon pid with a wrapper script that deleted the pid would probably work too#2017-07-2621:08favilait's just nice to have job control and not have to keep a terminal open somewhere#2017-07-2621:10timgilbertThat's true. I wind up doing a lot of my day-to-day work in-memory and then go from there to ddb, typically. But since I can't restore from ddb to in-memory, I'm starting to mess with the dev transactor more#2017-07-2621:12favilaah. this may interest you then https://gist.github.com/favila/785070fc35afb71d46c9#2017-07-2621:12favilaIt's old (from before mem db had logs)#2017-07-2621:13favilanowadays I think you could just replay the log#2017-07-2621:13timgilbertOh, interesting. I will definitely check it out#2017-07-2622:30gonewest818Looks like 0.9.5561.50 added a feature “health check endpoints for transactors and peer servers.” Details: http://docs.datomic.com/transactor.html#sec-1-1#2017-07-2714:15hmaurerDatomic’s default behaviour in a transaction is to “upsert” (e.g. if an ID specified as :db/id does not represent an existing entity then it’s considered a temporary ID). This seems prone to errors; is there a way to throw an error on transact if :db/id is not referring to an existing entity? and vice-versa?#2017-07-2714:21hmaurerAh nevermind, I am wrong on this. It’s not that :db/id gets considered as a temporary ID if it does not represent an existing entity ID.
It’s that it uses the value of :db/id as the new entity’s ID instead of auto-generating it.#2017-07-2714:22hmaurerMy question still stands though: can I disallow upserts#2017-07-2714:22hmaurer(without doing a read before hand, or using a custom transaction function)#2017-07-2714:26hmaurerBasically, I want to make sure that when I am updating an entity I am actually updating an existing entity, and not creating a new one#2017-07-2715:59hmaurer@favila would you have an insight on this?#2017-07-2716:04favilaentity ids are just numbers, so there is no solid notion of one "existing" or not#2017-07-2716:04favilathe closest you can get is to say there are no datoms which contain that number as a reference#2017-07-2716:05favila(precisely: no datoms in :eavt index with that number in :e, and no datoms in :vaet index with that number in :v)#2017-07-2716:06favilayou can arbitrarily assert datoms against any entity id you want whose t value is <= the internal t counter in the db (and whose partition id is valid? not sure if this is checked)#2017-07-2716:09favilaWhat I suggest you do is define some notion of entity "existence" for your application (e.g., has a certain attribute asserted) and use a transaction function to assert that the attribute either exists or has a certain value.#2017-07-2716:10favila:db.fn/cas with same old and new value might work? not sure#2017-07-2716:10favilaotherwise you need custom fn#2017-07-2716:10favilatransaction fn should throw if the precondition it is testing fails#2017-07-2716:11favilaanother idea, you can use a lookup ref as the :db/id#2017-07-2716:11favilaif the lookup ref and the notion of an entity existing are the same#2017-07-2716:12favila{:db/id [:unique-attr "unique-value"] :more "assertions"} will fail if the lookup ref cannot resolve#2017-07-2716:49hmaurer@favila thanks! That’s helpful. 
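favila's lookup-ref trick above, written out as a concrete (hypothetical) transaction; :user/email is an assumed unique attribute, not one from this log:

```clojure
;; Sketch: using a lookup ref as :db/id makes the whole transaction abort
;; if no entity has :user/email "ada@example.com", giving an
;; "updates must hit an existing entity" guarantee without upserting.
@(d/transact conn
             [{:db/id [:user/email "ada@example.com"]
               :user/name "Ada"}])
```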
I was wondering the same thing about :db.fn/cas working with same old and new value; I’ll have to try it#2017-07-2716:49hmaurerwould db.fn/cas work with old value nil ? To check whether an attribute has NOT been set?#2017-07-2716:49hmaureror would I need a custom transaction function for this too?#2017-07-2716:52hmaurer@favila are you doing “existence” checks like this internally? It seems like something that would be quite common, e.g. some users might have the right to update an entity but not create one, etc#2017-07-2716:53favila:db.fn/cas works with nil with the meaning you said.#2017-07-2716:53favila(says so in the docs)#2017-07-2716:53favilait doesn't support a new value of nil though, which I have wanted#2017-07-2716:53favila(i.e. a conditional retraction)#2017-07-2716:53favilaWe don't typically do checks like this#2017-07-2716:54favilaIf they happen to write to a db id that was deleted, it just gets orphaned#2017-07-2716:54favilaif they create a new thing, it's with a unique attribute and an entid#2017-07-2716:55faviladoesn't mean we couldn't be more careful about it though#2017-07-2716:58hmaurerok. And slightly unrelated question: do you use :db/id in your product directly? or do you tag every entity with a uuid?#2017-07-2716:58hmaurer@favila ^#2017-07-2717:00favila:db/id if used, is always short-lived
can't find any docs on this.#2017-07-2715:35devthit appears i can associate a single entity with a reverse ref:
{:db/id "new-entity"
:book/_subject 12312312312
:person/name "foo"}
#2017-07-2715:36devthbut not multiple:
{:db/id "new-entity"
:book/_subject #{12312312312 456456456456}
:person/name "bar"}
#2017-07-2715:58favila@devth that is correct. Anything seq-like will be interpreted as a lookup ref#2017-07-2715:58devthah. so it's impossible to assoc multiple refs#2017-07-2715:59favilait's possible with additional maps#2017-07-2715:59favila{:db/id foo :book/_subject 123}{:db/id foo :book/_subject 456}#2017-07-2715:59favila(in same tx)#2017-07-2715:59favilabut not in a single map#2017-07-2715:59devthah, right#2017-07-2715:59devthmakes sense. thanks#2017-07-2716:02favilaI don't know if this is documented. I had to reverse engineer some map format edge cases#2017-07-2716:03favilaforward refs do sometimes accept many, but I don't remember how it decided between one lookup ref vs many items#2017-07-2716:10devthyeah, finding the docs a little light in this area#2017-07-2717:09timgilbertIs there a better / preferable way to get a list of datomic entity-API items out of a query besides this?
(map #(d/entity db %) (d/q '[:find [?eid ...]] db))
#2017-07-2718:58favilaThat is the only way.#2017-07-2718:52hmaurerIs there a way to prevent peers from executing d/delete-database? (unless they are “privileged” or something)#2017-07-2718:52robert-stuttafordstop using the peer lib, and use the client lib instead#2017-07-2718:53hmaurer@robert-stuttaford client lib is in alpha and lacks support for some features though, no?#2017-07-2718:54robert-stuttafordi guess. i don’t use the client lib 🙂 but that’s basically what it boils down to. the peer is considered to be inside the database#2017-07-2718:54hmaurermmh. Would you use the client lib if you started a new project?#2017-07-2718:54hmaurerthe peer lib seems to have some nice perks#2017-07-2718:54robert-stuttafordprobably not. now that there’s no peer limit, i prefer its programming model#2017-07-2718:55robert-stuttafordwe’re all in with Datomic; it’s our only database, and we’re full-stack Clojure. so the peer is everywhere#2017-07-2718:56hmaurerthe idea of being able to delete your production database with a single line is scary#2017-07-2718:56hmaureralthough as you said in a thread, with continuous/regular backups it’s not so bad#2017-07-2718:56robert-stuttafordyeah#2017-07-2718:56hmaurerhow do you go about doing continuous backups by the way @robert-stuttaford ?#2017-07-2718:58robert-stuttafordwe have a t2.small that has one job; this script#2017-07-2718:59hmaureroh, so it just runs it on repeat#2017-07-2718:59robert-stuttafordyep#2017-07-2718:59hmaurerI thought you were doing something clever observing the transaction log with Onyx and piping it to a backup location#2017-07-2718:59hmaurer😄#2017-07-2718:59hmaurerThat works too though#2017-07-2719:00robert-stuttafordgosh no 🙂#2017-07-2719:00robert-stuttafordwe are looking at DDB streams for cross-region replication so that in Disaster Recovery we can be back up quicker#2017-07-2719:00hmaurerAre there some guarantees that Datomic’s backup system won’t fuck up the incremental backup?#2017-07-2719:01hmaureror do you
also continuously backup that to another location?#2017-07-2719:01robert-stuttafordbecause when we dry run it right now, the longest part of downtime is copying the backup to another region#2017-07-2719:01robert-stuttafordwe have regularly scheduled backups to non-AWS yes#2017-07-2719:06hmaurer@robert-stuttaford out of curiosity, do you test that your backups are actually working? this is a non-datomic question, but I was planning to do this on a project#2017-07-2719:06hmaurere.g. periodically restore a DB from backups and run some tests on it#2017-07-2719:09hmaurer@marshall Hi! Maybe you could answer this question: what exactly happens when you call d/delete-database? Does it send a message to the transactor? Does it destroy the storage directly? Also, would it be possible to disallow d/delete-database calls on, say, a production database? (by configuring the transactor or the storage in a certain way)#2017-07-2719:10marshall@hmaurer delete-database tells the transactor to remove the database from the catalog. it doesn’t destroy any of the storage directly (that happens when you later call gc-deleted-databases)
There is currently no way to disable it or launch a peer that can’t call it#2017-07-2719:11hmaurerat which point will it delete storage if you don’t call gc-deleted-databases? and if it doesn’t delete storage, is there a way to “restore” a deleted database?#2017-07-2719:11marshallnot really; there is probably some way to recover it manually, but it wouldn’t be pretty#2017-07-2719:12marshallbasically, you probably shouldn’t have any code paths in your system that include a call to delete-database
Think of it a bit like having a DROP TABLE somewhere in your code#2017-07-2719:12marshalluseful as an administrative tool, not so great in your app 🙂#2017-07-2719:13hmaurerYep, of course. But on PG/MySQL I could configure the production db user to not be able to drop tables at all#2017-07-2719:13hmaurerWhich is reassuring, even if code reviews/linting tools can ensure that your production code does not call those methods#2017-07-2719:13marshalltrue enough. I would suggest that is a reasonable request to put into our Suggest Features portal 🙂#2017-07-2719:14hmaurerWill do. Thanks for your insights!#2017-07-2719:15hmaurer@marshall another small thing I discussed earlier in a thread: is the “transact” function in the Datomic peer clojure library part of a protocol that I could implement?#2017-07-2719:15hmaurere.g. to wrap some behaviour around d/transact#2017-07-2719:16hmaurerI couldn’t figure it out from the doc#2017-07-2719:18marshallNot sure I understand the question. If you need to enforce constraints on the transacted data you can either use a transaction function, or build up the transaction data structure in your peer and do a compare-and-swap against some known value#2017-07-2720:07hmaurerMy bad, my question wasn’t very clear. I would like to add some attributes on every transaction in my application for auditing purposes (e.g. the ID of the current user). To this end, I could wrap d/transact with my own function, so as to add the necessary tx-data to every transaction. I was wondering if the d/transact function in the Clojure API is part of a protocol that I could reify to add my own behaviour. I am quite new to clojure so this might not make sense at all#2017-07-2720:08hmaurere.g. is datomic.Connection a protocol?#2017-07-2720:11marshallgotcha - I don’t believe it is extensible in that manner. 
I’d probably suggest you write a wrapper fn that adds the user info you’re wanting to include and use it exclusively throughout your application#2017-07-2814:19timgilbert^ that's the approach we've taken at my shop, it works great#2017-07-2814:21hmaurer@U08QZ7Y5S do you pass the “current user” context manually all the way down to this function? Or implicitly through something like a dynamic var?#2017-07-2814:41timgilbertYep. We keep a kind of metadata/session object around which gets initialized from a JWT token at the top of the stack and contains the user-id and roles and some related stuff, then we pass it all the way down to the datomic layer, and then our wrapper function adds a {:db/id (d/tempid :db.part/tx) :user/id blah :meta/data foo} to the transaction data that the calling code passed to it.#2017-07-2814:43timgilbertWe did experiment with some ways to avoid needing to explicitly pass the metadata around, but nothing seemed to be significantly better and most of our experiments introduced subtle context semantics that we didn't want#2017-07-2817:51hmaurer@U08QZ7Y5S thanks for the explanation 🙂 out of curiosity did you wrap other datomic functions (e.g. d/entity) to enforce some security rules based on roles too?#2017-07-2817:59timgilbertNope, and having three separate datomic APIs (pull / query / entity) has been a source of some architectural friction for us that we haven't quite solved. 
What we tend to do these days is return entities from our data layer and then do filtering at the higher levels, but that spreads the filtering logic around our codebase a bit more than we like#2017-07-2818:01timgilbertIt's nothing unsolvable, but returning entities rather than data from the data-access layer has some implications for the complexity of the rest of the code#2017-07-2818:02timgilbertOn the other hand, it keeps the data layer simple and is very flexible in terms of what we can do at the controller level#2017-07-2720:54robert-stuttaford@hmaurer we restore production to our staging environment daily#2017-07-2720:54robert-stuttafordpart of our business has a content creation component to it, so we’re constantly testing new content with new code#2017-07-2721:54stijni'm trying to generate a datalog clause that looks like this [(.before ^Date ?event-start ^Date ?start-before)] (with type hints)#2017-07-2721:54stijn?event-start and ?start-before are generated symbols though#2017-07-2721:56stijnwhen I don't quote the ^Date they get removed (which seems logical to me): [(list '.before ^Date (calculate-symbol param) ^Date (calculate-symbol other-param))] ==> [(.before ?event-start ?start-before)]#2017-07-2721:57stijnbut when quoting them: [(list '.before '^Date (calculate-symbol param) '^Date (calculate-symbol other-param))] ==> [(.before (calculate-symbol param) (calculate-symbol other-param))] they also disappear and the calculate-symbol calls get quoted#2017-07-2721:58stijnso, how to do type hints with calculated symbols?#2017-07-2721:58favila@stijn < should work with date#2017-07-2721:59stijnok 🙂#2017-07-2721:59favilathe type hint issue is because of quoting#2017-07-2721:59favilathe metadata is put on the LIST calculate-symbol, not the result#2017-07-2722:00stijni see#2017-07-2722:00stijnand < on dates doesn't give any reflection warnings#2017-07-2722:00favilayou need something like (with-meta (calculate-symbol param) {:tag 'Date})#2017-07-2722:01favilathe query 
comparison operators are magic in datalog#2017-07-2722:01favilathey follow datomic's comparator rules for the types that are indexed#2017-07-2722:02favilaand the datalog query optimizer can often understand them to avoid full scans#2017-07-2722:02stijnso it's even better to use < than .before?#2017-07-2722:02favilamuch better#2017-07-2722:02stijncool, thanks @favila#2017-07-2722:03favilahttp://docs.datomic.com/query.html#built-in-expressions#2017-07-2722:03favilabuilt-in means "not clojure.core"#2017-07-2722:04favilanote e.g. != is magic builtin, not= is clojure.core/not=#2017-07-2722:05favilashould prefer != < <= > >= on all types that are valid :db/valueType (except bytes)#2017-07-2722:05stijnok#2017-07-2816:15souenzzoDouble check (class your-date-in-clojure)
I have trouble with java.util.Date (works) vs java.sql.Timestamp (doesn't work)#2017-07-2812:27erichmond@robert-stuttaford how do you deal with needing to up the write capacity on dynamo before doing the restore? did you automate that?#2017-07-2812:49robert-stuttaford@erichmond yes, we have a totally scripted process that ups DDB, restores, downs DDB, cycles the transactor, and restarts all the connected peer service apps on their instances#2017-07-2815:17erichmondbash script?#2017-07-2815:17erichmondare you ever in NYC?#2017-07-2815:17erichmondDinner on me, and we can talk shop, if so#2017-07-2815:17erichmondhehe#2017-07-2815:17erichmondwe have the same stack basically#2017-07-2819:27uwodoes the time it takes to connect a peer increase as the size of the db increases? In dev, I’m noticing that some datomic instances are taking longer to connect to than others. (They’re all set up the same way with sql storage).#2017-07-2819:37robert-stuttaford@uwo peers need to grab all the idents and the live index. so if your live index is large (ie just before indexing job) it’ll take longer to start up#2017-07-2819:37robert-stuttafordwe saw a substantial peer startup time when we doubled our transactor instance size 🙂#2017-07-2819:38robert-stuttaford@erichmond i live in Cape Town, South Africa, so dinner might be a little tough to swing 😁#2017-07-2819:39uwo@robert-stuttaford thanks!#2017-07-2819:45erichmondNo problem! Just checked google maps. It appears the midway point between NYC and Cape Town is roughly the Cape Verdean Islands. 
We just each take a quick flight over there.#2017-07-2819:45erichmond#Joking#2017-07-2819:46robert-stuttafordwe’ll be the best two Clojure developers for miles!#2017-07-2819:46hmaureror you could both buy an oculus rift and have a virtual dinner together#2017-07-2819:46hmaurercheaper and more 2017#2017-07-2819:46robert-stuttafordthat won’t be awkward at all#2017-07-2819:46hmaurerjust throwing ideas 😄#2017-07-2819:48uwo@robert-stuttaford do you schedule your own indexing jobs outside of imports?#2017-07-2819:49erichmondLOL#2017-07-2820:41robert-stuttaford@uwo not so far#2017-07-2820:41uwothx#2017-07-3020:43hmaurerOut of curiosity, does Datomic fetch multiple segments at once from storage? (e.g. in one roundtrip)#2017-07-3020:43hmaurerDepending on the query plan#2017-07-3021:15hmaurerAlso, I have another completely unrelated question#2017-07-3021:16hmaurerI am aware that d/transact (Clojure, peer API) returns a :db-after. However I am wondering if, after executing the transaction (and de-referencing the future), I am guaranteed that calling (d/db conn) (on the conn as I executed the transaction) will return a DB at least as recent as :db-after?#2017-07-3021:32favilaYou are guaranteed#2017-07-3021:33favila(On that peer, caveats for race conditions in your code)#2017-07-3021:38hmaurer@favila yep I meant on the same peer, in the same thread#2017-07-3021:38hmaurerthanks!#2017-07-3021:47camdezNice, I’ve been wondering the same.
@favila Do you know of a citation for that? I’d love to read any additional details.#2017-07-3022:01favilaNot offhand. Detail: same channel that sends all txs sends your own txes, and receiving tx data is exactly the same as having that t available#2017-07-3122:25hmaurerOut of curiosity, has anyone investigated how ontologies (e.g. https://schema.org) could be used in conjunction with Datomic?#2017-07-3122:48hmaurer@U06GS6P1N that sounds like something you might have pondered about too 🙂#2017-08-0108:36val_waeselynckUnfortunately, I have not#2017-07-3122:26hmaurerInstead of specifying a fully “ad-hoc” schema for one’s application, it would seem like a good idea to extend an existing ontology, or to build your own on “solid” principles#2017-07-3122:47hmaurer@favila maybe, e.g. http://health-lifesci.4.3-2f.schemaorgae.appspot.com/ ?#2017-08-0113:43marshallhttps://github.com/cognitect-labs/onto
In reference to @hmaurer’s question above ^^#2017-08-0113:45hmaurerAre you aware of a reason why this project in particular hasn’t been maintained in a while? More specifically, was the approach abandoned because of major flaws, or just lack of time / interest?#2017-08-0113:50hmaurer@U05120CBV ^#2017-08-0113:51marshallI believe it was intended as a framework for this type of problem, as opposed to a ready-to-go library.
Also, I don’t think anything about the Datomic schema or interfaces has changed since it was last released so I don’t see why it wouldn’t continue to work as-is#2017-08-0113:54hmaurer@U05120CBV Ok, thanks! I was just curious as to whether the person working on this project (from Cognitect, it seems) found it unsuitable as an approach to model data in datomic and therefore abandoned the idea#2017-08-0114:05marshallNo, I believe it was felt to be a good approach when necessary/appropriate#2017-08-0118:02souenzzoWhy can't I do [:find ?e ?e ...]? Is it intentional? Is there some workaround?#2017-08-0118:21hmaurer@souenzzo what’s your use-case?#2017-08-0213:13souenzzoI'm doing :find ?e (pull ?e [*]) (pull ?f [*])
Then, when I'm processing, a plain ?e means "you should do some calculation with ?e"
and (pull ?e [*]) is just a map of random attributes#2017-08-0213:20hmaurer@souenzzo ?e will have the same value as :db/id in the map returned by pull#2017-08-0213:21souenzzoYep. I made it. But I see no reason for this limitation#2017-08-0216:53val_waeselynckneither do I, and I've been bit by this kind of limitation when doing code generation among other things#2017-08-0121:42hmaurerQuestion: why does datomic.api/transact take a vector as tx-data and not a set?#2017-08-0121:59favilatransaction functions are not necessarily pure @hmaurer#2017-08-0122:00hmaurer@favila ah, right. Are they executed in the order of the vector of tx-data?#2017-08-0122:00favilano idea. I expect there are no guarantees#2017-08-0122:01favilawhat matters is what they return. It's the final expansion that has set semantics#2017-08-0122:03hmaurer@favila speaking of, is there a way to expand the tx-data locally? not the transaction functions, but at least expand the “map” shorthand into assertions#2017-08-0122:03hmaurerI mean, without writing a function to do the expansion of course#2017-08-0122:03favilad/with?#2017-08-0122:04favilathe transaction functions are just functions, you can invoke them directly to see what they expand to#2017-08-0122:04favilaI think the only way to see map expansion is d/with#2017-08-0122:05hmaurerhow will with work with tx functions? I assume it won’t know about the ones installed on the transactor?#2017-08-0122:05favilaof course it will#2017-08-0122:05favilathey may not run if their dependencies are not satisfied#2017-08-0122:06hmaurerOk, I see. thanks 🙂#2017-08-0122:06favilayou can pull a function out of the db and call it#2017-08-0122:07favilathey're created by d/function#2017-08-0122:07hmaureryeah actually I hadn’t really read the doc about transaction functions yet. 
I just realised they are just data stored in the db#2017-08-0122:08favilaas long as its :imports and :requires are on the peer (and compatible with whatever the transactor has), it will run just like on the transactor#2017-08-0215:41aklein3175Hi everyone, we need to implement an auto-increment attribute, what are good patterns in datomic to do so?#2017-08-0216:34val_waeselynck@U6H375J4T using a transaction function. The Datofu library provides an implementation, which you may use or take inspiration from https://github.com/vvvvalvalval/datofu#generating-unique-readable-ids#2017-08-0216:35val_waeselynckcode: https://github.com/vvvvalvalval/datofu/blob/0e6c5212a399b4d068f050e8a91022e9b0379d74/src/datofu/uid.clj#L55#2017-08-0217:09aklein3175great, thank you#2017-08-0217:32devthit's not possible to go back and add :db.part/tx metadata to a transaction, correct?
realized this when attempting transactions with assertions plus tx metadata in the :db.part/tx partition, and noticing that if the assertion was already true, the tx metadata was not added#2017-08-0217:33devthso, iiuc: if i want to put tx metadata (aka reified transactions) on a specific existing EAV the V must change. otherwise the assertion is ignored.#2017-08-0217:41robert-stuttaford@devth you can annotate transactions after the fact; simply use its real id instead of a tempid#2017-08-0217:41robert-stuttafordie by querying [_ _ _ this] in datalog#2017-08-0217:42devth@robert-stuttaford ah, makes sense!#2017-08-0217:42robert-stuttaford:db.part/tx will always reference the in-flight transaction#2017-08-0217:43robert-stuttafordliterally did this today 🙂 had to yank some scheduled mails, and forgot to document the ad-hoc transaction, so i had to find it and annotate it afterwards#2017-08-0217:44devthawesome. yeah i was relying on the :db.part/tx referencing in-flight txes bit but hadn't considered looking up the tx-id after the fact#2017-08-0217:44devththanks!#2017-08-0220:20yediif i have: [[?p :user/first-name ?fn][?p :user/last-name ?ln]] and I want to bind the concatenation of ?fn and ?ln to ?name, how would i go about doing that#2017-08-0220:21kennethkalmer[[?p :user/first-name ?fn] [?p :user/last-name ?ln] [(clojure.core/str ?fn " " ?ln) ?name]] possibly#2017-08-0220:21yediwoah that's so simple#2017-08-0220:22hmaurer@kennethkalmer no need to prefix str either#2017-08-0220:22kennethkalmereven better#2017-08-0220:22kennethkalmerwasn’t sure if clojure.core was available, so thought I’d be safe 🙂#2017-08-0220:23hmaurer@kennethkalmer yeah I just checked the doc to be sure; I had a doubt too 🙂#2017-08-0220:37kennethkalmerhowzit everyone, I need to ask some advice on how to save a bunch of derived transactions to Datomic. Some background first: I have this abstraction of a data-set, which has a beginning and end date, and a type. 
these are prepopulated and saved to Datomic. Then I process a ton of other entities and calculate a bunch of derived entities from different entities. this is all well and good, but I need to “inject” some additional transactions into the mix based on the transactions I’ve seen pass by already…
By means of example, lets say I see the following (abbreviated) transactions pass by: [{:db/id -1 :foo/id 1} {:db/id -2 :foo/id 2} {:db/id -3 :foo/id 1}], I wanna concat onto that list [{:db/id 1 :data-set/ids [n m]} {:db/id 2 :data-set/ids [m]}] (where n & m are other eids).
You see, nowhere else are entities 1 or 2 referenced in the transactions, so I’m thinking I’d need a transducer that accumulates the seen references to them and build up the additional transactions to concat onto the list. If I just slot them in directly as I encounter my ref I’ll get an exception of trying to update the same entity twice in one transaction.
My list of transactions also needs to be lazy, or I’ll run out of memory. The list ends up in a generic function that batches the transactions.
I’ve already played with a simple transducer that simply counts the elements in a list and appends that number to the final results and it seems to work nicely when combined with sequence.
Thing is, I just don’t know how to best handle this situation (even battling with some missing vocabulary here). Hope this rambling makes sense#2017-08-0220:42kennethkalmernow I’m wondering two things, 1) a reducing function that returns larger outputs than inputs is probably a smell, and 2) if this couldn’t be reworked with core.async to be more fluid#2017-08-0220:43hmaurer@kennethkalmer I am pretty sure a reduce function which returns a larger output than its input is not a smell#2017-08-0220:43hmaurerNo idea about the rest though, sorry 😄#2017-08-0220:47kennethkalmerthink i’ll spend a bit of time wrapping my head around http://docs.datomic.com/best-practices.html#pipeline-transactions and core.async. maybe I can come up with something simpler while hopefully getting some feedback here#2017-08-0302:12devthwhat's an example of when you'd want to use a :db.unique/value instead of a :db.unique/identity?#2017-08-0308:45mgrbyte"Unique values have the same semantics as unique identities, with one critical difference: Attempts to assert a new tempid with a unique value already in the database will cause an IllegalStateException."#2017-08-0308:45mgrbytehttp://docs.datomic.com/identity.html#unique-values#2017-08-0312:59devthyep, read the docs 🙂 just trying to figure out the intended use case.#2017-08-0313:00devthnew question: if you d/delete-database a db with underlying sql storage, does it drop the associated rows in sql?#2017-08-0313:01favilaNo. That happens when you run gc-deleted-databases#2017-08-0313:02devthyou mean gc-storage?#2017-08-0313:03devthhaven't run that yet. good idea to run it on some regular schedule i assume?#2017-08-0313:04favilahttp://docs.datomic.com/capacity.html#garbage-collection-deleted-production#2017-08-0313:05favilaNo, gc-storage is for active dbs #2017-08-0313:05devthah, command line api. 
i was looking at the clojure api#2017-08-0313:05devththanks, good stuff#2017-08-0313:37devthright but having a hard time thinking of a use case#2017-08-0313:37matthavenerif you had an entity that was totally immutable (never updated)#2017-08-0313:37matthavenerthen you wouldn’t want to upsert it, devth#2017-08-0313:37devthsure.#2017-08-0313:38devthso i take it ppl use identity more often?#2017-08-0313:38devthi.e. unless you have a special reason to use value, default to using identity#2017-08-0313:39matthavenerthat’s probably fair#2017-08-0313:40matthaveneri would say unless you know you’re going to upsert, just use value, schema change allows those attributes to change later anyways#2017-08-0313:40devthoh, i thought that was unalterable#2017-08-0319:16uwosay I want to add a derived field to a reference, and there are many of this type in the system (memory large). Let’s call the ref a location, cause that happens to be what I’m updating currently. In a transaction function, I could query all locations and then map over and return the new datom for each. If I didn’t want to pull all results in to memory at once, what would i do? Also, if I do this within a transaction function the size of the returned transaction would be enormous. I should probably fix that as well, no?#2017-08-0319:27uwoI should probably do this outside of a transaction function, and use d/datoms, shouldn’t i?#2017-08-0320:04favila"add a derived field to a reference" means what?#2017-08-0320:04favilaI'm not clear what you're querying and transacting#2017-08-0320:05favilaare you trying to compute some extra attribute+value on an entity based on input from a query, and do it in bulk across a bunch of entities?#2017-08-0320:43uwoI’m just adding a new attribute to an entity in the system, and there are a lot of entities I need to update. Each new attribute is a projection of many existing attributes on said entity. I think the answer to your last question is, yes. 
@favila#2017-08-0320:44favilawhy are you desiring a transaction function?#2017-08-0320:44uwono reason. I realized that was not a good direction#2017-08-0320:44uwomy guess is I should use d/datoms to iterate over all the entities and then tx-pipeline them in#2017-08-0320:48favilaoptions are: 1. don't transact. if purely a derived value, just have the peer derive it as needed. 2. transact derived and non-derived together 3. transact derived separately later (possibly in bulk; be careful that entity hasn't changed in a way that invalidates the derived value).#2017-08-0320:48favilayou can't use a transaction function for derived values because these cannot see the result of the transaction they are within.#2017-08-0413:42devthwhen querying the history db for transactions, transactions will not be sorted by timestamp by default, will they?#2017-08-0415:52favilaThe name of the index tells the sorting#2017-08-0415:54favilaE.g. E a v tx#2017-08-0415:54favilaSo sorted by tx, but by value first#2017-08-0415:55favilaFinally retractions sort before assertions#2017-08-0415:56devthi see. thanks#2017-08-0422:21bbloomhas anyone experimented with squuids vs totally random uuids in datomic? i’m curious how big the difference is for moderately sized databases. i don’t need any hard numbers. i’m mostly looking for an excuse to say “OK, i can just use fully random uuids and not think about it at all”#2017-08-0500:57favilaWe default to squuids, but that doesn't stop us from using namespace uuids (v5) where we have a hash-like use case. 
We don't notice any operational difference but we're not very sensitive to performance or space#2017-08-0500:58bbloommakes sense#2017-08-0500:58bbloomi’m working on something with a notion of “people with the link” security#2017-08-0500:58bbloomwould be nice to have the link be just the ID, unguessable, without having to include a special token#2017-08-0500:59bbloomall the other uses of uuids will be relatively small in number - pretty much all interesting publicly accessible entities in this system can have this security mode#2017-08-0501:00bbloomso i was trying to decide if i should use squuid + token, or random uuid for this - there’s probably no other (immediate) usecase for uuids & most other internal entities can just use datomic long ids#2017-08-0501:01favilaif you are doing direct lookup it's probably fine. the downside of not using squuids is fragmentation#2017-08-0501:01bbloomthinking about it more, a special token is probably the right way to go anyway - as you may want to revoke it (ie change the link, or create a new one)#2017-08-0501:01favilaso, seqs, locality, etc suffer; indexing takes longer and generates more garbage (disk space)#2017-08-0501:02favilaif you don't have scanning queries you're probably fine with the read-side downsides of fragmented index#2017-08-0501:03favilaso only the write side matters, and probablu only matters at scale#2017-08-0501:03favilais my thinking anyway. without hard numbers#2017-08-0501:03bbloomyeah - that was basically my thought process, which is why i wanted an excuse to just use totally random 🙂#2017-08-0501:04bbloomthanks#2017-08-0517:35gonewest818Can someone point me to the recipe for running datomic console with the free transactor? For me the console keeps throwing an exception when I try to run any query.#2017-08-0517:35gonewest818I’m launching with bin/console -p 8080 dev datomic:. 
This is inside a docker container, where transactor is a linked container.#2017-08-0517:35gonewest818The exception is console_1 | ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ119007: Cannot connect to server(s). Tried with all available servers.]
console_1 | at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:799)
console_1 | at datomic.artemis_client$create_session_factory.invokeStatic(artemis_client.clj:114)
console_1 | at datomic.artemis_client$create_session_factory.invoke(artemis_client.clj:104)
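For anyone trying to reproduce this setup, a minimal sketch of the container wiring being described follows; the service names, image names, and port mappings are assumptions for illustration, not taken from the thread:

```yaml
# Hypothetical docker-compose fragment. The console (like any peer)
# ultimately connects to the transactor at the host the transactor
# advertises via its host/alt-host settings, so that hostname must
# resolve from inside the console container.
services:
  transactor:
    image: my-datomic-transactor   # assumed image name
    ports:
      - "4334:4334"                # transactor port from transactor.properties
  console:
    image: my-datomic-console      # assumed image name
    links:
      - transactor                 # makes the hostname "transactor" resolvable
    ports:
      - "8080:8080"                # console web UI (-p 8080)
```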
#2017-08-0517:36hmaurer@gonewest818 as far as I remember I had to mess with the alternative host to get it to work last time I tried. It wasn’t the free protocol but the dev protocol, although I guess your problem might be similar#2017-08-0517:37hmaurerby alternative host I mean the “altHost” setting#2017-08-0517:37hmaurerIt might not have anything to do with your issue though, just my two cents…#2017-08-0517:37gonewest818thx#2017-08-0517:43gonewest818in transactor.properties protocol=free
host=0.0.0.0
alt-host=127.0.0.1
port=4334
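A sketch of the container-friendly variant of this config. The key point (inferred from the linked-container setup described above) is that alt-host is the address advertised to peers and the console, so 127.0.0.1 only resolves correctly from inside the transactor's own container:

```properties
protocol=free
host=0.0.0.0
# alt-host is written to storage and handed to peers/console;
# it must resolve from the other containers, so use the linked
# container's hostname rather than the loopback address.
alt-host=transactor
port=4334
```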
#2017-08-0518:02gonewest818ok I changed that to alt-host=transactor and I think it’s resolved#2017-08-0519:16hmaurer@gonewest818 sorry I was afk. Glad you got it fixed! 🙂#2017-08-0614:09gonewest818Thanks. Now the console works as expected but now my peer is throwing an exception when it first connects. Something thrown from netty. In spite of the exception the peer seems able to create a db, perform transactions, and do queries. I'm not able to look at this for a few days though. #2017-08-0710:26hmaurerDoes d/squuid start from some seed value? and if yes, what? It seems to always generate the same uuid in my tests#2017-08-0711:18jcf@hmaurer how many UUIDs are you generating in your tests?#2017-08-0711:19hmaurer@jcf one#2017-08-0711:23jcfThe most significant bits in a sequential UUID are a rounded timestamp so the UUID should change at least once per second regardless of any other randomness.#2017-08-0711:26jcf> Constructs a semi-sequential UUID. Useful for creating UUIDs that don't fragment indexes. Returns a UUID whose most significant 32 bits are currentTimeMillis rounded to seconds.
http://docs.datomic.com/clojure/#datomic.api/squuid#2017-08-0711:26jcfI'm not sure exactly what Datomic does inside squuid but based on that alone I'd expect the UUID to change every second or so.#2017-08-0711:27jcfCan you share some code, @hmaurer? So I can try to reproduce at my end. I've never seen a duplicate sequential UUID, but that's not to say it can't happen.#2017-08-0711:29hmaurer@jcf ah nevermind, it was a mistake on my end. Sorry!#2017-08-0711:30jcf@hmaurer no worries! Glad you've got it fixed.#2017-08-0712:34souenzzo@hmaurer in some cases you can use this
https://clojuredocs.org/clojure.core/with-redefs
(Overwrite squuid with a function that always returns the same uuid)#2017-08-0716:03sparkofreasonDoes anybody have suggested code or patterns for dealing with multi-tenancy, particularly on the transaction side, for instance, ensuring that the logged in customer only transacts to eid's in their partition. We have some code that does this, just curious how it compares with best practices etc.#2017-08-0716:09hmaurer@dave.dixon do you need to run transactions across tenants? if not, you could use a database per tenant#2017-08-0716:09hmaurer(I am new to Datomic so take everything I say with a grain of salt)#2017-08-0716:11sparkofreason@hmaurer Cognitect recommends keeping the number of databases per transactor small, as there is overhead associated with each DB, thus the approach to put one customer per partition.#2017-08-0716:11hmaurerAh!#2017-08-0716:12hmaurerTIL, thanks#2017-08-0716:13hmaurerI think @favila is developing a multi-tenant system; he might have some insights#2017-08-0716:14favilahuh, we run multiple dbs on one transactor#2017-08-0716:15favilawe would never consider putting multiple customers on the same db#2017-08-0716:25hmaurer@favila out of curiosity, I assume you never have to run cross-tenant transactions?#2017-08-0716:27favilayou mean like XA two-phase commit stuff? never#2017-08-0716:28hmaureryeah, like in the context of what I assume is your application, it would be transferring a medical record from one system to another atomically#2017-08-0716:28hmaurerbut it makes sense that you’re not doing that; I was just curious#2017-08-0716:29favilathere's no need for atomic operations across dbs#2017-08-0722:24jfntnIs generating an or expr the right way to test whether an attribute matches one of multiple values ?#2017-08-0722:56favilathe fastest way is usually [(contains? 
some-set ?the-v)]#2017-07-2722:57favila[(ground [:a :b]) [?option ...]] [?e :some-attr ?option] is ok for smaller sets#2017-07-2722:58favilashould be roughly equivalent to or#2017-07-2723:38sparkofreason@favila Can you give some idea of the number of DBs you're running on a single transactor?#2017-08-0800:06favilaAt least ten. Many have low write volume#2017-08-0800:07favilaWhere did you get advice to use fewer dbs?#2017-08-0804:35sparkofreasonI've gotten it multiple times from multiple people at cognitect. But I suspect they would view 10 as ok. I'm thinking more on the order of hundreds or more. 

That sounds pretty awesome#2017-08-0818:45uwoIf I transact tx-data and I don’t know if that tx-data actually changes any values - acknowledging that obviously it updates the time at which the values were asserted - what would be the way to diff :db-before and :db-after? for instance, say I attempt to add an index to attribute that already has one, I would want to look at a diff of before and after and see no changes.#2017-08-0819:14solussd@uwo any reason not to just look at the :tx-data in the transaction result map?#2017-08-0819:15uwo@solussd won’t that contain the datom that “updates” the value to the same thing as well?#2017-08-0819:16solussd@uwo it’ll contain a new transaction datom, but if nothing actually changed, that’ll be the only thing there.#2017-08-0819:24uwo@solussd thanks!#2017-08-0819:34timgilbertSay, I'm trying to get every tx that touched a certain entity out of datomic (for an auditing tool, performance is not currently an issue). So for a given eid I want to find transactions where it was either in the e or v position in the datom.
This is what I've got right now, but it's giving me () and I'm pretty sure my logic is wrong:
(d/q '[:find [?tx ...]
       :in $ ?log ?target
       :where
       [(tx-ids ?log nil nil) [?tx ...]]
       [(tx-data ?log ?tx) [[?e _ ?v]]]
       (or-join [?target ?v ?e]
         [(= ?target ?v)]
         [(= ?target ?e)])]
     db log eid)
#2017-08-0819:35timgilbert(In reality I'm trying to just get transactions with a certain bit of reified data on them, but I need to get this bit working first)#2017-08-0819:38timgilbertNot really sure how to use the ?e and ?v values I get back from (tx-data) in further datalog#2017-08-0819:41timgilbertI also tried it this way with the same results:
(d/q '[:find [?tx ...]
       :in $ ?log ?target
       :where
       [(tx-ids ?log nil nil) [?tx ...]]
       [(tx-data ?log ?tx) [[?e _ ?v]]]
       (or-join [?target]
         [?target _ ?v]
         [?e _ ?target])]
     db log eid)
#2017-08-0820:13timgilbertAha! Got what I was looking for via this:
(d/q '[:find [?tx ...]
       :in $ ?target
       :where
       (or [?target _ _ ?tx]
           [_ _ ?target ?tx])]
     (d/history db) eid)
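[editor] For reference, the same lookup can be sketched with the raw index API instead of Datalog (a hedged sketch, not from the original thread; assumes db is a peer db value and eid an entity id; the hypothetical helper name txes-touching is mine):

```clojure
(require '[datomic.api :as d])

;; Sketch: collect every transaction that touched `eid`, reading the
;; history db's indexes directly rather than running a query.
(defn txes-touching [db eid]
  (let [hdb (d/history db)]
    (into (sorted-set)
          (concat
           ;; datoms where eid is in the E position
           (map :tx (d/datoms hdb :eavt eid))
           ;; datoms where eid is in the V position; :vaet only
           ;; contains ref attributes, so no value-type guard is needed
           (map :tx (d/datoms hdb :vaet eid))))))
```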
#2017-08-0901:16favilaBe careful, there is no guarantee that your second clause will match entity ids only. You should guard the attribute valuetype, or use d/datoms :eavt and :vaet directly#2017-08-1015:34timgilbertBelatedly following up, but this would just be guarding against the possibility that I happen to have an int whose value is the same as the EID I'm passing in, right? Otherwise I'm thinking the ?target would restrict the value adequately#2017-08-0820:53devthsometimes tempids get represented like {:part :db.part/user, :idx -1000944} when nested inside a map. trying to figure it why 🤔#2017-08-0820:53devthalso, i need a reliable way to detect if something is a tempid, and this sneaks past my checker#2017-08-0821:34hmaurer@devth could you give an example? d/tempid will return something of that form:
wef-backend.core=> (d/tempid :db.part/user)
#datomic.db.DbId {:idx -1001185 :part :db.part/user}
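[editor] A sketch of a tempid check based on that printed form (assumes the peer library, where the record class is datomic.db.DbId as shown above; the helper name tempid? is hypothetical):

```clojure
(require '[datomic.api :as d])

;; Sketch: d/tempid returns a datomic.db.DbId record (which is why it
;; prints like a map), so an instance? check detects it reliably.
(defn tempid? [x]
  (instance? datomic.db.DbId x))

;; (tempid? (d/tempid :db.part/user)) ;=> true
;; (tempid? 42)                       ;=> false
```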
#2017-08-0821:35devthi'm not sure exactly - maybe it's just an artifact of pretty printing#2017-08-0821:35devthlooks like that is the case. (pprint (d/tempid :db.part/user))#2017-08-0821:36devththis outputs a string like {:part :db.part/user, :idx -1000050}#2017-08-0901:17favilaTempids are clojure defrecords @devth @hmaurer #2017-08-0901:17favilaThat is why they print like maps#2017-08-0901:18favila(type (d/tempid :foo))#2017-08-0901:18favilaYou can check for that type#2017-08-0916:12eoliphanthi, i have a modeling question. We’re kicking off a new project and one of the first things is coming up with a schema for people, orgs, etc. This is traditionally done with something like Parties and Roles. It makes sense logically, but most of the practical guidance is based on shoehorning that into a relational model. just wondering about doing it in a more “Datomicy” manner. I was thinking for something like Person (Party) and Customer (Role), that I’d have a person entity (:person/fname, :person/lname, etc) that refs a set of roles where one might be Customer (:customer/shippingaddress, etc)#2017-08-0916:15hmaurer@eoliphant I am completely new to Datomic but I am working on my first project and had to think about a similar modeling question so I can share with you my approach (which might be flawed, who knows…)#2017-08-0916:16hmaurerRoughly speaking, persons in my system are represented as “identities”, and roles as “facets”. An identity in my system can have a number of facets (administrator, coordinator, and a few others in my case)#2017-08-0916:16eoliphantsure 😉 I’ve been using it a bit now for small services, so we haven’t hit many major modeling issues. Now we’re starting to attack the core domain so now the fun begins#2017-08-0916:17hmaurerInitially I modelled it as a :identity/facets attribute with cardinality many, but ended up moving to an attribute per facet, e.g. 
:identity/administrator-facet, each of which may be null (I found it a bit easier to work with in my project)#2017-08-0916:17eoliphantso you have something like :person/facets as a ref?#2017-08-0916:17eoliphantok#2017-08-0916:18hmaurerAlthough in my project those “facets” are not just roles for access control, they are more like “profiles”, used both for access control and to store profile-specific attributes#2017-08-0916:18hmaurerif that makes any sense#2017-08-0916:19eoliphantours is even more fun. As you may have varying ‘facets’ that provide context for your relationship with other persons/organizations.#2017-08-0916:19eoliphantyeah totally#2017-08-0916:19eoliphant‘roles’ in this context aren’t necessarily authorization roles (though they may guide allocating them, etc)#2017-08-0916:19hmaurerMmh that sounds a bit similar to mine. My users might have multiple facets, with various attributes#2017-08-0916:19hmaurere.g. a user may be both an “administrator” and a “coordinator”#2017-08-0916:20eoliphantyeah#2017-08-0916:20eoliphantthat’s what we have#2017-08-0916:21eoliphantfurther for us it’s like ‘a user is a coordinator for “organization a”’#2017-08-0916:21hmaureryes exactly#2017-08-0916:21hmaurerfor me it’s coordinator of a “sector”…#2017-08-0916:21hmaurersame thing#2017-08-0916:22eoliphantnow the nice thing with datomic is automatic bidirectional refs#2017-08-0916:22hmaureralthough in your case you could imagine being coordinators of multiple organisations#2017-08-0916:22hmaurerso my model with an attribute per facet might not work for you#2017-08-0916:22eoliphantyeah but that’s close to what I was thinking#2017-08-0916:23eoliphantthanks !#2017-08-0916:23hmaurerNo worries. 
Please share if you have any striking insights on how to deal with this stuff 🙂#2017-08-0916:24eoliphantyeah i’ll definitely post or blog it 😉#2017-08-0916:24eoliphantI keep having to ‘de-relationalize’ my thinking lol#2017-08-0916:26hmaurerYeah 😄 I feel it’s very natural to think in terms of graphs when working with Datomic#2017-08-0916:30eoliphantyeah it’s far more natural in most cases, it’s just decades of ‘thinking in tables’ lol#2017-08-0916:30eoliphantI did some stuff with RaimaDB like two decades ago that was much closer to the way datomic works#2017-08-0916:30hmaurerI had never heard of RaimaDB#2017-08-0916:30eoliphantnot surprising lol#2017-08-0916:31eoliphantit was a hierarchical db#2017-08-0916:31eoliphantwe were using it for embedded stuff#2017-08-0916:31eoliphanti don’t even know if it’s still around#2017-08-0916:31hmaurerApparently it still has a website#2017-08-0916:31eoliphantjust checked lol cool#2017-08-0916:32hmaurerAs I said I am new to Datomic, and one of the things hurting my brain at the moment is the amount of discipline you need to model data properly#2017-08-0916:32hmaurerDatomic is basically a triple-store#2017-08-0916:32hmaurerIt’s very flexible in the way your data is modelled#2017-08-0916:32hmaurerWhich is both great and a bit scary#2017-08-0916:40eoliphantexactly, you can do pretty much whatever you want lol#2017-08-0917:46hcarvalhoaves@eoliphant if you namespace your attributes, then you don't need to treat it as inheritance#2017-08-0917:47hcarvalhoavese.g. any entity w/ :customer/shipping-address works as a customer#2017-08-0917:47eoliphantyeah I’ve thought about that as well#2017-08-0917:49eoliphantbut that’s a pretty simplistic example for us, i’m trying to balance the flexibility of just dumping attrs on an entity and keeping the model relatively robust. 
I guess in object terms, what I was thinking was more composition than inheritance?#2017-08-0917:50eoliphantgranted in various contexts#2017-08-0917:51eoliphantprogramattically i only care about certain attributes at given times#2017-08-0917:51eoliphantwhich would work better with your example#2017-08-0917:51eoliphanti think the main issue is that ‘you’ as a ‘person (Party) may assume certain roles in relation to other Parties#2017-08-0917:52hcarvalhoavesI think this is outside the scope of the db - the point of datomic is using the database to store data#2017-08-0917:52eoliphantSo that’s where I’ve been wondering if the breakout of the role specifc stuff onto another entity might be in order#2017-08-0917:53eoliphantsure, but i need to capture those relationships in the data#2017-08-0917:53eoliphantlike ‘find all customers for x’ or something#2017-08-0917:53hcarvalhoavesqueries are straightforward. e.g. a query like [?p :person/id ?some-id] [?p :customer/shipping-address] will return an empty set#2017-08-0917:54hcarvalhoavesbut the entity for that :person/id may valid in another context (not shipping someting)#2017-08-0917:55hcarvalhoavesbut this is different than querying like [?p :person/role :person.role/customer] or something like that#2017-08-0917:55eoliphantyeah and I’ve got some stuff along those lines. I think customer might have been a bad example lol, as it’s actually straightforward to model it’s just that in this domain you might be a ‘customer’ with varying attributes depending on certain relationships#2017-08-0917:55eoliphantno it’s not#2017-08-0917:56eoliphantwhat I was saying that you might have :person/roles pointing to a ref of type customer, with additional data on it#2017-08-0917:56eoliphantbut yeah if it’s just simple ident#2017-08-0917:57eoliphantthen plopping them on the person is cleaner#2017-08-0917:58hcarvalhoavescan two entities point to the same customer ref?#2017-08-0917:59eoliphanttwo persons probably not. 
but the person and the thing that you’re a customer of#2017-08-0917:59eoliphantbut i see where you’re going#2017-08-0917:59eoliphantif this stuff is particular to the person (customer) why is it another entity#2017-08-0917:59hcarvalhoavesalso, can you have a customer without a person? does that make sense?#2017-08-0917:59eoliphantright#2017-08-0917:59eoliphantand generally no#2017-08-0918:00eoliphantbut we have some situations where that might be the case (not customers lol).#2017-08-0918:00eoliphantbut I think i’ll use that heuristic#2017-08-0918:00eoliphantin that if it truly makes no sense by itself#2017-08-0918:00hcarvalhoavesI think you're trying to model tables on top of datomic w/ refs#2017-08-0918:00eoliphantput it on the person#2017-08-0918:00eoliphantyeah that’s more than possible lol#2017-08-0918:01eoliphantit’s an ongoing struggle#2017-08-0918:01hcarvalhoavesif all this data is about the same "thing", they share an entity#2017-08-0918:01eoliphantthat i tend to fall into ‘rectangles’#2017-08-0918:01hcarvalhoavesso just use attributes#2017-08-0918:01eoliphantright#2017-08-0918:01eoliphantand to your point#2017-08-0918:01eoliphantthe role relationships#2017-08-0918:01eoliphantcould themselves simply be other attributes#2017-08-0918:16hcarvalhoavesanother way to think about it is: you store facts, and queries project these facts into tables#2017-08-0918:32eoliphantyes good point. I’ve done some other small greenfield projects and didn’t have too much of an issue with modeling around facts#2017-08-0918:32eoliphantin this case i’m working on our legacy domain#2017-08-0918:33eoliphantso kind of have to ‘unsee’ aspects of the current approach to get this right i think#2017-08-0919:17hmaurer@hcarvalhoaves are you sure it is reasonable to store everything that is about the same “thing” under one entity? 
It seems problematic, particularly if you have a one-to-many relationship, where the many things could be considered to be a part of the “thing”#2017-08-0919:18hmaureryou can’t really model that with attributes on a single entity; refs seem to make sense#2017-08-0919:18hmaurerwith :db/isComponent true#2017-08-0919:18hcarvalhoavesif you have a one-to-many relationship <- then you don't have one entity anymore#2017-08-0919:19hmaurerwell, under your definition, I could (if I understand it correctly). The “many” pieces might not have an existence of their own (aka the answer to the question “can you have one of these without X?” would be no, where X is your “main” entity)#2017-08-0919:19hcarvalhoavese.g. in his case person and customer are just facets of the same entity - 1:1#2017-08-0919:20hmaurerright, but you could easily conceive a case where person has many “administrator” facets for different groups, or similar#2017-08-0919:20hcarvalhoavescan you give a concrete case?#2017-08-0919:21hmaurernot really, because the only case I have in mind could be solved without having multiple facets of the same kind#2017-08-0919:22hcarvalhoaveswell, going back to eoliphant's example#2017-08-0919:24hcarvalhoavesit boils down to this: if I assert entity e1 has :customer/shipping-address, does it matter what customer is?#2017-08-0919:26hcarvalhoavese.g. I could have :shipping/address#2017-08-0919:27hmaurermmh ok I think I see what you mean#2017-08-0919:27hcarvalhoavesthe notion of customer doesn't exist at the data level - it's in your application#2017-08-0919:27hcarvalhoavesthe only thing at the data level is that e1 has this new fact about it#2017-08-0919:30hcarvalhoavesnow, maybe it makes sense to assert that, at some point, this entity is now a customer. there are a few possibilities#2017-08-0919:32hcarvalhoavesusually, turning into a customer means there will be unique fact you want to assert, e.g. 
:customer/ssn#2017-08-0919:32hcarvalhoavesand then you can infer it started being a customer at the time of this transaction#2017-08-0919:33hcarvalhoaves(and if you retract those attributes, it stops working as a customer)#2017-08-0919:33hcarvalhoavesotherwise, usually you'll have a synthetic attribute like :customer/id#2017-08-0919:34hcarvalhoaveseither way, you have a way to distinguish what works as a customer or not in a query#2017-08-0919:39hcarvalhoavesI just don't know if the notion of "customer" matters much e.g. an entity can have many :customer/* attributes but without :customer/shipping-address, this "customer" doesn't exist for logistics#2017-08-0919:40hcarvalhoavesto get more philosophical here: in real life, is common to see many different interpretations of "is a", depending on who is consuming the data#2017-08-0919:40hcarvalhoaveshence why -> https://clojurians.slack.com/archives/C03RZMDSH/p1502302586735386#2017-08-0919:45hmaurer@hcarvalhoaves thanks, that’s very insightful. Bouncing off a small detail in your explanation: would you then have multiple “ids” attached to an entity, one for each of its “facets”? e.g. 
:person/id, :customer/id, etc#2017-08-0919:46hcarvalhoavesIMO it's fine, it's just another unique attr#2017-08-0919:46hmaureryou talked about a problem I encountered in my project: I have a facet which has no attributes, so I need some “witness attribute” to know an entity has this facet#2017-08-0919:46hcarvalhoavesthe reason you usually need those ids is because entity ids are internal to datomic - you don't want to expose those#2017-08-0919:52hcarvalhoavesrelated: https://en.wikipedia.org/wiki/Ship_of_Theseus#2017-08-0921:27csmAm I totally off-base with this approach, or is it reasonable?#2017-08-0921:28csmor would doing the map/frequencies calls be more advisable in the client-side code (and, we connect via the client API in this case)#2017-08-1015:49adamfreyI'm using datomic free to query against some raw clojure collections, a la https://gist.github.com/stuarthalloway/2645453 and when I tried to use a pull expression in a find clause I get:
clojure.lang.LazySeq cannot be cast to datomic.Database
is that expected?#2017-08-1019:27pesterhazy@adamfrey does it work if you turn wrap the sequence in vec?#2017-08-1019:32adamfreyno I tried that, same error but with clojure.lang.PersistantVector or whatever#2017-08-1020:55hcarvalhoaves@adamfrey it seems the pull API is defined in the Database interface. you can use (datomic.api/create-database "datomic:") and transact some datoms to query against#2017-08-1021:00hcarvalhoavescan also use http://docs.datomic.com/clojure/#datomic.api/with to always start from an empty db too#2017-08-1101:25uwois there any way to tell apart the datoms in :tx-data that were transacted by the user versus ones that were tacked on like the transaction datum, and in some cases :db.alter/attribute datums?#2017-08-1113:46marshall@uwo http://docs.datomic.com/transactions.html#making-temporary-ids
The transaction datom is in a different partition, as are schema datoms#2017-08-1113:49uwo@marshall thanks. I did notice that. Are the only other generated datoms db.alter and db.install (ignoring the retracts that are implicit in an update)#2017-08-1113:51marshalltxInstant#2017-08-1113:51marshallbut yes, i believe that’s it#2017-08-1113:52uwoexcellent. thanks!#2017-08-1114:58tengI found the problem, I need to add (:db-after db) instead of only db!#2017-08-1118:23ibarrickIs there a version of the datomic client api that doesn't conflict with ring-jetty-adapter? There doesn't seem to be any permutation of exclusions that results in both datomic and lein ring server functioning correctly.#2017-08-1200:11hmaurerHi! Quick question: is there a soft guarantee that results from a datalog query or pull expression will always come back in the same order if the query is executed on the same DB?#2017-08-1200:11hmaurerI am not familiar with the internals of Datomic/the datalog execution engine#2017-08-1200:12hmaurerThis doesn’t have to be a hard guarantee. Basically I am wondering if I could build cursor pagination on top of datomic by using a (tx-id, index) pair as cursor#2017-08-1200:13hmaurerThe tx-id would be used to get back the db at the point where the request was originally made, and the index would be the index in the result set#2017-08-1200:14hmaurerI don’t need those cursors to work forever; I would likely set a short expiry on them (since querying an old point in time in a Datomic db can be problematic for a variety of reasons; schema changes, etc)#2017-08-1202:09favilaIf result is a set, likely will always be in same order because hashing the same#2017-08-1202:09favila(Assuming set has same values: since queries can call arbitrary fns is not guaranteed)#2017-08-1202:10favilaBut if use aggregation probably all bets are off#2017-08-1223:10hmaurerHi! Quick question: is it OK performance-wise to use :db.unique/value a lot? 
In my application it seems that quite often when :db/isComponent true is used it makes sense to also declare the attribute as unique#2017-08-1316:03robert-stuttaford@hmaurer component only makes sense on refs, and i don’t think you can unique a ref. perhaps a schema code example to explain what you mean? 🙂#2017-08-1316:38hmaurer@robert-stuttaford Hi! Are you sure about this? I just tried :db/unique :db.unique/value on a :db.type/ref attribute and it seems to behave as expected#2017-08-1316:38hmaurerWhich makes sense as, as far as I understand, a ref is just an ID (long or something?)?#2017-08-1316:48hmaurer(by behave as expected I mean it does throw an exception if trying to transact the same ref twice)#2017-08-1316:49hmaurerAnd an example would be “Users” who have an “Account”. If, in a particular application, each user has a distinct account which is referenced by :user/account, then it makes sense to make :user/account unique.#2017-08-1316:50hmaurerThe reason why I mentioned :db/isComponent is that it seemed to me that every ref that is a component should also be unique (as far as I can see)#2017-08-1316:51hmaurerUnless I am mistaken, the semantics of :db/isComponent are that whenever a “parent” entity is retracted, its components are also retracted, recursively#2017-08-1316:52hmaurerSay you have two entities A and B, where B is a component of A (e.g. there is a ref from A to B with :db/isComponent true)#2017-08-1316:53hmaurerI cannot think of a scenario in which you would want other entities to hold refs to B with the same attribute#2017-08-1316:53hmaurerin a sense, B “belongs to A”#2017-08-1317:26favilaYou can unique a ref#2017-08-1317:27favilaThe expected invariants of an isComponent are redundant with the unique-value invariants. 
If you want a little more safety, go ahead#2017-08-1317:28favilaBut it's still not enough for full safety#2017-08-1317:29favilaAn isComponent-referenced entity should only be in the V of a single datoms in the entire database (for a given t)#2017-08-1317:29favilaUnique-value will only guarantee it's in a single V for a given attribute#2017-08-1317:30hmaurerWhat is the reasoning for “should only be in the V of a single datoms”? I agree though, it’s the conclusion I came to regarding a given attribute#2017-08-1317:34favilaYou don't want any references to that entity from other attributes either#2017-08-1317:34favilaOtherwise you will have a surprise when you db.fn/retractEntity#2017-08-1317:36hmaurer@favila makes sense, thanks! So the conclusion is: the invariants expected by isComponent are stronger than :db.unique/value, but are not enforced. Adding :db.unique/value enforces part of the expected invariants.#2017-08-1317:36hmaurerFollow up question: is there any significant cost in marking attributes as unique?#2017-08-1317:36hmaurere.g. can I afford to mark a lot of attributes in my DB as unique?#2017-08-1317:37hmaurerand can unique constraints be dropped later on?#2017-08-1317:37hmaurerdropped and/or added#2017-08-1317:41favilaIndexing time and disk space are the only cost. You can change constraints later#2017-08-1412:11malcolmsparksis there a way of restoring a database to a specific t? - the docs say no
> Note that you can only restore to a t that has been backed up. It is not possible to restore to an arbitrary t.#2017-08-1412:12malcolmsparksbut is there another more long-winded way?#2017-08-1412:37dm3@malcolmsparks retracting all datoms after t?#2017-08-1412:38dominicm@dm3 yes#2017-08-1414:35uwoany reason why I would see
(:db/unique (d/entity db :my-attr)) => :db.unique/identity,
but also
(:db/index (d/entity db :my-attr)) => nil?#2017-08-1414:46favilajust means it was transacted like {:db/unique :db.unique/identity}#2017-08-1414:46faviladatomic doesn't try to make them consistent#2017-08-1414:46uwogotcha. so even though (:indexed (d/attribute db :my-attr)) => false it’s still actually indexed#2017-08-1414:47faviladouble-check with (:has-avet (d/attribute db :my-attr))#2017-08-1414:47favilashould be true#2017-08-1414:48uwoyeah, it’s true. thanks#2017-08-1509:12kennethkalmerI’m curious how folks are dealing with slow queries, more specifically slow queries that eagerly load a ton of results for further processing. I’ve done as much as I can to speed up queries (liberally adding indexes, isolating huge swaths of data in separate partitions to help the peer load less segments), and now I’m starting to hit limits due to the size of data…#2017-08-1509:14kennethkalmerMy next experiment is to try and simplify the queries to only give me a starting point, and then try various combinations of core.async and transducers to walk the graph and see if that speeds things up#2017-08-1509:14kennethkalmerJust curious if anyone else has gone down this path and has any advice to offer#2017-08-1509:27val_waeselynckI've had the same problem, my approach has been to offload the work to ElasticSearch. 
I think Datomic is just not well suited for low-latency analytical queries that span a lot of data; fortunately, thanks to the txReportQueue and the Log API, it's very well suited to be a source for derived data systems.#2017-08-1509:29val_waeselynckAlso note that the current Datalog engine is not completely immune to the N+1 problem; I've observed that running a Datalog query which only needs one index access is still 100x slower than using the raw index API - as if there was some startup time associated with the Datalog engine#2017-08-1509:31val_waeselynckOf course I encourage you to profile and draw your own conclusions#2017-08-1509:43kennethkalmerthanks, these very useful insights! I’m already working off derived data (source data is also in datomic but in different partitions), but there are just some conditions that datalog seems to fall flat under#2017-08-1509:44kennethkalmerI’m also trying to keep the stack very flat and simple, it is not a huge app, just a lot of varied data (but not “big data” either)#2017-08-1509:54robert-stuttafordyou should use Datalog when you need to model joins. if you know you’ll only be using one index, you can almost certainly do it faster with d/datoms, because Datalog will always produce two result sets one for the clause, and one for the find expressions. reductions over d/datoms produce only one.#2017-08-1510:01kennethkalmerThanks Rob, I’ll have a look. I’m really sure I can do it with one index, if not, it won’t be a big jump to restructure the derivatives to make this possible#2017-08-1510:31kennethkalmerZOMG Rob! 
You just opened my eyes to something amazing#2017-08-1513:27ibarrickIs there a way to add :db/doc to a transaction using the client API?#2017-08-1513:28favila{:db/id "datomic.tx" :db/doc "my doc"}#2017-08-1513:30ibarrickCould you point me to a place I can go to understand that?#2017-08-1513:30favilahttp://docs.datomic.com/transactions.html#temporary-ids#2017-08-1513:31favila>the temporary id "datomic.tx" always identifies the current transaction#2017-08-1513:31ibarrickPerfect, that's exactly what I needed. Thank you!#2017-08-1520:51ibarrickI'm getting: The following forms do not name predicates or fns: (tx-ids) Am I not able to use the helper functions for the Log API from Client API?#2017-08-1615:09ibarrickI can't figure out what's wrong with the following query: (d/q '[:find ?tx ?e :in ?log ?t1 ?t2 :where [(tx-ids ?log ?t1 ?t2) [?tx ...]] [(tx-data ?log ?tx) [[?e]]] [?tx :db/txInstant ?time]] (d/log conn) t1 t2)#2017-08-1615:10ibarrickIt tells me IllegalArgumentExceptionInfo :db.error/invalid-data-source Nil or missing data source. Did you forget to pass a database argument? and it works fine if I take out the [?tx :db/txInstant ?time]#2017-08-1615:16jazzytomatonot sure but what if you add $ '[:find ?tx ?e :in $ ?log ?t1 ?t2 :where#2017-08-1615:18favila@ibarrick d/log looks suspicious, are you sure that is serializeable?#2017-08-1615:18favilaI suspect that just won't work with client api#2017-08-1615:19favilawait you're not talking about client api anymore#2017-08-1615:20favilathe clause where you go [?tx :db/txInstant ?time] requires a "data source" (i.e. a datomic db) to satisfy#2017-08-1615:20favilaso you need to provide a db as args too#2017-08-1615:20ibarrick@jazzytomato That told me it was expecting 4 arguments but only received 3. 
I was able to get results by adding (d/db conn) right before (d/log conn) but I'm not positive why I need to include the db and the log or even if I'm getting the results I want with that.#2017-08-1615:21ibarrickMaybe @favila just answered my question though. I switched from the client API because I couldn't get tx-ids or tx-data to work at all with queries from the client api#2017-08-1615:24thegeez@ibarrick didn't test this but maybe this works: (d/q '[:find ?tx ?e :in ?log ?t1 ?t2 :where [(tx-ids ?log ?t1 ?t2) [?tx ...]] [(tx-data ?log ?tx) [[?e :db/txInstant ?time]]]] (d/log conn) t1 t2)
#2017-08-1615:24ibarrickSo why can I query over the return of (d/log conn) (using just tx-ids and tx-data) if it isn't technically a "data source"?#2017-08-1615:25favilalook at your query clauses: none of them are actual clauses, they are just functions and destructuring#2017-08-1615:25favilathe data flows through just fine#2017-08-1615:25favilait's when you have [$ ?e ?a ?v ?tx] clauses that a data source is involved#2017-08-1615:28favilanote also that ?tx = ?e#2017-08-1615:30ibarrickI think I follow you on the first part but I'm not sure why ?tx would be equal to ?e and in my results they appear to not be equal#2017-08-1615:30favilaI was looking at @thegeez 's example#2017-08-1615:31favila@ibarrick what is it you really want? tx, entities mentioned, tx-instant?#2017-08-1615:31favilayour end goal, not your impl#2017-08-1615:32ibarrickOh okay. what about the "$"? should my [?tx :db/txInstant ?time] have been [$ ?tx :db/txInstant ?time]?#2017-08-1615:33ibarrickAnd yes I want exactly what you described#2017-08-1615:33favilaI am reminding you that clauses like that have a source var, which can be omitted (and defaults to $ if you do)#2017-08-1615:33favilaso if a clause could take a source-var at the beginning, it's a clause that needs a db ("data source")#2017-08-1615:33ibarrickThis seems like the easiest way to get a log of what changes were made to which entities during a time period, ordered by time.#2017-08-1615:34favilayou want what changes, or just the entities?#2017-08-1615:34favilawhy not use the log directly and ignore query?#2017-08-1615:37ibarrickI guess I could just use the log, but how would I extract the timestamps of the transactions? Would I have to do a query for each unique transaction returned from the log?#2017-08-1615:39favila(:db/txInstant (d/entity db ?tx))#2017-08-1615:39favilaor you could extract it from the tx data itself#2017-08-1615:40favila(probably not worth the trouble)#2017-08-1615:42favilaE.g.#2017-08-1615:42favila(let [db (d/db conn)
      txs (-> (d/log conn) (d/tx-range 0 (inc (d/basis-t db))))]
  (->> txs
       (take 1)
       (map (fn [{:keys [t data] :as tx-info}]
              (conj tx-info (find (d/entity db (d/t->tx t)) :db/txInstant))))))#2017-08-1615:42favilapretty much equivalent, but also lazy#2017-08-1615:56ibarrickWhat would the implementation of the first option look like? the ?tx makes me think that goes in the query itself but I didn't think you could do all that inside a query (and also you mentioned not doing a query)#2017-08-1615:56favilafirst option?#2017-08-1615:57ibarrickyou said: (:db/txInstant (d/entity db ?tx)) or extract it from the tx data itself. I assumed the code sample was the implementation of the latter#2017-08-1615:57favilano, the code sample is doing the first#2017-08-1615:58favilaso I gave you an example of reading the tx log and adding a :db/txInstant key to the returned maps, without doing a query#2017-08-1615:58favilaThat is in this line: (conj tx-info (find (d/entity db (d/t->tx t)) :db/txInstant))))))#2017-08-1615:59favilawe're just grabbing the tx entity, then pulling out the :db/txInstant value and adding it to the tx-info map#2017-08-1615:59ibarrickOh I see. The code segment makes perfect sense I just misunderstood which one that referred to#2017-08-1615:59favilaas a query it would look like this:#2017-08-1616:00favila(d/q '[:find ?tx ?tx-data ?time
       :in $ ?log ?t1 ?t2
       :where
       [(tx-ids ?log ?t1 ?t2) [?tx ...]]
       [(tx-data ?log ?tx) ?tx-data]
       [?tx :db/txInstant ?time]]
     (d/db conn) (d/log conn) 1000 1001)
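[editor] Since d/q returns an unordered set, a usage sketch for time-ordering that result (stand-in data, not from the thread; each tuple is [tx tx-data instant]):

```clojure
;; Sketch: sort the #{[?tx ?tx-data ?time] ...} result set by its third
;; element, the :db/txInstant value.
(def res #{[13194139534318 [] #inst "2017-08-16T10:00:00"]
           [13194139534313 [] #inst "2017-08-16T09:00:00"]})

(sort-by #(nth % 2) res)
;; => seq of tuples in ascending :db/txInstant order
```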
#2017-08-1616:00favilabut queries are not lazy#2017-08-1616:00favilaand this is a hack that doesn't use a db:#2017-08-1616:01favila(d/q '[:find ?tx ?tx-data ?time
       :in ?log ?t1 ?t2
       :where
       [(tx-ids ?log ?t1 ?t2) [?tx ...]]
       [(tx-data ?log ?tx) ?tx-data]
       ;; 50 is the :db/txInstant id
       [(identity ?tx-data) [[?tx 50 ?time _ true]]]]
(d/log conn) 1000 1001)#2017-08-1616:01favilahere we look in the tx-data itself for the tx instant#2017-08-1616:01favila(I wouldn't recommend this approach, but it's just to show what's possible)#2017-08-1616:02favilaa safer version needs to determine what the :db/txInstant attribute's eid is#2017-08-1616:02ibarrickYep, that's what the query I got working looked like exactly. Is the only intrinsic advantage to not querying laziness?#2017-08-1616:04favilait's presorted#2017-08-1616:04ibarrickI was just asking about the last query and then you fixed it 😅#2017-08-1616:04favilaquery assembles a set#2017-08-1616:05ibarrickI really appreciate you taking the time to go over all of this for me.#2017-08-1616:05favilaso if you want to sort by time, the non-query version will already be sorted#2017-08-1616:05favilathe query version you will need to sort after#2017-08-1616:05favilathe query version in theory can make use of parallelism, not sure it matters here#2017-08-1616:06favilahonestly some things are really just easier to express without a query too#2017-08-1616:06ibarrickIn my case the version without querying gives me a more manageable shape for the data anyways#2017-08-1616:08favilawhen I just need to read an entire index segment or tx log segment I find the non-query approach to be faster, use less memory, and be more straightforward#2017-08-1616:09favilait's only when I need to do actual pattern matching or walking across entities (i.e. 
real queries) that using tx-log in a query makes sense#2017-08-1616:09favilaI always have the history db as input too in those queries#2017-08-1616:10favilaprobably a tx-log query that doesn't have a history db as input is a sign that it's probably not an ideal query--just map over tx-range#2017-08-1616:17ibarrickI think this is all starting to click, thanks!#2017-08-1618:17marshallDatomic 0.9.5561.56 is now available https://groups.google.com/d/topic/datomic/ZO4lW8wI2MI/discussion#2017-08-1715:42matthaveneris it possible to delete a memory db? it looks like its still retained somewhere in the heap, even after d/release and d/delete-database#2017-08-1715:44potetm@matthavener Wouldn't its lifecycle on the heap be determined by GC?#2017-08-1715:46matthavenerthat’s what I was hoping, but it doesn’t seem to be the case 😞#2017-08-1715:46potetmWhy does it not appear to be collected?#2017-08-1715:47potetmOr, better phrased, what are the symptoms?#2017-08-1715:48matthavenerif I transact 1 mil datoms, call ‘d/delete-database’ and ‘d/release’, and then dump the heap, I see 1 mil datomic.db.Datum instances, even after a manual GC#2017-08-1715:48matthavenerso eventually, after repeating that pattern, my JVM runs out of memory#2017-08-1715:50potetmInteresting....#2017-08-1715:53potetmI'm guessing "manual GC" means (System/gc)? Have you confirmed that GC is actually being run (e.g. via jstat)?#2017-08-1715:54potetmI realize this isn't particularly helpful, but it might provide some useful data points. Could certainly just be that they said, "This is a dev tool. We're not going to worry about running out of memory." 
But only a rep could answer that.#2017-08-1715:55matthaveneryeah, we’re abusing the mem db for a kind of whacky kafka+datomic CQRS-type thing#2017-08-1715:56matthavenerI will check GC with jstat though, good idea#2017-08-1718:24chris_johnsonIs the use-case of running a Peer in AWS Lambda JVM runtime still in the status of “we don’t expect that to work and offer no advice or support”?#2017-08-1718:25andrewhrI believe the the Client API is the way to go in regards to running Datomic + Lambda. Peers will be off-loaded to old-school EC2 as usual#2017-08-1718:28chris_johnsonThat certainly seems reasonable, however what I was hoping to do was run a Vase service in Lambda and it uses the Peer library. I get as far as it refusing to launch because the transactor keystore and truststore are not at the literal file URIs starting with /datomic that the library expects#2017-08-1718:28chris_johnsonI guess my hobby project is going to be more involved than I had thought. 😄#2017-08-1719:20favila@matthavener are you sure you don't have a reference to the connection somewhere? a def or a *1 or something?#2017-08-1719:20matthavenerfavila: yeah, a bit more digging and I think that is the issue#2017-08-1719:21favilamem db connections probably have strong references to their data, whereas normal connections have some indirection#2017-08-1719:22favilastill seems like d/release should clear and poison the connection somehow#2017-08-1720:14devthtrying to track down a transaction that includes many retractions on a new db that doesn't contain many transactions. not sure how to form the query without performing a full scan. tried with:
(datomic/q
  '[:find (count ?e)
    :in $ ?log
    :where
    [?e ?a ?v ?tx false]
    [(tx-data ?log ?tx) [[?e ?a ?v _ ?op]]]]
  (datomic/history (latest-db))
  (datomic/log (conn)))
even if i could filter down to transactions that contain 10 or more datoms i would quicky find it#2017-08-1720:36devthcan i force re-indexing after an excision?#2017-08-1720:37devthjust tried fully excising 923 entities. doesn't appear to take effect as i assume it's going to async reindex at some point#2017-08-1720:41devthlooks like it took effect.#2017-08-1720:53favilayou can force reindex any time#2017-08-1720:54devthrequest-index is async though#2017-08-1720:54devthany way to see progress or block ?#2017-08-1720:56favilad/sync-index#2017-08-1720:56favilahttp://docs.datomic.com/clojure/#datomic.api/sync-index#2017-08-1720:57devthah, missed that. apparently sync-excise too. though i don't understand how sync-excise could not communicate w/ transactor#2017-08-1720:58favilaprobably looks for something in the index#2017-08-1720:58favilaexcisions are recorded#2017-08-1720:58devthis it building an index in the peer?#2017-08-1720:58favilano, peer looks at storage for the latest root#2017-08-1720:58favilawhen it moves, that's the new index#2017-08-1720:59favila(means an indexing completed)#2017-08-1720:59devthoh, so it doesn't communicate with tx'or but it does hit storage#2017-08-1720:59favila"doesn't communicate" may just mean waits vs sends a request#2017-08-1720:59favilatxor constantly pushes stuff to peer without peer asking#2017-08-1721:00devthah#2017-08-1721:00favilaso peer may just wait until it sees what it wants#2017-08-1721:00devthcool, makes sense#2017-08-1721:00favilain any given case I am not 100% sure if knowledge is from transactor or storage#2017-08-1721:01favilabut the index roots are definitely pulled directly from storage; maybe transactor also informs (via a push) peers that storage is updated#2017-08-1721:02devthok. interesting.#2017-08-1722:25gworley3i'm seeing some weird behavior in my code. 
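Circling back to devth's retraction hunt: favila's earlier advice to "just map over tx-range" instead of querying could look like this sketch (retraction-heavy-txes and the threshold are hypothetical names):

```clojure
(require '[datomic.api :as d])

;; Scan the log directly rather than doing a full index scan; a datom's
;; :added is false for retractions, so count those per transaction.
(defn retraction-heavy-txes
  [conn threshold]
  (->> (d/tx-range (d/log conn) nil nil) ; nil nil = the whole log
       (keep (fn [{:keys [t data]}]
               (let [n (count (remove :added data))]
                 (when (>= n threshold)
                   [t n]))))))
```

e.g. `(retraction-heavy-txes conn 10)` to list [t retraction-count] pairs for transactions with 10+ retractions.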
I have code wrapped up in a try with (catch Exception e) but it doesn't seem to be catching db.error/transactor-unavailable clojure.lang.ExceptionInfo exceptions#2017-08-1722:26gworley3are these exceptions different in some way that try wouldn't catch them? doesn't seem like they would be but weird to see it ignoring the catch#2017-08-1723:37favilamaybe the exception is thrown out of a future you deref only outside the try/catch?#2017-08-1818:22devthi'm confused as to why excision is not taking effect after several hours 🤔#2017-08-1819:00devthit's a small db, and the same procedure worked on another similar-sized db in another env#2017-08-1821:30devthfinally took effect about 4 hours later. this is on a db with about 6000 entities.#2017-08-1902:24favilaExcision takes place during the next reindex iirc#2017-08-1902:25favilaReindex triggered by size of not-yet-indexed stuff. So may take a while on small/low volume db#2017-08-1819:12devththis was asked a few months ago, but has anyone done any work on a datomic prometheus exporter, or have examples of other metric exporters you built?#2017-08-1819:25matthavenerdevth: the javaagent and a default prometheus config works fine, but you can’t get any of the native datomic metrics out, just jvm metrics 😞#2017-08-1819:27matthavenerhm, I take that back, I suppose you could hook into prometheus with the callback system#2017-08-1819:36devthyep, so maybe a lib that wraps jmx_exporter?#2017-08-1819:37devthi'm thinking about hooking up a timbre logging callback just so i can at get some basic visibility on the metrics#2017-08-1820:16matthavenersomething like this https://github.com/yeller/datomic-riemann-reporter#2017-08-1820:16matthavener+ prometheus client lib#2017-08-1820:16matthavenershould be fairly straightforward#2017-08-1820:16devththanks. 
checking it out#2017-08-1821:53devththrew this together to get some quick visibility into metrics via logs https://github.com/devth/timbre-datomic-handler#2017-08-1901:17bbloomi curious how many folks have applications that have something like a global :entity/class keyword attribute - and how many people don’t - i’m curious what compelled ppl to include that attribute. or how people get by without it. war stories welcome 🙂#2017-08-1905:01foomipHi everybody, been playing around with Datomic for about 3 months. I have a transactor question I was hoping someone here could shed some light on. Is it possible to have 2 transactors on the same data store? So not an HA scenario, but rather 2 distinct apps with their own databases, but sharing eg. a Cassandra cluster database store.#2017-08-1914:17val_waeselynckYou can even have 2 apps share a transactor#2017-08-1917:25foomipYes you’re right you could do that. Though wouldn’t the transactor become a bottleneck before some thing like a cassandra database cluster would (with regards to write throughput)?#2017-08-2008:59val_waeselynck@U6QTF2C5A it sure allows for less write throughput (although I seem to recall someone saying that transactions from both dbs will be processed in parallel)#2017-08-2009:03val_waeselynckI've never set up cassandra storage for Datomic, but I guess what you could do is set up one cassandra table for each db on the same cluster. I see nothing in the configuration template nor the connection string that prevents you from doing that.#2017-08-2009:57foomipThanks @U06GS6P1N I guess the only limiting factor that I can think of would be licensing costs 😛 but I will try get a test setup going to test this out.#2017-08-2010:06foomipOK so I see the cassandra config cql statements make a default keyspace of datomic and a table datomic.datomic.#2017-08-2010:07foomipTransactor template has default option of cassandra-table=datomic.datomic set. 
So if you change the keyspace and table values accordingly you should be able to get more than one transactor on a cassandra cluster.#2017-08-1916:01robert-stuttaford@bbloom i dabbled with a type attr but didn’t enjoy what it did to queries - needing two clauses for a lookup e.g. :entity/type + :entity/identifier or when walking relationships. found that i prefer one attr that both denotes type (through its ns) and identifier (through its name), and suffer a little (some-fn :one/id :two/id) when the need for polymorphic code arises#2017-08-1916:01robert-stuttafordindirection in schema adds a LOT of cognitive overhead, i’ve found. data you can immediately understand with a uri and datomic.api is a big win#2017-08-1916:02robert-stuttafordi built https://github.com/Cognician/datomic-doc#namespace-list to make browsing and overall discovery easier for myself#2017-08-1916:10hmaurer@robert-stuttaford could you elaborate on your difficulties with a type attribute? I am doing something similar in my current app (first datomic application ever)#2017-08-1916:10hmaurerwhy would you need two clauses for lookup if your identifiers are globally unique?#2017-08-1917:38favilaYes I use an entity type attr, but I don't use it to alter meaning of the attrs on the entity (so I never need to double lookup) I use it to establish what attrs are expected or required#2017-08-1918:19bbloomthx for the responses folks - @robert-stuttaford could you elaborate a bit? was it necessary for the identifier to be paired with the type? seems like having a type attribute doesn’t mean you can’t have globally unique ids#2017-08-1918:20bbloom@favila - multi-spec style?#2017-08-2003:33favilaYes, similar, although predates and is not spec. Our entities are still open (some attrs are not entity-type specific), but we know required and optional attrs per attr. 
We also have cases where we need to refine (constrain further, Xsd style) particular attrs, an additional entity "subtype" (called profile) governs this. We don't have a comfortable way to handle this wth spec, and we didn't invent this type system so we can't just design the concept out#2017-08-2019:26bbloominteresting - thx#2017-08-2019:27bbloomfurther constraining specs is something i’ve wondered about too#2017-08-2016:34hmaurerIs anyone here using Adi? Any thoughts on it?#2017-08-2017:17val_waeselynckI considered it, but it was too constraining for my use case (AFAICT entity types are determined by the namespace of the keys, I'd like to have the ability to make exceptions to this rule). I'm also not a big fan of some design choices, like conflating the notions of database value and connection in a single 'datastore' abstraction - I believe you lose many of the architectural benefits of Datomic when you start doing that, because you're back to a client-server architecture.#2017-08-2018:46hmaurerThanks for the insights @U06GS6P1N 🙂#2017-08-2019:11val_waeselynck@U5ZAJ15P0 You're welcome 🙂 they're more opinions than insights though#2017-08-2109:42dm3I have a process which continuously pulls data from a source and writes it to Datomic. Most of the time the data stays the same, so only the tx datoms are asserted. However, I don’t care about capturing empty txs and they seem to take up a lot of the space in the db. What’s the best way to not record empty txs/prune them from the db periodically? I know I could 1) dump the db and reimport the data periodically - not great; 2) check if the tx-data to be asserted is exactly the same - would like not to do the work if can be avoided. Are there any better solutions?#2017-08-2110:37augustl@dm3 one thought that pops into my head is to wrap it all in a transactor function, and wrap the entire transaction in it. 
That function can use "with" and look at db-after to see if anything actually changed#2017-08-2110:38augustlso a transaction will look something like [[:ignore-noop .... normal tx data here ...]]#2017-08-2110:38augustlthen you could throw an error and use ex-info to determine that it was in fact just a noop, not an error#2017-08-2110:39augustlsince afaik using exceptions is the only way to abort a transaction in a transaction function, you can't use normal control flow#2017-08-2119:48dm3@augustl thanks, that seems like an OK third option 🙂#2017-08-2121:27hmaurer@dm3 note that depending on your system, it might be fine to do that check on the peer (e.g. query the database and check if your data has changed)#2017-08-2121:29hmaurerif it’s a periodic job and you know for fact that two jobs won’t be running concurrently you won’t have race conditions etc#2017-08-2121:29hmaurerso a transaction function might not be necessary#2017-08-2121:29hmaurer(but appears to be a more “solid” solution)#2017-08-2121:29hmaurerdisclaimer: datomic noob talking 🙂#2017-08-2121:30hmaurerthere might even be simpler solutions#2017-08-2121:31hmaurere..g what do you mean by “the data stays the same”? 
if your data source is something like a file and that when the data is “the same” it’s actually the same bit for bit (ordering preserved, etc), then you could just keep a hash of the last processed batch#2017-08-2121:31hmaurerbut I am getting a bit sidetracked, and it was likely not your question#2017-08-2206:32augustlI wish there was a "transactor library" as well as a peer library, as an alternative to shipping code as strings to the transactor#2017-08-2206:32augustla library where you spawn the transactor server yourself, and there's hooks for processing transactions and registering transaction functions#2017-08-2206:33augustlso you can redeploy your transactor service, rather than shipping strings and running code in a "foreign" place#2017-08-2206:33augustlwould also allow for transactor functions to be written in any JVM language#2017-08-2206:41dm3@hmaurer - it’s a bit more involved than that 🙂 I know I could do it on the peer, hence my 2nd solution. I’d really like to avoid it though.#2017-08-2211:43favila@augustl cognitect's advice is to add a jar to the transactor classpath with your code. 
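augustl's :ignore-noop idea a few messages up might be sketched as a transaction function like this (all names hypothetical; since throwing is the only way to abort, the peer catches the ex-info and treats :noop? as success):

```clojure
;; Install once as a :db/fn entity; assumes datomic.api is loadable on
;; the transactor (it is).
{:db/ident :ignore-noop
 :db/fn (datomic.api/function
          '{:lang :clojure
            :params [db tx]
            :code (let [res (datomic.api/with db tx)]
                    ;; a no-op tx speculatively produces only the tx
                    ;; entity's own :db/txInstant datom
                    (if (<= (count (:tx-data res)) 1)
                      (throw (ex-info "noop" {:noop? true}))
                      tx))})}
```

A peer would then run `@(d/transact conn [[:ignore-noop tx-data]])` and catch clojure.lang.ExceptionInfo, checking `(:noop? (ex-data e))` to distinguish a skipped no-op from a real failure.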
Then the tx fn impl just calls something from that jar#2017-08-2211:44favilaThe problem with these approaches is that it doesn't version code in the same timeline as data#2017-08-2211:48augustlah, didn't know that was possible, but that makes sense#2017-08-2211:48augustlare there any practical problems with not having the transaction functions in the database, versioning/history/timeline wise?#2017-08-2217:07raymcdermotthello folks - simple question that is annoyingly hard to google…#2017-08-2217:07raymcdermottI have a uri type#2017-08-2217:09raymcdermottI try to add “https:/some-url,.com”#2017-08-2217:09raymcdermottbut it blows up#2017-08-2217:09raymcdermottwhat is the magic to convert this string into a literal URi?#2017-08-2217:10raymcdermott:cause :db.error/wrong-type-for-attribute Value https://some-url.com is not a valid :uri for attribute :person/photo#2017-08-2217:32hmaurer@raymcdermott (java.net.URI. "")#2017-08-2217:33raymcdermottok, yes … so no literal?#2017-08-2217:33hmaurerwhat do you mean?#2017-08-2217:34raymcdermott#uuid#2017-08-2217:34hmaurerDatomic’s URIs are java.net.URI instances#2017-08-2217:34raymcdermottok, thanks#2017-08-2217:34raymcdermottI can move on 🙂#2017-08-2217:35hmaurer@raymcdermott ah; #uuid is a reader macro which creates an instance of java.util.UUID#2017-08-2217:35hmaurere.g.#2017-08-2217:35hmaurer=> (type #uuid "6c84fb90-12c4-11e1-840d-7b25c5ee775a")
java.util.UUID
#2017-08-2217:35hmaurerthere does not appear to be a similar reader macro for URIs#2017-08-2217:35raymcdermottyes, indeed. Thanks#2017-08-2217:35raymcdermottbit of a shame but there we are 🙂#2017-08-2217:35devthyou can make one 🙂#2017-08-2217:37raymcdermottit means you cannot put it into an edn file#2017-08-2217:37raymcdermottwhich is the main drawback#2017-08-2217:37kvltHey all, I’m running into an odd problem. On my docker container I’m unable to pull datomic-pro yet locally I am. I have set the following:
:repositories [["" {:url ""
:username [:env/datomic_username]
:password [:env/datomic_password]}]] and have those environment variables set in the docker container#2017-08-2217:37hmaurer@raymcdermott ah if it’s in an EDN file it’s easy#2017-08-2217:37kvltI know I’m doing something stupid, I just can’t figure out what#2017-08-2217:38devthi do the same. works for me. double check your env vars?#2017-08-2217:39devthexcept i use this format:
:repositories {"" {:url ""
:username [:gpg :env/datomic_repo_user]
:password [:gpg :env/datomic_repo_pass]}}
#2017-08-2217:43kvltI’ll try that format#2017-08-2217:43kvltTHanks!#2017-08-2217:43kvltAre you doing this on a docker container?#2017-08-2217:44devthyep#2017-08-2217:44devtha docker container running in CI builds an uberjar#2017-08-2217:44devthso at runtime i don't need datomic credentials to download the jar#2017-08-2217:45kvltHmmm fair enough#2017-08-2217:45kvltI’ll doubel check the creds again#2017-08-2217:46devthexec into the docker container and echo the env vars for a sanity check 🙂#2017-08-2217:50kvltI have done, they match#2017-08-2217:51kvltit downloads using wget#2017-08-2217:51kvltBut not through lein#2017-08-2217:51devthah, strange#2017-08-2217:51kvltCould not transfer artifact com.datomic:datomic-pro:pom:0.9.5407 from/to (): Not authorized , ReasonPhrase:Unauthorized.#2017-08-2217:51devthdifferent profiles activated?#2017-08-2217:51kvltDefinitely seems to be user/pass related#2017-08-2217:52devthnewlines at the end of the env vars?#2017-08-2217:54kvltI’m going to try using gpg, mind if I ask how you got those creds onto your CI?#2017-08-2217:54kvltDid you build them there?#2017-08-2217:54devthi didn't - i use gpg for local creds and env for CI#2017-08-2217:55kvltgpg --default-recipient-self -e ~/.lein/credentials.clj > ~/.lein/credentials.clj.gpg So you did this <- and just copied that to the ci?#2017-08-2217:56kvltSurely that wouldn’t as the private key isn’t known to the ci#2017-08-2217:58devthnope, not using gpg in CI#2017-08-2218:02kvltSorry I missread, my bad!#2017-08-2218:02devthnp 🙂#2017-08-2217:37hmaureryou can define custom tags when parsing the edn#2017-08-2217:37hmaurerwith the :readers options iirc#2017-08-2217:38raymcdermott@hmaurer I’m gonna have to think about that#2017-08-2217:39kvltThis all works fine outside of the container#2017-08-2217:39hmaurer(clojure.edn/read-string input {:readers {'uri #(new java.net.URI %)}}#2017-08-2217:39raymcdermottnice#2017-08-2217:52raymcdermott@hmaurer I’ll give it a shot, though TBH I find it irksome 
that the supported data types don’t have reader literals#2017-08-2217:54hmaurer@raymcdermott these are Datomic’s supported data types. I am not sure why uuid has a reader macro in clojure and URI does not, but you can’t have a reader literal for every type anyway#2017-08-2217:54hmaurermaybe someone else can elighten us on this#2017-08-2217:57alexmillerb/c java.net.URI is a disaster#2017-08-2218:35val_waeselynckDon't be so harsh, it's not that big an issue to have to do a network call for each equality check. Oh wait, it actually is!#2017-08-2218:50donaldballErrrr I thought that particularly design affliction affected java.net.URL, not java.net.URI…#2017-08-2219:18hmaurerwhy is it a disaster? (I am not familiar with java)#2017-08-2219:19hmaurer@U06GS6P1N wait what?!#2017-08-2219:20val_waeselynck@U04V4HWQ4 oh yeah my mistake. Apologies to java.net.URI 🙂#2017-08-2219:36raymcdermottwhat’s the emoji for fingers tapping on the bar 😉#2017-08-2219:54alexmilleroh yeah, I was thinking of URL (although URI is no picnic either)#2017-08-2219:55raymcdermottit is a supported type though in Datomic and one of the few ‘complex’ types#2017-08-2219:56raymcdermottand now it seems like it’s being deprecated, at least in this room#2017-08-2220:13donaldballjava.net.URI is a data structure dressed up as a class without providing any of the ostensible benefits of a class#2017-08-2220:14raymcdermottdescribes so many SDK classes…#2017-08-2220:15raymcdermottthe main thing is it saves me writing a regex#2017-08-2220:17raymcdermottthe constructors pack a decent amount of value#2017-08-2220:18raymcdermottAside from some minor deviations noted below, an instance of this class represents a URI reference as defined by RFC 2396: Uniform Resource Identifiers (URI): Generic Syntax, amended by RFC 2732: Format for Literal IPv6 Addresses in URLs. The Literal IPv6 address format also supports scope_ids. The syntax and usage of scope_ids is described here. 
This class provides constructors for creating URI instances from their components or by parsing their string forms, methods for accessing the various components of an instance, and methods for normalizing, resolving, and relativizing URI instances. Instances of this class are immutable.#2017-08-2220:19raymcdermottstill waiting for the knife in the heart of URI#2017-08-2220:21alexmilleryou can always validate with URI, then store in a string (or a map in pieces)#2017-08-2220:23raymcdermotteesh, really?#2017-08-2220:23raymcdermottdatomic has a uri type#2017-08-2220:23raymcdermottthat’s what we’re discussing, no?#2017-08-2220:25raymcdermottmy goal is to understand why it’s not good practise to use it#2017-08-2220:33alexmillersorry, I walked into this in the middle and carefully avoided all important context#2017-08-2220:34alexmillerI thought the question was why clojure didn’t have a #uri literal. if you’re using Datomic, then go for it.#2017-08-2220:35raymcdermottthe original point was that there is no easy way to supply uri literals in an edn file#2017-08-2220:35alexmillerah#2017-08-2220:36raymcdermottand then we moved on to user supplied readers which is like, yikes#2017-08-2220:36raymcdermottbut Ok#2017-08-2220:36alexmillerdoes datomic have a #uri reader? 
I don’t remember.#2017-08-2220:36raymcdermottand then a storm came down on URI 😉#2017-08-2220:36alexmilleryeah, my comments were really about URL, so I’ve managed to create maximal confusion#2017-08-2220:37raymcdermottif I say #uri “https://some-valid-uri” it barfs#2017-08-2220:37raymcdermottLOL, awesome work 🙂#2017-08-2220:39alexmillerif you want to represent uris which will end up in Datomic in edn, then I think creating your own reader literal for them is a reasonably sane thing to do (but I would call them #ray/uri or something namespaced so you don’t collide when some future Clojure version includes them in core)#2017-08-2220:40raymcdermottsure, I’m just a lazy whiner#2017-08-2220:40raymcdermott💀#2017-08-2220:40raymcdermottso many people are writing that code#2017-08-2220:41raymcdermottbut OK, good to know that there is nothing intrinsically negative in the use of said URI#2017-08-2220:42raymcdermott//out#2017-08-2220:42alexmillerif only it was possible to package useful code in a way that more than one person could use it#2017-08-2220:42alexmiller😜#2017-08-2220:43raymcdermott🤔#2017-08-2218:15robert-stuttafordi wouldn’t use the uri datatype @raymcdermott#2017-08-2218:16robert-stuttafordnot sure what using it gets you other than pain 🙂#2017-08-2218:40raymcdermottso uri is not useful in Datomic? Just use strings?#2017-08-2218:41raymcdermottmaybe somebody better tell the Datomic team 😉#2017-08-2218:43raymcdermottbut I will explain why types are sometimes useful if you insist @robert-stuttaford 😉#2017-08-2218:43raymcdermottgoes against the grain though#2017-08-2219:07kvltAnyone have a link to the docs for updating a transactor on aws?#2017-08-2219:19stijn@petr there are several ways to do it, but updating the cloudformation stack through the aws console is probably the easiest http://docs.datomic.com/aws.html#aws-management-console#2017-08-2219:21kvlt@stijn I was thinking more in terms of updating the version/cw metrics. 
Basically the stuff set in the config files#2017-08-2219:22stijnthe version is a parameter in the cloudformation stack#2017-08-2219:24stijni'm not sure about cw metrics though#2017-08-2219:31kvltOH sweet#2017-08-2219:32kvltI didn’t know that#2017-08-2221:39raymcdermott@hmaurer I tried to add my own reader as suggested and now the datomic reader is missing#2017-08-2221:40raymcdermottCompilerException java.lang.RuntimeException: No reader function for tag db/id, compiling:(core.clj:141:25)#2017-08-2221:40raymcdermottis there a way to chain them?#2017-08-2221:46hmaurer@raymcdermott how are you reading your edn file?#2017-08-2221:46hmaurerand how were you reading it previously?#2017-08-2221:48raymcdermott(def data-tx (edn/read-string reader-opts (slurp “resources/data.edn”)))#2017-08-2221:48raymcdermottwhere reader-opts#2017-08-2221:48raymcdermott(def reader-opts {:readers {‘uri #(new java.net.URI %)}})#2017-08-2221:49raymcdermottpreviously#2017-08-2221:49raymcdermott(def data-tx (read-string (slurp “resources/data.edn”)))#2017-08-2221:49raymcdermottso not using the edn reader#2017-08-2221:49raymcdermottor any opts#2017-08-2221:53hmaurerI am not too sure how the Datomic library defines the EDN tag literals, so I can’t advise you on how to merge your custom ones. Someone more knowledgeable might be able to answer 🙂#2017-08-2221:58raymcdermottso maybe @robert-stuttaford is right … pure pain … what do you think @alexmiller?#2017-08-2222:00raymcdermottIt’s midnight here so I have to go catch my carriage home…. 
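In answer to "is there a way to chain them?" — one sketch: clojure.edn/read-string ignores *data-readers*, but every data_readers.clj on the classpath (including Datomic's, which registers #db/id and #db/fn) is merged into *data-readers*, so it can be passed through explicitly:

```clojure
(require '[clojure.edn :as edn])

;; merge the classpath-registered readers (Datomic's #db/id etc.)
;; with the custom #uri reader, instead of replacing them
(def reader-opts
  {:readers (merge *data-readers*
                   {'uri #(java.net.URI. %)})})

(def data-tx
  (edn/read-string reader-opts (slurp "resources/data.edn")))
```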
ttyl#2017-08-2222:39alexmillerwell, you can either set up an external data_readers.clj in your project (it should be merged with all others found on the classpath) OR you can explicit bind your own around the reader call#2017-08-2222:39alexmillerbinding *data-readers* that is#2017-08-2222:40alexmiller(doc *data-readers*) is a good place to start#2017-08-2222:40alexmillerthis and all the related things around it is an area ripe for a guide#2017-08-2222:40alexmillerI guess I did write some stuff about it at https://clojure.org/reference/reader#tagged_literals#2017-08-2222:41alexmillerbut could use some more exposition and examples#2017-08-2223:04bbloomif i’m using lein - what’s the least painful way to get my code on the classpath in a way that datomic db.fn :requires can find it?#2017-08-2223:49bbloomthis is how i’ve unblocked myself:#2017-08-2223:49bbloom(let [cl (.getContextClassLoader (Thread/currentThread))
      dcl (clojure.lang.DynamicClassLoader. cl)]
  (.setContextClassLoader (Thread/currentThread) dcl)
  (add-classpath "file://./src"))
#2017-08-2223:49bbloom¯\(ツ)/¯#2017-08-2307:56raymcdermott@alexmiller and I guess that’s why I’m going to use type/string#2017-08-2307:56raymcdermott@alexmiller the datomic reader should support its official types as literals IMHO#2017-08-2307:56raymcdermottwhere can I add that feature request?#2017-08-2309:23rodHi (apologies if this is a stupid question but...) is there a way of connecting via the datomic.client library to an in-memory peer server for unit testing?#2017-08-2320:27laujensenJust upgraded a mysql installation. Now datomic wont start due to ""Requested object beyond end of cache 14" -how do I resolve?#2017-08-2323:52seonhokimAnybody who has experiences to use Datomic for a CLP(constraint logic programming), specially to be adopted in real-time processing, for example distribution center optimization or similar cases?#2017-08-2323:55seonhokimeven i’m not sure whether Datomic is a proper way to do that or not#2017-08-2323:59seonhokimI assume that core.logic is not enough to handle for the projects of which data size is big so DBMS should comes in, so I’m regarding to use Datomic.#2017-08-2415:28stijnIs there a way to reuse rules between queries in different transaction functions?#2017-08-2609:00val_waeselynckMaybe send the rules as an argument to the tx fn ?#2017-08-2619:12stijngood suggestion, thanks#2017-08-2415:40mgrbyte@stijn only way I can think of is to write a transaction function that returns the rules, then use datomic.api/invoke to retrieve them in other transaction functions#2017-08-2415:43mgrbytewithout adding a library to the classpath on the transactor that is#2017-08-2415:46andrea.crottithere are probably not a lot of nixos users using datomic, but at the moment all these commands using /bin/bash would fail on nixos
ag /bin/bash
bin/console
1:#!/bin/bash
bin/repl-jline
1:#!/bin/bash
bin/transactor
1:#!/bin/bash
#2017-08-2415:46andrea.crottithe fix is rather trivial, just /bin/bash -> /usr/bin/env bash, and it's generally better to use that form anyway#2017-08-2416:08the-kennyandrea.crotti: just replace them with /usr/bin/env bash#2017-08-2416:09andrea.crottiyeah I know that's the fix#2017-08-2416:09andrea.crottiI was suggesting that they could fix it datomic#2017-08-2416:09the-kennywhoops I thought this was the NixOS channel actually x) My bad!#2017-08-2416:09andrea.crottiI can probably report it somewhere#2017-08-2416:09the-kenny(I'm using Slack via the IRC gateway)#2017-08-2417:34ezmiller77Hey All, I'm deploying my little datomic project to aws instances, but the default c3.large is too expensive. I've been trying to use some of the smaller instances, but with no luck yet. Some of the possibilities like t2.small don't seem to be supported. If I try to run bin/datomic ensure-cf on properties file with that instance indicated, I get an error Key not found: t2.small. Does anyone know of a workable deploy with a cheaper instance?#2017-08-2417:37marshallm3.medium @ezmiller ?#2017-08-2417:46ezmiller77I'll give that a shot @marshall. Was hoping to try something a bit smaller, to save more money...#2017-08-2417:47ezmiller77Is there a list somewhere of which instances are available, and what specifically an instance needs to support in order to run a datomic transactor?#2017-08-2417:58marshall@ezmiller77 If you look at the generated cf template JSON you’ll see a map of all instance types; theoretically you can use any of them - some folks have been able to get the transactor running on small/micro instances, but you’d need to heavily tweak heap and memory index settings#2017-08-2417:58marshallif you have any substantial load, I would avoid burst instances#2017-08-2417:59ezmiller77I really don't have any substantial load at this point. Any links to guidance on tweaking heap and memory index settings would be much appreciated. 
Thanks!#2017-08-2417:59marshalldepends how much memory you’re working with; the JVM can (and will) require more than you specify with Xmx, plus you need some for your OS#2017-08-2418:00marshallthe memory-index-max will be taken from the heap#2017-08-2418:00marshallso you need to reduce object-cache-max to allow that#2017-08-2418:00marshalland/or reduce memory-index-max#2017-08-2418:00marshalldefault is for object cache to take half of allocated heap#2017-08-2418:09ezmiller77Thx!#2017-08-2418:10marshall@ezmiller77 https://stackoverflow.com/questions/45501981/is-there-any-way-to-use-t2-small-ec2-instance-when-deploying-datomic-transactor FYI#2017-08-2418:10marshallfor what it’s worth - claim to use a t2.small#2017-08-2418:16ezmiller77Oh yeah, I tried that (some of my commments below the answer there...) Wasn't able to get that to work in the end...#2017-08-2418:16ezmiller77Might try again when I have a moment.#2017-08-2418:16marshalldefinitely need to adjust heap, probably object cache, maybe mem index max#2017-08-2418:23petterik@ezmiller77 we're using t2.small transactors. When calling ensure-cf we use any key that's valid/found. Then we manually enter "t2.small":{"Arch":"64h"}, into the produced cf.json. My java options are: java-xmx=1500m and I don't remember having to change the object cache or mem index max#2017-08-2418:24ezmiller77@petterik : that sounds much like what the stackoverflow question that @marshall linked says. Maybe I need to give it another shot!#2017-08-2418:29marshall@petterik @ezmiller77 http://docs.datomic.com/changes.html#0.9.5561.50 As of 0.9.5561.50 the ensure-cf script should work fine with t2.small (and pretty much all the other instance types)#2017-08-2418:30marshalllet me know if that’s not true, b/c it should be 🙂#2017-08-2418:30ezmiller77@marshall : good to know. 
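The knobs marshall mentions live in the transactor/CloudFormation properties; a sketch of what a memory-constrained setup might look like (values are illustrative only, not recommendations — java-xmx is the CloudFormation properties name, per petterik's t2.small report):

```properties
# cf.properties (CloudFormation): JVM heap; petterik reports 1500m on t2.small
java-xmx=1500m

# transactor properties: by default the object cache takes half the heap,
# so shrink it and the memory index to fit a small heap (illustrative values)
memory-index-threshold=32m
memory-index-max=256m
object-cache-max=128m
```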
If we are on a free version, we aren't able to upgrade is that right?#2017-08-2418:30ezmiller77I'm on an earlier version...#2017-08-2418:30marshalldepends when your Starter license maintenance period expires#2017-08-2418:30ezmiller770.9.5544#2017-08-2418:31marshallyou can access updated versions until your maint. period ends (a year)#2017-08-2418:32marshall@ezmiller77 Looks like you got Starter in Jan 2017, so your license key is valid for all versions up until Jan 2018#2017-08-2418:33ezmiller77then after that you are frozen at whatever version you are on?#2017-08-2418:34marshalluntil you upgrade to a paid license#2017-08-2418:36petterik@marshall I'm on 0.9.5561, but I'll let you know if specifying t2.small doesn't work when I upgrade 🙂#2017-08-2509:08stijn@mgrbyte thanks for the suggestion#2017-08-2519:54dpsuttoni'm trying to run the day-of-datomic tutorials. I'm following the README to lein run -m datomic.samples.tutorials which is blowing up with a connection refused error. Has anyone run across this before?#2017-08-2519:55hmaurer@dpsutton can you link me the code?#2017-08-2519:55hmaurer(it’s on a git repo I assume?)#2017-08-2519:55dpsuttonhttps://github.com/Datomic/day-of-datomic#2017-08-2519:56dpsuttonyeah it's stu halloway's day of datomic. and just cranking it up is not working for me#2017-08-2519:57dpsuttonthere seem to be some in-memory examples which work fine, but some examples seem to be reaching out to a running instance which hasn't been started as far as i can tell#2017-08-2519:59hmaurer@dpsutton yeah, looks like partition_locality is hitting a non-mem conn https://github.com/Datomic/day-of-datomic/blob/59186b4b39c124e2d9d0e79243f3e373b0a0b9d9/tutorial/partition_locality.clj#L27#2017-08-2519:59dpsuttonyeah. i don't understand how this is supposed to work, honestly#2017-08-2519:59dpsuttonthe readme seems pretty silent about it#2017-08-2520:00hmaurerif it’s only a couple of tutorial steps I guess you can just skip them for now. 
Then later you have two options:#2017-08-2520:00hmaurereither set up a local datomic free transactor. As far as I can see there are steps in the README of the mbrainz repo to do it#2017-08-2520:00hmaurerhttps://github.com/Datomic/mbrainz-sample#2017-08-2520:01hmaureror adapt the tutorial to use an in-memory database (probably harder if you are not familiar with datomic)#2017-08-2520:02hmaureractually I get why they’re using a non-memory db…#2017-08-2520:02hmaurerAs far as I know you can’t restore a DB from a backup if you are using an in-memory DB#2017-08-2520:02hmaurerand the mbrainz-sample data is provided as a db backup#2017-08-2520:02hmaurer(which you can then restore to populate your database)#2017-08-2520:03hmaurerFollow the README from mbrainz-sample, it should guide you through the necessary steps#2017-08-2520:03hmaurer@dpsutton ^#2017-08-2520:07hkjelsI’m starting out a new project and the only thing I know for sure is that I’m going to use Datomic, but I haven’t really decided on a front-end yet. What are my best options these days?#2017-08-2520:07hmaurer@hkjels are you looking to use clojurescript or not?#2017-08-2520:08hkjelsClojureScript for sure#2017-08-2520:09dpsutton@hmaurer thanks a lot. i'll get on that in a second#2017-08-2520:21hmaurerLet me know how it goes. I’ll try it locally if you run into trouble#2017-08-2520:09hmaurerIn that case I can’t advise you much as I am new to Clojure. It depends how experimental you want to go. From what I gather something like Om next is a safe bet. 
There are also people attempting to do things with Datascript#2017-08-2520:10hmaurerI watched this talk recently: https://www.youtube.com/watch?v=aI0zVzzoK_E#2017-08-2520:10hmaurerhttps://precursorapp.com also works by syncing Datascript and Datomic#2017-08-2520:10hmaurerand is open-source (https://github.com/PrecursorApp/precursor)#2017-08-2520:11hmaurerIt seems non-trivial if your project has non-trivial requirements though (lots of data, authorization concerns, etc)#2017-08-2520:12hmaurere.g. in the talk linked above the speaker worked on a project where every user had full access and the data-set was fairly small, so he could afford to replicate it “dumbly” to every user’s browser over websocket#2017-08-2520:13hkjelsI see#2017-08-2520:13hmaurerYou might also want to watch this talk: https://www.youtube.com/watch?v=qijWBPYkRAQ#2017-08-2520:13hmaurer(they use Om Next)#2017-08-2520:14hmaurerYou might get more useful advice on frontend options if you ask on #clojurescript 🙂#2017-08-2520:14hkjelsThis was plenty 😀
I’ll have a look at these videos#2017-08-2520:15hmaurerAs a last bit of info: I am working on a project right now with Datomic on the backend. I have 0 experience with Clojurescript and I am new to Datomic so I didn’t want to go full experimental with Datascript/Om Next. I ended up writing some code to auto-generate a GraphQL schema from my Datomic schema + an extra map of information to handle the basic CRUD stuff.#2017-08-2520:15hmaurerThen I added some custom behaviour on top of that#2017-08-2520:16hmaurerI am using Javascript (ES6) + Relay on the frontend, hitting a GraphQL API mostly auto-generated#2017-08-2520:17hmaurer(my needs are very simple though; basic CRUD)#2017-08-2520:17hkjelsI haven’t decided if I’m going with safe just yet. If so, that sounds like a really good option, to generate GraphQL I mean#2017-08-2520:17hkjelsand safe for me would be re-frame#2017-08-2520:18hmaurer@hkjels the first talk I linked mentions re-frame. They are using https://github.com/mpdairy/posh, which is a library on top of re-frame and Datascript (from what I gather)#2017-08-2520:18hkjelsThis was great! I have the rest of my evening set 😉#2017-08-2520:18hmaurerEnjoy 🙂#2017-08-2610:25hmaurerIs a datomic.db.DbId always representing a temp ID?#2017-08-2617:27favilaIn theory it can represent any eid. Its purpose is to late-bind the partition id#2017-08-2619:24hmaurer@U09R86PA4 Oh I see. So I assume the db partition is encoded as part of a long entity ID.#2017-08-2619:24hmaurerA datomic.db.DbId is just an entity ID without the partition bit?#2017-08-2619:32hmaurerAh nevermind, what I just said was non-sense#2017-08-2619:33hmaurerIf I understand it correctly now, an entity ID is an encoded version (as a long) of a DbId, with the DB partition id resolved#2017-08-2619:33hmaurerWhich is what you said 😛#2017-08-2620:12favilahttps://groups.google.com/d/msg/datomic/0AZpa-YmkpY/bTV3hcQSs-MJ#2017-08-2621:00hmaurer@U09R86PA4 thank you! 
out of curiosity, where did you learn so much about Datomic’s internals?#2017-08-2621:01favilaScraps of info from talks, articles, etc, plus reverse engineering. We needed entity ids to survive in js, so we had to study their ranges#2017-08-2610:26hmaurerDB ids in query results are always of type Long#2017-08-2815:29stijnis there a way to show the final transaction after all transaction functions have been executed?#2017-08-2815:32hmaurer@stijn maybe the txdata returned by transact? although that will only contain the transacted atoms (e.g. fresh facts)#2017-08-2815:34stijn@hmaurer it is to debug an IllegalArgumentExceptionInfo :db.error/tempid-not-an-entity tempid used only as value in transaction datomic.error/arg (error.clj:57)#2017-08-2815:35stijnso it does not return tx-data, but throws#2017-08-2815:35stijnI can probably recursively call d/invoke, but if it exists that would be nice 🙂#2017-08-2815:35hmaurer@stijn ah, in that case I do not know of a built-in way to do it (I am new to Datomic). However you could pretty easily write some code to execute your transaction functions locally#2017-08-2815:36hmaureryeah exactly 🙂#2017-08-2815:44laujensenGents, I'm getting a "Critical failure, cannot continue: Indexing retry limit exceeded." from Datomic when it saves to MySQL. There's not a lot of data being moved, I estimate < 1 MB, and MySQL is set to max_allowed_packet 200M, so that shouldn't be a problem. How do I debug?#2017-08-2816:17raymcdermottAWS question ….
or clarification#2017-08-2816:17marshall@laujensen what is showing up in the transactor logs#2017-08-2816:17raymcdermottare there supported cognitect AMIs that automate the install?#2017-08-2816:18laujensen@marshall Pretty much just a few of what I pasted above. Then the server restarts the service#2017-08-2816:18raymcdermottthe manual references AMIs but I can’t see them in the AWS EC2 market place#2017-08-2816:18raymcdermott(at least not from Cognitect)#2017-08-2816:18marshall@raymcdermott yes, but they’re not marketplace products http://docs.datomic.com/aws.html#starting-transactor#2017-08-2816:19marshall@laujensen there should be some cause of the indexing failure above the retry limit exceeded message#2017-08-2816:20raymcdermott@marshall I have read that 3 times and still don’t get it 😉#2017-08-2816:20laujensen@marshall, yea, mysql is complaining about a BLOB size that exceeds 10% of the redo log, so I've tried increasing it, though it seems an odd problem for the low amount of data I'm pushing#2017-08-2816:21marshall@raymcdermott there is an AMI that will download the version of Datomic specified and launch it
it is a publicly available AMI, but it is not a Marketplace AMI#2017-08-2816:21raymcdermott@marshall …. ok so it’s all hidden behind create-cf-stack#2017-08-2816:22raymcdermottensure-cf / create-cf-template / create-cf-stack#2017-08-2816:23raymcdermottreading it the fourth time is the charm#2017-08-2816:50chris_johnson@raymcdermott Note that “hidden” is relative; the AMI IDs will be in a map called AWSRegionArch2AMI in the stack JSON created by create-cf-stack#2017-08-2817:04raymcdermottthanks @chris_johnson … I am not yet ready to poke around in their privates 😉#2017-08-2919:01uwois there anything that logs when an indexing job requested by d/request-index finishes?#2017-08-2919:23marshall@uwo you can look for :event :update/create-index with :phase :end#2017-08-2919:23uwo@marshall thanks!#2017-08-2919:37wilkerluciohello people, question: I'm trying to understand better the trade-offs between using enums vs keywords, I found this old post: https://groups.google.com/forum/#!topic/datomic/KI-kbOsQQbU#2017-08-2919:37wilkerlucioin terms of search performance, is this relevant? personally it seems that using keywords is simpler than enums, the search performance is significant between those?#2017-08-2920:11favilacomparing numbers likely faster than comparing an object with two string fields?#2017-08-2920:11favilaI doubt it matters in practice#2017-08-2920:12favilause enums if you want a closed, enumerable set; keywords if you want open#2017-08-2920:12favilae.g. if you typo a keyword type, no error, silently accepted; typoed enum will error#2017-08-3001:53wilkerluciothanks 🙂#2017-08-3014:19viniciushanaone point me and @wilkerlucio were discussing about this is index lookup. if you keep the keyword values in AVET, then lookups might match VAET lookups (which take place for refs when you use db/ident), so there's only this caveat. 
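The closed-vs-open distinction favila draws between enums and keywords can be sketched in schema form; the attribute names here are made up for illustration:

```clojure
;; Option 1: enum - a ref attribute pointing at ident entities (closed set).
;; A typoed value such as :order.status/shiped fails to resolve to an
;; entity, so the transaction errors.
[{:db/ident :order/status
  :db/valueType :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident :order.status/pending}
 {:db/ident :order.status/shipped}]

;; Option 2: plain keyword - open set; any keyword, including a typoed
;; one, is silently accepted.
[{:db/ident :order/status
  :db/valueType :db.type/keyword
  :db/cardinality :db.cardinality/one}]
```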
of course, never benchmarked anything of it so i wouldn't be surprised to see my assumptions wrong.#2017-08-3100:17georgekHi, I'm trying to bootstrap datomic using cassandra underneath. I've locally set up a 3 node cluster using the tool ccm, configured a cass transactor and have been able to use the datomic shell to create a database using the uri. However in my lein project I'm seeing errors upon trying to get a connection or create a database:
NoSuchFieldError DEFAULT_MAX_PENDING_TASKS io.netty.channel.epoll.EpollEventLoop.<init> (EpollEventLoop.java:84)
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
Throwables.java:160 com.google.common.base.Throwables.propagate
NettyUtil.java:136 com.datastax.driver.core.NettyUtil.newEventLoopGroupInstance
NettyOptions.java:96 com.datastax.driver.core.NettyOptions.eventLoopGroup
Connection.java:713 com.datastax.driver.core.Connection$Factory.<init>
Cluster.java:1381 com.datastax.driver.core.Cluster$Manager.init
Cluster.java:163 com.datastax.driver.core.Cluster.init
Cluster.java:334 com.datastax.driver.core.Cluster.connectAsync
Cluster.java:309 com.datastax.driver.core.Cluster.connectAsync
Cluster.java:251 com.datastax.driver.core.Cluster.connect
And from the cassandra log:
INFO [main] 2017-08-30 18:12:36,574 CassandraDaemon.java:527 - Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it
INFO [Native-Transport-Requests-1] 2017-08-30 18:13:57,297 Message.java:619 - Unexpected exception during request; channel = [id: 0xc819135d, L:/127.0.0.3:9042 ! R:/127.0.0.1:57026]
io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: Connection reset by peer
at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown Source) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
INFO [MigrationStage:1] 2017-08-30 18:14:31,868 ColumnFamilyStore.java:406 - Initializing datomic.datomic
INFO [HANDSHAKE-/127.0.0.3] 2017-08-30 18:18:10,274 OutboundTcpConnection.java:560 - Handshaking version with /127.0.0.3
WARN [Native-Transport-Requests-2] 2017-08-30 18:18:10,277 FBUtilities.java:336 - Trigger directory doesn't exist, please create it and try again.
I've followed carefully all the datomic docs and, again, the local shell can create a database which I can see in the datomic console but nothing works when trying to connect via a lein project in the repl.
Any ideas?#2017-08-3101:30georgekK, looked like the datomic docs are out of date on the driver they instruct you to use. It's listed as 3.1.0, and 3.3.0 appears to work fine#2017-08-3111:35laujensenIs there any way to force a transactor to start, even though it complains about “Requested object beyond end of cache at 14” ?#2017-08-3113:42dazldhttps://github.com/dazld/awesome-datomic - would be cool if others would like to contribute, or it'll just end up being my list of bookmarks 🙂#2017-08-3116:23gdeer81Is anyone using Datomic on a cloud platform other than AWS? For political reasons amazon is not an option where I work. #2017-08-3118:30hmaurerI am planning on using it on google cloud#2017-08-3118:30hmaurer(have experimented a little with it but I don’t have a full setup yet)#2017-08-3119:17ljosaThe big question is which storage backend to use. With DynamoDB out of the question, you're left with Cassandra and SQL. Do you have operational experience with either Cassandra or a SQL database in the Google cloud?#2017-08-3122:01gdeer81I think the issue with google cloud is the lack of long running sessions.#2017-08-3122:13hmaurer@U30H25RT6 the fact that google cloud storage isn’t supported by the backup protocol is also annoying#2017-08-3117:46ljosaI have a use case where it would make sense to have up to ~100,000 values for a multivalued string attribute. Is there a limit on the number of values? Should I expect slowness or other practical problems?#2017-08-3117:57gdeer81I'd set :no-history on that attribute#2017-08-3118:11ljosait won't see a ton of churn#2017-08-3118:18gdeer81oh, I guess I didn't understand the question fully before I replied#2017-08-3119:14marshall@ljosa There shouldn’t be any specific issue other than if you, i.e. pull * on that attribute you will get a lot back, so it could slow things down in that respect#2017-08-3119:14ljosacool, then we'll try it out in dev.
thanks!#2017-08-3122:00dimovichdoes anybody have some good examples of buddy+datomic projects?#2017-08-3122:03hmaurer@dimovich which uses of Buddy in particular?#2017-08-3122:17dimovich@hmaurer for a webapp, I need to authenticate users#2017-08-3122:21hmaurer@dimovich I haven't used buddy (clojure beginner here) but I doubt it is tied to a particular database backend. Just looking quickly at the doc it seems that it lets you define your own auth functions, in which you are free to make calls to Datomic.#2017-08-3122:25hmaurerTake what I say with a grain of salt, but for example you could first define a function which fetches a user from Datomic, like this:
(defn- get-by-credentials [db username password]
  (let [query '[:find ?e
                :in $ ?username ?password
                :where [?e :user/username ?username]
                [?e :user/password ?password]]]
    (ffirst (datomic/q query db username password))))
#2017-09-0100:43donaldballNote of course that regardless of the underlying storage medium, you should never store a password in cleartext; it should always be salted and securely hashed.#2017-08-3122:26hmaurerThen, assuming you are using Http-Basic auth:
(require '[buddy.auth.backends :as backends])

(defn my-authfn
  [request authdata]
  (let [db (:db request)
        username (:username authdata)
        password (:password authdata)
        user (get-by-credentials db username password)]
    user))

(def backend (backends/basic {:realm "MyApi"
                              :authfn my-authfn}))
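As a side note on the example above: donaldball's warning about cleartext passwords applies here. A sketch of the same lookup using buddy-hashers instead, assuming :user/username is :db.unique/identity and a :user/password-hash attribute exists (both names are hypothetical):

```clojure
(require '[buddy.hashers :as hashers]
         '[datomic.api :as d])

;; At write time, store only the derived hash, never the plaintext:
;; {:user/username "alice"
;;  :user/password-hash (hashers/derive "s3cret")}

;; At login, fetch the user by username alone, then verify the hash locally.
(defn- get-by-credentials [db username password]
  (when-let [user (d/entity db [:user/username username])]
    (when (hashers/check password (:user/password-hash user))
      user)))
```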
#2017-08-3122:27hmaurer(just adapted the example from https://funcool.github.io/buddy-auth/latest/#http-basic)#2017-08-3122:28dimovich@hmaurer thanks!#2017-08-3123:49johnjWhy doesn't datomic also refer to itself as a time-series DB? Doesn't being immutable and being able to query data by time make it a time-series DB too?#2017-09-0106:01val_waeselynckNot really I guess, shameless plug https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2017-09-0115:49hmaurer@U06GS6P1N out of curiosity, do you have a preferred approach to version data? especially when versioning relationships#2017-09-0518:08val_waeselynck@hmaurer not really. I don't have a one size fits all answer, I do it in an ad hoc way.#2017-09-0116:51laujensenI have some items going into a queue, each item goes through a function that needs almost full cpu-load and memory to complete, so only one can go at a time. What idiomatic tools do we have?#2017-09-0117:00marshall@laujensen core.async#2017-09-0117:00marshall+ transducer#2017-09-0117:00laujensenSorry I meant to post that in #clojure, fortunately Marshall jumps to the rescue 🙂 Thanks I'll have a look#2017-09-0117:00marshall😉#2017-09-0118:08uwoWhen we need to page a query against a large number of records, and assuming we’re not using a ranged query from the domain, is the next option using d/datoms?#2017-09-0120:37matthaveneruwo: yeah, or just do take/drop#2017-09-0120:38favilabe careful with that: order is not guaranteed#2017-09-0120:38matthavenergood point, that only works after you apply some domain sort#2017-09-0120:39favilagiven same input, and assuming the result is a set (not a bag), you should be fine#2017-09-0120:39favilabut I would sort first#2017-09-0121:35souenzzo(let [conn @config/conn
      schema [{:db/ident :ref/to-many
               :db/valueType :db.type/ref
               :db/cardinality :db.cardinality/many
               :db/isComponent true
               ;; ^
               :db/id (d/tempid :db.part/db)}
              {:db/ident :any/thing
               :db/valueType :db.type/string
               :db/cardinality :db.cardinality/one
               :db/unique :db.unique/identity
               :db/id (d/tempid :db.part/db)}]
      {db :db-after} (d/with (d/db conn) schema)
      tx-data [{:db/id "user"
                :any/thing "My User"}
               {:db/id "foo"
                :any/thing "001"
                :ref/to-many ["user"]}
               {:db/id "bar"
                :any/thing "002"
                :ref/to-many ["user"]}]
      {:keys [db-after tempids]} (d/with db tx-data)
      user (d/entity db-after (d/resolve-tempid db-after tempids "user"))]
  (d/touch (:ref/_to-many user)))
Is there some way to know if this touch will be on 001 or 002?#2017-09-0122:10favilano#2017-09-0122:10favilaThis violates the isComponent contract: the "user" entity is in more than one datom's :v#2017-09-0203:33souenzzoAre there specific docs about it?#2017-09-0613:06pesterhazywhy run as www-data??#2017-09-0614:30uwowe have a common query that appears to be causing an out-of-memory error from GC overhead. the query itself at the moment scans 3.3m records, which we then have to sort and page. Implementing a ranged query is top on my list, but are there any other recommendations?#2017-09-0614:50marshall@uwo have you done some optimization on the query itself? https://github.com/Datomic/day-of-datomic/blob/master/tutorial/decomposing_a_query.clj#2017-09-0614:51uwo@marshall the query only has one clause [?e :attr]#2017-09-0614:52marshall@uwo so that sounds like a very scan-like operation - “find me all the entities” - I’d just use the datoms API#2017-09-0614:54uwo@marshall yeah, definitely I’ve considered that as well. Will the memory profile be identical though since I still have to sort before I take/drop?#2017-09-0614:54marshallare you sorting on a single attribute?#2017-09-0614:58uwoinitially, yes. though we’re populating a table, and the user can sort by up to 3 columns if they click on enough column headers#2017-09-0615:04marshallwell, presumably you could get your initial sort from a d/datoms call on AVET#2017-09-0615:05marshallwhich means if you have additional sub-sorts, you’d only need to iterate through AVET until you reach a new V (i.e. the first ‘chunk’ of datoms that have the same value when sorted by your primary attribute), which you then would need to sub-sort by the additional attribute(s)#2017-09-0615:05marshalldepending on your data size/distribution, that approach may be able to reduce your memory overhead#2017-09-0615:07uwothanks @marshall.
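marshall's AVET suggestion above can be sketched roughly like this, assuming the attribute is indexed so it appears in AVET; the function and attribute names are hypothetical:

```clojure
(require '[datomic.api :as d])

;; Walk the AVET index, which is already sorted by value, instead of
;; realizing and sorting the whole query result in memory.
(defn page-by-attr [db attr offset limit]
  (->> (d/datoms db :avet attr)  ; lazy, value-sorted datoms for attr
       (drop offset)
       (take limit)
       (map :e)))
```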
I’ll experiment with that approach#2017-09-0615:12uwoalso, I realize this is app specific, but we’ve currently got 6GB on our app server. I’ve read the capacity planning documentation, but I’m still a little unsure how to figure out how large an object cache we should target, etc. any tips?#2017-09-0615:13uwowe’re starting the peer with -Xmx4g -Xms4g atm#2017-09-0615:13marshallit entirely depends on what your app is doing and what your db usage patterns are
the default is half the heap#2017-09-0615:14marshallif you have a 6gb box and you’re running into memory pressure, i’d probably use more for the heap unless you are running other things on the box#2017-09-0615:14marshallyou can look at your objectCache metrics to see how frequently you’re missing#2017-09-0615:15marshallto help tune whether you need more or could get away with less#2017-09-0615:15marshallalso, if you can use memcached that will help alleviate a lot - you can potentially leave object cache alone (or even reduce it) while adding memcached to give your application more headroom#2017-09-0615:16uwothanks. makes sense!#2017-09-0617:35ocis there a datomic equivalent of LIMIT or FETCH FIRST N ROWs? I'm running a query that returns thousands of entities and i only want the first 500#2017-09-0617:39favila(take 500 query-result)#2017-09-0617:40octhat takes the same amount of time to execute, so i'm assuming it's just filtering after the query has found all the thousands of results#2017-09-0617:40oc(time (d/q '[:find (take 500 ?e)
:in $ ?txt
:where [(fulltext $ ::s/ACCT_NME_TXT ?txt) [[?e]]]] db "test"))
"Elapsed time: 14039.683358 msecs"#2017-09-0617:42favilaI'm surprised that works at all#2017-09-0617:42favilaI mean take after you get the results#2017-09-0617:42favilaso, result order is undefined, and queries run in a map/reduce fashion, so it's eager--it doesn't terminate early#2017-09-0617:43favilaso limit is not especially useful#2017-09-0617:44ocyeah#2017-09-0617:45ocwe're trying to compare apples to apples with oracle text search which lets you short circuit the query with a limit#2017-09-0617:45ocdatomic is searching 8 million records and returning 10000 in 12 seconds, but oracle responds in 2 seconds because it's only returning the first 500#2017-09-0617:46ocwanted to see if we could get something closer to an equal comparison#2017-09-0619:05pesterhazyUnfortunately there is no Limit clause#2017-09-0620:16arohnerIf you use datoms, you can build a query that uses take, but ofc at that point you’re not using the query planner#2017-09-0620:16arohneroh nvm, text search#2017-09-0620:53ocis there a way to stop the query and just keep the results so far? i know having the queries run in the peer is great because you're not blocking the 'db' like in sql but i could imagine not wanting my web server to block a thread for a minute if the user finds a nasty query#2017-09-0621:04pesterhazyI'd love to be proven wrong, but AFAIK you can either use Datalog or limit the results, but not both - it's a real limitation.#2017-09-0621:30ockind of makes me want to find out how quickly d/datoms + java.lang.String#contains can return 500 items#2017-09-0621:31ddellacostaI’m having trouble finding this info online: is there a way to drop/delete a Datomic partition? I’m testing out some schema stuff in a dev environment with datomic running in a docker container, and I’m having trouble figuring out how to override an attribute spec which I’ve already foolishly inserted into and now want to add :db.unique/identity to.
<- alternatively telling me how to do that via other means would work.#2017-09-0621:32octhis isn't helpful? http://docs.datomic.com/schema.html#Schema-Alteration#2017-09-0621:34ddellacostawell, I’m getting an error trying to add the constraint#2017-09-0621:34ddellacostaand I assumed it was because of
> In order to add a unique constraint to an attribute, Datomic must already be maintaining an AVET index on the attribute, or the attribute must have never had any values asserted.#2017-09-0621:34ddellacostahttp://docs.datomic.com/schema.html#altering-avet-index <- under that section#2017-09-0621:43uwo@ddellacosta so you’ve tried this? 1) add :db/index 2) d/request-index 3) add :db/unique#2017-09-0622:56ddellacosta@uwo sorry for the slow response. I get this when I try to add an index (or set it unique for that matter)
CompilerException java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: :db.error/invalid-install-attribute Error: {:db/error :db.error/incompatible-schema-install, :entity :my/id, :attribute :db/index, :was nil, :requested true}, compiling:(form-init2080569204125065060.clj:1:35)
#2017-09-0622:58ddellacostathe :was nil bit confuses me in particular#2017-09-0622:58ddellacostawonder if I’ve got something else wrong with this schema item#2017-09-0623:06marshall@ddellacosta can you share the tx data you're submitting to add index to your attribute#2017-09-0623:07ddellacostaone sec#2017-09-0623:08ddellacosta@marshall
[{:db/id #db/id[:db.part/db]
  :db/ident :my/id
  :db/index true
  ;; :db/unique :db.unique/identity
  :db/valueType :db.type/uuid
  :db/cardinality :db.cardinality/one
  :db/doc "UUID for the thing"
  :db.install/_attribute :db.part/db}]
#2017-09-0623:09ddellacostasorry for the slightly weird formatting#2017-09-0623:10ddellacostabasically I had that previously without the index or unique, messed around and transacted on a few things, then realized I wanted that to be unique#2017-09-0623:10ddellacostabased on my reading of the schema doc, it sounds like I’m simply not going to be able to add unique to it at this point because I’ve already transacted some data#2017-09-0623:10ddellacostaso I figured the best thing to do was delete the partition#2017-09-0623:10ddellacostathis is all dev work so I don’t care about this data so much#2017-09-0623:17marshallYou can add unique #2017-09-0623:17marshallUse a slightly different syntax#2017-09-0623:17marshallWhat version of datomic#2017-09-0623:18ddellacosta0.9.5544 I believe is what you’re looking for#2017-09-0623:18marshall{:db/ident :person/name
:db/cardinality :db.cardinality/many
;; explicit alter
:db.alter/_attribute :db.part/db}#2017-09-0623:19marshallIf you want to use the explicit alter attribute#2017-09-0623:19marshallIn your case it would be your ident of :my/id#2017-09-0623:20marshallAnd instead of cardinality put in the :db/index true#2017-09-0623:20marshallOr unique #2017-09-0623:21ddellacostaso let me make sure I understand: I want this?
[{:db/ident :my/id
:db/unique :db.unique/identity
:db.alter/_attribute :db.part/db}]
#2017-09-0623:22marshallAlternatively, you can do something like [{:db/id :my/id
:db/index true}]#2017-09-0623:22marshallYes just like that#2017-09-0623:22ddellacostagot it--but where were you going with that? I’m curious#2017-09-0623:23marshallJust that if you're using explicit syntax you need to use alter attribute, not install#2017-09-0623:23ddellacostaor is that it, I should also be able to simply do
[{:db/id :my/id
:db/unique :db.unique/identity}]
?#2017-09-0623:23ddellacostaI see#2017-09-0623:23marshallYep that should work too#2017-09-0623:24ddellacostagotcha. I’ll have to go read up on the distinction between explicit syntax and the alternative, as I don’t think I understand that#2017-09-0623:24marshallThe implicit install and alter syntax was introduced in 0.9.5530 (iirc)#2017-09-0623:24ddellacostanor have I ever seen that :db.alter/_attribute thing#2017-09-0623:24ddellacostaah, I see#2017-09-0623:25ddellacosta@marshall, I’m going to go give this a shot. Thanks so much for your help! I learned a lot.#2017-09-0623:25marshallhttp://blog.datomic.com/2016/11/datomic-update-client-api-unlimited.html?m=1#2017-09-0623:25marshall@ddellacosta ^ talks about the implicit install and alter stuff #2017-09-0623:25ddellacostagotcha#2017-09-0623:25ddellacostagreat#2017-09-0715:58ocI think i read a while back that :db/index was going to default to true going forward, is that the case?#2017-09-0721:05uwowe’re getting Connection failure has been detected: AMQ119011: Did not receive data from server for org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection in our peer. Any obvious places to begin searching?#2017-09-0721:40danielcompton@oc, I doubt they will change the defaults because of backwards compatibility issues, but I think the recommendation was to default yourself to indexing, and only turn off indexing when it's a problem#2017-09-0723:36octhx#2017-09-0809:42kommenI’m looking for some feedback on our datomic schema. anybody can give us some guidance? schema & questions here: https://gist.github.com/kommen/a902e4c5bfef1395e69a617d1cb427ac#2017-09-0810:30hmaurer@kommen I would personally favour the first approach and use :db/isComponent on the profile refs. Disclaimer: I am new to Datomic, so take my word with a grain of salt.#2017-09-0810:34hmaurerLately I have been using the “document” analogy to help me decide if i should use :db/isComponent. 
I ask myself: “if I were to represent this entity as a document (e.g. in a document-oriented database), would I put this sub-entity in the document as well?”#2017-09-0810:34hmaurerThis is basically asking “is this other entity (profile) a part of / a component of my entity (person)?” (quite naturally)#2017-09-0810:35hmaureror “does a Profile have an existence of its own, or does it only exist as a part of some other entity?”#2017-09-0810:35hmaurerI am not sure these are the semantics that were intended for :db/isComponent, but that’s how I have been using it so far#2017-09-0810:37hmaurer@favila made a remark to me the other day which is related to what you are asking now#2017-09-0810:38hmaurerI was asking if it made sense to add a :db/unique constraint on attributes which have :db/isComponent. It seemed to me that these should always be unique, and so that :db/isComponent entailed the :db/unique constraint#2017-09-0810:39hmaurerhe pointed out that it entailed an even stronger constraint: an entity that is referenced by some attribute with :db/isComponent should not be referenced by any other attribute in the database#2017-09-0810:39hmaurer(if I recall correctly, @favila can correct me if I don’t :<)#2017-09-0810:40kommenalright, thanks for your input!#2017-09-0810:41hmaurerNp 🙂 By the way on :profile/owner, I tend to avoid polymorphic references like this because they’re harder to validate#2017-09-0810:41hmaurerthat’s purely a matter of taste though#2017-09-0813:38favila@hmaurer @kommen that is correct, entities reachable by an isComponent attr should not be reachable (by forward references) any other way#2017-09-0813:38favila(d/datoms db :vaet component-entity-id) should only have 1 or 0 results for all times#2017-09-0813:40hmaurer@favila out of curiosity, what is your opinion on polymorphic refs? (however you define “polymorphic”, since there isn’t a clear notion of “entity type” in Datomic). Is it something you try to avoid? 
Or do you not pay particular attention to it?#2017-09-0813:40favilatype generally depends on use#2017-09-0813:41favilathere's a distinction between entity-level validity/expectation constraints for the entities in the db, and an explicit type marker is necessary for sanity here#2017-09-0813:42favilaand what a program does with the map view of this, which seems to be more interested in attrs and sets of attrs than types#2017-09-0813:43favilaat least in my experience#2017-09-0813:43favilabut never, ever make the meaning of an attr dependent on some other attribute#2017-09-0813:46hmaurer@favila that’s a similar philosophy to clojure.spec I assume? The meaning of attributes should be independent of the aggregates in which they’re used#2017-09-0813:47hmaurerodid you mean something else?#2017-09-0813:51augustl"foo" meaning different things in different contexts I suppose?#2017-09-0813:56favilaEssentially. I just want to make sure no one misinterprets me to think that {:entity-type "earthquake" :magnitude 7} {:entity-type "decimal-number" :magnitude 2} is ok#2017-09-0814:07hmaurermakes sense#2017-09-0814:08hmaurerout of curiosity, have you written a small lib internally for pre-processing data being transacted to datomic? (validation, etc)#2017-09-0814:08hmaureror are you doing it in an ad-hoc manner?#2017-09-0815:18favilaad hoc. We should do better#2017-09-0815:18favilawe have an entity-type attr on all entities which anchors our expectations as to what other attrs should/may be on the entity#2017-09-0815:19favilabut nearly every attr is namespaced by its entity type#2017-09-0815:19favilaso in practice, code doesn't look at the explicit entity type much#2017-09-0815:19favilajust the attrs#2017-09-0819:53mgrbyteWill the ensure-cf datomic command work without creating new AWS resources (security groups, roles etc) if existing ones are specified in the input properties files?#2017-09-0820:52cjhowehow do i use datomic's tempids? what are those -92233...
numbers in {-9223301668109598138 17592186045422, -9223301668109598137 17592186045423} from http://docs.datomic.com/getting-started/transact-data.html ?#2017-09-0820:54hmaurer@cjhowe those are the IDs assigned to the entities that were created by this transaction#2017-09-0820:54cjhoweyeah, but how do i know which is which?#2017-09-0820:54hmaureryou could add a :db/id attribute to each entity in that transaction as a string#2017-09-0820:55hmaurere.g.
{:db/id "movie-1"
 :movie/title "The Goonies"
 :movie/genre "action/adventure"
 :movie/release-year 1985}
#2017-09-0820:55hmaurerthe returned tempids should then be something like
{"movie-1" 17592186045422, -9223301668109598137 17592186045423}
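Because string tempids become string keys in the returned :tempids map, the assigned entity id can be looked up directly (a small sketch reusing the illustrative ids from the example above):

```clojure
;; The :tempids map from the example above: the entity given the
;; string :db/id "movie-1" is keyed by that string, the other entity
;; by its auto-assigned negative tempid.
(def tempids {"movie-1" 17592186045422, -9223301668109598137 17592186045423})

(get tempids "movie-1")
;; => 17592186045422
```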
#2017-09-0820:57favila@cjhowe use the d/tempid function#2017-09-0820:57favilasorry, d/resolve-tempid#2017-09-0820:58hmaurer@favila taking this opportunity to ask: is there any advantage to using d/tempid and d/resolve-tempid over string tempids?#2017-09-0820:59favilanot really#2017-09-0820:59favilayou don't need an auto-increment string function?#2017-09-0820:59favila(str (name (gensym)))#2017-09-0821:00hmaurerAlso I assume string tempids resolve to an ID living in the :db.part/user partition?#2017-09-0821:01favilayou can control partition with tempids, not with strings#2017-09-0821:01hmaurerI see; thanks!#2017-09-0821:02favila(let [tid1 (d/tempid :db.part/user)
      tid2 "string-tempid"
      {:keys [tempids db-after]} @(d/transact conn [{:db/id tid1 :db/doc "tid1"}
                                                    {:db/id tid2 :db/doc "tid2"}])
      tid1-eid (d/resolve-tempid db-after tempids tid1)
      tid2-eid (d/resolve-tempid db-after tempids tid2)]) @cjhowe#2017-09-0821:02hmaurerAre partitions something important to think about when developing a small to medium sized app with Datomic? As far as I understand they are “locality hints” and if used properly can increase performance; is that right?#2017-09-0821:02favilathey reduce index fragmentation and increase locality#2017-09-0821:02favilathey are not mere hints, they control sorting order#2017-09-0821:03hmaurer@favila how so? (regarding sorting order)#2017-09-0821:03favilaan entity id is a long composed of partition-id bits and auto-increment-id bits#2017-09-0821:04favilasince the partition bits of the entity id are higher, they have a stronger effect on magnitude than the lower bits#2017-09-0821:04favilaso entities with the same partition will group together when you sort#2017-09-0821:06favila(d/entid-at db :db.part/db 1000)
=> 1000
(d/entid-at db :db.part/tx 1000)
=> 13194139534312
(d/entid-at db :db.part/user 1000)
=> 17592186045416
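The d/entid-at values above are consistent with the layout favila describes; a hypothetical helper that packs the partition id above a 42-bit counter reproduces them exactly (the 42-bit split is inferred from these values, not from Datomic internals):

```clojure
;; Hypothetical sketch of the entity-id layout: partition id in the
;; high bits, auto-increment counter in the low 42 bits. Partition
;; ids 0, 3, 4 correspond to :db.part/db, :db.part/tx, :db.part/user
;; in the d/entid-at examples above.
(defn entid-for [partition-id n]
  (+ (bit-shift-left partition-id 42) n))

(entid-for 0 1000) ; => 1000            (:db.part/db)
(entid-for 3 1000) ; => 13194139534312  (:db.part/tx)
(entid-for 4 1000) ; => 17592186045416  (:db.part/user)
```

Because the partition occupies the most significant bits, sorting entity ids groups same-partition entities together, which is the locality property discussed here.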
#2017-09-0821:08hmaurerWhen would the grouping property of sorting by entity ID be desirable? (sorry if the question is silly)#2017-09-0821:09favilaall datoms are stored in sorted blocks#2017-09-0821:10favilaif all items are "close" to one another (same partition), it reduces the likelihood you need to fetch additional blocks or fragment blocks while indexing#2017-09-0821:10favilae.g. suppose you have two companies using the same db for their data (not that I would recommend this)#2017-09-0821:10favilaread and write patterns are such that txes and queries only deal with one partition at a time if you have companies in separate partitions#2017-09-0821:13hmaurer@favila so, for example, you might want to store entities and their components in the same partitions?#2017-09-0821:13hmaureror have a partition per entity type?#2017-09-0821:15hmaurerdepending on what is often fetched together#2017-09-0821:20favilayes#2017-09-0821:21favilahttp://docs.datomic.com/indexes.html#partitions#2017-09-0821:25favilaactually components automatically get an entid in the same partition as their parent#2017-09-0821:26favila{:db/id (d/tempid :some-part) :some-component-attr {:foo "bar"}} (for example) was legal even before implicit tempids#2017-09-0821:27favilathe inner entity would be given a :some-part partition id automatically#2017-09-0821:29favilahttp://docs.datomic.com/transactions.html#nested-maps-in-transactions#2017-09-1018:21cjhoweis it a good idea to use datomic as a graph database, or is it better to use neo4j? i know datomic doesn't have the same level of graph capabilities as neo4j, but it can still do most graph database operations, right?#2017-09-1018:23cjhowei like the capabilities of neo4j's cypher, but i would much prefer to use data.
the biggest thing is i can't think of how to add properties to links in datomic...#2017-09-1018:24cjhowehas anyone deployed datomic on heroku?#2017-09-1020:11hmaurer@cjhowe so, regarding the first question: I am new to Datomic/Clojure but yes, Datomic seems like a great fit for exploring graphs#2017-09-1020:11hmaurerit’s essentially a triple-store (well, 5-tuple store really if you account for the transaction id and operation)#2017-09-1020:12hmaurerIt really depends on what you want to do; neo4j’s cypher is very high-level, you might have to do a lot more work to run the same queries on top of Datomic#2017-09-1020:12hmaurerbut Datalog should already get you a long way#2017-09-1020:13hmaurerand if you have more complex query needs you can build them on top of Datomic’s primitives#2017-09-1020:14hmaurerYou are right, you cannot add properties to links in Datomic (links = attributes)#2017-09-1020:14hmaurerAs far as I’m aware, a standard approach would be to reify the link as an entity, which can then have attributes#2017-09-1020:15hmaurere.g. imagine you had a link between friends, e.g. :person/friends, an attribute with cardinality many and value-type ref#2017-09-1020:15hmaurernow say you would like to store an attribute on that link, e.g.
an integer “weight” which indicates how good a friend the person is#2017-09-1020:16hmaurerNeo4j would let you add that attribute directly on the edge#2017-09-1020:16hmaurerWith Datomic you can’t do that, but you could “reify” that link as an entity, and have an attribute like :person/friendships#2017-09-1020:17hmaurerwhich would point to Friendship entities, which would themselves have a :friendship/target pointing to the friend Person entity#2017-09-1020:17hmaurersince Friendship is now an entity, you can add :friendship/weight, or any attribute you want#2017-09-1020:30cjhowei think the power of using plain data structures for queries makes up for some additional query complexity#2017-09-1020:30cjhowei'm more concerned about cost:performance ratio for graph heavy workloads #2017-09-1020:30hmaurer@cjhowe yeah, I think it really depends on the type of queries you are going to do. If you have very complex graph queries then something like Neo4j might do a lot of the optimization grunt-work for you#2017-09-1020:30cjhowethen again, if it's reads it's cached on the client right?#2017-09-1020:31hmaurer@cjhowe yes the peer keeps a cache.
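The reified edge hmaurer describes could be written as schema plus transaction data roughly like this (a sketch: the attribute idents come from the discussion, but the value types, cardinalities, and the component modeling are assumptions):

```clojure
;; Sketch of a reified "friendship" edge. :person/friendships,
;; :friendship/target and :friendship/weight come from the discussion;
;; everything else is an assumption.
[{:db/ident       :person/friendships
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/many
  :db/isComponent true}   ; assumed: the edge lives and dies with its owner
 {:db/ident       :friendship/target
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :friendship/weight
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one}]

;; A person with one weighted edge (alice-eid is a hypothetical
;; entity id of the friend):
{:person/friendships [{:friendship/target alice-eid
                       :friendship/weight 10}]}
```

Because :person/friendships is marked :db/isComponent, the nested friendship map needs no explicit :db/id in the transaction.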
I’m not sure how it determines when to clear a portion of the cache and what to clear though (if it’s full)#2017-09-1020:32cjhoweit seems like running the graph query mostly on the client would help make up for neo4j's extra optimizations#2017-09-1020:33cjhoweit's a lot to give up immutability for query optimization#2017-09-1020:34hmaurerIt depends on your dataset and the type of queries but for most use-cases I suspect Datomic will work just fine#2017-09-1020:39hmaurer@cjhowe to be honest, it might just be a lack of understanding/experience on my side, but I kind of wish there were a few higher-level abstractions built on top of Datomic#2017-09-1020:40hmaurerFor example, if your use-case is manipulating graphs, I wish there was a library built on top of Datomic which offered Cypher-like querying capabilities, amongst other things#2017-09-1020:44cjhoweshortest path queries are pretty common for me#2017-09-1020:44cjhowei think that could be added through a library too#2017-09-1020:50hmaurer@cjhowe yes I’m sure you could write a function for that which uses Datomic’s low-level API (direct access to the index) to walk the graph#2017-09-1020:51hmaurerHow big is your dataset? out of curiosity#2017-09-1020:54cjhoweidk, i'm making a study group app, so however many people use it i guess.
it's kind of like tinder, and each one of those tinder-like matches has to find the path with the shortest total weight that starts at and ends with two different users who haven't matched before#2017-09-1020:54cjhoweand then in addition to that, it's trying to choose matches that will cause complete subgraphs of 3+ nodes to appear#2017-09-1020:55cjhowethat's for every time someone does a match, so it's very high volume#2017-09-1020:55hmaurer@cjhowe I see, so I guess you won’t have such a large dataset that you would need to start writing fancy, optimized versions of your pathfinding algorithm#2017-09-1020:55cjhowenot immediately#2017-09-1020:56cjhowethen again, i don't want to shoot myself in the foot#2017-09-1020:57hmaurerMy (beginner, ignorant) approach would be: use Datomic, see how far it can get you. If you ever run into a case where Datomic isn’t enough and/or it’s too much work to build the feature you want on top of it, construct a read-only neo4j (or else) replica of your Datomic DB#2017-09-1020:57hmaurerwhich should be doable with the direct access to the transaction log and the tx-report-queue API#2017-09-1020:57cjhoweohhh, that's a good point#2017-09-1020:58hmaurerDatomic will store the history of your data, neo4j won’t#2017-09-1020:58hmaurerso you have more information by using Datomic#2017-09-1020:58hmaurertheoretically you can move to neo4j later, should you want to (ignoring the fact that it would be a pain to rewrite your app)#2017-09-1020:58hmaureror you can use neo4j as I mentioned, as a read-only replica for some queries#2017-09-1020:59hmaurershould it become necessary#2017-09-1020:59hmaurerthe right term would probably be “use neo4j as a materialized view of your Datomic database”#2017-09-1020:59hmaurer🙂#2017-09-1020:59cjhowei like this#2017-09-1021:00cjhowethanks for the help! 
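The "materialized view" idea mentioned above (feeding an external store from Datomic's transaction stream) can be sketched as a function that drains transaction reports from a queue; with the peer API the queue would come from (d/tx-report-queue conn), so the simulated queue below is only a stand-in:

```clojure
;; Sketch: forward each transaction report's :tx-data to a
;; caller-supplied replication function. Works on any BlockingQueue;
;; wiring it to (d/tx-report-queue conn) is left as an assumption.
(defn drain-reports!
  "Take n reports off the queue, calling (replicate! tx-data) for each."
  [queue replicate! n]
  (dotimes [_ n]
    (let [{:keys [tx-data]} (.take queue)]
      (replicate! tx-data))))

;; Simulated usage with a fake report map:
(def q (java.util.concurrent.LinkedBlockingQueue.))
(.put q {:tx-data [[1 :person/friends 2]]})
(drain-reports! q prn 1) ; prn stands in for the real replication fn
```

A real consumer would run the loop forever in its own thread and handle errors and retries.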
i'll probably do that, since this isn't really a problem right now and i want datomic#2017-09-1021:00hmaurerHave fun!#2017-09-1021:02hmaurerThere is also another benefit of using Datomic: since it’s lower-level, you’ll learn a lot more about how your graph traversals actually run (since you’ll likely implement them yourself).#2017-09-1021:02hmaurer(if that’s something you care about)#2017-09-1021:57cjhoweah, yeah, that's great! i just took my last math class for my CS degree so i'm ready to deep dive into graph theory#2017-09-1021:59cjhowehow do people transact their datomic schemas? is it best to use a boot/leiningen plugin, or should i transact it every time i start my app server with https://github.com/rkneufeld/conformity ?#2017-09-1022:02hmaurer@cjhowe https://vvvvalvalval.github.io/posts/2016-07-24-datomic-web-app-a-practical-guide.html#schema_/_model_declaration#2017-09-1022:03hmaurer@val_waeselynck did some nice work writing about this#2017-09-1022:06cjhowei just read through that, thanks!#2017-09-1022:06cjhowei guess i just need to know how i should actually run the conformity code at deployment#2017-09-1109:12val_waeselynck@cjhowe you may want to read this as well: https://github.com/vvvvalvalval/datofu#managing-data-schema-evolutions#2017-09-1109:12val_waeselynckThe basic idea is to re-transact your schema (idempotent) and run transactions (non-idempotent) prior to executing new code.#2017-09-1109:15val_waeselynckDepending on the write semantics of your app, this protocol may be too naive and present some race conditions (for instance, when migrating the data from an attribute to a new attribute, some Peer may continue to write to the old attribute between the time the migration is run and the time the new code is deployed to the Peer).#2017-09-1109:18val_waeselynckIn which case you may want to either:
1. if that works, run the migration after new code is deployed to all online Peers, or in 2 phases
2. Temporarily prevent all Peers from writing
3. Move the write semantics from the Peer code to storage (e.g. in the form of a transaction function), so that the switch can be atomic#2017-09-1022:07cjhowei mean, if i use conformity every time my api server launches, it's a bit of overhead, but it shouldn't do anything if the schema is already there#2017-09-1022:09hmaurer@cjhowe it will still create a transaction#2017-09-1022:09hmaurerbut without any datoms#2017-09-1022:09hmaurerit should be negligible overhead if you are worried about startup time though#2017-09-1022:10hmaurerthe JVM’s time to startup is likely an order of magnitude larger than the time to run a transaction for your schema#2017-09-1022:10cjhoweokay, cool#2017-09-1022:10cjhowethanks again!#2017-09-1022:14hmaurer@cjhowe oh also, regarding your earlier question on Heroku: I think the recommended memory for the transactor (and peers) is pretty high; in the order of 1gb or more#2017-09-1022:14hmaurerI considered using heroku on a project but realised that would be prohibitively expensive#2017-09-1022:15hmaureryou might be able to get away with lower memory on small projects / with the right configuration though; I haven’t investigated#2017-09-1022:15cjhowehmmm, well, maybe i'll use aws free tier then#2017-09-1022:15cjhoweit seems like datomic was made for that anyways#2017-09-1022:15cjhowei hope 1GB is enough though#2017-09-1022:15hmaurer@cjhowe right now I’m trying to run Datomic on Kubernetes on Google Cloud#2017-09-1022:16hmaurerhaven’t encountered significant issues thus far, but I have only experimented over short periods of time, not under load and/or in production#2017-09-1022:17cjhoweah, then you have to set up a cassandra instance i guess?#2017-09-1022:17hmaureroh no, you can run the transactor in dev mode (it will store stuff on the filesystem)#2017-09-1022:17hmaurerfor production I’ll likely use mysql or postgres#2017-09-1022:18cjhoweoh, right#2017-09-1022:18hmaureron AWS you can use Dynamo though#2017-09-1022:18hmaurerI also managed to run
a transactor on https://hyper.sh#2017-09-1022:19hmaurerAWS will likely be the cheapest option though 🙂#2017-09-1022:22cjhoweah, well, in my case, i'm just worried about what i can get for very cheap/free since i'm a student#2017-09-1022:23hmaurerI am a student too; I know the struggle!#2017-09-1022:24hmaurerGoogle Cloud also gives you $300 of credits usable over a 12-month period#2017-09-1109:43val_waeselynckThe cheapest you can get for small projects is probably to run your transactor and storage (and maybe also Peers) on one box e.g. using Digital Ocean, maybe using the dev storage to save even more memory#2017-09-1022:22cjhowei hope ad money will pay for it in the long run, but if not, i can just shut the backend down and take the app off the app store#2017-09-1109:50daveliepmannTypo at http://docs.datomic.com/query.html#multiple-inputs : "limit releases to those perfomed by John Lennon." (performed)#2017-09-1118:46tankthinksI have a problem that I believe datomic would solve nicely: I have facts about things I would like to assert and I would like to be able to track those facts over time. The one wrinkle is that I also know the event associated with fact changes, and its timestamp. I want to track the facts over time by this known timestamp, not by the transactor's built-in clock. So it looks like I can set :db/txInstant (http://docs.datomic.com/transactions.html#explicit-db-txinstant); however, due to the constraints explained in that link, I am responsible for ordering the events before I transact them.#2017-09-1118:46tankthinksDoes this make sense? If so, can someone confirm the last sentence?#2017-09-1118:48marshall@tankthinks That is correct, you can’t assert things in the past (with respect to Datomic’s built-in notion of time).
In general, if you have a ‘domain time’ you need to model (i.e. real-world time something happened), it’s best to model it explicitly on your entities with a user-defined attribute#2017-09-1118:49marshallthe built-in t in Datomic is Datomic’s notion of time - that is, when did Datomic know about a fact#2017-09-1118:50marshallin some cases you can co-opt t to represent your domain time, but there are a number of reasons that can be problematic - you noticed one with the fact that you can’t assert txInstant values in the past#2017-09-1118:50tankthinksthanks @marshall#2017-09-1118:52tankthinksSo I can explicitly model "domain time" with an attribute, and as such I get a slightly different "built-in" notion of a database-at-point-in-time#2017-09-1118:52marshallyep exactly#2017-09-1118:52tankthinksIs the performance the same as using the Datomic's built int notion of time?#2017-09-1118:52tankthinksI guess it would be, since txs are first class entities?#2017-09-1118:53marshallit depends a bit - are you interested in ‘time series’ types of queries frequently? i.e. “show me all the changes to this entity or entities” or are you only ever interested in the ‘latest’#2017-09-1118:53marshalland, actually, in either of those ^ cases, you can change some details of your schema to essentially optimize for either#2017-09-1118:55marshallkeep in mind, only the log (http://docs.datomic.com/log.html) is ordered by T first; in all the other indexes (avet, aevt, eavt, vaet) t is the least significant sort#2017-09-1118:55tankthinksgood to know ^#2017-09-1118:56tankthinksI'm interested in a "graph of things" that changes over time. I'm imagining a time slider UI component that updates the graph as time moves forward and backward.#2017-09-1119:05marshallyou could definitely use either approach, but depending on the size of your dataset and how real-time it needs to be, using built-in time may be too limiting#2017-09-1119:07itaiedHello, I'm new to datomic and clojure.
I'm trying it out, and starting with the official tutorial I try to create my own data schema.
I have created a type/ref isComponent true, but I get the following error:
:cognitect.anomalies/message ":db.error/invalid-nested-entity Nested entity is not a component and has no :db/id"#2017-09-1119:08itaied`{:db/ident :node/references
 :db/valueType :db.type/ref
 :db/cardinality :db.cardinality/many
 :db/isComponent true
 :db/doc "External material and sources"}
{:db/ident :reference/title
 :db/valueType :db.type/string
 :db/cardinality :db.cardinality/one}
{:db/ident :reference/value
 :db/valueType :db.type/string
 :db/cardinality :db.cardinality/one}`#2017-09-1119:09itaiedand this is the entity that I try to create:
{:node/name "Docker"
 :node/description "Container management tool."
 :node/references [{:reference/title "Official Docs"
                    :reference/value "https://docs.docker.com/"}]}#2017-09-1119:09itaiedWhat am I doing wrong? It seems like the same as the tutorial...#2017-09-1119:17hmaurer@itaied if your attribute isn’t marked as :db/isComponent, you must set the :db/id explicitly in the transaction#2017-09-1119:17hmaurerin this case it’s throwing because of :node/references#2017-09-1213:30lenI am looking at the db/datoms fn and not sure how to use the results, any pointers ?#2017-09-1213:32augustl@len some info here http://augustl.com/blog/2013/datomic_direct_index_lookup/#2017-09-1213:33lenThanks @augustl#2017-09-1300:24clojuregeeki'm looking for an example of an app (beside the music database 🙂 ) of using datomic... any OS projects ya'll can recommend ?#2017-09-1300:41clojuregeekis this good? it's 5 years old .. https://github.com/johnwayner/lein-datomic#2017-09-1301:51marshall@clojuregeek I like this blog post and project: https://hashrocket.com/blog/posts/using-datomic-as-a-graph-database
IIRC the repo includes all the stuff to run the full app#2017-09-1301:51clojuregeekthanks @marshall 🙂#2017-09-1308:15stijnis there some documentation about the properties to be set for the Dynamo table in case you are not using ensure-cf? (in our case terraform)#2017-09-1309:45dazld@clojuregeek you can also do something like lein new luminus my-thing +datomic#2017-09-1313:09marshall@stijn http://docs.datomic.com/storage.html#manual-setup#2017-09-1313:13stijn@marshall ok thanks. So, the other attributes n, v, prev that are visible in the ensure-cf generated table should not be declared?#2017-09-1313:26marshall@stijn I would recommend doing what the generated table does#2017-09-1314:33stijn@marshall just tried and you should not include the other attributes, only the key/hash#2017-09-1314:33stijnso documentation is correct#2017-09-1314:34marshall👍#2017-09-1406:54tengWe found a bug in Datomic (datomic-pro-0.9.5561.54). We, by mistake, added Double/NaN (not a number) facts to the database (the result of a square root of a negative number) and after that we tried to fix it by adding correct facts (like 3.14) for these NaN. Datomic still returned NaN and we had to retract the NaN facts first to get the more recent data returned. @marshall#2017-09-1413:21piotrekHello! I have a question about license: is it ok to reuse one Datomic Starter license for multiple transactors (e.g. for supporting multiple test environments of an application)?#2017-09-1415:20the-kennyteng: oh wow, that feels wrong. I'm able to reproduce it here.#2017-09-1415:24marshall@teng we’ll have a look#2017-09-1416:40dimovichhow can I get all entities in the db?
trying to export / import the db#2017-09-1416:41marshallgrr#2017-09-1416:41marshallhttp://docs.datomic.com/indexes.html#datoms-api @dimovich#2017-09-1417:09dimovichthis gives me the datoms, but I would like sth more readable that I can save in edn#2017-09-1417:14pbostromHas anyone had any luck loading both the Datomic client and Ring in a project? I'm experiencing the dependency conflict described here: https://stackoverflow.com/questions/43291069/lein-ring-server-headless-fails-when-including-datomic-dependency#
The workaround in the discussion did not work for me, I'm wondering if anyone has figured out any other workarounds#2017-09-1417:50ljosa@stijn: resource "aws_dynamodb_table" "datomic" {
  name = "${var.environment}-platform-datomic-config"
  read_capacity = "${var.dynamodb_read_capacity}"
  write_capacity = "${var.dynamodb_write_capacity}"
  hash_key = "id"

  attribute {
    name = "id"
    type = "S"
  }
}
#2017-09-1417:50ljosa^ that's what we use. I think it's the same that you concluded above.#2017-09-1417:52dimovich@marshall ended up with sth like this...#2017-09-1417:58marshall@dimovich seems reasonable to get all user entities. Keep in mind your result has to fit in memory so something like that is probably not ideal for an entity type with a lot of data or a v large db#2017-09-1418:01dimovich@marshall good point#2017-09-1418:48timgilbertSeems like http://docs.datomic.com is down? http://docs.datomic.com/filters.html#2017-09-1418:50ljosafwiw, I just got alerts about SlowDown exceptions from S3 on our system, so maybe an aws issue#2017-09-1418:50timgilbert@pbostrom: not sure about that specific problem, but I'd probably start by adding :pedantic? :warn to my project.clj and then running lein deps until I'd fixed any classpath conflicts#2017-09-1418:52pbostromyeah there are definitely conflicts, ring wants one version of org.eclipse.jetty/jetty-http and Datomic client wants another, I suppose there's no way to satisfy both of them...#2017-09-1418:53ljosadiscussion about s3 issues right now: https://news.ycombinator.com/item?id=15251180#2017-09-1418:54timgilbert@pbostrom: what you might try is just adding :exclusions until they work properly (essentially picking one or the other of the lib versions)#2017-09-1418:55pbostromyeah, that's the thing, they each break the other, due to shuffling of classes across versions of org.eclipse.jetty/jetty-http#2017-09-1418:55timgilbertFWIW, I'm running with [com.datomic/datomic-pro "0.9.5561" :exclusions [org.slf4j/slf4j-nop com.google.guava/guava]] and [ring/ring-core "1.6.2"] and everything seems to be working fine#2017-09-1418:56pbostromthis is for [com.datomic/clj-client "0.8.606"]#2017-09-1418:56timgilbertBut my web server is immutant, not jetty#2017-09-1418:57timgilbertOh, sorry, didn't realize you were using the client lib.
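timgilbert's :exclusions suggestion would look roughly like this in project.clj (a hypothetical fragment; which jetty artifacts to exclude depends on what `lein deps :tree` reports, and pbostrom notes above that in his case each choice broke the other side, so this may only be a starting point):

```clojure
;; Hypothetical project.clj fragment: keep ring's jetty and exclude
;; the conflicting jetty artifacts pulled in by the Datomic client.
;; The exact artifact list is an assumption - check `lein deps :tree`.
:dependencies [[ring/ring-core "1.6.2"]
               [com.datomic/clj-client "0.8.606"
                :exclusions [org.eclipse.jetty/jetty-http
                             org.eclipse.jetty/jetty-util
                             org.eclipse.jetty/jetty-client]]]
```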
That one I haven't had any experience with#2017-09-1519:08souenzzoI'm using tx-report-queue to watch/process the transactions
I'm handling shutdown/restore cases.
Is there some way to ask tx-report-queue to start from transaction tx?#2017-09-1519:11marshall@souenzzo No, you need to use the Log API to get from a specific transaction to ‘now’ then use the report-queue from there#2017-09-1519:15souenzzo@marshall d/log + `d/tx-range` gives me data in a format quite different from d/tx-report-queue. But ok. I will see some way to "normalize" it so my handler function doesn't care about the origin of the data (log or report)#2017-09-1718:32itaiedHello, I'm trying to automate tests for datomic#2017-09-1718:32itaiedHow can I drop and re-create the db?#2017-09-1718:32itaiedI'm using the memory datomic instance#2017-09-1718:33itaiedexecuting
(<!! (datomic.client.admin/list-databases
       nodes.test-utils.db/db-conf))
returns ["hello"]
But executing delete-database results in "invalid-op"#2017-09-1720:51souenzzoYou can't delete-database via the client-api. You need to use the peer-api.#2017-09-1807:01ertugrulcetinDoes anyone know what might cause Datomic to delete databases?: https://stackoverflow.com/questions/46269961/datomic-deletes-databases#2017-09-1813:18uwo@itaied In a lot of cases you can just create a scratch in-memory database. The old ones will just be garbage collected:
(defn scratch-conn []
  (let [uri (str uri-base (d/squuid))]
    (d/create-database uri)
    (d/connect uri)))
#2017-09-1813:20val_waeselynck@itaied my take on this: https://vvvvalvalval.github.io/posts/2016-07-24-datomic-web-app-a-practical-guide.html#testing_and_development_workflow#2017-09-1819:50itaiedin order to use the peer-api I need to use the gpg tutorial to install the library?#2017-09-1819:50marshall@itaied You can download the distribution and install it locally using the bin/maven-install script#2017-09-1819:51marshallyou can download it as a zip from the http://my.datomic.com dashboard#2017-09-1819:51marshallthe gpg method is intended for use by automated systems (i.e. CI or build systems) that need to get the peer library#2017-09-1819:55itaiedhow can I use it in a lein project?#2017-09-1819:56marshallonce you’ve installed it to your local m2 repository you can include it in your lein deps with: http://docs.datomic.com/integrating-peer-lib.html#leiningen#2017-09-1819:57marshallso something like [com.datomic/datomic-pro "0.9.5561.56"]#2017-09-1819:57itaiedok thanks ill try it#2017-09-1820:17apseyWhen is Datomic going to support multiple Cache clusters for Datomic peers?#2017-09-1820:17apseyDo you guys have plans to support DAX for DynamoDB?#2017-09-1820:18marshall@apsey Datomic supports using multiple memcached instances, if that’s what you mean#2017-09-1820:19marshallhttp://docs.datomic.com/caching.html#memcached
> You can set up more than one memcached, and each Datomic process can choose via configuration which one(s) it uses. Datomic uses only UUIDs for memcached keys, and so Datomic can coexist with other uses of a memcached installation. Additionally, if you specify more than one memcached server, Datomic will distribute values across all instances, effectively enabling a single cache the size of the sum of all the instances.#2017-09-1820:23apseyNot that. We have a use case that a peer needs to access different databases. Because of that, we use the same memcached config for every transactor and peer:
-Ddatomic.memcachedServers=
#2017-09-1820:24apseyOh, I just saw that I could have:
-Ddatomic.memcachedServers=
#2017-09-1820:25marshallyep!#2017-09-1914:43mkvlrhello 👋 how does datomic licensing work with regards to development and staging? Can we use the same license as in production for these environments?#2017-09-1915:30marshall@mkvlr Yes, all Datomic licenses provide unlimited staging/test/dev environments; you can and should use the same key for those envs#2017-09-1915:30mkvlr@marshall ah great, thanks!#2017-09-1916:40uwofor dev we connect to our staging datomic db (unnecessary context) , and I just started seeing this when attempting to connect#2017-09-1916:40uwo> ActiveMQNotConnectedException AMQ119010: Connection is destroyed org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking (ChannelImpl.java:325)#2017-09-1916:41uwoanyone know right off the bat what might be going on? I can ssh into the datomic server and the service appears to be running#2017-09-1916:43uwoha! nevermind. critical failure in the datomic log. lol#2017-09-1919:20lukerohdehi there - i was hoping someone could shed some light on what the expression clause ground does#2017-09-1919:20lukerohdewhen would i use that versus, say, a parameter binding to a value i know before issuing the query#2017-09-1919:22lukerohdewhat effect does it have on query compilation?#2017-09-1919:47uwoI use ground when dealing with constants. I don’t know effects on compilation. The docs mention that ground enables query optimizations#2017-09-1919:49crinkyeah, i think luke might be wondering how is it different than passing constant values in via the :in portion of the query?#2017-09-1919:50uwoyeah, I followed#2017-09-1919:53uwoI guess I mean. If a value is truly constant, why parameterize for it?#2017-09-1919:54uwosorry. 
maybe I’m just missing the question entirely.#2017-09-1920:16lukerohdesure, but then why is ground necessary at all?#2017-09-1920:16lukerohdejust trying to build intuition#2017-09-1920:24uwoCertainly, I would be curious to know if a grounded binding is optimized over a parameterized binding. I sorta assumed that was the case, but I don’t have a clue about that. At the very least, I like its expressiveness, because a parameter to me implies something that can change#2017-09-1920:58lukerohdenecessary i mean versus inlining a constant#2017-09-1921:13dpsuttonI made an account on my-datomic, downloaded datomic-free-0.9.5561.56 and then ran the command to start it up from the datomic tutorial at http://docs.datomic.com/getting-started/connect-to-a-database.html: bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d hello,datomic: and i'm getting the following stack trace:
~/projects/datomic/datomic-free-0.9.5561.56$ bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d hello,datomic:
Exception in thread "main" java.io.FileNotFoundException: Could not locate datomic/peer_server__init.class or datomic/peer_server.clj on classpath. Please check that namespaces with dashes use underscores in the Clojure file name.
#2017-09-1921:13dpsuttonhas anyone seen this before?#2017-09-1921:13dpsuttonor know how i can follow along with the tutorial?#2017-09-1921:17dpsuttonah. apparently the free version of datomic is not amenable to the official datomic tutorial?#2017-09-1921:18dpsuttonthat seems like a pretty big barrier to learning#2017-09-1921:23hmaurer@dpsutton I stumbled upon this issue when I got started with Datomic, managed to fix it after a bit of headache, and now forgot what the solution was#2017-09-1921:23hmaurer😄 sorry#2017-09-1921:25favilaThey want you to use the starter-pro version, not free. Free is basically for use after you already know what you are doing, have the limitations in mind, but specifically need no-cost redistributable licensing. (e.g. you are running lots of small open-source sites)#2017-09-1921:26hmaurer@favila does free allow for persistence?#2017-09-1921:26favilayes#2017-09-1921:26favilait's the same as the dev transactor#2017-09-1921:39marshall@dpsutton Datomic Pro Starter is no cost and supports running peer-server#2017-09-1921:39marshallit is intended as the ‘getting started’ entry to Datomic#2017-09-1923:10dpsuttonbut it begins a 1 year license and then you're done. i just want a playground with no strings or penalties which i don't consider this version#2017-09-1923:10dpsuttonperhaps in the future i'll make something and wish i still had this license. i'm trying to follow the official tutorial and it's off-putting that i begin a 1 year trial rather than just using their free no-strings version#2017-09-1923:11dpsuttonit was also off-putting that i had to enter some (poorly validated) personal data just to try it as well#2017-09-1923:17hmaurer@dpsutton the tutorial uses the datomic client API, but you can experiment with the datomic peer api#2017-09-1923:21hmaurere.g.
http://docs.datomic.com/dev-setup.html#2017-09-1923:21hmaurer“Using the Datomic peer library”#2017-09-1923:21hmaurerI do agree the documentation is pretty bad for newcomers though#2017-09-1923:33dpsuttonthat's why it would be nice to follow along with a nice tutorial directly from the people who make it 🙂#2017-09-2000:00marshall@dpsutton the 1 year term is only the maintenance period. The license is perpetual#2017-09-2011:13ertugrulcetinHi @marshall , can I use HA(High Availability) feature with Datomic Pro Starter Edition license?#2017-09-2011:56marshall@ertucetin yes, since last Nov all features of Datomic are available with Starter#2017-09-2011:56marshallas long as you’re using version 0.9.5530 or newer#2017-09-2012:16ertugrulcetin@marshall thanks for the reply!#2017-09-2014:57ertugrulcetinHey guys when I run this command: bin/datomic -Xmx4g -Xms4g backup-db "file:///Users/ertugrulcetin/Desktop/backup-dir" "datomic:" I get this error: :storage/invalid-uri Unsupported protocol: datomic any ideas?#2017-09-2015:13terjesb@ertucetin looks like you reversed the args for backup-db, should be from-db-uri to-backup-uri#2017-09-2015:16ertugrulcetin@terjesb oww let me try like that then#2017-09-2015:18ertugrulcetin@terjesb it worked thanks!, silly mistake 😕#2017-09-2017:49marshallDatomic 0.9.5561.59 is now available https://groups.google.com/d/topic/datomic/sr0S1GHDgZQ/discussion#2017-09-2019:51twashingDoes anyone know how to surface TransactionMsec and StorageGetMsec Datomic metrics in AWS CloudWatch?#2017-09-2019:51twashingApparently the transactor is already recording them. http://docs.datomic.com/monitoring.html#transactor-metrics#2017-09-2019:53twashingBut the metrics I see under cloudwatch > metrics > EC2 > Per Instance Metrics > (search for the instance id: <my-id>)
don’t map to what I see here. http://docs.datomic.com/monitoring.html#transactor-metrics#2017-09-2020:19ezmiller77Hi all, I have a question about schema naming conventions. Say I have an entity defn :metadata/tags that is a reference type with cardinality many and is also configured as a component. The components items for this reference entity are individual content tags. I'm unclear what's the best way to name the tags. Currently, I'm thinking of this:
{:db/ident :metadata/tags
 :db/valueType :db.type/ref
 :db/cardinality :db.cardinality/many
 :db/isComponent true}
{:db/ident :tag
 :db/valueType :db.type/keyword
 :db/cardinality :db.cardinality/one
 :db/index true}
{:db/ident :empty}
Is there a better way to setup the naming here?#2017-09-2020:53favilawhat is :tag?#2017-09-2020:54favila{:metadata/tags [{:tag/name "the-name" :tag/whatever "foo"} ...] is the design I would expect @ezmiller77#2017-09-2021:21ertugrulcetinhey @marshall, I'm getting this error when using restore-db , clojure.lang.ExceptionInfo: Key missing in storage {:tail "59c2b291-c5b9-48cb-acdf-e4d581a9c33a", :t nil, :v nil, :prev nil} any ideas?#2017-09-2023:58csmDoes datomic-console support not (or or) in queries? it seems to change (not [...]) to ["not" [...]]#2017-09-2108:29isaacWhat the default jdbc connections pool size in datomic peer lib?#2017-09-2108:53richardwongI don't even think the datomic need to use the common conn pool to manage DB backend.#2017-09-2110:52apseyDo you guys have plans to support DAX for DynamoDB?#2017-09-2114:00timgilbertSay, am I correct that the only difference between :db.unique/identity and :db.unique/value is that :db.unique/value will throw an exception when I try to assert a new entity with an existing value, whereas :db.unique/identity will do an upsert (merge the new attributes into the existing entity)?#2017-09-2114:22favilaThat is the only difference#2017-09-2116:09timgilbertThanks!#2017-09-2114:48mkvlrdo unique values of type ref give me a single value instead of a set when doing the reverse lookup?#2017-09-2114:58favila(But I had to try it to find out)#2017-09-2215:01kvltThere might not be a good answer to this. The datomic result set, is it ordered in any way?#2017-09-2215:52favilano#2017-09-2215:53favilaqueries that don't use aggregation (i.e. 
those whose result would be a set) will have a stable order for the same input#2017-09-2215:54favilathis is just because the order of sets is stable (based on hash code)#2017-09-2215:54favilabut queries that use aggregation I'm not sure you can rely on repeatable order because they may perform some parts of the aggregation in parallel, and the result is an ArrayList#2017-09-2215:55favilaSo basically, no, there is no order you can really depend on. datalog operates in a set-wise fashion#2017-09-2216:59hmaurer@favila is it stable “forever”, or only within a certain time window?#2017-09-2217:03favilaIf the result set is the same (same values in it, all hashing the same) and the collection implementation is the same (uses same hashing algorithm, etc) then it's likely the same#2017-09-2217:04favilaThe further you get from immediately rerunning the same query in the same process, the weaker the guarantee#2017-09-2217:04favilaIOW I think it is foolish to depend on a stable order for anything non-trivial#2017-09-2217:05favilaorder is not part of the contract of d/q#2017-09-2315:39chris_johnsonSo I’m trying to set up a trivial case where I follow the Connect to a Database documentation, but in a REPL with some of my own code wrapping the client connection. I have a local server serving up an in-memory DB at hello just as the documentation specifies, but when I invoke client/connect with an apparently-correct connection map, what comes down the <!! is not a connection, but rather this:
#:cognitect.anomalies{:category :cognitect.anomalies/incorrect, :message "Incomplete or invalid connection config: {:timeout 60000, :account-id datomic.client/PRO_ACCOUNT, :access-key \"myaccesskey\", :secret \"mysecret\", :endpoint \"localhost:8998\", :service \"peer-server\", :region \"none\", :db-name \"hello\"}"}#2017-09-2315:40chris_johnsonAnd I cannot tell what is “Incomplete or invalid” about that map, it seems to me to be exactly what I would submit at the command line#2017-09-2315:41chris_johnsonHas anyone else encountered a similar problem and uh, solved it? 😇#2017-09-2321:21chris_johnsonFor people following along at home, the issue is that I was reading the config out of an EDN file, and therefore datomic.client/PRO_ACCOUNT was not being interpolated to its string value when passed in to datomic.client/connect#2017-09-2322:06dimovichhello#2017-09-2322:06dimovichI have entities that have tags... How can I pull all entities that have at least one tag from a collection of tags that the user specifies?#2017-09-2322:08hmaurer@dimovich not sure about the most efficient way, but you can definitely do it with or#2017-09-2322:08hmaurerhttp://docs.datomic.com/query.html#or-clauses#2017-09-2322:09hmaureroh actually there is a better way#2017-09-2322:09hmaurerCollection bindings#2017-09-2322:09hmaurerhttp://docs.datomic.com/query.html#collection-binding#2017-09-2322:10dimovich@hmaurer oh nice! thanks!#2017-09-2403:17ezmiller77@favila , missed your comment in response to my question about schema naming conventions. I think my :tag was meant to be equivalent to your :tag/name. Is the idea that yours is more explicit? The domain tag indicates what the entity is; whereas 'name' is the property of the domain. So : domain/property is the convention? I have also seen the use of a . in some cases. Do you ever use that? Is there some canonical basis to the conventions being used here?#2017-09-2420:06favilaIt mirrors the conventions datomic itself uses e.g. 
:db/cardinality :db.cardinality/one) and is a natural fit with clojure keyword and symbol namespace naming @ezmiller77 #2017-09-2420:07favilaI was confused in your example that you said a tag was an entity but the attribute was a keyword type#2017-09-2422:14ezmiller77> you said a tag was an entity but the attribute was a keyword type
good point!#2017-09-2422:15ezmiller77That makes sense that the conventions mirror datomic itself. In this example -- :db.cardinality/one -- what is the . saying?#2017-09-2422:15ezmiller77@favila#2017-09-2422:16favila@ezmiller77 think java package names#2017-09-2422:30ezmiller77@favila: I'll look it up. I haven't used Java since college. :0)#2017-09-2509:22dimovichHello#2017-09-2509:22dimovichI need to generate some invitation codes... Maybe datomic has some built-in functionality for this?#2017-09-2509:23dimovichdoes some have some examples of this?#2017-09-2511:29augustldatomic doesn't have any built in functions to generate data transactor side that I can think of#2017-09-2512:43hmaurer@dimovich generate it on the peer? is there any particular reason you want Datomic to do it for you?#2017-09-2512:53dominicm@hmaurer I'm guessing to guarantee uniqueness in a distributed system#2017-09-2512:54hmaurer@dominicm a sufficiently long random code should guarantee uniqueness to a high enough degree of certainty, no?#2017-09-2512:54hmaurere.g. a uuid#2017-09-2512:55dominicm@hmaurer sufficiently high is relative I suppose. Presuming that you want the token to be of a short length due to external constraints, it becomes more complicated.#2017-09-2512:56hmaurer@dominicm mmh, true#2017-09-2512:58hmaurerwhat you could also do is have a :db.unique/value constraint on the invitation code, generate it on the peer, and retry if the transaction fail because of a uniqueness exception#2017-09-2512:58hmaurer(if you expect collisions to be fairly infrequent)#2017-09-2514:52cch1Soliciting Help: I’ve got a peer that sets up the schema for a database, and then successfully transacts several additional entities. A client using the client API then connects and tries to query the DB for those entities -but without seeing them. 
Most suspiciously, the client API reports that it can’t even resolve the new schema entity and reports :t 1002 …which to me says the client API does not see any of the peer’s transactions.#2015-06-2515:07cch1Looks like a problem with the peer server communication. If I start a transaction from a client, the transaction is sent off, and the peer server shows it in its logs, but the response from the transactor never appears in the peer server’s logs and the client never gets an answer and eventually times out.#2017-09-2515:08cch1Not sure where to look and there are no errors on the peer server.#2017-09-2515:11benh@dimovich if the data has to be generated on the txtor, you can do this in a txtor function#2017-09-2515:13benhI have done this to simulate a SQL IDENTITY column before#2017-09-2515:18benhI have a datalog question;
I have some datalog that returns tuples; I only want a sample of 10 items; is there any way I can use something like
(d/q '[:find (rand 10 [?val1 ?val2])#2017-09-2515:19benhdo I#2017-09-2515:20benhjust have to (take 10 (shuffle (d/q '[:find ?val1 ?val2#2017-09-2515:24hmaurer@benha I think your second bit of code is the only option#2017-09-2515:26benhah well. always worth asking#2017-09-2515:39favila@benha You could try coercing the result to a vector, then use the sample aggregation function#2017-09-2515:40favilahowever, the whole result is realized in memory all the time anyway, even with sample, so there's no benefit to doing it in the query#2017-09-2515:40favila(except it's possible it's a more efficient impl than shuffle+take)#2017-09-2515:45benhhow would I coerce results to a vector ?
naive (d/q '[:find (sample 10 [?val1 ?val2])
gets me
java.lang.IllegalArgumentException: Argument [?val1 ?val2] in :find is not a variable#2017-09-2515:46favila[(vector ?v1 ?v2) ?r] at the end of your :where#2017-09-2515:46favilathen :find [(sample 10 ?r) ...]#2017-09-2515:46favilaI don't know that sample will work with vectors#2017-09-2515:47benhYes. that does work.
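[editor's sketch, not from the thread] The shape favila describes, with hypothetical attribute names :myns/val1 and :myns/val2, would look roughly like this: a computed binding at the end of :where packs each pair into one variable, which the sample aggregate can then draw from.

```clojure
;; Sketch only: :myns/val1 and :myns/val2 are hypothetical attributes.
(d/q '[:find [(sample 10 ?r) ...]
       :where
       [?e :myns/val1 ?val1]
       [?e :myns/val2 ?val2]
       [(vector ?val1 ?val2) ?r]]
     db)
```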
But it makes the code harder to follow#2017-09-2516:26colindresjA few times now I’ve had a datomic entity that needs to be serialized in order to include in a web response. I find that I’ve been doing a [:*] pull in those cases, but was wondering if there was a more succinct way to go from datomic entity to hash-map. Any insight?#2017-09-2516:28favilamore succinct than a wildcard pull? what are you imagining?#2017-09-2516:29colindresjOnly feels a little verbose because I’ve got to have the db available and in our controllers that’s not always the case#2017-09-2516:30colindresjBut it’s def not too bad, just wondering if there was something sort of like touch that gives back a hash map#2017-09-2516:30favilaWhen would you not have a db?#2017-09-2516:32favilaAre you saying you have an entity map (from d/entity) and you want to pull out of it?#2017-09-2516:32colindresjWe’ve kind of got an mvc-like namespace setup where datomic functions interact with the db, then give back the results to the controller. The controller then decides what the web response body should look like. 
We don’t typically have the db readily available in those controller namespaces#2017-09-2516:33favilawhat do you have available?#2017-09-2516:34colindresjUsually just some ring related utils and maybe some spec validation stuff#2017-09-2516:34favilawhat is "results"?#2017-09-2516:34colindresjTypically an entity or collection of entities#2017-09-2516:34favilaEntity as in entity-map?#2017-09-2516:34colindresjYeah#2017-09-2516:35favilaYou can get the db out with d/entity-db#2017-09-2516:35colindresjOh sweet#2017-09-2516:35colindresjDid not know that#2017-09-2516:35favilaentity maps are lazy, they all have dbs in them#2017-09-2516:35colindresjCool, so that would make pulling on the entity much simpler#2017-09-2516:35colindresjThanks#2017-09-2516:35favilawhich is why your question didn''t make much sense to me#2017-09-2516:35colindresjYeah didn’t know that#2017-09-2516:35colindresjI can see why confusing to you armed with that knowledge#2017-09-2516:35favila"I want to pull from the database without a database." huh?#2017-09-2516:36colindresjYeah it was more like I want to turn a lazy datomic entity into a hash-map#2017-09-2516:36favilad/entity is a map-projection of the datoms in the db#2017-09-2516:39benhwould (into {} entity-obj) do the trick?#2017-09-2516:40benhonly converts that top level, of course#2017-09-2516:40colindresjDoes that work? 
I feel like I tried that and still got some serialization exception#2017-09-2516:40colindresjI could be wrong though — it was a while back#2017-09-2516:41colindresjActually remember now, it works as long as there are no references to other entities#2017-09-2516:41favilayes, because those are entity-maps#2017-09-2516:41colindresjRight#2017-09-2516:41benhyou could use tree.walk to recurse down#2017-09-2516:42benhbut then you have to worry about loops#2017-09-2516:42favilaI've done generic entity-map to plain map conversions before, it's a little tricky#2017-09-2516:42colindresjYeah, I feel like pull is really the right thing to do, just wanted to see if there was something else#2017-09-2516:42favilabut this was before the pull api existed#2017-09-2516:42favilapull api is much better#2017-09-2516:42colindresjYeah agree#2017-09-2517:41souenzzoAbout pull-api: how to deal with enums? Example: :user/tags is a ref-to-many. I usualy want to send {:user/tags [:tag/1]} to frontend, not {:user/tags[{:db/id 3123}]}.
I made some code that replace all {:db/id 1321} that has a ident to this ident. But not sure if there is a better way to deal...#2017-09-2517:50souenzzosomething like (d/pull db pattern eid :prefer-idents true)#2017-09-2517:50hmaurer@souenzzo try {:user/tags [:db/ident]}#2017-09-2517:51souenzzo{:user/tags [{:db/ident :tag/1}]} is better then {:user/tags [{:db/id 123456}]} but still not {:user/tags [:tag/1]}...#2017-09-2517:52hmaurer@souenzzo you can transform the map you get to match your desired output format#2017-09-2517:52souenzzoYep. I do that. But has some odd edge cases..#2017-09-2517:52hmaurer(update result :user/tags map :db/ident)
#2017-09-2517:53souenzzo#specter
(transform (walker :db/ident) :db/ident x)#2017-09-2517:54souenzzo(all maps that contain a :db/ident should be transformed by the :db/ident function)...#2017-09-2517:54hmaurerah yeah, I messed up my code 😛 and spectre is nicer anyway#2017-09-2517:55souenzzoBut I reached the case: I have an entity that is an enum but contains other attributes. Sometimes I want to see all attrs, other times I want to use it as an enum...
Once I transform with specter on a default interceptor...#2017-09-2519:12Brendan van der EsHi all, does anyone know where the :db/fn definition for the :db/add and :db/retract tx functions live? I can't seem to find it in the extent of the respective entities nor in the :db.part/db entity.#2017-09-2519:13favilaadd and retract are intrinsics#2017-09-2519:13favilathere is no fn#2017-09-2519:14favilaThink about it: if these were transaction functions, what would they expand to?#2017-09-2519:23Brendan van der EsThey could still expand to the actual datom but it does make more sense that the transact function (peer or transactor side) would take care of this.#2017-09-2519:23Brendan van der EsThanks#2017-09-2520:13tengIf I want to reconstruct a database with all the old transactions intact, can I do that? If the old database consists of 1000 transactions, do I need to perform 1000 transactions to the empty database too, or is it another way of doing it? Some of the facts need to be filtered out also in the target database so it’s not just a simple dump/backup.#2017-09-2521:36favilaYou need 1000 transactions, in the same order. You can reconstruct the old transaction times by setting :db/txInstant on each transaction; but you cannot alter transaction times after transacting#2017-09-2520:52souenzzo(defn add-to-tx-data
  [db tx-data]
  (let [{:keys [db-after db-before tempids tx-data]} (d/with db tx-data)
        ids-after (d/q MY-QUERY db-before db-after tx-data)]
    (into tx-data
          (for [id ids-after]
            [:db/add (resolve-id->tempid tempids id) :foo/bar true]))))
Is there some way to do resolve-id->tempid? It should look at tempids and, if there is a tempid that was "resolved" into this id in the transaction, return this tempid, else return the id.#2017-09-2521:38favilaHow do you distinguish ids that were issued in the tx?#2017-09-2521:39favilaI can't think of a way to do it that doesn't involve being careful about what external ids each entity has#2017-09-2521:41favilaperhaps you could check the history of the entities mentioned by each tx and see if there are any other txs in the entity history#2017-09-2522:00souenzzoI get it.
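[editor's sketch] favila's history idea could be written roughly like this; new-in-tx? is a hypothetical helper, not from the thread. An entity id counts as "new in this tx" when its history contains no datoms from any earlier transaction:

```clojure
;; Hypothetical sketch: ?tx is the entity id of the current transaction;
;; transaction entity ids increase monotonically, so any smaller tx id is earlier.
(defn new-in-tx?
  [db tx eid]
  (empty? (d/q '[:find ?earlier-tx
                 :in $ ?e ?tx
                 :where
                 [?e _ _ ?earlier-tx]
                 [(< ?earlier-tx ?tx)]]
               (d/history db) eid tx)))
```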
I will think in another solution#2017-09-2521:41favilaif there are none, it is a new entity id; if not, probably it was resolved to an existing id#2017-09-2608:53mkvlris there a way to use datomic/entity and add additional computed data on the “map”? We noticed that assoc throws. We can merge a new map with the attributes with the entity but then we lose the laziness that datomic/entity provides.#2017-09-2609:01dimovich@hmaurer @dominicm @benha thanks for the tips regarding the invitation code generation#2017-09-2610:19benh@mkvlr you could reify IAssoc to allow local assoc calls, but pass through gets to underlying EntityMap#2017-09-2610:20benhbut be aware that this does not transact the new data to datomic#2017-09-2610:20benhwhich is likely to cause confusion#2017-09-2610:33mkvlrsure, I’m aware it doesn’t transact#2017-09-2610:36mkvlrany other best practices to add computed data to an entity? Not put it on the entity at all but use another thing?#2017-09-2610:53hmaurer@mkvlr silly question, but do you really need to keep the map lazy?#2017-09-2610:54hmaurerand do you need to add computed data deep inside the map, or only at the top level?#2017-09-2610:55mkvlr@hmaurer only at the top level. I’m new to datomic, so I’m asking for best practices#2017-09-2610:55mkvlrwhat if I touch the entity, will I still be able to lazily load relations reachable from it?#2017-09-2610:59hmaurermmh I am not sure, but let’s try it#2017-09-2610:59hmaurerjust a sec#2017-09-2611:02hmaurermmh no, it looks like touching loads the whole thing#2017-09-2611:02hmaurer> Touches all of the attributes of the entity, including any component entities recursively.#2017-09-2611:03hmaurerfor components at least#2017-09-2611:05hmaurerI am new to Datomic too so I am just trying things out, but it looks like (into {} your-entity) will return a PersistentArrayMap at the top level, but leave EntityMaps for nested entities (which are lazy)#2017-09-2611:06hmaurer@mkvlr ^#2017-09-2611:08mkvlr@hmaurer thanks! 
that’s great then, I’ll just touch 🙏#2017-09-2613:05teng@favila yes, I realized my mistake, thanks!#2017-09-2613:25hmaurer@favila do you know if datomic will attempt to optimise queries, and if yes, how? e.g. are clauses re-ordered on datalog queries?#2017-09-2613:26favilaDatomic does some optimizations, but clause reordering is not one of them. I think they may never add that optimization for philosophical reasons#2017-09-2615:24pbostromis anyone familiar with this stack trace, and what the cause might be? I see it when I transact my schema using datomic.api
11:16:24 ERROR nREPL-worker-0 core.client AMQ214016: Failed to create netty connection
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:208)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:203)
at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1226)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:549)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:534)
at io.netty.handler.ssl.SslHandler.connect(SslHandler.java:438)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:549)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:534)
at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:549)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:534)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:516)
at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:970)
at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:215)
at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:166)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:402)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at java.lang.Thread.run(Thread.java:745)
#2017-09-2615:26favilaYou sure it is datomic.api and not datomic.client?#2017-09-2615:26favilalooks like it can't resolve an address#2017-09-2615:26favilahostname somewhere is wrong#2017-09-2615:32pbostromyes, datomic.api. it's coming from ActiveMQ, there should be no hostname resolution required for the transaction (uri is DDB which looks up the transactor IP in Dynamo)#2017-09-2615:33favilawhat is in transactor host= and alt-host= ?#2017-09-2615:38pbostromah, alt-host has some garbage, thanks for the tip#2017-09-2617:29phillipcdoes anyone know if the :string type for an attribute has any sort of character limit? I cannot seem to find anything in the documentation that would suggest there is.
Thanks#2017-09-2617:34favila@phillipc AFAIK no intrinsic limit. you will probably run into platform limits first (e.g. string too long for java or fressian)#2017-09-2617:34favilabut it's not a good idea to put big values into a datomic db#2017-09-2617:35favilaconsider writing a uuid-named file to a blob store (e.g. s3) and storing the reference in the db as the value#2017-09-2617:36phillipcI doubt it will be large enough for a blob, I'm just converting some sql data stores and was making sure that :string could handle it
Thank you for the quick response!#2017-09-2617:37favilaoh, yeah it will handle that fine#2017-09-2710:51Ben HammondI was hoping to pass a variable sample size into the datalog sample function.
But it's not having it. Is there anything obvious I am doing wrong ?
(d/q '[:find (sample ?size ?id) .
       :in $ ?size
       :where [_ :myns/main-id ?id]]
     (d/db (d/connect "datomic:"))
1024)#2017-09-2710:52Ben Hammond=>#2017-09-2710:52Ben Hammondjava.lang.ClassCastException: clojure.lang.Symbol cannot be cast to java.lang.Number
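[editor's sketch] Since the ?size variable evidently isn't substituted into the aggregate (hence the ClassCastException), one workaround is to build the query data structure at runtime so the size is spliced in as a constant. This reuses Ben's :myns/main-id attribute; sample-main-ids is a hypothetical helper name:

```clojure
;; Build the :find spec programmatically; n becomes a literal in the query.
(defn sample-main-ids
  [db n]
  (d/q [:find [(list 'sample n '?id) '...]
        :where '[_ :myns/main-id ?id]]
       db))
```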
#2017-09-2710:52Ben HammondI could beat it to death with (defmacro#2017-09-2710:53Ben Hammondbut I'd rather not if I can avoid it#2017-09-2710:54Ben Hammondperhaps its a hint that sample sizes are best hardcoded#2017-09-2712:52dominicm(d/q (concat [:find] (list 'sample 1024 '?id) '[:where [_ :myns/main-id ?id])) 😂#2017-09-2713:56souenzzoI installed one datomic function to run as a "transaction function"
Then I "overinstalled" it. On (d/with db [[:my-fn]]) it runs the new version. On (d/transact conn [[:my-fn]]) it runs the older version...#2017-09-2713:57souenzzo(datomic+dynamo transactor)#2017-09-2714:08favila@souenzzo you should get a reproducible test case and report to datomic#2017-09-2714:09favilaare you sure the new tx fn was actually transacted, not d/with-ed?#2017-09-2714:11souenzzoOn (d/pull db '[*] :my-fn) I get the new version.#2017-09-2714:14favilaI'm asking if db isn't the result of other d/withs rather than directly from (d/db conn)#2017-09-2714:14favilajust double-checking#2017-09-2714:27souenzzoI found it. :requires behaves like a to-many attribute
My wrong version had 2 requires (one invalid require). Then I made a new one with just one require, but it was still trying to import the wrong require#2017-09-2714:40souenzzo(let [db-uri "datomic:"
      conn (do (d/create-database db-uri)
               (d/connect db-uri))
      fn-1 {:db/ident :my/fn
            :db/fn (d/function {:lang :clojure
                                :requires '[[foo.bar] [datomic.api :as d]]
                                :code '(do "same-code")})}
      {db-1 :db-after} @(d/transact conn [fn-1])
      fn-2 {:db/ident :my/fn
            :db/fn (d/function {:lang :clojure
                                :requires '[[datomic.api :as d]]
                                :code '(do "same-code")})}
      {db-2 :db-after} @(d/transact conn [fn-2])
      fn-3 {:db/ident :my/fn
            :db/fn (d/function {:lang :clojure
                                :requires '[[datomic.api :as d]]
                                :code '(do "not-the-same")})}
      {db-3 :db-after} @(d/transact conn [fn-3])]
  {:fn-1 (:requires (:db/fn (d/entity db-1 :my/fn)))
   :fn-2 (:requires (:db/fn (d/entity db-2 :my/fn)))
   :bug? (= (:requires (:db/fn (d/entity db-1 :my/fn)))
            (:requires (:db/fn (d/entity db-2 :my/fn))))
   :fn-3 (:requires (:db/fn (d/entity db-3 :my/fn)))})
=>
{:fn-1 [[foo.bar] [datomic.api :as d]]
 :fn-2 [[foo.bar] [datomic.api :as d]]
 :bug? true
 :fn-3 [[datomic.api :as d]]}
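[editor's sketch] One way to confirm that a transaction was a no-op is to count its :tx-data. This reuses the fn-2 map from the snippet above:

```clojure
;; An "empty" transaction still produces exactly one datom: the transaction
;; entity's own :db/txInstant. So (count tx-data) of 1 means nothing about
;; :my/fn was re-asserted.
(let [{:keys [tx-data]} @(d/transact conn [fn-2])]
  (> (count tx-data) 1))
```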
#2017-09-2715:25favila@souenzzo tx-2 didn't actually do any work#2017-09-2715:25favilalook at the tx-data#2017-09-2715:29favilathat seems like a datomic bug to me#2017-09-2715:29favilasomehow the txor thinks the function hasn't changed, so no new datoms are asserted#2017-09-2715:29favilathe functions don't compare equal in clojure or using query comparator, so I'm not sure what's up#2017-09-2715:33favila:lang also does not cause a change#2017-09-2715:34favilaah nm#2017-09-2715:34favilachecking each key now#2017-09-2715:34souenzzoI'm trying to find/recover my password at http://receptive.io 😄#2017-09-2715:36favila(let [db-uri "datomic:"
      conn (do (d/delete-database db-uri)
               (d/create-database db-uri)
               (d/connect db-uri))
      {tx-1 :tx-data} @(d/transact conn [{:db/ident :my/fn
                                          :db/fn (d/function {:lang :clojure
                                                              :requires '[[foo]]
                                                              :code '(do "same-code")})}])
      {tx-2 :tx-data} @(d/transact conn [{:db/ident :my/fn
                                          :db/fn (d/function {:lang :clojure
                                                              :requires '[[bar]]
                                                              :code '(do "same-code")})}])]
(assert (> (count tx-2) 1)))#2017-09-2715:36favilasmaller test case#2017-09-2715:37favila:imports is also ignored#2017-09-2715:37souenzzoIt should just compare :code#2017-09-2715:38favilaI think the transactor, when comparing old and new fn values, is only looking at :lang and :code and ignoring :requires and :imports#2017-09-2715:39favilathis case also fails:#2017-09-2715:39favila(let [db-uri "datomic:"
      conn (do (d/delete-database db-uri)
               (d/create-database db-uri)
               (d/connect db-uri))
      {tx-1 :tx-data} @(d/transact conn [{:db/ident :my/fn
                                          :db/fn (d/function {:lang :clojure
                                                              :imports '[java.net.URI]
                                                              :code '(do "same-code")})}])
      {tx-2 :tx-data} @(d/transact conn [{:db/ident :my/fn
                                          :db/fn (d/function {:lang :clojure
                                                              :imports '[java.util.UUID]
                                                              :code '(do "same-code")})}])]
(assert (> (count tx-2) 1)))#2017-09-2715:40favila:params changes also seem fine#2017-09-2715:40favilaso :imports and :requires are ignored#2017-09-2715:40favilathat's the bug#2017-09-2715:53souenzzobug #2 - Can't login/recover my password in http://receptive.io#2017-09-2715:54favilareceptive isn't where you go, it's zendesk#2017-09-2715:54favilareceptive is for the feature requests#2017-09-2715:55favilahttps://support.cognitect.com @souenzzo#2017-09-2716:37souenzzoreported#2017-09-2718:30timgilbertSay, just out of curiosity has anybody managed to get datomic to run with localstack? https://github.com/localstack/localstack#2017-09-2815:31apseyHas anyone compared performance of transactors comparing backend storages such as Cassandra vs DynamoDB?#2017-09-2815:31apseyI am mostly interested in transaction functions, indexing job and write parallelism#2017-09-2818:33mkvlrshould datomic strings always be utf8? We’re seeing umlauts like ü come out as ? after transacting them and reading them again…#2017-09-2818:34mkvlrsame is true when importing from an utf8 encoded postgres table. I’ve read recent jdbc drivers should pick up the encoding automatically.#2017-09-2818:37mkvlrmaybe there’s also something with pedestal that we’re missing and it’s all correct in datomic… We are setting the charset=utf-8 in the Content-Type header though#2017-09-2818:45favila@mkvlr In a scenario involving a peer encoding is not an issue since strings are shared in a type-safe way. 
Your problem is at some higher layer#2017-09-2818:45favilayou can confirm by interacting with the peer in a repl#2017-09-2818:47favila(the datomic data in postgres is stored as a blob--it is opaque to the sql server so things like column encoding don't matter)#2017-09-2818:48mkvlr@favila yes, that’s true for the datomic side, but we’re migrating our data from actual postgres tables into datomic using a script called from the repl#2017-09-2818:49favilathe strings you transact may be decoded incorrectly#2017-09-2818:49favilaor you may be encoding them incorrectly in the http response#2017-09-2818:50favila(d/pull db [:string-attr] suspect-id) will tell you if it is http's fault#2017-09-2818:50mkvlralright, so maybe more on the #pedestal side#2017-09-2818:50favilaif you see bad characters, then the problem is with what prepared the string for transacting#2017-09-2818:50favilaif you see good characters, it's pedestal's fault#2017-09-2818:50mkvlrany gotchas with the repl? do I have to set utf8 encodings there?#2017-09-2818:50favila(or something)#2017-09-2818:51favilaI've never had to set encodings with repls, but I use nrepl all the time#2017-09-2818:51favilamaybe other repl types that is a concern#2017-09-2818:52favilatest by seeing what "\u00DC" prints I guess#2017-09-2818:55mkvlr@favila hmm, on staging (through telnet) I only see ?, locally it works, alright, so it’s not datomic, thanks! 🙏#2017-09-2818:56mkvlrmight it be a JVM thing?#2017-09-2818:56favilawhat is your staging repl? repl socket server? something else?#2017-09-2818:58mkvlr@favila yes just clojure.core.server/repl#2017-09-2819:00favilaugh it uses default charset, and not configurable#2017-09-2819:01favilawhat does (Charset/defaultCharset) say in your telnet repl? @mkvlr#2017-09-2819:04favilaif it mismatches locale charmap in your terminal you will have problems#2017-09-2819:04favilaand all of this is different from what encoding of strings pedestal may do#2017-09-2819:04mkvlruser=> (Charset/defaultCharset)
CompilerException java.lang.RuntimeException: No such namespace: Charset, compiling:(NO_SOURCE_PATH:1:1)
#2017-09-2819:05favilahm#2017-09-2819:05favila(java.nio.charset.Charset/defaultCharset)? @mkvlr#2017-09-2819:07mkvlrnothing good:#2017-09-2819:07mkvlruser=> (java.nio.charset.Charset/defaultCharset)
#object[sun.nio.cs.US_ASCII 0x56a76e18 "US-ASCII"]
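A short sketch of the diagnosis unfolding here (editor's addition; the command-line forms are illustrative assumptions, not from the thread): the JVM picks its default charset from the environment at startup, and a US-ASCII default means any non-ASCII character written to the socket REPL is replaced with `?`.

```clojure
;; Inspect the default charset and the property behind it:
(java.nio.charset.Charset/defaultCharset)  ; here: #object[... "US-ASCII"]
(System/getProperty "file.encoding")

;; Two possible fixes, per the discussion that follows (either should do):
;;   LC_ALL=en_US.UTF-8 java -cp ... clojure.main    ; via the locale
;;   java -Dfile.encoding=UTF-8 -cp ... clojure.main ; JVM property only

"\u00DC"  ; should echo back as Ü once the default charset is UTF-8
```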
#2017-09-2819:07favilaah, so ascii#2017-09-2819:07favilaanything not 7-bit will get stripped#2017-09-2819:08favilaso that's why you have ?#2017-09-2819:08mkvlrshould that be fixed by setting export LC_ALL=en_US.UTF-8?#2017-09-2819:09favilathat affects locale too, so I'm not sure#2017-09-2819:09favilato just alter default encoding I think starting java with -Dfile.encoding=UTF-8 will do#2017-09-2819:10favilalocale affects other stuff in addition (like number formating, money signs, etc), so if there's code that relies on the defaults it will break#2017-09-2819:10favilabut maybe that's what you actually need to do#2017-09-2819:11favilaIMO any API where charset/locale/etc is an optional variable is broken#2017-09-2819:11favilaunfortunately that's every api#2017-09-2819:11mkvlralright, will try LC_ALL and java -Dfile.encoding=UTF-8#2017-09-2819:11favilayou should only need one or the other I think#2017-09-2819:12mkvlryep, will try LC_ALL first#2017-09-2820:27mkvlr@favila can confirm LC_ALL is working, thanks again! 🙏#2017-09-2905:16favilaIs there any way to make datomic with sql storage use a different table name than datomic_kvs?#2017-09-2913:13stuarthalloway@favila not at present. The last time somebody hit this I think they found they could do a level of indirection on the SQL side.#2017-09-2914:43cch1In a transaction, I’m stuck trying to retrieve the (temp) entity id or the (temp) transaction id and using it as a value for unique attribute. Use case: I want to create a low-cost, shortish unique id for an entity that will survive backup/restore and other cross-database operations. db:/id seems like the obvious choice, but I am slightly worried about their stability across backup/restore.#2017-09-2914:44cch1Is there any way to have the tempid usable as the value for a custom attribute? 
What would be the db.valueType of such an attribute?#2017-09-2914:47cch1Squuids are unsavory due to length.#2017-09-2914:48cch1I’ve considered an alias to :db/id, but I think that would suffer from the same stability issues across backup/restore.#2017-09-2914:55cch1Alternatively, can anybody confirm that :db/id is intended to be stable across backup/restore?#2017-09-2915:01ovanWe think we've discovered a bug in Datomic related to string tempids being resolved incorrectly (two entities get mixed up in single transaction under certain conditions). Does anyone know what's the proper channel to report it to the Datomic team?#2017-09-2915:21favila@cch1 Cognitect has warned that :db/ids are not guaranteed to survive backups/restores and should not be used as stable identifiers. (but they have been stable so far). I don't get how a tempid would help you though!#2017-09-2915:22favilaWhat do you mean specifically by "retrieve the (temp) entity id"?#2017-09-2915:23cch1By “retrieve the (temp) entity id” I mean “reference the (temp) entity id so that I can assign it as a value to my attribute”.#2017-09-2915:23cch1Thanks for confirming my concern about the stability of :db/id across backup/restore.#2017-09-2915:23favilaI'm still not sure what you mean. Can you show me a transaction?#2017-09-2915:24cch1{:db/id "user"
:my/identifier "user"}
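A sketch of how the string tempid in the map above resolves (editor's addition; `conn` and the attribute are illustrative): the transaction result's `:tempids` map reports the entity id that the tempid `"user"` finally received. Because `:my/identifier` is not a ref attribute, its value stays the literal string `"user"`; no tempid substitution happens in the value slot.

```clojure
;; Assumes a connected peer (or client) `conn`; names are illustrative.
(let [result @(d/transact conn [{:db/id "user"
                                 :my/identifier "user"}])]
  ;; For string tempids, the :tempids map is keyed by the string itself.
  (get (:tempids result) "user"))  ; => the newly issued entity id
```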
#2017-09-2915:24favilathe tempid there is literally "user"#2017-09-2915:24cch1Yes.#2017-09-2915:24favilathere's nothing else to retrieve?#2017-09-2915:26cch1That’s enough to illustrate my example. In that case, :db/id is assigned a value (reported in the tempid resolution key of the transaction) and my custom identifier uses the same value. Significantly, my attribute does survive backup/restore because it is stored independently of the :db/id.#2017-09-2915:26cch1Does that make sense?#2017-09-2915:27favilaYou mean you want the entity id which finally replaces the tempid?#2017-09-2915:28cch1yes. I want it to be a value of my custom attribute.#2017-09-2915:28favilaso, if :my/identifier is a ref type, this is done automatically#2017-09-2915:28favilaif :my/identifier is not a ref type, there is no replacement done, what gets written in is literally whatever you put#2017-09-2915:29cch1So can I create an explicit self-reference? Or a ref to the transaction itself?#2017-09-2915:29cch1That’s an interesting idea.#2017-09-2915:29favilatempids are placeholders for real entity ids#2017-09-2915:29cch1Right. So a reference to myself would work.#2017-09-2915:29favilathe final step of tx expansion is to replace all tempids with ids#2017-09-2915:29favilayes, it would work, but also be pointless for what you seem to want, which is a short stable identifier#2017-09-2915:30cch1I see. It would not be stable.#2017-09-2915:30favilaalso, just use the db id#2017-09-2915:30favilathe db id is already there, why add another attr that just asserts itself?#2017-09-2915:30favilait's pointless#2017-09-2915:30cch1IFF it were stored separately, it would be stable across backup/restore. 
But as a reference, that would probably not work.#2017-09-2915:31cch1What I really need is access to the tempid creation mechanism as a value, not a reference/entity-id#2017-09-2915:31favilaAgain, I still don't see what that would accomplish#2017-09-2915:31favilatempids are syntax#2017-09-2915:31cch1Unique identifier, stable across backup/restore, short-ish, cheap.#2017-09-2915:31favilatempids are none of those things#2017-09-2915:31cch1squuid works, but too long#2017-09-2915:32cch1No, but the resolved value is all of those things except stable across backup/restore.#2017-09-2915:32cch1My hope was to store it as a value.#2017-09-2915:32favilabut the resolved value is the db id itself#2017-09-2915:32cch1Maybe not possible.#2017-09-2915:32cch1That’s OK.#2017-09-2915:33cch1The resolved value is unique, short-ish and very cheap (already computed in fact). I just need to get it assigned as a value during the transaction. But maybe that is not possible.#2017-09-2915:33favilawhy does it need to be in the value slot of a datom? that I don't understand#2017-09-2915:34cch1Because otherwise I suspect it is not guaranteed to survive the backup/restore process.#2017-09-2915:34cch1As an entity id, it could be regenerated as long as the references were presevered.#2017-09-2915:34cch1As a value, it should be stable.#2017-09-2915:35favilayes, but suppose datomic renumbers db/ids at some point. your "unique id" mechanism is now broken, because newly-issued ids may collide#2017-09-2915:35favilayou need to control the issuance mechanism too#2017-09-2915:35cch1That is a very good point.#2017-09-2915:35cch1Sigh. Maybe use squuids-as-a-long then.#2017-09-2915:36favilaso, we have created autoincrement counters in datomic for this purpose#2017-09-2915:36cch1Using a db fn?#2017-09-2915:36favilaexternal systems couldn't tolerate uuids because they were too long, and we couldn't find an encoding method that could compress them enough#2017-09-2915:36favilayes, with db.fn#2017-09-2915:36cch1OK. 
Point to example? You have been very helpful.#2017-09-2915:37favilait has the limitation that you can only increment one counter once per TX#2017-09-2915:37cch1Your scenario sounds exactly like mine. I was hoping for a solution that leveraged the internal uniqueness of tempids, but your point about clashes post-restore is the kiss-of-death for that.#2017-09-2915:37cch1That limitation might be OK…#2017-09-2915:38favilaI also vaguely heard that there are counter services (which just issue guaranteed-monotonically-increasing counters), or you could build something with zookeeper; and call these from within a tx fn#2017-09-2915:38favila(you have to tolerate gaps in ids from failed txs)#2017-09-2915:39favilaanyway, let me dig up our solution#2017-09-2915:39cch1gaps are ok too.#2017-09-2915:39cch1Your solution will be appreciated. ZK might work too since we are already using it.#2017-09-2915:39cch1Gotta run, work emergency#2017-09-2915:39favilaour solution doesn't have gaps and doesn't require coordination with other services, but has the one-increment-per-tx limitation#2017-09-2916:01cch1Thanks for your help, @favila#2017-09-2916:23favila@cch1 https://gist.github.com/favila/f33518c7e72a4079b5948d2f853053b0#2017-09-2916:44cch1Thanks!#2017-10-0114:47timgilbertSay, I’m seeing a bunch of errors in my datomic connected to DDB. The transactor says this:
2017-10-01 09:32:15.549 WARN default o.a.activemq.artemis.core.client - AMQ212037: Connection failure has been detected: AMQ119014: Did not receive data from /x.x.x.x:47452 within the 10,000ms connection TTL. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
#2017-10-0114:49timgilbert…and then on the peer I get a traceback that looks like this:
org.apache.activemq.artemis.api.core.ActiveMQUnBlockedException: AMQ119016: Connection failure detected. Unblocking a blocking call that will never get a response
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:409)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:307)
at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.deleteQueue(ActiveMQSessionContext.java:249)
at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.deleteQueue(ClientSessionImpl.java:330)
at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.deleteQueue(ClientSessionImpl.java:339)
at datomic.artemis_client$delete_queue.invokeStatic(artemis_client.clj:147)
at datomic.artemis_client$delete_queue.invoke(artemis_client.clj:143)
at clojure.core$partial$fn__5380.invoke(core.clj:2604)
at datomic.artemis_client$create_rpc_client$fn__7146$fn__7147.invoke(artemis_client.clj:290)
at datomic.artemis_client$create_rpc_client$fn__7146.invoke(artemis_client.clj:290)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invokeStatic(core.clj:657)
at clojure.core$apply.invoke(core.clj:652)
at datomic.error$runonce$fn__263.doInvoke(error.clj:148)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at datomic.artemis_client$create_rpc_client$fn__7150.invoke(artemis_client.clj:290)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invokeStatic(core.clj:657)
at clojure.core$apply.invoke(core.clj:652)
at datomic.error$runonce$fn__263.doInvoke(error.clj:148)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at datomic.artemis_client$create_rpc_client$fn__7154.invoke(artemis_client.clj:290)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invokeStatic(core.clj:657)
at clojure.core$apply.invoke(core.clj:652)
at datomic.error$runonce$fn__263.doInvoke(error.clj:148)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at datomic.artemis_client.RpcClient$fn__7133.invoke(artemis_client.clj:276)
at clojure.core$binding_conveyor_fn$fn__5297.invoke(core.clj:2027)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
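A hedged sketch, not from the thread: the unblocked-call exception above generally surfaces where application code derefs the future returned by `d/transact` after the Artemis connection to the transactor drops. Guarding that deref lets a processing loop survive transient connection failures; the retry policy and names here are illustrative only.

```clojure
(require '[datomic.api :as d])

;; Deref the transact future inside a retry loop; rethrows once the
;; (illustrative) retry budget is exhausted.
(defn transact-with-retry!
  [conn tx-data & {:keys [retries wait-ms] :or {retries 3 wait-ms 5000}}]
  (loop [n retries]
    (let [r (try
              @(d/transact conn tx-data)
              (catch Exception e
                (when-not (pos? n) (throw e))
                ::retry))]
      (if (= r ::retry)
        (do (Thread/sleep wait-ms)   ; back off before retrying
            (recur (dec n)))
        r))))
```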
#2017-10-0114:50timgilbertIt’s crashing one of my loops somewhere, but without any of my application code in the traceback it’s hard to figure out where the actual error occurs. Would this be in a (d/transact) call most likely? And what can I do to recover from it?#2017-10-0114:51timgilbertThe thread for the peer exception is reported as clojure-agent-send-off-pool-28#2017-10-0115:59itaiedHow can I execute functions on transactions?
For example, say I have an entity that I want to multiply its :e/c field by 2, how can I execute it?#2017-10-0116:42wistbhi, in our place, we are considering between datomic and apache ignite. For us, scale is top priority and our back of the envelope calculations are coming up in 20 - 30 billion datoms to start with. If you factor in the novelty and the accumulated history it does not look like datomic can scale for our needs. We talked about 'data decantation' (where we extract latest snapshot of data from one datomic and populate another) as a means to prolong. On the other hand Ignite is in-memory and can scale to big data (according to their docs). I wonder how in-memory node can scale to big data. I guess it uses the same idea of 'hot data' will be kept in memory and rest of the data will be fetched from storage. Any ideas or pointers how I should go about comparing the two ? thanks.#2017-10-0122:03clojuregeekHi, i'm creating an entity for a customer who has 1-3 address lines... is it better to have 3 attributes line1 line2 line3 .... or street-address with cardinality of many? ... i tried searching but not much luck
Will this change in the (near) future?#2017-10-0219:52danielcompton@jwkoelewijn I doubt it, as I think this would require all writes to go to a quorum of nodes in each datacenter to preserve consistency guarantees#2017-10-0305:42jwkoelewijn@danielcompton good point, thanks#2017-10-0305:43jwkoelewijnthen let me rephrase, how do people deal with multiple DC’s, do people use some kind of hot-standby set up, or are there other methods I missed?#2017-10-0313:55wistbI am puzzled why a 'coordinator' component that gives a facade api for the application and works with multiple datomic dbs is discouraged. It will make datomic a viable option for applications that need to scale horizontally.#2017-10-0313:56wistbIt may have to compromise on throughput which may be agreeable to some applications.#2017-10-0313:59wistbIt hurts to lose an argument to some other product just because they said they scale indefinitely .... Is it too much to expect a discussion on this sour point ?#2017-10-0315:13favilaWho discourages this? It doesn't come out of the box, but I don't think it's discouraged? It's definitely more complicated though#2017-10-0315:14favilaah I meant that to be a thread#2017-10-0315:16favilaDatomic is read scale, not really write-scale. If you have a huge amount of data but low write volume, you can use one transactor with a very large storage and multiple dbs. (Although be careful what storage backend you use. A sql backend, for eg, puts everything for one transactor in one table.) If you have high write volume, you need multiple transactors, sharding, and probably have to live without cross-tx atomic commits.#2017-10-0315:18favilaBoth of these are just more awkward.#2017-10-0315:23favilaLooking at ignite. It's a more complex ops architecture (designed to run in a cluster with dedicated machines), I'm not sure how read-scaling works (I think you need more cluster members?) and it doesn't have time-travel. But it's definitely "bigger", i.e. 
if you have the resources you can scale it to more storage and higher write volumes than is possible with datomic#2017-10-0315:23wistbwhen the in-memory grid databases talk about distributed transactions, they are dealing with similar concerns , right.#2017-10-0315:23favilaIf that's really what you need, then that's probably a better fit#2017-10-0315:23favilaI don't think so re: distributed transactions#2017-10-0315:24favilaIt depends on their model, but usually they are trying to write the same data to multiple nodes to ensure quorum and no conflics#2017-10-0315:25favilait's an artifact of clustering#2017-10-0315:25favilawhen I talk distributed transactions, I mean more XD-like#2017-10-0315:25favilayou don't want some to succeed and some to fail, but they're not part of the same storage#2017-10-0315:26wistbkeeping Ignite aside for a moment, I am trying to understand whether solving this in Datomic .. would it not make datomic available for lot more use cases .#2017-10-0315:26favilasolving what? high write volume?#2017-10-0315:27wistbsuppose I am ok with low write volume, but, just that, I need to handle lot of data.#2017-10-0315:27favilaIf you need to handle a lot of data, make more dbs#2017-10-0315:27favilathe same strategy that e.g. 
elasticsearch would take#2017-10-0315:28wistbIf my data is split among 3 dbs and say I have one unit of work that needs to be stored among the three.#2017-10-0315:28wistbwill that be one tx over 3 dbs (so that I get the acid over 3 dbs)#2017-10-0315:29favilaYeah, that doesn't seem to be a problem worth solving#2017-10-0315:29favilayou could do it with your own single-writer facade#2017-10-0315:29favilano one else could write#2017-10-0315:29favilaand you could attach metadata to each tx to correlate them together#2017-10-0315:30favilaand you could have well-defined rollback behavior if one fails#2017-10-0315:30favilaI mean, that doesn't look like it's worth it to me?#2017-10-0315:31wistbAnd, all that 'code' is essentially domain agnostic, right. As long as we come up with a standard way to express the needed metadata anyone else can use it.#2017-10-0315:31favilathere are lots of edge cases around rollbacks#2017-10-0315:32favilaand what if the dbs got out of sync because someone bypassed the writer facade?#2017-10-0315:32favilaI mean, yeah, I guess you could solve all those problems#2017-10-0315:32favilawe use lots of dbs in the same application, but we don't split a unit of work across two dbs#2017-10-0315:33favilawe never need a guarantee that two txs fail or succeed together#2017-10-0315:34wistband .. it is exactly the that does not look like it is worth aspect that is puzzling to me. I understand it is complex, but, it looks like the in-memory grid guys have done it... May be my understanding of what they exactly offer is poor.#2017-10-0315:35favilathey have done it by making different tradeoffs#2017-10-0315:35favilathey have clustered architectures, custom storages, mutable data#2017-10-0315:35faviladatomic has single writer, storage agnostic, immutable data#2017-10-0315:38wistbok. the trade offs. I wanted to understand the trade offs and the right questions to ask. Your explanation is helping me. Thank you much. 
I appreciate all the help you provide to datomic users.#2017-10-0315:39favilaglad to help. I'm not going to claim datomic is right for every problem#2017-10-0315:39favilaFor us, we don't have big-data workloads, and immutability and history and easy administration are all very important#2017-10-0315:40favilathe low-friction, low-impedance api is good too (no complex sql orms)#2017-10-0315:41favilabut it's not the only database we use. e.g. datomic is bad at fulltext search, so we pipe datomic data into elasticsearch#2017-10-0315:43wistbgot it. thank you.#2017-10-0317:59cch1Here (http://docs.datomic.com/transactions.html#monitoring-transactions) it’s stated that the :tx-data key on a transaction result can be used as a db value for a query. When trying that trick with the client API, I get Exception Not supported: class datomic.client.impl.types.Datom com.cognitect.transit.impl.AbstractEmitter.marshal (AbstractEmitter.java:176) using the exact code example from the above link. Has anyone successfully pumped the transaction result back into a query using the client API?#2017-10-0318:09marshall@cch1 You can use the :db-after as a db value for query, not the :tx-data#2017-10-0318:09cch1OK. But I was assuming the point of using tx-data is that it would obviate the need for a trip to the server.#2017-10-0318:09cch1Is there a use case for :tx-data in the client API?#2017-10-0318:10marshallall queries go to the peer server#2017-10-0318:10marshallif you’re using client#2017-10-0318:10cch1OK.#2017-10-0318:10marshalldoesn’t matter what the db is#2017-10-0318:10marshallsure, you may want to examine the specific datoms created for something. maybe to get the tx-inst or save off the txn id locally for something#2017-10-0319:15lenI am trying to find the reverse links to an entity using the datoms fn via the :vaet index, I am not sure how to navigate the results, what does it return and how do I handle those results ?#2017-10-0319:23favilaIt returns a seqable (i.e. 
you can call seq on it, or use something that does so automatically) that returns a lazy seq of datoms from the index you specified, with components matching what you specified in the args#2017-10-0319:24favilaIndividual datoms have fields that can be destructured by position [[e a v tx added?]] or by key {:keys [e a v tx added?]}#2017-10-0319:26lenthanks looking into that now#2017-10-0319:27favilatypical use would be like this:#2017-10-0319:28favila
(->> (d/datoms db :vaet :db.cardinality/one)
     (take 5))
=>
(#datom[8 41 35 13194139533312 true]
 #datom[9 41 35 13194139533312 true]
 #datom[15 41 35 13194139533366 true]
 #datom[17 41 35 13194139533366 true]
 #datom[18 41 35 13194139533366 true])
#2017-10-0319:28lenyes thats what I am getting#2017-10-0319:28favilaThis is all references TO the :db.cardinality/one entity#2017-10-0319:28favila(which is id 35)#2017-10-0319:29favilaso note the "v" slot on all results is 35#2017-10-0319:29lenHow to decode the vals in the #datoms vector ?#2017-10-0319:29favilayou can get by position or key#2017-10-0319:30lenI see the attrribue name is returned by id, do I have to look that up ?#2017-10-0319:31favilayes. this is the raw index, so there are no names or other niceties#2017-10-0319:31lenaah right#2017-10-0319:31favilad/ident is your friend here#2017-10-0319:31lenmake sense#2017-10-0319:31lenthanks !#2017-10-0319:33favilaI feel like accessing datom fields should be better documented. all I could find was this note#2017-10-0319:33favila> The datoms implement the [datomic.Datom](http://docs.datomic.com/javadoc/datomic/Datom.html) interface. In Clojure, they act as both sequential and associative collections, and can be destructured accordingly.#2017-10-0319:34favilaon http://docs.datomic.com/log.html#2017-10-0319:34lenright thanks#2017-10-0319:35favila(->> (d/datoms db :vaet :db.cardinality/one)
     (take 5)
     (map (fn [{:keys [e a v tx added]}]
            [e a v tx added])))#2017-10-0319:35favila
(->> (d/datoms db :vaet :db.cardinality/one)
     (take 5)
     (map (fn [[e a v tx added]]
            [e a v tx added])))#2017-10-0319:35lenWas just converging on that 🙂#2017-10-0319:37len
(->> (d/datoms db :vaet :db.cardinality/one)
(take 5)
(map (fn [[e a v tx added]]
[e (d/ident db a) v tx added])))#2017-10-0320:04len@favila thanks that works, does the datomic console synthesize the reverse keyword names like :entity/_link, just not sure where they come from ?#2017-10-0320:06favilaI don't use the console so I'm not sure#2017-10-0320:07favilaif their "datoms" feature has names, it's because it's calling ident#2017-10-0320:28lenHow do I get a list of all the attributes in the system ?#2017-10-0320:31favilathe values of the :db.install/attribute attribute on the :db.part/db entity#2017-10-0320:32lenthanks#2017-10-0413:47matthaveneris there an idiomatic way to determine the previous transaction? like opposite of d/next-t#2017-10-0413:51matthavener(dec some-tx-id) seems to work, I suppose thats idiomatic (if I understand transactions well enough)#2017-10-0414:00favilathere's no efficient way to determine the previous transaction#2017-10-0414:00favila(dec some-tx-id) is actually wrong, tx ids do not increase monotonically#2017-10-0414:00favila(there will usually be gaps)#2017-10-0414:56dpsuttonjust a quibble, but monotonically does not mean no gaps. 3,3,3,3,4,4,4,5 is monotonically increasing, as is the same sequence with gaps. Monotonic means that each term in the sequence is greater than or equal, not "contiguous"#2017-10-0418:51cch1Can anybody explain why, in the client API, pull manages to work without being passed the connection? I understand (thanks, @marshall!) that queries are processed on the peer server, but how is it that pulls are processed locally? I can conceptually understand the difference, but why was the local/remote line drawn between pull and query?#2017-10-0418:57favilaPulls are processed remotely too.#2017-10-0418:58cch1How is that possible if the connection is not passed in?#2017-10-0418:58favilaIt's accessible via the db#2017-10-0418:58cch1OK. But then why require it for queries? 
It seems inconsistent.#2017-10-0418:59favilaThe distinction is that queries MUST be evaluated somewhere, conn is where to evaluate it#2017-10-0418:59favilait is not input to the query#2017-10-0418:59favilaYou can have a query that has no inputs or multiple dbs as input, but it still needs to run somewhere#2017-10-0418:59favilathe connection is where it runs#2017-10-0418:59cch1Right. But pulls don’t need to run anywhere?#2017-10-0419:00favilawith the other apis, the input and where it runs are the same.#2017-10-0419:00favilapulls need to run on the peer server, but the source of the db object is the same as the connection is the same as where it runs. there is no ambiguity#2017-10-0419:01favilaqueries have arbitrary inputs (including no input), so you need to be explicit#2017-10-0419:04cch1The key to me seems to be the “no input” or “no dbs” input case -then how to find the connection? So that is why query requires conn as an arg. Does that reasoning seem rational?#2017-10-0419:05cch1In contrast, pull must take a db arg, from which a conn can be determined.#2017-10-0419:05favilayes exactly#2017-10-0419:06favilaReally, its that query is an RPC. the database inputs (if any) are sent "by-reference" to the conn#2017-10-0419:06favilapull and datoms are more like reads#2017-10-0419:07favilaThis makes more sense if you compare with the peer api. Queries run locally in the peer api#2017-10-0419:07favilathere is no connection object there#2017-10-0419:07cch1Yes. And that is how I got confused. I assumed it was a question of a dependency on datalog.#2017-10-0419:10favilaThink of it in terms of whether you need to access storage vs transactor#2017-10-0419:10faviladb is read-only access to storage. In client api, it's routed to the peer server#2017-10-0419:10favilain peer api, it's a direct connection to storage#2017-10-0419:11favilaconn is for transactor access. 
in peer, that's only for tx-log and d/transact#2017-10-0419:11favilain client, these are all routed through the peer server too, which also provides query#2017-10-0419:12favilathat's an attempt at an analogy between peer and client access patterns#2017-10-0419:13cch1Here’s the part I still don’t get: a db value (on the client) specifies the database (storage access), but as far as I can see it does not say how to connect the the peer server. So without a connection, how does the pull fn know how to reach the peer server?#2017-10-0419:14cch1(client/db $connection) => {:database-id "datomic:", :t 1009, :next-t 1013}#2017-10-0419:14favilathere's either a hidden reference in the db, or a process-level global that can find the connection for a db. It's an implementation detail in any case#2017-10-0419:14favilathe connection string is right there?#2017-10-0419:14cch1That’s storage, not the peer.#2017-10-0419:15favilais this a plain map?#2017-10-0419:15cch1checking…#2017-10-0419:15cch1yep#2017-10-0419:15favilathen there must be a global which maps storages to peer servers#2017-10-0419:15cch1That is surprising.#2017-10-0419:16favilamakes sense given the peer api#2017-10-0419:16favilathey were probably trying to be as close to that as possible#2017-10-0419:16favilawith query, it is impossible#2017-10-0419:18cch1But if the connection can be inferred from the db (by a global?) on pull, why not on query? That would make the q args closer to the peer as well.#2017-10-0419:18favilafrom which input argument would you infer the peer endpoint?#2017-10-0419:18favilaqueries have no necessary connection to storages#2017-10-0419:19cch1yes, of course. 
I again forgot about the “no dbs” case.#2017-10-0419:19favilacan you query different peer servers?#2017-10-0419:19favila(in the same query)#2017-10-0419:20favilaSeems possible to me#2017-10-0419:20favilawhat I mean is, dbs come from connections from different endpoints, mixed together in the same query#2017-10-0419:21cch1Well, you only pass one connection so I don’t think so. But that does raise an interesting question: if I pass two dbs (from different servers) to a peer#2017-10-0419:21favilaas long as the peer server can access that storage it should be possible for it to execute the query#2017-10-0419:21cch1I suppose so.#2017-10-0419:22cch1On a similar note, I found it interesting that pull can’t work on an arbitrary set of datoms -like query can.#2017-10-0419:23cch1I tried passing the :tx-data of a transaction to pull but it didn’t work (because, presumably, there was no indication of where to send the work/contact a peer server)#2017-10-0419:24favilapull only works on dbs. same in peer api#2017-10-0419:24favilayou need to supply something that admits of d/datoms-like calls to it. (whatever the internal interface is)#2017-10-0418:53cch1And, another question: is there a spec or documentation somewhere for the :tx-data key returned from a transaction?#2017-10-0419:02favilaIt's a vector of raw datoms, same as you would get from d/datoms#2017-10-0419:06cch1Presumably in the order provided as input, modulo the obfuscation that entity map form introduces.#2017-10-0419:08favilaorder is arbitrary, it's all set-wise#2017-10-0419:08favilatx-data tells you what actually was added and retracted, vs what you asked to happen#2017-10-0515:19uwowhile a single thread in a single process is responsible for writing transactions, the transactor can still take advantage of multiple cores for other purposes, right?#2017-10-0515:30uworight: http://docs.datomic.com/monitoring.html#add-more-cores
sorry for noise#2017-10-0516:06faviladatomic peer can really light up the cores IME#2017-10-0516:07favilauntil it's io bound#2017-10-0519:14neverfoxI’m trying to get Datomic working in k8s with Cassandra as the storage service, my peers cannot find the transactor after connecting to storage. I’ll describe the setup:#2017-10-0519:15neverfoxHere’s the transactor config:
protocol=cass
host=0.0.0.0
alt-host=datomic
port=4334
license-key=<redacted>
cassandra-table=datomic.datomic
cassandra-host=cassandra
cassandra-port=9042
memory-index-threshold=32m
memory-index-max=512m
object-cache-max=1g
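One way (an editor's sketch, not from the thread) to keep `alt-host` routable in a setup like the config above: render `transactor.properties` at container start with the pod's own IP, which Kubernetes can inject via the downward API (`fieldRef: status.podIP`). `POD_IP` and every path here are illustrative assumptions.

```shell
#!/bin/sh
# Hypothetical entrypoint sketch: POD_IP would be injected by kubernetes;
# defaulted here only so the script runs standalone for illustration.
POD_IP="${POD_IP:-127.0.0.1}"

cat > /tmp/transactor.properties <<EOF
protocol=cass
host=0.0.0.0
alt-host=${POD_IP}
port=4334
cassandra-host=cassandra
cassandra-port=9042
EOF

grep '^alt-host=' /tmp/transactor.properties
# a real entrypoint would finish with:
#   exec bin/transactor /tmp/transactor.properties
```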
#2017-10-0519:16favila@roman host and alt-host are used by peers to find the transactor#2017-10-0519:16neverfoxThe transactor is running in its own pod and there’s a service called datomic with port 4334 that other pods can connect to#2017-10-0519:17neverfoxyes, I know but…#2017-10-0519:17neverfoxthe peer reports that it’s trying to connect to the transactor at localhost and alt-host nil#2017-10-0519:17neverfoxdespite what the config says#2017-10-0519:18neverfoxIt’s my understanding that the transactor stores its connection info in storage and the peer picks it up there. But it appears not to get the configured information. Is that not correct? Why does the peer not try datomic:4334?#2017-10-0519:19neverfoxIt’s as if the transactor’s config is not applied to storage.#2017-10-0519:19favilaa different transactor or config is running; peer is connecting to wrong storage; peer cannot resolve "datomic" to an address?#2017-10-0519:21neverfoxThere is without a doubt no other transactor or no other Cassandra because in both cases I’m launching fresh pods in fresh clusters. As for the peer not resolving it, it’s not even trying it:#2017-10-0519:22neverfoxclojure.lang.ExceptionInfo: Error communicating with HOST localhost on PORT 4334 {:alt-host nil, :peer-version 2, :password "ckGYIzP97L6l+ERbbcnZuzCiSB/v3S1HfzUZdyIFLdE=", :username "f8HVTTNhf8bmknxqrO2TINx2xmPH5KAJLwQ5q/Cs0J8=", :port 4334, :host "localhost", :version "0.9.5561.59", :timestamp 1507230053031, :encrypt-channel true}
#2017-10-0519:22neverfoxIt believes alt-host is nil for one thing.#2017-10-0519:22favilathat could be because it didn't resolve#2017-10-0519:22neverfoxI see#2017-10-0519:22favilawhat is the connection string peer is using?#2017-10-0519:22favilatransactor log will echo the connection string on startup#2017-10-0519:23neverfoxdatomic:<cass://cassandra:9042/datomic.datomic/<redacted>>#2017-10-0519:23neverfoxfyi, there’s a service called cassandra and the peer connects to it (clear from logs)#2017-10-0519:24neverfoxBut that’s interesting what you said about nil meaning it didn’t resolve.#2017-10-0519:24neverfoxDoes that host need all three datomic ports to work?#2017-10-0519:25neverfoxi.e. 4334, 4335, and 4336?#2017-10-0519:25favilaI think not. I think 4335 is dev transactor storage access, 4336 is the h2 GUI console when using dev storage#2017-10-0519:26neverfoxThat’s what I thought#2017-10-0519:26neverfoxso that’s not the problem then#2017-10-0519:26favilawhat if you don't use 0.0.0.0 as the host#2017-10-0519:26faviladoes anything change?#2017-10-0519:26neverfoxI first tried with localhost#2017-10-0519:26neverfoxsame issue#2017-10-0519:27neverfoxdo you mean leave it out?#2017-10-0519:27favilaI mean use something routable#2017-10-0519:27neverfoxhmm, given that the IP is dynamic, how?#2017-10-0519:27favilagenerate the transactor.properties on startup#2017-10-0519:27neverfoxclever#2017-10-0519:29favilaI'm mostly suspicious that the wrong config file is pulled or another transactor is running against that storage#2017-10-0519:29neverfoxBut here’s what’s strange. I’ve run this in k8s before just fine with this setup when it was the dev transactor without having to do anything that complex.#2017-10-0519:29neverfoxonly when the storage is separate am I running into this.#2017-10-0519:30faviladev transactor's "storage" is the peer, so your connection string routes to that peer already#2017-10-0519:30neverfoxThere’s only one transactor pod.
I don’t know where it’s even possible for there to be another.#2017-10-0519:30favilaoutside the k8 cluster#2017-10-0519:30neverfoxthat’s true, good point#2017-10-0519:30favilamaybe a forgotten test#2017-10-0519:30neverfoxthe cluster is locked down#2017-10-0519:30neverfoxthere’s no way in without port forwarding and I’m the sole person using the cluster#2017-10-0519:31favilamaybe you could kill all transactors you know about, try to connect a peer, see if you get a different error#2017-10-0519:31neverfoxbut you’re right that it’s a mystery#2017-10-0519:31favilaif you get same error, then there's definitely a transactor somewhere#2017-10-0519:31neverfoxok#2017-10-0519:31neverfoxone sec#2017-10-0519:33neverfoxtrying it#2017-10-0519:36neverfoxSame error, but that doesn’t make any sense. This is a completely fresh Minikube with nothing on it but Cassandra. Is it possible that a transactor that is no longer running but was once running connected to Cassandra and left its connection info there and it’s just not getting updated?#2017-10-0519:37neverfoxwhen I launch a fresh one#2017-10-0519:37neverfoxnothing but cassandra and the peer, that is#2017-10-0519:38favilaI don't know. it's not what I would expect to happen#2017-10-0519:39neverfoxso strange#2017-10-0519:39neverfoxI appreciate your help however#2017-10-0519:40favilaps axf | grep datomic doesn't show anything? you mentioned port forwarding.
maybe you tried a transactor outside the cluster earlier and forgot about it?#2017-10-0519:40favilayou can also inspect the cassandra table itself, see if it's getting written to#2017-10-0519:40favilatransactors write at least once a second to heartbeat#2017-10-0519:40neverfoxThat’s reasonable but I’m not currently port-forwarding#2017-10-0519:41neverfoxI just mean in theory the only way in is such#2017-10-0519:41favilayou can also double-check your paths for your config file#2017-10-0519:41neverfoxThat’s a good idea#2017-10-0519:41favila(for the transactor startup)#2017-10-0519:41neverfoxWouldn’t it have failed to start though if that had been wrong?#2017-10-0519:41neverfoxBecause of the license#2017-10-0519:41favilamaybe you have two?#2017-10-0519:42favilasome editing shuffle#2017-10-0519:42neverfoxthese are good suggestions#2017-10-0519:42favilaor forgot to save#2017-10-0519:42favilajust covering bases#2017-10-0519:42neverfoxno, I appreciate it#2017-10-0519:42favilalocalhost and alt-host nil are suspicious#2017-10-0519:42neverfoxI know, right?#2017-10-0519:43neverfoxthe ps checks out#2017-10-0519:43favilacan also confirm in the logs that the transactor did actually start up and connect to cassandra. maybe it never did and the settings in there are from an earlier test, like you suggested#2017-10-0519:44favilaafter that, I'm out of ideas#2017-10-0519:44neverfoxwell, here are the logs:#2017-10-0519:44neverfoxLaunching with Java options -server -Xms4g -Xmx4g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic: ...
System started datomic:
#2017-10-0519:44neverfoxthat’s it#2017-10-0519:45favilathat's the systemd logs, not the transactor logs#2017-10-0519:45favila(stdout)#2017-10-0519:45neverfoxI see#2017-10-0519:46neverfoxthat’s just what is produced to stdout#2017-10-0519:46neverfoxI should be looking for a file then?#2017-10-0519:46favilaI'm trying to find the defaults#2017-10-0519:47neverfoxdoes the name of the properties file matter, i.e. the config file?#2017-10-0519:48favilaas long as it matches what was supplied, no#2017-10-0519:49neverfoxRight, and I’m giving it config/transactor.properties and that’s the only file at that location#2017-10-0519:49neverfoxnext up is examining the table#2017-10-0519:50favilabin/logback.xml has the log config#2017-10-0519:50neverfoxduh it’s just the log dir#2017-10-0519:52neverfoxeverything in there seems clean#2017-10-0519:53neverfoxit discovers and connects to the Cass cluster nodes#2017-10-0519:53neverfoxalso datomic.transactor - {:event :transactor/start, :args {:cassandra-port 9042, :cassandra-table "datomic.datomic", :log-dir "log", :alt-host "datomic", :protocol :cass, :rest-alias "cass", :memory-index-max "512m", :cassandra-host "cassandra", :port 4334, :memory-index-threshold "32m", :data-dir "data", :object-cache-max "1g", :host "0.0.0.0", :version "0.9.5561.59", :encrypt-channel true}, :pid 1, :tid 12}#2017-10-0519:53neverfoxso it knows the alt-host here#2017-10-0519:55favilayep#2017-10-0519:55favilaso something is wrong with the peer or the networking setup for the peer#2017-10-0519:56neverfoxthis is what’s in the Cass table: {:key "[\\"0.0.0.0\\" \\"datomic\\" 4334 \\"XnH1k0PQkm/Hz/Y4ISZpE7fpHcBMP7ui8pz8wwNcPXk=\\" \\"afmWR0zouI8Bee4gld/5zM48H4pecIzmHeNjeCODSfI=\\" 1507233334931 \\"0.9.5561.59\\" true 2]"}#2017-10-0519:56neverfoxI think you’re right.#2017-10-0519:56favilayeah that all looks good, transactor is definitely connecting#2017-10-0519:58favilayou can look in the peer (java) logs too for more info#2017-10-0519:59favilayou should see events 
like :peer/get-connection :coord/lookup-transactor-endpoint, and :peer/hornet-connect and :peer/hornet-connect-failed#2017-10-0519:59favilathese will tell you more than the exception#2017-10-0520:06neverfoxgood call#2017-10-0520:07neverfoxI’m currently suppressing them in logback but that’s easy to change#2017-10-0520:22neverfoxwow#2017-10-0520:22neverfoxnow it’s just working#2017-10-0520:31favilahah, that's great#2017-10-0520:54timgilbertI'm having some trouble trying to start a dev transactor inside of docker-compose. Well, it's not trouble exactly...#2017-10-0520:55timgilbertI'm able to start the transactor and connect to it OK and it seems to work, but every time I connect to it I get bunches of these tracebacks in the peer:
ERROR 16:54:30.264 o.a.activemq.artemis.core.client: AMQ214016: Failed to create netty connection
java.nio.channels.UnresolvedAddressException: null
at sun.nio.ch.Net.checkAddress(Net.java:123)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:208)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:203)
at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1226)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:549)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:534)
at io.netty.handler.ssl.SslHandler.connect(SslHandler.java:438)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:549)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:534)
at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:549)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:534)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:516)
at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:970)
at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:215)
at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:166)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:402)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at java.lang.Thread.run(Thread.java:745)
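The `UnresolvedAddressException` above is typically the peer failing to resolve the transactor's advertised `alt-host`. One workaround not taken in the thread (so treat it purely as a hypothetical sketch) is to make `dev-datomic` resolvable on the developer machine, since ports 4334-4336 are already published to the host:

```shell
# Sketch: map the compose service name to localhost on the developer
# machine so a peer outside Docker can resolve the transactor's alt-host.
# HOSTS_FILE defaults to a scratch copy; point it at /etc/hosts (with
# sudo) to apply for real.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.sketch}"
cp /etc/hosts "$HOSTS_FILE" 2>/dev/null || touch "$HOSTS_FILE"
grep -q 'dev-datomic' "$HOSTS_FILE" || echo '127.0.0.1 dev-datomic' >> "$HOSTS_FILE"
```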
#2017-10-0520:57timgilbertMy transactor props looks like this:
protocol=dev
host=0.0.0.0
alt-host=dev-datomic
port=4334
...and the docker-compose bit looks like this:
services:
  dev-datomic:
    image: ""
    ports:
      - "4334-4336:4334-4336"
    volumes:
      - "datomic-data:/opt/datomic/data"
#2017-10-0520:58timgilbertFrom the peer I'm connecting to datomic:#2017-10-0600:01neverfox@timgilbert Your peer is running in a container in the docker network or outside of it?#2017-10-0600:01timgilbertMy peer is running on the host, connected to the external port on the transactor which is in a container#2017-10-0600:03timgilbertI’m beginning to suspect I’ll need to run it inside the docker-compose network and access the transactor as datomic:#2017-10-0600:03timgilbert…but I was hoping to avoid the hassle of developing inside a container#2017-10-0607:23mx2000Can I use the local filesystem dev storage for a small website in production?#2017-10-0607:28augustlwe're doing that 🙂#2017-10-0618:54marshallDatomic 0.9.5561.62 is now available https://groups.google.com/d/topic/datomic/eLVunC7B4Uo/discussion#2017-10-0716:57favilaOut of disk space?#2017-10-0723:35danielcomptonThat looks like a JVM GC log, what’s the memory usage like?#2017-10-0822:32jetzajacHello everyone! We use DataScript to store our application state, even when running on a JVM. There is a hope that in-memory Datomic could be faster than DataScript. Before we measure it and take the decision of moving there, I have to mention that our DataScript db stores not only regular data but also arbitrary java objects! Is there a way to achieve this with Datomic given it is running in memory and there is no need for the serialization/deserialization pass?#2017-10-0822:59favilaNo.#2017-10-0916:18mgrbyteWondering what the best-practise is for validating a spec of a data-structure passed to a custom datomic.api/function... currently am throwing if invalid using ex-info but this is seen as java.util.concurrent.ExecutionException from the outside. Any pointers?#2017-10-0917:23favilaI think getCause will give you the original exception#2017-10-1004:43akirozI guess I'm doing something stupid but I've been stuck trying to transact a schema for quite a while now.
Getting java.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity: :db/indent when I called (datomic.api/transact conn [{:db/indent :foo/bar :db/valueType :db.type/string :db/cardinality :db.cardinality/one}])... I'm on version 0.9.5561.59 any ideas?#2017-10-1006:43akirozOk, I found the problem..... misspelt ident -> indent, my guess was right. (facepalm)#2017-10-1017:59djjolicoeuradding an index to an attribute happens in a background process on the transactor, correct? does it degrade service on running peers at all, other than using resources?#2017-10-1018:03djjolicoeurI just want to make sure I understand the process#2017-10-1018:13djjolicoeuran existing attribute, that is#2017-10-1018:36djjolicoeurand is there a way to monitor progress?#2017-10-1019:36djjolicoeurother than monitoring sync-schema, which I guess I could easily script something out around#2017-10-1020:52eoliphantHi, I have a quick collections question. I’m doing some stuff with datomic and of course you’re kind of on the hook for your own paging. I’m trying to implement cursor based paging such that I can give a key value then return that item, plus N items ‘in front’ or ‘behind of it’ So for the following
({:id "A" :val..} {:id "B" :val..} {:id "C" :val..}{:id "D" :val..} {:id "E" :val..})
If I have say “C” and ‘2 behind’, I get A,B,C. Or “C” and ‘one after’ would give me C and D. and of course, it’d be nice if it was reasonably performant. Any suggestions?#2017-10-1110:42karol.adamiechi guys, as we go for production (YAY!!) we need to setup backup and restore. Datomic ships with proper tools and they do work… What i am interested in is how do you set backup to s3 to be done eg. daily with AWS infrastructure? are there any easy ways to do that? any knowledge or pointers you could give?#2017-10-1111:02karol.adamiecok, i found some super helpful script from @robert-stuttaford :+1: will try to work with that…#2017-10-1118:09hmaurerWhat’s the link to the script please? 🙂#2017-10-1209:42hmaurer@U24SVMY3B thanks!#2017-10-1117:24danielstocktonWhat IP/patents/trademarks does datomic have that would prevent an open-source copy?#2017-10-1208:01augustlto my knowledge, not patents#2017-10-1415:04souenzzoWhere can I find more debug info about Critical failure, cannot continue: Indexing retry limit exceeded.? It's running on Dynamo/EC2/8Gb.#2017-10-1415:15souenzzoWhen a new client initiates the connection, then the transactor fails.
But restarting the transactor with the 2 applications "already connected", everything goes ok.#2017-10-1422:13alexisvincentHi there, I’m wanting to build a file system service backed by s3, was wondering if I’d be able to get a datomic instance actually backed by S3 as well?#2017-10-1422:14alexisvincentThe datomic db would be there to manage object metadata#2017-10-1508:48laujensenI have two databases that I want to merge in a single db, can I just run 2 restore-db's, or will the second overwrite the first?#2017-10-1508:50dominicmDatomic will get angry at you, yep#2017-10-1508:50dominicmIt will simply disallow the second restore as the history isn't linear.#2017-10-1508:51dominicmYou'll have to do a decant yourself with code, afaik.#2017-10-1512:37souenzzobump#2017-10-1609:17mpenetare there already details about the pricing for datomic cloud, apart for the dev plan?#2017-10-1609:34dazldwhat's that @mpenet?#2017-10-1609:35mpenet@dazld https://www.youtube.com/watch?v=Ljvhjei3tWU#2017-10-1609:36dazldhm! cool!#2017-10-1615:20jaret@souenzzo Can you provide your transactor logs from to startup to the point of failure to me privately? Are you seeing any throttling/backoff reporting from DDB? In general the error happens when the transactor failed to write an index multiple times. Possible causes include memory pressure, storage backpressure, GC churn.#2017-10-1615:31souenzzoDue Unix permissions, the transactor was unable to write tempfiles. Chown fixed.#2017-10-1615:37jaretglad you were able to track that down. Sorry I didn’t see this over the weekend. 🙂#2017-10-1719:31timgilbertIs there a quick way to get from a datomic t value to a java.util.DateTime representing the (approximate) time that it occurred? I could swear I saw a function or something at one time, but now I can't find it#2017-10-1719:34favila(->> t d/t->tx (d/entity db ) :db/txInstant)#2017-10-1719:34favilajava.util.DateTime? 
you mean java.util.Date?#2017-10-1719:37timgilbertEr, yes#2017-10-1719:38timgilbertAh, I had forgotten about :db/txInstant. Thanks @favila!#2017-10-1800:44boldaslove156Can we get when an eid is created using entity ?#2017-10-1800:44boldaslove156Or the only way is to use q and get the :db/txInstant ?#2017-10-1801:47potetm@boldaslove156 unless the entity is a transaction, the latter#2017-10-1809:59boldaslove156thanks! @potetm#2017-10-1817:39uwohow to express a query to find all entities that have the same value for an attribute?#2017-10-1817:40uwo(where that value is left unbound)#2017-10-1817:46robert-stuttaford@uwo so you don’t have the value, you just want to find entities that share values for an attribute? you could group entities by values
(->> (group-by :v (seq (d/datoms db :aevt :ns/key)))
     (map (fn [[v datoms]]
            [v (map #(d/entity db (:e %)) datoms)]))
     (into {}))
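To see the shape this produces without a live connection, here is the same grouping over plain maps standing in for datoms (`:e`/`:v` mirror the `d/datoms` tuple fields; the data is made up):

```clojure
;; Plain-map stand-ins for datoms, so the grouping can be tried in any REPL.
(def fake-datoms [{:e 1 :v "red"} {:e 2 :v "blue"} {:e 3 :v "red"}])

(->> (group-by :v fake-datoms)
     (map (fn [[v ds]] [v (mapv :e ds)]))
     (into {}))
;; => {"red" [1 3], "blue" [2]}
```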
#2017-10-1817:48uwohehe. I was wondering if this was something that is expressible in datalog. I had thought of just dropping into sequence fns#2017-10-1817:53robert-stuttaforddatalog = bound values. you said not to bind it. [:find ?e ?v :where [?e :attr ?v]] otherwise 😛 group-by ?v ofc#2017-10-1817:55uwo@robert-stuttaford 😄 thanks#2017-10-1818:39uwoso is there anything wrong with this kind of query
[:find ?t1 ?t2 ?y
 :where
 [?e1 :movie/title ?t1]
 [?e1 :movie/year ?y]
 [?e2 :movie/year ?y]
 [?e2 :movie/title ?t2]
 [(not= ?e1 ?e2)]]
it doesn’t smell right to me, but I can’t say why. totally legit?#2017-10-1818:42favilanot totally getting the point of the query? you want every possible pair of titles in a year?#2017-10-1818:42favilabut yes, the not= (or something like it) is needed to avoid the self-join#2017-10-1818:43favilayou can use != (native operator, not clojure function) and maybe the query engine can use that info#2017-10-1818:43favilaalso I would reorder the clauses#2017-10-1818:43favilaput the years first, then the not, then the titles#2017-10-1818:49uwobasically my colleagues are trying to express a query like “find all entities that have the same value for attribute X and different values for attribute Y” and are writing queries like the above#2017-10-1818:51manutter51And I’m the colleague 🙂 So imagine you have a bunch of Address entities with city, state, and zip (etc), and you want to query for “Show me all cities with more than one zip code,” how would you query for that?#2017-10-1818:55favila(->> (d/q '[:find (count-distinct ?zip) ?c
           :where
           [?addr :city ?c]
           [?addr :zip ?zip]]
          db)
(remove (fn [[zipcount]] (== zipcount 1))))#2017-10-1819:00favilaPure datalog without aggregation is the approach you have been doing. It looks like a self-join#2017-10-1819:00favila(d/q '[:find ?c
      :where
      [?a1 :city ?c]
      [?a2 :city ?c]
      [(!= ?a1 ?a2)]
      [?a1 :zip ?zip1]
      [?a2 :zip ?zip2]
      [(!= ?zip1 ?zip2)]]
db)#2017-10-1819:00favilanot sure which would be faster. Depends on how smart the query optimizer is about short-circuiting vs how fast it is simply to aggregate everything#2017-10-1819:18manutter51Cool, thanks for the examples, I’m a longtime SQL coder just getting past the “Learn Datalog Today” stage so this is very helpful.#2017-10-1819:33manutter51@U09R86PA4 My actual query is different from the simple example I gave, so I had to modify your queries to fit. The one with aggregation finished in about 1 sec, and the “self-join” I killed after about 8 minutes of running at 700% CPU.#2017-10-1819:43favilawell that answers that!#2017-10-1818:51favila@uwo there's no check that t1 and t2 differ#2017-10-1818:51favilatwo different movies (?e1 ?e2) could be in same year and have same title#2017-10-1818:53favilayou could use aggregation @manutter51#2017-10-1818:53laujensenI have a bunch of data which is time-stamped. I want to count every occurance of a data-point, grouped by the day/month of the timestamp. Whats the datalog way to go about that?#2017-10-1818:54uwosorry, yeah i should have read the example I pasted before hand. my bad#2017-10-1819:24marshall@laujensen http://docs.datomic.com/query.html#with#2017-10-1819:53laujensen@marshall, thanks, but how do I got about disregarding all information except date and month ?#2017-10-1819:56marshallah. they’re separate attributes? you may need to do a custom aggregation#2017-10-1820:21laujensenRight. Then when I query [?x ?y] I only get one result and it doesnt allow [?x ?y] ..., or [?x ?y ...]. How do I get a list of filtered results ?#2017-10-1820:40hmaurer@marshall hi! I have a quick question on Datomic Cloud: will the pricing include a license to use? and around how much it will cost?#2017-10-1820:49marshall@laujensen I'll have to try a couple things. I might do the aggregations in app code and/or use nested or multiple queries depending on the data size. 
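For laujensen's day/month counting question above, the app-code aggregation marshall mentions could be sketched as below; it assumes query results shaped like `[[eid inst] ...]` where `inst` is a `java.util.Date` (the result shape and UTC bucketing are assumptions, not from the thread):

```clojure
;; Sketch: bucket [eid java.util.Date] query results by [year month]
;; and count occurrences; add the day for a daily breakdown.
(defn counts-by-month [rows]
  (->> rows
       (map (fn [[_e ^java.util.Date inst]]
              (let [d (.. inst toInstant (atZone java.time.ZoneOffset/UTC) toLocalDate)]
                [(.getYear d) (.getMonthValue d)])))
       frequencies))
```

`frequencies` does the counting; the same function works on `[?e ?inst]` tuples whether `?inst` is a domain timestamp attribute or `:db/txInstant`.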
I'll try to get back to you tomorrow #2017-10-1820:51marshall@hmaurer the license is included via purchasing through AWS marketplace. Solo topology will be around $1 a day. That's cost of the AWS infrastructure and the cost for Datomic#2017-10-1820:51marshallProduction topology will be more#2017-10-1820:57hmaurer@marshall that sounds amazing for hobbyists#2017-10-1820:59marshallThat's definitely the hope :)#2017-10-1913:10conanThat pricing structure is really helpful, it removes the latter half of the "RDS just works and is cheaper" argument I often hear#2017-10-1913:55wistb@conan curious to know... what is RDS#2017-10-1913:56conanAmazon's Relational Database Service, in which you can run Postgres and MySQL and stuff#2017-10-1913:56conanIt is significantly easier to set up than Datomic, because it's just the underlying store#2017-10-1913:56conanThe new CloudFormation stack for Datomic Cloud (hopefully) makes it much easier#2017-10-1913:57conanI've regularly heard people saying "I'd love to use Datomic, but I don't have time for all that ops work"#2017-10-1913:57conanThey tend to also say "And it costs too much", but at a dollar a day that argument no longer holds#2017-10-1913:58wistb@conan oh ok.. thank you for the info#2017-10-1915:14mpeneta dollar a day for dev stack#2017-10-1915:14mpenetstill waiting on the prod topology pricing#2017-10-1916:36chris_johnsonApparently I’m in a really narrow band of users, according to what was said at the conj about where Cognitect sees Datomic Cloud going, but I am already doing (some of) “all that ops work” and my general response to “we don’t know what the prod topology pricing will be” is give it to meeeeee#2017-10-1916:37chris_johnsonMigrating from a by-hand production Datomic footprint to the new Marketplace offering is the closest I think we will ever get to “Datomic RDS” (assuming Amazon doesn’t buy Cognitect) and I want it as soon as I can get it. 
I’m 100% sure that the price will be a wash relative to keeping my own infrastructure up, monitored, and maintained#2017-10-1917:17robert-stuttaford+1 to that, @chris_johnson - we already have all our infra figured out and stable, but i’m pretty sure that Datomic Cloud is going to give us better performance for less money. the only issue we have is we’ve written a ton of code using Datomic Peer 🙈#2017-10-1918:49hmaurerDatomic Cloud won’t be compatible with the peer library?#2017-10-1919:17andyparsonsUnfortunately the peer is not part of Cloud. I asked Stu specifically about this. They may add it at some point later.#2017-10-1917:38mpenetI'd really love to use datomic cloud, really, so I hope pricing will be flexible enough#2017-10-1917:39mpenetWait and see#2017-10-1917:40mpenet(our stuff doesn't use datomic atm, but I wish some of it could)#2017-10-1919:29danielcomptonI'm not sure if I missed this in the talk, but does the Datomic dev stack include a license, or is that extra?#2017-10-1919:55marshall@danielcompton Datomic Cloud is usage-based pricing. The approx $1 a day pricing is for both the infrastructure cost and the Datomic cost. No additional/separate license required.#2017-10-1919:56danielcomptonGreat, same for Prod?#2017-10-1919:58marshallyep. other than the $1 per day 🙂#2017-10-1920:01jeff.terrellOh, I must have missed that from the talk, very cool!#2017-10-1920:01jeff.terrellI was interested in Datomic Cloud before, but now I'm even more interested. :+1:#2017-10-2001:23steveb8n@marshall I've got quite a few transactor fns but I want to use datomic cloud. Stu's talk mentioned that transactor fns are not supported. Is there any info on how to migrate these in prep for cloud?#2017-10-2013:40marshall@steveb8n TBD - we’ll have information about a lot of that once we release#2017-10-2013:41augustlintriguing 🙂#2017-10-2013:48qqqCurrently, what is the easiest way to run Datomic on AWS ?#2017-10-2013:48qqqAlso, does Datomic use AuroraDB or DynamoDB ?
Things I've read seem to suggest 1. Datomic uses Dynamo and 2. Aurora is cheaper for most use cases than Dynamo.#2017-10-2013:53augustldatomic can write to many different backends, including DynamoDB: http://docs.datomic.com/storage.html#2017-10-2013:55marshall@qqq the quickest path to running on AWS currently would be http://docs.datomic.com/aws.html#2017-10-2013:55marshallusing DynamoDb#2017-10-2018:34laujensenAnything to be mindful of when upgrading datomic versions?#2017-10-2101:27sherbondy@qqq it seems like Aurora could work too, since Datomic also works with JDBC compatible SQL databases as a storage backend. Aurora even has a Postgres-compatible version in beta, which should work with the driver that ships with Datomic.#2017-10-2101:29sherbondyThat said, my understanding is Datomic essentially treats all storage backends as block key value stores, so dynamo seems somewhat better aligned from a design standpoint, albeit more expensive. Let me know if you wind up trying with Aurora, curious to hear how it goes & to learn about performance characteristics.#2017-10-2101:33sherbondyDynamoDB’s free tier sounds very generous though if you are targeting a smaller or test application: https://aws.amazon.com/dynamodb/pricing/#2017-10-2101:49malcolm.edgarThe AWS offering is still a template so I think there are opportunities to customise this to include peers.#2017-10-2212:50lmergendo i understand it correctly that datomics internal ids should typically not be used anywhere else in the application ?
say that i have a bunch of objects which i want to uniquely identify, what's preventing me from using that id in other places in my application ?#2017-10-2212:52lmergen(consider the case where i want to reuse datomic's id for a user id, for example)#2017-10-2212:55lmergenwhat i'm planning on doing right now is to simply associate each user with a uuid (generated using squuid), and indexing that, but i'm not sure whether this is the best practice ?#2017-10-2213:58potetm@lmergen I would highly recommend generating squuids for any entity you need to reference externally (e.g. that users interact with, that you have APIs for, etc). The upside is 1. you now have a globally unique id for an entity, which is good conceptually and 2. it enables certain things like decanting and sharding. The downside is a small storage and indexing cost.#2017-10-2213:58lmergenyep, i figured as much#2017-10-2213:58potetmThis article has a decent rundown on it: https://tomharrisonjr.com/uuid-or-guid-as-primary-keys-be-careful-7b2aa3dcb439#2017-10-2213:59potetmThough datomic thankfully does the internal integer id itself.#2017-10-2214:01lmergenyeah, i'm not against uuids, on the contrary#2017-10-2214:01lmergenit's just that i was a bit uncertain about the intent here#2017-10-2214:09mpenetmake it app independent for extra points: ex URNs. Might be overkill but depending on the context it s quite nice when it s a fit#2017-10-2313:28Empperihi, I have a very specific problem to solve which might sound very generic one in the beginning. 
In short, I’m trying to implement LIMIT functionality with Datomic but my use case really doesn’t allow me to use for example the datoms API and pull based approach really doesn’t work either since that does the limitation on attribute level#2017-10-2313:29Empperiso, I’m kinda like trying to solve the LIMIT problem just by using the q based queries#2017-10-2313:29Empperisince queries done with it are eager and it doesn’t support limit itself I’m kinda out of ideas#2017-10-2313:30Empperithe reason why I need to stick with q and really cannot use datoms API is that we are creating a SPARQL endpoint to our data and thus we really need to actually perform datalog queries#2017-10-2313:32Empperialso, I think I wouldn’t be able to solve this via datoms API either since when making actual queries I think the query engine uses all the indices available and with datoms I’m restricting myself to single index#2017-10-2313:32Empperiso even if I do get access to all datoms via datoms API, I cannot exactly do stuff like:#2017-10-2313:34Empperithat executes but will not provide the same results as straight query due to different indices (or well, that exact query just might, but in general you can’t rely on that)#2017-10-2313:34Empperiand besides, doing limit functionality on datoms level there wouldn’t give me correct results since then limiting is done too early#2017-10-2313:35Empperiso, am I just screwed or is there some hidden feature somewhere which would allow me to do LIMIT?#2017-10-2313:38dominicmWould sample work for you? http://docs.datomic.com/query.html#aggregates-returning-collections#2017-10-2313:39Empperilet me read that through with thought#2017-10-2313:40dominicmLIMIT somewhat implies order afaik, which I don't think Datomic has. Datomic uses sets. 
We've run into a desire for LIMIT before, and had to work around it using d/datoms (and we could, in our situation).#2017-10-2313:40dominicmPerhaps you could write a custom aggregate for this, but I suspect there isn't one for a reason.#2017-10-2313:40Empperiyeah, we are actually going to need ordering too but we already have a plan for that#2017-10-2313:41Empperithat is: add ordering information as metadata, do the ordering at post via code#2017-10-2313:41Empperinot optimal but nothing too bad really#2017-10-2313:41Empperilimit is much harder nut to crack and has much bigger performance implications#2017-10-2313:42dominicmI think your LIMIT solution, is coupled to your ordering situation. You can't LIMIT until you've ORDERed#2017-10-2313:43Empperiyeah, true#2017-10-2313:43Empperithis sucks 😕#2017-10-2313:43Empperilove datomic but this is a real problem#2017-10-2313:43EmpperiI can guess it would be there if it was straightforward to do#2017-10-2313:45Empperidid a quick test with sample, execution time is same with sample and if you just get all the data out#2017-10-2313:45Empperiso not really helping#2017-10-2313:46EmpperiI’m slowly starting to lean on that we need to add LIMIT to “not supported” list#2017-10-2313:46Empperiwhich would be a huge letdown#2017-10-2313:51augustlif you need to sort and limit a dataset, you'll need to have the whole thing in memory at some point, don't you#2017-10-2313:52augustlI can't think of any way for a RDBMS to sort and then limit, on the database server, without having the whole working set in memory#2017-10-2313:52augustl(and if I can't think of an algorithm to do that off the top of my head, then obviously it cannot exist, right?)#2017-10-2313:57EmpperiI guess doing it naively after making the query is our best bet here, at least we can reduce the amount of data sent over the wire that way#2017-10-2313:59Empperiand at least it would improve performance in ORDER BY + LIMIT scenario, since one needs to do the sorting only as long 
as the LIMIT has been reached#2017-10-2313:59Empperibut then again, I’d guess the performance benefit would be marginal at best#2017-10-2313:59Empperijust trying to find something positive here 😄#2017-10-2314:39hmaurer@niklas.collin so, I haven’t gotten into a case where I had to implement this yet, but this problem bothered me a bit and I thought of two solutions:#2017-10-2314:40hmaurer(1) if there is only one ordering you care about (e.g. a newsfeed, where you want to retrieve the top N entries), store the data in a format which allows for this specific query to be efficient. e.g. a linked list#2017-10-2314:41hmaurer(2) if (1) does not work (e.g. because you need to sort arbitrarily), build a materialised view of your database using the tx report queue#2017-10-2317:45Empperi@hmaurer both good ideas but unfortunately not usable in this case, thanks for your input and ideas though :+1: #2017-10-2320:59rrevoI was trying to look for information on AWS cross region fail-over support. https://github.com/awslabs/dynamodb-cross-region-library from amazon can clone from one dynamodb table to another across regions. And dynamodb streams (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html) guarantee that the records appear once and in sequence.
Does this satisfy the consistent copy options for HA in Datomic, or is something else missing? http://docs.datomic.com/ha.html#other-consistent-copy-options#2017-10-2321:06mike_ananevHi, Datomic Team! I would like to recommend a new host DB for Datomic – Tarantool DB.#2017-10-2321:06mike_ananev• Sub 1 ms latency
• 100K-300K QPS per one CPU core
• 100K updates per node
• Small number of nodes (money saver)
• Expiration
• Always up, no maintenance windows
• Optimized for heavy parallel workloads#2017-10-2321:06mike_ananev+ Full ACID DB#2017-10-2321:07mike_ananevYou can avoid HornetQ, cause Tarantool can work as queue too.#2017-10-2321:08mike_ananevTarantool is a cache + acid db in one solution. Proven in production many years on highload services: Badoo, Avito, http://Mail.ru#2017-10-2321:11mike_ananevIt has App server in DB, so you can write stored procedures in High Level Language#2017-10-2321:11mike_ananevhttps://tarantool.org/en/doc/1.7/intro.html#2017-10-2414:43Matt ButlerIs there any security/injection concerns when using the fulltext feature of datomic?#2017-10-2418:02souenzzobe careful with those characters
#{"\t" "\n" "\r" " " "!" "\"" "(" ")" "*" "+" "-" ":" "?" "[" "\\" "]" "^" "{" "}" "~"}#2017-10-2418:03souenzzoThey may cause ParseException '*' or '?' not allowed as first character in WildcardQuery com.datomic.lucene.queryParser.QueryParser.getWildcardQuery (QueryParser.java:982)#2017-10-2418:06souenzzoThere is some place in docs that says that #{"\t" "\n" "\r" " " "!" "\"" "(" ")" "*" "+" "-" ":" "?" "[" "\\" "]" "^" "{" "}" "~"} could break fulltext searchs? It's on my code.#2017-10-2418:15favila@mbutler The string given to the fulltext function is actually a lucene query syntax: https://lucene.apache.org/core/2_9_4/queryparsersyntax.html#2017-10-2418:15favilathere's no injection danger, but it is a minilanguage and may not match a user-facing expectation#2017-10-2418:16favilaI also don't know a reliable way of escaping characters in that syntax#2017-10-2418:16favilaah, this doc says just prefix with backslash#2017-10-2418:16favila(last section)#2017-10-2418:27Matt ButlerThanks, @souenzzo @favila I knew about the lucene query sytnax, I rely on it to handle searching of emails as the tokenizer seems to split on @ (e.g. I turn ").
Just wanted to check that the worst thing that can happen is a Parse Exception, rather than any security concern.
As I understand it's still not possible to change the settings of or use a different tokenizer, but do you know if it's possible to escape the input so that I can get the full string " into the index?#2017-10-2418:36favilaAll I can think of is to munge it into something that tokenizes the way you want#2017-10-2418:36favilahowever, what kind of query are you doing? Sounds like exact-match? In which case why use fulltext at all?#2017-10-2418:37Matt ButlerI was just using the exact match as the clearest example#2017-10-2418:38favilayou could make your datomic query try exact match (normal indexed field), and use that to boost scores#2017-10-2418:41Matt Butleryeah, I considered/was doing that at some point. Gets a bit messy since I allow a variable number of fields to constrain the search, and since I do that it's not a big deal that I have to treat email a bit oddly#2017-10-2418:41Matt ButlerWas just hoping that there was some easy answer to the tokenizer problem 🙂#2017-10-2418:42Matt ButlerAlso considered doing as you said and storing a "normalised" version of email, but it seemed like more tech debt than it was worth. If the current implementation proves too poor a UX I'll probably move to doing that.#2017-10-2418:42Matt ButlerThanks for the advice btw 🙂#2017-10-2418:45faviladon't forget about query rules to abstract some of this. e.g.: '[[(email-search [?email] ?e ?score)
  [?e :email-attr ?email]
  [(ground 2.0) ?score]]
 [(email-search [?email] ?e ?score)
  [(fulltext $ :email-attr ?email) [[?e ?v _ ?score]]]]]
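A hedged sketch of invoking a rule set like this one, deduping with a `max` aggregate so the exact-match branch (score 2.0) outranks fulltext hits; `email-rules`, `:email-attr`, and the search string are assumptions taken from the example:

```clojure
;; email-rules is the rule set quoted above; db is a database value.
(d/q '[:find ?e (max ?score)
       :in $ % ?email
       :where (email-search ?email ?e ?score)]
     db email-rules "user@example.com")
```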
#2017-10-2418:45favila(this is the "score-boosting" approach I was talking about)#2017-10-2418:45favilaYou can also compare ?v to the original search and infer something#2017-10-2418:51Matt ButlerTrying to model the behaviour of the query in this case. If you invoke this rule once, its a logical or right? So in the case the exact match returned it would bind that ?e and "exit early"?#2017-10-2418:56favilaIt would still try both, but you can aggregate :find (max ?score) ?e to dedup and make exact matches float higher#2017-10-2418:51Matt Butlerand not try to do the fulltext.#2017-10-2418:54dominicmhttps://lucene.apache.org/core/3_6_1/api/core/org/apache/lucene/queryParser/QueryParser.html#escape(java.lang.String) there's an escape function too#2017-10-2418:55dominicmI thought lucene was an implementation detail though. It would be nice if fulltext escaping was provided by datomic, so you didn't have to depend on this.#2017-10-2419:51souenzzoawesome @dominicm
(com.datomic.lucene.queryParser.QueryParser/escape "|&&|")
=> "\\|\\&\\&\\|"
Maybe datomic.api could wrap this function (d/fulltext-escape s)#2017-10-2420:05dominicm@souenzzo more or less what I was thinking, yup#2017-10-2420:06dominicmNo specification of the engine necessary, but a stable API.#2017-10-2423:38alexisvincentHow are folks achieving ordering on cardinality many attributes?#2017-10-2500:25csmwe’re using a separate attribute, that contains the ordering as an edn string (it’s a list of #uuids, since we give each entity a GUID primary key)#2017-10-2502:12potetm@alexisvincent The other choices I know of are: an index attr (array) or an attr that points to the next item (linked list).#2017-10-2505:10devnRE: lucene I really wish there was some tiny amount of support for custom tokenization, even if the use of it meant all performance guarantees were off.#2017-10-2509:41alexisvincent@csm Hm, I suppose thats an approach. But you loose queryability.#2017-10-2509:44alexisvincent@potetm I’m wondering if a generic datastructure lib for datomic would be useful or if they should be baked into each instance by hand#2017-10-2510:19Matt Butler@devn I agree, its a great feature (fulltext) thats almost there, so even its not exposing lucene directly, some form of control would make all the difference, at least in my case 🙂#2017-10-2513:18gerstreeI was wondering if anyone has a good strategy to prove a datomic backup.#2017-10-2513:19gerstreeWe run the backup job every hour, backing up to s3. When the backup finishes, we sync the s3 folder to something not AWS.#2017-10-2513:22gerstreeThere we restore using a local transactor with dev storage#2017-10-2513:23gerstreeWhile all the different steps work perfectly fine, I would love to be able to verify that we restore exactly what we backup.#2017-10-2513:26gerstreeAs far as I understand, running bin/datomic list-backups gives a list of t's that are based on folder names in 'backup-folder/roots'. Nothing more nothing less. 
Is that correct?#2017-10-2513:28gerstreeIdeally I am looking for a checksum for 't' at backup time, that I verify at restore time for that same 't'.#2017-10-2513:43robert-stuttaford@alexisvincent if you take the [[eavt] [eavt] ..] data model - how would you model arbitrary cardinality-many order with it? this is what Datomic has to somehow do for you. it turns out that either you have to model it with extra eavt’s yourself (index attr / linked list), or you have to affect the index’s own sort, which of course messes with the indexing algorithms. guess which one Rich picked 🙂#2017-10-2513:53alexisvincent@robert-stuttaford thanks for the answer, the choice makes sense. I’ve been thinking about this a bit and here’s what I’ve more or less arrived at:
1. Order shouldn’t be baked into the data itself (since we can have multiple orderings per list), but rather is a semantic structure on top. 2. Order isn’t only a performance booster, (i.e. give me everything, I’ll sort it myself), but also vital for expressive queries, (e.g. limiting query to first 5 items of an ordering). --- Maybe an approach would be to provide user defined indexes as named orderings that can be specified at query time?#2017-10-2513:54alexisvincent@robert-stuttaford Do you do this via datastructures embedded into datomic?#2017-10-2513:54robert-stuttafordyeah, that old chestnut … performant sort + pagination#2017-10-2513:55robert-stuttafordit’s an interesting problem, with no one correct answer#2017-10-2513:55augustlseems to me that any ordering mechanism other than "whatever it is ordered in when you walk the data" requires the whole dataset to be in memory for a sort first, no matter how you do it#2017-10-2513:56robert-stuttafordyep#2017-10-2513:56augustlso, you can get insertion order in datomic, at least. 
Right?#2017-10-2513:57alexisvincent@augustl Could also have a lazy index#2017-10-2513:58augustlthis is the only thing I found after a quick google https://www.postgresql.org/message-id/33721.67.116.52.35.1090035034.squirrel%40mail.redhotpenguin.com#2017-10-2513:58augustlis that a lazy index?#2017-10-2513:58augustlif so, Datomic kind of has that I suppose, since it merges the actual main datom tree periodically, not on every transaction#2017-10-2513:58alexisvincentSo for instance, given ordering based on popularity of photos, you’re more likely to view top photos, and so that would be hot in cache#2017-10-2513:59alexisvincentI meant on demand order resolution, only when you need it#2017-10-2513:59augustlas in you maintain a subset of popular photos and sort those only?#2017-10-2514:02robert-stuttafordyou’d have to maintain this recency/popularity index yourself - to add things when they become recent/popular, and to remove them when they stop being either#2017-10-2514:04alexisvincent@robert-stuttaford I’ve also run into this brain bug where I’m not so sure how to handle versioning. Say for instance you want to track file revisions, you could use datomics ‘as-of’, but… versioning is actually a ‘first class’ problem of the domain. Also, when you want to deal with data imports you might want to specify a realworld time not a datomic transaction time. How do people solve these problems in the datomic world?#2017-10-2514:05robert-stuttafordyou can annotate your entities with any datetime value, including transaction entities#2017-10-2514:05robert-stuttafordand then write explicit queries against those#2017-10-2514:07robert-stuttafordas-of and since are great for slicing and dicing what the transactor did. 
they are both performant implementations of d/filter, which you can make yourself#2017-10-2514:07robert-stuttafordafter having tried this, i’ve found that a straightforward datalog query got me there quicker 🙂#2017-10-2514:07augustlsome colleagues of mine has run into something similar. They want to use datomic's time model, for tracking the position of some GPS data. But the GPS data can be delayed. So they have a delay of the maximum expected real world delay, through a queue, for inserting the data into datomic#2017-10-2514:08augustl@robert-stuttaford using d/filter instead of as-of, interesting#2017-10-2514:08augustlmakes sense, as-of just does the same binary search on the sorted sets that all other operations do on the index, I suppose#2017-10-2514:09alexisvincent@augustl I actually mean, define a global ordering on all the photos, then when you make your query where clause you do something like this [id :user/photo ?photo]
[(order ?photo :date)]
The order index could then be constructed lazily. You would still need to scan all the data, but you wouldnt need to store it all in memory all the time.#2017-10-2514:13alexisvincent@robert-stuttaford In this case the time is actually needing to be set on the datom (arrow) which unfortunately aren’t first class entities in datomic (I think)#2017-10-2514:14alexisvincentthe best we have is groups of datoms as entities (transactions)#2017-10-2514:17alexisvincentSo for instance we could have different times we need to set for datoms in a transaction. For the moment I’m approaching this by adding a multi-relational arrow, implemented via an intermediary entity.
Anyway, don’t want to pull you away from your work 🙂#2017-10-2518:38Brendan van der EsAnyone know if there are any reusable specs for the datomic api? [ e.g. (s/def :db/id (s/or keyword? int? vector?)) ]#2017-10-2520:39hmaurer@augustl wouldn’t it make more sense to have two notions of time in the system, the time of events and the recording time of events (the later being the “datomic time”)#2017-10-2520:45augustlmaybe, I wasn't aware of using d/filter instead of d/as-of as @robert-stuttaford mentioned previously#2017-10-2600:46eoliphantI also have some versioning questions 😉 I’ve made the newbie mistake of conflating datomic’s ‘native’ sense of time/version with what my application actually needs. I read @val_waeselynck’s blog post about reifying versions and what have you. I think I need to do something simliar for a use case of mine. I’m storing dynamic form definitions and their data. I need to link an instance of form data to the def of Form A - Version 3, so it seems like I need to have actual ‘current’ entities for each version of the form, copying/updating for each rev. My concern is that this seems rather inefficient, anyone done something similar ?#2017-10-2602:09danielcompton@eoliphant not answering your exact question, but the recommendation I've been given from Cognitect employees is "If it matters to your business domain, then model it as an entity, and model it explicitly"#2017-10-2602:12mac01021Is Datomic's query language really datalog?
Here's a tiny, canonical example of datalog from wikipedia (https://en.wikipedia.org/wiki/Datalog#Example):
% Store some data
parent(bill, mary).
parent(mary, john).
% Define "ancestor" in terms of "parent".
ancestor(X,Y) :- parent(X,Y).
ancestor(X,Y) :- parent(X,Z),ancestor(Z,Y).
% A query to find all of bill's ancestors
?- ancestor(bill,X).
Does Datomic's query language support anything like this inductive definition of ancestry?#2017-10-2604:46EmpperiDatomic supports recursive queries and it is logic-based, so yes. Your example as it is cannot be defined directly via EDN Datalog, but its functionality can be duplicated.#2017-10-2602:18eoliphantthanks @danielcompton yeah I’m on board with the modeling aspect, I’m just trying to work out the specifics functionally.#2017-10-2604:50Brendan van der Es@mac01021 Datomic's datalog supports recursion in rules. This is a handy recursive rule to get the extent of an entity: https://gist.github.com/stuarthalloway/2002582. Here is a thread that talks about that exact example (the issue was with the datomic version): https://groups.google.com/forum/#!topic/datomic/sD2m810kfrQ#2017-10-2613:22daveeelHi All, I set up a boot-based project with Datomic Pro (0.9.5561.62) on dynamoDB.
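For reference, the Prolog example above maps fairly directly onto Datomic's recursive rules — a minimal sketch, assuming hypothetical :person/parent (ref) and :person/name (unique identity) attributes:

```clojure
;; The two Prolog clauses for ancestor/2, as EDN datalog rules.
(def ancestor-rules
  '[[(ancestor ?x ?y)
     [?x :person/parent ?y]]
    [(ancestor ?x ?y)
     [?x :person/parent ?z]
     (ancestor ?z ?y)]])

;; The query ?- ancestor(bill, X):
(d/q '[:find ?name
       :in $ % ?bill
       :where
       (ancestor ?bill ?x)
       [?x :person/name ?name]]
     db ancestor-rules [:person/name "bill"])
```

The recursion terminates because rules can only derive facts over the finite set of entities in the database.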
Connection to the DB is fine from a local transactor and console.
However when I try to start my project, I am stuck with the following error:
Exception while starting the system
java.lang.Thread.run Thread.java: 745
java.util.concurrent.ThreadPoolExecutor$Worker.run ThreadPoolExecutor.java: 617
java.util.concurrent.ThreadPoolExecutor.runWorker ThreadPoolExecutor.java: 1142
java.util.concurrent.FutureTask.run FutureTask.java: 266
...
clojure.core/binding-conveyor-fn/fn core.clj: 1938
datomic.kv-cluster.KVCluster/fn kv_cluster.clj: 222
datomic.kv-cluster.KVCluster/fn/fn kv_cluster.clj: 224
...
clojure.core/partial/fn core.clj: 2534
clojure.core/apply core.clj: 652
...
datomic.kv-cluster/retry-fn kv_cluster.clj: 82
datomic.kv-cluster/retry-fn/fn kv_cluster.clj: 82
datomic.kv-cluster.KVCluster/fn/fn/fn kv_cluster.clj: 226
datomic.kv-dynamo.KVDynamo/get kv_dynamo.clj: 44
datomic.ddb/get-item ddb.clj: 94
datomic.ddb/get-item* ddb.clj: 62
datomic.datafy/fn/G datafy.clj: 136
datomic.ddb/fn ddb.clj: 47
java.lang.NoClassDefFoundError: com/amazonaws/AmazonWebServiceResult
java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: com/amazonaws/AmazonWebServiceResult
The project works fine with in-memory datomic.
Googled for quite some time and no clue. Any pointer?#2017-10-2702:41daveeelHi @marshall
[com.amazonaws/aws-java-sdk-dynamodb "1.11.6"]
already there. I have another lein-based project that could connect to the same transactor.#2017-10-2704:55daveeelTurns out if I downgrade datomic from:
[com.datomic/datomic-pro "0.9.5561.62"]
(or .56)
to:
[com.datomic/datomic-pro "0.9.5561"]
The issue no longer exists. And I can do what’s expected from the project repl
From what I googled, it feels like later datomic versions have specific dependencies on the aws-java-sdk version?#2017-10-2613:28marshall@daveeel you need to add the AWS SDK to your dependencies#2017-10-2613:29marshallhttp://docs.datomic.com/storage.html#dynamodb-aws-peer-dependency#2017-10-2617:53eoliphantHi, I’m getting a weird error trying to transact in a db/fn in my REPL. I thought it might be my function, but getting the same error when I try the code from the datomic docs.
(d/transact conn [{:db/ident :add-doc
                   :db/fn #db/fn {:lang "java"
                                  :params [db e doc]
                                  :code "return list(list(\":db/add\", e, \":db/doc\", doc));"}}])
CompilerException java.lang.RuntimeException: Can't embed object in code, maybe print-dup not defined:
anyone seen this before?#2017-10-2618:11marshall@eoliphant try this:
@(d/transact conn [{:db/ident :add-doc
                    :db/fn (d/function
                             {:lang "java"
                              :params '[db e doc]
                              :code "return list(list(\":db/add\", e, \":db/doc\", doc));"})}])
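Once such a transaction succeeds, the installed function can be invoked by its ident inside another transaction — a sketch with a made-up entity id:

```clojure
;; Runs :add-doc inside the transactor; 17592186045418 is a hypothetical eid.
@(d/transact conn [[:add-doc 17592186045418 "a new doc string"]])
```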
#2017-10-2618:11marshallas indicated here: https://github.com/Datomic/day-of-datomic/blob/59186b4b39c124e2d9d0e79243f3e373b0a0b9d9/samples/literals_vs_code.clj#L19#2017-10-2618:51eoliphantUgh thanks @marshall I read this (https://support.cognitect.com/hc/en-us/articles/215581438-When-to-Use-Data-Literals) forever ago as I was just learning datomic/clojure and kind of forgot about it lol#2017-10-2618:59marshallnp#2017-10-2620:26uwoI have an entity in my system that throws an error I can’t explain when I (d/pull (db) '[*] 17592188295819)
Exception Key not found: 59f20f54-00df-4e30-a0dc-267dd0bdd9fc datomic.common/getx (common.clj:191)#2017-10-2620:29uwoI can pull specific attributes from it without error#2017-10-2620:29favilaI'm suspecting db corruption, missing block of data#2017-10-2620:29faviladoes d/datoms work on this entity?#2017-10-2620:31uwolike this? (first (d/datoms (db) :eavt 17592188295819)) => #datom[17592188295819 416 17592196436651 13194149925399 true]#2017-10-2620:32uwoany obvious practice that will lead to corruption?#2017-10-2620:38uwo@val_waeselynck oh weird. I don’t know why it would be, but the issue i describe above only happens when I’m running off of a ‘forked’ (datomock) connection#2017-10-2620:40favilanot first but all#2017-10-2620:41favilaor, try to visit all the datoms (they are lazy-loaded). Theory was a block was missing. maybe datomock is messing it up#2017-10-2620:42favilathe exception superficially looks like attempt to retrieve a fressian block from the underlying kv store failed (keys look like uuids)#2017-10-2620:43favilanot sure how datomock would manage to do that#2017-10-2620:43favilaso I may be wrong#2017-10-2620:43uwoyuck. nvm. I’ve tested again with a forked connection. that’s not it. sorry for false alarm @val_waeselynck#2017-10-2620:43uwoyeah, it wouldn’t make sense for reads to be affected by datomock#2017-10-2620:44favilamy mention of datoms is an attempt to trigger the error via something other than a pull#2017-10-2620:44favilawhat is your storage?#2017-10-2620:45uwosqlserver#2017-10-2620:45favilais that key in the datomic_kvs table?#2017-10-2620:45uwoI haven’t checked#2017-10-2620:45uwothis error was happening when I connected my dev peer, and it appears to be intermittent#2017-10-2620:46uwobecause it didn’t survive a restart#2017-10-2620:46favilahm, bad sql connection?#2017-10-2620:46uwoperhaps? 
other reads were working fine#2017-10-2620:46marshallIs it possible you ran a GCstorage?#2017-10-2620:47favilaah good call#2017-10-2620:47uwoactually, come to think of it yes#2017-10-2620:47marshallIf you were holding onto an old value of the db and GC storage was run with a more recent time#2017-10-2620:48uwomakes sense. thanks y’all!#2017-10-2620:56uwo@marshall some of the code that was failing, an api endpoint, appears to request a new db on each query. Would that mean that GCstorage wasn’t the culprit?#2017-10-2621:59hmaurer@marshall is the datomic client API sufficiently documented to build a stable PHP client?#2017-10-2622:17favilaAFAIK the actual on-the-wire stuff is not documented. Only the client code API interface.#2017-10-2622:18favilaYou would have to reverse-engineer the existing impl to build your own client#2017-10-2622:18favilathe rest server is the only option#2017-10-2622:28hmaurer@U09R86PA4 it is deprecated, isn’t it?#2017-10-2700:31favilaYes#2017-10-2712:59marshallNot yet. When we ship Datomic Cloud we will also document the wire protocol#2017-10-2709:42EmpperiI was talking about LIMIT functionality and lack of it in Datomic here few days ago, I want to continue a bit on the subject and say that after thinking in detail about this I can totally get the basic reason why it is not supported. However I think it should be relatively easy to support IF the query API would return a lazy sequence instead of an eager one. 
So, my question goes to that department now: why is it exactly that the query API results are actually eager and not lazy?#2017-10-2709:43EmpperiI’m guessing it has something to do with the different indices within Datomic and the query planner, and combining the datasets from these indices reliably, but this is mostly just guessing and I would love to hear some ideas#2017-10-2709:45EmpperiI would guess that the where clauses are handled via reducers within Datomic (that would just make sense) and based on that assumption creating a lazy sequence shouldn’t be too much of a problem#2017-10-2709:46Empperibut I’m pretty certain I’m missing something here, otherwise we would be receiving lazy sequences already. I want to understand the internals of Datomic a bit better so that I can circumvent its limitations and use its advantages more efficiently#2017-10-2709:53augustlI would imagine it's something that could be included in the query engine if you're OK with limiting when a certain condition is met and the ordering is "whatever the order the query engine iterates the indices in"#2017-10-2709:53augustlit is fundamentally walking a lazy tree of chunks, after all#2017-10-2709:53EmpperiI actually think it is not as long as the resultset is eager#2017-10-2709:54Empperibecause the where clauses are applied one by one#2017-10-2709:54Empperiso in order to do the LIMIT you need to apply all of them#2017-10-2709:55Empperiand at that point you already have all the data without the LIMIT processed and since it is in memory due to peer cache then what’s the point in returning just a subset? 
Just return it all and let the client to do the limit functionality#2017-10-2709:55Empperibut, if this processing would be lazy then you could do this depth first traversal of the where clauses instead of breadth first (which I guess is currently happening)#2017-10-2709:56Empperithen one could just simply do (take limit query-results) at Datomic level#2017-10-2709:56Empperiand it would work exactly the way people would want it to work#2017-10-2709:56Empperibut, actually just got another idea why it is like this: to optimize the peer cache population#2017-10-2709:56Empperiright, it must be actually because of that#2017-10-2709:57Empperibecause you actually want to do the breadth-first: that way after doing the first where clause you know the absolute worst case of data you’re going to need in order to do rest of the query#2017-10-2709:57Empperithen you can retrieve that from the storage backend with one sweep and do the rest of the stuff in memory#2017-10-2719:04jfntnWe’re currently running an index creation migration and would like to get a sense of how long it will take to finish but I’m not sure how to check on that?#2017-10-2719:38currentoorIn the docs it says the client library can support non-JVM languages. Are there any examples of that? Can we for example use datomic’s new client library from a ruby process?
http://docs.datomic.com/architecture.html#clients#2017-10-2809:08lmergenso i'm relatively new to Datomic, and trying to figure out what makes the most sense.
if I have an entity with a unique identity (a uuid) that spans multiple domains, would it be better to use two different datomic identities (created using d/tempid), or is it safe to just use one single identity that spans multiple domain barriers?
for the sake of example, let's consider the case where i have one customer id that's used by both the accounting department and infrastructure department. so i have :accounting/name, :accounting/customer-id and :infrastructure/foobar values, that are all uniquely identified by a customer id. how would you design this ?#2017-10-2809:12lmergenalso, i definitely need to be able to pull the :accounting/* information when doing :infrastructure queries, but not vice versa. i guess this influences design.#2017-10-2811:57the-kennylmergen: My guess is that a single entity with attributes from both domains is the correct™ thing to use.#2017-10-2811:57the-kennyBut you have to make sure there aren't any conflicts possible, now and in the future#2017-10-2811:58the-kennyAs for pulling stuff: As long as there's a reference between two entities you can just use the entity api and walk the tree in any direction#2017-10-2815:09chris_johnsonJust to confirm what I expect to be true (or maybe learn something that makes my morning much better hehe) - if I want to run a transactor in AWS and I want a custom for something other than dumping logs to S3, is it the case that I have to roll my own CF template rather than going the bin/create-cf-template route and using a Cognitect-provided AMI?#2017-10-2912:51daveliepmann@lmergen I'd have to see more thorough examples about what exactly you mean by "domain" and what facts you plan to store, but I would default to a single entity (e.g. pick its own namespace, neither accounting nor infrastructure) and make the distinction at the application level when querying. So, following your example, the schema might be customer/uuid, customer/name, customer/foobar, customer/accounting-id. Accounting queries would use pull to specify the attributes they want. 
Infrastructure queries would use pull or entity to get all customer attributes.#2017-10-2913:16lmergen@daveliepmann i think you're right, that makes sense#2017-10-2913:17lmergeni knew i was looking at this from the wrong perspective, but couldn't pinpoint what it was#2017-10-2913:17lmergeni think the abstractions i'm trying to build are wrong, which is why it didn't really map that well to datomic entities#2017-10-2913:17lmergeni should stick to a single entitiy in a single namespace, and make both domains query from that indeed#2017-10-2914:26bmaddyDoes anyone know when Datomic Cloud is scheduled to come out?#2017-10-2915:36joelsanchezhi, I'm having a little problem with enums:
{:db/ident :user.role/administrator}
{:db/ident :user.role/user}
{:db/ident :user/roles
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many}
a user is both admin and user; I want to change it to just user, but it doesn't work:
(db/transact [{:schema/type :schema.type/user,
:user/name "A test",
:user/email "
edit: solved by https://groups.google.com/forum/#!topic/datomic/sm0a8uWAHAY
(db/transact [[:db/retract 17592186045864 :user/roles 17592186045444]])#2017-10-3000:53steveb8nDocs say that the client API doesn't support partitioning, but I want to use this feature. Does anyone know if there's a way to do this without the peer API?#2017-10-3012:52marshall@steveb8n you can set the default partition in the transactor properties file: http://docs.datomic.com/transactions.html#default-partition#2017-10-3012:52marshallNote that this behavior won’t be supported in Datomic Cloud#2017-10-3012:52mishaGreetings! Is there anything to read about encryption and datomic?
or should I just put scrambled values in, and that's it?#2017-10-3012:52marshall@misha What storage?#2017-10-3012:53marshall@bmaddy We’re shooting for this quarter. We’re working out the final arrangements with AWS Marketplace, but unfortunately I don’t have a specific timeline#2017-10-3012:53mishano idea yet. Just figuring out what my options are for either entire db encryption, or per-user, or partial data per user ones#2017-10-3012:54marshall@misha several storages provide their own transparent encryption (i.e. postgres and other sql options for sure)#2017-10-3012:54marshall@misha Datomic Cloud will have all data encrypted by default#2017-10-3012:55marshallif you need to use something like Dynamo, which doesn’t have transparent encryption, then yes, you’ll need to handle it in your application#2017-10-3012:55marshallIf you’re running in your own datacenter, you can also do something like OS-level whole-disk encryption#2017-10-3012:55marshallbut of course if you’re using a storage service that’s less feasible#2017-10-3012:59mishaI'll read about postgres encryption options, thank you @marshall#2017-10-3013:01mishabut will os-level encryption protect only "offline" data? I mean, as soon as data gets into application memory – anyone with repl access to process will essentially have anything in plain text#2017-10-3013:03mishaas a service provider, I don't want to know the contents of the data too. Structure – yes, actual values – no. Does this limit my options to "store encrypted strings, or even encrypted edn entities, where applicable"?#2017-10-3013:58bmaddyNice. I'm looking forward to checking it out. 
Thanks @marshall!#2017-10-3013:59marshall@misha I think yes, generally if you don’t want any part of your application to interpret the data until it hits the ‘edge’ you’ll need to handle that encryption yourself#2017-10-3021:48jjfinehey, this is my first attempt using the since filter an i'm getting the following error:#2017-10-3021:48jjfinemessage: processing clause: [$since ?n :alert/acked _], message: Cannot resolve key: $since#2017-10-3021:49favilaor is like rules, each subclause clause must have the same db#2017-10-3021:49favilatry ($since or ...)#2017-10-3021:49jjfineahh cool thanks#2017-10-3021:49favilaand take the explicit db out of each subclause#2017-10-3021:50jjfineworked! thanks#2017-10-3021:50favila@jjfine http://docs.datomic.com/query.html#how-or-clauses-work#2017-10-3021:51favila> As with rules, src-vars are not currently supported within the clauses of or, but are supported on the or clause as a whole at top level.#2017-10-3021:51favilasyntax is (src-var? 'or' (clause | and-clause)+)#2017-10-3021:51favilathat src-var? bit is referring to the db#2017-10-3021:52favilathere's no explicit example so it takes a bit to connect it together#2017-10-3021:53jjfinegotcha#2017-10-3022:11steveb8n@marshall thanks. so as I move towards clients and cloud, I should just pull out all partitioning (storage locality) code. Is there a replacement for storage locality or is this just because it doesn’t make that much difference? BTW I’m happy to simplify by ripping it out but wondering if performance will suffer#2017-10-3113:23marshall@steveb8n Cloud has a built-in partitioning mechanism. If you’re using clients today you’re just using a default single partition anyway (http://docs.datomic.com/transactions.html#default-partition)#2017-10-3120:04rrevoI’m trying to configure datomic with sql storage. I see the following line in the logs
Starting datomic:sql://<DB-NAME>?jdbc:mysql:// .... #2017-10-3120:04rrevohow can I set the DB-NAME?#2017-10-3120:10marshall@rrevo That indicates that you’ve started the transactor against that SQL storage. You’ll set the DB name in your connection URI when you connect with a peer to create the database#2017-10-3120:10marshall*to create or to connect to an existing one#2017-10-3120:10marshallfor example: http://docs.datomic.com/dev-setup.html#create-db
using dev storage ^#2017-10-3120:15rrevo@marshall thanks. i wanted to restore to the transactor that i just started using the sql storage. So I was not sure what to-db-uri to provide.#2017-10-3120:15marshallah, yeah you can also use a URI with a db name for the restore job#2017-10-3120:16marshallso when you run restore from another terminal window you’ll use the uri printed out from the transactor with the <DB-NAME> replaced with the name you want to restore into#2017-10-3120:24rrevothanks.. that worked#2017-10-3120:43marshall👍#2017-10-3121:26hmaurer@marshall are you going to ship cloud with clients for other languages than clojure/java? or will you just open the transport spec and let the community build clients?#2017-10-3121:31souenzzojava.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: UKBM7UDV8S7MVJ8ID6V49612JNVV4KQNSO5AEMVJF66Q9ASUAAJG)
Trying to restore a 20Mb backup to a DynamoDB on amazon.#2017-11-0103:03steveb8n@marshall that's good to know, thanks#2017-11-0116:42dustingetzI know Cloud’s storage impl permits low cost clones someday. What is it about Pro’s storage impl that prevents this? (`:restore/collision The database already exists under the name 'hyperblog'`)#2017-11-0117:02marshall@hmaurer We will document the wire protocol#2017-11-0117:04marshall@souenzzo Restore will put segments in DDB as fast as it can. You can increase your dynamo throughput or you can restart the restore process and allow it to finish. it will skip the segments it’s already copied#2017-11-0117:06souenzzoNot sure if S3 is slower, but restore from S3 works, restore from disk fail. My "default" workflow uses S3...#2017-11-0117:07marshall@souenzzo If you’re running the restore from a local process, I’m not surprised the S3 one is slower - it has to fetch segments from S3 then put them in DDB as opposed to put them directly from local disk into DDB#2017-11-0117:27dustingetzd/datoms, filtered to a specific partition - Anyone know the best way to do this?#2017-11-0117:28dustingetzOr even a way to discover what the starting entity number is for a given partition?#2017-11-0117:29marshall@dustingetz http://docs.datomic.com/clojure/index.html#datomic.api/entid-at#2017-11-0117:29marshallyou can use entid-at#2017-11-0117:35dustingetzthank you#2017-11-0214:27djjolicoeurhas anyone seen intermittent SSL errors from dynamoDB on their peers? we seem to be getting intermittent handshake errors. some googling suggests this may be due to TLS versions on amazons side not always being consistent or cypher suite mismatches. has anyone had a similar experience?#2017-11-0220:35jumblemuddleIs it possible to have two way cardinality/many refs? (e.g. A has a list of Bs, which in turn always point back to A)#2017-11-0220:36jumblemuddleOr would I need to manually keep that up to date? 
(Always add B to A's list when creating B)#2017-11-0221:06gonewest818Is there a release date for Datomic Cloud on AWS, more specific than “now-ish?”#2017-11-0221:43djjolicoeur@jumblemuddle are you asking about A having a reference to B and being able to traverse that relation going from B to A? if so, all refs should be able to be traversed via the reverse ref from an entity, e.g. if you had attribute :A/B which is a ref to some B, then you could access that relation from B as (:A/_B b-entity) with the underscore before the attribute name#2017-11-0314:02jumblemuddle@djjolicoeur Hmm, ok. So does it make sense to give A a list of B refs (cardinality/many), or give each B a ref to A?#2017-11-0314:06djjolicoeur@jumblemuddle it depends on how you are going to use it. I personally prefer a :cardinality/one from each B to A in most cases, given that I can get that many relation via the reverse reference on A. I just find it to be cleaner to maintain from B. My personal preference is that, if the ref is not a component entity of A, then the ref goes from B to A. I'm sure some folks may disagree with that, though.#2017-11-0314:07jumblemuddleOk, that's helpful. Thank you#2017-11-0314:08jumblemuddle@djjolicoeur Component entity meaning all Bs are deleted when A is deleted?#2017-11-0314:08jumblemuddleOnly if it's marked as such, of course.#2017-11-0314:09djjolicoeuryes, that would be a consequence of having a ref from A to B where :db/isComponent is true.#2017-11-0314:11jumblemuddleOk, and that could only be done with a one to many from A to Bs. Makes sense. 👍#2017-11-0323:59cjmurphyI have an attribute and would like to query for a specific value or no value set (nil?). Is that possible? Currently I'm using a special attribute to indicate 'no value set', but want to get rid of it to reduce confusion.#2017-11-0402:28favilaYou probably want missing?#2017-11-0402:30favilaNothing wrong with a negative assertion though.
It can be important in some domains to distinguish between “db doesn’t know” and “db knows it’s not this”#2017-11-0402:34cjmurphyThanks. That s/do the trick. I see permanent? as a calculated field indicating the rule is for all time. Yes to the negative assertion thing, but in this case I don't think I want it.#2017-11-0514:42zigndI have a possibly noob question about the Datomic libraries, I was following the Getting Started section in the Datomic Documentation and it only mentions the Client API to interact with the database, but one thing I noticed is that its examples regarding defining a schema are a bit different from the ones I found on the internet, its definitions are missing the :db/id #db/id[:db.part/db] and :db.install/_attribute :db.part/db parts in the maps. To tell you the truth I seriously have no idea why it is necessary and I couldn't find documentation to help me learn about it, the only thing I found (http://docs.datomic.com/schema.html) is that it's related to defining a partition, but the documentation related to it uses it a bit differently:
[{:db/id "communities"
:db/ident :communities}
[:db/add :db.part/db :db.install/partition "communities"]]
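For context, the older peer-library schema idiom zignd is asking about made the partition tempid and the attribute installation explicit; newer Datomic versions and the Client API supply both implicitly. A sketch of that legacy form, for an attribute like the one zignd defines below:

```clojure
;; Older peer-style schema idiom: the two parts the Client API examples
;; omit are spelled out here explicitly.
{:db/id #db/id[:db.part/db]            ; tempid in the :db.part/db partition
 :db/ident :user/username
 :db/valueType :db.type/string
 :db/unique :db.unique/value
 :db/cardinality :db.cardinality/one
 :db.install/_attribute :db.part/db}   ; reverse ref: install the attribute
```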
#2017-11-0514:43zigndI have been defining the schema for my attributes like this using the Client API:
{:db/ident :user/username
:db/valueType :db.type/string
:db/unique :db.unique/value
:db/cardinality :db.cardinality/one}
#2017-11-0514:45zigndAnd it seems to be working, data is being stored in the database with an id. Does the Client API does that by default for me?#2017-11-0514:53rauh@zignd Last paragraph of the link you included should answer your question.#2017-11-0514:55zignd@rauh Oh, thanks for that! I ended up reading only some sections of this page! xD#2017-11-0515:07sparkofreason@marshall We've previously discussed keeping data for multiple customers in a single DB rather than splitting into many DB's on a single transactor. Will that advice still apply for Datomic Cloud?#2017-11-0614:36mitchelkuijpersI have a general question how de other people approach multitenancy with datomicdb. We currently have around 300 customers and we just put everything into one big datomic db en dan add references for all entities to the correct tenant. Have people tried a datomic db per tenant approach? Not sure if that is even feasible#2017-11-0617:12hmaurer@U09R86PA4 is doing a db per tenant iirc#2017-11-0617:13favilaYes#2017-11-0617:14favilaWhy not feasible? if anything it made our job easier#2017-11-0617:15favilaIt made it much easier to isolate data between our customers. a coarse-grained permission system: you can't see other people's data if its not in your database#2017-11-0617:16favilaeverything they legitimately share is in separate shared dbs#2017-11-0617:16hmaurer@U09R86PA4 out of curiosity, do you ever end up doing cross-database queries? (since datomic allows it)#2017-11-0617:16favilawe do denormalize a lot, but that matches our domain's way of doing things#2017-11-0617:17favila(health records and communications)#2017-11-0617:17favilaevery provider has their own view of the world. 
normalization makes that hard to maintain#2017-11-0617:17favilayes we do cross-db queries sometimes#2017-11-0621:33mitchelkuijpersBut how does that work with transactors? Can one handle multiple dbs?#2017-11-0621:34mitchelkuijpersI would feel so much better if we separate every tenant..#2017-11-0621:36favilayes transactors handle multiple dbs#2017-11-0621:36faviladidn't you notice you have to specify a db name in your connection string?#2017-11-0621:36favilaThat's a downside too depending on your storage. There is one (active) transactor per storage.#2017-11-0621:37favilaIf you want to keep them on separate storages too, you will need more transactors#2017-11-0621:37favilaalso cross-db atomic commits are impossible#2017-11-0621:37favilaso make sure you don't need those#2017-11-0621:39mitchelkuijpersAh that is not a problem. But you can add extra transactors if you need it for write throughput for tenants then right?#2017-11-0621:39favilaYou need a backup+restore (thus downtime), but yes you can move a tenant to a different transactor#2017-11-0621:40mitchelkuijpersThank you so much, this helps a lot#2017-11-0617:27souenzzoI'm using incremental S3 datomic backup.
Let's suppose: day 1 backup, day 2 backup, day 3 backup.
On day 2, due to a hardware failure, there is corruption in the store that doesn't block the day 3 backup.
How do I restore my database, from s3 day1?#2017-11-0619:48marshallCome join us on the new Datomic Developers Forum at http://forum.datomic.com !#2017-11-0619:58marshall^ setting a topic so folks can see the announcement once it scrolls up a ways#2017-11-0705:06timgilbertThat forum software is super slick, nice job#2017-11-0708:32lmergenthe real question of course is whether the datomic forum is powered by datomic :)#2017-11-0709:42val_waeselynckMy computer crashed and now I can't start my dev transactor anymore without it crashing. I'm seeing:#2017-11-0709:42val_waeselynckLaunching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:, storing data in: ../data/datomic/data ...
System started datomic:, storing data in: ../data/datomic/data
Critical failure, cannot continue: Heartbeat failed
#2017-11-0709:44val_waeselynckThe logs show :
2017-11-07 10:20:38.670 INFO default datomic.lifecycle - {:event :transactor/heartbeat-failed, :cause :conflict, :pid 4647, :tid 25}
2017-11-07 10:20:38.672 ERROR default datomic.process - {:message "Critical failure, cannot continue: Heartbeat failed", :pid 4647, :tid 23}
2017-11-07 10:20:38.674 INFO default datomic.process-monitor - {:MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :HeartbeatMsec {:lo 5001, :hi 5005, :sum 20010, :count 4}, :Alarm {:lo 1, :hi 1, :sum 1, :count 1}, :AlarmHeartbeatFailed {:lo 1, :hi 1, :sum 1, :count 1}, :SelfDestruct {:lo 1, :hi 1, :sum 1, :count 1}, :AvailableMB 628.0, :event :metrics, :pid 4647, :tid 23}
2017-11-07 10:20:38.759 INFO default o.a.activemq.artemis.core.server - AMQ221002: Apache ActiveMQ Artemis Message Broker version 1.4.0 [dc68a7d7-c39c-11e7-9092-12754bafa2ff] stopped, uptime 24.966 seconds
#2017-11-0709:44val_waeselynckHow can I fix this?#2017-11-0712:44marshall@val_waeselynck Are you able to run a dev txor against a new storage (i.e. move/rename the data dir)#2017-11-0808:55val_waeselynck@U05120CBV it works if I start from a clean data dir, I guess data corruption occurred when my machine crashed.#2017-11-0808:55val_waeselynckI guess I'll just restore a backup#2017-11-0809:00val_waeselynckRestoring to a clean data dir worked. Thanks @U05120CBV @U06GLTD17!#2017-11-0714:18bkamphaus@val_waeselynck zombie transactor process somewhere? :cause :conflict can be multiple transactors running without a license that supports HA, otherwise possibly due to the local H2 that backs dev not being robust against many failure cases (not intended for that purpose), i.e. failing without all acked writes to storage having been persisted to disk and not being available.#2017-11-0717:05wds_Hey guys, having an issue retrieving large amounts of nodes using datomic pull. I saw in the documentation that there is a limit of 1000 nodes. How would I use the (limit :attr nil) syntax in this query here below?
['(pull ?e [* {:pid/a [:db/ident]
:pid/b [:db/ident]
:problem/root [* {:problem/foo [:db/ident]
:problem/bar [:db/ident]}]}])])
we need to pull more than 1000 nodes under problem/root#2017-11-0717:12favilaReplace :problem/root with (limit :problem/root nil)#2017-11-0717:13favilaThere's an example in the docs: http://docs.datomic.com/pull.html#limit-expressions#2017-11-0717:16wds_thank you, trying now#2017-11-0717:22igor.i.ges['(pull ?e [* {:pid/a [:db/ident]
:pid/b [:db/ident]
(limit :problem/root nil) [* {:problem/foo [:db/ident]
:problem/bar [:db/ident]}]}])])
still 1000
['(pull ?e [* {:pid/a [:db/ident]
:pid/b [:db/ident]
:problem/root [(limit * nil) {:problem/foo [:db/ident]
:problem/bar [:db/ident]}]}])])
still 1000
['(pull ?e [* {:pid/a [:db/ident]
:pid/b [:db/ident]
:problem/root (limit [* {:problem/foo [:db/ident]
:problem/bar [:db/ident]}] nil)}])])
;syntax error#2017-11-0717:23wds_@U2XL48J00 and I tried the suggested solution to no avail#2017-11-0717:35favila@U2XL48J00 That limit is in the wrong spot#2017-11-0717:37favilamap-spec = { ((attr-name | limit-expr) (pattern | recursion-limit))+ }#2017-11-0717:37favila(from the docs)#2017-11-0717:46igor.i.gesthe last example was pure frustration. First example replaces attr-name with limit-expr in the outer map spec. the second example replaces attr-name with limit-expr in the list spec. So when you say, in the wrong spot, what exactly do you mean? (as i understand attr-expr = limit-expr | default-expr but it does not seem to produce intended effect) Thank you.#2017-11-0717:51favilathe first one should work#2017-11-0717:51igor.i.gesand yet it didn't#2017-11-0717:52favilatry with d/pull instead of query pull#2017-11-0717:52favilamaybe it is a bug#2017-11-0717:55favilayou can also try a very small limit (e.g. 2) to see if it is working at all#2017-11-0717:57igor.i.gesgood idea.#2017-11-0717:57igor.i.gesturns out nil aka no limit doesn't seem to work in this case, but setting a specific number works (in example 1)#2017-11-0717:57igor.i.gesthank you!#2017-11-0717:58favilasame behavior with d/pull?#2017-11-0717:58favilapossible workaround? [* (limit :problem/root nil) {:problem/root [*]}]#2017-11-0717:59favilaanyway that is definitely a bug#2017-11-0717:59favila(to ignore nil limit on map key)#2017-11-0718:02igor.i.gesdoes not work with d/pull either#2017-11-0717:22conanHi all, looking for a bit of help getting a transactor running on AWS against dynamodb storage. I've created the ddb table using ensure-transactor, and i've created the cloudformation stack using ensure-cf, create-cf-template and create-cf-stack. I've now got a stack in place, but the ec2 instance it creates just shuts down as soon as it starts; no logs make it into my s3 bucket (although the bucket has been created fine). Can anyone point me in the direction of some resources about debugging this? 
Thanks#2017-11-0717:27conani'm using a t2.small instance, which appears in the list of supported instances in my cf-template.json#2017-11-0718:40marshallThe t2.small is pretty limited for resources; what are you using for xmx, and your other memory settings (mem index, obj cache)?#2017-11-0811:48conan1G for Xmx#2017-11-0811:48conanthe others are set to the developer defaults#2017-11-0811:49conanWhat would be really useful would be to know how I can debug this, there doesn't seem to be any way of getting logs unless I happen to request them at exactly the right time#2017-11-0806:48podviaznikovhey everyone, I had a clojure app using datomic(dynamo) deployed to aws. The app was working before but now I started getting errors like this 17-11-08 06:42:15 221c15fc989f ERROR [io.montaigne.api.server:31] - datomic connection error clojure.lang.ExceptionInfo: Error communicating with HOST 10.238.58.21 or ALT_HOST 54.190.108.228 on PORT 4334 {:alt-host "54.190.108.228", :peer-version 2, :password "EDITED, :username "EDITED", :port 4334, :host "10.238.58.21", :version "0.9.5554", :timestamp 1506450187882, :encrypt-channel true}
17-11-08 06:42:15 221c15fc989f ERROR [io.montaigne.api.server:32] - datomic connection error details Error communicating with HOST 10.238.58.21 or ALT_HOST 54.190.108.228 on PORT 4334. I don’t think I changed anything with configuration, so not sure what is going on. My Dynamo DB instance is still running. However, it seems my transactor was restarted with a new IP address. How would I fix such a situation? What should I update with the new IP address?#2017-11-0810:24jonpitherhi - I'm getting a 'Transactor not available error' when doing a large data import, the full stack here: https://gist.github.com/jonpither/17a90989d42569988db30d2171e5d58e. Is this recoverable on the Peer process? The transactor seems to be able to carry on with heartbeats etc.#2017-11-0812:44jonpitherI've put a retry code + exponential-backoff around the transact-async call - wondering if others have taken this approach?#2017-11-0814:21favilaSending many txs in a tight loop without back pressure (deref) or sending txs that are really big can both cause that#2017-11-0814:22favilaThe tx may succeed, but the transactor is too overwhelmed to heartbeat#2017-11-0814:30jazzytomatoHi, I have an entity that consists of a unique attribute (squuid), a third party id (unique identity) and a bunch of other attributes.
What is the best way to upsert data based on third party id, without overriding my squuid if it already exists? i.e. I want to generate a squuid the first time but not on the subsequent updates. This seems to be a common use case so I wonder if a custom database function is reasonable#2017-11-0815:39val_waeselynckYou can definitely use a custom database function for that (and make it generic), but I would then question the usefulness of the UUID#2017-11-0815:56jazzytomatothank you#2017-11-0902:43podviaznikovI use dynamodb as storage. It seems like my transactor failed on EC2 and I can’t make it connect to the dynamodb. Can I still extract/query data from dynamo? how would I do that?#2017-11-0903:52jaret@podviaznikov I assume you were previously running a transactor on AWS? What error are you getting when starting the transactor?#2017-11-0903:53jaretTransactor AMI was designed as an expendable resource. Be sure to shut down any transactor in a damaged state and let CloudFormation start a new one.#2017-11-1000:06csmso, we’re seeing an issue on startup of our services that use the peer library, connecting to dynamodb; we get constant {:event :kv-cluster/retry, :StorageGetBackoffMsec 0, :attempts 0, :max-retries 9, :cause "java.net.SocketTimeoutException", :pid 3955, :tid 55}, and the service never seems to connect properly#2017-11-1013:02timovanderkampI'm trying to add data in datomic 'in the past' for testing purposes but whenever i try to set the :db/txInstant attribute i get this error:
IllegalArgumentExceptionInfo :db.error/past-tx-instant Time conflict: Tue Oct 03 00:00:00 CEST 2017 is older than database basis
Is there any way to force this or change the database creation date.#2017-11-1013:14hmaurer@timovanderkamp afaik Datomic forces txInstant to be increasing#2017-11-1013:14hmaurerthe database basis is the timestamp of the latest transaction I think#2017-11-1013:16timovanderkamp@hmaurer So that means my schemas must be timestamped in the past aswell?#2017-11-1013:17hmaurer@timovanderkamp how so? sorry, I don’t understand the question#2017-11-1013:18timovanderkampIf the schema's are installed at this time, i wont be able to add transactions in the past right#2017-11-1013:18hmaurerright#2017-11-1013:18hmaurerin which scenario would you want to transact things “in the past” with an altered txInstant?#2017-11-1013:19timovanderkampI want to create reports for some entities, with information about certain moments in time#2017-11-1013:19hmaurerAh. I think you should read this: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html (written by @val_waeselynck on this channel)#2017-11-1013:20hmaurerSpecifically the section “Taking a step back: event time vs recording time”#2017-11-1013:20timovanderkampAlright thanks!#2017-11-1013:20hmaurerBut as a quick TL;DR;, you shouldn’t be using :txInstant as the time at which your events occurred; only as the time at which they were recorded#2017-11-1013:21hmaurerotherwise you run into issues, as you mentioned, where you might want to add events that occurred “in the past”, and Datomic will stop you from doing it#2017-11-1013:22hmaurerFrom my (limited) understanding of the topic, it’s an issue you would stumble upon when building pretty much any event-sourced system. I think it’s sometimes referred to as “bitemporal data”#2017-11-1017:04mishagreetings! Is there anything worthy to read about storing localized names in datomic? 
Schema approaches, etc.#2017-11-1208:56val_waeselynckI'm doing it by storing json-encoded maps in a string field.#2017-11-1210:57mishado you store that map in the "name" attribute? or in a separate attribute?#2017-11-1211:01mishaBasically every time you pull an entity, you pull all the translations no matter what, right? But, with a convenience of not complicating any queries with a locale/language info?#2017-11-1219:07val_waeselynckYeah, that worked for me because any content only had a few locales defined, and only the UI dealt with localization. If that were not the case, I guess I would have used an entity with one attribute per locale; not very generic, but seems like a reasonable tradeoff for performance.#2017-11-1220:24misha@U05120CBV was asking about this ^^^ sort of thing#2017-11-1017:08mishaor at least a checklist of things to consider while building a brand new bicycle#2017-11-1019:37rauhJust rewrote my db function to assert exactly the refs (or values) of a cardinality many: https://gist.github.com/rauhs/0704f6492674ea79e935a9e01ac3a483#2017-11-1019:38rauhIt's pretty flexible and works with tempid, refs, idents and most importantly also works when you do NOT necessarily know if what you're inserting is new or existing.#2017-11-1200:18zigndThere's a talk by Rich in which he talks about a Datomic feature he calls "with" that allows you to simulate changes in a database without actually changing it. But I couldn't find any documentation on it, does anyone know a name, a keyword or a link that could help me find it?#2017-11-1200:52marshallhttp://docs.datomic.com/clojure/#datomic.api/with#2017-11-1200:52marshall@zignd ^#2017-11-1200:54zigndThanks @marshall!#2017-11-1200:55marshallSure #2017-11-1200:56marshallSame api structure as transact.
Use the db after value from it to examine the speculative db value#2017-11-1200:57marshallhttp://docs.datomic.com/clojure-client/index.html#datomic.client/with#2017-11-1200:57marshallIf you're using client api ^^#2017-11-1200:59marshallhttp://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html @misha not sure if that's what you're looking for ?#2017-11-1201:06zignd@marshall Nice! May I ask you another question? I'm executing unit tests against a temporary in memory instance of Datomic. I guess it would be better to do so using with instead right? There's a recommend practice between these two options?#2017-11-1208:48val_waeselynck@zignd you may also want to check out Datomock (https://github.com/vvvvalvalval/datomock). I'm the author, feel free to ask me questions#2017-11-1213:27zigndInteresting project @U06GS6P1N, I gave you a star. I'm still learning Clojure and Datomic, but I will keep it in mind, I might end up needing it during my unit tests#2017-11-1201:09marshallProbably depends on your specific goal. The logical structure should be similar if not the same#2017-11-1201:10marshallThe advantage of with is if you need a non trivial basis db for the test you can use one backed by persistent storagr#2017-11-1201:10marshallStorage #2017-11-1201:11marshallI.e. you can have a restored backup of prod loaded in your test system and use with against that #2017-11-1201:13marshall@zignd ^#2017-11-1201:17zigndThanks for the insight @marshall . I guess another advantage for me would be not having to load the schema every time I create an in memory instance#2017-11-1205:57DesmondJava interop question: I would like to take the results of a query and construct a java object with them. Rather than doing (MyObj. (first result) (second result)) I would like to use apply to pass the arguments to the constructor of MyObj. However when I try to do that I get a compiler error. I'm guessing that just means that the . constructor is not actually a function. 
Is there a clean way to do this?#2017-11-1208:43val_waeselynck@U7Y912XB8 I guess you need to wrap the constructor in a multi-arity fn (which you can generate using a macro)#2017-11-1214:11souenzzo(defrecord Foo [a b])
=> user.Foo
(defn into-Foo [[a b]] (new Foo a b))
=> #'user/into-Foo
(into-Foo [1 2])
=> #user.Foo{:a 1, :b 2}
;; By default there are these macros:
(->Foo 1 2)
=> #user.Foo{:a 1, :b 2}
(map->Foo {:a 1 :b 2})
=> #user.Foo{:a 1, :b 2}
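Applied to Desmond's original interop question: a Java constructor isn't a first-class function, so wrapping it in a fn gives apply something to call. A sketch using java.awt.Point as a stand-in for the hypothetical MyObj:

```clojure
;; Interop constructors can't be passed to apply directly; wrap one in a fn.
(defn ->point [x y]
  (java.awt.Point. x y))

;; Now apply works on a seq of constructor arguments, e.g. a query result row.
(apply ->point [3 4])
;; => a java.awt.Point with x=3, y=4
```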
#2017-11-1220:28mishahave a look at https://clojuredocs.org/clojure.core/memfn#2017-11-1219:14zigndHow do I properly provide a temporary id to a transaction so that I can take it out of the :tempids of the returned map?#2017-11-1219:16zigndI tried to create a temporary id with (d/tempid :db.part/user) and then use it in the transaction data but the returned map still returns a negative long for the temporary id instead of the value I created through d/tempid#2017-11-1219:19favilaUse d/resolve-tempid. It does the tempid to negative number conversion for you.#2017-11-1219:17zigndHere an example demonstrating what I'm trying to accomplish:
(let [content "test"
author-id 1
tid (d/tempid :db.part/user)
tx-data [{:db/id tid
:tweet/content content
:tweet/author-id author-id}]]
(-> @(d/transact conn tx-data)
(:tempids)
(get tid))) ; it doesn't work because the map associated with `:tempids` contains a negative long as a key to the real id I'm trying to retrieve
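favila's suggestion in context: d/resolve-tempid performs the tempid-to-negative-key conversion against the :tempids map and returns the granted entity id. A sketch against zignd's example above:

```clojure
;; Resolve the original tempid object against the transaction result.
(let [tid     (d/tempid :db.part/user)
      tx-data [{:db/id tid
                :tweet/content "test"
                :tweet/author-id 1}]
      {:keys [db-after tempids]} @(d/transact conn tx-data)]
  (d/resolve-tempid db-after tempids tid))
;; => the entity id assigned to tid
```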
#2017-11-1219:21zigndThanks @favila I'll check it out!#2017-11-1400:06souenzzoLet's suppose (d/q '[:find [?e ...] :where [?e :foo/bar]] (d/as-of db 555)) will return [1 2 3].
Is this order stable (always using the db at basis 555)?#2017-11-1412:08souenzzoCan I bump this? 😅
P.S.: I'm looking for cursors.#2017-11-1412:33mitchelkuijpersDatomic will never guarantee the order of results so you should never depend on this. In practice it seems to be mostly ordered but I would not depend on this behaviour because it is an implementation detail which they won't guarantee.#2017-11-1400:56zigndIs it possible to use an entity's id in a query? I tried to create a pattern using :db/id but apparently that's not the way to do so.
(d/q '[:find ?t ?content ?display-name
:in $ ?p-user-id
:where
[?u :db/id ?p-user-id]
[?t :tweet/author-id ?u]
[?t :tweet/content ?content]
[?u :user/display-name ?display-name]]
(d/db conn) zignd-id)
> CompilerException java.lang.Exception: processing rule: (q__11193 ?t ?content ?display-name), message: processing clause: [?u :db/id ?p-user-id], message: :db.error/not-an-entity Unable to resolve entity: :db/id
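As csm and favila point out in the replies, the entity id is the first element of each datom clause, so the input can be bound there directly; :db/id is not an attribute and can't appear in the attribute position. A sketch of the corrected query:

```clojure
;; Bind the input id in the entity position instead of looking up :db/id.
(d/q '[:find ?t ?content ?display-name
       :in $ ?p-user-id
       :where
       [?t :tweet/author-id ?p-user-id]
       [?t :tweet/content ?content]
       [?p-user-id :user/display-name ?display-name]]
     (d/db conn) zignd-id)
```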
#2017-11-1401:35souenzzoSee also:
(d/touch (d/entity (d/db conn) :db/ident))
(d/touch (d/entity (d/db conn) :db/id))
:db/ident, like :tweet/author-id, is an attribute in datomic. They have a description.
:db/id isn't. So you can't use it in the second position of the query tuple.
(d/touch (d/entity (d/db conn) 0)) also a cool place to explore.#2017-11-1402:03zigndThe results of calling this touch function are interesting; knowing that makes the fact that I was creating a pattern for :db/id even more pointless xD#2017-11-1405:56favilaThe db/id here is ?u! Just replace that with ?p-user-id#2017-11-1409:49zigndthanks guys i understand it now, it just took me some time to realize that the e in the [e a v t] format could be parameterized and that it was a value, just like the ones you use in the v part of the format#2017-11-1409:49zigndthings now make much more sense#2017-11-1400:58csmjust use ?p-user-id in place of ?u#2017-11-1400:58csmand omit the :db/id lookup#2017-11-1400:59zigndOh, that makes sense! Thanks @csm#2017-11-1403:12zigndHow do I properly use the or clause? I'm trying to follow the docs here http://docs.datomic.com/query.html#or-clauses, but I'm starting to think my use case may require something else, maybe some sort of join. My guess is that it's somehow related to the fact I'm trying to create a pattern involving a value coming from an input and another from within the query itself.
(d/q '[:find ?tweet-id ?content ?display-name
:in $ ?user-id
:where
[?follow-id :follow/follower-id ?user-id]
[?follow-id :follow/followed-id ?followed-id]
(or [?tweet-id :tweet/author-id ?followed-id] ; I'd like to have an `or` clause here. Because I also need to retrieve the :tweet entities in which the author is the ?user-id itself
[?tweet-id :tweet/author-id ?user-id])
[?tweet-id :tweet/content ?content]
[?followed-id :user/display-name ?display-name]]
(d/db conn) zignd-id)
> Assert failed: All clauses in 'or' must use same set of vars, had [#{?followed-id ?tweet-id} #{?user-id ?tweet-id}] (apply = uvs)
#2017-11-1411:57souenzzoIt's a (or-join [?tweet-id] ...), not?#2017-11-1416:05zignd@U2J4FRT2T do you mean this?
(or-join [?tweet-id :tweet/author-id ?followed-id]
[?tweet-id :tweet/author-id ?user-id])
#2017-11-1416:07zigndI'm not at my computer right now, but I'll try it out as soon as I get to it#2017-11-1416:21souenzzo(or-join [?tweet-id]
[?tweet-id :tweet/author-id ?followed-id]
[?tweet-id :tweet/author-id ?user-id])
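For completeness, a sketch of the whole query restructured around or-join, with the follow clauses moved inside so each branch binds the same unification vars and the display name resolves to the tweet's actual author in both branches (untested, schema assumed from zignd's query):

```clojure
;; Tweets by people the user follows, union tweets by the user themselves.
(d/q '[:find ?tweet-id ?content ?author-name
       :in $ ?user-id
       :where
       (or-join [?tweet-id ?user-id]
         (and [?follow-id :follow/follower-id ?user-id]
              [?follow-id :follow/followed-id ?followed-id]
              [?tweet-id :tweet/author-id ?followed-id])
         [?tweet-id :tweet/author-id ?user-id])
       [?tweet-id :tweet/content ?content]
       [?tweet-id :tweet/author-id ?author-id]
       [?author-id :user/display-name ?author-name]]
     (d/db conn) zignd-id)
```

Note that vars not listed in the or-join binding vector (like ?follow-id) are local to their branch, which is what sidesteps the "same set of vars" assertion.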
#2017-11-1403:19zigndThe query properly retrieves the :tweet entities in which ?user-id is not an author when I'm using the [?tweet-id :tweet/author-id ?followed-id] pattern instead of the (or [?tweet-id :tweet/author-id ?followed-id] [?tweet-id :tweet/author-id ?user-id]) though#2017-11-1412:09augustlI suspect that it's possible that the order is not stable, based on nothing but guessing#2017-11-1412:09augustlmaybe the ordering changes randomly after re-indexing. Then again, all the data is stored in sorted sets, so maybe it's consistent#2017-11-1416:50uwoIf we’re getting ‘transactor unavailable’, are there other culprits besides write-heaviness? This isn’t during imports.#2017-11-1417:14bkamphaus@uwo could also be memory pressure on the peer, GC, etc. — it’s a peer<->transactor timeout so can be on either side.#2017-11-1417:15uwohmm. thanks#2017-11-1421:19currentoorI enjoyed the intro to Datomic Cloud Native by @stuarthalloway at the Conj. Any updates on when that will be released? Is it still expected to be ready by end of this year?#2017-11-1516:15jaretWe’re targeting Q4 for release of Cloud#2017-11-1518:33currentoorHi @U1QJACBUM, thanks for the reply. #2017-11-1518:33currentoorJust to confirm that’s Q4 this year right?#2017-11-1519:01jaretYep, this year.#2017-11-1503:16andrethehunterAnyone else having issues testing with datomic? Running locally "datomic:... we get the error:
datomic.api/create-database api.clj: 19
datomic.Peer.createDatabase Peer.java: 117
...
datomic.peer/create-database peer.clj: 764
datomic.peer/create-database peer.clj: 772
datomic.peer/send-admin-request peer.clj: 752
datomic.peer/send-admin-request/fn peer.clj: 760
datomic.connector/create-transactor-hornet-connector connector.clj: 320
datomic.connector/create-transactor-hornet-connector connector.clj: 322
datomic.connector/create-hornet-factory connector.clj: 142
datomic.connector/try-hornet-connect connector.clj: 110
datomic.artemis-client/create-session-factory artemis_client.clj: 114
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory ServerLocatorImpl.java: 799
org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException: AMQ119007: Cannot connect to server(s). Tried with all available servers.
type: #object[org.apache.activemq.artemis.api.core.ActiveMQExceptionType$3 0xacdff3a "NOT_CONNECTED"]
clojure.lang.ExceptionInfo: Error communicating with HOST 172.31.8.99 or ALT_HOST 52.64.171.90 on PORT 4334
#2017-11-1506:37axrsWe found the problem. We had used a backup of our database that still had heartbeat information. Unfortunately the transactor had died and could no longer be resolved on the last used ip addresses. (A new transactor replaced the crashed one with new ips) #2017-11-1516:17jaretAndre, I believe you logged a case last night and I replied, but can you confirm that the peer machine can communicate to the HOST and ALT HOST (set in transactor properties file)? http://docs.datomic.com/deployment.html#peers-fail-connect-txor#2017-11-1503:50Desmondim trying to go through the day-of-datomic tutorial but i dont know where to run the repl from. when i run it from the root of the day-of-datomic repo the first require statement fails because theres no datomic on the classpath. and if i run it from the bin/repl where i installed the free datomic in the datomic getting started guide it complains that there is no datomic.repl on the classpath.#2017-11-1504:15souenzzo(require '[datomic.api :as d]) @Desmond#2017-11-1505:01Desmond@U2J4FRT2T that's the line that fails because datomic isn't on the classpath#2017-11-1505:04Desmondha! ok I guess I wasn't at the root of the repo after all. working now#2017-11-1505:04Desmondthanks anyway for the quick response @U2J4FRT2T#2017-11-1503:51Desmondanyone know where i should be running the repl from?#2017-11-1511:39foobarAnyone have an insight into this one? 
https://groups.google.com/forum/#!msg/datomic/gga9PUYj73I/Z3-fsXbpAwAJ#2017-11-1511:39foobarSetting custom trustStore gives ssl errors trying to connect to datomic#2017-11-1512:33EmpperiI’m trying to optimize Datomic peer performance and going around with different JVM options, was wondering if there’s any “known to be good” values where I could start from?#2017-11-1512:34Empperito me it looks like +UseParallelGC seems to be better for performance with less deviation than +UseG1GC which was kinda surprising to me#2017-11-1512:35Empperijust by switching that I get about 2x performance boost and about half the deviation#2017-11-1512:35Empperianother thing I’ve noticed is that Datomic seems to enforce the amount of query threads to be equal to the amount of cores it sees#2017-11-1512:36EmpperiI wonder if that value can be tuned somehow? I do not see any properties in the Datomic documentation here http://docs.datomic.com/system-properties.html#2017-11-1515:18mpenetMight be the setting for core.async or the agent thread pool#2017-11-1515:38conanI'm having trouble connecting to a transactor running on AWS using the CloudFormation template. Does anyone know what value should I have for the host in my transactor.properties?#2017-11-1515:43conanI'm trying to connect from a peer running in Elastic Beanstalk. It's using the datomic-aws-peer instance profile, which includes four dynamo permissions in its policies.#2017-11-1515:43conanThere are only INFO logs in s3 for the transactor, so I think it's running fine.#2017-11-1515:45conanThe transactor instance is running in a security group called datomic, which allows TCP ingress on port 4334.#2017-11-1516:20jaretthe host should be set to the externally reachable IP of the transactor machine. 
You’ll want to confirm you can reach that host IP from your peer machine.#2017-11-1516:23jaretyou can put the machine VM hostname in the host property and optionally use the internal/private hostname IP in the alt-host property to ensure that it uses both.#2017-11-1516:34marshallIf you’re using the provided create and start -template tools#2017-11-1516:34marshallthen the host and alt host get populated automatically by the startup script#2017-11-1516:34marshall@U053032QC if you’re launching the stack ‘manually’ then you’ll need to do what Jaret mentioned#2017-11-1516:35marshallIf your transactor is up and running (can confirm with CW metrics and/or logs), then you only need to specify the storage URI in your peer#2017-11-1516:36marshallit will lookup the transactor endpoint from storage#2017-11-1517:44conanI used the scripts, and it just has localhost in there.#2017-11-1517:44conanI don't really see how I can set it to the IP of the machine it's running on, as the transactor.properties file is used to create the CloudFormation stack that in turn spins up the machine, so there's no way to know the IP in advance.#2017-11-1517:45conanI've read that the peer gets the URI from the storage, in which case what is the host property for?#2017-11-1517:45marshallif you run the stack with the bin/datomic create-cf-stack command it will put in the correct address#2017-11-1517:45marshallit won’t use the one listed in your local copy of the properties file#2017-11-1517:46conanSo the one in the file doesn't get used?#2017-11-1517:46marshallnot if you’re using the included build/launch tools#2017-11-1517:46conanOh ok, so maybe I'm barking up the wrong tree. Thanks!#2017-11-1517:46marshalldid you launch the transactor with bin/transactor create-cf-stack ?#2017-11-1517:46conanyes#2017-11-1517:46marshallyeah, then it’s not that#2017-11-1517:47conanthe transactor is running fine. 
maybe it's a security problem, the docs said all the security groups were also set up by those tools but maybe there's something that isn't working#2017-11-1517:47marshallyou should launch your peer process with the aws role specified in the configuration#2017-11-1517:47conanyep#2017-11-1517:48marshallthe aws-peer-role#2017-11-1517:48marshallthat is listed in your properties file after you run ensure-transactor#2017-11-1517:48conanthat role just gives access to the storage though, nothing else#2017-11-1517:48marshallthat’s correct#2017-11-1517:48marshallthe peer reads storage#2017-11-1517:48conanso i still can't connect to the transactor#2017-11-1517:49marshallwhat error are you getting#2017-11-1517:49conanActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ119007: Cannot connect to server(s). Tried with all available servers.]#2017-11-1517:50conani think it makes sense to me, there is nothing to say that the peer should be able to connect to the transactor#2017-11-1517:51marshallwhat security group is it using?#2017-11-1517:51conansome ebs-generated one#2017-11-1517:51conanthe peer, that is#2017-11-1517:51marshallyou’ll need to allow ingress to the transactor security group from that one#2017-11-1517:52marshallon your txor port#2017-11-1517:52marshall(default 4334)#2017-11-1517:52conani figured that datomic-aws-peer role would provide access but i now realise that's just an assumption i made up#2017-11-1517:53marshallthe role is for storage access#2017-11-1517:53marshallthe system should also make a peer security group#2017-11-1517:53conanyep, that's probably it#2017-11-1517:53marshallbut you’d have to be using it#2017-11-1517:55conanthanks for your help, it would have taken me a while to find that role#2017-11-1517:55marshallnp#2017-11-1610:13conanbtw i've been able to connect from EBS now, and if i allow ingress from the internet I can connect locally as well, so it's all working perfectly. 
thanks!#2017-11-1614:01marshall👍#2017-11-1614:01marshallglad to hear it#2017-11-1516:39marshallYou can specify the query pool thread count with the datomic.queryPool property. For example:
-Ddatomic.queryPool=6#2017-11-1516:40marshall@niklas.collin ^#2017-11-1516:40EmpperiOk, good to know, that's not in the documentation#2017-11-1516:40marshallOur official recommendation is the G1GC. I can imagine varying peer workloads benefiting from other GC options, but since the peer is your application, that will definitely depending on your specific code and usage#2017-11-1516:41marshallIt’s a relatively new addition; I’ll see about adding it to the docs#2017-11-1516:41EmpperiThanks :+1:#2017-11-1516:41marshallDefault object cache is 1/2 of the heap
Again, depending on what your workload looks like you may want to tweak that#2017-11-1516:42EmpperiYeah, I'd already found that#2017-11-1516:42marshallhttp://docs.datomic.com/caching.html and http://docs.datomic.com/capacity.html have some general info#2017-11-1516:43EmpperiOur data size is at least currently relatively small but the structure is complex and so are the queries#2017-11-1516:44EmpperiWe are building a knowledge graph solution on top of Datomic and trying to find out how well it works out#2017-11-1516:44EmpperiSo very query intensive stuff#2017-11-1518:06Quan NguyenHad a random conceptual question. Is it possible to do atomic transactions that involve a read and a write? E.g. suppose I have multiple threads or peers trying to increment an int attribute on some entity? All the examples I've seen with transactions are simply asserting some new value, but that doesn't take into account the current value. Thanks! #2017-11-1518:11potetm@quan.ngoc.nguyen You're looking for transaction functions: http://docs.datomic.com/database-functions.html#2017-11-1518:12potetmI'm sure your example was just that, an example, but, to be clear, storing an incrementing number is probably not the best fit.#2017-11-1518:22Quan Nguyen@potetm thanks for the tip! #2017-11-1609:52thomasHi, I am getting a :db.error/not-an-entity Unable to resolve entity: error and I have no idea how to debug this 😇, where should I start looking?#2017-11-1609:52thomasany pointers would be greatly appreciated. TIA#2017-11-1610:03dominicm@thomas that usually means you're trying to resolve something that doesn't exist. e.g. a typo'd entity name in [:some-id "Fobar"]#2017-11-1610:05thomasso this doesn't exist: Unable to resolve entity: {:order-no 1, :text "XS"}?#2017-11-1610:06dominicmYou might have that in the wrong place I suppose? Does your code look like:
[:db/add ?e :some/param {order-no …}]#2017-11-1610:08thomasthis is the full error as I get it: :message :db.error/not-an-entity Unable to resolve entity: {:order-no 1, :text "XS"} in datom [17592186047266 :choice/options {:order-no 1, :text "XS"}]#2017-11-1610:10thomasnot sure if there is an :db/add somewhere (can see that one in my data at least)#2017-11-1610:10dominicmYeah, it definitely looks like you're setting the value of something to a map?#2017-11-1610:11dominicmAre you transacting a map like this:
{:foo/bar {:order-no 1 …}}
?#2017-11-1610:14thomasI am transacting a massive map of things I think (I didn't write the code nor do I really understand it either, sorry)#2017-11-1610:14conando you definitely have attributes called :order-no and :text?#2017-11-1610:15thomaslet me check.#2017-11-1610:15conanthis is handy
(defn print-db-schema
  "Prints all the attributes from :db.install/attribute"
  [db]
  (clojure.pprint/pprint
   (map #(->> % first (d/entity db) d/touch)
        (d/q '[:find ?v
               :where [_ :db.install/attribute ?v]]
             db))))#2017-11-1610:15thomasI think so... let me check something else...#2017-11-1610:15thomasone sec#2017-11-1610:19rauhYou can't add new data in the value position of a [:db/add ...]. You must look it up with a lookup ref. Like so: [:db/add e a [:order-no 1]]#2017-11-1610:19rauhIf you also want to transact new data then you can either use a tempid or a backwards ref#2017-11-1610:20thomasthe problem is not per se in the DB... in this App we read in an Excel spreadsheet and use the data in that as well.. and the Excel spreadsheet is the only thing that has changed... and that has resulted in this error... I am looking at the excel at the moment for the text XS and see if that has changed.#2017-11-1610:21thomasand there has been indeed a change there!#2017-11-1610:22rauhOption instead of:
- [17592186047266 :choice/options {:order-no 1, :text "XS"}]
use:
- {:order-no 1, :text "XS", :choice/_options 17592186047266}#2017-11-1610:30thomasok I am pretty sure I have found the cause of this problem. nothing datomic related...#2017-11-1610:31thomasand thank you for all your help. It certainly helped me point in the right direction and told me what I had to look for in the Excel spreadsheets!!!#2017-11-1610:31thomasThanks!#2017-11-1614:44conanWhat's the value of using Datomic's :db.type/uri instead of putting a string in there? The only thing I'll ever do when pulling the data out again is to put it into cemerick/url which is happy with strings#2017-11-1614:49augustlI have the same approach to UUIDs, seems wildly more convenient to just use a string since most of the time the UUID comes from a URL and is a string anyway. And then I don't have to convert it to an UUID, return 404 instead of 500 in my web API if it's an invalid UUID, etc#2017-11-1614:51conanyep, that was my thinking. like how i used to put longs in SQL dbs to represent timestamps (as ms since the epoch) instead of using built-in timestamp types#2017-11-1616:00favilaI always use the types because types.#2017-11-1616:02favilaUUIDs do have a more compact representation in datomic (really fressian) than strings. But for URIs there is really no difference except type.#2017-11-1616:02mpenetdate types in dbs are my pet peeve#2017-11-1616:02favilayeah, java util date is just not enough#2017-11-1616:02mpenetso fun converting that stuff back and forth to no end#2017-11-1616:03favilaI have a (still unrealized) plan to encode all the xsd date and time types into a long with canonical sorting#2017-11-1616:03favilawe really hurt for "vague human" date types#2017-11-1616:04favilae.g. LocalDate LocalTime LocalDateTime, etc. where there is no time zone and it's not timestamp-resolution#2017-11-1616:04favilaor sometimes where there is a zone, but it's not timestamp resolution#2017-11-1616:05favilawe encoded dates (i.e. 
Y-M-D values) into a java util date, and that was a huge mistake. we should have just used strings#2017-11-1616:06favilawe were promised custom datatypes in datomic eventually. does anyone know how that is progressing?#2017-11-1616:24matthavenerAnyone know if this is expected behavior?#2017-11-1616:24matthavener(d/q '[:find [(pull ?e [:some/attr]) ?e] :where [?e :some/attr]] db)
#2017-11-1616:24matthavenerArrayIndexOutOfBoundsException [trace missing]
#2017-11-1616:24matthavenerwhereas.. something like this works fine:#2017-11-1616:25matthavener(d/q '[:find [(pull ?e [:some/attr]) ?tx] :where [?e :some/attr _ ?tx]] db)
#2017-11-1616:25marshall@matthavener Yes, that is a known behavior; you can’t get both e and a pull on e in the same find spec#2017-11-1616:25matthavenerinteresting! Thank you#2017-11-1616:25marshallyou can, however pull both :some/attr and :db/id if you need to#2017-11-1616:26matthavenerright#2017-11-1618:36thosmosI'm having troubles connecting console to my local datomic-pro running with a mysql storage. I'm using the following command from the datomic directory:
bin/console -p 8888 dev datomic:sql://\?jdbc:
I can use this same URI (with the addition of the DB name) to connect from a peer and query data. If I put the URI in quotes and remove the escapes it still doesn't work. Here's the error message: https://imgur.com/a/kf3i0
Any ideas?#2017-11-1618:41favilamysql lib may not be on classpath#2017-11-1618:42foobarconsole live datomic:sql://\?jdbc:postgresql://${docker_postgres_datomic_host}:${docker_postgres_datomic_port}/${docker_postgres_datomic_db}\?user=${docker_postgres_datomic_user}\&password=${docker_postgres_datomic_pass}#2017-11-1618:42thosmos@favila the mysql lib is in the lib folder and the transactor connects successfully? does it need to be somewhere else?#2017-11-1618:42favilalogs or console may have a better error message#2017-11-1618:42favilano, that should be enough#2017-11-1618:43foobarAre you missing the mysql db ?#2017-11-1618:44thosmos@foobar that was it! duh!#2017-11-1618:45thosmosi assumed it worked like the peer#2017-11-1619:32fentontrying to setup health checking datomic using ping-host/ping-port configuration, but getting following stacktrace error:
bin/transactor ~/projects/abraxas/transactor/config/dev-transactor.properties
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Critical failure, cannot continue: Error starting transactor
java.lang.RuntimeException: Unable to start ping endpoint localhost:9999
at datomic.transactor_ext$start_ping_endpoint.invokeStatic(transactor_ext.clj:42)
at datomic.transactor_ext$start_ping_endpoint.invoke(transactor_ext.clj:25)
at datomic.transactor_ext$start_pro.invokeStatic(transactor_ext.clj:63)
at datomic.transactor_ext$start_pro.invoke(transactor_ext.clj:59)
at clojure.lang.Var.invoke(Var.java:379)
at datomic.transactor$run_STAR_.invokeStatic(transactor.clj:294)
at datomic.transactor$run_STAR_.invoke(transactor.clj:220)
at datomic.transactor$run$fn__23373.invoke(transactor.clj:347)
at clojure.core$binding_conveyor_fn$fn__6757.invoke(core.clj:2020)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Insufficient threads: max=3 < needed(acceptors=1 + selectors=2 + request=1)
at org.eclipse.jetty.server.Server.doStart(Server.java:368)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at cognitect.http_endpoint.jetty$start.invokeStatic(jetty.clj:123)
at cognitect.http_endpoint.jetty$start.invoke(jetty.clj:120)
at cognitect.http_endpoint.jetty$server$fn__21184.invoke(jetty.clj:136)
at cognitect.http_endpoint$create_endpoint.invokeStatic(http_endpoint.clj:292)
at cognitect.http_endpoint$create_endpoint.invoke(http_endpoint.clj:217)
at cognitect.nano_impl.server$create.invokeStatic(server.clj:172)
at cognitect.nano_impl.server$create.invoke(server.clj:141)
at cognitect.nano_impl$create.invokeStatic(nano_impl.clj:166)
at cognitect.nano_impl$create.invoke(nano_impl.clj:96)
at datomic.transactor_ext$start_ping_endpoint.invokeStatic(transactor_ext.clj:33)
... 13 more#2017-11-1619:41marshall@fenton what version of datomic?#2017-11-1619:42fenton@marshall datomic-pro-0.9.5561.62#2017-11-1619:42marshallis something else using that port on your machine?#2017-11-1619:42marshallhrm. hang one#2017-11-1619:43marshallthat insufficient thread bug should have been fixed in http://docs.datomic.com/changes.html#0.9.5561.59#2017-11-1619:47fenton@marshall yet i'm on the later version...hmmm...#2017-11-1619:47fentonregression?#2017-11-1619:47fentoni am running in dev mode...#2017-11-1619:47fentonconfig follows:#2017-11-1619:48marshall@fenton can you email <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> with your issue and your config files (redacted of any sensitive info) as well as the number of cores on your system (real & virtual)#2017-11-1619:49marshalli.e. getconf _NPROCESSORS_ONLN #2017-11-1619:50marshallor the contents of /proc/cpuinfo if you’re in linux#2017-11-1619:50fenton@marshall ok will do thx.#2017-11-1619:53marshallhttp://docs.datomic.com/changes.html#0.9.5561.59#2017-11-1619:53marshalloops#2017-11-1619:53marshall🙂#2017-11-1711:10zigndSQL Server offers a pagination syntax like so:
SELECT col1, col2, ...
FROM ...
WHERE ...
ORDER BY ... -- this is a must: OFFSET/FETCH requires an ORDER BY clause
-- the paging comes here
OFFSET 10 ROWS -- skip 10 rows
FETCH NEXT 10 ROWS ONLY; -- take 10 rows
is there something similar for queries in Datomic for pagination? or is it something like sorting and i would have to do outside the query?#2017-11-1711:13augustlpagination is generally less useful when you have the data in your peer already anyway#2017-11-1711:13augustl(take 10 result) 🙂#2017-11-1711:16zigndoh i see then, it's like sorting#2017-11-1711:17zigndbut what do people usually do when they have queries that return too much data? it would stay in memory for a moment because you're paginating after the query execution#2017-11-1711:18augustlthat's generally difficult to avoid. That would have to happen on a database server too#2017-11-1711:18zigndyou're right. the first thing i can think of would be to perform a query that retrieves the ids, then i'd paginate the ids and use them on a second query, that would retrieve the expected result using something like min max#2017-11-1711:19augustlthe only optimization I could think of is if you wanted to paginate without sorting, i.e. in whatever order the peer walks the index#2017-11-1711:19augustlmaybe just retrieving the IDs could work, yeah. Unless they're just stored in the same chunks as most of the data anyway#2017-11-1711:20augustlmy mental model of the organization of data in the chunks is lacking, so not sure if doing ids first would help...#2017-11-1711:24zigndi'm not sure, either. because if the pagination depends on some sort of ordering this resolution model wouldn't be that great, in the first query i'd have to retrieve the ids and the other attributes required for the sorting before the pagination#2017-11-1711:26augustlthe attributes you sort on are probably indexed I suppose? Then I would assume that the data are mostly in the same chunks in avet anyway#2017-11-1711:27augustlor, wait. Does each index have its own set of chunks? Then at least you'd only have the indexed attributes in avet. 
I think?#2017-11-1711:27augustlheh#2017-11-1711:27augustlavet only contains the actual data that is indexed, I suppose? And if you need additional facts, you'd need to look it up in eavt or elsewhere#2017-11-1711:27augustl"you" being the query engine#2017-11-1711:30zigndoh, i didn't know there were indexes in Datomic, just checked the docs and there's a page on that, I guess I'll have to study xD#2017-11-1711:30augustlyou typically want to tag attributes you query for as indexed#2017-11-1711:31zigndoh, i will keep that in mind#2017-11-1711:32zignddoing so would affect the query results so that ordering would be done over the tagged attribute?#2017-11-1711:33zigndreading the docs#2017-11-1711:34augustlsort of kind of#2017-11-1711:34augustlthe indices started making sense to me after I started to play with the "datoms" API, that gives you direct access to the indices#2017-11-1711:34augustli.e. what the query engine ends up doing#2017-11-1711:35augustleach index is a nested sorted set of data, nested differently. Or something like that. So "eavt" lets you input an e (entity id) and get all attributes for that entity. Then you add an a and you get all values for that attribute. Then you walk those to find the attribute closest to the T of the db you're accessing#2017-11-1711:36augustland avet is just a different pattern, and only indexed values are stored there. So you add an attribute and you get all values for that attribute. And so on#2017-11-1712:30zigndthanks for the help @augustl, that shed some light#2017-11-1712:31zigndit seems that the datoms API might be of some use, but I will have to experiment with it first#2017-11-1803:59Desmondso i have a database with questions and comments that belong to those questions. i want to query for the questions’ attributes as well as the number of associated comments. 
using a join and the count aggregate works fine when there are comments but of course doesn’t work when there are zero comments because there is nothing to count#2017-11-1814:56souenzzomaybe or + missing?#2017-11-1905:06Desmondi didn't know about missing? - thanks for the tip#2017-11-1905:06Desmondi have it down to 2 queries and i'm happy with that#2017-11-1906:26Desmondok, now its getting trickier. now i have two associations I need to count and they can both be zero.#2017-11-1906:28Desmondnow i'm trying to do it with 4 queries but that means i need to be able to identify the case where both associations are not there. i tried (and (not-join...) (not-join...)) but that doesn't seem to be a legal use of and.#2017-11-1913:16val_waeselynck@U7Y912XB8 my take at it: https://gist.github.com/vvvvalvalval/5547d8b46414b955c88f#2017-11-2106:50Desmond@U06GS6P1N thanks that is very helpful. im having trouble figuring out how to extend it to count two optional associations however. in particular i'm struggling with how to use the where clause. this is what i have (warning - awkward code):
(def query '[:find (sum ?comment-weight) (sum ?affirmation-weight) ?text ?time ?source-identifier ?q
             :with ?uniqueness
             :where
             [?q :question/text ?text]
             [?q :question/time ?time]
             [?q :question/source-identifier ?source-identifier]
             (or-join [?q ?uniqueness ?comment-weight ?affirmation-weight]
               (and [?comment :comment/question ?q]
                    [?affirmation :affirmation/question ?q]
                    ['((identity ?comment) (identity ?affirmation)) ?uniqueness]
                    [(ground 1) ?comment-weight]
                    [(ground 1) ?affirmation-weight])
               (and [?comment :comment/question ?q]
                    [(identity ?comment) ?uniqueness]
                    [(ground 1) ?comment-weight]
                    [(ground 0) ?affirmation-weight])
               (and [?affirmation :affirmation/question ?q]
                    [(identity ?affirmation) ?uniqueness]
                    [(ground 1) ?affirmation-weight]
                    [(ground 0) ?comment-weight])
               (and [(identity ?q) ?uniqueness]
                    [(ground 0) ?comment-weight]
                    [(ground 0) ?affirmation-weight]))])
#2017-11-2106:51Desmondthis doesn't actually work#2017-11-2106:51Desmondthe counts are wrong#2017-11-2106:53Desmondany idea how to get this to work? at this point i don't care so much about pretty code...#2017-11-2108:39val_waeselynck@U7Y912XB8 can you post this on SO? I think it's useful that other people see the solution#2017-11-2115:48val_waeselynck@U7Y912XB8 you seemed in a hurry, so I took the liberty of asking a question on SO and answering myself 🙂 https://stackoverflow.com/questions/47417009/datomic-aggregations-counting-related-entities-without-losing-results-with-zero/47417128#47417128
HTH#2017-11-2116:19favilaI suggest an alternative: call other functions which are ok with nils.#2017-11-2116:25val_waeselynck@U09R86PA4 I don't see what this would consist of, could you add it as an SO answer please?#2017-11-2116:25favilaalready did#2017-11-2116:26val_waeselynck@U09R86PA4 alright I see it now#2017-11-2205:49Desmondwow. thanks for going the extra mile! i'll give this a try and respond on SO#2017-11-1804:00Desmondhow can i do this without n+1 queries?#2017-11-1808:12Desmondwound up doing one query for questions with a count of comments and one query for questions without comments using a not-join#2017-11-1814:24nickikHi all. I am confused. Maybe somebody can help me. I have a java project and I would like to use the java query DSL. I have the newest version of datomic pro on the classpath but 'import datomic.*;' doesn't work.#2017-11-1818:15fabraoHello all, today I'm using a mysql database for a selling system that serves 30-40 clients simultaneously. This concurrency is controlled by a ring web app. What do you think about changing the database to Datomic? I saw that the limit for the free version is 2 peer connections. What does it mean? Two IPs connected?#2017-11-1818:16fabraoCan anyone explain what it means?#2017-11-1903:20zigndi'm still confused about pagination of query results in datomic, the whole thing about using the datoms API still seems weird, and almost feels like not the right way to do pagination, so i decided to ask a question on SO, it might end up helping others that might end up facing this same problem in the future https://stackoverflow.com/questions/47373356/query-result-pagination-in-datomic#2017-11-1903:22zigndi'd really appreciate any answers on that, it's driving me nuts all the possibilities i could use to solve this problem xD#2017-11-1913:19val_waeselynck@U6PCW7E9F answered 🙂#2017-11-1914:32zignd@U06GS6P1N Thanks for the answer! 
My understanding almost reached the same point as what you described in your answer regarding the limitations, I just needed someone to confirm it. It seems that Datomic is not the best fit for all use cases, and especially not for a side project attempting to be a Twitter clone#2017-11-1914:38val_waeselynck@U6PCW7E9F well I would rather say that Datomic does not address all the needs of a data system; what Datomic excels at is being a system of record, i.e. a transactional entry point for your system. I'd much rather have a Datomic + some derived data stores than just a SQL database that I will end up de-normalizing :)#2017-11-1906:30Desmondhow can i describe the lack of two associations. i tried (and (not-join...) (not-join...)) but that doesn't seem to be a legal use of and/not-join#2017-11-1921:11chris_johnson@fabrao those “connected peers” are Peers, not end-user clients. Does your current architecture support 30-40 connections to your mysql DB, or 30-40 connections to your Ring server, which talks to the database? If it’s the latter, Datomic-free would work for you because your Ring server would be one Peer.#2017-11-2004:09fabrao@chris_johnson the ring server uses a mysql database pool to serve 30-40 clients. So how does this work for datomic?#2017-11-2006:47Empperi@fabrao the way Datomic works is like this:
1) You have data in whatever storage (mysql in your case)
2) You have an application with Datomic Peer within it (yes, within, it’s part of your app)
3) You query the Peer for data. This querying can happen within your application without any network I/O (so far!) or it can be a separate application.
4) Peer checks if it already has the needed segments of the database cached in memory. If the answer is NO, then it proceeds to download those segments from mysql into memory.
5) After the segments have been received (or if they already were there) Peer performs the actual query against the dataset that is within the memory of Peer (which is usually your application) and returns the data#2017-11-2006:48Empperiso basically the limitation of having only 2 peers for free version means that you can have 2 applications at the same time talking to storage backend. These 2 applications can be either the same app duplicated for performance reasons or they can be completely separate apps#2017-11-2006:49Empperithe way datomic free does this limitation is that when a Peer starts it does the following:#2017-11-2006:50Empperi1) Peer connects to the Storage Backend
2) It retrieves several configuration values from the Storage Backend, among them the location of the Transactor
3) Peer connects to the Transactor and informs it is now available. With Datomic Free, if this is the third Peer informing about its availability to the Transactor this will fail. With Datomic Pro it succeeds#2017-11-2006:52Empperiand why does Peer need this connection to the Transactor, after all it just reads from the storage backend and Transactor writes there? The reason is of course the Peer caches. When Transactor writes something to the storage backend it needs to inform all Peers about this new data so that Peers can either invalidate or update the caches accordingly#2017-11-2010:40nickikIs this typed Java query DSL released by now?#2017-11-2010:41nickikStu talks about it here: https://www.youtube.com/watch?v=GgMlXr9p9_A#2017-11-2020:02djjolicoeuris there an operator for pull for excluding an attribute? let's say I have one attribute I don't want the client to see....are my options here programmatic removal and explicitly writing the pull for every attribute I do want to include, or is there a way to "blacklist" an attribute?#2017-11-2109:15stijn@U06DX8UJY you can use a filtered database for this http://docs.datomic.com/filters.html#2017-11-2112:25djjolicoeur@U0539NJF7 I wonder what the performance tradeoff is, there. I hadn't thought of blacklisting attributes at the db level. you could add a :db/pii? attribute on the attribute schemas then filter out attributes where that is true.#2017-11-2022:28jfntnWe have a transactor deployed to an AWS vpc and configured through IAM roles, everything works fine on that front, but we now want to add a peer that’s not on AWS and I’m not sure it’s even possible?#2017-11-2022:44favilaIt's an IAM role and AWS network config issue. Datomic doesn't care about the network boundaries. The peer just needs to be able to ip-route to storage and the transactor.#2017-11-2022:44favilaE.g. 
I have done it with ssh tunnels#2017-11-2023:02jfntn@U09R86PA4 so we’d have to tunnel our external peer onto one of the aws peers?#2017-11-2023:03jfntnIs there any config required?#2017-11-2023:03favilaYou just have to get in the same network. You don't need to hit any machine in particular.#2017-11-2023:04favilaYou could set up a VPN. You could whitelist IPs in IAM. You could ssh into a bastion machine, whatever.#2017-11-2023:05favilaThe requirements are: 1) The datomic connection string you give to peer must be able to contact the storage; 2) The host= or alt-host= hostname on the transactor must be resolvable by the peer to the transactor#2017-11-2023:06favilaThere are so many different ways to do that#2017-11-2118:02timgilbertSay, anybody got a "recursively touch entity e to depth n" function lying around? I saw this but I need to handle the {:db/valueType :db.type/ref :db/cardinality :db.cardinality/many} case. https://stackoverflow.com/questions/27072840/how-to-write-touch-all-touch-all-reachable-entities-from-an-entity-in-datomic#2017-11-2118:35timgilbertWell, this is what I came up with, not the prettiest or most efficient, but it works well enough for debugging purposes.
(defn touch-n
  "Recursively touch an entity to n levels deep (default 2). Return a nested map."
  ([e]
   (touch-n 2 e))
  ([n e]
   (if (or (zero? n) (not (entity? e)))
     ;; We've reached our recursion limit, or we're at a leaf node
     e
     ;; Else we're at an entity, touch it and recur
     (reduce (fn [m [k v]]
               (assoc m k (if (set? v)
                            ;; Cardinality many
                            (into #{} (map (partial touch-n (dec n)) v))
                            ;; Cardinality one
                            (touch-n (dec n) v))))
             {}
             (d/touch e)))))
#2017-11-2118:41timgilbert(where I've already got (defn entity? [thing] (instance? datomic.Entity thing)))#2017-11-2201:25csmI’m getting an error Caused by: com.mysql.cj.jdbc.exceptions.PacketTooBigException: Packet for query is too large (16,642,439 > 4,194,304). You can change this value on the server by setting the 'max_allowed_packet' variable. trying to restore to an Aurora (mysql) database in AWS#2017-11-2203:21ghaskinshi all, im struggling to understand the notion of distinct values and aggregates#2017-11-2203:21ghaskinsi have a :db.type/float attribute#2017-11-2203:22ghaskinsif I have, say, three entities with [1.2, 2.0, 4.0], running (sum) on that produces the expected 7.2#2017-11-2203:22ghaskinsbut if I have [1.2, 1.2, 1.2], (sum) produces “1.2” and count produces “1"#2017-11-2203:23ghaskinsI dont get what I am doing wrong#2017-11-2203:23ghaskinsany help appreciated#2017-11-2205:00karlmikko@ghaskins Datomic generally group bindings on value - treating them as a set of values :with can help with controlling the grouping http://docs.datomic.com/query.html#with#2017-11-2206:02greywolve@ghaskins always remember that Datomic is set based. This has bitten me tonnes of times. You sort of have to drum that into your head. So you if you get identical tuples, and don't use :with then they will appear as one tuple.#2017-11-2215:19ghaskinsYeah, I had read about the :with clause and it wasnt helping me…then I realized that my result was the same with or without the :with clause because I wasnt targeting it properly#2017-11-2215:20ghaskinsOnce I figured that out, its all working now…but thank you!#2017-11-2214:13biscuitpantswith datomic rules, how would i pass the db as an arg to rules, and then to a function in a rule? like so:#2017-11-2214:13biscuitpants[(chat-about-subject $ ?subject ?chat)
 [?subject :email ?email]
 [(= (some-fn $ ?chat) ?email)]]
#2017-11-2214:15biscuitpantsi keep getting “unable to find symbol $” in this context#2017-11-2214:29biscuitpantsor even just pass the db to a function#2017-11-2215:14jfntnIs there a library that can generate specs from a schema tx?#2017-11-2219:23souenzzo/subscribe this topic#2017-11-2222:58jfntngenerating specs for scalar attributes is pretty trivial, but refs get complicated pretty quick#2017-11-2219:40kbaribeauHey all, is there a way to free up resources associated with a datomic connection so that my process doesn't hang on exit?
I have a lein task that looks just like this:
(ns slow-lein-because-datomic.core
  (:require [datomic.api :as d])
  (:gen-class))

(defn -main []
  (println "about to connect")
  (d/create-database "datomic:)
  (d/connect "datomic:)
  (println "connected"))
It prints "connected" and then hangs for a very long time before exiting. I thought maybe datomic.api/release would help, but it appears not to, and I can't find many docs around it, or examples.#2017-11-2220:01favila@kbaribeau The hang is caused by the clojure agent pool, which things in datomic use. (This is a general clojure problem, not datomic specific). See docs for datomic.api/shutdown, which may help you.#2017-11-2220:01favilaalso clojure's shutdown-agents function#2017-11-2220:02kbaribeauAha, thanks!#2017-11-2223:02jfntnsay we have a unary ref, its spec would have to be an s/keys but it doesn't feel right to statically define which keys are required and which are optional because we often have different "schemas" on read or write#2017-11-2311:33boldaslove156Do you guys think it's okay to expose eid to client as part of a url?#2017-11-2311:34val_waeselynckNo, consider them internal#2017-11-2311:48kirill.salykinso, it is better to have some uuid or int id which can be exposed?#2017-11-2315:16pesterhazyyes. lookup refs can be used everywhere an eid can, e.g. [:person/id #uuid "asdf-asdf-asdf-asdf"]#2017-11-2400:32zigndmay I ask why it's not okay? i'm building an http service and returning them to the service's client, just curious idk, i might be doing something wrong xD#2017-11-2400:45favilaThey are not guaranteed to be unchanging. So there should be no long-lived weakly-referenced entity ids in your system (eg in a url, or in a json blob stored somewhere). In your example, your urls might break one day.#2017-11-2400:46favilaUsing eids in short-lived contexts is ok#2017-11-2401:41zigndthanks for the explanation! i'll look into these lookup refs#2017-11-2408:08pesterhazyeids haven't changed in practice so this may not be important practically speaking#2017-11-2408:09pesterhazybut remember that if you re-create a database in some way other than restoring (i.e.
re-inserting datoms), you won't have control over eids#2017-11-2411:31zigndyeah, that's what i noticed reading the docs on Identity and Uniqueness, thanks for confirming#2017-11-2411:34zigndit works just like an identity column in a relational database configured auto increment#2017-11-2411:44augustl@U06F82LES what will happen if you use [:person/id #uuid "....."] and the string is not a valid UUID?#2017-11-2411:45augustlmy argument for using strings for UUID types has been that I don't want to manually convert to an UUID object and have to catch the exception that java throws if it's invalid and return a 404 instead of a 500 in my web servers#2017-11-2411:45augustlbut maybe the #uuid tag behaves differently#2017-11-2411:58pesterhazy@U0MKRS1FX you'll get a reader exception#2017-11-2411:59pesterhazybut yeah you'd have to catch the exception manually I think if you care#2017-11-2412:03augustlah, I'll stick to strings, then. I'm lazy 🙂#2017-11-2412:03augustlthanks for the info!#2017-11-2415:00damianHi! Me and my team have a rather urgent issue. Could you guys please help us?#2017-11-2415:01damianWe ran into storage issues. We found that the directory datomic/log takes a lot of space. Is it safe to delete old log files from there?#2017-11-2415:02damianMost importantly, we did a complete db reset in October.
So logs from September and earlier are junk we can delete, right?#2017-11-2415:33bkamphaus@damian you should be safe to delete any .log files Datomic puts in that directory and you can also change the logging behavior: http://docs.datomic.com/configuring-logging.html#transactor-logging#2017-11-2415:34damianmany thanks simple_smile#2017-11-2422:37johnjNot knowing much about Datomic yet, for a prototype, will it be too much trouble moving your data from maps/vectors/edn to datomic?
although you are making me think of the possibility of just starting with datomic directly and avoid all those gotchas 😉#2017-11-2422:49favilaExcept for order, a datomic-first design will almost always be an easy and natural edn design#2017-11-2422:50favilaespecially with spec, which encourages your keywords to be namespaced and have an unchanging type+cardinality on their value#2017-11-2422:50favilabut order is a complete PITA in datomic#2017-11-2423:01johnjnoted, was aiming at saving edn files to disk from clojure data literals for prototyping but going to rethink it a little more#2017-11-2518:24luchiniI’ve been using conformity and loving it. But I would like something that fits my workflow a bit better and thought about something like this https://github.com/luchiniatwork/migrana
This is just a potential readme of what I would be willing to implement. Before doing so, I would like to know what you folks think about it and how you would do it differently?#2017-11-2620:12lmergen@luchini this looks very interesting! Am I correct in assuming that it's more explicit in specifying the migrations you are going to perform? And allow for more complex migrations?#2017-11-2620:12lmergenIf so, count me in! This is exactly what I've been missing from conformity#2017-11-2620:19lmergenOne thing that I would like is the possibility to run it outside of lein. So that I can run it as a standalone docker container. I suppose that will not be too much of a problem tho?#2017-11-2623:19yannvahalewynHey channel, is there a way to query for the max value of a datom, and use that result further in the query? Like "HAVING" sql syntax and not :find (max ?x).
I would like to filter rel-many elements to the last in a where clause, like
[:find ?x
 :where
 [?parent :parent/children ?child]
 [?child :child/position ?pos]
 [(max ?pos) ?max-pos]
 [... ;; do something else with max-pos
  ]]
#2017-11-2623:20yannvahalewynA way to chain queries maybe?#2017-11-2700:33bkamphaus@yannvahalewyn typical case is to chain queries. If you want to stay within one query, you can use subquery, see similar examples here: https://groups.google.com/d/msg/datomic/5849yVrza2M/31--4xcdxOMJ#2017-11-2702:09yannvahalewyn@bkamphaus great, that’s what I was looking for. Thanks 🙂#2017-11-2723:18mishahttps://github.com/mtnygard/datoms-say-what#2017-11-2819:15luchini@lmergen the core idea is to give the option to users for being as specific as they want (i.e. writing migrations one at a time - complex or not) or allowing users to manage one schema and then letting Migrana deal with the migrations#2017-11-2819:16lmergenawesome#2017-11-2819:16lmergeni'm interested#2017-11-2819:17luchinicool… I’ll put a few lines together during the next weeks#2017-11-2820:53timgilbertSay, is there a version of :db.fn/retractEntity that doesn't recur into :db/isComponent true attributes?#2017-11-2820:55favilaNo, but not deleting them doesn't make much sense if they are components. The semantics of being a "component" mean you don't outlive your parent.#2017-11-2820:56favilaMaybe these are not really component entities?#2017-11-2820:56timgilbertYeah, they aren't really, but they are marked as such in the database and have caused some pretty spectacular cascading deletes#2017-11-2820:57favilaThe assumption for components is that (d/datoms db :vaet component-entity-id) will always show exactly 0 or 1 datom, and the attribute for that datom is :db/isComponent true#2017-11-2820:57favilaYou should just remove :db/isComponent from that attribute I think?#2017-11-2820:58favilaThat's a schema alteration you can do at any time.#2017-11-2820:59favilahttp://docs.datomic.com/schema.html#altering-component-attribute#2017-11-2821:29timgilbertTangentially-related to the last question: let's say I have called retractEntity on entity 123 and I want to undo the transaction. 
Is it safe to add the data back as [:db/add 123 :attr value] seqs, or do I need to generate a new tempid and add the data into the newly-created entity instead?#2017-11-2822:00favilaIt's safe. There's some internal counter in datomic that is the max "t" component of all entity ids. If you attempt to assert an id with a "t" component larger than the counter, your transaction will fail.#2017-11-2822:00favilaBut since the entity used to have assertions on it, the counter is higher than that, so it will succeed.#2017-11-2822:00favilahttps://stackoverflow.com/questions/25389807/how-do-i-undo-a-transaction-in-datomic#2017-11-2822:47timgilbertThanks @favila!#2017-11-2823:40souenzzoWeird/cool idea about @timgilbert's problem
(d/function
  '{:lang :clojure :requires [[datomic.api :as d]] :params [db eid]
    :code (as-> '[:find ?add ?attr ?comp ?v
                  :in $ ?e
                  :where
                  [(ground :db/add) ?add]
                  [(ground :db/isComponent) ?comp]
                  [(ground false) ?v]
                  [?e ?i]
                  [?i :db/valueType :db.type/ref]
                  [?i :db/ident ?attr]] ↓
            (d/q ↓ db eid)
            (d/with db ↓)
            (:db-after ↓)
            (d/invoke db :db.fn/retractEntity ↓ eid))})
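The "undo a transaction" pattern from favila's Stack Overflow link can be sketched with the Log API. This is a hypothetical helper (the name `rollback` is not part of Datomic's API), assuming a peer connection `conn` and a transaction entity id or t `tx`:

```clojure
(require '[datomic.api :as d])

;; Build and transact a compensating transaction for tx: every datom it
;; asserted is retracted, and every datom it retracted is re-asserted.
(defn rollback
  [conn tx]
  (let [tx-log  (-> conn d/log (d/tx-range tx nil) first) ; the tx's log entry
        txid    (-> tx-log :t d/t->tx)                     ; its tx entity id
        newdata (->> (:data tx-log)
                     (remove #(= (:e %) txid))             ; skip tx-metadata datoms
                     (map #(vector (if (:added %) :db/retract :db/add)
                                   (:e %) (:a %) (:v %))))]
    @(d/transact conn newdata)))
```

Note this is not a true "undo": it adds a new transaction whose net effect reverses the old one, so both remain visible in history.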
#2017-11-2908:27EmpperiI think we just found a bug in Datomic or if it isn’t a bug then please tell me why we get an exception which leaks datomic internals? 😛
(d/q '[:find ?x :where [?x ?x]] (d/db db-conn))
IllegalArgumentExceptionInfo :db.error/insufficient-binding [?x-1752 ?x] not bound in expression clause: [(= ?x ?x-1752)] datomic.error/arg (error.clj:57)

(d/q '[:find ?x :where [?x ?x ?x]] (d/db db-conn))
IllegalArgumentExceptionInfo :db.error/insufficient-binding [?x-1773 ?x] not bound in expression clause: [(= ?x ?x-1773)] datomic.error/arg (error.clj:57)
#2017-11-2908:28Empperiand it complains about a binding which isn’t bound in our expression but that binding is not in the original query, it’s auto generated by Datomic under the hood…#2017-11-2918:37jaretDatomic 0.9.5651 is now available https://forum.datomic.com/t/datomic-0-9-5651-now-available/220#2017-11-2920:02souenzzo## Changed in 0.9.5651
* Enhancement: Peer server allows arbitrary code in queries.
* Enhancement: Better error messages from peer server.
#2017-11-2919:41kennethkalmerIs it possible to change the parent of a component entity? ie, move a component entity from one entity to another?#2017-11-2919:45bkamphaussounds like something that’s not a component maybe 🙂#2017-11-2919:46kennethkalmerit definitely is, in this case though we accidentally created a duplicate of the parent (unique value got changed in source data) and now have two streams of components that I need to merge…#2017-11-2919:47favilaYou can definitely move it. Retract from its existing parent and assert on the new one in the same tx#2017-11-2919:47kennethkalmerThen it dawned on me that I could possibly transact [{:db/id component-id :parent/_id new-parent-id …}] and voila… Except not#2017-11-2919:47favila[[:db/retract old-parent attr component][:db/add new-parent attr component]]#2017-11-2919:48favilaYou cannot retract anything with the transaction map syntax#2017-11-2919:48favilathe map syntax is sugar for :db/add only#2017-11-2919:48kennethkalmeryep, got that… in your example, what would the value of component be?#2017-11-2919:49kennethkalmerthe whole component entity as a map, or just the entity id#2017-11-2919:49favilathe component entity#2017-11-2919:49favilaonly the id#2017-11-2919:49kennethkalmerhmm, thanks, let me have a look quick
Map syntax is sugar for a bunch of db/adds, and transaction functions must return something that eventually bottoms out with db/add or db/retract.#2017-11-2919:53faviladb/add and retract don't take maps as arguments#2017-11-2919:57kennethkalmerthanks, confirmed your suggestion with retract/add in the same transaction has the desired effect!#2017-11-2919:58kennethkalmerit just never occurred to me to tackle it that way around#2017-11-2923:37yannvahalewynYou could make a transactor fn to perform an atomic swap if you often need to query for the old parent before transacting#2017-11-2919:42kennethkalmerI did a little repl test using d/with, and it didn’t have the behaviour I wanted#2017-11-2920:04csmI know this was just announced, but any opinions on using global DynamoDB tables with Datomic? https://aws.amazon.com/blogs/aws/new-for-amazon-dynamodb-global-tables-and-on-demand-backup/#2017-11-2920:36nickikDoes Datomic now have some sort of Java Typed API? Or is that still in the future?#2017-11-3000:53olivergeorgeHello. I have a quick "getting started" question.#2017-11-3000:53olivergeorgeI have a datomic-free transactor running locally but I can't work out how to access it with the datomic.client library#2017-11-3000:54olivergeorgeShould that be possible? Perhaps I should be using the datomic.api peer library.#2017-11-3000:54olivergeorgeI think the problem is guessing what parameters I should be passing to datomic.client/connect#2017-11-3000:56olivergeorgeI haven't found documentation which makes it clear. Based on the db-uri I tried (<!! (client/connect {:db-name "hello" :endpoint "localhost:4334"})) which reports "Incomplete or invalid connection string".#2017-11-3001:01olivergeorgeOkay, still not working but new error (can't put nil on channel) when I add in more params.
{:db-name "hello"
:endpoint "localhost:4334"
:region "none"
:service "peer-server"
:account-id datomic.client/PRO_ACCOUNT
:access-key ""
:secret ""}#2017-11-3001:26olivergeorgeAPI Docs say "alpha, subject to change" so perhaps I should not be using the client api. I'll use the peer api instead.#2017-11-3005:01olivergeorgeI think I can answer my own question now. On the "Get Datomic" page it says that Free "does not support Datomic Clients".#2017-11-3005:02olivergeorgeFWIW my main interest in Free was a hassle-free dev setup. It can be annoying to have registration things to consider while developing (or experimenting with new tech).#2017-11-3007:51uwoI have a columnar query that I’m using (d/datoms db :aevt …) to make. A large number of the results have to be removed (remove (fn [datom] (:legacy (d/entity db (:e datom))))). Is there a way to filter the db to exclude these legacy entities that would be faster than the approach I’m taking? The legacy entities are the majority of the results. I’ve also used this:
(defn not-legacy?
  [db ^Datom {:keys [e]}]
  (not (:legacy (d/entity db e))))

(defn without-legacy
  [db]
  (d/filter db not-legacy?))
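One way to speed this case up, sketched under the assumption that `:legacy` is a boolean attribute as in the snippet above (the helper names here are made up): gather the legacy entity ids into a set once, then filter the raw datoms against that set instead of calling `d/entity` once per datom.

```clojure
(require '[datomic.api :as d])

;; Collect the ids of entities asserted as :legacy true, once per db value.
(defn legacy-eids [db]
  (into #{} (comp (filter :v) (map :e))
        (d/datoms db :aevt :legacy)))

;; Scan attr's datoms, dropping those whose entity is in the legacy set.
(defn non-legacy-datoms [db attr]
  (let [legacy (legacy-eids db)]
    (remove #(contains? legacy (:e %))
            (d/datoms db :aevt attr))))
```

The set lookup is O(1) per datom and, unlike `d/filter`, the up-front cost is paid once rather than on every index access.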
#2017-11-3007:53uwoI realize that that filter has to run for each datom, so I’m not surprised that it’s slightly slower than just calling remove/filter on the result seq#2017-11-3008:11itaiedI would like to develop an open source web app using clojure on top of datomic.
I have started by using postgres and the development and deployment options are amazing (free open source support on heroku, openshift etc...).
Are there any paas services that provide free datomic connections?#2017-11-3008:59Empperidatomic cloud is coming I think#2017-12-0119:09jaretYep, we’re still working on it and targeting Q4 this year.#2017-11-3011:08laujensenWhats the correct way to update a datomic installation?#2017-11-3019:02souenzzo{:db/ident :catalog/products :db/valueType :db.type/ref}
{:db/ident :product/name :db/valueType :db.type/string}
How do I find (in datalog, without clojure) all entities that have :product/name but aren't "contained" by a :catalog/products ref#2017-12-0119:09marshallYou can likely do this with a not clause combined with missing?#2017-12-0119:10marshalloh, actually probably just missing#2017-12-0119:10marshallsince you do want things that are missing 🙂#2017-12-0119:10marshallhttp://docs.datomic.com/query.html#missing#2017-12-0119:59souenzzoIt's not missing, since a reverse reference is just sugar for pull/entity#2017-12-0102:06olivergeorgeHello. Can someone clarify how licensing works for developers. I think each registration is allowed one license for Datomic Starter with 1 year of upgrades. Does that mean each dev on my team will be unable to access a current version of Datomic Starter for development purposes after 1 year?#2017-12-0102:07olivergeorgeFor example, I poked around with Datomic briefly in 2014 and I can't see any way to "renew" my starter license.#2017-12-0113:57sbDo you have a great tutorial on how to import logs with timestamps into Datomic? I saw I need to use :db/txInstant .. as when the transaction was recorded.. just would be great to have an example. thx#2017-12-0114:00sb{:db/id "datomic.tx"
:db/txInstant #inst "2014-02-28"}]#2017-12-0114:01sbWhat is the best case?#2017-12-0115:49donaldballI believe the advice is not to use :db/txInstant to record the instant at which the observations within a transaction occurred, unless perhaps you have an irrefutable guarantee that you will never ever need to import your logs in any order.#2017-12-0116:34sbthank you for your advice, how can I import large logs with ts in Datomic (sorry for the beginner question)?#2017-12-0116:45souenzzoit's about backup/restore?#2017-12-0117:54sbI have a database (I can transform to json or any format, like a big log) and I would like to import to Datomic.#2017-12-0117:54sbI would like to add new records to this db in Datomic#2017-12-0117:56sb(timestamps very critical) / I don`t know.. it might be good if I use manual timestamps or maybe here is more elegant solution#2017-12-0118:02souenzzono idea. 😕 But I'm also interested.#2017-12-0119:05marshallsetting explicit txInstant on import is intended for this type of use#2017-12-0119:05marshallone second, i’ll get you the docs and an example#2017-12-0119:05sbOk, thanks!!#2017-12-0119:07marshallhttp://docs.datomic.com/transactions.html#explicit-db-txinstant
http://docs.datomic.com/best-practices.html#set-txinstant-on-imports
an example doing it here:
https://github.com/Datomic/day-of-datomic/blob/master/resources/streets.edn and https://github.com/Datomic/day-of-datomic/blob/master/tutorial/log.clj#2017-12-0119:07marshallNote that you have to assert them in order, so you need to sort your source data in ascending time order#2017-12-0119:08marshallDatomic will not allow you to specify a txInstant that is older than the newest that already exists in the db#2017-12-0119:08sbOk! that is important! thank you very much!!#2017-12-0119:09sbthanks @U05120CBV @U2J4FRT2T :+1::+1::+1:#2017-12-0119:09marshallnp#2017-12-0119:07jarethttp://docs.datomic.com/deployment.html#upgrading-live-system
@laujensen This section of docs covers updating a Datomic system.
>To upgrade the transactor, start a new transactor (or pair of transactors) based on the release of Datomic you are upgrading to. Once these processes are up and monitoring the storage heartbeat, kill the old transactors and the new ones will automatically take over.
>To upgrade peer applications, simply start new peers and take down old ones. You can stagger the ups and downs to maintain availability during the upgrade.#2017-12-0123:00kenbierDoes anyone have experience with using Datomic from a non clojure project? Say Java, Ruby, or NodeJS? Curious how your experience was.
I used datomic in a clojure project on my last team and everyone loved it. I recommended it to my team, and they are really excited about the auditing capabilities Datomic provides. It's likely we won't be using clojure though.#2017-12-0209:37steveb8nI’m interested in this too. Specifically node.js on AWS lambda via the client api#2017-12-0213:42sbIf I remember right, Datomic has a Nodejs library, Ruby and Python. I don't have experience with them.#2017-12-0221:49steveb8nThe nodejs lib uses the (old) rest api, to use the new Datomic Cloud service, we need client api support. Nodejs has the best cold-start time for AWS Lambda so it is preferable.#2017-12-0314:10sbIn the past I created a multi-route java lambda where one route ran like a logger (run automatically), with 5-6 other routes.. that solved the cold start problems. In this way, the system was faster.#2017-12-0404:06steveb8nthat only helps the cold start from zero problem i.e. when scaling up under load, there are more cold starts that users will see#2017-12-0404:51steveb8nI’m pretty sure this use case is why the client API was created. Looking forward to an official client that works for a fast-starting lambda#2017-12-0220:43sbI solved it. I forgot to add the new db before connecting. 🙄#2017-12-0511:59dm3is there anything I should do in order to force Datomic to remove the data from storage after delete-database?#2017-12-0512:02dm3I’m on Postgres… I guess given all the Datomic databases are in the same table - I have to vacuum full the table in order to reclaim the space - is that right?#2017-12-0514:45marshallDatomic 0.9.5656 is now available https://forum.datomic.com/t/datomic-0-9-5656-now-available/229#2017-12-0514:49marshallBlog post with more info: http://blog.datomic.com/2017/12/datomic-pull-as.html#2017-12-0515:22favilaThe new pull grammar doesn’t seem to match what is demonstrated in the blog post.#2017-12-0515:22favilaIs attr-with-options allowed on a map key? 
Grammar says no, blog post says yes.#2017-12-0515:22favilaAnd what about default-expr on map keys? That doesn’t seem to be allowed either. Is it now allowed if we use attrs-with-options?#2017-12-0515:23favilaAnd finally, when is :ident-only going to be an attr option? 😏#2017-12-0515:35marshall@favila It should be allowed on map key - i may need to tweak the grammar#2017-12-0515:36marshall@favila :ident-only as an attr option is a great feature request - can you add it to the portal?#2017-12-0515:37marshall@favila https://github.com/Datomic/day-of-datomic/blob/master/tutorial/pull.clj#L91 - yes, definitely works on map form#2017-12-0515:42marshallalso : http://docs.datomic.com/pull.html#sec-9-2#2017-12-0515:42marshalland yes, it looks like i need to tweak the map spec grammar - thanks for catching that#2017-12-0523:58olivergeorgeI'm wondering what patterns people use to enforce data integrity constraints. I see that database functions can be used for integrity constraints. It looks like database functions must be intentionally triggered in the transaction meaning it's the obligation of the programmer to remember to fire the right constraint checks. What are the common implementation patterns? I can imagine (1) by convention trigger a common database function which will check all constraints. (2) use different database functions for different types of update so that relevant constraints are checked. (3) keep a list of constraints and associated attributes to decide what constraints need to be checked for a transaction.#2017-12-0523:59olivergeorgeThe simple use case I was exploring is: unique constraint across multiple attributes. (eg. https://github.com/Datomic/day-of-datomic/blob/master/tutorial/transaction_function_exceptions.clj#L15)#2017-12-0600:28olivergeorge(Second use case is required fields. Feels like something a (s/keys :req [...]) might cover nicely. 
Again, I'd love confirmation of common/recommended approaches).#2017-12-0603:22steveb8n@olivergeorge this post is an excellent pattern which I have used : http://cjohansen.no/referentially-transparent-crud/#2017-12-0603:23steveb8nit allows you to run all mutations through a single fn and this can call the db via a standard set of transaction fns so that all constraints are always checked#2017-12-0603:23steveb8nI think that Datomic Cloud will also have enhanced txn fn support, Stu hinted at that in his talk#2017-12-0605:15olivergeorgeThanks I’ll check it out #2017-12-0612:15boldaslove156If I have an eid of :db/txInstant , how do I get the transacted data?#2017-12-0612:24souenzzo(d/q '[:find ?e ?a ?v ?tx ?op
       :in $ ?tx
       :where
       [?i :db/ident ?a]
       [?e ?i ?v ?tx ?op]]
     (d/history (d/db conn)) tx)
You may like to translate ?v to ident in case of enums.#2017-12-0612:25souenzzoyou can also use [(if ?op :db/add :db/retract) ?x] to create a "tx-data-like" result.#2017-12-0612:40boldaslove156Thanks! I was missing the first part of the clause and kept getting "full scan" error#2017-12-0614:32bkamphaustx-data with the Log API in query will use the most efficient index for this http://docs.datomic.com/log.html#log-in-query — the query above is basically still a scan - it just limits the A position of the datoms to attributes. You can use any t oriented APIs also with datomic.api/tx->t#2017-12-0619:33kenbierIf i have an entities A and B, and A->B. If i edit b, i want to be able to query all transactions that include A to include anytime B changed. Is there some good conventions on how to do that?#2017-12-0619:34kenbierI was thinking about just upserting the unique id of A in any transaction that edits B. is there a better approach than this?#2017-12-0619:35kenbieralthough i believe datomic doesn’t record a transaction if the value doesn’t change, so perhaps that won’t work.#2017-12-0619:35kenbiersomething along the lines of ’“touching” A whenever I edit B perhaps?#2017-12-0619:47marshallyou may be able to use transaction metadata - either put an attribute on the transaction entity that indicates what ‘parent’ entity(s) it affects, or go the other way and have an attribute on your entity of interest that references the transaction entity ID itself and is updated whenever that entity chain is ‘touched’#2017-12-0619:47marshalldepends a bit on what your query pattern / use case is#2017-12-0622:52kenbier@U05120CBV Perhaps I should describe the data model first.
Imagine a simple hash (string->string). I could change each value in the hash, add new key-value pairs to the hash, or delete keys from the hash. I want to look at the history of this hash over time, as well as the history of an individual key in the hash.
In datomic, I have modeled the hash as A, using a many ref to a bunch of pairs (entity B). B naturally has attributes key and value, each of type string. It's easy to see the history of each key-value pair, but getting the history of the entire hash is more expensive the way I am doing it.
Hash historical view:
{"foo" "bar" "c++" "awesome"} => {"foo" "bar" "c++" "sucks"} => {"foo" "bar" "c++" "meh" "clojure" "awesome"}
(notice the double edit in the last change, that's allowed)
Key historical view:
"c++" => ["awesome" "sucks" "meh"]#2017-12-0622:53kenbierI wonder if there is a better way to do this than I am doing currently.#2017-12-0622:53kenbierAt the moment I am using the query API to get all txn ids in which a B (pair) changed that was owned by A. Then I can do a pull on A, at the database value during that time. But that means a database query for every transaction id, and there could be many (perhaps I could limit to 10 at a time).
Now the schema is not set in stone, perhaps my model is poor given my use case. However I would like to preserve the time view of both an individual key-value pair as well as enabling easy time view of the entire hash.#2017-12-0701:32kenbier@U05120CBV i can move this to a google groups topic if you prefer.#2017-12-0701:32kenbierit's getting a bit lengthy 🙂#2017-12-0619:48marshallif you re-assert the same value (i.e. your unique id) you won’t get that datom - you’ll still have a transaction with an ID and a txInstant, but Datomic removes redundant datoms#2017-12-0623:33Zach Currydatomic.client appears to expose ring.adapter.jetty to an incompatible version of jetty. Am I missing something or am I the only one using datomic.client? I’m definitely not the only one using ring and jetty 🤔#2017-12-0700:18dsnuts@zach892 is right about datomic.client spoiling ring.adapter.jetty...#2017-12-0700:20dsnuts...returns
Exception in thread "main" java.lang.NoClassDefFoundError: org/eclipse/jetty/http/HttpParser$ProxyHandler, compiling:(ring/adapter/jetty.clj:27:9)#2017-12-0700:29kenbier@dsnuts can’t you just require the correct version of jetty yourself? or does datomic complain if you do that?#2017-12-0700:29kenbierin the deps of your project.clj file#2017-12-0700:35kenbierand @zach892 i’d wager most people are not using the datomic client if writing an application in clojure (could be wrong though). most people are using the peer library#2017-12-0700:35kenbierhttp://docs.datomic.com/integrating-peer-lib.html#2017-12-0700:36kenbier> Note that the client library is currently in alpha and subject to change.#2017-12-0700:50dsnutsYou can require [org.eclipse.jetty/jetty-server "9.2.17.v20160517"] as a fix but, then you've got to keep this explicit dependency in sync with the corresponding version of ring 😞#2017-12-0700:56dsnutsI'm less than confident prospective datomic users would go through the trouble of tracking this problem down to ring.adapter.jetty and find this obscure version of jetty-server as a fix before they decide to give up. In fact, I found a stackoverflow issue related to this very dependency issue and the OP said he just gave up on trying Datomic. I like Cognitect and I like Datomic. I fear for Datomic if this is the sort of thing people go through just to try it out. If I can help make it easier for people to adopt Datomic, I will. What can I do to help?#2017-12-0701:10kenbierDefinately. I am just a happy user of datomic so I can’t speak for the team. Dep resolution is a common problem across many projects imo, I am not sure its an indictment of datomic. That being said, its likely that datomic will be used in projects that use ring so this issue is def good to raise with the team#2017-12-0701:10dsnutsI'm totally not blaming the team but, the fact of the matter remains#2017-12-0701:10kenbieragreed. btw does bumping to the latest version of ring not work either btw? 
doubtful, just noticed you are on an old minor version#2017-12-0701:11kenbierAs for reporting, this is definitely a good place, as is the google group. IDK if datomic has a jira board, but clojure does so maybe they do too somewhere. email would def work too. Awesome that you are reporting bugs that a lot of ppl are going to run into#2017-12-0701:12dsnuts@kenbier Thanks, bro. Same problem with ring 1.6.3#2017-12-0701:14kenbierfigured.#2017-12-0701:17dsnutsI emailed support at Cognitect about this. Support is being handled by Think Relevance so, we'll see if the team catches wind of this issue through the grapevine. fingers crossed#2017-12-0701:18kenbierhaha awesome#2017-12-0701:19kenbierimo i hate letting a library decide what version of a server lib you are going to consume.#2017-12-0701:26kenbier@dsnuts one last thing, you could try excluding ring jetty adapter from the datomic-client lib.#2017-12-0701:26dsnutsAnd to be clear ring doesn't require any particular server but, if you want to use ring out of the box, you're probably using jetty#2017-12-0701:27kenbierright#2017-12-0701:27kenbieri wasn’t going to suggest changing your server because that's a big decision for some, but if it's not for you then why not? jetty adapter is a bit old anyway#2017-12-0701:27kenbierhowever i think datomic may still pull in jetty#2017-12-0701:28kenbierso you'll have a bigger jar#2017-12-0701:28dsnutscan't use http-kit because of TLS support#2017-12-0701:28kenbierahhh#2017-12-0701:28dsnutsAnd :exclusions don't help the issue. Was the first thing I tried#2017-12-0701:29kenbier[com.datomic/clj-client "0.8.606" :exclusions [org.eclipse.jetty/jetty-client org.eclipse.jetty/jetty-http org.eclipse.jetty/jetty-util]] didn’t work?#2017-12-0701:29dsnutsNo, sir#2017-12-0701:29kenbierweird, someone on slack claimed they got that to work earlier#2017-12-0701:30kenbierim all out of ideas then. curious what the resolution on this is.
thanks for bringing it up#2017-12-0701:41dsnutsFound the proper :exclusions to fix the datomic.client/ring.adapter.jetty issue! [com.datomic/clj-client "0.8.606" :exclusions [org.eclipse.jetty/jetty-http org.eclipse.jetty/jetty-server]] wins#2017-12-0711:00val_waeselynckHelp wanted here 🙂 https://stackoverflow.com/questions/47693495/datomic-on-a-peer-does-connection-db-read-your-writes-after-connection-trans#2017-12-0712:39daemianmackthe new datomic developers forum (https://forum.datomic.com/) has replaced the datomic google group (cc kenbier dsnuts)#2017-12-0714:09marshall@kenbier if you’d post it on the new forum at http://forum.datomic.com that’d be a good option - can get additional feedback from others there as well#2017-12-0714:31jaret@dsnuts and @kenbier thanks for chasing down the deps issue. We’ll fix this in an upcoming release of client.#2017-12-0717:50dsnuts@daemianmack @jaret Noted. Thanks guys!#2017-12-0719:16lenIs there a way in a pull to get the :db/txInstant out ?#2017-12-0719:47souenzzotxInstant of which attribute?#2017-12-0804:18lenOf the t of the ?e that I have #2017-12-0809:45souenzzothis ?e has many ?a ?v with different ?tx... doesn’t make much sense to ask for tx on pull
but you can do
(map (partial apply merge) (d/q '[:find (pull ?e [*]) (pull ?tx [*]) :where [?e :user/name _ ?tx]] (d/db conn))) or something like#2017-12-0723:32caleb.macdonaldblackI’m reading http://www.learndatalogtoday.org/chapter/3 to learn datalog and I’m confused by the last query at the bottom under the heading “relations”#2017-12-0723:32caleb.macdonaldblack[:find ?title ?box-office
:in $ ?director [[?title ?box-office]]
:where
[?p :person/name ?director]
[?m :movie/director ?p]
[?m :movie/title ?title]]
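For intuition, the relation binding in this query can be sketched in plain Clojure: the `[[?title ?box-office]]` input is just a collection of `[title box-office]` tuples, and the query keeps the tuples whose title unifies with a title found by the `:where` clauses. Here a hypothetical set of the director's titles stands in for the database lookup:

```clojure
;; Plain-Clojure sketch of the [[?title ?box-office]] relation binding.
;; `directors-titles` is hypothetical data standing in for the :where clauses.
(defn join-relation
  [directors-titles box-office-tuples]
  (for [[title box-office] box-office-tuples      ; destructure each tuple
        :when (contains? directors-titles title)] ; unify ?title with db results
    [title box-office]))

(join-relation #{"Die Hard" "Commando"}
               [["Die Hard" 140700000] ["Alien" 104931801]])
;; => (["Die Hard" 140700000])
```

This also shows why `?box-office` never appears in the `:where` clauses: it is carried along from the input relation and only surfaces in the `:find` result.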
#2017-12-0723:32caleb.macdonaldblackIs [[?title ?box-office]] input?#2017-12-0723:32caleb.macdonaldblackAnd why isn’t ?box-office in the where clause?#2017-12-0723:33caleb.macdonaldblackIt almost looks like they’re destructuring a vector within a vector but it wouldn’t make sense to pass in your expected output#2017-12-0800:19caleb.macdonaldblackSo reading further I learned that [[?title ?box-office]] is input and is a vector of vectors. It is being destructured#2017-12-0800:20caleb.macdonaldblackAnd the ?box-office value has no effect on the query and is simply being outputted with the title#2017-12-0808:35daveliepmannThis indeed looks very weird. I don't see a way to experiment without recreating the entire dataset locally but this example looks half-finished. For instance, all the outputs are also inputs?#2017-12-0812:50manutter51What’s missing is the larger context. It’s been a while since I did that tutorial, but as I recall, this is illustrating how to combine db + external data to answer questions like “Show the title and box-office receipts for each movie directed by a specific director.”#2017-12-0812:51manutter51No one data source has all the information you need: the db is missing the box-office info, and the list of movies is missing the director info.#2017-12-0812:52manutter51This query combines the two in a way that lets you get the specific answer you want.#2017-12-0814:57daveliepmann@U06CM8C3V I see that. But is ?director meant to be input, or output? Without running the query, it's unclear to me that ?director gets used at all except as a filter to verify that titles in the input list have a director at all.#2017-12-0814:58manutter51Director is an input#2017-12-0814:58manutter51You can tell because it’s listed on the :in line#2017-12-0815:00manutter51In datomic you would call it like this: (d/q '[:find ?title ?box-office
:in $ ?director [[?title ?box-office]]
:where
[?p :person/name ?director]
[?m :movie/director ?p]
[?m :movie/title ?title]]
db "Stanley Kubrick" box-office-receipts)#2017-12-0923:00daveliepmannI think I now see the confusion. I was aware that ?director was listed as an input; what was unclear was whether that was intentional. The phrase "to find box office earnings for a particular director" made me think it was intended to return something like ?director and the sum of box office earnings. I see it's just meant to subset the input list (which already has most of the info needed) based on information that the database has (the director).#2017-12-0813:44SoV4Datomic fam, I have a query for you all 😃 I have a rating system, I want to let users rate things but only show the effects of their rating 24 hrs later, meaning someone upvotes something, someone downvotes something, but the effects aren't apparent to the score until 24 hrs later. Can I structure a datomic query to get all the results that are at least 24 hours old?#2017-12-0813:50stijn@sova you can query the database 24 hours in the past by using (datomic.api/as-of db some-date) and pass that database to the query#2017-12-0813:51stijnhttp://docs.datomic.com/getting-started/see-historic-data.html#2017-12-0814:28SoV4@stijn excellent.#2017-12-0814:28SoV4thank you#2017-12-0814:28SoV4some things are so simple 😄#2017-12-0819:52kenbierI wonder if anyone has ever had to join datomic data stored on postgres with postgres data, for a very simple join.#2017-12-0819:53kenbierI.e.
a postgres table has a unique column with an id, you want to join ON a datomic entity with the same id as a lookup ref, and add some datomic attribute values to the resulting rows.#2017-12-0819:53kenbierIn theory you could do it?#2017-12-0820:01kenbierin other words, use SQL to look up an attribute’s current value using only the entity lookup ref.#2017-12-0820:02kenbierIt may require hardcoding the attribute id when creating the schema for better performance though.#2017-12-0820:03kenbierI ask this cause some teams are interested in exploring datomic for its audit and read scalability, but they do merge some of their CRUD app data with Postgres tables to fill in missing columns. And these Postgres tables are often owned by other teams.#2017-12-0820:40favila@kenbier are you wanting to do this in a datomic query? in theory you could, just call a function inside your datomic query to look up postgres#2017-12-0820:40favilause the returned value for further joins#2017-12-0820:44kenbier@favila no, from a SQL query#2017-12-0820:45kenbierthe use case would be query a postgres table, but join in some missing columns from datomic data also stored in postgres#2017-12-0820:45kenbierthe postgres primary key would be a lookup ref in datomic#2017-12-0820:46favilathat's going to be tough#2017-12-0820:46favilait's better just to process the result after#2017-12-0820:46kenbierin memory?#2017-12-0820:47favilaare you using the missing columns purely for select, or for where/group-by/etc#2017-12-0820:47kenbierjust select#2017-12-0820:47favilathen postprocess should be easy, no additional memory burden#2017-12-0820:48favilato do what you want would require issuing a datomic lookup from within the postgresql server itself#2017-12-0820:49favilayou could probably write a stored procedure which called out to datomic, but your data fetch is going to leave the postgres query process no matter what you do#2017-12-0820:50favilawhy burden the postgresql server with that cpu load and additional
complexity just so SELECT works?#2017-12-0820:51kenbierbecause our BI is going to be querying our data warehouse, not just applications.#2017-12-0820:52kenbierthey are going to want to join in missing fields like an entity’s title based on an id, for example. to generate reports and look into insights for customers#2017-12-0820:55kenbiersiloing data from the rest of our large organization really hurts the value prop of adopting a new database engine, even if the data lives on postgres in reality.#2017-12-0820:56kenbierthough i am curious how to do the postgres query from datomic, the aforementioned is a more pressing issue for adoption.#2017-12-0820:59favilayou would be better off having a derived sql-shaped view of your datomic data I think#2017-12-0820:59favilayou can have a process read the datomic tx queue and write the data you want into postgres#2017-12-0820:59favilabasically a streaming materialized view of sorts#2017-12-0821:00favilaat that point it's just normal postgres data, so you can use it in queries#2017-12-0821:00favilaand it's kept up to date automatically#2017-12-0821:00favilathe source of truth would still be datomic#2017-12-0821:01kenbieroh wow that's not a bad idea#2017-12-0821:01kenbierwhat if the postgres server fails over? how would i pick up where i left off in the txn queue?#2017-12-0821:01favilawrite the transaction T into postgres#2017-12-0821:02favilayou can then restart your stream process from there#2017-12-0821:03favila(playing catch-up from datomic's tx-log) before switching to the live-streaming tx-queue#2017-12-0821:04kenbierah nice#2017-12-0821:04kenbierand the postgres query within a datomic query?#2017-12-0821:05faviladatomic queries are run on the querying machine, not the transactor, so they can call anything they want#2017-12-0821:05kenbieras a param i pass in?
like the db param but a second as a bunch of rows?#2017-12-0821:05favila:where [(some-postgres-result x) [[?col1 ?col2 ?col3]]]#2017-12-0821:05kenbiersweet#2017-12-0821:05favilawhatever you want#2017-12-0821:06favilayou just need a clojure function that returns something shaped appropriately for query destructuring#2017-12-0821:06favilathat function can do anything (thread safe) you want#2017-12-0821:06kenbierjdbc does that already i thought? its a vector of vectors?#2017-12-0821:07favilaI don't know offhand, depends on what you use#2017-12-0821:07kenbierok#2017-12-0821:07kenbierdatomic is back on the menu!#2017-12-0821:07favilayou can supply the postgres transaction context as a :in var if you want#2017-12-0821:08faviladatomic queries don't care what's in there unless they start with $ or you try to destructure them#2017-12-0821:08favilaso you can just give that value to your functions which do your postgres selects#2017-12-0923:47boldaslove156Why this lookup-ref that has the same attribute name points to the same :db/id ?#2017-12-0923:47boldaslove156(let [db (db/db @db/db-conn)
eid1 (:db/id (db/entity db [:db/ident :db/retractEntity]))
eid2 (:db/id (db/entity db [:db/ident :db.fn/retractEntity]))]
(= eid1 eid2))#2017-12-1000:02SoV4What does the following mean?
java.lang.String cannot be cast to clojure.lang.IPersistentMap#2017-12-1000:04dominicm@sova it means you're using a string where you should be using a {}#2017-12-1000:04SoV4Oh. Hmm let's see.#2017-12-1000:24SoV4(GET "/k/:tag" [ tag :as ring-req ]
(let [tag-result (f9db/get-blurb-by-tag tag)]
{:status 200
:headers {"Content-Type" "text/html"}
:body tag-result})) quantum jumped back down to something that works.#2017-12-1000:34SoV4I see. It's more of a Ring thing.#2017-12-1000:35SoV4Logic needs to live before you send a ring response map back#2017-12-1000:35SoV4with {:body "" :session "" :headers "" :etc "tec"}#2017-12-1000:45SoV4lessons learned tonight: else must live subordinately inside an if.#2017-12-1000:45SoV4😄#2017-12-1017:47joelsanchezI think isComponent is not useful for collections of components, because to be able to use it I need to model the relation as n-m, while in most cases it should be 1-n
To make my point concrete, I disagree with the example given in this URL: http://blog.datomic.com/2013/06/component-entities.html
{:db/id #db/id[:db.part/db]
:db/ident :order/lineItems
:db/isComponent true
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many
:db.install/_attribute :db.part/db}
This allows one lineItem to be associated to more than one order, which should not be allowed (in my opinion), so I do this:
[{:db/ident :order.line/order
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :order.line/product
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :order.line/quantity
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}]
This way I get the desired 1-n relation but I can't use isComponent anymore 😞#2017-12-1019:39favilaisComponent and cardinality are orthogonal concepts. Use is-component when you need an entity to have value-semantics instead of identity-semantics @joelsanchez #2017-12-1019:53joelsanchez@favila that clears it up a little bit, thanks#2017-12-1019:54favilaAlso in your n-m example, note the reverse ref from line item to its parent is card-one not many#2017-12-1019:55favilaThis is true of all isComponent attributes when followed backwards#2017-12-1019:56joelsanchezyes, but when the :order/lineItems attribute exists, any order can refer to any lineItem, even those that are not of its ownership#2017-12-1019:56joelsanchezbut I guess I shouldn't worry about that#2017-12-1019:57favilaThe implicit invariant with isComponent data (which datomic assumes but does not check) is that there is only ever one reference to its value#2017-12-1019:57joelsanchezmakes sense for Datomic to assume so, since it deletes isComponent values when the referrer is deleted#2017-12-1019:58favilaMore specifically, (d/datoms db :vaet the-component-entity) will only ever produce one datom, whose attr is an isComponent attr#2017-12-1019:59favilaSo the lineitems should be wholly owned by their order, but datomic isn’t going to check for you#2017-12-1019:59favilaYour application should ensure it#2017-12-1020:00favila(Ownership transfer is possible too without breaking the invariant)#2017-12-1020:00joelsanchezthanks for the explanation, it goes in a similar way to refs then 🙂 (you're allowed to refer to any entity but if your app requires a certain constraint you should handle it or write a db/fn)#2017-12-1020:01favilaYes.
IsComponent is a strange annotation in that it’s on attrs but it really asserts something about entities#2017-12-1020:01favilaIt’s the closest thing to entity schema that exists natively #2017-12-1020:02joelsanchezpreviously I thought about it as a sort of "ON DELETE CASCADE" equivalent#2017-12-1020:03joelsanchezso...will be changing my schema! 😂#2017-12-1103:03favila“delete cascade” seems like a good way to think about it too. The same lifetime and ownership considerations exist in sql when deciding to cascade or not#2017-12-1103:05favilaHonestly if you never isComponent nothing will break. You will have to pull those refs explicitly when you want them, and deletion will orphan entities instead of retracting them#2017-12-1109:28boldaslove156Is there any difference under the hood if I use (mapv (partial pull db pattern) eids) instead of (pull-many db pattern eids) ?#2017-12-1204:25boldaslove156I take it that it is something like: pull-many only does it once while (mapv pull eids) does it once per eid, is this right?#2017-12-1215:28favilaIn theory it is possible for pull-many to have much more efficient db access patterns because it knows the full work. In practice I do not know: try it and see if it makes a difference#2017-12-1112:36kennethkalmerI’m having issues restoring an older backup to my local dynamodb storage and I’m wondering if anyone can help?
the following error is produced by dynamodb in the console: https://pastebin.com/8avZbXD7#2017-12-1112:37kennethkalmerI originally backed up my datomic free some time ago, and now I actually need this data again and am trying to get it restored#2017-12-1113:19kennethkalmerurgh, looks like ddb-local is completely broken now, will nuke away and restore independent backups and see how it goes#2017-12-1114:26kennethkalmeroh snap, ok, so it seems my problem was starting ddb with the -optimizeDbBeforeStartup flag… it must have corrupted something, somehow#2017-12-1116:50uwodebugging a “not on my box” issue. i’ve packaged an uberjar that runs its own peer, and I’ve included the sql driver in that jar, however when my colleague tries to run it they get java.sql.SQLException: No suitable driver. jar -tf shows the expected driver names on the classpath. any other obvious things I should check?#2017-12-1116:52favilaconnection string?#2017-12-1116:57uwowe’re using the same connection string. mine works, theirs does not#2017-12-1211:43gokulreddyWhich is better for deploying datomic: kubernetes or aws??#2017-12-1211:49hmaurer@gokulreddy Kubernetes can run on AWS; you are comparing two different things. On AWS you get DynamoDB, and soon you’ll get the managed Datomic setup from the AWS marketplace#2017-12-1217:46gokulreddyyea….thanks#2017-12-1216:03uwois it possible that the peer’s call to https://docs.oracle.com/javase/7/docs/api/java/sql/DriverManager.html#getDriver(java.lang.String) would find another driver on the system, before the one we packaged in an uberjar?#2017-12-1321:16aaelonyI haven't seen a discussion of whether an analog to sql window functions (e.g. rank(), dense_rank(), cume_dist(), ntile(), etc... ) exists in Datomic. Does something exist in Datomic akin to what an over(partition by ... order by ...) clause accomplishes? If so, where can I read up about it? Thanks in advance (e.g.
https://www.postgresql.org/docs/10/static/functions-window.html)#2017-12-1410:11megakorreanyone knows what the problem is when :db.error/not-an-attribute gets thrown?#2017-12-1411:31megakorreIs there any expected situation in datomic where pulled values have a nil value?
ex: (d/pull db '[*] entity-id) => {:user/sample-field nil :db/id 17592186048930}#2017-12-1414:02favilaNo, except maybe using default with nil#2017-12-1415:27nblumoeNice to see large facebook and netflix logos on the Datomic home page: http://www.datomic.com/ Does anyone have more information how they are using it, e.g. related blog posts?#2017-12-1416:30bkamphausMatt Bossenbroek at Netflix gave a talk re: use of Datomic: https://www.youtube.com/watch?v=V2E1PdboYLk#2017-12-1418:09nblumoeOh wow. 2015. Thanks#2017-12-1420:21eraserhdSo I need to add integrity checking, which I can do with a transaction function. The constraints I wish to enforce I have in data - in fact, they are really just a list of datomic queries that should return no data, along with error message strings that have substitution markers in them.#2017-12-1420:22eraserhdI realize that someone might have already done this. Yes?#2017-12-1510:57val_waeselynck@U0ECYL0ET I don't know of any library that does that, but writing such a transaction function is probably straightforward, and speculatively checking data invariants (using db.with()) is a common way of preventing illegal writes. However, be aware that Datalog has some overhead, so you want to be careful about running such queries in the Transactor for frequent writes; you can also consider running them on the Peer.#2017-12-1510:58val_waeselynck@U0ECYL0ET that's definitely something that could be added to Datofu (https://github.com/vvvvalvalval/datofu) btw, I just haven't used this approach frequently enough to consider it for the lib#2017-12-1516:42eraserhdAFAICT, I can't do this on the peer without the possibility of race conditions allowing invalid data.#2017-12-1516:44eraserhdThe d/with thing brings up a question that I don't think was directly addressed in the docs, though... 
does the db passed to a transaction function have all data available from previous :db/add and so forth from this transaction set?#2017-12-1517:18val_waeselynckNo, consider a transaction as unordered and atomic#2017-12-1512:22david_clojurianHello,
I'm looking for a way to query an entity that is tagged as :db.unique/value. This value exists only once in the database and I hope I can query it without knowing the attribute name.
In this example I only like to query it by an UUID, but I don't know whether it is a :one/v1 or a :another/v2.
I tried to look it up by [#uuid "59bbb1bb-7827-41e2-b213-db1d58ed9661"], but this doesn't work.
Is it possible to lookup by a db.unique/value without an attribute name?#2017-12-1512:47souenzzo(d/q '[:find ?e :in $ ?v :where [?attr :db/ident] [?e ?attr ?v]] db my-uuid)#2017-12-1512:48souenzzoYou can do [?attr :db/unique :db.unique/value] or something like for optimal performance.#2017-12-1513:29david_clojurianThanks for the response. I will try it.#2017-12-1514:00souenzzosometimes skip full scan is tricky 😄#2017-12-1514:00david_clojurianI changed it a little bit and now it works for me.
(query '[:find ?ident-key ?uuid
:in $ ?uuid
:where
[?attr :db/unique :db.unique/value]
[?e ?attr ?uuid]
[?attr :db/ident ?ident-key]]
[db uuid])#2017-12-1516:47favilait does not#2017-12-1516:48eraserhdIs there a way to get the tx from the db?#2017-12-1516:48favilause a tx tempid#2017-12-1516:48favilastring "datomic.tx" or (d/tempid :db.part/tx)#2017-12-1516:49favilathere's no way to predict what it will be#2017-12-1516:49favilaits value depends on how many entities were "created" by the transaction you want#2017-12-1516:49favilathere is one autoincrement id for an entire datomic db#2017-12-1516:50eraserhdMy goal is to prevent the transaction of data which would violate some user-specified constraints.#2017-12-1516:51favilayou need to test your constraint and throw after the transaction is applied#2017-12-1516:51eraserhdThere's no way to back out a transaction, correct?#2017-12-1516:52favilano foolproof way, no#2017-12-1516:52eraserhdOh my.#2017-12-1516:52favilayou can do this in a locking manner or with optimistic concurrency#2017-12-1516:52favilathe locking is simpler#2017-12-1516:53eraserhdHow would one do locking?#2017-12-1516:53favilamake a transaction function which takes the entire transaction you want to run#2017-12-1516:54favilathe body of the transaction should apply the transaction with d/with to the supplied db, then run validation. if validation fails, it should throw; otherwise it should return the transaction unchanged#2017-12-1516:54eraserhdAh, OK.#2017-12-1516:54favilathe optimistic version just does the same thing, but in the peer#2017-12-1516:55favilaand adds extra precondition checks to the submitted transaction to make sure nothing important changed by the time the transaction reached the transactor#2017-12-1516:55favilapresumably the peer would retry#2017-12-1516:55eraserhdSo the original TX becomes something like, [:db.fn/doAndCheck [[:db/add ...] 
[:db.fn/retractEntity ...]]]#2017-12-1516:55favilayes#2017-12-1516:56favilathe transactor is the only writer, so you are essentially locking the db while doAndCheck runs#2017-12-1516:56favila(write-locking)#2017-12-1516:57favilaso there's no possibility of funny business. the db your tx fn receives is absolutely the prior db, and the tx you return will absolutely be applied against that one#2017-12-1516:57eraserhdWill d/with run functions like retractEntity?#2017-12-1516:57favilaif you do it in the peer, you don't have that guarantee#2017-12-1516:58favilad/with is exactly the same#2017-12-1516:58favilait does everything d/transact can do#2017-12-1516:58favilaexcept it doesn't write#2017-12-1516:58eraserhdOK, cool. I think I have a solution, then.#2017-12-1519:20lellisHi guys! I know datomic has a max Datom size; suppose I reach the maximum size. My question is: will excising datoms shrink the occupied size?#2017-12-1620:49matando you think we'll ever see datomic open-sourced and/or free to use without a support license? so that we can use it also for small projects that can't afford the enterprisey licensing?#2017-12-1622:26shaun-mahood@matan: I wouldn't hold my breath for open source, but the cloud offering should work out to $1/day based on the latest info. I can't wait :)#2017-12-1622:33gcasthas there been any update on the datomic GUI showed off at the last conj?#2017-12-1622:33gcastI heard it was planned to be open-sourced, but that was a couple months ago#2017-12-2618:52timgilbertThat was me! Right after the conj I wound up needing to do a ton of work and didn’t have time to look at it, but I’ll have time allotted to finish it up in the next several weeks.
I hadn’t anticipated the difficulty of stripping out the usable bits from the rest of our internal tooling (which is what I demoed)#2017-12-2618:53timgilbertAnyways, it will be coming soon, watch this space…#2017-12-1622:35matan@shaun-mahood cloud offering?#2017-12-1622:36matanis that AWS specific?!#2017-12-1622:36matananyway that's kind of annoying, that we have no freely available clojure-idiomatic data persistence other than datomic, to use for smaller projects#2017-12-1622:38gcastthere is the free version of datomic#2017-12-1622:39gcastbut not sure of its limitations#2017-12-1701:34olivergeorge@matan There's datomic-free for open source projects. Pretty sure it was made clear that they aren't going to make Datomic open source in one of the presentations at the latest Clojure Conj.#2017-12-1701:35olivergeorgeThey have a unique product. I'm glad they're doing their best to make a living from it.#2017-12-1710:26daveliepmann"we have no freely available clojure-idiomatic data persistence other than datomic" — consider https://github.com/jackrusher/spicerack which stores Clojure data structures directly to disk.#2017-12-1710:29matan@olivergeorge thanks for point that out about the open source option. By the way there was nothing in my question to imply someone should not make a living of their work. #2017-12-1710:29matan@daveliepmann thanks, I will look into that as well!#2017-12-1710:31daveliepmann@matan I'd be interested to hear what you mean about clojure-idiomatic in this context?#2017-12-1710:42matan@daveliepmann mmmm good one. Something that lets you store and retrieve data as clojure data (implying no or less transformation and parsing in user code and/or the library code itself), then hopefully that querying itself is clojure-idiomatic (map, reduce, filter, and so on). Then maybe also something that has smooth concurrency features built into its API.
Do these make any sense?#2017-12-1711:31daveliepmannIt does, but it's interesting to consider how much of that is included in Datomic, which—although it takes EDN rather than concatenated strings—I would say operates on quite Datomic-shaped structures (entity maps and datoms) rather than Clojure-shaped structures (vectors, sets). For instance, out-of-the-box Datomic has no ordered collections. And the query language is Datalog, not map/filter/reduce, right?#2017-12-1711:37daveliepmannMaybe what I'm saying is that I like many aspects of Datomic idiom, but that feels distinct from Clojure idiom.#2017-12-1712:08hmaurer@daveliepmann it most definitely is. Datomic is essentially a triple-store#2017-12-1712:08hmaurer(although every “triple” also has information about the transaction which added it etc)#2017-12-1718:00matan@hmaurer I always wonder why it's not described as a triple-store as such, by the company. It certainly functions as one, which is very useful for some types of applications#2017-12-1718:04hmaurer@matan don’t they mention it somewhere in the doc? that’s odd#2017-12-1718:17matanMaybe now they do#2017-12-1712:09hmaurerIt’s not at all the same as persisting clojure data structures#2017-12-1712:36daveliepmannPrecisely my point. I might go as far to say that something might likely be wrong if one finds oneself filtering the result of a Datalog query. The results are Clojure data structures and therefore mappable & reducible but so are the results from clojure.java.jdbc.#2017-12-1715:45luchini@matan not an open-source alternative to Datomic but a great article diving into some of the aspects that make Datomic great. Someone could one day come up with a simplified open-source flavor of it https://www.aosabook.org/en/500L/an-archaeology-inspired-database.html#2017-12-1717:29luchiniI’ve released a beta version of Migrana today.
Feedback is greatly appreciated! https://github.com/luchiniatwork/migrana#2017-12-1717:31matan@luchini @daveliepmann @hmaurer thanks for all the comments, I admit to it being a little off-topic (in my own fault), but these concise insights are oftentimes hard to find elsewhere, so I learned really more than I was looking for#2017-12-1717:32matanAnd about the blog post of designing a database as an archaeologist that's really an original way of looking at it, and I should probably set aside a couple of hours to go through that lengthy write-up 🙂#2017-12-1717:32luchiniIt’s worth it. I’ve learned a lot#2017-12-1717:37matanThanks for noting! would you also do anything pragmatically different, after having gone through it? I might catch coffee with the author, he looks local from my area I think. Thanks again for commenting#2017-12-1717:42luchiniI love how he went as far as the querying engine. I would have loved to have seen him going deeper and building a disk storage because DataScript already does in-memory (in a more limited fashion). He also never goes into schemas which would make the whole thing even more interesting.#2017-12-1717:47matanYep, me too, just breezed through it#2017-12-1717:35matanSorry for criss-crossing with the festive announcement about migrana from @luchini just above 🙂#2017-12-1717:43luchiniNo worries… I’ll be pinging this channel seeking feedback regularly 😄#2017-12-1722:42SoV4Hi everyone, I have an implementation question#2017-12-1722:42SoV4i have blurbs...
:blurb/author :blurb/content :blurb/tag#2017-12-1722:43SoV4I want to add tags additively throughout time to specific blurbs (that have content and an author minimum)#2017-12-1722:44SoV4so I thought: write to db a blurb with b/content and b/author and then do a search, resolve the entity id, and then append tags in a comma-separated way to the db via :blurb/tag and :blurb/bid ... then I could do a search against the BID whenever someone asks for the tags.#2017-12-1722:44SoV4This creates some intermediate states, which I may not need. Is there a way to write to the DB and use the resolved entity id immediately to add tag items one [comma separated chunk] at a time ?#2017-12-1722:46SoV4I suppose tags also have authors as well. I wonder if there is a better way than doing: write blurb with author and with content to db, search for BID for what I just put it, then write tag with author and bid (one at a time)#2017-12-1722:46SoV4search for the blurb EID = BID for what I just put in* *#2017-12-1801:10luchini@sova any reason why you wouldn’t have the relationship blurb -> tag be of cardinality many?… as in 1 blurb -> n tags -> 1 author#2017-12-1810:40daveliepmann@sova A lot depends on what you'll use the tags for. If they're just for discoverability I'm not sure they need authors or an identity of their own. I agree with luchini that by default I'd make tags a cardinality/many string attribute of the blurb.#2017-12-1810:42daveliepmannThe "intermediate states" issue, if I understand it correctly, sounds solvable by putting the blurb and tags into a single transact statement using tempids to define the cross-references between yet-to-be-created entities. (If tags become instead merely a string attribute of blurbs, this becomes unnecessary.)#2017-12-1903:12Kevin BlantonGood evening. 
We are using the with to allow users to perform some speculative transactions so that they can see how their changes affect other things before deciding whether or not to actually commit the changes. We have captured db-after and tempids from the results of the with statement. We can use the resolve-tempid function to determine the id that was assigned during the with, but has anyone found a method to go the other direction? Meaning… we would like to map through the tempids returned and convert them back to the original temp-id that was sent in the tx for the with.#2017-12-1906:19favila:tempids is just a map. Resolve-tempid is a convenience to construct numeric tempids (the negative numbers) from the tempid records created by d/tempid. You can use d/part, d/tx->t, d/ident and some bit twiddling to get an equivalent tempid record from a negative long#2017-12-1906:20favilaYou can use d/entid-at and d/entid to make the negative long from the tempid record yourself#2017-12-1913:29Kevin BlantonThanks @U09R86PA4. I will give this a try.#2017-12-1919:31atticmaverickim going through the peer getting started guide http://docs.datomic.com/peer-getting-started.html and I keep getting ":db.error/entity-missing-db-id Missing :db/id" when attempting to transact the schema. Im confused as to why this fails.#2017-12-1919:34favilaAre you sure you are transacting (transact conn [{:db/id ...}]) not (transact conn {:db/id ...})?#2017-12-1919:36atticmaverickyes i am sure. I am providing a vector of maps. I've copied and pasted from the site to make sure it wasnt me. I am not using the mem storage protocol and have moved to the local dev setup and am trying to run through some of the examples again. 
Im not sure if that has something to do with it#2017-12-1919:39favilapaste the expression you are running#2017-12-1919:40favilaalso, verify what version of datomic you are using#2017-12-1919:41favila(being able to transact schema entities without :db/id is a more recent ability)#2017-12-1919:44atticmaverickdatomic-pro-0.9.565#2017-12-1919:47atticmaverick@(d/transact conn [{:db/ident :movie/title
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "The title of the movie"}])#2017-12-1919:56favilaand this throws the error?#2017-12-1919:56faviladoes it throw it when you use a mem db?#2017-12-1920:04atticmaverickI didnt have any problems with mem#2017-12-1920:22favilais your transactor definitely the same up-to-date version?#2017-12-1920:45atticmaverickI suppose but I dont know for sure. I downloaded the zip and I am running everything from the ./datomic-extracted/bin directory#2017-12-1921:04marshalluser=> (require '[datomic.api :as d])
nil
user=> (def db-uri "datomic:)
#'user/db-uri
user=> (d/create-database db-uri)
true
user=> (def conn (d/connect db-uri))
#'user/conn
(def movie-schema [{:db/ident :movie/title
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "The title of the movie"}
{:db/ident :movie/genre
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "The genre of the movie"}
{:db/ident :movie/release-year
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one
:db/doc "The year the movie was released in theaters"}])
#'user/movie-schema
user=> @(d/transact conn movie-schema)
{:db-before #2017-12-1921:04marshalli just tried from the bin/repl on 0.9.5656#2017-12-1921:15atticmaverick@U05120CBV I tried that and I still get the ":db.error/entity-missing-db-id Missing :db/id" error. Maybe I'm not running the transactor and peer server correctly?
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d test,datomic:
bin/transactor ./dev-transactor-template.properties
#2017-12-1921:15atticmaverickthe only thing i added to the template is the license#2017-12-1921:24favilawait I thought you were using the "peer getting started" guide?#2017-12-1921:24favilawhy are you running a datomic peer server?#2017-12-1921:25favilapeer server is for the client api#2017-12-1921:45marshall@U09R86PA4 is correct. no peer server required for using the peer itself#2017-12-1921:46marshallhave you started your transactor?#2017-12-1921:46marshallhttp://docs.datomic.com/dev-setup.html#run-dev-transactor#2017-12-1922:15atticmaverick@U05120CBV i have started the transactor. even the basic @(d/transact conn [{:db/doc "Hello world"}]) gives me IllegalArgumentExceptionInfo :db.error/entity-missing-db-id Missing :db/id datomic.error/arg (error.clj:57)
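[Editor's note: as favila points out above, the peer server started with bin/run -m datomic.peer-server serves only the client API; the peer library (datomic.api) talks to the transactor and storage directly. A minimal sketch of a peer-library connection against a running dev transactor, with an illustrative database name:]

```clojure
;; Sketch: using the peer library against a local dev transactor.
;; No peer server is involved; only bin/transactor needs to be running.
;; The database name "hello" is illustrative.
(require '[datomic.api :as d])

(def db-uri "datomic:dev://localhost:4334/hello")

(d/create-database db-uri)    ; true the first time, false if it already exists
(def conn (d/connect db-uri))
```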
#2017-12-1922:19marshallwhat version of datomic ?#2017-12-1922:19atticmaverickdatomic-pro-0.9.5656#2017-12-1922:20marshallcan you send me your log file? it’s in the log directory under the datomic distro#2017-12-1922:20marshall(email address hidden by the archive)#2017-12-1922:38atticmavericksent the email#2017-12-1922:42marshallare you running the peer from a repl started with bin/repl in the same dir?#2017-12-1922:44marshalland can I see your ‘require’ line in the repl please#2017-12-1922:59atticmaverickthis is in my project outside of the directory not using the bin/repl#2017-12-1923:00atticmaverick(require '[datomic.api :as d])
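[Editor's note: the "Missing :db/id" error in this thread is characteristic of an old peer library. In Datomic versions before roughly 0.9.5530 (which introduced string/optional tempids), schema maps required an explicit :db/id and explicit attribute installation; older peers reject the id-less form used in the current getting-started guide. A sketch of the older, explicit style, assuming a bound conn:]

```clojure
;; Sketch: the older, explicit schema form that pre-0.9.5530 peers and
;; transactors require. Newer versions accept the id-less map form.
(require '[datomic.api :as d])

@(d/transact conn
   [{:db/id                 (d/tempid :db.part/db) ; explicit tempid in the schema partition
     :db/ident              :movie/title
     :db/valueType          :db.type/string
     :db/cardinality        :db.cardinality/one
     :db/doc                "The title of the movie"
     :db.install/_attribute :db.part/db}])        ; explicit attribute installation
```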
#2017-12-1923:41atticmaverickit's pretty clear im missing something fundamental so i'm going to start over. I have an application written in clojure with a postgres db. I wanted to learn datomic so I thought it would be cool to replace postgres with datomic in this simple app. So a new question (starting from scratch): I want to have a local datomic database (strictly development) on my machine to read and write from within my clojure code. Is there a link or tutorial that would help me achieve this? What components of datomic would I need to run? And what would I use to make queries and transactions to datomic from clojure?#2017-12-2001:36marshallin your project what version of the datomic peer library are you using#2017-12-2001:37marshalli.e. in your project.clj or your pom.xml or whatever you’re using#2017-12-2001:37marshallyou’re using the right tutorial/etc. i now strongly suspect you’ve got an old version of the peer library in your project#2017-12-2015:39drewverleeWhere would i find good marketing material on Datomic? 
I’m thinking specifically about the ability to compose queries, something i can’t do in SQL.#2017-12-2015:40marshall@drewverlee are you looking for printed/pdf type stuff or videos/interviews/etc?#2017-12-2015:43drewverleeI’m looking for technical material, the format is less important.#2017-12-2015:45marshallhttps://www.youtube.com/watch?v=PTMyTyMcxkU#2017-12-2015:45marshalla customer story specifically about query composition#2017-12-2015:46marshalltechnical details http://docs.datomic.com/query.html#grammar#2017-12-2015:46drewverleethanks!#2017-12-2015:47marshallexamples of query: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/query.clj and https://github.com/Datomic/day-of-datomic/blob/master/tutorial/query_tour.clj#2017-12-2015:49marshallMike Nygard had a few important things to say about Datomic here http://blog.cognitect.com/blog/2016/4/22/the-new-normal-everything-relies-on-sharp-tools#2017-12-2015:50marshallfrom the original source: https://www.youtube.com/watch?v=Cym4TZwTCNU#2017-12-2015:50marshall😉#2017-12-2015:58donaldballHey folks, I’m struggling trying to remember how to do a very prosaic query that finds entities with values that are not in a given input set#2017-12-2016:05marshall@drewverlee another good customer story: https://www.youtube.com/watch?v=7lm3K8zVOdY#2017-12-2018:03eoliphantHi, does anyone know offhand what one would use to determine if a transactor is ‘healthy’. We’re a Terraform shop so i’ve created a config that creates the dynamodb, transactor instances, etc etc. Everything works fine, but I need the ‘external’ check such that I can have the AWS infrastructure take a bad instance out of the ASG. I was trying to do a TCP ping, but it looks like the standby transactor doesn’t even accept connections, perhaps smartly 😉#2017-12-2018:14souenzzodoesn't the standby accept an http query?
http://docs.datomic.com/transactor.html#health-check-endpoint#2017-12-2119:13eoliphantah… duh 😞 lol, totally missed that, was pinging the actual transactor (e.g 4334) listener Thanks!#2017-12-2018:58gcast@donaldball might you be referring to the function missing?#2017-12-2018:58gcastas in [(missing? $ some-entitty some-attribute)]#2017-12-2018:59donaldballno, I was looking for entities with values that aren’t in a given set. My problem was merely syntax:
(not [(contains? ?the-set ?v)])#2017-12-2019:04gcastoh I see#2017-12-2022:35favilaTo anyone who has wanted to get :db/ident values for refs from pull expressions (i.e. the behavior of d/entity for "enum" entities), consider up-voting this feature request: https://receptive.io/app/#/case/49752#2017-12-2111:18souenzzoI thought only I was bothered by this#2017-12-2022:41gcastindeed that would be a nice feature.#2017-12-2102:52SoV4@luchini @daveeliepmann thank you and thank you. each tag can be authored by a different person. the reason i am keeping track of this is because: other users can "verify" a tag, which awards the original tagger a point. it's important to incentivize cooperative behavior in social platforms. I have a :tag/blurb and it sounds like you both agree I should put a schema property :blurb/tags with cardinality many... good idea and I agree.#2017-12-2103:18luchiniYou should avoid duplicating bi-directional relationships manually. You can always use the reverse notation :blurb/tags or :tag/blurb depending on your scenario#2017-12-2103:30SoV4Hmm I didn't know I could do that. That's cool. So if I have :tag/blurb ... then :blurb/_tag would return many of them, even if tag/blurb was a cardinality one?#2017-12-2423:04luchiniThe way to reverse it is to add _ to the keyword not to the namespace. So if you have a :tag/blurb of cardinality one, a :tag/_blurb will return you all tags that blurb has. It feels a bit upside down though (semantically speaking). 
You might just prefer to do :blurb/tags with a many cardinality and use :blurb/_tags to find all blurbs that have a certain tag.#2017-12-2102:52SoV4@daveliepmann *#2017-12-2115:47adamfreyis it possible to update a datomic attribute from unique/value to unique/identity via transacting new schema?#2017-12-2115:48adamfreyI just tried to do exactly that to a running datomic system, the schema attribute is listed as unique/identity now, but when I try to do an upsert, I still get a unique conflict.#2017-12-2115:56adamfreyI see from this table that it should be possible http://docs.datomic.com/schema.html#altering-schema-attributes. I'll keep investigating why I'm not seeing the behavior I expect#2017-12-2117:52eraserhdhmm, what's the easiest way to get datomic to unify with a constant value, for example in an or clause to handle a missing value.#2017-12-2117:53eraserhde.g (or [?thing :foo/bar ?value] [(str "(null)") ?value]) I guess?#2017-12-2118:02eraserhdOh, there's a bunch of new stuff, like ground.#2017-12-2118:36gcastdoes anyone know if its possible to add new schema to a running db?#2017-12-2118:36marshall@gcast absolutely#2017-12-2118:37marshallyou can assert new schema at any time#2017-12-2118:37gcastdo you per chance know where I may read about that? for some reason its not working for me.#2017-12-2118:38marshallhttp://docs.datomic.com/schema.html#2017-12-2118:38marshallwhat specifically isnt working?#2017-12-2118:39gcastwell I transact and it appears to succeed but then I run this query, [:find ?attr ?type ?card
:in $
:where
[_ :db.install/attribute ?a]
[?a :db/valueType ?t]
[?a :db/cardinality ?c]
[?a :db/ident ?attr]
[?t :db/ident ?type]
[?c :db/ident ?card]]#2017-12-2118:39gcastand the new attributes are not there#2017-12-2118:39marshallhttp://docs.datomic.com/tutorial.html#schema#2017-12-2118:39marshalldid you get a new value of the db after you transacted the schema?#2017-12-2118:40gcastyes#2017-12-2118:41marshallmight want to look at these schema queries: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/schema_queries.clj#2017-12-2118:42gcasthmm well I just noticed that I do not do a blocking take when I transact#2017-12-2118:42gcastI put the transact call in a do#2017-12-2118:42marshallyou’ll want to look at the returned value#2017-12-2118:42marshallto make sure it was successful#2017-12-2118:43gcastok. thank you#2017-12-2119:16gcast-_- forgot to request latest db before querying to confirm schema txs#2017-12-2120:15souenzzoit's a great feature from datomic. 😄#2017-12-2119:41marshall😉#2017-12-2119:41marshallthat’ll do it#2017-12-2306:26uwoTrying to figure out recursive rules. I can’t figure out how return the final node in a line. My rule always stops one short: https://gist.github.com/uwo/f901440c4044d6eb337db4994686883f#file-gistfile1-txt#2017-12-2314:17favilaThe node you want does not have a matching ?dir on it, so you need an endpoint-from rule impl that looks for that condition @uwo . The result you are getting is the furthest element that has a matching ?dir#2017-12-2620:27uwothanks @U09R86PA4!#2017-12-2505:39mx2000Hello, what kind of settings should I set for a server with only 1 GB or 512 MB of memory?#2017-12-2506:03mx2000Found an answer https://stackoverflow.com/questions/26102584/decrease-datomic-memory-usage#2017-12-2516:11laujensen(defn get-visitors-with-activities
[siteurl from to]
(d/q '[:find [(pull ?vs [* {:visitor/activity [:activity/path :activity/note]}]) ...]
:in $ ?siteurl ?from ?to
:where
[?s :site/url ?siteurl]
[?s :site/visitors ?vs]
[?vs :visitor/activity ?a]
[?a :activity/timestamp ?dt]
[(.after ^java.util.Date ?dt ^java.util.Date ?from)]
[(.before ^java.util.Date ?dt ^java.util.Date ?to)]
]
(db) siteurl from to))#2017-12-2521:08souenzzoyou can use [(> ?dt ?from)] inside datalog (although it does not work on clojure)#2017-12-2713:21souenzzoWas there a time difference?#2018-01-0314:10laujensenno afraid not#2017-12-2516:12laujensenHey guys, this is filtering a few 100ks of visitors and their activities, returning just 3500 maps, but taking 1500 msecs doing it. The same query would be around 3msecs in mysql. Whats choking my performance?#2017-12-2718:48gcastDoes anyone know how datomic handles repeated :db/add transactions on an attribute that is cardinality one.#2017-12-2718:56favilait automatically :db/retracts what is there#2017-12-2718:57favilabut you can't :db/add twice in the same transaction, that will produce a datom conflict#2017-12-2719:03gcastgotcha. See thats what I thought too, but then I did a index query for datoms with that attribute that had been retracted (using .added) and it found nothing#2017-12-2719:11favilaunless you use a history db, datoms will never see any retractions#2017-12-2719:12favilanormal (non-history) dbs only contain assertions#2017-12-2719:19gcastok got it. thank you#2017-12-2719:56timgilbertSay, are the datomic client libraries available on clojars or anywhere? Trying to write a little open-source datomic utility and I’m not sure what I should tell users about what to do about the libraries#2017-12-2720:00timgilbertI suppose I’ll just instruct them to use bin/maven-install from the datomic distribution, that seems simple enough#2017-12-2720:01favilaI always make datomic a :provided dependency#2017-12-2720:02favila(this only works for libraries though)#2017-12-2720:02favilayou can include datomic as a maven dependency#2017-12-2720:03favilainstructions are at the top of your datomic account page: https://my.datomic.com/account#2017-12-2720:03favilabut you obviously can't share that key#2017-12-2720:03timgilbertYeah#2017-12-2720:58SoV4Hi I was wondering something. 
How do I commit to datomic elements with cardinality many?#2017-12-2720:59SoV4iterate over them and commit them one by one? o.o#2017-12-2720:59SoV4maybe this is where my procedural software brain still doesn't grok FP#2017-12-2721:00SoV4in my case users can provide a comma separated list of tags, i want to commit them all to the same "blurb" entity in the db. but a commit I can only do one element right? i want to commit many independent tags (with contributing user id) as cardinality many so I can keep track of who submitted which tag (and award verification/participation points)#2017-12-2721:00SoV4hopefully my question makes sense. i am still kinda puzzled by how to resolve temporary IDs if I were to do a commit for the blurb contents and stuff and then subsequent commit for tag/val and tag/author and tag/blurb#2017-12-2721:03favilaYour transaction can do arbitrary things, there is no "one element" restriction#2017-12-2721:16SoV4yay#2017-12-2722:47SoV4that is a simple statement and helps.#2017-12-2722:47SoV4more study required on my part#2017-12-2723:48SoV4does that mean i can transact a vector and search against the set? that's what i want to do essentially#2017-12-2804:59val_waeselynck@U3ES97LAC it would probably be easier for us to help if you gave us a sample schema and expected query results :)#2017-12-2819:01adamfreyI'm trying to use the datomic client library to connect to a running Datomic server on AWS. I get this error every time.
{:cognitect.anomalies/category :cognitect.anomalies/incorrect, :datomic.client/http-error "Throttled"}
is there any way in the client library that I can get more insight into what's going on?#2017-12-2819:16kennytiltonIn the past I have used Postgres’s notify capability to have an app find out automatically when something changed in the DB. When Datatomic (IIUC) pushes data to a client, is there some way for a Clojure app similary to pick up that something has changed (so it knows to go check out the change in some fashion)?#2017-12-2819:17kennytiltonYes, I am digging thru the doc, but I do not think I am smart enough to read Cognitect doc.#2017-12-2819:24marshall@hiskennyness http://blog.datomic.com/2013/10/the-transaction-report-queue.html#2017-12-2819:24marshallyou probably want the tx report queue#2017-12-2819:25marshallhttp://docs.datomic.com/clojure/#datomic.api/tx-report-queue#2017-12-2819:34kennytiltonYep, that looks like it. Thx, @marshall#2017-12-3014:49eoliphant@hiskennyness there’s also http://www.onyxplatform.org/ which might be useful for some use cases. I just finished moving some code I had that was using the report queue directly to a separate onyx process. Nice blog post here (http://www.stuttaford.me/2016/01/15/how-cognician-uses-onyx/) about using Datomic and Onyx#2017-12-3015:39Petrus TheronFollowed Heroku buildpack https://elements.heroku.com/buildpacks/opengrail/heroku-buildpack-datomic to deploy a free Datomic transactor on Heroku, but getting this error:
2017-12-30T15:30:40.144778+00:00 heroku[datomic.1]: State changed from starting to up
2017-12-30T15:30:41.948617+00:00 app[datomic.1]: Device "eth1" does not exist.
2017-12-30T15:30:41.963211+00:00 app[datomic.1]: sed: can't read /app/scripts/transactor.properties: No such file or directory
2017-12-30T15:30:41.970483+00:00 app[datomic.1]: Launching with Java options -server -Xms256m -Xmx2g -Ddatomic.printConnectionInfo=false
2017-12-30T15:30:41.990735+00:00 app[datomic.1]: Picked up JAVA_TOOL_OPTIONS: -Xmx300m -Xss512k -Dfile.encoding=UTF-8 -Djava.rmi.server.useCodebaseOnly=true
2017-12-30T15:30:46.601471+00:00 app[datomic.1]: Critical failure, cannot continue: Error starting transactor
2017-12-30T15:30:46.602179+00:00 app[datomic.1]: java.lang.Exception: 'protocol' property not set
...
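[Editor's note: the final error in that log ('protocol' property not set) means the transactor never read a properties file, which matches the earlier sed error about /app/scripts/transactor.properties being missing. For reference, a dev transactor needs at least something like the following; all values here are illustrative:]

```properties
# Minimal dev transactor properties (illustrative values).
protocol=dev
host=localhost
port=4334
license-key=<your license key>
```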
#2017-12-3017:21Petrus TheronSigh, looks like "eth1" does not exist might have something to do with needing Heroku Spaces, which is only available on enterprise Heroku :face_with_head_bandage: lost half a day on this. Back to plain AWS#2017-12-3101:48kennytiltonInteresting, @eoliphant Thx for the lead. That blog post makes Onyx sound like a heavy/elaborate lift. Was it simpler than the report queue somehow?#2017-12-3102:02souenzzo@hiskennyness #onyx needs a zookeeper and maintaining a quorum of 5 zookeepers isn't an "easy" thing.#2017-12-3107:33kennytilton@souenzzo OMG. Zookeeper, too? (Programming) life just is not that simple any more.#2017-12-3110:46Petrus TheronWhich AWS EC2 instance size should I use for Datomic transactor? The cloudformation templates still have c3.large, which seems to be no longer available (maybe just hidden)#2017-12-3110:56Petrus TheronThis is for testing/staging loads. Will t2.micro / t2.small do?#2018-01-0200:32donaldballIn my experience t2.medium is the smallest that can practically run the transactor, at least when using the cognitect ami#2018-01-0322:27eoliphantyeah I’d second that, we usually do t2.large in dev#2018-01-0208:00devurandomHi!#2018-01-0208:04devurandomDavid Nolen recently mentioned Datomic on the client (i.e. web browser) and from e.g. NodeJS [1]. However, the only Datomic client libraries for other languages like JS [2] appear ancient and unmaintained (last commits in 2015 for Ruby, Python, JS) and targeting the REST API only, which is considered "legacy and will no longer be supported" [3]. So what was David Nolen talking about [1]?
[1]: https://www.youtube.com/watch?v=nbMMywfBXic
[2]: http://docs.datomic.com/languages.html
[3]: http://docs.datomic.com/rest.html#2018-01-0211:22souenzzoI think that the WIP JS client isn't on rest API, it's on "peer API"#2018-01-0504:19devurandomIs that WIP JS client already available somewhere?#2018-01-0512:13souenzzoI think that just in @U050B88UR desktop 😕#2018-01-0210:39nblumoeHey, I would like to get a normalised map of entities from Datomic. So instead of having a map with nested entities, I would like to have all relevant entities on the top level and ids for entity links:
{12345 {:other/entity 54321 :this/name "foo"}
54321 {:this/name "bar"}}#2018-01-0210:42nblumoeI wonder what the best way to accomplish this would be. I tried a couple of things and have a working solution, but this requires a surprising amount of manual work. Feels like I a missing something to get a nice, idiomatic solution. The best I came up with, was using the query API but I ran into issues with optional refs.#2018-01-0213:00robert-stuttaford@U066HBF6J look how close yours is to the index format: [e a v t]. you have {e {a v, a v]}. Perhaps you could group-by a d/datoms on :e, and then process the groups to make an a-v map?#2018-01-0213:00robert-stuttafordyou’ll have some edge cases around things like enums, but should be pretty straightfoward#2018-01-0215:17nblumoeThanks, let’s see if I understood you correctly. You suggest to use d/datoms to retrieve datoms from the index by entity-id directly and then reshape that into the target data structure, right? I am already playing around with this but I struggle to see how to work with the datoms. Is there any documentation you could point to?#2018-01-0215:41favilaHow do you determine the "relevant entities" (the keys in your map?)#2018-01-0215:44nblumoeI have a list of entity ids. For each entity from that list, I want to retrieve all attributes and values. Any ref attribute I would like to have as an entity id in the entity hash-map, but also as a corresponding top level entity (thus normalized data).#2018-01-0215:46favilahow many levels of recursion are you planning to go?#2018-01-0215:47nblumoeSo, “relevant entities” are identified by the list of entity ids and all refs on those entities.#2018-01-0215:47nblumoeone level is sufficient#2018-01-0215:48nblumoeah sry, two. however, a solution that could generalize on the recursion depth would be good in any case.#2018-01-0215:50favilathe recursion is what changes this from a simple massage of d/datoms or d/pull-many results.#2018-01-0216:07favila(->> (d/q '[:find ?e2 ?a2 ?v
:in $ [?e1 ...]
:where
[?ref-type :db/ident :db.type/ref]
[?e1 ?a1 ?e2]
[?a1 :db/valueType ?ref-type]
[?e2 ?a2 ?v]
[?a2 :db/valueType ?ref-type]]
db entity-list)
(group-by first)
(into {}
(map
(fn [[e tuples]]
[e
(reduce
(fn [acc [_ a v]]
(let [{:keys [ident cardinality]} (d/attribute db a)]
(case cardinality
:db.cardinality/one
(assoc acc ident v)
:db.cardinality/many
(update acc ident (fnil conj []) v))))
{}
tuples)]))))#2018-01-0216:09favilaThis will get the refs and collect them into scalar ids. To get all attributes from the first level, I suggest making a map from (d/pull-many db entity-list) and merging the results of this code into it to overwrite the ref-typed attributes#2018-01-0216:10favilato generalize the query to multiple levels, you’ll need a rule#2018-01-0216:43robert-stuttafordwhat favila said 👏#2018-01-0216:56favilathe rule would probably look something like this#2018-01-0216:56favila[[(follow-refs [?depth ?e1] ?a1 ?e2)
[(> ?depth 1)]
[?e1 ?a1 ?e2]
[?e2] ; ensures ref type
[(dec ?depth) ?ndepth]
(follow-refs ?ndepth ?e2 ?a2 ?e3)]
[(follow-refs [?depth ?e1] ?a1 ?e2)
[(= ?depth 1)]
[?e1 ?a1 ?e2]
[?e2]]]#2018-01-0216:56favila(untested)#2018-01-0308:12nblumoeThanks, I really appreciate providing that code and I will test it. Still surprised how much code is needed for that TBH.#2018-01-0308:16nblumoeIt might be worth describing, WHY I want to have that data structure: I would like to export entities (incl. all entities associated via refs) from a Datomic instance A and import it to B. IDs are not being shared between the two DB instances. I thought having a normalised map of all the relevant entities would be the easiest for import (using the IDs as tempids). But maybe there would be a better way to achieve that export/import cycle? (Export should be handled by an application, exposing it via an HTTP API, so I don’t want to copy data on the storage layer for example)#2018-01-0312:25robert-stuttafordI think exporting an [e a v] list is far simpler. then, when transacting, you only need to do the id -> tempid conversion, and datom -> tx assertion conversion (i.e. add :db/add at the beginning of the datom). which isn’t much code at all#2018-01-0314:25favilaThis may interest you: https://gist.github.com/favila/785070fc35afb71d46c9 It's a little old (before mem storage had a working log, before string tempids, etc) but it demonstrates "application-level" datomic db dump and restore. There may be some ideas to mine.#2018-01-0313:47donmullenI am doing a very large datomic import (over 14 million rows). I believe I’m doing all the ‘best practice’ things: 1) batch transactions 2) pipeline 3) initial schema w/o indexes 4) set transactor for import. I’m noticing however, that the transactions come in relatively quickly at the start (~ 4 sec for 100 transactions) and then gradually degrade as the import proceeds (now over 1 min for 100 transactions after 23,000 transactions. What are some other things to consider that might cause this? 
Transactor is running locally as dev and I’m using client api to a peer server.#2018-01-0314:03robert-stuttaford@donmullen may i suggest using the peer library instead? what storage backend are you using? what are the threshold values for your transactor? higher = bigger indexing jobs#2018-01-0314:10donmullen@robert-stuttaford I’m running locally against a dev disk storage. Would local-ddb be more efficient? Transactor settings memory-index-threshold=32m / memory-index-max=512m / object-cache-max=64m // running -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50#2018-01-0314:10donmullenI will try switching to peer library.#2018-01-0314:12donmullenI seem to be running into memory / GC issues. Likely some rookie clojure dev mistake in the code somewhere (only recently ramping back up in clojure).#2018-01-0314:25souenzzodatomic:mem will store on ram. 1Gb is enough for data + index's + datomic code?#2018-01-0314:32jeff.terrell@donmullen - I'm pretty sure you're holding on to the head of your data sequence. As more of the sequence gets realized, the GC can't free up any of the first elements because you still have a reference to the head (your data binding in your code). This can take some work to get right, and it's also a bit frustrating to debug because the feedback cycles are long. But the good news is that it's a fairly common problem in Clojure when dealing with large datasets, so there should be some good resources out there to learn more about it. 
Let me know if this doesn't make sense or if you're not sure where to go from here.#2018-01-0314:36donmullen@jeff.terrell Thanks - was reaching the same conclusion.#2018-01-0314:39favila@donmullen degradation on import is normal as the amount of data reaches indexing thresholds; indexing (done by the transactor) slows down the import process because it consumes cpu and io.#2018-01-0314:40donmullen@favila Is there indexing happening even if no attributes have :db/index true?#2018-01-0314:41favilayes, there are still :eavt :aevt and :vaet (for refs) indexes#2018-01-0314:41favila:db/index true only controls the presence of :avet for a given attr#2018-01-0314:41favilaFYI: http://docs.datomic.com/capacity.html#data-imports#2018-01-0314:42favilayou can check your logs for backpressure alarms. that would at least tell you that the slowdown is because the transactor is applying backpressure#2018-01-0314:43favila(transactor logs)#2018-01-0314:45donmullen@favila - thanks - not seeing alarms - so I think the issue likely GC/mem in my code.#2018-01-0314:46favilaI don't see any obvious head-holding in your code#2018-01-0314:46favilanormal process monitoring should tell you which process is consuming cpu#2018-01-0314:47favilasomething like jstat or jvisualvm can tell you what is happening in each process#2018-01-0407:43hanswjust checking... Datomic only allows the use of java.util.Date, correct?#2018-01-0412:11souenzzohttps://receptive.io/app/#/case/17713#2018-01-0412:53hanswthat requires me to login?#2018-01-0412:58souenzzoI think that you need to access from http://my.datomic.com (top, right button) from the first time
But it's a suggestion to support java.time.instant.
And the feedback from datomic team:
We're gathering feedback from customers, so that we can gauge demand for this feature. Make sure to click "Want" on this feature if you need it.
Marshall Thompson
from Datomic wrote a year ago
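[Editor's note: until native java.time support lands, converting at the application boundary is straightforward with plain JDK interop. A small sketch; the :event/at attribute in the comment is hypothetical:]

```clojure
;; Sketch: bridging java.time.Instant and the java.util.Date values that
;; Datomic's :db.type/instant stores. Pure JDK interop, no extra deps.
(import '(java.time Instant)
        '(java.util Date))

(defn instant->date ^Date [^Instant i] (Date/from i))
(defn date->instant ^Instant [^Date d] (.toInstant d))

;; e.g., asserting "now" on a hypothetical :event/at instant attribute:
;; @(d/transact conn [{:event/at (instant->date (Instant/now))}])
```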
#2018-01-0413:25hanswaaah ok#2018-01-0413:25hanswthnx#2018-01-0413:25hanswwill upvote#2018-01-0407:45hanswI'm surprised they haven't jumped on the goodies in java.time as of JDK8, being immutable and all...#2018-01-0407:45hanswProbably for backwards compatibility?#2018-01-0414:00michaelsjava.util.Date is the source of 95% of my problems. (The other 5% being NPEs.)#2018-01-0414:04hanswi'm surprised they opted to use java.util.date to begin with... joda-time has been the better option for ages#2018-01-0414:07michaelsThat seems to be the case for a lot of libraries. Jackson doesn’t do serialization natively of them, and we’ve had problems with other libraries as well. 😕#2018-01-0414:14hanswhmm#2018-01-0414:15hanswinteresting choices 🙂#2018-01-0414:15hanswreasons could be something legal...#2018-01-0414:04hanswmaybe something to do with licensing or whatever#2018-01-0417:51hanswWhat would it look like when I 'update' an entity to have the value for an attribute removed? Set it to nil?#2018-01-0418:04hanswaha, :db/retract#2018-01-0419:04favilaIs there an efficient way to look up a transaction in the log by its uuid?#2018-01-0419:05favilaThis is an approximation which I'm not sure would even work with transactions which set a txInstant explicitly:#2018-01-0419:05favila(defn lookup-tx-by-uuid [log tx-uuid]
  (let [start (Date. (d/squuid-time-millis tx-uuid))]
    (->> (d/tx-range log start nil)
         (drop-while #(not= (:id %) tx-uuid))
         first)))#2018-01-0419:05favilaI was hoping there was something better#2018-01-0419:26rapskalianHey all, when Datomic Starter says that "updates are limited to one year", does that mean that after 1 year my CI servers will break (they won't be allowed to pull the JAR file)? I'm trying to figure out how exactly the licenses work. If I'm ok with not receiving updates after 1 year, would I need to host that JAR file somewhere manually?#2018-01-0419:32rapskalianOr do I sort of automatically get "locked in" to whichever version is available when my updates expire?#2018-01-0419:47alexmiller@marshall is a good guy to answer such questions#2018-01-0419:52marshall@cjsauer your license key won't work to start transactor versions released past the maintenance period
it will not affect your ability to fetch the peer lib for CI#2018-01-0419:53marshallin general it is best to keep parity between peers/transactor in terms of version#2018-01-0419:54rapskalian@marshall thanks for the response. Makes perfect sense.#2018-01-0419:55marshall@favila I can’t think of anything better off the top of my head#2018-01-0419:56marshallit should be pretty fast since you’re starting the tx-range call at the millisec of the txn of interest#2018-01-0419:58favilais there a guarantee that the tx-uuid's squuid ms part will be = or < the txInstant?#2018-01-0419:58marshallthat i’m not sure about#2018-01-0419:58favilaI would assume this would be false for backdated txs (with explicit txInstant)#2018-01-0419:59favilathis fn wouldn't work in those cases; only a full scan of the log would find the uuid#2018-01-0419:59marshallright#2018-01-0419:59favilaok. I am a little sad but this is ok; it's something we use for forensics anyway#2018-01-0420:00favila(digging around in a repl)#2018-01-0420:00marshallyeah, the only other option i can think of would be to walk the log and export uuids with tx-ids to something like elasticsearch#2018-01-0420:00marshallwhich you could do batchwise#2018-01-0420:02favilaI was a little surprised to discover also that tx-range's start parameter is interpreted like d/as-of when a date#2018-01-0420:03favilait doesn't mean "give me txs with this txInstant or later", it means "give me the TX that is the final one at this txInstant or later"#2018-01-0420:04favilaso if you ask for time x and there's no tx at time x, the first tx you get from the seq will be from time < x not > x#2018-01-0420:04favilais that intentional?#2018-01-0420:06marshallhrm#2018-01-0420:06marshallthe docs say “The arguments are startT (inclusive), and endT (exclusive).
Legal values for these arguments are:”#2018-01-0420:07marshallso you’re saying you get a tx with txInstant before the date you passed?#2018-01-0420:07favilayes#2018-01-0420:08favilaI expected txInstant to always be >= start-instant#2018-01-0420:08marshallthat seems inconsistent with my expectation#2018-01-0420:08marshallany chance you have a small repro?#2018-01-0420:09favilaI discovered it working with a prod db but it should be easy to reproduce with a test db#2018-01-0420:09marshallok. i’ll look into it also, but if you get one please do file a ticket#2018-01-0420:11favilaThis is how I found it: (-> (d/log pc) (d/tx-range #inst"2018-01-04T18:41:54.000-00:00" nil)
(->> (take 2)
(map tx->times))
)
=>
([[53635182 #uuid"5a4e7571-f3c6-4f58-bcd6-f0da00a898de"]
[1515091313000 #inst"2018-01-04T18:41:53.000-00:00"]
[1515091313924 #inst"2018-01-04T18:41:53.924-00:00"]]
[[53635183 #uuid"5a4e7572-fcd8-47ee-a89d-ded605b565df"]
[1515091314000 #inst"2018-01-04T18:41:54.000-00:00"]
[1515091314031 #inst"2018-01-04T18:41:54.031-00:00"]])#2018-01-0420:11favila(the first time in each entry is the squuid-time-ms from the tx-uuid; the second one is the tx's txInstant)#2018-01-0420:13marshallinteresting#2018-01-0420:13marshallit’s like there’s a rounding issue#2018-01-0420:13marshalli know d/squuid rounds to seconds#2018-01-0420:13favilayeah, that's fine. I don't think txid is used by tx-range#2018-01-0420:13favilathe important bit is the second time#2018-01-0420:14marshallhrm. actually i wonder if it’s in the t->tx vs tx->t conversion#2018-01-0420:14marshallerr the datetime to tx/t i mean#2018-01-0420:14favilaI don't think so#2018-01-0420:14favilahere is the boundary:#2018-01-0420:14favila(-> (d/log pc) (d/tx-range #inst"2018-01-04T18:41:54.030-00:00" nil)
(->> (take 2) (map tx->times)))
=>
([[53635182 #uuid"5a4e7571-f3c6-4f58-bcd6-f0da00a898de"]
[1515091313000 #inst"2018-01-04T18:41:53.000-00:00"]
[1515091313924 #inst"2018-01-04T18:41:53.924-00:00"]]
[[53635183 #uuid"5a4e7572-fcd8-47ee-a89d-ded605b565df"]
[1515091314000 #inst"2018-01-04T18:41:54.000-00:00"]
[1515091314031 #inst"2018-01-04T18:41:54.031-00:00"]])
(-> (d/log pc) (d/tx-range #inst"2018-01-04T18:41:54.031-00:00" nil)
(->> (take 2) (map tx->times)))
=>
([[53635183 #uuid"5a4e7572-fcd8-47ee-a89d-ded605b565df"]
[1515091314000 #inst"2018-01-04T18:41:54.000-00:00"]
[1515091314031 #inst"2018-01-04T18:41:54.031-00:00"]]
[[53635184 #uuid"5a4e7572-61de-4103-b5fd-cb2ed253ed4e"]
[1515091314000 #inst"2018-01-04T18:41:54.000-00:00"]
[1515091314254 #inst"2018-01-04T18:41:54.254-00:00"]])#2018-01-0420:15marshallwhat’s your backend storage?#2018-01-0420:15favilathis db is sql#2018-01-0420:16marshallwasn’t ever backed up / restored from ddb or any other storage?#2018-01-0420:16favilamaybe from dev, ages ago? never ddb#2018-01-0420:16marshalldev wouldn’t do what i was thinking#2018-01-0420:26rapskalian@marshall just hypothetically, if a security hole were discovered in Datomic, would Datomic Starter users outside of their license receive any retro-patches? I'm imagining no, but wanted to double check.#2018-01-0420:27marshall@cjsauer well, it hasn’t happened and i hope it doesnt, but I suspect that decision would probably be made at that time
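The second-granularity behavior marshall mentions (d/squuid rounds to seconds) is easier to see if you reconstruct the squuid layout by hand. This is a plain-Clojure sketch mirroring the documented scheme (creation time in the high 32 bits of the most-significant word), not Datomic's actual implementation:

```clojure
(import '(java.util UUID))

(defn make-squuid
  "Build a UUID whose top 32 bits are the current time in epoch seconds,
  so freshly minted ids sort roughly by creation time."
  []
  (let [secs (quot (System/currentTimeMillis) 1000)
        r    (UUID/randomUUID)
        msb  (bit-or (bit-shift-left secs 32)
                     (bit-and (.getMostSignificantBits r) 0xFFFFFFFF))]
    (UUID. msb (.getLeastSignificantBits r))))

(defn squuid-time-millis
  "Recover the embedded creation time. The result is always a whole second,
  which is why it can differ from a tx's :db/txInstant by up to a second."
  [^UUID u]
  (* 1000 (bit-shift-right (.getMostSignificantBits u) 32)))
```

The truncation to whole seconds is exactly the gap visible above between the squuid-derived time and the actual :db/txInstant.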
I would not be the one making it 🙂#2018-01-0420:28rapskalian@marshall gotcha. Appreciate the help.#2018-01-0612:29hanswDoes datomic have an equivalent query for SELECT * FROM account WHERE nr = 23? Note that nr is a natural key. In the tutorials I see lots of examples for querying specific attributes but I would like to query all attributes at once, avoiding a second trip with db/pull...#2018-01-0612:30hanswOr do I have to find the entity-id and then use db/pull?#2018-01-0612:58hanswhmm i guess supplying the ident-value directly to pull should work 🙂#2018-01-0616:19timgilbertYou might use the Entity API for something like that, so you’d have (let [e (d/entity my-db 23)]). Then you can pull fields out of e as you need them. The most direct analog to SELECT * would probably be (pull [*]) though, which gives you every attribute of the entity in question as data#2018-01-0616:22timgilbertSo like:
(defn pull-star [db key]
  (d/q '[:find (pull ?e [*]) .
         :in $ ?k
         :where [?e :attr/nr ?k]]
       db key))
#2018-01-0616:25timgilbertAs you get deeper into datomic, you may find yourself passing lookup refs around for this kind of thing, so your key value might be [:attr/nr 23] and your pull-star function would look more like:
(defn pull-star [db ref]
  (d/q '[:find (pull ?r [*]) .
         :in $ ?r]
       db ref))
(pull-star db [:attr/nr 23])
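For `[:attr/nr 23]` to work as a lookup ref, the attribute has to be declared unique in the schema. A hypothetical schema sketch (`:attr/nr` is the example attribute from above, not a real one):

```clojure
;; :db.unique/identity also enables upsert on this attribute;
;; :db.unique/value would enforce uniqueness but reject upserts
[{:db/ident       :attr/nr
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]
```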
#2018-01-0616:26timgilbertOr more simply:
(pull db '[*] [:attr/nr 23])#2018-01-0616:56timgilbertSay, can anyone explain to me why datomic has a built-in :db.type/uri? Is there any reason to use this instead of a :db.type/string for storing a URI?#2018-01-0617:55hansw@timgilbert thanks a bunch!#2018-01-0618:05hansw@timgilbert does (d/entity my-db 23) assume i mapped my identity column as :db/unique :db.unique/value?#2018-01-0618:06hanswas opposed to db/unique/identity#2018-01-0618:06hanswi'm not quite sure about the difference anyway from the reference#2018-01-0618:07timgilbertOH, YEAH, SORRY, THAT WAS UNCLEAR#2018-01-0618:07timgilbertHa, caps lock, oops#2018-01-0618:07hanswi was kind of confused by datomic magically figuring out what to relate the number 23 to 🙂#2018-01-0618:08hanswoh and, as a datomic novice i too can not think of a use for :db/uri#2018-01-0618:08timgilbertSorry, that was unclear. There I was thinking of 23 as an entity ID, sort of a built-in natural key. The other examples I was thinking of it as an external identifier (eg, something with :db/unique :db.unique/identity#2018-01-0618:09hanswyes, ok, so my key is indeed an external identifier#2018-01-0618:09timgilbertBut in practice you probaly want to use an external unique key, and the entity API would look more like (d/entity db [:attr/nr 23])#2018-01-0618:10timgilbertDatomic resolves that for you to “the single entity ID which has the attribute :attr/nr set to 23”#2018-01-0618:12hanswgot it, thnx#2018-01-0618:13hanswI've come to realize how much I have been relying on types to understand an API in my career, until now...#2018-01-0618:13hanswbut that's a different conversation 🙂#2018-01-0618:14hanswabout that uri thing... 
i seem to remember Stuart Holloway saying to avoid :db/uri in the Day of Datomic videos#2018-01-0618:15timgilbertYeah, most of the mailing list messages I’ve seen suggest it was an early artifact, I’m going to pretend it doesn’t exist#2018-01-0618:15timgilbertIt’s not like I have a lot of love for the java.net.URI API in the first place#2018-01-0618:17hanswi had a similar surprise about java.util.Date for Inst values#2018-01-0618:17hanswbeing a mutable type and all#2018-01-0618:18hanswand joda-time has been around forever... But i saw in Clojure 1.9 there is support for taking a java.time.Instant as a clojure Instant#2018-01-0618:19hanswso i'm guessing java.util.Date is on its way out in Datomic#2018-01-0619:06timgilbertThat would be nice, though I suspect they wouldn’t ever ditch it wholesale. There’s a good Clojure wrapper around the java.time stuff too, if you haven’t seen it: https://github.com/dm3/clojure.java-time#2018-01-0719:06drewverleeare there any libs, tools for visualizing datomic or datascript data?
I’m thinking as a csv, html, delimiter separated data, etc..#2018-01-0719:40daemianmackdrewverlee: https://github.com/mtnygard/datoms-say-what visualizes datomic transaction results as SVG. might be a useful starting point.#2018-01-0721:25drewverleeYea. that might help. thanks a lot!#2018-01-0811:50Vincent CantinI found a typo on the web site (http://docs.datomic.com/transactions.html)
"three ways to sepcify an entity id" -> specify#2018-01-0815:11robert-stuttafordcc @jaret#2018-01-0815:12jaretI’ll fix it right now. Thanks for the catch!#2018-01-1309:50Vincent CantinIt is still there.#2018-01-0816:03hanswthere's a couple more there#2018-01-0816:05hanswEach list a transaction contains represents either#2018-01-0816:06hanswand, similarly Each map a transaction contains is equivalent to @jaret#2018-01-0816:30jaretThanks! I’ll fix those as well.#2018-01-0816:53jaret@U8L9BNAJE The wording seems correct to me. I did note that we had a semi colon where we needed a comma though. Here is the original text:
>“Each list a transaction contains represents either the addition or retraction of a specific fact about an entity, attribute, and value; or the invocation of a data function, as shown below.”#2018-01-0817:12hansw@jaret on 3rd read, yes, you are correct. Maybe it's the language barrier but my mind was hung-up on reading it like "Each list [in] a transaction" but now I understand it refers to what is above#2018-01-0817:14hanswMaybe 'Each list that is contained by a transaction represents...' is better?#2018-01-0818:11jaretIt’s definitely a mouthful. I’ll confirm with the team to see if a rewording, like the one you suggested would work.#2018-01-0816:08Vincent CantinI am a beginner with the Clojure eco-system, and after fast-reading Datomic's documentation I still have difficulties understanding its position compared to another db-bound framework/data-flow which I am familiar with : Meteor.
Here are my questions:
1. Suppose that I want to dev a web-app with reactive data update from the db to the ui, similar to what can be done with Meteor, is Datomic a valid candidate for a replacement? Would something be missing in the reactivity chain between the client and the server-side peer?
2. How/where is it decided which part of the data each client is or isn't supposed to see?#2018-01-0816:20Vincent CantinNow that I think about it, I did not see anywhere a mention about reactivity or pub/sub. Maybe that's not what Datomic was designed for. Sorry if I misunderstood.#2018-01-0816:37timgilbertDatomic has the capabilities to do this, but isn't particularly specialized for it. Having immutable data is a big enabler of that kind of thing though. If you were to write something like this, you'd probably use the transaction report queue. There's a project aiming to demonstrate this here (I haven't looked at it closely): https://github.com/waf/push-demo#2018-01-0816:38timgilbertYou'd still need a web server in there to do the tx-report-queue to websockets transmission, and to do security filtering, etc. That part of the code isn't built in to datomic and you have a lot of options#2018-01-0816:41timgilbertThere is a project called datsys which aims to do a lot of this type of thing, replicating a datomic database into the client and then hooking that up to react. It still seems like early days though. https://github.com/metasoarous/datsys#2018-01-0823:32Vincent CantinThanks a lot.#2018-01-0816:09hanswWhile we're on the subject of txdata... What could be wrong here:
I have this tx: [:db/retract 17592186045418 :myns/myattr #inst "1999-06-23T22:00:00.000-00:00"] but I get: :db.error/invalid-lookup-ref Invalid list form: [:db/retract 17592186045418 :myns/myattr #inst "1999-06-23T22:00:00.000-00:00"]#2018-01-0816:12favilaIs that the entire tx? no outer containing list?#2018-01-0816:13hanswIt's not the entire tx#2018-01-0816:13favila[map-or-tx-fn, map-or-tx-fn, ...] is the format#2018-01-0816:14hanswyeah, a 'list of lists' as it says in the datomic doc#2018-01-0816:15favilaerror indicates it thinks your assertion is being used as a lookup ref#2018-01-0816:15favilaso it's either a value in a map or nested too deep in a list#2018-01-0816:16favilamaybe you map somewhere where you should mapcat?#2018-01-0816:16hanswmust every retraction be processed in a seperate call to transact?#2018-01-0816:16hanswok so i'm getting the collection-dimensions wrong somewhere...#2018-01-0816:19favilano, retractions do not need to be alone#2018-01-0816:19hanswmy full tx combines updates and retractions. That is not invalid, correct? (as long as i provide the correct list)#2018-01-0816:19hanswok#2018-01-0816:19favilathere are no transaction batching restrctions except you can't do conflicting things#2018-01-0816:19hanswgot it#2018-01-0816:19favilabut that's not the case here#2018-01-0816:19hanswindeed#2018-01-0816:19favilathis is clearly a syntax error#2018-01-0816:20favilaif you pretty-print your transaction (entry per line) it may be easier to see#2018-01-0816:25hanswthnx so far#2018-01-0818:28hanswfixed it, and you were, of course, correct 🙂#2018-01-0816:12hanswCould this be related to the :db/txInstant of my schema?#2018-01-0823:45caleb.macdonaldblackHow are other people managing schema and data changes in their projects? 
Kinda like database migrations in a relational database.#2018-01-0823:56csmWe wrote a tool that scans the current schema from the database, compares that to our master schema definition (just a bunch of edn), creates a diff between the two, and applies the diff. Overall it wasn’t hard to write.#2018-01-0904:58timgilbertWe generally use https://github.com/rkneufeld/conformity which is simple and just makes sure you don’t apply the same block of transactions twice in a given database#2018-01-0904:59timgilbertWe’ve been looking into https://github.com/luchiniatwork/migrana too, which is somewhat ActiveRecord inspired, apparently#2018-01-0906:59val_waeselynck@U3XCG2GBZ You can also use, or take inspiration from, Datofu: https://github.com/vvvvalvalval/datofu#managing-data-schema-evolutions. I also wrote about it here: https://vvvvalvalval.github.io/posts/2016-07-24-datomic-web-app-a-practical-guide.html#data_migrations#2018-01-0912:01chrisblomIs anyone using Datomic for timeseries? I've been using datomic for simple timeseries data by using a compound id (event id + attribute + timestamp), but am running into some issues with this approach (slow queries, not easy to remove expired data), so I was wondering if and how other people solve this.#2018-01-0912:02chrisblomi'm thinking now that Datomic is not a good fit for this purpose, but I hope someone can prove me wrong#2018-01-0916:59val_waeselynckIn my experience Datomic is not a very good fit when you want fast aggregations - we offloaded all of ours to ElasticSearch. Datomic still plays an important role in that: making data synchronization easy#2018-01-1114:53chrisblomthanks, what do you mean by data synchronization in this case?#2018-01-1116:03val_waeselynck@U0P1MGUSX making sure the ElasticSearch materialized view gets updated correctly and efficiently.#2018-01-0913:01Vincent CantinOut of curiosity, has anyone already developed a bridge between git's repository format and Datomic?
Would there be any practical reason to do it?#2018-01-0913:04Vincent CantinI don't know if Datomic would be appropriate for this usage w.r.t. the potential huge size of the data, but I would definitely see an advantage in the gain of expressiveness of the queries we could run on imported git repositories.#2018-01-0913:19chrisblomHave you seen this project? https://github.com/Datomic/codeq#2018-01-0914:23Vincent CantinI will definitely look into it. thx#2018-01-0916:55conanHi folks, i'm getting this error: ActiveMQSecurityException AMQ119031: Unable to validate user when trying to connect to a transactor running on Heroku. My understanding is that it's a licensing issue, but i'm using a Datomic Pro Starter Edition license which allows unlimited peers. Does anybody have an idea what else could cause this?#2018-01-0918:18conanturns out i'm not able to expose the datomic port using heroku 😞#2018-01-0917:35hanswI have a question about capacity planning. My process compares millions of records, one-by-one (with some parallelism involved). The left-hand version of an entity is almost always already in the db, unless we encounter a 'new' one. Also, we will encounter each record only once. Is it fair to say that I should try to look for a way to disable any caching the datomic-peer (and indeed the transactor) might do for this usecase?#2018-01-1008:28chrisblomare you only processing these records once?#2018-01-0917:39hanswSo, actual calls to transact are pretty rare, whereas a read from the db using entity happens in 99% of the time...#2018-01-0918:13hanswOr... i try to get as much of my database in memory as I can upfront, which leads to another question: how do I determine the size of my db?#2018-01-0918:40souenzzo"Size of db" inside SQL/Dynamo? Inside memory/running code? In (d/datoms) form? In backup form?#2018-01-0918:54hanswInside running code...
I guess what I'm asking is what the ratio sizeof(psql dump) : running memory is.#2018-01-0918:57hanswAlas, I'm pretty sure it won't fit. A single file I process is roughly 22 GB. That doesn't fit on my laptop 🙂#2018-01-0918:58hanswNot in mem, at least... But I would consider getting lots of mem for the production-stage. Just hard to know how much that would require.#2018-01-0919:04souenzzoI'm also interested.
In my case, would be cool to know how much "peer memory" (50% of JVM) is enough to fit a database that has a backup with X GB#2018-01-0920:09hanswi'll let you know if i find out mote#2018-01-0920:09hanswmore#2018-01-1018:15calebpI’m not sure if any of this has changed, but I believe the docs mention being careful about getting the heap too big, so that you don’t introduce large gc pauses. The alternative is to use the memcached integration for larger memory caches.#2018-01-1018:18calebpI don’t actually see it in the docs, must have been support convos. If you're async anyway, longer pauses might not be a big deal#2018-01-1020:53hansw@U0H4HJB08 thanks! i have indeed run into gc problems... I have decided to go the client-api route as to be isolated from the peculiarities of the peer library for this high-traffic scenario i have#2018-01-1020:54hansw@U0H4HJB08 in this usecase i will never hit any of the caches because i am hitting each entity in my database once#2018-01-1020:58hanswso caching is futile#2018-01-1020:58hansweven counterproductive#2018-01-1020:59hanswunless my db were small and i could load all of it in mem#2018-01-0920:09hanswmy postgresql dump is 19 GB#2018-01-0923:40donmullenWhat is the proper syntax for using pull within a clojure peer client query and using defaults?#2018-01-0923:41luchini[:find
 (pull ?e [:job/doc-num (:job/filing-date :default "")])
 :in $
 :where
 [?e :job/job-num "01"]]
#2018-01-0923:42luchini☝️ pull as the first form after :find#2018-01-0923:42luchinithat might do the trick#2018-01-0923:46donmullen@luchini Evidently one can put values before the pull that are included as variables in the query. And taking out the ?e doesn’t help. Wondering if I should just use (get-else …) within the :where.#2018-01-1000:20timgilbertIf the query bit is quoted, you need to pass your values into it, like:
(d/q '[:find
       (pull ?e [:job/doc-num num])
       :in $ num
       :where
       [?e :job/job-num "01"]]
     db (:job/filing-date :default ""))
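The get-else route donmullen mentions also works, and keeps the default inside the :where clauses instead of the pull pattern. A hypothetical sketch using the same :job/* attributes:

```clojure
;; get-else returns the attribute's value for ?e, or the supplied
;; default when the entity has no :job/filing-date asserted
(d/q '[:find ?doc-num ?filing-date
       :where
       [?e :job/job-num "01"]
       [?e :job/doc-num ?doc-num]
       [(get-else $ ?e :job/filing-date "") ?filing-date]]
     db)
```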
#2018-01-1011:19souenzzoquote on '(:job/filing-date :default "")#2018-01-1001:50donmullenThanks @timgilbert.#2018-01-1013:19conanI'm trying to run a transactor on Heroku. The transactor is up and running and my peer can connect to storage to retrieve the host and alt-host values, but it cannot connect to the transactor. I can't telnet to the transactor using its port either. Can anybody help with this? I suspect it's similar to running a dockerised setup.
ActiveMQNotConnectedException AMQ119007: Cannot connect to server(s). Tried with all available servers. org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:799)#2018-01-1013:31conanThe full error looks like this:
ActiveMQNotConnectedException AMQ119007: Cannot connect to server(s). Tried with all available servers. org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:799)
clojure.core/eval core.clj: 3206
...
user/eval2432 REPL Input
datomic.api/create-database api.clj: 19
datomic.Peer.createDatabase Peer.java: 117
...
datomic.peer/create-database peer.clj: 764
datomic.peer/create-database peer.clj: 772
datomic.peer/send-admin-request peer.clj: 752
datomic.peer/send-admin-request/fn peer.clj: 760
datomic.connector/create-transactor-hornet-connector connector.clj: 320
datomic.connector/create-transactor-hornet-connector connector.clj: 322
datomic.connector/create-hornet-factory connector.clj: 142
datomic.connector/try-hornet-connect connector.clj: 110
datomic.artemis-client/create-session-factory artemis_client.clj: 114
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory ServerLocatorImpl.java: 799
org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException: AMQ119007: Cannot connect to server(s). Tried with all available servers.
type: #object[org.apache.activemq.artemis.api.core.ActiveMQExceptionType$3 0x5d70be8d "NOT_CONNECTED"]
clojure.lang.ExceptionInfo: Error communicating with HOST 0.0.0.0 or ALT_HOST on PORT 3206
alt-host: ""
encrypt-channel: true
host: "0.0.0.0"
password: <redacted>
peer-version: 2
port: 3206
timestamp: 1515588947255
username: <redacted>
version: "0.9.5561.50"
#2018-01-1013:42conanI realise this is probably a networking problem, but I've been trying to get a Datomic transactor running for a long time now and still haven't managed it.#2018-01-1013:51conanMaybe something in here is responsible, such as TCP routing? https://devcenter.heroku.com/articles/http-routing#not-supported#2018-01-1023:36James VickersProbably like some other people, I am curious how the architecture of a single Transactor affects performance. Does the Transactor have any multi-threaded aspects (like accepting data from Peers in parallel)? Can someone draw a (conceptual) comparison of how the write performance of Datomic would compare to a regular SQL database (e.g. PostgreSQL)?#2018-01-1107:26val_waeselynckFrom a Rich HIckey talk I saw a few months ago: the Transactor does leverage several threads, but not via parallelism - via 'pipelining' instead. E.g one thread for unmarshalling the transaction requests, one for running the transactions serially, one (or maybe more) for indexing, one for sending updates to the Peers, etc.#2018-01-1107:29val_waeselynckA regular SQL database will use locking for coordination, i.e reads will slow down writes and writes will slow down reads. Datomic won't have this problem, however I reckon its indexing cost is probably higher - i.e for an extreme use case where you only ever want to write serially and not read, PG will probably have higher throughput.#2018-01-1111:06souenzzoIt's important to say that if you application has TO MANY writes, it may not fit with datomic...#2018-01-1116:04val_waeselynck@U2J4FRT2T I would nuance this statement as follows: if your application has too much data / too many writes, not all of it can go in Datomic.#2018-01-1116:05val_waeselynckI wrote a bit about Datomic's performance characteristics here: https://medium.com/@val.vvalval/what-datomic-brings-to-businesses-e2238a568e1c#2018-01-1104:08tcarls@jamesvickers19515, I can't speak in a great deal of detail wrt. 
Datomic, but I will note that as someone responsible for scaling PostgreSQL professionally, the PG case very much has a comparable set of bottlenecks -- you're still only able to do horizontal scaling for reads; but without time-based queries, PG makes it harder to correlate multiple reads together consistently without a bunch of transaction locking ensuring that you're holding a reference to a specific point in time.#2018-01-1104:09tcarls(referring to "horizontal scaling for reads" in PG being via such mechanisms as a pool of secondaries doing streaming replication or PITR recovery).#2018-01-1107:30val_waeselynckHowever, a pool of secondaries is only eventually consistent in PG, correct?#2018-01-1117:14tcarls@val_waeselynck, any given secondary is internally consistent with some prior point-in-time that the master was at. That's effectively equivalent with what datomic gives you, if your PG schema is built to allow point-in-time queries.#2018-01-1117:14tcarls(which is, of course, rare and expensive).#2018-01-1117:15val_waeselynck@U2V9F98N8 not sure I understand, let me ask specifically: if you write to the master then read to the secondary, are you guaranteed to read your writes ?#2018-01-1221:27tcarlsYou're guaranteed a read that accurately reflects a single point in time (in senses that aren't true with some "eventually consistent" databases), but not an up-to-date one.#2018-01-1104:10tcarlsthat said, I've yet to have a write-heavy workload with Datomic, so I'm definitely not the best person to speak directly to the question.#2018-01-1106:36caleb.macdonaldblackCan I update a datomic entity using the entity id? Something like {:client/name "Caleb" :client/eid 123412341234}?
Or do I need to provide my own id?#2018-01-1106:38caleb.macdonaldblackits :db/id#2018-01-1108:54val_waeselynck@caleb.macdonaldblack Not sure I understood your question, but you can do both [{:my-ent/id "fjdkslfjlk" :my-ent/name "THIS VALUE CHANGED"}] (a.k.a 'upsert') and [{:db/id 324252529624 :my-ent/name "THIS VALUE CHANGED"}]#2018-01-1109:20caleb.macdonaldblack@val_waeselynck Ah okay cheers. I didn’t know :client/id works too. Thanks!#2018-01-1115:00jaretDatomic 0.9.5661 is now available https://forum.datomic.com/t/datomic-0-9-5661-now-available/273#2018-01-1116:15hanswThe peer API has pull-many but the client-api doesn't. What is the preferred way to pull-many through the client-API if i want to avoid doing a bunch of client/pull requests?#2018-01-1116:20hanswI was looking at the query-api, it allows a pull-expression to be used but I suspect that will only work for a single entity...
Basically, I am looking for the datomic counterpart for select * from books where isbn in (1, 2, 3 , 4)#2018-01-1116:24souenzzo@hans378 you can (d/q '[:find [(pull ?e pattern) ...] :in $ pattern [?e ...]] db '[*] [id1 id2]) but i think that (map (partial d/pull db pattern) [id1 id2]) is faster once dont need to "parse" the query#2018-01-1222:09souenzzo@U0DJ4T5U1#2018-01-1116:25hanswthe latter would be more http-trips...#2018-01-1116:29souenzzonot sure how cache works on "clients"(i just use peers)... but sure, "process" is faster then http.#2018-01-1116:30hanswin my usecase i will only get cache-misses#2018-01-1116:33hanswbecause i am running a relatively shot-lived batch-process where every entity is touched only once#2018-01-1116:26hansw@souenzzo i'll try approach #1, thank you!#2018-01-1116:30souenzzoTalking about "clients", any news about JS/Node?#2018-01-1116:31rapskalianDatomic's pull syntax and GraphQL queries seem super related, and all my front-end guys want to speak Graph...does anyone have experience in sort of "converting" graph queries into pull syntax? Or maybe there's an even more elegant solution to the problem. We've found Lacinia, but I can't help but feel that if you're using Datomic, all these schemas are just unnecessary.#2018-01-1116:50conanThis is something I'm really interested in too#2018-01-1117:02val_waeselynckI have developed a 'variant' of GraphQL for my application, with essentially the same reads semantics as GraphQL. While directly converting GraphQL queries to pull patterns is appealing, as soon as you need either nontrivial authorization logic or derived fields or parameterized fields, Datomic pull is no longer powerful enough#2018-01-1117:07val_waeselynckHowever, developing a basic GraphQL interpreter (as a recursive function) on top of the Datomic Entity API is rather straightforward, to the point you could maybe do it without Lacinia. 
But be aware that this basic interpreter may be too naive an algorithm - you may get performance issues as soon as some fields require a network call, and maybe even a Datalog query (Datalog queries have much more overhead than Entity lookups).#2018-01-1117:12val_waeselynckMy point being that most production-ready GraphQL interpreters need some way of leaving rooms for optimizations. In the NodeJS world, this is done by making Field Resolvers asynchronous, which also lets you do batch queries with some wizardry; In Lacinia, this is done via sub-query previews. My strategy has been to give up on Field Resolvers (synchronous functions that compute a single field of a single entity) and adopt the more general 'Asynchronous Tabular Resolvers' (asynchronous functions that compute several fields of a selection of entities).#2018-01-1117:13val_waeselynckYou should also have a look on the work done for Om Next / Fulcro - the querying semantics are similar.#2018-01-1117:14val_waeselynckI may end up open-sourcing the query engine I made one of the days - let me know if you're interested.#2018-01-1117:31souenzzo@val_waeselynck I also (started)developed one, but it never finishes (my "main" app will not use it. Just future plans for now)#2018-01-1119:32timgilbertI am working on one right now that we plan to open-source once it's stable#2018-01-1119:32timgilbertHopefully soon#2018-01-1119:33timgilbertIt has two components, one is a library to that hooks up to the entity API and resolves stuff for you, and the other is a program that you point at your db and it extracts a lacinia schema definition from it#2018-01-1119:34timgilbertThere's also the umlaut project, which can go take a graphql schema and produce a datomic schema from it#2018-01-1119:34timgilberthttps://github.com/workco/umlaut#2018-01-1120:00rapskalian@val_waeselynck appreciate the reply, I'd definitely be interested in seeing that code.
For this specific project, I may be concluding that datomic is just not the right tool at present...I really just need a straightforward way to expose my database to a GraphQL client, and unfortunately this doesn't feel "straightforward" enough given my project's extremely tight timeline. We may end up moving over to JS on the backend 😕
I've tried convincing my front-end guys of potentially embracing the Datomic API completely, and letting go of Graph, but they are hell-bent on keeping their familiar tools...such is life.#2018-01-1209:08val_waeselynck@U6GFE9HS7 I haven't used Lacinia yet, but I do think it's still very straightforward with it 🙂 - if that's enough to tip the balance, you should try and sell the workflow aspects of Datomic to the clientside guys#2018-01-1701:34kennyWe are doing something very similar to this at our company. No GraphQL though. The frontend subscribes to pull patterns by sending an HTTP request to the backend with the pull pattern and eid. The backend responds with the result of that pull pattern. Additionally, the frontend is connected via SSE so the backend sends updates to the frontend any time any datoms that match the pull patterns the client has subscribed to have been updated in the DB.
Did AWS give any updates re: the Datomic Cloud submission process? Is there an estimated duration of how long this will take?#2018-01-1122:44uwois the best (only) way to warm the peer cache and index on startup just to issue a bunch of queries like the ones you’ll be running from that peer?#2018-01-1208:51val_waeselynckProbably yes, maybe you could make something clever using the Log API to touch some of the last used index segments... However, you should definitely try the Memcached approach first as it requires zero effort.#2018-01-1208:52val_waeselynckSide note about memcached: it's also very useful on dev machines, especially since the same memcached cache can be shared for dev (on your machine) and production (remote) databases#2018-01-1214:58uwothanks!#2018-01-1215:01uwoAre there any potential issues to consider from running the same memcached instance for staging and production environments of the same peer application?#2018-01-1216:34uwonevermind. obviously they wouldn’t be sharing anything 🙂#2018-01-1218:50val_waeselynckThey could be sharing most of their data if the staging data is obtained via restoring the production data; I cannot think of any issues except for the additional load#2018-01-1122:49uwoHmm. Actually I think a memcached instance would resolve my issue. Still interested in the answer :)#2018-01-1211:15hanswUsing the client-api and a query like this: `[q '[:find (pull ?e [*])
:in $ [?vals ...]
:where
[?e :myns/myattr ?vals]]`
It seems to me that the datomic peer-server won't return more than 1000 results, i.e. there seems to be some kind of cut-off, because when I provide 2000 items in the vals collection, I get no more than 1000 results (I'm sure it should be 2000).
Can I configure this cut-off point somewhere?#2018-01-1211:17hanswaha 🙂, it's in the docs :chunk - Optional. Maximum number of results that will be returned
for each chunk, up to 10000. Defaults to 1000.
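For reference, a hedged sketch of how those knobs are passed with the arg-map form of the client API's q — this assumes the documented optional keys (:chunk, :limit), and db / vals are placeholders:

```clojure
;; Untested sketch (requires a running Datomic system and client config).
;; :chunk controls the per-chunk cut-off discussed above; :limit -1 asks
;; for all results instead of the default limit.
(require '[datomic.client.api :as d])

(d/q {:query '[:find (pull ?e [*])
               :in $ [?vals ...]
               :where [?e :myns/myattr ?vals]]
      :args  [db vals]
      :chunk 5000
      :limit -1})
```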
#2018-01-1212:19caleb.macdonaldblackIn datomic can I transact a schema with a one-to-many relationship by using a :unique/identity value?#2018-01-1212:20caleb.macdonaldblack[{:db/ident :parent/children
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many}
{:db/ident :child/id
:db/valueType :db.type/keyword
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
[{:parent/children [:child-key-1 :child-key-2]}]
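A corrected sketch of the snippet above, assuming the child entities already exist — note the leading colons on the value types, and the lookup refs on the :db.unique/identity attribute used as ref values:

```clojure
;; Untested sketch. Schema for a one-to-many ref plus an identity attribute:
[{:db/ident       :parent/children
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/many}
 {:db/ident       :child/id
  :db/valueType   :db.type/keyword
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]

;; Transaction: refer to existing children via lookup refs.
[{:db/id           "parent-tempid"
  :parent/children [[:child/id :child-key-1]
                    [:child/id :child-key-2]]}]
```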
#2018-01-1215:46conanI would say yes, but it's also really easy to try it out#2018-01-1212:23caleb.macdonaldblackSomething like that#2018-01-1212:46caleb.macdonaldblackI needed to do it like this: [{:parent/children [{:db/id [:child/id :child-key-1]} ...]}]#2018-01-1214:10caleb.macdonaldblackCan I trust my entity ids to be the same every time for my tests using an in memory db? They seem to be#2018-01-1214:45val_waeselynckI would definitely not rely on that, especially if stuff starts getting concurrent#2018-01-1214:46val_waeselynckAlso, this seems super fragile - adding some new data in your test db could break a lot of subsequent tests#2018-01-1300:18caleb.macdonaldblackAhh cheers. I can also see at the bottom of this page http://docs.datomic.com/entities.html they don’t recommend it either#2018-01-1214:34hanswI wouldn't rely on that across instantiations.#2018-01-1215:45conanI need to cascade data down my environments, from production to staging and from staging to developer local databases. I can use Datomic's backup and restore for that, has anyone tried this? I'd really like to be able to spin up a new environment for each pull request and populate the db from the latest backup, but I'm worried it would take too long.#2018-01-1218:39val_waeselynck@conan Maybe you could have each environment use an in memory fork of a common staging database#2018-01-1310:58conanHow can I create that fork? #2018-01-1310:59conanI like the sound of in-memory, but you can’t restore a backup to a mem transactor #2018-01-1311:23val_waeselynck@conan you can't restore to in-memory, but you can do something even better. Just use Datomock: https://github.com/vvvvalvalval/datomock . 
Just stick a (datomock.api/fork-conn my-shared-staging-conn) in and you're good to go.#2018-01-1311:23val_waeselynckThe fact that you can do that is one of the unsung superpowers of Datomic - it's based on the datomic.api/with API.#2018-01-1314:41conanOK this looks really great - so I could set up my database component to connect to a staging database, with a switch that in review instances forks the connection so no changes are actually written to the db? This would then allow me to test schema changes, run test suites and everything without actually modifying the staging db. Then I can just have a cron job that restores the staging db from production every night or so. If that works then it completely justifies the choice of using datomic.#2018-01-1315:16val_waeselynck> If that works then it completely justifies the choice of using datomic.
It does, doesn't it? 🙂#2018-01-1315:40Vincent CantinIt seriously rocks#2018-01-1218:34spieden@conan i believe datomic does incremental restores, so if you ran into perf issues maybe you could do a copy at the storage level and freshen from there.. just an idea#2018-01-1218:39timgilbertWe routinely copy prod data to staging using backup/restore. It's a mild pain in the butt to do all the transactor / peer restarts, but it's perfectly doable.#2018-01-1218:41timgilbertWe also wrote some tools to do the same for locally-running transactors on dev boxes and pull in S3 backups from other environments#2018-01-1218:41timgilbertIt turned out to be easiest to do this with the transactor in a docker container since it's way easier to bring it up/down that way#2018-01-1221:53drewverleeWith datalog, can you return the queries as maps with the attributes as keys? I'm actually asking for datascript, but it seems reasonable it would work the same way in both right?#2018-01-1221:55souenzzo@drewverlee (d/q '[:find (pull ?e [*]) :where [?e :user/name]] db)#2018-01-1222:02drewverleeThanks! That's perfect.#2018-01-1302:27drewverleeCan someone help me understand why
(def schema {:cli {:db/cardinality :db.cardinality/many}})
(d/transact! conn [{:name "foo" :type :cli} {:name "bar" :type :cli}])
(d/q '[:find (pull ?e [:name])
:where
[?e :type :cli]]
@conn)
;;([{:name "bar"}] [{:name "foo"}])
Returns a list of vectors rather than a list of maps? e.g
({:name "bar"} {:name "foo"})
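The shape being asked for here comes from a collection find spec — a hedged sketch against the same DataScript setup:

```clojure
;; Untested sketch: wrapping the pull in [... ...] (a collection find
;; spec) yields a flat collection of maps instead of one-element tuples.
(d/q '[:find [(pull ?e [:name]) ...]
       :where [?e :type :cli]]
     @conn)
;; e.g. [{:name "bar"} {:name "foo"}] rather than ([{:name "bar"}] ...)
```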
#2018-01-1311:26val_waeselynck@U0DJ4T5U1 By default, Datalog always returns a collection of tuples. If you want a flat list instead, wrap your pull pattern in [... ...] (a collection find spec); if you want a single result, add a . after the pull pattern.#2018-01-1311:27val_waeselynckSee http://docs.datomic.com/query.html#find-specifications#2018-01-1321:00drewverleeThanks @U06GS6P1N. That's right on point with what I wanted. There are a lot of docs to comb over for datomic#2018-01-1510:34val_waeselynck@U0DJ4T5U1 You're welcome. What does 'comb over' mean in English?#2018-01-1614:42rapskalian@U06GS6P1N "comb over" in this context is like to search through, to read through. I think the best and most hilarious way to understand is by watching this clip from Spaceballs:
https://www.youtube.com/watch?v=hD5eqBDPMDg#2018-01-1309:11caleb.macdonaldblackI’m trying out the fulltext in datomic and it looks like it only matches whole words?#2018-01-1309:11caleb.macdonaldblacklike if I search “White” then a result like “Sam White” will return#2018-01-1309:11caleb.macdonaldblackbut if I search “Whit” then “Sam White” does not return. Am I doing something wrong?#2018-01-1309:12caleb.macdonaldblackCan I get datomic to return results that are parts of words?#2018-01-1312:08val_waeselynck@U3XCG2GBZ may be related: https://groups.google.com/forum/#!msg/datomic/8yrCYxcQq34/GIomGaarX5QJ#2018-01-1312:10val_waeselynckI generally agree with what Stuart Halloway said in this post - if you want powerful search, just send the data to a specialized store like ElasticSearch; with Datomic this is unusually easy to do thanks to facilities like the Log API and tx report queue.#2018-01-1314:43conanI've done this with elasticsearch in the past and it's a breeze - but don't forget that your elasticsearch queries are likely to be an order of magnitude slower than your datomic ones, because your app doesn't have the elasticsearch data in memory.#2018-01-1420:56caleb.macdonaldblackThanks guys. In the end I created a predicate that concatenated all the fields I wanted to search on and used a regex.#2018-01-1420:58caleb.macdonaldblackI’ve been looking up pagination and it seems that the consensus is retrieving all the eids you want and then either running another query or using pull to get my results. Is this what others are doing to limit their results?
Links to what I’ve been looking at:
https://groups.google.com/forum/#!topic/datomic/NgVviV9Sw8g
https://stackoverflow.com/questions/47373356/query-result-pagination-in-datomic
https://groups.google.com/forum/#!topic/datomic/gr9fscuF6oE#2018-01-1510:39mkvlrhey 👋 for syncing between dev/staging/production we’d like to store external db/id attributes in datomic, is there a best-practice we should follow as how to name those attributes? Anything besides http://docs.datomic.com/best-practices.html#unique-ids-for-external-keys#2018-01-1510:50chrisblomi'd avoid sharing :db/id's across databases#2018-01-1510:40mkvlrI guess it’s a similar problem as when sharding datomic or linking to other dbs#2018-01-1513:03laujensenI seem to have choked datomic. I added an index to a single attribute, and shortly after datomic killed MySQL. Now when I relaunch everything, MySql says “Got an error reading communication packets” and datomic says “Indexing retry limit exceeded”. All values a tweaked, max_allowed_package set to 1024M, log file size increased, etc. What could be causing this?#2018-01-1516:12drewverleeWhat is the methodology & tools around restricting the state of the data in datomic. Also, on a more theortical level, when does it make sense to use such restrictions? Should you avoid business logic? In my current db we have triggers like “these db values can’t share the same name”. They tend to cause confusion because its not clear when your writing a db function what triggers you might violate and so its possible to get uncaught exceptions that result in 500's.#2018-01-1517:29val_waeselynck@U0DJ4T5U1 I suggest you ask this on SO - I will provide an answer but I think it's better if others can find it#2018-01-1517:31val_waeselynckMaybe a good title would be "How to prevent transactions from violating application invariants in Datomic"#2018-01-1517:32drewverleeI'll look to the question when I get the chance, might be an hour or so#2018-01-1518:33drewverleeCould you answer the part about how to use such restrictions in slack? 
That part is probably opinionated rather than factual and so might not go over well with SO’s guidelines.#2018-01-1518:37drewverleehttps://stackoverflow.com/questions/48268887/how-to-prevent-transactions-from-violating-application-invariants-in-datomic#2018-01-1518:43drewverleeI don't think there is anything inherently wrong about constraints. But ideally there would be a way to know when you could possibly violate one, so you could handle it. I think some sort of contract (type, spec) might be able to help with that. e.g
your query can return the desired result OR possible data about one of these constraint violations.#2018-01-1518:44drewverleethat way callers would know they may need to handle it.#2018-01-1519:25val_waeselynck@U0DJ4T5U1 answered 🙂#2018-01-1519:30val_waeselynck@U0DJ4T5U1 I have not understood what part you want to discuss in Slack however.#2018-01-1610:50chrisblomis there a way to pass which AWS profile to use when connecting to datomic? I want to compare databases on DynamoDB tables in different AWS accounts, but datomic seems to always use the default aws profile#2018-01-1610:55chrisblomah nevermind, i see now that you can pass the aws access & secret key when connecting#2018-01-1700:43James VickersHas anyone compared Datomic's basic write performance against SQL databases? I wrote some tests that pit PostgreSQL against Datomic and in most cases Datomic is 2-5x slower - even with tests that always insert new records. Is that other people's general expectation also, or do I need to do some tuning (I run Datomic with defaults) to make write performance somewhat comparable with SQL databases?#2018-01-1706:28val_waeselynck@U7M6RA2KC the license forbids publishing such benchmarks (I know, it sucks).#2018-01-1706:30val_waeselynckHowever, that's not necessarily a very interesting benchmark - for most uses of a SQL db like postgres, the reads will slow down the writes, so raw write capacity won't be the limiting factor for throughput#2018-01-1706:33val_waeselynckThat Datomic has lower raw write throughput is to be expected IMO, because of the differences in the index data structures#2018-01-1713:46fmind@U06GS6P1N agree. Would you know the difference with other storage backends like cassandra or dynamoDB?#2018-01-1714:45James VickersThanks for the replies. Maybe I'll do a performance test that includes some Peers reading while writes are in progress. 
The difficulty is selling colleagues on a database with the initial premise that it might be slower than the more common one (though, like you're saying, in practice it might be faster due to separation between Peers and Transactor)#2018-01-1718:15val_waeselynck@U7M6RA2KC 2 aspects you should sell: low-latency, horizontally scalable reads + reads stay available even when writes are overwhelmed, which is a greeaaaaat situation to be in operationally#2018-01-1714:03stuarthallowayDatomic Cloud is now available! http://blog.datomic.com/2018/01/datomic-cloud.html#2018-01-1714:14Petrus Theronhttps://docs.datomic.com/clojure/ docs seem to be down#2018-01-1714:14stuarthallowayrefresh your cache#2018-01-1714:16stuarthalloway@petrus where did that inbound link come from?#2018-01-1714:16Petrus TheronA google search for "datomic.api/resolve-tempid": https://www.google.co.za/search?q=datomic.api%2Fresolve-tempid&oq=datomic.api%2Fresolve-tempid#2018-01-1714:16Petrus TheronLooks like the docs moved to sub-path /cloud#2018-01-1714:18stuarthallowaythanks @petrus ! — investigating#2018-01-1714:50Petrus TheronThe pricing layout on the AWS Marketplace is confusing. It's not obvious that the "Infrastructure Pricing Details" section is not a line total at the bottom, although it's laid out like an invoice. I didn't realise I could click between the "Datomic Cloud Bastion" and "Datomic Cloud" (do I need both? Is Bastion like a "lite" version?) so I went ahead and accepted, thinking the total cost was only that of a t2.nano, but later on it shows I will be billed for both separately? 
Link: https://aws.amazon.com/marketplace/pp/prodview-otb76awcrb7aa#2018-01-1714:51stuarthalloway@petrus we agree — that layout is compelled by AWS, and we are working with them to implement improvements#2018-01-1714:52sleepyfoxmorning#2018-01-1714:52stuarthalloway@petrus happy to explicate here#2018-01-1714:52stuarthalloway@petrus you want the bastion so you can connect to your system from the internet#2018-01-1714:55stuarthalloway@petrus if you have not yet, you might watch https://www.datomic.com/videos.html#2018-01-1714:55Petrus Theronwith Datomic Cloud, would I automatically get any updates, e.g. if a bug or vulnerability is discovered?#2018-01-1714:56stuarthalloway@petrus depends. AWS can auto update you for e.g. meltdown#2018-01-1714:56Petrus Theroncool - I mean Datomic versioning, not so much OS/VM/hardware level#2018-01-1714:56stuarthallowayBut for a Datomic bug you would need to update your CloudFormation stack after notification from us.#2018-01-1714:57stuarthallowayYou would not need to go back to the marketplace site — you could just grab an upgrade per https://docs.datomic.com/cloud/operation/upgrading.html#2018-01-1715:00Petrus TheronAre "On Prem" upgrades also rolling? I manually set up a transactor recently. No way to "migrate" my existing system to Datomic Cloud?#2018-01-1715:07sleepyfoxFor the production topology of Datomic Cloud, are writes scaled across the number of nodes in the tx group or is there still only a single transactor per database?#2018-01-1715:08stuarthalloway@petrus you can do rolling upgrades to On-Prem. Migration is an ETL job, we will provide tools but have not done so yet#2018-01-1715:10stuarthalloway@sleepyfox each db will have a preferred node for writes. Cloud will not make writing to a single db faster, but you can have many more dbs on a system as you scale horizontally. 
See https://docs.datomic.com/cloud/operation/scaling.html#2018-01-1715:11stuarthalloway@petrus notes for anyone considering On-Prem to Cloud migration: https://docs.datomic.com/on-prem/moving-to-cloud.html#2018-01-1715:13sleepyfoxOK, thanks.#2018-01-1715:20robert-stuttaford@stuarthalloway dude 🙂 btw “On-Pre” heading typo in “Other Differences” section in your moving-to-cloud doc#2018-01-1715:21robert-stuttaford@stuarthalloway, what does “Symbol magic” mean? (nm found it)#2018-01-1715:21stuarthallowayconversion of strings to symbols to help languages without a symbol type#2018-01-1715:22stuarthallowayI am looking at you, Java#2018-01-1715:22robert-stuttafordgotcha!#2018-01-1715:22robert-stuttafordhow much sleep have you had, Stu? 🙂#2018-01-1715:24johnjdoes the transactor and peer server run in the same ec2 node for the solo version?#2018-01-1715:28mitchelkuijpersReally looking forward to the cloudsearch integration, we would totally love to move over to the cloud offering#2018-01-1715:29mitchelkuijpersin the docs it says future.. Would that mean month's or should I think in that it might take a year?#2018-01-1715:37stuarthalloway@lockdown- the arch change is bigger than that — there are no transactors or peer servers, just cluster nodes. That said, on Solo there is only one node. 🙂 See https://docs.datomic.com/cloud/whatis/architecture.html for more.#2018-01-1715:37robert-stuttaford@stuarthalloway does this mean that cloud could potentially handle more writes somehow?#2018-01-1715:38stuarthalloway@robert-stuttaford more per system: yes, more per db: no#2018-01-1715:38robert-stuttafordright#2018-01-1715:39stuarthalloway1 transactor + 1 for HA <= N cluster nodes#2018-01-1715:40robert-stuttafordok - so the solo topology is actually a transactor and a peer-server rolled into one, and adding node 2 takes you to HA writes + load balancing and adding more nodes adds more read scale after that. 
right?#2018-01-1715:40robert-stuttafordof course, this summary ignores all the changes on the storage layer#2018-01-1715:40stuarthallowaythere are no transactors, any cluster node can handle any write#2018-01-1715:41stuarthallowaythat won’t allow more writes per db because the underlying CAS in DDB is still the gatekeeper#2018-01-1715:41robert-stuttafordaha, gotcha!#2018-01-1715:41robert-stuttafordthat’s rad#2018-01-1715:41stuarthallowaybut you can have many more dbs#2018-01-1715:42robert-stuttafordis there a different theoretical datom limit for cloud?#2018-01-1715:43robert-stuttafordthe on-prem limit is due to peer memory to hold the roots. i guess peer-server has a similar concern?#2018-01-1715:43stuarthallowaydbs and query still use the same data structures, so nothing is really different there#2018-01-1715:44robert-stuttafordright#2018-01-1715:45bmaddy@stuarthalloway I (and some of the people I'm working with) think your comment about just having cluster nodes is pretty darn cool. It might be worth considering putting that on the main marketing page. I'm excited to try this thing out!#2018-01-1715:46stuarthalloway@bmaddy thanks! Will consider.#2018-01-1715:47robert-stuttaford@stuarthalloway can we now control read-only vs read/write at the connection level? (with client / cloud)#2018-01-1715:47stuarthalloway@mitchelkuijpers TBD on search integration, user demand will drive#2018-01-1715:48mitchelkuijpersThat makes sense#2018-01-1715:48eggsyntax@stuarthalloway re "there are no transactors or peer servers, just cluster nodes" -- the move away from a single-transactor model seems huge (and like it must have been extremely challenging to do while maintaining same guarantees).#2018-01-1715:49stuarthalloway@robert-stuttaford IAM integration is pretty deep, so you can give a client IAM creds that are e.g. 
read only for a db https://docs.datomic.com/cloud/operation/access-control.html#sec-2#2018-01-1715:50robert-stuttafordperfect#2018-01-1715:50stuarthalloway@eggsyntax at the risk of making it sound less cool, that part was actually pretty easy#2018-01-1715:50eggsyntaxAwww 😉#2018-01-1715:51stuarthallowaytransactors always had that guarantee — if we removed the code that manages HA, you could have N transactors. Semantics would be fine but perf would be terrible#2018-01-1715:52eggsyntaxInteresting, I never realized that.#2018-01-1716:07shaun-mahoodCongrats to the whole Datomic team - really glad you managed to get through all the AWS hoops!#2018-01-1716:07eggsyntaxSeconded! Exciting stuff 🙂#2018-01-1716:08stuarthallowaythanks! it has been intense working through the process#2018-01-1716:15chrismdpHello! Just using your Cloud Formation template now - is there a way of setting up the template to use an existing VPC?#2018-01-1716:15stuarthalloway@cp not at present. We did a bunch of testing and found too many variables to support#2018-01-1716:16chrismdp@stuarthalloway ok, thanks#2018-01-1716:16stuarthalloway@cp happy to discuss in more depth if that is a blocker#2018-01-1716:17stuarthallowayNote that the bastion lets you have dev access from the internet if you want#2018-01-1716:17chrismdpWe’re just figuring out how we’d connect our running services from different VPCs into the nodes#2018-01-1716:18stuarthallowayyes, that seems to be the marketplace preferred way#2018-01-1716:19chrismdpI’m guessing we do that via the bastion#2018-01-1716:19marshallhttps://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html#2018-01-1716:19chrismdp^^ that makes sense#2018-01-1716:20stuarthallowayuse peering (not the bastion) for prod#2018-01-1716:20stuarthallowayyour security auditor will tell you the same thing 🙂#2018-01-1716:21chrismdpcool - that makes sense thanks#2018-01-1716:22chrismdpcongrats on the launch!#2018-01-1716:23chrismdpone more quick question: why is the API 
endpoint a http://datomic.net url? (cf https://docs.datomic.com/cloud/getting-started/connecting.html#creating-database) - I’m wondering how the plumbing all fits together#2018-01-1716:42stuarthalloway@cp that name lives only locally inside Datomic Cloud’s VPC, allowing clients to have a stable name to connect to#2018-01-1716:43chrismdpcool - thanks#2018-01-1716:42souenzzo[aws/client - off] I'm on peer API. as-of db's can't be with-ed? I'm trying to do that, but it's not working
(let [db-as-of (d/as-of db t)
;; :my-attr is "isComponent true"
{:keys [db-after]} (d/with db-as-of [[:db/retract :my-attr :db/isComponent true]])]
(:db/isComponent (d/entity db-after :my-attr))
;; => true
)
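One way to read this behavior (a hedged interpretation, using a plain non-schema attribute and placeholder names like user-eid and :user/email): as-of is a filter on t, so the speculative datoms added by d/with land after the as-of point and are filtered out of db-after:

```clojure
;; Untested sketch. Speculating against an as-of db "succeeds" (tx-data
;; looks right) but the as-of filter hides the new datoms, because their
;; t is later than the as-of point.
(let [past (d/as-of db t)
      {:keys [db-after]} (d/with past [[:db/add user-eid :user/email "new@example.com"]])]
  (:user/email (d/entity db-after user-eid)))
;; still shows the old value - the speculative datom is filtered out
```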
#2018-01-1716:43stuarthalloway@souenzzo that is correct, you cannot with an as-of db#2018-01-1716:44stuarthallowayintuition: as-of is not time travel, it is a filter#2018-01-1716:45souenzzoIs it in the docs? It fails with no error. Even tx-data on return is "correct". Very difficult to debug.#2018-01-1716:46stuarthallowayHm, it should be, I know it has come up before.#2018-01-1716:47favila(Feature request: https://receptive.io/app/#/case/26649 )#2018-01-1716:51eggsyntaxLooks like that requires login to view, and no public signup that I can see.#2018-01-1716:52eggsyntax(not a problem for me personally; I was just going to look out of curiosity)#2018-01-1716:52favilahuh. I got to it via https://www.datomic.com/support.html click "feature request"#2018-01-1716:53favilanow that I think about it you probably need a http://my.datomic.com account#2018-01-1716:53eggsyntaxI have one but I'm not signed in; I'll see if that changes it.#2018-01-1716:54eggsyntaxNope, still doesn't work -- unless you're suggesting I should actually put my http://datomic.com name/pwd into http://receptive.io.#2018-01-1716:55eggsyntaxNor can I get to it via the "feature request" link on http://datomic.com.
But again, NBD for me, just letting you know.#2018-01-1716:55favilaThat may have been what I did#2018-01-1716:55eggsyntaxAha! Clicking a different "feature request" link did work. 😜#2018-01-1716:56favilawhere?#2018-01-1716:56eggsyntaxThe one at the very top of the page.#2018-01-1716:56eggsyntaxNext to "log out"#2018-01-1718:22val_waeselynckPlease do upvote the request, we're really missing out on something great there 🙂#2018-01-1718:27favilaMay I point this one out too? https://receptive.io/app/#/case/49752#2018-01-1716:47stuarthalloway@favila until then it should at least throw an exception. Logging it for a future release.#2018-01-1716:48stuarthalloway@souenzzo sorry that threw you, we will make it more failfast#2018-01-1717:07denik@stuarthalloway re: cloud, is there a transition path from solo to production? Also the pricing (since it only shows EC2) is the same. At equivalent usage (what solo can handle) is prod expected to be more expensive? If so, how much?#2018-01-1717:08stuarthalloway@denik you can transition from Solo to Production with a CloudFormation upgrade https://docs.datomic.com/cloud/operation/upgrading.html#2018-01-1717:09stuarthalloway@denik if you are looking at the marketplace pricing, their site is currently not capable of telling you which instance types go with which topologies.#2018-01-1717:10stuarthallowaySolo runs on a single t2.small, and Production runs (typically at least 2) i3.larges#2018-01-1717:11shaun-mahoodI'm going through the setup right now, and for Oregon I got $21/month on solo and $233/month on production as the estimated costs. I think it said that was for 2 i3.large.#2018-01-1717:11stuarthallowayBut there is no fixed equation: pricing is per hour, and running >1 instance for availability is up to you. So Production cost will vary substantially with use#2018-01-1717:12stuarthalloway@shaun-mahood that sounds reasonable — start with Solo until you need more#2018-01-1717:12denikOf course. Thanks @stuarthalloway. 
I’m very excited!#2018-01-1717:12shaun-mahoodOh yeah, solo will be overkill probably for what I'm doing - I was just curious since the estimator was kind of impenetrable to understand.#2018-01-1717:12stuarthalloway“kind of” is generous#2018-01-1717:14stuarthalloway@shaun-mahood belay that, the sales dept informs me you should run Production for everything, and forget you turned it on 🙂#2018-01-1717:14marshallwith an ASG size of 12#2018-01-1717:14marshallat least#2018-01-1717:14shaun-mahoodStill cheaper than an on-prem license 🙂 (I mean, without Marshall's "help")#2018-01-1717:19stuarthalloway@shaun-mahood yes. That text is inside the CFT, so triggers a deeper review process. We will fix it the next time we navigate that process.#2018-01-1717:19stuarthallowayOTOH, we can change http://docs.datomic.com with an S3 push, so if the text-about-the-text could be better that is an easy fix 🙂#2018-01-1717:20shaun-mahoodSo I'm guessing you have something you can point people to when they complain that requiring a jira ticket to change a docstring is too onerous? 🙂#2018-01-1717:21stuarthalloway@shaun-mahood btw did you watch the video walkthrough? It is short https://www.datomic.com/videos.html#2018-01-1717:23rapskalianGot a solo stack launched and transacted some movies. This is awesome 🙂#2018-01-1717:23shaun-mahoodNot yet, the setup so far has been exceptionally straightforward and easy from the setup instructions. Just waiting for the stack to finish creating in the cloudwatch dashboard right now.#2018-01-1717:24stuarthalloway@cjsauer hooray! Out of curiosity, are you running as AWS owner or did you do the “authorize a Datomic admin with IAM” step?#2018-01-1717:24rapskalian@stuarthalloway I used the admin group setup route and tested it with my non-root account. 
Worked like a charm.#2018-01-1717:25stuarthallowaysuper cool!#2018-01-1717:25marshallfabulous!#2018-01-1717:25marshallthat’s great to hear#2018-01-1717:25stuarthalloway“easy” vs “securious” always a challenge#2018-01-1717:26stuarthallowaybtw I think I just made that word up, it means “serious about security”#2018-01-1717:26marshalli thought it meant curious about security#2018-01-1717:27rapskalianShort for supersecuriousticexpialadocious#2018-01-1717:49sleepyfoxHave moved past the movie example and moved to importing real data. We used existing IAM users authorised with the datomic policy.#2018-01-1717:50sleepyfoxHave found it to be pretty straightforward, although AWS's console UI is (as usual) awful.#2018-01-1717:51ljosaThe blog post should have info like this in it: https://clojurians.slack.com/archives/C03RZMDSH/p1516209093000142#2018-01-1717:51ljosaRight now, it looks like the ~$30 estimate for production with t2 in the AWS console is real.#2018-01-1718:13marshall@sleepyfox awesome! Glad to hear it#2018-01-1718:14deniklittle typo at https://docs.datomic.com/cloud/getting-started/connecting.html Install the Clojure#2018-01-1718:15marshallthanks @denik#2018-01-1718:23marshallpushed a fix for that typo#2018-01-1718:55denikstuck trying to run the socks proxy. aws ec2 describe-instances... returns the system name, aws iam get-user returns the user in the iam group, yet running the script returns Datomic system <my-system-name> not found, make sure your system name and AWS creds are correct.#2018-01-1718:57stuarthallowayHi @denik. Triple check your spelling of the system name and AWS region wherever they appear#2018-01-1718:58denik@stuarthalloway already did, before the final message it also prints
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument command: Invalid choice, valid choices are:
acm | apigateway
autoscaling | cloudformation....
#2018-01-1718:59stuarthalloway@denik that sounds like an argument is not getting to an invocation of the AWS CLI inside the script#2018-01-1719:01marshall@denik can you check the version of your aws cli?#2018-01-1719:02marshallaws --version#2018-01-1719:02denik@marshall aws-cli/1.10.24 Python/2.7.10 Darwin/16.7.0 botocore/1.4.15#2018-01-1719:02marshalli think your CLI might be too old#2018-01-1719:02marshallaws-cli/1.11.170 Python/2.7.14 Darwin/17.3.0 botocore/1.7.28#2018-01-1719:02marshallis what i’m using#2018-01-1719:02marshalli’m wondering if the version you have doesnt have the required aws sub-command(s)#2018-01-1719:03marshall#2018-01-1719:04marshallerr that should be $ pip install awscli --upgrade --user#2018-01-1719:04marshall🙂#2018-01-1719:04dustingetzIs this channel logged anywhere#2018-01-1719:05jarethttps://clojurians-log.clojureverse.org/datomic/#2018-01-1719:05dustingetzShoot, logs at clojurians-log are busted, the don’t index anymore#2018-01-1719:05jaretbut that only has to 11-16#2018-01-1719:17shaun-mahoodI got everything up and running - the getting started docs and videos are excellent, thanks for putting in the effort to make them so solid.#2018-01-1719:19jaretGlad to hear it @shaun-mahood!#2018-01-1719:19jaret@denik did upgrading the CLI work for you?#2018-01-1719:30denikcurrently yak shaving over the update process. I’ll keep you posted#2018-01-1719:32denikupdating did the trick. it works!#2018-01-1719:34jaretThanks! We’re making a note of that and will update the docs to reflect the need to upgrade.#2018-01-1719:38denikGreat. Note: I spend most time trying to update the CLI using aws official guide which didn’t work on OSX. That’s why I though I had updated even though I did not.#2018-01-1719:39denikthe guide https://docs.aws.amazon.com/cli/latest/userguide/installing.html#2018-01-1719:39denikbrew install awscli did the trick#2018-01-1719:40jaretok, good to know#2018-01-1719:57denikWhat is a query-group? 
https://docs.datomic.com/cloud/getting-started/connecting.html#creating-database#2018-01-1720:07jaretQuery groups are coming soon as a means for scaling reads https://docs.datomic.com/cloud/operation/scaling.html#sec-3#2018-01-1720:07jaretBut they are not yet fully implemented 🙂#2018-01-1720:08jaretWe’ll have more documentation when they are#2018-01-1720:08jaret@denik ^#2018-01-1720:12denikthanks @jaret for two separate projects, is the idea to create two different databases in the same system or two different systems each with one db?#2018-01-1720:18jaretNo the intention is more for An AutoScaling Group (ASG) of Nodes used to dedicate bandwidth, processing power, and caching to particular jobs. Unlike sharding, query groups never dictate who a client must talk to in order to store or retrieve information. Any node in any group can handle any request.#2018-01-1720:19jaretOh apologies, I misunderstood your question and thought we were still discussing query groups.#2018-01-1720:23jaret@denik Generally, you should create two separate stacks for separate projects. But can you tell me more about the projects? will they share data?#2018-01-1720:29jaret@denik important to note that two databases in the same system is totally doable in cloud. But its the sort of thing that I’d need more details on in order to provide a full recommendation.#2018-01-1720:37denik@jaret I see, they wouldn’t share data. I’d just like to have a system to quickly spin up durable experiments. Most of them will be deleted eventually (after months), but I’d want to migrate the ones that get traction into a new system eventually. It’s a bit like a (into new-system (select-keys databases [proj-specific...]))#2018-01-1720:38jaretYeah, that use case would be fully supported in one system. 
And definitely recommended.#2018-01-1720:54denik@jaret as well as migration to a new system?#2018-01-1720:54marshallTBD#2018-01-1720:55marshallthere is not currently an API to “move” a db from one system to another#2018-01-1720:55stuarthallowaylikely to be some interesting requirements there ^^ e.g. re-encrypting#2018-01-1720:58ljosaDoes Datomic Cloud include support equivalent to what we get with the $5k on-prem license?#2018-01-1721:00viestihmm, appearance of Datomic Cloud made me look again on this (announced at last reInvent so quite new): https://aws.amazon.com/about-aws/whats-new/2017/11/aws-privatelink-on-aws-marketplace-now-available/#2018-01-1721:06marshall@ljosa Support is covered here: https://www.datomic.com/pricing.html (bottom of page)#2018-01-1721:06marshallwe will be enabling AWS Marketplace PSC (basically opt-in to sharing your support contact info) asap#2018-01-1721:07marshallat which point all users who are subscribed will have access to submit tickets/etc
24x7, shorter SLAs, etc are available as a separate support contract#2018-01-1721:10ljosaYes, I saw that but I wasn't sure how to interpret it. There was a question internally whether support was a reason to stay with the on-prem edition. I wasn't sure whether the support included with Datomic Cloud was equivalent to what we have now with Datomic Pro or whether we'd have to add a separate support contract.#2018-01-1721:10marshallgotcha#2018-01-1721:11marshallthe included support with On-Prem (nee Pro) carries a shorter SLA (2 day IIRC)#2018-01-1721:12marshallif you’re already using on-prem, I’d highly suggest you look at: https://docs.datomic.com/on-prem/moving-to-cloud.html#2018-01-1721:12marshallCloud is a new product, with different underlying structures and requirements, so it’s not “plug and play” WRT moving from pro to Cloud#2018-01-1721:18ljosaThanks! Don't worry, it would take us some time to be ready to switch over … for one thing, we'd have to turn our peers into clients.#2018-01-1721:18marshallcool. just wanted to make you (and everyone) aware of the diferences#2018-01-1721:41ljosado you have plans to eventually release client libraries for other languages, such as javascript?#2018-01-1721:49jaret@ljosa We’re tracking which languages users want on our “suggest a feature” page located by following the link at the top right on http://my.datomic.com. We’re prioritizing issues here and looking for input on which clients are most desired.#2018-01-1721:50jaretSo far we have a request for python (https://receptive.io/app/#/case/18126), elixir (https://receptive.io/app/#/case/17919), and javascript (https://receptive.io/app/#/case/17965).#2018-01-1721:50ljosadoes it matter whether I'm logged in with the paid company account or my own unpaid account?#2018-01-1721:50ljosaerlang here: https://receptive.io/app/#/case/17908#2018-01-1721:51jaretAll votes count, but we weigh larger organizations that represent teams of developers over individual users. 
I’d recommend voting from your company account or an email with the same domain.#2018-01-1721:53ljosaok, I voted for javascript and erlang from [email protected]!#2018-01-1721:59shaun-mahood@jaret: Any idea on the possibility of a CLJS client? I couldn't find an issue on receptive but I have this feeling that someone talked about it at some point.#2018-01-1722:01jaretIt’s on our radar, but we should log a receptive request so we can gather customer feedback. I am going to log that one right now.#2018-01-1722:04timgilbertRe: clients, I'd personally favor the work being put into a documented wire protocol versus like three language-specific clients. But I guess once the first one is released the protocol will be de facto documented#2018-01-1722:05jaretYeah it’s always been the intention to create more language libraries for Client and enable our customers to create their own. So that approach is being considered.#2018-01-1722:08shaun-mahoodThere are 2 questions that I can't find guidance on in the docs - I've got hunches for both but they may be totally out to lunch.
- Is there a recommended way to connect to datomic cloud from an application not running on AWS?
- Is there a story for how to backup and restore a DB?#2018-01-1722:10stuarthalloway@shaun-mahood re q2, there will be N stories for different purposes#2018-01-1722:11stuarthallowaye.g. disaster recovery vs. moving db somewhere else vs redundant copies for safety#2018-01-1722:12stuarthalloway… but not done yet, stay tuned#2018-01-1722:12stuarthalloway@shaun-mahood re q1, for dev you should connect through the bastion server. Is that what you mean?#2018-01-1722:16shaun-mahoodI'm thinking of running our app locally rather than on AWS (at least for a while).#2018-01-1722:17shaun-mahoodI'm planning to integrate it with existing locally hosted storage and systems and gradually replace all the non-datomic and non-clojure bits, then pop the app hosting out to AWS.#2018-01-1722:21stuarthalloway@shaun-mahood we intend to make tooling to help with that, but it is not ready yet#2018-01-1722:21shaun-mahoodThis is my first real foray into AWS (outside of really S3 and really simple things), but I've lost my socks proxy connection once already so I assume it's not a good production connection.#2018-01-1722:23shaun-mahoodI'm glad I'm at least asking questions that make sense and are on the radar 🙂#2018-01-1722:24stuarthalloway@shaun-mahood correct, socks proxy is for dev#2018-01-1805:14olivergeorgeCongratulations for launching Datomic Cloud!#2018-01-1805:28olivergeorgeAlso struggling with the "Authorize a Group" bit. Am I in the right place?#2018-01-1808:25olivergeorgeI think I'm getting there. Copy in this section might benefit from a review.#2018-01-1808:52olivergeorgeYeah, totally works. Nice.#2018-01-1808:52val_waeselynckI had not taken too much interest in Datomic Clients (and thus Datomic Cloud) so far because I assumed db.with() could not be supported - but I now see that it is. How does it work? 
Does it hold a with-ed database on the server-side, or does it re-apply the speculative transaction on each subsequent query?#2018-01-1813:01marshall@olivergeorge glad you got it sorted#2018-01-1813:13stuarthalloway@val_waeselynck Datomic Cloud holds the with db on the server#2018-01-1813:17val_waeselynck@stuarthalloway interesting, how does this work w.r.t resource reclamation?#2018-01-1813:18stuarthallowayReclaimed when space needed#2018-01-1813:19val_waeselynck@stuarthalloway same thing with acquiring the current db of a connection from a client?#2018-01-1813:21stuarthallowaythat can always be recovered via a filter#2018-01-1813:22val_waeselynck@stuarthalloway not really, you can't db.with an asOf db properly#2018-01-1813:22val_waeselyncknor can you emulate it properly with a filter AFAICT#2018-01-1813:24stuarthallowayYou can’t db.with an asOf db at all, that is not supported in any version of Datomic.#2018-01-1813:31val_waeselynck@stuarthalloway let me ask a bit differently: 1- from a client, when using conn.db(), to what extent can I rely on the returned value not being deleted from under me? 2- same question with db.with().#2018-01-1813:33stuarthallowaynormal db value cannot go away, always recoverable via filter#2018-01-1813:33stuarthallowaywith value can be dumped from the cache#2018-01-1813:33stuarthallowayonce we release query groups you could have 1 or more groups of machines dedicated to with queries, so they are not competing with other uses of the system#2018-01-1813:34stuarthallowayor dedicated to any other read task you wish to isolate, for that matter#2018-01-1813:37val_waeselynck> normal db value cannot go away, always recoverable via filter
@stuarthalloway and this recovery is automatic, right?#2018-01-1813:45stuarthallowayyes#2018-01-1814:18val_waeselynckGot it. So no long-lived with'ed dbs, at least not in production#2018-01-1814:22stuarthallowayWe will update the docs to make this more clear. Thanks!#2018-01-1815:16souenzzoI think that I don’t understand one thing about the client api
(let [db (d/db conn) ;; basis-t = 42
      data (d/q MY-QUERY-1 db)
      tx (long-computation-10h data)
      {:keys [db-after]} (d/with db tx)]
  (d/q MY-QUERY-2 db-after))
- how does the peer know that it can't "free" the basis-t 42?
- if the peer frees basis-t 42 and then receives it again, it can recover it by basis-t, but will not be allowed to do the with..
how do the peer and clients talk in this case?#2018-01-1816:04stuarthalloway@U2J4FRT2T I don’t think I understand the question#2018-01-1816:06val_waeselynckI think the example shows that d/with can legitimately be called on a db value that may or may not be resolved using asOf on the server side#2018-01-1816:08stuarthalloway@val_waeselynck will investigate, thanks#2018-01-1816:09val_waeselynck(secretly hoping that this will result in filing a bug leading to 😛#2018-01-1816:10souenzzoThe question can be:
- (d/db conn) on the peer returns basis-t 42
- a client that is operating on a db at t=20 requests that this peer do a d/with over t=20.
Will the peer use (d/as-of db 20), or some other "internal dark magic"?
If it uses d/as-of, it will not be allowed to do the with requested by the client.#2018-01-1816:11stuarthalloway@U2J4FRT2T got it, will get back to you#2018-01-1816:11stuarthallowaythanks!#2018-01-2212:35souenzzoSorry, but I'm really curious about that. 🙂#2018-01-1814:08chrisblomdoes anyone know of a library for managing users & permissions using datomic?#2018-01-1814:17roklenarcicquick question: I've noticed that Cloud variant is offering a different feature set than OnPrem. Are you planning to make two very divergent products? I see that you're excited by cloud, but there are some applications where AWS is not an option.#2018-01-1814:17stuarthallowaydivergence is not an objective 🙂#2018-01-1814:18stuarthallowaybut AWS provides a much richer shared baseline on which to build#2018-01-1814:19stuarthallowayso there will be continue to be differences#2018-01-1814:21roklenarcicI understand that there will be options (like CloudSearch integration), which integrate with services offered by AWS, so obviously you can't use those without access to AWS.#2018-01-1814:43potetmIs there any chance that on-prem will get the “generic node” notion? (i.e. no explicit txor needed, per-database dispatching for a node group)#2018-01-1814:44val_waeselynckI do hope the Peer model will continue to be well supported though. In my case / opinion, that's where most of the leverage lies, and I don't think I would have made the switch to Datomic if there were only clients (however comfortable Datomic Cloud makes them).#2018-01-1814:45potetmYeah my perception is the same. I’ve gotten a lot of leverage from on-board client caching/lazy entity crawling.#2018-01-1814:46potetmBut I’ve not used the Client API. Perhaps the difference isn’t as stark as I imagine.#2018-01-1814:48potetmBut the “node” thing, multiple durable locations, and encrypted at rest are all pretty rad. 
Would love to see at least a few of those end up in on prem.#2018-01-1814:50stuarthalloway@potetm yes, On-Prem may get nodes, encryption at rest, etc.#2018-01-1814:51stuarthallowayand conversations like these help us prioritize, thanks!#2018-01-1814:52stuarthallowayand Cloud will have a more complete story for keeping code and data colocated, although not necessarily the peer model#2018-01-1815:15johnjfor cloud solo, why only two options for the node? (t2.small and i3.large) $30 vs $224, more middle ground would be nice, also, why an i3.large, what is the NVMe SSD used for?#2018-01-1815:16marshallsolo uses only a t2.small
production uses only i3.large instances#2018-01-1815:16marshallthe configuration on the marketplace page that shows them both is a limitation of how Marketplace listings work#2018-01-1815:17johnjah i3.large shows as an option for solo#2018-01-1815:17johnjok#2018-01-1815:17marshallthe SSD provides a large local cache#2018-01-1815:18johnjoh, per the docs I though only EFS was used for that#2018-01-1815:37cch1Is the datomic-socks-proxy the only means of accessing datomic in the cloud? We use a VPN tunnel to connect to our VPC when developing locally -it supports accessing AWS services transparently as though our local machine were in the VPC. Having to run the SOCKS proxy as well seems like a waste.#2018-01-1815:38cch1Docs say To run Datomic Cloud, currently you must have an AWS Account that supports only EC2-VPC in the region in which Datomic Cloud runs. When is that requirement expected to be lifted?#2018-01-1815:38marshallas long as you can resolve the endpoint that should work#2018-01-1815:39cch1OK. That is promising.#2018-01-1816:06timgilbertSay, random question but can anyone point me to some good open-source datomic databases / schemas similar to the mbrainz data set used in the tutorial?#2018-01-1816:11timgilbertI did find the seattle sample data included with the distro, too...#2018-01-1816:21stuarthalloway@timgilbert there are a bunch of small examples at https://github.com/cognitect-labs/day-of-datomic-cloud/tree/master/tutorial#2018-01-1816:23uwocould running backups somehow cause this failure on the transactor?
Critical failure, cannot continue: Critical background task failed
ActiveMQInternalErrorException[errorType=INTERNAL_ERROR message=AMQ119000: ClientSession closed while creating session]
#2018-01-1816:23uwo(on-prem)#2018-01-1816:24stuarthalloway@uwo running backups can overwhelm your storage depending on config#2018-01-1816:25stuarthallowayparticularly if you are running on DynamoDB which is provisioned#2018-01-1816:25uwothat backup is continuing, it’s only the transactor that’s falling over with a failed heartbeat. we’re using sqlserver as a store#2018-01-1816:26stuarthallowayI haven’t seen sqlserver get so overwhelmed that this problem happens#2018-01-1816:27uwothx!#2018-01-1816:29stuarthalloway@uwo if heartbeat interval goes wonky just before failure, be suspicious of storage#2018-01-1816:41marshall@cch1 We originally endeavored to have the system work with both EC2 Classic and EC2-VPC. There is an inherent limitation between EC2 Classic and CloudFormation. We raised this issue with AWS Support and received the following response:
“In EC2-Classic, Fn::GetAZs returns all the available AZs including the ones that you do not have access to. In EC2-VPC, Fn::GetAZs returns only AZs you have access to and which have a default subnet. So if a customer removes all but one of their default subnets, Fn::GetAZs will only return the AZ where the remaining subnet resides. Then if you try to use Fn::Select to get the second and third subnets, you will get an error because Fn::Select will try to reference an index that doesn’t exist. This is the downside of this approach.
Unfortunately, i checked internally and we do not seem to have a workaround in place to fix this. So if you have a mix of EC2-Classic and EC2-VPC enabled for your account this approach may not be ideal for you”
The CFTs we provide for Marketplace are generic and use discovery to set up VPCs/AZs.
We’re considering options for you. I’ll get back to you by next Wed.#2018-01-1816:44DesmondDoes anyone have any idea why I might be seeing a 10-15 second response time for the first request after restarting my Peer? After that response times are sub-second. I have a small dataset so I'm trying to distinguish between whether the initially slow response is due to the cache not being full yet, which would be concerning as the data grows, or due to the Peer establishing a connection with the Transactor, which I wouldn't really care about.#2018-01-1817:00favilaThe very first d/connect call for a db always takes a few seconds and appears to be a fixed cost in my experience#2018-01-1817:01favilawell, it varies by network speed#2018-01-1817:02favilaI suspect part of what is happening is transferring the tx-log since the last index#2018-01-1818:05Desmondok, good to know. I won't sweat it.#2018-01-1816:45Desmondrunning on dynamo#2018-01-1816:45stuarthalloway@captaingrover peer has to reload your database#2018-01-1816:46stuarthallowaythat is a bounded cost, won’t get bigger as data grows#2018-01-1816:47stuarthallowayin a load-balanced deployment setting, you could load databases you want hot in the peer before telling the load-balancer you are ready for requests#2018-01-1816:50Desmond@stuarthalloway great! that's what I wanted to hear.#2018-01-1816:51DesmondI don't need the fancy load-balancer setup yet but I will definitely keep that in mind#2018-01-1817:27luchiniAnyone else having the following problem when creating a Datomic Cloud Solo?#2018-01-1817:27luchini> The following resource(s) failed to create: [ExistingS3Datomic, ExistingTables, ExistingFileSystem, EnsureEc2Vpc].#2018-01-1817:30marshall@luchini are you able to see any additional details from the CFT errors?#2018-01-1817:31luchini@marshall this is the whole error:#2018-01-1817:31luchiniThe following resource(s) failed to create: [StorageF7F305E7]. . Rollback requested by user.
Embedded stack arn:aws:cloudformation:us-east-1:332243152968:stack/datomic-cloud-solo-test-StorageF7F305E7-1NVIQEOD7DDRO/7036b990-fc74-11e7-be75-500c28635c99 was not successfully created: The following resource(s) failed to create: [ExistingS3Datomic, ExistingTables, ExistingFileSystem, EnsureEc2Vpc].
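Marshall's console walk-through below (failed and deleted stacks, then the Events list, then the first failure) can also be done from the CLI: `aws cloudformation describe-stack-events` accepts the full stack ARN from an error like the one above, which is how you address a deleted nested stack. A sketch; the `first_failures` helper and the sample event lines are illustrative, not real AWS output:

```shell
#!/bin/sh
# Sketch: pull the first CREATE_FAILED events out of describe-stack-events
# text output, e.g.:
#   aws cloudformation describe-stack-events --stack-name <stack-arn> --output text
# (for a deleted nested stack, --stack-name must be the full ARN).
first_failures() {
  grep 'CREATE_FAILED' | head -n 5
}

# Illustrative, abbreviated event lines (not real AWS output):
sample='STACKEVENTS ExistingS3Datomic CREATE_FAILED Failed to create resource.
STACKEVENTS ExistingTables CREATE_COMPLETE -'
printf '%s\n' "$sample" | first_failures
```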
#2018-01-1817:32marshallAh. So you can go look at the nested Storage Template and see if it reports an issue#2018-01-1817:33luchiniIs there a way to see the Storage Template logs separately? (I’m a complete noob in CloudFormation)#2018-01-1817:34marshallunder failed & deleted stacks#2018-01-1817:34marshallyou can see the storage one#2018-01-1817:34luchiniI did found the Storage Template itself (from your S3) and was reading it through… but it will take me on a tangent 🙂#2018-01-1817:36marshallYou should see the Storage stack in the CloudFormation stack list#2018-01-1817:36marshallhttps://console.aws.amazon.com/cloudformation/home#2018-01-1817:36luchiniFound it. It says the failure log is on CloudWatch#2018-01-1817:36marshallit may say Failed or RolledBack in status#2018-01-1817:36marshallthen go to Events#2018-01-1817:36marshalland you can find the first event that failed#2018-01-1817:37luchiniExistingS3Datomic Failed to create resource. See the details in CloudWatch Log Stream: 2018/01/18/[$LATEST]d0fb81bad8dd4f83917b5428f051f03f#2018-01-1817:37marshallhave you tried to launch unsuccessfully prior to this attempt?#2018-01-1817:37marshallor succesfully for that matter?#2018-01-1817:37luchiniyup… a few times… same result always#2018-01-1817:39luchiniI can’t find any left over S3 bucket from the previous runs#2018-01-1817:39marshallso i suspect that the first time you had some kind of failure, but now when you try to re-create with the same name you’re hitting a separate issue; if some of the parts of the system were created the first failed attempt they may interfere with creating one with the same name#2018-01-1817:40luchiniEven if I can’t find those resources? Like in some kind of delayed naming cache#2018-01-1817:44luchiniYou are partially right @marshall. Tried with a different stack name and it got me a bit further but still failed with The following resource(s) failed to create: [EnsureEc2Vpc]. 
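An `EnsureEc2Vpc` failure is the template detecting EC2-Classic, per the AWS support quote earlier in the thread. A region's supported platforms can be listed and classified as sketched below; the real check is the `aws ec2 describe-account-attributes` call shown in the comment (region name illustrative), while `vpc_only` is a hypothetical helper over its output:

```shell
#!/bin/sh
# Sketch: Datomic Cloud requires an account/region that supports *only*
# EC2-VPC. The platform list can be fetched with (region illustrative):
#   aws ec2 describe-account-attributes --region us-east-1 \
#     --attribute-names supported-platforms --output text
# vpc_only is a hypothetical helper classifying that list.
vpc_only() {
  case " $1 " in
    *" EC2 "*) return 1 ;;  # EC2-Classic enabled: EnsureEc2Vpc will fail
    *" VPC "*) return 0 ;;
    *)         return 1 ;;
  esac
}

vpc_only "VPC"     && echo "VPC-only account/region: ok for Datomic Cloud"
vpc_only "EC2 VPC" || echo "EC2-Classic still enabled: use another region or account"
```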
#2018-01-1817:44marshallaha!#2018-01-1817:44marshallone sec#2018-01-1817:44luchini^ in the storage template#2018-01-1817:44marshallhttps://docs.datomic.com/cloud/setting-up.html#aws-account#2018-01-1817:45marshallas was discussed above with cch1, Datomic Cloud requires EC2-VPC#2018-01-1817:45marshallfor now you can either create a new AWS account or start up in a new region in the same account that supports EC2-VPC#2018-01-1817:45luchiniah…. the keyword here is “only EC2-VPC”, correct?#2018-01-1817:46marshallyep#2018-01-1817:47luchinibecause I assumed my region did support EC2-VPC (but it also has classic for some old stuff that has not migrated yet)#2018-01-1817:47marshallright; if it’s a classic it won’t work#2018-01-1817:47luchiniand now I see in the template FailIfEc2ClassicLogGroup 😄#2018-01-1817:48luchinithanks @marshall. This was tremendously helpful#2018-01-1817:48marshallyep; i suspect that log would say the same thing as I just did ^#2018-01-1817:48marshallno problem#2018-01-1817:49luchinitip here: maybe rename EnsureEc2Vpc to EnsureEc2VpcOnly (I know, a bit pedantic, but semantics always help 😄 )#2018-01-1817:49luchinicongrats to the Cognitec team… we are stoked over here with Datomic Cloud#2018-01-1820:04donmullen@marshall I have a large import I’d like complete to Datomic Cloud. Using local peer against local data store - I can configure transactor to speed things up a bit - and I understand one can adjust DynamoDB for import using On-Prem. Are there similar adjustments that can be made for Datomic Cloud? I’m a new to CloudFormation - but if you have pointers to docs - or advice that’s be appreciated. Local import currently takes several hours to a datomic:dev database - and I’m doing all the batching/pipelining that is recommended for large imports.#2018-01-1820:09stuarthalloway@donmullen Cloud will autoscale DDB for you, so the import will start slow and speed up, then scale DDB back down automatically#2018-01-1820:10donmullen@stuarthalloway magic. 
Congrats on the launch.#2018-01-1820:10stuarthalloway@donmullen Cloud will make much less use of DDB than an equivalent On-Prem import#2018-01-1820:10stuarthallowayno DDB writes for indexing#2018-01-1820:10donmullenTiming was perfect - was just getting ready to spin up On-Prem on AWS.#2018-01-1820:12stuarthalloway@donmullen how big is the dataset? Do you hope to stay in the Solo topology?#2018-01-1820:16donmullenHoping to stay for the short term as we work on queries and data analytics - but the data is very large - millions of rows across five primary data sets and about 6 GBs of raw data from csv’s for import. Is that something Solo can handle?#2018-01-1820:24donmullen@stuarthalloway just did a restore to local dev database from backup - it’s 12 GB in datomic/data.#2018-01-1820:25donmullenGot thru video one - on to video two… 🙂#2018-01-1820:26stuarthalloway@donmullen I have imported full mbrainz (100 million datoms) into Solo. It takes a while, especially after the AWS burst ends and you get only a fractional CPU#2018-01-1820:28donmullen@stuarthalloway How long does mbrainz take running locally against datomic:dev storage? Then compared to Solo?
My import locally takes about 7.5 hours (I don’t create the indexes until after the import).#2018-01-1820:29donmullenI’m assuming I should leave indexes off the schema until post data import.#2018-01-1820:29stuarthallowayever since we added adaptive indexing (http://blog.datomic.com/2014/03/datomic-adaptive-indexing.html), the index thing matters less#2018-01-1820:30stuarthalloway@donmullen Cloud always builds all indexes, :db/index is not even a thing there#2018-01-1820:30donmullen@stuarthalloway interesting#2018-01-1820:36stuarthalloway@donmullen your local setup likely has more CPU horsepower than i3 large (Prod), which in turn has way more CPU than t2 small#2018-01-1820:37stuarthallowayso if you are already sweating making imports faster, you will likely end up on Production#2018-01-1820:38donmullenyeah - I figure we’ll land there - though once we get the import done ‘right’ we won’t be doing that much (at all?) - and will be incrementally adding data on a weekly basis to the core data set.#2018-01-1821:39fingertoeI am having trouble getting the datomic-socks-proxy to work. It gives me the error “Datomic System not found” although running the ‘aws ec2 describe-instances’ query seems to find it without issue..#2018-01-1821:44fingertoeI also get an “aws: error: argument command: Invalid choice, valid choices are:” prior to the datomic sys not found error.#2018-01-1821:47stuarthalloway@fingertoe you need a newer AWS CLI client#2018-01-1821:54fingertoe@stuarthalloway That did it! Thanks…#2018-01-1901:18donmullenI seem to have the datomic-socks-proxy up and running - and can create a database successfully and establish a connection. However, when transact the initial schema I get {:cognitect.anomalies/category :cognitect.anomalies/forbidden,
:datomic.client/http-result {:status nil, :headers nil, :body nil}}#2018-01-1901:20donmullenThis implies that my AWS credentials aren’t set up correctly - but I’m surprised I can create-database and delete-database without a similar exception.#2018-01-1906:12Hendrik PoernamaIs Datomic Cloud available for Asia Pacific Regions? The estimator only shows US and EU.#2018-01-1908:08laujensenis CloudDatomic locked to any special storage db ?#2018-01-1908:12robert-stuttaford@laujensen you don’t worry about storage with Cloud. it uses a combination of DDB, S3 and EFS internally; using the best of each for the most suitable cases#2018-01-1908:13laujensen@robert-stuttaford - I only worry because Dynamo would incur an extra expense relative to the number of writes#2018-01-1908:14robert-stuttafordthey only use DDB for consistency roots, i believe; all the actual data is elsewhere (S3, EFS)#2018-01-1908:14robert-stuttafordso DDB throughput would be much lower than on-prem usage#2018-01-1908:21laujensenGreat, thanks#2018-01-1908:24robert-stuttaford@stuarthalloway is memcached in the mix with Cloud? if not, is that because the new design makes it redundant, or because it’s still coming?#2018-01-1912:00stuarthalloway@poernahi not yet for Asia Pacific. Datomic Cloud makes extensive use of AWS features, not all of which are available in all regions. We will be working with AWS to roll out to more regions over time.#2018-01-1912:07malcolm.edgar@robert-stuttaford I recall Marshall and Stu talking about memcache not being used. Instead I think the cloud caching hierarchy is RAM, EFS then S3.#2018-01-1912:09stuarthalloway@robert-stuttaford Instead of memcache, let me introduce valcache. It implements a nice immutable subset of the memcache API, but is backed by SSDs. Latency like memcache but vastly cheaper capacity. This is one reason Production uses i3.large#2018-01-1912:11stuarthallowayProduction dedicates most of the 475GB SSD to valcache, so if e.g. 
your entire set of dbs added up to 300GB, then all of it would end up in the valcache on all the primary compute nodes.#2018-01-1912:20malcolm.edgarLooking at the AWS docs https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/ that is a [expletive] load of I/O:
"... Designed for I/O intensive workloads and equipped with super-efficient NVMe SSD storage, these instances can deliver up to 3.3 million IOPS at a 4 KB block and up to 16 GB/second of sequential disk throughput. This makes them a great fit for any workload that requires high throughput and low latency including relational databases, NoSQL databases, search engines, data warehouses, real-time analytics, and disk-based caches..."
@stuarthalloway @marshall - congratulations on the release, awesome news.#2018-01-1912:21stuarthalloway@malcolm.edgar it is a full time job keeping up with what all the different EC2 flavors can do 🙂#2018-01-1912:23stuarthalloway@donmullen that forbidden error seems weird to me too — did you get it sorted?#2018-01-1912:33robert-stuttafordthat is rad, @stuarthalloway!#2018-01-1912:35robert-stuttafordwill valcache be OSS at some point, @stuarthalloway? (idle curiosity)#2018-01-1912:43stuarthallowayDunno#2018-01-1912:57potetmOr just available? That’s worth purchasing IMO.#2018-01-1913:55stuarthallowayinteresting — note that the memcache API is not fully supported, only the good (immutable) parts#2018-01-1914:00marshall@laujensen Robert’s comments about DDB usage are spot on. Cloud handles all of that and uses DDB autoscaling as well. All together it will use significantly less DDB throughput than a comparable On-Prem system#2018-01-1914:06laujensenReassuring thanks @marshall. Its not always clear when we’re talking pennies vs thousands of dollars on Amazon 🙂#2018-01-1914:13stuarthalloway@laujensen staying around $1/day all-in is a real thing with Solo#2018-01-1914:13stuarthallowayit is amazing to watch DDB scale up for an import and have an “expensive” day that is $0.25 more 🙂#2018-01-1914:13laujensen@stuarthalloway We might move SabreCMS to AWS and I expect that’ll be hammering out millions of hourly writes since every page view is at least 2 writes#2018-01-1914:14stuarthallowaywell that will need Production 🙂#2018-01-1914:14laujensenYes it will 🙂#2018-01-1914:16laujensenDo you always start with Solo, or is it non-trivial to convert to production?#2018-01-1914:18robert-stuttafordif i guess, i’d say it’s a matter of adding an i3 and removing the t2. 
if you add then remove, no downtime#2018-01-1914:18robert-stuttafordas s3 / ddb have full storage coverage#2018-01-1914:18robert-stuttaford-waits for correction-#2018-01-1914:18stuarthallowaygoing from Solo to Production is a CloudFormation upgrade https://docs.datomic.com/cloud/operation/upgrading.html#upgrading-solo-to-production#2018-01-1914:19robert-stuttafordah, yeah. more stuff 🙂#2018-01-1914:27stuarthallowaythe upgrade only takes a few minutes, totally sensible to start with Solo and switch on need#2018-01-1914:54donmullen@stuarthalloway - have not sorted out the credential issue yet. Will look at it this afternoon. Likely some aws user error on my part - but did seem strange to do create/delete and not transact.#2018-01-1914:55stuarthalloway@donmullen I don’t want to lose track of that, please let me know what you find. Maybe I will just mosey over there. 🙂#2018-01-1915:28jaret@donmullen I am looking into reproducing the issue you ran into. Were you running as an AWS admin or using the admin created by datomic?#2018-01-1916:00donmullen@jaret my creds have full admin and I added datomic policy to group as well. Did not help. Will be back online around 1est. #2018-01-1916:03donmullen@stuarthalloway better keep your distance. Just getting over flu and now at cvs minute clinic for strep test - though that’s unlikely. Fun times. EB had to get out in snow without me 😞#2018-01-1917:07jaret@donmullen are you certain that you created the DB and transacted in the same session? I can’t even create a DB without admin credentials.#2018-01-1918:28donmullen@jaret - yes - same session#2018-01-1918:38donmullenHmm.. @jaret @stuarthalloway - just going through the movies example and was able to transact the schema.#2018-01-1918:41donmullenAnd first-movies and query. Strange. 
Will let you know if I see what I was getting before.#2018-01-1918:41jaretI am wondering if your socks proxy ran into a broken pipe after DB create#2018-01-1918:42donmullen@jaret I stopped / restarted proxy a few times - and went through the steps to create database and try a transaction - always was getting the “forbidden” error.#2018-01-1918:45jaretThat would rule out that possibility. I am going to continue to try to re-create please let us know if you run into it again.#2018-01-1918:46donmullen@jaret - will do.#2018-01-1919:17chris_johnsonAWS “on-Prem” operations question to which I think I know the answer but also think that any experiments I would do to prove myself wrong or right would be hopelessly blinded by own mental model of the Datomic storage layer: If I have a Datomic database in a DynamoDB backend, and I stop the running transactor, copy the DDB table somewhere else (another region, say), and start a new transactor instance of the same version as the first one, pointed at that new table, does the new transactor have the “same” database in it (that is, the same schema and datoms as the original such that queries will give the same results)? Follow-up: same question but in the case where the original transactor has a rock fall on it from space instead of being stopped gracefully.#2018-01-1919:23marshall@chris_johnson In theory yes the DB would be “identical”. In practice a perfect copy of DDB is trickier than the same operation for, say, a SQL database.
https://docs.datomic.com/on-prem/ha.html#other-consistent-copy-options discusses this somewhat further#2018-01-1919:24marshallif you can ensure a “consistent copy” (as defined there) ^ then storage-level backup/copy is acceptable#2018-01-1922:57macrobartfastwhile following https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html I get
[{:type clojure.lang.Compiler$CompilerException
:message java.lang.RuntimeException: Unable to resolve symbol: halt-when in this context, compiling:(datomic/client/api.clj:57:11)
:at [clojure.lang.Compiler analyze Compiler.java 6688]}
{:type java.lang.RuntimeException
:message Unable to resolve symbol: halt-when in this context
:at [clojure.lang.Util runtimeException Util.java 221]}]
when trying to get a repl via 'lein repl'#2018-01-1922:58macrobartfastany thoughts?#2018-01-1922:58macrobartfastsame result via cider in emacs, btw.#2018-01-1923:03fingertoe@macrobartfast I got that until I switched to Clojure 1.9#2018-01-1923:04macrobartfastoh sweet let me try that then#2018-01-1923:10macrobartfastbam! resolved. thanks!#2018-01-1923:22macrobartfastof course, that triggers 1.9 related cider errors... nice.#2018-01-1923:47macrobartfastinstinctively went to file an issue on github for the 1.8/1.9 issue, then remembered you can't for datomic!#2018-01-1923:47macrobartfastnot even sure what you're supposed to use... haven't used proprietary stuff in so long.#2018-01-1923:52macrobartfastsearching on 'where to file a datomic issue' and so on brings up nothing readily.#2018-01-1923:52marshallhttp://Forum.datomic.com#2018-01-1923:53macrobartfastsweet, thanks.#2018-01-1923:53marshallYep. Love the username. We were discussing wanting pan galactic gargle blasters the other day#2018-01-1923:56macrobartfastthanks!!#2018-01-1923:56macrobartfastyes, downing one as we speak.#2018-01-1923:57marshallMmmm. Lemon-wrapped gold brick#2018-01-1923:58macrobartfastbest drink in existence.#2018-01-1923:59macrobartfastI know I shouldn't, but considering having a third blaster.#2018-01-1923:59macrobartfastdrats, running low on Fallian marsh gas.#2018-01-2000:28timgilbertDon’t panic#2018-01-2000:46macrobartfastadded [com.datomic/client-pro "0.8.14"] in project.clj and (:require [datomic.client.api :as d]) in core.clj but (def client (d/client cfg)) is producing a java.lang.Exception:
namespace 'datomic.client.api' not found#2018-01-2000:47macrobartfastjust trying to follow along at https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html#2018-01-2000:49marshallHmm. I'll have a look at that#2018-01-2000:49marshallThat should be working#2018-01-2000:53macrobartfastthanks!#2018-01-2000:53macrobartfastlooks like there might be a different dep at https://docs.datomic.com/on-prem/project-setup.html#2018-01-2000:53marshallMight not be till tomorrow. I'm away from my laptop atm#2018-01-2000:53marshallOh, that one is out of date #2018-01-2000:54marshallThe one in getting started should be correct#2018-01-2000:54marshallI'll look at updating both#2018-01-2000:55macrobartfastok cool... no worries... and thanks for the help so far.#2018-01-2000:55marshallDefinitely #2018-01-2001:04macrobartfastIt was related to the clojure 1.8 issues mentioned above; switching to clojure 1.9 fixed the datomic.client.api not found issue. I accidentally reverted to 1.8. However, I can't use 1.9 in emacs atm as I haven't resolved my emacs/cider issue, which isn't a datomic issue, certainly.#2018-01-2001:05marshallAh, glad you got it to work.
I'm glad you found that other spot with the old dep I'll fix it this weekend#2018-01-2001:05macrobartfastcool cool#2018-01-2002:13James VickersI have a question about entity references (`:db.type/ref`) and history after reading https://docs.datomic.com/on-prem/entities.html#basics.#2018-01-2002:13James VickersSuppose after the transactions in the example, Entity 42 ("Jane Doe") has :person/lastName changed to "Smith". If you retrieved Entity "John" and traversed to the :person/lastName of the Person entity it refers to (42) will it return "Doe" still? The Entity reference seems to say this, just want to clarify:
>Navigation from an entity will always and only reach other entities with that same time basis#2018-01-2002:48marshall@jamesvickers19515 it depends on the basis t of the db value you pass to the query (or entity api call)#2018-01-2002:49marshallIf you assert that Jane's last name is now Smith, then you got a new db value (ie with (d/db conn)) and looked at the last name of the person John likes it would be Smith.#2018-01-2002:50marshallIf you assert the name change but then ask who John likes with the old db value you'll see the original last name. The db value is immutable #2018-01-2003:53James VickersSo if I wanted to value to still be "Doe" (i.e. not change) for the last name of who John likes, I'd have to track when the reference was associated right? As in keep a basis-t or time along with that reference? #2018-01-2004:02James VickersI think a better example of what I'm asking about would be an Order that references a Customer entity. If the Customer's address changes later, you still might want to know that at the time the Order was made, the Customer entity had the old address. #2018-01-2004:08James VickersOh, maybe an easy way is to use the basis t of the entity itself - entity.db().basisT() - to find the version of Jane with the old last name? Thanks for the replies, I think a REPL session is in order.#2018-01-2012:50dazldI’m trying to write a spec that can cover a function that returns either an integer id, or a tempid. I feel like I’m missing something, perhaps there’s a better way to do it than this…?#2018-01-2012:50dazld(s/def ::entity-id (s/or
:raw-id int?
:db-id #(instance? datomic.db.DbId %)))
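If explicit partition management isn't needed, string tempids sidestep the instance check entirely (and work with both On-Prem and Cloud). A minimal sketch using clojure.spec.alpha:

```clojure
(require '[clojure.spec.alpha :as s])

;; An entity id is either a resolved numeric id or a string tempid;
;; no dependency on the concrete datomic.db.DbId class.
(s/def ::entity-id (s/or :raw-id int?
                         :temp-id string?))

(s/valid? ::entity-id 17592186045418)   ;; => true
(s/valid? ::entity-id "new-destination") ;; => true
(s/valid? ::entity-id :not-an-id)        ;; => false
```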
#2018-01-2012:51dazldfeels weird doing instance checks#2018-01-2012:52dazldusing it like this:#2018-01-2012:52dazld(defn find-destination
"returns an id to either an existing destination or a tempid for insertion"
[dest]
{:pre [(s/valid? ::domain/url dest)]
:post [(s/valid? ::domain/entity-id %)]}
(let [db (d/db conn)
existing (ffirst (d/q '[:find ?e
:in $ ?destination
:where
[?e :destination/url ?destination]]
db
dest))]
(or
existing
(d/tempid :db.part/user))))
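For comparison, a hedged sketch of the string-tempid style (the :trip/* attributes are hypothetical; assumes the peer library as d and an existing conn). The tempid is an arbitrary string, needed only to link related datoms within one transaction:

```clojure
;; "new-dest" is an arbitrary string tempid; the ref attribute below
;; resolves to the same new entity within this transaction.
(let [result @(d/transact conn
                [{:db/id "new-dest"
                  :destination/url "https://example.com"}
                 {:trip/name "sample trip"        ;; hypothetical attrs
                  :trip/destination "new-dest"}])]
  ;; :tempids maps the string tempid to the real entity id
  (get (:tempids result) "new-dest"))
```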
#2018-01-2013:12stuarthalloway@dazld if you are not doing explicit partition management, you should consider using string tempids. This solves your spec problem, and is also compatible with both On-Prem and Cloud#2018-01-2013:14dazldthat’s interesting!#2018-01-2013:14dazldso the spec would be str or int#2018-01-2013:14dazldand the tempid can be just any arbitrary string..?#2018-01-2013:14dazld(assuming one needed per tx)#2018-01-2013:40stuarthalloway@dazld yes, str or int. And tempids are needed only when you need to link up related datoms (or manage partitions). Otherwise leave ’em out.#2018-01-2014:38marshall@jamesvickers19515 that's definitely an approach. You can also use an asOf database and/or a history db if you want to see the state of an entity or the whole history of an entity.#2018-01-2014:40marshallIn the customer address order example, you could look at the transaction entity for the order to get a basis t to use for a query against an asOf database to ask "what was the customers address at the time this order was placed"#2018-01-2019:33johnjWhat are the best resources for learning how to model data in datomic?#2018-01-2101:03donmullenBack to working on import to Datomic Cloud — getting the following : {:error #error {
:cause "No implementation of method: :value-size of protocol: #'datomic.cloud.tx-limits/ValueSize found for class: java.lang.Float"
:data {:datomic.client-spi/context-id "34d66806-8e6f-4c5e-a9dd-205ae330a9c7", :cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "No implementation of method: :value-size of protocol: #'datomic.cloud.tx-limits/ValueSize found for class: java.lang.Float", :dbs [{:database-id "a601a3f8-0af7-4c89-a082-37108f5d0b65", :t 14, :next-t 15, :history false}]}#2018-01-2101:36marshall@donmullen can you share what you were transacting when you got that error?#2018-01-2102:11donmullen@marshall - sample trx data [#:master{:filed-date #inst "2015-08-04T04:00:00.000-00:00"
:doc-amount 3300000.0
:doc-type "AGMT"
:borough "1"
:good-through-date #inst "2015-08-31T04:00:00.000-00:00"
:doc-id "2015072200337005"
:modified-date #inst "2015-08-04T04:00:00.000-00:00"
:crfn "2015000266648"
:doc-date #inst "2015-07-16T04:00:00.000-00:00"}]#2018-01-2102:13donmullenwhere :master/doc-amount is :db.type/float in the schema#2018-01-2102:15marshallI'll look into it. May be Monday before I can get with Stu to discuss#2018-01-2102:17donmullenNP - thanks. FYI : Same schema and data transact into datomic via peer and clojure client apis.#2018-01-2102:38DesmondSo I'm trying to understand all the aws resources that I got by creating the cloudformation stack as described in https://docs.datomic.com/on-prem/aws.html.#2018-01-2102:39DesmondMy first question is why do I see ec2 instances that run for a little while and then shut down? Is that the transactor?#2018-01-2102:40marshall@captaingrover they shouldn't shut down immediately. That suggests a config issue#2018-01-2102:41marshallThe On-prem CFT only creates Transactor instances #2018-01-2102:42marshallThe ensure-transactor script creates a ddb table and some IAM roles#2018-01-2102:42DesmondI created this stack about 6 hours ago and I see about 20 instances in a terminated state#2018-01-2102:43DesmondI guess i ran the scripts a few times before I was satisfied#2018-01-2102:43marshallAre you ever getting instances that stay running?#2018-01-2102:43Desmondnope#2018-01-2102:43marshallYou should kill the stack#2018-01-2102:44Desmondi also inactivated the keys i used with the ensure transactor scripts because they had admin priveleges#2018-01-2102:44marshallAnd regenerate the CTF with the scripts. 
The most common issue is a typo or paste issue with the license key#2018-01-2102:44Desmondit looked like the stack made its own roles so i thought i wouldnt need those keys anymore#2018-01-2102:47Desmond...or completely skipping the step about the license key#2018-01-2102:48Desmondthanks for helping me realize that#2018-01-2102:51marshallThat would do it #2018-01-2102:58DesmondOk, reran the scripts and it looks promising.#2018-01-2102:59Desmondmy next question is: if i have a user for my peer (which is not on aws), can I delete the trust relationship for ec2 from the datomic-aws-peer role?#2018-01-2103:06Desmondi am including my peer user in the datomic-aws-peer role's trust relationships#2018-01-2103:14marshallI think so, but I'll have to double check #2018-01-2104:07Desmondwell i removed the relationship and everything still works fine.#2018-01-2104:10Desmondim also curious how other people have set up dev, staging, and prod databases.#2018-01-2104:12Desmondthe app im using datomic for hasn't launched yet and i'm not expecting a high load at first so to conserve costs I was planning to just create three databases with the same stack and the same dynamo table.#2018-01-2104:13Desmondthe only reasons i can think of not to do that would be security and the amount of traffic going through the transactor#2018-01-2104:15Desmondare there other reasons to duplicate components or create whole separate stacks for each environment?#2018-01-2104:42DesmondI just realized that backup and restore is on a per table basis meaning that if i want to backup prod and restore it to staging i need two separate dynamo tables#2018-01-2104:58Desmondit looks like the ec2 instances incur the majority of the cost (at least without much data in dynamo). is there any reason i couldn't create multiple tables with only one transactor?#2018-01-2105:00Desmondi'm not so familiar with cloudformation.
which bits would i need to create just another table?#2018-01-2105:00Desmondon a related note are there any plans to port these cloudformation templates over to terraform?#2018-01-2116:01marshallAre you tied to Datomic On-Prem? Datomic Cloud may be a better fit for this approach#2018-01-2116:02marshallAlso, you should use Datomic backup and restore, not dynamo db#2018-01-2116:05marshallI wouldn't recommend running staging and dev on the same transactor as your prod db for a couple reasons. If you want to test something like a large import or a config change, you have no separate infrastructure to test the change, every tweak to staging will also affect prod #2018-01-2116:05marshallYou could certainly run staging without HA (asg size of 1) to save on cost#2018-01-2116:06marshallIf you really want to cut ec2 cost you could even turn off dev and staging when you're not testing / using them #2018-01-2117:29DesmondDatomic Cloud looks very appealing. Part of my reason for using On-Prem was to learn a bit more about the pieces in play. That said our use case looks like a perfect fit for Cloud so I will certainly investigate further.#2018-01-2117:31DesmondIn either case I would want to back up prod and restore it to staging with a chron job. When I mentioned backup and restore before I did actually mean the datomic backup and restore rather than the dynamo backups. Isn't datomic's backup and restore on a per table basis?#2018-01-2117:34DesmondActually, I don't know what i was reading before because the backup and restore doc clearly says "Backup URIs are per database"#2018-01-2117:35DesmondSo at the very least I could run staging and dev together and still backup prod to staging#2018-01-2206:01Desmond@marshall So I tried out the backup and restore within a single table just to get started and it seems that this is not allowed: :restore/collision The database already exists under the name 'production'#2018-01-2206:02Desmondis there a way around this? 
I would like to avoid beefing up my deployment for a little while#2018-01-2214:25marshallyou can’t restore the same database to the same storage with a different name#2018-01-2301:36Desmondok, cool. I split the prod infrastructure out. would have needed to do it eventually anyway.#2018-01-2301:36Desmondthanks for helping me!#2018-01-2301:38marshall👍 #2018-01-2104:32steveb8nif I want to target Cloud but want to develop locally i.e. offline, can I use the client lib with a local datomic instance? if so, what would the connection string look like? caveat: I haven’t tried this yet so feel free to respond with RT(F)M. I’m just curious since the cloud docs seem to assume dev always uses cloud#2018-01-2111:24Hendrik PoernamaWith datomic cloud, how does one atomically update a value based on another value? For example: [:db/add e a1 v] based on [e a2 v] which may change between creation of tx-data and transactor acknowledgement. Used to do this with database function. I'm thinking now I will have to use CAS and retry? Not sure if there is a better way. #2018-01-2112:50donmullen@marshall ^^ simplified float issue case with movies example.#2018-01-2113:30stuarthalloway@donmullen for now can you use doubles instead of floats?#2018-01-2113:48stuarthalloway@donmullen actually, hold on that, bet you would hit the same issue#2018-01-2113:58stuarthalloway@donmullen confirmed I can repro. Please use BigDecimal until we can push a fix#2018-01-2114:34donmullen@stuarthalloway got it#2018-01-2114:34stuarthalloway@donmullen do you need floating point semantics, or could you stick with BigDecimal?#2018-01-2114:42donmullen@stuarthalloway Likely BigDecimal is better for currency - correct? I then have attributes that represent area ratios and some representing measurements in feet and square feet.#2018-01-2114:48stuarthallowayBigDecimal for currency for sure#2018-01-2114:59donmullen@stuarthalloway The cloud client api requires clojure 1.9 currently, correct? 
Clojurescript client library to be released at some point?#2018-01-2115:00donaldballA minor note of caution for bigdecs: be sure to set the scale to a consistent value (e.g. 2). Java and clojure have slightly different opinions about equality for bigdecs with the same amount but different scales.#2018-01-2115:03donmullenok - thanks @donaldball#2018-01-2115:05stuarthalloway@donmullen Cloud API should work with 1.8#2018-01-2115:06stuarthalloway@donmullen how would you use a ClojureScript library? from the browser or node or ?#2018-01-2115:12donmullen@stuarthalloway was thinking from browser - going to put together a simple web portal that returns various filters/queries of the data. need a backend anyway to update data and do various analytics - but that could be microservice that only runs periodically. for now will have a full backend to handle sending results to web portal.#2018-01-2115:12stuarthalloway@donmullen so how would you secure that?#2018-01-2316:18cch1Something like AWS Cognito might fit the bill.#2018-01-2115:13donmullenGood point - hadn’t thought through security.#2018-01-2115:15stuarthallowayread only db is straightforward, but nothing finer-grained yet#2018-01-2115:17donmullenread-only from browser likely all I’ll need in the near term.#2018-01-2115:26stuarthalloway@donmullen thanks, hammocking#2018-01-2116:00marshall@poernahi Cloud includes cas as a built in txn function https://docs.datomic.com/cloud/transactions/transaction-functions.html#sec-2#2018-01-2116:09marshall@steveb8n you can do that in theory with peer server locally. However, there are some differences between cloud and on-prem you should be aware of: https://docs.datomic.com/on-prem/moving-to-cloud.html#2018-01-2122:07steveb8n@marshall good to know, thanks.#2018-01-2122:43donmullen@jaret I did see the :forbidden issue again this afternoon. Restarted proxy and repl and it went away. 
Will try and narrow down some way to reproduce if I can.#2018-01-2123:22bbloombad link in the docs: https://docs.datomic.com/javadoc/datomic/Entity.html is bad in https://docs.datomic.com/on-prem/entities.html#2018-01-2123:22bbloomin fact, all the links in that article are broken#2018-01-2123:29jaret@bbloom Thanks for reporting that. I’ll take a look. EDIT I’ve fixed the links and I’ll audit the rest of our api links.#2018-01-2123:33jaret@donmullen if you get it again can you restart repl test and then restart the proxy? I’d like to isolate which step resolved the issue or if it requires both.#2018-01-2211:07Hendrik PoernamaI'm trying to switch from peer api to client api in preparation for eventual cloud migration. Is there an established best practice on how to pass data around business logic functions? I used to pass almost everything as datomic entity (entity api). Now if I pass entity ids around, I end up with scattered ad-hoc pull queries and would sometimes pull the same entity multiple times on slightly different queries. Not sure if this a good design.#2018-01-2216:15val_waeselynck@U7Q9VAXPT Note that passing around lookup-refs is not sufficient to emulate Peer Entities: you also need to pass the database values, otherwise you may run into inconsistencies.#2018-01-2216:18val_waeselynckYou're definitely going to face dilemmas you didn't have on Peers, typically simplicity (I want my functions to have little dependencies to each other, and my queries to be about just one thing) vs performance (I don't want the N+1 problem). 
I advise you do some benchmarking - you'll probably find as I did that a Datalog query has usually much more overhead than an Entity Lookup for the same amount of work.#2018-01-2216:24val_waeselynckMy intuition is that tools like GraphQL or other similar demand-driven can alleviate a lot of this problem, because a lot of business logic can be expressed via derived attributes, and you can relatively easily build an efficient GraphQL server by using a combination of asynchrony and batching - which is a good fit for a Datomic Client.#2018-01-2216:25val_waeselynckGraphQL is for the read-site; as for the write-side, you usually have looser latency and throughput requirements for writes, so I wouldn't worry too much about the performance of that#2018-01-2216:25val_waeselynckBut I'm very curious to know what you find down that road.#2018-01-2304:42Hendrik PoernamaMy first take on this, business functions have db as first argument and can take either entity id or lookup ref as additional arguments. I then have a set of functions to create lookup ref from name/uuid/natural key, and another set of functions to resolve entity id from lookup ref (so far only needed for existence test).#2018-01-2304:46Hendrik PoernamaI feel like this is a bit worse than N+1, because I'm seeing a lot of colocated on-demand pull of essentially the same information multiple times. Now that each pull is a network request, this worries me. Especially since I'm using ring without async handler...#2018-01-2304:51Hendrik PoernamaI think GraphQL clients work around this issue by locally caching every query result. Essentially almost what a peer is doing. So I could theoretically wrap pull with some custom memoize/caching if needed.#2018-01-2304:57Hendrik PoernamaWrite is actually getting a bit more complicated if I'm designing for Datomic Cloud where db/cas is currently the only transaction function. 
I can use cas like clojure's ensure but I then have to explicitly handle retries.#2018-01-2305:00Hendrik PoernamaMaybe I'm looking at this from the wrong angle. The client library is designed for microservice architecture and I should not worry about performance as long as it is within the same algorithmic complexity and just scale horizontally.#2018-01-2306:54val_waeselynckStill, you may have a latency problem. Maybe you should fetch data once and pass data structures to functions that do a lot of validation#2018-01-2211:23Hendrik PoernamaI also tried passing lookup-refs around as entity with the benefit of not having to do separate eid lookup.#2018-01-2211:25Hendrik PoernamaMaybe pulls are cheap enough and I should not worry about it?#2018-01-2212:26stuarthallowayyou can do multiple pulls in a single query#2018-01-2214:59donmullenIs there sample code that shows a graceful way to handle getting this? {:cognitect.anomalies/category :cognitect.anomalies/busy, :cognitect.anomalies/message "Busy rebuilding index", :dbs [{:database-id "954cc441-8125-45b1-a2d6-6547c985bfad", :t 1712, :next-t 1713, :history false}]} I’m currently using tx-pipeline from https://docs.datomic.com/on-prem/best-practices.html — and calling the synchronous api via (client/transact conn {:tx-data data}). Wondering if I should switch to asynchronous, check for anomolies, wait and retry - or stick with synchronous and do the same. Hmm.. 
seems I should bite the bullet and pull in code from https://github.com/Datomic/mbrainz-importer - looks like the batch xform handles all the retries.#2018-01-2216:29stuarthalloway@souenzzo @val_waeselynck we have updated the docs to clarify the behavior of as-of plus with https://docs.datomic.com/cloud/time/filters.html#as-of-not-branch#2018-01-2216:30stuarthallowayI was mistaken in my comments before, please go by the (new) docs#2018-01-2216:33val_waeselynck@stuarthalloway thanks for the clarification, have you been able to determine what happens when a client-side with'ed db is implicitly resolved via asOf on the server-side?#2018-01-2320:59souenzzonews?#2018-01-2216:35stuarthalloway@val_waeselynck yeah, that is separate, will get back to you#2018-01-2217:04asierHi there. I have downloaded Datomic Starter and included the Client library [com.datomic/client-pro "0.8.14"] in my project.clj. Now I have this error when doing lein check:#2018-01-2217:04asierI'm using Clojure 1.9.0#2018-01-2217:13alexmillerMaybe the output of lein deps :tree would help shed some light on your dependencies.#2018-01-2217:17asiercheers @alexmiller#2018-01-2221:51souenzzo[datomic cloud]
I have a library/framework and I want to allow the user to choose between the Cloud or Peer API.
How can I do that? Are there future plans for this?#2018-01-2312:35chrjsHey folks. Could anyone point me in the right direction for a Busy rebuilding index exception on a transaction? Can’t seem to find much in the docs or via search, though it’s possible I’ve missed something.#2018-01-2312:40stuarthalloway@chrjs that happens when data is coming in faster than processor can keep up, and will resolve on its own. Apps should plan for it and slow down#2018-01-2312:41stuarthallowayWere you doing a bulk load? I would not expect to see that from a single person’s interactive use.#2018-01-2312:41chrjsYep, bulk backload.#2018-01-2312:42chrjsWe can split the load to give it more time, no problem. Only running on a t2.micro, could lack of memory contribute to this?#2018-01-2312:42stuarthalloway@chrjs check out https://github.com/Datomic/mbrainz-importer/blob/master/src/cognitect/xform/batch.clj#L70-L92
I am plenty responsive while waiting for kids to get dressed 🙂#2018-01-2312:51chrjs🙂#2018-01-2313:50stuarthalloway@chrjs how much data are you loading?#2018-01-2314:15chrjs@stuarthalloway We’re loading in loading from a few hundred flat files, each of which translates to about 12MB in memory, with approximately 500k datoms each. We partition into 50k ish datom chunks for each transaction. We’re seeing a rebuilding index error after the first three files on a t2.small. Spun up a prod instance now which delays the error to a few files later. We think a suitable variant of you backoff code will fix it though, probably just too much throughput - it’s a backfill job that is way more write intensive than our expected production load.#2018-01-2314:42stuarthalloway@chrjs can we get the exception from the CloudWatch logs?#2018-01-2314:43stuarthallowayalso would be interested in seeing how your CloudWatch dashboard looks during the import#2018-01-2314:47chrjsGive us just a few minutes to generate the error again so we can pinpoint the time for the corresponding CloudWatch log.#2018-01-2315:32stuarthalloway@chrjs you should not have to track that down yourself, just search for the pattern “Alert - Alerts” in the log group named datomic-{your-system}#2018-01-2315:32chrjs@stuarthalloway I direct-messaged you the exception#2018-01-2315:33sleepyfoxand I've DM'd you the dashboard capture#2018-01-2315:34sleepyfox(am working with @chrjs )#2018-01-2315:36stuarthalloway@chrjs @sleepyfox the dashboard capture shows 0 alerts in the Events/Alerts chart, so does not explain the nodes being unhappy. What client-side problem did you see along with this?#2018-01-2315:50chrjsException! Backing off for 3200 : #error {
:cause Busy rebuilding index
:data {:cognitect.anomalies/category :cognitect.anomalies/busy,
:cognitect.anomalies/message Busy rebuilding index,
:dbs [{:database-id 3109e63e-2a3b-44c9-92a1-83015c326b07,
:t 31,
:next-t 32,
:history false}]}
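The backoff-and-retry handling being discussed can be sketched roughly as follows (a sketch only, loosely modeled on the mbrainz-importer batch code linked earlier; assumes the synchronous client API as client, and that a busy response surfaces as an anomaly map in ex-data):

```clojure
(defn transact-with-backoff
  "Attempt a transaction, retrying with exponential backoff while the
  system reports :cognitect.anomalies/busy (e.g. \"Busy rebuilding index\")."
  [conn tx-data]
  (loop [backoff-ms 100]
    (let [result (try
                   (client/transact conn {:tx-data tx-data})
                   (catch Exception e
                     ;; assumption: the anomaly map rides on ex-data
                     (or (ex-data e) (throw e))))]
      (cond
        (not= :cognitect.anomalies/busy
              (:cognitect.anomalies/category result))
        result

        (> backoff-ms 30000)
        (throw (ex-info "giving up after repeated busy responses" result))

        :else
        (do (Thread/sleep backoff-ms)
            (recur (* 2 backoff-ms)))))))
```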
#2018-01-2316:11stuarthalloway@chrjs gotcha — that is to be expected, and should work after backoff and retry#2018-01-2316:11stuarthalloway@chrjs also, it probably takes DDB autoscaling 30 min or more to fully “notice” that you are doing an import#2018-01-2316:11chrjsYeah, we’ve got it working now using exactly your async backoff trick.#2018-01-2316:11chrjsAh, that last is interesting, noted.#2018-01-2316:13stuarthallowayit’s cool to check that out in the DDB console — go to the table, capacity tab, and open up the scaling activities#2018-01-2321:33denikI’m puzzled by this sentence from the docs: Datomic apps that do not run on AWS must target On-Prem in the docs https://docs.datomic.com/on-prem/moving-to-cloud.html
For cloud, does this mean even servers using the client API must be on AWS or only the database system? If it’s the former, what’s the rationale?#2018-01-2321:35marshall@denik Yes, clients of Datomic Cloud should be in AWS. Datomic Cloud runs in a private VPC with access controlled through IAM roles. You can run your client applications in another VPC and connect them using VPC Peering#2018-01-2321:36marshallhttps://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/Welcome.html#2018-01-2321:37marshallyou can access Datomic Cloud through the Bastion server (via SOCKS proxy), but that route is intended for dev use, not production#2018-01-2321:37marshall(i.e. for accessing from your local dev laptop)#2018-01-2321:43denik@marshall Why? Security? Speed? I’m not that familiar with VPC. If I have a server outside of AWS, is there any way to use the client API to talk to Datomic? I’m looking for an experience that’s more akin to a serverside firebase.#2018-01-2321:45marshallSecurity#2018-01-2321:45marshallprimarily#2018-01-2321:46marshallhaving your database of record open to the internet isn’t a great position to be in#2018-01-2322:02deniksure, however there are use cases where data is user insensitive and open is desired. Is there a way to open the VPC? Or allow access via a generated token?#2018-01-2413:57denik^ @marshall#2018-01-2414:05marshallDatomic Cloud runs in your AWS account; configuration of the VPC is up to you. It is launched initially with AWS best practices in mind.#2018-01-2321:56luchiniWill we have something like console for Datomic Cloud? It’s a tool we’ve come to appreciate a lot on the pro version 🙂#2018-01-2400:08jaretTo quote Stu from the forums: “Not at present, but we understand and agree that this is important. Stay tuned.” https://forum.datomic.com/t/console-and-cloud/286#2018-01-2420:40luchiniThanks @U1QJACBUM and @stuarthalloway#2018-01-2400:53kennyI'm trying to create a Nippy serializer for a datom. 
Is there a way to construct a Datum object with the added field? I see the Datum constructor takes values for e, a, v, and tOp but not added. And it looks like the added field is defined as final, so I'm not sure how that's set (unless reflection is used).
Technically I could just deserialize into a vector but it's a tad annoying that the serializer and deserializer are not symmetric.#2018-01-2401:01kennyI guess I could create a Datum record that looks and acts like a datomic.db.Datum. Still a tad annoying that it isn't symmetric 🙃#2018-01-2401:07kennyOhhhh, I misunderstood. This is how the added field is defined 🙂 Makes sense now!
public boolean isAssertion() {
    return Numbers.isPos(this.tOp & 1L);
}
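In other words, added is derived rather than stored: the low bit of the packed tOp long carries the assert/retract flag. A tiny sketch of the same check in Clojure (the function name is mine, not Datomic's):

```clojure
;; The low bit of tOp is the assert/retract flag, mirroring isAssertion above.
(defn assertion?
  "True when the packed tOp long encodes an assertion (low bit set)."
  [^long tOp]
  (pos? (bit-and tOp 1)))

(assertion? 5) ;; => true  (odd: low bit set)
(assertion? 4) ;; => false (even: low bit clear)
```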
#2018-01-2401:25kennyFor those interested...
(nippy/extend-freeze Datum ::datom
  [^Datum datom ^DataOutputStream data-output]
  (nippy/freeze-to-out!
   data-output
   [(.-e datom) (.-a datom) (.-v datom) (.-tOp datom)]))

(nippy/extend-thaw ::datom
  [^DataInputStream data-input]
  (let [[e a v tOp] (nippy/thaw-from-in! data-input)]
    (Datum. e a v tOp)))
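For completeness, a round-trip through the handlers above might look like this (a sketch; it assumes a `datomic.db.Datum` already in hand, e.g. taken from `:tx-data`, and the requires shown):

```clojure
(require '[taoensso.nippy :as nippy])
(import '(datomic.db Datum))

;; Freeze a datom to bytes and thaw it back; the thawed value is a
;; reconstructed Datum, equal component-wise but not identical to the original.
(let [bytes  (nippy/freeze datom)
      datom' ^Datum (nippy/thaw bytes)]
  (= [(.-e datom) (.-a datom) (.-v datom) (.-tOp datom)]
     [(.-e datom') (.-a datom') (.-v datom') (.-tOp datom')]))
```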
#2018-01-2407:35val_waeselynckUpdated the README of Datomock - I hope this makes the use cases more clear, and that it shows how powerful datomic.api/with is https://github.com/vvvvalvalval/datomock#2018-01-2408:21val_waeselynck@U053032QC related to our previous discussion#2018-01-2410:14conanI'm really excited to try this out!#2018-01-2412:47steveb8nI've been using this to great effect to make tests run fast. I'm sad to lose it with cloud/peer API but c'est la vie. Thanks Val, this is a great lib#2018-01-2415:04val_waeselynck@U0510KXTU curious about how you used it - only to make tests run fast? I find the most rewarding aspects to be more workflow-related (debugging, dev etc.)#2018-01-2415:29uwothis has been a godsend for our development workflow#2018-01-2417:14souenzzo@U06GS6P1N I'm also using for test's
something like
(let [root-conn (d/connect "mem")]
  (install-schema! root-conn)
  (reset! test-conn (dm/fork-conn root-conn)))

(defn test-fixtures
  [f]
  ;; no need to install the schema in each fixture!
  (reset! system/conn (dm/fork-conn @test-conn))
  (f))
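Hooked into clojure.test, that fixture gives every test its own fork (a sketch; `system/conn` and the `:user/name` attribute are placeholders, and the peer API is assumed):

```clojure
(use-fixtures :each test-fixtures)

(deftest fresh-fork-per-test
  ;; writes go to this test's private fork of the schema-loaded conn,
  ;; so they cannot leak into other tests or the root connection
  @(d/transact @system/conn [{:db/id (d/tempid :db.part/user)
                              :user/name "alice"}])
  (is (seq (d/q '[:find ?e :where [?e :user/name "alice"]]
                (d/db @system/conn)))))
```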
#2018-01-2421:11steveb8n@U06GS6P1N same as @U2J4FRT2T for fixtures run once but used in N tests#2018-01-2414:42stuarthalloway@denik With Datomic Cloud, all data and processing lives in your account, and so you can choose if and how it is exposed to the internet. That said, the defaults (and our development efforts) are pointed at the secure deployment of data-of-record systems.#2018-01-2414:43denikthanks @U072WS7PE#2018-01-2414:53stuarthalloway@kenny out of curiosity, why do you need to serialize concrete Datoms?#2018-01-2418:53kennyWe have a series of Onyx tasks that publish segments to a Kafka topic. In order to publish to a Kafka topic you need to have serializable data. We need to publish the :tx-data off of the transaction report for running business logic. The options were either:
- Write our own versions of transact and transact-async that modify the returned transaction report such that :tx-data is serializable. Then require anyone who transacts data to Datomic to use that interface and know that the interface is slightly different than the Datomic one they were used to.
- Write a function that serializes :tx-data in the transaction report and require that all developers remember to call that function before returning their segment.
- Write a Datom serializer that allows devs to use the same Datomic API they've been using without needing to remember to call a function or use a special Datomic API only when working with Onyx.#2018-01-2420:18stuarthallowaymakes sense, thanks for sharing!#2018-01-2500:31shooodookenfrom http://augustl.com/blog/2016/datomic_the_most_innovative_db_youve_never_heard_of/
> Datomic runs queries on the client, not on the server
so, is this the equivalent of consumer pulling all recs in all 'tables' listed in query and then querying/filtering against all those recs client-side?#2018-01-2500:34shooodooken^ second to that, what would be the best hands on way to see this happening (i.e. see all recs on current client)#2018-01-2500:35shooodooken@augustl ^#2018-01-2500:50mvHey there, are there any well known examples of code available rest APIs backed by datomic? I am trying to build an app and am having some hurdles. Would love to see an existing example#2018-01-2509:55val_waeselynck@U0PD452UA http://www.flyingmachinestudios.com/programming/building-a-forum-with-clojure-datomic-angular/#2018-01-2518:21mv@val_waeselynck awesome, thanks!#2018-01-2507:14robert-stuttafordhi folks, please vote on this Twitter poll for me?
https://twitter.com/RobStuttaford/status/956407817245724677#2018-01-2507:15augustl@shooodooken not really!#2018-01-2507:15augustlIt's more the equivalent of the database server having to have the working set in memory #2018-01-2507:16augustlAnd in these days, when servers are cheap, why couldn't it just be your app that has that? :)#2018-01-2507:50Desmondi'm having trouble retracting an entity using a ref lookup:
(defn unban [userIdentifier]
  (d/transact (connection/connect)
              [[:db.fn/retractEntity [:ban/user userIdentifier]]]))
with :ban/user being defined:
{:db/ident       :ban/user
 :db/valueType   :db.type/ref
 :db/cardinality :db.cardinality/one
 :db/unique      :db.unique/value
 :db/doc         "The banned user"}
it appears to work because i see the usual db-before/db-after but the entity is still there when i query with a fresh d/connect
any ideas?#2018-01-2509:53val_waeselynck@U7Y912XB8 1- do you see the datoms being retracted in the tx-result? 2- what connection to you connect to, and when relative to calling transact?#2018-01-2606:04Desmond@val_waeselynck the tx result:
{:status :ready, :val {:db-before #2018-01-2606:06Desmond(connection/connect) just runs (d/connect) with my db url#2018-01-2606:15Desmondand when I query I am doing (d/db (d/connect ...)) to get the latest db#2018-01-2606:19Desmondand i'm seeing the correct db-after id when i query again#2018-01-2607:58val_waeselynck@U7Y912XB8 this :tx-data shows that the transaction had essentially no effect - no additions nor retractions except for adding a :db/txInstant timestamp to the transaction entity.#2018-01-2510:23maxtHey! When you say Solo from about 1$ / day, does that include licensing costs?#2018-01-2510:41robert-stuttafordi believe so @maxt - you can view the details of the price calculator, and see it split between vendor and AWS#2018-01-2511:12maxtThank you, I hadn't seen the calculator. Seems to be $ 1/3 for the license and $2/3 AWS for minimal Solo.#2018-01-2511:18maxtThe calculator acts a little funny though, I get the same or lower quote if I change to the production fullfillment.#2018-01-2512:40stuarthalloway@maxt the AWS calculator design predates CloudFormation templates, so it does funny things. You should drive by the instance types, not the fulfillment#2018-01-2512:42stuarthallowayt2.small is only for Solo, and i3.large is only for Production#2018-01-2512:47stuarthalloway@maxt there are no “licensing costs” with Cloud, just usage markup on EC2 instances. That is how the AWS marketplace works. 
Solo is about $1/day, total#2018-01-2513:06maxt@stuarthalloway Wonderful, thank you!#2018-01-2515:17denikin cloud, when calling (client-api/delete-database client {:db-name "<my-db>"}) is the db’s data being excised?#2018-01-2515:21stuarthalloway@denik no, just deleted#2018-01-2515:21stuarthallowayas in “all resources reclaimed” — albeit not necessarily immediately#2018-01-2515:24denik@stuarthalloway will the “deleted” data eventually be overwritten on disk or does it stay intact?#2018-01-2515:37stuarthallowayDatomic calls delete operations on the underlying stores: DDB, S3, and EFS#2018-01-2515:42uwonot sure if intentional, but just a heads up that this section doesn’t have a link at the top like the others https://docs.datomic.com/cloud/best.html#optimistic-concurrency#2018-01-2516:20marshall@uwo thanks - fixed https://docs.datomic.com/cloud/best.html#datomic-transaction#2018-01-2516:45souenzzo(catch ExceptionInfo t
(if (= (:cognitect.anomalies/category (ex-data t)) :cognitect.anomalies/conflict)
anomalies is coming to peer API?#2018-01-2519:00eggsyntaxI just wanted to take a moment to say thanks to @val_waeselynck for his Datomock library (https://github.com/vvvvalvalval/datomock). It's been a massive improvement to my team's dev workflow, making it completely trivial to always work against the latest production data with complete confidence that we're not going to break anything. IMHO, Datomock is one of the most useful tools to come out of the Clojure community in recent times, and I suggest that anyone using Datomic via the peer API should give serious thought to whether they'd find it useful. And of course, thanks as well to the Datomic team for the fundamental insights that make a tool like Datomock even possible! Speculative transactions are just an amazingly powerful tool.#2018-01-2519:23souenzzoPlease vote here of even better speculative tx's
https://receptive.io/app/#/case/26649#2018-01-2519:27eggsyntax@U2J4FRT2T already voted for that one, it'd be excellent to have 🙂#2018-01-2519:48val_waeselynckWow, thanks for the kind words @U077BEWNQ 🙂 it's very good to know that other people find it useful. As you said, the credit goes to the authors of Datomic who really, really got the fundamentals right#2018-01-2520:33eggsyntax@val_waeselynck some truth to that, but BOY is it easier to use it in a dev workflow with Datomock added 🙂#2018-01-2601:10steveb8nIn related news: I’m also getting great value out of the scope-capture lib as well. Val, you are doing great work#2018-01-2520:30souenzzoA bit off-topic, but is there a way to connect to a "datomic:" URI from a local REPL using an ssh tunnel or something like that?#2018-01-2521:44potetmIs there anything in datomic to grab the 10 biggest values in ~constant-time?#2018-01-2521:44potetme.g. https://github.com/tonsky/datascript/wiki/Tips-&-tricks#getting-top-10-entities-by-some-attribute#2018-01-2521:44potetmI think the answer is no… but I think in theory it’s possible.#2018-01-2521:46potetm(Also, I’m talking about on-prem)#2018-01-2522:52bkamphaus@potetm depending on the assumptions you can make about the range of values that might be there, you could at least beat the perf of the naive case using index-range or seek-datoms. Real bottleneck you’re up against, or more of a perf golf/curiosity thing?#2018-01-2522:55potetmPerf golf for sure#2018-01-2522:55potetm🙂#2018-01-2522:58potetmThanks @bkamphaus! Hope all is well with you!#2018-01-2615:25luchiniWe are investigating using tx-report-queue from the Peer Client library. Datomic Cloud and the new Client API don’t seem to support it. 
Is there an alternative or are we stuck to On-Prem?#2018-01-2615:28marshall@luchini You can use the log API to poll for updates#2018-01-2615:29marshallhttps://docs.datomic.com/cloud/time/log.html#2018-01-2615:33luchini@marshall the app we are building has a series of client nodes that need to react when certain transactions have finished. Our initial thought was a light-weight passive approach if possible. With polling we’ll need to have a cron-like feature polling from each client. Won’t this be too heavy?#2018-01-2616:05marshall@luchini how much polling and how often would it have to do so?#2018-01-2616:06marshallalso, you can get the latest db value from a conn and only query the log when the basis-t has changed#2018-01-2616:06luchiniThe intention is to give a real-time feeling to users… so, potentially frequent 🙂#2018-01-2616:15stuarthalloway@luchini how many clients?#2018-01-2617:15rapskalianCurrently preparing to switch our db from on-prem to cloud. Is there a guide available for doing this efficiently? It'd be awesome if I could just reuse my current dynamo table, but I'm thinking an import/export operation will be necessary...#2018-01-2617:20donmullen@cjsauer It’s my understanding that the migration story and tooling are a WIP.#2018-01-2618:12rapskalianAh ok. I'm imagining breaking it down like this:
1. Launch datomic cloud in AWS
2. Migrate from peer to client API
3. Export all data from old db (all this while site is in maintenance mode)
4. Import all data into new db
My main concern is with step 2...it looks like my local dev workflow will change rather drastically. Currently I'm just using a datomic:mem://... setup to develop locally. Is something like this possible using the client API?#2018-01-2617:43luchini@stuarthalloway approx 1k users for MVP and up to 20k in the long run. We are thinking of 2 client nodes for starters and scaling up to max 10. #2018-01-2618:08stuarthalloway@luchini I don’t think that 2-10 clients polling the log would be a problem. What poll interval would meet expectations?#2018-01-2618:22luchiniEvery second or so. #2018-01-2618:17stuarthalloway@cjsauer what I have been doing is run against a bastion + Solo topology for interactive development from the REPL. Need a network connection, but with that in place it is very cool#2018-01-2618:18stuarthallowayhaving a real db beats memory db — you can come back to it tomorrow#2018-01-2618:19rapskalianThat makes sense. I suppose I can also include the datomic-socks-proxy script right in my repo for the rest of the team.#2018-01-2618:19stuarthallowayyes, one thing that is cool is that connection security lives entirely outside the config, so the proxy script and connection config are safe things to have in a repo#2018-01-2618:20denik@dnolen mentioned a js client for datomic in his talks. 
What is the roadmap and estimate release date?#2018-01-2618:21dnolen@denik far as I know there is no roadmap, and no estimated release date#2018-01-2618:22dnolenbut clients are pretty simple, you can examine the Clojure client source - wire protocol just needs more documentation#2018-01-2618:24denik@dnolen where can I find the source?#2018-01-2618:24dnolenin the Clojure client JAR#2018-01-2618:44stuarthalloway@luchini that polling load seems fine, let me know if you encounter issues#2018-01-2618:51luchiniThanks @stuarthalloway #2018-01-2618:53timgilbertSay, in the datomic entity API is there a way to enumerate the backrefs of an entity?#2018-01-2618:54timgilbert(Apart from just (:back/_ref entity) - I can get the forward refs via (keys entity))#2018-01-2619:14drewverleeI found this article on business logic in the DB very interesting. http://www.vertabelo.com/blog/notes-from-the-lab/business-logic-in-the-database-yes-or-no-it-depends
Has anyone written about the idea of business logic being in the db (constraints, stored procedures, validation, data integrity, etc…) from a datomic perspective?#2018-01-2619:21stuarthalloway@drewverlee the reddit critics that the author quotes are more about the historical legacy of specific SQL database implementations than about databases in general#2018-01-2619:21stuarthallowaye.g. “How the hell do you unit test your business logic when it’s in the database?”#2018-01-2619:22stuarthallowayI would counter with “How do you test anything that isn’t written in a functional style?”#2018-01-2619:25stuarthalloway… not that you cannot test imperative code, but it is a lot harder#2018-01-2619:39rapskalianRunning into this while trying to migrate from peer to client API: java.lang.RuntimeException: No reader function for tag db/id, I'm probably missing something obvious...#2018-01-2619:44drewverlee@stuarthalloway thanks for the reply. I agree, its harder to test imperative anything.
I’m doing a lot of backend api work currently, which means turning HTTP requests into sql queries. I have talked a bit with experts who made me aware of a lot of issues that can happen when you keep a lot of logic in the db. They suggest that db constraints (at least in most relational dbs) can’t fully model the range of business logic constraints you want. That chained triggers are hard to follow because they’re reactionary. That changing a database with lots of constraints is hard. All these arguments against business logic in the db make sense, as do the ones in that article.
My intuition tells me that I want
* the semantics of my system to be uniform across my db and clients/servers
* validations and security at the persistent layer (so in the db)
Since Datomic offers at least the first one and possibly the second, it might change the conversation around the pros and cons of pushing more and more logic into the db, like queries. I feel drastically uneducated about this entire topic.#2018-01-2620:23rapskalianAdded datomic-free as a dependency for the data-readers support to fix the above. Migrating from peer to client API is turning out to be really tricky actually...these deps run pretty deep in my source code. May have to rethink my plan 🤔#2018-01-2621:01jocrau@cjsauer The Client API does not support #db/id tempids. I replaced the ids with strings and it worked. More here: https://docs.datomic.com/on-prem/moving-to-cloud.html#2018-01-2622:01rapskalian@jocrau thank you! That link was exactly what I was after.#2018-01-2700:23jdubieWe are running a datomic transactor and using MySQL for storage in AWS. I’d like to be able to read from the MySQL database but enforce no writes at the network level. Can you run the peer library without a transactor? I’d prefer that this peer have no ability to talk to the transactor. Is that possible? I know peers are probably designed to get streamed transactions from the transactor.#2018-01-2700:44favilaUse different database users+creds and a different jdbc connection string for peers vs transactor#2018-01-2700:45favilae.g. we run google cloud MySql, the peers all use user "datomic_ro" which has only SELECT privileges; transactor uses "datomic_rw", which has only SELECT INSERT UPDATE DELETE (which is all it needs.)#2018-01-2700:46favilaso peers never see anything which could be used to write#2018-01-2700:46favilayou still need a transactor running though, to communicate not-yet-indexed tx-log data to peers#2018-01-2700:47favilabah, I misunderstood what you want. 
No, there's no way to make d/transact impossible#2018-01-2700:49favilayou would have to run a peer in a trusted process and have it expose its own RO interface to other processes which use it#2018-01-2700:49favilapeer = trusted#2018-01-2716:19jdubieExactly. I’d like d/transact to fail at a network level#2018-01-2710:57stuarthalloway@jdubie in the peer world, you would implement read-only yourself, as @favila says. But in Datomic Cloud, it is straightforward to use IAM to make a client process read-only. See https://docs.datomic.com/cloud/operation/access-control.html#sec-2#2018-01-2711:30val_waeselynck@drewverlee regarding testing, Datomic is about as good as it gets - consider that you won't have to 'mock' anything thanks to in-memory connections, which makes for an even better testing story than the traditional client-side testing of SQL databases. About storing business logic in the db, you should also consider that it may also cause operational difficulties (deployment, new versions etc.) compared to having your business logic share the ephemeral lifecycle of Peer code. I think one of the main incentives to use stored procedures in the SQL world is to bring the business logic closer to where the data lives, and sometimes to overcome the limitations of SQL - but you don't have those problems on a Datomic Peer, at least not for reads. This testing power and this expressiveness of Peers will probably make for much fewer reasons to use stored code than in the SQL world.#2018-01-2711:37val_waeselynck@drewverlee so my advice would be: don't be too eager to put all your business logic in transaction functions, and don't be too strict about enforcing all invariants upfront. With Datomic you have a lot more testing, querying and debugging power to ensure the correctness of your system than with a SQL database.#2018-01-2717:01drewverleeThanks for the insights. Am I correct in assuming the big tradeoff is control vs. evolvability?
As an extreme example of the control that having business logic in the db offers, the article I posted makes a good observation: there might be data in the database you only want a select few people to know exists at all (fraud detection).
Another example of the control that having business logic in the db offers is invariants on the data. I think this is where the mental model (graph and immutable) that Datomic presents is different than relational databases. In a relational db, you’re much more worried about someone overwriting your “good data” with “bad data”; in Datomic, that’s less of a concern, because you can probably recover (you never throw the record away). So it’s likely to be more open.
You’re claiming that keeping your business logic and db logic separate allows for easier evolvability, in the form of easier deployments, new versions, etc.
This is the side of things I don’t fully understand, simply because I have never had to worry about this sort of thing before 🙂.
But i feel like you have really helped me narrow in on the tradeoffs. Thanks#2018-01-2810:01val_waeselynck@drewverlee it's hard for me to answer because it's not clear what "in the db" means. In the SQL world, "in the db" means "it executes over there" but in Datomic most of your db code is in your app process - because it can#2018-01-2812:59drewverleeI'm trying to compare the relational + sql approach to the datomic + datalog in the context of the discussion around business logic. So in the conversation I talked about both. #2018-01-2814:24val_waeselynck@drewverlee ah right, so this is more about "should I use Datomic" than "how do I use Datomic", correct?#2018-01-2722:26cap10morganOn the Capacity Planning page, it says that if the MemoryIndexMB goes above a certain threshold, then "you need to plan carefully to avoid back pressure." Is anyone aware of what the details are behind that? What should I do if I see it going above that threshold?#2018-01-2722:30cap10morganIf I have plenty of RAM, is it just a matter of increasing memory-index-max and memory-index-threshold?#2018-01-2723:59bkamphaus@cap10morgan increasing them might delay but likely won’t solve the problem (if there is one). Memory index MB is just an indication of how the transactor is keeping up with the load of indexing it has to do. Those two tunable knobs - threshold at which to start an indexing job and when to throttle additional writes may help you find a subtle sweet spot for how bursty your write loads might be, but if the memory-index-mb metric gets high and stays high for long periods of time upping those knobs won’t help much b/c the problem is essentially one of indexing throughput at that point, (not the degree to which your settings allow the transactor to endure bursts of writes). 
At worst it could make indexing/back-pressure more catastrophic when you hit the threshold (essentially halting the system for new writes until you finish a huge indexing job).#2018-01-2800:00cap10morganso what should I do to enable a transactor to sustainably keep up with this load?#2018-01-2812:53stuarthalloway@cap10morgan it depends on the load, can you describe it in some detail?#2018-01-2814:38cap10morganIt's a high number of concurrent website users. They're signing up for our service through a multi-step process. This results in lots of transactions from various microservices to add their data to the database. When I scale up the cluster of services to hundreds of each, I find that things gets worse and Datomic throws back pressure alarms and the MemoryIndexMB gets pretty high. We've recently re-architected one of the services to pipeline transactions w/ transact-async, but I was hoping that we could also get a lot of transactions in flight by having many instances of the synchronous services. But it pretty quickly hits a bottleneck at the transactor, and I'm not sure how to widen it.#2018-01-2813:29laujensenlaujensen [2:25 PM]
I have an immutant app, which has many handlers responding to web requests. These pull out some data, generate some html and store some stuff in datomic. When it hands off to the browser, nothing is left hanging in memory. And still, once per day, sometimes twice, the entire system crashes due to the GC spinning out of control. What am I looking for?
noisesmith [2:27 PM]
> When it hands off to the browser, nothing is left hanging in memory.
do you have proof of this?
[2:29 PM]
doesn't datomic implicitly use a local cache so that you'd have to make new dbs and let the old be gc'd or else the other facts are sticking around in memory?
#2018-01-2813:29laujensenIs the peer library hoarding data?#2018-01-2813:46laujensenIve just run a profiler. As soon as the app is taken into use, datomic hoards 2gb of char[]. How do I control this?#2018-01-2814:10stuarthallowayhi @laujensen — by Datomic do you mean peer, transactor, client, peer server, … ?#2018-01-2814:10laujensenpeer#2018-01-2814:10laujensen@stuarthalloway#2018-01-2814:11laujensenThis is what happens to the heap when I trigger the first few handlers @stuarthalloway, GC does not diminish it, and the heapdump has 2GBs og char[] from Datomic#2018-01-2814:12stuarthallowaylemme try it locally#2018-01-2814:12stuarthalloway(with my own data -- back in a minute)#2018-01-2814:13stuarthallowayhow much JVM RAM are you allowing the peer?#2018-01-2814:16laujensenThe App and Peer have 3Gb#2018-01-2814:16laujensenSame for Transactor#2018-01-2814:17laujensenWould lowering the objectCacheMax reduce performance but guarantee lower memory footprint?#2018-01-2814:18stuarthallowayI was about to say almost that#2018-01-2814:18laujensenMy OCM is 512m#2018-01-2814:19stuarthallowayI just ran a peer on a small database here, confirming it doesn’t hold nearly that much memory#2018-01-2814:19stuarthallowayso you have three variable knobs that relate to the size of your db#2018-01-2814:20stuarthalloway1. objectCacheMax (like you said)#2018-01-2814:20laujensenThis is not a small db (relatively), its 25 websites, each holding about 20 pages, each having quite a bit of html and css stored in the db#2018-01-2814:20stuarthalloway2. memoryIndexMax (unindexed log tail held in all peers)#2018-01-2814:20stuarthalloway3. index nodes held in memory (implied by your usage)#2018-01-2814:22stuarthallowayHTML and CSS do not make good datoms, better to content address them and store in S3#2018-01-2814:22laujensenThat would be better in terms of datomic, but we're rendering webpages in about 260msecs now and I'd hate to see that drop. 
Which reaching into s3 would do#2018-01-2814:23laujensenBut do you have a way for me to calculate a good objectMax for 3Gb total heap ?#2018-01-2814:23robert-stuttafordcore.memoize - put it on S3 but memoize it in memory#2018-01-2814:23stuarthallowayor core.cached or guava — don’t think memoize evicts things#2018-01-2814:23robert-stuttafordor even better use cloudfront and avoid dynamic processing#2018-01-2814:25stuarthallowayif your HTML or CSS have occasional changes halfway down the page and you save the history, can be particularly nasty for Datomic indexes#2018-01-2814:25stuarthallowayroot and branch nodes must store a prefix big enough to distinguish two datoms#2018-01-2814:25laujensenThe HTML/CSS are frequently updated, as any change in the editor updates the datoms#2018-01-2814:26stuarthallowayif those datoms are 50K HTML blobs that change say at the 25K mark, then you will blow up the dir and root nodes#2018-01-2814:26laujensenThen Im pretty sure thats happening routinely#2018-01-2814:26stuarthallowaythis will get worse quickly as you add data#2018-01-2814:27stuarthallowayand I am tallying another implicit vote for blob support in Datomic 🙂#2018-01-2814:27laujensenThis will be problematic to resolve then, as I need to store html/css outside datomic, and implement my own version control, for which Im relying on tx history now#2018-01-2814:28stuarthallowaystore the hash in Datomic and you still get version control#2018-01-2814:28stuarthallowaythe only thing you don’t get is garbage collection#2018-01-2814:28stuarthallowayof S3 or wherever#2018-01-2814:28laujensenYeah hashes would work#2018-01-2814:29laujensenThanks for your input, very valuable! I'll start by reducing the OCM and then work out a file-storage solution#2018-01-2814:29stuarthalloway@laujensen thanks for your feedback! definitely thinking about how to automate this in Datomic#2018-01-2814:30robert-stuttaford@stuarthalloway what’s the backup/restore story for Cloud? could I e.g. 
restore an on-prem db?#2018-01-2814:31stuarthalloway@robert-stuttaford still in development. There is more subtlety with Cloud being encrypted at rest#2018-01-2814:31robert-stuttafordgotcha#2018-01-2814:32stuarthallowayteaser: Cloud’s support for the time model is superior to On-Prem in such a way that On-Prem data is insufficient to directly populate a Cloud database#2018-01-2814:32laujensen@stuarthalloway thanks! And just an FYI, we would have switched to cloud ASAP if there was a good import story.#2018-01-2814:32robert-stuttafordis it the plan to be able to restore from one to the other?#2018-01-2814:32laujensenIdeally, just providing a datomic-uri during the first setup, and having it import everything#2018-01-2814:32robert-stuttafordah 🙂 now i’m super curious about how the time model is superior 🙂#2018-01-2814:32laujensenWe'd also want a backup-db option, but thats prio #2 🙂#2018-01-2814:33stuarthalloway@laujensen hear you loud and clear. The import will need to have knobs to control how semantic differences are handled#2018-01-2814:53robert-stuttafordare these semantic differences still in flux, @stuarthalloway? i’ve read the words you’ve shared thus far - curious what else has changed!#2018-01-2816:09robert-stuttaford@stuarthalloway seems datomic-client doesn’t support [:find [?e ...] :in or [:find ?e . :in or [:find [?e] :in ? [:find ?e works fine. com.datomic/client-pro "0.8.14".
"Only find-rel elements are allowed in client find-spec, see "#2018-01-2817:04marshall@robert-stuttaford that's correct. Clients only support the relation find #2018-01-2817:05marshallThat's true of all clients (for peer server and for cloud)#2018-01-2817:06robert-stuttafordgosh. might want to make that clear in the docs, @marshall 🙂 curious why that is?#2018-01-2817:12marshallhttps://docs.datomic.com/cloud/query/query-data-reference.html#arg-grammar
#2018-01-2817:32robert-stuttafordright - i meant more that it was surprising that client doesn’t support this, and then i couldn’t find explicit mention of this difference in the docs, leading me to think i was doing something wrong#2018-01-2817:51chris_johnsonNewbie question about this, just to help cement my own understanding: would you work around that limitation of the client query model by using a pull as your find-rel and then extracting your tuple or scalar therefrom?#2018-01-2818:02robert-stuttaford@chris_johnson the pull result would still be wrapped in a vec, so even then, you still have to use first or similar to get at the value per result#2018-01-2819:00robert-stuttaforddatomic.api/squuid has no counterpart in Datomic Client. what’s the reasoning for that? it’s the only thing i was still using from Peer in a project i converted. happily i still have Datascript in the project, and could use its impl instead — however, i’m curious - what should folks typically do here?#2018-01-2917:24denik@marshall it would be great if you guys communicated which API omissions of client between on-prem and cloud are permanent vs. temporary. For example, I found find-scalar very useful, and just refactored a ton of my code to use ffirsts due to the omission of it. It would be important for me to understand whether it was removed permanently because it’s considered a bad idea (and why) or because of technical limitations or if we can expect for it or a similar feature to come back.#2018-01-2917:34robert-stuttafordi second this request from @U050CJFRU, @marshall. please let us know what the plan is?#2018-01-2919:54jocrauI found squuids also useful for making the order of a returned collection deterministic when there is no order defined within the business domain.#2018-01-2919:57favila:db/id is generally a better choice for that#2018-01-3014:54jocrauI use Datomic on the JVM and Datascript in JS. 
IMO squuids are easier to create and need no correlation between client/server.#2018-01-2819:04robert-stuttafordalso amusing, needing this pattern to deal with the lack of d/entid / d/ident:
(defn ea->v [db e a]
  (get (d/pull db [a] e) a))
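For what it's worth, the same helper covers both directions (a sketch; the entity id 17 and the :ban/user ident are illustrative, borrowed from earlier in this log):

```clojure
;; d/ident-style lookup: entity id -> its ident (nil if it has none)
(ea->v db 17 :db/ident)
;; d/entid-style lookup: lookup ref -> entity id, via the :db/id pull key
(ea->v db [:db/ident :ban/user] :db/id)
```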
#2018-01-2819:04robert-stuttafordi must say, the client api feels a whole heck of a lot simpler and therefore approachable, so i’m feeling empathy for removing all these conveniences#2018-01-2916:37frankiesardoHello folks, I'm trying out datomic cloud#2018-01-2916:37frankiesardoI have set it up and I can connect to the instance on aws through the bastion using the ssh proxy#2018-01-2916:38frankiesardoNow I'm deploying a simple lambda function to test the connection and I get an unavailable error#2018-01-2916:38frankiesardo"errorMessage": "Unable to connect to system: {:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message \"Connect Timeout\" ..}#2018-01-2916:39frankiesardoI'm using the same config I use for the basion connection, removing the proxy-port map#2018-01-2916:39frankiesardo{:server-type :cloud
 :region "eu-west-1"
 :system "datomic-cloud-demo"
 :query-group "datomic-cloud-demo"
:endpoint ""}#2018-01-2916:40frankiesardoAnd I have the lambda running on the same VPC as datomic#2018-01-2916:40denik@marshall FYI client-api docs linked to from cloud (https://docs.datomic.com/cloud/client/client-api.html) point to on-prem docs in the q docstring (https://docs.datomic.com/client-api/datomic.client.api.html#var-q), since there are two client libraries now (cloud and on-prem), it should probably point to both or have separate docs?#2018-01-2916:45marshall@denik yes, thanks - I’ll take a look at all the urls in the docstrings#2018-01-2916:51marshall@frankie you’d have to authorize security group ingress from the security group you’re running the lambda in#2018-01-2916:52marshallsee https://docs.datomic.com/cloud/operation/client-applications.html#2018-01-2916:52marshallyou can use the provided $(SystemName)-apps security group#2018-01-2916:52marshallit should have the correct permissions#2018-01-2916:52frankiesardoExcellent, thanks!#2018-01-2916:54frankiesardoOk, now I've got a different error#2018-01-2916:54frankiesardo"errorMessage": "Unable to execute HTTP request: Connect to [] failed: connect timed out",
#2018-01-2916:56marshallwhat IAM role are you using for the lambda?#2018-01-2916:57marshallit will need IAM permissions to access Datomic Cloud#2018-01-2916:57marshallhttps://docs.datomic.com/cloud/operation/access-control.html#2018-01-2916:59marshallAlso, your lambda security group needs to allow outbound connections to S3#2018-01-2916:59marshall@frankie ^#2018-01-2917:05frankiesardoMhmhm, I've added that but it didn't change the error#2018-01-2917:05frankiesardoI will have a closer look, but thanks for the help so far!#2018-01-2918:50robert-stuttaford@stuarthalloway are :db.type/keyword less storage efficient than :db.type/ref-as-enum? other than the cool VAET trick where you can d/entity-reverse-walk to references of an enum value, why would I use enums over keywords? so far, i’m finding that the api for keywords is far nicer — no having to handle idents in d/pull or d/q when using keywords, for instance.
it does seem to be faster to use enums over keywords in Datalog, likely because of the under-the-hood switch to entity ids there#2018-01-2918:59stuarthalloway@robert-stuttaford I would not worry about perf — use enums only if you need them for something keywords can’t do#2018-01-2919:00robert-stuttafordright - such as adding other AVs alongside the ident#2018-01-2919:01robert-stuttafordthank you#2018-01-2919:01stuarthalloway@robert-stuttaford going back to an earlier question: squuids are ancient, not particularly important since http://blog.datomic.com/2014/03/datomic-adaptive-indexing.html#2018-01-2919:02robert-stuttafordwow. shows how ancient my knowledge is 🙂 i’ve been religious about using squuid on our team. one less thing to worry about…#2018-01-2919:02stuarthalloway@robert-stuttaford @denik updating the docs about other On-Prem/Cloud questions, will update you here#2018-01-2919:02robert-stuttafordthank you Stu - your quick response is appreciated!#2018-01-2919:14marshallhttps://docs.datomic.com/on-prem/clients-and-peers.html#peer-only < Updated differences between clients & peers#2018-01-2920:15donmullen@marshall - is there a similar table or doc comparing on-prem client api to peer server and cloud/client?#2018-01-2920:15marshallno(t yet) 🙂#2018-01-3006:07robert-stuttafordthank you @marshall!#2018-01-2919:15marshall@robert-stuttaford ^#2018-01-2919:46jocrauI am currently experimenting with the Client API using [com.datomic/client-pro "0.8.14"] (rather than [com.datomic/client-cloud "0.8.50"]). It seems as if the Client does neither implement (delete-database [_ arg-map]) nor (create-database [_ arg-map]) of the Client protocol. Is that by design or just for now?#2018-01-2919:48marshallThat’s correct @jocrau - using client with Datomic on-prem you’ll need to do the delete or create database calls from a peer#2018-01-2919:50jocrauThanks @marshall. Will that also be possible from a client in the future?#2018-01-2919:50marshallnot sure. 
I’ll double check with the team and also try to clarify in the docs#2018-01-2921:15ghadi@stuarthalloway what aspect of Adaptive Indexing obviates the need for squuids?#2018-01-2921:36stuarthalloway“Sustainable import rates independent of db size” could also include “… and independent of distribution of data values”#2018-01-2921:57stuarthalloway@souenzzo @val_waeselynck the explanation of as-of + with at https://docs.datomic.com/cloud/time/filters.html#as-of-not-branch covers all usage in Datomic, whether cloud or on-prem, client or peer. Does that answer the original question?#2018-01-3009:34val_waeselynck@stuarthalloway well, if you confirm that long-lived db values may get resolved via asOf on the server-side, it follows that with() is broken on clients#2018-01-3000:59kennyWhy is #datom[17592186045433 87 "start" 13194139534330 false] included twice in the :tx-data for the last transaction in this snippet:?
(let [conn (d/connect db-uri)]
  @(d/transact conn [{:db/valueType   :db.type/string
                      :db/cardinality :db.cardinality/one
                      :db/ident       :test/attribute}])
  (let [{:keys [tempids]} @(d/transact conn [{:db/id          "start"
                                              :test/attribute "start"}])
        id (get tempids "start")]
    (:tx-data
     @(d/transact conn [[:db/add id :test/attribute "new"]
                        [:db/retract id :test/attribute "start"]]))))
=>
[#datom[13194139534330 50 #inst"2018-01-30T00:58:22.745-00:00" 13194139534330 true]
#datom[17592186045433 87 "new" 13194139534330 true]
#datom[17592186045433 87 "start" 13194139534330 false]
#datom[17592186045433 87 "start" 13194139534330 false]]
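Since datoms compare by value, the explicit de-dupe kenny mentions can be a one-liner; a minimal sketch (the function name is made up):

```clojure
;; Drop duplicate datoms from a tx-report's :tx-data before forwarding it
;; on, e.g. to DataScript or a Kafka topic:
(defn dedupe-tx-data [tx-report]
  (update tx-report :tx-data #(into [] (distinct) %)))
```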
#2018-01-3001:20marshall@kenny you don't need to explicitly retract "start". The new value will 'upsert' and the retraction will be added automatically #2018-01-3001:21marshallI have a theory as to why you see it in your example. If I'm correct it could be considered a bug, but won't influence anything negatively #2018-01-3001:21kennyRight but in this case the transaction data is generated based on a "test" transaction with d/with.#2018-01-3001:22marshallThe example isn't using with#2018-01-3001:23kennyI pasted the generated transaction data so others could replicate the behavior.#2018-01-3001:23marshallAh#2018-01-3001:24kennyI can explicitly de-dupe to work around it but it doesn't seem like the :tx-data was meant to include duplicate datoms.#2018-01-3001:25marshallIs the duplicate causing a problem? #2018-01-3001:25kennyYes. DataScript doesn't like it 🙂#2018-01-3001:25marshallAh#2018-01-3001:25kennyPlus we sync all transaction data to a Kafka topic and this will produce lots of duplicate data.#2018-01-3001:26marshallI'll bring it up with the team tomorrow. #2018-01-3001:26kennyAwesome. Thanks!#2018-01-3001:48kennyActually, I spoke too soon. DataScript doesn't appear to be affected by it. The duplicate data argument still holds, however.#2018-01-3001:49kennyData would be duplicated in a Kafka topic and use additional bandwidth to send to every connected client.#2018-01-3003:51caleb.macdonaldblackI want to store my entity defaults in Datomic. Can I attach this to the attribute somehow? Can I create a custom attribute for my entity schema? Or do I need to have two separate attributes like: entity/attr-a entity/attr-a-default#2018-01-3006:09robert-stuttaford@caleb.macdonaldblack you can assert additional facts onto the attr itself {:db/id :your/attr :your/attr-default-value <value>} of course, this means you need a definition of :your/attr-default-value with the same type 🙂#2018-01-3006:13caleb.macdonaldblack@robert-stuttaford Thanks!
I think that’s what I’m looking for#2018-01-3010:48sleepyfoxQuestion: if I want to write black-box tests for a REST API that is powered by datomic, is it possible to run a dev instance of datomic as a docker image with a canned test data-set?#2018-01-3010:51robert-stuttafordyep @sleepyfox - if the transactor’s storage is inside the docker image (which dev does via an h2 database)#2018-01-3010:52robert-stuttafordnot sure if docker filesystems are mutable though? the transactor would need to be able to add stuff if you’re testing transactions#2018-01-3010:52robert-stuttafordi don’t know docker at all 😉#2018-01-3010:53sleepyfoxI was wondering whether anyone had actually tried this before, or whether I am missing a trick that actually makes this unnecessary...#2018-01-3010:53sleepyfoxDon't worry about Docker, it can do everything that I need it to, my question isn't really about Docker, but rather testing (micro)services backed by datomic#2018-01-3010:54robert-stuttafordwell, you could just use an in memory database, but that’s not black-box, because you’d need extra code to set the db up#2018-01-3010:54robert-stuttafordit is much simpler though, because it can work basically the same as fixtures for unit tests#2018-01-3010:55robert-stuttafordone option is to prepare a database with everything you need, back it up, then restore that db to a fresh transactor and provide that transactor uri to your service#2018-01-3010:56robert-stuttafordthat at least makes the process repeatable#2018-01-3010:56robert-stuttafordand lets you iterate on the data and the tests without touching the service#2018-01-3010:56sleepyfoxYes, it needs to be deterministic#2018-01-3017:20donmullenWhat are some of the things that might cause this error - a query that pulls in a rule : java.lang.Exception: processing rule: (q__30187 ?job-num ?latest-action-date), message: processing clause: (is-large-alt? ?e), message: java.lang.ArrayIndexOutOfBoundsException: 14, compiling:(NO_SOURCE_FILE:47:27)
#2018-01-3017:46donmullenSeems strange that I get processing rule and java.lang.ArrayIndexOutOfBoundsException after the query has been running for a while.#2018-01-3017:50donmullenTaking out the attributes in the query that are also referenced in the rule seems to remove the exception. I’m new to using rules - but that doesn’t seem like something that would be disallowed.#2018-01-3017:51donmullen@marshall any thoughts here?#2018-01-3018:07marshallcan you share the query and rule?#2018-01-3018:07marshallsorry just saw you did#2018-01-3018:07marshallone minute#2018-01-3018:12marshalli think this is because you’re asking for ?prop-area twice#2018-01-3018:12marshallin your find spec#2018-01-3018:14marshalluser=> (d/q '[:find ?e ?e :in $ :where [?e :db/doc _]] (d/db conn))
ArrayIndexOutOfBoundsException 1 clojure.lang.RT.aset (RT.java:2376)
user=>
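In other words, a find spec should bind each variable only once; if the result genuinely needs the value twice, duplicate it after the query returns. A small sketch of that workaround:

```clojure
;; Ask for ?e once, then shape the result client-side:
(->> (d/q '[:find ?e :in $ :where [?e :db/doc _]] (d/db conn))
     (map (fn [[e]] [e e])))
```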
#2018-01-3018:18marshall@U09AQ4KB2 ^#2018-01-3018:33donmullenDoh! 😦 Thanks @marshall.#2018-01-3018:34marshallnp#2018-01-3017:27devnDid the some of the Datomic videos go away? Specifically, the ones from Datomic conf?#2018-01-3017:27devnWas looking for Tim Ewald's talk on reified transactions, specifically.#2018-01-3017:41devnnevermind, found 'em https://docs.datomic.com/on-prem/videos.html#2018-01-3017:56sleepyfoxI'm trying to use a value instead of a db like so:
(d/q '[:find ?last ?first :in [?last ?first]]
     ["Doe" "John"])
ExceptionInfo Query args must include a database clojure.core/ex-info (core.clj:4739)
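For comparison, the gist's db-less form does work with the Peer library, whose query engine can bind plain collections without a database; a sketch assuming datomic.api is on the classpath:

```clojure
(require '[datomic.api :as peer])

;; The Peer query engine happily binds a tuple input with no db argument:
(peer/q '[:find ?last ?first
          :in [?last ?first]]
        ["Doe" "John"])
;; => #{["Doe" "John"]}
```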
#2018-01-3017:57sleepyfoxI'm following this gist: https://gist.github.com/stuarthalloway/2645453#2018-01-3017:58sleepyfoxAnd I'm using (:require [datomic.client.api :as d]) with [com.datomic/client-cloud "0.8.50"] in my :dependencies#2018-01-3017:59sleepyfoxBut I get the 'Query args must include a database' error as shown above. What am I doing incorrectly?#2018-01-3018:05jocrau@sleepyfox AFAIK the Client API does not contain query capabilities itself but sends the query to the peer server. Thus, it lacks the capability to process collections as a DB value. You might have to use the Peer library for that.#2018-01-3018:05sleepyfoxI was afraid you were going to say that.#2018-01-3018:06jocrauhttps://docs.datomic.com/on-prem/architecture.html#storage-services helped me to understand the bigger picture.#2018-01-3018:06sleepyfoxCan you use the peer library with Cloud?#2018-01-3018:06sleepyfoxI'm guessing that's a 'no'...#2018-01-3018:07jocrauYes: No 😉#2018-01-3018:07jocrauSee https://docs.datomic.com/on-prem/clients-and-peers.html#comparison#2018-01-3018:07sleepyfoxContext: we want a clean and simple way to test code, and mocking the db by passing it as a value seemed like a great way.#2018-01-3018:10jocrauI did some experiments with the Client API. But I got stuck when I needed an easy way to mock Datomic for testing purposes, like:
(def ^:private uri (format "datomic:%s" (datascript/squuid)))

(defn- scratch-conn
  "Create a connection to an anonymous, in-memory database."
  []
  (d/delete-database uri)
  (d/create-database uri)
  (d/connect uri))
#2018-01-3018:13sleepyfoxYup, this is the kind of thing that I wanted to use.#2018-01-3018:14marshallYou can use Peer Server to launch a mem database#2018-01-3018:14marshallfor local testing#2018-01-3018:15marshallalternatively, you can easily tear off a testing db in your cloud system#2018-01-3018:15marshalli.e. have a Datomic Cloud system for dev/testing and create a database, use it, then delete it#2018-01-3018:15sleepyfoxYup. I was hoping to not have to switch between the Client and Peer APIs between actuals code and tests#2018-01-3018:15marshallusing Peer Server would not require you to switch to the peer API#2018-01-3018:16sleepyfoxI'd rather be able to mock out a db instead of creating an actual 'test' db in Cloud#2018-01-3018:16sleepyfoxBut it seems like that isn't an option using the Client API only. Ah well.#2018-01-3018:17jocrauLaunching a local Peer Server locally to run a mem database would be a good compromise. But the current lack of support for delete-database and create-database makes creating a scratch-conn a bit messy.#2018-01-3018:17marshalltake a look at the very top of this https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html#2018-01-3018:17marshalllaunching peer server with the mem database option creates the DB#2018-01-3018:18marshallyou don’t need to call create-database specifically#2018-01-3018:18sleepyfoxThanks @marshall - I understand that I can spin up a Peer server to do this, but I'd prefer something more lightweight.#2018-01-3018:19jocrau@marshall That’s right, but wouldn’t I have to either restart the Peer Server or retract the previous test facts to get a “clean slate” for the next test?#2018-01-3018:20marshallyep#2018-01-3018:20marshallor start it with a few dbs#2018-01-3018:21marshall$ bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d hello,datomic: -d hello2,datomic: -d hello3,datomic:
Serving datomic: as hello
Serving datomic: as hello2
Serving datomic: as hello3
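Once the peer server is up, the Client API connects using the access key and secret passed on that command line; roughly like this (shape as in recent client-pro releases, so details may differ by version):

```clojure
(require '[datomic.client.api :as d])

;; Credentials and endpoint match the bin/run invocation above:
(def client
  (d/client {:server-type        :peer-server
             :access-key         "myaccesskey"
             :secret             "mysecret"
             :endpoint           "localhost:8998"
             :validate-hostnames false}))

(def conn (d/connect client {:db-name "hello"}))
```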
#2018-01-3018:22timgilbertThere's also https://github.com/vvvvalvalval/datomock which is super useful for this kind of testing scenario#2018-01-3018:22timgilbertOh, but it's peer-only, never mind#2018-01-3018:23marshallTesting / dev is definitely one intended target of Cloud solo topology#2018-01-3018:23marshallinternally we use a solo system that is up all the time to provide tear off dbs for that kind of thing#2018-01-3018:26rapskalianSpeaking of datomic:mem://... connections, what does datomic.memoryIndexMax default to here? I'm currently trying to capacity plan and having trouble understanding the balance between this and the object cache...#2018-01-3018:27marshall@cjsauer https://docs.datomic.com/on-prem/caching.html#memory-index#2018-01-3018:28marshallmem databases don’t have a persistent index (by definition), so they don’t have indexing jobs. they are entirely “memory index”#2018-01-3018:29marshalladditional info https://docs.datomic.com/on-prem/capacity.html#transactor-memory#2018-01-3018:29rapskalianI see. I'm able to get this error to occur in my experiments: Caused by: java.lang.IllegalArgumentException: :db.error/not-enough-memory (datomic.objectCacheMax + datomic.memoryIndexMax) exceeds 75% of JVM RAM
If objectCacheMax defaults to 50% of VM memory, I imagine memoryIndexMax must be set to something in order to exceed 75%. This is where my question is coming from.#2018-01-3018:30marshallboth values are set in your transactor properties file#2018-01-3018:30rapskalianEven for an in-memory database? I'm just connecting to datomic: for this test.#2018-01-3018:31marshallwhat’s your xmx setting?#2018-01-3018:31rapskalian300m I believe. Trying to "reverse engineer" how these capacity settings work, that's why it's so low.#2018-01-3018:32marshallyeah, that’s unlikely to work to run a memory db#2018-01-3018:33rapskalianI'm able to get it to run by setting the objectCacheMax system property really low, say 50m#2018-01-3018:33rapskalianSo I was assuming there was some sort of implicit memory index max too#2018-01-3018:33marshallthere probably is; unsure what it is set to#2018-01-3019:32rapskalian@marshall I'm thinking it's something arbitrary from my tests. Appreciate the help, your links provide plenty of context 🍻#2018-01-3020:58alexkEven though I’ve got write-concurrency=2 in my transactor’s properties, and allocated a write capacity of 400 (!) for DynamoDB, I’m still getting throttled writes…I’m a bit surprised. How have you dealt with the limited nature of DynamoDB when running a transactor on it, how do you judge how much to bump capacity during a bulk import, etc.?#2018-01-3020:59marshall@alex438 We have customers running sustained AWS write capacity of 1500, with a setting of 4000 for bulk imports.
I would say 400 is on the low end for an active production system and I’m not surprised you’re getting throttled during a bulk import#2018-01-3021:00marshallyou can either increase the capacity or provide some throttling at your load#2018-01-3021:00marshalli.e. reduce the rate at which you’re transacting#2018-01-3021:04alexkIntriguing. Lots to review, thanks.#2018-01-3021:12marshall@alex438 One value prop of Datomic Cloud is that it doesn’t use Dynamo the same way and a similar write load against the system can be handled with a much lower Dynamo throughput setting#2018-01-3021:13marshallin many of our internal experiments the Dynamo autoscaling with Datomic Cloud rarely even hits 100 while running a large batch import#2018-01-3021:13alexkNeat, and what about backing a transactor by Postgres?#2018-01-3021:14marshallOn-Prem can run with Postgres storage. it works well, we have a lot of customers doing so
You would need to provision a sufficiently beefy PG instance to handle the write load; it may or may not be a win vs. DDB on the cost front#2018-01-3021:15alexkgot it, thanks#2018-01-3107:44laujensen@stuarthalloway / @marshall. We’re about ready to remove CSS/HTML from the Database (due to missing blob support), but I assume I’ll have to nuke the history of these entities in order to reduce the memory load. How do I go about that?#2018-01-3108:16robert-stuttaford@laujensen how old is your db? you may have a better time of it if you build a new db by replaying the transaction log and eliding the data you want removed from the transactions. @stuartsierra calls this ‘decanting’#2018-01-3108:17laujensen18 months#2018-01-3108:18robert-stuttafordthe only way to actually remove data is to excise it, which has very poor performance semantics. it’s not meant for large data removals#2018-01-3108:25laujensenYikes. Hope I wont have to set :no-history, dump the db and re-import it, then re-add history#2018-01-3108:28robert-stuttaforddo you care about the history of this data, or only the present state, @laujensen? 
because you could just build a new db with the present state#2018-01-3108:28robert-stuttafordyou can always keep the original db around to answer history questions#2018-01-3108:28laujensenI do care about the history, but the large blobs we’ve saved are killing performance, so if I have to make a choice, I choose speed#2018-01-3108:28laujensenAnd yes, I can find a way to fake the data#2018-01-3108:29robert-stuttafordi guess what i’m asking is, does your app use historical data, or is it only accessed ad-hoc manually#2018-01-3108:29robert-stuttafordif only ad-hoc, i’d say start afresh and archive the current one 🙂#2018-01-3108:44laujensenWe use it for many things, among others you can pull up past versions of any webpage on your site#2018-01-3112:28robert-stuttafordyeah, then decanting is your best long-term option#2018-01-3114:03marshallhttps://docs.datomic.com/on-prem/javadoc/datomic/Datom.html#2018-01-3114:03marshall@dbernal ^ for the java doc#2018-01-3114:03marshallthere’s also the clojure documentation#2018-01-3114:04marshallhttps://docs.datomic.com/on-prem/clojure/index.html#datomic.api/datoms#2018-01-3114:04dbernal@marshall ok thank you!#2018-01-3116:27uwo@val_waeselynck sync-schema would fail on a forked connection, no?#2018-01-3117:02val_waeselynck@U09QBCNBY I'm assuming you're talking about Datomock? Why would it fail?#2018-01-3117:10uwooh I was just poking in the dark really. I was testing a migration against a forked connection that added unique, which of course requires adding an index first and sync-schema’ing. I had an odd failure, but it must be something on my end#2018-01-3117:10val_waeselynckIn Datomock, sync is supported and easy to implement (since the coordination is only local).
From what I understand, sync-schema does less work than sync, and therefore is supported as well.#2018-01-3117:10uwothanks!#2018-01-3117:10val_waeselynckHowever, the burden of not forking too early is on you;#2018-01-3117:11uwotoo early would be when?#2018-01-3117:11val_waeselynckso maybe you'll want to sync-schema then datomock.api/fork.#2018-01-3117:11uwowouldn’t that defeat the point of testing the migration?#2018-01-3117:14val_waeselynck@U09QBCNBY I'm not sure what you're doing so it's hard to answer. If you're dry-running a migration on a connection that was forked via Datomock, there's no syncing to do - just wait for the transaction to return and you're good. If you've run a migration on an actual Transactor and need to see the effects of that in a forked Connection, then you can sync-schema the real connection then fork.#2018-01-3117:16uwoah. hmm. so this is a dry run migration that adds unique to an existing attribute. That requires first transacting :db/index true for the existing attr, then calling sync-schema, and then asserting unique.#2018-01-3117:16val_waeselynck@U09QBCNBY if you want to master the principles behind Datomock, here's a crash course 🙂 Imagine Datomic connections are Clojure agents holding db values, and (defn fork-conn [conn] (agent @conn)). That's it!#2018-01-3117:19val_waeselynck@U09QBCNBY but do keep me posted if my assumptions seem incorrect, maybe Datomic does some dark magic in this case that I don't know about. 
Datomock basically gives you everything that datomic.api/with provides.#2018-01-3116:27uwoman, I can’t type today 🙂#2018-01-3117:39stuarthalloway@laujensen one other option to consider: store the blobs in Datomic with a content-address prefix in the value itself#2018-01-3117:39stuarthallowayyou would have to write application logic everywhere such values were used to strip/add the prefixes#2018-01-3117:40stuarthallowaypretty gross, but in the spirit of covering all options…#2018-01-3121:27alexkWhat’s the natural way to truncate a Datomic database? I know that I could retract every entity, but that seems like a lot of work. So far I’ve been deleting the database and creating another with the same name. That definitely “truncates” the database, but it leaves any currently-connected clients in a bad state. Truncation is useful because I want to migrate data into a clean database each time.#2018-02-0100:47Joe LaneIs there any way to emulate the behavior of the tx-report-queue in this code snippet with datomic-cloud?
(defn react-on-transactions! [conn]
  (let [tx-queue (d/tx-report-queue conn)]
    (thread
      (while true
        (let [txn (.take tx-queue)]
          (try
            (when (:tx-data txn)
              (on-transaction txn))
            (catch Exception e
              (log/error e "There was an error during processing of datomic transaction"))))))))
I really like the idea of being able to subscribe to transactions but I read that with datomic cloud only the client api is supported.#2018-02-0101:06Joe LaneSo I read above, found the approach that @luchini suggested with tx-log and I’m going through https://github.com/cognitect-labs/day-of-datomic-cloud searching for usages of tx-log. Maybe polling for transactions since the last basis-t is the best approach here.#2018-02-0114:47mgrbyteIs there any way to speculatively transact fixtures into a db using d/with that a transaction function can be tested with?#2018-02-0114:56val_waeselynckIf I understand your question correctly, yes, just do what you would do with transact#2018-02-0115:27mgrbyteAm probably doing it wrong.
trying to test "updating" (via cas);
Using d/with, sending in the fixture, then attempting to test my client update fn (that does a transact)#2018-02-0116:06mgrbytenm, sorted it; was using a combination of ideas from two different blog posts and got confused#2018-02-0121:23mgrbytethanks @val_waeselynck, sorry for the noise 🙂#2018-02-0117:15chrjsHey folks. Is it possible to store small data structures as values without converting them to a byte array or string? For instance
[?entity-id :some/attribute {:a 1 :b 2} ?transaction_id]
#2018-02-0117:16chrjsIf so, what is the :db/valueType to use for such a thing? (The lack of an obvious answer to that makes me think that I would have to convert to byte array or string).#2018-02-0117:19val_waeselynck@chrjs not possible at present, but there are probably feature requests for that. My approach is to serialize as EDN, JSON or Fressian, my choice depending on the requirements in terms of availability, speed and readability of these various formats#2018-02-0117:19chrjsOk, great, thanks @val_waeselynck 👍#2018-02-0119:44chris_johnsonHas anyone standing up or updating a recent on-prem-flavor CloudFormation stack run into this: instances refuse to converge, stopping right at launch, and in the system log is Cloud-init v. 0.7.6 finished at Thu, 01 Feb 2018 19:30:16 +0000. Datasource DataSourceEc2. Up 18.22 seconds
mount: special device /dev/sdb does not exist
Stopping sshd: [ OK ]
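Returning to the earlier question about storing small data structures as values: val_waeselynck's serialize-to-a-string approach, sketched here with EDN and a hypothetical :db.type/string attribute (e.g. :some/attribute-edn):

```clojure
(require '[clojure.edn :as edn])

;; Round-trip the structure at the application boundary, storing only the
;; string in Datomic:
(defn ->edn [m] (pr-str m))
(defn edn-> [s] (edn/read-string s))

(->edn {:a 1 :b 2})      ; => "{:a 1, :b 2}"
(edn-> "{:a 1, :b 2}")   ; => {:a 1, :b 2}
```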
#2018-02-0119:45chris_johnsonAfter which the datomic .zip gets downloaded successfully, but startup.sh appears to silently die and then the machine begins orderly shutdown#2018-02-0119:45chris_johnsonNot sure if startup.sh is failing or the box is just kindly letting the download finish while stopping#2018-02-0119:45marshall@chris_johnson the most common cause of transactor startup failure is license expired or key put in improperly#2018-02-0119:55chris_johnsonIt looks correct but I am re-building the template JSON “from scratch” just to make sure#2018-02-0119:56marshalland you’re certain your license is valid for the version you’re running?#2018-02-0119:56marshallyou can test it by trying to launch that same version locally with the same properties file#2018-02-0120:02marshall*by same i mean of course with a different storage 🙂#2018-02-0120:03marshallhowever, if you have AWS creds locally, you could also run a local transactor against your Dynamo storage to make sure that all works as well#2018-02-0120:35chris_johnson@marshall Who do I talk to if I think a license should be valid for the latest version of Datomic Pro and the license validator disagrees? 😇#2018-02-0120:36marshallthat would be me 🙂 Can you send me an email?#2018-02-0120:36marshall<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>#2018-02-0121:46dbernalwhat's the most efficient way of returning the last 10 transactions for a particular query?#2018-02-0208:26val_waeselynck@U2JACTBMX what does it mean for a transaction to be 'for' a query?#2018-02-0214:01dbernalSorry if I seem unclear, I'm still getting the hang of Datomic. Let's say I have an attribute that is of db.type/ref and cardinality of many. How can I efficiently retrieve the most recently transacted datoms for this attribute in some entity that I've also queried for. 
That is to say the most recently added edges from entity A (one that was selected for in the query) to the other entities pointed to by the ref type attribute#2018-02-0214:05val_waeselynck@U2JACTBMX you can retrieve all the datoms for that entity and attribute with (d/datoms (d/history db) eavt my-eid my-attr), or in Datalog (d/q [:find [?e ?a ?v ?t ?op] :in $h ?e ?a :where [$h ?e ?a ?v ?t ?op]] (d/history db) my-eid my-attr)#2018-02-0214:29dbernal@val_waeselynck I'll try that out. Thank you!#2018-02-0214:09frankiesardoSo let's say I have some code that uses the datomic cloud infrastructure and I want to test it locally#2018-02-0214:09frankiesardoIs there a way to use the in-memory datomic db to test the code that uses datomic.client.api?#2018-02-0214:13stuarthalloway@frankie I would encourage you to try testing against Solo (possibly plus with-db)#2018-02-0214:14frankiesardoWhat about unit tests?#2018-02-0214:14stuarthallowaypeople mean a lot of different things when they say “unit”#2018-02-0214:15stuarthallowayA test for assembling a transaction doesn’t need to touch the db at all — it is a value prop of Datomic that you can “stay in data”#2018-02-0214:16stuarthallowayonce you are careful about that separation, I rarely find the need for mocks/stubs or in-memory dbs#2018-02-0214:17stuarthalloway(note to self: blog post about how I would test a Datomic app)#2018-02-0214:17frankiesardoThat would be an interesting read#2018-02-0214:17frankiesardoLet's say I want to test some code that transact some data into the db and then queries it to retrieve part of it based on some user input#2018-02-0214:18frankiesardoI can check that the transaction data structure is correct and the query is correct based on my understanding of datomic, but I would like to have the confidence that a successful round trip gives me#2018-02-0214:19frankiesardoParticularly testing :offset and :limit which are client only#2018-02-0214:20stuarthallowaythat sounds fine (to a point), but what are 
you saving by avoiding the db?#2018-02-0214:21frankiesardoDo you mean 'avoiding the cloud db'? The fact that I can run many tests on my machine without needing a connection to aws#2018-02-0214:21stuarthallowaybecause you don’t have internet connection?#2018-02-0214:23frankiesardoPossibly. Possibly because I want to do some crazy stuff with autogenerated data and run many many tests without impacting the bill. Possibly because we want to all run tests in our team without having to spin a aws db for each one of us#2018-02-0214:23frankiesardoI got used to the convenience of having a fully functional in memory db for testing and not having it feels like a step back#2018-02-0214:23stuarthallowayit is a tradeoff for sure#2018-02-0214:24stuarthallowayI think it is valuable to break these cases apart as much as possible#2018-02-0214:25stuarthallowayre: running up the bill — if you are testing intensively enough to do that you are probably doing perf testing and so I think you want to run against your planned topology#2018-02-0214:25stuarthallowayre: spinning a db for each. 
Fair point, OTOH we have worked hard to make that easy#2018-02-0214:26stuarthallowayHere’s how the Datomic team does it:#2018-02-0214:26stuarthalloway(1) we have a solo topology system for department information (dogfooding, not perf sensitive)#2018-02-0214:27stuarthalloway(2) we have a solo for devs to dev against (create and delete dbs as needed, named with prefixes so we don’t get in each other’s way)#2018-02-0214:28stuarthalloway(3) CI uses another solo (db names have UUID components so tests have isolation)#2018-02-0214:28robert-stuttaford@frankie, you can stand up a free transactor and a peer server locally and use client api with that#2018-02-0214:28stuarthalloway(4) app staging on prod topology#2018-02-0214:29stuarthalloway(5) pop-up additional prod topology systems when we want isolation for perf testing#2018-02-0214:29robert-stuttafordloving reading about your testing setup, Stu!#2018-02-0214:29stuarthalloway@frankie and you can totally do what @robert-stuttaford said too 🙂 I am just laying out the future we are trying to make#2018-02-0214:30frankiesardo@robert-stuttaford Yes I was thinking about that, it's just a little awkward to shell out to the datomic starter edition (I guess the way to do it is this one https://docs.datomic.com/on-prem/first-db.html)#2018-02-0214:31robert-stuttafordit’s honestly not that tough, @frankie - it’s a once off investment, take you a couple hours to get set up to your liking, and then you have it. 
we have a zsh-plugin with everything in it 🙂 (we’re Peer based, but i’m dabbling with Client right now)#2018-02-0214:31frankiesardoThanks for taking the time to explain how the datomic team does it @stuarthalloway#2018-02-0214:31robert-stuttafordi have 2 letter shell aliases to start txor and peer server#2018-02-0214:32stuarthalloway@robert-stuttaford and similar aliases for launch missiles and ejector seat I hope 🙂#2018-02-0214:32robert-stuttafordyeah, those are the capitalised ones!#2018-02-0214:33frankiesardo@stuarthalloway would you consider in the future the ability to include the peer server as a jar dependency you can start within your test process?#2018-02-0214:33frankiesardoOr there is some kind of technical/licence limitation here#2018-02-0214:35stuarthalloway@frankie feature suggestions always welcome! I think it more likely that we would build a dedicated test-local feature a la DDB local mode#2018-02-0214:36stuarthallowaybut before we do I am going to continue advocating “think differently” about testing 🙂#2018-02-0214:36frankiesardoAh, fair point 🙂 We're definitely going to try out the Datomic Team approach#2018-02-0214:37frankiesardoBut something tells me I'm not the only one who's going to miss the in memory convenience of peer datomic#2018-02-0214:38stuarthallowayno doubt, and code-local-with-nodes features are an active area of consideration for Cloud#2018-02-0215:19val_waeselynck@stuarthalloway sounds interesting, can you clarify what you mean by 'code-local-with-nodes'?#2018-02-0214:38sleepyfoxWhat are the differences between the peer and client apis with respect to swapping out client for peer to be able to test locally with a :mem db?#2018-02-0214:39frankiesardo@sleepyfox :offset and :limit , for one#2018-02-0214:39stuarthallowaySee https://docs.datomic.com/on-prem/moving-to-cloud.html#2018-02-0214:39robert-stuttafordyou need a peer-server to connect to with Client, @sleepyfox, and the api is substantially simpler 
https://docs.datomic.com/client-api/datomic.client.api.html#2018-02-0214:39sleepyfoxI have already run up against the fact that pull doesn't seem to work with the client API#2018-02-0214:40stuarthalloway@sleepyfox pull works with the client API#2018-02-0214:40sleepyfox(we're using Cloud)#2018-02-0214:40robert-stuttaford@sleepyfox if you’re curious, i converted my blog code from peer to client recently, to begin learning about this https://github.com/robert-stuttaford/stuttaford.me/commit/ae31ad1899b7977adc74064227c0a126c1d39662 - it’s a toy app and so therefore a toy migration, but some noticeable changes there#2018-02-0216:21Hendrik PoernamaCurious about your architecture here. Seems like you are passing datoms to cljs and perform query using datascript?#2018-02-0221:58robert-stuttafordyes; because the dataset is small, i wanted to have an instant-search experience on the client. check it out: http://www.stuttaford.me/codex/#2018-02-0319:42Hendrik Poernamalooks like something is broken in the current build. Getting this in my browser Uncaught Error: ... is not ISeqable#2018-02-0319:43Hendrik Poernamainteresting titles on the blog posts though, I'm gonna read 🙂#2018-02-0320:44robert-stuttafordThanks - pr-str print length issue. resolved, @U7Q9VAXPT!#2018-02-0214:40stuarthallowaybut more than one person has said that, wonder where the confusion comes from#2018-02-0214:40sleepyfoxReally? 
OK, then it's our fault, we'll have to investigate that further#2018-02-0214:41sleepyfoxThanks @robert-stuttaford - I'll take a look at that#2018-02-0214:42sleepyfoxWe tried pull with the client API fronting Cloud, and got an exception which led us to believe that that feature wasn't supported by the Client API#2018-02-0214:42sleepyfoxsomething to do with the API not implementing a protocol#2018-02-0214:43stuarthalloway@sleepyfox keep in mind that pull-many is not (and never has been) needed as a separate API, as it is basically a shim on query+pull#2018-02-0214:43sleepyfoxwe were just trying pull, not pull-many#2018-02-0214:51stuarthalloway@sleepyfox let me know if you have a repro, I would want to jump on that#2018-02-0214:51sleepyfoxIf I get time next week I'll set up something reproducible.#2018-02-0215:14sleepyfoxIs it possible to subscribe to the tx report queue from the client API (i.e. using Cloud)?#2018-02-0215:14sleepyfox(context: thinking of event-driven systems)#2018-02-0215:14robert-stuttafordnot according to the api docs#2018-02-0215:15sleepyfox^^that's what I suspected#2018-02-0215:15sleepyfoxand we can't use the peer API with Cloud?#2018-02-0215:16robert-stuttafordtx-report-queue exists because it basically exposes something Peer does for its own needs; receive pushes of live index from the transactor. clients don’t receive such, and so wouldn’t be able to expose it#2018-02-0215:16sleepyfoxthanks, thought so#2018-02-0215:16robert-stuttafordno Peer + Cloud availability yet, but i believe the possibility exists#2018-02-0215:43timgilbertSay, random question here. I'm using the newish datomic feature where you can add schema without explicitly specifying the :db.install/_attribute attribute, by transacting some data like this:
{:db/ident       :artist/id
 :db/valueType   :db.type/long
 :db/cardinality :db.cardinality/one
 :db/unique      :db.unique/identity}
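For contrast, the older explicit-install form mentioned above spells out the reverse reference by hand; a sketch of the same attribute:

```clojure
;; Older explicit form: the reverse ref :db.install/_attribute
;; installs the attribute into the :db.part/db partition.
{:db/ident              :artist/id
 :db/valueType          :db.type/long
 :db/cardinality        :db.cardinality/one
 :db/unique             :db.unique/identity
 :db.install/_attribute :db.part/db}
```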
That works fine, but then I'm also working on some code to do some introspective analysis on the db, so I'm trying to find every defined attribute. That code looks like this:
(defn- attr-list [db]
  (d/q '[:find ?ns ?attr ?doc
         :where
         [_ :db.install/attribute ?a]
         [?a :db/ident ?ident]
         [?a :db/doc ?doc]
         [(namespace ?ident) ?ns]
         [(datomic.api/attribute $ ?a) ?attr]
         (not [(re-find #"deprecated" ?ns)])
         (not [(re-find #"fressian" ?ns)])
         (not [(re-find #"^db" ?ns)])]
       db))
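One sketch of a workaround: since every installed attribute has a :db/valueType, matching on it finds attributes regardless of how they were installed (this is the attributes-of-attributes approach, not necessarily the only fix):

```clojure
;; Sketch: match on :db/valueType, which every attribute has,
;; so attributes installed without :db.install/_attribute are found too.
(defn- attr-list [db]
  (d/q '[:find ?ns ?attr ?doc
         :where
         [?a :db/valueType]
         [?a :db/ident ?ident]
         [?a :db/doc ?doc]
         [(namespace ?ident) ?ns]
         [(datomic.api/attribute $ ?a) ?attr]
         (not [(re-find #"deprecated" ?ns)])
         (not [(re-find #"fressian" ?ns)])
         (not [(re-find #"^db" ?ns)])]
       db))
```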
#2018-02-0215:44timgilbertWhat I've found is that I can find older attributes via [_ :db.install/attribute ?a], but when I add them to the schema in the new-school way, they don't seem to come back in the query. Looking here I see that datomic is supposed to automatically add them: https://docs.datomic.com/on-prem/schema.html#explicit-schema#2018-02-0215:45timgilbertIs there a better way I should be scanning for user-defined attributes that will handle both cases?#2018-02-0215:58stuarthalloway@timgilbert the presence of any of the other attributes-of-attributes would work#2018-02-0215:58stuarthallowaye.g. :db/cardinality or :db/valueType#2018-02-0216:04timgilbertAh, good idea. Thanks!#2018-02-0216:31dbernalIs there a particular order to how entities come back from a query? My question being how does Datomic decide in what order to return entities#2018-02-0216:57favilaThere is no guaranteed order. It's an accident of implementation (hashing, order of operations done in parallel, etc)#2018-02-0216:58favilabut I think it's stable, so if you repeat a query on the same db you will get them in the same order#2018-02-0219:52dbernalgotcha. thank you!#2018-02-0218:24alexkI’d appreciate some guidance for (or tales of) modelling time in Datomic. Should be the easiest thing, right? But there’s a problem because time comes in at least two flavours, real time (i.e. monotonically increasing) and business logic time (i.e. when did something have an effect). The former is absolute, the latter is fluid and can be retroactively changed. Question: Should the time-value of a datom be used for business logic? If yes, then time-travel becomes easier, but doesn’t rewriting history also become hard/impossible? How would you then store a fact that somebody retroactively applied a discount to a user’s invoice??#2018-02-0218:32favilaGenerally no. 
Think of datomic's time as "git time"#2018-02-0218:32favilait's for debugging, auditing, etc, not business logic#2018-02-0219:07alexkThanks @U09R86PA4#2018-02-0219:07favilain Git you wouldn't rewrite history to make a bug never get committed, you'd make a new commit (= transaction)#2018-02-0219:09favilahowever this technique may sometimes be useful: have a source-of-truth db which models business time explicitly; then occasionally you can construct a derived datomic db where business time = tx time#2018-02-0219:10favilaThis is useful if you have some time-series application that would benefit from the nicer time indexing and api of db-as-of or db-since#2018-02-0219:11favilabut every time you "change the past" you would have to regenerate the db, so it's an offline batch-type workflow only (e.g. for analysis)#2018-02-0220:58alexkHmm right, that’s a neat compromise#2018-02-0221:54joelsanchezI think this article is related#2018-02-0221:54joelsanchezhttps://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2018-02-0221:54joelsanchezevent time vs recording time#2018-02-0221:50dadairIn the Datomic pricing model for Pro, it says $5000/yr for a “system”. I assume “system” relates to number of transactors; where 1 system = 1 transactor?#2018-02-0222:01dominicmHaving a failover is okay too#2018-02-0222:04dadairSo one “system” is a primary transactor and one fail-over transactor + any number of peers/clients?#2018-02-0222:08timgilbertThat's my understanding, yes. It was changed last year to include unlimited peers, IIRC. 
I'm sure someone from Cognitect can confirm though#2018-02-0222:13favilaA "system" is easier to discern by storage#2018-02-0222:13favila"system" = peers and transactors used in production which share a storage#2018-02-0222:14favila(a transactor can only manage one storage)#2018-02-0222:31dadairawesome thank you!#2018-02-0221:50dadairSorry if there are docs on this, couldn’t see anything obvious in the docs.#2018-02-0311:03robert-stuttaford@marshall “The Schema Growth principle provides a means for entites” https://docs.datomic.com/on-prem/best-practices.html#2018-02-0311:05robert-stuttaford@marshall / @stuarthalloway you may want to drop the note about using squuid on the best practice doc, given that adaptive indexing makes it redundant and Client doesn’t expose it#2018-02-0315:26jocrau@alex438 The Datom itself does not carry time information but references the Transaction, which records the transaction time (:db/txInstant). The transaction time provides a reference point for the state of the world from the perspective of the database (used as “aggregate all datoms from the beginning of time up to the transaction time”). I keep that completely separate from other notions of time in my business domain. In my business domain I want to answer questions like “When was Alice born?” not “When did Datomic become aware of the fact that Alice was born on …”; a subtle but important distinction.#2018-02-0315:32jocrauI would assert the discount information simply as a new fact.#2018-02-0419:13didiercrunchIs there a way to do a "group by" in datomic? Let's say I have a music database with three columns in the schema: song-title, artist-name and release-date. 
Is it possible to get the latest song-title released by each artist-name in a single query?#2018-02-0419:18donmullen@didiercrunch if your schema uses references, then pull with map specifications would do what you want: https://docs.datomic.com/on-prem/pull.html#map-specifications#2018-02-0419:19didiercrunchthanks, let me check that!#2018-02-0419:20didiercrunchnevertheless, it is a bit strange there is no way to do grouping in a query... Maybe I need to change my mindset more.#2018-02-0420:53jeff.terrell@didiercrunch - You may be able to do this with the :with clause in the query API. I don't understand it well enough to know for sure though.
https://docs.datomic.com/on-prem/query.html#with
If that doesn't work, I would probably just pull all the data and do a manual clojure.core/group-by on the results. Although if you're not using Datomic on-prem, that might not work so well…#2018-02-0421:21didiercrunchthat was my first approach. But I feel it is not idiomatic. I need very fast queries too#2018-02-0501:47bkamphaus@didiercrunch — subquery example similar to what you’re asking about here: https://groups.google.com/d/msg/datomic/5849yVrza2M/31--4xcdxOMJ — useful for clients (one roundtrip), matters less for peer/on-premise use (just chaining queries or doing clojure seq operations is roughly the same as it’s cached locally. I.e., since you’ve realized the segments in memory it doesn’t mean an additional roundtrip).#2018-02-0502:01didiercrunchthat looks to be exactly what I wanted!#2018-02-0502:02didiercrunchI'll try it right now#2018-02-0505:25jocrau@didiercrunch You could also make use of how the aggregation function (max …) sorts vectors:
(d/q '{:find [?artist-name (max ?release-song)]
       :where [[?e :artist-name ?artist-name]
               [?e :release-date ?release-date]
               [?e :song-title ?song-title]
               [(vector ?release-date ?song-title) ?release-song]]}
     db)
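For completeness, the plain-Clojure fallback mentioned earlier (pull everything, then group-by) might be sketched like this; `rows` and `latest-song-per-artist` are hypothetical names, and each row is assumed to be an `[artist-name release-date song-title]` tuple:

```clojure
;; Sketch: given query results as [artist-name release-date song-title]
;; tuples, keep the latest release per artist with plain Clojure.
(defn latest-song-per-artist [rows]
  (->> rows
       (group-by first)                       ; group tuples by artist
       (map (fn [[artist tuples]]
              ;; sort each artist's tuples by release-date; take the last
              [artist (last (sort-by second tuples))]))
       (into {})))
```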
#2018-02-0508:07msolliI’m intrigued by Datomic Cloud, having never used Datomic before, but I’ve read that it doesn’t support excision. My use case is a job application tracking system operating in the EU, which means there’s lots of personal data that the system must be able to “forget” for regulatory reasons. Are there other ways to achieve this in Datomic Cloud?#2018-02-0509:15val_waeselynckI may implement something like this soon. The idea is to store sensitive fields in a secondary mutable store and store references to that in Datomic#2018-02-0510:47msolliInteresting. Are you thinking along the lines of just storing the id (int, uuid) of an entity in another database (like Postgres)?#2018-02-0513:01val_waeselynckEssentially, it's just storing the values in a secondary KV store and referencing them in Datomic via a UUID key. It's actually a tiny bit more complicated than that, because each value must be associated with a person Id so that it can be later deleted.#2018-02-0512:11mgrbyteDoes the documented feature of :db.fn/cas to assert the value only if the old value is nil (doesn't exist) apply when the :db/valueType of the attr is a ref? I'm hitting an issue where this doesn't seem to be the case...#2018-02-0512:13mgrbyte[[:db.fn/cas 17592186045451 :gene/biotype nil 67]]
ERROR java.lang.IllegalStateException: :db.error/cas-failed Compare failed: 67
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: :db.error/cas-failed Compare failed: 67
Am 100% certain that no value exists for :gene/biotype for eid 17592186045451.#2018-02-0611:01souenzzohttps://receptive.io/app/#/case/26858#2018-02-0513:26mgrbytenm, my code must have a subtle issue that I'm missing. Managed to prove to myself that the above works in a different context. Sorry for the noise.#2018-02-0602:36didiercrunchI have a database which has many reads and very few writes. Can I expect my proof of concept project built on top of datomic:mem:// to have similar performance to a production instance sitting on top of postgres?#2018-02-0611:26chrisblomno, it is in-memory so it has different performance characteristics#2018-02-0612:12julienvincentHi all,
I am trying to understand the current state of non-jvm language interop with datomic. In the datomic documentation it is stated that the REST API is deprecated and users are advised to use the client API instead.
As far as I can see, there is no spec on how to implement a Client API in another language, nor are there any existing implementations (other than the clojure one).
Am I correct in assuming that the client API is an ongoing development and the correct way of doing non-jvm interop is to still use the REST API?#2018-02-0612:28sleepyfoxIs it possible or even advisable to use something like juxt/tick with datomic, or should I just stick with clojure.instant?#2018-02-0613:35jocrauCan you elaborate a bit on how you would use juxt/tick and how that relates to Datomic? Datomic’s :db.type/instant maps to instances of java.util.Date. Tick is using java.time.Instantwhich needs to be coerced into java.util.Date. If you are interested in using tick.interval/Interval, you would have three options: 1) reference to an entity constructed from the interval, or 2) serialize it as string, or 3) serialize it as bytes (`:db.type/bytes`).#2018-02-0614:49mitchelkuijpersIs there a GDPR solution for datomic cloud? Something along the lines of excision#2018-02-0615:06alexkPossible related message at 3:07pm yesterday#2018-02-0617:14mitchelkuijpersYeah i've read that message, but we are currently on on-premise datomic and are considering cloud but this would be a blocker#2018-02-0615:18robert-stuttaford@stuarthalloway - it may be beneficial for Datomic Cloud if you talk on your intentions to support excision or similar; the new EU regulations, known as GDPR, are due to be fully complied by in mid May this year, with harsh punishment for non-compliance. it may prevent DCloud adoption for some folks who would otherwise jump in with both feet and eyes closed, if they can’t check this box for their exco 🙂#2018-02-0708:11laujensenTopological question. I have a server with datomic on it. It has a transactor and a mysql running as the only services. When connecting from a peer, I specificy only the mysql ip and port, but not the transactors. Is it even used then? The transactor I mean#2018-02-0708:47robert-stuttaford@laujensen yes. 
peers discover transactor details from storage; this is how high-availability works (transactors heartbeat into storage)#2018-02-0708:47laujensenbrilliant, thanks 🙂#2018-02-0709:18val_waeselynckHow do you go about giving your coworkers access to the Peer library without giving them your http://my.datomic.com credentials?#2018-02-0713:03val_waeselynckah nevermind, bin/maven-install was really what I was after.#2018-02-0802:51jaretI know several of our clients with large teams keep their datomic account on a shared e-mail address. If you find you need something like that let me know and I can update your technical contact information/issue a password reset.#2018-02-0813:09val_waeselynckThanks @U1QJACBUM, the problem I see with that approach is that each time a developer leaves the company I will probably have to change the credentials :s#2018-02-0714:01laujensendoes setting :no-history on an attribute erase all historic data?#2018-02-0714:07marshall@laujensen https://docs.datomic.com/on-prem/schema.html#altering-nohistory-attribute#2018-02-0716:40Hendrik Poernamais there a way to do something like clojure's ensure in cloud? I can use :db.fn/cas with the same value, except when value is nil#2018-02-0717:02favila:db.fn/cas works with nil as the old value#2018-02-0717:02favila(not with nil as the new value)#2018-02-0717:05favilaso you want to make sure something is nil when the tx starts and ends?#2018-02-0717:05favilayou could use a nonce attribute#2018-02-0717:06favilano-history attr, assert it on an entity with a value; assert a new value at end of tx, update with db.fn/cas. lets you do optimistic locking#2018-02-0717:06favilaclient just sets it to something random#2018-02-0720:15Hendrik Poernamainteresting idea, will test it out#2018-02-0722:03tomcI have a rule with multiple definitions and one output variable. It looks like:
[(references-somewhere ?scope ?referencee ?referencer)
 [?scope ?a ?referencee]
 [?a :db/isComponent false]
 [?a :db/valueType :db.type/ref]]
[(references-somewhere ?scope ?referencee ?referencer)
 [?referencer ?a ?referencee]
 [?a :db/isComponent false]
 [?a :db/valueType :db.type/ref]
 (descendant ?referencer ?scope)]
I can't figure out what syntax to use in that first definition to indicate that ?referencer should be bound to the value of ?scope. Can someone help me out?#2018-02-0813:03marshallYou can use Clojure functions in a rule. I’m not 100% clear on what you’re looking for, but you could, for example, include a clause:
[(= ?referencer ?scope)]#2018-02-0800:34hiredmanmaybe (references-somewhere ?scope ?referencee ?scope), I haven't done much with datomic rules#2018-02-0800:35hiredman(as in replace the head of the first rule with that)#2018-02-0801:15laujensenSo, I moved my datomic+mysql off our main server onto its own. Now when the peer connects, it gets a user cant be validated error, despite being the only peer connecting to this transactor. Whats going on ?#2018-02-0801:18laujensenIts the exact same config thats copied over to another server, and the only difference is that the mysql user now is allowed to login from another ip#2018-02-0801:36laujensenAnd to add to the confusion. If I start a repl on the main server (which still has datomic running) and I create a 2nd connection pointing to the other server. Than that works and I can run queries on it#2018-02-0801:37laujensenUsing the exact same uri that got the Unable to validate user error on app boot#2018-02-0802:47jaret@laujensen Are you able to reach the transactor IP from the peer that cannot connect? Do you have your alt-host set to an externally reachable address? In general, your HOST will be set to the internal address and the ALT-HOST should be set to the externally-reachable address. I am also assuming the new system is distributed (peer and transactor are separate machines). Otherwise my guess would be a security group or permissions issue that the new peer is missing.#2018-02-0802:52jaretHOST and alt-host are set in the transactor properties file.#2018-02-0806:53laujensen@jaret when the app starts, it makes a connection to datomic. If the URI points to the new machine, boot breaks due to User not validated. If it points to the Datomic on the same machine, everything works. 
If I then jack-in to a repl, make a new d/connect to the other machine, that works.#2018-02-0812:26laujensen@marshall @stuarthalloway - Why does the transactor reject the connection if I haven't first connected to the local version?#2018-02-0813:14jaret@laujensen Could you copy out the exact error you were receiving? I’d like to move this to the forums (https://forum.datomic.com/). Slack conversations aren’t archived and I’d like to make the solution to your problem searchable.#2018-02-0814:44ozHowdy!#2018-02-0814:45ozAt Room Key I'm setting up Datomic Cloud. We got it running in the same org and region. However we had to bridge the datomic vpc to our existing vpc. I didn't see any option to launch Datomic Cloud in an existing vpc.#2018-02-0814:46ozAny advice on how to launch Datomic Cloud in a vpc that is already existing?#2018-02-0814:50laujensen@jaret, I'll post it in the forum#2018-02-0814:50stuarthalloway@ozanzal not at present. It is much simpler to create a VPC that is correctly configured than to try to programmatically verify the suitability of an existing VPC#2018-02-0814:51ozOk, that's what I gathered from looking at the Cloud Formation.#2018-02-0814:52ozWe are ok with peering the VPC's.#2018-02-0816:28laujensenhttps://forum.datomic.com/t/unable-to-connect-to-remote-datomic-mysql/318#2018-02-0907:46isaacDoes the datomic team plan to support a reverse index (descending sort)?#2018-02-0907:57robert-stuttaford@isaac it’s on their feature voting system. 
log into http://my.datomic.com, top right, search, vote!#2018-02-0907:58isaacyeah, I saw it, this feature hasn’t any replies of datomic’s team#2018-02-0907:59robert-stuttafordthe Datomic team generally replies by shipping features 🙂#2018-02-0914:04julienvincentHi guys, just wondering if anyone has an answer to my previous question?#2018-02-0914:13marshall@julienlucvincent https://www.datomic.com/cloud-faq.html#_will_you_publish_the_client_protocol_so_i_can_write_my_own_datomic_client#2018-02-0914:14julienvincent@marshall Thanks, I missed that info. In the meantime while we wait for a spec is the REST api the correct way to go (even though deprecated)?#2018-02-0914:16chris_johnson@julienlucvincent You can also go to http://my.datomic.com -> log in -> Suggest features and upvote the existing suggestion for your desired non-JVM client (I see one for Elixir, one for Python, one for Javascript already) or add the one you want#2018-02-0914:17chris_johnson@marshall Thanks again for your help the other day - turns out our issues with the license upgrade were all unforced errors on our part, but your patience and willingness to help us get more information about what our stack was doing helped a lot#2018-02-0914:17chris_johnsonI’ve made some …improvements to the non-automated parts of that process for us that should prevent that particular footgun from going off again#2018-02-0914:18marshall@chris_johnson happy to help. glad you got it sorted#2018-02-0914:19marshall@julienlucvincent as @chris_johnson said, there are several requests in the feature portal
What language specifically are you wanting to use?
One consideration with the REST server is that it is On-Prem only, so if you want to target Datomic Cloud you’ll probably want to do something more like a thin wrapper around the Clojure client API#2018-02-0914:28julienvincent@marshall @chris_johnson For now I am more investigating what would be available to me if I moved my org to datomic. We have some existing systems in JS in addition to Clojure and so some sort of JS interop would likely be required.
I like the suggestion about a wrapper around a clojure client, that sounds preferable to the REST api.#2018-02-0914:31marshallare you using node.js or are you considering front-end db access?#2018-02-0914:32julienvincentUsing nodejs.#2018-02-0914:36marshallthen, yes, if you want to move toward Datomic prior to the release of the wire protocol details, I’d say something like a wrapper on the client would be what I would consider
especially if you already have Clojure in some systems and some Clojure experience#2018-02-0914:43alexkIs there any way to run peers with Datomic Cloud?#2018-02-0914:46marshall@alex438 https://docs.datomic.com/on-prem/moving-to-cloud.html#sec-3
Cloud can only be used from the Client API. What specific aspect(s) of Peers are you wanting? We’re considering options for supporting functionalities of Peer (i.e. code colocated with data) for Cloud#2018-02-1000:11steveb8n@marshall I’m very interested in this idea. I want to move to cloud but I have lots of code that assumes peer/cheap pull/entity/q calls that would need to be migrated/optimised. peer support would save me a lot of work. it would also be a very valuable feature for cloud because the “cache in peer” design is powerful and a real distinction between Datomic and most other databases#2018-02-1000:14steveb8nevery time we write code to optimise network round trips, it adds complexity. not worrying about network vastly simplifies the code, and allowing some lazy coding practices, which is great for fast prototyping and experimentation#2018-02-1018:37Hendrik Poernama+1 to this, I also find it to be much easier to architect around entity api.#2018-02-0914:48alexkBeing able to query without regard for round trips is a fantastic selling point for D in general, not to mention the queries are simpler to write because you don’t have to use pull#2018-02-0915:08jocrauI plan to ingest and persist many different datasets (eg. from CSV files or intermediate computational results). For each of the ingested columns I currently create a new attribute (with some meta-data like a business level data-type or provenance information). Is this approach feasible? What do I need to consider regarding an ever growing set of attributes, indexing, etc.?#2018-02-0915:10marshall@jocrau The limit of number of schema elements is 2^20 https://docs.datomic.com/on-prem/schema.html#schema-limits#2018-02-0915:11marshallare you likely to hit/approach that many?#2018-02-0915:20jocrauThat could happen if I store intermediate computation results. But there are ways to mitigate that (reusing existing attributes, recreating the database, etc.). 
What is the technical reason to limit schema elements to 2^20?#2018-02-0915:26jocrauThe way I use attributes is not the standard Datomic way which relies on the :db/ident of the attribute to “discover” data. It is usually a well-known, human-readable name like :person/first-name. My approach is to discover similar values by looking for meta-data on attributes and use generated attribute idents (like :attr/urn:uuid:81b99381-3a74-4aaa-8e76-1034184ca5dd).#2018-02-0915:41marshall@jocrau there are definitely users with dynamically created attributes with generated IDs, nothing wrong with that approach as long as you keep the 2^20 limit in mind#2018-02-0915:47jocrau@marshall Ok, thank you!#2018-02-0916:40hmaurerHi! Has the Datomic client API wire protocol been documented?#2018-02-0916:41hmaurerI think @marshall mentioned this would be the case with the release of datomic cloud#2018-02-0916:41marshall@hmaurer https://www.datomic.com/cloud-faq.html#_will_you_publish_the_client_protocol_so_i_can_write_my_own_datomic_client#2018-02-0916:41hmaurer@marshall ah, I see. thank you 🙂#2018-02-0916:42marshallnp#2018-02-0916:44hmaurer@marshall around how long will that process of gathering information and finalising the spec take?#2018-02-0918:10jaret@jocrau I thought your question and discussion with Marshall would make a good discussion topic for our forums. I took the liberty of summarizing the question and answer here https://forum.datomic.com/t/total-number-of-attributes/325#2018-02-0918:10jaretI am hoping to make these types of problems/questions searchable by posting there. If you think I am missing anything or would like to add an experience report to that thread it would be much appreciated#2018-02-0918:40jocrau@jaret That’s a good summary. I added more info to the topic in the forum.#2018-02-0922:22juliobarrosI’m having some problems getting datomic cloud client to play well with jetty in my app. 
It looks like datomic includes the cognitect http client which pulls in an old version of jetty. I added the exclusions as mentioned in the docs but then I get class not found, jetty.client.HttpClient, errors unless I specifically add back in jetty dependencies. But then with 9.4.8 I get new class not found errors HttpParser$ProxyHandler unless I use a slightly older version of jetty (9.2.9)… then I get IllegalStateExceptions: SSL doesn't have a valid keystore … does anyone know what version of jetty is compatible and/or what I’m doing wrong? Thanks in advance.#2018-02-1007:22robert-stuttafordthis is how i fixed it @juliobarros https://github.com/robert-stuttaford/stuttaford.me/blob/master/project.clj#L22#2018-02-1014:01jaret@juliobarros If Robert’s exclusion doesn’t work I can look at all of your deps. If you want to open a case and send me your project.clj or the output of lein deps :tree just log me a ticket https://support.cognitect.com/hc/en-us/requests/new#2018-02-1018:45juliobarrosThank you @robert-stuttaford That did it for me. I was trying to add the jetty packages individually.#2018-02-1101:45macrobartfastI ran into dependency issues with jetty and datomic and finally found https://stackoverflow.com/questions/43291069/lein-ring-server-headless-fails-when-including-datomic-dependency...#2018-02-1101:46macrobartfastthat SO thread suggests http-kit but someone later says the issue returned with datomic client pro...#2018-02-1101:46macrobartfastany suggestions for a strategy or workaround?#2018-02-1101:46macrobartfastI'm not planning on putting this in AWS, btw, in case that's a solution (though in those docs I saw a mention of a related issue).#2018-02-1101:47macrobartfastjetty and compojure are basic staples for me but I'm open to new things if that solves this problem. 
Trying to use datomic but this is definitely a blocker so far.#2018-02-1109:18robert-stuttaford@macrobartfast literally the three messages above your own contain the solution i found, and that worked for juliobarros 🙂#2018-02-1110:30laujensenEntity docs seem broken: https://docs.datomic.com/javadoc/datomic/Entity.html#2018-02-1115:32cap10morganIs it possible to use datomic-pro w/ tools.deps.alpha? I'm not seeing a way to put maven repo credentials in the deps.edn file.#2018-02-1116:26alexmillerNot at the moment#2018-02-1116:26alexmillerThere is a ticket with some work on this#2018-02-1119:55macrobartfast@robert-stuttaford holy smokes... re: the messages before mine: face palm... I should probably read a bit of the channel for the day before posting a question! Thanks... I look through those.#2018-02-1121:01Desmondhi, I have an ident that i want to make unique. So far I have been getting away with always adding to my schema without breaking anything so that I can run new migrations against the production db. I'm trying to avoid any fancy tooling as long as possible. In this case I need to change an existing ident. What is the simplest way to do that? Do I need to retract the non-unique ident and then add the unique one?#2018-02-1205:04DesmondI thought I could add to the ident like a normal transaction:
(d/transact connection [{:db/id [:db/ident :question/source-identifier]
:db/unique :db.unique/identity}])
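jaret’s later advice in this thread (ensure an AVET index first, wait for it via sync-schema, then add the uniqueness constraint) can be sketched roughly like this, assuming the on-prem peer API and an existing connection `conn`; this is an untested sketch, not jaret’s exact code:

```clojure
;; Sketch of the alteration flow discussed below, assuming the on-prem
;; peer API and an existing connection `conn`. Adding :db/unique
;; requires an AVET index on the attribute first.
(require '[datomic.api :as d])

;; 1. Ask for an AVET index on the attribute.
@(d/transact conn [{:db/id    :question/source-identifier
                    :db/index true}])

;; 2. AVET indexing is asynchronous; block until it is available.
@(d/sync-schema conn (d/basis-t (d/db conn)))

;; 3. Now the uniqueness constraint can be added.
@(d/transact conn [{:db/id     :question/source-identifier
                    :db/unique :db.unique/identity}])
```

This only succeeds if every current value of the attribute is already unique, as the docs quoted below explain.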
#2018-02-1205:04DesmondThat didn't throw an error but it also doesn't seem to have worked#2018-02-1205:23jaret@captaingrover for your schema question I recommend reading through Stu’s blog on Schema growth. http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html#2018-02-1205:24jaretSpecifically though, if you want to change an attribute to unique I recommend:#2018-02-1205:26jaret1. Rename the attribute (e.g. :user/name-deprecated)#2018-02-1205:27jaret2. Make a new attribute with unique (:user/username)#2018-02-1205:27jaret3. Migrate old values from the old attribute to the new attribute.#2018-02-1205:28jaretIt should be noted that if you have a lot of data to merge you will want to appropriately batch the merge transactions.#2018-02-1205:28jaretThis is also not an approach that is a solution for bad schema design and it should not be relied upon to correct what are in reality schema design problems.#2018-02-1205:28jaretd/history will still point to the previous entry and :db/ident is not t-aware.#2018-02-1205:31jaretAnd when you transact over you’ll need to de-dupe#2018-02-1205:33Desmond@jaret thanks for the help! I was trying to follow the growth not breakage rules but learning datomic at the same time makes it harder. I was hoping to get off easy this time and not need a data migration. It sounds like that's not the case.#2018-02-1205:34DesmondThe truth is I only wanted to make this ident unique for the convenience of being able to do a ref lookup.
From what you're saying it sounds like I might be better off just living with the extra query.#2018-02-1205:40jaret@Desmond if all current values of the attribute are unique then you might be able to get away with altering the schema see https://docs.datomic.com/cloud/schema/schema-change.html#sec-5#2018-02-1205:40jaret@captaingrover#2018-02-1205:44jaretMake sure you back up before altering the schema, but altering unique is supported.#2018-02-1205:46jaretIf you already sent the schema alteration you posted before… you should call sync-schema#2018-02-1205:46jarethttps://docs.datomic.com/on-prem/clojure/index.html#datomic.api/sync-schema#2018-02-1205:46jaretin order to add :db/unique, you must first have an AVET index including that attribute.#2018-02-1205:46jaretAll alterations happen synchronously, except for adding an AVET index.#2018-02-1205:47jaretIf you want to know when the AVET index is available, call sync-schema#2018-02-1205:58DesmondYeah i'm all backed up on s3 and running my experiments against a restored copy in a staging environment before running them against prod. The transaction in the docs ran and all the values should be unique since they are uuids but i'm still seeing the non unique error when i try to ref lookup. I ran that a while ago though so i imagine it would be done. For sync-schema what should t be? I haven't worked with the time-travel features at all yet.#2018-02-1206:06jaretAh.. you’ll want to make sure your attribute has :db/index set to true then call sync-schema on the current T.#2018-02-1206:06jaret>In order to add a unique constraint to an attribute, Datomic must already be maintaining an AVET index on the attribute, or the attribute must have never had any values asserted. Furthermore, if there are values present for that attribute, they must be unique in the set of current assertions.
If either of these constraints are not met, the alteration will not be accepted and the transaction will fail.#2018-02-1206:07jarethttps://docs.datomic.com/on-prem/schema.html#altering-avet-index#2018-02-1206:07jaretJust realized some of the API links are broken in that doc page#2018-02-1206:08jaretBut it’s linking off to https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/sync-schema#2018-02-1206:08jaretI’ll have to fix the links tomorrow (later today :))#2018-02-1206:46Desmond@jaret yes! that worked!#2018-02-1206:46Desmondthank you!#2018-02-1216:00alexkTo atomically increment the value in a datom, must I implement a custom add/inc function and include it in the schema?#2018-02-1216:01matthavener@alex438 you can do a :db/cas#2018-02-1216:02matthavener[:db/cas 123456 :some/attr old-value (inc old-value)]#2018-02-1216:03matthavenercustom function is arguably better to avoid retrying after collisions#2018-02-1216:03alexkhow would you address the race condition where multiple writers are calling that at the same time?#2018-02-1216:03alexkah ok#2018-02-1216:03alexkI understand - it’s not a perfect solution but it’s a way to guarantee data isn’t overwritten#2018-02-1216:18matthaveneryeah just depends on your requirements#2018-02-1217:02juliobarrosWhat’s the best practice for unit testing with Datomic cloud? Is it possible to create an in memory db? I’d like to be able to delete a db on demand to reset it but that may not be optimal/feasible with an on disk db. 
I didn’t find anything in the docs.#2018-02-1220:38gerstreeDid anyone manage to run transactors / peers on ecs FARGATE?#2018-02-1220:39gerstreeWe have been running a transactor on ecs backed by an ec2 autoscaling cluster for over a year, but on FARGATE no luck#2018-02-1222:41chris_johnsonI’m successfully running a Vase API service on FARGATE, and that uses the peer library#2018-02-1222:41chris_johnsonno intel on transactors, though - we still run those on r4.large instances in an autoscaling group#2018-02-1301:45csmI’m running into a transactor failure after trying to add indexes to some entities; I’m getting a “no such file or directory” error creating a tempfile#2018-02-1301:47csmwhere is it trying to create a temp file, and why is it failing? This is a fairly vanilla Ubuntu system the transactor is running on#2018-02-1301:48csmok, found this — https://docs.datomic.com/on-prem/deployment.html#misconfigured-data-dir — I’m reasonably sure the datadir config is correct, but that’s what I’ll check#2018-02-1301:58csmhm, I set data-dir=/data in my transactor properties, but it appears to use data, ignoring the absolute path.#2018-02-1307:56kenji_signifierHi, I worked on adding Datomic Cloud support to Onyx datomic plugin for the last couple of weekends and finally I managed to pass tests both on peer and client API. https://github.com/onyx-platform/onyx-datomic/pull/29 I was wondering if there is a good solution to test the code against datomic cloud in CI environments. Two concerns 1) it would be very nice if there is a free option for OSS products. I’m more concerned with account/license management than actual amount out-of-my-pocket. 🙂 2) Running a socks proxy as a helper process in CI.
Any suggestions welcome.#2018-02-1309:38gerstree@chris_johnson thanks, I will keep pushing on the transactor part then, still no clue what is keeping it from connecting to dynamodb#2018-02-1312:58stuarthallowayhi @kenji_signifier — if CI is running in AWS you don’t need or want the socks proxy, just use ordinary AWS VPC mechanisms to give CI access to Datomic Cloud, see e.g. https://docs.datomic.com/cloud/operation/client-applications.html#2018-02-1321:03kenji_signifierThx @U072WS7PE, they use circleCI, and I’m gonna look into if it is doable to VPC peer. Otherwise, I’ll consider running a different CI in AWS.#2018-02-1314:57rauhPSA: If you run Cassandra with datomic, don't upgrade your java to the latest version (1.8.0_161) which will break Cassandra. (it won't start anymore)#2018-02-1314:58mpenetjust curious what's the error?#2018-02-1314:59rauh@mpenet https://stackoverflow.com/questions/48597284/cassandra-does-not-start-cause-of-an-abstractmethoderror-with-jdk-to-8u161?rq=1#2018-02-1314:59rauhHad to manually download an old JDK and do update-alternatives on ubuntu. That fixed it.#2018-02-1315:00gerstree@chris_johnson thanks again for responding. I managed to fix it in the meanwhile. I can now confirm that I have the transactor running in FARGATE as well.#2018-02-1315:01chris_johnson@gerstree That’s great! I may come ask you for help at some point. 😉#2018-02-1320:59gerstreeI did a quick write up with the most important pieces. Hope this helps others. The blog will come online shortly and I will send you the link#2018-02-1321:58gerstreehttps://www.avisi.nl/blog/2018/02/13/how-to-run-the-datomic-transactor-on-amazon-ecs-fargate/#2018-02-1315:04gerstreeI will write up a blog post about it and I'm happy to share our cloudformation template#2018-02-1315:23jjfinehey, is there a way to view peer metrics on one that's running, but doesn't have a metrics callback configured?
will setting datomic.metricsCallback on a running peer do anything?#2018-02-1316:01laujensenIs Datomic bothered if I create another table (or several) in the same database on mysql ?#2018-02-1316:20favilaNo. There's actually no way to even make it use a table name other than datomic_kvs#2018-02-1316:20favilayou can restrict the user permissions if you are paranoid#2018-02-1316:21favilathe transactor and gc-deleted-dbs only needs SELECT INSERT UPDATE DELETE; Peers only need SELECT#2018-02-1317:33laujensenk, thanks!#2018-02-1316:31maleghastGeneral n00b question - when transacting loads of datoms into Datomic in line with a schema, are people building their maps by getting the db/ident keys from the schema, or are you holding your key-names in config separately (for convenience)?#2018-02-1316:33favilaI'm not sure I understand? Attributes are usually just literals in your transacting code; there is no need to get ident keys or hold ident keys somewhere.#2018-02-1316:33favilaAre you doing something unusual?#2018-02-1316:36maleghastSo, I have a schema that looks like this:
[{:db/ident :crop/id
:db/valueType :db.type/uuid
:db/cardinality :db.cardinality/one
:db/doc "The unique identifier for a Crop"
:db/unique :db.unique/value}
{:db/ident :crop/common-name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "The Common Name for a Crop"}
{:db/ident :crop/itis-name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "The ITIS Name for a Crop"}
{:db/ident :crop/itis-tsn
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "The ITIS TSN (Taxonomic Serial Number) for a Crop"}]
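On the question that follows (whether to keep the ident keys in separate config or derive them from the schema): the installed schema can itself be queried for its idents. A hedged sketch, assuming the peer API and a db value in `db`; the "crop" namespace filter is just this example's convention:

```clojure
;; Enumerate the :crop/* attribute idents straight from the installed
;; schema, so code and database can't drift apart (peer API assumed).
(d/q '[:find [?ident ...]
       :where
       [?e :db/ident ?ident]
       [?e :db/valueType]           ; only actual attributes, not enums
       [(namespace ?ident) ?ns]
       [(= ?ns "crop")]]
     db)
;; e.g. [:crop/id :crop/common-name :crop/itis-name :crop/itis-tsn]
```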
#2018-02-1316:38maleghastI am expecting to have a vector of maps that look like this:
{:crop/id #uuid "bd7c5850-051f-5305-88a7-076e53822448"
:crop/common-name "Cocoa"
...
}
#2018-02-1316:39maleghastSo when I take in the data I need to build a map using the keys: :crop/id :crop/common-name :crop/itis-name :crop/itis-tsn#2018-02-1316:39maleghastRight?#2018-02-1316:39favilayes#2018-02-1316:39maleghastSo, if I want a function to build that map, I need the keys from somewhere.#2018-02-1316:39favilayes, in your function that builds the map#2018-02-1316:40favilae.g., lets say your input data is a csv file#2018-02-1316:40maleghastI was wondering if people generally hold them in config elsewhere as a simple list / vector, or if they munge the schema as an input#2018-02-1316:40favilamunge the schema?#2018-02-1316:41favilahere's how I would write a function to make a transactable map of data#2018-02-1316:46favilae.g. if the input is a csv#2018-02-1316:46favilathis function takes a csv row and turns it into a transactable map#2018-02-1316:46favila(defn crop-csv->map [[^String id common-name itis-name tsn]]
{:crop/id (java.util.UUID/fromString id)
:crop/common-name common-name
:crop/itis-name itis-name
:crop/itis-tsn tsn})#2018-02-1316:46favilathen transacting is just collecting these maps:#2018-02-1316:46favila(with-open [csv (clojure.java.io/reader the-csv-file)]
(->> (clojure.data.csv/read-csv csv)
(map crop-csv->map)
(partition-all 100)
(run! #(deref (datomic.api/transact-async conn %)))))#2018-02-1316:46favila(very simple example)#2018-02-1316:47favilathere's no looking up of idents anywhere; they're literals in your code#2018-02-1316:47favilathere's a contract between the code and the database it is transacting against, namely that both have the same idents with compatible schemas#2018-02-1316:49favilathere are some datomic libs that will allow your code to essentially make "schema preconditions" to ensure that code and db have compatible schema#2018-02-1316:49favilae.g. https://github.com/rkneufeld/conformity#2018-02-1316:50favilabut in 5 years of production datomic I haven't found a need for anything like this#2018-02-1316:50maleghastI see…
OK, but that means a separate function for each data source.
I am trying to write a generic “put datoms in Datomic” function that gets the db/idents either from a separate config or the schema on use.#2018-02-1316:50favilathe db is the source of truth#2018-02-1316:50maleghastWhich is what I assumed anyone would do.#2018-02-1316:50favilathis function is so simple; what is the value in abstracting it?#2018-02-1316:51maleghastI just was curious as to whether or not people use a separate config value for convenience or just write a function to pull them out of the schema#2018-02-1316:51favilaI'm still not sure what you are doing that would make pulling schema a useful exercise#2018-02-1316:51favilaare the transformations expressed as data?#2018-02-1316:51maleghastPersonally the value is:
1. Why have 25 functions when I can have 3
2. The thought exercise of it.#2018-02-1316:52maleghastThe Schema is to hand and is a “place” where the db/ident attributes are enumerated.#2018-02-1316:52maleghastSo pulling them from the schema is reliable.#2018-02-1316:52favilauh, sure, but how does the code know the transformation to make?#2018-02-1316:53maleghastin fact the convenience of having them already extracted into config runs the risk of the schema changing and the config not being updated.#2018-02-1316:53favilaunless the entire transform is data driven, in the end you have to write some code that knows what an attribute means#2018-02-1316:53favila(not just what its schema is--what it means)#2018-02-1316:54favilaif you are contemplating such an approach I would definitely put everything you need in the db#2018-02-1316:54maleghastI think that I see what you mean, thanks#2018-02-1316:55favilato make this useful you will probably need to annotate your schema entities with more attributes that are understood by your data-transformation-building layer#2018-02-1411:57joelsanchezI have an application that's using Datomic, and I want to add traffic analytics. Is Datomic a good fit for writing traffic events, or should I use, say, Kafka instead?#2018-02-1414:08marshall@joelsanchez I would use something more like Kafka for raw events; you can use Datomic for aggregations/reports/etc, but is likely not a great fit for raw event stream data#2018-02-1414:09joelsanchez@marshall thought so, thanks!#2018-02-1416:22octahedrionhi, anyone know what might be causing this error: Embedded stack arn:aws:cloudformation:us-west-...
was not successfully created: The following resource(s) failed to
create: [StorageGetSystemNameRole].
?#2018-02-1416:24marshall@octo221 Can you look in the failed Storage Stack for the error that caused the failure#2018-02-1416:24marshall(not the master stack)#2018-02-1416:27octahedrionok thanks - will check...#2018-02-1417:01matanHey guys,
I'm thinking about basing off Datomic when bootstrapping a new project soon. Would you recommend actually developing my app against the cloud instance? or would it be a bad idea to do that and I should rather have a local install when developing the core of the new project, which is intended to run against Datomic cloud?
Would be extra nice not fiddling a local install (especially if the online service may not be at version parity with any local install!)#2018-02-1418:16stuarthalloway@matan I love developing against Cloud only, and being out of the “run servers” business#2018-02-1418:31matan@stuarthalloway how refreshing#2018-02-1500:36laujensenWhen I first connect my Peer to the Transactor, about 2GB of strings are stored in memory. The database has a number of items floating around that are no longer referenced anywhere. If I db.fn/retractEntity this items, will that reduce the amount of data cached?#2018-02-1523:17kennyIs there a reason a Long is not coerced to a Double when transacting to Datomic? Seems like that would make working with other languages a bit easier (i.e. JS).#2018-02-1600:33souenzzo@kenny there is a "feature request" here (please vote!)
https://receptive.io/app/#/case/17714#2018-02-1615:12madstapQuick question, would changing an attribute's schema from unique-value to unique-identity be breaking or have any unintended consequences in any way?#2018-02-1615:18jaret@madstap https://docs.datomic.com/on-prem/schema.html#altering-avet-index#2018-02-1615:20jaretIf you had :db/index true on the same attribute and you kept your values unique it shouldn’t have much change#2018-02-1615:20jarethowever if you did not have db/index set you’re going to keep an AVET index after the change#2018-02-1615:20jaretand there are some other notes in that section of the docs if your values aren’t truly unique#2018-02-1615:22madstapRight, so since it was :db/unique :db.unique/value, that means that there was an AVET index and the change won't break anything. Thanks!#2018-02-1616:32mgrbyte@madstap unless your app code is expecting exceptions to be thrown if a transaction tries to violate a uniqueness constraint, whereas the with :db.unique/identity: upsert - I believe.#2018-02-1616:35madstapOh, right. I actually think it uses an exception being thrown to check for duplicate use emails. So it does break my code I guess...#2018-02-1620:56uwowould y’all say nil-punning with boolean attributes is usually a poor modeling choice for flags, since it forces queries to search with (or [?e :attr false] [(missing? $ ?e :attr)])? The alternative being to always ensure that entities that can have that attribute do have it.
(I personally prefer modeling flags with enums, but assuming the choice has already been made to use a boolean flag.)#2018-02-1621:08uwolol a colleague just pointed out that I could use (not [?e :flag true]), so it doesn’t really add any additional work to the query, I suppose#2018-02-1621:08bkamphaus@uwo my opinion: I like to infer things from the presence of attributes. You could also use not, but I’m OK writing the occasional query like the or .. missing example you provide, but say I have entities that have possible statuses processed and not-processed — I would model something like :status/to-process and query for that if I expected to mostly write queries that look for things to process. Then retract :status/to-process e.g. when the entity was processed.
If there are multiple stages though (not binary), I enumerate all the possibilities and basically state-machine it out. In practice for me thus far the perf aspect of realizing an additional set for negation or whatnot has only been an issue with relations between entities (where e.g. relation b/t entities is M * N instead of linear), not with queries just doing the single attr.#2018-02-1621:16uwoI feel this way too. In this case it’s just a simple boolean flag, but I like to create a db.type/ref attribute and use enums as the value.
We’re working with a flag that has already been created as a boolean, and we don’t want to rewrite code at the moment. So we were talking through potential tradeoffs of nil-punning, so to speak.#2018-02-1621:32uwothanks @U06GLTD17#2018-02-1621:11bmaddyI believe (not [?e :flag true]) requires the :flag attribute to exist (haven't tried it though).#2018-02-1621:15bmaddyNevermind, that does seem to work.#2018-02-1717:19uwoWe’re doing change data capture from our legacy database, and we found ourselves making an assumption that was silly in retrospect. We would construct a tree of txdata to insert, where the root was uniquely identified and thus would upsert, but where the branches were component references. By asserting this as is, we were orphaning the component references. Clearly, we were in a document mindset. Whoops!
In order to remedy this, we’re going to try the following:
1. Determine novelty by diffing the txdata prepared for assertion with the corresponding entity in the db and, recursively, its component references.
2. assoc in db/ids to the relevant component refs to ensure upserts
I know cardinality many component references will require additional work, but I wanted to run this past y’all to make sure I’m not making a poor design decision or missing something obvious.#2018-02-1722:40steveb8n@uwo I have a “public” id (a uuid) in all entities, including component entities for this a many other benefits. Also in data layer tests I have a fn which checks the ids persisted against the before and after db and it can report on which ids were inserts. this technique allows me to know exactly when I am (accidentally) adding new entities and it reports on component entities just as well as top level entities#2018-02-1816:37uwoThanks @U0510KXTU! In this case the data from our legacy system doesn’t map directly to the new data model. Most of the data that becomes component refs doesn’t have a primary key in the old sql database, so the only thing that identifies it is a unique attribute on the parent entity.
When the data from the change data capture is prepared for assertion, we effectively want to merge it with the same structure that we can d/pull from datomic.#2018-02-1816:38uwoWhile adding a uuid to all our component refs may have other benefits, it doesn’t offer any additional leverage here unless we were to also add it as a primary key to the legacy database#2018-02-1823:26steveb8nHmm, is there no way to compose the parent PK and some child attribute to make a unique string? That upsert behaviour is too good to miss out on#2018-02-1723:28roklenarcicI'm not sure if datomic has an equivalent of select for update. To give an SQL example, let's say you have Accounts table. In one transaction an account wires $100 to 3 other accounts ($30 $20 $50). In SQL what you can do is select for update all 4 rows, calculate the "after-transaction" balances and update all of them. What would the equivalent operation in Datomic look like?#2018-02-1723:35steveb8n@roklenarcic if you want to do the select and write with no other txns in the middle then a transactor fn is the way to go. https://docs.datomic.com/on-prem/database-functions.html I’m not sure if they are available in Cloud (yet). you can also look into :db.fn/cas#2018-02-1723:51roklenarcicthanks#2018-02-1802:27cap10morganshould enum refs be isComponent or not?#2018-02-1802:45cap10morganI answered my own question at a REPL#2018-02-1802:45cap10morganthey should not be#2018-02-1802:45cap10morganShould this be called out in the docs? 
I just spent many frustrating hours trying to figure out why my data was disappearing b/c of this confusion.#2018-02-1804:52favilaRule of thumb for isComponent: an entity referenced by an isComponent attr should only be referenced once in the whole db#2018-02-1804:53favilaIf your data model requires multiple refs it is very likely the attr should not be isComponent#2018-02-1804:54favila(Referenced once means (d/datoms db :vaet the-entity) is count = 1)#2018-02-1811:11matanHi, is this also a good place to ask about Datomic cloud?#2018-02-1811:12matanI don’t really get the pricing... too much fine print on the official Amazon pricing page https://aws.amazon.com/marketplace/pp/prodview-otb76awcrb7aa#2018-02-1811:14matanAnyone care to comment about storage and networking costs which are incurred by their usage?
Clearly a bit hard to guess how much my data would consume in Datomic storage... #2018-02-1811:14matanIn terms of the Datomic overhead and such#2018-02-1814:48stuarthallowayhi @matan — see https://forum.datomic.com/t/datomic-cloud-pricing-fully-loaded-cost/306/4#2018-02-1816:15matan@stuarthalloway thanks for the link! :-) now it only remains how much “domination” is there as the data grows and the number of instances remains constant ;-)#2018-02-1816:16matanIs there any orthogonal community channel dedicated specifically to Datomic cloud?#2018-02-1906:36steveb8nIs “filter” likely to be supported by the Cloud/Client API in future? It’s not there now but, because as-of is already supported, it seems like using custom filters might be possible/in the works. This is really valuable for building a multi-tenant web service so I suspect I’m not the only one interested in this.#2018-02-1916:27sleepyfoxQuestion: I've looked through the doco but I can't find anything clear on this. I have two entities, let's call them order and order-item. I want to create an order, which will have two order items. I will transact both order items, then transact the order, which will include two references to the two order-items. I have created a :db.type/ref entry with a cardinality of many in order to hold these references to order-items. How do I actually get the refs from transacting the order-items so as to 'insert' them into the order transaction?
If so, how?#2018-02-1916:39sleepyfoxI assume that there must be some sort of equivalent of a foreign key reference, but the doco is rather light on examples#2018-02-1916:40stuarthalloway@sleepyfox have you seen the tutorials repo? This stories/comments example is fairly similar to orders/items https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/component_attributes.clj#2018-02-1916:41marshallAlso in thr docs here https://docs.datomic.com/on-prem/transactions.html#adding-entity-references#2018-02-1916:41sleepyfoxThanks, I'll take a look.#2018-02-1916:42donmullen@sleepyfox - also https://docs.datomic.com/on-prem/transactions.html#nested-maps-in-transactions#2018-02-1917:19rapskalianHey all, quick question: part of my client's requirement is to "reset their account information", meaning clear all fields. If I remember correctly, I wasn't able to transact an empty map as an entity into datomic...what would be the recommended way to accomplish something like this?#2018-02-1917:21joelsanchezretract the datoms: [:db/retract entity-id ident val-to-retract]#2018-02-1917:21rapskalianI was considering potentially transacting something like { :user/reset (Date.) } to capture that event as data as their new profile#2018-02-1917:22rapskalian@joelsanchez thaaat makes more sense. Thanks 🙂#2018-02-1917:23rapskalianStill getting used to the datomic information model#2018-02-1917:23joelsanchezfor reference: https://docs.datomic.com/on-prem/transactions.html#retracting-data#2018-02-2005:11Hendrik Poernamahello, anyone knows how to set peer property such as datomic.objectCacheMax with leiningen? Specifically when running a development build using lein run. Leiningen's :jvm-opts doesn't seem to work.#2018-02-2005:33robert-stuttaford@poernahi that’s precisely what we’re doing#2018-02-2005:33robert-stuttaford:jvm-opts ["-server"
"-Xms3584m"
"-Xmx3584m"
"-Ddatomic.objectCacheMax=1792m"
"-Ddatomic.memoryIndexMax=448m"
"-Ddatomic.peerConnectionTTLMsec=15000"
"-Ddatomic.txTimeoutMsec=300000" …]
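A quick way to confirm that such -D flags actually reached the running peer JVM is to read the corresponding system properties back at a REPL; a minimal sketch (plain JVM property access, nothing Datomic-specific):

```clojure
;; -D flags become system properties, so the peer's effective settings
;; can be inspected at runtime:
(System/getProperty "datomic.objectCacheMax")   ; "1792m" with the opts above
(System/getProperty "datomic.memoryIndexMax")   ; "448m"
```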
#2018-02-2005:34Hendrik Poernamahmm ok, maybe I formatted it incorrectly#2018-02-2005:34Hendrik Poernamathanks!#2018-02-2006:02Hendrik Poernamait was working after all. Turns out I was getting memory errors due to a missing datomic.memoryIndexMax config. According to the docs https://docs.datomic.com/on-prem/system-properties.html, this should be a transactor property. But it seems to affect the peer as well.#2018-02-2006:03Hendrik PoernamaAs for context, I was trying to configure a development environment using as little memory as possible#2018-02-2006:24DesmondHi everyone, I would like to do an equality comparison in my query. I got it working when both of the idents I want to compare are present. The issue is that one of the idents is new in the version I'm about to release. If the entities don't have that ident I just want to return false. I'm struggling to come up with the right query to do this. Here's the interesting part of what I have:
[?c :comment/question ?q]
(or-join [?c ?q]
(not [?c :comment/creator])
(and
[?q :question/creator ?question-creator]
[?c :comment/creator ?comment-creator]
[(= ?question-creator ?comment-creator) ?from-asker]))
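A possible repair along the lines favila suggests below (bind ?from-asker in both branches and include it in the or-join vars, so there is no dangling binding) — an untested sketch:

```clojure
;; ?from-asker is bound in every branch and exported from the or-join:
[?c :comment/question ?q]
(or-join [?c ?q ?from-asker]
  (and (not [?c :comment/creator])
       [(ground false) ?from-asker])
  (and [?q :question/creator ?question-creator]
       [?c :comment/creator ?comment-creator]
       [(= ?question-creator ?comment-creator) ?from-asker]))
```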
#2018-02-2006:24Desmondand this gives me an ArrayIndexOutOfBoundsException#2018-02-2006:47favila?from-asker doesn’t seem meaningful?#2018-02-2006:48favilaYou don’t need an explicit equality check: just unify to the same binding#2018-02-2006:50favila[?c :c/creator ?creator][?q :q/creator ?creator]#2018-02-2007:13Desmond@favila I need to return ?from-asker as true/false rather than restrict the set that gets returned#2018-02-2007:14DesmondIf I unify to the same binding then I won't get the false ones back#2018-02-2014:19favilaThen your “not” branch needs to bind it and it needs to be included in the or-join bindings. Your error is because of this extra dangling binding#2018-02-2014:21favilaThis might be clearer with an actual named rule with two impls#2018-02-2020:38Desmondyes, that makes sense. thank you!#2018-02-2008:03biscuitpantscould someone help me? i'm not sure if this is a bug, or something i'm doing incorrectly#2018-02-2008:03biscuitpantsclj
(d/q '[:find ?e .
:in $ ?uuid
:where [$ ?e _ ?uuid]]
(db/db) "5a269adc-72c4-483d-abc4-b9b70707db81")
=> []
(d/entity (db/db)
[:resource/uuid "5a269adc-72c4-483d-abc4-b9b70707db81"])
=> #{{:db/id ...}}
#2018-02-2012:38souenzzotry
[?a :db/unique :db.unique/identity]
[?e ?a ?uuid]
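Spelled out as a complete query, the suggestion above looks roughly like this (a sketch: it first finds attributes marked unique-identity, then matches the value against them):

```clojure
(d/q '[:find ?e .
       :in $ ?uuid
       :where
       [?a :db/unique :db.unique/identity]
       [?e ?a ?uuid]]
     (db/db)
     "5a269adc-72c4-483d-abc4-b9b70707db81")
```

Note favila's question further down: if :resource/uuid is actually :db.type/uuid rather than string, the string argument will never match, and a real #uuid literal must be passed instead.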
#2018-02-2014:12marshallIf you change the query to:
[$ ?e :resource/uuid ?uuid]
do you get a result?#2018-02-2008:03biscuitpantsi assume the first query to return the same entity, as using the entity api#2018-02-2008:03biscuitpantsare we not able to elide attributes like this?#2018-02-2008:04biscuitpants:resource/uuid is indexed, and unique#2018-02-2011:50donmullen@captaingrover - likely get-else would help get you there - so if :question/creator is the new ident and sometimes doesn’t have value - put [(get-else $ ?q :question/creator "NONE") ?question-creator] in and you’ll get values set - in this case “NONE” - you could set that to be whatever makes sense.#2018-02-2020:39Desmondthis is a much easier way to go. thank you for the tip!#2018-02-2014:22favilaAlso is this actually type string or uuid?#2018-02-2015:44souenzzoI'm trying to get ALL entities from a ref-to-many using pull with :limit nil#2018-02-2015:46souenzzo[{(:user/address :limit nil) [:address/code]}]. But it's returning just 1000 (there is 10000 on my scenario)#2018-02-2020:39jaret@souenzzo would you be able to make a small repo? I ran into this issue before with another client who was unable to share their repo, but was unable to reproduce. I created this repo that showed me that I was able to return all results.#2018-02-2020:39jarethttps://gist.github.com/Jaretbinford/3920c36e332d728f73f0256b6e2f5101#2018-02-2020:54souenzzoyep. I can reproduce. I will do a clj+deps when possible#2018-02-2207:15fmindHi. I want setup a large import pipeline to migrate my data from couchdb to datomic. I have a single local server that runs on Linux. Which storage backend should I use ? Postgresql ? Cassandra ? Use free and do a backup restore ?#2018-02-2213:45jaret@fmind Backup/restore is good for moving a datomic database from one underlying storage to another. If you’re taking a database on couchdb and making it a datomic db you’ll want to write an ETL job to take the couchdb dataset from SQL(?) to Datomic. 
I would recommend that you watch Stu’s video on simplifying ETL with Clojure and Datomic. (https://www.youtube.com/watch?v=oOON--g1PyU).#2018-02-2213:46jaretIn terms of choosing your underlying storage they all have their trade-offs. I recommend going with the storage you have the most experience with. Many of our users find the experience with AWS DDB to be excellent. Our Storage docs go into detail on the configurations per storage. https://docs.datomic.com/on-prem/storage.html#2018-02-2214:04fmind@jaret thanks for the explanation. I've tried postgresql and cassandra storage already, but the results were not impressive#2018-02-2214:05bmabeyHow do people typically store MD5 hashes in datomic? In Postgres I typically use the UUID type since it is more compact than the string representation. By that same logic it seems that db.type/uuid may be the better option over db.type/string but I'm not sure if any extra JVM considerations (e.g. the size of a UUID in memory vs that of a string) need to be taken into account.#2018-02-2312:09val_waeselynck@U5QE6HQK0 I'm assuming you want the hashes to be indexed? You could also use a bigint.#2018-02-2315:05bmabeyGood point, thanks.#2018-02-2214:05fmindI'll try to build the import pipeline with the backup/restore feature. But I don't like having the transactor and peer library running on two different processes ... I use twice the memory for nothing#2018-02-2214:13marshall@fmind there are numerous reasons that Datomic is specifically designed with peers and transactors as separate processes.
https://docs.datomic.com/on-prem/deployment.html#process-isolation#2018-02-2220:36adamfreyFor someone at cognitect, I'm getting an Access Denied XML response when I try view the Datomic java docs: https://docs.datomic.com/javadoc/datomic/Connection.html#log()#2018-02-2220:37marshall@adamfrey https://docs.datomic.com/on-prem/javadoc/datomic/Connection.html#log()#2018-02-2220:37marshallWhere was that linked from?#2018-02-2220:37adamfreyhttps://docs.datomic.com/on-prem/log.html#2018-02-2220:37marshallok thanks#2018-02-2220:37marshalli’ll fix it#2018-02-2220:37adamfreythank you#2018-02-2220:49marshall@adamfrey thanks for catching that. there were a bunch of those throughout the docs. i think i’ve fixed them all; it will take a couple minutes for the changes to percolate#2018-02-2220:49adamfreyno problem!#2018-02-2304:05thosmosI'm having some trouble calling a local function inside of a peer query:
(d/q '[:find ?ft
       :where
       [_ :sitevisit/SiteVisitDate ?date]
       [(hello ?date) ?ft]]
     db)
I'm getting this error: CompilerException java.lang.RuntimeException: Unable to resolve symbol: hello in this context, compiling:(NO_SOURCE_PATH:54:9)
is there something I'm missing? regular std lib functions work fine. this function is declared directly before the function that contains this query.#2018-02-2307:27robert-stuttafordyou need to use fully-qualified namespace names, @thosmos your-ns/hello#2018-02-2307:43thosmosoh, of course! thanks @robert-stuttaford#2018-02-2314:25souenzzocan I restore a backup into a memory database?#2018-02-2314:28marshall@souenzzo No, there is no out-of-process access to mem databases#2018-02-2319:14adamfreyif I have a datomic.db.Datum object is there a way to clone it? My ultimate goal is to convert datoms with byte arrays as their values into datoms with equivalent strings as their values.#2018-02-2319:47favilaThe constructor requires e a-without-partition-bits, object, and tx with the op bit set (called tOp)#2018-02-2319:47favilathese are all public fields so you can just copy them#2018-02-2319:48favila(Datum. (.e dat) (.a dat) (.v dat) (.tOp dat))#2018-02-2319:48favilanone of this is part of the public documented api though, so be careful#2018-02-2319:58adamfreythanks @U09R86PA4. I saw .tOp in the class info but I thought it was .t0p with a zero and that didn't work#2018-02-2320:00favilameans "t plus asserted/deleted"#2018-02-2320:00favilaso the t value (not tx) shifted left one bit; last bit is 0 for retract and 1 for assert#2018-02-2320:01favilaOp = operation#2018-02-2403:44DesmondHi, I've had a datomic db in production for a few weeks now. Until a few days ago the response times were super fast. In the past few days they've started to go up a lot and occasionally spike. There really isn't a ton of traffic and not much data (i'm not sure how to check this exactly - my s3 backup has 137 items in the 'values' directory). I've never done any performance tuning for datomic. Where should I start?#2018-02-2404:14Desmondthe big spikes might be related to restarts but even the small spikes are around 10 seconds#2018-02-2406:36Desmondok, i found an n+1 query. 
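The fix robert-stuttaford gives thosmos, fully qualifying the function name, looks like this (a sketch; the namespace `my.app` is hypothetical and `db` is assumed bound):

```clojure
(ns my.app
  (:require [datomic.api :as d]))

(defn hello [date]
  (str "hello " date))

;; Inside the query, refer to the function by its fully-qualified symbol:
(d/q '[:find ?ft
       :where
       [_ :sitevisit/SiteVisitDate ?date]
       [(my.app/hello ?date) ?ft]]
     db)
```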
my bad!#2018-02-2406:36DesmondI'm still curious about recommendations regarding performance, though#2018-02-2406:37DesmondI mean besides not doing n+1 queries#2018-02-2514:56stuarthalloway@captaingrover in general, whenever your system’s behavior does not match your expectations, I would recommend the scientific method https://www.youtube.com/watch?v=FihU5JxmnBg#2018-02-2521:55Desmond@U072WS7PE Yeah, I wound up finding a couple of issues besides the n+1 just by removing different parts of the query and timing the result from the repl. For example, I was joining across attributes that were not :db/id & :db.type/ref. I probably should have know that would be slow but seeing the numbers makes it a lot more real. Good talk! Thank you!#2018-02-2514:57stuarthallowaythat said, you can also improve your expectations in advance by creating SLAs, by modeling your system, and by generative and simulation testing#2018-02-2514:59stuarthallowayin your specific example, testing with simulated load would certainly uncover any n+1 roundtrip problem in a system#2018-02-2520:33drewverleeWatching the day of datomic video’s i’m somewhat confused by how were using Ids in the transaction to refer to other entities. https://vimeo.com/208663465#t=600s
here is the code that stuart is referring to:
[{:person/name "bob"}
 {:person/id "a"
  :person/name "Alice"
  :person/spouse "b"}]
Here i would assume what he is describing is how we can transact the idea that bob is married to alice without yet having an id for bob. i would expect that to look something like:
[{:person/name "bob"
  :db/id "b"}
 {:person/name "Alice"
  :person/spouse "b"}]
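drewverlee's expected form is in fact how string tempids work; a peer-API sketch (assuming a live connection `conn` and `:person/spouse` declared as `:db.type/ref`):

```clojure
;; The string "b" is a tempid: it ties the two maps together and
;; resolves to bob's newly-minted entity id at transaction time.
(def tx-result
  @(d/transact conn
     [{:db/id "b" :person/name "bob"}
      {:person/name "Alice"
       :person/spouse "b"}]))

;; The :tempids map in the result resolves the string to the real id:
(get (:tempids tx-result) "b")
```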
#2018-02-2520:50potetm@drewverlee I’m pretty sure you’re correct. Either that’s a typo or datomic is inferring from the fact that there are no other entities in the transaction.#2018-02-2520:50potetmI’ve not tried it, but what you have is what I would do.#2018-02-2520:57drewverleeThanks @potetm. I assuming the schema also says that person/spouse is a ref. Also these docs on creating tempIds seem to indicate that the above (as i wrote it at least) would work: https://docs.datomic.com/on-prem/transactions.html#creating-temp-id#2018-02-2521:02drewverleeIs (let [schema {:aka {:db/cardinality :db.cardinality/many}}
conn (d/create-conn schema)]) a shorthand for {ident {shema-key shema-value}} as opposed to: (schema [{:db/ident :aka, etc…}]#2018-02-2600:54drewverleeAny good tools for visualizing datomic schemas?#2018-02-2600:55drewverleeOr maybe a quick explanation why ppl wouldn’t normally want that?#2018-02-2615:15favilaDatomic schema only exists on the attributes; that's just not enough information to get a useful ETL-style visualization#2018-02-2615:17favilaEntities don't have types, so domain (what kind of entity an attribute can appear on) and range (possible value-space of an attribute) are both unexpressable#2018-02-2615:19favilaYou can do it, but you need your own conventions on top so any visualizer cannot be general#2018-02-2615:19favilaThis video describes one way: https://www.youtube.com/watch?v=sQCoTu5v1Mo#2018-03-0121:23drewverleeThanks for getting back to me! i didnt see your reply until now.#2018-03-0121:24drewverleeAh i think i understand. After some thought, i guess it would be useful to just have a map of what was, rather then what could be.#2018-03-0121:24drewverleein my use case.#2018-02-2600:56drewverleeor maybe its so simple that it goes without saying lol#2018-02-2600:57drewverleehttps://github.com/felixflores/datomic_schema_grapher#2018-02-2603:05isaacclojure.lang.ExceptionInfo: Transactor request timed out {:db/error :peer/request-timed-out, :request :create-database, :result #object[java.lang.Object 0x5d32f5db "#2018-02-2603:06isaacGot this error when try to create-database?#2018-02-2609:07isaacWhat use of port 4334,433,4336 respectively?#2018-02-2615:09favilaport 4334 is for transactor-peer communication (using Artemis)#2018-02-2615:10favila4335 is only for dev storage. It's the "storage" port peers use to talk to the embedded h2 db using sql.#2018-02-2615:11favila4336 is the port for the web-based embedded sql GUI provided by h2. 
You can use it to look inside the embedded h2 db (not that there's much value in this)#2018-02-2609:59Georgiana Maniahello! how can you excise an entity when using datomic cloud api? is there something similar to db/excise?#2018-02-2611:40robert-stuttafordno excision is supported on Cloud as far as i know, @georgiana.mania#2018-02-2613:08Georgiana Maniathanks @robert-stuttaford. I was hopping that they would support it.. because otherwise it can no longer be used to store data of EU citizens 😞#2018-02-2613:09marshall@georgiana.mania see: https://forum.datomic.com/t/support-for-excision-or-similar/323#2018-02-2613:11Georgiana ManiaThanks!#2018-02-2615:12adamfreylow priority, but there is a dangling closing square bracket on this code example in the docs: https://docs.datomic.com/on-prem/best-practices.html#set-txinstant-on-imports#2018-02-2615:36jaretI’ll fix that! Thanks for the catch!#2018-02-2616:04alexkI have data in MongoDB, and a script to generate a Datomic schema and migrate it and the data into Datomic. But if I run the script twice, it creates twice the entities because of auto-ids when transacting maps. How best to modify my script to ignore entities that already exist in Datomic?#2018-02-2616:05unlistedI'm just getting started, and could use some help.
I have a database available, and I have created a simple app to run a query.
Currently, it just runs a hard-coded query.
How do I create Datalog queries dynamically?#2018-02-2616:06alexkEither pass the data into the query using bound variables, or generate the query (which is just data) programmatically.#2018-02-2616:10unlistedCan you point me to an example? I've been trying that, but it's not working for me.#2018-02-2616:11marshall@U8J1APQ9Z are you using Client or peer?#2018-02-2616:17marshallYou might want to review https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/building_queries.clj#2018-02-2616:18marshallNote that the map form provides an easy path to generate the whole query programmatically; you can build up your query map with regular clojure data structure manipulation#2018-02-2616:24unlisted@U05120CBV I’ll start there. Thanks!#2018-02-2616:35marshall👍#2018-02-2616:08marshall@alex438 You may want to watch Stu’s talk on ETL and Datomic for inspiration:
https://www.youtube.com/watch?v=oOON--g1PyU
https://github.com/stuarthalloway/presentations/wiki/Simplifying-ETL-with-Clojure-&-Datomic#2018-02-2616:09marshalland the resulting importer: https://github.com/Datomic/mbrainz-importer#2018-02-2616:10marshallThe key is to make the import idempotent, so that it can resume wherever it left off#2018-02-2616:10marshallThe specifics of how you do that will depend on your particular data source and schema#2018-02-2616:11alexkSounds almost impossible with complicated nested objects#2018-02-2616:12alexkI make use of isComponent#2018-02-2616:12alexkI’ve watched the video, I don’t remember anything addressing my problem beyond “query for things, detect differences, and insert/update appropriately “, but that doesn’t answer my question.#2018-02-2616:12marshallit shouldn’t be if you have a unique identifier of some sort#2018-02-2616:12alexkthere can’t be a unique identifier for components#2018-02-2616:12alexkbecause they’re only unique by virtue of their parent#2018-02-2616:12marshalli mean for your parent ‘object’ in the source dataset#2018-02-2616:13marshallone thing that particular ^ import job does is key individual transactions with a UUID#2018-02-2616:13alexkI can do that, sure. Each top-level entity has an existing ID from the source db#2018-02-2616:13marshallthen you can ask for any given transaction whether you’ve asserted it before or not#2018-02-2616:15marshallin particular, I believe that is done here https://github.com/Datomic/mbrainz-importer/blob/5d2d90f9a35789824675a4cc86a9a433527cb41b/src/datomic/mbrainz/importer.clj#L261#2018-02-2616:16alexkI guess that’s what I’m missing. 
I’ll think about that and see where it gets me, thanks#2018-02-2616:16marshallsure#2018-02-2616:16marshallalso, here is the bit that looks at an existing import in-progress to find things that have already been completed https://github.com/Datomic/mbrainz-importer/blob/master/src/cognitect/xform/batch.clj#L46#2018-02-2616:16marshallin the context of a transduction#2018-02-2618:50souenzzosomeone else has problems with datomic restore on dynamodb (throughput exception)#2018-02-2619:14adamfreyThis looks like a typo at the beginning of this paragraph in the docs: https://docs.datomic.com/on-prem/transactions.html#list-forms#2018-02-2619:14adamfreyAlso, there's the same typo in the next paragraph "Map forms"#2018-02-2619:59marshall@adamfrey not a typo, just somewhat awkward wording#2018-02-2619:59marshallEach list (contained within a) transaction represents…#2018-02-2620:18adamfreyoh I see now. That was a misreading on my part#2018-02-2701:45James VickersDoes anyone have experience writing a GraphQL resolver that queries from Datomic (pull)? Is that pretty straightforward because of how pull works? #2018-02-2710:33val_waeselynckDepends on your authorization model and of the need for derived fields. 
I have set up a query server with similar semantics to graphql, and Datomic pull fell short for these reasons.#2018-02-2710:35val_waeselynckCustom caching logic and complementary data sources (e.g blob stores) can be other reasons why pull might be insufficient.#2018-02-2710:37val_waeselynckOh and I forgot parameterized fields, of which pull has none either#2018-02-2710:38val_waeselynckSo pull might be helpful locally, but I wouldn't count on it to just execute a whole GraphQL query.#2018-02-2710:39val_waeselynckIf you're on a Peer and using Lacinia, I think expressing your resolvers in terms of the Entity API and occasionally Datalog is much less limiting.#2018-02-2721:02timgilbertYeah, the entity API + lacinia work beautifully together#2018-02-2720:38adamfreyI just noticed that the client api find spec only has find-rel and is missing find-coll find-scalar and find-tuple. Are those coming later or were they intentionally omitted?#2018-02-2721:35marshall@adamfrey the client API has always used only find-rel
if you’re interested in other options, I’d suggest you add a feature to our feature suggestion portal#2018-02-2817:06chris_johnsonI would like to confirm my understanding of the current state of clients and Datomic Cloud: Cloud offers, at present, only the Client API in Clojure, correct? There isn’t a REST API server nor other implementations of the Client API wire protocol to allow for things like JavaScript or Python applications to natively consume Datomic Cloud databases?#2018-02-2817:32jaret@chris_johnson You’re correct. There are feature requests for a JavaScript Client, Python Client, Elixir Client on our suggest a feature page (top right of http://my.datomic.com). You should log in and add your vote to one of those features.#2018-02-2817:33jaretThere is also a feature request to support ClojureScript Client lib.#2018-02-2818:06grounded_sageI haven’t done much with Datomic. But the project on at work is ideal for it. It’s already months in and we are nowhere with the current DynamoDB/Serverless set up. I’m currently on front end tasked with Elm. Is there an easy way for me to pass queries to Datomic kind of like GraphQL In nature. #2018-02-2818:07jonoflayhamI’ve a fairly dumb question about Datomic pricing. https://www.datomic.com/pricing.html leads me straight to AWS Marketplace calculators which talk about “software and infrastructure” costs… but what about licence fees for Cognitect? I see those mentioned elsewhere, on https://www.datomic.com/get-datomic.html, but that seems to be talking only about the on-prem option.#2018-02-2818:08marshall@jonoflayham the “Software Cost” for Datomic Cloud when you’re on the Marketplace listing is the fee to Cognitect#2018-02-2818:12jonoflayhamThanks, @marshall. So just the $0.012 per hour, roughly? I’ve tried in 3 browsers and on 2 laptops, but I can’t get the software cost or infrastructure cost figures to change no matter what I use for region or fulfilment option.#2018-02-2818:13jonoflayhamSorry; crossed in post. 
Wow - that seems like great value.#2018-02-2818:13marshallThat’s correct. The only difference is in the toplogy. So the Solo topology is $0.012 per hour and the production topology is $0.156 per hour#2018-02-2818:14marshallper instance#2018-02-2818:14jonoflayhamNone of those figures updates for me when I change topology or region, at the moment. But great to know the ballpark values - thanks.#2018-02-2818:15jonoflayhamAnd - dumber still - you’ll stop being charged when you tear down whatever it is that the CloudFormation templates have built?#2018-02-2818:16marshallThe fee to cognitect is only charged when you are running cluster node instances#2018-02-2818:16marshallWe don’t charge anything for the other infrastructure (dynamo, s3, etc)#2018-02-2818:17marshallso, yes, when you delete your compute stack you are no longer paying Cognitect#2018-02-2818:17marshallthere are some storage resources (s3, dynamo, efs) that are retained when you delete a stack#2018-02-2818:17marshallwhich you will still pay AWS for unless you explicitly delete them#2018-02-2818:18marshallinfo here: https://docs.datomic.com/cloud/operation/deleting.html#2018-02-2818:37jonoflayhamThanks for all that.#2018-02-2818:37jonoflayhamThis isn’t a complaint, just feedback: I think there’s a problem with the AWS Marketplace widget for Datomic Cloud. It presents four fulfilment options and usage information entries - solo, production, solo, production. Separately, both solo and production links on https://www.datomic.com/pricing.html lead to the solo pricing. And solo + production pricing are identical, each varying in the same way according to the EC2 instance type.#2018-02-2818:38jonoflayhamSafari, Chrome, Firefox#2018-02-2818:40marshallAgreed. We are aware of the issue and are discussing it with Marketplace.#2018-02-2818:19jonoflayhamUnderstood. 
I guess much of that is obvious to people used to AWS Marketplace; but if it isn’t, maybe the Cloud-oriented page could drop a few hints.#2018-02-2818:21grounded_sageAm I right in my assumption that I can pass a data structure in JSON, parse that and then send it off in a Datomic pull?#2018-02-2818:23marshall@grounded_sage are you wanting to access Datomic directly from your frontend app or via a webserver?#2018-02-2818:27grounded_sageWeb front end would be ideal. #2018-02-2818:28grounded_sageIt’s literally an app to keep track of inventory. #2018-02-2818:31grounded_sageIf I could query data and update data directly from my web app I would be happy as Larry. There is also an iOS app which is for warehouse workers.#2018-02-2818:47grounded_sageDoesn’t seem to be any way to do it with Cloud. #2018-02-2818:52alexkcan I switch between solo and production later on?#2018-02-2818:52alexkwithout losing data, I mean#2018-02-2818:52marshall@alex438 yep https://docs.datomic.com/cloud/operation/upgrading.html#upgrading-solo-to-production#2018-02-2818:54marshallmake sure you’ve done your First Upgrade Ever before doing that ^ https://docs.datomic.com/cloud/operation/upgrading.html#first-upgrade#2018-03-0100:26devna little confused as it's been awhile since I played with datomic. I am trying pro starter. What I remember doing in the past was setting up license key, storage, adding the postgres driver, adding datomic-pro (not client) to my deps, running the transactor, requiring datomic.api :as d, specify URI, create-db on it, make connection#2018-03-0100:48souenzzoWhat storage backend are you using?
Besides the peer, you're also running the transactor, right?#2018-03-0100:49devnpostgres#2018-03-0100:26devnthe docs now refer to starting a peer server, does this start a transactor?#2018-03-0100:28devnI tried it the old way with the latest datomic-pro library using the URI, but it failed to transact my schema. now trying it with the client library and it's complaining about "SSL doesn't have a valid keystore" when trying to make the client#2018-03-0100:49souenzzoTransactor "alone" on a JVM.
You need to download the jar and run it#2018-03-0100:55souenzzothe minimal production datomic setup needs 3 components:
1- Storage backend (SQL, in your case)
2- Your app, which requires and uses datomic.api (we call it the peer)
3- The transactor, a jar that you download and run in an independent JVM (looks like it's missing).
---
Dataflow:
peer READ from DB
peer send writes to transactor
transactor WRITE on DB#2018-03-0101:42devngot it figured out. had some problems with conflicting dependencies that caused a few of the issues, and then there was some confusion over small diffs in the API, like the client using {:tx-data ...} when transacting vs the peer library.#2018-03-0105:58Desmondi'm back with more performance questions. last time I found that my query was slow because of some naive mistakes. now the same query is slow because of a not-join across :db/id values.#2018-03-0105:59DesmondThis is the part of the query that's slowing it down:
(not-join [?c ?u]
  [?v :comment-viewing/user ?u]
  [?v :comment-viewing/comment ?c])
#2018-03-0106:00Desmondwhere ?c and ?u are ids#2018-03-0106:06Desmondtotal in the database there are up to 114 ?u datoms, 1649 ?c datoms, and 6846 ?v datoms#2018-03-0106:09Desmondthe query takes over 5 seconds with the not-join, 500 ms without the not-join, 700 ms when I drop the [?v :comment-viewing/comment ?c] clause, and 99 ms when i drop the [?v :comment-viewing/user ?u] clause.#2018-03-0106:11Desmondare :db.type/ref indexed by default?#2018-03-0106:12Desmondlooks like yes: https://docs.datomic.com/on-prem/indexes.html#2018-03-0106:23Desmondcould :db/noHistory help here?#2018-03-0107:42Desmondsolved! I just need to flip the clauses around like so:
(not-join [?c ?r]
  [?v :comment-viewing/comment ?c]
  [?v :comment-viewing/rapper ?r])
Takes the time from 5 seconds to 90 ms#2018-03-0107:42DesmondOnly figured this out by looking at day-of-datomic:
;; This query leads with a where clause that must consider *all* releases
;; in the database. SLOW.
(dotimes [_ 5]
  (time
   (d/q '[:find [?name ...]
          :in $ ?artist
          :where [?release :release/name ?name]
                 [?release :release/artists ?artist]]
        db
        mccartney)))
;; The same query, but reordered with a more selective where clause first.
;; 50 times faster.
(dotimes [_ 5]
  (time
   (d/q '[:find [?name ...]
          :in $ ?artist
          :where [?release :release/artists ?artist]
                 [?release :release/name ?name]]
        db
        mccartney)))
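A cheap way to gauge which clause is more selective before reordering is to count the datoms behind each attribute (a sketch; the attribute names come from the not-join example above, and `db` is assumed to be a database value):

```clojure
;; Fewer datoms for an attribute means the clause using it binds fewer
;; tuples, making it a better candidate to lead the :where.
(count (seq (d/datoms db :aevt :comment-viewing/comment)))
(count (seq (d/datoms db :aevt :comment-viewing/user)))
```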
#2018-03-0112:06laujensen@stuarthalloway / @marshall: We talked about Html and CSS blowing up the indexing service. Would prepending something like <!-- rev 0x0f32f3ffff --> fix that, if I remove all history on these entities?#2018-03-0113:05marshall@laujensen prepending to the value with something that changes (i.e. the sha of the value itself) would help with the segment size issue
Note that enabling noHistory will not remove existing history from an attribute#2018-03-0115:42alexkWhen I get :restore/collision from datomic restore-db, is there any way to tell it to overwrite without having to manually delete the db?#2018-03-0115:58alexkI’m upset that I have to delete it at all, because I don’t know what that would do to a datomic client/peer connected to it#2018-03-0115:58alexkWhat am I missing?#2018-03-0115:59marshall@alex438 Why are you restoring into a ‘live’ database? You must restart all peers and transactor after a restore: https://docs.datomic.com/on-prem/backup.html#sec-5#2018-03-0116:00alexkand transactor? aww#2018-03-0116:00alexkIt’s more of a “migrate latest real db into a staging db” than “restore into live db”#2018-03-0116:00marshallyeah, you’ll need to restart#2018-03-0116:51alexkHow to restart a transactor running with CloudFormation? There’s no restart/stop option - would I have to reboot the associated EC2 instance?#2018-03-0116:52marshallthe provided cloudformation makes an autoscaling group#2018-03-0116:52marshallso you can terminate the instance and the ASG will replace it#2018-03-0116:01marshallif you dont you should get a transactor failover the first time you try to write anyway#2018-03-0116:03alexkWhat’s the meaning of “You do not need to do anything to a storage (e.g. 
deleting old files or tables) before or after restoring.“, which seems to be at odds with the :restore/collision error?#2018-03-0116:05marshallwhat version of Datomic?#2018-03-0116:05alexkdatomic-pro-0.9.5561.62 more or less#2018-03-0116:06marshalland you get that error on the peer that is running the restore job?#2018-03-0116:06alexkyes#2018-03-0116:06alexkthe transactor is created by the ensure* commands#2018-03-0116:07alexkand I’m calling bin/datomic restore-db file:/etc file:/Users/etc/Desktop/etc datomic:#2018-03-0116:07marshallare you getting “is already in use by a different database” or “database already exists under the name”#2018-03-0116:07alexkIt worked if I gave it a db name that didn’t exist yet#2018-03-0116:08alexk> already in use by a different database#2018-03-0116:08marshallok. yes you can’t ‘fork’ a DB within the same storage under two different names#2018-03-0116:08marshallthe basis from which that database was created (call it foo) has been restored into that storage before#2018-03-0116:08alexkoh I read that but didn’t realize that’s the rule I broke, but yeah of course#2018-03-0116:09alexkok so I can restore, but not update#2018-03-0116:09marshallso if you restore foo into storage then change some stuff in it (and in the original source of foo) then try to backup the original source again and restore into the same storage you’ll see that error#2018-03-0116:10alexkis that something that can be fixed in a later release of datomic - to use new ids for restored data?#2018-03-0116:10marshallb/c those two databases share some history (whatever was there before the original restore), but diverge later#2018-03-0116:10marshallit’s something that could be suggested in our Feature request portal#2018-03-0116:10alexkok, just wondering if it’s design or implementation#2018-03-0116:10alexkthanks marshall#2018-03-0116:10marshallsure#2018-03-0117:20mgrbytewhere's the right place to report an issue with the peer api? 
I've found that tx-range doesn't work when supplied a tx id for start (but does when using (d/tx->t tx-id))#2018-03-0117:23mgrbyteI'm using [com.datomic/datomic-pro "0.9.5561"]#2018-03-0121:22Datomic PlatonicI saw a video where the presenter mentioned tricking the transactor into inserting custom transaction timestamps. What is the best way to do that?#2018-03-0121:41devnFun fact: midtaco is an anagram for datomic#2018-03-0121:41devnone of my prouder local repo naming moments, thought i'd share#2018-03-0122:52marshall@clojurians873 https://docs.datomic.com/on-prem/best-practices.html#set-txinstant-on-imports#2018-03-0202:24Datomic Platonic@marshall Brilliant! We'll look into it.#2018-03-0212:44richardwongWhen datomic connection with sql database and :datasource, the first used datasource will be cached and after that, all call will route to first datasource. It's fine with normal jdbc str, but it didnot work with :datasource.#2018-03-0214:07dominicmJust to check, has anybody open sourced a Datomic exporter to other (SQL) databases? I've asked before, but hoping someone wrote one since then 😛#2018-03-0214:42Datomic Platonic@marshall If custom txInstants can't come before the first timestamp in the database, should we start everybase with an init transaction in 1900 or so? Sorry if I didn't understand the docs correctly.#2018-03-0215:01favila@clojurians873 custom txInstants cannot come before the last tx's instant, i.e. they must be strictly increasing#2018-03-0215:02favilathe point being if you're recreating a db of past transactionss, you need to be careful to transact in a historical order#2018-03-0215:05Datomic Platonic@favila ah, that makes sense actually. 
Sounds like we should be viewing databases more as temporary artifacts that could be reconstructed from data at a whim, rather than have a single 'common source of truth' database.#2018-03-0215:06favilayou need a source of truth somewhere#2018-03-0215:06favilabut the use case here is constructing easier-to-use derived views of whatever that source of truth is#2018-03-0215:06Datomic Platonic@favila interesting#2018-03-0215:07favilafor example, you could have a source-of-truth datomic db where tx is time of record and you explicitly model problem domain times#2018-03-0215:07favilabut every once in a while for analysis you can generate another db from that one where the tx time is the problem domain time#2018-03-0215:08favilaso as-of and since views of this db exactly correspond to a reconstructed record of historical events#2018-03-0215:09Datomic Platonic@favila interesting idea... i'll ponder that for a bit. We're basically pulling from kafka and other log sources because they're in some cases too large to put in datomic#2018-03-0215:09Datomic Platonic@favila good idea on the problem domain time...#2018-03-0215:11Datomic Platonichttps://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html?t=1&cn=ZmxleGlibGVfcmVjc18y&iid=1e725951a13247d8bdc6fe8c113647d5&uid=2418042062&nid=244+293670920#2018-03-0215:11Datomic Platonic(sorry for the spam), but the above post by some guy had us scared for a bit, but your solution sounds like the best#2018-03-0215:12Datomic Platonicin the post he basically says you can't model your problem domain times in datomic, etc#2018-03-0216:19mgrbyteto be fair, I don't think @U06GS6P1N is saying that you can't model your problem domain times in datomic in this post; he's (quote coherantly imho) pointing out that using datomic's transactor time (`:db/txInstant`) for domain-event times is not a good idea, and that you could/should model domain event times using your own 
schema.#2018-03-0216:56val_waeselynckConfirmed, and I agree with @favila said above#2018-03-0217:03favilaWell the hype when datomic was first released was that we would finally not have all those extra time related columns all over our data. But really what datomic solves is more specific#2018-03-0217:09val_waeselynckwell I would say it depends what these columns were intended for in the first place. The way I see it, Datomic is great for change detection, but not especially helpful for managing versions. People hoped for the latter, but the former is a much more common need#2018-03-0215:54favilawell imagine git had no force-push#2018-03-0215:54favilait's kind of like that#2018-03-0215:55favilatxInstance is commit time, but the data in the commit may have other times#2018-03-0215:56favilaif you got a date wrong in a git commit, you make another commit to fix it; you don't alter the original commit and force-push#2018-03-0420:27igrishaevHi, can anybody help me with that SO question? https://stackoverflow.com/questions/49100045/#2018-03-0501:01drewverleeIn datomic whats the mechanism by which you reference another entity when doing a transaction when that entity isn’t in the db yet? something like:
[{:db/id -1
  :person/name "sally"}
 {:person/name "joe"
  :person/spouse -1}]
I’m not sure how to even phrase the question properly.#2018-03-0501:04drewverleemaybe this section on temp ids is what i want: https://docs.datomic.com/on-prem/transactions.html#temporary-ids, There is no way to combine that with the map form insert?#2018-03-0501:40devn@drewverlee yeah that’s basically it. There are examples in the day of datomic repo that I think show this.#2018-03-0501:41devnLet a tempid and then reference it in the transaction elsewhere #2018-03-0502:01drewverleemaybe i have an old version of something#2018-03-0517:18drewverleeturns out i just didn’t tell the schema it was a ref. I wasn’t sure that was necessary in dataScript.#2018-03-0503:02alexmillerYou can now use strings as temp ids#2018-03-0508:09ignorabilishi, is there a simple way to hook an on-change sort of handler? I want to monitor all transactions and if a transaction adds/retracts a value for a specific entity to invoke a side-effectfull function#2018-03-0509:14souenzzoThis?
http://blog.datomic.com/2013/10/the-transaction-report-queue.html#2018-03-0514:19ignorabilis@U2J4FRT2T - Sorry, I meant Datomic Cloud.#2018-03-0514:23souenzzoCheck if there is something like tx-report-queue in the client api reference
if there isn't, there's no support. 😕
you may suggest this feature on http://receptive.io (top right corner on http://my.datomic.com)#2018-03-0910:10ignorabilis@U2J4FRT2T - Thanks. Unfortunately there isn't and that's why I asked. I will vote/suggest this feature though.#2018-03-0509:48petterikIs there a way to use the datomic.client.api against an in-memory database/connection? Keeping everything in the same process#2018-03-0514:33jaret@petterik No — on-prem requires a peer server to use the client API which would break your “in the same process” req.#2018-03-0514:37petterikOk. Makes it a bit harder when developing stuff, not being able to create/destroy connections in-memory on the fly. An in-memory "peer-server" would be nice to have#2018-03-0518:26uwoI have a question about how to approach data migrations. I understand that if you follow the best practices for schema growth, older versions of an application will always have the appropriate schema available.
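For reference, the peer-side pattern from the tx-report-queue post linked above looks roughly like this (a sketch, assuming a peer connection `conn`; the handler is hypothetical):

```clojure
;; tx-report-queue returns a BlockingQueue of transaction-report maps.
(let [queue (d/tx-report-queue conn)]
  (future
    (loop []
      (let [{:keys [tx-data]} (.take ^java.util.concurrent.BlockingQueue queue)]
        ;; each datom is [e a v tx added?]; act only on assertions
        (doseq [[_e _a _v _tx added?] tx-data]
          (when added?
            (comment "invoke the side-effecting handler here"))))
      (recur))))
```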
My question is, should it be a goal to also have inverse data migrations (“down” data migrations), so that earlier versions of the app can be run in a dev setting?#2018-03-0518:47drewverleeWith the pull api, is there a limit to how many relationships you can follow? Eg to get great grandson son >son>son#2018-03-0518:49favilahttps://docs.datomic.com/on-prem/pull.html#recursive-specifications#2018-03-0519:15souenzzoThis {:persn/friends 6} is a new feature?
Some time ago I wrote a spec for the pull pattern and I do not remember seeing this specification#2018-03-0519:24favilano, not new#2018-03-0519:25favilathe grammar may have been wrong (happens frequently)#2018-03-0519:25favilabut the feature was always there#2018-03-0519:26favilaI think the grammar in the docs is hand-written from their implementation; it's not copy-pasted from a parser generator's input#2018-03-0521:09drewverleeThanks Favila, I'll look at this when i get home#2018-03-0602:22drewverlee@U09R86PA4 You're right, my example would work with recursion. Thanks!
I guess i should have given my actual problem! In my case the relationships aren’t always the same, in fact each ref is different.#2018-03-0602:23drewverleeThis might be better suited for the logic programming side of datomic i suppose#2018-03-0603:35isaacHow to use pipeline transactions for imports (https://docs.datomic.com/on-prem/best-practices.html#set-txinstant-on-imports)?#2018-03-0603:43isaac@marshall#2018-03-0610:52alex.gheregahi guys, I'm getting "Cannot write 210 as tag null when transacting a 210N" on datomic for cloud. can anyone explain this?#2018-03-0613:04marshall@alex.gherega Can you provide the full error / stack trace ?#2018-03-0613:19marshalllooking into it#2018-03-0613:20alex.gheregathank you, @marshall !#2018-03-0616:02dominicmhttps://docs.datomic.com/javadoc/ have the javadocs died / moved?#2018-03-0616:03dominicmah, they've moved to /on-prem/, looks like duck duck go hasn't picked that up yet.#2018-03-0616:03marshallhttps://docs.datomic.com/on-prem/javadoc/index.html#2018-03-0616:03dominicmhttps://docs.datomic.com/on-prem/javadoc/ is dead too#2018-03-0616:03marshallah, yeah i was going to ask where you found that link#2018-03-0616:03dominicm/index.html#2018-03-0616:03dominicm@marshall first my history, then duck duck go 😛#2018-03-0616:03marshallsearch engines are tough - not much we can do about that 🙂#2018-03-0616:03marshalli can update all our own internal links (and have been trying to)#2018-03-0616:04dominicmWell, HTTP 301 is the proper approach, no? Whether or not S3 let you do that, I don't know.#2018-03-0616:05dominicmhttps://docs.datomic.com/on-prem/javadoc/index.html The link to "entity id" does nothing, just while I'm at it 😛#2018-03-0616:05marshallhm. i’ll have to have a look#2018-03-0616:05dominicm(it's also to the wrong place, probably relevant)#2018-03-0616:41marshall@dominicm i’ve fixed those links. 
i will look into the path without index.html#2018-03-0617:36sparkofreasonSuppose I'm storing timestamped entities, and want to retrieve the N with the most recent timestamps. How would you do that using Datomic Cloud?#2018-03-0617:52souenzzoOn peer, you can query by all entities, sort-by inst and take n.
but on cloud/client I'm curious to know, since querying all entities can return a potentially huge result.
For client, sort on query is a missing feature.#2018-03-0618:26marshallI would use a nested query#2018-03-0618:28marshallhttps://stackoverflow.com/questions/23215114/datomic-aggregates-usage/30771949#30771949#2018-03-0618:29marshallexample ^#2018-03-0618:31marshalldepending on whether your timestamps or the entities are the more restrictive, i’d either find the N most recent or find all relevant entities in the inner query#2018-03-0618:32sparkofreasonThat's pretty slick. Thanks.#2018-03-0618:32marshallsure#2018-03-0620:16rapskalianFun aside: just bragged about our company's use of Datomic on a client call, mainly surrounding its data history capabilities and analytic flexibility, and they were blown away metal#2018-03-0620:19marshallawesome!#2018-03-0623:49erichmondMy company was primarily acquired because of the last product we built, which was in clj/cljs/datomic, and their CEO was legit blown away by the stack, datomic in particular#2018-03-0703:31Desmondso I have been testing my queries by passing in sample datoms as vectors of vectors. Now I want to use pull syntax in the query and i'm getting the error: clojure.lang.PersistentVector cannot be cast to datomic.Database. what's going on here? how can I do this?#2018-03-0704:00favilaYou need an actual database#2018-03-0704:01favilaConsider using the mem (in memory) database#2018-03-0704:06DesmondThanks for clarifying. Yeah i just separated the main query from the pulling part. I wasn't sure the best approach to this separation. I need to keep the association between the attributes from the pull and the attributes from the main query so I'm doing a pull for each entity. Works fine on the small dataset I just tested it with but i'm not sure how the pull-per-entity strategy will perform with more data. Any ideas?#2018-03-0714:29fmnoisehi there! 
for some reason I can't open https://www.datomic.com/training.html#2018-03-0714:50marshall@fmnoise Are you looking for the Day of Datomic videos?#2018-03-0714:50marshallhttps://docs.datomic.com/on-prem/day-of-datomic.html#2018-03-0714:50marshallThey’ve moved to the docs ^#2018-03-0714:50marshallalso, can you tell me where you found that link ?#2018-03-0715:24fmnoisethanks, that works!#2018-03-0715:24fmnoiseI had the link in bookmarks#2018-03-0715:04mgrbyte@marshall / @stuarthalloway could you clarify that the claims in the doc string for datomic.api/tx-range should hold pls? I've found that only using t values work for me (and not transaction ids) - thanks!#2018-03-0715:14marshall@mgrbyte in the peer API?#2018-03-0715:14mgrbyteyes#2018-03-0715:14marshallok i’ll have a look#2018-03-0715:14mgrbytethanks 👍#2018-03-0715:15mgrbyteon latest version of datomic-pro, memory db for context#2018-03-0715:16mgrbyteI'm using tx->t atm, so not an issue, just wanted to know if the doc string was telling me the truth/if I'd found a bug#2018-03-0715:24unlistedWho has experience migrating from Datomic on-prem to cloud (AWS)? Anyone? Anyone?...#2018-03-0715:27marshall@mgrbyte ^ seems to work#2018-03-0715:28mgrbytehmm. not sure what's going on then.
I have code which works with (d/tx-range (d/log conn) (d/tx->t tx) nil) but not with (d/tx-range (d/log conn) tx nil)#2018-03-0715:31mgrbytealso, in your example it could be that equality assertion is comparing true for two empty seqs?#2018-03-0715:31marshallit will work in that case, but i also checked that they contained something#2018-03-0715:33marshalluser=> (let [t-val 1
             tx-id (d/t->tx t-val)
             using-tx (seq (d/tx-range (d/log conn) tx-id nil))
             using-t (seq (d/tx-range (d/log conn) t-val nil))
             _ (println (count using-tx))]
         (= using-tx using-t))
5
true
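The two argument forms work because a t value and its transaction entity id are interconvertible; a sketch of the round trip with the peer API (datomic.api assumed on the classpath, literal value illustrative):

```clojure
(require '[datomic.api :as d])

;; t->tx embeds a basis-t in the :db.part/tx partition, yielding the
;; transaction's entity id; tx->t reverses it
(def t-val 1000)
(def tx-id (d/t->tx t-val))
(= t-val (d/tx->t tx-id))   ; the round trip is lossless
```
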
#2018-03-0715:37mgrbytethanks for looking. when I have time, I'll see if I can extract out the code I have that's failing into a gist/repo#2018-03-0716:18donmullenI have a simple sample project up and running on AWS Elastic Beanstalk and would like to access Datomic Cloud. I’m relatively new to AWS — how straight-forward will it be to give my server access? Any pointers into sample configurations for beanstalk and/or docs before I dive in this afternoon? Being new to AWS - how realistic is it for me to ramp up and get connected today? AWS docs are pretty dense! I liked how straight-forward it was to get datomic cloud up and running locally - would be great to have a similar guide for a sample app running in elastic beanstalk (where you can just upload Procfile/uberjar to get going). Will cross post to #aws.#2018-03-0716:39marshall@donmullen if by “my server” you mean your EB app, you should have a look at https://docs.datomic.com/cloud/operation/client-applications.html#vpc-peering#2018-03-0716:39marshallto set up VPC peering between the VPC where your EB app is running and the Datomic Cloud VPC#2018-03-0717:05donmullenYes - my elastic beanstalk app. So VPC peering is the way to go - I’m assuming there isn’t a way to tell EB to use the Datomic Cloud Application VPC. Thanks @marshall.#2018-03-0717:12marshall@donmullen I think you could do that, but the Datomic Cloud VPC may not have the settings you desire for your app (i.e. public IPs / NAT gateway /etc)#2018-03-0717:22marshallfor just trying it out, that may be the quickest path, though#2018-03-0717:29donmullen@marshall - I think I have the first two of three steps completed to do the peering. Stepping through Associating a VPC with a Private Hosted Zone now.#2018-03-0719:02donmullen@marshall - The only thing I think I may have gotten wrong is setting the routing table information. There was already a 0.0.0.0/0 entry with target igw-f8675881 for the EB and Datomic routes - where evidently igw is the internet gateway.
I replaced the `0.0.0.0/0` with the IP of the other VPC peer. I’m thinking now that I should leave the 0's - and add an entry (as the server is no longer coming up). Trying that now. Thoughts?#2018-03-0719:24donmullenOK - so going back to 0.0.0.0/0 seems to have resolved the EB app problem. I added an entry to the routing for each - selecting the vpc-peer-connection as the target. Is that correct? In testing a call to datomic.client.api/client - I’m now getting Exception in thread "main" java.lang.NoClassDefFoundError: org/eclipse/jetty/util/thread/NonBlockingThread. Switching back to local to make sure that works still. Any thoughts on this error from client call, @marshall?#2018-03-0719:25marshallyou are getting that error or you’re not ?#2018-03-0719:25marshall@donmullen ^#2018-03-0719:25marshallthe NoClassDefFoundError usually points to a deps conflict and/or a missing dep#2018-03-0719:36donmullen@marshall - yes - getting that error (not -> now). Indeed thinking it’s a dep conflict of some sort. I seem to recall something about a jetty exclusion? Trying [com.datomic/client-cloud "0.8.50" :exclusions [org.eclipse.jetty/jetty-io]] now.#2018-03-0720:08donmullenOK - progress. deps resolved - now back to getting vpc peer connection correct.
Now I’m getting Exception in thread "main" clojure.lang.ExceptionInfo: Unable to connect to system: {:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message "Connection refused"} {:config {:server-type :cloud, :region "us-east-1", :system "nycdvp", :query-group "nycdvp", :endpoint "", :proxy-port 8182, :endpoint-map {:headers {"host" ""}, :scheme "http", :server-name "", :server-port 8182}}} which implies that I don’t have the vpc peers set up correctly.#2018-03-0720:09marshalli would take out proxy-port from the config map#2018-03-0720:09marshallyou’re not using a proxy anymore#2018-03-0720:10marshallAnd you’re using a security group with the correct permissions?#2018-03-0720:10marshallin particular, one that has ingress allowed in the Datomic entry group#2018-03-0720:27donmullenRemoving the proxy-port - now getting Connect Timeout instead of Connection refused : Exception in thread "main" clojure.lang.ExceptionInfo: Unable to connect to system: {:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message "Connect Timeout"} {:config {:server-type :cloud, :region "us-east-1", :system "nycdvp", :query-group "nycdvp", :endpoint "", :endpoint-map {:headers {"host" ""}, :scheme "http", :server-name "", :server-port 8182}}}#2018-03-0720:32redinger@donmullen Does your datomic entry security group allow ingress on port 8182 from your eb security group?#2018-03-0720:35redingerAnd does your datomic route table have an entry to route back to the eb route table?#2018-03-0720:43donmullen@redinger - see routing above - I did add entries both ways.#2018-03-0720:45redingerYep, that’s the right place#2018-03-0720:45donmullenBut it doesn’t accept sg-66193f12 - which the EB default security group….#2018-03-0720:46donmullen@redinger - at least for Custom TCP rule - what should Type and Protocol be?#2018-03-0720:47redingerThe peering is to vpc-90b1b3e8. 
You should use the security group for that vpc - sg-c212b2b4#2018-03-0720:48redingerType is Custom TCP Rule, Protocol is TCP#2018-03-0720:54donmullen@redinger - ok - so that sg-c212b2b4 also wasn’t in the drop-down list - but I put it in anyway and trying that. Any idea why that value is not in the list of selectable items?#2018-03-0720:58redinger@donmullen Hmm, not sure. Maybe their auto suggest doesn’t work across VPCs.#2018-03-0721:00donmullenCould be — progress though!#2018-03-0721:00donmullenException in thread "main" clojure.lang.ExceptionInfo: Forbidden to read keyfile at . Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile. {:cognitect.anomalies/category :cognitect.anomalies/forbidden, :cognitect.anomalies/message "Forbidden to read keyfile at . Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile."}#2018-03-0721:01redingerSo now the IAM policy for that EB needs to grant access to S3 for those keys#2018-03-0721:06donmullenokay dokie - feel like I’m getting close… no? most of this aws stuff is new to me - realize that technically this isn’t datomic support - but guess it’s good to go over issues that people new to aws have that need resolution in order to get connected to datomic cloud. help is much appreciated.#2018-03-0721:08redingerYeah, I think once the IAM permissions are sorted, this should be working. This feedback is super useful, since we need to find the balance in our documentation about how to consume this stuff.#2018-03-0721:18donmullen@marshall @redinger Success! Thanks much! INFO [fulcro.easy-server:181] : Web server () started successfully. Config of http-kit options: {:port 5000}
18-03-07 21:16:01 ip-172-31-44-80 INFO [aws.server-main:23] - Testing Datomic
18-03-07 21:16:10 ip-172-31-44-80 INFO [aws.server-main:26] - created client
18-03-07 21:16:11 ip-172-31-44-80 INFO [aws.server-main:28] - created database
18-03-07 21:16:21 ip-172-31-44-80 INFO [aws.server-main:30] - connected to database
18-03-07 21:16:22 ip-172-31-44-80 INFO [aws.server-main:32] - deleted database#2018-03-0721:22redingergreat!#2018-03-0721:23marshallCool!#2018-03-0723:21olivergeorgeHello. I'm interested in seeing how people prepare datomic queries. Specifically, adding more constraints depending on, say, api query params.
Google and I ended up with what feels like an elegant hack at best: https://gist.github.com/olivergeorge/e086543f519d5e0559c1179545e20c68#2018-03-0723:21olivergeorgePerhaps there are other approaches I'm not seeing.#2018-03-0805:19Desmondwhen i restore after deleting my database it doesn't actually copy the segments and I end up with data that was in the deleted database but not in the backed up one. why is this? how can I delete my database for reals and then restore fresh data?#2018-03-0812:59marshallany data that gets skipped in the restore should be the “same” as the corresponding data in the backup - all the segments are immutable#2018-03-0813:00marshallhowever, if you need to blow away all the segments for other reasons, you can run gc-deleted-dbs (https://docs.datomic.com/on-prem/capacity.html#garbage-collection-deleted) after you delete the database. Depending on your storage you may then also have to do something like a VACUUM FULL operation#2018-03-0905:38Desmondvery interesting. i'll check this out. thanks for the tip!#2018-03-0813:00marshall@captaingrover ^#2018-03-0906:18olivergeorgeHello. I'm trying to write a query which takes a list of filters and should only return when all filters return true. This is what I have. It's not right but perhaps on the right track.#2018-03-0906:18olivergeorgehttps://gist.github.com/olivergeorge/e086543f519d5e0559c1179545e20c68#file-v4#2018-03-0906:18olivergeorgeQuestion is how to destructure the filter map and "and" the tests#2018-03-0909:35olivergeorgeSlowly getting there. I have a recursive step to expand out the filters now. https://gist.github.com/olivergeorge/e086543f519d5e0559c1179545e20c68#file-v5-clj#2018-03-0909:41olivergeorgeLast odd thing is that my filter? function is called twice as often as I would expect (so same inputs presented twice). 
I'd love to know why.#2018-03-0917:58uwoModeling question: has there been anything written around the idea of potentially collapsing the (fully qualified) attributes on cardinality-one component refs into their parent refs?#2018-03-0918:00val_waeselynck@U09QBCNBY to what end?#2018-03-0918:13uwoA few things: updates to component trees become less complex; also it’s a little involved to explain, but I think it becomes a little less complicated to create strict specs as regards required keys. Anecdotally, I’ve noticed cases internally where we’re constantly flattening nested component refs that we initially modeled that way because it matched a business concept.
I wouldn’t say it’s a reason, but you also drop the collecting attribute.
In any case where you might refer to the component aggregate, you can select keys.#2018-03-0918:15uwoby updates, I mean upserting novelty into a nested document. Though it’s not that bad with a tree; you just have to pull the matching :db/ids#2018-03-0918:44uwoso, does that sound unfounded then?#2018-03-0919:01timgilbertI've wanted to do something similar to this before, as datomic results can get very deeply nested if you're traversing long paths in the graph#2018-03-0919:03timgilbertFWIW, there's no technical reason you couldn't put :user/name and :address/street in the same entity, though, and just sort of manually smoosh two different entities together#2018-03-0918:09donmullenI have a query against solo that returns about 3000 items - and takes about 22 seconds. I was hoping to reduce that by using :limit - but it seems to take the same amount of time. Thoughts on speeding up the query? Likely will just cache those on the server for now so the web based client can get them quickly. Wondering in general how to approach this with datomic cloud.#2018-03-0918:21donmullenHaving to adjust from having frequently used queries automatically cached into the peer server!#2018-03-0918:41marshall@donmullen that should speed up on a warm cache#2018-03-0918:57axsHi people, we have a lot of entities that we want to get rid of (~1m per day from early january). In the schema of those entities we have noHistory true for all of the attributes. Yesterday i was playing with excision on my laptop, and got datomic full indexing for hours and hours with only 10k of excised entities. The procedure was: excise, excise-sync, gc-storage... So today I was thinking that if we had nohistory, and we retract it, that entity... will show up in a backup? What is your strategy to deal with old data?#2018-03-0920:14denikI’m suddenly getting the permission error (`Forbidden to read keyfile at s3://....`) and can’t reproduce on another machine with the same credentials (it works!).
What else could cause this?#2018-03-0922:27marshallAWS Creds. That error indicates that you are running in a role / with credentials that dont have the correct permissions @denik
see: https://docs.datomic.com/cloud/operation/access-control.html#2018-03-0922:27marshallit’s possible you have your local AWS profile configured in one place but not the other#2018-03-0922:28marshallthere is a hierarchy/order of precedence for the various credential sources (env creds, profile, etc)#2018-03-1010:42hmaurerHello. Quick question: I have read a number of soft restrictions on the amount of data that can be stored in a Datomic database. A number that often pops up it “10 billion datoms”. Would Datomic be suitable for large datasets (in the order of multiple terabytes), and can I expect efficient common queries (sub 50ms) over that sort of dataset?#2018-03-1121:36hmaurer@marshall would you have an idea?#2018-03-1121:37marshallIt would depend a lot on the type of data. Terabytes might be pushing it#2018-03-1213:20hmaurer@marshall I see. The 10 billion datoms soft limit seems quite low though. Assuming entities have ~ 1000 datoms on average, that’s about 10,000,000 entities in the system.#2018-03-1213:32marshallThe 10 billion number you mentioned is definitely not a hard limit. There is no hard limit to # of datoms - the use case, schema, and data access pattern will all affect how Datomic behaves with a large dataset.
In practice we have very rarely seen people exceed, or even come close to, the 10B datom size for systems that are a good fit for other attributes of Datomic. By that I mean most systems that need transactional, fully ACID semantics are not also true ‘write scale’ systems. Obviously that’s a generalization and there are some workloads/systems that may not be a great fit or that would benefit from a sharded (multiple txors) or hybrid (i.e. Datomic + a write-scale store) approach.#2018-03-1014:37denik@marshall we printed the key, secret and region env vars and they were correct. This also worked before and we didn’t change anything around permissions. I’m not sure this has anything to do with it, but I reran the cloud formation templates to shut down the bastion.#2018-03-1015:52Datomic PlatonicIs there a cloud client that has the same semantics as the datomic-free version? Otherwise, we'll have to have different code in dev and production? (e.g., adding {:tx-data} to a transact operation)#2018-03-1113:47souenzzoDatomic-free uses peer API#2018-03-1015:53Datomic PlatonicIf not, are people solving this issue with multimethods or is there a more straightforward approach?#2018-03-1017:54JJIs choosing a bastion server obligatory? I would like to create my own ec2 inside the vpc for access.#2018-03-1120:15thosmosI'm attempting to restore a db from one system into another for the second time. The first restore created the db on the target system, but the second time it reports that all segments were skipped. Any ideas what I'm doing wrong? I'm using exactly the same URI on both servers.#2018-03-1120:33marshall@thosmos you're seeing skipped segments because they are already present in the storage.
Restore only copies the things that aren't there yet#2018-03-1120:37thosmos@marshall yes I understand that, but ALL of the segments are skipped, including the newer ones that have been added to the source db since the previous backup/restore#2018-03-1120:37thosmosin other words, when I do an incremental backup, 2 new segments were added, but when I do a restore, 0 segments were copied#2018-03-1120:40marshallAre you restoring with a t argument?#2018-03-1120:40marshallCan you list backups in the backup location as well?#2018-03-1120:43thosmoswondering if it is a timezone issue. destination system reports a lower t than the backup point:
~# datomic/bin/datomic list-backups file:/root/backup-20180311
(140416)
~# ./datomic/bin/datomic restore-db file:/root/backup-20180311 $URI 140416
Copied 0 segments, skipped 0 segments.
Copied 0 segments, skipped 1578 segments.
:succeeded
{:event :restore, :db test2, :basis-t 140415, :inst #inst "2018-03-11T19:16:06.226-00:00"}
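One way to cross-check a restore like the transcript above is from a peer REPL; a sketch (`uri` assumed to point at the restored storage; note the transcript shows list-backups printing 140416 for a db whose basis-t is 140415, i.e. apparently the next t):

```clojure
(require '[datomic.api :as d])

(def conn (d/connect uri))

;; basis-t of the current value of the restored db
(d/basis-t (d/db conn))

;; latest transaction instant, to confirm recent writes survived the restore
(d/q '[:find (max ?inst) .
       :where [?tx :db/txInstant ?inst]]
     (d/db conn))
```
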
#2018-03-1120:45marshallThat's the same basis. Iirc list backups reports the next-t#2018-03-1120:46marshallHave you queried the restored db for some of the newly added datoms?#2018-03-1120:46thosmosyes, it's still the old state#2018-03-1120:48marshallHave you transacted against the db in the secondary storage between original restore and this attempt?#2018-03-1120:48thosmoswondering that now. I didn't think so, but I'm looking up how to find the latest tx#2018-03-1120:49marshallIf you had, that would cause issues#2018-03-1120:49marshallYou can't fork history#2018-03-1121:16thosmosI just did a repl on the destination system and am seeing the new data at the end of the log with the right :t of 140415, so at least it's clear that the data is there in the DB. If the history had forked, would the datoms from the restore be in the log, but be inaccessible for any reason?#2018-03-1121:16marshallDo you mean it's in the log but you can't find it with query?#2018-03-1121:17thosmosOh I see, user error, I was looking at an old derived view on the other end, NOT the actual db value#2018-03-1121:17marshallThat would do it#2018-03-1121:17thosmosthank you for your help!#2018-03-1121:17marshallSure!#2018-03-1121:18thosmosusually the most obvious explanation 😉#2018-03-1121:18marshallI'm an expert rubber duck#2018-03-1121:20thosmoswish i got that joke#2018-03-1121:23marshallhttps://en.m.wikipedia.org/wiki/Rubber_duck_debugging#2018-03-1212:23Christian JohansenHey people! We’re trying to connect a Java app to datomic:dev://localhost:4334 using the Datomic pro library, and running into some issues#2018-03-1212:25Christian Johansen2018-03-12 13:15:14.122 ERROR default o.a.activemq.artemis.core.client - AMQ214013: Failed to decode packet
java.lang.IllegalArgumentException: AMQ119032: Invalid type: -12
#2018-03-1212:26Christian Johansenwe successfully connected the same app to a datomic:free catalog using the free library#2018-03-1212:27Christian Johansenour ultimate goal is to connect to datomic:ddb, but using dev for local development#2018-03-1212:27Christian JohansenI can connect just fine from Clojure#2018-03-1212:38souenzzoBoth are using datomic pro? (Pro peer lib and pro transactor)#2018-03-1212:58Christian Johansenyep#2018-03-1212:58Christian Johansenseems there was a problem with conflicting versions of artemis in our app#2018-03-1212:59Christian Johansendatomic ships with a quite old one#2018-03-1214:56robert-stuttaford@marshall Datomic Client (not Cloud) when used with on-prem supports excision, right?#2018-03-1215:07JJis on-prem getting "nodes" ?#2018-03-1215:07alexmiller@devicesfor not to my knowledge#2018-03-1215:11JJok#2018-03-1215:22marshall@robert-stuttaford it should yes#2018-03-1215:32alexkTrying the forum 🙂 https://forum.datomic.com/t/field-v-has-desired-values-and-only-them-in-any-order/365#2018-03-1215:35alexkAlso, bug report: when editing an existing question, the box to the right contains translation missing: en.education.new-reply#2018-03-1216:06adamfreythe "syncSchema API" link on this blog post is now a 404 to javadoc: http://blog.datomic.com/2014/01/schema-alteration.html#2018-03-1216:11marshallFixed @adamfrey thanks#2018-03-1313:04chrisblomare there any libraries for ACL-like authorisation using datomic?#2018-03-1313:09chrisblomanother question: what is a good way to debug datalog query rules? I find it quite hard to diagnose & fix mistakes in them.#2018-03-1313:10jaret@chrisblom are you referring to datalog query performance? https://github.com/Datomic/day-of-datomic/blob/master/tutorial/decomposing_a_query.clj#2018-03-1313:11chrisblomno, debugging problems with datalog rules#2018-03-1313:22jaretWhat kind of problems are you running into? 
I often find that I have to separately break up the group of clauses to confirm I have exactly what I am after. I am just curious if you are having a different problem debugging rules.#2018-03-1313:28chrisblomi’m developing some rules for an authorisation scheme, i have a rule
[[check-permission ?p ?action ?resource]
 [?p :permission/action ?action]
 [?p :permission/resource ?resource]]
#2018-03-1313:29chrisblomwhich can be used to check if a permission is granted to perform an action on some resource#2018-03-1313:30chrisblomwhere actions are a fixed set of idents, like :action/read, :action/update, :action/delete#2018-03-1313:31chrisblomand resources can be any entity in the db#2018-03-1313:31chrisblomthis works fine, but i want to add a special wildcard action :action/ANY#2018-03-1313:32chrisblomsuch that when a permission grants :action/ANY on a resource, check-permission will match any action#2018-03-1313:33chrisblomi’ve tried adding [[check-permission ?p _ ?resource]
 [?p :permission/resource ?resource]
 [?p :permission/action :action/ANY]]
but it does not work#2018-03-1313:34chrisblomi’ve also tried a bunch of other rules, but as the rules engine is a black box, i have no idea when a rule matches or not#2018-03-1313:40marshalli would try something like
[[check-permission ?p ?action ?resource]
 [?p :permission/resource ?resource]
 [(= ?action :action/ANY)]]
#2018-03-1313:40marshall@chrisblom ^#2018-03-1313:43chrisblomthanks, but that's not what i’m looking for. I’m looking for a rule such that [check-permission ?p ?action ?resource] will match when ?action is :action/read | :action/write | :action/…#2018-03-1313:44chrisblomprovided that {:db/id ?p
:permission/resource ?resource
:permission/action :action/ANY}#2018-03-1313:45marshallah.#2018-03-1313:47chrisblomso when a permission has :permission/action :action/ANY it should not unify the ?action variable of check-permission, but i’m not able to achieve this#2018-03-1313:48marshalli think you'll need to use an OR in the rule#2018-03-1313:50marshall[[check-permission ?p ?action ?resource]
 (or [?p :permission/action :action/ANY]
     [?p :permission/action ?action])
 [?p :permission/resource ?resource]]
#2018-03-1313:52marshallif entity p has either the specified action or the ANY action set, then match#2018-03-1313:54chrisblomok, seems logical, but datomic does not allow it
Assert failed: All clauses in 'or' must use same set of vars, had
[#{?p} #{?action ?p}] (apply = uvs)#2018-03-1313:56marshallor-join#2018-03-1313:57marshallerm. one sec#2018-03-1314:03marshallactually, i think i was closer the first time#2018-03-1314:04marshall[[check-permission ?p ?action ?resource]
 [?p :permission/action ?action]
 [?p :permission/resource ?resource]]
[[check-permission ?p ?action ?resource]
 [?p :permission/action :action/ANY]
 [?p :permission/resource ?resource]]
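To round out the thread, a rule set like this would be passed to a query through the % input; a sketch only (`db` and `resource-eid` are assumed placeholders):

```clojure
(def rules
  '[[[check-permission ?p ?action ?resource]
     [?p :permission/action ?action]
     [?p :permission/resource ?resource]]
    [[check-permission ?p ?action ?resource]
     [?p :permission/action :action/ANY]
     [?p :permission/resource ?resource]]])

;; a permission holding :action/ANY now satisfies a check for any
;; concrete action, e.g. :action/read
(d/q '[:find ?p
       :in $ % ?action ?resource
       :where (check-permission ?p ?action ?resource)]
     db rules :action/read resource-eid)
```
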
#2018-03-1314:09marshallmultiple rule heads are treated as logical OR#2018-03-1314:09chrisblomah, yeah that works, thanks#2018-03-1314:09chrisblomi had an underscore instead of ?action#2018-03-1314:09marshallyou were pretty much right the first time, just minus the _#2018-03-1314:09marshalli’d have to think about why that didnt work#2018-03-1314:10chrisblomyeah, i’d expect that _ would unify with anything#2018-03-1314:10marshalldid you get an error or did it just not work?#2018-03-1314:10chrisblomit just did not work, no error#2018-03-1314:11marshallfollowing the grammar, the rule head is:#2018-03-1314:11chrisblomfyi, an or-join is also possible:
'[[check-permission ?p ?action ?resource]
(or-join [?p ?action]
[?p :permission/action :action/ANY]
[?p :permission/action ?action])
[?p :permission/resource ?resource]]#2018-03-1314:11marshall[rule-name rule-vars]#2018-03-1314:12marshallyeah, an or-join would do the same thing#2018-03-1314:13marshallanyway, the grammar does say that a rule head is [rule-name rule-vars] , rule vars is [variable+ | ([variable+] variable*)] and variable is symbol starting with ?#2018-03-1314:13marshallso an underscore isn’t actually permitted in a rule-var list#2018-03-1314:15chrisblomah ok, yeah i see it now in the grammar#2018-03-1314:20chrisblomi expected that using _ in a rule should work, it works in queries#2018-03-1314:21chrisblomsome other datalog query engines, and prolog allow it#2018-03-1314:24chrisblomcan i file a bug for this? It kept me busy for a few hours and i’d like to avoid others from experiencing the same.#2018-03-1314:29chrisblom@marshall thanks a lot for your help!#2018-03-1314:34marshallnp. yeah, i’ll pass it along#2018-03-1320:06marshallLooks like a deps conflict @donmullen#2018-03-1320:07donmullenYep - just seeing that — some conflict with com.socrata/soda-api-java — sorry for the noise.#2018-03-1320:17marshallNp#2018-03-1407:29Hendrik Poernamais there a documentation on datomic reader macros? #db/fn, #datom, etc#2018-03-1407:37rauh@poernahi It's documented in d/function#2018-03-1408:04Hendrik Poernamathanks!#2018-03-1417:58gworley3i'm thinking about migrating datastores used to power a datomic install. 
has anyone else done something like this or is there documentation on how to do it with zero downtime?#2018-03-1417:59gworley3i see there is a way to backup and restore, which gets most of the way there, but that would require taking the system down during the process since otherwise i'm not sure how i'd get the latest changes that happen between when the backup starts and the restore completes#2018-03-1418:01gworley3and ideally i'd like to be able to run both backends side-by-side for a while so i could verify performance of the new datastore in production without being fully committed to it#2018-03-1419:39stijnis there an option to launch datomic cloud from terraform?#2018-03-1419:42marshall@amarjeet is your system running with the correct AWS credentials (i.e. in a role or with ambient creds)? Is it running on an EC2 instance? If so, is it in the Datomic VPC? What security group?#2018-03-1419:44amarjeetyes, those things seem fine. My datomic-socks-proxy console prints this debug1: channel 2: free: direct-tcpip: listening port 8182 for entry.<system-name>.<region>. port 8182, connect from ::1 port xxxxx to ::1 port 8182, nchannels 3#2018-03-1419:45amarjeetand this command curl -x .$DATOMIC_SYSTEM.$ also prints {:s3-auth-path <bucket-name>}#2018-03-1419:45marshallyou’re running with a socks proxy or you’re running on AWS?#2018-03-1419:45marshallyour client application that is#2018-03-1419:45amarjeetSo, while launching datomic cloud, I used socks proxy process#2018-03-1419:46amarjeetNow, I have a pedestal app, that I want to connect with that cloud db#2018-03-1419:46marshallright. and where is your pedestal app running?#2018-03-1419:46amarjeetit's my local system#2018-03-1419:46marshallok.
your cfg map doesn’t specify the proxy-port#2018-03-1419:47marshallhttps://docs.datomic.com/cloud/getting-started/connecting.html#use-datomic#2018-03-1419:47marshallwhen you’re running locally through the socks proxy you need to include the :proxy-port in your cfg map#2018-03-1419:47amarjeetokay, so when I specify the port as 8182, I get a connection refused error#2018-03-1419:47amarjeetjust a sec#2018-03-1419:48amarjeetm checking again#2018-03-1419:48marshallyour pedestal app (however you’re launching it) will also need to be running with ambient AWS credentials#2018-03-1419:51amarjeetUnable to connect to system: #:cognitect.anomalies{:category :cognitect.anomalies/unavailable, :message "Total timeout 60000 ms elapsed"} this is the error I am getting with proxy-port#2018-03-1419:51amarjeetaws creds mean, I should put the credentials in project.clj as {:user {:aws ...}}#2018-03-1419:53alexmillermuch better to put them in the ambient environment via standard AWS env vars#2018-03-1419:53marshalli don’t know about lein specifically; i usually use the ambient env creds#2018-03-1419:53marshallyeah, what @alexmiller said#2018-03-1419:53alexmillerdon’t check your aws creds into your git :)#2018-03-1419:54marshallhttps://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-quick-configuration#2018-03-1419:54amarjeetokay 🙂 let me try this. I am pretty much a newbie to the deployment side of things#2018-03-1419:57amarjeetaah, it's connected now#2018-03-1419:57amarjeetthanks much both of you 🙂#2018-03-1419:58marshallnp#2018-03-1422:24Datomic Platonicdoes the datomic-pro peer server allow backups? we run datomic.peer-server on localhost: Serving datomic:<mem://hello> and then try a backup but keep getting Invalid database URI errors#2018-03-1422:25marshallyou can’t run a backup of a mem database#2018-03-1422:25marshallbackup and restore require storage#2018-03-1422:26Datomic Platonicgot it -- thanks...
we ultimately plan to run on AWS but want to understand backup/restore before we push there#2018-03-1422:26marshallsee https://docs.datomic.com/on-prem/backup.html#sec-8#2018-03-1422:26marshallyou can run a local transactor with a dev database and use both peer-server against it as well as backup and restore#2018-03-1422:27marshallsee https://docs.datomic.com/on-prem/dev-setup.html#2018-03-1422:27Datomic Platonicthanks @marshall, we'll keep reading...#2018-03-1422:28Christian JohansenWhen using Dynamo as the storage backend, is it safe to rely on Dynamo snapshots for backup, or should we still be using Datomic’s backup tool?#2018-03-1423:13marshallYou should use datomic backup #2018-03-1423:14marshallhttps://docs.datomic.com/on-prem/ha.html#use-datomic-backup#2018-03-1423:15marshallDdb snapshots don't provide the required semantics for a consistent copy of a datomic db#2018-03-1505:21Christian JohansenThanks #2018-03-1511:49donmullenIs it possible to pass a pull pattern that contains some :default settings to query? If so - what’s the correct format? I get an error: (d/q '[ :find (pull ?j query)
:in $ ?bin query
:where
[?j :job/bin ?bin]]
db
"1047470"
'[ (:job/fully-paid :default false) ])
datomic.impl.Exceptions$IllegalArgumentExceptionInfo: :db.error/invalid-attr-spec Cannot interpret as an attribute spec: (:job/fully-paid :default false) of class: class clojure.lang.PersistentList
#2018-03-1512:56marshall@donmullen you might want to look at https://docs.datomic.com/cloud/query/query-data-reference.html#get-else and https://docs.datomic.com/cloud/query/query-pull.html#default-option#2018-03-1513:32donmullen@marshall I’ve used get-else successfully in queries - but not :default in a pull pattern. Not sure what I’m missing. My example above seems to follow the pattern in the docs: [:artist/name (:artist/endYear :default "N/A")]
and in the day-of-datomic example: ;; default option
(d/pull db '[:artist/name (:artist/endYear :default 0)] mccartney)
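[Editor's note] For comparison, the get-else approach marshall links above supplies the same default from inside the :where clauses, without a pull pattern. A sketch reusing the `:job/*` attributes and bin value from the question (assumes `d` is `datomic.api` and `db` is a live db value):

```clojure
;; get-else yields the attribute's value, or the supplied default
;; (here false) when the entity has no :job/fully-paid datom.
(d/q '[:find ?j ?paid
       :in $ ?bin
       :where
       [?j :job/bin ?bin]
       [(get-else $ ?j :job/fully-paid false) ?paid]]
     db
     "1047470")
```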
#2018-03-1513:35marshallahh; don’t think you can pass the pull expr as a parameter#2018-03-1513:35marshallif you’re using it in the find expression#2018-03-1513:38marshalloh, yes you can#2018-03-1513:38marshallhrm. one second 🙂#2018-03-1513:42donmullen@marshall I also got that error using d/pull as well which I was thinking would surely work - hmmm…#2018-03-1513:42marshallwhat version of datomic/#2018-03-1513:42marshall?#2018-03-1513:42marshallor is this in cloud?#2018-03-1513:46donmullendatomic-pro-0.9.5651#2018-03-1513:47marshallah#2018-03-1513:47marshallthat feature was released in 5656#2018-03-1513:47marshallhttp://blog.datomic.com/2017/12/datomic-pull-as.html#2018-03-1513:48marshallthere is an older syntax for limit exprs, but the one you’re using there ^ is the newer one based on attr-with-options#2018-03-1513:49marshallhttps://docs.datomic.com/on-prem/pull.html#sec-2-2#2018-03-1513:50donmullenOK - thanks @marshall#2018-03-1513:53marshall(d/q '[:find [(pull ?e pattern) ...]
:in $ ?artist pattern
:where [?e :release/artists ?artist]]
db
led-zeppelin
'[(:release/name :as "Release")])
#2018-03-1513:53marshall^ that gives:
[{"Release" "Houses of the Holy"} {"Release" "Immigrant Song / Hey Hey What Can I Do"} {"Release" "Immigrant Song / Hey Hey What Can I Do"} {"Release" "Heartbreaker / Bring It On Home"} {"Release" "Led Zeppelin III"} {"Release" "Whole Lotta Love / Living Loving Maid"} {"Release" "Led Zeppelin III"} {"Release" "Led Zeppelin III"} {"Release" "Led Zeppelin"} {"Release" "Led Zeppelin IV"} {"Release" "Led Zeppelin"} {"Release" "Led Zeppelin IV"} {"Release" "Led Zeppelin"} {"Release" "Led Zeppelin II"} {"Release" "Led Zeppelin II"} {"Release" "Led Zeppelin II"} {"Release" "Led Zeppelin II"}]#2018-03-1513:53marshallbased on this example: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/pull.clj#L113#2018-03-1513:54marshalllimit and default should work the same way#2018-03-1616:42Datomic Platonicdoes re-transacting the same schema on top of an old one cause any damage? on client-pro 0.8.14 it causes the number of datoms to increase even though the contents are the same...#2018-03-1616:49Datomic Platonicour second question is how to connect the peer api to a sql-backed server: we're getting a :db.error/unsupported-protocol :sql error#2018-03-1616:49Datomic Platonic(we have been able to successfully connect to the same sql-backed peer server using the client api, though)#2018-03-1617:01JJYour datoms increase because new transactions are being recorded, transactions being themselves entities#2018-03-1617:03JJby the map response, I don't think new schema datoms are being added or replaced, I just see a new tx datom#2018-03-1617:04JJOf course, an expert here needs to verify this 😉#2018-03-1617:06JJso no damage is my guess#2018-03-1617:13Datomic Platonic@devicesfor interesting, our understanding was that if you transacted [user :likes 🍕] twice, the number of log/history datoms would increase but the number of datoms in the (d/db conn) current db would stay the same...
confusedparrot#2018-03-1617:17marshallYou get an additional transaction entity#2018-03-1617:18marshallwhich would have a txInstant#2018-03-1617:18marshallso at least 1 more datom#2018-03-1617:18marshallbut if your schema is exactly the same, Datomic’s redundancy elimination would remove any other datoms#2018-03-1617:21Datomic Platonic@marshall interesting... thanks#2018-03-1617:22Datomic Platonic@marshall do you know if the datomic-free api can connect to the sql backend peer? we're getting connection errors (we're trying to hook into the same postgres-backed api in dev and production, and the production client api is working fine but the free dev api is not working)#2018-03-1617:23marshallno, the free protocol is only for Datomic Free use with local storage#2018-03-1617:23marshallif you have a Starter (or Pro) license, you should use that for a sql-backed instance#2018-03-1617:23marshall(that is, you should use the pro peer library)#2018-03-1617:25Datomic Platonic@marshall when we include [com.datomic/datomic-free "0.9.5656"] we can only see the datomic.client.api namespace -- is there another maven artifact for the peer library?#2018-03-1617:26Datomic Platonicoops, meant to say [com.datomic/client-pro "0.8.14"]#2018-03-1617:36Datomic Platonicsorry, we found them in the tarball (i guess they're not in the maven site because of the closed source nature of the project...)#2018-03-1619:07marshall@clojurians873 Correct - the peer library can be installed from the downloaded zip with bin/maven-install or you can get it from the private maven repo with the credentials listed in your http://my.datomic.com account#2018-03-1623:36JJisn't the "transact a movie" example here wrong? no :tx-data https://docs.datomic.com/cloud/transactions/transaction-processing.html#submitting#2018-03-1704:13fdserrHi there. figwheel build blows up when I add a dep to Datomic:
Figwheel: Cutting some fruit, just a sec ...
Exception in thread "main" java.lang.ExceptionInInitializerError
at clojure.main.<clinit>(main.java:20)
Caused by: java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkState(ZLjava/lang/String;Ljava/lang/Object;)V, compiling:(closure.clj:100:1)
... (much more crap, will provide it for free if needed)
Tried to downgrade, to no avail.
Thanks in advance if you can help.#2018-03-1704:14fdserrHow to reproduce:
$ lein new figwheel dat-no-like-fig
$ cd dat-no-like-fig
$ lein figwheel
==>>sweetness
Edit project.clj
...
:dependencies [[org.clojure/clojure "1.9.0"]
[org.clojure/clojurescript "1.9.946"]
[org.clojure/core.async "0.4.474"]
[com.datomic/datomic-free "0.9.5661"]] ; <- added (jar is installed)
...
quit and restart figwheel
bam!
(see above)
#2018-03-1715:28souenzzo@fdserr run a lein deps :tree and post here#2018-03-1719:04robert-stuttafordsome progress on my bridge project here: https://www.stuttaford.me/2018/03/17/bridge-dev-diary--modelling-access/ of relevance is my experience using Datomic Peer and Client together. may be of interest if you’re using one and considering switching to the Client API for Cloud later#2018-03-1902:31bmabeyThanks for sharing! Your project has some great datomic starter code and is a good modern example. I'll be copying your abstraction over the peer and client libs for an upcoming project. 🙂#2018-03-1906:01robert-stuttafordmost welcome!#2018-03-1719:30amarjeetI had asked a related question earlier. I am facing an issue again. So, I am using pedestal + datomic cloud. My datomic cloud is up and running and I am able to connect my pedestal app when I run the app at my local machine (with socks proxy). Now, I am trying to deploy the pedestal app on aws/heroku and I am getting Connection refused error. What config am I missing?#2018-03-1803:00fdserr@souenzzo Snippet above, thanks!#2018-03-1803:09fdserrAdded the exclusions as proposed by lein deps:
:dependencies [[org.clojure/clojure "1.9.0"]
[org.clojure/clojurescript "1.9.946"
:exclusions [[com.google.guava/guava]]]
[...]
:profiles {:dev {:dependencies [[binaryage/devtools "0.9.9"]
[figwheel-sidecar "0.5.15"
:exclusions [[org.clojure/tools.nrepl]
[com.google.guava/guava]]]
[...]
#2018-03-1803:10fdserrStill blowing up:
$ rlwrap lein figwheel
Figwheel: Cleaning because dependencies changed
Figwheel: Cutting some fruit, just a sec ...
Exception in thread "main" java.lang.ExceptionInInitializerError
at clojure.main.<clinit>(main.java:20)
Caused by: java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkState(ZLjava/lang/String;Ljava/lang/Object;)V, compiling:(closure.clj:100:1)
at clojure.lang.Compiler$DefExpr.eval(Compiler.java:470)
at clojure.lang.Compiler.eval(Compiler.java:7067)
[...]
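[Editor's note] One workaround worth trying for the trace above (untested here, and an assumption based on the Guava version conflict): exclude Guava from the Datomic dependency rather than from ClojureScript and figwheel-sidecar, so the Google Closure compiler keeps the Guava version it was built against:

```clojure
;; project.clj sketch: pin the exclusion on datomic-free instead of on
;; the ClojureScript side of the dependency tree.
:dependencies [[org.clojure/clojure "1.9.0"]
               [org.clojure/clojurescript "1.9.946"]
               [org.clojure/core.async "0.4.474"]
               [com.datomic/datomic-free "0.9.5661"
                :exclusions [com.google.guava/guava]]]
```

`lein deps :tree` will confirm which Guava version each path pulls in.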
#2018-03-1809:08amarjeetAny help on this?#2018-03-1913:07dominicmDoes anyone have some handy code for producing an infinite tx-range? Where you're waiting for new transactions when you run out of what's already in the db.#2018-03-1913:08dominicmThe tx-report-queue has properties I don't particularly like, as it fills up memory.#2018-03-1913:43Wes HallNot sure who to message with this, but I have a suggestion.
I'm using datomic cloud and developing against it, which basically means a long running datomic-socks-proxy process. This was quite painful due to frequent timeouts and disconnects, causing me to have to keep jumping across and restarting it.
I installed autossh instead and hacked the script to use this, and it is now much more stable (and survives sleeps of my laptop). I wonder whether it might be worth having the standard script check for the installation of autossh and if found, use that instead (and maybe print a message to the user if not found, before continuing with the regular ssh client).
For anybody interested in my little hack, I just commented out the ssh command at the bottom of the script, and added the autossh one. Like this...
#ssh -v -i $PK -CND ${SOCKS_PORT:=8182}
autossh -M 0 -o "ServerAliveInterval 5" -o "ServerAliveCountMax 3" -v -i $PK -CND ${SOCKS_PORT:=8182} #2018-03-1913:51robert-stuttaford@dominicm the onyx datomic plugin has a tx-range poll with backoff#2018-03-1914:01dominicm@robert-stuttaford That's not far from where I've ended up. Except with less volatiles 😛. A shame there isn't a clever solution to have a sequence/channel of some kind only.#2018-03-1916:19JJ@wesley.hall check out https://mosh.org/#2018-03-1917:48lboliveira@(d/transact conn [[:custom-fn1][:custom-fn2]])
Is it possible for custom-fn2 to see the changes caused by custom-fn1?#2018-03-1917:50marshall@lboliveira No - the atomicity of transactions means that there is no ‘before’ or ‘after’ within a transaction - everything either happens or doesn’t and it all occurs “at the same time”#2018-03-1917:51marshallyou can enforce arbitrary constraints within a single transaction function, but there is no way to “inspect” the results of one txn-fn from another#2018-03-1917:52lboliveira@marshall Thanks for the clarification.#2018-03-1917:53marshalloh. one thing to note - you can call a txn function from within a txn-fn#2018-03-1917:53marshallwhich might provide the semantics you need#2018-03-1917:53marshallbut two top-level calls (like you’ve shown) can’t interact#2018-03-1917:55lboliveiracould you please show an example of calling a txn function from within a txn-fn ?#2018-03-1917:55marshallthe output of any transaction function must be only valid tx-data#2018-03-1917:56marshallessentially lists that look like: [[:db/add 124215 :db/doc "test"] [:db/retract 11111 :person/name "Marshall"]]#2018-03-1917:56marshallyou can emit something like [[:my-fun-2 arg1 arg2]]#2018-03-1917:57marshallwhich is valid tx-data, and when it’s processed it will invoke the :my-fun-2 txn function#2018-03-1917:57lboliveiraok#2018-03-1917:57lboliveirai got it#2018-03-1917:57marshallkeep in mind that all transaction functions are serialized in the single writer thread of the transactor, so i tend to avoid them if possible#2018-03-1917:59lboliveirathank you. I am experimenting with transaction functions now and they are great.#2018-03-1918:00lboliveiraI promise I will take care.
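[Editor's note] A minimal sketch of marshall's point about one txn fn emitting a call to another. All names here are hypothetical, and `:audit-change` is assumed to be a transaction function installed separately:

```clojure
;; Install with d/transact; the quoted :code body evaluates to ordinary
;; tx-data, and its [:audit-change ...] element invokes another
;; transaction function when the transactor processes it.
{:db/ident :set-doc
 :db/fn (d/function
          '{:lang   "clojure"
            :params [db e doc]
            :code   [[:db/add e :db/doc doc]
                     [:audit-change e doc]]})}
```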
😃#2018-03-1918:25favila@lboliveira Think of tx-fns as something like macro-expansion#2018-03-1918:26favila@lboliveira If you really need to see the result of a tx and do something conditionally, you can make a tx fn whose argument is a full transaction, call d/with inside that tx fn to see what the db result would be, then emit the final tx#2018-03-1918:27lboliveirawow#2018-03-1918:27lboliveiramind blowing#2018-03-1918:27favilaYou can also do this on the peer, but include a tx fn which asserts some invariant#2018-03-1918:28favilaso the tx fails if something else happened in the meantime which would have invalidated your conditional#2018-03-1918:28favilathe peer would have to know to regenerate the tx and retry (and eventually to give up)#2018-03-1918:29lboliveira@favila Thank you. You gave me soo much to study.#2018-03-1922:04val_waeselynck@lboliveira see also https://stackoverflow.com/questions/48268887/how-to-prevent-transactions-from-violating-application-invariants-in-datomic/48269377#48269377#2018-03-1922:09lboliveirathis is exactly what I need. @val_waeselynck#2018-03-1922:09lboliveirathank you! 😃#2018-03-1922:13JJ@val_waeselynck from what I can gather, your write up https://medium.com/@val.vvalval/what-datomic-brings-to-businesses-e2238a568e1c mostly comes from your experience working with mongo, would you have felt the same if you had come from postgresql?#2018-03-1922:18val_waeselynck@U92K3MU66 yes, the differences between MongoDb and PostgreSQL are mostly insignificant for this discussion. Both of them are "mutable-cells", client-server, "monolithic process" DBMS.#2018-03-1922:21val_waeselynckNote: that does NOT mean that the differences between MongoDB and PostgreSQL are irrelevant in general.
I personally cannot really think of a use case where MongoDB is an optimal choice, except maybe when it's part of some framework like Meteor - whatever problem I consider, I would always replace it with either Datomic, Postgres, ElasticSearch, Firebase, Cassandra, Redis...#2018-03-1922:33JJoh ok, just thought you were enlightened when doing mongo -> datomic, which is no surprise I guess, 😉#2018-03-1922:34JJbesides having history for free (which is something you can achieve in postgres with some effort), do you really feel more productive in datomic? if so, are the productivity gains worth using datomic over an open source, widely used and mature sql database like postgresql?#2018-03-1922:47hmaurer@U92K3MU66 I am curious; how would you achieve “history for free” in postgres? I’ve gone through many approaches and have yet to find a satisfactory one#2018-03-1922:47val_waeselynckShort answer: yes and yes, mostly because of the workflow, testing, and programmability possibilities Datomic enables.#2018-03-1922:52JJ@U5ZAJ15P0 https://github.com/arkhipov/temporal_tables#2018-03-1922:53hmaurer@U92K3MU66 have you used this approach in production?#2018-03-1922:56JJnope 😉 I have used this method and it works fine https://www.postgresql.org/docs/current/static/plpgsql-trigger.html#PLPGSQL-TRIGGER-AUDIT-EXAMPLE#2018-03-1922:57val_waeselynck@U92K3MU66 from what I gather, this approach is much more coarse-grained (rows) than what Datomic offers.#2018-03-2000:17JJyes, it works if you just want history on data#2018-03-1922:48hmaurer@val_waeselynck Hi! I’ll take the opportunity of having you around to ask a quick question: on what order of magnitude does your business operate in terms of number of datoms?#2018-03-1922:49val_waeselynck1e7#2018-03-1922:50val_waeselynckThe trend is: more and more of the schema lives in Datomic, and more and more of the data lives outside 🙂#2018-03-1922:51hmaurer@val_waeselynck oh, how so?
I am curious 😛 Why do you push data out of Datomic?#2018-03-1922:52hmaurerAnd I assume you ensure that the external data storage is immutable?#2018-03-1922:52hmaurerdoesn’t that impede querying a bit?#2018-03-1922:54val_waeselynckEither because of data size (too big to fit in Datomic) or privacy regulations (GDPR). Querying is solved by sending data to materialized views, which is unusually easy in Datomic, because Change Data Capture is trivial to implement.#2018-03-1922:54hmaurer@val_waeselynck how does it help with GDPR? Wouldn’t excision in Datomic be an option? Sorry for the flow of questions!#2018-03-1922:55hmaurerAnd what’s your preferred out-of-datomic storage option?#2018-03-1922:56val_waeselynckDepends on what you need; I like ElasticSearch for fast aggregations and search, Postgres for general-purpose complementary storage, S3 for complementary BLOB storage... no reason to limit yourself really
I found https://docs.datomic.com/on-prem/backup.html for on-prem but didn’t see anything for Cloud.#2018-03-2007:37dominicm> Specify a unique name for aws-cloudwatch-dimension-value
> https://docs.datomic.com/on-prem/aws.html
Should this be unique across re-deploys? Or just globally across deploys in the environment?#2018-03-2007:42dominicm> The ensure-transactor command will create the necessary AWS constructs,
There is limited permission for creating things on AWS. What permissions are needed for ensure-transactor? What constructs will it create, can I create them myself? I'm not using the cloudformation template.
I tried running it, but I got java.lang.IllegalArgumentException: No method in multimethod 'ensure-transactor*' for dispatch value: :sql which may be a bug, or may mean I don't need to run it.#2018-03-2007:59dominicm#2018-03-2008:43Petrus TheronHow are keywords stored in Datomic? I assume as strings, but cast to keyword in any language that supports it based on the schema. Are there are length limits or performance implications of using keywords vs strings?#2018-03-2008:56val_waeselynckI guess that's an implementation detail, but you can have a look at Fressian. Note that in-memory representation may differ from storage representation (and be more relevant to you!)#2018-03-2810:59Petrus TheronIf there are performance implications, then it's no longer a detail. My understanding is that it's like a string that gets cast to the caller's native keyword type, if one exists? I use EDN wherever possible.#2018-03-2817:27favilathey are stored as a tagged pair of strings#2018-03-2817:27favilaperformance implications are likely minimal or irrelevant#2018-03-2013:14marshallYes, CloudWatch custom metrics only show up once they’re “used”. If, for example, you haven’t ever had an indexing job you won’t see the CreateEntireIndexMsec metric.#2018-03-2015:31dominicm@U05120CBV how long until I get a datoms count metric?#2018-03-2015:35dominicmI was quite surprised that one hadn't showed up yet.#2018-03-2015:36marshallThere is a :Datoms metric reported on every indexing job#2018-03-2015:37dominicmI see, so I'll get that as often as indexing happens.#2018-03-2015:37dominicmHow often is that? I haven't seen it since I deployed on the first message above ^#2018-03-2015:49marshallan indexing job starts when the memory index reaches mem-index-threshold#2018-03-2015:49marshallif you want to force one you can call request-index#2018-03-2016:02dominicmI see. This is staging, so it's probably not hit that threshold all day 🙂. 
I'll watch for it in prod.#2018-03-2013:14marshallYes, you can create them manually.
See: https://docs.datomic.com/on-prem/storage.html#manual-setup#2018-03-2013:17marshallRegarding the unique name of the CW dimension, that’s entirely up to you.
CW metrics are aggregated by dimension name#2018-03-2013:17marshallso if you want the metrics to be continuous across re-deploys, then use the same name#2018-03-2013:18marshallif you want them separated in CloudWatch choose different names#2018-03-2015:30dominicmah, the relationship between these things wasn't clear to me. I've already done the permissions I want, so I'm golden.
I don't want S3 log rotation, as I've updated the logback to point at journald, which then also goes into Cloudwatch 😛#2018-03-2015:08jaret@petrus What language are you working with? Datomic on-prem only supports clojure and java.#2018-03-2015:09Petrus TheronClojure#2018-03-2015:10jaretthen Keywords are stored as keywords and the keywords are stored in memory.#2018-03-2015:10jaretWhat are you about to do where you’re concerned? Make a bunch of keywords?#2018-03-2015:29alexmillerkeywords are made of strings. the keywords are interned (so any particular keyword only exists once in a Clojure runtime)#2018-03-2016:42JJis this outside VPC?#2018-03-2016:43amarjeetI am creating Uberjar at my local machine so that I can use the uberjar for Docker container in aws beanstalk#2018-03-2016:44JJentry.<system-name>.<region>.datomic.net:8182/ only works inside the VPC#2018-03-2016:44JJyour DNS can't resolve it locally#2018-03-2016:45JJI would put the cfg inside a function#2018-03-2016:45amarjeetSo, should I create Uberjar without the :endpoint ? Will beanstalk work? My beanstalk environment is in the same vpc as datomic cloud#2018-03-2016:46JJno, you need the :endpoint#2018-03-2016:47JJput your cfg inside a function so it doesn't get called when creating the uberjar#2018-03-2016:47JJonly when you start your app#2018-03-2016:47amarjeetokay, then, should this function be called in the -main function#2018-03-2016:47JJand yes, inside the same VPC that url should resolve#2018-03-2016:48amarjeetokay#2018-03-2016:48JJthat's up to you#2018-03-2016:48JJ🙂#2018-03-2016:48JJwhere you want to call it#2018-03-2016:48amarjeetokay, let me try#2018-03-2017:00amarjeet@devicesfor yes, now Uberjar created.
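[Editor's note] JJ's "put the cfg inside a function" advice might look like the following sketch. The system name, region, db name, and `start-server` are placeholders, not from the original conversation:

```clojure
;; Nothing here runs at uberjar compile time; the VPC-only endpoint is
;; only resolved when -main actually calls make-client at startup.
(defn make-client []
  (d/client {:server-type :cloud
             :region      "us-east-1"     ; placeholder
             :system      "my-system"     ; placeholder
             :endpoint    "http://entry.my-system.us-east-1.datomic.net:8182/"}))

(defn -main [& args]
  (let [conn (d/connect (make-client) {:db-name "my-db"})]
    (start-server conn)))  ; start-server is hypothetical app code
```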
Thanks for this 🙂#2018-03-2017:33JJ@amarjeet were you AOTing?#2018-03-2017:34amarjeet:uberjar {:aot [my-app.server]}#2018-03-2017:34amarjeetand :main ^{:skip-aot true} my-app.server)#2018-03-2017:35amarjeetthe standalone jar is not getting recognized (it's not recognizing any docker file) in beanstalk#2018-03-2017:37amarjeetlocally the docker image is running#2018-03-2119:56mynomotoIs there a reason why instance sizes other than i3.large are not supported for production usage on datomic cloud?#2018-03-2119:58marshall@mynomoto Datomic Cloud production nodes use a large local SSD for caching. The i3.large instances have NVMe SSDs (475GB), which provide a huge and fast local cache#2018-03-2120:02mynomoto@marshall cool, thanks!#2018-03-2120:06mynomotoIs it possible to use reserved instances with datomic cloud?#2018-03-2120:14marshall@mynomoto I believe so, but that might be something to ask your AWS acct manager. I’ll also look into it#2018-03-2120:16marshall@mynomoto it appears that yes you can: https://aws.amazon.com/marketplace/help#topic7#2018-03-2120:16mynomoto@marshall Thanks, will do it.#2018-03-2120:17mynomoto@marshall Thanks!#2018-03-2120:18marshallAgain, I’d check with your AWS acct rep, but I believe they automatically apply your instance reservation to any matching instances (type) that you run in your acct, so it should be pretty seamless#2018-03-2208:52laujensenPer @stuarthalloway’s advice, I've prepended a hash of the html content we store in the db to assist indexing. In order to clear up the non-hashed older versions of each page, I’ve done a (d/transact (conn) [{:db/excise :pages/content :db.excise/before (instant (time/minus (now) (time/days 1)))}]) which I think should clear anything saved before yesterday. The command does start a lot of work in datomic, but after about 2 minutes it hits a Critical failure saying the index retry limit is reached. No amount of restarting mysql or datomic, or the app for that matter, can break this cycle.
The system boots, then craters after 2 minutes.#2018-03-2210:23laujensenAnd while we’re on excision. The tx above also kills the latest value if it's not saved before yesterday - Is there a way to kill all but the last tx?#2018-03-2213:31marshall@laujensen Excision shouldn’t be used to remove large amounts of data. It is a very expensive operation and can result in the situation you found where the indexing job is too large to complete#2018-03-2213:32laujensenSo how do I go about my history @marshall ? We’re talking 5000+ pages that need some help#2018-03-2213:32marshallsee: https://docs.datomic.com/on-prem/excision.html#performance#2018-03-2213:33marshallyou can either:
1) run multiple smaller excision jobs, waiting for each to complete before starting the next
2) create a new database and “decant” the data from the current DB into it, filtering out the things you want to remove#2018-03-2213:34laujensen1) How do I query the status of the indexing job?#2018-03-2213:35marshallyou will see a metrics report in the logs (and cloudwatch metrics) that includes :CreateEntireIndexMsec when the job has completed#2018-03-2213:36laujensenthanks#2018-03-2213:49alexkI’m interested in testing a couple small datomic queries in unit tests. I realize I could spin up an in-memory db but I think there’s a way to use a plain Clojure data structure as the db, is that true? What shape would it need to have so that something like the q and transact functions would work with it (treat it as a real database)?#2018-03-2213:50val_waeselynckTransact won't work with anything other than a Datomic connection.#2018-03-2213:58alexkAlright, that’s too bad, it means I’d have to build the db by hand and wouldn’t be able to test anything that involves the transact function. How about the q function - would it work with a plain data structure?#2018-03-2213:58alexkAnd the entity function too, I guess…#2018-03-2214:00val_waeselynck@U8ZA3QZTJ what advantages do you see in not using an in-memory db for unit testing ?#2018-03-2214:09colindresjIf you’re looking to isolate your transactions against a DB during unit tests, https://github.com/vvvvalvalval/datomock works fairly well#2018-03-2214:24alexkThe rationale would be to minimize the amount of code that the test interacts with. I’m not completely against using in-memory databases in unit tests, but I was hoping I could even avoid that!
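[Editor's note] For what it's worth, `d/q` does accept a plain collection of [entity attribute value] tuples as its data source, which covers the read-only half of alexk's question. A self-contained sketch with made-up data:

```clojure
;; No connection or in-memory db needed: query a literal relation.
(d/q '[:find ?name
       :where [?e :person/name ?name]]
     [[1 :person/name "Ada"]
      [2 :person/name "Grace"]])
;; => #{["Ada"] ["Grace"]}
```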
Thanks, both of you#2018-03-2214:29val_waeselynckFor Datalog querying, you could just use a vector of tuples, but you will have less confidence that the query behaves the same as against a real db.#2018-03-2214:29alexkFair enough, thanks#2018-03-2214:26laujensen@marshall - I've just tried running a backup-db on one of our shards to import it locally using restore-db, but on import I see that several transactions aren't carried over. They are visible on the live system, but not locally. And they’re about 1 day old. What causes this?#2018-03-2214:52marshalldid you restore into a clean (empty) storage? restoring on top of an existing DB that has diverged from the original source isn’t supported#2018-03-2215:31laujensenThat's it, thanks#2018-03-2214:37stijndoes anyone have any experience running aws lambda with clojure connecting to datomic-cloud? any gotchas (like uberjar size, startup time, ...) that I can avoid running into?#2018-03-2214:48stijnwhen i'm following the Getting Started guide, i'm seeing the following error when trying to create a database#2018-03-2214:48stijn(d/create-database client {:db-name "movies"})
ExceptionInfo Forbidden to read keyfile at s3://...#2018-03-2214:49stijnall steps before worked properly#2018-03-2214:49alexmillerthat’s an issue with your aws creds#2018-03-2214:49stijnyes, but how can I specify the AWS credentials to the client?#2018-03-2214:49stijni'm using a aws profile to authenticate#2018-03-2214:49alexmillerthe normal ways - ~/.aws/credentials, AWS_ACCESS…#2018-03-2214:50alexmillerAWS_PROFILE#2018-03-2214:54stijnhmm that's still not working. is the client using the order of the aws java sdk for credentials?#2018-03-2215:00marshallthe client uses the default credentials provide#2018-03-2215:00marshallprovider#2018-03-2215:00marshallwhich has an implicit order, documented by AWS#2018-03-2215:00marshallhttps://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html#2018-03-2215:08stijnok, it seems like I'm running into this issue https://github.com/aws/aws-sdk-java/issues/803#2018-03-2215:09stijndelegation of credentials to a root profile doesn't work the same way in the java sdk and the cli 🙂#2018-03-2215:09marshallah. good find#2018-03-2215:33laujensen@marshall: I’ve tried partitioning the excisions into chunks of 100 entities. The indexing service has been running for 30+ minutes now with half-load on a quadcore system on just the first chunk. Is that to be expected? If so, I guess there’s no way around putting a heavy load on the live system until all 5000 pages have had their history removed?#2018-03-2216:42marshallYes, excision is very expensive. It can require re-writing major portions of the index#2018-03-2216:33folconHi, just to check is datomic rest the only way to connect to datomic from another language?#2018-03-2216:45marshall@folcon At present, yes. We provide Clojure and Java versions of the Peer library and a Clojure Client library.
We intend to publicize the client wire protocol in the future to allow other language clients (see: https://www.datomic.com/cloud-faq.html#_will_you_publish_the_client_protocol_so_i_can_write_my_own_datomic_client)#2018-03-2216:49folcon@marshall Thanks, if you wouldn’t mind me asking, I have two questions.
1) Is there any way to secure datomic rest other than sitting an nginx with basic auth in front of it? I’ve successfully managed to deploy it and it’s working and now need to work out at least a basic level of security.
2) How would I go about writing a subquery then? I’m trying to ensure that all queries to the system respect some level of auth, so I want to ensure that each user only looks at a database which contains their datoms. My understanding is that this is possible by first querying what a user can see, and then passing on the user query to the resulting database. I’m just unsure how to specify that to datomic rest.#2018-03-2216:59JJ@folcon https://docs.datomic.com/on-prem/rest.html#2018-03-2216:59JJin case you missed the “no more development” notice#2018-03-2217:01folconI am aware it’s been deprecated for some time, I’m just wondering if there exists anything for this? I’m basically bumbling my way with these unfortunately :)… And unfortunately the client APIs don’t seem targeted to what I’m doing at the moment. I can’t access them from clojurescript as far as I can see, and running yet another server just to act as a clojure proxy to talk to datomic seems a little extreme?#2018-03-2217:02JJexposing datomic directly to the internet seems more extreme to me, even if behind nginx#2018-03-2217:04JJbut nginx has a lot of features and is highly configurable, maybe you can use something like openresty#2018-03-2217:06marshall@folcon “running another server just to act as a clojure proxy” is exactly what the REST service is. It is a Datomic Peer that consumes HTTP calls and executes them against Datomic#2018-03-2217:07marshallIf you want native access from your language of interest and are already on the Peer model, I’d suggest writing a Peer-based system that consumes whatever it is your application uses (http/tcp/other) and do it that way#2018-03-2217:16folconThat’s a valid point, however for the moment the rest service does meet most of my needs. Is my understanding of my second question correct or should I be trying something different?
I’m having a fair bit of difficulty finding information about what is the best way of querying against a subset of the datoms in the database.#2018-03-2217:18marshallyou can use filters (https://docs.datomic.com/on-prem/filters.html) to restrict what a given query can “see”#2018-03-2217:18folconThank you, I’ll give that a read 🙂#2018-03-2217:42folconI might be doing this completely wrong, but trying to reproduce the filtering technique is giving me odd results:
q*:
[:find ?e ?ent ?doc ?f_db
:in $plain ?filterfn
:where [(datomic.api/filter $plain ?filterfn) ?f_db]
[(datomic.api/entity ?f_db ?e) ?ent] [$plain ?e :db/doc ?doc]]
args:
[{:db/alias "sql/test"} (fn [_ datom] (< 20 (.e datom)))]
As a sanity test, I’m trying to see if I can filter all the :db/doc strings that have an entity id lower than 20.
Which is still giving me results such as:
?e ?ent ?doc ?f_db
8 {:db/id 8} "System-assigned attribute set to true for transactions not fully incorporated into the index"
I’m probably constructing this query completely wrong.#2018-03-2218:08folconOk, so from what I can tell, the query language cares about the names of the variables: you can’t bind a database and reuse it as a data source, because it ends up in a var that starts with a ? rather than a $.
java.lang.Exception: processing rule: (q__355 ?e ?ent ?doc ?f_db), message: processing clause: [?f_db ?e :db/doc ?doc], message: :db.error/invalid-data-source Nil or missing data source. Did you forget to pass a database argument?
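The error above is about source naming: in Datomic's Datalog a data source must be a $-prefixed symbol declared in :in, so a filtered db bound to a ?-var inside :where cannot serve as a clause's source. A minimal sketch of the working shape, assuming the Peer API required as [datomic.api :as d] and an existing conn:

```clojure
;; Sketch: pass the filtered db in through :in as its own $-source,
;; then name the source explicitly in each data pattern.
(let [db (d/db conn)]
  (d/q '[:find ?e ?doc
         :in $plain $filtered
         :where
         [$filtered ?e :db/doc]     ; entity must be visible through the filter
         [$plain ?e :db/doc ?doc]]  ; value read from the unfiltered db
       db
       (d/filter db (fn [_ datom] (< 20 (.e datom))))))
```

Reading the value from the plain db keeps the comparatively expensive filtered scan confined to a single clause.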
#2018-03-2218:18marshallthe entity with EID 8 does pass that filter#2018-03-2218:19marshall@folcon ^#2018-03-2218:19folconsorry?#2018-03-2218:19marshallthe entity ID you found there is ‘8’, which is < 20#2018-03-2218:19folconoh, am I checking a string?#2018-03-2218:19marshallno#2018-03-2218:19marshallentity id is a long#2018-03-2218:20marshallsorry i misread your filter function; one second#2018-03-2218:20folconI’m pretty sure (< 20 8 ) is false?#2018-03-2218:21folconSorry, it’s been a long day 🙂#2018-03-2218:25marshallahh. i think i see#2018-03-2218:25marshallyou need to get a filtered value of the db as an input to the query#2018-03-2218:25marshallone sec#2018-03-2218:28folconsure :)…#2018-03-2218:31marshallso the functionality you’re looking for doesn't require a join on 2 dbs#2018-03-2218:31marshall(d/q
'[:find ?e ?doc
:in $
:where
[?e :db/doc ?doc]]
(d/filter (d/db conn) (fn [_ datom] (< 20 (.e datom)))))
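The predicate handed to d/filter receives the unfiltered db as well as each datom, so it can consult other facts when deciding visibility. A sketch of the per-user filtering discussed here, using a hypothetical :entity/owner ref attribute:

```clojure
;; Sketch: a datom is visible only when its entity's :entity/owner
;; (a hypothetical ref attribute) points at user-eid.
;; Caveat: schema/system datoms have no owner, so a real filter
;; would also need to whitelist those.
(defn owned-by [user-eid]
  (fn [db datom]
    (= user-eid (:db/id (:entity/owner (d/entity db (.e datom)))))))

(def user-db (d/filter (d/db conn) (owned-by 42)))
```

The predicate runs lazily on every datom a query touches, and an entity lookup per datom is not cheap; this trades query speed for isolation.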
#2018-03-2218:31marshallfind all the entities in a filtered db#2018-03-2218:31marshallfor perf reasons, you could join against a non-filtered db#2018-03-2218:31marshall(which is what the example shows)#2018-03-2218:34marshalli’m not sure if/how you’d do the filter inside the query body and then also bind it to a datasource#2018-03-2218:35marshalli have to run, but i’ll be back a bit later#2018-03-2218:35folconSo wait, do I need to make two queries to the rest api? I’m trying to understand how to translate that query, my two args that I can use are the q* and args. Args needs to be at least [{:db/alias "sql/test"}] from what I understand and as far as I can work out I can’t pass a filtered db as I have no idea how to reference it…#2018-03-2218:35folconthanks#2018-03-2218:35folconI’ll be at it for a bit :)…#2018-03-2218:40marshallI suspect you may not be able to pass an arbitrary filtered database to the rest API#2018-03-2219:45folconThat’s frustrating#2018-03-2220:10stijn@marshall regarding the issue above on AWS credentials, we are using IAM role assumption to access different aws accounts for dev, staging, prod. In order to make this work, you need to add a dependency [com.amazonaws/aws-java-sdk-sts "1.11.210"]. Not sure if you want to add this to the datomic client library or mention it in the documentation, but debugging the problem was a bit annoying since d/create-database swallows the original error of the aws sdk#2018-03-2220:10stijnwhich was#2018-03-2220:10stijn(.getCredentials (com.amazonaws.auth.profile.ProfileCredentialsProvider.))
ClassNotFoundException com.amazonaws.services.securitytoken.internal.STSProfileCredentialsService java.net.URLClassLoader.findClass (URLClassLoader.java:381)
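The missing class above lives in the STS module of the AWS Java SDK, which is not pulled in transitively; the fix stijn describes is adding it as an explicit dependency. A sketch of the Leiningen coordinates (the client artifact version shown is illustrative):

```clojure
;; project.clj fragment (sketch; versions illustrative)
:dependencies [[com.datomic/client-cloud "0.8.50"]
               ;; STS support, needed for profile-based role assumption
               [com.amazonaws/aws-java-sdk-sts "1.11.210"]]
```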
#2018-03-2220:11stijnsome more info: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/securitytoken/internal/STSProfileCredentialsService.html#2018-03-2220:11stijnit works for me now, but you can maybe help other customers with that info 🙂#2018-03-2220:12johnjis java 9/10 not supported for the datomic cloud client?#2018-03-2220:20folconHuh, you can introspect the db enough to get the database credentials by querying it.#2018-03-2220:34alexmiller@lockdown- you seem to be assuming it’s not - any reason why?#2018-03-2220:40alexmillerI haven’t tried it, but one problem that arises in several places right now is due to the removal of javax.xml.bind from the default classpath. Using --add-modules java.xml.bind on the jvm will add it back in.#2018-03-2220:42marshall@stijn Thanks for the heads up - I will see if I can find a place in the docs where that would fit well#2018-03-2220:49johnj@alexmiller correct, adding java.xml.bind fixes it, was curious if anything greater than java 8 was discouraged by datomic devs.#2018-03-2220:51alexmillerthat was just a guess, would be curious if it’s a datomic lib dep or something else that you’re running into#2018-03-2220:53johnjthe datomic client eats the stacktrace I think but per your clj -J-verbose:class advice it might be the aws sdk version but not sure, didn't dig more#2018-03-2220:56alexmillerprobably a good thing for @marshall to know if he doesn’t already#2018-03-2221:00marshallyes indeed. thanks!#2018-03-2221:00marshalli will also look at adding that to docs#2018-03-2221:09johnj@marshall indeed is discouraged? using something greater than java 8#2018-03-2221:09folcon@marshall I have got the filtered db in the query though and I can inspect it:
[:find ?e ?ent ?doc ?f_db ?prs :in $plain ?filterfn :where [(datomic.api/filter $plain ?filterfn) ?f_db] [(keys ?f_db) ?prs] [(datomic.api/entity ?f_db ?e) ?ent] [$plain ?e :db/doc ?doc]]
?prs
(:id :memidx :indexing :mid-index :index :history :memlog :basisT :nextT :indexBasisT :indexingNextT :elements :keys :ids :index-root-id :index-rev :asOfT :sinceT :raw :filt)
the :filt is
(fn [_ datom] (< 20 (.v datom)))
so this is clearly the filtered database. I just don’t know how to query against it.#2018-03-2221:14folconI’ve been trying to pass a string query, as the api states that’s possible:
[:find ?e ?ent ?doc ?f_db ?prs :in $plain ?filterfn :where [(datomic.api/filter $plain ?filterfn) ?f_db] [(:filt ?f_db) ?prs] [(datomic.api/q "[:find ?e :where [?e :db/doc _]]" ?f_db) ?ent] [$plain ?e :db/doc ?doc]]
but it’s erroring:
java.lang.Exception: processing rule: (q__1170 ?e ?ent ?doc ?f_db ?prs), message: processing clause: {:argvars (?f_db), :fn #object[datomic.extensions$eval1162$fn__1163 0x1b0f3213 "#2018-03-2221:41marshall@lockdown- no, i meant it was useful for me to know. No reason not to use 9 or 10 if that fix works#2018-03-2221:42marshall@folcon I wonder if you can then use the filtered DB in a nested query#2018-03-2221:44folcon@marshall That’s what I’ve been trying to do here -> https://clojurians.slack.com/archives/C03RZMDSH/p1521753296000093, but it doesn’t seem to be working?#2018-03-2221:45marshall[:find ?e ?ent ?doc ?f_db ?prs ?filtecount
:in $plain ?filterfn
:where [(datomic.api/filter $plain ?filterfn) ?f_db]
[(keys ?f_db) ?prs] [(datomic.api/entity ?f_db ?e) ?ent]
[$plain ?e :db/doc ?doc]
[(datomic.api/q '[:find (count ?ents)
:where [?ents :db/doc]]
?f_db) [[?filtecount]]]]#2018-03-2221:45marshalltry that ^#2018-03-2221:45marshalli’m on a phone call or I’d try#2018-03-2221:56folcon@marshall Funnily enough:
com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: Unable to resolve symbol: ' in this context, compiling:(NO_SOURCE_PATH:0:0)
manually calling quote instead of ' gives:
java.lang.Exception: processing rule: (q__1315 ?e ?ent ?doc ?f_db ?prs ?filtecount), message: processing clause: {:argvars (?f_db), :fn #object[datomic.extensions$eval1307$fn__1308 0x1f4747dd "
The string variant doesn’t do much better
[:find ?e ?ent ?doc ?f_db ?prs ?filtecount
:in $plain ?filterfn
:where [(datomic.api/filter $plain ?filterfn) ?f_db]
[(keys ?f_db) ?prs] [(datomic.api/entity ?f_db ?e) ?ent]
[$plain ?e :db/doc ?doc]
[(datomic.api/q "[:find (count ?ents)
:where [?ents :db/doc]]"
?f_db) [[?filtecount]]]]
java.lang.Exception: processing rule: (q__1277 ?e ?ent ?doc ?f_db ?prs ?filtecount), message: processing clause: {:argvars (?f_db), :fn #object[datomic.extensions$eval1269$fn__1270 0x6f32c7f9 "#2018-03-2221:57marshallcopy paste issue with single quote probably#2018-03-2221:58folconNot sure what the casting issue is#2018-03-2221:58marshallgot it#2018-03-2221:58marshall(d/q
'[:find ?filtecount
:in $ ?filterfn
:where [(datomic.api/filter $ ?filterfn) ?f_db]
[(datomic.api/q '[:find ?ents
:where [?ents :db/doc]]
?f_db) [[?filtecount]]]]
(d/db conn) (fn [_ datom] (< 20 (.e datom)))) #2018-03-2221:59marshallbad var names. sorry i’ll fix#2018-03-2222:00marshallmore interesting with correct name:
(d/q
'[:find ?filtecount
:in $ ?filterfn
:where [(datomic.api/filter $ ?filterfn) ?f_db]
[(datomic.api/q '[:find (count ?ents)
:where [?ents :db/doc]]
?f_db) [[?filtecount]]]]
(d/db conn) (fn [_ datom] (< 20 (.e datom)))) #2018-03-2222:00marshallfind the count of entities with :db/doc in the filtered db#2018-03-2222:04folconSo the reader can’t deal with the single quote at all, so I’ve been replacing the query with the string version or manually calling the quote function, however there’s a relatively consistent issue of message: clojure.lang.PersistentList cannot be cast to clojure.lang.IFn in both cases.#2018-03-2222:04folconI’m really not sure what the problem is here.#2018-03-2222:40marshallThis is specifically with the rest api?#2018-03-2222:41marshallI'll have to try that tomorrow morning#2018-03-2222:47folconyep, all of the queries I’m running are through the rest api.#2018-03-2312:23folcon@marshall Ok, so I might have figured out a work around. I can address and query different datomic databases, so I’m going to try the model of each user having a separate datomic db on the same store. Is there a flaw with this design?#2018-03-2313:19marshall@folcon how many databases are you thinking?#2018-03-2313:20folconwell in our trial period we might end up with a few hundred users#2018-03-2313:21marshallDatomic On-Prem is designed to have a single primary db behind the transactor. a few ‘housekeeping’ dbs in addition would be OK, but having many dozens of active databases isn’t recommended#2018-03-2313:27marshall@folcon one second - I had a chance to look at the REST api and was able to run the query I wrote yesterday#2018-03-2313:28marshall[:find ?filtecount
:in $
:where [(datomic.api/filter $ (fn [_ datom] (< 20 (.e datom)))) ?f_db]
[(datomic.api/q (quote [:find (count ?ents)
:where [?ents :db/doc]]) ?f_db) [[?filtecount]]]] #2018-03-2313:28marshall@folcon ^ the cast issue was from passing a function as an arg#2018-03-2313:28marshallif you put it inline in the query it works fine#2018-03-2313:31folconHmm, that limitation is rather irritating, here I thought I’d found the perfect way to ensure user data remained separate while still being able to query across it :(…#2018-03-2313:31marshallyou should still be able to do that; you can parameterize constants inside the function (i believe)#2018-03-2313:31folconok, I’m currently in the middle of something else, but I’ll be able to give that a go in an hour and a half :)… Definitely going to give that a shot. Thank you!#2018-03-2313:32folconI’m concerned that if I can’t pass the function as a parameter I’m going to have to do query mangling with strings/datastructures#2018-03-2313:32marshallwhat language are you coming from?#2018-03-2313:37marshalli believe i was mistaken - you can’t parameterize constants within the nested filter predicate I don't think#2018-03-2313:55folconMy backend is python based#2018-03-2315:37folconfrontend clojurescript 🙂#2018-03-2316:49folcon@marshall It looks like it worked, the query you mentioned here -> https://clojurians.slack.com/archives/C03RZMDSH/p1521811630000098#2018-03-2316:50folconThanks, I’m going to unpack this and see if I can work out how to get the rest of my queries to use this filtering technique :)…#2018-03-2323:09James VickersWhat storage services do most people seem to use for on-prem? Do a lot of you use Cassandra or is it pretty much all SQL?#2018-03-2401:12johnjI have no idea, but my guess would be dynamodb#2018-03-2506:50kardanI’m playing with Datomic, wanted to write my first test using create & delete-database. 
Can’t I use the client api for that?#2018-03-2521:05a.espolovHello#2018-03-2521:06a.espolovGuys, is simulant the current tool for testing a db?#2018-03-2600:03alexmillerIt’s really a tool for testing systems, of which the database is one component#2018-03-2613:51alexkI’ve got a funny thing happening when I call a transaction function#2018-03-2614:08alexkAnswer: make sure the attribute you’re reading is present on the entity. In my case, :myns/counter hadn’t ever been set on that particular entity, and that resulted in an addition to nil within the db function#2018-03-2615:55Petrus TheronCan I use d/filter for user-data privacy of present day (not historical) data? E.g. given that every user-related fact belongs to an entity that has an :entity/owner attribute set to that user's ID, can I efficiently filter out all datoms not belonging to that user? Or do I need to "own" these facts at the transactional level and filter by tx-meta?#2018-03-2617:01kardanThe /bin/maven-install (datomic-free at least) script could do with a shebang. Is there a place to report these things?#2018-03-2617:18jaret@kardan if you’d like an absolute path or shebang line added to the script, you could log a feature request on our “suggest a feature” portal. You can get there from your datomic account at https://my.datomic.com/account and clicking on the “suggest a feature” link, top right.#2018-03-2617:25kardan@jaret ok cool. It was nothing huge for me, just noticed that I could not run the script. Still trying to figure out how things hang together.#2018-03-2620:31donaldballA fairly simple question about the #db/fn literal: I’m trying to apply some formatting to our schema.edn file which contains one of these, and the resulting edn is unreadable. My rewrite fn is:#2018-03-2620:31donaldball(defn rewrite-schema!
  []
  (binding [*print-namespace-maps* false]
    (let [schema (into []
                       (map (fn [m]
                              (into (sorted-map)
                                    (remove (fn [[k v]]
                                              (contains? #{:db.install/_attribute :db/id} k)))
                                    m)))
                       (edn/read-string {:readers *data-readers*}
                                        (slurp "resources/data/schema.edn")))]
      (clojure.pprint/pprint schema (io/writer "resources/data/schema.edn")))))
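The unreadable output from the rewrite fn above comes from the #db/fn literal being turned into a compiled function object on read. One way (a sketch, not from the thread) to make it roundtrip is a reader that wraps the map in clojure.core/tagged-literal, which stays inert data and prints back as #db/fn {...} under pr:

```clojure
(require '[clojure.edn :as edn])

;; Sketch: read #db/fn as inert tagged data instead of compiling it.
(def schema
  (edn/read-string {:readers {'db/fn (fn [m] (tagged-literal 'db/fn m))}}
                   (slurp "resources/data/schema.edn")))
```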
#2018-03-2620:32donaldballIs there a convenient way to have #db/fn roundtrip without evaluation?#2018-03-2620:52joshwolterNew to the #datomic channel here... i have a 43 GB datomic db on PG storage. I run incremental backups every 15 minutes and after ~10 days my backup directory is at 425 GB. Is there a way to not log garbage-collection (I have a gc job running every 3 hours btw)?#2018-03-2622:02jaret@josh.wolter are you running PG’s VACUUM FULL job? It sounds like you’re backing up garbage with each backup. (Seems like you knew that already.) I’d also be curious to get the output from your diagnostics command:
;;Prints a map with information about a database, storage, the catalog, and peer settings:
bin/run -m datomic.integrity $DB-URI
Substitute your URI ^ for $DB-URI and ensure it is run from a machine that is able to reach storage.#2018-03-2622:03jaretIf you’d like you can private message me the output, or we could open a case. Just e-mail me.#2018-03-2622:04jaretThe output from diagnostics may not be information you’d like to share over slack 🙂#2018-03-2622:05jaretE-mail works too 🙂#2018-03-2701:49joshwolterThanks @jaret! I had to run, will do tomorrow morning.#2018-03-2710:32iarenazaIs it possible to have database functions if using the Client API? Does it depend on whether you are using Datomic On-Prem or Datomic Cloud?#2018-03-2720:04stijnIs it possible to run in-mem with the client lib?#2018-03-2720:05stijnI can't seem to find anything in the docs#2018-03-2720:06stijnor is it this? https://docs.datomic.com/on-prem/first-db.html#2018-03-2720:23donmullen@stijn - no in-mem with client lib#2018-03-2720:49donmullenSorry @stijn - was thinking cloud client - Jaret / Marshall obviously correct!#2018-03-2720:23Wes HallAnyone know if the datomic cloud client library works in cljs? Or if there is a cljs port?#2018-03-2720:25donmullen@wesley.hall - no - I believe there is somewhere you can vote for new client libraries - cljs is one people (including myself) have been asking about. There is an issue with security that would need to be addressed.#2018-03-2720:27Wes HallOk, thanks. I will look for the vote. It's only really that I want to access from AWS lambda. I can build the lambda on the JVM but startup time is a bit of a PITA when it comes to JVM lambdas, node is better.#2018-03-2720:29stijnok, no in-mem then, but how do you develop with rapid schema changes when trying out stuff?#2018-03-2720:34marshall@stijn yes, you can run an in-memory db with peer server using on-prem and connect to it with client#2018-03-2720:38jaret@stijn
$ bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d baskethead,datomic:mem://baskethead
#2018-03-2813:15stijnThanks!#2018-03-2813:33jaretNp! Note that I left in “baskethead”; that’s whatever name you want to serve the DB as. I copied it over from a project where I’m in the process of messing with schema and didn’t remove it.#2018-03-2818:00stijn😄#2018-03-2720:39jaret(def cfg {:server-type :peer-server
:access-key "myaccesskey"
:secret "mysecret"
:endpoint "localhost:8998"})
(def client (d/client cfg))
(d/list-databases client {})
(def conn (d/connect client {:db-name "your-db-name"}))
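Following on from jaret's snippet: with the client library, q takes a single arg-map rather than positional arguments. A sketch (:db/doc is only an illustrative attribute):

```clojure
;; Sketch: query the db served by the peer-server through the client API.
(let [db (d/db conn)]
  (d/q {:query '[:find ?e ?doc
                 :where [?e :db/doc ?doc]]
        :args  [db]}))
```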
#2018-03-2721:13markbastianHow might I use datomic’s history to get key-value pairs over time? For example, suppose I have a series of weather measurements that look like this:
{:weather/temp 105.0,
:time/toa #inst"2018-01-03T00:00:00.000-00:00",
:db/id 1,
:weather/location "Eagle, ID"}
What I want is to do two things:
1. Get the current weather at a given location. This is easy enough:
(d/pull (d/db conn) '[*] 1)
2. Get a time series of all :time/toa to :weather/temp values. I tried a couple of things:
;;This returns every combination of toa and temp.
(d/q
'[:find ?toa ?temp
:in $ ?e
:where
[?e :time/toa ?toa]
[?e :weather/temp ?temp]]
(d/history (d/db conn)) 1)
;;This returns combinations where the transaction is the same, but doesn't quite do what I want.
(d/q
'[:find ?toa ?temp
:in $ ?e
:where
[?e :time/toa ?toa ?t]
[?e :weather/temp ?temp ?t]]
(d/history (d/db conn)) 1)
Any idea as to the right query? Alternatively, is the right answer to just map over my history and take those pairs?#2018-03-2722:18adammillerI'm building an app that will eventually deploy on Datomic Cloud (or that is the intent). However, for my tests I like to spin up an in-memory db on demand. There are of course slight differences in using the cloud vs. peer api so I was wondering how people typically handle this? I saw Robert's solution in his Clojure bridge code he is building but was wondering what others do in this situation?#2018-03-2723:19Datomic Platonic@adammiller you can use the peer api in production as well, so if you'd like to use the same api for dev and production just still with peer api, that's what we're doing#2018-03-2723:21Datomic Platonics/still/stick#2018-03-2723:32steveb8n@adammiller I’m building my own version of Robert’s solution, based on component. basically the same idea, without dynamic binding#2018-03-2723:49adammiller@clojurians873 you can't use the peer api with datomic cloud though, correct?#2018-03-2723:50adammiller@steveb8n I'd be interested in seeing what you come up with. I think whatever end result can be achieved would be nice to have as a very small open source lib we can pull into new projects to handle this. I'd think it would be a common issue.#2018-03-2800:07Datomic Platonic@adammiller correct. we're going to deploy the on-prem version on ec2 instances backed by postgres, using the datomic-pro maven library bundled with the download. it turns out so much of our code had (datoms ...) and other peer-only features#2018-03-2800:08Datomic Platonicor is it (entity ...) instead of datoms, some functions are only available on the peer server#2018-03-2800:09adammillerYeah, api is just slightly different in certain situations.#2018-03-2800:10Datomic Platonicso in dev we use mount (it's like sierra's component) to start datomic in memory peer, and run our tests etc, knowing it will work in production (but we are still amateurs!! 
😼 )#2018-03-2800:11Datomic Platonicthe other advantage of using the on-prem version is that we can start postgres on our laptops and run the full version with tests, etc#2018-03-2800:13adammillerYes, I'm also using mount and I determine by the config currently whether to launch a peer or client connection then use Robert's method basically to wrap the api between client and peer api methods (https://github.com/robert-stuttaford/bridge/blob/master/src/bridge/data/datomic.clj)#2018-03-2800:15Datomic Platonici like robert's approach; it's a nice first try of bridging the gap#2018-03-2800:15Datomic Platonici would have used a multimethod or so#2018-03-2800:16Datomic Platonicbut still would be scared of breaking things deep inside the AWS cloudformation stack#2018-03-2800:17adammillerproblem with multi-method i believe is you would have to pass the mode around (at least if you were thinking of dispatching on that). His method works nicely using the with-datomic-mode macro to bind what mode we are working in.#2018-03-2800:20adammillerNone of this would be needed if we could launch an in memory db from code so that we could connect with the client but I don't believe that is possible.#2018-03-2800:26noonianWould be amazing if datomic-free supported the client api#2018-03-2801:59drewverleeHow would i approach the abstract problem of walking a graph from leaf to root(s) with datomic. Is it even a good fit for this type of question? I understand it's a graph db, but it's not clear how well the semantics support this type of thing.#2018-03-2802:00drewverleesay n1 -> n2 -> n3 and also n1 -> n4 and just for contrast n5 -> n6
if i handed this thing n1 it would return n1 -> n2 -> n3 and n1 -> n4 but not n5 -> n6#2018-03-2802:03drewverleei think component pulls nested components so that's one way to think about tackling this.#2018-03-2802:34chris_johnsonSo, this is off the top of my head pseudocode but you would navigate the “edges” that are ref values from one entity to the other#2018-03-2802:35chris_johnsonI won’t try to come up with a schema on the fly, but your query would look something like#2018-03-2802:36chris_johnsonhm, wait - sorry, I misread your question and was about to answer a different one 🙂#2018-03-2802:37chris_johnsonspecifically, “how do I walk from root to leaf of a structure I know”, where I now think you’re asking “how do I find all the leaves and paths to them from a given root without knowledge of the structure beforehand”#2018-03-2813:10drewverleeNot quite. I’m saying, given a leaf, how do you walk back to the root.
Put another way, given X find all the things that X depends on.
I have that information in a relational format:
item | deps on
X | Y
Y | Z
But i’m trying to find the right data structure for storing it for my purpose. Which is answering questions like given X walk backwards to all its deps. Then reverse that list:
so foo(X) => [Z Y X] where the order represents that Ni + 1 depends on Ni#2018-03-2813:11drewverleedatomic or datascript might not be an ideal way to do this. But if it is, then it might offer some advantages for storing and querying that data.#2018-03-2816:52drewverleeIn fact, after some thought, it's easy enough to express this with hashmaps…#2018-03-2804:14steveb8n@adammiller no problem. Once I’ve tested it in my codebase, I’ll be happy to extract a micro-library.#2018-04-1204:20bmabey@U0510KXTU I just saw this thread... did you ever extract a micro-library?#2018-04-1204:57steveb8nSure did https://github.com/stevebuik/ns-clone#2018-03-2821:26madstapThe first link on this page is wrong https://docs.datomic.com/cloud/time/log.html#2018-03-2913:20jaretThanks for catching that! I’ll correct that today.#2018-03-2912:06stijnI understand that the entity API is not available in the Client API (and hence Datomic Cloud). Is this something that will be added later on (i.e. peers on datomic cloud) or is it certain this will never be part of the cloud solution? Reason I'm asking is that the Entity API is a great match with graph query systems like GraphQL, om.next, qlkit.#2018-03-2913:22alexmillerpull is generally a better match with Client#2018-03-2913:27jaret@stijn The entity api was not well-suited for wire protocol, and wasn’t included in the client API. 
and as @alexmiller indicated, pull is the client alternative.#2018-03-2913:29jaretThe entity api due to its laziness is chatty and therefore a misfit over the wire.#2018-03-2913:31stijnOK I understand.#2018-03-2913:32stijnIs there a specific reason that peers could not be part of a Datomic Cloud installation (in the future)?#2018-03-2913:34stijn(just asking because we need to make a choice between an on-prem or cloud installation)#2018-03-2914:25robert-stuttaford@stijn some nice notes here which may help you Datomic Cloud conj workshop notes https://docs.google.com/document/d/1WhotOK6v0ZkBBc2G6s5BKp8gOEP9pXazik_4hcArZ9o/edit#2018-03-2915:03jaretDatomic 0.9.5697 now available, important security fix for free: and dev: transactors.
https://forum.datomic.com/t/important-security-update-0-9-5697/379#2018-03-2916:27donaldballFWIW the Release Date for 0.9.5697 on the my.datomic downloads page appears to be incorrect#2018-03-2916:29jaretThanks Donald! I must have missed a step in updating the page. I’ll see if I can update that#2018-03-2916:35jaretI’ve updated the date! Thanks again.#2018-03-2919:27alexandergunnarsonSo I've been wondering for a while now — will Datomic support Google Spanner?
It seems to me that if it supports MySQL, Postgres, and Oracle, it should in theory be able to (in a way) support Google Spanner, which has an ANSI 2011 SQL interface (admittedly not identical to the respective interfaces of each of the flavors currently supported, but sufficient for Datomic's needs, I would have to assume). Then again it seems that Datomic's single-point-of-failure model would effectively preclude horizontalizability and thus would negate the benefits Google Spanner has over other strongly consistent backends. From what I understand, it's not just about slapping a Datomic-flavored Datalog interface over top of a single (potentially massive) EAVT table; there's (at least) the transaction queue, transaction functions, and datom cache to take into account as well.#2018-03-2920:35favilaSpanner should work, but it's pricey (vs google's cloud mysql or postgres) and you don't get many benefits from it. It does auto-splitting (sharding) for reads and writes, but the records datomic stores are immutable and trivially memcached-able so who cares.
I should have asked instead, are there plans to implement a Datomic interface that leverages Spanner's horizontally-scalable ACID guarantees in order to overcome Datomic's limits on write scalability?#2018-03-2922:10faviladatomic's limits on write scalability are inherent in its single-writer design. I don't see that changing#2018-03-2922:10favilaor, I would be very surprised if it changed#2018-03-2922:31alexandergunnarsonAgreed. I would be very surprised as well — pleasantly so 🙂 But I can at least partially envision how Datomic's features might be implemented on top of Spanner (with the qualification that these are not polished thoughts). For instance:
- A Datomic-flavored Datalog query engine has been built for multiple SQL backends; the code for that could be nearly completely reused for Spanner.
- There could be a single, large EAVT table for all datoms (not accounting for indices of various sorts).
- It seems that, without a single-writer model, Datomic's transaction queue would have to be poll-based, unless there's some analogous push-based mechanism inherent to Spanner (doubt it, but possible). The poll mechanism would require a select of all rows whose timestamp was after the last poll (accounting somehow for the edge case of rows with the same exact timestamp that had been inserted after the last poll).
- Transaction functions could (ostensibly) be implemented using Spanner SQL transactions run on the peer.
- Peer cache creation/maintenance is a non-issue, as it seems not to be dependent on the single-writer model.#2018-03-2922:38alexandergunnarsonDoes that assessment seem reasonable to you?#2018-03-2922:52favilaI worry about the efficiency of that table design#2018-03-2922:53favilaconceivably, you could use spanner's "timestamp" as a transaction id (not a transaction time--that would have to be separate)#2018-03-2922:54favilaand use timestamp bounds for d/as-of (but not d/since)#2018-03-2922:54alexandergunnarsonFair; it's the most dead-simple of course, so that's why I mentioned it. Plus when I talked to Paul DeGrandis at Datomic a while back, he said that's essentially how the Datomic interfaces to the various SQL backends are implemented. (Could have changed though)#2018-03-2922:54favilano, that's not true at all#2018-03-2922:54alexandergunnarsonAh, I had no idea#2018-03-2922:54faviladatomic uses sql as a key-value blob store#2018-03-2922:55alexandergunnarsonHeh that would make a lot of sense given that it seems to me that that's how you'd need to do it in (at least several) NoSQL backends#2018-03-2922:56alexandergunnarsonAh interesting, thanks for the gist!#2018-03-2922:56alexandergunnarsonAnd that makes sense about using the timestamp as a txn ID#2018-03-2922:57favilaI have a feeling you could do it in spanner with careful table and index design, but whatever api layer there is inside datomic now assumes it can use a kv store lazily. 
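The poll-based transaction feed sketched in the bulleted list above can be illustrated in plain Clojure. This is purely an illustrative sketch under invented names (`poll-new-txes`, with `:commit-ts` standing in for Spanner's commit timestamp); it is not anything Datomic or Spanner actually provides:

```clojure
;; Illustrative sketch only: invented names, not Datomic's or Spanner's API.
;; A poll-based transaction feed keeps a cursor (the last commit timestamp
;; seen) and picks up every row committed strictly after it.
(defn poll-new-txes
  "rows: seq of maps like {:tx-id 2 :commit-ts 20}.
   Returns [rows-newer-than-cursor updated-cursor]."
  [rows last-seen-ts]
  (let [fresh (->> rows
                   (filter #(> (:commit-ts %) last-seen-ts))
                   (sort-by :commit-ts))]
    [fresh (if (seq fresh) (:commit-ts (last fresh)) last-seen-ts)]))
```

Because the filter is strictly greater-than, rows sharing the cursor's exact timestamp would be skipped; that is the same-timestamp edge case mentioned above, and a real implementation would need a tie-breaker such as a monotonic sequence number.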
I don't know if those internal interfaces are easy to retarget to a storage layer that actually represents everything first-class#2018-03-2922:58favilaevery "transactor" would have to have a copy of the necessary transaction function code, and have to do all contingent reads inside its write transaction and retry if it lost (I think?)#2018-03-2922:59favilaI'm not sure how much actual parallel tx-ability you could get in practice; depends on what those contingent reads are#2018-03-2923:00alexandergunnarsonYeah I'm not sure about the internal interfaces; also out of curiosity what do the ids represent in the schema you sent? I'm trying to mentally map each of those fields to EAVT. id is E, rev is T, map is A, and val is V?#2018-03-2923:01favilano, they are unrelated#2018-03-2923:01favilaid is a uuid for a block, or one of the mutable "pods" that holds references to the head#2018-03-2923:01favilarev is a revision counter, used only for those mutable rows#2018-03-2923:02alexandergunnarsonAh interesting... I had no idea that was the sort of implementation Datomic used under the hood#2018-03-2923:02favilathe "value" of the id is either map (edn) or val (a blob of fressian)#2018-03-2923:02alexandergunnarsonDatascript is very simple by comparison haha#2018-03-2923:02alexandergunnarson(And much slower despite being in-memory)#2018-03-2923:02favilahttp://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2018-03-2923:02favilathat may help#2018-03-2923:02alexandergunnarsonAh yes.
That's been on my reading list for a while#2018-03-2923:03favilathis is the same storage layout for all storage backends#2018-03-2923:03alexandergunnarsonThanks for sharing!#2018-03-2923:03favilaso the storage layers don't actually "see" the datoms#2018-03-2923:03favilathey're compressed into binary blobs of fressian#2018-03-2923:03alexandergunnarsonI see; fascinating#2018-03-2923:04favilathe blobs reference each other weakly by id to form a tree structure#2018-03-2923:04favila(weakly meaning the storage layer doesn't know about it)#2018-03-2923:05alexandergunnarsonInteresting; because they're referencing them "beneath" the binary blob compression#2018-03-2923:05favilaso spanner by itself takes care of some of this if you can represent the datoms directly in the storage layer efficiently enough#2018-03-2923:05favilabut just doing this in spanner is pointless#2018-03-2923:05favila(or, spanner provides no benefit over any other KV store)#2018-03-2923:07alexandergunnarsonNot clear why that is yet; is it because read-write-locking transactions become the bottleneck?#2018-03-2923:08alexandergunnarsonI was under the impression that Spanner could handle txn parallelism quite handily but then again I haven't done a deep dive into the docs#2018-03-2923:08favilayes read-write locking#2018-03-2923:08favilayou always need to read the previous tx to prepare the next one#2018-03-2923:09alexandergunnarsonThat makes sense#2018-03-2923:10favilaif you can design the tables in some way that you can "shard" the previous state they need to read, then parallel writes are at least possible#2018-03-2923:10favilaotherwise, you are effectively single-writer anyway, since each txor would just race to tx, but possibly execute their tx multiple times trying to "win"#2018-03-2923:11favilabut think about preserving the :db/txInstant invariant (that it is always increasing)#2018-03-2923:11favilacan you write a spanner sql SELECT that would not cause a retry if another tx 
interceded?#2018-03-2923:12favilaanyway, got to go, this is interesting though#2018-03-2923:13alexandergunnarsonI don't know enough about Spanner to answer intelligently; my guess is it would be delayed by read-write locking#2018-03-2923:13alexandergunnarsonOr cause a retry; however that "locking" is implemented (spin lock or not)#2018-03-2923:13favilaI think tx fails if any reads were invalidated#2018-03-2923:14favilathen the txor must rerun#2018-03-2923:14alexandergunnarsonYes, very interesting! And I get your point about the monotonicity of the :db/txInstant, but I wonder whether you could just use the Spanner timestamp in place of that?#2018-03-2923:15favilayou can backdate txInstant#2018-03-2923:15favilayou would lose that ability with timestamp#2018-03-2923:15alexandergunnarsonAh there's an issue, yes#2018-03-2923:15alexandergunnarsonHow useful is backdating though?#2018-03-2923:16alexandergunnarsonA DB restore from backup via Spanner should preserve the original timestamps, for one#2018-03-2923:17alexandergunnarsonBut about backdating in general, it seems misleading/inconsistent to say "I'm transacting at time X and recording that it transacted at time Y"#2018-03-3001:27favilaIt’s for imports and creating time indexed views of some other source data. The technique is called “decanting”#2018-03-3001:28alexandergunnarsonHuh, interesting; I can see the appeal of the feature for import purposes but I haven't run into decanting before#2018-03-2919:52souenzzoAre there plans to make EntityMap work with #clojure-spec?#2018-03-2920:00alexmillerthere’s a ticket about this, haven’t decided what the course of action will be yet#2018-03-2920:01alexmillerhttps://dev.clojure.org/jira/browse/CLJ-2041#2018-03-3003:16caleb.macdonaldblackTrying to find a solution that does this within the query. I’m aware I could just use (map first result) afterwards.#2018-03-3012:02donmullen@caleb.macdonaldblack if this is on-prem you can do :find [(pull ?e [*]) ...]
to get a vector of maps#2018-03-3012:06caleb.macdonaldblackAhh thank you! That's exactly what I was looking for. #2018-03-3119:15James VickersHas anyone ever gotten :db.error/transactor-unavailable Transactor not available when using Cassandra? I'm getting it when trying to submit any transactions over a small size (like 1k Datoms). I didn't have this problem at all with PostgreSQL, though now my transactor and storage service are on different nodes. Anything I should look at to investigate?#2018-04-0209:37igrishaevHi! I’ve got a Datomic Pro Starter Edition account at registered about a year and a half ago. When I try to use an up-to-date Datomic release, the transactor says your License Key cannot be used with that version. My key expires at Sep 21, 2017 and there is no any button or a link to update it. So the question is, how can I use the latest release of Datomic with my account? Thanks.#2018-04-0213:18jaret@igrishaev That’s the intention of Starter. To give users 1 year from signup worth of Datomic use (perpetually) to try it out. The next step would be purchasing pro or continuing to use the versions released prior to your expiration date.#2018-04-0215:06Datomic PlatonicHas anyone needed more than 4GB RAM for the transactor or 4GB RAM for the peer? How many datoms did you have when you reached those limits?#2018-04-0215:21magraHi, I have datomic free on my laptop.
0.9.5697 worked fine, I can't connect to 0.9.5697 though.
I keep getting:
JdbcSQLException Falscher Benutzer Name oder Passwort
Wrong user name or password [28000-171] org.h2.engine.SessionRemote.done (SessionRemote.java:568)
In the .properties file I tried both
storage-datomic-password=my-password
and
storage-datomic-password="my-password"
I tried to get databases with:
(d/get-database-names "datomic:)
What am I missing?#2018-04-0215:23marshall@magra https://docs.datomic.com/on-prem/configuring-embedded.html#2018-04-0215:24marshallif it’s a remote peer you need to enable remote storage access https://docs.datomic.com/on-prem/configuring-embedded.html#sec-2-2#2018-04-0215:24magra@marshall It is localhost.#2018-04-0215:25marshallyou can access localhost without setting a password (https://docs.datomic.com/on-prem/configuring-embedded.html#sec-1)#2018-04-0215:28magraok. I will settle for that then. Still, should the password in .properties be written with single quotes or just the letters. I followed the manual you mentioned and it is just blank there?#2018-04-0215:29magraSorry, double quotes or just the letters. Single quotes produce a stack trace.#2018-04-0215:29marshallyou shouldnt need quotes#2018-04-0215:29robert-stuttafordyou must actually use “localhost” and not some other name even if it resolves to your local, right?#2018-04-0215:30marshall@robert-stuttaford not sure. I’d have to doublecheck#2018-04-0216:12joshkhi'm running into an odd problem possibly datomic related and was hoping someone could offer a clue. i have a pretty basic ring based project as well as an API project that makes use of com.datomic/client-cloud. as soon as i include the API project as a dependency i get the following error:
Compiling auth.server
2018-04-02 17:09:39.333:INFO::main: Logging initialized @14043ms
java.lang.NoClassDefFoundError: org/eclipse/jetty/util/thread/NonBlockingThread, compiling:(repl.clj:16:21)
Exception in thread "main" java.lang.NoClassDefFoundError: org/eclipse/jetty/util/thread/NonBlockingThread, compiling:(repl.clj:16:21)
at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3700)
at clojure.lang.Compiler$DefExpr.eval(Compiler.java:457)
at clojure.lang.Compiler.compile1(Compiler.java:7609)
at clojure.lang.Compiler.compile(Compiler.java:7676)
at clojure.lang.RT.compile(RT.java:413)
at clojure.lang.RT.load(RT.java:458)
at clojure.lang.RT.load(RT.java:426)
at clojure.core$load$fn__6548.invoke(core.clj:6046)
at clojure.core$load.invokeStatic(core.clj:6045)
at clojure.core$load.doInvoke(core.clj:6029)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invokeStatic(core.clj:5848)
at clojure.core$load_one.invoke(core.clj:5843)
at clojure.core$load_lib$fn__6493.invoke(core.clj:5888)
at clojure.core$load_lib.invokeStatic(core.clj:5887)
at clojure.core$load_lib.doInvoke(core.clj:5868)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invokeStatic(core.clj:659)
at clojure.core$load_libs.invokeStatic(core.clj:5925)
at clojure.core$load_libs.doInvoke(core.clj:5909)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invokeStatic(core.clj:659)
at clojure.core$require.invokeStatic(core.clj:5947)
at clojure.core$require.doInvoke(core.clj:5947)
at clojure.lang.RestFn.invoke(RestFn.java:703)
...
at clojure.main.main(main.java:37)
Caused by: java.lang.NoClassDefFoundError: org/eclipse/jetty/util/thread/NonBlockingThread
at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:258)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
#2018-04-0216:13marshall@joshkh looks like a deps conflict (likely jetty); client-cloud requires a version that may be newer than the version included with some other dependency in your project#2018-04-0216:16joshkhthat's what i was thinking. i guess i'll play around with exclusions until i find the magic combination. thanks @marshall#2018-04-0216:18alexmillerhttps://docs.datomic.com/cloud/troubleshooting.html#dependency-conflict#2018-04-0216:18marshallnp. sorry i cant be more help - you can use your build tool to track it down potentially (i.e. https://maven.apache.org/plugins/maven-dependency-plugin/examples/resolving-conflicts-using-the-dependency-tree.html for maven, lein deps :tree for lein, https://github.com/boot-clj/boot/wiki/Boot-for-Leiningen-Users#lein-deps-tree for boot)#2018-04-0216:18marshallright…. or that ^#2018-04-0216:19marshall🙂#2018-04-0216:21joshkhcheers, thank you!#2018-04-0217:35souenzzoI'm trying to do an or-join with multiple db's but it throws IllegalArgumentException Cannot resolve key: $$ datomic.datalog/resolve-id (datalog.clj:272)
;; this query works
(d/q '[:find ?c ?tx
       :in $ $$ ?params ...
       :where
       [...]
       [$$ ?c ?i _ ?tx]]
     db history params ...)
;; but when I try to make an or-join
(d/q '[:find ?c ?tx
       :in $ $$ ?params ...
       :where
       [...]
       (or-join [$$ ?c ?tx]
         [$$ ?c ?i _ ?tx]
         [$$ _ :notification/origins ?c ?tx])]
     db history params ...)
I'm not sure if I really need to put the $$ and the ?c in the join. The join is just on ?tx#2018-04-0219:11marshallthe or-join vars are only vars, not data-sources#2018-04-0219:12marshallor-join-clause = [ src-var? 'or-join' rule-vars (clause | and-clause)+ ]#2018-04-0219:12marshallhttps://docs.datomic.com/on-prem/query.html#query#2018-04-0219:12marshall@souenzzo ^#2018-04-0219:13marshallrule-vars = [variable+ | ([variable+] variable*)]#2018-04-0219:14marshallso it should be:
($$ or-join [?c ?tx] [?c ?i _ ?tx]... #2018-04-0220:04souenzzoWeird place to put the $$ but now it's working 😄 thnx#2018-04-0220:05marshallall the clauses in a single or-join must use the same data src#2018-04-0301:20brunobragahey everyone, I am new to clojure and datomic, and I have a simple question
does anyone know why this query:
(d/q '[:find ?id ?type ?urgent
       :where
       [?e :job/id ?id]
       [?e :job/type ?type]
       [?e :job/urgent ?urgent]] (get-db))
returns the following: #object[java.util.HashSet 0x4362c471 [[690de6bc-163c-4345-bf6f-25dd0c58e864 bills-questions false]]]
but all I want is [690de6bc-163c-4345-bf6f-25dd0c58e864 bills-questions false]
I can get the answer I want by using first, but why is datomic returning such a weird output, is there a way to get just what I actually want?#2018-04-0303:18timgilbertSee here: https://docs.datomic.com/on-prem/query.html#find-specifications
Basically with :find ?a ?b ?c :where ... you’ll get a set of tuples. This makes sense if you consider that there could be more than one result from your query. If you know there will only be a single result you can use :find [?a ?b ?c] :where ... instead#2018-04-0302:52mvI have an entity with two reference fields on it. Is there a way to write a datomic query that, given an ID, returns the entity if the id is contained in either ref field? Like an or for a query?#2018-04-0306:16brunobragadoes anyone know how I can save a vector on datomic and retrieve it as if it was any other literal value?#2018-04-0307:20Christian Johansen@mv (or [?e :ns/ref1?id] [?e :ns/ref2?id])#2018-04-0307:20Christian JohansenDatomic queries on the mobile while walking: hard #2018-04-0307:22Christian Johansen@brnbraga95 you can't. store values unsorted with cardinality many. If you need ordering, store refs and add a sort attribute to the entities #2018-04-0312:17brunobragaif I use cardinality many, then how can I at least retrieve all the values at once and store into an array? assuming lets say, I have an array of favorite foods.#2018-04-0319:39Christian Johansenthe entity or pull APIs would be good for that. e.g. (-> (d/entity (d/db conn) my-ref) :myentity/favorite-foods)#2018-04-0422:20brunobragaproblem is that I believe I am not using the entity-id so it just creates a new entity.
(d/transact (get-connection)
            [{:job/id        (first job)
              :job/type      ((second job) "type")
              :job/urgent    ((second job) "urgent")
              :job/timestamp (bigint (System/currentTimeMillis))
              :job/requester agent-id
              :job/status    "doing"}])
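This ties into the upsert discussion nearby: if `:job/id` were declared `:db.unique/identity`, transacting a map that reuses an existing `:job/id` would update that entity instead of minting a new one. A hedged sketch, reusing the attribute names from the snippet above; the value type is an assumption, and this is not runnable without a live Datomic connection:

```clojure
;; Sketch: declare :job/id as a unique identity attribute. The value
;; type (:db.type/uuid) is an assumption for illustration.
(d/transact (get-connection)
  [{:db/ident       :job/id
    :db/valueType   :db.type/uuid
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])

;; Later: a map with an existing :job/id now resolves to that entity
;; and only asserts the changed attribute, instead of creating a new one.
(d/transact (get-connection)
  [{:job/id     (first job)
    :job/status "doing"}])
```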
#2018-04-0307:23Christian JohansenTechnically you could also store vectors as edn strings, but I wouldn't recommend it #2018-04-0309:54magra@marshall The password with datomic free works beautifully as soon as I update the peer too.
Maybe add this to the page.
Feeling stupid after having run in circles for a couple of hours 😉#2018-04-0310:36ggaillardIs datomic a good fit for a chat app? I have more reads than writes, but if I get too much messages/second to persist and only one transactor at a time can handle writes, I might end up in a situation where the transactor run out of memory. Even in HA mode, an other transactor would replace the dead one, but it would end up in the same situation. Is this true ?#2018-04-0312:14stijncan I assume that a Client that loses its connection to the datomic system will automatically be able to use the connection once it's again available? or is there some logic required to again setup the connection?#2018-04-0318:01favilapeers reconnect automatically (actually it can be sometimes hard to detect--only transactions will fail). I don't know about clients#2018-04-0321:25Datomic Platonicwe're getting ActiveMQNotConnectedException trying to connect to datomic on remote host, our url string looks like this: "datomic:<sql://datomic?jdbc:postgresql://1.2.3.4:5432/datomic?user=alice&password=secret>#2018-04-0321:26Datomic Platonicare we mangling the ports somehow? we can connect to 5432 (postgres) but 8998 (peer) and the peer api are not connecting#2018-04-0321:26marshallthe URI should be storage#2018-04-0321:26Datomic Platonicthe same ports all work fine when the transactor/peer/postgres are all on localhost#2018-04-0321:26marshallthe transactor writes its location to storage#2018-04-0321:26marshallthe peer reads it from there#2018-04-0321:26marshalland then connects to it#2018-04-0321:27marshallthe ArtemisMQ error indicates your peer is connecting to storage but not to the transactor#2018-04-0321:27marshallyou need to specify HOST and/or ALT-HOST in the transactor properties file#2018-04-0321:27marshallhttps://docs.datomic.com/on-prem/storage.html#connecting-to-transactor#2018-04-0321:28Datomic Platoniccan HOST be localhost? 
our file reads protocol=sql host=localhost port=4334#2018-04-0321:28marshallhttps://docs.datomic.com/on-prem/deployment.html#peer-fails-to-connect#2018-04-0321:28Datomic Platonic(we're running the transactor and peer on the same box)#2018-04-0321:28marshallnot if the peer is on a different box#2018-04-0321:31marshallwhat is the full exception?#2018-04-0321:34marshallis there any additional information in the ex-info?#2018-04-0321:34marshallusually tells you what address(es) it’s tried#2018-04-0321:35Datomic Platonicno, just a short stack trace with create-session-factory#2018-04-0321:36marshallthis is peer or client?#2018-04-0321:37Datomic Platonicusing peer only#2018-04-0321:37marshallyou mentioned port 8998, which is why i asked#2018-04-0321:38marshallthe transactor is up and running?#2018-04-0321:40Datomic Platonicok transactor is running with datomic:sql://<DB-NAME>?jdbc:<postgresql://localhost:5432/datomic?user=alice&password=secret>#2018-04-0321:41Datomic Platonicpeer is serving datomic:<sql://datomic?jdbc:postgresql://localhost:5432/datomic?user=alice&password=secret>#2018-04-0321:41marshallserving?#2018-04-0321:41marshallpeer-server ??#2018-04-0321:41Datomic Platonicthat is the peer server INFO log message#2018-04-0321:42marshallok, so you are using client#2018-04-0321:42Datomic Platonictransactor and peer are both running on the same remote machine, and we're attempting to launch the PEER client from a laptop REPL#2018-04-0321:43marshallok; what is in your connection config?#2018-04-0321:44marshallyou’ll need the access-key and secret you specified when you started peer server#2018-04-0321:44marshallpeer-server is only used to allow clients to connect (not other peers)#2018-04-0321:45marshallif you’re trying to connect a peer (i.e. 
using datomic.api) from another system, you need to add an alt-host value to your transactor properties file that is the remotely-resolvable address of the transactor instance#2018-04-0321:45Datomic Platonicour clojure repl has [datomic.api] from the datomic-pro, and we have the following uri: datomic:<sql://datomic?jdbc:postgresql://1.2.3.4:5432/datomic?user=alice=password=secret>#2018-04-0321:46marshallthat ^ won’t work unless you’ve added 1.2.3.4 as the host (or alt-host) in txor properties file#2018-04-0321:46Datomic Platonicwhere 1.2.3.4 is the remote IP address for the box with transactor+peer running, and alice and secret are the params in the datomic.properties file#2018-04-0321:46marshallthe peer will find postgres at that address specified by the URI#2018-04-0321:46marshallit will find “localhost” written in storage#2018-04-0321:46marshallb/c that’s what you have in host in your properties file#2018-04-0321:47marshallthen it will try to connect to the transactor at localhost:4334#2018-04-0321:47marshallit’s almost always better to use the actual system address (as resolvable from outside) for the value of host in your properties file#2018-04-0321:48marshallincidentally, if you’re using a peer you don’t need peer-server running#2018-04-0321:48marshall(unless you’re using peer-server for clients elsewhere/also)#2018-04-0321:51Datomic Platonicah! that worked! we seemed to have made the classic 127.0.0.1->0.0.0.0 server error#2018-04-0321:51Datomic Platonic+1 by the way on not needing to start the extra peer server#2018-04-0321:52Datomic Platonicwe intend to run 4 peer clients and no cloud api clients for the moment#2018-04-0321:52Datomic Platonicthanks @marshall#2018-04-0321:52marshallno problem#2018-04-0323:55brunobragadoes anyone know how I can get every value from a certain field that has cardinality many?
@christian767 told me to use this (-> (d/entity (d/db conn) my-ref) :myentity/favorite-foods) but I did not quite understand this query#2018-04-0400:31brunobragaGOT IT.#2018-04-0403:11brunobragadoes anyone know how I can update an attribute inside an entity, datomic keeps creating a new entity, which I believe is expected based on what it proposes, however I need to change a string value inside an entity from "todo" to "doing". Is it not the datomic way to look at it?#2018-04-0406:02Christian Johansen@brnbraga95 (d/transact conn [[:db/add entity-id :your/attribute "doing"]])#2018-04-0406:04Christian Johansenor the slightly more paranoid (d/transact conn [[:db.fn/cas entity-id :your/attribute "todo" "doing"]])#2018-04-0406:05Christian Johansenentity-id can be the actual entity id, e.g. (:db/id entity), or a unique reference. based on what you say (“keeps creating a new entity”) I suspect you are using a ref that is not unique?#2018-04-0408:45igrishaevDoes anybody know how I can connect to a remote transactor? My URL looks like
"datomic:"
But I could not google for where to put transactor’s host and port.#2018-04-0408:53igrishaevThe idea behind that is, before I had a single node with everything installed on it at once: Clojure app, Postgres and Transactor. Now I move them across several nodes. I’d like to have several Clojure app instances that connect to a node that has Postgres + Datomic transactor. Still, my app tries to communicate with local transactor by localhost + 4334.#2018-04-0409:14igrishaevSolved: the solution was to declare a global IP in transactor properties rather than localhost. Other nodes discover it from the db.#2018-04-0412:51marshall@igrishaev that’s correct (https://docs.datomic.com/on-prem/deployment.html#getting-connected)#2018-04-0412:51marshallpeers connect to storage first, read the transactor location, then connect to the transactor#2018-04-0413:22Ben Hammondis there a list of :db/error values in the datomic docs?
I am aware of
- :db.error/transactor-unavailable
- :db.error/transaction-timeout
are there any others (that might be remediable)?#2018-04-0413:33Ben Hammondis it reasonable to assume that #{:db.error/transaction-timeout :db.error/transactor-unavailable} are the only two error types that are worth retrying?#2018-04-0414:04brettI have a syntax question. This is working:
(d/q '[:find (pull ?e [*])
       :where [?e :team/id]] db)
This is not, and is throwing an exception:
(d/q '[:find [(pull ?e [*]) ...]
       :where [?e :team/id]] db)
Exception
ExceptionInfo Only find-rel elements are allowed in client find-spec
I’m using datomic-pro-0.9.5661#2018-04-0414:07marshallClient only supports find rel
https://docs.datomic.com/cloud/query/query-data-reference.html#find-specs#2018-04-0414:08marshallPeer has additional options for find spec https://docs.datomic.com/on-prem/query.html#find-specifications#2018-04-0414:14brettAll I changed is the binding form. I see it in both links you sent.#2018-04-0414:26marshallThe client only supports the find-rel form. You can use the other binding forms for inputs, but not in the find specification#2018-04-0414:34brettI see it now. Hmm, that’s unfortunate.#2018-04-0415:16eoliphanthi, is there a convenient way to copy a database (in the same storage)? in reading the docs for restore-db it seems that that is not possible with that tool at least#2018-04-0507:02caleb.macdonaldblackWould anyone recommend against storing edn in datomic? I can query it using read-string with some destructuring in datalog. It greatly simplifies my current problem.#2018-04-0507:41robert-stuttaford@caleb.macdonaldblack we do this a lot. sometimes you just need to store ad-hoc data.#2018-04-0509:09caleb.macdonaldblackThanks for the response!#2018-04-0509:42robert-stuttafordtry
[:find (count ?b)
 :with ?v
 :where
 [?b :bar/baz ?v]]
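A related hedged sketch: aggregates collapse duplicate values because query results are sets, and :with adds ?b to the row basis so duplicates survive. This reuses the same made-up :bar/baz attribute from the query above and is illustrative only, not runnable without a live db:

```clojure
;; Sketch (same invented :bar/baz attribute as above): without :with,
;; two entities sharing the same ?v would collapse into one row and be
;; summed once; :with ?b keeps one row per entity before summing.
(d/q '[:find (sum ?v) .
       :with ?b
       :where [?b :bar/baz ?v]]
     db)
```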
#2018-04-0509:43robert-stuttafordyou’d need to use :with somehow @caleb.macdonaldblack https://docs.datomic.com/on-prem/query.html#with#2018-04-0509:47caleb.macdonaldblack@robert-stuttaford Thanks! I gave a bad example. I could have just done a count on the entity id. But for a sum I would need to use :with and that’s what I was looking for.#2018-04-0509:48caleb.macdonaldblackThat’s what I ended up with#2018-04-0519:15mynomotoI'm trying to register on AWS Marketplace Product Support Connection but it says that You are not currently subscribed to any products which are Product Support Connection enabled. You may return to this page to edit your support contacts after you have subscribed to a participating product., I checked and I'm subscribed to Datomic Cloud. Any clues about what I'm doing wrong?#2018-04-0519:22marshall@mynomoto We’re working with AWS to enable Product Support Connection -- not yet activated#2018-04-0519:31mynomoto@marshall Ok, thanks. The datomic cloud marketplace page mentions it, that's why I was thinking I was doing something wrong.#2018-04-0519:32marshallyeah, we were hoping it would be enabled sooner; still working on it 🙂#2018-04-0520:50mynomotoRemoving a dependency fixed the error above.#2018-04-0521:37gcastHow big a database in Terabytes can Datomic hold without totally obliterating the 10 Billion datom threshold? After googling quite extensively, I'm struggling to see any performance reports based on actual storage size. Most estimates just describe upper bound in terms of datoms.#2018-04-0522:50rgorrepatiHi, I am trying to investigate a high cpu usage (> 150% consistently) on the transactor, causing timeouts for clients.
Looking at the transactor logs, I see messages that look like “datomic.kv-cluster - {:event :kv-cluster/update-pod,” What does update pod mean?#2018-04-0523:05caleb.macdonaldblackCan I limit results in datalog?#2018-04-0523:18gcastI believe you can put a limit on pulls#2018-04-0523:20gcast@caleb.macdonaldblack https://docs.datomic.com/on-prem/pull.html#limit-option#2018-04-0523:30caleb.macdonaldblack@gcast The limit in pull is limited to cardinality many attributes#2018-04-0523:32gcastI see. I'm curious if query results are lazy or eager. Plain clojure functions may suffice if its lazy#2018-04-0523:32caleb.macdonaldblackThat seems to work#2018-04-0523:32caleb.macdonaldblackI don’t think its lazy#2018-04-0523:34gcastinteresting. the . syntax in the find returns a single value#2018-04-0523:35gcastbut you are taking 2?#2018-04-0523:35caleb.macdonaldblackJust in that example#2018-04-0523:35caleb.macdonaldblackI plan on paginating#2018-04-0523:36gcastgotcha. well good to know take can be used over entity binding#2018-04-0523:37caleb.macdonaldblackThanks#2018-04-0603:04steveb8n@adammiller as promised, here’s the lib I created to use Peer and Cloud compatible app code https://github.com/stevebuik/ns-clone#2018-04-0603:05steveb8nCurrently doesn’t support client/cloud since I’m only using Peer/local currently#2018-04-0603:05steveb8nI’m interested in any feedback from the community.#2018-04-0603:06steveb8nIt’s working really well in my project already. Using the logger interceptor shows my app is really chatty, will need some cleanup to run on cloud#2018-04-0609:49conanDoes anybody know whether it is possible to access Datomic Cloud from outside AWS?#2018-04-0610:05val_waeselynckIf it is for developer access, I believe that's what the bastion is for#2018-04-0610:36conanI'm looking for a production application to use it that's hosted on Heroku, and I can't afford their VPC solution. 
Can it go over the internet?#2018-04-0610:45val_waeselynckYou'll need to ask someone from the Datomic team (@U05120CBV ?) this is beyond my knowledge#2018-04-0612:59marshallDatomic Cloud runs in your own VPC in your own AWS account. You can configure security/network options however you require, but the default (only accessible from within AWS VPC) is designed with best-practices for AWS security.#2018-04-0615:28conanSad times for those of us not in AWS!#2018-04-0615:29conanSounds like I could configure public access though. I'll have to consider whether I'm happy with this, I understand it's all done over SSL.#2018-04-0620:30marshallif you’re asking about running Datomic Cloud itself in Heroku, no. Cloud can only be run in AWS#2018-04-0614:53mynomotoIf I delete a solo datomic cloud stack and want to clean everything, what do I have do delete manually? I found storage at s3, efs and dynamo. Is there something else that needs cleaning?#2018-04-0614:55jaretHey @mynomoto This page should help https://docs.datomic.com/cloud/operation/deleting.html#2018-04-0614:56mynomoto@jaret thanks! I will check it out#2018-04-0614:56jaretI think you got everything but maybe the CloudWatch log group.#2018-04-0615:04mynomoto@jaret I think the console steps to deregister scalable targets are not necessary if the dynamo tables are deleted. I could not find those.#2018-04-0615:45mynomotoAnd iam policies also need to be deleted manually.#2018-04-0619:34souenzzoThere is a :user/animals. It's a ref/many
I have a list of animals. I want to find all users that have just a subset (or an equal set) of my list of animals.
Is there a way to do it with one query?#2018-04-0619:36souenzzo{:db/id 1
 :user/animals [10 11 12]}
{:db/id 2
 :user/animals [10 11]}
If my list of animals is [10 11 15], I want to find [2]#2018-04-0620:11eraserhdI have an odd scenario where I add a little bit of data to my seed data for tests, and tests start failing, even though I don't believe the test data is really used yet, and it's valid. I can't find any info about limits or config for "mem" databases. Am I missing something?#2018-04-0620:43eraserhdugh, nevermind. Found it.#2018-04-0711:13caleb.macdonaldblackhttps://support.cognitect.com/hc/en-us/articles/215581488-Aggregation#2018-04-0711:14caleb.macdonaldblackLast argument (and only one) must be the aggregate variable#2018-04-0823:37piotr-yuxuanHave you ever tried to set up a Figwheel project with Datomic? I call lein new figwheel mwe-cider-cljs-datomic, add Datomic as a dependency (just like the doc says) and I stumble upon an exception. Any additional configuration?
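souenzzo's subset question above can be answered in one query by excluding any user who owns an animal outside the candidate set; a sketch using the attribute names from the example:

```clojure
;; Find users whose :user/animals are a subset of ?allowed.
;; With the example data, user 2 ([10 11]) matches #{10 11 15},
;; while user 1 ([10 11 12]) is excluded by the not-join.
(d/q '[:find [?user ...]
       :in $ ?allowed
       :where
       [?user :user/animals]
       (not-join [?user ?allowed]
         [?user :user/animals ?a]
         [(contains? ?allowed ?a) ?in]
         [(false? ?in)])]
     db #{10 11 15})
```

The not-join removes every user for whom some animal falls outside the allowed set, leaving exactly the subset-or-equal matches.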
Here is a minimal working example: https://github.com/piotr-yuxuan/mwe-figwheel-cljs-datomic#2018-04-0901:56zlrthI'm starting datomic via this command:
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d hello,datomic:
per a tutorial. i think i have to specify what logback.xml file i'm using. what's the flag for that? I'm looking for something like this:
bin/run -L ./bin/logback.xml -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d hello,datomic:
Context in case I'm going about this the wrong way:
In cider-repl. I want logging to cider-repl to be less noisy; right now it's at least 30kb of logging per datomic repl command. I have commented-out sections of the logback.xml file in the bin directory of datomic-pro-0.9.5561, but they're not showing in my cider-repl. And as a test, I changed this line
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %-10contextName %logger{36} - %msg%n</pattern>
to this
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %-10contextName %logger{36} - testfoobar - %msg%n</pattern>
and didn't see testfoobar in my cider-repl logging, so I think the run datomic program isn't picking up my logback.xml file.#2018-04-0901:57zlrthIf there's comprehensive for Datomic pro starter, I'm happy to read it. I couldn't find it. Thanks everyone!#2018-04-0902:59alexmillerlogback.xml is probably loaded as a resource so you should just put it on your classpath#2018-04-0916:42zlrthThanks for your response! I thought I had a much more tricky problem than I did. I had put logback.xml in my classpath, but it wasn't picking up my modified logback.
Turns out, there was already a logback.xml in my project, which was clobbering my modified one. Silly problem!#2018-04-0912:07caleb.macdonaldblackAny way to pull the tx-instant from an entity’s db/id using the pull or pull-many function?#2018-04-0912:41gcastim guessing putting the ident :db/txInstant in the pull pattern doesn't work?#2018-04-0913:12robert-stuttaford@caleb.macdonaldblack no, neither entity nor pull support access to the ‘t’ in ‘eavt’. you need to use d/q, d/datoms, or d/tx-range. entity and pull start with ‘e’ and give you ways to discover ‘v’ through ‘a’. ‘t’ is never a ‘v’ in this scheme.#2018-04-0922:01caleb.macdonaldblackAhh ok cheers#2018-04-0914:55chris_johnsonDo I understand correctly that a consequence of this would be that if your schema doesn’t expose synthetic timestamps for things (e.g., if you rely upon :db/txInstant as a proxy for “when did thing happen” instead of explicitly storing a :thing/created-at datom), then you cannot use the Datomic Client API to know about when things happened in your db?#2018-04-0914:57alexmillerno, you just need to query for it, not pull#2018-04-0914:58alexmillerbut I think it’s also helpful to be clear in your thinking about the difference between “when did thing happen” and “when did I record that thing happened”#2018-04-0914:59chris_johnsonah, I see, thanks for the clarification#2018-04-0914:59alexmillerit can be a useful thing to conflate the two, but don’t forget that you’re doing so :)#2018-04-0915:01chris_johnsonI am very clear in the difference between the two, but I also have inherited a medium-sized schema where the decision to conflate the two things was made long ago.
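The query-instead-of-pull approach robert-stuttaford and alexmiller describe might look like this (a sketch with the peer API's d/q; `eid` is whatever entity id you're inspecting):

```clojure
;; For each datom of an entity, recover when it was recorded by
;; joining through the transaction's :db/txInstant.
(d/q '[:find ?attr ?v ?when
       :in $ ?e
       :where
       [?e ?a ?v ?tx]
       [?a :db/ident ?attr]
       [?tx :db/txInstant ?when]]
     db eid)
```

The fourth position of a data pattern binds the transaction entity, which pull patterns cannot reach; querying that entity's :db/txInstant gives the wall-clock time the fact was recorded.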
I think it’s a reasonable choice for the data at hand since the events in question don’t have a need for high resolution to wall-clock time, but I like to keep a bead on what all we can and can’t do with Client#2018-04-0915:02chris_johnson(for example we have a lot of logic in DB fns, so I’m keeping my on-prem-on-AWS footprint up-do-date and waiting to see what Cloud will offer to handle them - this restriction would have been another thing I had to know about while doing the Indiana-Jones-sandbag-and-idol dance trying to decide when to migrate to Cloud :D)#2018-04-0915:05alexmillerwell if you haven’t, that’s probably something to talk to @marshall about#2018-04-0915:06chris_johnsonIt’s on my list. I’m very interested in Datomic Cloud but I have other dragons to slay before I get to the point of looking at upgrading our stack.#2018-04-0915:38danstoneHi, I'm not running datomic - I am interested in some consistency edge cases around GC and excision. What happens if your db 'value' holding one root pointer see's GC'd or excised nodes? Do you get an exception? Do you just not see the datoms? Lets say i (def dbval (db conn)) and then run a GC job or excise some data, several days later, I seek all datoms in dbval - what happens?#2018-04-0915:42danstoneI hope somebody just knows because it would be a pain to try it out 🙂#2018-04-0916:25marshall@danstone you get an exception#2018-04-0916:25marshallthat is why the gc-storage API has an ‘older than’ date argument#2018-04-0916:25marshallhttps://docs.datomic.com/on-prem/capacity.html#garbage-collection#2018-04-0916:26marshall“The reason that garbage is not deleted immediately on creation of a new tree is that not all consumers will immediately adopt the latest tree. Garbage collection should not be run with a current or recent time, as this has the potential to disrupt index values used by long-running processes. 
Except during initial import, garbage collection (gcStorage) older-than value should be at least a month old.”#2018-04-0916:33kvltHey all. Inside of my tx-data of a transaction I have a list of datoms. I.e. #datom[17592220704601 134 17592186149960 13194174193496 true]. I realise I can retrieve the eav as well as the tx. My understanding is that the true denotes as to whether it was actually transacted. My question is, how do I get at that value?#2018-04-0916:44marshall@petr (get datom :added)#2018-04-0916:44marshallor (:added datom)#2018-04-0916:44kvltThanks#2018-04-0916:44marshallyep#2018-04-0916:45kvltIs there documentation around this? I couldn't find it#2018-04-0916:45marshallcovered here: https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/history#2018-04-0916:46marshallmore specifically here: https://docs.datomic.com/on-prem/javadoc/index.html#2018-04-0916:46kvltGreat, thank you#2018-04-0918:59bmaddyDoes anyone know of an easy way to do something like d/touch but include all reverse references? Context: I find myself typing (inspect-tree (d/touch (d/entity db 1234))) and would like to be able to traverse reverse references instead of having to look them up.#2018-04-0919:01marshall@bmaddy The set of reverse references from any entity is an open set (since any entity can have any attribute). I’d probably use the Datoms API and the VAET index to find what you’re looking for#2018-04-0919:02bmaddyOk, I'll probably look into that. Thanks @marshall.#2018-04-0919:59chris_johnsonI have designed a cross-region failover capability like so:
- backups are taken every 10 minutes automatically to an S3 bucket in us-east-1
- those backups are copied to a different bucket in us-east-2 by cross-region-replication
Now I am trying to exercise it:
- create new, empty DynamoDB table in us-east-2
- bin/datomic restore-db <the us-east-2 bucket> <the new, empty table> (<optionally, a t-value from bin/datomic list-backups)#2018-04-0919:59chris_johnsonI see the restore fail like this:#2018-04-0920:00chris_johnsonbin/datomic restore-db datomic: the-t-value
Copied 0 segments, skipped 0 segments.
Copied 0 segments, skipped 0 segments.
java.util.concurrent.ExecutionException: java.lang.Exception: Key not found: 583bfcc4-3b77-4dec-b0ba-2c0bda16dbed
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at clojure.core$deref_future.invokeStatic(core.clj:2292)
at clojure.core$future_call$reify__8097.deref(core.clj:6894)
at clojure.core$deref.invokeStatic(core.clj:2312)
at clojure.core$deref.invoke(core.clj:2298)
at datomic.backup_cli$status_message_loop.invokeStatic(backup_cli.clj:24)
at datomic.backup_cli$status_message_loop.invoke(backup_cli.clj:18)
at datomic.backup_cli$restore.invokeStatic(backup_cli.clj:53)
at datomic.backup_cli$restore.invoke(backup_cli.clj:43)
#2018-04-0920:01chris_johnsonrestores from the backups in us-east-1 appear to work okay (I haven’t let one run to completion yet but they do at least begin to copy segments)#2018-04-0920:01chris_johnsonDoes this likely mean that all my backups in Ohio are corrupted somehow? Is there something about how backup-db puts data into S3 that would not be amenable to CRR?#2018-04-0920:03chris_johnsonto be clear, bin/datomic list-backups shows the same set of t values available in both the us-east-1 and us-east-2 buckets, but restoring from the us-east-2 bucket for any of the t values I’ve tried leads to the above error (including the key that is not found being the same key each time)#2018-04-0920:23marshallsounds like the cross-region replication hasn’t completed#2018-04-0920:24marshallor had some kind of failure#2018-04-1002:50chris_johnson^^ I was able to resolve this. It wasn’t the CRR, but the initial “seed” copy missed a bunch of files somehow - not all of them, but many. Once I tracked down the missing key and saw that it was from November of 2016, it was pretty clear what the problem was. 😅#2018-04-1008:44piotr-yuxuanSorry to paste it again, has anybody encountered it? https://clojurians.slack.com/archives/C03RZMDSH/p1523230657000057#2018-04-1012:02marshallwhat is the exception?#2018-04-1009:36akirozHi guys, I'm having a bit of trouble with Datomic monitoring. Specifically the TransactionMsec metric, in the online documentation, I see that the metric should have 3 statistics max, avg, and min but when I collect the metrics, I get 4 statistics: count, hi, lo, and sum. I can guess that count is the number of transactions in that frame but sum is a complete mystery to me since its value is always higher than hi.#2018-04-1012:01marshallsum is the sum of all values over the past minute, hi and lo are the largest and smallest values over the past minute, respectively.
count is the number of individual instances of that metric#2018-04-1012:01marshallso you can get average by taking sum/`count`#2018-04-1017:13akirozGot it, thanks!#2018-04-1010:08kirill.salykinHi guys
Do I understand correctly that client library (datomic.client) gets all attention now (because of Datomic Cloud) and peer library will be eventually deprecated?#2018-04-1011:51val_waeselynckWhoa I certainly hope not!#2018-04-1012:00marshallno. peer will remain an active part of Datomic On-prem#2018-04-1012:40kirill.salykinGood to know, thanks for answering#2018-04-1013:40hmaurer@U05120CBV so you still believe that the peer model has advantages? or would you push new users towards the client model?#2018-04-1013:55marshallthere are definitely use cases where the peer makes more sense. we’re working on options for solving some of those issues in Cloud#2018-04-1013:55marshallThe ease of use of Cloud and the fact that you can use Client with either Cloud or On-Prem makes it my recommendation in general, however#2018-04-1015:42kirill.salykinSo cognitect positions client as preferred way?#2018-04-1015:51marshallif you’re getting started on a new project, yes i would suggest client, as it provides the most options. i wouldn’t suggest changing an existing application that already uses the peer#2018-04-1015:51marshallhttps://docs.datomic.com/on-prem/moving-to-cloud.html#2018-04-1015:51val_waeselynck@U1V7K1YTZ My way of summarizing it would be that Peers give you expressive more power but also more operational constraints / challenges - you then need to see which of those matter to you#2018-04-1015:51marshallalso: https://docs.datomic.com/on-prem/clients-and-peers.html#2018-04-1015:52marshallI agree with Val; The Client library is inherently more flexible and better suited to certain use cases (i.e. 
microservices)#2018-04-1015:52marshallit’s much lighter weight than peer#2018-04-1015:52marshallbut there are definitely things you can do in peer that would be very challenging (or impossible) with client#2018-04-1016:37kirill.salykinThanks again for answers!#2018-04-1014:10stijnis there a reason why a transaction would fail with (d/with db {:tx-data tx}) and work properly with d/transact? I'm using the client lib and this is the exception:#2018-04-1014:10stijnclojure.lang.ExceptionInfo: Datomic Client Exception {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :datomic.client/http-result {:status nil, :headers nil, :body nil}}#2018-04-1014:12marshall@stijn can you provide the stack trace (the ex-info)#2018-04-1015:15marshall@stijn can you share the actual tx data here?#2018-04-1017:09eraserhdI'm about to implement something that's kind of like more sophisticated pull expressions. They allow excluding nested entities that do not match criteria, for example... possibly back references, even. 1) Does this sound like something else which exists? 2) Has anyone else wanted or needed this?#2018-04-1017:25eraserhdTo back up, this is for computing dependencies between tasks which have an "input" pull and an "output" pull, as well as determining whether tree-like data satisfies an "output" pull.#2018-04-1115:49johnjhow do you enforce constraints in datomic cloud?#2018-04-1115:50alexmillerwhat kind of constraints do you mean?#2018-04-1116:05johnjlike a payment can be registered only if a customer exists#2018-04-1116:52eraserhdI have a mechanism for doing this, but there's nothing built in to Datomic.#2018-04-1116:53eraserhdEssentially, I have a transaction function that runs in the transactor, and it gets passed the tx. It uses d/with to build a db with the tx, then enumerates the constraints queries in the database, and ensures that none of them return data.#2018-04-1116:54eraserhdIf successful, it returns the tx as-is. 
Otherwise, it fails with a nice error message.#2018-04-1116:54eraserhdI'm looking for things to extract from my (overly large) project. Perhaps this is a nice thing.#2018-04-1116:56johnjah, you pass custom code to the transactor, is this supported in cloud?#2018-04-1117:07eraserhdHmm... I don't know. I mean, I'll bet you can install transaction functions, but I don't know much about cloud.#2018-04-1117:22mynomotoAfaik no custom transaction function on cloud.#2018-04-1117:32johnjhmm true, looks like it's not supported https://docs.datomic.com/on-prem/moving-to-cloud.html#sec-4#2018-04-1118:13alexmillerThis will be an evolving area of cloud#2018-04-1120:32hmaurer@alexmiller do you have a rough timeframe for this?#2018-04-1120:32alexmillerno#2018-04-1215:33souenzzoI need some predicate functions like ref? and to-many?, that check whether a keyword is a ref on the db.
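eraserhd's constraint-checking transaction function could be sketched like this for on-prem (the :constraint/* attributes and :myapp/guarded-tx name are hypothetical; as noted in the thread, this technique isn't available on Cloud):

```clojure
;; Speculatively apply the tx with d/with, then run every constraint
;; query stored in the database against the would-be db value.
{:db/ident :myapp/guarded-tx          ; hypothetical name
 :db/fn
 (d/function
  '{:lang :clojure
    :requires [[clojure.edn :as edn]]
    :params [db tx]
    :code
    (let [after (:db-after (datomic.api/with db tx))]
      (doseq [[q msg] (datomic.api/q '[:find ?q ?msg
                                       :where
                                       [?c :constraint/query ?q]
                                       [?c :constraint/message ?msg]]
                                     after)]
        ;; a constraint query returning any rows means a violation
        (when (seq (datomic.api/q (edn/read-string q) after))
          (throw (ex-info msg {:failing-tx tx}))))
      tx)})}
```

Callers then wrap their tx-data as `[[:myapp/guarded-tx tx-data]]`, so the checks run atomically inside the transactor.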
Is it a good idea to create a mem db at build time, install the schema, run a query, get the list of idents/keywords, and build these functions?
Or is it better to do it at runtime, with memoize or something like that?#2018-04-1218:44favilaThis seems a trivial predicate to write with d/attribute. Why are you thinking of caching, memoizing, etc?#2018-04-1221:52souenzzothere is no d/attribute on client API
https://docs.datomic.com/cloud/client/client-api.html#2018-04-1221:53souenzzoI'm using d/entity at the moment. But I wanna go to the client api#2018-04-1222:35favilaoh I didn't realize this was client api#2018-04-1223:57souenzzopeer API is way better than client API
but client API is tons of light years better than #sql 🙂#2018-04-1215:46eraserhdNot sure about "better", but I actually query my "model map" which contains attribute descriptions (and which get turned into schema change txs at system boot).#2018-04-1215:47eraserhdI actually have a number of phantom attributes, too, that get computed but never committed, and this structure has made that possible.#2018-04-1221:09rapskalianHey all, I'm having a bit of trouble trying to get a Lambda function connected to my Datomic Cloud db via the client API. My function keeps timing out with the following error:
Unable to execute HTTP request: Connect to
...
connect timed out
I've followed the instructions here: https://docs.datomic.com/cloud/operation/access-control.html
And here: https://docs.datomic.com/cloud/operation/client-applications.html
but still not having any luck 😕
My Lambda is configured to access the Datomic VPC via security group $(SystemName)-apps as in the documentation. I have not tried VPC Peering however. Is that a necessary step when trying to access Datomic Cloud from Lambda? Thanks.#2018-04-1221:09rapskalianI've even tried giving my Lambda function administrator privileges to make sure it's not something silly like that..#2018-04-1313:35marshall@cjsauer you don’t need to use vpc peering if your lambda is running in the datomic cloud VPC#2018-04-1314:41chris_johnson@cjsauer When you say “keeps timing out” do you mean you are running it again immediately after it times out and getting the same result? It might be that your initial run of the lambda is super slow because unless things have changed since I learned this, the first invocation of a VPC-hosted lambda has to wait on an ENI being provisioned and attached to the VPC#2018-04-1316:33rapskalian@U07HA15PY ah ok! Thanks for the reply. So I've set the Lambda timeout well above 2 minutes, and noticed the same issue. Could it be that there is a timeout somewhere in Datomic that is the culprit? Maybe I just need to bump that up...#2018-04-1316:34rapskalianIt's definitely the call to d/connect that is failing. d/client seems to execute just fine.#2018-04-1316:36chris_johnsonThe only other thing I’d recommend checking is security group settings - you will absolutely get a timeout if you try to access a port with no security group rule allowing ingress; this is a security feature in the same way that S3 returns a 403 for anything it can’t find, rather than a 403 for things that exist and you don’t have permissions for and a 404 for things that don’t exist#2018-04-1317:10rapskalianThat makes sense, I'll keep digging. I have indeed added my Lambda function to the "Apps" security group mentioned in the docs:
>Inside this VPC, the stack also creates an applications security group named $(SystemName)-apps that you can use for you client applications. The security group that the Datomic system instances run in allows access from the applications security group.
Curious though if maybe I need to do additional custom tweaks to the security group? It seems though like things are configured to be "plug-and-play", at least that's my interpretation of the docs.#2018-04-1314:41chris_johnsonbetween that and say JVM startup, you could easily hit a timeout. 🙂#2018-04-1314:54davidwIs there any way to ensure uniqueness using multiple attributes in datomic cloud? With on prem I believe I could use a transaction function but that's not an option with cloud. Is there another option?#2018-04-1315:35eraserhdDo you mean have multiple fields of an entity determine uniqueness, or do you mean the same value (like a uuid) can't be used for two different attributes?#2018-04-1316:32davidwI mean have multiple fields of an entity determine uniqueness. In my case they're both strings and it would be safe to concatenate them to produce a third field which I can make unique, but that feels a little clunky.#2018-04-1316:57favilaAlways writing all three fields (two source plus the composite unique index field) at the same time is the only way without a transaction function. We do use this technique (composite index field) with the peer lib. It's a little clunky but not terrible#2018-04-1316:41eraserhdI don't know anything about Cloud. I think the transaction function thing is the only way to enforce this sort of thing (scroll up) to April 11, but that doesn't work on cloud.#2018-04-1316:42eraserhdI had this kind of scenario once, and IIRC, when I figured out some modeling problems it went away. So... not saying your problem will go away... but I am curious about the specifics.#2018-04-1320:50rapskalianStill having issues trying to connect a Lambda function with Datomic Cloud via the client API...I may be misunderstanding this passage in the docs:
>Inside this VPC, the stack also creates an applications security group named $(SystemName)-apps that you can use for you client applications. The security group that the Datomic system instances run in allows access from the applications security group.
I'm taking this to mean that if I add my Lambda function to the Datomic VPC, with access to all 3 subnets, and I have set the Lambda function to use the "AppsSecurityGroup" security group, should this be sufficient to connect to Datomic? I'm still seeing timeout errors as above, but have tried everything I can think of...including giving complete admin access to my Lambda functions in a last ditch effort. What else could I be missing? Is there some extra security group magic that I need to tweak?#2018-04-1320:51rapskalianI'm attempting to use the endpoint: http://entry.%s.%s.datomic.net:8182, as it looks like the Route 53 hosted zone for Datomic is attached to both the default VPC, and the Datomic VPC, so I'm assuming that this address should resolve properly...#2018-04-1320:55rapskalianJust trying to rule out all possibilities...I've tried giving the function explicit permission to one specific Datomic db as the docs describe, but still no luck.#2018-04-1321:19rapskalianMight be helpful to reiterate that this is the error I'm seeing:
Unable to execute HTTP request: Connect to
...
connect timed out
Specifically, it's the http://s3.amazonaws.com request on port 443 that's failing. Sorry to be flooding the channel, I'm just a bit out of my element here with these networking issues 😓#2018-04-1613:50marshall@cjsauer What is in your connection config map?#2018-04-1613:54marshallI just saw that you got it sorted. I’ll look into adding the information about public NAT access to the docs.#2018-04-1614:33rapskalianCool, thanks @U05120CBV#2018-04-1402:41Wes HallGiven Datomic Cloud's current lack of support of excision, it would seem that it is not safe to use when the EU GDPR regulation comes into force in May. I am suddenly fairly concerned about this. Part of the regulation is that any EU citizen can request that their data be permanently erased and datomic cloud currently lacks a feature to be able to comply with these requests 😕#2018-04-1408:54val_waeselynck@U9HA101PY My advice is to store privacy sensitive values in a complementary KV store with UUID keys that are referenced from Datomic. It's not as hard to set up as it seems, especially once you've realized that you can build some very generic querying/transacting helpers using Specter and tagged literals. Will blog about this soon.#2018-04-1408:59val_waeselynckNote that such issues affect Peer based systems as well, because Datomic excision is not really a practicing solution for data erasure - especially if you have to erase any personal information after 3 years.#2018-04-1411:07hmaurer@U06GS6P1N what do you mean by “tagge literals” in this context?#2018-04-1411:09val_waeselynck@U5ZAJ15P0 [[:db/add 3242525424 :contact/email-ref #privacy/to-be-replaced-by-a-key ["#2018-04-1411:09val_waeselynck#privacy/to-be-replaced-by-a-key [" is the tagged literal of interest here.#2018-04-1416:41Wes Hall@U06GS6P1N Interesting. A similar thought did cross my mind, but not as developed as what you are describing here. 
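val_waeselynck's complementary-KV-store pattern might be sketched like this (`kv-put!`, `kv-delete!`, the store handle, and the attribute names are all hypothetical):

```clojure
;; Keep the erasable value out of Datomic: store it in a side KV
;; store under a random key, and transact only the key.
(let [k (str (java.util.UUID/randomUUID))]
  (kv-put! kv-store k "alice@example.com")            ; hypothetical KV client
  @(d/transact conn [[:db/add user-eid :contact/email-ref k]]))

;; A GDPR erasure request then touches only the KV store; Datomic keeps
;; an immutable reference to a value that no longer resolves.
(kv-delete! kv-store k)
```

Reads go through a small resolver that looks up `:contact/email-ref` keys in the KV store, which is where the tagged-literal helpers val mentions come in.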
I think a problem with this is that nobody seems completely clear on what constitutes "personal data" under GDPR. I'd probably end up storing most of the data in this way. I wonder then if I am better just using something like Dynamo directly.#2018-04-1417:10daveliepmannI don't disagree with the complementary-KV-store approach as perhaps the best solution in the near term, but from an operations/infrastructure or business perspective "just have a second database for anything you might be legally required to delete" is, to put it mildly, simply not convincing.#2018-04-1417:11daveliepmannIt's not clear to me what "Datomic excision is not really a practicing solution for data erasure" means?#2018-04-1419:49val_waeselynck@U9HA101PY Being in a European company close to the team that deals with these issues, I may have some more precise knowledge. GDPR mostly concerns itself with data that can lead to identifying a person, which includes email addresses, phone numbers, IP addresses, first and last name, etc. In particular, the GDPR requires that such data be collected with explicit and informed consent, that it may be exhaustively deleted or exported upon request, and that it should be kept for a finite amount of time (typically 3 to 5 years).#2018-04-1419:52val_waeselynck@U05092LD5 Regarding excision: the Datomic team themselves said that excision is an very costly operation, that should only be performed under exceptional circumstances. Because of the 'limited retention period' constraint, which eventually requires to erase data at the same rate as it was ingested, it becomes clear that Datomic excision is not a practical solution.#2018-04-1419:56val_waeselynck> "just have a second database for anything you might be legally required to delete" is, to put it mildly, simply not convincing.
@U05092LD5 not sure I understand what your point is here - I'm not trying to convince anyone, just to share my solutions. Trust me, I'm also a stakeholder when it comes to both business and infrastructure.
Again, I will write about that in more details one of these days, but I think Datomic mostly has an edge over mutable databases here. Even with an SQL database, I don't think it's robust to approach this problem by just saying "I'll just null out the appropriate columns in the appropriate rows when the time comes", because your system should record the fact that 'this datum was erased for privacy reasons at this time etc.' At this point, the generic schema and reified transactions facilities of Datomic become an advantage to tackle this problem.#2018-04-1608:42val_waeselynck> It's not clear to me what "Datomic excision is not really a practicing solution for data erasure" means?
@U05092LD5 I realize I made a typo, I meant practical#2018-04-1612:56hmaurer@U06GS6P1N hang on, so if I got this correctly, you are not required to erase all user data? you are only required to erase ways to identify the users?#2018-04-1612:57hmaurerthat’s a bit blurry though, since surely you could identify the users based on patterns in their data beyond their name/email#2018-04-1612:57hmaurere.g. if you are storing GPS location data on a user#2018-04-1613:08val_waeselynck@U5ZAJ15P0 well there is always data related to a user that you need to keep, be it only for bookkeeping, e.g you won't delete the orders placed by a user. Regarding GPS data, this usually counts as personally identifying, just like IP addresses and cookies.#2018-04-1616:33Wes Hall@U06GS6P1N I didn't mean that I don't know, as much as the fact that (as with most regulations like this) there are some grey areas that will probably get determined in later cases.
The problem with the approach that you describe is that you have to get it right from the start. If some value that you didn't think would be included in the definition of personal data is later deemed to be included then you are fucked. If some dev on your team forgets to include the offloading of storage as they frantically hack towards a deadline, you are fucked. There is no, "going back and fixing it later", which worries me.
As it happens, I absolutely adore the datomic model, and think that GDPR is mostly a shit-show, which, as usual, hasn't been validated with the real-world, but the fines are simply too high to take the risk I think.#2018-04-1616:40Wes HallIncidentally, what is interesting is that having read some information about how people are dealing with backups (i'm pretty sure that nobody is going to restore every single backup in order to remove some piece of data on request), many people are suggesting that they are going to implement some filter mechanism such that if a backup is restored at any point, any data marked for deletion is removed during the restore process, rather than from storage. Quite a few people seem to think this constitutes "reasonable steps", so I don't know if something like this can be applied to a live system of immutable record. If you could create some kind of datom filter and centralise it in the peer server... maybe that works.#2018-04-1616:42Wes HallLaw makers are unlikely to make the distinction between, "absolutely is not stored", and "absolutely cannot be used", but I suspect that if you have the latter thing properly implemented, you'd never get into trouble to the degree that you have to prove the first thing... but IANAL etc.#2018-04-1617:07val_waeselynck> If some dev on your team forgets to include the offloading of storage as they franctically hack towards a deadline, you are fucked. There is no, "going back and fixing it later", which worries me.
Well, FWIW, I am definitely in this situation, and I do think I can fix it in time. Migrating the code to use a secondary store only took a few days (including some hammock time to come up with this KV store approach), on BandSquare's codebase which is probably one of the biggest Clojure + Datomic codebases out there. Migrating the data will probably be more painful and require some downtime - maybe I'll do it via a sequence of massive excisions, maybe by rebuilding a new Datomic database at the application level - but it's definitely doable.#2018-04-1520:09Ghas anyone ever got datomic cloud to work with aws lambda on a vpc? I’m having issues similar to https://forum.datomic.com/t/datomic-cloud-with-aws-lambda/342: my lambda can’t reach the datomic instance on the right port (as in trying to talk to the right ip in the right port just times out)
If I launch another instance in that VPC, the instance can talk to datomic, so it seems the problem is with the lambda configuration. vpc/subnets/security groups are all correct tho#2018-04-1520:14Goh, I see @cjsauer ’s similar issues above. I can rule out jvm start times etc. as I’ve resorted to running commands on the host directly through https://github.com/iopipe/lambda-shell
also it isn’t anything with dns resolution because I’m trying to talk to the right ip directly.
calvin — I doubt this is IAM related because the port isn’t even accessible. it’s almost as if the lambda can’t reach the vpc at all#2018-04-1520:40rapskalian@lewis I was able to get this working after reading the following from the AWS docs:
>When you add VPC configuration to a Lambda function, it can only access resources in that VPC. If a Lambda function needs to access both VPC resources and the public Internet, the VPC needs to have a Network Address Translation (NAT) instance inside the VPC.
Source for that is here: https://docs.aws.amazon.com/lambda/latest/dg/vpc.html
Datomic apps access both VPC resources and apparently the public internet in order to access S3, so you can't put them in the existing public subnets. What you have to do is create a few private subnets in the Datomic VPC, and configure them with a NAT Gateway for internet access as the docs say. These are then what the Lambda function goes in.#2018-04-1520:41Gooh gonna try that later! thanks for your help!#2018-04-1520:42rapskalian@lewis no problem. Here are the docs for setting up those private subnets btw: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html#2018-04-1520:46rapskalianWondering if maybe the above information would be a helpful addition to the docs? I imagine this is a common tripping point for new Lambda+Datomic users.#2018-04-1602:31chris_johnsonI wonder if you can get away with building your own for-Lambda VPC and using VPC peering to “attach” to the Cloud VPC without having to manually make any changes to it#2018-04-1602:31chris_johnsonalso thank you for figuring that out, @cjsauer. If this Slack had a karma-bot I would give you a kudo#2018-04-1614:36rapskalian>building your own for-Lambda VPC
@chris_johnson that's a good idea. In that same line of thinking, it'd be really cool if the Datomic Cloud stack included a few private subnets (or even a separate VPC like you mentioned) out-of-the-box for client applications.#2018-04-1614:57stijnis there an alternative for conformity with datomic client?#2018-04-1615:32rapskalian@stijn maybe not exactly what you're after, but there is some sample code for doing schema conformance in the day of datomic repo: https://github.com/Datomic/day-of-datomic/blob/master/src/datomic/samples/schema.clj#2018-04-1618:42stijn@calvin, yeah I can use that until transaction functions (or similar) are available in datomic cloud. Thanks!#2018-04-1619:42Petrus TheronHow to model property graphs with Datomic? E.g. given a sentence like "Lawrence of Arabia rode 100km on horseback over 3 days," how can we model the semantic facts that can be extracted from the sentence including the context so that unit bases are not lost, e.g. facts: [{:db/id "sentence1" :source/text "Lawrence of Arabia rode 100km over 3 days" :subject {:db/id "person1" :person/name "Lawrence of Arabia"} :travel/distance {:distance/meters 300000 :distance/unit :km} :travel/time {:time/hours 72 :time/unit :days}}]. It bothers me that I'm having to make "component" entities when it feels like these aren't really entities, just facts about facts. I heard @stuarthalloway mention that transaction metadata might be more suitable for "edge attributes", but this means I have to trigger multiple transactions where one would suffice. Has anyone had trouble with this? Is this Datomic the right tool for the job here?#2018-04-1619:44hmaurer@petrus where did @stuarthalloway mention this? out of curiosity#2018-04-1619:46Petrus Therontake it as hearsay until I can recall the source, because I heard it several weeks ago on a podcast about a podcast of which I don't recall the name. 
It got me thinking that yes, edge attrs can be modelled as tx-meta, but it doesn't feel natural#2018-04-1620:32hmaurer@petrus without more context on what he said, that sounds very odd. In my opinion if you have edge data to store, you should represent the edge as an entity#2018-04-1619:48marshall@petrus I would probably use something like an enum for things like your distance and time units#2018-04-1619:49marshalli’m also unsure why you need nested entities for travel/distance, travel/time, and subject#2018-04-1619:53Petrus TheronI use sub-entities to retain context, so that I can adjust the source content in the dimension it was provided in, e.g. to show you a slider to change the no. of travel days, instead of basing it in unix timestamp seconds#2018-04-1619:54marshallkeep in mind, “sub-entities” aren’t really “nested” unless they’re components#2018-04-1619:54marshallif the nested entities have an explicit ID (like your :subject one does) they are entities with their own existence#2018-04-1619:55marshallthey happen to be referenced from another entity in this case#2018-04-1619:49marshallunless you’re planning on hanging other attributes off of those specifically#2018-04-1619:50marshalli.e. if you have other sentences about “Lawrence of Arabia”, then yes, probably make a separate entity for it#2018-04-1619:52Petrus Theronhow to answer, "who travelled 100km?" without tracking the subject of the activity as a person entity? Or, " who travelled on horseback?"#2018-04-1619:58marshallit really depends on how complex the possible data model is
if all your entities are “things that can travel” then their travel means and travel distances can be attributes directly on that entity (maybe)#2018-04-1619:59marshallbut if you’re interested in the travel itself, it may be worth reifying them as their own set of entities with references to the ‘things’ (the entities) that did the traveling#2018-04-1704:11Desmondso i just blew away my local m2 and now i'm having a hard time getting datomic back. i have the datomic pro starter edition and everything looks fine in the my.datomic portal. i have gpg keys locally that have been working fine before and they match what i see in my.datomic. i followed the instructions in lein help gpg and ran gpg --quiet --batch --decrypt ~/.lein/credentials.clj.gpg and gpg --use-agent --decrypt ~/.lein/credentials.clj.gpg. both work fine so the key should be unlocked, right? but when i try to install i get an unauthorized error. any ideas?#2018-04-1704:22Desmondi also have datomic-pro-0.9.5561.50.jar.lastUpdated in my local m2 now. what is this?#2018-04-1704:25Desmondapparently to indicate a failed download and waiting period before a retry#2018-04-1704:52Desmondand trying to download directly from my.datomic keeps stalling out#2018-04-1713:49jaretI just responded to your e-mail with some questions. Feel free to e-mail, direct message me here, or respond here with your findings.#2018-04-1718:14timgilbertFWIW, I've found it easier to just use the bin/maven-install script from the datomic distribution to get the jars into my local repo rather than trying to mess with the GPG stuff#2018-04-1800:16hoppydoes anybody know how to express '[:find (?e ...)]' in the query as a map?#2018-04-1800:19hoppynm, forgot to wrap in a vector#2018-04-1814:24joshkhis anyone worried about Datomic Cloud's lack of excision in relation to GDPR taking effect next month?
if so, any thoughts or tips for mitigating the situation?#2018-04-1814:33val_waeselynck@joshkh Best solution I can think of is 1. refactor your code to store sensitive data in a complementary store 2. manually export all the non-sensitive data from your Cloud deployment and import it into a new Cloud deployment (yes, that will require downtime). Will blog about 1 soon#2018-04-1814:39joshkhah yes, thank you. we've been exploring option 2 as a last resort. the whole situation is a bit frustrating in the sense that option 1 still requires a fair amount of dev work while keeping two databases in sync. just to be clear, you're suggesting that datomic only stores a reference to some user ID in another database, and that's where the personally identifiable information goes?#2018-04-1815:16val_waeselynckTo be clear, to me these are not 2 options to choose between, but two steps to take in conjunction. Step 1 deals with future data, and step 2 deals with past data#2018-04-1815:21val_waeselynck> i've heard conflicting arguments that even maintaining some user id (even if it's only a reference to an external data source) is not within the "spirit" of GDPR.
I think whoever said that has not thought this through from an IT perspective (granted, this is common amongst lawyers). This really sounds like an unreasonable expectation, as it can prevent things like accounting from being done properly, and I'm taking the bet that GDPR will not get enforced to this extent.#2018-04-1815:22joshkhah. fortunately we don't have any past data. it's a brand new project so.. lucky us!#2018-04-1817:33stijnWhat about using noHistory on the attributes that contain personal information?#2018-04-1817:48joshkhwithout knowing how datomic works under the hood it's not an option. according to the documentation:
The purpose of :db/noHistory is to conserve storage, not to make semantic guarantees about removing information. The effect of :db/noHistory happens in the background, and some amount of history may be visible even for attributes with :db/noHistory set to true.#2018-04-1817:49joshkhthanks for the suggestion though#2018-04-2014:06stijnIt was a question rather than a suggestion (because we're in the same situation), but you've answered it, thanks 🙂#2018-04-1814:42joshkhi've heard conflicting arguments that even maintaining some user id (even if it's only a reference to an external data source) is not within the "spirit" of GDPR. anywho, i'd love to read your post when you've finished. where can i find your blog? i'll bookmark it in the meantime.#2018-04-1815:15val_waeselynck@joshkh https://vvvvalvalval.github.io/#2018-04-1815:22joshkhcheers#2018-04-1816:19octahedrionthere has been a feature request since March for targeted excision in Datomic Cloud but no updates since then. I don't understand why Datomic Cloud doesn't support it but on-prem does#2018-04-1817:43mynomotoAnyone trying to run dev against datomic cloud with the socket connection? For me it stops working all the time generating timeout exceptions, is there something I can do to have a better experience during dev?#2018-04-1817:50joshkhi have the same problem and have learned just to deal with it#2018-04-1818:00alexmillerI use it all the time and it drops about once per day for me#2018-04-1818:00alexmillerI’m specifically talking about the socks proxy#2018-04-1818:01alexmillerare you saying you see timeouts sending queries / txns / etc?#2018-04-1818:05mynomotoI'm talking about the socks proxy. I get timeouts when running tests which create the client, create a new database, create a connection and test stuff. But for me it's more like once every 10 minutes than once a day, which makes for a really bad experience.
Restarting the proxy solves the problem but I would like to have a smoother experience.#2018-04-1818:07marshall@mynomoto Some folks here discussed using a keep-alive tool of some sort for the proxy, but I can’t recall the specific one they mentioned. I’ll see if I can find it again#2018-04-1818:09mynomoto@marshall I would appreciate that, thanks!#2018-04-1818:13mynomotoI could put that script to test the socks proxy in a loop but I'm not sure if that's a great idea.#2018-04-1818:15marshalli believe autossh was the tool that was discussed#2018-04-1818:16marshalland replacing the ssh command in the provided script with an analogous autossh command#2018-04-1818:16mynomotoI will check that out, thanks!#2018-04-1818:19marshallI believe
autossh -M 0 -o "ServerAliveInterval 5" -o "ServerAliveCountMax 3" -v -i $PK -CND ${SOCKS_PORT:=8182}
was the suggested replacement command.
I can’t comment on the specific efficacy of it, and I wish I could recall who to credit about it#2018-04-1818:27mynomotoTrying it now, I will report the results later, thanks.#2018-04-1818:40chris_johnsonquestion about ongoing operation of on-prem: do you have AWS AMIs for the default transactor stack available in us-east-2?#2018-04-1818:41marshall@chris_johnson yep#2018-04-1818:41marshallhttps://docs.datomic.com/on-prem/aws.html#2018-04-1818:41marshallThat page covers how to run the stack. Using us-east-2 should work fine#2018-04-1818:51chris_johnsonI see, I was trying to run a build out of a local copy of datomic-pro-0.9.5372, which obviously predates the existence of us-east-2#2018-04-1818:51chris_johnsonsymlinks: not always your friends#2018-04-1819:03hmaurer@marshall is Datomic Cloud just a “managed version” of Datomic on-prem? or is it actually a different product? (as in, a different codebase with some features only available in cloud)#2018-04-1819:03alexmillerit is a different product and mostly different code base with a totally different architecture#2018-04-1819:04marshall@hmaurer it is a different product. it uses the same data model, but has a strictly different architecture and use of storage (among other things)#2018-04-1819:04alexmillerjinx#2018-04-1819:04marshall@chris_johnson Ah, that would do it.#2018-04-1819:04marshall@alexmiller you type faster than i do#2018-04-1819:04alexmillerfaster but worser#2018-04-1819:05hmaurer@marshall do you intend on keeping the features available on on-prem the same as on cloud? or could we end up in a situation where an application using Datomic Cloud cannot migrate to on-prem easily? (and is therefore “locked” on AWS)?#2018-04-1819:08hmaurerAlso, from what I understand transaction functions are not available on Cloud (yet).
Is there another way to safely enforce invariants?#2018-04-1819:09marshall@hmaurer feature dev will continue on both products, but we can’t guarantee every feature will come to both products#2018-04-1819:09marshallwe do intend to support the same API (Client) for both#2018-04-1819:10marshallWe are working on options for the problems solved with txn functions (invariants being one of those)#2018-04-1819:10marshallcurrently you can use the built in cas functionality to force atomicity on certain kinds of updates#2018-04-1819:10hmaurerGreat to hear. Also, can we expect to see peer support on Cloud in the near-ish future? And for txn functions, can we expect to hear more about this before the end of the year?#2018-04-1819:11hmaurerYep, but cas is quite limiting from what I understand#2018-04-1819:11hmaurerit does work in some cases though#2018-04-1819:11marshallI don’t have a timeline for any features#2018-04-1819:12hmaurerIs peer support on Cloud something you plan to include at least?#2018-04-1819:13marshallWe are interested in solving the problems that peer helps with (i.e. code/data locality), but have not determined if there will be “Peers” per-se in Cloud#2018-04-1819:14marshallor if there will be other/preferable ways to achieve those goals#2018-04-1819:15hmaurerI see. Last but not least: do you allow querying the indexes directly in Cloud? And do you allow listening to the transaction log?#2018-04-1819:15marshallyes, there is direct index access (https://docs.datomic.com/cloud/query/raw-index-access.html)#2018-04-1819:16marshallthere is not currently a tx-listener feature (like the tx-report-queue) in Cloud. For most use cases polling should be totally fine, but we’re interested in feedback here as well#2018-04-1819:16hmaurer@marshall do you mean polling using tx-range?#2018-04-1819:17marshallpotentially. 
depends what you’re looking for#2018-04-1819:17marshallif you want to know if anything has been updated, you could just inspect the basis-t#2018-04-1819:17hmaurerbuilding a generic worker which listens to the transaction log and performs some task#2018-04-1819:17hmaurere.g. keep an elasticsearch instance in sync#2018-04-1819:17marshallbut, yes, you could use tx-range to get latest txns as well#2018-04-1819:20hmaurerSorry, yet another question: is it possible to use Cloud from outside AWS? e.g. for an app on Heroku to connect to a Cloud setup#2018-04-1819:20hmaurerwithout heroku private spaces#2018-04-1819:20marshallyes, it is possible, but you will have to handle permissions/setup for the communication channel#2018-04-1819:20marshallby default Datomic Cloud runs in a private VPC in AWS#2018-04-1819:21marshallthat doesn’t allow traffic in from outside (other than via the bastion)#2018-04-1819:21marshallso you’ll have to configure the communication channels to allow that yourself (with the associated risks of having your DB available on the internet)#2018-04-1819:22marshallthe advantage of accessing from within AWS (same or different VPC) is you can use IAM roles and security groups to control that access very specifically#2018-04-1819:24hmaurerI see. Using it within AWS seems preferable indeed; I just wanted to know if the option was there to use it remotely.#2018-04-1819:24hmaurerThank you 🙂#2018-04-1819:28marshallsure#2018-04-1820:05mynomotoI'm doing a transaction and got a server error:
datomic.client.api/transact api.clj: 268
datomic.client.api/ares api.clj: 52
clojure.core/ex-info core.clj: 4739
clojure.lang.ExceptionInfo: Server Error
data: {#object[clojure.lang.Keyword 0x4f9fa472 ":datomic.client-spi/context-id"] "efd6cf48-685f-4055-b56e-1242be7ac557", #object[clojure.lang.Keyword 0x33705c6a ":cognitect.anomalies/category"] #object[clojure.lang.Keyword 0xa9f85af ":cognitect.anomalies/fault"], #object[clojure.lang.Keyword 0x2cc736a2 ":cognitect.anomalies/message"] "Server Error", #object[clojure.lang.Keyword 0x6f72f901 ":dbs"] [{#object[clojure.lang.Keyword 0x634e59fc ":database-id"] "d91f432b-1821-4213-8b80-e8d59a4e7b8c", #object[clojure.lang.Keyword 0x5cd4f36 ":t"] 5, #object[clojure.lang.Keyword 0x60c82a21 ":next-t"] 6, #object[clojure.lang.Keyword 0xc9d4817 ":history"] false}]}
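Errors like the one above carry their detail in the ex-info data map, under the :cognitect.anomalies/* keys. A minimal sketch of pulling those out (the helper name `anomaly-info` is illustrative, not part of the client API):

```clojure
;; Sketch: extract the Cognitect anomaly keys from a client-api error.
;; `anomaly-info` is a hypothetical helper, not part of datomic.client.api.
(defn anomaly-info [^clojure.lang.ExceptionInfo e]
  (select-keys (ex-data e)
               [:cognitect.anomalies/category
                :cognitect.anomalies/message]))

;; e.g. wrapping a transact call (assumes `conn` and `tx-data` exist):
;; (try (d/transact conn {:tx-data tx-data})
;;      (catch clojure.lang.ExceptionInfo e (anomaly-info e)))
```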
#2018-04-1820:06mynomotoHow do I find what is wrong?#2018-04-1820:34mynomotoautossh works great for me! No more timeouts since I started using it, which makes the development flow way better.#2018-04-1820:35mynomotoAbout the error above, looks like you cannot delete a database and immediately create one with the same name. I was doing it in tests and that caused the error. Adding a random suffix to the database name fixed the problem.#2018-04-1820:38marshallCorrect, there is a small window when you can't immediately reuse a db name. Having a random suffix is a good solution#2018-04-1820:55mynomoto@marshall I'm not sure if it is possible but a more specific error message could be useful for that.#2018-04-1821:30alexmillerit is particularly confusing because the create-database succeeds but subsequent transacts fail#2018-04-1910:15caleb.macdonaldblackAnyone know any easy way to count how many datoms datomic has stored? I’d like to see how close I am to the recommended 10bil limit#2018-04-1910:38robert-stuttaford(count (seq (d/datoms db :eavt))) 🙂#2018-04-1910:58caleb.macdonaldblackHey thanks!#2018-04-1911:04val_waeselynckYou may want to do that on a history db#2018-04-1911:37robert-stuttafordexcellent point#2018-04-1911:57stuarthalloway@caleb.macdonaldblack don’t walk the seq like that. Use the client API and call db-stats https://docs.datomic.com/client-api/datomic.client.api.html#var-db-stats#2018-04-1911:58stuarthalloway@robert-stuttaford walking that seq will pull all X billion datoms through the peer, pounding storage and blasting whatever was cached#2018-04-1911:58stuarthallowaynote to self: add the db-stats API to peer#2018-04-1911:58robert-stuttaford-grin-#2018-04-1912:00robert-stuttaforddb-stats on peer would be rad, yes please#2018-04-1912:06caleb.macdonaldblack@stuarthalloway Ahh thank you. I’m currently just messing around in a non-production side project so no harm done.
Good to know for future though#2018-04-1912:41val_waeselynck@marshall About Excision, I found the following assertion in the On Prem docs (https://docs.datomic.com/on-prem/excision.html#limitations):
> Excision of :db/fulltext attributes is not supported.
Is this still the case? It does seem to work on my dev connection on my local machine. I excised a :db/fulltext attribute, called syncExcise, and obtained different search results before and after the excision.
I'm on Datomic 0.9.5407#2018-04-1912:56marshall@robert-stuttaford @caleb.macdonaldblack You can also look at the :Datoms metric in the logs (or metrics), reported on every indexing job - that will tell you how many total datoms are in your DB#2018-04-1913:28caleb.macdonaldblackI've never actually read those. I didn't know that you could see the datom count there. Thanks!#2018-04-1913:01stuarthalloway@val_waeselynck excise completely ignores the fulltext index. You will get different results because of changes in the other indexes, but the fulltext index is not excised.#2018-04-1913:07val_waeselynck@stuarthalloway alright, so assuming I have some data in a :db/fulltext attribute that I need to erase for legal reasons, is there a way to use excision (or something else) to that end?#2018-04-1917:46stuarthallowayexcision will not erase the fulltext index, and I would recommend against using fulltext on things that might need excision#2018-04-1919:10val_waeselynckYeah, I screwed up making everything fulltext by default. Will have to rewrite the db manually. Hell, that will probably make for another interesting blog post.#2018-04-1919:49souenzzoIs there ANY chance that @(d/transact conn tx-data) installs data in the DB and still throws an exception?#2018-04-1919:50souenzzois it safer to use (d/transact-async conn tx-data)?#2018-04-1920:11marshall@souenzzo if a transaction throws an exception the entire transaction is aborted#2018-04-1920:12marshallthe dereferencing itself could time out or hit some issue#2018-04-1920:12marshallusing either sync or async — so if you see an error you should query to determine if the transaction completed or not#2018-04-1923:59brunobragahey guys, I have a dummy question. I read everywhere that datomic and immutability provide a way to get rid of side-effects and that concurrent programming becomes much easier...but that just does not get into my head.
If I have a database, and multiple clients are reading and writing into it, how does a complete copy of the data (which is immutability) solve the problem of one of these clients simply having a copy that is not up to date?#2018-04-2004:37val_waeselynckImmutability allows perception to be consistent and free of coordination; it doesn't prevent clients from perceiving a past view of the world when reading, which is a limitation that all databases have and is due to the physical way in which information propagates. When writing, Datomic prevents race conditions by running transactions serially (on an up to date view of the db); immutability doesn't especially help here, except for the performance aspect of writes not being impeded by reads.#2018-04-2000:53caleb.macdonaldblack@brnbraga95 You make updates using database functions. For example if you had a like counter you would read the current like count, increment its value and then set the new value. During the time that you read the value, 10 other clients may have as well and they all write the outdated value plus 1. That means you’re not guaranteed to get all 10 increments. If you use a database function though, you are asking datomic to stop processing any new transactions for a sec. During this time you read the like count, increment it and transact it. If 10 clients try to do this at the same time they must wait their turn as there is only one write transactor and it will only process one transaction at a time.#2018-04-2001:24brunobragacorrect me if I am wrong but, is that not just a lock?#2018-04-2003:55chris_johnsonTake a look at https://docs.datomic.com/cloud/best.html#optimistic-concurrency to see what (I believe) Caleb means by
> You make updates using database functions#2018-04-2017:51stuarthalloway@brnbraga95 no lock is needed. Datomic is a single writer system, so there is nobody else to lock out https://docs.datomic.com/cloud/transactions/acid.html#sec-3#2018-04-2122:46kingcodeI am trying to use the starter edition with an expired license key - is this OK or do I need to use datomic-free? thx..#2018-04-2122:51kingcodeAs instructed in on-Prem Getting Started, I tried to run
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d hello,datomic:
but got the following error:
Exception in thread “main” java.io.FileNotFoundException: Could not locate datomic/peer_server__init.class or datomic/peer_server.clj on classpath. Please check that namespaces with dashes use underscores in the Clojure file name.
…#2018-04-2204:14jaret@kingcode Client is not available on FREE. You would see this error if you were trying to use peer server on FREE or on a version of Datomic prior to the release of client. If your license expired before the release of client (11-28-16 … 0.9.5530) then you wouldn’t be able to start a transactor, let alone the peer-server. So I would confirm which version you’re using and make sure you’re using Datomic Pro 0.9.5530 or later.
Note: Datomic licenses (even starter) are perpetual, meaning you can continue using any version of Datomic released prior to your expiration date forever.#2018-04-2209:15dominicmIs there a better alternative to:
[(ancestor ?root ?binding)
[(identity ?root) ?binding]]
Where I'm trying to bind the root of the ancestry tree as well as the ancestors ( I have other rules for the ancestry part )#2018-04-2214:09favilaAFAIK identity is the only way to “alias” a binding (bind to a new name without transformation)#2018-04-2214:10favilaI use this trick frequently #2018-04-2213:47PontusI'm using a transaction function like this: (d/transact conn [[:db.fn/retractEntity id]]) -- I was thinking of adding a check that the entity that's going to be deleted also has the correct user ID (taken from a token). Currently thinking of doing this:
1. querying the user ID of the entity by the entity ID
2. check that it's the same one provided in the token
3. delete using the same method as before.
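The three steps above can be sketched as pure decision logic once the owner has been queried in step 1 (the attribute and function names here are illustrative, not from Pontus's schema; running this inside a transaction function would additionally make the check atomic):

```clojure
;; Hypothetical sketch of steps 2-3: given the owner found by the step-1
;; query and the user id from the token, either build the retraction
;; tx-data or refuse. `retract-if-owner-tx` is an invented name.
(defn retract-if-owner-tx [owner token-user-id entity-id]
  (if (= owner token-user-id)            ; step 2: compare with the token's user id
    [[:db.fn/retractEntity entity-id]]   ; step 3: the same retraction as before
    (throw (ex-info "entity not owned by this user" {:id entity-id}))))
```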
I was wondering if anyone knows if there is a simpler/better way to do that? I'm just curious if there's a SQL equivalent to AND WHERE userID = x or something similar#2018-04-2214:11kingcode@jaret Thank you for the info. The version I got a license for is 0.9.5497 - does that mean that all the Get Started instructions are not going to work for me?#2018-04-2214:12favilaIs that the original or latest version you have licensed? Licenses are a year#2018-04-2214:13kingcode@favila I believe that is the original version I have licensed.#2018-04-2214:13favilaYou should look at your http://my.datomic.com page and see what versions it lets you download#2018-04-2214:14favilaOr you can bisect versions and try your license key. Eventually you will find the highest version it works on#2018-04-2214:15favilaBut it should be any version released within a year of your license date#2018-04-2214:16kingcodeMy account doesn’t list version #, only expiry date and Type of license (Pro Starter), with scripts listing $VERSION_NUMBER only#2018-04-2214:17favilaYou should have a “downloads” page#2018-04-2214:19kingcodeAh I see - OK, I can get any up to 0.9.5697 (which I already got via the aforementioned script). However, as @jaret mentioned, the problem I have is due to my getting a version not covered by my expired license…#2018-04-2214:20kingcodeSo I tried running the Getting Started command against my old (non-expired) version, and got the Peer_Server class not found, still…don’t know what to do.#2018-04-2214:20favilaIs 5697 before client api?#2018-04-2214:20kingcodeI dabbled with datomic in the past and forgot all about it….want to learn it anew now.#2018-04-2214:21kingcodeDon’t know what client API version is…? Shouldn’t it be the same?#2018-04-2214:22favila5530 introduces client api#2018-04-2214:23kingcodeI tried to run the following command from my install dir:
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d hello,datomic:
#2018-04-2214:23favilaSo anything higher than that should work#2018-04-2214:23favilaWith the new getting started instructions#2018-04-2214:23kingcode@favila OK…then I guess my old version isn’t going to cut it?#2018-04-2214:23favilaYour license has access to a recent enough version#2018-04-2214:24favilaJust download it and use that instead#2018-04-2214:24favilaDatomic has three different things now#2018-04-2214:25favilaClient api via cloud service; client api via “on prem” service; peer api via on-prem#2018-04-2214:25favilaWhen you first got your license only the last one existed#2018-04-2214:27favilaGetting started now only seems to talk about client api in either cloud or on-prem scenarios#2018-04-2214:28favilaOr I don’t know where peer api introductions are#2018-04-2215:32kingcode@favila Ah OK Thank you!#2018-04-2309:20val_waeselynckProcessing the Log to rewrite a database, i'm seeing some weird datoms in the log, of the form
[[17592186523036 :myapp/some-attribute :db.sys/partiallyIndexed 13194140011931 true]
[17592186485599 :myapp/some-attribute :db.sys/partiallyIndexed 13194140011931 false]]
Obviously these have not been added by me - what do they mean, and how should I process them?#2018-04-2309:38val_waeselynckNevermind, I got this wrong, these were actually long-typed attributes that I mistakenly processed as if they were ref-typed.#2018-04-2311:26hmaurer@val_waeselynck can you elaborate on this very quickly? I am curious (in case I encounter a similar “issue” myself)#2018-04-2314:30val_waeselynck@U5ZAJ15P0 I've set some attributes to have :db/fulltext true, which is irreversible and prevents excision. So now I'm rewriting my entire database by processing the Log, reading the set of datoms added by each tx and transforming it into a transaction request (which is not trivial). My problem here was caused by a bug in my processing: I was interpreting the value of a datom as if the attribute was ref-typed, when in fact it was long-typed.#2018-04-2312:18prozzhi, some recent articles from reddit got me interested in datomic. i got a question (probably very basic): where are indexes kept? is it client side? id suppose so as queries are executed client side from what i reckon. if thats the case, do you have any stats on how much time it takes for datomic to warm up and build indexes? if those are kept in memory are there any limits in database size for this to work optimally/at all? im curious about this technology, but at the same time i feel like im missing some important concepts.#2018-04-2314:32val_waeselynckThey are primarily kept in storage, and redundantly propagated to many places, including in an in-memory cache on Peers, which is where the querying happens.#2018-04-2314:32val_waeselynckThe time to warm up will largely depend on querying patterns#2018-04-2320:09prozzthanks#2018-04-2312:49jaret@favila @kingcode the peer api getting started is now here, under the on-prem docs.
https://docs.datomic.com/on-prem/peer-getting-started.html#2018-04-2313:19eoliphantHi, I’m playing around with microservice architecture with sort of ‘lite’ event sourcing, where datomic does its thing internally and onyx or something dumps events into kafka, etc. based on transactions i’ve reified with the appropriate metadata. In terms of sequencing, since I can do 1 transactor -> n db’s (I’m not going to be dealing with large write volumes), the various (since they are per db) onyx jobs that are extracting ‘events’ should have the correct timewise ordering, right?#2018-04-2315:57eraserhdIt looks like I'm seeing significantly more memory usage since upgrading the peer from 0.9.5561.50 to 0.9.5697. Does this make sense?#2018-04-2316:30gerstreeSomehow in the back of my mind I stored the fact that after excision a transactor restart is required. The docs don't mention it, but to fully excise that fact from my memory: is it correct that excision is an online process and no transactor restart is required? (Or maybe this was true before and it changed?)#2018-04-2316:31marshall@gerstree restart is not required; You may be thinking of restore - that requires a restart
Excision will happen “online” - it will be completely excised after an indexing job completes#2018-04-2316:33gerstreeIn the case of a restore that absolutely makes sense. Thanks for clearing this up!#2018-04-2316:33marshallno problem#2018-04-2316:35gerstreeThe question was GDPR related for a bit of background. That topic holds all of Europe hostage these days unfortunately...#2018-04-2317:05octahedrionwhat's the upper-limit on the number of dbs that can be queried at once ?#2018-04-2319:56eraserhdIs Datomic likely to significantly optimize one big bag of or expressions over individual queries in a doseq?#2018-04-2419:26souenzzoIt will "return nil" or throw?#2018-04-2419:26souenzzoI think that clojure.core/run! could help#2018-04-2319:56eraserhdIn a transaction function, even?#2018-04-2320:07eraserhdAh, about how many queries does datomic cache? That might affect it.#2018-04-2416:15rboydnot really a datomic question per se.. but to anyone using circleci to build a proj with datomic deps: how did you get the build agent to auth w lein gpg?#2018-04-2416:31eraserhdWe have a private S3 repo, which is publicly accessible but undiscoverable (big random hash in name) for purpose of builds. Set up as a repo named "private". We have Datomic and OneLogin jars in it.#2018-04-2419:24souenzzoCheck out this :)
https://github.com/eth0izzle/bucket-stream#2018-04-2417:51eraserhdCan d/next-t be used as a unique key for a db? I want to cache results of an expensive query so that it isn't run twice on the same snapshot.#2018-04-2419:21souenzzoUse d/basis-t.#2018-04-2419:22souenzzoYou may also check is-filtered and as-of-t#2018-04-2505:02Christian Johansen@rboyd in project.clj:#2018-04-2505:02Christian Johansen:repositories {“” {:url “”
:username [:gpg :env/datomic_username]
:password [:gpg :env/datomic_password]}}
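For reference, the full shape of that configuration in project.clj looks roughly like this (a sketch: the repository name, URL, and project coordinates are illustrative; the `[:gpg :env/...]` vectors tell Leiningen to try GPG-encrypted credentials first and fall back to the named environment variables):

```clojure
;; project.clj (sketch) — credentials resolve from GPG first, then from
;; the DATOMIC_USERNAME / DATOMIC_PASSWORD environment variables that
;; you configure in CircleCI.
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.9.0"]
                 [com.datomic/datomic-pro "0.9.5697"]]
  :repositories {"my.datomic.com"
                 {:url      "https://my.datomic.com/repo"
                  :username [:gpg :env/datomic_username]
                  :password [:gpg :env/datomic_password]}})
```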
#2018-04-2505:02Christian Johansenthen set DATOMIC_USERNAME and DATOMIC_PASSWORD in your CircleCI environment variables#2018-04-2505:03Christian Johansen(Build settings -> environment variables)#2018-04-2510:01val_waeselynckFor those who worry about GDPR: this Gist demonstrates an alternative to Excision for erasing data from Datomic. Hope this helps, feedback welcome.
https://gist.github.com/vvvvalvalval/6e1888995fe1a90722818eefae49beaf#2018-04-2515:29octahedrioncount me among those worried#2018-04-2515:32val_waeselynck@octo221 I take it you are not happy with this solution?#2018-04-2515:35octahedrionit's at least a solution#2018-04-2515:37octahedrionbut I don't understand why cloud doesn't support excision since it's such an important feature#2018-04-2515:37octahedriongiven that on-prem does#2018-04-2515:39val_waeselynckNeither do I. I will soon publish an article which should help alleviate the lack of Excision on Cloud.#2018-04-2515:41octahedrioncool#2018-04-2515:41octahedrionhave you thought about the possibility of using multiple DBs as an alternative too ?#2018-04-2515:41octahedriona db per user#2018-04-2515:42val_waeselynckNo, my approach is rather a complementary mutable KV store, turns out you can get a loooong way with that.#2018-04-2515:42octahedrionoh wait i just saw another thread#2018-04-2515:43octahedrionhmm i want datomic only#2018-04-2515:43octahedrionif possible#2018-04-2510:08robert-stuttaford@val_waeselynck what’s the biggest database you’ve used this on?#2018-04-2510:11val_waeselynck@robert-stuttaford BandSquare's, about 500k txes and 37M datoms, took about 4 hours to complete on dev storage on my local machine (note that this does not mean 4 hours of downtime).#2018-04-2510:31robert-stuttaford🙂 nice#2018-04-2510:33val_waeselynckNote that pipelining could theoretically be used to speed things up (as soon as you're confident there won't be errors), but I could not get it to work.
Unfortunately, I suspect this is a Datomic concurrency bug, but have not yet worked through a minimal repro.#2018-04-2510:35robert-stuttafordi think it’s important to mention the implications on your gist, @val_waeselynck that this is a much slower process than excision - similar to replacing an engine in a car, rather than removing a tiny piece while it’s driving#2018-04-2510:36robert-stuttafordi wonder how long it’d take to process our 72,891,554 txes#2018-04-2514:40val_waeselynckFrom my measurements, you probably won't do better than 30k tx/min#2018-04-2512:06val_waeselynck@robert-stuttaford You're right, I added a comment. https://gist.github.com/vvvvalvalval/6e1888995fe1a90722818eefae49beaf#gistcomment-2569689#2018-04-2512:08dominicmWhy do this, if it's slower than excision?#2018-04-2512:13val_waeselynck@U09LZR36F I tried to explain it in the Gist - please tell me if it's not clear?#2018-04-2512:16dominicmSorry, I should have read the gist more closely, you're right, thank you#2018-04-2512:27Christian JohansenMy intention for new systems is to try to avoid this problem by designing around it - using multiple databases#2018-04-2512:31val_waeselynckSure, on the other hand you may not always get things right upfront 🙂 (I know I haven't) in which case you will probably need a safety net#2018-04-2512:34Christian JohansenSure#2018-04-2512:36dominicm@U9MKYDN4Q you mean a mutable one for personal data?#2018-04-2512:37Christian JohansenPossibly, but not necessarily. Could also be interesting to use multiple datomic databases - maybe even a database per user + one that has all the interconnections.
Won’t work in all circumstances though#2018-04-2512:41dominicmDeploying a transactor per-user sounds expensive (in hardware)#2018-04-2512:52Christian JohansenYou can have multiple databases on the same transactor#2018-04-2512:52dominicmInteresting, I did not know that#2018-04-2512:52dominicmis there a significant overhead?#2018-04-2512:53Christian JohansenActually, I think maybe this depends on storage backend. Don’t quote me on it 🙂#2018-04-2512:53Christian JohansenIt’s something I’ve been meaning to look into anyway#2018-04-2514:00robert-stuttafordtotally can have multiple databases on a transactor, just like you can have multiple dbs on a mysql server or mongo server#2018-04-2514:00robert-stuttafordthere is peer memory overhead for each database of course#2018-04-2515:06folconIs there an issue with entity id collisions when querying across databases?#2018-04-2516:00val_waeselynck@U0JUM502E no, since in such cases you need to specify explicitly in which db you are matching a particular Datalog clause.#2018-04-2516:02folcon@val_waeselynck Not sure I understand, say you do a query such as
[:find ?e ?like
:in $db1 $db2
:where [?e :user/likes ?like]]
wouldn’t you just get a mix of entity id’s?#2018-04-2516:03val_waeselynckYou'd have to write it as
[:find ?e ?like
:in $db1 $db2
:where [$db1 ?e :user/likes ?like]]
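Expanding on that: when a query takes several database sources, every data clause is scoped to exactly one of them, so entity ids from different dbs never mix. A sketch with hypothetical attributes, assuming `db1` and `db2` are two database values in scope:

```clojure
;; Each data clause names its db source explicitly; ?like joins the two
;; dbs on a value, never on an entity id.
(d/q '[:find ?name1 ?name2 ?like
       :in $db1 $db2
       :where
       [$db1 ?e1 :user/likes ?like]
       [$db2 ?e2 :user/likes ?like]
       [$db1 ?e1 :user/name ?name1]
       [$db2 ?e2 :user/name ?name2]]
     db1 db2)
```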
#2018-04-2516:03val_waeselynckI think Datalog simply won't let you do what you suggested#2018-04-2516:04folconOk :)…#2018-04-2515:54timgilbertSay, I have a question about rules. Is it possible to have variables that only exist inside of the rule be exported to the calling query? Eg, if I have something like this:
(def rules '[[(tracks ?artist)
[?artist :artist/albums ?album]
[?album :album/tracks ?tracks]]])
...can I then access the ?tracks value outside of the rule, or bind it to another var or something?#2018-04-2515:57favilano#2018-04-2515:58favilaif you want an "out" parameter, add it to the rule#2018-04-2515:59timgilbertAh, so input to a rule doesn't need to already be bound to something?#2018-04-2515:59favila'[(tracks ?artist ?track)
[?artist :artist/albums ?album]
[?album :album/tracks ?track]]
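To make the "out parameter" point concrete, here is a sketch of calling that two-argument rule from a query (assuming a db value and an artist entity id are in scope; `%` is the conventional input symbol for the rule set):

```clojure
(def rules
  '[[(tracks ?artist ?track)
     [?artist :artist/albums ?album]
     [?album :album/tracks ?track]]])

;; ?track is produced inside the rule, yet is visible to the calling query.
(d/q '[:find ?track
       :in $ % ?artist
       :where (tracks ?artist ?track)]
     db rules artist-id)
```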
#2018-04-2516:00favilano, that would be pointless#2018-04-2516:00favilawell not pointless completely#2018-04-2516:00favilabut it would mean that rules could only serve like predicates or filters#2018-04-2516:00favilarules are actually constraint specifiers#2018-04-2516:00timgilbertRight, that makes sense. I'll mess around with it, thanks!#2018-04-2516:01favilaif you want to require a parameter to be bound (sometimes important for performance), surround the arguments with a vector#2018-04-2516:01favilae.g.#2018-04-2516:01favila'[(tracks [?artist] ?track)
[?artist :artist/albums ?album]
[?album :album/tracks ?track]]
#2018-04-2516:01favilathat means this rule can only run "in one direction" from a bound artist to an unbound track#2018-04-2516:01timgilbertAh, ok. I think I was confused about what that syntax meant#2018-04-2516:01favilabut a rule can run "backwards" too#2018-04-2516:03favilaso the rule name is bad, it's really describing a constraint you want to satisfied among all rule parameters, not input-output#2018-04-2516:03favilaa name like artists-tracks might make that clearer#2018-04-2516:04favilabut both 'tracks for artists' and 'artists for tracks' are valid names, because the rule expresses both#2018-04-2516:05timgilbertI see what you mean. artist-tracks would make sense if I used the vector args to make the artist required, yes?#2018-04-2516:06favilaI was hoping the name expressed the bidirectionality better (i.e. not with a required-bound arg)#2018-04-2516:06favilayou see how it's hard to name a rule#2018-04-2516:06timgilbertGotcha, yeah#2018-04-2516:07favilabut "tracks" definitely works if artist must be bound#2018-04-2516:07favilaits when either none or one or both can be bound that it's hard to name the rule#2018-04-2516:08favilayou have to name the constraint the rule itself expresses, not the "output" (because there isn't really any)#2018-04-2518:37octahedrionif you d/delete-database
is there any way to get it back ?#2018-04-2519:23favilaIf gc-deleted-dbs hasn't been run yet, the blocks are still in storage#2018-04-2519:23favilain theory you could shut down the transactor and manipulate the proper values in storage to "resurrect" it#2018-04-2519:23favilabut there's no cognitect-blessed way to do it#2018-04-2519:24favilayou're deep into undocumented internals at this point#2018-04-2519:24marshall@octo221 Datomic On-Prem or Datomic Cloud?#2018-04-2519:24octahedrionCloud#2018-04-2519:25marshallCan you issue a support ticket to the portal at http://support.cognitect.com#2018-04-2519:25octahedrion@marshall I haven't done it, I was wondering#2018-04-2519:26octahedrionwhat happens#2018-04-2519:26marshallah. It may be possible to recover some aspects of the DB, but don’t count on it#2018-04-2519:27octahedrionactually I was hoping that delete meant delete#2018-04-2519:28octahedrionpresumably dbs in cloud are stored in S3#2018-04-2519:28marshalls3, dynamodb, efs#2018-04-2519:29octahedrionand delete-database would delegate deletion to whatever AWS deletion mechanism they have#2018-04-2519:30octahedrionmeaning it's out of datomic's hands right ?#2018-04-2519:31marshallI don’t believe the individual segments are removed from storage#2018-04-2519:31marshallthe db is removed from the catalog#2018-04-2519:31marshallthe same as mentioned by @favila for Datomic On Prem#2018-04-2519:31marshallthey’re no longer accessible, however#2018-04-2519:31favilacloud will eventually GC them, or no?#2018-04-2519:32marshallgood question - let me get back to you#2018-04-2519:32favila(I'm only familiar with on-prem; there you have to schedule the GC yourself)#2018-04-2519:33favilabottom line is delete-database means the db becomes api-inaccessible, but does not guarantee that all bits that back the db were erased.#2018-04-2519:34favilaon on-prem, there is a separate process that does that, which you run at will. 
not sure yet what cloud does, but it's probably a similar process#2018-04-2519:34marshallagreed on both counts and I’m looking into the specifics of cleanup in Cloud#2018-04-2519:36favilanot to be too pedantic about it, but "bits are erased" is itself just a guarantee that whatever storage-level api you have cannot access them anymore. e.g. with an sql storage, you may still need to vacuum to remove the bits from the db's storage; and then you may need to write over the blocks on disk; etc#2018-04-2519:36favilait's apis all the way down#2018-04-2519:36marshallyep, very good point#2018-04-2519:37marshallthat’s one of the things that I suspect is going to make the GDPR stuff so hard to enforce/define/resolve#2018-04-2519:39octahedrionI thought that if you issued a 'delete' command to an AWS service, then it's Amazon's responsibility to ensure that the deletion is done correctly#2018-04-2519:40favilaI don't know what guarantees they make. At a minimum that is a guarantee that a "read" of that same item will not succeed via the s3/dynamodb/whatever api#2018-04-2519:40octahedrionyes#2018-04-2519:40favilabut perhaps some lower-level api could still read it#2018-04-2519:40favilai.e. it's not necessarily a guarantee that the bits are obliterated#2018-04-2519:40octahedrionand of course they could always be copying everything anyway#2018-04-2519:41octahedrionyou wouldn't be able to know#2018-04-2519:53octahedrionif you wanted to be sure, could you chase the individual segments and delete them ?#2018-04-2520:15marshallcleanup of deleted dbs is automatic in Cloud#2018-04-2521:50stijnis it possible to launch elasticbeanstalk instances in the Datomic Cloud VPC and have the ELBs running in the same VPC? Or is it always necessary to set up vpc peering when working with beanstalk?
(i'm a bit in the dark on the AWS concepts)#2018-04-2521:51stijnI tried running elastic beanstalk in the datomic apps security group, but it keeps on insisting that this security group does not exist (probably i'm missing some other configuration option. subnets?)#2018-04-2606:46dominicmDoes the transactor ever create symlinks? We've had a FIM exception where the Link Count of our datomic-pro folder changed; I can config around it, but I want to know why I'm doing it first.
This apparently did not get created within 20 minutes of the system starting, I'm not sure if that's weird or normal?#2018-04-3012:29marshallThe data directory is created by Datomic when it needs some local swap space, usually during an indexing job#2018-04-3012:43dominicmI see, that makes sense.#2018-04-2606:48dominicmThis happened in our only configuration which utilizes fulltext.#2018-04-2608:31stijnok, disregard my previous question, have setup VPC Peering instead. that works#2018-04-2612:51folconFor search scenarios, is there a good way to get related values?
Eg:
(fn [text]
(q '[:find ?id :in $ ?txt % :where [search ?txt ?id]]
(db conn)
text
'[ [[search ?txt ?id] [(fulltext $ :artist/name ?txt) [[?id]]]]
[[search ?txt ?id] [(fulltext $ :track/name ?txt) [[?id]]]]
[[search ?txt ?id] [(fulltext $ :release/name ?txt) [[?id]]]] ]))
Now for any result I want to grab details such as the artist name and a collection of tracks associated. Do I need to encode that into the rules itself? I was also thinking of using get-else, or get-some, but it feels clunky. Am I on the right track or is there a better approach?#2018-04-2613:50favilaUse a pull expression#2018-04-2613:51favilaYou can grab data with more binding clauses in the where, but this doesn’t allow “null” values and is more verbose#2018-04-2613:54folconYou can do joins in a pull?#2018-04-2613:55folconI thought that wasn’t possible? I’ve been using (pull ?id [*]) pull expressions and only get the local id values.#2018-04-2613:56favilaHuh?#2018-04-2613:57favilaWhat is a “local id value”?#2018-04-2614:03folconsorry, based on the context it’s either :artist/name, :track/name or :release/name which matches the fulltext of ?txt. I’ve used (pull ?id) a lot to get all the attributes+values, but I didn’t realise that you could pull values optionally through entity relations. I’ve been hand constructing queries that manually walk the relations with lots of get-else to catch missing values.#2018-04-2614:05folconthe (pull ?id [*]) was to my knowledge the most flexible version of this, as in grab all the values related to this entity id.#2018-04-2614:12folconAnd suddenly a whole new world has opened up! Thanks @U09R86PA4, I think I’ve worked out how to do it!#2018-04-2614:30folconFor anyone else: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/pull.clj#L72#2018-04-2615:54marshallIt should work fine to run EB instances in the Datomic VPC#2018-04-2616:17stijnyes, I guess i mixed something up on the naming of the resource groups. anyway, I think the vpc peering setup is cleaner in our case#2018-04-2616:23stijnI have 2 questions on the permissions: $(S3DatomicArn)/$(SystemName)/datomic/access/dbs/db/$(DbName)
1/ can I specify wildcard db-names?
2/ does this mean that also creation of the db is allowed, or only read/write access to the db itself?#2018-04-2617:03marshall@stijn you should be able to use wildcards - it will follow AWS IAM rules for getObject permissions on S3 objects#2018-04-2617:04marshallI believe create and delete db are admin-only permissions#2018-04-2708:51stijnif I get the following exception when using the client lib, is this due to the networking not being OK or the S3 permissions?
Caused by: clojure.lang.ExceptionInfo: Unable to connect to system: {:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message "Connection refused"} {:config {:server-type :cloud, :region "eu-central-1", :system "datomic", :query-group "datomic", :endpoint "", :proxy-port 8182, :endpoint-map {:headers {"host" ""}, :scheme "http", :server-name "", :server-port 8182}}}#2018-04-2708:52stijnI'm able to curl , so I guess the networking is OK, but the error doesn't mention anything about specific database access...#2018-04-2708:53stijn(that's on elastic beanstalk)#2018-04-2709:36stijnok, the proxy-port was still in the config#2018-04-2719:05eraserhdSo today, I refactored a dozen small queries, executed sequentially, into one query that's an or-join. I was surprised to find that it runs an order of magnitude slower. Is there any information on why this could be, or any information on how planning works?#2018-04-2719:05eraserhdIn this case, the normal case is for all queries to return no results.#2018-04-2719:07faviladatomic does almost no query optimization#2018-04-2719:08favilathe most important consideration is clause order: most selective clauses first#2018-04-2719:08favilahttps://matthewboston.com/blog/datomic-performance-with-clause-ordering/#2018-04-2719:09faviladatomic query does not reorder clauses#2018-04-2719:10favilais it possible for you to show us the queries?#2018-04-2719:12eraserhdI can. 
It will take a bit, though.#2018-04-2719:13favilayour or-join has nested ands, whose content is one of the smaller queries?#2018-04-2719:13favilaor did you refactor more than that?#2018-04-2719:13favilaI'm curious if using a named rule instead would give better performance#2018-04-2719:15eraserhdor-join with nested ands, and the ands should have the same thing as before.#2018-04-2719:23eraserhdHere's the big form: https://gist.github.com/eraserhd/23ab2c6d65c26638b0b5ab342521bfa7#2018-04-2719:26eraserhdThe and clauses are sorted so that they should consistently be in the same order. This reduces the query from 1.2s to 0.8s, presumably because of compile caching? Although, I tried inlining the new-result rule and it went back to 1.2s. I don't know why. The individual queries take a total of ~57ms.#2018-04-2719:29favilaall of these look like they are unavoidably full scans of an index?#2018-04-2719:30favilaperhaps you are io bound; running them in parallel increases cache stress#2018-04-2719:31eraserhdThe database is small, and I'm testing it on a local dev database anyway.#2018-04-2719:31eraserhdSmall enough to be in memory, I mean.#2018-04-2719:31eraserhdI reloaded the repl, and it got faster. Now it's just 5x slower.#2018-04-2719:32favilawhat is the peer's object cache size, and can it really fit the whole db in memory?#2018-04-2719:32eraserhdhmm.... where do I find that?#2018-04-2719:32favilahttps://docs.datomic.com/on-prem/caching.html#object-cache#2018-04-2719:34faviladefault is 50% of vm ram#2018-04-2719:34favila(java heap)#2018-04-2719:35eraserhdThat should be 2g, which should be waaay more than enough. The dev database is seeded from a file with at most 300 tuples in it.#2018-04-2719:35favilahuh#2018-04-2719:36favilayeah, no idea#2018-04-2719:36eraserhdLet me set it explicitly, though.
Just to see.#2018-04-2719:36favilathe only thing I can think to try is make one named rule and put each "and" as a separate implementation#2018-04-2719:37eraserhdThat's not hard.... but why would this help?#2018-04-2719:37favilamy understanding is that should be equivalent, but maybe or-join isn't quite the same#2018-04-2719:37favilabottom line running these in parallel is somehow making it slower#2018-04-2719:38favilathe only thing I can think of that could possibly do that is if memory pressure causes evictions for the various parts running in parallel#2018-04-2719:39favilaalternatively, there's something wrong with the query compiler; but since these are all big scans anyway I don't know what it could do that would be worse!#2018-04-2719:40favilathis is a minor point, but != should be faster than not= in cases where you don't need clojure type coercion#2018-04-2719:42favilaI think that's every case here; you use them to avoid the same item in a self-join#2018-04-2719:45markbastianWhen you create a database via datomic.api/create-database (e.g. datomic:<sql://test-db?jdbc:...>), where does the created database (e.g. test-db) get created?
Does it create an actual new table or db in the backing sql instance or is it all in the datomic.datomic_kvs table?#2018-04-2719:46favilait's in the datomic_kvs table#2018-04-2719:46favilaa datomic database uses the underlying storage like a key-value blob store#2018-04-2719:47favilaall datomic indexes are "inside" it--they're not visible to the storage tech#2018-04-2719:47favilaso "create-database" with an sql storage is an "INSERT" statement, not "CREATE TABLE" or "CREATE SCHEMA"#2018-04-2719:47favila(it's probably an UPDATE actually)#2018-04-2719:48markbastiancool, thanks#2018-04-2719:48favilaFYI transactor only needs SELECT INSERT UPDATE DELETE permissions for its own datomic_kvs table, and peers only need SELECT#2018-04-2719:48favilaif you want to add an extra layer of protection, you can use two different sql users#2018-04-2719:49markbastianah, cool#2018-04-2719:49markbastiandidn't know that#2018-04-2719:53markbastianAre there any good rules of thumb regarding how many actual datomic_kvs tables/transactors to have? I realize you only use one transactor per datomic_kvs table, but at what point would you want multiple transactors? For example, if an organization had a few projects would it make sense to have a single transactor and each project have its own virtual databases or would each want their own transactor?
That makes sense.#2018-04-2720:06eraserhdI think I just figured out that queries are cached by object id, not hash?#2018-04-2720:21favilaYou mean the compiled query form is cached?#2018-04-2720:22favilathe results aren't cached AFAIK#2018-04-2720:22favilai would expect it to be hash not object identity#2018-04-2720:27eraserhdThe compiled query form.#2018-04-2720:28eraserhdYeah, just confirmed. If I extract the query and run it (without rebuilding it), the first time is about 300ms, and every subsequent time is about 20ms. If I wrap the captured query with (read-string (pr-str ...)), it goes back to 300ms every time.#2018-04-2720:30eraserhdAlthough, if I try minor perturbations of the query, it still seems to work. So there's something in the query form with a bad hash or equal?#2018-04-2720:31eraserhduh, yeah.... whoa dev=> (hash foo/test-q)
-1500982317
dev=> (hash (read-string (pr-str foo/test-q)))
1049440012
dev=> (hash (read-string (pr-str (read-string (pr-str foo/test-q)))))
-604086260
#2018-04-2720:36eraserhdAnd the culprit is regular expressions!#2018-04-2720:37eraserhddev=> (hash #"foo")
738015301
dev=> (hash #"foo")
793194086
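The underlying reason is that java.util.regex.Pattern uses identity-based equality and hashing, so two reads of the same regex literal are never equal, and a query form containing one can never hash-match a previously cached compiled query. A workaround sketch (`:person/name` here is a hypothetical attribute): keep the query form constant and pass the pattern in as an input instead.

```clojure
(= #"foo" #"foo")   ; => false — Patterns compare by identity, not value

;; Keep the regex out of the query form so the form stays cacheable.
(def people-matching-q
  '[:find ?e
    :in $ ?pattern
    :where
    [?e :person/name ?name]
    [(re-find ?pattern ?name)]])

(d/q people-matching-q db #"^Smith")
```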
#2018-04-2720:53eraserhdOK, so that's what happened. I had a regular expression literal in one of the small queries. As a result, 11 of 12 of the small queries were compile-cached, but the last wasn't. When I converted to a big query, it couldn't be cached because of the literal.#2018-04-2720:53eraserhdNow the large query runs faster - 20ms vs 50ms for all small queries.#2018-04-2722:42favilaah, nice debugging#2018-04-2921:23joelsanchezjust tried to transact an entity with a string where a ref value should've been, was getting this cryptic error:
:db.error/tempid-not-an-entity tempid used only as value in transaction
as I understand it, Datomic was trying to interpret my string as a tempid for the ref attribute, and failed at that#2018-04-2921:24joelsanchezit's not obvious to me how to improve the error message but it's totally unhelpful here 😂#2018-04-2921:26joelsanchez(so basically, {:my-app/should-be-a-ref "a string"} = that error)#2018-04-2921:31alexmillerstrings are valid temp ids#2018-04-2921:45joelsanchezI know, I meant that it tried to take my string as a tempid and failed because I was not trying to use a tempid there#2018-04-2921:46joelsanchezmaybe the error could handle this case too, idk#2018-04-2922:21SoV4I'm trying to start my local dev transactor but I get the error,#2018-04-2922:23SoV4I'm using datomic-free-0.9.5561 ... maybe it's just a version mismatch?#2018-04-2922:29SoV4Nevermind, just leapt to the latest pro version ^.^#2018-04-2922:42SoV4Actually, I do have a question: how do I know the latest version still under perpetual license with my license-key?#2018-04-3012:30marshallYou can see the valid maintenance period for your license in your http://my.datomic.com dashboard#2018-04-2922:46SoV4in all likelihood... Release Date < License Expiry Date#2018-04-3009:30greywolveHow do you construct a Datom?#2018-04-3012:21val_waeselynckYou could maybe use defrecord and implement the interface#2018-04-3012:22greywolveTa, I'll try that 🙂#2018-04-3012:24val_waeselynckBe careful that Datoms are also treated as lists sometimes - as @U0509NKGK said, the most foolproof way is probably to transact#2018-04-3009:59greywolve(for mocking purposes, or datom transformation)#2018-04-3011:55robert-stuttaford@greywolve how would this Datom be used? afaik, the only way Datomic’s APIs encounter Datoms is by reading them from storage under the hood. therefore, the simplest way to mock them is to transact. 
can you explain why making your own would be useful to you?#2018-04-3012:25greywolveI guess that's probably the simplest approach, maybe constructing is the wrong approach, even for mocking purposes.#2018-04-3017:12Niksohello here!
How do you do your tests with the client library? starting an in-memory peer-server is a must, right? how do you go about cleaning the db?#2018-04-3017:13NiksoI tried to use the peer library for testing but ofc the connection doesn't like datomic.client.api functions#2018-04-3019:11adammillerI'm not sure there is a great way. What I've done is wrapped the datomic api fns I use and call the client or peer as appropriate for the type of connection I have. Not ideal, but easy enough to do to allow me to use the peer for testing (or perhaps even switch between peer and client later on as needed).#2018-04-3017:32kirill.salykinHi all
I am trying to start datomic console but got this:
/usr/local/Cellar/datomic/0.9.5656
% bin/console -p 8080 alias transactor-uri-no-db
bin/console: line 3: bin/classpath: No such file or directory
Error: Could not find or load main class clojure.main
Datomic installed via homebrew formula https://github.com/Homebrew/homebrew-core/blob/master/Formula/datomic.rb
Console install with
bin/install-console /usr/local/Cellar/datomic/0.9.5656
Please advise what can be wrong
thanks#2018-04-3017:36kirill.salykinok, seems I should have used /usr/local/Cellar/datomic/0.9.5656/libexec as path-to-datomic
Now everything works#2018-04-3017:37kirill.salykinsorry for disturbing#2018-04-3019:11bjNoting a small documentation inconsistency I hit when trying the tutorial... The datomic retract tutorial documents d/transact as accepting a hash-map with a :tx-data key as its second argument (https://docs.datomic.com/cloud/tutorial/retract.html); however, the datomic.api package accepts only a list of lists as the second argument (https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/transact).#2018-04-3019:12marshall@bj the tutorial you mentioned uses the client API (https://docs.datomic.com/client-api/datomic.client.api.html#var-transact), while the api link you provided is the peer API#2018-04-3019:14bj👍 Thanks. And of course after I sent this I noticed the on-prem versus cloud url paths 😅#2018-04-3019:16adammillerCurious @marshall is there a need to make these APIs differ? As per the question above regarding testing and my own use where I've internally wrapped these calls and dispatched to the correct api based on my connection type....it seems the differences in the api are minor so why not provide consistency so they are interchangeable (at least some core set of api usage)?#2018-04-3019:17adammillerSuppose it's not hard to do on our own but I can imagine that many apps are going to end up with a very similar layer of wrapping at least the transact and query calls for testing purposes.#2018-04-3019:58rnagpalJust getting started with datomic queries. I see that we can use predicates.
predicates can compare a property with the provided value
But I have a predicate, which takes an entity. How can I pass entity to the predicate?
I have (defn event-matches-1 [event start-time end-time categories levels]
... )
[:find ?e
:in $ ?start ?end ?categories ?levels
:where
[(user/event-matches-1 ?e ?start ?end ?categories ?levels)]]
#2018-04-3019:59rnagpalin the function/predicate event-matches-1 I get event as nil#2018-04-3020:13adammillerbased on what you have there I'd guess that ?e needs to be bound to something before calling the predicate#2018-04-3020:14adammillerexample of what I mean: https://docs.datomic.com/on-prem/query.html#predicate-expressions#2018-04-3020:21rnagpal@adammiller I did try binding ?e also, but that doesn't work either
[:find ?e
:in $ ?start ?end ?categories ?levels
:where
[?e ?entity]
[(user/event-matches-1 ?entity ?start ?end ?categories ?levels)]]
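Note that even once `?e` is bound, a query predicate receives the bound value — a plain entity id, not an entity. If `event-matches-1` expects an entity, one approach (a sketch; `:event/title` is a hypothetical attribute) is to bind an entity explicitly with `datomic.api/entity` before calling the predicate:

```clojure
(d/q '[:find ?e
       :in $ ?start ?end ?categories ?levels
       :where
       [?e :event/title]                   ; bind ?e via some attribute first
       [(datomic.api/entity $ ?e) ?event]  ; turn the id into an entity
       [(user/event-matches-1 ?event ?start ?end ?categories ?levels)]]
     db start end categories levels)
```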
#2018-04-3020:24adammillerwell, it would have to be bound to some attribute specifically. Not sure what your model looks like but if you have an attribute of say :event/title or something it would be like:#2018-04-3020:25adammiller[:find ?e
:in $ ?start ?end ?categories ?levels
:where
[?e :event/title]
[(user/event-matches-1 ?e ?start ?end ?categories ?levels)]]#2018-04-3023:03steveb8n@adammiller check out https://github.com/stevebuik/ns-clone/blob/master/README.md#2018-05-0109:29staskwhat's the recommended procedure for upgrading on-prem transactor created via cloud formation?#2018-05-0111:45marshall@stask https://docs.datomic.com/on-prem/deployment.html#upgrading-live-system - briefly, stand up a new cloudformation stack with the upgraded version, wait until they are both in standby, and kill the old stack#2018-05-0111:58staskawesome, thanks!#2018-05-0114:17eraserhdHas anyone noticed a memory leak with the datomic peer security upgrade?#2018-05-0119:30d._.b(d/q '{:find [(pull ?p ?pulls)]
:in [$ ?pulls]
:where [[?p :person/company 10]]}
(d/db conn)
[:person/first_name
:person/last_name])
#2018-05-0119:30d._.bthis tells me "invalid pull expression", could someone explain why?#2018-05-0119:30favilapull expressions in queries must be literal @d._.b#2018-05-0119:31d._.bah, i think in datascript this was allowed#2018-05-0120:55d._.bi transacted some entities, and the way im setting 'em up might be wrong, but i can't seem to find any of them.#2018-05-0120:56d._.b(d/transact conn ({:person/name "Fred", :db/id #db/id[-1 -1122785]} {:person/name "Bob", :db/id #db/id[-1 -1122786]})
#2018-05-0120:57d._.bI have a coll of maps, and I am doing (map #(assoc % :db/id (d/tempid -1)) [...])#2018-05-0120:58eraserhdwithout quoting the tx above, you are looking up the second in the first map, producing nil for the tx.#2018-05-0120:58eraserhd(or use square braces)#2018-05-0120:59d._.berr sorry, that's passed in as a sequence#2018-05-0120:59d._.b(d/transact conn (map #(assoc % :db/id (d/tempid -1)) [...]))#2018-05-0207:51val_waeselynckNew article about Datomic and GDPR: https://vvvvalvalval.github.io/posts/2018-05-01-making-a-datomic-system-gdpr-compliant.html#2018-05-0217:58jjfinethe datomic docs show a way to find the lengths of the 5 longest/shortest tracks in the database. is there a way to also return the entity id of each of those tracks in the same query?#2018-05-0217:59jjfinei'm referring to this part of the docs: https://docs.datomic.com/on-prem/query.html#aggregates-returning-collections#2018-05-0218:20d._.bI'm a little confused on peer vs client library. I have a peer server and the transactor running. I'm using the peer library. Prior to this, I was simply running the transactor and directly connecting to it. The transactor and peer URLs are identical.
How do I know if I'm accessing storage through a peer or directly through the transactor?#2018-05-0219:34marshallPeers never connect through peer-server#2018-05-0219:34marshallpeer-server is only there to “serve” clients#2018-05-0219:34marshallthe URL you use is actually the Storage URI - Peers go to storage first, look up the active transactor, and connect to it#2018-05-0219:35marshallwhen you connect with a client you use an endpoint, either your Datomic Cloud system endpoint or the address of the peer server#2018-05-0219:35marshallhttps://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html#repl - the middle of that section shows an example of connecting to peer-server with client#2018-05-0219:35marshall@d._.b ^^#2018-05-0219:36d._.bthanks @marshall#2018-05-0219:37d._.balso, i figured out my issue yesterday. i was doing something silly with :db/unique on an attribute#2018-05-0219:37marshallsorry i missed that - i would have asked for the schema#2018-05-0219:39d._.bquestion for ya: is there anything out there for bulk import that's more general? 
the mbrainz-importer seems to have all the pieces, but it's not really built for use as a utility for any old dataset#2018-05-0219:40d._.bi have a few million records, and i can cobble together the core.async stuff, but it'd be nice if i could just hand my transit to a bulk import utility, even if the only guarantee is that it gives me "slightly better than sequential transactions" performance#2018-05-0219:47d._.bbasically im looking for the simplest path to take (def xs [{:a/b 1} {:a/b 2} ...]) where (count xs) => 500000 and get better than (d/transact conn xs) performance.#2018-05-0220:33marshallthere’s nothing I know of that’s pre-baked; that sort of thing tends to be a relatively custom ETL with quite a bit of stuff that is dataset-dependent#2018-05-0220:33marshallI think the mbrainz import stuff is definitely a good starting point#2018-05-0220:33marshall@d._.b ^#2018-05-0220:47drewverleeIs it possible to programmatically go from a relational schema (say with postgres) to datomics graph schema?#2018-05-0220:47alexmillersure#2018-05-0220:48alexmillera table is a set of rows (entities) and columns (attributes)#2018-05-0220:48alexmillerevery cell in the table is a tuple [<row> <column> <value>]#2018-05-0220:49alexmillerand then you have to figure out an approach to foreign keys#2018-05-0220:49alexmilleryou basically need to map a fk value in one table to the entity representing the pk in the referred table#2018-05-0220:50alexmillerI’ve only hand-waved past a few dozen critical details, but that’s the broad shape of it#2018-05-0220:51drewverleeThanks alex!
right. I’m actually working through handling the foreign keys part right now. I just thought i would ask before i potentially went down a rabbit hole 🙂
In datomic, the refs aren’t constrained though right? so i can say this is a ref, but not really declare what to specifically?#2018-05-0220:52alexmillerthey are constrained to one entity :)#2018-05-0220:52alexmillerbut entities don’t have a “type”#2018-05-0220:52marshallthere are some user blogs out in the wild describing how they’ve approached this#2018-05-0220:52marshalli.e. http://grishaev.me/en/pg-to-datomic#2018-05-0220:52drewverleeRight. I’m free to have:
person/dog -> cat
person/dog -> dog#2018-05-0220:52alexmillerah, cool, hadn’t seen any of those#2018-05-0220:53marshalland http://michaeldrogalis.tumblr.com/post/98378329045/onyx-a-new-data-bridge#2018-05-0221:00drewverleei’m going to give those a read. The use case is mostly academic. I wanted to create a tool that helped with testing by generated insert statements for my relational db. i’m looking to handle the foreign key constraints, so i need a way to walk the foreign keys. My original model puts the relational db into a simple graph. This works fine, but, as its a personal project, i wanted it to be perfect and i was remiss that you can’t really spec the graph as the keys are unique values and not really an entity map.
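alexmiller's cell-as-tuple description above can be sketched mechanically. A hedged illustration — `row->datoms` and the table/column names are hypothetical, and foreign-key rewriting (the hard part, as noted) is left out:

```clojure
;; Turn one relational row (a map of column -> value) into Datomic-style
;; tuples: each cell becomes [<row-entity> <namespaced-attr> <value>].
;; The string eid stands in for a tempid; FK columns would additionally
;; need rewriting to point at the entity behind the referenced PK.
(defn row->datoms [table pk row]
  (let [eid (str (name table) "-" (get row pk))]
    (for [[col v] row]
      [eid (keyword (name table) (name col)) v])))

;; (row->datoms :person :id {:id 1, :name "Ada"})
;; => (["person-1" :person/id 1] ["person-1" :person/name "Ada"])
```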
So after some thought i was like, well datomic is a graph that has a defined schema, that would make it clear to the user the shape. Then i thought about what it means that in order to solve a problem with a relational db it makes sense to reformat it into a datomic and how i wouldn’t have this testing problem and about 500 others if we were using datomic in the first place ….and now i need a drink 🍺#2018-05-0221:03alexmiller🍺#2018-05-0314:31folconHow do you reference the current transact? Is it just adding a tempid?
I’m trying to add some auditing to my db and I’ve been looking at this -> https://blog.clubhouse.io/auditing-with-reified-transactions-in-datomic-f1ea30610285.
They use (d/tempid :db.part/tx), is that just creating a tempid to the transaction id in the same way that a tempid references an entity id?#2018-05-0314:34alexmillerhttps://docs.datomic.com/cloud/transactions/transaction-processing.html#reified-transactions may help#2018-05-0314:42folconThanks, I’ll have a read =)…#2018-05-0314:54folconSo further reading leads me to -> https://docs.datomic.com/cloud/best.html#add-facts-about-transaction-entity.
It seems that "datomic.tx" is the txid for the current transaction?#2018-05-0314:56alexmilleryes#2018-05-0314:57folconGreat, thanks!#2018-05-0316:28octahedrionhey what does it mean when I get :cognitect.anomalies/busy
when I do a multi-database query on 2 dbs, but when I query each separately it works ?#2018-05-0318:39eraserhdI frequently do this: [?eid1 :foo/bar ?value] [?eid2 :foo/bar ?value] [(!= ?eid1 ?eid2)]. This seems like two passes on the index (I imagine). Is there a better way to do that?#2018-05-0319:00hueypI have a question around transactor function guarantees — we have a unique v attribute {:db/ident :foo/id
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/unique :db.unique/value}#2018-05-0319:00hueypand we set this id in a transactor function (fn [db eid]
(let [n (->> (d/datoms db :avet :foo/id (str "foo" 1))
(take-while (fn [[e a v t]]
(.startsWith ^String v "foo")))
(count)
(inc))]
[[:db/add eid :foo/id (str "foo" n)]]))#2018-05-0319:00hueypthe idea being we start at “foo1” and go from there#2018-05-0319:00hueypin testing we sometimes get :db.error/unique-conflict Unique conflict: :foo/id, value: foo2 already held by: 17592213012316 asserted for: 17592213017846
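As the replies below work out, `d/datoms` with a value component is an exact match, so the scan above only ever sees "foo1". A hedged sketch of the prefix count, with the counting factored into a pure helper (`next-suffix` is a hypothetical name; the Datomic wiring stays in comments):

```clojure
;; Pure helper: given datoms sorted by value, as d/index-range returns
;; them, count how many values start with prefix, then add one to get the
;; next suffix number.
(defn next-suffix [datoms prefix]
  (->> datoms
       (take-while (fn [[_e _a ^String v]] (.startsWith v prefix)))
       count
       inc))

;; inside the tx function, instead of (d/datoms db :avet :foo/id (str "foo" 1)):
;; (let [n (next-suffix (d/index-range db :foo/id prefix nil) prefix)]
;;   [[:db/add eid :foo/id (str prefix n)]])
```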
#2018-05-0319:01hueypI’m not sure how to reason about this — could be a bug in my code, or is datoms not supported inside a tx function? the index is eventually consistent?#2018-05-0319:03hueyp(“foo” is actually an argument to the function for full disclosure — we prefix the auto-inc ids)#2018-05-0319:32favila@hueyp possibly you invoke the tx function twice in the same TX on the same entity?#2018-05-0319:33hueyp@favila interesting possibility — that would definitely make sense#2018-05-0319:34hueypI’m doubtful tho as the previous version of the tx function didn’t use datoms but a query of all :foo/id so would hit the same bug and has never hit it 😜#2018-05-0319:34hueypits weird because it only sometimes fails#2018-05-0319:34hueypand our tx’s are pretty consistent in datoms#2018-05-0319:34favilaoh, your d/datoms will never see "foo2"#2018-05-0319:34robert-stuttafordshouldn’t it be d/seek-datoms ?#2018-05-0319:34hueypnot seeing “foo2” would do it 🙂#2018-05-0319:34favilaisn't the goal here "find the highest count for this attribute plus prefix"?#2018-05-0319:35favilayou only match "foo1"#2018-05-0319:35favilayou want seek-datoms, not datoms#2018-05-0319:35hueyp@robert-stuttaford checking docs 🙂#2018-05-0319:35favilaor index-range#2018-05-0319:35robert-stuttafordhttps://docs.datomic.com/on-prem/clojure/index.html#datomic.api/seek-datoms#2018-05-0319:36hueyphah — for sure that is what we want#2018-05-0319:36favilaif you use index range, you won't have to do an attr check too#2018-05-0319:36hueypso datoms is already constrained by your supplied components?#2018-05-0319:36favilayes, datoms is "only give me datoms matching this pattern"#2018-05-0319:36hueypperfect#2018-05-0319:37favilaseek-datoms is "start reading at the first datom that matches this pattern"#2018-05-0319:37favilaindex-range is in the middle#2018-05-0319:37favilait will not seek past the attribute#2018-05-0319:37hueypthat makes a lot of sense … I was surprised when (d/datoms db :avet :foo/id "foo") returned 
nothing#2018-05-0319:37hueypI figured it should just start with the first one having that prefix#2018-05-0319:37hueypbut didn’t question it further#2018-05-0319:38favilabe careful with seek-datoms#2018-05-0319:38favilayou need to check each index segment not just the one you want#2018-05-0319:38hueypah — because I’ll keep going right past :foo/id as the attribute#2018-05-0319:38favilayes#2018-05-0319:38favilathat's why index-range may be a better choice here#2018-05-0319:38hueypthanks @favila / @robert-stuttaford!#2018-05-0319:39favila(index-range db :foo/id "foo" nil) and your take-while#2018-05-0319:39favilaif there's no assertions for foo/id at all it will just be empty, not the next attribute#2018-05-0319:39hueypI read this from the docs “The index is updated periodically in the background and contains datoms sorted in various orders.” and was like “this can’t be true for the client API” but was now questioning my life#2018-05-0319:40hueypall faith restored#2018-05-0319:40favila@eraserhd I'm pretty sure what you are doing is a true cross join--you really do need to visit every index segment to self-join#2018-05-0319:40favilain SQL it would be the same#2018-05-0319:42favilaoptimizing it is about choosing whether to bind ?value first or ?eid1 first#2018-05-0319:42eraserhdhuh, how would I do that?#2018-05-0319:59hueypcan you rely on (index-range db attrid start end) using :avet?#2018-05-0319:59favilait only uses avet#2018-05-0319:59hueypawesome#2018-05-0319:59hueypthanks!#2018-05-0320:00favila@eraserhd By controlling which var is bound and which free in the previous clauses#2018-05-0320:08eraserhdAh, but if you are binding two variables in the first clause, there's no way to tell Datomic which way to do it. e.g. seek on eavt, bind v, seek on avet, bind e. Hmm.#2018-05-0320:09eraserhdI've noticed I can't use criterium to benchmark queries, either. 
It seems to get stuck warming up because, I guess, Datomic periodically inserts new classes?#2018-05-0320:12eraserhd(oh, that could be my other components. Don't trust me on that last one.)#2018-05-0407:05davidwI'm trying to upsert an entity and use its temp id with :db.fn/cas, like so
[{:db/id "my-foo-id"
:foo/id "my-foo-id"
... other foo attributes ...}
[:db.fn/cas "my-foo-id" :foo/is-processed? nil true]]
but I get an exception
:db.error/not-a-keyword Cannot interpret as a keyword: my-foo-id, no leading :
is there a way to do this?#2018-05-0407:16val_waeselynck@davidw I doubt it, because transactions functions happen prior to tempid resolution (for good reasons) - for such cases, you may want to write your own transaction function that is aware of identity attributes#2018-05-0407:19davidwthe intention is to use this with datomic cloud so a custom transaction function isn't an option#2018-05-0407:20davidwdo you have a reference for the transaction function, tempid resolution order? I'm interested in the good reasons.#2018-05-0407:21val_waeselynckI don't have any doc ereference about that, but we can reason about it#2018-05-0407:22val_waeselyncka transaction function is local - it cannot see the whole transaction it participates in. But tempid resolution has to consider the entire transaction#2018-05-0407:24val_waeselynckWhat's more, if tempid resolution happened before transaction functions, what would happen with tempids emitted by transactions functions ?#2018-05-0407:26val_waeselynckAnd yeah, about Cloud, that's exactly the sort of limitation causing me to not be too enthusiastic about it yet, so I don't know what to tell you really 😕#2018-05-0407:28davidwthat makes sense. meaning I can see why it works that way. it's disappointing because it's seems like a valid think to want to do.#2018-05-0407:29davidwdo you know off the top of your head if you can use a look up ref with cas?
[:db.fn/cas [:foo/id "my-foo-id"] :foo/is-processed? nil true]
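If cas turns out not to accept a lookup ref directly, the two-transaction fallback discussed below might look like this — a hedged, untested sketch assuming `:foo/id` is declared `:db.unique/identity` (the thread itself doesn't confirm either behaviour):

```clojure
;; Hedged sketch: upsert first, then cas via lookup ref in a second
;; transaction. Requires a live connection; conn is assumed bound.
@(d/transact conn [{:foo/id "my-foo-id"}])          ; 1: ensure entity exists
@(d/transact conn [[:db.fn/cas [:foo/id "my-foo-id"]
                    :foo/is-processed? nil true]])  ; 2: compare-and-swap
```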
#2018-05-0407:31val_waeselynckI don't sorry 😕 I don't use cas much#2018-05-0407:31davidwno problem, I'll test it.#2018-05-0407:33davidwthe only other way I can think to achieve what I want is to insert the empty entity in one transaction and then use the lookup ref with cas in another. It's not as nice as I was hoping for but I think it should work. I'll check it out. Thanks for your help.#2018-05-0407:37val_waeselynckDatomic Cloud is definitely lacking in transactional expressive power as of today IMHO#2018-05-0412:15alexmillerThat will change in time #2018-05-0414:30lopalghostSo, how do you approach user-defined fields in Datomic?
In say Postgresql I would roll user-defined fields into a map and store that as json. Not perfect, but it works well enough.
Storing maps doesn't seem practical with Datomic and I'm not sure I want to let users arbitrarily modify the schema.
Anyone have a good solution?#2018-05-0414:55eraserhdStoring JSON or EDN in strings works decently. I have arbitrary, user-supplied queries and pull expressions in string fields.#2018-05-0414:55eraserhdHowever, I'm also curious what kind of application is it in which users want to modify the schema.#2018-05-0415:10lopalghostIn Postgresql you can index by json fields, which is nice. Storing serialized objects in Datomic wouldn't offer the same advantage.
User-defined data fields are a pretty common requirement for enterprise software. In Datomic it could be as easy as just modifying the schema whenever a user wants to add a field, but that doesn't seem like a good practice. #2018-05-0415:34tony.kayCurious if anyone knows: when you set “no history” on a Datomic attribute, does this allow Datomic to do update-in-place at the storage level? Wondering how much of a performance boost that is likely to give if you don’t need history (and update-in-place would imply “a lot”).#2018-05-0417:41favilano it does not imply that#2018-05-0417:42favilathe storage layer is used as a key-value blob store#2018-05-0417:42favilayou still will dirty the "blocks" of data with value updates#2018-05-0417:42favilayou just won't keep the old values#2018-05-0417:52favilaalso once a key+value is written it is never mutated (there are only a few exceptions: well-known key names whose values hold the root pointers)#2018-05-0417:53favilaallowing them mutate would give up a lot#2018-05-0418:14tony.kayYeah, that makes sense. I guess I’ll just have to micro-benchmark and get a sense of how much it can help.#2018-05-0418:17favilait's only purpose is to save storage#2018-05-0418:17favilait may cut down on index time, but that's not it's primary purpose#2018-05-0418:18favilaalso you have no guarantees there will never be any history at all--before the index is written, you will see "old" values in the log#2018-05-0418:20tony.kayhttps://docs.datomic.com/on-prem/best-practices.html#nohistory-for-high-churn
“cost of storing history is frequently not worth the impact on database size or indexing performance.”
Thinking about it now, I’m sure the “indexing performance” comment is just due to there being less data.#2018-05-0415:34tony.kayThe docs say that it reduces indexing overhead, which implies update-in-place in my mind#2018-05-0416:07tony.kay@lopalghost “seem like a good practice” feels like something, from my perspective, that is coming from a certain SQL security mindset. I have that leaning as well, but as I’ve thought about it schema in Datomic:
1. Gives a clear name for something, with a namespace. Allowing schema to “flex” to include user-namespaced things seems natural to me.
2. Gives a clear type to it, that because of (1) can also be given a data specification (e.g. clojure.spec).
Personally, I think opening Datomic schema up to extension through a user UI is pretty powerful#2018-05-0416:09tony.kayof course, new challenges as well…rules like “only grow schema” need to be followed 🙂#2018-05-0417:38lopalghost@tony.kay I'm starting to come around on that line of thinking. I'm definitely coming from a mindset of sql security that might not be relevant to Datomic.
Has anyone else tried opening the schema to modification by users?#2018-05-0417:39tony.kayI’m working with a client (consulting) that is doing so#2018-05-0417:39tony.kayproduct is young and not yet released, though#2018-05-0418:33rnagpalTrying to find all entities where :alarm/cleared_at is nil#2018-05-0418:33rnagpal[:find ?e
:in $
:where
[?e :alarm/cleared_at ?cleared]
[(nil? ?cleared)]]#2018-05-0418:34rnagpalbut it return empty list#2018-05-0418:51favila@rnagpal Datomic does not store nil and datalog cannot refer to nil#2018-05-0418:52favilaThere is no assertion of ?e :alarm/cleared_at at all#2018-05-0418:52favilaso that clause never matches#2018-05-0418:52favilayou maybe want [(missing? $ ?e :alarm/cleared_at)], but that will match every entity in the entire system that lacks a cleared_at#2018-05-0418:53favila(including schema, transaction entities, etc)#2018-05-0418:53favilayour data model may need refinement if you need to say "I assert it was not cleared"#2018-05-0418:54favilasome possibilities: ?e needs another indexed attribute to indicate it is "alarm-like"#2018-05-0418:54favila(i.e. :alarm/cleared_at could be expected on it)#2018-05-0418:55favilaor there is a sentinel value of cleared_at to indicate "not-cleared"#2018-05-0418:55favilaor there is another attribute :alarm/not-cleared#2018-05-0418:55favila(or :alarm/cleared)#2018-05-0418:56favilaand you have to set or retract both attributes together all the time in your application code#2018-05-0419:04eraserhd1. Is it possible that d/tx-report-queue slows down the processing of queries?#2018-05-0419:15favilaSeems very unlikely by itself; although obviously cycles devoted to reading it are consumed that wouldn't be#2018-05-0419:16favilaI think the peers get all TXs all the time anyway; tx-report-queue is just adding them to a user-readable queue#2018-05-0419:04eraserhd2. Does Datomic cache query results, such that the same query on the same db doesn't hit indexes and such?#2018-05-0419:14favilaNo, but a repeat run will typically be very hot: query plan is cached, indexes were loaded, etc#2018-05-0419:12rnagpalThanks @favila
This worked for my case
'[:find ?e
:in $
:where
[(missing? $ ?e :alarm/cleared_at)]
[?e :alarm/alarm_id]]#2018-05-0419:12favila@rnagpal Reverse the order of those clauses#2018-05-0419:12favila@rnagpal first clause visits every entity in the db; second clause filters#2018-05-0419:13favilathe second one is more selective, so use it first#2018-05-0419:13rnagpalcool. Got you. Thanks @favila#2018-05-0420:00eraserhdHuh, I just found out that nesting expressions started working. When did that happen? e.g. [(string/starts-with? ?foo (str ?bar "-"))]#2018-05-0514:53octahedrioncan you use or with multiple databases ?#2018-05-0615:25octahedrion(the answer is no: or or-join etc all take an optional $db)#2018-05-0520:02octahedrionwhy does (d/q '{:find [?e] :in [$0 $1] :where [[$1 ?e :thing _] ]} (first dbs) (second dbs))
return the entities of the first db not the second one ?#2018-05-0615:12Hendrik Poernama(let [dbs [[[123 :a 1] [124 :a 2]]
[[912 :a 1] [910 :a 2]]]]
(d/q '{:find [?e]
:in [$0 $1]
:where [[$1 ?e :a _]]}
(first dbs) (second dbs)))
;; => #{[910] [912]}#2018-05-0615:12Hendrik Poernamaseems to work fine?#2018-05-0615:18octahedrionhmm I'm using Datomic Cloud#2018-05-0615:22octahedriondoes datomic cloud definitely support queries with multiple dbs ? I see nothing in the "differences between" page to suggest it doesn't#2018-05-0615:23octahedrionI must be doing something wrong somewhere#2018-05-0615:31Hendrik Poernamanot sure if it works with cloud. Someone asked this before, but I can't find the answer in history#2018-05-0615:23Hendrik PoernamaIs there any harm in transacting the same schema multiple times? I have not noticed any issue, but wondering about best practice. I tried conformity, but my schema code became clutter-y#2018-05-0616:18robert-stuttaford@poernahi the only ‘cost you pay’ is in empty transaction log entries#2018-05-0616:19Hendrik Poernamaah, cheap enough 🙂#2018-05-0617:17octahedrion(copied from above thread) does datomic cloud definitely support queries with multiple dbs ?#2018-05-0711:18dominicmI'm pretty sure I recall that it is not supported#2018-05-0711:33octahedrionso confusing - where's the documentation ? There's nothing here https://docs.datomic.com/on-prem/moving-to-cloud.html to suggest it doesn't support multiple dbs (maybe there is, but not to my naive eye)#2018-05-0711:35octahedrionok I think I've found it: https://docs.datomic.com/on-prem/clients-and-peers.html "cross database joins" is "no" in the client column#2018-05-0711:35octahedrionthat, combined with https://docs.datomic.com/on-prem/moving-to-cloud.html#sec-3 "Apps that make heavy use of peer locality will require substantial
alteration for Cloud"#2018-05-0711:36octahedriondoes suggest Datomic Cloud doesn't support multiple database queries, which is a shame since it's an appealing feature#2018-05-0711:37octahedrionI wish Datomic's documentation was much clearer for newbies#2018-05-0711:50robert-stuttaford@octo221 Cloud is still brand new. although they don’t say it anywhere, you can basically treat Cloud as “Beta”.#2018-05-0711:50alexmillerIt’s not beta#2018-05-0711:51alexmillerBut it is new#2018-05-0712:38octahedriondoes Datomic cloud support in-memory dbs ?#2018-05-0712:40alexmillerNo - the data is ... in the cloud#2018-05-0712:43octahedrionthe cloud is just other computers though#2018-05-0712:46octahedrioni mean in-memory on the other computers - not the client#2018-05-0715:34alexmillerdata is cached across many levels in Datomic cloud, possibly including memory. but you generally shouldn’t know or care.#2018-05-0716:13favila@octo221 I think the answer is no, datomic cloud does not give you any storage choices, so mem-only db is not possible#2018-05-0717:45octahedrion@alexmiller @favila ok thanks for the clarification. I guess it's ok to create disposable dbs for testing and then delete them afterwards#2018-05-0718:12alexmilleryes, that’s the approach I’ve used - in a fixture, create a database whose name includes a uuid, test, then delete#2018-05-0815:53Ben Hammondif I want to connect a datomic Peer through an ssh tunnel to a datomic that running in a private network on Postgres , what ports would I need to tunnel?
4334/4335/4336 ?#2018-05-0819:38favilaonly 4334 (for the transactor), and whatever port postgres uses @ben.hammond#2018-05-0819:49Ben HammondAh it must be the Postgres port that I am missing.
Thanks very much#2018-05-1001:29alexkVisualVM shows that my application (with a Datomic peer) has 32 threads named query-N. Each is holding onto about 120MB of memory. How come there are so many, and why are they conveniently using all available memory?#2018-05-1013:57matthaveneralexk: https://docs.datomic.com/on-prem/caching.html#object-cache#2018-05-1014:20alexkThanks Matt. My object cache is set to 1g, I wonder why those threads appear to be using so much more.#2018-05-1016:32shaun-mahoodIt sounds like I'm going to be pitching Datomic to our non-technical management soon - we would be moving from MS SQL to either Cloud or On-Prem if I can do a good job convincing them. Anyone have any good resources or things that have worked for them in a similar situation?#2018-05-1018:19val_waeselynckAre they already sold on Clojure ?#2018-05-1018:19val_waeselynckAre they already sold on Clojure ?#2018-05-1019:41shaun-mahood@U06GS6P1N: I've been doing all new projects in Clojure/CLJS for the past 2 years, but their understanding of Clojure (or any programming language) is pretty much limited to the basics. In the same meeting l'm going to be explaining why we are using Clojure and comparing it to other options, but from what I can tell the main concern is switching away from an more common database technology.#2018-05-1107:27val_waeselynckSell them a decrease of risk, not an increase of productivity. They will want to know how to address staffing concerns, and why the technology is not more popular if it's so great. 
Tell them that if you don't use Datomic, you will end up maintaining a crappy ad hoc version of half Datomic.#2018-05-1107:28val_waeselynckAlso, some ideas here https://medium.com/@val.vvalval/what-datomic-brings-to-businesses-e2238a568e1c#2018-05-1118:44shaun-mahood@U06GS6P1N: That is an awesome write-up, thanks!#2018-05-1020:48FlexesI'm having trouble connecting to my ddb transactor, what is supposed to be in my ddb-ensured.properties file under host?#2018-05-1020:53FlexesP.S. I'm running inside of a docker container#2018-05-1022:17hmaurer@pat839 are you running datomic inside a docker container?#2018-05-1112:05Flexes@hmaurer I am. And that container is running in a swarm.#2018-05-1112:08hmaurer@pat839 where are you trying to access it from? I tried to run Datomic on Kubernetes a while ago. If I recall correctly I configured host=0.0.0.0 and I had to configure alt-host as well#2018-05-1114:23Andreas LiljeqvistTips for using Datomic with an SPA? At the moment using re-frame.#2018-05-1114:45eraserhdAlright, so it seems in Datomic, you can use arbitrarily nested Clojure expressions, so long as any variables are used at the top-level. e.g., you can't do [(first (.split ?foo)) ?bar], but you can do [((fn [^String s] (first (.split s))) ?foo) ?bar].#2018-05-1116:04jjfinehow do i restart a peer after doing a restore-db without stopping the jvm? is datomic.api/shutdown sufficient?#2018-05-1118:31mishadid anyone try to "reify" datom?
say, I need to save a fact that an entity e1 relies on [e2 :foo/bar :baz], what do I do?#2018-05-1118:47favilaif you can make it match along tx boundaries, you can reuse the TX itself as the reified object#2018-05-1118:48favilaif you need to be more granular, you need to explode the assertion into its own entity#2018-05-1118:49favilae.g. {:db/id e2 :attr :foo/bar :valueKeyword :baz}#2018-05-1118:49favilaup to you whether the value of :attr is an actual datomic attribute or other ordinary entities#2018-05-1118:50mishayeah, the second option fits better, but "explode" describes pretty well how I feel about it, since such dependencies are maybe ~5% out of all the data I expect, and impose this explosion on everything else – is daunting#2018-05-1118:50favilayou want to make arbitrary higher-order assertions?#2018-05-1118:50mishayes#2018-05-1118:51favilawould a weak reference be ok?#2018-05-1118:51mishaI need very diverse graph, where such dependencies are not predefined, but rather indicated by user#2018-05-1118:52mishalots of nodes, of lots of "types".#2018-05-1118:52faviladatomic is not as flexible as RDF (if that is where you are coming from). the only reification datomic has is transactions#2018-05-1118:52mishaas opposed to lots of nodes of few types#2018-05-1118:53favilaare the types user-created?#2018-05-1118:53mishaI think I need transactions for something else, and "overloading" those with this use case is kinda shooting future myself in a foot#2018-05-1118:53mishayes#2018-05-1118:53mishanot all, but essentially yes#2018-05-1118:54favilaif they are user-created, exploding is probably a better idea--represent the types as data rather than schema#2018-05-1118:54favilayou don't want users transacting schema#2018-05-1118:54mishayeah, that was my plan B opieop#2018-05-1118:54mishabut I had to doublecheck#2018-05-1118:55mishathank you, @favila#2018-05-1118:55favilasomething to consider: you can "lower" into datomic for ease-of-use#2018-05-1118:56favilai.e. 
source of truth is a higher-order db, but you can derive another one where user types are lowered into datomic attributes and the reification goes away#2018-05-1118:57misha2 datomics one on top of the other?#2018-05-1118:57favilayou read one db to generate the other one#2018-05-1118:58favilause the generated one as read-only#2018-05-1118:58mishaah. the nature of user types would likely be volatile enough to make dynamically creating schema and DB - not an option#2018-05-1118:59mishaon the other hand, those db should be fairly small, so it might work...#2018-05-1119:00favilathe purpose would only be to remove the awkwardness of working with higher-order expressions#2018-05-1119:00favilaI guess a datomic rule can remove some of it#2018-05-1119:01mishaI will try hard, because I want datalog for this no matter what : )#2018-05-1119:05mishasql will be nightmare, aql, cypher, etc. – life is too short for all those#2018-05-1119:05favila`[(usermatch ?e ?a ?v)
[?e :user/attr ?a]
[?a :user.attr/type ?real-a]
[?e ?real-a ?v]]`#2018-05-1119:05favilasomething like that#2018-05-1119:07favilainstead of [?e ?a ?v] (attr-as-schema) you would use (usermatch ?e ?a ?v) (attr-as-user-data)#2018-05-1119:07mishathat might work#2018-05-1119:08mishanow need to consider the rest of the requirements. thank you again#2018-05-1209:37akirozHey guys, got a little question regarding Datomic and Consistency. The official website claims that Datomic is consistent in both the ACID and CAP senses, now the ACID one I can understand since we have serialised transactions but how does it achieve consistency in the CAP sense if we have an eventually consistent storage backend? The C in CAP states that "Every read receives the most recent write or an error".#2018-05-1214:01favilaThe stuff written is immutable#2018-05-1214:04favilaSo if a peer gets an error for a key, it knows the storage just doesn’t have it yet. But if it gets a value, it knows the value will never change#2018-05-1214:07favilaThere are still a small number of keys (less than ten) that are updated—they store root pointers. 
The storage impl makes sure they’re written with stronger consistency#2018-05-1214:08favilaThe supported storages aren’t only eventually consistent, they have stronger modes#2018-05-1214:08akirozBut is it guaranteed that you'll always get the latest "db value" when you do (datomic.api/db conn) ?#2018-05-1214:08favilaWhere they do not (eg riak) another db is used for those mutable keys (eg zookeeper)#2018-05-1214:09favilaIt is guaranteed you will always get a consistent snapshot#2018-05-1214:09favilaNot the “latest-latest “ because physics#2018-05-1214:10favilaYou can use the sync functions if you need to ensure you are caught up to a specific db t#2018-05-1214:11akirozCool, never knew there was a sync function, let me check out the docs 🙂#2018-05-1214:11favilaThis is an issue when you have out of band communication#2018-05-1214:12favilaEg one of your servers saw db at time t, and sent something to another which expects to read at least the same db value or newer#2018-05-1214:13favilaThe msg from the other server may have gotten to you faster than the new db valu#2018-05-1214:14favilaIf you transmit the t also you can use sync to ensure you are at least as far along as the server that sent the message #2018-05-1214:15akirozOk so if I were to need to coordinate some actions between services, I would have to send the time-basis for the previous tx in my message then use sync, is that correct?#2018-05-1214:15favilaYou would if you need the other service to read the same data out of the same db#2018-05-1214:16akirozgotcha, thanks! 
🙂#2018-05-1214:19akirozI'm not actually building any sort of distributed system at the moment but was just thinking about that case that causal information from a user's interaction could be lost in a distributed system.#2018-05-1214:20favilaYeah that’s taken care of between a peer and the storage+transactor, but not among peers (which is what sync is for)#2018-05-1314:49joshkhi'm pretty new to datomic and not sure if i'm asking the "right" question, but maybe someone can help? i have two apps that use datomic (cloud). one app is responsible for authenticating a user and then linking them to some entity in datomic. the other app kicks off this process, gets back a confirmation that some datoms were updated, and then queries for that linked entity. the first time the query runs i get back nothing, and then all subsequent queries return the linked value. how asynchronous is transacting vs. reading?#2018-05-1314:55joshkhor in other words, how can i guarantee that another peer/client has immediate access to the data after a transaction has occurred?#2018-05-1317:26misha@joshkh have a look at the sync fn discussed literally just before you posted a question#2018-05-1318:00souenzzo@joshkh you can do something like
;; On srv1
(let [tx-data (do-auth-stuff request)
      {:keys [db-after]} @(d/transact conn tx-data)
      basis-t (d/basis-t db-after)]
  (send-to-srv-2 basis-t))
;; On srv2
(let [db @(d/sync conn basis-t)]
  (do-srv2-stuff db ...))
Then, db on srv2 will "wait" for a db with this basis-t. But as @misha said, d/sync without a t will ask the transactor "the newest db at this time"#2018-05-1318:28joshkhcheers, thanks! i'll give it a whirl.#2018-05-1320:52joshkhout of curiosity, is d/sync only available on-prem and not applicable to datomic cloud?
i see it here:
https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/sync
but not here:
https://docs.datomic.com/client-api/index.html#2018-05-1400:46ezmiller77Anyone run into this error when trying to connect to a datomic cloud stack:
> 2018-05-13 20:36:01.593:WARN:oejuc.AbstractLifeCycle:nREPL-worker-1: FAILED /cdn-cgi/l/email-protection(null,null): java.lang.IllegalStateException: SSL doesn't have a valid keystore
> java.lang.IllegalStateException: SSL doesn't have a valid keystore
(require '[datomic.client.api :as d])
(def cfg {:server-type :cloud
          :region "us-east-2"
          :system "<sysname>"
          :query-group "<sysname>"
          :endpoint ".<sysname>."
          :proxy-port 8182})
(def client (d/client cfg))
#2018-05-1401:09ezmiller77Before that, I was also getting a bunch of what seemed to be dependency conflict related errors, so my project.clj has some exclusions:
[com.datomic/client-cloud "0.8.50"
 :exclusions [org.eclipse.jetty/jetty-io
              ;; org.eclipse.jetty/jetty-client
              org.eclipse.jetty/jetty-util
              ;; org.eclipse.jetty/jetty-http
              commons-logging
              commons-codec]]
#2018-05-1412:42joshkhto follow up with my async question from yesterday, i've noticed something interesting using datomic.client.api. after transacting in repl1 i'm left with {:db-after {:t 765 }}. then, in repl2, no matter how many times i try to fetch the latest db using d/db, it always returns {:t 764}, one t value behind. it stays this way forever until i attempt to query for some data. the query doesn't return anything, since the db is one transaction behind; however, the act of querying does update the db value to {:t 765}.
In REPL 1:
(do-some-transaction)
=> {:db-before
    {:database-id "some-db-id", :db-name "my-test-db", :t 764, :next-t 765, :history false, :type :datomic.client/db},
    :db-after
    {:database-id "some-db-id", :db-name "my-test-db", :t 765, :next-t 766, :history false, :type :datomic.client/db},
    :tx-data
    [#datom[13194139534077 50 #inst "2018-05-14T12:19:52.484-00:00" 13194139534077 true]
     #datom[7868105208367458 73 "SomeData" 13194139534077 true]
     #datom[7868105208367458 70 #uuid "66026046-ef8e-4605-9280-94cbf4c0cd5b" 13194139534077 true]],
    :tempids {}}
In REPL 2:
; Always 764 no matter how many times i re-run this:
(d/db @conn)
=> {:t 764, :next-t 765, :db-name "my-test-db", :database-id "some-other-id", :type :datomic.client/db}
In REPL2 again:
; To update the time, run any query:
(d/q '[:find (pull ?person [*])
       :in $ ?person-name
       :where
       [?person :person/first-name ?person-name]]
     (d/db @conn)
     "person-from-previous-transaction")
=> []
(d/db @conn)
; time has been updated
=> {:t 765, :next-t 766, :db-name "my-test-db", :database-id "some-other-id", :type :datomic.client/db}
shouldn't (d/db conn) always return the latest version of the database?#2018-05-1413:00souenzzo(d/db conn): An available db (any)
@(d/sync conn): Ask the transactor for the last available db and wait for it.
@(d/sync conn t): Wait for a db with basis-t >= t#2018-05-1413:05favilaClient api is not push like the peer api#2018-05-1413:09joshkhexactly. i'm pretty stumped about the "right" thing to do here.#2018-05-1413:10favilaUse peer api?#2018-05-1413:11favilaI know cognitect is pushing cloud and client api very hard, but as of now it is much more impoverished#2018-05-1413:14joshkhis the peer api compatible with cloud?#2018-05-1414:22favilano#2018-05-1414:22favilacloud only supports client#2018-05-1412:51joshkhwhereas yesterday's discussion led me to think it was latency between two clients, this tells me that i need to actively tell the second client that someone else has updated the db. obviously running a bogus query isn't the right way to do that...#2018-05-1413:08favilaI don’t know how this works with the client api. Peers are actively pushed new transaction records by the transactor. Clients don’t have transactions pushed to them.#2018-05-1413:09favilaSyncing may not be possible with the client api#2018-05-1413:07joshkhah yes, i saw that yesterday (thanks!) but i don't think sync is available in the client api#2018-05-1413:19ezmiller77If anyone has any thoughts on this trouble I'm having migrating to datomic cloud, I'd be much obliged: https://stackoverflow.com/questions/50331264/ssl-doesnt-have-a-valid-keystore-error-when-trying-to-connect-to-datomic-clou#2018-05-1413:21joshkhsounds like you need an :exclusion somewhere#2018-05-1413:21souenzzoOh. There is no sync on client. 😕
Maybe you can do something like:
>> POST /auth-me
<< token: my-token, t: 255
>> GET /my-data?token=my-token&t=255
<< 102: please wait
....
>> GET /my-data?token=my-token&t=255
<< 200: you-data#2018-05-1413:22joshkhi thought about returning the t value but i can't see a way to pass it into the datomic client library#2018-05-1413:24souenzzoyou can check:
(let [{:keys [token t]} request
      db (d/db conn)
      current-t (:t db)]
  (if (>= current-t t)
    (do-stuff db token)
    {:status 102}))
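souenzzo's guard can be exercised without a Datomic connection. In this sketch, plain integers stand in for basis-t values, and serve/fetch are hypothetical names for the two sides of the 102-retry protocol; the swap! only simulates the server's db catching up between polls:

```clojure
;; serve: the service-side guard -- answer 102 until the local db
;; has caught up to the t the caller says it needs.
(defn serve [server-t requested-t data]
  (if (>= server-t requested-t)
    {:status 200 :body data}
    {:status 102}))

;; fetch: the client side -- retry while the server answers 102.
;; The swap! stands in for the server's db advancing between polls.
(defn fetch [server-t-atom requested-t data]
  (loop [attempt 0]
    (let [resp (serve @server-t-atom requested-t data)]
      (if (or (= 200 (:status resp)) (>= attempt 10))
        resp
        (do (swap! server-t-atom inc)
            (recur (inc attempt)))))))

(:status (serve 764 765 :x))        ;; => 102, not caught up yet
(:status (fetch (atom 764) 765 :x)) ;; => 200, after one retry
```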
#2018-05-1413:24souenzzoP.S. not sure if status 102 is "the correct" http status for this case.#2018-05-1413:26joshkhthe problem is actually fairly widespread. i have a repl open, a web app, and an asset server all running with their own client connection. passing around a t value after each update sounds like a pretty heavy approach to keeping the clients in sync.#2018-05-1413:27joshkhnot that it's a bad suggestion! but i feel like i must be missing something.#2018-05-1413:29souenzzomissing the peer API#2018-05-1413:28joshkhhave you tried running lein deps :tree | grep jetty? you might have a dependency conflict.#2018-05-1413:51ezmiller77@joshkh I have. That's how I came up with the exclusions that I'm using.#2018-05-1413:52ezmiller77I've tried a great many combinations.#2018-05-1414:02joshkhwhile surely a huge pain, you could knock out the deps and namespaces until you find the culprit. hopefully the project isn't too large? i also had the same problem and even when using lein deps it wasn't immediately clear which library was causing the problem. i think it ended up being something neo4j related.#2018-05-1414:03joshkhalso, one of my deps also had a dependency on datomic so i needed some exclusions there as well.#2018-05-1414:06joshkhold code but i also had a fairly heavy-handed exclusion (although i don't think most of them apply): [ring/ring-jetty-adapter org.eclipse.jetty/jetty-client org.eclipse.jetty/jetty-http org.eclipse.jetty/jetty-util]#2018-05-1501:59James VickersIs there a way to mark a var as final in the same way you can mark a Java variable as final, to prevent it from being re-bound to another value?#2018-05-1502:08favilaYou mean a datalog var?#2018-05-1502:15James VickersSorry I’m in the wrong channel! Ignore me#2018-05-1509:53octahedrionhow do I use :db-after in a query? The documentation https://docs.datomic.com/cloud/transactions/transaction-processing.html#results says "Both :db-before and :db-after are database values that can be
passed to the various query APIs", but :db-after returned from a transaction is an ArrayMap and I can't find a way to use that map to create a datomic.client.impl.shared.Db for use in (d/q)#2018-05-1510:00joshkhan answer to @octo221’s question might help me solve the problem of keeping two datomic cloud clients in sync https://stackoverflow.com/questions/50347307/how-do-i-keep-two-datomic-cloud-clients-in-sync#2018-05-1512:26octahedrion@octo221 @joshkh it looks as though you use (d/as-of db t) where t is (get-in transaction-result [:db-after :t])#2018-05-1512:29souenzzoas-of doesn't get a "future" db, just a past db
(as-of is a filter)
https://docs.datomic.com/on-prem/filters.html#as-of-not-branch#2018-05-1512:50joshkhhmm, agreed. using as-of with a future value still doesn't "catch up" the db to the future.#2018-05-1513:00joshkhunfortunately it's a no go. if someone from the Cognitect team finds this, any advice would be very much appreciated! i've been trying to work it out for a few days and running queries twice is a nasty hack.#2018-05-1513:07marshall@joshkh client does not currently have the feature to ‘get the latest db available’; that would be a good suggestion for a feature request in our feature portal
currently, passing a t value is the correct method to ensure multiple clients share a basis#2018-05-1513:08joshkhthanks, @marshall. does passing a future t value to the client's d/as-of (as @octo221 suggested) update the database value? i ran a test and unless i'm mistaken it suggests not.#2018-05-1513:09marshalla ‘future t’ ?#2018-05-1513:13joshkhi'm still out of the datomic nomenclature loop! if repl 1 transacts and results with a :t 100, and repl 2 is still at :t 99 and uses (as-of (d/db conn) 100) as the db value in a query, should the transaction from repl 1 be found?#2018-05-1513:13marshalli believe so yes#2018-05-1513:14joshkhi just gave it a shot and came up blank but i'll try again now#2018-05-1513:18joshkhREPL 1 (db-after t value is 770)
(d/transact @conn {:tx-data [{:person/first-name "Alice"}]})
=>
{:db-before {:database-id "...",
             :db-name "datomic-test",
             :t 769,
             :next-t 770,
             :history false,
             :type :datomic.client/db},
 :db-after {:database-id "...",
            :db-name "datomic-test",
            :t 770,
            :next-t 771,
            :history false,
            :type :datomic.client/db},
 :tx-data [#datom[13194139534082 50 #inst "2018-05-15T13:14:44.154-00:00" 13194139534082 true]
           #datom[70804150782265703 73 "Alice" 13194139534082 true]],
 :tempids {}}
REPL 2 with explicit t value at 770 (is currently at 769)
(d/q '[:find (pull ?person [*])
       :in $ ?person-name
       :where
       [?person :person/first-name ?person-name]]
     (d/as-of (d/db @conn) 770)
     "Alice")
=> []
#2018-05-1513:21joshkh... whereas the act of executing that query does update the t value in REPL 2, and re-running the query for a second time does return the transacted value.#2018-05-1513:22marshallunderstood - looking into it#2018-05-1513:26favila@marshall this is an issue where d/sync is needed#2018-05-1513:26joshkhthanks, @marshall. very much appreciated. it's a hurdle for us as we're sharing a cloud db across two applications. thanks for looking into it - i'm sure i'm just missing something obvious.#2018-05-1513:26marshall@favila agreed#2018-05-1513:39octahedrion@joshkh @marshall @favila I can confirm: 2 repls, in one I transact a datum, then wait plenty of time (seconds), then in the second repl I query for the datum just transacted using d/db to get the latest db value, but the result is [], then I immediately try the same query again and this time I get the result#2018-05-1513:39octahedrion(using Datomic Cloud)#2018-05-1513:46octahedrionwhat's more, it's the use of d/q which refreshes the db connection (or something), not the use of d/db, since if I first use a previous db value (without calling d/db in my first d/q) then use d/q with d/db I get the same behaviour#2018-05-1513:47joshkhyes, exactly.#2018-05-1514:09joshkhrepl 1 vs repl 2 was a contrived example but representative of the larger services that we've already built, and folding them into a monolithic application that shares a single client connection isn't really an option for us as we're scaling across containers. i'm open to any work-around that gets us past this show stopper, even if it's the phantom query method, but i'm hoping that's not the case unless it's bulletproof.#2018-05-1514:22marshall@joshkh we’re looking into the issue; in the interim, you can also create a new connection to “force” update#2018-05-1514:26joshkhthanks a lot, @marshall. that wasn't meant to sound all gloom-and-doom.
just highlighting the problems it's causing downstream.#2018-05-1514:26marshallunderstood.#2018-05-1520:16joshkh@marshall is there a trackable issue we can check on for an update? slack is great but discussions tend to evaporate upwards over time. 🙂#2018-05-1520:19joshkh(we're also snapshot friendly and happy to test anything unofficial)#2018-05-1520:53favilaIs there any way to use rules "bi-directionally" with decent performance?#2018-05-1607:45val_waeselynck@favila I had this problem, hence made a feature for that in datalog-rules: https://github.com/vvvvalvalval/datalog-rules#reversed-rules-generation-experimental . Granted, it's a bit of a hack, I wish that was part of Datomic's API#2018-05-1612:50favilaShort of datomic actually doing some query optimization, I wish I could use the bracket syntax on arbitrary arguments#2018-05-1612:50favilaAnd have datomic use that info to select the right impl at runtime#2018-05-1615:45val_waeselynckI know...#2018-05-1601:34souenzzotransducers + entity api is as powerful as datalog, with some different properties. great combination#2018-05-1602:07alexmillerpull is even better than entity#2018-05-1607:42val_waeselynck@U064X3EF3 @U2J4FRT2T I think the comparison deserves more nuance. Entity is good for navigating on one data path and making late-bound decisions along the way. Pull is good for collecting data along a bunch of data paths with less control and expressiveness. Entity can be a good substrate for implementing a richer version of Pull (e.g Om Next parsers or GraphQL). Finally, let's not forget that these 2 compose!#2018-05-1609:12souenzzopull with transducers is less cool than with entity. If I swap entity with pull, I will need to write a pattern that "matches" with the transducers.
And with pull, I will not be able to be as lazy as I am with entity#2018-05-1615:24souenzzoExample
(eduction
  (map :cart/itens)
  (filter custom-item-pred?)
  (map :cart/_itens)
  (d/entity db id))
If I ask for empty? on this, it will be way faster than datalog or pull#2018-05-1615:42alexmillerfair enough! I’ve mostly been using client lately, which doesn’t have the entity api…#2018-05-1615:56souenzzoBut peers will never be deprecated, right? 😉
Peer is a different product. Client API feels like traditional DB's (sure, with the awesome of datomic model)#2018-05-1616:05alexmillerI’m not on the Datomic team, so can’t answer any questions about future direction (as I don’t know)#2018-05-2123:59dustingetzEntity is more general, pull can be implemented in terms of entity, but not vice versa#2018-05-1613:17danielstocktonDoes Datomic cloud allow bypassing the load balancer and controlling which peer you hit? In other words, can I direct similar queries at particular peers to get the most out of the object cache?#2018-05-1613:22marshallDatomic Cloud or On-Prem?#2018-05-1613:22marshallCloud does not have peers#2018-05-1613:23danielstocktonThat's my question really. The use-case i'm considering it for requires high frequency, very low latency reads (not write heavy). Should I try on-prem if this kind of read optimization might be necessary?#2018-05-1613:23marshallCloud uses an “affinity” to automatically route requests to the correct nodes#2018-05-1613:24marshallno, Cloud will use things like sticky sessions / query affinity /etc to route multiple subsequent queries to the same node#2018-05-1613:24marshalladditionally, when Query Groups become available, you will be able to provision and specify a query group dedicated to any particular workload you have#2018-05-1613:24danielstocktonOk, cool. Is there documentation on how query affinity is determined?#2018-05-1613:26danielstocktonDo I also have control over the size of memcached? I basically want memcached to be the primary storage and misses to be extremely rare.#2018-05-1613:31marshalltake a look at https://docs.datomic.com/cloud/operation/caching.html#2018-05-1613:31marshallvalcache + EFS cache together mean misses are indeed extremely rare#2018-05-1613:36danielstocktonGreat, sounds good. I'll have to evaluate it in practice. 
Thanks a lot.#2018-05-1614:07chris_johnsonOperations question about the on-prem topology running on AWS, which if there’s an answer in the docs I apologize for missing but I haven’t found: is there a way to discover, at a given moment in time, the IP address of every peer connected to a given transactor?#2018-05-1615:58val_waeselynckI seem to recall someone saying that using SQUUIDs was no longer necessary, but having switched to regular UUID generation, I suspect that this is not the case for my Datomic version because of how slow indexing is (v 5.0.5407) - can someone confirm that?#2018-05-1616:06alexmilleras I understand it, squuids are not necessary in cloud, have no idea if anything changed re on-prem#2018-05-1616:25marshall@val_waeselynck Adaptive indexing is the change that i would have expected to remove most of the advantage of SQUUIDS (http://blog.datomic.com/2014/03/datomic-adaptive-indexing.html) It was released in version 0.9.4699#2018-05-1617:20val_waeselynckOk, that's what I thought. I'll benchmark tomorrow#2018-05-1617:21marshallOk. Let me know if that doesn’t agree with your observations#2018-05-1708:25val_waeselynck@U05120CBV solved it. The culprit was not slow index writes due to dispersion of values; it was actually slow index reads in a transaction function due to a high dispersion of EAVT lookups, resulting in lots of cache misses. Reordering the imports sensibly solved it. Adaptive indexing works fine AFAICT.#2018-05-1712:57marshall👍#2018-05-1709:40val_waeselynckBy the way, am I right in assuming that Adaptive Indexing must have been way more straightforward to implement for Datomic than for ordinary databases, due to writes being separated from reads?#2018-05-1715:37eraadHi! I’m getting this error when connecting to Datomic Cloud from Lambda:
Unable to connect to system: {:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message "Connect Timeout"}: clojure.lang.ExceptionInfo
clojure.lang.ExceptionInfo: Unable to connect to system: {:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message "Connect Timeout"} {:config {:server-type :cloud, :region "us-west-2", :system "datomic-cloud-prod", :query-group "datomic-cloud-prod", :endpoint "http://entry.datomic-cloud-prod.us-west-2.datomic.net:8182", :endpoint-map {:headers {"host" "entry.datomic-cloud-prod.us-west-2.datomic.net:8182"}, :scheme "http", :server-name "entry.datomic-cloud-prod.us-west-2.datomic.net", :server-port 8182}}}
at clojure.core$ex_info.invokeStatic(core.clj:4754)
at clojure.core$ex_info.invoke(core.clj:4754)
at datomic.client.impl.cloud$get_s3_auth_path.invokeStatic(cloud.clj:171)
at datomic.client.impl.cloud$get_s3_auth_path.invoke(cloud.clj:162)
at datomic.client.impl.cloud$create_client.invokeStatic(cloud.clj:205)
I read past responses on having your S3 IAM policies squared away. I think I have everything set up (indeed, it worked before, not sure what we did :s).
I’d appreciate any help!#2018-05-1715:44eraadSo I´m taking a look at this https://forum.datomic.com/t/datomic-cloud-with-aws-lambda/342/6#2018-05-1715:45eraadFunny thing is that it stopped working at some point. I will try creating a VPC endpoint. Will report back.#2018-05-1716:03rapskalianCurious to know what kind of success you find. The VPC endpoint solution is much simpler than my original private subnet suggestion, but I have yet to try it myself.#2018-05-1716:02joseayudarte91Hi! I’m trying to get Peer Library of Datomic Pro, using Leiningen, but I cannot find it in Clojars. I had already com.datomic/datomic-free "0.9.5206" version working but it seems there is no version for “pro” one hosted there I think. Do you know I place to get the latest version from Leiningen?#2018-05-1716:19akielyou need an account from http://datomic.com. than you can use a special Datomic repository with credentials. It should be documented.#2018-05-1717:07alexmillerif you look at https://my.datomic.com/account portal with your login, there are instructions.#2018-05-1812:44joseayudarte91Perfect, thanks!#2018-05-1718:26rapskalianBeen running into this error all day trying to use Datomic Cloud. It's being thrown while trying to call d/client. Has anyone encountered this before?
>NoClassDefFoundError Could not initialize class com.amazonaws.partitions.PartitionsLoader com.amazonaws.regions.RegionMetadataFactory.create (RegionMetadataFactory.java:30)#2018-05-1718:26rapskalianThis is with version 0.8.52 of com.datomic/client-cloud#2018-05-1718:36rapskalianAh, never mind. I had to add this line to my project.clj:
[ring-webjars "0.2.0" :exclusions [com.fasterxml.jackson.core/jackson-databind]]
Dependency hell strikes again#2018-05-1720:08dogenpunkHi! I’m looking for some guidance on storing time values in datomic (on-prem). I’ve been developing my app using the new java.time classes (via clojure.java-time) and I’m running into issues trying to coerce values into datomic friendly values.#2018-05-1720:08dogenpunkSpecifically, coercing OffsetDateTime into Instant that is compatible with :db.type/inst#2018-05-1720:10dogenpunkThe application is for scheduling appointments/resources#2018-05-1720:26donaldballI have essentially resigned myself to using java.util.Date instances as values, and to only use the new java.time classes when doing arithmetic#2018-05-1720:28bjKind of a beginner question... As a datomic peer, if I query against the result of (datomic.api/db connection), am I querying against database as it appeared at the point in time when that function was called, or am I querying against the database in real-time? The reason I'm asking is because it sounds to me like the documentation is implying the former, and that sounds magical, so I want to make sure Retrieves a value of the database for reading. Does not
communicate with the transactor, nor block.
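dogenpunk's coercion question a few messages up has a small interop answer: go through java.time.Instant, since :db.type/inst values are java.util.Date. A runnable sketch of donaldball's Date-at-the-edges approach (the helper names here are made up):

```clojure
(import '(java.time OffsetDateTime ZoneOffset)
        '(java.util Date))

;; :db.type/inst values are java.util.Date, so java.time values must be
;; coerced at the edge of the transaction data.
(defn odt->date ^Date [^OffsetDateTime odt]
  (Date/from (.toInstant odt)))

;; ...and back out for arithmetic, keeping java.time inside the app.
(defn date->odt ^OffsetDateTime [^Date d]
  (.atOffset (.toInstant d) ZoneOffset/UTC))

(def appt (OffsetDateTime/of 2018 5 17 20 0 0 0 ZoneOffset/UTC))
(odt->date appt)             ;; a Date, ready to transact as :db.type/inst
(date->odt (odt->date appt)) ;; round-trips (at millisecond precision)
```

Note the round trip normalizes to UTC and truncates below milliseconds, which is also all that :db.type/inst stores.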
#2018-05-1720:56eraserhd@bj it is the former. Datomic is kind of like a git repo, where "db" is a commit hash, and "conn" is a branch name.#2018-05-1720:57eraserhdYou don't get a whole snapshot of the database in memory. Pieces are loaded as required. But it is a coherent point-in-time snapshot.#2018-05-1720:58bjThanks, and I like that analogy#2018-05-1721:00dogenpunk@donaldball are you converting to java.time classes in queries?#2018-05-1721:06dogenpunkThat is, in :where clauses?#2018-05-1809:17robert-stuttafordAs of the newest tools.deps.alpha "0.5.435" (TDEPS-9 https://github.com/clojure/tools.deps.alpha/blob/master/CHANGELOG.md#changelog), you can now use Datomic Pro!
May I suggest that someone on the Datomic team add a note about the below to this page? https://my.datomic.com/account
;; ~/.m2/settings.xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              https://maven.apache.org/xsd/settings-1.0.0.xsd">
  <servers>
    <server>
      <id>my.datomic.com</id>
      <username>USERNAME</username>
      <password>PASSWORD</password>
    </server>
  </servers>
</settings>
;; deps.edn
{:deps
 {com.datomic/datomic-pro {:mvn/version "0.9.5697"}}
 :mvn/repos
 {"" {:url ""}}}
thanks @alexmiller @dominicm !#2018-05-1813:38cgrandHi! What does :db.error/cycle-in-affinity mean exactly? (I can work around the issue but I’d like to understand better what’s going on)#2018-05-1816:28brycecovertI’m having trouble distinguishing between whether to use datomic’s api (https://docs.datomic.com/on-prem/clojure/index.html), or the client api (https://docs.datomic.com/client-api/datomic.client.api.html). The APIs are similar, but different enough. FWIW I’m using datomic starter.#2018-05-1816:30rapskalian@brycecovert if you're just trying to tinker, I've found Datomic Cloud to be very quick to get to a working environment. I suppose it depends what your goals are.#2018-05-1816:33brycecovertI am mostly just wanting to learn and see if it makes sense to integrate datomic into an existing system. if it does, deploying it through docker would be the most straightforward way. My assumption is that would be wrapping up datomic starter#2018-05-1816:34brycecovertare one of these apis meant to be used for cloud and the other on-prem/starter?#2018-05-1816:40brycecovertOh, I think I’ve got it. One is the peer api and the other is the client api.#2018-05-1816:44rapskalianMy understanding is that there is both a peer and a client library available for use with On-Prem.
Cloud has its own separate client library (and only a client library).#2018-05-1816:46rapskalianI don't have any experience with launching Datomic via Docker, but it looks like there are plenty of options over on Docker Hub.
(https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=datomic&starCount=0)#2018-05-1816:48rapskalianThose are all going to be the unlicensed variant I imagine though...so they wouldn't be used for production.#2018-05-1816:49rapskalian>I am mostly just wanting to learn and see if it makes sense to integrate datomic into an existing system.
I'd personally recommend launching a Datomic Cloud (Solo) instance in this case. It's easy to get set up, and it can directly transition into a production deployment without any API changes necessary.#2018-05-1816:50rapskalianI have one running in my AWS account that I just create lots of ephemeral databases on for testing ideas and for development#2018-05-1816:50brycecovertCool thanks!#2018-05-1816:51brycecovertThat’s helpful - I’ll try out solo.#2018-05-1816:51rapskalianSetup link is here: https://docs.datomic.com/cloud/setting-up.html#2018-05-1819:17bedersis there any way you can use the peer library with the cloud version on AWS?#2018-05-1819:20bedersI'm mostly interested in taking advantage of the client side cache#2018-05-1918:44stuarthalloway@beders not the peer library, but stay tuned — we are working on something#2018-05-1918:47bedersthanks for the reply, Stuart. While I have you: GDPR. The right of a user to be forgotten. I assume Excision is designed to help with that.
Other than hard-core deleting datoms, is there a way to override the values with other content? (i.e. anonymizing a user for example)#2018-05-2012:13Peter Wilkins@beders https://hk.saowen.com/a/6b7a98d508424e6f3ea79e64a50506a349468a22003437db33f2f3a39b589cc9#2018-05-2018:56andrewhrLooks like a mirror from @val_waeselynck’s blog https://vvvvalvalval.github.io/posts/2018-05-01-making-a-datomic-system-gdpr-compliant.html#2018-05-2020:22alexmillerI’m not a lawyer, and I’m not speaking on behalf of the Datomic team, but this seems to mangle a lot of Datomic’s value for a possibly questionable goal in the face of a rapidly evolving legal area, which may actually not require anything of the sort.#2018-05-2108:02val_waeselynck@U064X3EF3 maybe you would think differently if you worked in the EU, this stuff is getting pretty tangible there :) . In any case, I felt it was important to make people aware of this option so that they don't consider Datomic's immutability a deal breaker.#2018-05-2108:07val_waeselynckI also disagree about it mangling a "lot" of Datomic's value, from experience, and I tried to articulate that in the article (criticism welcome). As a comparison, IMO you lose much less power by adopting this approach than by moving from Peer/On-Prem to Client/Cloud.#2018-05-2108:11val_waeselynckFinally, from a non-legal standpoint, I will add that I witness every day the temptation to abuse the data of our users within my company, and I want to technically enforce its protection - who knows how things will evolve, especially once I'm gone?#2018-05-2106:24Andreas LiljeqvistI would wait for the debate to settle before doing anything. At the moment much is unclear, a simple retract might be enough.#2018-05-2109:27joshkhhi @marshall - i was wondering if any progress was made investigating multiple datomic cloud clients not fetching the latest version of the database using d/db without first "updating" the connection with a bogus call to d/q? 
https://stackoverflow.com/questions/50347307/how-do-i-keep-two-datomic-cloud-clients-in-sync#2018-05-2109:31joshkhmaybe there's an issue we can track? 🙂 we've had to build a phantom query into our API library that gets called every time before a pull is performed (with the other option being to reconnect before each query).#2018-05-2116:23jdkealyHi i just accidentally called retractEntity on half my development database without making a backup. Is it possible to undo all that ? I can ditch all transactions that happened since that transaction.#2018-05-2116:24marshallyou can take the set of datoms from that last transaction (use the log API to get the last transaction)#2018-05-2116:24marshalland then “reverse” them#2018-05-2116:24marshalli.e. change all assertions to retractions and retractions to assertions#2018-05-2116:25jdkealysweet 🙂#2018-05-2116:30favilahttps://stackoverflow.com/questions/25389807/how-do-i-undo-a-transaction-in-datomic @jdkealy#2018-05-2116:36jdkealywow holy shit that was easy 🙂#2018-05-2121:19dottedmagWhat's the most natural way to store an array of entities in Datomic? I'm trying to come up with a way to model a manually sorted list of items. First I thought giving every item a "sort id" integer that can be adjusted when a user drags and drops it, but I can't see how to painlessly deal with "no more integers between two items you're trying to drop your item between".#2018-05-2121:21alexmillerYou can either do ordinals for indexing or a linked list approach#2018-05-2121:21alexmillerBoth have pros and cons#2018-05-2121:27dottedmagSeems to be a FAQ 🙂#2018-05-2121:28dottedmag@alexmiller Thanks#2018-05-2122:03steveb8n@dottedmag https://github.com/vvvvalvalval/datofu supports ordinals and https://github.com/dwhjames/datomic-linklist does linked list. 
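On dottedmag's ordering question, one way around "no more integers between two items" with ordinals is to drop the integer requirement: give each item an exact-numeric sort key and assign the midpoint of the new neighbours' keys on a drop. A sketch with Clojure ratios (Datomic has no ratio value type, so in practice you would store a bigdec or periodically re-space the keys; the helper below is hypothetical):

```clojure
;; Midpoint of two neighbours' sort keys. Exact (ratio) arithmetic
;; means there is always room between any two distinct keys.
(defn between [lo hi]
  (/ (+ lo hi) 2))

;; Drop an item between keys 1 and 2, then another between 1 and the
;; new key -- no renumbering of the existing items is ever needed.
(def k1 (between 1 2))  ;; => 3/2
(def k2 (between 1 k1)) ;; => 5/4

(sort [2 k1 k2 1]) ;; => (1 5/4 3/2 2), the drop order
```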
I have been using linked-list but it makes datalog queries tricky so I’ll probably switch to ordinals soon#2018-05-2123:56dustingetzIs there a secret Javascript datomic client that it is possible to get access to?#2018-05-2201:57souenzzo"No"
The Datomic client uses HTTP
https://mvnrepository.com/artifact/com.datomic/clj-client
You can inspect some jars, find the clj files, check out the http calls and reimplement in cljs
Remember: There is no stable/public api yet, it may change/break on any update.#2018-05-2212:45dustingetzYeah, I have been through those, it seems super apparent that an internal/unreleased javascript client exists#2018-05-2212:49souenzzoNow you know more than me 👀#2018-05-2219:37hmaurer@U09K620SG how so?#2018-05-2713:35madstapI think David Nolen said as much in one of his talks#2018-05-2213:03dustingetzMy client wants to use Datomic from Nodejs. Are they supposed to use the deprecated rest api?#2018-05-2213:17val_waeselynckor via GraalVM 😛#2018-05-2213:18souenzzoHey, just a [OFF] idea. but you can use #graalvm to access datomic.api from nodejs.
https://www.graalvm.org/
From What does GraalVM do? example, you may be allowed to access datomic.api and use from nodejs with something like
result = datomic.api.q(edn`[:find ?e :in $ ?id :where [?e :user/id ?id]]`, db, user_id)
Where edn is a "tagged string function" that transforms string into edn.
I will write a blog post about that some day#2018-05-2214:00dustingetzThat's amazing, has this been tested?#2018-05-2214:43souenzzoNope. But you can test it and give us feedback 🙂#2018-05-2215:21val_waeselynckFrankly, if it were me, I would probably not do something so experimental at my client's - or at least make it easy to get out of this strategy#2018-05-2215:24dustingetzYes, I fear the answer is to tell them not to use Datomic#2018-05-2215:27dustingetzHowever I am probably willing to maintain a cljs/js client, but not without coordinating with cognitect#2018-05-2219:04hmaurer@U09K620SG Cognitect is supposed to be open-sourcing the documentation of the client protocol#2018-05-2219:05hmaurerI am not sure when that will be though…#2018-05-2302:40souenzzoHey I just made some snippets about how to access datomic from javascript (using graalvm)
Important notes:
- :heavy_exclamation_mark: Experimental :heavy_exclamation_mark:
- You don't need to run JS "inside" clj/java. You can run a JS file with graal directly (but you will need to set up the classpath)
- in the middle of development I realized that it would be easier to use the Java API than the Clojure API
https://gist.github.com/souenzzo/c4719d45e804767c97f6f5be1bcdd1c5#2018-05-2313:40hmaurer@U2J4FRT2T ah, using graal. nice one!#2018-05-2221:51eraadSome entities have the Stripe-related property, others don’t.#2018-05-2313:18chris_johnsonQuestion re: on-prem and datomic:ddb// uris - is there a way to support STS-mediated IAM roles (e.g., access-key, secret-key, token) using Datomic on-prem, or does the role used by Datomic systems have to be attached to an IAM user with programmatic access?#2018-05-2313:38chris_johnsonI created a forum post about this question too, since that seems to be an emerging best practice: https://forum.datomic.com/t/dynamodb-datomic-ddb-connect-uri-and-aws-sts-roles-can-we-provide-the-token-for-a-keypair/436#2018-05-2314:03chris_johnsonYou know what, I think the main issue here is an abject failure of reading comprehension on my part. I will report back in one (1) Docker build/deploy cycle time.#2018-05-2314:27chris_johnsonYes - it was me misreading the docs and missing the line specifying that if you provide no aws_access_key or aws_secret_key in the URI, Datomic will fetch the credentials from the default chain, which works just fine. 😅#2018-05-2318:17sparkofreasonI'd like to be able to run tests against a clean database for code written for Datomic cloud. When running against the cloud instance I can create/delete databases as needed, but this doesn't seem possible when running a local peer server backed by the mem transactor. Is there a way to start/kill peer servers from code for test purposes?#2018-05-2318:36timgilbertIt should be possible to create and delete databases willy-nilly with the mem:// transactor, our unit test suite does this kind of thing a lot#2018-05-2318:43favilaI don't think it's possible to dynamically change the dbs that a peer server is serving#2018-05-2318:44favilaI also am not sure you can serve a mem db from a peer server anyway#2018-05-2320:52marshallPeer server can indeed run mem dbs. 
Giving it a mem db URI at startup will cause the peer server to create and serve that mem db#2018-05-2320:52marshall@U066LQXPZ ^#2018-05-2322:13sparkofreasonRight. How can I start/stop peer servers from code?#2018-05-2403:45sparkofreason^^^ @U05120CBV#2018-05-2318:45favilafor the first problem (can't reload peer server's server list) maybe you can figure out how to start the peer server directly (likely it's just a clojure function) and make your own peer process with a "reload" or "change dbs" side channel#2018-05-2318:46favilathis is just a slightly faster and more elegant kill-and-restart#2018-05-2320:28cmcfarlenHello datomic slack. I'm seeing some strange behavior with :db.type/bigint attributes and queries. The issue involves storing values as clojure.lang.BigInt and querying as java.math.BigInteger (using the 'q fn). In the memdb, I have to query the type that I gave. Using a sql storage backend, I must query using java.math.BigInteger regardless. Using the 'entity fn and an ident ref I can query using either type for any storage.#2018-05-2320:30cmcfarlenI can kind of reason about why this might be, but the inconsistency was surprising.#2018-05-2320:32cmcfarlenhttps://gist.github.com/cmcfarlen/33d9a8f7e0f926db7d112326e7523792#2018-05-2320:32cmcfarlenThis code reproduces the issue#2018-05-2415:51adamfreyis there a way to shutdown a datomic cloud client? I've found that when I create a datomic cloud client in a script, my script will hang instead of exiting. 
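Picking up timgilbert's point above about the mem:// transactor: with the peer library, throwaway databases can be created and deleted freely. A minimal hedged sketch (the URI name is arbitrary):

```clojure
;; Minimal sketch: throwaway in-memory databases with the peer library.
;; Each distinct datomic:mem:// URI is an independent database.
(require '[datomic.api :as d])

(def uri "datomic:mem://test-db")

(d/create-database uri)    ; true on first creation
(def conn (d/connect uri))
;; ... exercise the code under test against conn ...
(d/delete-database uri)    ; true; the database is gone
```

This pattern gives each test a clean database without restarting any process, which is what makes mem:// attractive for unit suites.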
I tried to call shutdown-agents but that didn't work#2018-05-2416:06alexmillercan you thread dump and see what threads are still alive?#2018-05-2416:10adamfreyyes, but I don't know how to do that#2018-05-2416:10adamfreyI found someone with a helpful blog post: https://puredanger.github.io/tech.puredanger.com/2010/05/30/clojure-thread-tricks/#2018-05-2416:10adamfrey😉#2018-05-2416:12alexmillerdon’t trust that guy, he’s an idiot#2018-05-2416:13adamfrey(def d-client (datomic.init/init-client (datomic.init/conn-config)))
Reflection warning, cognitect/hmac_authn.clj:80:12 - call to static method encodeHex on org.apache.commons.codec.binary.Hex can't be resolved (argument types: unknown, java.lang.Boolean).
Reflection warning, cognitect/hmac_authn.clj:80:3 - call to java.lang.String ctor can't be resolved.
2018-05-24 12:11:56.637:INFO::main: Logging initialized @12428ms
=> #'price-alerts.query-test/d-client
(shutdown-agents)
=> nil
(prn
(.dumpAllThreads
(java.lang.management.ManagementFactory/getThreadMXBean)
false
false))
#object["[Ljava.lang.management.ThreadInfo;" 0x5ebe1552 "[Ljava.lang.management.ThreadInfo;@5ebe1552"]
=> nil
(on-exit (fn* [] (prn "done.....")))
=> nil
#2018-05-2416:14adamfreyhere's output from my script#2018-05-2416:15alexmillerif you’re in a repl, just ctrl-\#2018-05-2416:16adamfreythis is in Stu's transcriptor, but I've noticed the same hanging behavior in all my clj run tasks that start up a datomic client#2018-05-2418:57sparkofreasonFor test purposes, I am able to start/stop peer servers by running/killing the run script and its child java process by calling the OS shell from clojure. It's an ugly solution, probably OS-dependent, and every call to run takes a fair amount of time to complete. Dev/test processes would be facilitated if I could run just one peer server process and create/delete mem DBs programatically.#2018-05-2420:48adamfreyI'm using datomic cloud, not the peer server, so I don't have a run script in my case#2018-05-2421:06matthaveneris that ctrl-\ documented somewhere? I’d never heard of it until today.. wondering if there’s more 🙂#2018-05-2421:48alexmillerit’s a jvm thing (ctrl-break on windows, ctrl-\ in *nix)#2018-05-2421:48alexmillerI don’t think there are any other standard handlers other than ctrl-c#2018-05-2421:48alexmillerand most Clojure repls use ctrl-d to quit (although some use ctrl-c)#2018-05-2512:27jetzajacJust curious. When I go to Datomic with the query (:some-key (entity 43)) it has to return datom with the latest tx possible. Given it uses [e a v t] index for that, does it mean that it will scan entire history of the attr? or th required Datom will be found before the historical one somehow?#2018-05-2512:43souenzzoThere is a EAV Index and this operation should be fast as a hash-map access.#2018-05-2512:45jetzajacEAV includes just recent data somehow?#2018-05-2512:46souenzzoIt's something like "lazy" cache.
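souenzzo's point about entity lookup can be seen directly with the datoms API: the current EAVT index holds only the latest asserted values (history lives in a separate history view), so the lookup is an index seek, not a scan. A hedged sketch:

```clojure
;; Hedged sketch: (:some-key (d/entity db 43)) is backed by a seek into
;; the EAVT index, which contains only currently-asserted datoms;
;; retracted or superseded values are visible only via (d/history db).
(require '[datomic.api :as d])

(first (d/datoms db :eavt 43 :some-key))
;; the single current datom for entity 43 / :some-key, or nil if none
```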
In the first access it can be slow, if your DB is larger than your RAM.#2018-05-2601:54sparkofreasonIt occurred during testing, app code creates its own connection while test code uses another. The workaround is to be sure to create the test connection after the app code runs.#2018-05-2614:35mishagreetings, how does licensing work with staging environments? Is there something to read in any detail?
for example, if I have 3 envs: sandbox, qa, production, what should my datomic deploy look like?
1 deploy per env, or single deploy with different DBs within it per env? Or something completely different?#2018-05-2614:37mishaNext, is datoms-limit™ - per "transactor" or per DB "within transactor"?#2018-05-2713:54dustingetzStu wrote in 2015: "10 billion datoms is not a hard limit, but all of the reasons above you should think twice before putting significantly more than 10 billion datoms in a single database." https://groups.google.com/d/msg/datomic/iZHvQfamirI/RANYkrUjAEwJ#2018-05-2708:56dominicm@misha a license is for a "system" and includes staging and qa.#2018-05-2708:56dominicmTo be super clear, that means you would have 1 license to cover all 3 environments.#2018-05-2711:59mishaso "system" is my system, not "datomic setup", nice#2018-05-2716:32donaldballDoes anyone know why d/tx-range doesn’t report on the datoms that appear transacted in a new database? In such a database, I see system datoms transacted at times [0 54 56 63].#2018-05-2718:13favilaIt starts at t=1000#2018-05-2718:14favilaIf you want everything you can look at the index for the db/txInstant attribute#2018-05-2718:14favila:aevt#2018-05-2717:56misha@dustingetz thanks, read that one few times. It is just, almost everyone in google groups uses db/sharding/system/nodes/connections /transactors/peers very loosely (or at least db-as-application-data vs db-as-actual-datomic-db-term - interchangeably). And depending on actual meaning – answer's meaning changes dramatically.#2018-05-2718:58bkamphaus@misha the meaning is per database, though several databases adding up to exceed 10 billion datoms behind the same transactor would encounter some perf challenges as well. 🙂 And you’re probably safe assuming when anyone on the Datomic team says “database” they mean database rather than peer, transactor or something else. 
Precision in use of terminology is definitely a goal there in community support.#2018-05-2720:18misha@bkamphaus thank you#2018-05-2803:02drewverleeWhats the idiomatic way to use a previous value as the argument to the next value? Say i want to update all the people with name “drew” to “drew rocks” or “drew” + “something”
6 :person/name “drew rocks” evening
6 :person/name “drew” morning#2018-05-2803:04drewverleei can query for the entity id and name then use those in a transact.#2018-05-2808:25val_waeselynck@U0DJ4T5U1 I assume the challenge here is to do it without race conditions? In that case, the way to go is to use a transaction function (https://docs.datomic.com/on-prem/database-functions.html), e.g [[:myapp/replace-first-name "drew" "drew rocks"]].#2018-05-2808:31val_waeselynckIf you're using Datofu (https://github.com/vvvvalvalval/datofu#writing-migrations-in-datalog), you can use the more general :datofu.utils/datalog-add transaction function, which acts as a Datalog interpreter, so that you don't have to create and install a custom transaction function. E.g:
[[:datofu.utils/datalog-add
  '[:find ?e ?a ?new-name :in $ ?old-name ?new-name :where
    [(ground :user/first-name) ?a]
    [?e ?a ?old-name]]
  ["drew" "drew rocks"]]]
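val_waeselynck's first suggestion, a custom transaction function, might be sketched like this; the :user/first-name attribute is taken from the Datofu example above, and installing it under :myapp/replace-first-name is an assumption:

```clojure
;; Hedged sketch of a transaction function like :myapp/replace-first-name.
;; It runs inside the transactor, so the read-then-write is atomic and
;; free of the race condition a peer-side query-then-transact would have.
(require '[datomic.api :as d])

(def replace-first-name-fn
  (d/function
    '{:lang   :clojure
      :params [db old-name new-name]
      :code   (for [[e] (datomic.api/q '[:find ?e
                                         :in $ ?old
                                         :where [?e :user/first-name ?old]]
                                       db old-name)]
                [:db/add e :user/first-name new-name])}))

;; Install once, then call as [[:myapp/replace-first-name "drew" "drew rocks"]]
;; @(d/transact conn [{:db/ident :myapp/replace-first-name
;;                     :db/fn    replace-first-name-fn}])
```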
#2018-05-2807:06ttxWhat is the justification for reverse reference of multi-arity "component" attributes returning only a single value in a datomic pull? Is there any way to return all the values? Relevant doc: https://docs.datomic.com/on-prem/pull.html#multiple-results#2018-05-2812:48favilaThe justification is that a component attr should always be the only way to reach its entity value#2018-05-2812:48favilaOtherwise it’s not truly a component#2018-05-2902:24souenzzo@U4XHJ3J9H you can do this
(let [db (d/db conn)
      {:keys [db-after]} (->> (d/q '[:find ?op ?e ?a ?v
                                     :where
                                     [(ground :db/retract) ?op]
                                     [(ground :db/isComponent) ?a]
                                     [?e :db/isComponent ?v]]
                                   db)
                              (vec)
                              (d/with db))]
  (d/pull db-after [:your/_pattern] eid))
#2018-05-2906:57ttx@U2J4FRT2T Not sure about what is being achieved by the code snippet. Can you please explain it to me?#2018-05-2907:36favilaThis removes all isComponent annotations, writes it to a local (i.e. forked) db, then pulls from that db#2018-05-2907:36favilaIn essence, temporarily un-isComponent-ing all attributes so a reverse pull will always be cardinality many#2018-05-2908:59ttxThanks!#2018-05-2820:21mishadatomic ♥#2018-05-2820:45mishaIs there an idiomatic way to not do the transaction if key-value did not change? Now it does not duplicate fact assertion, but still generates a transaction datom.
It sounds like a tiny optimization, but, for example, for .csv file imports, if I want to have more granular error reports, I would not be able to put all the lines into a single transaction, I'd need to batch them. But that would "waste" 1 datom per batch even if batch changed nothing in DB. Smaller the batches – higher the chance of wasting precious datoms on repetitive (large) files imports. Especially, if I will put some meta info about file import in tx.#2018-05-2820:53mishadoes :db/noHistory reduce datoms count over time, or somehow just reduces space, and that's it?#2018-05-2820:54mishais it backed up by excision under the hood?#2018-05-2820:57sparkofreasonHas anybody successfully accessed Datomic Cloud across peered VPCs? Our app uses another service that requires VPC peering, and the same procedures do not work when applied to Datomic.#2018-05-2820:59hcarvalhoaves@misha maybe you can avoid the empty transaction altogether w/ a transaction function by running a query on the transactor - that could negatively impact your transactor though, possibly more than just having the empty tx#2018-05-2821:03misha@hcarvalhoaves yeah, thought about that, and would need to evaluate incoming files frequency vs amount of empty tx-datoms generated. However, transactions would contain fairly large nested maps, and comparing those with db data inside a transaction would likely be slower than I'd like it to be.#2018-05-2821:28mishaon the other hand, looking at error types which make tx fail, those are either network or dev errors: connection reset/timeout, and invalid value type. Which means, with enough testing, ETL step can collect batch of valid tx data from ~1k rows for a single transaction.#2018-05-2821:33mishaspeaking of errors. I'd be delighted to see actual value, attribute and it's type in ExceptionInfo data map:
datomic.impl.Exceptions$IllegalArgumentExceptionInfo: :db.error/wrong-type-for-attribute Value 1 is not a valid :bool for attribute :foo/bar?
data: #:db{:error :db.error/wrong-type-for-attribute}
From the error above you cannot tell that 1 was in fact string "1".#2018-05-2822:28Peter WilkinsHi all, just learning Datomic. Having a lot of fun. However I have hit 2 queries I need a bit of help with:
1: get name and id for topics not in category
(defn orphaned-topics [conn]
  (d/q {:query '[:find [(pull ?topics [:topic/name :topic/id]) ...]
                 :in $ ?taxonomy
                 :where [?t]
                 [?tax :taxonomy/id ?taxonomy]
                 [?tax :taxonomy/categories ?cats]
                 (not [?cats :category/topics ?t])]
        :args [(d/db conn) "z5ojxcs40azi"]}))
Error message is “Only find-rel elements are allowed in client find-spec”. I don’t understand what a find-rel is.
2: full text search returns a 500 server error after a long delay
(defn search-keywords [conn query]
  (d/q {:query '[:find ?entity ?name ?tx ?score
                 :in $ ?search
                 :where [(fulltext $ :keyword/phrase ?search) [[?entity ?name ?tx ?score]]]]
        :args [(d/db conn) query]}))
CompilerException clojure.lang.ExceptionInfo: Server Error {:datomic.client-spi/request-id “f90939d2-b6f2-4c55-a8e3-18af3fa7e0b5", :cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message “Server Error”, :dbs [{:database-id “0ed6aab0-5e31-400f-8fd7-dc40dc67df98", :t 11, :next-t 12, :history false}]}
Relevent schema:
#:db{:ident :category/id, :valueType :db.type/string, :cardinality :db.cardinality/one :unique :db.unique/identity}
#:db{:ident :category/name, :valueType :db.type/string, :cardinality :db.cardinality/one}
#:db{:ident :category/topics, :valueType :db.type/ref, :cardinality :db.cardinality/many}
#:db{:ident :category/weight, :valueType :db.type/float, :cardinality :db.cardinality/one}
#:db{:ident :keyword/excludes, :valueType :db.type/string, :cardinality :db.cardinality/many, :fulltext true}
#:db{:ident :keyword/phrase, :valueType :db.type/string, :cardinality :db.cardinality/one, :fulltext true}
#:db{:ident :taxonomy/categories, :valueType :db.type/ref, :cardinality :db.cardinality/many, :isComponent true}
#:db{:ident :taxonomy/editable, :valueType :db.type/boolean, :cardinality :db.cardinality/one}
#:db{:ident :taxonomy/id, :valueType :db.type/string, :cardinality :db.cardinality/one :unique :db.unique/identity}
#:db{:ident :taxonomy/name, :valueType :db.type/string, :cardinality :db.cardinality/one}
#:db{:ident :taxonomy/organization, :valueType :db.type/ref, :cardinality :db.cardinality/one}
#:db{:ident :taxonomy-input/categories, :valueType :db.type/ref, :cardinality :db.cardinality/many}
#:db{:ident :taxonomy-input/name, :valueType :db.type/string, :cardinality :db.cardinality/one}
#:db{:ident :topic/document-count, :valueType :db.type/long, :cardinality :db.cardinality/one}
#:db{:ident :topic/id, :valueType :db.type/string, :cardinality :db.cardinality/one, :unique :db.unique/identity }
#:db{:ident :topic/keywords, :valueType :db.type/ref, :cardinality :db.cardinality/many, :isComponent true}
#:db{:ident :topic/name, :valueType :db.type/string, :cardinality :db.cardinality/one, :fulltext true}
#:db{:ident :topic/type, :valueType :db.type/ref, :cardinality :db.cardinality/one}
#:db{:ident :topic-type/company}
#:db{:ident :topic-type/risk}
Thanks for reading!#2018-05-2902:28souenzzo1- pass db, not conn, to your functions. db is immutable
2- you pull ?topics but match :category/topics ?t
3- not sure if fulltext is available on datomic cloud
'[:find ?pc (pull (distinct ?c) [:company/name])
  :in $
  ... ]
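Pull expressions accept entities, not aggregates, so a common workaround is to aggregate in the query and pull in a second pass. A hedged sketch; the :company/parent attribute is hypothetical, standing in for the elided :where clauses above:

```clojure
;; Hedged sketch: aggregate first, then pull each entity in a second step.
;; :company/parent is a hypothetical ref attribute used for illustration.
(require '[datomic.api :as d])

(let [rows (d/q '[:find ?pc (distinct ?c)
                  :where [?c :company/parent ?pc]]
                db)]
  (for [[pc companies] rows]
    {:parent    pc
     :companies (map #(d/pull db [:company/name] %) companies)}))
```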
#2018-05-3012:50stuarthalloway@gavanitrate no, pull takes entities only#2018-05-3015:17drewverleeI feel like it would be useful to expand http://www.learndatalogtoday.org/ to have more examples and show more options. For example, the pull api. Does anyone know if the maintainer takes pull requests or who to talk to about that?#2018-05-3015:20val_waeselynckWhy not ask him directly? It does say PRs/feedback welcome https://github.com/jonase/learndatalogtoday#feedback-welcome#2018-05-3015:22drewverleegood point. I suppose i should have put the emphasis on the first part. I'm more curious if people think it should be expanded. I personally find it hard to learn datomic without working through the examples. I wonder if maybe i should be trying to understand it through the grammar.#2018-05-3016:10misha@U0DJ4T5U1 https://github.com/Datomic/day-of-datomic/tree/master/tutorial might be useful for you then#2018-05-3016:50drewverleethanks @U051HUZLD#2018-05-3113:43mishacan I declare a datomic database-function with more than 1 arity?#2018-05-3113:52val_waeselynckI don't think so, but you'll be fine using collections to achieve the same objective#2018-06-0110:34mishacan I pass _ as an argument to d/q? to avoid explicitly implementing extra arity in cases like:
(defn f
  ;; what I want to write:
  ([db a] (f db a '_))
  ;; what I have to write:
  ([db a] (d/q '[:find [?e ...] :in $ ?a :where [?e ?a]] db a))
  ([db a v] (d/q '[:find [?e ...] :in $ ?a ?v :where [?e ?a ?v]] db a v)))
#2018-06-0111:42souenzzo(defn f
  [db & args]
  (let [[_ & syms
         :as frags] (into '[?e] (for [i args
                                      :when (not (nil? i))]
                                  (gensym "?arg-")))
        query (into '[:find [?e ...] :in] (concat syms [:where]))]
    (apply vector (conj query frags) db args)))
#2018-06-0116:02mishano kappa
the actual query I need it for is not that much larger than the one in the example, and I choose readability over the spell you suggested, @U2J4FRT2T
however, thank you : )#2018-06-0110:35mishaI know that pull can accept "*" as a string, but neither "_" nor '_ seem to work here ^^^#2018-06-0110:53mishais :db.install/valueType "exposed" to datomic users? Did anyone try install any composite types yet? can't seem google anything related#2018-06-0112:43dustingetz@U051HUZLD what are you trying to do?#2018-06-0116:07misha@U064X3EF3 I think I am asking exactly that.
@U09K620SG the use case is usual – to put something in db without forgetting to pr-str read-string. But in this particular case, I just stumbled upon it and wanted to explore.#2018-06-0116:11alexmilleryeah, datomic attribute types are fixed (for now at least). Clojure certainly makes it possible to consider extensible types at a future point though.#2018-06-0112:42alexmillerIt’s not extensible if that’s what you’re asking#2018-06-0115:20bjIs it possible to include the transaction time of an attribute in a pull?#2018-06-0115:44alexmillerI think you have to use query to get to the transaction component and its attributes#2018-06-0208:50emil0rIs it possible to start up an instance of datomic free from inside an application? Ie, I don't spin one up with the provided scripts, but do it from inside the application#2018-06-0216:49ezmiller77@emil0r I am not sure I understood you question but I think you do need to run datomic as a service separately, though you could probably write a script to automate that process somehow.#2018-06-0216:50ezmiller77Does anyone know why datomic cloud no longer has the d/squuid func? Is it no longer needed?#2018-06-0222:40val_waeselynckNot needed since adaptive indexing. Wish they updated the docs about that.#2018-06-0304:26ezmiller77Thanks#2018-06-0311:18Andreas LiljeqvistIs this true for onprem as well?#2018-06-0314:20ezmiller77I put the question on the Datomic forum. We could document it there to some extent.#2018-06-0314:20ezmiller77https://forum.datomic.com/t/why-no-d-squuid-in-datomic-client-api/446#2018-06-0317:26val_waeselynck@U7YG6TEKW yes true for on prem as well#2018-06-0222:30rapskalianI think I remember this being discussed here before, but does anyone else have trouble with the Datomic Cloud SOCKS proxy timing out or otherwise acting unreliably in the face of frequent REPL reloads? I regularly have to restart the proxy process in order to reconnect to Cloud.#2018-06-0304:25ezmiller77@cjsauer I've been experiencing that as well. 
It seems to close out periodically. One could wrap it in some sort of service to restart it when it crashes.#2018-06-0401:55ezmiller77Hi all, I've been struggling with what seems to be a dependency conflict problem between Datomic Cloud and ring. At least, it first presented itself in that guise. Now I'm less sure, but it's an error that appears when d/client is called. I've created a branch on a test repo to show what I mean: https://github.com/ezmiller/datomic-ring-dep-conflict/tree/exlusions-from-datomic-cloud. The errors that arise still smack of a dep conflict in the sense that there's a missing class: java.lang.ClassNotFoundException: org.eclipse.jetty.client.HttpClient, compiling:(cognitect/http_client.clj:1:1) and Caused by java.lang.ClassNotFoundException org.eclipse.jetty.client.HttpClient`. The full stack trace is in the README in the repo.#2018-06-0401:58alexmillerThis is doc’ed on the Datomic faq page I think #2018-06-0401:59alexmillerhttps://docs.datomic.com/cloud/troubleshooting.html#dependency-conflict#2018-06-0401:59ezmiller77@alexmiller I think you are referring to the troubleshooting section referencing the jetty dep conflict? This: https://docs.datomic.com/cloud/troubleshooting.html#dependency-conflict#2018-06-0402:00ezmiller77Right. Yeah. In that branch of the repo, I've got those exclusions added. The error happens when you call d/client.#2018-06-0402:02alexmillerHmm, well that’s more than I can diagnose on my phone :)#2018-06-0402:03ezmiller77🙂 So far it's been more than I can diagnose at all!#2018-06-0402:03ezmiller77Wasted at least 6 hours on this today.#2018-06-0402:04alexmillerIf you exclude Jetty don’t you need to include it somehow ?#2018-06-0402:05alexmillerIf you lein deps :tree what’s including jetty?#2018-06-0402:07ezmiller77Both datomic.cloud and ring reference parts of jetty normally, which creates the dependency conflicts. 
My understanding is that the exclusions are placed on one side to defer to the inclusions by the other package. In this case, the recommendation in the troubleshooting doc is I think deferring to the versions included by ring. Part of this, also, I gather, is that the way these packages work you can only have one version dependency in a project since they all somehow exist in a global space. (I'm not sure about this but I gathered it from a comment at the end of this thread: http://discuss.purelyfunctional.tv/t/how-to-detect-and-workaround-dependency-conflicts/516/4).
Here's the relevant part of lein deps :tree with the exclusions applied:
...
[ring "1.7.0-RC1"]
[ring/ring-core "1.7.0-RC1"]
[clj-time "0.14.3"]
[commons-fileupload "1.3.3"]
[commons-io "2.6"]
[crypto-equality "1.0.0"]
[crypto-random "1.2.0"]
[ring/ring-devel "1.7.0-RC1"]
[clj-stacktrace "0.2.8"]
[hiccup "1.0.5"]
[ns-tracker "0.3.1"]
[org.clojure/java.classpath "0.2.3"]
[org.clojure/tools.namespace "0.2.11"]
[ring/ring-jetty-adapter "1.7.0-RC1"]
[org.eclipse.jetty/jetty-server "9.2.24.v20180105"]
[javax.servlet/javax.servlet-api "3.1.0"]
[org.eclipse.jetty/jetty-http "9.2.24.v20180105"]
[org.eclipse.jetty/jetty-util "9.2.24.v20180105"]
[org.eclipse.jetty/jetty-io "9.2.24.v20180105"]
#2018-06-0405:27ezmiller77What seems to be a solution was provided by @shohs on the Datomic Forum: https://forum.datomic.com/t/dependency-conflict-with-ring/447/4?u=emiller#2018-06-0405:28ezmiller77The exclusions suggested in the "Troubleshooting" text did not work. Removing them and then adding [org.eclipse.jetty/jetty-server “9.3.7.v20160115”] as a top-level dep does. At least so far...#2018-06-0408:52jumar@ezmiller77 I tried to diagnose your problem a bit more and I think you can solve it by explicitly using newer version of jetty-server and jetty-client explicitly.
See my answer here: https://stackoverflow.com/a/50676715/1184752
Also the related change: https://github.com/ezmiller/datomic-ring-dep-conflict/pull/1/files#2018-06-0408:55jumarI've been following official datomic cloud tutorial which is pretty good. However, I've struggled a bit with following
#:cognitect.anomalies{:category :cognitect.anomalies/forbidden,
:message
"Forbidden to read keyfile at ************/juraj-datomic-cloud/datomic/access/admin/.keys. Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile."}
#2018-06-0408:58jumarEventually, I've found that I can specify :creds-profile in datomic client config, but found that only by reading source code.
Although it was related to credentials profiles which datomic cloud's documentation doesn't use I think it would be useful to mention that in the documentation because it's pretty common to use profiles.#2018-06-0413:05marshallYou should be able to use any standard method of AWS credential management#2018-06-0413:05marshallas indicated here https://docs.datomic.com/cloud/getting-started/connecting.html#access-keys#2018-06-0413:05marshallthe environment you run in must have proper IAM credentials#2018-06-0413:05marshall(envars, aws profile)#2018-06-0413:12ezmiller77@jumar I was also able to get in with IAM. Did you grant access to the IAM group for SSH?#2018-06-0413:12ezmiller77Oh I see @U05120CBV already pointed you to the relevant section of the docs.#2018-06-0409:07jumarI'm evaluating Datomic cloud (Solo) and using socks proxy for connecting to the database.
I'm suffering from frequent connection errors (socks proxy connection being broken every ~10 mins). Did you encounter such problems before or is there some issue in my network?#2018-06-0412:52ezmiller77@jumar: that solution, specifying the server, worked for me as well. How did you think to try specifying the server? Was it named at some point in the lein deps :tree output? Or did you work it out somehow? I'm curious to know as I tried so many combinations, but never saw the jetty-server named...#2018-06-0413:32jumarIt's a transitive ring dependency therefore you should see it in lein deps :tree output. I had been already thinking about specifying jetty server deps in project.clj explicitly because that's one way how you enforce proper versions to be used in your project effectively overriding transitive dependencies.#2018-06-0412:54ezmiller77Regarding your trouble with the broken socks proxy connection, I am also experiencing the same behavior. It troubled me but I hadn't gotten to the point where I had the luxury of considering what to do about that. I thought I might use some sort of service that restarts something when it fails. Can't remember the names off-hand.#2018-06-0413:00jaret@jumar @ezmiller77 re: connection errors. We don’t see that happening. I do recall a previous user reporting something similar and they used autossh to get around laptop sleeps etc.#2018-06-0413:01jaret>Autossh for keeping alive the socks proxy:
>Not sure who to message with this, but I have a suggestion.
>I’m using datomic cloud and developing against it, which basically means a long running datomic-socks-proxy process. This was quite painful due to frequent timeouts and disconnects, causing me to have to keep jumping across and restarting it.
>I installed autossh instead and hacked the script to use this, and it is now much more stable (and survives sleeps of my laptop). I wonder whether it might be worth having the standard script check for the installation of autossh and if found, use that instead (and maybe print a message to the user if not found, before continuing with the regular ssh client).
>For anybody interested in my little hack, I just commented out the ssh command at the bottom of the script, and added the autossh one. Like this...
`#ssh -v -i $PK -CND ${SOCKS_PORT:=8182} #2018-05-2414:28rapskalianThanks for this. I've saved your suggestion as a gist for future reference: https://gist.github.com/cjsauer/01b288a7e6fe306372b90d1930575836#2018-06-0413:01jaretThat was their suggestion, I have not tested it ^#2018-06-0413:36conanIs it possible to add an entity reference in a transaction by using a lookup ref?#2018-06-0413:37conanso for example (d/transact db-conn [{:person/name "conan" :person/team [:team/id 123]}]) if i want to add a ref to a specific team entity to a person#2018-06-0414:05griersonIn Datomic how would I model a Runner's time during a race? for example "Alice" started at 10:35 but is still currently running. Would I have a :end nil then (update :end (now)) when she finishes?
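Regarding conan's lookup-ref question above: yes, lookup refs are legal wherever an entity id is expected in transaction data, provided :team/id is a unique (:db.unique/identity or :db.unique/value) attribute and the team already exists:

```clojure
;; Lookup refs resolve to entity ids at transaction time, so a ref-typed
;; value can be written as [unique-attr value]. The transaction fails if
;; no entity has :team/id 123 when it runs.
(require '[datomic.api :as d])

@(d/transact conn
   [{:person/name "conan"
     :person/team [:team/id 123]}])
```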
But then I need to ask questions about the race such as "Who is currently running?" (filter #(nil? (:end %)) runners)#2018-06-0414:22alexmilleryou can find all runners without an end attribute with something like
(d/q
 '[:find ?runner
   :in $
   :where
   [(missing? $ ?runner :end)]]
(d/db conn))#2018-06-0420:41donaldballQuick modeling question: I want to mark attributes as deprecated, indicating they should neither exist nor be asserted in a database. I could justify using a boolean (though there’d be no reason for a false value to ever exist), a long (the t-value of the deprecation), or an instant (the time of the deprecation). Does anyone have any opinions on the best choice?#2018-06-0421:07alexmillerWell you can get the t and instant from the transaction that contains the assertion already#2018-06-0510:58souenzzoMy deprecation has 2 values:
why: string. see-also: "ref to many" that points to other attributes. If I want to know when it was deprecated, I can ask datomic when the first why was written. @donaldball#2018-05-2517:20donaldballI’m exploring using rules to express mildly complex groups of clauses to make my queries somewhat more composable. On one point, the docs aren’t totally clear. If my rule uses a variable symbol that is not part of the rule params, is it entirely distinct from any coincidental uses of that variable symbol in the query?#2018-05-2518:04val_waeselynckYes#2018-05-2518:09donaldballThanks!#2018-06-0601:35donaldballIn the rules documentation, it reads:
> We can require that variables need binding at invocation time by enclosing the required variables in a vector or list as the first argument to the rule.
Is this just a performance hint/requirement, or are there cases where this would be required in order to obtain correct results?#2018-06-0614:21uwocould the backup-db command result in transactor unavailable errors for other peers?#2018-06-0614:41uwonvm. It shouldn’t. we were using a bad uri#2018-06-0615:10zalkyHi all, I'm working on a recursive datomic rule to return all the nodes of a list attached to an entity:
'[[(link ?e ?node)
   [?e :head ?next]
   (link ?next ?node)]
  [(link ?e ?node)
   [?e :link ?next]
   (link ?next ?node)]
  [(link ?e ?node)
   [?e :type :node]
   [?node :type :node]
   [(= ?e ?node)]]]
While this works, it is somewhat slow (~1s) given we know the entity to which the head is attached, and we have only a dozen or so nodes. I'm guessing that the third clause is what slows it down. Ideally I would just have that third clause assert (= ?e ?node), but the rule then throws a :db.error/insufficient-binding error. Any ideas how to make this traversal more efficient?#2018-06-0615:13eraserhdIf you know that ?e is bound, the third clause can be [(identity ?e) ?node].#2018-06-0615:14zalkyAmazing! That did the trick.#2018-06-0615:18eraserhdI have some stuff like this, and I just went to look at it, and it doesn't make sense anymore 😄#2018-06-0615:19zalkyHa, live in the moment 😛#2018-06-0615:28zalkyFor posterity, to return just the nodes, the final clause would be:
[(link ?e ?node)
 [(identity ?e) ?node]
 [?node :type :node]]
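Editor's note: assembling zalky's rule set with eraserhd's fix into a complete call, as a sketch. The attributes :head, :link, and :type come from the snippets above; db and list-id (the entity owning the head) are assumed to be in scope:

```clojure
(def link-rules
  '[[(link ?e ?node)
     [?e :head ?next]
     (link ?next ?node)]
    [(link ?e ?node)
     [?e :link ?next]
     (link ?next ?node)]
    ;; Terminal case: ?e arrives already bound from the recursive
    ;; clauses above, so (identity ?e) simply renames it to ?node
    ;; instead of comparing every pair of :type :node entities.
    [(link ?e ?node)
     [(identity ?e) ?node]
     [?node :type :node]]])

;; Writing a rule head as (link [?e] ?node) would additionally force
;; ?e to be bound at invocation time, failing fast otherwise.
(d/q '[:find [?node ...]
       :in $ % ?list
       :where (link ?list ?node)]
     db link-rules list-id)
```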
#2018-06-0615:29eraserhdIt would be neat if Datomic had a predicate for whether a value is bound.#2018-06-0615:35eraserhdI suppose this can be done with something like, (or (and (not [(identity ?e1) :bad-value]) if-bound...) (and (not (not [(identity ?e1) :bad-value]) if-not-bound...)).#2018-06-0615:40alexmillerthere’s missing?#2018-06-0615:41alexmillerhttps://docs.datomic.com/cloud/query/query-data-reference.html#missing#2018-06-0615:42eraserhdI think that's different entirely, unless there's a trick for it that I'm ... missing ...#2018-06-0617:48favilausing identity to "rename" a binding is a pretty fundamental technique I've found#2018-06-0617:49favilaI continually run into cases where it's impossible to express the query otherwise#2018-06-0617:51favilathe only downside is it forces the clauses to run in only one direction#2018-06-0617:51favilasome kind of datalog primitive would be required to go in both directions#2018-06-0617:53favilabut writing a query without fixed ideas of what will be bound is impossible in practice. This is especially bad for rules. You can't practically speaking make generic rules that are independent of knowledge of what is bound#2018-06-0617:53favila@zalky You can force a rule to require a var to be bound by surrounding the initial args with a vector: [(link [?e] ?node) ...] for eg#2018-06-0618:29zalkyRight, I forgot about that, thanks for the pointer!#2018-06-0617:54favilabut you can't do [(link ?e [?node]) ...]#2018-06-0617:55favilaso you can't write rules that are polymorphic on what is bound#2018-06-0618:17jaretDatomic Ions are now available. http://blog.datomic.com/2018/06/datomic-ions.html
Datomic Cloud 397 and Datomic 0.9.5703 are now available#2018-06-0618:42viestiWhoa!#2018-06-0618:44viestiHaving hacked with AWS Lambda & JVM/Clojure, Ions sounds like just the thing that has been missing from the Clojure cloud world domination :)#2018-06-0618:44naomarikto resolve enums within pull api is this the general way everyone does it? {:listing/status [:db/ident]}#2018-06-0618:49richhickey@viesti we hope so!#2018-06-0619:26robert-stuttafordhi @richhickey!#2018-06-0619:26richhickey@robert-stuttaford Hi!#2018-06-0619:27alexmillerI can’t wait to see what @robert-stuttaford does with ions…#2018-06-0619:27richhickeyIf you were waiting for peer-like features for cloud, ions are that and more#2018-06-0619:27robert-stuttaford:-))) Christmas came twice this year!#2018-06-0619:28robert-stuttafordi’m really looking forward to digging in#2018-06-0619:29viestia serious contender for any future spare time 🙂#2018-06-0619:30richhickeydefinitely looking for feedback on the docs and whether they make the value props and the mechanisms clear#2018-06-0709:49chrisblomhi, while reading the docs i found a mistake.
In the table here <https://docs.datomic.com/cloud/ions/ions-reference.html#web-code>, :protocol has as value “HTTP verb as keyword”, i suppose it should be “:http or :https”#2018-06-0619:31richhickeyit's one of those inside-out things much like Datomic was originally#2018-06-0619:31robert-stuttaford“For Datomic On-Prem, we have added classpath functions and auto-require support for transaction functions and query expressions.”
how does this change the current peer behaviour? afaik we were always able to add jars to the transactor’s classpath. perhaps a simple before/after for this change would help illuminate things?#2018-06-0619:31robert-stuttafordright - peer put the database in your app. ions puts your app in the database!#2018-06-0619:31eggsyntax@richhickey I think it would be really useful to have what you said above ("peer-like features for cloud") at the beginning of https://docs.datomic.com/cloud/ions/ions.html -- I hadn't gotten that yet after reading through much of that page.#2018-06-0619:32eggsyntaxThat way the value prop is right up front.#2018-06-0619:36richhickey@robert-stuttaford two ways - you can now use an ordinary classpath fn as a tx fn w/o installing in the db, and both there and in query, any such fully-qualified fns will auto-require the namespaces#2018-06-0619:37robert-stuttafordoh! is there an example of how i’d invoke such a not-installed function from a transaction? right now you have to tie it to an ident. is that still required?#2018-06-0619:38robert-stuttafordif it works the way i think it does, that’s seriously great. i was never comfortable with putting code inside storage like that 🙂#2018-06-0619:38richhickeyjust a fully-qualified symbol#2018-06-0619:38robert-stuttafordthat’s metal#2018-06-0619:41robert-stuttafordthis is probably a question for @marshall or @jaret - does the newest CF template for transactors provide some kind of support for supplying class path functions to the transactor as described here?
https://docs.datomic.com/on-prem/database-functions.html#classpath-functions#2018-06-0619:42richhickeyno#2018-06-0619:42robert-stuttafordso we’re still using your AMI - we’d have to roll our own to take advantage of this feature, then#2018-06-0619:44richhickeythere's a ton of plumbing in Cloud to pull that off, things that can't go in on-prem AMI#2018-06-0619:45richhickeyat this point we really want people on AWS to use cloud#2018-06-0619:58richhickeybut if you want to use on-prem and classpath fns on AWS you have to get your code on the AMI#2018-06-0620:04robert-stuttafordthat makes sense, thanks#2018-06-0620:10viestihttps://docs.datomic.com/cloud/transactions/transactions-functions.html#testing seems to give 404#2018-06-0620:45redingerThe correct link should have been https://docs.datomic.com/cloud/transactions/transaction-functions.html#testing
The link has been fixed in the docs, thanks!#2018-06-0705:52viestithanks for the fix 🙂#2018-06-0620:13richhickeythe ions solution has all the power of code deploy, rolling deploys, rollbacks etc#2018-06-0620:14richhickeyit doesn't cycle the instance, just the process#2018-06-0620:18mitchelkuijpersThis looks absolutely amazing, we are currently running on fargate an were looking into Datomic cloud and lambdas (we are currently on prem). One thing I could not find if there is a solution for listening to the log with ions?#2018-06-0620:19johnjbesides better and more flexible tx functions (really big and needed feature imo) what other peer-like features does ion has? I'm not familiar with on premise peer library just curious.#2018-06-0620:21richhickey@lockdown- essentially the whole model of your app code running in the db context, with cache and query locality, working sets etc. Where the peer was 'put the brain in your app' ions are 'give your thoughts to the datomic brain cluster'#2018-06-0620:22richhickeybut there's more because, unlike with on-prem, we understand the broader execution context in cloud, so e.g. your app auto-scales with cloud#2018-06-0620:23johnjnice, are lambda functions somehow kept warm by the setup?#2018-06-0620:23mitchelkuijpersOr is there another solution to for example push data from Datomic to Elasticsearch?#2018-06-0620:27richhickey@reitzensteinm you could use any logging you want, just put the logging lib in your classpath and grant the cloud node role the needed permissions#2018-06-0620:28richhickey@mitchelkuijpers ^#2018-06-0620:29mitchelkuijpers@richhickey I meant the Datomic tx-log#2018-06-0620:29richhickey@mitchelkuijpers ah, there is no push ATM#2018-06-0620:30mitchelkuijpersAh ok, we currently have a separate process that listens to the tx-log which pushes data to Elasticsearch. 
Which we absolutely love#2018-06-0620:30richhickey@lockdown- what scenario are you concerned about re: warm?#2018-06-0620:31richhickey@mitchelkuijpers one could imagine an ion callback on txes, could do whatever you like#2018-06-0620:32dominicmhttps://docs.datomic.com/cloud/ions/ions-tutorial.html#push uses -Adev which is inconsistent with the rest of the docs, e.g. https://docs.datomic.com/cloud/ions/ions-tutorial.html#deploy and https://docs.datomic.com/cloud/ions/ions-tutorial.html#monitor#2018-06-0620:34mitchelkuijpersYeah something like that would be awesome, really loving this idea. Deploying apps without managing servers#2018-06-0620:38johnj@richhickey for the JVM one problem is bursts of traffic, where api gateway will be invoking concurrent execution of the lambdas creating more cold starts#2018-06-0620:38alexmiller@dominicm should be -A:dev (I should have caught that!)#2018-06-0620:39dominicmI'm an eagle on this stuff. I don't like the -Adev syntax very much, although I accept it is good to be tolerant of.#2018-06-0620:39dominicm(so I had an ulterior motive here, basically)#2018-06-0620:43redingerThis has been fixed in the docs, thanks!#2018-06-0620:39viestiThinking about the Lambda 5min runtime limit, I guess longer processes would be done outside of ions#2018-06-0620:39johnjcreating spikes, I know there are some methods devs use to keep the lambdas warm#2018-06-0620:40richhickey@lockdown- AWS understands the issues re: Java startup and has been improving (keeping alive, freeze/thaw etc)#2018-06-0620:41richhickeyour lambdas are minimal, they proxy to the code on the Datomic cluster#2018-06-0620:43johnjok, definitely trying and testing these#2018-06-0621:38spiedenany timeline on the prem -> cloud migration tool?#2018-06-0621:39spieden> If you are working on committed code with no local deps you will get a stable revision named after your commit.
what if the local deps are in the same git repo? ^^#2018-06-0621:39spiedenions look great!#2018-06-0621:51stuarthalloway@spieden why would you have local deps in the same repo?#2018-06-0621:51stuarthallowayno timeline on migration tool, but I will count you as an implicit +1#2018-06-0622:35csm+1 here, too#2018-06-0621:53spiedenwell, if i want to have a shared library that other components can move in lockstep with#2018-06-0621:53spieden.. then keeping them all in the same repo with separate deps.edn files is convenient and simple#2018-06-0621:55spiedenbeen moving towards this away from snapshot jars and trying to implement cascading builds across multiple vcs projects#2018-06-0621:56spieden@stuarthalloway i got an on-prem license into our budget for this year but now i’m not sure what to do =)#2018-06-0621:57richhickey@spieden a big objective of the local deps support is that it removes the cascading builds/snapshot problem completely#2018-06-0621:57richhickeyyou can deploy your app and one or more libs-in-progress while dev/testing#2018-06-0621:58richhickeyno artifacts needed#2018-06-0621:58spiedenyes it’s very appealing#2018-06-0621:58reitzensteinmit's not every day you get a wrong number call from rich hickey#2018-06-0621:58spiedenhaving a single commit hash version multiple interdependent components is great too#2018-06-0621:58spieden..
which is why i was wondering about: “If you are working on committed code with no local deps you will get a stable revision named after your commit.”#2018-06-0621:59richhickey@reitzensteinm sorry about that, completion-o#2018-06-0622:05spieden(my hope is it would read something like: “If you are working on committed code with no local deps outside the git repo root you will get a stable revision named after your commit”)#2018-06-0622:11spiedeneasy enough to fudge on our own i suppose by passing the hash as revision name anyway =)#2018-06-0622:34spiedeni’ve been wanting to take our step functions processes serverless, and resolving task states to ion-created lambdas in our client lib (stepwise) could be handy#2018-06-0703:54johnjare all features of ion available for solo?#2018-06-0704:36steveb8nhas anyone tested AWS App-Sync using an Ion Lambda? i.e. graphql api for Ion without any code#2018-06-0709:50chrisblomehm, wait no#2018-06-0709:50chrisblomif it's like ring, :protocol should be something like “The protocol the request was made with, e.g. “HTTP/1.1".” and :scheme is :http or :https#2018-06-0710:21stuarthalloway@chris.blom thanks! You are right, :protocol should be like Ring#2018-06-0712:17chrisblomOk, it was not immediately clear to me what ion is about:
My understanding now is that:
- it's an application server integrated with Datomic
- has built-in tooling based on deps.edn to deploy based on git revisions
- it integrates with AWS Lambda and API Gateway to handle http requests, in a mostly Ring-compatible way
It's not clear to me what “deploying your code to a running Datomic cluster” entails:
- What exactly runs on Lambda, and what runs on the datomic cluster?
- What are the limitations of running code in a datomic cluster? Can I access the local disk and other AWS services?
- How does the autoscaling work?
- Is it possible to develop and test ions locally?
- Can I run some sort of test environment for CI testing?#2018-06-0712:23alexmillerThe picture at https://docs.datomic.com/cloud/ions/ions.html might help#2018-06-0712:30alexmillerEssentially all of your code is running on the d cluster. Being there you can access all the aws services. For storage, I think you’d use aws storage services, not disk. Stu or Rich can probably answer some of the others better than I can but generally the answers will be to use the aws functionality for autoscaling, ci, etc.#2018-06-0712:42chrisblomok, so the Lambda functions for an Ion are just glue to interface with the outside world, and delegate the actual work to the Datomic cluster?#2018-06-0712:51andrewhr@chrisblom yes https://clojurians.slack.com/archives/C03RZMDSH/p1528317660000673#2018-06-0712:52chrisblomthanks, good to know#2018-06-0712:52stuarthallowayI care a lot about local dev (Give me REPL or give me death!)#2018-06-0712:53stuarthallowayClient API now supports :server-type :ion, which connects remotely when you dev on your laptop, but connects in memory when you deploy the same code to Datomic: https://docs-gateway-dev2-952644531.us-east-1.elb.amazonaws.com:8181/cloud/ions/ions-reference.html#server-type-ion#2018-06-0712:55andrewhrstu, the transaction producing functions will run on whatever node in datomic cluster, right? But the final transactions are still directed to the node acting as transactor? (so essentially, it’s like the peer model)#2018-06-0712:57richhickey@andrewhr the tx fns run where the txes do. There isn't a dedicated transactor per se as with on prem#2018-06-0712:58richhickeybut you will be able to have independent clusters running app/query code and handling txes#2018-06-0713:05andrewhrI remember something about “avoiding contention”, but in retrospect doesn’t make too much sense giving ddb could probably just autoscale in response. 
Maybe this image makes me a little confused https://docs.datomic.com/cloud/whatis/architecture.html#production-topology#2018-06-0713:07andrewhras far as I understand (together with your previous explanation), query groups aka “extra clusters” will tunnel their transactions through the primary tx group#2018-06-0713:08andrewhror when you say “extra clusters” do you really mean “one set of storage resources” + “multiple sets of primary compute resources”?#2018-06-0713:27chris_johnson@steveb8n I gave a talk last month at Serverless Chicago about using Datomic Cloud with AppSync, you may expect some kind of preliminary blog post or code sample extending that talk to Ions like …today?#2018-06-0713:28chris_johnsonI am finding it very difficult to focus on my day-job work right now, knowing that I could be spinning up a Cloud instance in my personal account and exploring getting AppSync to run, looking at modeling what $day-job does with the txn report queue in Ions callbacks, etc. etc. so …I don’t think it will be too long before I have a trip report re: AppSync ready for people to read hehe#2018-06-0805:42steveb8nAgreed, lots of people will be interested to know the graphql options using API Gateway on top of Ions.#2018-06-0805:43steveb8nfallback would be Lacinia but App Sync would be better#2018-06-0805:43steveb8nbest would be App Sync subscriptions support. Somehow I doubt that’s possible. What do you think?#2018-06-0713:37richhickey@chris.blom - What exactly runs on Lambda,
a generic proxy. We call it 'Ultimate, the lambda'
- and what runs on the datomic cluster?
everything
- What are the limitations of running code in a datomic cluster? Can I access the local disk and other AWS services?
AWS services sure. It is *your* instance, running in *your* VPC. That said, local disk, probably not a great idea.
- How does the autoscaling work?
You can trigger autoscaling of the cluster on any of various metrics we or AWS produce.
- Is it possible to develop and test ions locally?
As Stu said, sure! The db API you'll see in the ion is the same as the client sync API, and the :ion server type dynamically loads the right back end.
- Can I run some sort of test environment for CI testing?
Yes. You can run a solo instance that is a target of the same application, deploying early revs to it and tested revs to prod.
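Editor's note: a sketch of the local dev/test point in the answer above, using the client API's :ion server type mentioned later in the thread. The system name, region, endpoint, and db name below are placeholders; the point is that the same code connects over the wire from a laptop REPL and in-process when deployed as an ion:

```clojure
(require '[datomic.client.api :as d])

;; Placeholder system/region/endpoint -- substitute your own.
(def client
  (d/client {:server-type :ion
             :region      "us-east-1"
             :system      "my-system"
             :endpoint    "http://entry.my-system.us-east-1.datomic.net:8182/"
             :proxy-port  8182}))

(def conn (d/connect client {:db-name "my-db"}))

;; Runs remotely during laptop dev, in-process once deployed.
(d/q '[:find ?ident :where [_ :db/ident ?ident]] (d/db conn))
```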
#2018-06-0713:41chrisblomthanks, that clears things up#2018-06-0713:42chrisblomalso, it's the answers i was hoping for 😁#2018-06-0713:38eggsyntax"We call it 'Ultimate, the lambda'"
That's awesomely horrible facepalm 😂#2018-06-0713:46dominicmI suppose if everything runs in the cluster, then there's no way to restrict certain functions to certain operations, as you can do with Lambdas?#2018-06-0713:52jeroenvandijkWe had some bad experiences with AWS lambda in the past: 1. It has a global queue per account (our solution: never use AWS lambda for anything of high throughput as it will block other tasks unexpectedly). 2. AWS requires node updates sometimes (one time within 3 months from launch). @richhickey Are these issues taken into account? Is the ultimate lambda free of these concerns? Thank you.#2018-06-0713:58richhickey@dominicm There are distinct instances of ultimate the lambda and each is an independent AWS Lambda and proxies to a particular fn on a particular Datomic compute group. From there you have all the ordinary wiring up of Lambdas available, to particular events etc.#2018-06-0714:01richhickey@jeroenvandijk AWS has improved #1 with per-Lambda concurrency reservations/limits (which we expose as a knob). Not sure I understand 2 - lambda's internal nodes?#2018-06-0714:03jeroenvandijk@richhickey Thank you, interesting. Regarding 2 sorry i meant the node.js runtime (assuming you use this, but might apply to jvm runtime too). We were forced to upgrade the node.js runtime and didn't have a choice to leave it as is (like how you can choose to stick with an old AMI version)#2018-06-0714:05richhickey@jeroenvandijk we care very little about the lambda runtime as we're not doing much there, and your ion code does not run there so cares not at all#2018-06-0714:06richhickeyI guess you might occasionally have to roll for things like that#2018-06-0714:06jeroenvandijkOk understood. Thank you. 
I guess it could have been more like a one time occurrence.#2018-06-0714:11jeroenvandijkOne other thing I noticed in the architecture of Datomic Cloud (https://docs.datomic.com/cloud/whatis/architecture.html) is that the storage of record is S3 and Dynamodb is used as transaction log. Is it possible to clean up the transaction log every now and then and rely on S3 for older data in order to save data costs? (With a self-hosted Datomic setup we have a big dynamodb table and this would be a potential cost saver)#2018-06-0714:12johnjCan ions be used for light/hobbie stuff with solo?#2018-06-0714:12stuarthalloway@lockdown- yes, or even medium/moonlighting stuff 🙂#2018-06-0714:30stuarthalloway@jeroenvandijk yes, getting the log into S3 is an optimization we intend to implement#2018-06-0714:31stuarthalloway@jeroenvandijk for many use cases, Cloud is already cheaper than On-Prem with DDB just because indexing doesn’t have to hit DDB#2018-06-0714:34jeroenvandijk@stuarthalloway Nice 🙂 TBH we have been really abusing Datomic for things that is advised against. We have also reached the 10 billion datom limit times 4.. Is this something that Cloud is also addressing?#2018-06-0714:37johnjhttps://docs.datomic.com/cloud/transactions/transaction-functions.html#calling#2018-06-0714:37johnjis says to omit the first argument (the database) but the example is including it?#2018-06-0715:16dustingetzIn practice how are classpath functions most commonly invoked?
[:find (pull ?f [:db/id *]) :where
[(partial contrib.datomic/datomic-entity-successors $) ?succ]
[(loom.alg-generic/bf-traverse ?succ :db/ident) [?f ...]]]
vs
(->> (loom.alg-generic/bf-traverse (partial contrib.datomic/datomic-entity-successors $) :db/ident)
(d/pull-many $ [:db/id *]))
The first has constraints on the complexity of the clojure form, but is more structured#2018-06-0715:21devn@jeroenvandijk out of curiosity, how are you abusing it?#2018-06-0715:30jeroenvandijk@U06DQC6MA Biggest abuse is that we are using it as a timeseries database basically. This is causing (too) many transactions and a huge dynamodb size. Another less serious abuse it that we are storing relative big string blobs in Datomic#2018-06-0715:23kennyAre :cookies supported in the return map? https://docs.datomic.com/cloud/ions/ions-reference.html#web-code#2018-06-0715:27octahedrionis there a way I can pull attributes that are not :db/valueType :db.type/ref ?#2018-06-0715:28octahedrionwithout knowing what their names are in advance ?#2018-06-0715:34dustingetzCan you query schema and use that to decide the pull#2018-06-0715:35octahedrionthat's what I'm trying to do, but I can't see how to use the variable from the :where to inform the pull#2018-06-0715:37dustingetzyou need a second query#2018-06-0715:37octahedrionahhh#2018-06-0715:37octahedrioni did not think of that#2018-06-0715:38dustingetzYou can also go wild with subqueries in this fashion http://www.hyperfiddle.net/:cookbook.recipe!datomic-subquery/#2018-06-0715:40octahedrionalso, is there a way in the pull to use a wildcard with a recursion limit ?#2018-06-0715:40octahedrionI tried {* 2} but doesn't return anything#2018-06-0715:40dustingetzi dont know the answer to that sorry#2018-06-0715:30kennyAlso, is there a story for web apps that have real-time requirements (i.e. Web sockets or SSE)? I don't believe AWS API Gateway has built-in support for either of those.#2018-06-0716:05gabrielewhen i try to clj -Spom the deps.edn in the guide of ion i get
Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.datomic:ion:pom:0.9.7 from/to datomic-cloud (): Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 82E950FAE7704579; S3 Extended Request ID: Su8rZMB9c+Z65RQbBwA8K1mxjVkjX0JU5NJRpbrT6k2/rRN8ubyMA3SJO9TucAlKFVf0fQUVZ3w=)
deps.edn
{:paths ["src" "resources"]
 :deps {com.datomic/client-cloud {:mvn/version "0.8.50"}
        com.datomic/ion {:mvn/version "0.9.7"}
        org.clojure/data.json {:mvn/version "0.2.6"}
        org.clojure/clojure {:mvn/version "1.9.0"}}
 :mvn/repos {"datomic-cloud" {:url ""}}
 :aliases
 {:dev {:extra-deps {com.datomic/ion-dev {:mvn/version "0.9.160"}}}}}
what am i doing wrong?#2018-06-0716:05richhickey@kenny right, API gateway has no websockets/sse#2018-06-0716:10alexmiller@gabriele.carrettoni do other things (like clj -Spath) work?#2018-06-0716:10gabriele➜ datomic clj -Spath
Error building classpath. Failed to read artifact descriptor for com.datomic:ion:jar:0.9.7
org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for com.datomic:ion:jar:0.9.7
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:276)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:192)
at org.eclipse.aether.internal.impl.DefaultRepositorySystem.readArtifactDescriptor(DefaultRepositorySystem.java:253)
at clojure.tools.deps.alpha.extensions.maven$eval668$fn__670.invoke(maven.clj:77)
@alexmiller#2018-06-0716:11gabrielesame AccessDenied#2018-06-0716:12alexmillerI don’t have any issues doing that on my own box, not sure why you’d see that#2018-06-0716:13alexmillerI assume you’re not proxied or anything#2018-06-0716:14gabriele@alexmiller i was inside the work vpn, just tried outside of it and i get the same error#2018-06-0716:18gabriele@alexmiller maybe it's picking something from my .aws/credentials? 🤔#2018-06-0716:20alexmillermaybe#2018-06-0716:20alexmillercan you try with com.datomic/client-cloud {:mvn/version “0.8.52”} instead ?#2018-06-0716:21gabriele@alexmiller#2018-06-0716:22alexmillerI think prior is a bug in the ion-starter deps.edn, but it’s not this problem#2018-06-0716:23alexmilleraws s3 cp . - does that work for you?#2018-06-0716:23gabrieledownload: to ./ion-0.9.7.pom
#2018-06-0716:40jaretHi @gabriele.carrettoni can you confirm you have list-buckets with your AWS creds?
aws s3api list-buckets --query "Buckets[].Name"
#2018-06-0716:41gabrieleyup it works#2018-06-0716:42gabrieleand i believe the repo is public so it shouldn't matter if i have or not the permissions 🤔#2018-06-0717:27jaretWe updated ion-starter to reflect the latest client. Could you pull the last commit to give you com.datomic/client-cloud {:mvn/version “0.8.54”}?#2018-06-0717:27jaret@gabriele.carrettoni ^#2018-06-0718:09gabrielesame Could not transfer artifact com.datomic:ion:pom:0.9.7 from/to datomic-cloud (<s3://datomic-releases-1fc2183a/maven/releases>): Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 9BF4D9EDB140DB3B; S3 Extended Request ID: KDJi+KjUHOXfcCf0EoU9DSk5V4iP0HxFAUxqj14QFHEwN1rQ5D8qzDOdmCsantCEKzeG+6ydacE=)#2018-06-0718:23stuarthalloway@gabriele.carrettoni I can repro the problem locally, but only when running with no AWS creds at all#2018-06-0718:23stuarthallowayi.e. if I use any random creds from any account things seem to work#2018-06-0718:25gabriele@stuarthalloway on my pc i got it working, i'll try again tomorrow at work to try to understand where the problem is, now i get this error
[ERROR] Failed to execute goal on project ion-starter: Could not resolve dependencies for project ion-starter:ion-starter:jar:0.1.0: Failed to collect dependencies at com.datomic:client-cloud:jar:0.8.54 -> com.datomic:client:jar:0.8.59 -> com.datomic:client-impl-shared:jar:0.8.40: Failed to read artifact descriptor for com.datomic:client-impl-shared:jar:0.8.40: Could not transfer artifact com.datomic:client-impl-shared:pom:0.8.40 from/to datomic-cloud (): Cannot access with type default using the available connector factories: BasicRepositoryConnectorFactory: Cannot access using the registered transporter factories: WagonTransporterFactory: java.util.NoSuchElementException
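Editor's note on the deps.edn pasted earlier in this exchange: its :mvn/repos entry shows an empty :url, while the AccessDenied message above names the releases bucket. A sketch of the repo entry using the URL taken from that error text; s3:// repos are resolved with whatever credentials the default AWS provider chain finds, which is how a stale ~/.aws/credentials can surface as a 403 here:

```clojure
;; deps.edn fragment (edn config, not executable on its own)
{:mvn/repos
 {"datomic-cloud" {:url "s3://datomic-releases-1fc2183a/maven/releases"}}}
```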
#2018-06-0718:27gabrielebtw {com.datomic/client-cloud {:mvn/version "0.8.50"} works#2018-06-0716:23gabrieleyup#2018-06-0716:23alexmillerthe plot thickens#2018-06-0716:25alexmillerI guess I’d start looking for other environmental variables - default AWS creds, repository settings in ~/.m2/settings.xml, etc. But I’m not sure what in any of those would actually cause a problem.#2018-06-0716:25alexmillerthe fact that the s3 call works makes me think it’s not the aws creds#2018-06-0716:27kennyI think my question got lost 🙂 Are :cookies supported in the return map for web code? https://docs.datomic.com/cloud/ions/ions-reference.html#web-code#2018-06-0716:28alexmillerI think someone is checking#2018-06-0719:10stuarthallowayhi @kenny ! in my experience :cookies are derived from the :headers and added by middleware, e.g. https://github.com/ring-clojure/ring/blob/master/ring-core/src/ring/middleware/cookies.clj. I believe that should work fine here.#2018-06-0719:11kennyRight. I guess my question should be rephrased - what subset of the Ring spec is supported?#2018-06-0719:12kennyIs it only :body, :headers, and :status? And will it remain that way?#2018-06-0719:12stuarthallowayAt minimum the keys listed at https://docs.datomic.com/cloud/ions/ions-reference.html#web-code. 
That map is open and can/will get more stuff over time.#2018-06-0716:31alexmiller@gabriele.carrettoni I was able to repro it by monkeying with my aws creds, we will check into this#2018-06-0716:31kennyIon is closed source, correct?#2018-06-0716:36richhickey@kenny yes, it's part of Datomic#2018-06-0716:53kennyAny plans for a cloud agnostic Datomic Cloud?#2018-06-0717:08richhickey@kenny nope, part of the value prop is getting maximum leverage and agility.#2018-06-0717:11johnjare tx functions aws lambdas?#2018-06-0717:12richhickey@lockdown- no, some ions are just used by txes and queries, not behind lambdas#2018-06-0717:15kenny@richhickey your customers aren’t worried about vendor lock-in?#2018-06-0717:19viestiApplication Load Balancer (ALB) supports Websockets and lately it got a feature to delegate authentication (https://aws.amazon.com/blogs/aws/built-in-authentication-in-alb/). Do you see possibility to front Ions with ALB?#2018-06-0717:23johnj@richhickey k, cool, just want to use ions for tx functions for now with solo and app code running on a ec2 node#2018-06-0717:25viestiThinking that API Gateway is public (might change in future though), ALB could be exposed internally only, to allow say access from corporate network only#2018-06-0717:25richhickey@kenny lockin has to be balanced with other objectives. If it dominates ones thinking you'll never maximize your leverage of the platforms. I can't say what's most important for anyone, but there are tradeoffs. We are supporting the 'leverage AWS' strategy. Lots of people succeeding there while others worry and don't ship.#2018-06-0717:36dustingetz@richhickey Does ion auto scaling apps essentially supersede the client model? Meaning there is no datomic client in play? 
Just a client-like local api#2018-06-0717:36richhickey@dustingetz right, client-free apps totally possible#2018-06-0717:37dustingetzIdiomatic, right?#2018-06-0717:37richhickeywell, there's zero reason for clients inside ions, but you may have legacy architecture that dictates use of clients. They're not deprecated or anything#2018-06-0717:38richhickeyI think many apps can be written as ions#2018-06-0717:38richhickeyand of course you can mix and match#2018-06-0717:38dustingetzAre there technical reasons for no entity API or is it just historical at this point, and is the ion "local client api" equivalent in power to the entity api#2018-06-0717:41richhickey@dustingetz there are semantic differences with client (vs peer) that yield 'no entity'. The 'local' API of ions matches the client API so people can do local REPL dev and testing (against cloud). So semantically, it's compatible with 'over wires'. That said, the perf of the same API in local mode within ions trounces clients.#2018-06-0717:42richhickeybut keeping wire semantics means if you need to make an architectural shift that requires moving some parts to clients you're not screwed#2018-06-0717:43richhickeythus 'client' is the only API of cloud, ion or out#2018-06-0717:43johnjwhere does the performance gain comes from? eliminating the network calls?#2018-06-0717:44richhickey@lockdown- right, no wires, same process#2018-06-0717:45dustingetzRemote client requesting query for Ion cluster
[:find (pull ?f [:db/id *]) :where
[(partial contrib.datomic/datomic-entity-successors $) ?succ]
[(loom.alg-generic/bf-traverse ?succ :db/ident) [?f ...]]]
Ion application local query
(->> (loom.alg-generic/bf-traverse (partial contrib.datomic/datomic-entity-successors $) :db/ident)
(d/pull-many $ [:db/id *]))#2018-06-0717:46dustingetzif I am an Ion app, I can do both, is the former considered idiomatic? Because it is equivalent and works over wires#2018-06-0717:49johnjthe former has the overhead of the network, which of the two you use depends on your needs/architecture#2018-06-0717:51richhickey@dustingetz the portability of the former seems like a flexibility win, but you may not care#2018-06-0717:51dustingetz@richhickey I like the former but that#2018-06-0717:51dustingetzthats a pretty wild "partial" in there#2018-06-0717:51dustingetzyou would consider that idiomatic? Im ok with that if you are#2018-06-0717:52richhickeyidioms take time to develop#2018-06-0717:52richhickeythese are some brand new power tools, would hate to pour concrete advice 🙂#2018-06-0717:52dustingetzok:)#2018-06-0717:53val_waeselynckTo what extent is it reasonable to query other data stores (e.g a remote SQL or ElasticSearch server) from ions?#2018-06-0717:54richhickey@val_waeselynck you can do anything as long as you give the datomic cluster node's role the necessary permissions#2018-06-0717:54dustingetzThe ion will autoscale though and the foreign store will not#2018-06-0717:54dustingetzyou'd need a queue#2018-06-0717:54richhickeyyour instances, your VPC#2018-06-0717:55stuarthalloway@dustingetz not everyone scales or needs to#2018-06-0717:56stuarthallowaybut in any case the important point is, as @richhickey said, it is your instances, use them as you will#2018-06-0717:57val_waeselynck@richhickey thanks, it may be a good idea to make that obvious in the docs - it helps assess the power / limitations, especially since we've been accustomed to "be careful of the code Datomic runs for you" with tx fns#2018-06-0718:01val_waeselynckThis is very exciting - it seems to give you the getting started experience of Firebase or Lambda with the scalability of advanced hand-rolled systems#2018-06-0718:05richhickey@val_waeselynck that still applies, you can hold up your 
txes#2018-06-0718:20val_waeselynckSure, but from what I understand it's fine for reading and for preparing writes - that part is important but not obvious imho#2018-06-0718:06dustingetzIf I write to a durable kv store from a transactor fn and it takes 1ms to return, will this in practice slow down transactor throughput? Or are transactions processed in parallel up until the dynamo conditional put which i understand to be batched#2018-06-0718:07richhickeyit's going to be serial#2018-06-0718:08dustingetzOh I see, because if there is a cas in the same tx, that needs a dbval#2018-06-0718:08richhickeybecause e.g. tx fns can query and expect to see prior txes#2018-06-0718:08richhickeyyes, stuff like that#2018-06-0718:09richhickeybut remember you will likely be initiating your txes from ions, so there's another opportunity there for coordinated (if not transactional) work#2018-06-0718:10dustingetzI dont understand that, can you give me another hint#2018-06-0718:10richhickeyput in elasticsearch, transact#2018-06-0718:11richhickeymeans it might be in elasticsearch but tx fails, but not holding up txes#2018-06-0718:11dustingetzOh ok, so if the other store supported two phase commit that might work too#2018-06-0718:12richhickeyor whatever, cleanup on tx fail#2018-06-0718:12richhickeyyou are likely retrying#2018-06-0718:12dustingetzthanks#2018-06-0718:12johnjwhy web ions and not just a ring app in a lambda function?#2018-06-0718:12richhickeythe point is, you will be in your VPC, running in a role, able to do cool things#2018-06-0718:15richhickey@lockdown- a) ease, b) not having to run in lambda execution container, c) running in db context vs making a client call, d) speed#2018-06-0718:15richhickeythe important value prop is API Gateway. 
Putting a lambda behind it is just one option (hint)#2018-06-0718:19johnjok, pretty cool#2018-06-0718:23dustingetz@richhickey Since it is our instance, why is eval blacklisted in :where clause evaluation#2018-06-0718:23dustingetzOr is/could that be different in Ion#2018-06-0718:24richhickeythere is an :accept list in ion, put stuff there to allow it#2018-06-0718:24dustingetzclojure.core/eval was excluded from cloud for security?#2018-06-0718:24dustingetzsecurity of your own jars or something#2018-06-0718:24richhickeynothing will run not on the list#2018-06-0718:25richhickeyi.e. nothing will be an entry point in tx/query/lambda#2018-06-0718:25richhickeypants on by default#2018-06-0718:26dustingetzlol#2018-06-0718:26richhickeypeople inadvertently expose clients, query etc#2018-06-0718:26dustingetzOh, ok makes sense#2018-06-0718:28johnjare there docs that show where web ions might be incompatible with the ring spec?#2018-06-0718:38gabrieletrying to push
{:command-failed "{:op :push}",
:causes
({:message
"You must either specify a uname or deploy from clean git commit",
:class IllegalArgumentException})}
what does it mean? 🤔#2018-06-0718:45gabrieleit seems push won't work if there are untracked files, adding them to .gitignore did the trick#2018-06-0718:51alexmilleror you can add a :uname key to the op#2018-06-0718:59gabrielethanks, finally got it running 😃#2018-06-0719:09gabriele@alexmiller may i suggest to update https://docs.datomic.com/cloud/ions/ions-tutorial.html#sec-5-4 changing the curl call from
curl https://$(obfuscated-name).
to
curl https://$(obfuscated-name). -d ':hat'
because calling the first gives Expected a request body keyword naming a type#2018-06-0719:40kennyCan I specify an AWS profile when pushing or deploying?#2018-06-0720:08jumar@kenny I guess :creds-profile#2018-06-0720:08kenny@U06BE1L6T Yep.#2018-06-0719:44kennyhttps://docs.datomic.com/cloud/ions/ions-reference.html#deploy 🙂#2018-06-0720:51tony.kayHow are people dealing with schema changes in development mode. I’m using conformity, but there is this problem: When I’m developing I might be “fiddling” with some bit of schema. I’d like to put it in a migration, but as soon as I do that it is “fixed” in the database, and I have to “hand unroll” it if I change my mind. It seems:
1. Make “up”/“down” functions to use during dev, and drop the “up” one into the migrations only once it is stable.
2. It occurs to me that if you were to look up the tx time of the “up”, you could “undo” all of the changes made since that time (which could be your automatic “down”: use `d/as-of` to get the old values for the things the history says have changed since then).
Given those two, seems you could just add something like a “SNAPSHOT” versioning in conformity that would automate that on startup in dev mode (e.g. undo everything that has happened since the earliest SNAPSHOT was conformed, then re-conform).
Anyone aware of existing code for doing that, or an alternative that is as convenient? Otherwise, we’re probably going to write it.#2018-06-0721:01chrisblomi’ve used datomock to develop migrations on top of an existing db#2018-06-0721:03chrisblomusing forked connections to develop the migration, and transact it to actual db when finished#2018-06-0721:03chrisblomhttps://github.com/vvvvalvalval/datomock#2018-06-0721:03tony.kaynice..I’l look at that#2018-06-0721:06chrisblomi’ve even used it to test migrations in production, by connecting to the production db, forking the conn, and running the migrations and app with the forked conn#2018-06-0721:07eggsyntaxSeconded on datomock, it makes datomic development workflow a joy ❤️#2018-06-0721:07tony.kayyeah, I think that looks pretty good, actually.#2018-06-0721:08tony.kaySuper cool idea…and only about 100 LOC…it’s all about finding the right abstraction on top of your tool#2018-06-0721:08tony.kayglad I asked. Thanks!#2018-06-0801:59nzjoelHi all, I am new to clojure but really like both datomic and fulcro and I am looking to use them in an upcoming project. I imagine I will be making heavy use of transaction functions as a way of performing atomic writes. However, I am worried about the statement in the docs (https://docs.datomic.com/on-prem/database-functions.html):
>The transaction processor will lookup the function in its :db/fn attribute, and then invoke it, passing the value of the db (currently, as of the beginning of the transaction)
Say I have 2 TxFunctions (f1 and f2) which sometimes I may perform individually but other times want to perform them together in a single transaction. I am not sure how to deal with a case where the result of f2 depends on the value of the database which may have been modified by f1.
Have other people had problems with this and if so how have they dealt with them?#2018-06-0802:04chris_johnson@nzjoel I know for a fact that there are other people in here with much more knowledge and experience than me, but I would suggest looking at the :db/cas special value for a transacted value as a starting point#2018-06-0802:06chris_johnsonIt causes transactions to throw a specific exception type if the so-annotated value in the db into which you propose to transact differs from the one in the db you’re working from, essentially kicking it back to you to decide how to handle that case (exponential backoff? propagate exception out to caller? something else?)#2018-06-0802:12nzjoel@chris_johnson Yeah I thought about handling it like that, i guess that would take load of the TXer but seems like a "try again and hope it works this time" situation#2018-06-0802:14chris_johnsonwell, sort of#2018-06-0802:14chris_johnson:db/cas doesn’t make any decisions about what to do because it’s different for every case#2018-06-0802:15chris_johnsonsometimes you just need to ensure that nothing else is changing in flight, so you can afford to retry n times with exponential backoff, so when there’s contention all contending txns slow down until each one in turn gets to “win”#2018-06-0802:16chris_johnsonin another situation, though, something changing underneath your txn might invalidate it so thoroughly that you have to go aaaaaaall the way back up to a human to find out what to do#2018-06-0802:16chris_johnsonetc. 
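chris_johnson's retry-with-exponential-backoff idea above can be sketched with the peer API — a hedged example, where `conn`, `eid`, `attr`, the 50ms base delay, and the function name are illustrative assumptions (on-prem spells the built-in `:db.fn/cas`; Cloud calls it `:db/cas`):

```clojure
;; Sketch: retry a cas-guarded update with exponential backoff.
(require '[datomic.api :as d])

(defn cas-swap!
  "Apply (f old-value) to attr on eid, retrying on cas conflicts."
  [conn eid attr f max-tries]
  (loop [attempt 0]
    (let [old (get (d/pull (d/db conn) [attr] eid) attr)
          ok? (try @(d/transact conn [[:db.fn/cas eid attr old (f old)]])
                   true
                   (catch Exception _ false))] ; cas conflict -> retry
      (cond ok? true
            (< attempt max-tries)
            (do (Thread/sleep (* 50 (bit-shift-left 1 attempt)))
                (recur (inc attempt)))
            :else (throw (ex-info "cas retries exhausted"
                                  {:eid eid :attr attr}))))))
```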
etc.#2018-06-0802:18nzjoelI guess maybe the TXer could be wrapped in a higher level TXer which queues writes and can minimize contention in the actual TXer...?#2018-06-0802:20nzjoeljust seems like it would be a common problem and have not been able to find much info about it#2018-06-0802:35chris_johnsonI wouldn’t think that’s the “right path” - I don’t know the specifics of your case but if coordination between f1 and f2 is critical enough that they have to see the same db value, at least some of the time, I would probably solve that with something like (defn f3 [db & args] (apply (comp f1 f2) db args))#2018-06-0802:35chris_johnsonnot sure if both apply and comp are needed in there, but you hopefully see what I mean#2018-06-0802:36chris_johnsoncompose them when they have to see the same db rather than trying to coordinate them#2018-06-0805:03viestiThinking that getting a post on Datomic Ions to AWS Blog would be neat for visibility (ship material to https://aws.amazon.com/blogs/aws/author/jbarr/ maybe) 🙂#2018-06-0808:28claudiuHi, Is aws lambdas + clojure a good fit for http api & serving html given the startup time costs ? (looking at datomic ions tutorial)#2018-06-0809:43viestiI guess cold-start cost depends on the runtime used by the proxying Ions Lambdas#2018-06-0809:47viestiWas reading on [Datomic architecture](https://docs.datomic.com/cloud/whatis/architecture.html) and saw query groups (transactional vs. analytic vs. developer queries). Does this mean that one might use Datomic for analytic workload also? (say long running queries that possibly scan the whole dataset) Was thinking machine learning use, where a model might create predictions for whole set of customers, replacing old predictions entirely (this implies batch ingestion of data as well as queries that scan a large set of data).#2018-06-0810:36rhansenI'm migrating my app which used a postgres database, and I wonder what the idiomatic way of structuring my session and user data would be in Datomic. 
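One caveat on chris_johnson's `(comp f1 f2)` suggestion above: tx fns return tx-data, not a db, so plain `comp` won't let f2 see f1's effects. A hedged sketch of one way to compose them using a speculative db from `d/with` (`f1`/`f2` are the names from the discussion; the composed function's name is illustrative):

```clojure
;; Sketch: compose two tx fns so f2 observes f1's writes by applying
;; f1's tx-data speculatively before calling f2.
(require '[datomic.api :as d])

(defn f1-then-f2 [db & args]
  (let [tx1 (apply f1 db args)
        db' (:db-after (d/with db tx1))] ; f1 applied speculatively
    (concat tx1 (apply f2 db' args))))
```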
Should I:
1) Have a session "object" (`:session/token`, `:session/validUntil`, `:session/user-ref`) which has a reference to a user (what I have today in postgres).
2) Store :session/token as part of :user/* with db.cardinality set to many and check the insertion time if the token is valid or not.#2018-06-0810:38rhansenAlso, should I in general just stop having created_at and updated_at attributes for my entities, since I can look up the transaction time when I need something like that?#2018-06-0810:42stuarthalloway@claudiu There was a great talk at Clojure/West that explains Lambda overhead https://www.youtube.com/watch?v=GINI0T8FPD4. Recommended for anybody using AWS Lambda, whether via Datomic or not#2018-06-0810:42stuarthalloway@claudiu but to answer your question: as always it depends on you specific requirements 🙂#2018-06-0810:44stuarthallowaythat said, there is more than one way to put code behind API Gateway. We shipped AWS Lambda proxies yesterday. If we were to ship something else with different price/performance tradeoffs, choosing it would be transparent to your application code.#2018-06-0812:47souenzzoDatomic ion can be used with peer API? There is HTTP/IO cost when I run a query/make a pull?#2018-06-0813:00richhickey@souenzzo no, ions use the client API. With no wire overhead#2018-06-0815:45johnjWhy use as a tx function a function that just inserts data? https://docs.datomic.com/cloud/transactions/transaction-functions.html#testing#2018-06-0815:49matthavener@lockdown- :db/add might be slightly confusing if you’re used to other databases. It can result in a “modification” or “addition” to the new value of the db. 
A transaction function can also return :db/retract to “remove” data#2018-06-0815:55matthavenerhopefully that makes sense, I may have misunderstood your question, though 🙂#2018-06-0815:57johnjyeah, I'm referring more to classpath transaction functions#2018-06-0815:57johnjwhy use one instead of a function that just does (d/transact conn {:tx-data some-data}) for just inserting data that is#2018-06-0816:00donaldballYou want a txn fn if you require that the data you’re asserting absolutely has a stable basis#2018-06-0816:05johnj@donaldball don't understand what that means#2018-06-0816:06johnjwhat's is data that has absolutely stable basis?#2018-06-0816:06donaldballOne use case might be incrementing the number of items in a user’s shopping cart#2018-06-0816:07johnjsure, you are doing some computation there correct?#2018-06-0816:08johnjlike check/using the current number of items#2018-06-0816:09johnjbut this create-item function https://docs.datomic.com/cloud/transactions/transaction-functions.html#testing is just simply inserting new data#2018-06-0816:09eraserhdSo, I get "transactor is unavailable" on a regular basis, and I see on the transactor that it's stopped because of heartbeat. I've upped the timeout, but it still happens every week or two. The client and transactor are both running in AWS, in us-west-2, and it#2018-06-0816:09eraserhd's using Dynamo for storage.#2018-06-0816:09eraserhdThis can't be normal, is it?#2018-06-0816:10eraserhdThe transaction load is tiny.#2018-06-0816:11eraserhdLike maybe a dozen transactions a week.#2018-06-0817:04stuarthalloway@eraserhd that is not normal at all. Are you running HA?#2018-06-0817:05eraserhdNo.#2018-06-0817:05stuarthalloway@lockdown- that tx fn is a constructor, and could be performing validations#2018-06-0817:05stuarthalloway@eraserhd instance size?#2018-06-0817:07eraserhdWe just bumped it from t2.small to t2.medium and gave it more memory#2018-06-0817:07eraserhdIt might be happening less frequently? 
Not sure, but it's definitely happened a few times since.#2018-06-0817:18johnj@stuarthalloway got it, wasn't clear for me since the example wasn't doing any validation#2018-06-0817:18stuarthalloway@lockdown- that example could be more interesting 🙂#2018-06-0817:19johnjtrue 😉#2018-06-0817:21stuarthalloway@eraserhd the only common causes I have ever seen for that are underprovisioned DDB or out-of-memory. You could look for evidence of OOM in the logs. You probably already have this link but for the record: https://docs.datomic.com/on-prem/deployment.html#troubleshooting#2018-06-0817:22johnj@eraserhd you have the peer lib and transactor in the same instance?#2018-06-0817:25eraserhdThe peer and transactor are separate instances. The peer is a bigger instance, -Xmx6G for the JVM.#2018-06-0817:27eraserhdThere is a kind of large transaction function involved. It's updated once per deploy, and used for almost every transact.#2018-06-0817:30eraserhdIt seems like we lost the heartbeat timeout config. Hrmm.#2018-06-0819:15leongrapenthin@eraserhd is it possible that your peer is unresponsive to the transactor because too high cpu load? We have experienced this to be a common cause#2018-06-0819:16leongrapenthinIn those cases we were mislead by the "transactor unavailable" log#2018-06-0819:46spieden@eraserhd we experienced intermittent “transactor unavailable” under low load while running ours on a t2.medium. tweaking memory settings and upping the heartbeat timeout cleared them up eventually. when in non-HA mode i don’t understand the benefit of the heartbeat failover even being enabled, really#2018-06-0819:47spieden@eraserhd we build a docker image for the transactor and run it via ECS — i can send you our properties file and Dockerfile if you’re interested#2018-06-0819:52eraserhd@spieden that would be great! 
The organization is moving to Kubernetes, so the Dockerfile would also be valuable.#2018-06-0822:24spieden@eraserhd here you go: https://gist.github.com/spieden/7a5b303e6ce03bbbce765e797f236a73#2018-06-0822:24spiedenwe run that task on a cluster with a single t2.medium in it#2018-06-0822:24spieden(i assume you’ve got all the roles correct so i left our their defs)#2018-06-0901:18eraserhd@spieden thanks!#2018-06-0916:52chris_johnsonTwo questions:
1) song as old as rhyme, story old as time, some dude running on-prem in AWS wants to know when the tooling for migrating his existing db to Cloud will shiiiiiip 😆
2) I don’t see anything about it in the documentation, but once we are running Cloud/Ions in production I would definitely be interested to buy some reserved instances and use those instead of paying on-demand pricing for the underlying compute. Is that something that is possible today but not featured in the docs? On the roadmap? Not possible because of how the Marketplace works?#2018-06-1113:42jaretHi Chris! Re: 2 We’re pretty sure that reserved instances should just work, but we’re confirming with the AWS marketplace team.
re: 1 I’ve brought this up to the team and I’ll update you when I have more info. I know we’re keen to look at this.#2018-06-1113:44jaretRe: 2. We’ve confirmed with amazon docs that it should work.
https://aws.amazon.com/marketplace/help/buyer-general?ref=help_ln_sibling#topic7
>“AWS Marketplace products work with other AWS features such as VPC and can be run on Reserved and Spot instances, in addition to normal On Demand Instances”#2018-06-0917:19folconWhat am I misunderstanding here?
This works:
q:
[:find ?id :in $ ?txt % :where (or [db-search $ ?txt ?id])]
args:
[{:db/alias "sql/db"} "test" [[[db-search ?db ?txt ?id] [(fulltext ?db :song/name ?txt) [[?id]]]]]]
Rewriting q as:
[:find ?id :in $1 ?txt % :where (or [db-search $1 ?txt ?id])]
throws:
java.lang.Exception: processing rule: (q__922 ?id), message: processing clause:
However this works:
[:find ?id :in $1 ?txt % :where [db-search $1 ?txt ?id]]
Now I want the or as I’m trying to query multiple databases and want to use the results together. I’m just trying to build it up in steps as that’s the way I’ve found to do this with minimal issues.
Thanks =)…#2018-06-1112:30denikfrom the reading I’ve done datomic seems not to be the right tool for the job, but I figure I’ll ask here in case someone wants to prove me wrong:
We’re looking to implement n append only logs as stacks (LIFO) where data is unrelated and the stacks should not inhibit each other’s performance. It’s important we can read it in reverse order and stop on a predicate, think take-while. In case it matters, we’re using datomic cloud.#2018-06-1113:03chrisblom@denik sounds like you want something like kafka really#2018-06-1113:03chrisblomoh wait, you want stacks not queues#2018-06-1113:05denikexactly#2018-06-1113:05denikalso never care to delete data#2018-06-1113:53pesterhazyI'm thinking of using a Datomic-like system (maybe Datascript) to implement a system where multiple offline clients are editing the same document. So essentially there a multiple replicas of the db (or document), and clients can write to their replica optimistically even without connectivity to the "leader" node (in the cloud). When the clients come back online, the server receives the writes from the clients and needs to return the authoritative tx log, determining the order of the concurrent writes.#2018-06-1113:54pesterhazyHas anyone used a Datomic-like system for collaborative editing (a kind of multi-leader distributed db)?#2018-06-1113:58pesterhazyI'm especially curious what happens when node1 and node2 have concurrent writes (tx1 and tx2). The central node (node0) needs to either apply tx1 first and then tx2, or the other way round. In each case the tx-log would be different from what one of the clientd expected. Would this lead to problems?#2018-06-1113:59tonskyit will lead to logical conflicts, sure#2018-06-1113:59tonskybut if you keep “prev” state around on client 2 you can rearrange his txs so that the order would become tx1→tx2 on client2 too#2018-06-1113:59tonskystill doesn’t solve any conflicts though#2018-06-1114:02pesterhazyso you would have the server reject tx2 because tx1 has already been applied, and leave rearrange tx2 to node2?#2018-06-1114:04pesterhazyI'm not too worried about conflicts for now. 
If node1 and node2 both transact [:db/add 1234 :person/add "Joe"] and [:db/add 1234 :person/add "Jeff"] respectively, Last Write Wins would be an acceptable resolution#2018-06-1114:05pesterhazyI'm more concerned about getting the database into an inconsistent state by transacting in the wrong order, or not being able to apply the second transaction for some reason.#2018-06-1114:06tonskyDatomic/DataScript transactions are simple enough#2018-06-1114:06tonskyif you can’t apply a tx just throw it away. It’s kind of a conflict too#2018-06-1114:06pesterhazyI guess that's true#2018-06-1114:07tonskybut I recommend building a system that maintains same order at all instances#2018-06-1114:07tonskyso that tx2 is first applied locally, but when server confirmation comes it’s “undone” and tx1 + tx2 applied#2018-06-1114:07pesterhazyyes, I primary goal would be to have identical dbs (identical lists of datoms) on every node...#2018-06-1114:08pesterhazyso "eventual consistency", i.e. identical dbs after reestablishing connectivity with the central server#2018-06-1114:09pesterhazyin effect, the nodes would fork the db for optimistic updates, then discard the fork when the server returns the canonical tx log#2018-06-1114:12pesterhazyI was thinking of rolling my own system for this, but Datascript already solves so many of these problems it seems like a waste to ignore it#2018-06-1114:18pesterhazyIn Datascript, is it possible (or necessary?) to serialize the eavt/aevt indexes? I see that (-> @conn pr-str cljs.reader/read-string) only seralizes the datoms, so it may take some time to rebuild the index for a longer document when reading from, e.g. JSON.#2018-06-1114:18pesterhazyOr am I overthinking this?#2018-06-1114:32tonskyall indexes store the same datoms, just sorted in a different order. 
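tonsky's rebase scheme above (apply tx2 locally, then on server confirmation "undo" it and replay tx1 then tx2) can be sketched with DataScript by rebuilding from the last confirmed state rather than literally undoing — a hedged sketch where all names are illustrative:

```clojure
;; Sketch: replay the server-ordered log onto the confirmed db, then
;; re-apply any still-pending local txs, so every node converges on
;; the same datom order.
(require '[datascript.core :as ds])

(defn rebase [confirmed-db server-txs pending-local-txs]
  (reduce (fn [db tx] (:db-after (ds/with db tx)))
          confirmed-db
          (concat server-txs pending-local-txs)))
```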
Yes it takes time to sort them on DB import, whether it’s important or not is up to you#2018-06-1114:48pesterhazymakes sense#2018-06-1114:50pesterhazyso really in an in-mem db, a "covering index" is just an index, because there's not need to store the same datom twice#2018-06-1115:09tonskyyes#2018-06-1117:37spieden@pesterhazy i’d check out couchdb too as i believe it was designed with your use case in mind#2018-06-1117:41pesterhazy@spieden right, especially given that there already is pouchdb, which runs in the browser#2018-06-1118:30thegeez@pesterhazy "multiple offline clients are editing the same document" sounds like the JSON CRDT-ish automerge: https://github.com/automerge/automerge#2018-06-1118:35pesterhazy@thegeez I saw that, the paper that describes automerge is on my reading list#2018-06-1118:35pesterhazyDo you have experience in this field?#2018-06-1118:44thegeez@pesterhazy I have experience with operational transformations. I'm investigating the json crdt things. A lot depends on what kind of conflicts are possible or need to be supported#2018-06-1118:53pesterhazy@thegeez agreed. I think in my particular case, conflicts can be resolved with a simple last-write-wins strategy#2018-06-1118:56pesterhazyThe simplicity of storing data as [e a v t] tuples is appealing#2018-06-1118:56pesterhazyThat's why I'm investigating a Datomic-like data structure as a basis for real-time collaborative editing#2018-06-1118:58pesterhazyI guess EAV tuples are familiar from RDF, not just from Datomic#2018-06-1121:43rapskalianI might be totally missing this in the reference material, but how do I tear down what I've deployed using Datomic ions? Or should I actually remove all functions from my ion config and deploy "nothing"?#2018-06-1202:29jaretI think this would make a great forum topic. I might copy it over there tomorrow, but essentially Ions are immutable, there is no tear down 🙂. 
If you’re concerned with the added noise you can deploy an empty ion to clean up.#2018-06-1214:30rapskalianI'll be on the lookout for the post. Thanks Jaret 🍻#2018-06-1201:46mgIs there a way to do equality in rules? My use case here is a rule for a hierarchy, where I'd want to match in the case that: the two args are equal, the second arg is a parent of the first (via :parent relationship), or the second arg is a grand-parent (or recursively great-grandparent) of the first arg. The parent and grandparent cases are pretty straightforward, it's that equal case that's tripping me up#2018-06-1201:47mgI have now:
'[[(in-hierarchy ?child ?parent)
[(= ?child ?parent)]]
[(in-hierarchy ?child ?parent)
[?child :parent ?parent]]
[(in-hierarchy ?child ?parent)
[?child :parent ?p1]
(in-hierarchy ?p1 ?parent)]]
The clause with = is throwing, though#2018-06-1201:47favilaThrowing what?#2018-06-1201:48mgIllegalArgumentExceptionInfo :db.error/insufficient-binding [?child] not bound in expression clause: [(= ?child ?parent)] datomic.error/arg (error.clj:57)#2018-06-1201:48favilaAh, so where you are using that rule in your query child isn’t forced to be bound#2018-06-1201:49favilaEither force it (square bracket around args) or ensure it some other way#2018-06-1201:49favilaYou might also be able to name both args the same and have no rule body?#2018-06-1201:49favilaThus making it unify with itself#2018-06-1201:50mgahh that's interesting#2018-06-1201:50favilaNo you still need to ensure binding#2018-06-1201:50favilaBoth must be bound for an equality check#2018-06-1201:50favilaIt’s in the semantic of what you are doing#2018-06-1201:53favilaWait#2018-06-1201:53mgI suppose I could do something hacky here and round-trip via a reference attribute#2018-06-1201:53favilaDo you mean “ancestor or self”?#2018-06-1201:54favilaGiven a reference element, you want to match itself and all its parents transitively?#2018-06-1201:54mgyes#2018-06-1201:54favilaOk#2018-06-1201:55favilaThree impala as you have here#2018-06-1201:55favilaImpls #2018-06-1201:55favilaAll have first arg bound as the reference node#2018-06-1201:55favilaYour “equality” impl should just rename the arg using the identity fn #2018-06-1201:56favilaThe other two impls are correct#2018-06-1201:56favila[(identity ?ref) ?matched]#2018-06-1201:56mgah yes I see#2018-06-1201:58mgthere we go! thanks#2018-06-1201:58favilaNo it took me a while to get the first time#2018-06-1201:58favilaNp#2018-06-1201:59favilaThat identity arg renaming trick was key but I don’t really see people talking about it#2018-06-1201:59mgthere doesn't seem to be a good set of reference examples for rules generally#2018-06-1312:46Andreas LiljeqvistHow do I handle a customer that might stop having an address? 
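For reference, mg's rule above rewritten with favila's identity-renaming trick — a sketch assuming the same `:parent` attribute:

```clojure
;; "ancestor or self": the first impl unifies ?parent with ?child via
;; identity instead of an = predicate that needs both vars bound.
'[[(in-hierarchy ?child ?parent)
   [(identity ?child) ?parent]]     ; self case
  [(in-hierarchy ?child ?parent)
   [?child :parent ?parent]]        ; direct parent
  [(in-hierarchy ?child ?parent)
   [?child :parent ?p1]
   (in-hierarchy ?p1 ?parent)]]     ; transitive ancestors
```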
Like [:db/retract 123 :customer/address "oldaddress"]#2018-06-1312:47Andreas LiljeqvistIs there some util-functions that will take an entity and produce the needed retractions?#2018-06-1312:49Andreas Liljeqvist{:db/id 123 :customer/attribute :something} should in this case retract :customer/address#2018-06-1313:32Andreas LiljeqvistDoing an SPA and having a hard time figuring out how to communicate changes back to the server#2018-06-1313:33rhansenHow do people do local development when using datomic cloud in production?
If I understand the documentation correctly, there is a different client api dependency for on-prem and cloud, and the client api doesn't have an in-memory mode.
Just wondering how I'm supposed to do unit testing or local development if on a plane or the network goes down.#2018-06-1314:01stuarthalloway@rhansen not the case!#2018-06-1314:01stuarthallowaywell, partially anyway#2018-06-1314:02stuarthalloway{:server-type :ion
:region "us-east-1"
:system "stu-8"
:query-group "stu-8"
:endpoint ""
:proxy-port 8182}#2018-06-1314:02stuarthallowaythat will resolve to the local implementation when deployed into an ion, but will go through the socks proxy for local dev#2018-06-1314:03stuarthallowayso you do need a network connection, but there is a local dev story#2018-06-1314:03marshall@alqvist https://docs.datomic.com/cloud/transactions/transaction-functions.html#sec-1-1#2018-06-1314:04stuarthallowaythat all said, we understand the value of local dev and plan to continue improving in that area#2018-06-1314:04marshall@alqvist That function will retract the entire entity. If you just want to retract a single attribute of the entity, you can do it exactly as you showed, issuing an explicit [:db/retract entityID :attrID Value] in a transaction#2018-06-1314:05rhansen@stuarthalloway Interesting. My goal is an ion setup, so this does seem to help 🙂#2018-06-1314:05rhansenthanks!#2018-06-1314:22Andreas Liljeqvist@marshall On the client I have a pull(nested map) for some entity. A few assocs and dissocs is applied to that map - How can use the original map and the new to get a list of transactions that will lead to the same state?#2018-06-1314:26alexmillerI don’t think there is a thing for that but you could probably use something like https://clojure.github.io/clojure/clojure.data-api.html#clojure.data/diff to make data that you transform into a txn#2018-06-1314:27alexmillermight be a nifty small library. 
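alexmiller's `clojure.data/diff` suggestion above can be sketched like this — a hedged example that only handles flat, cardinality-one attributes (nested maps and cardinality-many need more care; the function name is illustrative):

```clojure
;; Sketch: derive tx-data from the difference between the pulled map
;; and the edited map.
(require '[clojure.data :as data])

(defn map-diff->tx [eid before after]
  (let [[removed added _] (data/diff before after)]
    (concat
     (for [[a v] removed :when (not (contains? after a))]
       [:db/retract eid a v])  ; attr dropped entirely
     (for [[a v] added]
       [:db/add eid a v]))))   ; new or changed value
```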
maybe someone has already done this?#2018-06-1315:06Andreas Liljeqvistok thanks#2018-06-1316:10rapskalian@alexmiller @andreas862 I've had my eye on this library for use cases such as this: https://github.com/juji-io/editscript#2018-06-1408:23Andreas LiljeqvistThanks for the link, will check it out#2018-06-1316:23johnjthe readme in datomic-free.zip says one can get the peer from maven but no version of it shows in maven https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22com.datomic%22%20AND%20a%3A%22datomic-free%22#2018-06-1316:24johnjis this a wrong readme?#2018-06-1316:40marshallthe Datomic Free peer is in clojars
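A minimal sketch of the diff-to-transaction idea discussed above, built on clojure.data/diff. The helper entity-diff->tx is hypothetical (not part of Datomic or any library) and assumes flat entity maps with cardinality-one attributes:

```clojure
(require '[clojure.data :as data])

;; Hypothetical helper: turn an old and a new entity map into tx data.
;; Assumes flat maps and cardinality-one attributes.
(defn entity-diff->tx [eid old new]
  (let [[removed added _common] (data/diff old new)]
    (concat
     ;; attribute disappeared entirely -> retract its old value
     (for [[a v] removed :when (not (contains? new a))]
       [:db/retract eid a v])
     ;; attribute added or changed -> assert the new value
     (for [[a v] added]
       [:db/add eid a v]))))

(entity-diff->tx 123
                 {:customer/address "oldaddress" :customer/name "Ann"}
                 {:customer/name "Ann" :customer/phone "555-1234"})
;; => ([:db/retract 123 :customer/address "oldaddress"]
;;     [:db/add 123 :customer/phone "555-1234"])
```

For nested pulls (the SPA case above) you would have to recurse into component entities first; libraries like editscript take that idea further.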
The Datomic Pro peer is available by private maven repo, credentials supplied in your http://my.datomic.com account @lockdown-#2018-06-1316:44johnj@marshall ok, looks like it wasn't resolving for me because the latest version is not available in clojars yet: https://clojars.org/com.datomic/datomic-free#2018-06-1316:45marshall@lockdown- Ah - i’ll have a look. Thanks for catching that#2018-06-1316:53marshall@lockdown- if you need it in the interim, you can download it directly from https://my.datomic.com/downloads/free and use the bin/maven-install script to install it locally#2018-06-1316:54johnjyeah thanks, that's what I did :+1:#2018-06-1320:11bj#2018-06-1323:32steveb8n@rhansen I created https://github.com/stevebuik/ns-clone to deal with this. works well for me. Sounds like it won’t be necessary in future though.#2018-06-1406:39jlmrHi, I'm trying to set something up so that I can sync Datomic entities to ElasticSearch (with Datomic being the source of truth). I would like it to be able to "catch up" on transactions that may have happened while the process was offline (or for rebuilding the Elastic index at some point) as well as continuously keeping pace with transactions as they occur while the process is online.
Right now I'm using tx-range for catch-up and tx-report-queue for keeping pace. I'll get the datoms out of the transactions and use them to pull entities for syncing to Elastic. This does seem to work to some extent; however, more entities are indexed in Elastic than there are in Datomic. Some pieces of the solution are apparently still missing. I suspect I need pieces of code that:
• Tell which entities were added between two t's: these need to be created.
• Tell which entities were changed (attributes changed or entities retracted) between two t's: these need to be updated.
• Tell which entities were deleted (i.e. all attributes retracted) between two t's: these need to be removed.
It would be great to get some tips on how I can do this or pointers to earlier material or solutions for similar problems.
Thanks in advance!#2018-06-1407:31val_waeselynck@jlmr I've implemented something like that for our stack. In our case, we do it in a batch (as opposed to streaming) fashion, every 30 min, therefore we use the Log API - but this could be applied to the txReportQueue if you wanted to do streaming.#2018-06-1407:31val_waeselynckFor simplicity, we don't make any difference between added and changed; in both cases, the whole document gets recomputed and upserted into ES.#2018-06-1407:35val_waeselynckIn our case, we're dealing with Customer entities. We detect additions/changes with a (cust-changed ?e ?customer) Datalog rule, in which ?e is an entity that appears in the Log data, and ?customer is a customer that gets affected by this change to ?e. We register an implementation of this rule for each data path leading to a change to the Customer.#2018-06-1407:36val_waeselynckTo detect deletions, we detect datoms of the form [?cust :customer/id _ _ false] - which happen iff the Customer gets deleted.#2018-06-1407:40val_waeselynckAbout ES management:#2018-06-1407:40jlmrThanks @val_waeselynck, right now I'm using the same general idea to detect deletions, however I'm still unfamiliar with the idea of Datalog rules.#2018-06-1407:40val_waeselynck1) do the updates in batches#2018-06-1407:40val_waeselynck2) Your ES materialized view needs to maintain an 'offset' t - so that it can pick up where it left off#2018-06-1407:42val_waeselynck@jlmr https://docs.datomic.com/on-prem/query.html#rules#2018-06-1407:42jlmrI opted to use t as the external_version number for documents in elastic. 
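A sketch of the deletion-detection step val_waeselynck describes above: scan the log for retractions of the entity's identity attribute. deleted-eids is a hypothetical pure helper, fed the :data of transactions from (d/tx-range (d/log conn) t1 t2) with datoms as [e a v tx added?] tuples; id-attr is the entid of an identity attribute such as the :customer/id mentioned above:

```clojure
;; Hypothetical pure helper (not a Datomic API): given log datoms as
;; [e a v tx added?] tuples and the entid of the identity attribute,
;; return ids of entities whose identity attribute was retracted,
;; i.e. entities that were deleted.
(defn deleted-eids [datoms id-attr]
  (->> datoms
       (keep (fn [[e a _v _tx added?]]
               (when (and (= a id-attr) (not added?))
                 e)))
       distinct))

;; 64 stands in for (d/entid db :customer/id)
(deleted-eids [[17 64 "c-1" 1001 false]   ; identity retracted -> deleted
               [42 64 "c-2" 1001 true]    ; identity asserted  -> created
               [42 65 "Ann" 1001 true]]   ; unrelated attribute
              64)
;; => (17)
```

The added-or-changed side can stay coarse, as described above: recompute and upsert the whole document for any entity touched in the range.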
That way newer versions get overwritten#2018-06-1407:43jlmrI'll take a look at the Datalog rules as well#2018-06-1407:44val_waeselynckRules are not mandatory for doing this (you can also do plain old disjunction with or-join), but they will allow you to decouple your code.#2018-06-1407:45val_waeselynck@jlmr I think you'll need something more than this external_version - you want to know at what t the whole materialized view was last updated, not one of its document (what if the last update consisted only of deletions?)#2018-06-1407:46val_waeselynckI recommend to keep track of this t in a document of a dedicated type#2018-06-1407:48val_waeselynckAlso, if you're going to do batching, consider using 2 rolling ES indexes that you put behind an ES index alias - this will give you more consistency, as you'll never query an 'in progress' MV#2018-06-1407:48jlmrgood points!#2018-06-1407:48jlmrIs there some code I could take a look at?#2018-06-1407:49val_waeselynckNo, sorry, all proprietary#2018-06-1407:50val_waeselynckDo watch this if you haven't already, it will set your ideas straight: https://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/#2018-06-1408:00jlmrwill take a look at it next week when I have time again for this project#2018-06-1409:12jeroenvandijkRegarding datomic ions, will there be a version that accesses the Ion instances directly (through an AWS ELB) to support applications with higher http throughput requirements than AWS lambda offers by default?#2018-06-1409:13jeroenvandijkMy gut feeling says this is possible as it would be some different AWS cloudformation template + some http server functionality on the Ions node (that might already be part of the current setup)#2018-06-1409:18jeroenvandijkThe
above question could also be me missing the point about the benefit of putting lambda in between#2018-06-1420:28chris_johnsonDoes the Peer library expose any logging about its use of Memcached servers?#2018-06-1420:43jaret@chris_johnson there is memcache average, sum, samples.#2018-06-1420:43jarethttps://docs.datomic.com/on-prem/monitoring.html#transactor-metrics#2018-06-1420:44donaldballIs there a semantic difference between [(missing? $ ?e :foo/bar)] and (not [?e :foo/bar _]) ?#2018-06-1420:49chris_johnson@jaret Are those not tracking transactor use of memcache? I’m looking at my peer usage#2018-06-1509:21val_waeselynckReleased Datomock v0.2.1. Fixed a bug causing d/transact to throw instead of returning a failed ListenableFuture, which means that you could typically see some errors in development and not see them in production. Thanks @tony.kay for spotting this.
https://github.com/vvvvalvalval/datomock#2018-06-1512:34evangelinehello! does anyone have any experience with migrations in datomic cloud? i've been using conformity for on-prem but it seems that the library doesn't support client API 😞#2018-06-1513:29chris_johnsonQuestion about the Ions starter project: should you expect to have to push your ions before you can develop against them locally?#2018-06-1513:29chris_johnsonI’m getting an error about datomic/ion-starter.edn not being on the classpath and I see it both in my REPL and in the log output from my Cloud instance:#2018-06-1513:30chris_johnsondatomic-datomic-cloud-appsync datomic-cloud-appsync-datomic-cloud-appsync-Compute-AKGO1YKWH8U7-i-0a275b66a345e2502-2018-06-09-17-06-06- {"Msg":"ClientSPIAnomaly","DatomicClientSpiErrorResponse":{"Status":200,"Body":{"CognitectAnomaliesCategory":"CognitectAnomaliesNotFound","CognitectAnomaliesMessage":"'datomic\/ion-config.edn' is not on the classpath"}},"Type":"Event","Tid":361,"Timestamp":1529069275190}
datomic-datomic-cloud-appsync datomic-cloud-appsync-datomic-cloud-appsync-Compute-AKGO1YKWH8U7-i-0a275b66a345e2502-2018-06-09-17-06-06- {"Msg":"AdopterCheck","DbId":"c4f19984-b350-4a1e-94de-5368e4b26b12","Type":"Event","Tid":382,"Timestamp":1529069276364}#2018-06-1513:31chris_johnsonwhich makes me think maybe you have to push …first? Maybe you have to push an “empty project” so that the ion-config.edn file exists in the Cloud instance, and then you can develop against your local one? I find this confusing.#2018-06-1513:31marshall@chris_johnson the file is read locally. deploy will upload the necessary files to s3#2018-06-1513:32marshalldid you create a datomic/ion-starter.edn file?#2018-06-1513:32chris_johnsonhm. that’s confusing but in a different and more-manageable way#2018-06-1513:32marshallhttps://docs.datomic.com/cloud/ions/ions-tutorial.html#sec-2-1#2018-06-1513:32chris_johnsonI sure did, it’s right next to ion-config-sample.edn and the only difference between the two is that I added my system name#2018-06-1513:33chris_johnsonI restarted my REPL too#2018-06-1513:33chris_johnsonand deps.edn does have {:paths ["src" "resources"] in it#2018-06-1513:34marshallhow are you starting your REPL?#2018-06-1513:35chris_johnsonclj#2018-06-1513:35marshallwhere are you getting that error? in local repl (when you do what)?#2018-06-1513:36chris_johnsonwell, so I do have one other thing going on which I suppose will end up being the problem somehow, though I don’t see how: I added CIDER to my deps like so:#2018-06-1513:36chris_johnsoncider/cider-nrepl {:mvn/version "0.18.0-SNAPSHOT"}
refactor-nrepl {:mvn/version "2.4.0-SNAPSHOT"}#2018-06-1513:37chris_johnsonand I start the REPL with clj and then (require 'cider-nrepl.main) and (cider-nrepl.main/init ["cider.nrepl/cider-middleware"])#2018-06-1513:38chris_johnsonand I am C-c C-e eval’ing this: (items-by-type* (d/db (get-connection)) :hat)#2018-06-1513:38marshallare you trying to invoke the ion on a Cloud instance before you’ve done a push?#2018-06-1513:38chris_johnsonindeed#2018-06-1513:39chris_johnsonthat was the substance of my initial, poorly-worded question#2018-06-1513:39marshallyou would need to push before you can invoke it remotely#2018-06-1513:39marshallpush and deploy#2018-06-1513:39chris_johnsonso you can do truly local development (e.g., handing a map into a fn) in isolation, but in order to exercise something that uses get-connection you have to push and deploy that first#2018-06-1513:41chris_johnsonthat makes sense now that I know it, but it wasn’t clear to me from the way the starter docs order things. Thanks for clearing it up. As you might guess from my log message above, my Friday Me Time™ today is all about getting an Ion or two working with AppSync and writing it up for others to see. Onward and upward! 😄#2018-06-1513:42marshallCool. glad that makes sense now#2018-06-1513:42stuarthalloway@chris_johnson not exactly -- see https://docs.datomic.com/cloud/ions/ions-reference.html#server-type-ion#2018-06-1513:43stuarthallowayget-connection is not part of Datomic Cloud, but if it uses :server-type :ion you can dev locally without :push#2018-06-1513:44stuarthallowayalso cider et al should not be in your production deps? just under e.g. 
:dev alias#2018-06-1513:45chris_johnsonso, I do in fact have :server-type :ion set up and I’m using the get-connection -> ensure-dataset -> get-client machinery in the starter project#2018-06-1513:46chris_johnsonI guess based on my reading of the docs I would have expected to be able to exercise all the ions in the project locally against the Cloud instance over the SOCKS proxy (which is running)#2018-06-1513:46chris_johnsonensure-dataset appears to have transacted schema and data to the Cloud instance#2018-06-1513:47chris_johnsonand thanks for the pointer about :dev - this is my first time trying out a tools.deps based project so I took the bull-in-a-china-shop approach and said “oh look, a dependencies array! I’ll just leave these here” 😄#2018-06-1513:48chris_johnsonI want to learn Ions now and will go back and learn tools.deps in more depth after I have my blog post written hehe#2018-06-1513:53chris_johnsonI do get the same error from the stock clj REPL too:#2018-06-1513:53chris_johnsonuser=> (in-ns 'datomic.ion.starter)
#namespace[datomic.ion.starter]
datomic.ion.starter=> (items-by-type* (d/db (get-connection)) :hat)
ExceptionInfo 'datomic/ion-config.edn' is not on the classpath clojure.core/ex-info (core.clj:4739)
datomic.ion.starter=>
#2018-06-1514:13chris_johnsonoh#2018-06-1514:13chris_johnsonhaha, yes#2018-06-1514:13stuarthalloway@chris_johnson is that error from the Cloud or the local REPL?#2018-06-1514:14chris_johnsonI think the problem is actually right here:
> I want to learn Ions now and will go back and learn tools.deps in more depth after I have my blog post written hehe#2018-06-1514:14stuarthallowaywhat does (clojure.java.io/resource "datomic/ion-config.edn") tell you?#2018-06-1514:14chris_johnsonI was running clj without -Rdev and so I'm 99% sure I was running without ion-dev loaded at all#2018-06-1514:18alexmillershould -R:dev btw (although -Rdev works right now)#2018-06-1514:15chris_johnsonI'm a few seconds away from apologizing for wasting yours and Marshall's coffee break#2018-06-1514:15stuarthallowaythe error message was on point 🙂#2018-06-1514:17chris_johnsonhm, well I still get that error so that wasn't it#2018-06-1514:18chris_johnsonIt's being reported by the local REPL but is also showing up in the Cloud logs:#2018-06-1514:18chris_johnsondatomic-datomic-cloud-appsync datomic-cloud-appsync-datomic-cloud-appsync-Compute-AKGO1YKWH8U7-i-0a275b66a345e2502-2018-06-09-17-06-06- {"Msg":"ClientSPIAnomaly","DatomicClientSpiErrorResponse":{"Status":200,"Body":{"CognitectAnomaliesCategory":"CognitectAnomaliesNotFound","CognitectAnomaliesMessage":"'datomic\/ion-config.edn' is not on the classpath"}},"Type":"Event","Tid":357,"Timestamp":1529072273749}#2018-06-1514:21chris_johnsonto answer your question:
user=> (clojure.java.io/resource "datomic/ion-config.edn")
#object[java.net.URL 0x73a19967 "file:/Users/chris/src/ion-starter/resources/datomic/ion-config.edn"]
user=>
#2018-06-1514:22chris_johnsonand slurping that does show the contents I would expect to see in the file, including a valid value for :app-name#2018-06-1514:42viestihttps://aws.amazon.com/blogs/compute/introducing-amazon-api-gateway-private-endpoints/#2018-06-1514:42viestiSo now one can do private services with Ions#2018-06-1514:43chris_johnsonyeah, that’s going to be Big™ - I’m sure lots of people’s Friday is figuring out how to use that to pull a bunch of their service API surface “inside the wire”#2018-06-1514:43marshall@chris_johnson did you edit the client map in ion/starter.clj ?#2018-06-1514:44marshallhttps://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L14#2018-06-1514:46marshallif you’re going to do local dev, you’ll need to set up that map#2018-06-1514:49chris_johnsonI did do that#2018-06-1514:49chris_johnsonokay, so this appears to be working:#2018-06-1514:49chris_johnsonClojure 1.9.0
(def cfg {:server-type :ion
:region "us-east-2"
:system "datomic-cloud-appsync"
:query-group "datomic-cloud-appsync"
:endpoint ""
:proxy-port 8182})
#'user/cfg
user=> (require '[datomic.client.api :as d])
nil
user=> (def client (d/client cfg))
#'user/client
user=> (def conn (d/connect client {:db-name "datomic-docs-tutorial"}))
#'user/conn
(defn create-item
"Transaction fn that creates data to make a new item"
[db sku size color type]
[{:inv/sku sku
:inv/color (keyword color)
:inv/size (keyword size)
:inv/type (keyword type)}])
#'user/create-item
user=> (create-item (d/db conn) "test-001" "large" "blue" "shirt")
[#:inv{:sku "test-001", :color :blue, :size :large, :type :shirt}]#2018-06-1514:50chris_johnson(note that this is just mechanically copying into the REPL the very same config map and fn def that exist in the local source file)#2018-06-1514:55stuarthalloway@chris_johnson pretty sure @marshall was right at the beginning#2018-06-1514:55stuarthallowayyou cannot call a query function that you haven't deployed#2018-06-1514:56chris_johnsonokay#2018-06-1514:56stuarthallowayyou can dev and test it directly, e.g.#2018-06-1514:56stuarthalloway(datomic.ion.starter/feature-item? db some-e)#2018-06-1514:56stuarthallowaybut not inside a query as items-by-type* does#2018-06-1514:57stuarthallowaywe probably could use more pictures describing the dev workflow#2018-06-1514:57chris_johnsonright, that makes sense and tracks with the next thing I tried, which was to defn feature-item? in #'user and re-`defn` items-by-type* to use that#2018-06-1514:58chris_johnsonwhich yields ExceptionInfo Unable to resolve symbol: feature-item? in this context clojure.core/ex-info (core.clj:4739)#2018-06-1514:59chris_johnsonthat all makes perfect sense and for the record I was 100% sure at every point that one of the two of you was right and I was doing something incorrectly 😇#2018-06-1515:00chris_johnsonhopefully your efforts to help an under-caffeinated neophyte will persist in the logs long enough to help someone else. Thanks, as always, for being so willing to spend time in here working through issues.#2018-06-1515:02stuarthallowayI am afraid my contribution here was net negative. Sorry @marshall and @chris_johnson#2018-06-1515:02marshallnah#2018-06-1515:03marshallit was also correct, just not for this specific context i think#2018-06-1515:05chris_johnsonI agree, I think at least for me there was value in having you both assert the correct things you did and then having to convince myself they were true at the REPL. 
As I find often to be the case, coming at something with a half-correct mental model and a thoroughly-broken code example, with a couple of willing experts to offer advice, has taught me much more than just having the tutorial work right the first time ever would have.#2018-06-1515:06marshallwell. one expert and one pretender.#2018-06-1515:07marshalli’ll let @stuarthalloway interpret that as he sees fit 😉#2018-06-1515:13chris_johnsonand indeed, as a final check before deploying anything, reloading starter.clj with this form of items-by-type* does work at the REPL (and in CIDER for that matter):
(defn items-by-type*
"Returns info about items matching type"
[db type]
(d/q '[:find ?sku ?size ?color #_?featured
:in $ ?type
:where
[?e :inv/type ?type]
[?e :inv/sku ?sku]
[?e :inv/size ?size]
[?e :inv/color ?color]
#_[(datomic.ion.starter/feature-item? $ ?e) ?featured]]
db type))
#2018-06-1515:13chris_johnson💯#2018-06-1515:34chris_johnsonCompletely unrelated question while you’re (maybe) still reading, @marshall - does the Peer library log anything about its use of memcached? Not the transactor’s CloudWatch metrics but the Peer library. I’m moving a service from Heroku to AWS and it doesn’t seem to be using the shiny new Elasticache cluster I set up for it, even though it was making use of Memcachier in Heroku. I tried turning the datomic logger in our logback.xml up to ”DEBUG” but I still don’t see any telemetry I can use to figure out What I Did Wrong #2018-06-1515:34marshallYes, the peer lib will report memcache metrics as well#2018-06-1515:35marshallyou need to specify the memcached server address as a command line arg#2018-06-1515:35marshallto the jvm#2018-06-1515:36chris_johnsonYes, we do that #2018-06-1515:37chris_johnsonIt will report metrics? Like, CloudWatch metrics or logging statements? #2018-06-1515:37marshalldo you have peer metrics enabled?#2018-06-1515:38marshallhttps://docs.datomic.com/on-prem/monitoring.html#register-callback#2018-06-1515:38marshallyou’d need to have a metrics callback enabled#2018-06-1515:38marshallyou could use println if you just want them locally in a repl, or you can write (or use a community) wrapper to push them to CloudWatch#2018-06-1515:39marshalli.e. something like this https://gist.github.com/geoff-kruss/4504cdcf7e017d289862ab75fc856720
**I haven’t used that specifically and can’t officially endorse it, but that should give you an idea#2018-06-1515:39chris_johnsonOh ho #2018-06-1515:39chris_johnsonNo, I don’t believe we have a callback handler set#2018-06-1515:40chris_johnsonI was expecting to find stuff in the console logs because I am a bad person who doesn’t read documentation, apparently #2018-06-1515:40chris_johnsonI will try that, thanks!#2018-06-1515:40marshallif you have logging enabled (i.e. logback or something) you will likely see them there too#2018-06-1515:41marshalli’d suggest looking at the logs when you start up the app - it should record some details about the datomic config and creating the memcached connection#2018-06-1515:41marshallagain, assuming your peer lib logging is enabled#2018-06-1515:42marshallhttps://docs.datomic.com/on-prem/configuring-logging.html#peer-logging#2018-06-1515:44chris_johnsonit should be and the fellow who set this up in Heroku in the long-long ago claims that he’s seen useful peer logging in the past by changing the same value I did in our logback.xml but I have not seen with my own eyes proof that this is true#2018-06-1515:44marshallhrm. you should just need to have logback.xml on your classpath and logback in your project deps#2018-06-1515:45marshallhttps://docs.datomic.com/on-prem/configuring-logging.html#leiningen-logback#2018-06-1515:49chris_johnsonyeah, we do - just knowing that it should be logging something at startup is enough for me to chew on#2018-06-1518:11kennyIs there a way to get the datomic-socks-proxy to persist across laptop sleeps?#2018-06-1518:31jaretWe had one user who made the following suggestion, I haven’t tested but others have and it seems to work:
>I installed autossh instead and hacked the script to use this, and it is now much more stable (and survives sleeps of my laptop). I wonder whether it might be worth having the standard script check for the installation of autossh and if found, use that instead (and maybe print a message to the user if not found, before continuing with the regular ssh client). For anybody interested in my little hack, I just commented out the ssh command at the bottom of the script, and added the autossh one. Like this... #ssh -v -i $PK -CND ${SOCKS_PORT:=8182} ec2-user@${BASTION_IP} autossh -M 0 -o "ServerAliveInterval 5" -o "ServerAliveCountMax 3" -v -i $PK -CND ${SOCKS_PORT:=8182} ec2-user@${BASTION_IP}#2018-06-1518:32jaretCredit to @U9HA101PY. I might copy it over to the forum as I think this is the second or third time I've linked it.#2018-06-1518:32jaretas reduce says, https://mosh.org/ would probably work as well#2018-06-1518:19johnjmosh#2018-06-1623:11euccastrois there, in Datomic Cloud and/or Ions, anything like tx-report-queue?#2018-06-1623:15euccastroI see there's tx-range in the Client API. I imagine one could keep polling with that. but is there any way to subscribe once to get any new transaction reports sent ~immediately?#2018-06-1701:46steveb8nA suggestion for the Cloud docs: I used a non-default region for my Cloud Stack and this caused problems in setting up cloud (key pairs in other regions) and with Ion push (an error about not finding matching tags). Maybe the docs could help noobs like me avoid these errors by making the sensitivity to region more explicit?#2018-06-1702:01steveb8nfor reference, the Ion error when pushing to the incorrect region is “Did not find exactly one result with tags” so not obvious that the region is the problem#2018-06-1703:26steveb8nanother doc correction: the post-deploy curl command fails with 403 {“message”:“Missing Authentication Token”} but, if you add any path to the uri then it succeeds e.g. 
curl https://$(obfuscated-name).execute-api.us-east-1.amazonaws.com/dev/foo -d :hat#2018-06-1703:26steveb8nwithout the /foo it will 403#2018-06-1704:01steveb8nI have a question about Ion transaction fns vs traditional Datomic “transactor” fns: in Ion transaction fns, the generation of datoms and the transaction of those datoms is 2 separate steps, unlike the old transactor fns. Does this mean we run the risk of concurrency issues if the delay between generate vs transact is non-zero? In the old world, these were a single step so no concerns like this. What is the advice for how to implement db invariants (e.g. composite keys) in light of this?#2018-06-1704:03steveb8nideally I’d like to use spec to validate entities being written but, for now, reliable composite keys are my focus#2018-06-1811:48stuarthalloway@steveb8n your premise is incorrect. Ion transaction functions work exactly like database transaction functions. They differ only in how the code is loaded.#2018-06-1811:50stuarthallowayIs there something in the docs that made this unclear? If so I would like to fix it.#2018-06-1815:35stuarthalloway@steveb8n the next release of ion-dev will have a better error message when you specify the wrong region -- thanks for the report!#2018-06-1819:23gabrielewhat is the best way to understand where the problem is? (i'm using ions)#2018-06-1820:03gabrielethis is a bit frustrating, i cannot deploy, every time it fails and i have no way to know what's wrong#2018-06-1820:16jaretHi @gabriele.carrettoni what are your deps in your project?#2018-06-1820:16gabriele@jaret#2018-06-1820:16jaret@gabriele.carrettoni and can you look in your Datomic Cloud system logs via these instructions and search for Ion or Exception?#2018-06-1820:17jarethttps://docs.datomic.com/cloud/troubleshooting.html#http-500#2018-06-1820:17jaret@gabriele.carrettoni ah you have buddy. I think that's the problem. 
let me confirm, we had a user run into this previously.#2018-06-1820:19steveb8n@stuarthalloway when using the old transactor fns, it was a single api call and it ran on the transactor, where the transactor called any txn fns as part of that single call. with Ions, it’s 2 api calls. 1/ to generate the extra datoms https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L154 and 2/ to make them persist https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L155#2018-06-1820:19steveb8nin my use case I am using a transactor fn to run a query and then throw an exception if I find a conflict#2018-06-1820:20steveb8nwith the Ion pattern, another thread could transact in between those 2 api calls and my check would miss it#2018-06-1820:20stuarthallowaysteveb8n nope 🙂#2018-06-1820:20steveb8nFWIW I really like new pattern as it matches the pure db command pattern I am using for all my txns#2018-06-1820:21stuarthallowaythat line you linked is not a function call. it is pure data#2018-06-1820:21stuarthallowayexecuted on the cluster node, inside the transaction#2018-06-1820:21steveb8nah yes, I see my mistake#2018-06-1820:22steveb8nso it is identical. thanks, clearly I was reading too fast#2018-06-1820:22stuarthallowaywell, it is a function call ... but to the function list* 🙂#2018-06-1820:22steveb8nit is one api call, that’s what we need to atomic txn fns, I see that now#2018-06-1820:23steveb8nwhile we’re chatting, re my other hiccup with testing the web ion sample, I learned about API gateway, resources and methods and made it work without needing /foo in the uri#2018-06-1820:24steveb8nbut I think the docs could help avoid that by providing a bit more detail in setting up the API gateway routes#2018-06-1820:25steveb8nit’s a deep rabbit hole so helping noobs not fall straight in will be good for first impressions#2018-06-1820:26steveb8nwith that, I’m off to work. 
but generally stoked about Ion and all the devops work I don’t have to do. great work!#2018-06-1820:26stuarthalloway@steveb8n agreed, and/or maybe have some automation for API Gateway. It is a deep well.#2018-06-1820:32gabriele@jaret removed all extra libraries, still the same#2018-06-1820:35jaret@gabriele.carrettoni it looks like code deploy is having some issues. I am also not able to deploy.#2018-06-1820:35jarethttps://status.aws.amazon.com/#2018-06-1820:36gabriele@jaret but the old version is deploying just fine, when i rollback#2018-06-1820:36jaretService: AmazonCodeDeploy; Status Code: 500; Error Code: InternalFailure;
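The point stuarthalloway settles in the exchange above can be shown concretely: an ion transaction function is invoked as *data* inside :tx-data, and Datomic resolves and runs it on the cluster node inside the transaction, so an invariant check in the function and the writes it produces are atomic. The function name below is the create-item from the ion-starter REPL session earlier; the db argument is supplied by Datomic, not by the caller:

```clojure
;; The "call" is plain data in the transaction:
(def tx-data
  [(list 'datomic.ion.starter/create-item "SKU-42" "large" "blue" "shirt")])

;; (d/transact conn {:tx-data tx-data})
;; Datomic resolves the symbol, passes the in-transaction db as the
;; first argument, and splices the returned datoms into the same tx.
```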
#2018-06-1820:36gabriele@jaret btw i'm in europe#2018-06-1820:40gabrielewhere does datomic store logs on the ec2 instance?#2018-06-1820:40gabrielecan't seem to find them#2018-06-1820:49jaret@gabriele.carrettoni under https://console.aws.amazon.com/cloudwatch/home#logs:#2018-06-1820:49jaretSearch for your datomic-system name#2018-06-1820:49jaretThere should be an error there for the deploy.#2018-06-1820:49gabriele@jaret there are no exceptions#2018-06-1820:50jaretJust to be sure, did you capitalize the E when searching for “Exceptions”#2018-06-1820:51gabriele@jaret#2018-06-1820:54jaret@gabriele.carrettoni apologies… can you confirm you get no results with “Exception”#2018-06-1820:54jaretI added an “s”#2018-06-1820:55gabriele@jaret#2018-06-1820:58jaret@gabriele.carrettoni can you look in your cloudwatch logs for a message “Cluster node starting”#2018-06-1820:59gabriele@jaret "no event... "#2018-06-1821:02jaret@gabriele.carrettoni and you aren’t seeing any alerts in cloudwatch to the left? Do you see a datomic instance with your system name when looking at the ec2 dashboard with a status of “running”#2018-06-1821:03gabrieleno, yes#2018-06-1821:04gabriele@jaret i just tried to clone ion-starter and deploy#2018-06-1821:04gabrieleand that project works#2018-06-1821:04jaretWell now I am thoroughly confused. You removed all the added deps from the previous project so it should have been effectively the same.#2018-06-1821:05gabrieletell me about that 😅#2018-06-1821:07jaretHa! Well just to warn you on the buddy front. Buddy and Pedestal use Cheshire which uses jackson and we’ve previously found a conflict with the versions there. 
The previous client was able to drop buddy altogether, but you should be able to force a newer version of the jackson dep to make it work if you do run into an issue including buddy.#2018-06-1821:08jaretAs Stu mentioned we’re working on a new release of Ion-dev that will show deps conflicts at the time of deploy to help troubleshoot those issues without going into the logs.#2018-06-1821:12jaret:dependency-conflicts
{:deps
([com.fasterxml.jackson.dataformat/jackson-dataformat-cbor
#:mvn{:version "2.6.7"}]
[com.fasterxml.jackson.core/jackson-core #:mvn{:version "2.6.4"}])
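Given a conflict report like the one above, jaret's suggestion of forcing one Jackson version can be expressed as a top-level deps.edn entry, since top-level deps win over transitive versions in tools.deps; the version number here is illustrative only, not a tested recommendation:

```clojure
;; deps.edn fragment (illustrative): pinning Jackson at the top level
;; makes every transitive user resolve to the same version.
{:deps
 {com.fasterxml.jackson.core/jackson-core                  {:mvn/version "2.9.6"}
  com.fasterxml.jackson.dataformat/jackson-dataformat-cbor {:mvn/version "2.9.6"}}}
```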
#2018-06-1908:34gabriele@jaret by removing every line/namespace and testing i finally found what it is that breaks: if you have a namespace called "datomic.db" in your project, codedeploy fails at validate service#2018-06-1910:04conanDoes anybody know whether communication with an on-prem transactor is secure by default? I'm trying to get one running in Heroku, backed with Heroku Postgres; all comms with the DB are over SSL and authenticated, so I'm not worried about the storage, so if comms between peers and the transactor are secure then I'm good to go, but I can't find any mention of the transactor's network protocol anywhere. I'd be very grateful if anyone knows the answer or can point me to a helpful resource, thanks#2018-06-1913:11stuarthalloway@gabriele.carrettoni good catch! You cannot, and should never, use namespaces owned by some other organization in your code#2018-06-1913:19gabriele@stuarthalloway lesson learnt#2018-06-1913:47stuarthalloway@conan look at your transactor properties file#2018-06-1915:29conanoh great, i'd missed this. thanks, that reassures me a lot!#2018-06-1913:47stuarthalloway## Set to false to disable SSL between the peers and the transactor.
# Default: true
# encrypt-channel=true
#2018-06-1914:05gabriele@stuarthalloway now i get namespace 'cheshire.factory' not found#2018-06-1914:06gabrielejava.lang.Exception: namespace 'cheshire.factory' not found, compiling:(cheshire/core.clj:1:1)
what is going on now 😣#2018-06-1915:00stuarthalloway@gabriele.carrettoni I doubt cheshire will work at all until we update jackson in Datomic Cloud, planned for the next release#2018-06-1915:01stuarthallowayCheshire requires a version of Jackson that uses methods not present in the version currently shipping with Datomic Cloud#2018-06-1915:03gabriele@stuarthalloway i see, i'll remove the dependency and use data.json#2018-06-1915:06stuarthallowaythat is a priority fix for us, you are (at least) the second person to hit it#2018-06-2001:12souenzzoIs there a roadmap to update on-prem deps too?
By default, it has issues with clojurescript and tons of other projects, like onyx#2018-06-1915:21denikcan’t connect to datomic cloud:
Caused by: clojure.lang.ExceptionInfo: Unable to connect to system: #:cognitect.anomalies{:category :cognitect.anomalies/unavailable, :message "Connection refused"}
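Errors like the one denik pasted carry a plain `cognitect.anomalies` map. A small sketch (the helper name is made up; it assumes the anomaly map rides in the exception's `ex-data`, as the printed message suggests) of pulling the category out:

```clojure
;; Sketch (hypothetical helper): anomalies are plain maps; when one rides
;; on an ExceptionInfo, branching on the category can drive retry logic.
(defn anomaly-category
  [ex]
  (:cognitect.anomalies/category (ex-data ex)))

(anomaly-category
 (ex-info "Unable to connect to system"
          #:cognitect.anomalies{:category :cognitect.anomalies/unavailable
                                :message  "Connection refused"}))
;; => :cognitect.anomalies/unavailable
```

Per the cognitect.anomalies taxonomy, `:unavailable` is one of the retryable categories, which fits the restart-and-retry resolution in this thread.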
#2018-06-1915:22denikran into this once before a few months ago#2018-06-1915:22deniknothing changed in the code or creds#2018-06-1915:22denikit simply worked, then I restarted the repl and now it doesn’t work anymore#2018-06-1915:26deniksince cloud is necessary even for dev, I’m now 100% blocked#2018-06-1915:33marshall@denik you’re getting that error in your local repl? have you restarted your SOCKS proxy? can you run the proxy test (https://docs.datomic.com/cloud/getting-started/connecting.html#test-bastion)#2018-06-1915:36denikI did restart (even the machine which is what fixed it last time)#2018-06-1915:37marshallwhat do you get from the proxy test?#2018-06-1915:40denikI can’t run the test, we shut down out bastion bc we develop with a machine inside the VPC#2018-06-1915:41marshallhave you upgraded Datomic? Is your primary compute node still online?#2018-06-1915:44denikyes, prod db works (same system)#2018-06-1915:44denikhaven’t touched deps or creds#2018-06-1915:44denikneither code#2018-06-1915:45denikI might have had an orphan jvm running that was holding a connection but after restart that should be wiped, too#2018-06-1915:46marshallwhat security group is your dev instance using?#2018-06-1915:46marshallis it possible you opened up SG ingress to a specific IP (the original dev system) and now you have a new IP#2018-06-1915:47denikit’s open within the VPC#2018-06-1915:47denikhad no problems for months#2018-06-1915:48marshallyour prod db is on the same Datomic cloud system?#2018-06-1915:48marshalland it’s working fine?#2018-06-1915:49denikyes#2018-06-1915:50denikI guess the prod system hasn’t tried reconnecting yet#2018-06-1916:02gabriele(using ions) i have renamed a namespace and deployed a new version, now the lambda throws that it doesn't find the namespace#2018-06-1916:08denik@marshall any ideas?#2018-06-1916:14marshallThat's a known issue with Aws marketplace. 
You can unsubscribe and resubscribe to get it to go away or you can just choose the one that has the latest#2018-06-1916:15denikunsubscribing and resubscribing will leave the VPC alone?#2018-06-1916:15denikmeaning the system will run as usual after resubscribing#2018-06-1916:17marshallYes. When you launch, choose reuse existing storage#2018-06-1916:18denikand that’s the recommended way to update the system?#2018-06-1916:19marshallNo. You can upgrade without resubscribing#2018-06-1916:20marshallhttps://docs.datomic.com/cloud/operation/upgrading.html#2018-06-1916:20denikI’m looking to update the templates, not to upgrade the system#2018-06-1916:22marshallI'm not sure what you mean by update the templates#2018-06-1916:24deniknvmd, found the little copy and paste button#2018-06-1916:24marshallIt is automatically copied to clipboard when you click the icon#2018-06-1916:24marshallYea#2018-06-1916:25denik@marshall will our settings around the VPC be overwritten?#2018-06-1916:25marshallNot as long as you select the "reuse existing resources" box#2018-06-1916:26marshallNote that you need to delete the old stack first#2018-06-1916:26denikin the docs I only see Reuse existing storage#2018-06-1916:26denikthat includes other resources?#2018-06-1916:26marshallYes that one#2018-06-1916:50rhansenIs there a way to pass in pull selector to a query? Like:
(d/q '[:find (pull ?e ?fields)
       :in $ ?fields
       :where [?e :game/name _]]
     (d/db conn) [:db/id :game/name])
This fails when using the datomic client api#2018-06-1917:40okocimHello, has anyone been able to configure logging inside of ions so that the log entries show up in cloudwatch?#2018-06-1919:27stuarthalloway@okocim you should be able to use CloudWatch like any other API. That said, The ion API does not yet provide a way to put your logs in the same log stream that Datomic uses. That is planned for a future release.#2018-06-1919:30okocim@stuarthalloway Thanks Stu. I was trying to avoid using the API directly and instead make use of either the LambdaAppender or the System.out.println functionality, but I was having a hard time determining exactly which log stream this ends up in. Appreciate the heads up.#2018-06-1919:31stuarthalloway@okocim remember that ion code is not running in a Lambda#2018-06-1919:31stuarthallowaythe Lambda is just a proxy#2018-06-1919:35okocimgot it. Well that explains why I wasn't seeing what I expected.. I'm going to re-familiarize myself with the docs. Thanks so much 🙂#2018-06-1920:44johnjthe datomic cloud license fee in production is per node correct?#2018-06-1920:45johnjso it starts at ~$450/month#2018-06-1921:04Joe Lane@lockdown- that doesn’t sound right.#2018-06-1921:10johnj@lanejo01 which part? I didn't include the bastion but included the cost of the ec2 nodes#2018-06-1921:54Joe Lane2 i3.large instances for 30 days runs you ~$224.64 https://aws.amazon.com/ec2/pricing/on-demand/#2018-06-1921:55Joe LaneSo, roughly half.#2018-06-1921:55Joe LaneThats just for the ec2 instances themselves.#2018-06-1921:55Joe LaneI’ll check the license fee.#2018-06-1921:58Joe LaneOh, you know what, I think you’re right @lockdown-.#2018-06-1921:58Joe LaneUltimately I think it depends on the “per node” part of your question. Sorry for adding confusion.#2018-06-1922:22sparkofreasonAny guidance on running tests with the cloud client lib? 
I have a rather big hammer approach right now that involves creating/deleting DBs for tests that need clean data, and it's pretty slow.#2018-06-1923:25alexmillerIve done that and found it to be pretty fast#2018-06-2010:40stuarthalloway@dave.dixon I have found the db for (small group of) tests approach to be fine, esp when you break out as many pure functions as possible and test them with data in memory#2018-06-2010:42octahedrionI'm trying to retract a unique identity on an attribute like this https://docs.datomic.com/cloud/schema/schema-change.html#sec-5#2018-06-2010:42octahedrionbut I get Server Error#2018-06-2010:43octahedrionwhen I do this (d/transact conn {:tx-data [[:db/retract :thing/id :db/unique :db.unique/identity]]})#2018-06-2010:44octahedrionwhere :thing/id is the :db/ident of my attribute which has a unique identity#2018-06-2010:44octahedrion(I can do queries and other transactions on that connection)#2018-06-2012:51marshall@octo221 can you look in your system logs (cloudwatch logs for the datomic system) for the error? You can search “Exception” to narrow it down#2018-06-2012:51marshallhttps://docs.datomic.com/cloud/troubleshooting.html#using-cloudwatch-logs#2018-06-2013:59octahedrion@marshall "Cause": "nth not supported on this type: Db",
#2018-06-2014:00octahedrion"Type": "java.lang.UnsupportedOperationException",
"Message": "nth not supported on this type: Db",
"At": [
  "clojure.lang.RT",
  "nthFrom",
  "RT.java",
  983
]#2018-06-2014:01marshallcan you share the exact transaction you’re issuing?#2018-06-2014:01octahedrion(d/transact conn {:tx-data [[:db/retract :thing/id :db/unique :db.unique/identity]]})#2018-06-2014:02marshalloh, your attr is actually called :thing/id#2018-06-2014:02marshalli thought that was a placeholder 🙂#2018-06-2014:02octahedrionyes! it's a test#2018-06-2014:02marshallok. let me look into it#2018-06-2014:03octahedrionthat attribute was previously transacted like this: (d/transact conn {:tx-data [{:db/ident :thing/id :db/unique :db.unique/identity :db/cardinality :db.cardinality/one :db/valueType :db.type/keyword}]})#2018-06-2014:09marshall@octo221 yes, i’ve repro’d that behavior - looking into it now#2018-06-2014:14chris_johnsonDoes Lambda the Ultimate log telemetry anywhere? My issue is that I have an Ion (`items-by-type` from the ion-starter repo, in fact) that I can successfully invoke at the Lambda console, and when I try to invoke it elsewhere I get an exception that the second datasource in the datalog query (so, the type passed in to items-by-type) cannot be found#2018-06-2014:16chris_johnsonI’m trying to figure out if there’s a place I can see what the Lambda execution chain is “seeing” when I pass my invocation to it without having to build in the AWS SDK, set up CloudWatch logging API, etc. in the starter project just to put Received Lambda input object: {} in a log somewhere I can see it. 
🙂#2018-06-2014:34stuarthallowaywhere is "elsewhere"?#2018-06-2014:35chris_johnsonAn AWS AppSync query resolver#2018-06-2014:36stuarthallowayI think you want logging in the ion code, not in the lambda#2018-06-2014:36stuarthallowaycoming in the next release#2018-06-2014:37chris_johnsonIndeed, I do want logging in the ion code, but I was hoping to “get away” with finding the shape of the input data in the lambda so that I didn’t have to build the ion logging myself just for this PoC 😄#2018-06-2014:37stuarthallowaythat said, for now you could modify items-by-type to catch exceptions and return whatever additional info you want, and see the problem directly in the ion return#2018-06-2014:38stuarthallowayor even have a separate "debug" lambda that does that ^^#2018-06-2014:38chris_johnson(especially since I did read you statement that unified ion logging was coming soon)#2018-06-2014:40octahedrionthank you @marshall#2018-06-2015:56chris_johnsonOkay, so I’ve made quite a bit of progress and now I am getting back an anomaly report I don’t know what to do with at all, from what I believe to be a successful invocation of a Lambda ion:#2018-06-2015:56chris_johnson{
  "errorMessage": "No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.dispatcher/ToBbuf found for class: java.util.HashSet",
  "errorType": "datomic.ion.lambda.handler.exceptions.Incorrect",
  "stackTrace": [
    "datomic.ion.lambda.handler$throw_anomaly.invokeStatic(handler.clj:24)",
    "datomic.ion.lambda.handler$throw_anomaly.invoke(handler.clj:20)",
    "datomic.ion.lambda.handler.Handler.on_anomaly(handler.clj:139)",
    "datomic.ion.lambda.handler.Handler.handle_request(handler.clj:155)",
    "datomic.ion.lambda.handler$fn__4062$G__3998__4067.invoke(handler.clj:70)",
    "datomic.ion.lambda.handler$fn__4062$G__3997__4073.invoke(handler.clj:70)",
    "clojure.lang.Var.invoke(Var.java:396)",
    "datomic.ion.lambda.handler.Thunk.handleRequest(Thunk.java:35)"
  ]
}#2018-06-2015:57chris_johnsonI …am not sure what to do to debug this further. If I munge my input test data to fail, I get back the output of a catch statement in my ion that looks how I’d expect. If I change the test input data to be correct, or invoke the ion via AppSync at the GraphQL console there, I get …that.#2018-06-2015:58chris_johnsonthis is my ion code:#2018-06-2015:58chris_johnson(defn items-by-type-gql
"GraphQL Datasource input massager for items-by-type ion"
[{:keys [input]}]
(let [type (keyword (get (json/read-str input) "type"))]
(try
(items-by-type* (d/db (get-connection)) type)
(catch Exception e (str "Exception: ["
(.getMessage e)
"], for input: ["
(str input)
"], resolved type data: ["
type
"]")))))
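The shape problem this thread goes on to diagnose can be reproduced without any Datomic or AWS pieces. This standalone snippet (made-up data) shows why a raw Datalog result needs explicit JSON serialization before leaving the ion:

```clojure
;; Standalone illustration (made-up data): a Datalog query returns a set
;; of tuples, and printing that set yields edn, not JSON.
(def result #{["hat" "small" "red"]})

(pr-str result)
;; => "#{[\"hat\" \"small\" \"red\"]}"  ; edn -- a JSON parser rejects this

(pr-str (mapv vec result))
;; => "[[\"hat\" \"small\" \"red\"]]"   ; happens to be valid JSON for this
                                        ; all-string case
```

The ion-starter lambda entry points avoid this by running the result through a JSON writer before returning, which is the fix marshall points to later in the thread.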
#2018-06-2016:00chris_johnsonand this is the “successful” input as Lambda console test data (the input that causes the anomaly report above): {"type": "hat"}#2018-06-2016:01chris_johnsonwhile, at the REPL, this succeeds, meaning I get back an array of items:
(items-by-type-gql {:input "{\"type\":\"hat\"}"})#2018-06-2016:04chris_johnsonso for example if I execute the Lambda at the console with this test data:
{"type":"vibranium_bracelet"}#2018-06-2016:05chris_johnsonI get back:
"Exception: [processing rule: (q__303 ?sku ?size ?color ?featured), message: processing clause: [?e :inv/type ?type], message: Cannot resolve key: :vibranium_bracelet], for input: [{\"type\":\"vibranium_bracelet\"}], resolved type data: [:vibranium_bracelet]"
(note that the [] characters are inserted by my ham-fisted catch output and do not imply arrays here)#2018-06-2016:06chris_johnsonso in the case where the input is malformed or the type is not found, I get back a catch that appears to show the input being processed correctly, but when the input is correct, I get a No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.dispatcher/ToBbuf found for class: java.util.HashSet
Just to summarize and clarify. I will shut up now since I’ve eaten over a screen of everyone’s scrollback. 😇#2018-06-2016:31octahedrionis there a way to pull all reverse refs to an entity ? I can only seem to get the first one#2018-06-2016:36octahedrionI'm doing (pull ?n [* {:edge/_node [*]}]) where there are several edges having the same :edge/node#2018-06-2017:02marshall@chris_johnson your catch returns a str; it appears that the items-by-type* function returns a HashSet#2018-06-2017:03marshallnotice that the lambda in the example converts the items-by-type* return value to a json string#2018-06-2017:03marshallhttps://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L113#2018-06-2017:03marshallalso doc here https://docs.datomic.com/cloud/ions/ions-reference.html#lambda-code#2018-06-2017:06marshall@octo221 can you show the result of that pull?#2018-06-2017:14octahedrion[[{:db/id 30711558787040342, :edge/_node {:db/id 474989023200450 :edge/node {:db/id 30711558787040342}}}]]#2018-06-2017:17octahedrionyet (d/q '{:find [(count ?e)] :where [[?e :edge/node 30711558787040342]]} db)
=> [[6]]#2018-06-2017:17chris_johnson@marshall ah, yes! Thank you for being a second pair of eyes on The Obvious-in-Hindsight Mistake. 😅#2018-06-2017:29marshallnp#2018-06-2018:30chris_johnsonyyyyyeaaaaaahhhhh#2018-06-2019:54denikFYI @marshall re: the issue I had yesterday, I works now but I don’t know why#2018-06-2019:54denikis there a story for a webapp that uses datomic ion and websockets?#2018-06-2020:00stuarthalloway@denik not yet#2018-06-2020:00stuarthallowaywe certainly intend to continue adding integration points#2018-06-2113:30stuarthallowayvideo of Stuart Halloway's TriClojure talk introducing Datomic Ions https://www.youtube.com/watch?v=3BRO-Xb32Ic#2018-06-2123:40gdeer81I'm following the cloud tutorial https://docs.datomic.com/cloud/setting-up.html#2018-06-2123:40gdeer81and I've gotten all the way to the part where you try to connect to aws from the repl#2018-06-2123:41gdeer81but this is what I get user=> (def client (d/client cfg))
ExceptionInfo Unable to connect to system: #:cognitect.anomalies{:category :cognitect.anomalies/not-found, :message ": Name or service not known"} clojure.core/ex-info (core.clj:4739)#2018-06-2123:43gdeer81I ran the socks proxy script and it says that my endpoint to the bastion is good#2018-06-2123:45gdeer81my endpoint key in the cfg map looks exactly like it does in the cloudformation console#2018-06-2123:47chris_johnsonYour :system-name and :query-group values in the cfg map are both gdeer81?#2018-06-2123:48chris_johnsonI don’t know that this would cause that problem, that’s just how I have my working REPL/socks setup set up#2018-06-2123:49chris_johnsonquestion about Ions: if I run say items-by-type out of ion-starter at the REPL I get a nice pretty string with an array of arrays in it, suitable for such activities as deserializing into JSON#2018-06-2123:50chris_johnsonwhen I invoke that ion as a Lambda, however, I get a nice pretty string that is the first string wrapped in a hashset (e.g., "#{[[...stuff...]]}")#2018-06-2123:51chris_johnsonThis presents a problem if I want to then deserialize the result of a Lambda execution somewhere. Is that extra #{...} wrapper Lambda the Ultimate’s doing? Is there a way to “turn it off”?#2018-06-2123:52chris_johnson(apologies if this is answered in the video above, which I haven’t gotten a chance to watch. Stupid linear time!)#2018-06-2123:52gdeer81didn't realize you were asking me if the sys name and query were the same, thought you were asking me why#2018-06-2123:56marshallRight, which is gdeer81#2018-06-2123:56gdeer81then I realized I haven't posted the config map#2018-06-2123:56gdeer81{:server-type :ion,
:region "us-west-2",
:system "gdeer81",
:query-group "gdeer81",
:endpoint ""}#2018-06-2123:56chris_johnsonthe helpful Slack message is coming from inside the house! 😄#2018-06-2123:56marshallCan you run the aws cli command shown in the docs that lists system names #2018-06-2123:57marshallaws ec2 describe-instances --filters "Name=tag-key,Values=datomic:tx-group" "Name=instance-state-name,Values=running" --query 'Reservations[].Instances[].[Tags[?Key==`datomic:system`].Value]' --output text
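For readers skimming: the exchange below works out that this map is missing `:proxy-port`. A sketch of the working shape, using the values given in the thread (the `:endpoint` value was blanked in the original paste and stays blank here; 8182 is the port the docs' SOCKS script uses):

```clojure
;; Sketch of the client cfg the thread converges on: identical to the map
;; above plus :proxy-port. The :endpoint value was redacted in the
;; original paste and is left blank here too.
(def cfg
  {:server-type :ion
   :region      "us-west-2"
   :system      "gdeer81"
   :query-group "gdeer81"
   :endpoint    ""
   :proxy-port  8182})
```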
#2018-06-2123:58gdeer81{:tag :a, :attrs {:href "/cdn-cgi/l/email-protection", :class "__cf_email__", :data-cfemail "6e090f1c172e1e011e43011d"}, :content ("[email protected]")}#2018-06-2123:58marshallHmm#2018-06-2200:00gdeer81yeah it is a real head scratcher#2018-06-2200:03marshallTry switching server type to :cloud#2018-06-2200:03marshallOh wair#2018-06-2200:03marshallWait#2018-06-2200:03marshallYou are missing proxy port#2018-06-2200:03marshall:proxy-port <local-port for SSH tunnel to bastion>#2018-06-2200:03marshallLeave ion#2018-06-2200:04marshallAdd the proxy port entry#2018-06-2200:04gdeer81I couldn't ever figure out which port number to use so I just left it off and let it use the default port#2018-06-2200:04marshallDoesnt work that way ;)#2018-06-2200:06marshallIf you used the one in the docs its 8182#2018-06-2200:07marshallIf you didnt overload on the socks proxy script #2018-06-2200:07marshallBut you do need to supply it in the cfg map @gdeer81 #2018-06-2200:11gdeer81okay, I set it to 8182 in the config map and it threw an error in my repl and the terminal where the proxy script was running terminated, so I restarted the proxy script and reran (def client (d/client cfg))
and didn't get any errors, so it's working I guess? :man-shrugging::skin-tone-2:#2018-06-2200:12marshallYup#2018-06-2200:18gdeer81oh boy, now I can get to the fun part#2018-06-2200:20gdeer81I spoke too soon >_< user=> (d/create-database client {:db-name "movies"})
ExceptionInfo java.lang.NoClassDefFoundError: javax/xml/bind/DatatypeConverter clojure.core/ex-info (core.clj:4739)#2018-06-2200:25gdeer81probably just a java 9+ problem, I've seen something like this at work before#2018-06-2201:08gdeer81in case anyone stumbles upon this conversation in the future: I had to update my deps.edn to add the module with an alias I named "my-ops"
{:deps
 {com.datomic/client-cloud {:mvn/version "0.8.54"}}
 :aliases
 {:my-ops {:jvm-opts ["--add-modules" "java.xml.bind"]}}}
Then I started up clojure with the command line flag -O to use it clj -O:my-ops
and now I am able to do the thing I was trying to do for like two days now user=> (d/create-database client {:db-name "movies"})
true
😁#2018-06-2201:14gdeer81Thanks @marshall for your assistance with that config mess. I'll try to find where I read that the proxy-port key defaults to something if not provided. not sure if I ran into bad docs or if I grossly misread something 🙃#2018-06-2201:37marshallNp. Glad to help#2018-06-2214:41sekaois it a bad idea to serve your app’s root html from an ion? i’m guessing the warmup time makes that impractical…#2018-06-2214:45stuarthalloway@sekao it depends on usage patterns. Is your app in continuous use, and how many 9s of low latency response do you need?#2018-06-2214:46stuarthalloway@sekao and if you are playing the long game, don't sweat the specifics of the current integration too much, we can offer other integrations besides the Lambda proxy in the future#2018-06-2214:48chris_johnsonI’m going to repeat a question from yesterday because I think it got lost in the surrounding discussion, but if it doesn’t get any bites this time I won’t bother the channel again. 😇#2018-06-2214:48chris_johnsonquestion about Ions: if I run say items-by-type out of ion-starter at the REPL I get a nice pretty string with an array of arrays in it, suitable for such activities as deserializing into JSON
when I invoke that ion as a Lambda, however, I get a nice pretty string that is the first string wrapped in a hashset (e.g., "#{[[...stuff...]]}") (edited)
This presents a problem if I want to then deserialize the result of a Lambda execution somewhere. Is that extra #{...} wrapper Lambda the Ultimate’s doing? Is there a way to “turn it off”?
(apologies if this is answered in the video above, which I haven’t gotten a chance to watch. Stupid linear time!)#2018-06-2214:50chris_johnsonThe use case here is that I have an example almost completely done showing how to set up AppSync and Cognito to work with Ions and I want to show the difference between doing the transformation of data shape between GraphQL and Lambda in a net-new ion vs. in the AppSync resolver VTL template (tl; dr: do you want to use Velocity when you have Clojure ready to hand, for data transform? You do not!)#2018-06-2214:51chris_johnsonbut the AppSync invocation of the items-by-type Lambda chokes on the #{ ... }#2018-06-2215:03stuarthalloway@chris_johnson The answer is not immediately obvious to me, but the problem is almost certainly not in the Lambda. Why can't AppSync handle the set notation?#2018-06-2215:03chris_johnsonbecause it’s not JSON#2018-06-2215:04stuarthallowayitems-by-type does not return JSON#2018-06-2215:04stuarthallowayif you want it to do so, modify it to print JSON instead of printing edn#2018-06-2215:05stuarthallowayand then I hope you will be good to go! Are you going to publish the example?#2018-06-2215:06chris_johnsonOkay, that makes sense. I was hoping to be able to take just the raw edn string value and process it in VTL on the AppSync side, not because it’s a thing you’d want to do often but to demonstrate dropping AppSync “onto” an existing ion, but AppSync tries to deserialize it before handing it off#2018-06-2215:06chris_johnsonyeah, that gives me the information I need. 
I do already have an items-by-type-gql which returns JSON but it’s also doing the output data shaping.#2018-06-2215:07stuarthallowayany name with more than two hyphens is doing too much 🙂#2018-06-2215:08chris_johnsonfair#2018-06-2215:08stuarthallowayI-do-it-all-the-time-myself#2018-06-2215:08chris_johnsonI do intend to publish this example just as soon as I get these last couple bits polished up and finish finding or creating enough cool memes to put in the README#2018-06-2215:09chris_johnsonI will also go look again and see if there’s a way to make the AppSync resolver expect a raw string instead of something it can parse into JSON, which would be delightful.#2018-06-2215:10stuarthallowayonce you are out in the broader AWS world you should probably be in JSON anyway 😞#2018-06-2215:10chris_johnsonI mean, they expose VTL for you to template the output! Why would they give you a possibly-Turing-complete tool for shaping the output data but insist on pushing it through JSON.readString first? 🤔#2018-06-2216:09madstap(Cross post from #clojure)
I have what I think is a backup of a datomic database. It is a directory with the contents ./data/db/datomic.h2.db. Does this seem right? How do I translate that to a file:///home/... url? I am getting a clojure.lang.ExceptionInfo: :restore/no-roots No restore points available at error.#2018-06-2216:22favilaThat's not a backup of a datomic db#2018-06-2216:29madstapOh, ok. I'll have to dig up the old computer I had the backups running on. Thanks!#2018-06-2218:03favilathat's a dev h2 database#2018-06-2218:03favilai.e. what the datomic dev transactor uses to store its blobs#2018-06-2216:52souenzzoCan I (or have plans) to generate a "amazon event that triggers a lambda" on every transaction on datomicIons?#2018-06-2218:18eraserhd[(identity ?list) [[?e ?a "" _ true] ...]] seems to match any value. You can't use literals in this way, can you?#2018-06-2218:42rapskalianMy ion with API Gateway integration is responding with the body base64 encoded...what might I be doing incorrectly? 🤔#2018-06-2219:08stuarthalloway@cjsauer in the API Gateway test UI, or as viewed from e.g. curl?#2018-06-2219:09rapskalian@stuarthalloway it's within the API Gateway UI, but I'm also seeing it happen within the Lambda console using a sample API Gateway event. My handler is returning a string in the response :body field.#2018-06-2219:10stuarthalloway@cjsauer that is by design. You must make a gateway level choice about how to do this, and then Gateway will fix it on the way through#2018-06-2219:11stuarthallowaythe only way to see it the way you want is to consume all the way from the outside edge, e.g. from your browser or curl#2018-06-2219:13rapskalian@stuarthalloway hm...even hitting it with curl returns a base64 encoded string. I'm testing at the REPL as well by doing (my-handler {:input (slurp "fixture.json")}) and am also seeing the encoding.#2018-06-2219:14stuarthallowaymaybe you left out "Choose Add Binary Media Type, and add the */* type, then Save Changes." 
from https://docs.datomic.com/cloud/ions/ions-tutorial.html#sec-5-3#2018-06-2219:15stuarthallowayI left that out about 100 times when testing it ^^#2018-06-2219:15rapskalian@stuarthalloway DOH...that was it!! Working now. Thank you! 🍻#2018-06-2222:14mtbkappI don't have any experience running applications that require HIPAA compliance. I'm curious if anyone has run Datomic in such an environment and if it would be possible to store HIPAA-protected data in Datomic Cloud.#2018-06-2223:15adammillerhas anyone successfully used figwheel main along with datomic cloud client? It seems there are some dependency issues on the version of jetty that each one requires.
if I try to use the version of jetty that datomic cloud api requires (9.3.7.v2016011) then figwheel can't connect...looks like it gets an error trying to create the websocket. If I use the version figwheel main relies on which looks to be (9.2.21.v20170120) then of course datomic cloud api gets errors trying to connect.#2018-06-2223:16adammillerjetty seems to be the one library that is constantly frustrating when it comes to dependency management in clojure#2018-06-2223:22adammilleri'm looking at that now#2018-06-2223:40adammillerwell but i'm in dev mode working on my app that utilizes the datomic client api....that's the issue. Figured it out with Bruce's help in the figwheel-main channel. If anyone else comes across the issue you just need to exclude the websocket libs from figwheel main then add the version matching the jetty client (9.3) in your top level deps.#2018-06-2301:29alexmillermaybe https://docs.datomic.com/cloud/troubleshooting.html#dependency-conflict ?#2018-06-2313:25adammillerYeah, I think those docs are a little out of date with the latest cloud api library. It seems the fix if you are using Jetty web server is to leave out those exclusions but require the jetty version used in cloud api in your top level deps.#2018-06-2313:26adammillerOf course that doesn't fix issue with figwheel main....to fix that you need to exclude the jetty websocket libs from figwheel and include the versions matching the jetty version included in cloud api in your top level deps.#2018-06-2318:50souenzzo- I'm using com.datomic/datomic-pro "0.9.5656"
There is a datomic function for example :run-once
when I do (d/with (d/db conn) [[:run-once]]) it results in, for example, :tx-data [#datom [55 33 :run-once 55324 true]], but when I transact, it does not do the same operation.#2018-06-2401:36Simon O.Am new to Datomic but can someone please clarify these statements: Datomic is read scalable but not write scalable. What does that mean? And why it is not best for a website like twitter? And Also from Professional Clojure book: It took inspiration from multiple sources, but its basic goal is to be a drop-in replacement for the main cases where you would otherwise use a relational database as a transactional store. Stuart Halloway says that it’s targeted at the ninety-six percent use cases of relational databases, *leaving off the top four percent of high write-volume users like Netflix, Facebook, etc. * ?#2018-06-2402:10johnj@simon from the docs:#2018-06-2402:10johnjDatomic is a single-writer system. A single thread in a single process is responsible for writing transactions.#2018-06-2402:11johnjit serializes all writes#2018-06-2402:11johnjtwitter needs to support massive writes#2018-06-2406:05mg@simon I can perhaps try to clarify the second statement, having written it originally ;)#2018-06-2406:09mgAlthough I was mostly paraphrasing various claims from the Datomic team. The essential idea is that it's for the majority of business systems that would traditionally be powered by a transactional SQL database. But it's not designed for extremes of scale in terms of total data volume or write throughput of the kind that top consumer-facing sites like Twitter face#2018-06-2412:37drewverleeThough, i doubt twitter started needing anything near the write throughput they have now, or would have gotten there if they did 🙂. Somewhat off topic but i just finished re-reading the first chapter of designing data intensive applications, which talks about twitter and very soon after gives this advice:
> Approaches for Coping with Load. Now that we have discussed the parameters for describing load and metrics for measuring performance, we can start discussing scalability in earnest: how do we maintain good performance even when our load parameters increase by some amount?
> An architecture that is appropriate for one level of load is unlikely to cope with 10 times that load. If you are working on a fast-growing service, it is therefore likely that you will need to rethink your architecture on every order of magnitude load increase—or perhaps even more often than that.
The question then becomes, once you outgrow (your perf starts to dominate the complexity of your system) something, how easy is it to transition? I feel like the clojure community at large (though i can’t speak to datomic) approaches these issues in a very healthy way due to the focus on composability and immutability.#2018-06-2721:03dustingetzYou give me twitter sized bags of scale-money and i will scale Datomic Cloud to twitter write throughput by sharding transactors, without sacrificing query expressiveness 🙂#2018-06-2412:44drewverleeSomewhat tangentially related. Does anyone have any insight into whether https://github.com/denistakeda/re-posh with data-sync would play nicely with datomic ions? I’m also just generally searching for more insight into data-sync (a component of one implementation of re-posh). I need to re-consult the docs but it seems like a fairly complex topic so i’m curious if anyone has any insights.#2018-06-2413:11mgSure, the question should be "do I have the scale of something like Twitter/Facebook/etc", not "might I at some point have that scale?"#2018-06-2413:55drewverleeYep. I’ve been dealing with a system that was pre-optimized and it’s hard to move it towards our current goals. I think it’s hard to understand the trade-offs, which is why it’s nice when your ecosystem supports decoupling.
The creators had good intentions, but they felt compelled to think ahead because they had felt the pain of going at it from the other direction#2018-06-2417:53sparkofreasonHas anyone successfully set up a VPC endpoint as described in https://docs.datomic.com/cloud/operation/client-applications.html#create-endpoint? It fails for me with "No export named datomic-demo-VpcEndpointServiceId found. Rollback requested by user."#2018-06-2422:08folconHey Everyone, I’ve been going through the ions tutorial and I’m currently stuck. Granted, I don’t have a lot of familiarity with tools.deps.alpha. However I’m currently getting access denied on calling clojure or clj at the command line:
Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.datomic:ion:pom:0.9.7 from/to datomic-cloud (): Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied
...
Anyone seen this before?#2018-06-2422:09alexmillerDo you have aws creds set?#2018-06-2422:10alexmillerEnd vars for keys or profile etc?#2018-06-2422:15folconI do#2018-06-2422:15folconHmm, give me a sec#2018-06-2422:16folconyep#2018-06-2422:16folconsocks proxy is running#2018-06-2422:16folconand env vars are set in the shell I’m currently in#2018-06-2422:21folconJust tried running aws configure. For the avoidance of doubt, the keys currently set are:
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export REGION=
export AWS_REGION=
export DATOMIC_SYSTEM=
export DATOMIC_REGION=
export DATOMIC_SOCKS_PORT=
#2018-06-2422:23folconI’m trying to run either clojure or clj, which I installed via homebrew.#2018-06-2422:33alexmillerTry setting AWS_REGION too#2018-06-2422:38folconDid both REGION AND AWS_REGION and no luck#2018-06-2422:39folconok so the exception is an access denied.
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied;
Should I be authenticating something else?#2018-06-2422:41folconBastion is working, as I can successfully call: curl -x .$DATOMIC_SYSTEM.$#2018-06-2422:57folconI worked it out. So going through the docs you end up making two users, a datomic user account and a secondary account which is implied for development. I was under the assumption that because I was doing development, I should be using the secondary account with fewer permissions. When I swapped to the datomic account, it worked perfectly.#2018-06-2422:58folconThanks for the help @alexmiller!#2018-06-2510:14maleghastI have a couple of n00b questions about datomic cloud...
1). Is there any way to develop an app that uses Datomic, offline, that will be deployable without change(s) into an environment that uses Datomic Cloud, i.e. is there any way to create an offline version of Datomic Cloud with say Docker and things like DynamoLocal and FakeS3?
2). Has anyone out there got a deployment model where their apps are deployed in containers into K8s / DC-OS (or similar) and they connect to Datomic Cloud as an off-cluster resource, and if so, is it a trivial setup or is it fraught with peril..?#2018-06-2517:18nilpunningHello! I have two points:
1. Thank you for Datomic Cloud, it is a massive achievement.
2. I think there are serious flaws on this page: https://docs.datomic.com/cloud/releases.html
The CloudFormation templates listed for Production are not in fact the Production templates. They are the Production Compute nested templates. The parent Production template link is not listed on the release page. I was ultimately able to find the Production template here: https://aws.amazon.com/marketplace/pp/prodview-otb76awcrb7aa. I ran into this confusion when trying to upgrade a Solo deployment to a Production deployment while following these directions: https://docs.datomic.com/cloud/operation/upgrading.html#upgrading-solo-to-production. You may want to rename the Production links on the Releases page to Compute and add the correct Production template links.#2018-06-2517:25marshall@nilpunning As indicated here: https://docs.datomic.com/cloud/operation/upgrading.html#first-upgrade the preferred method for maintaining an ongoing Datomic Cloud system is with separate compute and storage templates.
The master template that launches 2 nested templates is a necessity of AWS Marketplace requiring ‘single click’ delivery#2018-06-2517:26marshallFor most ongoing systems, we would expect (and recommend) that they be run as separate stacks#2018-06-2517:26nilpunningI see that now, thank you for pointing that out. It says that also on the Why Multiple Stacks section.#2018-06-2517:26marshall👍#2018-06-2518:48folconJust wondering if there’s any info around dealing with other aws resources from within ions such as s3? I’m trying to work out how I should be handling credentials, Stu mentions the systems-manager-paramstore in the ion-event-example, but looking at it, I’m not sure I should be storing the aws access and secret keys there?
Or is there some other way to do it?#2018-06-2519:06stuarthalloway@folcon give your instance role the permissions it needs to do the things you want#2018-06-2519:06stuarthallowayno AWS keys!#2018-06-2519:08folconok, so I’m doing what you described in the video, and building this in the repl, and I’m using amazonica.aws.s3, because as far as I’m aware, ions itself has no way of giving me s3. And before I manually handed the credentials in, I was just getting access denied errors.#2018-06-2519:08folconIs it the case that this will fail in the repl, but will work in the lambda?#2018-06-2519:16folconOk, completely misunderstood, just googled instance role permissions and found some docs, no idea how this works, so time to start reading 😃#2018-06-2519:18folconNot sure if there’s any info about how to use this in the repl though, would a sensible idea be to use envvars in the repl?#2018-06-2519:28stuarthalloway@folcon sure, at the REPL you can do whatever you feel safe doing on your local machine#2018-06-2519:29stuarthalloway@folcon we prolly need more docs here 🙂#2018-06-2519:30folconOk, and I expect that once I’ve added more the role permissions to the lambda, then they’ll appear as envvars as well? Or perhaps I need to poke the context?#2018-06-2519:32folconThanks for your help though, appreciate it, I’ve been trying ions out since yesterday, hit a few stumbling blocks where the docs were confusing, but all in all it’s been a fairly good experience up to this point 😃. Really appreciate what you’ve achieved!#2018-06-2522:00sparkofreason@maleghast
1. I don't believe there is any local solution yet. Hopefully this is forthcoming soon.
2. I've successfully connected to Datomic Cloud from containers running under ECS, but in the "apps" VPC created by the Cloud install. Also did the same setting up a VPC endpoint service to access Datomic Cloud from the VPC running my EKS cluster. Basically no problems. Note that you must be running the production topology to use the VPC endpoint service.#2018-06-2522:17maleghast@dave.dixon - Thanks very much; this is very encouraging!#2018-06-2600:39folconIf we have suggestions for where the docs are puzzling is there a process for that?#2018-06-2601:53folconFor eg:
datomic.ion.lambda.handler.exceptions.Incorrect: No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.dispatcher/ToBbuf found for class: clojure.lang.LazySeq
Documenting under troubleshooting that this means please return a string?#2018-06-2613:18marshall@folcon https://docs.datomic.com/cloud/ions/ions-reference.html#signatures however I will also add that exception to the troubleshooting section#2018-06-2613:24folcon@marshall Thanks, I only noticed that later, it was just really puzzling in the moment as I had no idea where the problem was. This wasn’t an issue I had surfaced when I was debugging it in my repl.#2018-06-2613:24folconAlthough equally it might be that I’m not clear on how I’m supposed to be developing lambda expressions in the repl.
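The `:->bbuf` error above comes from returning a non-string body from a web ion. A minimal sketch of the fix (the handler name is hypothetical; the ring-style shape follows the web-ion docs linked above):

```clojure
;; A web ion handler must return a ring-style map whose :body is a
;; string (or InputStream). Returning a lazy seq -- e.g. the output of
;; map -- triggers the "No implementation of method: :->bbuf" exception.
(require '[clojure.string :as str])

(defn items-handler                ; hypothetical example handler
  [request]
  (let [lines (map str (range 3))] ; a LazySeq, not a String
    {:status  200
     :headers {"Content-Type" "text/plain"}
     ;; realize the seq into one string before returning:
     :body    (str/join "\n" lines)}))
```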
I’m still trying to understand how the instance role permissions that stu was referring to translate into the difference between working in local development vs running in the cloud. I.e.: do I test that I’m running in dev and then check envvars, vs doing something else when I detect I’m in a lambda?
At the moment my workaround is just using envvars and setting them in the repl, and manually doing the same for the lambda env vars. That way I can maintain config values outside my code.#2018-06-2615:43tomcIf in my local/dev datomic system I add an attribute of type :db.type/string but meant to use :db.type/ref, and I haven't used that attribute anywhere else yet is the only way to fix the attribute to restore to an earlier backup?#2018-06-2615:56marshall@tomc you could rename the erroneous one and create a new attribute of the correct type with that name instead#2018-06-2616:04tomc@marshallThanks, that solved my problem.#2018-06-2616:21lenWe have a job system that uses datomic and we want to add callbacks for when the job completes or fails - how would I save those functions to an entity in datomic ?#2018-06-2616:30lenI guess I could have a multimethod and then save a keyword to dispatch on in datomic#2018-06-2617:10Spencer AppleHello, is it required to update the schema when you want to expand your enum? It works locally for development, but am not sure what will happen on prod. E.G. I have my attribute :person/eye-color with the possible values: {:db/ident :person.eye-color/blue} and {:db/ident :person.eye-color/brown}. If I want to add {:db/ident :person.eye-color/red}, do I need to update the schema?#2018-06-2617:12marshallyou would need to transact the new enum value#2018-06-2617:12marshallwhich is just adding a new entity#2018-06-2617:12marshallDatomic doesn’t restrict the “types” or “targets” of reference attributes#2018-06-2617:13marshallso your :person/eye-color reference attribute could be a ref to any other entity#2018-06-2617:13marshallby convention (and application enforcement), you constrain it to the eye color entities you’ve created#2018-06-2617:14marshall@splayemu ^ does that answer your question?#2018-06-2617:25Spencer Appleah @marshall so the enum entities live in the user partition then? So if your application code transacts a new entity:
{:db/id ....
:person/eye-color {:db/ident :person.eye-color/red}}
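Filled out, the transaction above might look like the following sketch (using the client API; `conn` and `person-eid` are assumptions standing in for the elided `:db/id`, not values from the thread):

```clojure
;; Adding a new enum value is just transacting an entity with a
;; :db/ident -- no schema change is required. Referencing the same
;; ident later resolves to that same entity rather than creating a
;; duplicate.
(require '[datomic.client.api :as d])

;; conn is an open client connection (assumed)
(d/transact conn {:tx-data [{:db/ident :person.eye-color/red}]})

;; point a person entity at the enum by ident:
(d/transact conn {:tx-data [{:db/id            person-eid ; hypothetical eid
                             :person/eye-color :person.eye-color/red}]})
```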
Everything will work fine because datomic will also create the new entity for {:db/ident :person.eye-color/red}. Then I assume the next time you would create a new {:db/ident :person.eye-color/red}, instead it references the other one?#2018-06-2617:45marshallcorrect#2018-06-2618:16Spencer Applethank you!#2018-06-2618:04currentoorHi, I used on-prem datomic in the past and now for a new project I really want to use datomic cloud but I need to keep several UIs on separate devices in sync, so I need something like txReportQueue. I know datomic cloud doesn't have that but is there a way to build it myself? Any suggestions?#2018-06-2619:47alexkI wrote a datomic->mongo streamer that worked by keeping track of the basis-t last synced. Every time it polled datomic & found it was behind, it would use the tx log to get a list of changed entities, filter down to just the top-level ones, then convert those to mongo format and send them to mongo.#2018-06-2619:53currentoorhow often did you poll?#2018-06-2619:54alexkevery 10 seconds#2018-06-2619:54currentoori need something real time#2018-06-2619:56currentoorin my case though, i'd also need to have a persistent connection between the datomic could instances and my application server#2018-06-2619:56currentooror constant pings for novelty, which might not be possible given the lambda architecture#2018-06-2619:57currentoori'm thinking i could have something in the datomic cloud instance that notices novelty and pushes to a queue outside datomic, which my app server can subscribe to#2018-06-2619:59alexkthat sounds ok, it would be pretty lightweight. 
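The poll-the-log approach alexk describes can be sketched with the client API's `tx-range` (a sketch under assumptions: `conn` is an open connection and `handle-tx!` is a hypothetical callback; this is not code from the thread):

```clojure
;; Track the last basis-t we have synced; on each poll, pull everything
;; the log recorded since then and hand each transaction to a callback.
(require '[datomic.client.api :as d])

(defn poll-novelty!
  "Returns the new basis-t after processing any transactions since last-t."
  [conn last-t handle-tx!]
  (let [current-t (:t (d/db conn))]
    (when (> current-t last-t)
      (doseq [tx (d/tx-range conn {:start (inc last-t)
                                   :end   (inc current-t)})]
        ;; tx is a map with :t and :data (the datoms of that transaction)
        (handle-tx! tx)))
    (max current-t last-t)))

;; e.g. poll every 10 seconds, carrying the basis-t forward:
;; (loop [t (:t (d/db conn))]
;;   (Thread/sleep 10000)
;;   (recur (poll-novelty! conn t handle-tx!)))
```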
I don’t know of any decent way to get push-style changes from datomic but I wish there was#2018-06-2619:59currentooryeah i might just have to stick to on-prem because of this :slightly_frowning_face:#2018-06-2620:00currentoortxReportQueue was a godsend#2018-06-2620:13alexkhttps://forum.datomic.com/t/tx-report-queue/316 😞#2018-06-2620:48currentooryeah :slightly_frowning_face: indeed, txReportQueue was such a beautiful feat of engineering#2018-06-2620:02currentoorAre there any plans or desires to add something like txReportQueue to datomic cloud?#2018-06-2702:30donaldballI’m sure this is something most have in their bag of tricks already… what’s a good fn to produce a txn that will transform a given db into an older version of itself, assuming their schemae are unchanged, ignoring txn datoms?#2018-06-2706:40val_waeselynckMaybe this? https://stackoverflow.com/a/25389808/2875803#2018-06-2718:25donaldballYeah, have seen that before, but this operates on a tx, not the delta between two dbs, and has that obviously loud WARNING#2018-06-2718:50val_waeselynckI'm afraid there's no surefire way, this is as good as it gets... what's wrong with asOf in your use case?#2018-06-2718:54donaldballI’m preparing a moderately large data change, and if there’s an unforeseen problem, I’d like to be able to restore the db to a given t. I can, of course, use the datomic restore facility, but as a matter of both principle and practice, it seems like applying a txn to restore to a given state (of the user partition, at least) is more correct.#2018-06-2721:00donaldball(defn restore-to
  [db t]
  (let [db' (d/as-of db t)
        user-vs-by-ea (reduce (fn [accum [e a v]]
                                (cond-> accum
                                  ;; only look at entities in the user partition
                                  (= 4 (d/part e))
                                  (update [e a] (fnil conj #{}) v)))
                              {}
                              (seq (d/datoms db :eavt)))]
    (into []
          (mapcat (fn [[[e a] vs]]
                    ;; vs are what are currently true
                    ;; vs' are what we want to be true
                    (let [vs' (set (map :v (seq (d/datoms db' :eavt e a))))]
                      (concat
                       (for [v (set/difference vs' vs)]
                         [:db/add e a v])
                       (for [v (set/difference vs vs')]
                         [:db/retract e a v])))))
          user-vs-by-ea)))
seems to be okay?#2018-06-2808:05val_waeselynckSeems much less foolproof than the SO link I gave you IMHO. This may fail for bytes values due to their equality semantics, and forces you to load the whole dataset in memory instead of just the diff#2018-06-2712:27jwkoelewijnHI there, is this the place to ask questions about datomic ions as well? If so: how can i do any logging? according to aws docs everything that is println’d should end up in cloudwatch logs, however, this does not seem to be the case for me 😞 does anyone have any experience with this?#2018-06-2713:19stuarthallowayhi @jwkoelewijn ! Ion code does not run inside a Lambda, so the AWS docs are not relevant here.#2018-06-2713:19jwkoelewijnaaah, that explains#2018-06-2713:20stuarthallowayyou can use any ordinary AWS logging you like, BUT:#2018-06-2713:20stuarthallowayyou have to add a policy authorizing your nodes to do it#2018-06-2713:21stuarthallowaywe can and will provide something that is easier and more integrated, coming soon#2018-06-2713:21jwkoelewijnhmmmm…seems a bit much for dev/debug purposes#2018-06-2713:21jwkoelewijnsweet, i’ll wait for that then 🙂#2018-06-2713:21stuarthallowayagreed, working on the better thing right now#2018-06-2713:21jwkoelewijnquite impressed with the rest of ions i have to say#2018-06-2713:22jwkoelewijnthanks for clarifying!#2018-06-2717:06stuarthalloway@currentoor we certainly understand the value of tx-report-queue, and will probably do something similar (or better) in Cloud at some point. 
That said, you should plan and build your app around what exists today.#2018-06-2717:07currentoorglad to hear this from the horse's mouth!#2018-06-2717:07currentoorthank you, for the amazing tech and answering my question simple_smile#2018-06-2717:08stuarthallowaydepending on your specific use, you might be able to fake the tx-report-queue with polling or with a transaction function#2018-06-2717:10currentoorpolling seems like it would have a lot of lag#2018-06-2717:10currentoorthe transaction function approach, pushing to a queue, could you elaborate on that?#2018-06-2717:11currentoorthat would push all transactions right? vs only the ones that succeeded?#2018-06-2717:11currentoorand what's an example of something the transaction function could push to?#2018-06-2717:39stuarthallowaythe tx fn idea is very tricky, I would shy away from it#2018-06-2717:39stuarthallowaysince fns run in-tx before the tx succeeds#2018-06-2717:39stuarthallowayin fact forget I ever said anything about tx-fns#2018-06-3011:22eoliphant@currentoor Onyx supports datomic cloud and might address your need to ‘stream off’ changes#2018-06-3017:18currentoorinteresting, seems complicated though, i've never used anything with that many buzz words in it 😅#2018-06-2718:06sekaoi have noticed that if you try to return a large amount of data in the response body of a web ion (say, 1MB+) it fails with a 504 error: “Timeout connecting to cluster node”. is this configurable somewhere?#2018-06-2718:10stuarthalloway@sekao have you tried the timeout per the ns doc? https://docs.datomic.com/client-api/datomic.client.api.html#2018-06-2718:11sekaoactually the route in question is not making any calls to the db. it’s just returning a big chunk of binary data. it definitely is size-related. 
if i make the data smaller, it generally succeeds.#2018-06-2718:14stuarthallowaymy bad @sekao, how about the lambda timeout at https://docs.datomic.com/cloud/ions/ions-reference.html#lambda-configuration#2018-06-2718:17sekaoi’ll try that. however, it’s definitely failing before 60 seconds, which appears to be the default lambda timeout. i just tried and my route fails after about 5-15 seconds#2018-06-2718:22eraserhdI thought I saw some papers about datoms - the 5-tuple structure of facts - but now I can't find them. Does anyone have the link (if I'm not imagining them)?#2018-06-2718:54sekaodoesn’t look like the lambda timeout is the issue — i doubled it to 120 but same result. maybe API gateway has its own timeouts….#2018-06-2719:55okocimis the :datomic.ion.edn/api-gateway-request considered a stable part of the api when the lambda proxies via the :api-gateway/proxy integration?
I’d like to pull out some of the cognito identity information that is in the json under that key, but there is no mention of this data in the reference docs.#2018-06-2720:03stuarthalloway@okocim not stable! Next release will put this information in a documented place#2018-06-2720:03viestiI haven't had time to play with Ions yet, but was wondering about the plain HTTP integration that Api Gateway offers. That would require an Ion to be an http server, instead of a function that serves a request, right? #2018-06-2720:07okocim@viesti, no. Your ion is just an (fn []) that receives a request with a ring-like parameter and it should return a ring-like response map. The details are documented under “Developing Web Services” here: https://docs.datomic.com/cloud/ions/ions-reference.html.
All of the http server machinery is taken care of by api gateway#2018-06-2720:08stuarthalloway@folcon and others have asked questions about how to authorize ion code to use other AWS services. We have added a new section to the docs covering this: https://docs.datomic.com/cloud/operation/access-control.html#authorize-ions#2018-06-2720:09okocim@stuarthalloway ha 🙂, Thanks Stu. I just spent the past couple of hours poking at that and finally figured that out. I’ll take a look and see how I did 😅#2018-06-2720:10viesti@okocim yeah, but after Api Gateway there's proxy integration to a proxy Lambda, before hitting Ion, if I understood correctly :) #2018-06-2720:11viestiwhich gives glue possibility to whatever a Lambda can be used for#2018-06-2720:13okocim@viesti, yes that’s right, and the code you write executes somewhere inside your datomic cluster, not in the proxy lambda. I don’t know the details of how that all works, but your code will have the same permissions that are allowed by the instance profile that the Ec2 instance inside the datomic cluster has. Or so it would seem.#2018-06-2720:13viestiwas just thinking about going directly to Ion from Api Gateway with http integration, but what Ion takes in is probably not plain http. Was thinking about long lived connections, beyond Lambda timeout. #2018-06-2720:15viestishould play with Ions, not just contenplate on playing :) #2018-06-2720:16stuarthalloway@viesti we certainly contemplate additional integration points, with minimal (or no) code changes from those that exist today#2018-06-2720:31viestiyeah, I think that Ions really make the serverless Datomic story become true, just got hungry on thinking about the "run your code" story for other AWS services that sit in front of the app/fn :). 
I might be chasing a non-existent problem though; if I recall right, Api Gateway can now be exposed to a private vpc only as well, which allows making apps for companies that have extended on-premise networks to aws via tunneling+firewalls. #2018-06-2721:23kvltCurious - If I were to have my app transact schema to create new attributes during runtime, how much of an antipattern is that?#2018-06-2721:26mgNot an anti-pattern at all#2018-06-2721:28kvltInteresting. Thanks#2018-06-2721:28mgdon’t go crazy with it or anything, but that kind of flexibility is what Datomic is for#2018-06-2721:29kvltI'm thinking through ways that I could do so without resorting to that. I do enjoy being able to look at my schema.edn and having a pretty good idea of what is there#2018-06-2800:35chris_johnsonI guess the question in my mind is “what schema would you want to transact programmatically instead of by direct human direction?”#2018-06-2800:35chris_johnsonbut I also think that if you do have a use case for such a thing, there’s no reason the fn doing the schema updates couldn’t also read in your schema.edn, add to it, and spit it back out to S3#2018-06-2801:06olivergeorgeNot sure if this is a sensible question but I'm curious if there are recommended ways to maintain additional indexes alongside datomic - in particular for spatial queries. 
#2018-06-2810:37val_waeselynckthanks to change detection being so easy with the Log API#2018-06-2810:48mgrbytePossible error in the on-prem docs: Noticed that cloudwatch:PutMetricDataBatch is not a valid AWS action when reviewing our setup (https://docs.datomic.com/on-prem/storage.html#cloudwatch-metrics)#2018-06-2811:13sekaocan anyone confirm that the 504 error Timeout connecting to cluster node originates from datomic/ion glue code somewhere? i assume it’s not some kind of generic AWS error. i’ve ruled out cloudfront and lambda timeouts, but i’m an AWS noob and there’s probably a dozen other timeouts i don’t know about 😛#2018-06-2815:32tony.kayIs anyone aware of a library that can convert a sequence of transactions on a mocked connection (i.e. datomock) into a single final transact? I know it isn’t that hard to code, but no need if someone’s already got it working and tested.#2018-06-2815:43val_waeselynckThis may be hard to code, once you take transaction functions, conflicting values, reified transactions and upserts into account#2018-06-2815:44val_waeselynckMy best shot at it would be a transaction fn which applies the supplied tx requests via db.with(), looks at the diffs and merges them into :db/add :db/retract#2018-06-2815:44val_waeselynckAgain, far from trivial IMO#2018-06-2815:45tony.kayDatomock will give me the ability to snapshot the starting point, run anything through a mocked connection to an ending point. At that point I should just have a sequence of datoms in “history” to apply, right? And I can detect which are “new” by checking for their existence from the starting point, and remap them back to tempids#2018-06-2815:46tony.kayand run that whole thing through a single transact#2018-06-2815:46tony.kaynot atomic by any means#2018-06-2815:46tony.kayso writes after reads are a problem if I had concurrent access to stuff during the “block”#2018-06-2815:51tony.kayAm I missing something else? 
I mean: transaction functions just result in datoms…admittedly they are “atomic”, so I lose that atomicity. Upserts should “just work”. So, other than giving up some of the ACID bits that I would have had during that block, I’m not sure I see a(nother) difficulty…#2018-06-2816:19val_waeselynck@U0CKQ19AQ you don't need Datomock at all for doing this - db.with will give you the same thing (in a more functional style). Which is good news, because it means it's not difficult to embed in a transaction function, thus keeping atomicity#2018-06-2816:20val_waeselynckOne issue you could have is transaction entities; e.g. the :db/txInstant datoms#2018-06-2816:21tony.kayfor my use-case I actually need Datomock…I need to create a block context where the (black-box) code in the block uses the connection as-if#2018-06-2816:21tony.kayAh, good point#2018-06-2816:22val_waeselynckI fail to see how Datomock is mandatory to your use case; for the purpose of merging several transaction requests into one, I would not use it#2018-06-2816:22tony.kayThe use-case is a rules engine that is using Datomic in a very granular way…transacting single datoms and using the results as it goes to figure out what was added/retracted…using the real db is very heavy, but we’re not using the atomicity or functions at all…it’s just a bunch of little changes.#2018-06-2816:24tony.kayand it has to be cumulative for each “run” of the rules#2018-06-2816:24tony.kayworking in “with” would be a bit difficult#2018-06-2816:25tony.kayI’ve tried stripping Datomic out of the middle for 3-4 days, and this is what we settled on as a compromise to move forward…it’s really just an optimization#2018-06-2816:29val_waeselynckI see, interesting!#2018-06-2816:29tony.kayso, I’ll just be stripping the txInstant#2018-06-2816:29tony.kayand running the delta as a new tx…detecting which IDs should be converted back to tempids based on their existence in the real db#2018-06-2816:30val_waeselynckbe careful with entity
relationships#2018-06-2816:31tony.kayThis is why I asked if anyone had already coded it 🙂#2018-06-2816:33tony.kayI’m simultaneously pleased this is possible (and also relatively straightforward), but concerned about the concurrency issues it raises. I’m hoping I’ve analyzed the safety of this well enough for this given use-case and runtime context 😕#2018-06-2816:34val_waeselynckbecause the algorithm is parallel?#2018-06-2816:35tony.kayno, because the real database is being used by many users…so updates to the real database could cause this “diff” to be incorrect in subtle ways#2018-06-2816:36tony.kaysay someone does something that retracts an entity while this is running…I detect it as a “missing” thing at the end of the batch, give it a tempid, and recreate it. I guess I can do detection of that as well…but there are possible issues#2018-06-2816:36val_waeselynckah, I see#2018-06-2816:37tony.kayI can’t currently think of real cases for this particular area of the app where that will happen…but the things you “don’t see” are also often called “bugs”#2018-06-2816:39val_waeselynckif you can restrict the scope of the proposed change to a small set of entities, you could do some optimistic locking where you check in a txfn whether any of these entities has been affected by a new transaction, should be cheap enough#2018-06-2816:48tony.kaythat’s true….does the log API work with datomock? I’m not seeing a delta when I use it against the mocked connection#2018-06-2816:49val_waeselynckIt should work yes, but do tell me if you see any bug#2018-06-2816:49tony.kaylooking at the source it seems there’s some implementation for it#2018-06-2816:50tony.kayok, I’ll try against a real connection to see if my code behaves differently#2018-06-2816:54tony.kayhm. yeah. Against a real connection I get a diff…with datomock I get nothing#2018-06-2816:55tony.kay(defn diff
  "Returns the diff on the database from start-t to end-t"
  [connection start-t end-t]
  (d/q '[:find ?e
         :in ?log ?t1 ?t2
         :where [(tx-ids ?log ?t1 ?t2) [?tx ...]]
                [(tx-data ?log ?tx) [[?e]]]]
       (d/log connection) start-t end-t))
;; then:
(let [start (d/next-t (d/db c))
      _     @(d/transact c [{:db/id "name"
                             :owsy/name "Tony"}])
      end   (d/next-t (d/db c))
      delta (dbatch/diff c start end)]
...)
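For reference, the various forms of “t” that the Log API juggles here can be converted explicitly with the peer API (a brief sketch, not from the thread):

```clojure
;; The peer Log API accepts a basis t, a transaction entity id, or a
;; java.util.Date. d/t->tx and d/tx->t convert between t values and tx
;; entity ids (which live in the :db.part/tx partition).
(require '[datomic.api :as d])

(def t 1000)
(def tx-eid (d/t->tx t))  ; tx entity id corresponding to basis t
(= t (d/tx->t tx-eid))    ; => true: the coercions round-trip
```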
#2018-06-2816:57tony.kaywhen c is “real”, delta is non-empty. When it’s a mocked connection, empty#2018-06-2817:31souenzzoI have a function that "undo" a transaction
Sure, it's not the same problem but may help.
Later I will try to make the "tx-diff" function
https://gist.github.com/souenzzo/d8e6afe21e990530f58fab5c8c3abc8c#2018-06-2817:33tony.kayI’ve got the diff, and already have the filtering…am working on the tempid generation now (almost done)#2018-06-2817:56tony.kay@U06GS6P1N Do you want me to open an issue on log?#2018-06-2817:58chrisblomHi tony, i needed something similar once, I had something similar to datomock that collected all the inputs to d/transact so that they could all be combined in the end, but there are various edge cases that were problematic#2018-06-2817:59tony.kayI think the problem is here: https://github.com/vvvvalvalval/datomock/blob/master/src/datomock/impl.clj#L22
The normal log API accepts various forms of “t”#2018-06-2818:00chrisblomconflicting id’s, tx functions no longer being atomic etc, in the end I also settled on a diffing approach which works really well#2018-06-2818:00tony.kaywell, perhaps that isn’t true…this is internal…might have already been transformed by Datomic#2018-06-2818:01tony.kay@U0P1MGUSX Yeah, it is going fine so far. The main thing is I was hoping to use datomock to track the progress, and the log API isn’t working 😕#2018-06-2818:01tony.kayall my tests pass with a real db, so I’m making progress, but I’ll be blocked on the full solution. I guess I might be contributing a patch today 🙂#2018-06-2818:12souenzzo@U0CKQ19AQ are there many differences between using the log api and the history api?#2018-06-2818:15tony.kayhistory lets you make time based queries, whereas the log is just the log of stuff that happened between two pts in time#2018-06-2818:15tony.kayI think log is easier to use for my use-case#2018-06-2818:43tony.kay@U06GS6P1N so, the bug is due to how Datomic is executing that query. It calls the log once with times, but then again with what I think are tx db/ids#2018-06-2818:43tony.kaythe latter one does not succeed, so the query returns nothing#2018-06-2819:00tony.kay@U06GS6P1N so I have a fix, but it is sort of half-baked unless you know something I don’t#2018-06-2819:10tony.kay@U06GS6P1N See https://github.com/vvvvalvalval/datomock/pull/8#2018-06-2905:46val_waeselynck@U0CKQ19AQ thanks, will have a look#2018-06-2815:36eraserhdIf it contains no attribute installs, you can just concat them, yes?#2018-06-2815:39val_waeselynck@U0ECYL0ET no, because there could be conflicts#2018-06-2815:38eraserhd(I might not understand the problem)#2018-06-2819:29stuarthalloway@sekao yes the 504 is from Datomic -- have not yet repro-ed what you are seeing but it is on the list#2018-06-2820:06sekao@stuarthalloway awesome thanks. BTW i can get it to happen with a simple web ion containing (Thread/sleep 15000).
time chosen is arbitrary, but still below the lambda / API gateway timeouts AFAIK. also i’m hitting the route with an ajax POST, if that matters.#2018-06-2916:00amarjeet@stuarthalloway If I start using Ions with Solo, will it automatically upgrade to Production if the app demands for it? Or, will I have to do it manually? Your talk on Ions seem to suggest that it will happen automatically, I don't have to worry about the scaling. Please advise.#2018-06-2916:37nilpunning@amarjeet I believe you must manually upgrade: https://docs.datomic.com/cloud/operation/upgrading.html#upgrading-solo-to-production#2018-06-2916:47amarjeetOkay#2018-06-2916:49amarjeetAnother query I have: The Datomic Cloud pricing seems to be time-based (usage or no usage) rather than usage-based. Is my understanding correct?#2018-06-2917:09amarjeetMy understanding of usage is Transactions/Queries#2018-06-2917:13stuarthalloway@amarjeet pricing is by EC2/hour on instances that run in your account#2018-06-2917:16amarjeet@stuarthalloway and the software/hr is for datomic - even if there aren't any transaction/queries - because it's just live waiting for transactions/queries? The reason I am asking is because I had tried Aws lambda and dynamodb combination - I wanted to compare the cost impact .#2018-06-2917:17stuarthallowayyes, although note that Datomic can do more than transactions / queries, e.g. 
serve web requests#2018-06-2917:18amarjeetTrue, I have better reasons to use Datomic of course :) I was just trying to estimate my pocket burdens.#2018-06-2917:18amarjeetThanks, appreciate it.#2018-06-2917:42alexmillerwe ran the Strange Loop CFP app (which is a Datomic cloud app) for about a month, which is relatively low traffic - just people submitting talks and reviewing talks and the Datomic portion of the cost was about $11/month (EC2 may have been like $100 or something, don’t remember)#2018-06-2917:42alexmillerit’s a little hard to separate out from other stuff in same account, but maybe that arbitrary real data point is useful#2018-06-2917:42alexmillerthis was not ions, just d cloud#2018-06-2917:42alexmiller(although I intend to move it to ions :)#2018-06-2917:44alexmillerI expect ions would allow me to drop this cost considerably as the app would run in the instances I have for d cloud rather than on an additional box#2018-06-2917:45alexmillerand I used to run on-prem on a box on aws, that was at least twice as expensive#2018-06-2917:45amarjeetHmm, this is helpful, thanks :)#2018-06-2917:46alexmillernothing beats just trying it of course#2018-06-2917:46alexmilleryou can get daily spend numbers in aws#2018-06-2917:46amarjeetYes, have decided to test this for a few days#2018-06-2918:16gabriele(datomic ions) sometimes the lambda called by api gateway fails with:
{
"errorMessage": "Timeout connecting to cluster node.",
"errorType": "datomic.ion.lambda.handler.exceptions.Unavailable",
"stackTrace": [
"datomic.ion.lambda.handler$throw_anomaly.invokeStatic(handler.clj:24)",
"datomic.ion.lambda.handler$throw_anomaly.invoke(handler.clj:20)",
"datomic.ion.lambda.handler.Handler.on_anomaly(handler.clj:139)",
"datomic.ion.lambda.handler.Handler.handle_request(handler.clj:155)",
"datomic.ion.lambda.handler$fn__4062$G__3998__4067.invoke(handler.clj:70)",
"datomic.ion.lambda.handler$fn__4062$G__3997__4073.invoke(handler.clj:70)",
"clojure.lang.Var.invoke(Var.java:396)",
"datomic.ion.lambda.handler.Thunk.handleRequest(Thunk.java:35)"
]
}
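One generic way to paper over transient `Unavailable` timeouts like the one above while the underlying cause is investigated is a retry wrapper. This is an editor's sketch in plain Clojure, not a Datomic API:

```clojure
;; Sketch: retry a thunk a few times with linear backoff.
;; Rethrows on the final failed attempt.
(defn with-retries [max-attempts f]
  (loop [attempt 1]
    (let [result (try
                   {:value (f)}
                   (catch Exception e
                     (when (= attempt max-attempts)
                       (throw e))
                     {:error e}))]
      (if (contains? result :value)
        (:value result)
        (do (Thread/sleep (* 100 attempt)) ; back off before retrying
            (recur (inc attempt)))))))

;; usage: (with-retries 3 #(d/transact conn {:tx-data tx}))
```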
#2018-06-3011:43stuarthallowaycan you say more about the situation(s) when this occurs? Are you (re)deploying code?#2018-06-2918:17gabrieleanybody knows what the problem is? 🤔#2018-06-3000:54steveb8nI have a question about cold-starts in Ions. I’m running the sample Ions on solo and the cold start is 15-20secs after a deploy. I’m wondering if this is the lambda or the cloud node or both? I could see that (in future) the lambdas could be moved to cljs but not the cloud nodes so this knowledge will affect some architecture decisions i.e. designing in tolerance for cloud node cold-starts#2018-06-3006:15olivergeorgeI think I saw a reference which indicated that "integration with ElasticCloud" was a possible future addition to Datomic Cloud but I can't find it for the life of me. Am I making that up?
My guess is that it would facilitate more diverse search indexing (already used for full-text search).#2018-06-3011:44stuarthallowayWe are considering this, and it is also pretty easy to roll it yourself.#2018-06-3011:42stuarthalloway@steveb8n that is probably the Lambda. Cloud nodes will never be cold once you move to production topology.#2018-06-3012:43jlmrHi, I'm trying to understand t a little better. Suppose you do a transaction: db-before, db-after and the transaction itself all have values for t. How do these values relate to each other?#2018-06-3012:55stuarthallowayhi @jlmr Time moves forwards: db-after's t = tx t > db-before's t#2018-06-3012:56jlmr@stuarthalloway thanks!#2018-06-3012:57jlmrand good to know that tx t is always the same as db-after's t#2018-06-3015:42bedersRoot exception is this:
Error building classpath. Failed to read artifact descriptor for com.datomic:ion:jar:0.9.7
org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for com.datomic:ion:jar:0.9.7
#2018-06-3015:43bedersand running clj will also give me the same exception#2018-06-3017:21bedersok, found the issue with the Ions Tutorial.
For AWS beginners, it would be great to enhance this bit:
This tutorial presumes AWS Administrator permissions
since coming out of the Setting Up/User Access/Getting Started you will not have a user with that policy added.
May I suggest adding:
Make sure the user you set up in User Access also receives the AdministratorAccess policy so the mvn dependencies can be downloaded.
And link to
https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AdministratorAccess?section=attached_entities#2018-06-3017:23bedersIt's easy to get this confused since earlier in the User Access tutorial for Datomic Cloud I had added an Administrator Policy.#2018-07-0104:19Nick CabralThe datomic cloud docs are great about walking you through getting connected to a datomic instance for development via the Bastion. Has anyone seen any good docs for getting a web service deployed and running in the datomic AWS VPC (and connected to the datomic instance)? I’m an AWS newbie, but I get the impression that it will be easier to migrate my app from heroku into the VPC, rather than trying to connect to datomic externally from heroku.#2018-07-0105:08steveb8n@nick652 @chris_johnson and I are working on that now. Should have something to share in a few days#2018-07-0105:11Nick Cabral@steveb8n that’s excellent news, I’m looking forward to it! Thanks for the info.#2018-07-0206:24olivergeorgeAWS novice question: Is there an easy way to start/stop my Datomic Cloud System?#2018-07-0206:25olivergeorgee.g. When casually exploring on the weekends it seems silly to leave dev cloud services running 24x7#2018-07-0209:08rhansen@olivergeorge I've been setting desired capacity in my EC2 Auto Scaling groups to 0. This shuts down the EC2 instances and brings costs down to 0. Setting desired capacity to 1 brings them up again.#2018-07-0209:08rhansen(solo topology)#2018-07-0209:13olivergeorgeThanks that makes sense. I guess I should be able to do that using the AWS CLI in a makefile too. #2018-07-0209:59olivergeorgeEnter the Mr Miyagi of cloud services:
https://gist.github.com/olivergeorge/d3d22e3d55d6f8b3179ff0bf4b49b149#2018-07-0210:13Ben HammondI have a question about attribute aliases. Although I can transact multiple db/idents pointing at the same attribute entity
the most recent one seems to get a 'most favoured' status, and the earlier ones seem to get hidden#2018-07-0210:14Ben Hammondwhen I query (sort-by second
(d/q '[:find ?e ?a
       :in $
       :where [?e :db/ident ?a]]
     db))
it only shows me the 'most favoured' db/idents#2018-07-0210:15Ben Hammondalthough previous ones continue to work, I cannot find out what they are; I just have to know#2018-07-0210:25stuarthallowayhi @U793EL04V -- query against a history db to see retracted values#2018-07-0210:27Ben Hammondhi stu. I am trying to follow
https://docs.datomic.com/on-prem/best-practices.html#use-aliases#2018-07-0210:27Ben Hammondso I have two db/idents pointing to the same attribute entity#2018-07-0210:28Ben HammondI want both to work (and they do)#2018-07-0210:28stuarthallowayyes, and they can both be found in query via a history db#2018-07-0210:28stuarthallowaydb/ident is cardinality one#2018-07-0210:30Ben Hammondah right I see#2018-07-0210:30Ben Hammondthankyou#2018-07-0218:31favilaIf you just want the db id for a (possibly old/replaced) ident, use datomic.api/entid; to learn the "preferred" (current) ident use datomic.api/ident with the old ident as an arg#2018-07-0218:32favilanevermind using ident doesn't work: just returns any keyword it's given#2018-07-0218:32favilabut entid does work#2018-07-0213:02miridiusIs there any way to have a single Ions-based application span over multiple AWS regions? With the old model of using Datomic (Cloud) and running my app separately, I could have a single Datomic instance in say US East 1, but then have multiple instances of my app (including the Datomic peer library) running all over the world, which would then each be able to cache read results from the DB locally and serve geographically nearby client requests faster#2018-07-0214:40miridiusAnother, more noobie question: can I develop an Ions app using an in memory datomic DB (i.e. "dev" storage) on my own machine for local dev, and then push it to the cloud afterwards?#2018-07-0215:01stuarthalloway@miridius no cross-region support at this time#2018-07-0215:04stuarthallowayIons do not currently have any local storage, but the :server-type :ion is designed for local dev: see https://docs.datomic.com/cloud/ions/ions-reference.html#server-type-ion and discussed in the video starting at https://www.youtube.com/watch?v=3BRO-Xb32Ic&t=668s#2018-07-0220:56miridius@stuarthalloway yeah I've seen that video already, it's great! That's what got me really excited about Ions. 
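Stu's suggestion above (querying a history db to see retracted `:db/ident` values, i.e. the hidden aliases) might be sketched like this with the peer API, assuming a db value in hand:

```clojure
(require '[datomic.api :as d])

;; Sketch: list idents that were asserted and later retracted.
;; In a history db, a 5th pattern position binds the added? flag,
;; so `false` matches retractions.
(defn retracted-idents [db]
  (d/q '[:find ?e ?ident
         :where [?e :db/ident ?ident _ false]]
       (d/history db)))
```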
I can imagine that using the ion server type is useful in plenty of local development situations. But what about when you don't have an internet connection (or a crappy one)? Or a more likely scenario, what if you have a large test suite that you want to run frequently, and if those tests read/write to the DB it's going to make the suite a lot slower.#2018-07-0220:57miridiusOr maybe I'm just thinking about things the wrong way and haven't got my head around the Ions approach yet#2018-07-0422:11miridiusSo after thinking more about this I realized 2 things: the Datomic free edition can be used for offline, in memory development/testing, and if I export my tests as lambdas I could run them in the compute cluster directly. Both of those options solve the test suite latency dilemma, and the first one also solves the developing without an internet connection issue (happens more than you might think!). It made me wonder though, if it would be possible to include test execution as a step in the Ions deployment Step Program so that the code is only deployed if tests pass?#2018-07-0502:18steveb8nI noticed that the datomic.client.api is a protocol so I’m currently testing a testing impl of that protocol which uses Datomic-Free instead of the cloud connection. This should provide the local dev/test flow we all want.#2018-07-0502:19steveb8nthere are a few behaviour differences that need to be adapted and some that need to be avoided but so far it’s working well#2018-07-0502:22olivergeorgeSounds interesting.#2018-07-0502:22steveb8njust hit a road-block with d/q. it’s the only fn not behind a protocol. I’ll update as I try and get around it#2018-07-0503:32steveb8nok, it works. I’ll post a gist#2018-07-0503:34steveb8nhttps://gist.github.com/stevebuik/9b219090a2d10cc4fb06d62ee928ca7e#2018-07-0503:34steveb8nlots of work to do but it proves the concept.
there’s one important question remaining which I’ll ask Stu in a new thread#2018-07-0215:06eraadHi, I'm getting the following error when trying to push code for a Datomic Ion:
$ clojure -A:dev -m datomic.ion.dev '{:op :push :uname "hello"}'
{:command-failed "{:op :push :uname \"hello\"}",
 :causes
 ({:message "Shell command failed",
   :class ExceptionInfo,
   :data
   {:args ("clojure" "-Sdescribe" ""),
    :result {:exit 1, :out "Invalid option: -Sdescribe\n", :err ""}}})}#2018-07-0215:06eraadAny leads on what the error is related to?#2018-07-0215:07stuarthallowaythat is telling you that clojure -Sdescribe failed -- maybe you do not have Clojure command line tools installed?#2018-07-0215:08eraadOk. I'm sorry for such a newbie error. Will check that out#2018-07-0215:15eraad@stuarthalloway Did brew upgrade clojure and everything worked as expected, thanks!#2018-07-0217:27sparkofreasonUsing the Datomic Cloud client, I'm seeing errors like "Attempting to call unbound fn: #'datomic.client.api.sync/client". Any thoughts on why that might occur?#2018-07-0217:35marshall@dave.dixon what version of client?#2018-07-0217:36marshalland what version of Datomic Cloud?#2018-07-0217:38sparkofreasonclient 0.8.54#2018-07-0217:38marshallAnd when are you getting that error?#2018-07-0217:42sparkofreasonThat one occurred in a call to datomic.client.api/client#2018-07-0217:42marshallyou’ve required the client API namespace?#2018-07-0217:48sparkofreasonYes. It's sporadic, and not always the same error. I've also seen something to the effect of "method db not found on connection", and another when transacting where the message is just "datomic.client.impl.shared.Connection".
I need to check my logging to ensure that I'm getting the full chain of causes, perhaps.#2018-07-0217:49sparkofreasonSystemCFTVersion 397#2018-07-0217:50marshallyeah, if you could get the full error/stacktrace from cloudwatch logs that may help#2018-07-0217:52marshall@dave.dixon I suspect this is related to a code loading race condition issue; can you upgrade to the latest Cloud release/template (v 402)#2018-07-0217:52stuarthalloway@dave.dixon are you rolling instances (e.g ion deploy?)#2018-07-0217:54sparkofreason@stuarthalloway Don't think so, no ions, just running the DB in the vanilla production topology.#2018-07-0217:56stuarthalloway@dave.dixon any chance you are racing with your own code?#2018-07-0217:57sparkofreason@marshall I'll give it a try ASAP. That was my guess too, We're running an onyx cluster processing messages from Kafka, and these error seem to crop up early in the lifetime, when the onyx job starts and there's messages waiting to be processed.#2018-07-0217:59sparkofreason@stuarthalloway What sort of scenario are you thinking of? It's possible, I suppose, and I'll take another look at the code. 
but I believe it's wired up so that all of the Datomic interactions are scoped to a single onyx task, clients and connections not being shared amongst different threads.#2018-07-0217:59stuarthalloway@dave.dixon if your code calls require from multiple threads you could be hitting the equivalent of https://dev.clojure.org/jira/browse/CLJ-2026#2018-07-0218:02stuarthallowayClojure's ns macro marks the namespace loaded at the moment the ns form completes, so another thread requiring the namespace thinks it is loaded when it may still be load*ing*#2018-07-0218:03stuarthallowaysymptom is symbols not present, seemingly at random#2018-07-0218:06sparkofreasonThat sounds like a strong candidate.#2018-07-0218:09sparkofreason@stuarthalloway Is the workaround to try and require the client namespace as early as possible?#2018-07-0218:54stuarthalloway@dave.dixon the workaround is a concurrency mechanism that blocks the system on initialization, e.g. a delay or future#2018-07-0301:07olivergeorgeWhat are the recommended ways to test a transaction function with datomic cloud? I can see that a traditional fixture approach will work against a datomic cloud service (create-database, add schema, add data, run tests, drop database). Can't see how generative testing could work because of the db parameter but I do see we can treat that as static at least (ref https://docs.datomic.com/cloud/transactions/transaction-functions.html#testing). Feels like using datomic free in-mem would be faster but I think that doesn't have the cloud-api (likely premature optimisation in that thought).#2018-07-0301:14olivergeorgeInstead of trying to interpret/validate the data returned I guess I could be observing the normalised data associated with the change it makes on the database via the Log API.#2018-07-0304:08steveb8nsorry if someone already asked this but…the result of a datomic client query is a vector of vector tuples. in the old (peer) api it was a set of vector tuples. is this deliberate?
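The delay-based workaround Stu describes above might be sketched as follows (an editor's addition; the names are hypothetical):

```clojure
;; Sketch: block all threads on one-time initialization via a delay.
;; The require runs exactly once, and any thread that arrives while
;; loading is in progress blocks on the deref until it finishes.
(defonce client-ns-loaded
  (delay (require 'datomic.client.api)))

(defn ensure-client-loaded! []
  @client-ns-loaded)
```

Calling `ensure-client-loaded!` before touching the client API avoids the seemingly random "unbound fn" symptoms from concurrent namespace loading.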
the reason I ask is because the docs https://docs.datomic.com/cloud/query/query-executing.html show two behaviours - first a set and then a vector. which is correct or is there some way to control this?#2018-07-0304:16steveb8nI could imagine this being related to the new :limit and :offset features (presumably for pagination) although that would also imply some kind of “order by” feature as well but I can’t find docs on using these for pagination#2018-07-0304:27steveb8nI found the docs for :limit and :offset https://docs.datomic.com/cloud/client/client-api.html although I’m still unsure about the lack of “order-by” like behaviour when using these features. Is it just arbitrary?#2018-07-0304:28steveb8nso, in summary: 1/ is the set result here a doc bug? https://docs.datomic.com/cloud/query/query-executing.html and 2/ when paginating, is there an implied ordering or can we control this now?#2018-07-0307:10steveb8nanother possible doc bug? : should the :server-type be :cloud in the “Connect and use Datomic” section (instead of :ion)#2018-07-0311:50stuarthalloway@steveb8n check if you are on an older version of client pre ion support, see https://docs.datomic.com/cloud/releases.html#0-8-54#2018-07-0307:11steveb8nI suspect a bug in the client api as well. it doesn’t mention :ion here (throw (impl/incorrect ":server-type must be :cloud, :peer-server, or :local")))#2018-07-0307:12steveb8nI can’t see any change in behaviour between :cloud and :ion. is there any difference?#2018-07-0307:43val_waeselynckReleased Datomock v0.2.2. This solves bugs with Datomock's Log implementation, which failed to accept nil, Dates and tx-entids for txRange bounds.
https://github.com/vvvvalvalval/datomock#2018-07-0310:57olivergeorgePerhaps a typo in the Ions Tutorial.
In the Deploy section it gives an example of using curl to make sure everything is okay:
curl https://$(obfuscated-name). -d :hat
Which returned {"message":"Missing Authentication Token"}
I think that should be:
curl https://$(obfuscated-name). -d :hat
#2018-07-0314:55jaretThanks for the catch. I’ve updated the docs with a dummy string “datomic” there.#2018-07-0400:36olivergeorgeCool. Glad I could help. Just noticed the formatting of the following section isn't quite right. The HTML for the link is showing as text:
The API Gateway is an external connection point, not managed by Datomic. If you created an API Gateway in the previous step, you can select and delete it <a href="" target="_awsconsole">in the console</a>.
#2018-07-0914:34jaret@olivergeorge thanks! I’ve updated the malformed link.#2018-07-0311:00olivergeorgeClearly that's not quite right but it works.#2018-07-0311:52stuarthallowayhi @steveb8n Query is documented to return a collection of tuples: https://docs.datomic.com/client-api/datomic.client.api.html#var-q, so your consuming code should not know/care about sets vs vectors. There is no "order by" in query (yet).#2018-07-0311:56stuarthallowayhi @olivergeorge You can generate db values by picking from a set of premade example values, and those example values do not need to be constructed every time you run the test. Why not construct fixture dbs once when you write a test, give them good db-names (db-in-state-A, db-in-state-B)?#2018-07-0312:25olivergeorgeI'll give that a try. Thanks. #2018-07-0312:27stuarthalloway@olivergeorge what does your tx fn do?#2018-07-0312:47olivergeorgeAt this stage I'm doing simple things. But thinking ahead to more complex systems. Still lots to learn. #2018-07-0312:47steveb8nThanks @stuarthalloway for the clarification#2018-07-0314:41eoliphantany advice/strategies on env config/parameterization for ions? I know we can obviously just stick the info in datomic itself#2018-07-0315:07kommenThe “as a standalone Clojure API” link at the very top of https://docs.datomic.com/on-prem/pull.html is broken#2018-07-0315:07kommenany preferences where to report this?#2018-07-0315:23stuarthalloway@eoliphant I recommend using AWS Systems Manager parameter store. Hm, maybe Datomic should have a feature making that easier... 🙂#2018-07-0315:25eoliphantyes, that’s definitely more in line with what I was thinking 🙂#2018-07-0315:26marshall@kommen thanks - i’ll fix it#2018-07-0317:26rhansenHmm... I gather that datomic doesn't store duplicate values. But could it be done?
Usecase: I want to store the last n dice rolls for one of my players, but duplicate values should still count.
Currently I'm just using the following schema:
{:db/ident :player/last-rolls
:db/valueType :db.type/long
:db/cardinality :db.cardinality/many
:db/doc "The last couple of dice rolls for this player"}
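The behavior rhansen describes can be seen directly with the schema above: a cardinality-many attribute holds a set of values, so a repeated roll is not stored twice. A sketch (editor's addition) using an in-memory peer database via `datomic.api`:

```clojure
(require '[datomic.api :as d])

;; Sketch: cardinality-many values form a set, so duplicates collapse.
(def uri "datomic:mem://rolls-demo")
(d/create-database uri)
(def conn (d/connect uri))

@(d/transact conn [{:db/ident       :player/last-rolls
                    :db/valueType   :db.type/long
                    :db/cardinality :db.cardinality/many
                    :db/doc         "The last couple of dice rolls for this player"}])

@(d/transact conn [{:db/id "p" :player/last-rolls [6 6 3]}])

;; only two values survive; the repeated 6 collapses into one datom
(d/q '[:find ?roll
       :where [_ :player/last-rolls ?roll]]
     (d/db conn))
```

This is why the thread turns to the ref-per-roll and encode-as-one-value alternatives that follow.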
#2018-07-0317:31Joe LaneHey @rhansen, you could change :db/valueType to :db.type/ref then create an entity to represent a roll, including a timestamp on the entity.#2018-07-0317:31favilayou could also encode the value#2018-07-0317:32Joe Lanewhat do you mean “encode the value”?#2018-07-0317:32favilacardinality one, the value is an encoding of multiple values#2018-07-0317:32favilato datomic it looks like one value#2018-07-0317:32favilayour application decodes it to get many values#2018-07-0317:34rhansenInteresting.#2018-07-0317:37rhansenI'll probably go the ref route, thanks for your help 🙂#2018-07-0317:38Joe LaneDo you have an example of this? Do you mean encode as a series of bytes or a string? Just looking for some clarity here. Seems like a workaround to encode an ordered collection, am I right?#2018-07-0317:39favilathat is exactly what it is#2018-07-0317:40favilaspecifics may vary; bytes, strings, bignums, whatever#2018-07-0317:40favilathe point is only that datomic doesn't see into the value; because it's a blob to datomic you can store whatever you want in it that datomic couldn't represent natively#2018-07-0317:40favilaor could represent but too inefficiently for your use case#2018-07-0317:41favilae.g. json array in a string#2018-07-0317:45Joe Laneas well as an edn string too I suppose. Neat. That may change the way I model a schema today. I’ll give it a shot.#2018-07-0317:45Joe Lanethanks#2018-07-0322:48steveb8ncan I confirm something: the scalar query using . is not supported in Client/Cloud e.g. '[:find ?movie-title .
:in $
:where
[?movie :movie/title ?movie-title]]#2018-07-0322:49steveb8ngetting a single result was pretty useful in the peer api. it’s no biggie to call ffirst on the result in client but I just wanted to confirm that this feature has been removed#2018-07-0417:34drewverleeIt seems like I can add a datom with an attribute that isn’t described in the schema. Is that right? What does that imply? To contrast, clearly you can’t do this in a relational db (without some setup).#2018-07-0421:24favila"I can add a datom with an attribute that isn’t described in the schema." This shouldn't be possible. Do you have an example of doing this?#2018-07-0502:06drewverleeI'm using datascript, I should have asked in that channel. I'll post the example tomorrow#2018-07-0417:35drewverleeI’m reading the docs and trying to connect the dots here:
> The set of possible attributes a datom can specify is defined by a database’s schema.
Which would imply, to me, you can’t add an attribute to the db without it being in the schema.#2018-07-0419:14igrishaevHello Clojurians! I’ve been googling for the whole day but without any result: how can I sort datoms/entities by their tx date in descending order? For example, to show the last 100 messages on the dashboard.#2018-07-0420:39val_waeselynckDon't use the tx date in your app code, prefer a custom attribute. You could use avet with either a negated timestamp, or with seek-datoms using an exponentially decreasing lower bound until you reach a count of 100#2018-07-0606:24octahedrion@U06GS6P1N why do you advise against using :db/txInstant in your app code ?#2018-07-1106:48val_waeselynckSee this: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2018-07-0419:15igrishaev(taking into account there might be quite many of entities, so manual work with collections is not applicable)#2018-07-0419:43drewverlee@igrishaev I’m not an expert, but i believe one way is to
1. make sure to return the tx date from your query
2. call sort-by :tx-date using Clojure's native sort function
This will work, its perf implications probably have to do with how you're running Datomic. Remember that part of the value proposition is that <waves hands> datomic can be treated as a value in your process. </waves hands>#2018-07-0419:50drewverleeThat's possibly not the best way, and i might be wrong 🙂. I glanced at the docs and this might be the right place to look: https://docs.datomic.com/cloud/query/query-data-reference.html (see custom aggregates)#2018-07-0421:04leblowlHey, I'm a little confused on how Datomic caches entities in my application code. Reading this: https://docs.datomic.com/on-prem/caching.html#entity-caching ... I am wondering what lifetime of the Entity instance is referring to. Are we talking about until the object is garbage collected? Also is this entity cache the same as the object cache: https://docs.datomic.com/on-prem/caching.html#object-cache? Thanks... (p.s. is there a preference for asking these questions in http://forum.datomic.com ?)#2018-07-0421:12leblowlI think I found an answer for my questions. I found this line in the Entity javadocs (https://docs.datomic.com/on-prem/javadoc/datomic/Entity.html): the values of its attributes are not obtained from the db until get or touch are called, after which they are cached in the entity. So I am pretty sure that we're just talking about basic heap caching with a garbage collection lifetime. And if that's the case, I think that is different than the object cache. I'd love to get a confirmation on that.#2018-07-0421:23favilaThe object cache is a cache of datoms: objects decoded out of a fressian block; the entity objects have their own cache of the map view of datoms#2018-07-0421:42leblowl@U09R86PA4 thanks. I guess the docs @ https://docs.datomic.com/on-prem/caching.html#object-cache were confusing me because it only mentions contains segments of index or log…
I think I was missing the fact that Datomic only retrieves entities through the indexes (I think that's a little different than SQL, where an index may or may not be used, right?)#2018-07-0421:44favilaAll datomic's indexes (except the fulltext index, which is a derived Lucene index) are "covering" indexes: each contains all of the data for the datoms it is indexing.#2018-07-0421:45favilayou may find this helpful to understand datomic's architecture http://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2018-07-0421:46favilain datomic an "index" is just datoms in a certain order#2018-07-0421:47favilathe datoms are stored in binary blobs encoded in fressian#2018-07-0421:47favilathe blobs are arranged in a sorted tree structure#2018-07-0421:47favilaso one blob/block may have hundreds or thousands of datoms in it#2018-07-0421:48favilathe object cache is a cache of the datoms decoded out of a block#2018-07-0421:48favilathe blocks themselves are what is stored in storage, keyed by a uuid#2018-07-0421:48favila(this is what memcache is caching)#2018-07-0421:53leblowlThanks, very helpful. I'll take a look at that guide#2018-07-0423:47sekaoi know this is a common problem but i'm struggling to find a clear answer anywhere…how do i mimic SQL's order by clause? i dont want to sort the data after retrieval since that certainly wont work with large data sets
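val's negated-timestamp idea from earlier in the thread might be sketched like this with the peer API (an editor's addition; `:message/neg-time` is a hypothetical long attribute declared with `:db/index true`):

```clojure
(require '[datomic.api :as d])

;; Sketch: store (- timestamp-millis) in an indexed long attribute so that
;; ascending :avet order is newest-first; "last n" is then a simple take,
;; with no post-retrieval sort of the full result set.
(defn newest-message-eids [db n]
  (->> (d/datoms db :avet :message/neg-time)
       (take n)
       (map :e)))
```

The trade is denormalization (an extra attribute maintained at write time) for an ordered, lazily-walkable index at read time.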
not sure how that will scale with large data sets but i’ll go ahead and try that#2018-07-0501:43favilaThe only way around that is if the query has a shape such that you can chunk the output effectively by changing some input parameter #2018-07-0502:13mg@sekao it shouldn't scale worse than the query itself, because remember it's your app's memory that's doing all the query processing, not an external database system#2018-07-0503:37steveb8n@stuarthalloway I’ve found it’s possible to use a local mem db while using the client-api but re-implementing the client protocols. I have 2 questions: 1/ are you ok with this technique being generally used? and 2/ line 68 uses the Queryable abstraction which is not part of the public API. Can we rely on this or would you consider moving it into the public API? https://gist.github.com/stevebuik/9b219090a2d10cc4fb06d62ee928ca7e#file-components-datomic-core-test-clj-L68#2018-07-0503:39steveb8nMyself and a lot of other folks would really like it if this is something we can rely on. Please say yes 🙂#2018-07-0513:22jaretDatomic Ions in Seven Minutes
https://www.youtube.com/watch?v=TbthtdBw93w#2018-07-0515:15bherrmannI feel like it would be nice to see a datomic ion example todo app#2018-07-0519:15souenzzosomething to clone, update configs push to aws and see working#2018-07-0519:26bherrmannI probably should follow this, https://docs.datomic.com/cloud/ions/ions-tutorial.html … “The ion-starter project contains a complete ion-based application.”#2018-07-0520:00adammillerWhat's the best practices recommendation as far as exposing datomic's generated ids? Is it simply don't do it and instead use a squuid when there is no other unique identifier for a given entity?#2018-07-0520:23marshall@adammiller correct. If your domain doesn’t already have a unique identifier, use something like a UUID or GUID. Squuid fine too, altho no longer required by Datomic (since adaptive indexing)#2018-07-0520:24adammillerah, excellent...did not realize that about squuids not being necessary now. Thanks @marshall!#2018-07-0520:24marshallhttps://forum.datomic.com/t/general-product-questions/309/2#2018-07-0520:25marshallanswer #4 there ^#2018-07-0520:41mkvlralso didn’t know about this, might be good to update https://docs.datomic.com/on-prem/identity.html#sec-6 then?#2018-07-0521:04rhansen@marshall what’s the rationale for not exposing internal ids?#2018-07-0521:07mgInternal ids can change if you do a backup/restore, as I recall#2018-07-0521:07marshallthere is no contract that guarantees they’re preserved#2018-07-0521:07marshallalthough they happen not to change on backup/restore#2018-07-0521:08marshallhowever if you have to do something like play back a db into another db (for filtering or moving data) they will not be#2018-07-0606:25val_waeselynck@rhansen Here's an example of something that breaks if you expose internal ids: https://gist.github.com/vvvvalvalval/6e1888995fe1a90722818eefae49beaf#2018-07-0611:40rhansenTwo questions:
1) Is excision planned for datomic-cloud? Need to handle GDPR 😕
2) Is gc-storage performed automatically in datomic-cloud?#2018-07-0611:42rhansen(Btw, I've been playing with datomic-cloud for about a week now and I love it. It's simplified a lot of my code, and I don't look forward to work on projects that use other dbs. A big high five to cognitect for this product)#2018-07-0612:21stuarthalloway1) we hear you! and 2) yes#2018-07-0615:21henrikI’m in a situation where I’m looking at databases for storing a lot of academic publishing-related information. Metadata about articles, journals, books, reports, publishers, authors and so on. There will be blobs of PDFs with fulltexts of the articles, books, etc. as well, and XML versions of those same PDFs. Is Datomic a good fit for this type of data? (I expect the blobs will end up in storage outside of the database)#2018-07-0615:27marshall@henrik I think Datomic is a great fit for the metadata; that sort of multi-source integration is a very good use case; I would definitely store the blobs out of band (i.e. S3 or elsewhere) and keep a reference to them in Datomic (i.e. a URI or other content address)#2018-07-0615:32henrikThank you @U05120CBV! The amount of data is likely to grow to approach the amount of digitalized academic publishing data available in the world (i.e., all data from the top 10 publishers to begin with, then the top 20 and so on). Still good?#2018-07-0616:05marshallprobably, but i’d want to know more about scale in terms of # of datom#2018-07-0616:05marshalldatoms#2018-07-0616:35henrikWould it be possible to do something like a Skype chat with yourself or someone else at Cognitect and talk through the specifics?#2018-07-0616:40marshallSure. I’m traveling next week, but would the week after work? You can email me and we can set something up#2018-07-0616:01sekaowhen i pull a datom from :tx-data and read its attribute via :a, i get back a number rather than a keyword.
why is that?#2018-07-0616:06marshall@sekao all schema definitions are just datoms themselves#2018-07-0616:07marshalltry pulling the :db/ident of the number you got back (it’s an entity ID)#2018-07-0616:07marshalli.e. (d/pull (d/db conn) [:db/ident] (:a my-datom))#2018-07-0616:10sekaomakes sense. and also you just blew my mind. thanks!#2018-07-0616:10marshall🙂#2018-07-0616:21Daniel HinesIs there an easy way to write reports/dashboards against a datomic database?#2018-07-0616:22Daniel HinesFor a non-developer, that is.#2018-07-0616:27Daniel HinesTo clarify, my business analyst and database programmer colleagues are running into a bunch of problems which I think Datomic solves beautifully, but they depend on tools like SSRS and PowerBI to write lots and lots of reports.#2018-07-0618:40henrikWe’ve used another project from the Clojure landscape: https://www.metabase.com/
Unfortunately, they don’t support Datomic (yet). I would love for them to, though.
I’m sure Datomic can be set up to export continuously to one of the supported databases, however.#2018-07-0623:41Daniel HinesThanks @henrik #2018-07-0718:38eoliphantOn a couple of my projects we just use transaction log/onyx to stream stuff in to RedShift, etc. There’s not much direct support from your typical BI tools for datomic#2018-07-0619:45jjttjjanyone know if it's possible to get the datomic ions tutorial working on windows (and thus without the clojure command line tools)?#2018-07-0714:51luchiniYou will need at least Java installed and Clojure’s jar…. clojure cli is just a script that calls Java with a well-crafted classpath#2018-07-0703:32olivergeorge@jaret Another quick website fix for the Ions Tutorial. The text reading "Follow the instructions to create a new Datomic Cloud system" links to the page about connecting, not setting up a datomic cloud. Perhaps should link to: https://docs.datomic.com/cloud/setting-up.html#2018-07-0703:33olivergeorge@jaret One other thing I mentioned in a thread which might have been missed. The formatting of the following section isn't quite right. The HTML for the link is showing as text:
The API Gateway is an external connection point, not managed by Datomic. If you created an API Gateway in the previous step, you can select and delete it <a href="" target="_awsconsole">in the console</a>.
#2018-07-0704:09olivergeorgeAlso, there's a typo on the AWS Marketplace page. The Production Toplogy provides a full featured
needs an extra o in "Topology".#2018-07-0704:09olivergeorgeYou can borrow one of mine if you like.#2018-07-0712:58bherrmannFYI: My first ion cloud formation failed... I think it timed out waiting for the storage volumes to come up. Second time through the marketplace it seemed to work fine.#2018-07-0713:00bherrmannI'm a bit of a AWS noob. To keep my personal costs down, should I stop the EC2 instances when I'm not playing around? and start them when I'm tinkering? My solo instance was up overnight and it cost $0.10 ... so I know this is pretty small already.#2018-07-0714:54luchiniYou can always turn both of your instances off when you are not using them (the bastion one and the compute one). They do cost you money while they are on. It should be in the realm of cents but if cost is an issue, turning them off and back on should save you some.#2018-07-0800:42bherrmannHuh. I turned both instances off, using EC2 and my bill was 0.10 this morning. This evening I viewed my bill and it is now 0.42 ... I wonder why the bill when up when the instances are off... hummm....#2018-07-0800:44bherrmannweird.. I know I stopped the instances... but the were now running again... strange....#2018-07-0801:03bherrmannhuh.. I see it restarted them....#2018-07-0801:45bherrmannI deleted the CloudFormation Stacks... now my bill is at $1.30 ...#2018-07-0714:49luchini@stuarthalloway great work on Datomic Ion. Loving it. The one thing that I’m confused is on the API Gateway integration. AWS doesn’t let you create more than one proxy resource so how do you plug several web-specific ions to API Gateway? The very tutorial shows two ions marked as :integration :api-gateway/proxy but only one is exposed on API Gateway.#2018-07-0715:12stuarthalloway@luchini you could create more than one gateway#2018-07-0715:13luchiniOne gateway for each endpoint?#2018-07-0718:54eoliphantis that (get-connection), etc approach in the docs the ‘best’ way to manage schema, etc with ions? 
It just feels a little weird doing it per request. But I guess there’s no equivalent of a ‘startup func’ or something for an ion deployment?#2018-07-0722:26miridiusSince it's a memoized function, it will only get called once per running process (my understanding is that Ions is designed so that there will be a long running process that is used to handle many requests - the lambda simply defers execution to that process). Therefore you can include some "startup" activities in that function body such as transacting schema#2018-07-0723:21eoliphantah yes, read right past the memoize call#2018-07-0721:12eoliphantreally having fun with ions, but just had an ‘oh hell’ moment. Lambdas only take JSON right? ugh.. most of the stuff I’ve been working on is EDN all the way down..#2018-07-0722:30miridiusWhat's held in the JSON is all the meta-data about the request (headers, URI, etc.). The request body can still be of any content type, it will be a single string in the JSON map#2018-07-0723:23eoliphantok so for instance, I’m trying to call my deployed ion via the aws cli, but it’s complaining about the --payload not being parseable as json. any idea how i might get around that?#2018-07-0802:26miridiusah sorry, I realise now you're talking about calling the lambda function directly rather than through an API gateway. Gotcha. Yeah it looks like you're right that invoking a plain lambda requires a json payload which you then have to parse in your Clojure code, but I'm not sure that's a big deal. Your plain lambda functions can only be invoked by other things within AWS (e.g. an event from some other service) so that's probably often in JSON anyway. If you want to make an API that takes EDN and is accessible from the web, make your lambda into a web service Ion: https://docs.datomic.com/cloud/ions/ions-tutorial.html#webapp#2018-07-0818:38oscarUsing transit+json is also a solution.#2018-07-0915:05eoliphantah, forgot about transit. 
that will do it#2018-07-0722:33miridiushow much file system access do Ion functions have? E.g. can I have my schema stored in an EDN file that gets slurped by my Ion code? I assume the resources directory would need to be included in deps.edn?#2018-07-0723:01olivergeorgeThere is an example of this here https://github.com/Datomic/ion-event-example/blob/master/src/datomic/ion/event_example.clj#L47#2018-07-0802:19miridiusthanks!#2018-07-0722:38miridiusThe main reason I ask is that to serve a web app from Ions at some point it's going to be necessary to serve static files (e.g images, CSS, compiled CLJS). An alternative would be to split the code to have all the front-end files hosted elsewhere and just use Ions for the API, but then you have a new class of problems around coordinating multiple deployments and multiple domains.#2018-07-0808:48rhansenI would just store static assets in s3. S3 has a builtin http server for serving static files, and that way your datomic system doesn’t waste cycles it doesn’t have to.#2018-07-0723:55orlandowHi, I’m new to datomic and AWS. I’m following the ions tutorial, integrating with slack via the api gateway but I’m getting a base64 encoded body, I can’t find a setting to configure it, I think I have the same code as the tutorial and I followed every step, am I missing something?#2018-07-0800:00orlandowmy ion code is:#2018-07-0800:01orlandow(defn echo* [{:keys [body]}]
  {:status 200
   :headers {}
   :body "hello world"})
(def echo (apigw/ionize echo*))
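One useful property of this shape: echo* is a plain function over a Ring-style request map, so it can be exercised at the REPL with no gateway or AWS in the loop. A minimal sketch (handler copied from the snippet above; only the ionized echo var needs the Datomic ion libraries):

```clojure
;; The un-ionized handler is just a function; no API Gateway needed to test it.
(defn echo* [{:keys [body]}]
  {:status 200
   :headers {}
   :body "hello world"})

(echo* {:body "anything"})
;; => {:status 200, :headers {}, :body "hello world"}
```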
#2018-07-0800:01orlandowand my config#2018-07-0800:01orlandow:lambdas {:echo {:fn iontest/echo
                 :integration :api-gateway/proxy
                 :description "echos input"}}
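For anyone comparing notes: the :lambdas entry above is only part of ion-config.edn. A sketch of a full file in the shape the ion-starter of this period used (the iontest namespace comes from the snippet above; :app-name and the empty :allow list are illustrative assumptions, not copied from orlandow's project):

```clojure
;; ion-config.edn (sketch). :allow lists vars callable as Datomic
;; transaction/query functions; :lambdas maps AWS Lambda names to vars,
;; here an API Gateway proxy integration.
{:app-name "iontest"
 :allow    []
 :lambdas  {:echo {:fn iontest/echo
                   :integration :api-gateway/proxy
                   :description "echos input"}}}
```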
#2018-07-0800:43orlandowI just deleted and recreated the api and it’s working now ¯\(ツ)/¯#2018-07-0800:43orlandowI must have missed something the first time#2018-07-0818:41oscarMaybe you didn't set the "Binary Payload: */*" setting?#2018-07-0821:09orlandowYes, that’s probably it, thanks, I did see it and changed it but perhaps I didn’t deploy the api afterwards.#2018-07-0801:52olivergeorgeHi @stuarthalloway I'm curious why the ion-event-example breaks the schema into keyed chunks and only migrates new chunks. Specifically: https://github.com/Datomic/ion-event-example/blob/master/src/datomic/ion/event_example.clj#L47#2018-07-0801:53olivergeorgeThere would be less moving parts without the grouping. Perhaps there are performance considerations with loading the whole schema (even if 95% unchanged) each time. Can see it would be more managable working with a large edn schema file with chunks & would allow migration to report what schema chunks are being loaded...#2018-07-0819:30stuarthallowayhi @olivergeorge ! it is just sample code, I don't have any agenda about how people manage schemas and migrations#2018-07-0819:31stuarthallowayalthough I do have an agenda about "what" and "why" 🙂 http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html#2018-07-0821:03olivergeorgeThanks for clarifying#2018-07-0801:56olivergeorge(I'd love to see other people's approaches to declaring and migrating schema too.)#2018-07-0803:11steveb8nI’m working this out for myself right now. I’m deploying parts of my app using the “component” lib where each component deploys it’s own schema on start. It could be a bit heavyweight inside an Ion but so far it’s ok. I’ll blog on this later if the pattern holds up as the app grows#2018-07-0811:05eoliphantI’ve used conformity a good bit with on-prem, for schema mgmt. Coming from aeons of using flyway/liquibase/etc in the java/sql world it seemed a good fit. 
It needs some love to get it working with the client api, been thinking about tackling a pr.
But yeah for now @olivergeorge, I’m just memoizing the load of the whole thing#2018-07-0811:38eoliphantI have a couple questions about the ion-config.edn stuff
The :allow section lists what datomic is allowed to call. The ‘entry points’ if you will. In looking over the starter though, I don’t quite get how a function, that’s not a lambda would ever get executed directly, and need to be ‘allowed’. Are non-lambda funcs just there to take advantage of the namespace loading? If that’s the case maybe it’d be cleaner to have a separate more explicit tag?
Also, the reference describes the ion function signatures. It appears that the transaction and query types are really just recommended conventions, while the lambda and web service types are describing the actual required function signature. Is this the case?#2018-07-0818:45oscarIf I understand it correctly, you need to :allow transaction and query functions. They aren't lambdas but they need to be declared in case an external client called them through a query or transaction.#2018-07-0921:08eoliphantah lol. I’d jumped right into lambdas, etc and totally missed that we now have transaction funcs, etc via ions as well Was really missing the transaction funcs. We use them judiciously, but for some key functionality in for on-prem#2018-07-0818:38eoliphanthi, what’s the story for logging with ions? went poking around in the log for the lambda, then remembered that they’re just the glue. And the system or whatever log just looks like health stuff#2018-07-0818:49oscarI haven't messed around with logging too much, but I would try prepending something searchable like the current namespace and then searching the "datomic-<compute-stack>" log-stream for it.#2018-07-0818:58stuarthallowayYou can use any logging tech you would use for EC2, but stay tuned, help is on the way.#2018-07-0820:40eoliphantI’m fairly certain that ‘stdout’ logging isn’t showing up in the compute stack’s log stream.
Ok @stuarthalloway will try using CWL or datadog or something directly#2018-07-1119:40stuarthalloway@U380J7PAQ ... and help has arrived http://blog.datomic.com/2018/07/datomic-ion-cast.html#2018-07-1119:40eoliphantwoohoo!#2018-07-1200:40eoliphanthey @stuarthalloway just an FYI, looks like the doc page has 2 copies of the same content
https://docs.datomic.com/cloud/ions/ions-monitoring.html#2018-07-1200:43stuarthallowaythanks, will investigate!#2018-07-0819:23oscar@stuarthalloway What version of jackson-core does Datomic Ions use? I found that my code was getting broken only in my deployment because of my dependency on cheshire and its transitive dependency on jackson-(core|dataformat-smile|dataformat-cbor) 2.9.0. Pinning them down to 2.8.11 seems to have fixed it.#2018-07-0819:25stuarthallowayHi @U0LSQU69Z! We just did an update to help people with this problem: https://docs.datomic.com/cloud/releases.html#402-8396#2018-07-0819:26stuarthallowayI am guessing you are on an older release than that one.#2018-07-0819:26oscarYes! Thank you!#2018-07-0822:59eoliphantare there any restrictions on outbound traffic for ions? I’ve looked over the net acl’s and sg’s but didn’t see anything obvious. Trying to shoot my logs over to loggly, but nothing is showing up#2018-07-0900:36olivergeorgeI'm thinking through the right place to do schema migrations. The key issue being that a deployment is unstable until schema migrations are run. I've seen examples of doing the schema migration as part of a memoized get-connection helper. Are there other approaches I'm missing? Perhaps there should be a hook to run schema migrations as part of the code deploy step.
Ref: https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L69#2018-07-0900:38eoliphantyeah some sort of :init-method in the config would be nice for that kind of stuff. But i’m just doing the memoization for now as well#2018-07-0900:48olivergeorge@eoliphant the downside I see regarding memoization is the risk that the schema fails at "run time". I'd prefer the deploy to fail. Could argue it's unlikely given schemas should grow, not break, but human error is a thing. e.g. added a unique constraint but values in db aren't unique.#2018-07-1001:26miridiusYou could write your own deploy code that transacts the schema before running the ions deploy op, perhaps#2018-07-1001:55olivergeorgeHi @U0DHVSBHA, yes that would work. For a commit based deploy there is little chance of the local environment (running the migration) being inconsistent. That's the risk I see. Deployment becomes coupled to local dev env and the code pushed. More moving parts. More complexity.#2018-07-0900:51eoliphantyeah I totally agree, but I think that's going to have to be something they build in. If say the :init-function returns falsy, etc yeah then kill the deploy. Though even that gets interesting, since you could possibly say successfully transact in some schema, but then have something else in the code fail, such that it returns false. So you'd roll back the deployment, but still have made changes.#2018-07-0900:54olivergeorgeTrue enough but happily we're talking about an unlikely case given "grow don't break" schema approach. Does require some coding practices to reduce the risk of surprises.
Ref: https://docs.datomic.com/cloud/best.html#plan-for-accretion#2018-07-0900:55eoliphantyep, that’s what I do with all my datomic stuff. And similarly one would just have to be disciplined about not doing anything too hinky in this proposed init-method.#2018-07-0900:55olivergeorgeCrazy idea would be a schema migration transaction function which runs a suite of tests before committing (if that's even possible).#2018-07-0900:56eoliphantok i’m pulling my hair out right now.. My ionized gw function is base64 encoding my responses lol#2018-07-0900:58orlandowThat happened to me too, maybe this helps:#2018-07-0900:58orlandowhttps://clojurians.slack.com/archives/C03RZMDSH/p1531007700000037#2018-07-0900:56eoliphantah that would be interesting#2018-07-0900:57eoliphanti use conformity for my on-prem stuff. it’s like flyway/liquibase lite. But provides a little structure around the process#2018-07-0900:58olivergeorgeThanks, I'll check it out.#2018-07-0901:01eoliphantwell it doesn’t support the client api yet though 😞#2018-07-0901:01eoliphanti was planning to fork it and see if I could add that support#2018-07-0917:26rapskalianI took a stab at this using a small gist. Haven't discussed a PR or anything though.
https://gist.github.com/cjsauer/4dc258cb812024b49fb7f18ebd1fa6b5#2018-07-0901:04oscarWhat I do for migrations is test that all of the queued migrations work using d/with. If I don't throw any exceptions, then I commit the transactions.#2018-07-0901:07eoliphantanyone had this problem? I had this function go a bit wonky on me.
I’ve stripped it down to this
(defn handle-request*
  "Handle API Requests"
  [{:keys [headers body]}]
  (log/debug "here's something")
  {:body "body here" #_(json/generate-string {:its "ok"})
   :headers {"Content-Type" "application/json"}
   :status 200}
  #_{:status 200
but in the gateway log, I’m seeing the following (and the encoded value is returned to my client)
(c562e562-8313-11e8-8b30-f3eb1fd30d3f) Endpoint response body before transformations:
{
  "body": "Ym9keSBoZXJl",
  "headers": {
    "Content-Type": "application/json"
  },
  "statusCode": 200,
  "isBase64Encoded": true
}
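The isBase64Encoded response above is what API Gateway produces when its binary media type settings come into play, as discussed just below. A standalone sketch (plain JVM interop, nothing Datomic-specific) showing that the body in that log is simply the base64 of the handler's string:

```clojure
;; Decode the body from the gateway log above using java.util.Base64.
(defn decode-b64 [^String s]
  (String. (.decode (java.util.Base64/getDecoder) s) "UTF-8"))

(decode-b64 "Ym9keSBoZXJl")
;; => "body here"
```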
#2018-07-0901:08oscar@eoliphant Have you added */* as a Binary Media Type?#2018-07-0901:10eoliphantah hell lol#2018-07-0901:10eoliphanti was having another issue and recreated the gateway#2018-07-0901:10eoliphantforgot to do that, again.. thanks#2018-07-0901:59eoliphantbeen pulling my hair out all day trying to get logging working via a logback appender for loggly. And just realized that most of the typical java ecosystem stuff will probably never work. Since most of it depends on classpath scanning, etc etc, so when your ions deploy all that's already taken place.. ugh.. Maybe a good use case for modules 🙂#2018-07-0908:06pradyumnahi, is there a preferred strategy to manage data locally in a standalone application, which is ordinarily online to access several other datomic databases. When offline the app should be able to still perform with whatever information it has cached. It should be able to store locally some of the work and then try to update the remote databases as applicable. Of course, the issue of conflict needs to be addressed in a sane way. I was thinking may be have a local datomic instance to serve as a cache for multiple remote datomic instances. or is there something better and simpler.#2018-07-0908:26steveb8n@pradyumna take a look at the AWS AppSync javascript lib. It handles all of these requirements for you, including conflict resolution. I’m helping out on a project which hopes to expose Ions using AppSync so it should fit pretty well#2018-07-0908:50pradyumnathanks @steveb8n. i checked this. unfortunately its not exactly fitting in. its clojure (jvm, not javascript)#2018-07-0912:55eoliphantYou’d probably have to implement this yourself @pradyumna like most db’s there’s no explicit support for that use case AFAIK. You could potentially use something like onyx for moving the updates between databases, but you’d be on the hook for conflict resolution, etc#2018-07-0913:59eoliphantHi, I’m still trying to get some form of logging working.
In the course of this I’ve run into another issue. Given what I mentioned previously about commons/slf4j/etc stuff probably never working, I tried creating a custom logger with timbre, that just fires entries via REST into loggly. I’m using cljs-ajax for this, and it works fine in local dev, but when I call in now, i’m getting a classnotfoundexception for org.apache.http.HttpResponse, so there are presumably some classloader conflicts there. I noticed that the ion-event-example uses some cognitect http-client lib, but I can’t seem to find it in any of the repos#2018-07-0914:08stuarthallowayhi @eoliphant I would stand down, the next release will solve this.#2018-07-0914:09eoliphantah awesome 🙂#2018-07-0914:10eoliphantions are pretty friggin cool. This was the only real nitnoid so far#2018-07-0914:26stuarthallowaythat's great to hear, thanks!#2018-07-0917:51oscarWhen I instantiate a Client, the following is printed:
Reflection warning, cognitect/hmac_authn.clj:80:12 - call to static method encodeHex on org.apache.commons.codec.binary.Hex can't be resolved (argument types: unknown, java.lang.Boolean).
Reflection warning, cognitect/hmac_authn.clj:80:3 - call to java.lang.String ctor can't be resolved.
It returns the Client, though, and it seems like there aren't any issues. I just want to know if this will be a problem.#2018-07-0919:20stuarthallowayhi @U0LSQU69Z what version of Clojure and Java are you running?#2018-07-0919:25oscaropenjdk version "1.8.0_172"
clojure "1.9.0"#2018-07-0919:53stuarthallowaythat won't harm anything, but I will squelch it in a future build#2018-07-0919:58oscarCool. Thanks!#2018-07-0920:26timgilbertHi everybody, I could have sworn I saw a project on here that you could point to a datomic database and get a GraphViz diagram of the schema, but now I can't seem to find it. Anyone remember it?#2018-07-0921:02nilpunningThink I played around with this a while back. https://github.com/felixflores/datomic_schema_grapher#2018-07-0920:58eoliphantHey @stuarthalloway, I think there may be an issue with ionized lambdas’ handling of OPTIONS requests. For this ‘echo’ ion
(defn api-request*
  "lambda entry point"
  [{:keys [headers body request-method]}]
  (try
    {:status 200
     :body (json/generate-string request-method)}
    ....
I get “post” “get”, etc just fine but a "message": "Internal server error" for an OPTIONS request, need that since API gateway expects the lambda to respond to the CORS preflight stuff#2018-07-0921:23oscarDo you have an OPTIONS method next to your ANY method in API Gateway?#2018-07-0921:27oscarSomething similar was happening to me because I hit "Enable API Gateway CORS". If you did the same, delete the OPTIONS method and handle it in your Ion. This is happening because AWS matches your OPTIONS request content-type to */* and base64 encodes it. The "Mock Integration" that the preconfigured CORS handler generates expects JSON and throws when it can't parse.#2018-07-0921:41eoliphantyeah that’s exactly what I did#2018-07-0921:43eoliphantthat did it. thanks!#2018-07-0921:44oscarNo problem!#2018-07-1000:30johnjhttps://docs.datomic.com/cloud/whatis/data-model.html#sec-5#2018-07-1000:31johnjis that saying its ok to use :person/name instead of :customer/name or :employee/name ?#2018-07-1000:34johnjand differentiate between customer or employee by other attributes? for ex: :person/department, :person/role for employees.#2018-07-1000:36johnjjust confused if that advice is given for prototyping or is idiomatic to do so#2018-07-1001:30miridiusSeems like Datomic Cloud (with :server-type :ion) doesn't like namespaced database names?#2018-07-1001:41miridiusI also can't seem to list-databases (https://docs.datomic.com/client-api/datomic.client.api.html#var-list-databases) to work 😞
(def cfg {:server-type :ion,
          :region "us-east-1",
          :system "dev",
          :query-group "dev",
          :endpoint "",
          :proxy-port 8182})
=> #'user/cfg
(def client (d/client cfg))
=> #'user/client
(d/list-databases client)
CompilerException java.lang.IllegalArgumentException: No single method: list_databases of interface: datomic.client.api.Client found for function: list-databases of protocol: Client, compiling:(/tmp/form-init5702408163425337762.clj:1:1)#2018-07-1002:17euccastroI'm having the same error as this user: https://forum.datomic.com/t/issue-retrieving-com-datomic-ion-dependency-from-datomic-cloud-maven-repo/508
except that I can't download the jar at all:
[email protected]#2018-07-1002:21euccastrowget works, though, so I'm baffled#2018-07-1002:23euccastroFWIW, this is my version of AWS Tools:
[email protected]#2018-07-1002:44euccastroif I either add --no-sign-request to the aws invocation or if I add read permission for arn:aws:s3:::datomic-releases-1fc2183a/* in the IAM group of the user I have credentials configured for, then the aws cp succeeds, but I still get the same error when trying to run clj#2018-07-1003:16euccastroif I rename my ~/.aws/credentials to something else, then last cause in the traceback becomes Caused by: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain instead of Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 542B5E955147817A; S3 Extended Request ID: (elided), so, unlike in the forum report above, it seems the right credentials are getting picked up by clj#2018-07-1003:17jarethttps://forum.datomic.com/t/issue-retrieving-com-datomic-ion-dependency-from-datomic-cloud-maven-repo/508/6#2018-07-1003:18jaretIf you set your aws resources file in the terminal does aws s3 cp work?#2018-07-1003:19jaretnvm. I see your results#2018-07-1003:49euccastroit seems I was missing at least one other permission, GetBucketLocation for <s3://datomic-releases-1fc2183a> . actually, if I go lazy and allow all S3 ops for all buckets and objects, then clj works. I created a new AWS IAM user for this tutorial, and I only assigned it the datomic-admin-$APP-$REGION policy. I guess most people just use their existing AWS credentials, which have access to everything, and that's why they don't get bitten by this issue?#2018-07-1003:50euccastrobottom line: I think permissions to access the datomic repos should be given to the autogenerated datomic-admin-...
policies#2018-07-1003:50euccastroI can continue to pinpoint the exact permissions if that's useful#2018-07-1003:51jaretWere you using an old account for AWS? You have to have an account that supports EC2-VPC#2018-07-1003:52jaretIf your AWS account was prior to DEC 4 2013 it wouldn’t support EC2-VPC#2018-07-1003:52jaretThat wouldn’t be it nvm#2018-07-1003:52jaretI need to look again at what perms are needed#2018-07-1004:00euccastro@jaret: adding these permissions solved it for me:
{
  "Sid": "VisualEditor2",
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:GetBucketLocation"
  ],
  "Resource": [
    "arn:aws:s3:::datomic-releases-1fc2183a",
    "arn:aws:s3:::datomic-releases-1fc2183a/maven/releases/*"
  ]
}
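For context, the bucket in that policy is the S3 Maven repo the Ions tutorial's deps.edn resolves Datomic Cloud artifacts from. A sketch of the relevant deps.edn fragment (the repo URL matches the bucket above; the artifact name and version are illustrative placeholders, check the current release notes for the real coordinates):

```clojure
;; deps.edn fragment (sketch). Resolving from this repo is what requires the
;; s3:GetObject / s3:GetBucketLocation permissions discussed above.
{:mvn/repos {"datomic-cloud" {:url "s3://datomic-releases-1fc2183a/maven/releases"}}
 :deps {com.datomic/client-cloud {:mvn/version "VERSION-FROM-RELEASE-NOTES"}}}
```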
#2018-07-1004:25euccastroI got the following error trying to perform the initial push in the ions tutorial:
{:command-failed "{:op :push}",
 :causes
 ({:message
   "Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 2AF8C01FF6D0B032; S3 Extended Request ID: (elided)",
   :class AmazonS3Exception})}
adding the following permissions fixed it:
{
  "Sid": "VisualEditor3",
  "Effect": "Allow",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::datomic-code-3a1b169a-4a28-4693-8e32-891f20e65112/*",
    "arn:aws:s3:::datomic-code-3a1b169a-4a28-4693-8e32-891f20e65112"
  ]
}
#2018-07-1004:31euccastronitpick: I now get the following error when trying to do the initial push. it's obvious that I need to commit the addition of the ion-config.edn, but the tutorial doesn't mention it
{:command-failed "{:op :push}",
 :causes
 ({:message
   "You must either specify a uname or deploy from clean git commit",
   :class IllegalArgumentException})}
#2018-07-1004:33euccastroanother permissions error. I'm wondering whether I did something wrong in the datomic cloud setup
{:command-failed "{:op :push}",
 :causes
 ({:message
   "User: arn:aws:iam::563900263565:user/deitomique is not authorized to perform: codedeploy:RegisterApplicationRevision on resource: arn:aws:codedeploy:eu-central-1:563900263565:application:deitomique (Service: AmazonCodeDeploy; Status Code: 400; Error Code: AccessDeniedException; Request ID: d00b0b01-83f9-11e8-ad19-ad95a71fbe60)",
   :class AmazonCodeDeployException})}
#2018-07-1004:46euccastroand then some more in CloudFormation and StepFunction, when deploying#2018-07-1005:15euccastrofinally, I get the following error when trying to invoke the API Gateway endpoint via curl: {"message":"Missing Authentication Token"}#2018-07-1005:19euccastronevermind; I was using the URL as it appears in the Invoke URL, so I was missing the /datomic at the end of the path#2018-07-1005:26euccastrowhat's this /datomic suffix all about anyway? should I just add /datomic at the end of any Invoke URLs exposed via API Gateway, or is that set somewhere (that I missed) in the ion-starter project?#2018-07-1005:28euccastroions is awesome BTW; I was just pointing out points of friction in the tutorial, should that help#2018-07-1006:50olivergeorge@euccastro That's a recent bug fix in the tutorial. /datomic can be anything... makes sense if you think about having many routes associated with your endpoint based on request path.#2018-07-1010:15euccastrothanks!#2018-07-1015:58luchiniIf anyone is looking for a super basic, very fast, getting started material for Datomic Ions, I’ve put this together last night: https://twitter.com/tiagoluchini/status/1016698810364461058#2018-07-1015:59eoliphanthi, is it the case that say limit and offset aren’t available in the sync client API?#2018-07-1016:01johnjwhy do you believe that?#2018-07-1016:05johnj@eoliphant https://docs.datomic.com/cloud/client/client-api.html#offset-and-limit#2018-07-1016:07eoliphantyes that’s what I’m looking at. The async api for say q takes a map of the form {:query '[:find ..] :offset .. :limit ..}
The sync api looks like the same list (or map) form as on prem [:find .. :where.. ]#2018-07-1016:10eoliphantyeah it looks like :chunk :offset and :limit are only available for the async api#2018-07-1016:34oscar@eoliphant That's not correct. From the docs "The arity-1 version takes :query and :args in arg-map, which allows additional options for :offset, :limit, and :timeout. See namespace doc."#2018-07-1016:35oscar(arity-1 version (q {:query '[:find ..] :offset .. :limit ..}))#2018-07-1017:07eoliphantah yeah, I didn’t pull the db in to the map#2018-07-1017:16rhansenNeed some help to formulate a query.
In my application, a character can have a set of skills. Those skills can be based off of other skills. And those skills can be based off of other skills again.
How do I write a recursive query which gives me all the skills of a character, but also all the skills those skills reference?#2018-07-1017:16rhansenIf that was confusing I can happily make a better attempt at explaining it.#2018-07-1017:19donaldballYou probably need to use rules: https://docs.datomic.com/on-prem/query.html#rules#2018-07-1017:25rhansenI might be missing something obvious here. But I don't know why this would help 😅#2018-07-1017:26rhansenoh#2018-07-1017:27rhansenI think I get it... opening repl#2018-07-1017:31rhansenNo. I didn't 😞#2018-07-1017:34rhansenI fail to see how rules can be used to form recursive queries 😕#2018-07-1017:45oscarYou set up two rules with the same name. One that is your base case, one that follows your "skills" chain and recursively calls the rule, again.#2018-07-1017:46rhansenahh, ok#2018-07-1017:47rhansenThanks for the heads up 😃#2018-07-1018:09Oleh K.Hi! What is the best way to fill a datomic database with test data?#2018-07-1105:48val_waeselynckTransact the application schema, then transact the test data? It's hard for me to see where what difficulties you're encountering without more context#2018-07-1018:24souenzzoHey, I'm still not on ions 😢
Are there any cons to running datomic on fargate?
Apart from formatting/html issues, is there a problem in this tutorial?
https://www.avisi.nl/blog/2018/02/13/how-to-run-the-datomic-transactor-on-amazon-ecs-fargate#2018-07-1107:19gerstreeOuch, that looks bad. We just moved to a new website/platform, will ping the devs to fix that.#2018-07-1107:20gerstree@U2J4FRT2T I can share large parts of our cloudformation template with you if you like.#2018-07-1020:24fingertoeTrying to follow the “First time upgrade instructions” https://docs.datomic.com/cloud/operation/upgrading.html
I don’t see the “Reuse existing storage on create” option to mark true in my AWS console.. Did they change it on us?#2018-07-1023:57oscar@fingertoe It's there. Are you sure that you copied the storage template?#2018-07-1103:29fingertoeThanks @oscar… I am making progress now..#2018-07-1104:19bmaddyDoes anyone know how to make datomic-free use more memory? I'd like to try -Xms4g -Xmx4g.#2018-07-1104:21bmaddyI can't find anything that says what the max memory amount is for the free version, so I'm not sure it's even possible...#2018-07-1104:44bmaddyNevermind, I found it in the transactor script: bin/transactor -Xms4g -Xmx4g ...#2018-07-1106:38euccastrowhat are the advantages of using entities with :db/ident, as opposed to keywords, for enumerations? is it only that you can assign other attributes to those entities?#2018-07-1106:40euccastrooh and that misspelling a keyword may go unnoticed for longer. anything else?#2018-07-1109:13rhansenI think that's about it. Since datomic isn't really a good fit for huge breaking changes to its schema, those advantages are really nice though.#2018-07-1109:46Andreas LiljeqvistProbably a performance advantage as well#2018-07-1109:47Andreas LiljeqvistDisadvantage is representation, :mykey vs 12312454123#2018-07-1115:07bmaddyI'm trying to rewrite some sql in datalog. Does anyone see what I'm doing wrong here?
;; SELECT sub_type, AVG(duration) AS "Average Duration"
;; FROM trips
;; GROUP BY sub_type;
(d/q '[:find [?st (avg ?d)]
       :with ?st
       :where
       [?e :trip/sub-type ?st]
       [?e :trip/duration ?d]]
     (d/db conn))
I get ArrayIndexOutOfBoundsException [trace missing]#2018-07-1115:09chrisblomdon’t wrap ?st (avg ?d) in []?#2018-07-1115:09bmaddyYeah, that gives the same thing. 😕#2018-07-1115:12chrisblomdrop the :with part#2018-07-1115:16bmaddyThat gives a result, but the sub-types get coalesced
(d/q '[:find [?st (avg ?d)]
       :where
       [?e :trip/sub-type ?st]
       [?e :trip/duration ?d]]
     (d/db conn))
["Casual" 3283.31254089422]
Other sub-types do exist:
(d/q '[:find [?st (avg ?d)]
       :where
       [?e :trip/sub-type ?st]
       [(= ?st "Registered")]
       [?e :trip/duration ?d]]
     (d/db conn))
["Registered" 1145.4663382594417]
#2018-07-1115:52chrisblomah ok, you only get the first result now because in :find you wrap ?st (avg ?d) with []#2018-07-1115:52chrisblomdoes it work if you remove the [...]?#2018-07-1115:55chrisblomSee https://docs.datomic.com/on-prem/query.html#find-specifications#2018-07-1115:57bmaddyHmm, I'm not seeing a ... to remove. Thanks a ton for taking a look at this, btw.#2018-07-1115:57chrisblomah, i meant your query should look like this:#2018-07-1115:57chrisblom(d/q '[:find ?st (avg ?d)
       :where
       [?e :trip/sub-type ?st]
       [?e :trip/duration ?d]]
(d/db conn))#2018-07-1115:59bmaddyAh! So I only need :with if the relvar I'm grouping on isn't included in the :find clause I bet! That totally fixed it!#2018-07-1116:00bmaddyThanks a ton @chrisblom!#2018-07-1116:00chrisblomyeah, usage of :with is a bit tricky#2018-07-1116:00chrisblomthe error message does not help much#2018-07-1116:00bmaddyYeah. I tend to get bewildered by find-specifications also, so I think that contributed.#2018-07-1119:35jarethttp://blog.datomic.com/2018/07/datomic-ion-cast.html#2018-07-1122:56miridiusAwesome! Minor issue: in the metrics section (https://docs.datomic.com/cloud/ions/ions-monitoring.html#metrics) the list of required keys includes "type", but in the example code it uses "units" instead.#2018-07-1122:58miridiusalso the whole monitoring document is repeated twice (it starts over at https://docs.datomic.com/cloud/ions/ions-monitoring.html#sec-9)#2018-07-1213:32jaretThanks for catching that the merging of doc branches somehow duplicated the page. I’ve fixed it.#2018-07-1204:28euccastroI'm trying to make a ring app as an ion. I pushed and deployed an app that uses com.cemerick/friend (admittedly a bit of a stress test). I got the following error when curl -iing the gateway API endpoint:
HTTP/1.1 500 Internal Server Error
Date: Thu, 12 Jul 2018 04:22:05 GMT
Content-Type: application/json
Content-Length: 157
Connection: keep-alive
x-amzn-RequestId: 1f30cc2e-858b-11e8-ac8f-b9d295c18321
x-amz-apigw-id: J5aYVFrQliAFs7w=
X-Amzn-Trace-Id: Root=1-5b46d768-43f8e546292ea7321adbb5a0;Sampled=0
java.io.FileNotFoundException: Could not locate slingshot/slingshot__init.class or slingshot/slingshot.clj on classpath., compiling:(cemerick/friend.clj:1:1)
I have no such problem locally, and slingshot appears in the list of downloaded libraries that got printed when I first deployed:
... {:s3-zip "datomic/libs/mvn/slingshot/slingshot/0.10.2.zip", :local-dir "/home/es/.m2/repository/slingshot/slingshot/0.10.2", :local-zip "/home/es/.cognitect-s3-libs/.m2/repository/slingshot/slingshot/0.10.2.zip"} ... #2018-07-1204:29euccastrothis is my deps.edn, FWIW
{:paths ["src/clj" "resources"]
 :deps {com.datomic/ion {:mvn/version "0.9.7"}
        org.clojure/data.json {:mvn/version "0.2.6"}
        org.clojure/clojure {:mvn/version "1.9.0"}
        com.cemerick/friend {:mvn/version "0.2.3"}
        ring/ring-defaults {:mvn/version "0.3.2"}}
 :mvn/repos {"datomic-cloud" {:url ""}}
 :aliases
 {:dev {:extra-deps {com.datomic/client-cloud {:mvn/version "0.8.54"}
                     com.datomic/ion-dev {:mvn/version "0.9.160"}}}}}
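For what it's worth, one way to deal with this kind of classpath error in Ions is to pin the conflicting transitive dependency as a top-level dependency in deps.edn so a single version wins; a hypothetical sketch (euccastro later reported that moving slingshot, a transitive dep of com.cemerick/friend, to 0.12.2 made the error go away):

```clojure
;; Hypothetical sketch: pin the conflicting transitive dependency
;; (slingshot, pulled in by com.cemerick/friend) at the top level.
{:deps {com.cemerick/friend {:mvn/version "0.2.3"}
        slingshot/slingshot {:mvn/version "0.12.2"}}}
```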
#2018-07-1205:21henrikIs this section in the Datomic tutorial (https://docs.datomic.com/cloud/tutorial/assertion.html#sec-3) missing (d/transact conn {:tx-data (make-idents colors)})?#2018-07-1206:09euccastrotrying to use the session ring middleware with cookie storage seems to break the proxy integration:
Thu Jul 12 06:04:17 UTC 2018 : Endpoint response body before transformations: {"statusCode":200,"headers":{"Content-Type":"text\/plain","Set-Cookie":["ring-session=ECSI%2FAxqP4g3%2F6Lsf6j2gw6iTCd2jVL9CB2n8D%2BsBIY%3D--FweWg7tIHsIfkhtzoKxqC9YvJNtKEjzU%2BQtbF1Qzk20%3D;Path=\/;HttpOnly"]},"body":"T2zDoSAwIQ==","isBase64Encoded":true}
Thu Jul 12 06:04:17 UTC 2018 : Endpoint response headers: {X-Amz-Executed-Version=$LATEST, x-amzn-Remapped-Content-Length=0, Connection=keep-alive, x-amzn-RequestId=68d54df4-8599-11e8-a98d-17a42203bec1, Content-Length=254, Date=Thu, 12 Jul 2018 06:04:17 GMT, X-Amzn-Trace-Id=root=1-5b46ef61-1d46065e28b013d3ba616863;sampled=0, Content-Type=application/json}
Thu Jul 12 06:04:17 UTC 2018 : Execution failed due to configuration error: Malformed Lambda proxy response
Thu Jul 12 06:04:17 UTC 2018 : Method completed with status: 502
#2018-07-1206:47euccastroso just in case this bites someone: it seems like the AWS API gateway doesn't accept a list as a header value, and in general it doesn't accept multiple headers with the same name. a workaround, if you really need multiple headers with the same name, is to return the headers in different upper/lower case combinations (e.g., "Set-Cookie" and "sEt-cOOkiE" will work). you could write a ring middleware that does just that#2018-07-1210:01fmnoiseany thoughts about excision for datomic cloud - is it even planned to implement?#2018-07-1210:41stuarthallowayhi @U4BEW7F61. It is definitely on our radar. https://forum.datomic.com/t/support-for-excision-or-similar/323#2018-07-1211:37henrikI want to model a taxonomy, like this one:
Biology --> Medicine -> Internal
|-> Genetics
|-> Morphology
Eventually, I want to tag stuff with this taxonomy, such as an article entity tagged with genetics for example.
What would be a good way to model the taxonomy?#2018-07-1211:39henrikThis is my current attempt:
[{:db/ident :taxonomy/name
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one
  :db/unique :db.unique/identity
  :db/doc "The title of a taxonomy node"}
 {:db/ident :taxonomy/children
  :db/valueType :db.type/ref
  :db/isComponent true
  :db/cardinality :db.cardinality/many
:db/doc "Children of a taxonomy node"}]#2018-07-1211:40henrikIt works. I’m just not sure if it’s an intelligent way to do it.#2018-07-1213:17chrisblom@henrik that looks reasonable to me#2018-07-1213:44val_waeselynck@henrik if the taxonomy graph is tree-like, :taxonomy/parent instead of :taxonomy/children is probably safer#2018-07-1213:44henrik@val_waeselynck Interesting! How is that safer?#2018-07-1213:45val_waeselynckWell, by having a cardinality-one attribute, you're being more explicit about the model ("a taxonomy has at most one parent")#2018-07-1213:46val_waeselynckAlso seems more reasonable to me that the parents, being more general, don't "know" about their children#2018-07-1213:53jonahbenton@henrik Relatedly, do you need to be able to navigate up the tree from child to parent? And is it possible for the taxonomy to be rich enough for there to be the same or similar names in different parts of the tree?#2018-07-1214:07henrik@jonahbenton Every node, regardless of level, should be entirely unique. Or, if it’s named the same, it is the same. And yes, navigation would have to be bidirectional. But as I understand Datomic, all references are bidirectional, right?#2018-07-1214:26val_waeselynckthey are, in the sense that you can easily navigate in both directions, whatever the query API you're using#2018-07-1214:30jonahbentonSo a given node may have multiple parents?#2018-07-1214:30henrikOh, I see. No, one parent I think.#2018-07-1214:32henrikThis is for categorising science into fields and subfields. Though now you got me thinking about whether modeling it as a network of subjects would be more powerful.#2018-07-1214:36jonahbentonYeah, probably would, though seems like it might depend on the size and the dataset feeding the categorization. 
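As with the skills question earlier in this log, a recursive rule (two rule definitions sharing one name: a base case and a recursive case) can walk a taxonomy like this. A minimal sketch, assuming the :taxonomy/children schema above and a db value bound to db (entity names here are illustrative):

```clojure
;; Sketch: find all descendant taxonomy nodes of a named root.
;; Two rule heads named `descendant`: base case, then recursive case.
(def rules
  '[[(descendant ?node ?desc)
     [?node :taxonomy/children ?desc]]
    [(descendant ?node ?desc)
     [?node :taxonomy/children ?child]
     (descendant ?child ?desc)]])

(d/q '[:find [?name ...]
       :in $ % ?root-name
       :where
       [?root :taxonomy/name ?root-name]
       (descendant ?root ?d)
       [?d :taxonomy/name ?name]]
     db rules "Biology")
```

The rules are passed as the `%` input; datalog's semantics keep the recursion terminating on finite data.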
Tags may be a useful modeling tool to capture commonalities (like the computational-ness of the subfield). Perhaps also include a description attribute.#2018-07-1214:42henrikRight now, I’m looking at basing it on a standard way of categorising (CWTS Leiden, about 250 categories and subcategories).
But just because that particular model is hierarchical doesn’t mean that there isn’t a more powerful way to do it.
The point with this particular taxonomy is to try to keep it small(-ish), using it to create rather large, but interconnected groups of material.#2018-07-1214:46henrikI could essentially model a freer graph in the same way, right? Renaming parent to something like relation.#2018-07-1214:47jonahbentonAh, that sounds neat. It sounds like datomic as a metadata store- this taxonomy applied to source material that lives outside datomic- which I'm thinking about for a project as well.#2018-07-1214:48jonahbentonYes, I believe so, I have seen some "node" "edge" terminology in schemas#2018-07-1214:48henrikYeah, the source material would come from scientific publishers, in the form of articles, journals, books etc. And we have to find a way (many ways, actually), to tie all that disparate information together into a cohesive, consumable collection.#2018-07-1214:48henrikWhat type of material will you be working with?#2018-07-1214:50henrikActually, with edges/nodes, I’m back to a list of relatives, though. Just not necessarily parents.#2018-07-1215:03jonahbentonThat sounds neat! Lots of interesting problems there. For me, as a side project, I'm looking at reimplementing a container artifact metadata api. The api is from a project called Grafeas: https://grafeas.io/ which acts as a metadata repository around container usage, vulnerabilities, deployment history, stuff of that nature. The basic technical idea is that grafeas is one of many projects in the container ecosystem that are glorified packagings of go code generated from protobufs. I like go, but when it comes to code generation, it's an awkward workflow, and the go people argue about checking code into the repo, doing it at build time, yadda yadda. It seems to me that in the clj space, you should have a pretty clean workflow of generating schema and data models from protobuf for the different layers -> spec, apis, datomic schema- and that should be sufficient to yield something of a working system. 
I don't see any of that tooling right now, so that's what I'm looking at.#2018-07-1215:19henrikCould you summarize the problem and the value proposition for me? I don’t think I’m familiar enough with the problem to fully understand the solution.#2018-07-1216:08jonahbentonKind of you to ask, it's niche, so the explanation is a little long:
Companies/orgs that run applications- api-type services and scheduled/batch jobs- have been "containerizing" their applications. Once you have containerized, there are a whole set of questions you'd like to be able to ask about your fleet, some operational, some security related, etc. Do any of the jvm applications I'm running use the vulnerable version of struts? If so, where are they in my network and for how long have they been running? How many of my applications have had vulnerabilities reported against their dependencies? What third party libraries are my service applications consuming, and are any of those licenses GPLV3?
In even a small plant you wind up wanting to have a metadata repository into which that sort of operational and security data can be pushed, and against which one can run queries. Beyond that, you want to be able to plug other consumers and providers into that repository. You want to be able to use vulnerability scanner X and build tool Y and signing tool Z, and Google has succeeded in getting commitments for adoption of this particular metadata API by various players in this ecosystem.#2018-07-1216:09jonahbentonI'm curious about this as a side project, as I do some work in security and have been enthralled with containers and kubernetes.
From a product standpoint, it seems like Datomic should be a good fit for this sort of metadata, both for storage and for query. Having a fundamentally immutable store that knows-when-you-knew-something is useful for security, and datalog is more capable than many other languages from a query perspective.
On a technical level, I'm curious about the ergonomics of going from protobuf->spec, protobuf->api, protobuf->datomic schema, and am curious about data-driven systems in general. There is a project called "vase" from the Cognitect folks which was an experiment in building a fully data-driven api + database. Write as little code as possible, describe the system entirely using data, how far can you go with that? So on a technical level I'm basically curious whether protobuf is a feasible "front end" with vase as a "back end".#2018-07-1222:39henrikThank you for the description, that does like an interesting (and hard) problem. I can see how managing tons of containers quickly takes on qualities of cat herding.
I remember the Vase introduction from a Cognicast way back. “Because it sits on top of Pedestal.”
In the more abstract, it’s interesting to try to imagine how to keep some of the ergonomics of Clojure once you pass the border of the application. Philosophically, a function and a container have sort of morphological similarities, but the environment is as different as that of a one-cell organism to that of an animal.#2018-07-1318:11jonahbentonAgree! Very interesting.
Working in clj on applications that will get deployed into k8s, one can't avoid engaging in thought experiments about a repl that directly creates and interacts with k8s resources in a first class manner. The repl and kubectl are equivalent levels of abstraction. One can imagine having a way to produce a pseudo clj namespace from a container image + a swagger spec, so loading that namespace under the hood spins up a container, and calling functions turns into (cross-language) service calls.
Certainly we've seen movies like this before; when abstractions are similar but not equivalent the pain is often greater than the benefit. But still interesting to think about.#2018-07-1214:42rhansenHmm... I have a list of references, and I want to check if those references all belong to a certain entity. What would be the best way to construct such a query?#2018-07-1214:46val_waeselynckwhat does it mean for a reference to belong to an entity?#2018-07-1214:47rhansen[?entity :person/friends ?some-ref]#2018-07-1214:47val_waeselynck@rhansen I would use a Datalog query to list or count those that don't#2018-07-1214:49rhansenInteresting. Thanks.#2018-07-1214:49euccastroI've done the ring wrapper I mentioned above. it's only tested in the REPL (and by deploying to ions, of course) so far, but I hope it's useful if you're tinkering with hosting a ring web app in ions: https://github.com/euccastro/expand-headers#2018-07-1214:58val_waeselynck@euccastro sorry, I don't follow what problem you are addressing?#2018-07-1214:59euccastro@val_waeselynck are you talking about my response to you or about the github repo I mention above?#2018-07-1215:00val_waeselynck@euccastro my response to you#2018-07-1215:01euccastrooh sorry I think I misunderstood your question to @rhansen#2018-07-1215:03euccastroI've deleted my responses since they only add noise#2018-07-1215:04val_waeselynckah ok 🙂#2018-07-1215:04val_waeselynckdebugging human conversations#2018-07-1217:30euccastroFWIW, the problem I mention here (https://clojurians.slack.com/archives/C03RZMDSH/p1531369681000042) persists if I manually add to my own deps.edn a dependency on the same slingshot version as cemerick.friend does (0.10.2), but for whatever reason it doesn't manifest if I upgrade the slingshot dependency to the current version, 0.12.2#2018-07-1217:33oscar@euccastro Upgrade to the newest Ions. It sounds like you have dependency conflicts. 
https://docs.datomic.com/cloud/ions/ions-reference.html#dependency-conflicts#2018-07-1219:09euccastrothanks @oscar!#2018-07-1219:10stuarthallowayHi @euccastro! If that does not work you should be able to spot an error in the logs, per https://docs.datomic.com/cloud/operation/monitoring.html#searching-cloudwatch-logs#2018-07-1219:15euccastrothanks @stuarthalloway! I just noticed I'd missed that whole "Operation" section of the docs 😛#2018-07-1303:42shoHi @euccastro, have you managed to create a ring app as an ion? Does it work just fine with your hack for the headers problem? I'm just trying to do the same exercise and curious what to expect.#2018-07-1304:48euccastroso far it works fine. as you may have seen in the #datomic channel, I have stumbled into some dependency problems too, but so far I'm managing by paying attention the first time I push a version that introduces a dependency and manually declaring any conflicting dependencies#2018-07-1304:49euccastrosee this (not ions specific) for how to associate a domain name to your API Gateway app: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html#2018-07-1304:53euccastroalso, if you want to be able to serve the root (/) directory, you need an additional ANY method in the root (/) resource of your API Gateway. the ions tutorial doesn't get into that. you shouldn't remove the /{proxy+} resource, though. AFAICT both are needed#2018-07-1304:55euccastroall that said, I haven't tested much functionality yet, only that basic ring handlers work#2018-07-1304:55euccastrogoogle "keep aws lambda warm" for another important consideration if your app is user-facing or otherwise latency sensitive#2018-07-1304:57euccastrothe good thing about these hoops is that you only need to jump through them once I think. 
I haven't touched my API Gateway configuration at all since I initially set it up, and I don't expect to have to worry much about it#2018-07-1305:01euccastrohttps://datomique.icbink.org where I'm testing these things. that is backed by ions (solo deployment). the counter (refresh the page) is kept in the cookies, and the list of accessed paths is kept in a local atom (note that any process-local state gets lost on deployments though)#2018-07-1305:03euccastrothis is my ring handler ATM FWIW:
(def log (atom []))

(defn ring-handler
  [{:keys [headers body uri params session]}]
  (if (= uri "/favicon.ico")
    {:status 404
     :body "Not found!"}
    (do
      (swap! log conj uri)
      (let [count (get session :counter 0)]
        {:status 200
         :headers {"Content-Type" "text/plain"
                   "p-ro-va-heaDers" ["a" "b" "c" "d" "e"]}
         :body (str "Olá " count "-" (pr-str @log) "!")
         :session (assoc session :counter (inc count))}))))

(defn dup [xs]
  (conj xs (first xs)))

(defn wrap-add-cookie [handler]
  (fn [req]
    (update-in (handler req) [:headers "Set-Cookie"] dup)))

(def ring-app
  (-> ring-handler
      (wrap-session {:store (cookie-store {:key "a 16-byte secret"})})
      wrap-keyword-params
      wrap-params
      wrap-add-cookie
      wrap-expand-headers))
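The wrap-expand-headers middleware here comes from the expand-headers repo linked above; the gist of the case-variation workaround could be sketched like this (hypothetical code, not the actual library source):

```clojure
(require '[clojure.string :as str])

;; Sketch: turn a list-valued header into several single-valued headers
;; whose names differ only in upper/lower case, since API Gateway rejects
;; duplicate header names but treats case variants as distinct.
(defn case-variations
  "Lazy seq of distinct case variations of header name s."
  [s]
  (let [lower (str/lower-case s)]
    (->> (range (bit-shift-left 1 (count lower)))
         (map (fn [mask]
                (apply str (map-indexed
                            (fn [i c]
                              (if (bit-test mask i)
                                (Character/toUpperCase c)
                                c))
                            lower))))
         distinct)))

;; (case-variations "ab") => ("ab" "Ab" "aB" "AB")

(defn expand-headers
  "Replaces any collection-valued header with one header per value,
  each under a different case variation of the name."
  [headers]
  (into {}
        (mapcat (fn [[k v]]
                  (if (coll? v)
                    (map vector (case-variations k) v)
                    [[k v]])))
        headers))

(defn wrap-expand-headers [handler]
  (fn [req]
    (update (handler req) :headers expand-headers)))
```

Since HTTP header names are case-insensitive to clients, the variants all read back as the same header.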
#2018-07-1305:04euccastroas you see I've been mostly tinkering with multiple header values and ring handlers, not doing anything fancy yet#2018-07-1305:08euccastroI'm pushing my experiments here if you're interested (ignore the /old folder): https://github.com/euccastro/semente#2018-07-1306:36shoSorry I've been offline for lunch. All of your information is very helpful, especially because I haven't found anyone else doing the same stuff yet.#2018-07-1306:50shoI'm still not 100% convinced whether the approach of building a ring handler behind API Gateway is the best decision for me, but the alternative would be doing auth with AWS Cognito, which means throwing away a good chunk of Clojure code and moving away from the Clojure ecosystem.#2018-07-1306:58shoSo I want to first try my server-side code with Buddy auth as a ring ion.#2018-07-1307:08shoAbout java cold start, I'm thinking about dispatching an event to knock the ion app right at the moment users visit my static site on CloudFront and having one ion handle all of my api requests that requires both authentication and authorization. Not sure if this is a good strategy, but I plan to try it and examine the latency problem with my eyes.#2018-07-1307:14shoI'll be out for a few days, but if I happen to find anything valuable, I'll ping you and share the info. Cheers.#2018-07-1321:43euccastrothanks!#2018-07-1219:15euccastro(btw it did work)#2018-07-1221:09eggsyntaxIs anyone aware of any writing or documentation out there about guarding against malicious datomic queries, especially preventing queries with too great a performance impact? I don't think it makes sense to naively expose queries entirely to the public (or semi-public in my case, ie logged-in users, with only signups vetted). But I'm interested in seeing what's been written on the subject. 
Didn't find anything relevant on a quick review of the datomic docs.#2018-07-1621:05timgilbertI've thought about this a lot, but never found much in the way of writing on the subject. In general the problems are similar to problems that other graph databases also face. But there's not tons of general literature available for those either.#2018-07-1621:07timgilbertAt my company we did go through an exercise of parsing pull queries and then limiting specific queries to a certain depth and doing other validations on them#2018-07-1621:08eggsyntaxThanks, Tim! Any particular tips/gotchas on that process?#2018-07-1621:08timgilbertBut we eventually moved to keeping all the queries on the server where we could control them, and then moving to a GraphQL interface which has its own set of issues#2018-07-1621:09eggsyntaxHeh. We've been doing some exploration on a new project, and I had put off making DB decisions. I added GraphQL so I could support client-side "pull"-specification. Now that I've decided to go with datomic, I'm dropping GQL like a hot potato 😉#2018-07-1621:10timgilbertOne thing that we ran into a bunch was trying to figure out how to guard against attacks where a user is able to escape her own company and start getting data about another person's company by backref-linking through a shared entity#2018-07-1621:10eggsyntaxIt doesn't seem like GQL really provides any inherent support for limiting query specification impact either, seems like you're left facing the same problem.#2018-07-1621:10eggsyntaxBut not the backref aspect I guess, huh?#2018-07-1621:11timgilbertIf you decide to expose some of your datomic stuff via lacinia, we open-sourced a library that does some of the grunt work for you: https://github.com/workframers/stillsuit#2018-07-1621:12eggsyntaxHmm, seems like one option (for datomic) would be to parse the pull and look for backrefs, and then just reject any calls that had them.#2018-07-1621:12timgilbertYeah, except in cases where you actually need 
them, like you have a project and are looking for all users with :person/project ?p#2018-07-1621:13timgilbertAnyhow, we thought about it for a while and eventually decided keeping the queries on the front-end was going to be a black hole of engineering time#2018-07-1621:14timgilbertI think there are ways you could work around it, like have a "dev mode" where the client sends them over and a "prod mode" where they are replaced by keywords or something#2018-07-1621:17eggsyntaxYeah, I can definitely see the possibility of it becoming a terrible timesuck. The keyword approach seemed promising to me too.
This is a bunch of really useful info for me. May save me from going down some wrong roads. I really appreciate it :man-bowing:#2018-07-1621:18timgilbertWe were also thinking about moving to a multi-tenant setup where user data from different orgs was stored in entirely separate databases, which would have been easier to do on day 1 than day 638 or whatever#2018-07-1621:18eggsyntaxAh, yeah, no doubt.#2018-07-1621:18timgilbertNo prob. I'd say definitely give it some thought, you might stumble on something we didn't, and I'll look forward to reading your blog post about it 😉#2018-07-1621:21eggsyntaxSeems like maybe writing your schema explicitly to avoid the need for backrefs in client requests might work, although I'm not at all sure of that.
Or maybe you could take an approach like disallowing certain things like backrefs, but being able to pass keywords that tell the server to include datomic rules that provide just the backrefs that you need.#2018-07-1621:22eggsyntaxie hide the potentially dangerous stuff behind keywords and disallow it in client requests, but then expose the full range of non-disallowed stuff for the client, for the sake of power.#2018-07-1621:32timgilbertIt's possible, yeah. Starting with a subset seems like a promising approach, or maybe a query DSL that you could validate and then translate back into pull syntax on the server side#2018-07-1221:12eggsyntax(this is re: on-prem / peer, btw)#2018-07-1222:01jonahbentonFor reads, in terms of constraining cpu/ram resource utilization- in the peer architecture, the query processing is happening wholly in your app, so this is under your control. You can give inbound requests as much or as little time as you want on a thread, then cancel; or retrieve only a limited number of results, or whatever...#2018-07-1222:02eggsyntaxFor sure! I'm just wondering if there are some examples of approaches that people have taken to that, that may bring up datomic-specific considerations that I have thought of.#2018-07-1303:09eoliphanti’m seeing this error for a largish (~500 datoms) transaction
"java.lang.IllegalArgumentException: No implementation of method: :value-size of protocol: #'datomic.cloud.tx-limits/ValueSize found for class: java.lang.Integer\n\t
Any ideas what this might be?#2018-07-1311:15stuarthalloway@U380J7PAQ somehow a wrapped Integer showed up where it should not, probably should be a primitive long. Let me know if you can make a small repro. I doubt this has to do with tx size.#2018-07-1321:53eoliphanthmm, will do some digging. I’m using transit to sneak edn in and out of my ions. ran into the customary issues/surprises on my cljs client, might be something similar on the server#2018-07-1518:36eoliphantOk so.. um.. that was in fact the problem… but it was weirdly intermittent lol. I’m uploading some info off of a gene sequencer. I’d totally forgotten that my parser on the server was in fact calling Integer/parseInt to set the value of the associated datoms But, it frequently worked just fine. Changing to Long/parseLong did fix it.
Gonna finish this stuff up. Then try to go back and see if I can get a consistent test case#2018-07-1320:49souenzzoHello
I'm on "classic peer"
I have a datomic function :empty-query? that is pretty simple
(def empty-query?
  (d/function '{:lang :clojure
                :requires [[datomic.api :as d]]
                :params [db query & args]
                :code (when-not (->> (into [db] args)
                                     (hash-map :query query :args)
                                     (d/query)
                                     (empty?))
                        (throw (ex-info "FAIL" {})))}))
But some queries produce different results on the peer and on the transactor
For example
'[:find ?e
  :in $ ?ignore-set
  :where
  [?e :app/foo]
  (not [(contains? ?ignore-set ?e)])]
on the peer (d/with and d/transact on "mem") it works "as expected"
on the transactor (d/transact on "dev") it always returns "empty?"
Then I changed to
'[:find ?e
  :in $ ?ignore-set
  :where
  [?e :app/foo]
  [(contains? ?ignore-set ?e) ?q]
  [(ground false) ?q]]
That second one always returns the same results ("as expected") on transactor and on peer.
Is it a bug?#2018-07-1612:55souenzzoBUMP.
It's causing me concurrency problems and there is no simple way to test if the query will work on the transactor or not #2018-07-1704:03souenzzohttps://forum.datomic.com/t/inconsistency-between-query-on-peer-and-transact/548#2018-07-1723:20souenzzoAny hope on this?#2018-07-1911:42stuarthalloway@U2J4FRT2T I have reproduced this but have not fully isolated it yet. The workaround with ground seems sound.#2018-07-1912:40souenzzoThe worst part is that I can't test whether my query will work or not. The only way to test it is against the dev/free transactor, and it's way slower.#2018-07-1915:22souenzzoWill there be an issue to fix that? @U072WS7PE#2018-07-2105:25souenzzo@U072WS7PE any news on that? Is it a bug that will be fixed? Will I need to always run all my tests on datomic:free? Should I make a repo to reproduce it?#2018-07-2202:14souenzzo@U072WS7PE here's a repo to reproduce the bug
https://gist.github.com/souenzzo/c7b5a5434d4c04efcc58802c81b46023#2018-07-1415:16Björn EbbinghausIs it wise to store sequential data in datomic? Like a log.
I need to store sequences of sequences of events. Like this:
{:user1 [[:a :b] [:a :b :c]]
 :user2 [[:b :c]]}
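One commonly discussed pattern for ordered data in Datomic (see the forum thread on ordered lists) is to model each event as its own entity with an explicit position attribute; a hypothetical schema sketch for the shape above:

```clojure
;; Hypothetical sketch: events as entities, ordered by an explicit index.
[{:db/ident :event/sequence
  :db/valueType :db.type/ref
  :db/cardinality :db.cardinality/one
  :db/doc "The sequence (e.g. one of a user's logs) this event belongs to"}
 {:db/ident :event/index
  :db/valueType :db.type/long
  :db/cardinality :db.cardinality/one
  :db/doc "Position of the event within its sequence"}
 {:db/ident :event/kind
  :db/valueType :db.type/keyword
  :db/cardinality :db.cardinality/one
  :db/doc "The event itself, e.g. :a, :b, :c"}]
;; Reading a sequence back means querying its events and sorting by :event/index.
```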
#2018-07-1713:13stuarthallowayhi @U4VT24ZM3! There is some discussion of patterns for sequential data at https://forum.datomic.com/t/handling-ordered-lists/305#2018-07-1416:18miridiusMy clj tool can't seem to download the com.datomic/ion jar from S3. Even if I directly clone the ion-starter project and then try to run clj, it gets a 403 from Amazon S3:
$ git clone && cd ion-starter
$ clj
Error building classpath. Failed to read artifact descriptor for com.datomic:ion:jar:0.9.16
org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for com.datomic:ion:jar:0.9.16
<snipped>
Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.datomic:ion:pom:0.9.16 from/to datomic-cloud (): Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 6F26D77435731E93; S3 Extended Request ID: bcJFpRXI081lRtaNVQeMMyrTWhU+wbqWfwOk/YjCD+m5t0mfCwHFWcGdqVYAbMK75k5S4Ei9Y4M=)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:422)
...#2018-07-1416:45miridiusok looks like it was an AWS permission issue. My user was in the datomic-admin-<system name> group but evidently that's not enough, I gave it the AdministratorAccess policy and now it works :+1:. Figuring out exactly which permission was missing is an exercise for later, I guess 😁#2018-07-1417:00miridiusis it possible to deploy multiple ions applications to the same datomic cloud system? I suppose they would at least have to have the same name, since you can't do a push if the ions application name doesn't match the system's application name#2018-07-1418:10chris_johnson@miridius in my experience you can, though you might need to have one top-level ion-config.edn that knows about all the applications#2018-07-1418:49eoliphanthow does that work exactly? I think i tried, setting :app-name to something arbitrary and it didn’t seem to like that#2018-07-1502:05miridiusIf I try to download the bundle to my own machine using aws s3 cp then it works fine#2018-07-1506:29henrikIs it possible that Datomic Cloud will be available on Google Cloud eventually?#2018-07-1600:45eoliphantin @stuarthalloway’s longer talk on ions he sort of alluded to it as a possibility if there’s sufficient demand, etc etc. As it stands, it’s very “AWS’ey”#2018-07-1616:29henrikRight! Well it’s not a HUGE problem. I intend to make use of some very Googly services. They’re not realtime though, so calling them from AWS is not insurmountable. Nevertheless, it would be nice to be a bit more consolidated.#2018-07-1511:18eraad@stuarthalloway Hi! There is a 404 error in https://www.datomic.com/details.html. The “Learn more” link in Hierarchical.#2018-07-1516:13jaret@eraad I’ll have to fix that on Monday. But it should link here: https://docs.datomic.com/cloud/schema/schema-modeling.html#2018-07-1516:13jaretThanks for the report!#2018-07-1603:55eoliphantso, about that :app-name in ion-config.edn lol. As far as I can tell so far, that must be the same as your datomic cloud system name? 
Curious because I've been cranking away, and have enough ion code that I'd probably like to break it out into separate projects that would still be installed in the same system/instance/whatever. Is that possible at this point?#2018-07-1701:18stuarthalloway@U380J7PAQ you can set the application name when you create a system, see https://docs.datomic.com/cloud/ions/ions-reference.html#ion-config. If you have N library projects and 1 app project, the app project should have the ion-config.edn file.#2018-07-1701:21eoliphantok, but basically there can be only one app project per ‘system’? Where a system is a deployed instance of Datomic Cloud?#2018-07-1701:22eoliphantI was thinking (hoping 😉 ) that I could install multiple apps, not just libs that a single app uses. I may be looking at this incorrectly. I’m moving what used to be 4-5 clojure/datomic microservices into ions. I was thinking they’d each be an ‘app’ in a given ‘system’.
I’m about to turn a scrum loose on this but will have others coming online in the next quarter or so.#2018-07-1712:17eoliphantand on that note, lol, any ETA on query groups?#2018-07-1712:26stuarthallowayworking on it 🙂 Because it is a CloudFormation change, it requires more coordination with AWS#2018-07-1712:27stuarthallowayI am recommending "popup solo system per dev who needs isolation", query groups will provide another axis here.#2018-07-1712:28stuarthallowayvery interested in your feedback on mapping the tech to a team workflow, and already working on improvements in this area as well#2018-07-1721:08eoliphantyeah, i’d already adopted the solo per dev approach. which makes things pretty awesome. self contained env. The query groups stuff is more about app segmentation, while all the code is hanging out in a compute/query group or whatever. I’d still like to have the ‘microservice feel’ lol. Where my ‘apps’ have their own db’s etc etc. It’s a little brain twisty. But yeah will definitely keep you apprised. Fortunately the first app is small enough for this to work.#2018-07-1604:46chris_johnsonOkay, so this has taken far longer than I wanted to get into a state I am okay sharing, but here is an early draft: https://github.com/hlprmnky/ion-appsync-example#2018-07-1604:47chris_johnsonFull-stack GraphQL example backed by the ion-starter code and data set. Thanks to @steveb8n for his work on the Cognito-aware SPA client.#2018-07-1604:47chris_johnsonI’m just about to crash and then go catch a plane for a short vacation, but I will try to remember to post this to the Datomic forum tomorrow as well. Cheers!#2018-07-1618:06henrikThat’s great, thanks for writing this up!#2018-07-1605:14steveb8nNice to see it come alive @chris_johnson#2018-07-1612:19eoliphant
Nice. I'm working on some similar stuff with amplify in a cljs client, and a ring ion entry point via api gateway. And a poc with lacinia, umlaut, etc to see if we can build a "better AppSync"#2018-07-1612:20eoliphantHey, is it or will it be possible to deploy into an existing VPC?#2018-07-1713:11stuarthalloway@U380J7PAQ no current plans for "BYO VPC" -- it is an implementation and support hairball#2018-07-1713:14eoliphantUnderstood, I can imagine, lol. We’re doing some significant re-engineering of our VPCs: one per lifecycle stage, then dedicated ones for transit, logs/secvault, etc. will dig in on how to best integrate datomic cloud’s config#2018-07-1617:20firstclassfuncHey guys, is there a better place to ask Datomic-ION setup questions?#2018-07-1618:54jaret@firstclassfunc here or on the forums. https://forum.datomic.com/#2018-07-1620:48Joe LaneAnyone ever seen an error like this before? "No implementation of method: :value-size of protocol: #'datomic.cloud.tx-limits/ValueSize found for class: java.lang.Integer"#2018-07-1620:48Joe LaneI’m trying to transact some data, which transacts locally, but not from apigw#2018-07-1621:47Joe LaneFigured it out. Datomic Cloud doesn’t (currently) seem to store integers (nor convert them automatically to longs). I had a field that was using ->int from semantic-csv to convert from string to java.lang.Integer.#2018-07-1621:47Joe LaneInstead Datomic Cloud stores longs. Once converting the int to a long it worked great. Hope this helps someone in the future.#2018-07-1622:52eoliphantYep, ran into that myself a few days ago. Weirder thing was that it seemed to be intermittent. I had some code that would call parseInt on a string, transact it in, it would fail in some cases but not others#2018-07-1712:48RodinHi, I'm trying to load about 0.5GB of data into datomic. Can anyone confirm that transact is not lazy, i.e.
when passing it an ISeq, that sequence will be reified into a list and/or the head of that sequence will be held onto?#2018-07-1712:50jaret@rodinhart are you trying to transact that amount of data as a single transaction?#2018-07-1712:53RodinWell, I'd like to. The follow-up question, as expected, would be: how do I batch data that has references to earlier entities?#2018-07-1712:56jaretThat’s almost certainly far too much data for a single transaction. To batch you’ll want to build up batches with some kind of identifier on your entities. Like using lookup refs or unique identities that you create. The :tempids map that is returned from transact can be used to map entities.#2018-07-1712:56RodinAre you confirming transact isn't lazy?#2018-07-1712:57RodinAnd are you saying if I give entities a temp id for :db/id, the return value of transact will give me a mapping from those temp ids to the actual ids in the db?#2018-07-1712:57marshallyes ^#2018-07-1713:00marshall@rodinhart https://docs.datomic.com/cloud/transactions/transaction-processing.html#tempid-resolution#2018-07-1713:01RodinAh, brilliant, very helpful.#2018-07-1717:55Joe LaneAnybody know why, after removing the backslashes for the ion deploy step from the :group tag and other tags, the backslashes still occur in :rev and :uname?#2018-07-1719:29stuarthallowayhi @U0CJ19XAM you should not need backslashes anywhere#2018-07-1719:47Joe Laneclojure -A:dev -m datomic.ion.dev '{:op :push :uname "ch357"}'
Downloading: com/datomic/java-io/0.1.11/java-io-0.1.11.pom from
(cognitect.s3-libs.s3/upload "datomic-code-f070a20d-8cb2-44f6-b83a-a47dd69ed035" [{:local-zip "target/datomic/apps/someapp/unrepro/ch357.zip", :s3-zip "datomic/apps/someapp/unrepro/ch357.zip"}] {:op :push, :uname "ch357"})
{:uname "ch357",
:deploy-groups (someapp-compute),
:dependency-conflicts
{:deps #:com.cognitect{http-client #:mvn{:version "0.1.80"}},
:doc
"The :push operation overrode these dependencies to match versions already running in Datomic Cloud. To test locally, add these explicit deps to your deps.edn."},
:deploy-command
"clojure -Adev -m datomic.ion.dev '{:op :deploy, :group someapp-compute, :uname \"ch357\"}'",
:doc "To deploy to someapp-compute, issue the :deploy-command"}
I have to pull backslashes off of the output of the above command still, I realize you all removed the backslashes from the other commands (thank you!), but these still seem to remain.
clojure -Adev -m datomic.ion.dev '{:op :deploy, :group someapp-compute, :uname "ch357"}'
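The backslashes in the printed `:deploy-command` come from the result map being rendered as EDN, which escapes the quotes inside the string. A crude helper could strip them so the command can be pasted straight into a shell (a sketch, not part of the Datomic tooling; `unescape-deploy-command` is a made-up name):

```clojure
(require '[clojure.string :as str])

;; Sketch: remove the backslash-escaped quotes that EDN printing adds,
;; yielding a command that can be pasted directly into a shell.
(defn unescape-deploy-command [s]
  (str/replace s "\\\"" "\""))

(unescape-deploy-command
 "clojure -Adev -m datomic.ion.dev '{:op :deploy, :group someapp-compute, :uname \\\"ch357\\\"}'")
```
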
#2018-07-1720:01Joe Lane@U072WS7PE Just Tried it again from Vanilla Bash, same issue, now with :rev instead of :uname
bash-3.2$ clojure -A:dev -m datomic.ion.dev '{:op :push}'
Downloading: com/datomic/java-io/0.1.11/java-io-0.1.11.pom from
(cognitect.s3-libs.s3/upload "datomic-code-f070a20d-8cb2-44f6-b83a-a47dd69ed035" [{:local-zip "target/datomic/apps/someapp/stable/a510fee0af59e67a9ba99cfff20c935b7b02d517.zip", :s3-zip "datomic/apps/someapp/stable/a510fee0af59e67a9ba99cfff20c935b7b02d517.zip"}] {:op :push})
{:rev "a510fee0af59e67a9ba99cfff20c935b7b02d517",
:deploy-groups (someapp-compute),
:dependency-conflicts
{:deps #:com.cognitect{http-client #:mvn{:version "0.1.80"}},
:doc
"The :push operation overrode these dependencies to match versions already running in Datomic Cloud. To test locally, add these explicit deps to your deps.edn."},
:deploy-command
"clojure -Adev -m datomic.ion.dev '{:op :deploy, :group someapp-compute, :rev \"a510fee0af59e67a9ba99cfff20c935b7b02d517\"}'",
:doc "To deploy to someapp-compute, issue the :deploy-command"}
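To avoid round-tripping between a REPL and a terminal for these ops, one untested possibility: since `clojure -m datomic.ion.dev` works, the namespace must expose a `-main` entry point, so the same operations can probably be invoked from a connected REPL with the same EDN-string argument (an assumption; the argument shape just mirrors the CLI usage above):

```clojure
;; Untested sketch: `-m` invokes datomic.ion.dev/-main, so calling it
;; directly from a REPL with the same EDN string should behave the same
;; as the shell command (requires the ion-dev deps and AWS credentials).
(require 'datomic.ion.dev)
(datomic.ion.dev/-main "{:op :push}")
```
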
#2018-07-1800:51sho@U072WS7PE I have the same issue as @U0CJ19XAM. Even after the latest update, I still get unnecessary backslashes whenever I "push". For the other ops, this does not occur.#2018-07-1717:55Joe LaneA better question may be, is there a way I can just invoke this library from the repl so I never have to go back to my terminal and mess with the deployment step in a different window?#2018-07-1801:22fdserrOn Prem DB/TX functions: any trick to deploy changes without a Transactor restart (classpath reload)? I’m using `d/function` with a single :require, no closures or multimethods. Many thanks.#2018-07-1813:21marshallis the namespace you’re requiring already on the classpath?
If so, you should be able to install the transaction function and then use it without a restart#2018-07-1902:36fdserrIndeed, with proper env set.#2018-07-1902:38fdserrI can use the fns, but I’d be keen to be able to hot deploy updates. Thanks.#2018-07-1912:51marshallyou can definitely install and use txn functions on a running transactor without a restart#2018-07-2000:21fdserrSo you mean a running Transactor will grab changes in its classpath without a restart? I’m gonna give it another try... anything specific to be aware of? (option, env...)#2018-07-2017:00marshalltransaction functions are not classpath functions#2018-07-2017:01marshallyou can use a classpath function as a txn function#2018-07-2017:01marshallbut you can also install a “regular old” transaction function (not as a classpath fn)#2018-07-1807:28ignorabilisCloud queries: any way to use :limit to get the newest instead of the oldest values?#2018-07-1808:24oscarDatomic doesn't guarantee a return order. Your best bet is to sort the entire set on some criteria (like :db/txInstant) in an ion. It could be a query-fn or a lambda.#2018-07-1815:17ignorabilis@U0LSQU69Z - thanks a lot!#2018-07-1807:36ignorabilisAnd again on cloud - (pull ?eid [(:event/inputs :default false)]) does not return false when there are no records, whereas (pull ?eid [(default :event/inputs false)]) works properly; the first one is in the docs as an example; am I doing something wrong?#2018-07-1813:25marshallwhere in the docs is that example? I believe that might be a syntax issue#2018-07-1813:34marshall=> (d/pull (d/db conn) '[:artist/name (:artist/endYear :default "N/A")] paul-eid)
#:artist{:name "Paul McCartney", :endYear "N/A"}
Seems OK here.
What version of Datomic and client?#2018-07-1813:36marshalljust noticed you didn’t quote your pull expr#2018-07-1813:36marshalloh. interesting. it may not work with false#2018-07-1813:39marshalllooking into it#2018-07-1815:16ignorabilis0.8.56 is the client; I'm not sure about the version of Datomic, but we updated somewhere after the release of ions#2018-07-1815:16ignorabilisthanks 🙂#2018-07-1807:48kirill.salykinHi all, for datomic on-prem, how does one filter based on LocalDate? Because it seems like datomic uses the old java Date.#2018-07-1811:10eoliphantThat's correct. Datomic uses clojure #insts, which are java Dates. Just convert as necessary#2018-07-1811:15kirill.salykinMakes sense, thanks. Would be nice to have all the new java date time things tho#2018-07-1815:23ignorabilisDatomic Ions - Is there a way to ensure that data is sorted by time when being transacted? I.e. instead of sorting it by :db/txInstant over and over again for each query, we want the default functionality of classic SQL databases, where everything is sorted by time by default#2018-07-1815:24Joe LaneWhy do you want that?
Are you sure you need it?#2018-07-1815:24Joe LaneAlso, when you say sorted by time when being transacted, do you mean being queried?#2018-07-1815:25ignorabilisWe have some values that get added over time; we want to get from the db only the latest N values#2018-07-1815:27val_waeselynck@ivan.yaroslavov btw, I recommend you don't rely on :db/txInstant and use a custom attribute for that: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2018-07-1815:29val_waeselynckwhat's more, with a custom attribute, you will be able to use the indexes for that attribute to your advantage, either using comparison clauses in Datalog or seek-datoms#2018-07-1815:36ignorabilis@val_waeselynck - that is ok, the main concern is that we don't want to constantly be sorting hundreds of values; we have an entity that contains a component entity with cardinality/many; we just want to get the latest N values in an efficient way#2018-07-1815:37ignorabilisSo in an ideal world part of the query would be (:my/entity :limit 50 :ascending false); of course :ascending is pseudocode#2018-07-1815:38val_waeselynck@ivan.yaroslavov if the targets of the to-many are entities in the same partition, seek-datoms should sort them in ascending order; you could use some dichotomy algorithm to get the latest#2018-07-1815:40ignorabilisbut we want :ascending false; also could you please elaborate on the dichotomy algorithm?#2018-07-1815:42val_waeselynckit's easier to explain with dates; Datomic's index API only gives you datoms in ascending order. So if you want the first 50 it's easy, but the latest 50 is harder.
However, you can query for the datoms starting from an exponentially decreasing lower bound date until you get to 50.#2018-07-1815:42val_waeselynckE.g. give me the datoms from 1 day ago to now; then from 2 days ago to 1 day ago, then from 4 days ago to 2 days ago, etc.#2018-07-1815:43val_waeselynckuntil you get to 50#2018-07-1815:44favilaor maybe another attribute for indexing#2018-07-1815:44val_waeselynckbut you know, if we're just talking about hundreds of ref-typed values, you might as well realize them all in memory, since they will probably be in the same segment anyway#2018-07-1815:44favilayou could separately store an indexed long which is the date in milliseconds, negated#2018-07-1815:45favilathat would give you a cheaper "newest stuff" index#2018-07-1815:46val_waeselynckthe thing is, you also have to restrict the search to the owning entity#2018-07-1815:46favilad/index-seek before (or at the top of) a query#2018-07-1815:47val_waeselynckor a compound index#2018-07-1815:50ignorabilisok, thanks, we'll try the attribute for indexing#2018-07-1822:11johnjUsing the free transactor, a simple write takes ~15ms on average, is this normal? (on a single machine)#2018-07-1822:19eraadHi! One of my co-workers is thinking about setting up a “long running” Datomic Ion as a Kafka client to process real time events. I see there are a lot of loose ends (how to start it, monitor it, stop it, etc.).
Any feedback?#2018-07-1822:24eraadIt would be cool if Datomic had an Ion configuration option (similar to Lambda and API GTW) called Kafka, so Datomic can manage the long-running process for you.#2018-07-1911:52henrikIs it a good idea to create something like :internet/email, and inject it everywhere for people, companies, what have you, rather than :person/email, :company/email and so on?#2018-07-1911:54chrisblomi prefer specific attributes over generic attributes#2018-07-1911:54henrikWhy?#2018-07-1912:08chrisblomdifferent entities may have different requirements, and it makes it easier to do queries like “give me all the email addresses of users”#2018-07-1912:09chrisblomfor example, for users you may want to use emails as ids, but for companies not#2018-07-1912:09chrisblomor: a user can have only one email, but a company can have more#2018-07-1912:14chrisblomanother issue is that an email address might be used as the :internet/email of both a company and a user. If you want to use this email as an id, you will run into trouble#2018-07-1912:21henrikRight, I see your point.#2018-07-1912:25chrisblomthere are valid use cases for generic attributes of course#2018-07-1912:41henrikI’ve got to think through where to draw the line. Theoretically, you could say that names are unique as well. This person-entity refers to the name-fact “Jane Smith.” As does a bunch of other person-entities.
The utility would be minimal though.#2018-07-1912:02dominicmhttps://docs.datomic.com/cloud/whatis/data-model.html#sec-5
> For example, an application that wants to model a person as an entity does not have to decide up front whether the person is an employee or a customer. It can associate a combination of attributes describing customers and attributes describing employees with the same entity. An application can determine whether an entity represents a particular abstraction, customer or employee, simply by looking for the presence of the appropriate attributes.
I feel like Datomic is encouraging :internet/email here.#2018-07-1912:05henrikI figure URLs and emails should be candidates for uniqueness. You could then theoretically pull out every entity where that URL or email appears.#2018-07-1912:16chrisblomwhat if a company and user share the same email?#2018-07-1912:17henrikThey’ll point to the same datom I suppose. That’s kind of what I contemplate might be a feature.#2018-07-1912:18chrisblomit can be, if that is what you want#2018-07-1912:20chrisblomi’d say they point to the same entity, not datom#2018-07-1912:20chrisbloma datom is a single [entity attribute value …] fact#2018-07-1912:23henrikThis is what I’m trying to wrap my head around at the moment. In words, I’d like to express it as “There is such a thing as an email that is [email address].” And as a separate fact, “Person X has declared that their email is [that email address]”#2018-07-1912:24henrikAlthough, perhaps this is chopping up the conceptual world too finely.#2018-07-1912:41chrisblomIt is an option#2018-07-1912:41chrisblom{:db/id 123
:internet/email "..."}#2018-07-1912:44chrisblomThen later you could add:#2018-07-1912:44chrisblom{:db/id 789
:company/id "Acme Corp."
:company/email 123}
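An illustrative datalog query over the shape chrisblom sketches above, where the company entity points at a shared email entity (attribute names taken from the example; this is a sketch, not tested against a real db):

```clojure
;; Illustrative only: find companies joined to the address string of the
;; email entity they reference via :company/email.
'[:find ?company ?address
  :where
  [?company :company/email ?e]
  [?e :internet/email ?address]]
```
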
#2018-07-1912:44henrikIn the domain I’m looking at, email addresses may show up on people, organisations, book reviews, journal articles, etc. etc., and this may be a way to tie them together, given that both URLs and emails have UUID-like properties.
If the same email appears on two of these entities, there’s likely a relation, barring typing errors.#2018-07-1912:47chrisblomyes that seems reasonable to me#2018-07-1912:47henrikBut then I have to face the fact that there may eventually be overhead as well. Such as “[a mistyped email address]”, which was then corrected to “[the corrected address]”. Now there’s a lonely email, unconnected to anything, floating around.#2018-07-1912:47henrikNow I’ve got to write a vacuum cleaner which goes around and retracts pointless emails.#2018-07-1912:48henrikOr a thing that checks if this was the last reference to the email. If so, retract it.#2018-07-1912:49henrikIt’s sounding a lot like garbage collection at the moment.#2018-07-1912:49chrisblomyes, but it seems doable#2018-07-1912:50chrisblomyou could use a transaction function to rename emails, that retracts email entities once they are no longer used#2018-07-1912:51stuarthalloway@henrik I don't think I would bother removing such things without a tested performance requirement showing that it matters.#2018-07-1912:51henrikRight. So, let ’em float.#2018-07-1912:52stuarthallowayAnd you always have enough info to change your mind later, because Datomic.#2018-07-1912:52henrikTrue. I’ll give it a shot and see what happens. Thanks to both of you.#2018-07-1912:53stuarthallowayI am lazier than @U9CQNBXDX 🙂 -- if I did have such a batch cleanup job I would write it as a 5-line script, not a tx fn.#2018-07-1912:58chrisblomi forgot to mention that it would be a 4-line tx fn#2018-07-1914:53henrikStill not as lazy as doing nothing at all, so that wins 😄#2018-07-1912:22chrisblomi think it's better to model separate domain types as separate entities: a person can have some relation to a company, but a person is not a company#2018-07-1914:08eoliphantyeah, @henrik @chrisblom I’ve been back and forth on this as well.
For me at least part of the problem has been falling into being ‘unnecessarily relational’ when modeling. One technique I’ve come across that’s a little less common, but pretty powerful (kind of like Datomic lol) is Object-Role Modeling. It has some formalisms around verbalizing models and what have you. There are some ORM tools that do all this crazy transformation to map it into relational models. But you can do a pretty much 1-to-1 mapping of what you come up with in datomic#2018-07-2001:39chrisbroomeIs there any automated way to get datomic pro starter running locally on a laptop? I haven't found any way to use it that doesn't require manually editing configuration files.#2018-07-2004:34eoliphantnot that I know of @chrisbroome, but if you’re just running in dev mode, I think it’s just copying that sample template up and pasting in your license key#2018-07-2008:36dominicm@chrisbroome we usually add a dependency on it locally, and then use the in-memory mode. That is, we don't use an external transactor.#2018-07-2013:11Petrus TheronIs it possible to run Datomic client API on Heroku with Postgres as a backing store with an eager indexing scheme/transactor without running a Datomic Peer?
Datomic On-Prem requires Heroku Enterprise to run, and my hobby side-project doesn't justify the cost of migrating to Datomic Cloud yet.
*Edit: I see there is now a $1/day solo deployment. Maybe that's what I need. Does that work with Datomic Ions?#2018-07-2013:19Petrus TheronHm, I don't understand the AWS Solo deployment pricing. When I continue to the AWS Marketplace subscription page, I see that I will also be billed hourly for t2.small and i3.large instances, not just $1/day. Will these costs be discounted for solo deployment, or am I doing something wrong?#2018-07-2016:31marshallThe all-in price for solo is around $1 a day, depending a bit on your free tier usage, etc#2018-07-2016:31marshallYes, ions absolutely work with solo#2018-07-2020:35henrikI’ve been running the solo since the 25th last month. I’m up to a grand total of about $20 for this month so far, which is about a buck a day.
I wouldn’t necessarily call it easy to set up Datomic Cloud (or indeed do anything else on AWS), but it sure is a lot easier than the alternative.#2018-07-2013:25Petrus Theron^ nevermind, I figured out the AWS Marketplace UI is just confusing - it quotes you for all possible components. At the next step, you can specify which Cloudformation to use (Solo or Production).#2018-07-2016:31marshallYes, we are working with AWS to try and make this clearer, but it’s currently the way Marketplace’s UI works#2018-07-2104:10henrikOne of these days, Amazon will decide that they’ve finally amassed enough cash to hire a designer.#2018-07-2015:22donaldballI understand that string tempids are encouraged when building txns these days. When there isn’t a reasonable synthetic id available, I’ve been using (str (d/squuid)). Is that a bad idea?#2018-07-2016:33marshallDon’t think it’s an issue, although it may be a bit heavyweight for what you actually need
You only need as many unique strings as you have tempids in a single transaction - I usually default to the “stringified” version of any unique/identity attr I have#2018-07-2020:33kennyIn order to push an Ion, it has to be a Git repository?#2018-07-2021:03olivergeorgeNo. There are benefits though. If not, you need to manually name your release.#2018-07-2021:15kennyI tried pushing a non-git Ion and received:
{:command-failed
"{:op :push :uname \"kenny\" :creds-profile \"compute-dev\"}",
:causes
({:message "Shell command failed",
:class ExceptionInfo,
:data
{:args ("git" "status" "-s"),
:result
{:exit 128,
:out "",
:err
"fatal: Not a git repository (or any of the parent directories): .git\n"}}})}
#2018-07-2022:19jaret@U083D6HK9 @U055DUUFS yes, it requires git. It uses git to make the package to push. We’d be interested in hearing feedback if your business or project prohibits the use of git.#2018-07-2022:40olivergeorgeI stand corrected#2018-07-2103:56kenny@U1QJACBUM No use case for it. Just surprised me because the docs don't mention it as a requirement.#2018-07-2021:22kennyI am getting a bunch of DEBUG output from AWS and apache when running datomic.ion.dev commands. Is there a way to configure this? I use com.taoensso/timbre with com.fzakaria/slf4j-timbre and configure Timbre in my code. My code is not getting called when running the Ion commands so the log config is not set.#2018-07-2211:48henrik(d/q {:query '{:find [?id ?title (pull ?id [:journal/id])]
:where [[?id :journal/title ?title]]}
:args [(d/db conn)]})
Gives me the following error:
ExceptionInfo processing rule: (q__1114 ?id ?title ?id), message: processing clause: [?id :journal/title ?title], message: java.lang.ArrayIndexOutOfBoundsException: 2 clojure.core/ex-info (core.clj:4739)
#2018-07-2312:14marshallYou can only have each entity once in a find expression. In your original example, you have ?id and the pull on ?id. You could pull [:journal/title :db/id] if you want to pull both.#2018-07-2312:23henrikAh, yes, I can see ?id appearing twice there. java.lang.ArrayIndexOutOfBoundsException threw me off. Pull looks like a function, so intuition suggests that ?id would be consumed by it and of no concern for the surrounding bits. There’s clearly some magic going on here.#2018-07-2312:23henrikThank you!#2018-07-2312:25marshallNo problem#2018-07-2211:49henrikDropping the initial ?id in the :find clause works fine though:
(d/q {:query '{:find [?title (pull ?id [:journal/id])]
:where [[?id :journal/title ?title]]}
:args [(d/db conn)]})
[["International Bulletin of Mission Research"
{:id [{:identity/type "publisher-id",
:identity/value "IBM",
:db/id 22918220369363020}
{:identity/type "hwp",
:identity/value "spibm",
:db/id 22918220369363024}]}]]#2018-07-2211:50henrikWhy?#2018-07-2307:00olivergeorgeThis is the result of an experiment to automate the API Gateway setup required for Web Service Ions via the aws cli (so each new deployment isn't a manual setup task). I'm interested in any feedback (approach, assumptions, implementation...)
https://gist.github.com/olivergeorge/cc0ca9a945cb372d35d97e45573656ee
(updated to tidy up)#2018-07-2312:35steveb8nI did something similar here https://github.com/hlprmnky/ion-appsync-example/blob/master/src-pages/cf/node_pages.clj although not for Ions, instead for CLJS lambdas intended to eventually be the host pages for an Ion backed SPA#2018-07-2312:36steveb8nUsing Crucible and Cloudformation is a pretty nice experience for doing infra as code IMO. I like where all these ideas are taking us#2018-07-2312:54olivergeorgeThanks I'll check it out. Crucible is new to me. Still getting familiar with the AWS landscape.#2018-07-2314:59eoliphantGood deal, I’ve been looking at this myself.#2018-07-2315:01eoliphantWe’re more of a terraform shop, but similar idea#2018-07-2316:50rapskaliancondensation is also a fun option for writing infra as code in Clojure. I've had good success working this library into certain deployment workflows.
https://github.com/comoyo/condensation#2018-07-2401:34olivergeorgeNow that I look at the cloudformation documentation it does seem like generating a cloudformation template is ultimately a simpler solution.#2018-07-2311:19billyrIs excision CPU bound? Does the way I partition transactions matter? I'm excising 150k entities and wondering how long it'll take#2018-07-2312:16marshallThat’s a HUGE excision. Excision is not intended for size control and should be used only when necessary (i.e. legally required to remove something). As a rule of thumb, if you can’t type out the excision transaction manually it’s probably too big to run at once.#2018-07-2311:32val_waeselynck@bill_rubin I think it's not so much about size, more a matter of how much it disrupts your indexes.#2018-07-2311:35val_waeselynckhow long it takes probably depends a lot on your Transactor's hardware, your storage and your network performance characteristics#2018-07-2311:36henrikDoes anyone know where I can find the options available for the :headers field for a web ion?#2018-07-2311:40billyr@val_waeselynck Thanks. It's on a single host and the transactor has been maxing out the cpu overnight so I'm assuming that's the limiting factor. 
I guess I'll just recreate the db#2018-07-2311:41val_waeselynck@bill_rubin you may want to have a look at this: https://gist.github.com/vvvvalvalval/6e1888995fe1a90722818eefae49beaf#2018-07-2311:42billyr@val_waeselynck Yea that's what I'm doing haha, thanks!#2018-07-2313:28henrikFor a web ion, how can I capture the path, like the /dev/hello/world part of ?#2018-07-2313:30marshallhttps://docs.datomic.com/cloud/ions/ions-reference.html#web-code#2018-07-2313:31marshalli believe it is in the :uri key#2018-07-2313:31marshallyou’d need to parse it yourself#2018-07-2313:36marshall@henrik ^#2018-07-2313:36henrikExcellent, thank you @marshall#2018-07-2315:05eoliphantAlso @henrik, since what’s coming in is basically ring-compatible, you can drop right into one of the various and sundry routing/middleware libs if your use case is non-trivial. I just swapped out some custom code for reitit literally last night#2018-07-2316:40luchiniI’m trying a simple retraction on Datomic Cloud and getting a weird error. The retraction is: [[:db/retract :person/email :db/unique :db.unique/identity]] (yes, the dataset I’m working on does not guarantee unique emails for some bizarre reason).#2018-07-2316:41marshall@luchini that’s a known issue. we’re working on a fix#2018-07-2316:41luchiniDatomic gives me a nth not supported on this type: Db anomaly#2018-07-2316:42luchiniGreat @marshall! Thanks a lot. Do you know if it would work in the very same transaction where I’m in fact asserting duplicate emails, or do I need to keep two separate transactions?#2018-07-2316:42marshallyou’d need to retract the :db/unique first#2018-07-2316:44luchiniWhat about the opposite scenario? (trying to prepare for the future). 
When I manage to implement a transaction that fixes the dataset in the live system, I’ll need to make sure that I’m updating all duplicate emails in the same transaction in which I’m adding the :db/unique back in.#2018-07-2316:45luchiniIs that possible?#2018-07-2316:45marshallhttps://docs.datomic.com/cloud/schema/schema-change.html#sec-5#2018-07-2316:46marshall“If there are values present for that attribute, they must be unique in the set of current database assertions.”#2018-07-2316:46marshallso you’d have to update things to make them unique, then issue the schema change transaction#2018-07-2316:47luchiniThank you @marshall.#2018-07-2317:54eoliphanthey to the datomic folks, I dropped in a couple feature requests. I’d talked to stu about supporting deploying into existing vpc’s, based on that discussion I know part of the desire is to keep things as contained as possible from a support perspective. But, I think this will be pretty important in the context of datomic cloud’s inevitable massive growth 🙂 Say in our case, we’ve gone from a kind of cheesy, ad-hoc couple vpc’s across two accounts (prod v non-prod) to a 1-1 account/vpc approach for dev, test, etc., complemented by shared vpcs for management, ingress etc. with IPSec vpn’s wiring them together. It’s kind of hard to shoehorn extra dedicated datomic VPC’s per env into this.
ions can complicate this further. Since the code is effectively global, I’d like to give each of my devs their own solo system, then have a common ‘dev’ system that’s updated via CI or something. It’d potentially be more manageable to support multiple systems/vpc.
Just a few thoughts 😉
In the meantime, ions, are fricking amazing 😉#2018-07-2320:33jjfineanyone have any tips on how to write tests for queries that use the :db/txInstant field? i'm having trouble fixturing test data without doing a (Thread/sleep ..) between calls to transact#2018-07-2513:53matthavenerYou can pass a instant to your transaction to arbitrarily set the time of the transaction. All that datomic cares about is that the txInstants are monotonic#2018-07-2320:48kennyWhen testing my Ion endpoint via the Method Test UI in the AWS Console, my response body is base64 encoded. Is there a way to get the UI to display the decoded version?#2018-07-2322:11shaunxcodeis :db/index not supported in datomic cloud? when I try to do (dc/pull db '[*] :db/index) I get #:db{:id nil}#2018-07-2322:51steveb8n@shaunxcode yes, :db/index is not supported https://docs.datomic.com/cloud/schema/schema-reference.html#2018-07-2406:10steveb8nI’m deploying the Specter lib with my Ions. At load time I am seeing a stackoverflow in the logs. I’ll paste it below. This code loads/runs fine on my laptop using the same deps although I do see a deps override warning during push. What is the best way to debug something like this? I’m using Clojure 1.9.0 on my laptop and I presume the same on Ions/Cloud.#2018-07-2414:14stuarthallowayhi @U0510KXTU The Solo template is economized at every level, including having a smaller stack max. I have seen this happen with deep compilation, and (sigh) it can be nondeterministic. AOTing the problem library may help. The problem will definitely go away on Prod.#2018-07-2422:30steveb8nthanks @U072WS7PE that’s good to know. in this case it seems that a classpath issue (still undiscovered) was the real issue, which was then masked by the stackoverflow. maybe there’s greater memory consumption when classpath exceptions occur?#2018-07-2422:32steveb8neither way, some docs on this would be good for others since lots of folks will try ever more libs on Ions/Solo over time. 
I’m fully sorted now, just got my api working so stoked!#2018-07-2406:10steveb8n{
"Msg": ":datomic.cluster-node/-main failed: java.lang.StackOverflowError, compiling:(com/rpl/specter/util_macros.clj:61:29)",
"Ex": {
"Cause": null,
"Via": [
{
"Type": "clojure.lang.Compiler$CompilerException",
"Message": "java.lang.StackOverflowError, compiling:(com/rpl/specter/util_macros.clj:61:29)",
"At": [
"clojure.lang.Compiler",
"analyzeSeq",
"Compiler.java",
7010
]
},
{
"Type": "java.lang.StackOverflowError",
"Message": null,
"At": [
"clojure.lang.Util",
"equiv",
"Util.java",
33
]
}
],
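On the earlier :db/txInstant testing question: with the on-prem peer API you can assert :db/txInstant on the reified transaction entity itself to pin transaction times in fixtures, avoiding Thread/sleep, as long as the supplied instants stay monotonically increasing. A hedged sketch (the transact-at! helper and :test/name attribute are hypothetical, and Cloud may restrict explicit :db/txInstant):

```clojure
;; Sketch only: requires a live on-prem peer connection `conn`.
(require '[datomic.api :as d])

(defn transact-at!
  "Transacts tx-data at an explicitly chosen transaction time.
  Each supplied instant must be later than the previous tx's instant."
  [conn tx-data instant]
  @(d/transact conn
               (conj (vec tx-data)
                     {:db/id "datomic.tx" :db/txInstant instant})))

;; Fixture usage sketch:
;; (transact-at! conn [{:test/name "a"}] #inst "2018-01-01")
;; (transact-at! conn [{:test/name "b"}] #inst "2018-01-02")
```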
#2018-07-2406:12steveb8nhere’s the deps I’m using
org.clojure/clojure {:mvn/version "1.9.0"}
com.datomic/client-cloud {:mvn/version "0.8.56"}
com.datomic/ion {:mvn/version "0.9.16"}
org.clojure/data.json {:mvn/version "0.2.6"}
com.rpl/specter {:mvn/version "1.1.1"}
com.stuartsierra/component {:mvn/version "0.3.2"}
com.taoensso/timbre {:mvn/version "4.10.0"}
#2018-07-2407:26steveb8nstrange. I just fixed it but not sure how. I changed some of the dependencies from the push warning. I’ll follow up with more info if I can clarify#2018-07-2407:47henrikI’m now rendering a webpage through an Ion, which is awesome. Http-kit works great for developing the page locally.
- A couple of questions on top of this: how can I set up API Gateway to allow rendering of /?
- What’s the recommended way of serving static content? Should I set a custom domain, create S3 buckets for images, js and css? Or do I serve those directly from the Ion?#2018-07-2409:18henrikI’ve attached a domain to API Gateway. But I’m getting "Missing Authentication Token" for . works fine of course.#2018-07-2409:33henrikAlright, I seem to have figured this one out: create proxy method directly on the root / in API Gateway.#2018-07-2410:38souenzzoCheckout cloudfront.
My app has a /html/render/* that generates the index.html
Static images and js go to s3
Cloudfront does this redirect: / ->> api/html/render, /static ->> s3
You can also add other rules#2018-07-2411:04henrikOh, right! I just set up a custom domain directly in API Gateway.
I’ll dismantle that and figure out Cloudfront instead. Thanks!#2018-07-2415:03henrik@U2J4FRT2T I’m having trouble figuring out how to redirect to API Gateway, while allowing to be redirected to s3.
The sources I'm reading all say that this can only be done with subdomains.
How do you go about routing / and /static respectively?#2018-07-2415:35henrikThose sources were apparently fallacious! I think I got it.#2018-07-2415:38souenzzoCreate a distribution (on create, you need to assign it to your loadbalancer/apigateway)
in this distribution, create another Origin and assign your S3 bucket.
then create some Behaviors to redirect to each origin.
Be careful with caching: "Cache-Control" "max-age=xxx" is your friend. API calls through CloudFront may not be a good idea. (Unless you REALLY want the caching thing.)
You cannot do complex regexps on Behaviors.#2018-07-2416:51henrik@U2J4FRT2T Do you have to do anything special with Route 53 when associating the domain name? The domain redirects to the API Gateway <gunk>. rather than hiding it.#2018-07-2416:52souenzzojust alias on r53
it will probably offer you this endpoint as an option for the alias#2018-07-2416:53henrikRight. But it does a redirect, so the raw API Gateway URL ends up exposed to the user.#2018-07-2417:39henrikThe path pattern for s3, should it be for example /static/*?#2018-07-2418:07souenzzoYep, simple patterns like that are ok.
But at first, I tried to write "anything that ends with a dot plus 2 or 3 letters", but that regexp engine doesn't accept that kind of pattern#2018-07-2506:16henrikI could not for the life of me get Cloudfront to alias instead of redirect, so I ripped it apart and set up S3 to be accessed through the API Gateway.
Datomic Client Exception
{:cognitect.anomalies/category :cognitect.anomalies/fault,
:datomic.client/http-result {:status nil, :headers nil, :body nil}}
The peer server log has following warning:
2018-07-24 10:32:33.112 WARN default datomic.cast2slf4j - {:msg "Could not marshal response", :type :alert, :tid 12, :timestamp 1532428353111, :pid 1560}
java.lang.RuntimeException: java.lang.Exception: Not supported: class clojure.lang.Delay
at com.cognitect.transit.impl.WriterFactory$2.write(WriterFactory.java:150) ~[transit-java-0.8.311.jar:na]
at cognitect.transit$write.invokeStatic(transit.clj:149) ~[datomic-transactor-pro-0.9.5661.jar:na]
at cognitect.transit$write.invoke(transit.clj:146) ~[datomic-transactor-pro-0.9.5661.jar:na]
at cognitect.nano_impl.marshaling$transit_encode.invokeStatic(marshaling.clj:59) ~[datomic-transactor-pro-0.9.5661.jar:na]
...
#2018-07-2410:35staskis tx-range not supported in datomic client with peer server?#2018-07-2415:00rhansenHmm... What does this error message mean? tempid used only as value in transaction#2018-07-2415:01rhansenDoes it mean that I have a tempid somewhere which isn't used in as a value for db/id?#2018-07-2415:02rhansenAlso, is it possible to figure out which tempid it is refering to? I have a pretty big transaction 😕#2018-07-2415:02donaldballI believe that means you’ve asserted an entity that has no attributes.#2018-07-2415:05donaldballProbably you could filter the txn for a map that only has a :db/id key.#2018-07-2415:05rhansenhmm, ok#2018-07-2420:30rhansenThe problem was a typo somewhere in my code. 😛#2018-07-2420:31rhansenWould've been much easier to find if the error message included which tempid caused problems though 🤔#2018-07-2500:30olivergeorgeAWS CloudFormation newbie question. I'm experimenting with setting up a apigateway via a cloudstack template. There's one magic number... the CodeDeployDeploymentGroup. I think I could use Fn::ImportValue to read this from the datomic cloud cloudstack if it included an Export for the associated Output.
"CodeDeployDeploymentGroup": {
"Description": "CodeDeploy Deployment Group",
"Value": {
"Fn::GetAtt": [
"Compute",
"Outputs.CodeDeployDeploymentGroup"
]
}
},
Could become
"CodeDeployDeploymentGroup": {
"Description": "CodeDeploy Deployment Group",
"Value": {
"Fn::GetAtt": [
"Compute",
"Outputs.CodeDeployDeploymentGroup"
]
},
"Export": {
"Name": {
"Fn::Sub": "${SystemName}-CodeDeployDeploymentGroup"
}
},
},
(or similar, from that template i think you'd use "Ref": "AWS::StackName" as the system name component)
The alternative seems to be modifying the root stack to reference my app specific apigateway stack. Not sure if that's normal or recommended and how that might interplay with datomic cloud updates.
Question really is: am I missing something?#2018-07-2502:30steveb8nNot answering your question but you might consider using Crucible instead to generate your templates. Even just having functions available makes it a lot easier. Here’s an example https://github.com/hlprmnky/ion-appsync-example/blob/master/src-pages/cf/node_pages.clj#2018-07-2502:36olivergeorgeThanks @U0510KXTU I thought I'd aim for zero helpers/libs/tooling first to get familiar with what's underlying things. Presume it's something I'd outgrow. Look forward to checking out your code and understanding how it's helpful.#2018-07-2502:39steveb8nthat makes sense. there are examples of refs in there, in Crucible it’s the xref fn#2018-07-2502:40steveb8nand they can be parameters from the command line or CF “env” values e.g. region#2018-07-2502:40steveb8nmaybe looking at how those CF value/fns are built will get you closer to how to infer/import the group you are trying to access#2018-07-2502:42olivergeorgeHere's the simple template I came up with. Effectively what would be generated from following the ion-tutorial (but not using {proxy+} so slightly simpler)
https://gist.github.com/olivergeorge/c3918c52b89278a9c1807c9d47a9860e#2018-07-2502:43olivergeorgeI used a Parameter since the datomic stack doesn't export the compute group name... if they did the ImportValue thing should do the trick .#2018-07-2502:46olivergeorge@U0510KXTU that json feels very similar to your code doesn't it.#2018-07-2502:47steveb8nyep. the only downside I’ve noticed is refs are string joins which are a bit more complex#2018-07-2502:52olivergeorgeIn your experience, what approach would you use for setting up an apigateway to complement a datomic cloud app (with ions)? I'm largely guessing but the options seem like:
(1) a "nested stack" approach provides access to the compute group name and connects the datomic stack lifecycle events to the apigateway stack.
(2) treat as stand alone cloudformation and refer to the compute stack by the known group name
(3) other.. (aka I need to learn more about cloudformations and devops practices on AWS)#2018-07-2502:59steveb8nI’ve already done this 🙂 I used Crucible to generate the APIGW and passed in the name of the compute stack as a parameter so that my fns can generate the AWS ARNS using string joins#2018-07-2503:01olivergeorgeGotcha thanks (and cool!)#2018-07-2504:13eoliphantHi, I’m getting a weird error when I try to retract a unique constraint on an attribute
(d/transact conn {:tx-data [[:db/retract :otu-seq/otuid :db/unique :db.unique/identity]]})
ExceptionInfo nth not supported on this type: Db clojure.core/ex-info (core.clj:4739)
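For reference, the overall schema-change flow being discussed — retract the uniqueness constraint, deduplicate, then re-add it — would look roughly like this with the Cloud client API (a sketch only, reusing the attribute from the question; the retraction step was hitting a known bug at the time, and per the docs re-adding :db/unique requires all current values to already be unique):

```clojure
;; Sketch: assumes a Datomic Cloud client connection `conn`.
(require '[datomic.client.api :as d])

;; 1. Drop the uniqueness constraint from the attribute:
(d/transact conn
            {:tx-data [[:db/retract :otu-seq/otuid
                        :db/unique :db.unique/identity]]})

;; 2. ... deduplicate the existing values in separate transactions ...

;; 3. Re-add uniqueness once every current value is unique:
(d/transact conn
            {:tx-data [{:db/id :otu-seq/otuid
                        :db/unique :db.unique/identity}]})
```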
#2018-07-2504:24eoliphantnvm just saw the other comments about it#2018-07-2510:10henrikI just added ring as a dependency to my Datomic/Ions project, and now I get this:
Refresh Warning: Reflection warning, cognitect/hmac_authn.clj:80:12 - call to static method encodeHex on org.apache.commons.codec.binary.Hex can't be resolved (argument types: unknown, java.lang.Boolean).
Refresh Warning: Reflection warning, cognitect/hmac_authn.clj:80:3 - call to java.lang.String ctor can't be resolved.
Removing ring from deps.edn removes the error.
Has anyone else seen this?#2018-07-2512:54rhansenyes#2018-07-2512:54rhansenIt's just a reflection warning though. No biggie#2018-07-2513:30ninjaHi, rather short question:
is it possible to transact multiple values for a :db.cardinality/many attribute using the list form?
Something along those lines:
[[:db/add "my-ident" :foo/bar-refs [ref-ident-1 ref-ident-2]]]
#2018-07-2513:54eraserhdI think you can't do that here, but you can in map form. If you think about it, this is ambiguous. ref-ident-1 could be a keyword and ref-ident-2 could be a value, making the inner vector an entity reference.#2018-07-2513:58marshalltry [[:db/add "my-ident" :foo/bar-refs [[ref-ident-1 ref-ident-2]]]] @atwrdik#2018-07-2514:03ninja@marshall following this example i got an invalid list form error (the same happens using my example above)#2018-07-2514:04marshallerm. right; the list form is one datom per vector I believe#2018-07-2514:05marshallyou can transact multiple vals in map form#2018-07-2514:05marshallor you can use multiple individual vectors in list form#2018-07-2514:05ninjaThe explanation from @eraserhd makes sense to me. But I'm still curious how to add multiple refs without using the map form. Would one just write something like this:
[[:db/add "my-ident" :foo/bar-refs ref-ident-1]
[:db/add "my-ident" :foo/bar-refs ref-ident-2]]
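For comparison, the map form mentioned above accepts a collection directly for a cardinality-many attribute (same hypothetical attribute and idents as the question):

```clojure
;; Map form: one entity map, collection value for the many-ref attribute.
;; Equivalent to the two :db/add vectors in list form.
{:db/id        "my-ident"
 :foo/bar-refs [ref-ident-1 ref-ident-2]}
```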
#2018-07-2514:06marshallyep#2018-07-2514:06ninjagreat, thx guys#2018-07-2516:27curtosisare Tim Ewald’s code examples for his reified transactions talk from DoD 2015 still available anywhere? The gist has understandably evaporated.#2018-07-2610:19octahedrionI really wish I'd added a :db/unique :db.unique/identity to an attribute but it's too late as there are multiple values in the current set of database assertions -- I tried retracting all but one of those assertions but to no avail, is there anything I can do ?#2018-07-2610:26chrisblomhave you seen https://docs.datomic.com/cloud/schema/schema-change.html#sec-5?#2018-07-2610:27chrisblomdoes you attribute use :db.cardinality/one?#2018-07-2610:28octahedrionyes, but the 2nd condition in the green box is not met#2018-07-2610:28chrisblomis there any reason you cannot remove the duplicate values?#2018-07-2610:28octahedrionas I said - I tried retracting them#2018-07-2610:30chrisblomand the values are unique afterwards?#2018-07-2610:33steveb8nHas anyone setup CI to push/deploy Ions yet? If so, anything to watch out for? 
How do you do auth for the CLI in the CI env?#2018-07-2611:55octahedrionok - I think I've found a way: I renamed the offending attribute :old-attribute-name and asserted the attribute again with the unique constraint, which works, thereafter one only has the small inconvenience of having to specify the attribute in one's queries (to prevent assertions for the old one appearing)#2018-07-2611:56octahedrionand naturally I have to assert the latest values of the old attribute on the new one#2018-07-2611:56octahedrionbut that's ok#2018-07-2617:57curtosisreally dumb question, but I'm drawing a blank today: how do you programmatically build a query that takes a UUID string as parameter?#2018-07-2617:59octahedrion(d/q '{:find [?n] :in [$ ?uuid] :where [[?n :uuid ?uuid]]} (d/db conn) uuid)#2018-07-2618:00octahedrion- programmatically manipulate the map as you wish#2018-07-2618:02curtosislooks like what I'm trying, but that doesn't work. I can run it in the console with [?e :org/id #uuid "string"] , but without the reader tag in my query it won't match.#2018-07-2618:03octahedriontry (UUID/fromString uuid-string)#2018-07-2618:07curtosisI think that's what I'm looking for, but somehow that's not working.#2018-07-2618:08curtosis
:in $ ?orgId
:where [?org :organization/id (UUID/fromString ?orgId)]]
db orgId )#2018-07-2618:09octahedriondo the UUID/fromString outside the query#2018-07-2618:10octahedrionoutside the :where clause I mean#2018-07-2618:10octahedriondb (UUID/fromString orgId)#2018-07-2618:10octahedrionor pass in a UUID not a string#2018-07-2618:10curtosisright. That works! Thanks!#2018-07-2618:11octahedrionbetter to pass in UUIDs#2018-07-2618:12curtosisunfortunately coming in from graphql /js so it’ll be a string, but easily managed.#2018-07-2618:12octahedrionconvert elsewhere before using in query#2018-07-2618:13octahedrioncleaner#2018-07-2618:14Peter Wilkinsstillsuit has a custom scalar for that https://github.com/workframers/stillsuit/blob/51064573edab7a3f03f54f23c632aeb87f243fa4/resources/stillsuit/base-schema.edn#L40#2018-07-2618:17curtosishmmm… wonder why stillsuit isn’t picking it up right then#2018-07-2618:18Peter Wilkinsshould probably move to graphql channel?#2018-07-2618:19curtosisyup#2018-07-2618:21Peter WilkinsI’m having trouble getting a postgres backend setup. the jdbc uri looks ok but when I try to backup from s3 computer says no
bin/datomic -Xmx1g -Xms1g restore-db 'jdbc:'
java.lang.IllegalArgumentException: :db.error/invalid-db-uri Invalid database URI jdbc:
#2018-07-2618:35curtosis datomic:sql://{db-name}?{jdbc-url}#2018-07-2618:36curtosisand IIRC the jdbc-url has to be URL-encoded#2018-07-2618:42Peter Wilkins:+1: solved it - was missing the datomic:sql://? before jdbc…#2018-07-2619:06Peter Wilkinsargg. I managed to restore the database under the name '' (empty string) and I can’t restore it again. Struggling to delete or rename it. :restore/collision The database already exists under the name ''#2018-07-2619:07marshallyou can delete the postgres table and recreate it#2018-07-2700:10rhansenAfter following the tutorial I just get connection refused when trying out api gateway with a ring handler 😕#2018-07-2700:10rhansenanyone experience that?#2018-07-2703:15kennyIs there a way in code to tell if an Ion is deployed or not? Curious how others are handling configuration in dev vs prod.#2018-07-2703:34steveb8n@kenny you could try invoking it using AWS CLI. that would verify it’s deployed. but it doesn’t seem like this is really your question. can you elaborate?#2018-07-2703:35steveb8nI’m curious because I’ll be setting up the same environments in the coming weeks#2018-07-2703:36kennyMy application configuration depends on the environment it is running in (dev/qa/prod). I don’t see anyway to parameterize the deployment like that. #2018-07-2712:41jaretWe have a release in the works to deliver params for deployment. We’re currently working with AWS to get it out. I don’t have a timeline, but we’re going to be delivering parameterized deployment.#2018-07-2703:45steveb8nI’m wondering about that also. hence my earlier question about CI setup. 
If you look at the Ion lambdas there are env variables there so that seems like a good way to do this but not sure how to populate those from code.#2018-07-2704:37steveb8nit seems like the AWS Params store is part of the answer for this https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html#2018-07-2713:39stuarthalloway@U0510KXTU stay tuned 🙂#2018-07-2802:07eoliphantthat’s what I’m using at the moment. I tagged my Datomic Cloud deployment with an ENV tag, and use that to ‘root’ my stuff in parameter store#2018-07-2704:37steveb8nbut it would be good to have an example of how to use it with Ions / deploy#2018-07-2705:40olivergeorge@stuarthalloway I think the Datomic Ion tutorial could be improved by including a sample apigateway cloudformation template in the repo. It'd provide a more complete picture of how to deploy an app and potentially save some pain for potential users who are less familiar with AWS.#2018-07-2709:22rhansenHas anyone seen this in their ion logs? "Message": "java.lang.StackOverflowError, compiling:(riddley/walk.clj:29:37)",#2018-07-2713:17steveb8nThere's a good chance this is the same issue I had this week. Solo has low memory and classpath problems get masked by these stackoverflow errors. #2018-07-2713:19steveb8nThe problem is not in the walk NS. Most likely O E of the libs you see in the push result warnings is the case. Try commenting them out one by one. Not great but worked for me#2018-07-2713:23stuarthallowayinstead of commenting them out, change your local deps to match the warnings and see what happens locally#2018-07-2823:19rhansenOk. So I altered my deps.edn to match the warnings, spin up a local repl, and require the namespace which contain the ion functions. Everything works perfectly.#2018-07-2823:23rhansenMy deps.edn currently looks like:
{:paths ["backend" "resources"]
:deps
{org.clojure/clojure {:mvn/version "1.9.0"}
com.datomic/ion {:mvn/version "0.9.16"}
ring/ring-core {:mvn/version "1.7.0-RC1"}
ring/ring-json {:mvn/version "0.4.0"}
email-validator {:mvn/version "0.1"}
clj-mailgun {:mvn/version "0.2.0"}
crypto-password {:mvn/version "0.2.0"}
danlentz/clj-uuid {:mvn/version "0.1.7"}
clojure.java-time {:mvn/version "0.3.2"}
commons-codec/commons-codec #:mvn{:version "1.10"},
com.fasterxml.jackson.core/jackson-core #:mvn{:version "2.9.5"}}
:mvn/repos {"datomic-cloud" {:url ""}}
:aliases
{:dev {:extra-deps {com.datomic/client-cloud {:mvn/version "0.8.54"}
com.datomic/ion-dev {:mvn/version "0.9.175"}
ring/ring-jetty-adapter {:mvn/version "1.7.0-RC1"}
org.eclipse.jetty/jetty-server {:mvn/version "9.4.9.v20180320"}
org.eclipse.jetty/jetty-client {:mvn/version "9.4.9.v20180320"}}}
:test {:extra-paths ["tests"]
:extra-deps {expectations {:mvn/version "2.2.0-rc3"}}}}}
#2018-07-3118:21rhansenSo, that worked... But I now have no idea how to send e-mails 😢#2018-07-3118:21rhansenBut i'll work it out 🙂#2018-07-2709:23rhansenI'm trying to setup my ring app for api-gateway integration, and my datomic node just crashes. Can't re-produce it localy 😕#2018-07-2716:58tlimaI know this is the Clojurians workspace, but anyone here knows if it's possible to use the client (not the peer) in Java? And, if so, where I can find some documentation on that?#2018-07-2716:59marshallThere is currently no Java client library#2018-07-2717:10tlimaThanks, @marshall. I guess there are no docs around the API used by the clients, except for (maybe) the Clojure client's source code, right?#2018-07-2717:11marshallthat is currently correct; we plan to publicize the client wire protocol once it’s completely finalized#2018-07-2717:11tlimaAny ETA for that?#2018-07-2717:11marshallunfortunately not#2018-07-2717:11tlimaI see. Thanks.#2018-07-2717:12tlimaOne more thing, @marshall: is the Java peer library still supported and updated?#2018-07-2717:12marshallyep#2018-07-2717:13marshallthe API has remained pretty stable, so the java peer itself hasn’t changed much, but On-Prem is definitely still updated and supported#2018-07-2717:26mtbkappI'm curious about the 2^20 limit for schema attributes as described: https://docs.datomic.com/cloud/schema/schema-limits.html . Is that per database or per cluster (transactor, storage, peers)?#2018-07-2717:30marshall@mtbkapp Cloud or On-Prem (Cloud doesn’t have transactors or peers)#2018-07-2718:01Mark AddlemanHey folks - We’re using Datomic Cloud in production topology. Our database is about 450m datoms and we’ve started to receive “Busy rebuilding index” responses from the transactor. Some googling led me to an earlier #datomic conversation that indicated we are pushing too much data into Datomic. How would I go about debugging this problem? 
Is this a transactor resource problem or too few DynamoDB resources or something else entirely?#2018-07-2718:09mtbkapp@marshall both. I forgot about the difference in Cloud.#2018-07-2718:26marshallper database in both I believe @mtbkapp - however, are you likely to reach that? that would be a rather large DB if so#2018-07-2718:44octahedrionhey @marshall I had a problem I hoped I could get your advice on: I have an attribute that I wish I'd added a :db/unique :db.unique/identity to, but it's too late as there are multiple values in the current set of database assertions#2018-07-2718:44mtbkappright, I was thinking about a case where users can create "custom fields" for entities. I think there are at least two ways to do this. 1) using datomic attributes directly (with additional constraints for security and what not). 2) modeling them using datomic entities, which turns it into something like a meta data model inside the datomic meta data model.#2018-07-2802:13eoliphantThere’s a good discussion of some approaches here: https://groups.google.com/forum/#!searchin/datomic/dynamic$20schema%7Csort:date/datomic/p3ZACQXnhd0/KexSo_AVBAAJ#2018-07-2718:47octahedrionretracting those multiple values doesn't work, but I think I found another way: I renamed the attribute :old-attribute-name and asserted the attribute anew with the unique constraint, and thereafter one only has the small inconvenience of having to specify the attribute in one's queries (to prevent assertions for the old one appearing) I have to assert the latest values of the old attribute on the new one#2018-07-2718:48octahedrionbut what I wondered was, when asserting those values of the old attribute to the new one, is it possible to retain their txInstants ?#2018-07-2718:48octahedrionas though they had been asserted back when the old ones had been ?#2018-07-2719:30marshall@octo221 retracting the values until it is unique should work.
is this on Cloud?#2018-07-2719:31marshallif so, This may be related to a known issue (where you can’t retract :db/unique on an attr). we are working on a fix for that#2018-07-2719:33tlima(@marshall, is this a good channel for Java support or I should reach out other ways?)#2018-07-2719:34marshall@t.augusto here is good; the Datomic forum is also good#2018-07-2719:39marshallalso, the Datomic support channel (http://support.cognitect.com) if you have a paid license#2018-07-2719:41octahedrion@marshall ah - I tried retracting until it was unique but it didn't work - I'll try again tomorrow#2018-07-2719:42octahedrionyes it's on Cloud#2018-07-2719:42marshall@octo221 OnPrem or Cloud?#2018-07-2719:42marshallyeah, it’s possible that’s a bug.#2018-07-2719:42marshallif you can’t get it to work, let me know, but hopefully the fix for retracting unique also fixes that#2018-07-2719:42marshall(i believe it will)#2018-07-2719:42octahedrionok thank you!#2018-07-2719:46tlimaOk, @marshall, here it goes: I managed to create a docker composition, with the transactor (pro starter) + console + app (my java app). The console is working fine, but I can’t get the app to connect to the transactor. I changed the logback.xml to redirect the log messages to stdout, so that it shows in my regular docker logs. 
The snippet (on it’s way) is what it looks like in the transactor and app when the app tries to connect.#2018-07-2719:47tlima#2018-07-2719:48marshallError communicating with HOST 0.0.0.0 or ALT_HOST transactor on PORT 4334#2018-07-2719:48marshall^ indicates that your peer can’t reach the transactor#2018-07-2719:48tlimaI can telnet from the app to the transactor on that exact port, with no issues#2018-07-2719:49marshallyou need to set the values for HOST and/or ALT_HOST to addresses that the peer instance can resolve as the transactor#2018-07-2719:49tlimawell, ALT_HOST=transactor, which is exactly the name of that service inside the composition… 😕#2018-07-2719:49marshallyou shouldn’t be able to telnet to that port#2018-07-2719:49tlimaWhy not?#2018-07-2719:50tlimaTelnet will just establish the connection, no matter the protocol… right?!#2018-07-2719:50marshallerm. possibly#2018-07-2719:50marshalli guess i havent tried that#2018-07-2719:51tlimaI’m used to telnet to validate reachability, ACLs, etc 🙂#2018-07-2719:51marshallyeah, i suppose you should at least see something there#2018-07-2719:51marshallfor reachability#2018-07-2719:51tlimaThe odd part for me is the AMQ214013: Failed to decode packet message#2018-07-2719:52marshallyour application is using what version of Datomic? and what transactor version?#2018-07-2719:54tlimaPer the transactor log:
transactor_1 | 2018-07-27 19:53:46.327 INFO default datomic.lifecycle - {:tid 24, :username "HgWS7pDswz79iMxZZKUEGfhle+9ggGa4PSSdxeZlhBY=", :port 4334, :rev 4852, :host "0.0.0.0", :pid 1, :event :transactor/heartbeat, :version "0.9.5703", :timestamp 1532721226326, :encrypt-channel true}
My pom.xml:
<dependency><groupId>com.datomic</groupId><artifactId>datomic-pro</artifactId><version>0.9.5703</version></dependency>#2018-07-2719:55marshallcan you get the rest of the transactor log that you elided as well?#2018-07-2719:56marshallas well as a few lines prior#2018-07-2719:57marshallusing dev storage?#2018-07-2719:57marshallalso, you said the console works fine - are you able to connect it and see a DB, etc?#2018-07-2720:01tlima#2018-07-2720:01tlimaYes, @marshall, dev storage#2018-07-2720:02tlimaYeah, console works#2018-07-2720:03tlima(can you see the screenshot, despite of this slackbot warning about “no storage space left”?)#2018-07-2720:03marshallyep#2018-07-2720:03marshallso that suggests that your peer config is the issue somehow#2018-07-2720:05marshallis it possible you have a conflicting dependency that is overriding something like netty?#2018-07-2720:05marshallin your app specifically#2018-07-2720:06tlimaHere is how I’m trying to connect:
public Connection getDatabaseConnection() {
String uri = "datomic:
Peer.createDatabase(uri);
return Peer.connect(uri);
}
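A Clojure REPL equivalent of the Java snippet above can help isolate whether the problem is in the app's dependencies or the environment. A sketch, assuming a hypothetical dev-storage URI (substitute whatever URI the Java app actually uses):

```clojure
;; Sketch: requires datomic-pro on the classpath and a reachable transactor.
(require '[datomic.api :as d])

(def uri "datomic:dev://transactor:4334/hello") ; hypothetical URI
(d/create-database uri)
(def conn (d/connect uri))
(d/db conn) ; if this returns a database value, peer <-> transactor works
```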
#2018-07-2720:07tlimaI’d need to check for those conflicts… Maybe spring boot is also pulling netty, not sure.#2018-07-2720:13marshallI’ll have to do some more looking - may not be able to discuss with the team until Monday#2018-07-2720:16tlimaOk, @marshall. I’ll try it a bit more, here, and ping you back on Monday.#2018-07-2720:16marshallIf you have Clojure on the docker system running your app, I’d be interested if you could connect from a REPL#2018-07-2720:16marshallor with a simple clojure app using the same URI#2018-07-2720:17tlimaI’m just trying to evaluate if it fits our project needs, before getting deeper into budget approvals and stuff.#2018-07-2720:18tlimaMy goal is to, unfortunately, create an API layer on top of it. Our app language is not supported (peer library nor client) so I don’t see much of an alternative.#2018-07-2720:22tlima(btw, @marshall, I just tried the enforcer:enforce command (from the maven-enforcer-plugin) and it reports no other conflict than datomic-pro’s own dependency on both com.google.guava:guava:18.0 (direct) and com.google.guava:guava:19.0 (through Artemis related packages))#2018-07-2720:23tlimaI will try adding Clojure to the mix and experiment with the REPL. Just not really fluent on Clojure 😕#2018-07-2720:24marshallyou should be able to follow this https://docs.datomic.com/on-prem/dev-setup.html#create-db#2018-07-2720:24marshallto try creating a DB and connecting via a REPL#2018-07-2720:35sparkofreasonIs there any difference in the query results obtained using or vs. or-join, or is or-join more of a hint for the query compiler?#2018-07-2720:36tlima@marshall, JFYI, the REPL worked.#2018-07-2916:03henrikWhat’s the best way to handle static content in an Ion? I have a bunch of schema files that need to be available internally to the Ion. Should I stick them on S3? What’s the best way to access them from inside the Ion?#2018-07-2918:00euccastro@henrik: why not have them in the classpath?
maybe in a resource directory in your deps.edn's :paths?#2018-07-2918:00euccastrothen they'll get pushed / deployed along with your code#2018-07-2918:07henrikI tried that, and it didn’t work once it reached AWS, so I assumed that it wasn’t possible.
Do you know if there are differences in how paths are resolved locally vs. AWS?#2018-07-2918:08henrikIn both cases, I attempted to use a relative path—a folder inside resources#2018-07-2918:09euccastrowhat function did you use to load them? io/resource?#2018-07-2918:10euccastrohttps://clojuredocs.org/clojure.java.io/resource#2018-07-2918:38henrikManually written path, I’m afraid. I’ll use resource instead and see if it makes a difference.#2018-07-2918:45henrikOh yeah, that worked much better. Thank you!#2018-07-2922:28euccastroyou're welcome, glad to know it worked!#2018-07-3019:32curtosisI’m working on adding a list “tagged values” as a value and am looking for advice on how to structure it most “datomically”. Essentially, the tagged value entity has tagged-value/key and tagged-value/value, both Strings.#2018-07-3019:34curtosisI hypothetically want to be able to search by key or value, but they’re otherwise essentially freeform. They’re not logically components of the parent object.#2018-07-3019:35curtosisSo far I’ve thought of either a) making them all unique — “shared” only by key equality, or b) building them as full-fledged entities, possibly via a tx-fn that looks for an existing one.#2018-07-3019:35curtosisAnyone solved something similar, or have suggestions for what will bite me least downstream?#2018-07-3020:44Mark AddlemanHi - We have a similar requirement. We have a “key” attribute and 6 “value” attributes (string-value, keyword-value, boolean-value, etc) and a value-type attribute which tells us which value type was written#2018-07-3020:45Mark Addlemanwe then wrote a rule named kv-value which makes querying against that data structure not too painful#2018-07-3021:09curtosisah, that makes sense#2018-07-3021:11curtosisdo you create each instance as a distinct entity and then link by value in your kv-value?#2018-07-3021:36Mark Addlemanyes. 
we have a parent entity which has a cardinality/many ref to the individual key - value pairs#2018-07-3112:14curtosisSuper, that’s exactly how I have it modeled. Thanks!#2018-07-3020:43eraserhdWell, it's probably not true, but just in case, first consider that your keys are really attributes. You can add a :mything/is-tagged? true attribute to tag attributes.#2018-07-3020:49Mark AddlemanI have a performance monitoring question: In our workload, we transact 200k datoms every 30 minutes. We expect this to increase over time. I wouldn’t be surprised if we are in a position to transact 2 million datoms every 30 minutes in a month or two.
My understanding of the Datomic Cloud transact pipeline is that clients transact data into a single node. The transacting node must perform some CPU operation on the tx-data and then it writes into EFS, S3 and DynamoDB simultaneously. The transact operation completes when the transacting node is finished writing to all three storage surfaces. Is my understanding correct?#2018-07-3021:10curtosis@eraserhd yeah, that would be the easy solution, but the idea here is specifically to capture user-defined “stuff” that’s outside the formal schema attributes.#2018-07-3022:13johnj@mark340 curious, at ~110 writes per second (your current load), how is datomic handling it? That already seems too much for a single db.#2018-07-3022:15Mark AddlemanAfter a little back and forth with Datomic support (great dealing with them, BTW!), it’s handling it just fine. The trick was to raise our DynamoDB provisioning to 250 write units.#2018-07-3022:15Mark AddlemanI’m still not sure I understand the performance relationship between the Datomic log, EFS, S3 and DynamoDB. I have a question into Support about it and I expect an answer soon.#2018-07-3022:15Mark AddlemanI was hoping for an answer from the community as well 🙂#2018-07-3022:17johnjOk, nice, I haven't tried the cloud stuff, but isn't it supposed to scale automatically for you for stuff like write units?#2018-07-3022:18johnjalso, is this for a single db?#2018-07-3022:25Mark Addlemanit will scale automatically within limits that you get to set. the default limits are pretty good to get started and can be easily changed#2018-07-3022:25Mark Addlemanyes, it is for a single db.#2018-07-3022:26Mark Addlemanfyi - in Datomic Cloud, you are still limited to a single transactor per database but you can have multiple databases and thus multiple transactor nodes.
we might end up scaling that way but, as it stands right now, the single transactor seems to be holding up#2018-07-3022:39johnjwill you be able to run a single query across multiple databases if you scale up to that? or you don't need that?#2018-07-3023:08Mark Addlemanyes, datalog allows you to query multiple databases without much difficulty. i don’t know the performance implications of it, though#2018-07-3109:59octahedrionbut Datomic Cloud doesn't support multiple dbs for query yet, does it?#2018-07-3113:55Mark AddlemanI don’t know if Cloud supports it yet. I don’t remember reading about the limitation in the docs#2018-07-3114:25octahedrionlast time I tried it didn't work#2018-07-3123:47Mark AddlemanOh, interesting. I’ll add it to my list of things to investigate 🙂#2018-07-3022:41bmabeyDoes anyone know if the recommended "10 billion datum" limit has increased or if datomic cloud changes this? Trying to figure out if datomic will scale to our problem.#2018-07-3023:09Mark AddlemanI had a similar question a couple of weeks ago. My memory of the answer: Cognitect tests up to 10 billion datoms. Performance implications vary widely based on structure of your data.#2018-07-3023:10Mark AddlemanThe 10 billion is not a limit#2018-07-3108:23jeroenvandijkYeah +1. We have reached 40 billion or so with on-premise. We do see increasing issues with transactor timeouts at this level. And we haven't tried to fully understand if we can circumvent these issues. Part of the reason might be that we also have extra indices on the biggest part of these datoms. For us it's not that big of an issue as we solve the problem by starting with a fresh db when the problems get too bad (or the dynamodb costs too high). I would like to spend some more time to figure out how to scale properly to this amount some day though#2018-07-3115:01jd-white💪 Thanks guys!
This helps greatly with our capacity planning.#2018-07-3109:48henrikWhat’s the proper way of setting environment variables for Ions?#2018-07-3109:53steveb8n@henrik we had this discussion late last week. Stu implied that this question will be answered soon so, for now, we wait#2018-07-3109:55henrikExcellent!#2018-07-3109:55henrikIn the meantime I’ll proceed with a dirty and/or unsecure hack.#2018-07-3110:11steveb8nsame here. just go in an manually add the env vars to the lambdas#2018-07-3118:33eoliphantI add a ENV=XXX tag to the config, then use that to ‘root’ grabbing stuff out of the AWS param store. hacky but works, looking forward to thenative solution#2018-07-3110:32joshkhhas anyone experienced this when creating a datomic cloud client?
(d/client (get-in env [:datomic :client]))
CompilerException java.lang.IllegalArgumentException: Can't define method not in interfaces: recent_db, compiling:(datomic/client/impl/shared.clj:304:1)
#2018-07-3110:41joshkhi don't get the error with 0.8.46, but i do when i upgrade to 0.8.54+#2018-07-3113:47henrikThis is a persistent problem for me:
java.lang.Exception: namespace 'cheshire.factory' not found, compiling:(cheshire/core.clj:1:1)
Cheshire works fine locally, but pushed to AWS (Datomic Ions), it generates an error. Why might this be?#2018-07-3113:55Joe Lane@henrik jackson conflict possibly, look at the deps tree#2018-07-3113:56Joe Laneclojure -Stree#2018-07-3113:57Joe LaneCompare that list with any deps conflicts that datomic ions barks about when you push your code up.#2018-07-3113:57Joe LaneI’ve found that I dont actually need cheshire, and that clojure.data.json works well enough, however I understand thats not a great answer.#2018-07-3114:01souenzzoany news about this bug?
https://gist.github.com/souenzzo/c7b5a5434d4c04efcc58802c81b46023
https://forum.datomic.com/t/inconsistency-between-query-on-peer-and-transact/548#2018-07-3115:27henrik@lanejo01 I don’t quite get it, these are the deps listed for cheshire:
cheshire/cheshire 5.8.0
com.fasterxml.jackson.dataformat/jackson-dataformat-cbor 2.9.0
com.fasterxml.jackson.core/jackson-core 2.9.0
tigris/tigris 0.1.1
com.fasterxml.jackson.dataformat/jackson-dataformat-smile 2.9.0
They only appear for cheshire, nowhere else, including jackson.#2018-07-3115:29henrik#2018-07-3115:31Joe LaneI dont see com.datomic/client-cloud included#2018-07-3115:31henrikWhops! I might have moved that into dev deps.#2018-07-3115:32Joe LaneWell thats a bit of a problem haha#2018-07-3115:32Joe LaneAlso, for the dev deps, what version of com.datomic/ion-dev do you have?#2018-07-3115:32henrik"0.9.160"#2018-07-3115:33Joe LaneUpdate to "0.9.175", it includes as output (when you push) a collection of dependency conflicts.#2018-07-3115:34henrikWhere can I see what the most recent versions of the libs are?#2018-07-3115:34Joe LaneWhich I believe your issue is rooted in.#2018-07-3115:42henrikRight, it’s telling me explicitly about the conflicts now, that’s nice.#2018-07-3115:43henrikIt’s kind of weird that Cognitect is using outdated versions of their own libraries#2018-07-3115:54jaret@henrik we should be on 0.9.175 throughout docs and repos. May I ask where you got 0.9.160 from? so I can make sure we update that. https://docs.datomic.com/cloud/releases.html#2018-07-3115:59henrik@jaret Oh sorry, I meant the deps. It’s telling me that in Datomic Cloud, com.cognitect/transit-clj #:mvn{:version "0.8.285" is used, while I’m relying on com.cognitect/transit-clj {:mvn/version "0.8.309"}.
Or can I control which version Datomic Cloud uses?#2018-07-3116:00Joe LaneYeah, I ran into some issues with things like the priority-map library and the aws java sdk.#2018-07-3116:00henrikWell, good news is I got a different error now. java.net.ConnectException: Connection refused#2018-07-3116:01jaretSuch great news 😉. Are you sourcing your AWS creds?#2018-07-3116:02henrik@jaret Thank you! 🙂 Possibly, I have no idea what that means. What would I be doing in this case?#2018-07-3116:03jaretLet me back up. When are you getting this error?#2018-07-3116:04henrikAPI Gateway is reporting this as I’m trying to access the web server endpoint. It was working fine as of 0.9.160 and no cheshire. The house of cards started collapsing with the introduction of cheshire, alas.#2018-07-3116:07jaretAnd you pushed/re-deployed after removing cheshire?#2018-07-3116:08henrikNot yet, I’m trying that just now.#2018-07-3116:09jaret@lanejo01 btw I wanted to ask if you had a link to your ion tutorial/talk.#2018-07-3116:09jaretI was going to add it to the documentation as a community example, but was only able to find your talk notes on your github.#2018-07-3116:11Joe LaneI’ve got several (unpublished…) small projects with ions if you’re in going to start adding community examples. Do you mean a blogpost or video? Because the source code should be present.#2018-07-3116:12jaretI know we’d love to see more community examples in whatever format you have them in. Github links work as well, but I know you mentioned on twitter after your websocket IOT Ion talk that a tutorial/talk would follow.#2018-07-3116:12jaretI don’t want to put any pressure on you 🙂 Just want to make people can find the great stuff everyone is making.#2018-07-3116:13Joe Lanehaha yeah, thats on my todo list. Need a platform to host it first.#2018-07-3116:13Joe Lane(read: yak shave)#2018-07-3116:13jaretUnderstood. ✂️ 🙂#2018-07-3116:13jaretNeed a yak shave emoji#2018-07-3116:13jaretyak#2018-07-3116:18henrikNo dice, I’m afraid. 
I’ll downgrade to 0.9.160, see if that gets it up and running again.#2018-07-3116:19Joe LaneWhat is giving the connection refused error?#2018-07-3116:22henrikI’m not sure. This is as reported by the “test” tool in API Gateway.#2018-07-3116:19henrikThough the List of Reprimands is smaller:
:dependency-conflicts
{:deps
{commons-codec/commons-codec #:mvn{:version "1.10"},
org.slf4j/jcl-over-slf4j #:mvn{:version "1.7.14"},
com.fasterxml.jackson.core/jackson-core #:mvn{:version "2.9.5"},
com.cognitect/http-client #:mvn{:version "0.1.80"},
com.google.guava/guava #:mvn{:version "18.0"},
com.cognitect/s3-creds #:mvn{:version "0.1.18"},
org.slf4j/slf4j-api #:mvn{:version "1.7.14"},
com.amazonaws/aws-java-sdk-kms #:mvn{:version "1.11.349"},
com.amazonaws/aws-java-sdk-s3 #:mvn{:version "1.11.349"}},}#2018-07-3116:21rhansenI can't get ions to work 😕
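One way to act on a conflict report like the paste above is to pin the flagged libraries as top-level deps at the versions the push output names, so the classpath matches what Datomic Cloud provides. A minimal, illustrative deps.edn fragment using two versions taken from the report above; your own push output dictates the actual set to pin:

```clojure
;; Illustrative sketch only: pin top-level versions to match the ones the
;; ion push conflict report says Datomic Cloud already provides.
{:deps
 {com.fasterxml.jackson.core/jackson-core {:mvn/version "2.9.5"}
  org.slf4j/slf4j-api                     {:mvn/version "1.7.14"}}}
```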
I'm pretty certain the problem is related to my deps, but I'm unable to re-create the StackOverflow error locally after applying the deps clojure.ions warns about. 😕 (details in thread)#2018-07-3116:23rhansenMy deps.edn:
{:paths ["backend" "resources"]
:deps
{org.clojure/clojure {:mvn/version "1.9.0"}
com.datomic/ion {:mvn/version "0.9.16"}
ring/ring-core {:mvn/version "1.7.0-RC1"}
ring/ring-json {:mvn/version "0.4.0"}
email-validator {:mvn/version "0.1"}
clj-mailgun {:mvn/version "0.2.0"}
crypto-password {:mvn/version "0.2.0"}
danlentz/clj-uuid {:mvn/version "0.1.7"}
clojure.java-time {:mvn/version "0.3.2"}}
:mvn/repos {"datomic-cloud" {:url ""}}
:aliases
{:dev {:extra-deps {com.datomic/client-cloud {:mvn/version "0.8.54"}
com.datomic/ion-dev {:mvn/version "0.9.175"}
ring/ring-jetty-adapter {:mvn/version "1.7.0-RC1"}
org.eclipse.jetty/jetty-server {:mvn/version "9.4.9.v20180320"}
org.eclipse.jetty/jetty-client {:mvn/version "9.4.9.v20180320"}}}
:test {:extra-paths ["tests"]
:extra-deps {expectations {:mvn/version "2.2.0-rc3"}}}}}
#2018-07-3116:24rhansenthe error message:
"Msg": "IonLambdaDispatcherFailedtoStart",
"Ex": {
"Cause": null,
"Via": [
{
"Type": "clojure.lang.Compiler$CompilerException",
"Message": "java.lang.StackOverflowError, compiling:(potemkin/namespaces.clj:88:34)",
"At": [
"clojure.lang.Compiler",
"analyzeSeq",
"Compiler.java",
7010
]
},
{
"Type": "java.lang.StackOverflowError",
"Message": null,
"At": [
"clojure.lang.RestFn",
"applyTo",
"RestFn.java",
130
]
}
],
#2018-07-3116:30Joe LaneLook in clojure -Stree for a dependency that requires potemkin and then remove it. See if the issue persists. That lib does some crazy stuff and I wouldn’t be surprised if its causing issues.#2018-07-3116:46henrikI see clj-http depends on potemkin 😕#2018-07-3116:27henrik@lanejo01 Do you track ion-starter to keep up to date with the releases? Or is there a release page somewhere?#2018-07-3116:29Joe Lanehttps://docs.datomic.com/cloud/releases.html#2018-07-3116:29Joe Lane@henrik ^^#2018-07-3116:39henrikOh man, what’s going on?
java.lang.Exception: namespace 'clj-http.headers' not found, compiling:(clj_http/core.clj:1:1)
#2018-07-3116:52jaret@henrik can you try the base tutorial? That looks like you may have some basic auth in your app?#2018-07-3116:52jarethttps://docs.datomic.com/cloud/ions/ions-tutorial.html#2018-07-3116:53jaretOr are you getting that error with the base tutorial and ion-starter git repo?#2018-07-3116:55henrik@jaret No, this is after moving some stuff over from another project. It includes things that talk to Google Cloud, hence cheshire and clj-http.#2018-07-3116:58henrikAre some libraries unsupported in Ions? I’m not sure I understand the patterns behind what works and not.#2018-07-3117:12henrikI ripped out the deps I added from my other project, and now it works fine.
Some or all of these are the culprit(s):
buddy/buddy-sign {:mvn/version "3.0.0"}
environ {:mvn/version "1.1.0"}
org.clojure/data.json {:mvn/version "0.2.6"}
clj-http {:mvn/version "3.9.0"}
clj-time {:mvn/version "0.14.4"}#2018-07-3117:18kennyHow do you guys approach testing with Datomic Cloud? I see Stu mentioned here (https://forum.datomic.com/t/integration-testing/465) that he has some "tricks" to unit test with Datomic, but I have not seen any posts about that. Our approach right now is to create a new db with a unique suffix on each test run, deleting the db after each test completes. This ensures a clean DB for each test. Is there a better approach to this? With the peer library you could simply create a new in-memory connection for each test, allowing you to use your normal DB names without a suffix. It'd be great if there was something similar for Datomic Cloud. Is there a more reasonable solution to this problem?#2018-07-3118:40eoliphantwe do pretty much the same thing. Moved a clj/datomic microservice into ion code, moved to client api, then just updated my test fixture that would dynamically create and tear down an in-mem db to do what you described. Pretty much just worked#2018-07-3122:34steveb8nyou might try this https://gist.github.com/stevebuik/9b219090a2d10cc4fb06d62ee928ca7e#2018-07-3122:35steveb8nit’s not blessed by Cognitect (yet) but works well for me. I can reproduce all cloud behaviours in local/mem db (at least those that I need)#2018-07-3122:35kennyI just wrote an (almost) exact version of that haha#2018-07-3122:36kennyhttps://github.com/ComputeSoftware/datomic-client-memdb#2018-07-3122:38steveb8nawesome#2018-07-3117:45ghadiFavorite datomic-ish ER tool besides Omnigraffle?#2018-07-3117:51Joe Lane@ghadi https://github.com/bmaddy/gadget may be what you’re looking for?#2018-07-3118:29henrikSo, buddy-sign, unfortunately has this dep tree:
buddy/buddy-sign 3.0.0
buddy/buddy-core 1.5.0
org.bouncycastle/bcprov-jdk15on 1.59
commons-codec/commons-codec 1.11
org.bouncycastle/bcpkix-jdk15on 1.59
cheshire/cheshire 5.8.0
com.fasterxml.jackson.dataformat/jackson-dataformat-cbor 2.9.0
com.fasterxml.jackson.core/jackson-core 2.9.0
tigris/tigris 0.1.1
com.fasterxml.jackson.dataformat/jackson-dataformat-smile 2.9.0
net.i2p.crypto/eddsa 0.3.0
Meaning it contains cheshire, which is Forbidden.
java.lang.Exception: namespace 'cheshire.factory' not found, compiling:(cheshire/core.clj:1:1)
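One possible workaround for a transitive conflict like this is an :exclusions entry in deps.edn: keep buddy-sign but drop its transitive jackson-core, then pin jackson-core top-level at the version named in the conflict report earlier in the thread (2.9.5), so cheshire loads against a single jackson. A sketch only, untested against this exact setup:

```clojure
;; Sketch: exclude buddy-sign's transitive jackson-core and pin it
;; top-level at the version the ion push conflict report named, so only
;; one jackson ends up on the classpath.
{:deps
 {buddy/buddy-sign {:mvn/version "3.0.0"
                    :exclusions  [com.fasterxml.jackson.core/jackson-core]}
  com.fasterxml.jackson.core/jackson-core {:mvn/version "2.9.5"}}}
```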
#2018-07-3118:31henrikIs it just a fact of life that anything referencing cheshire is going to blow up when it reaches the Cloud?#2018-07-3118:42chrisblomcan you exclude it?#2018-07-3118:47henrikI do need to sign jwt tokens. Maybe there’s another lib I can use.#2018-07-3118:43eoliphanti’m using cheshire as a top level dep#2018-07-3118:44eoliphantno probs so far#2018-07-3118:48henrikBut how! Would you mind sharing your deps.edn?#2018-08-0102:06eoliphantsorry just saw. did the upgrade to 402 fix your problem?#2018-08-0104:37henrikYes, it did, thank you!#2018-07-3118:58Joe LaneThe issue is due to that particular version of cheshire having a jackson dep conflict with the version of jackson-core used in datomic-cloud.#2018-07-3118:58jaret@henrik https://docs.datomic.com/cloud/releases.html#402-8396#2018-07-3118:58jaretAre you on that CFT?#2018-07-3118:59jaretWe updated the jackson libs in 402-8396.#2018-07-3119:00jaretYou’ll want to upgrade your Datomic Cloud System to that and I think you should be fine to include Cheshire. https://docs.datomic.com/cloud/operation/upgrading.html#2018-07-3119:05henrikAh, no, 397. That might be it!#2018-07-3119:01johnj@jaret observation: datomic-free hasn't been updated to the latest version in clojars https://clojars.org/com.datomic/datomic-free#2018-08-0104:49henrik@jaret @lanejo01 Thank you for your patience, the upgrade to 402 seems to have resolved the problems.
Forrest Gump was right—AWS really is like a box of chocolates.#2018-08-0110:32olivergeorgeI'd love an AWS support group. There's so much to get your head around. So many ways to do things. Say #ionaws#2018-08-0110:39henrik@olivergeorge Great idea, and I think you’re free to set one up. We just have to trick the people who actually know stuff into coming there as well.#2018-08-0114:40Joe Lane@olivergeorge @henrik I’d be happy to join that channel and help there and ask for help there#2018-08-0114:49henrikI named it #ions-aws#2018-08-0115:08Joe Lane:+1:#2018-08-0119:14currentoorI’m following the instructions here
https://docs.datomic.com/on-prem/aws.html
but I get this VPC error
The security group 'datomic' does not exist in default VPC 'vpc-XXXXXX' (Service: AmazonAutoScaling; Status Code: 400; Error Code: ValidationError
#2018-08-0211:30marshallHave you run the ensure-transactor script?#2018-08-0211:30marshallhttps://docs.datomic.com/on-prem/storage.html#automated-setup#2018-08-0221:44currentoor@U05120CBV i’m using postgres, not dynamoDB#2018-08-0221:44currentoori thought that script was only for dynamoDB?#2018-08-0313:41marshallYes, it is for using DynamoDB, but it also sets up all the required AWS infrastructure (i.e. roles/permissions)#2018-08-0313:41marshallerr- actually ensure-cf might be the one that does that#2018-08-0313:41marshalldid you run it?#2018-08-0313:43marshallYou’ll need to run it with AWS credentials that can do things like create security groups, etc#2018-08-0318:05currentooryes i did run, with AWS credentials, it printed success and i see the transactor instance in my EC2 instance table#2018-08-0318:06currentoorbut it keeps stopping and restarting#2018-08-0318:06currentoorClient.InstanceInitiatedShutdown: Instance initiated shutdown#2018-08-0318:06currentoori read online that this might be because of not enough memory, so i bumped the instance type upto t2.large#2018-08-0318:06currentoorbut still no luck#2018-08-0319:20marshallthe transactor instance does?#2018-08-0319:20marshallrestart that is#2018-08-0320:08currentooryes#2018-08-0320:14currentoori’m able to get a local transactor to connect to this postgres storage just fine#2018-08-0320:15currentoori’m trying to just connect via my own ec2 instance instead of the prebuilt AMI#2018-08-0320:59marshallYoud need proper security group configured for the instance you are running#2018-08-0400:16currentoorok i’m getting closer 😅#2018-08-0400:16currentooryou were right i needed proper security group configurations#2018-08-0405:55currentoori think i’m almost there, just one last issue left#2018-08-0405:55currentoori detailed what i’m seeing here#2018-08-0405:55currentoorhttps://forum.datomic.com/t/unable-to-connect-to-transactor-from-to-ec2/568#2018-08-0420:21currentoorI figured it out, it was a misconfigured host, 
needed to use the public DNS#2018-08-0420:22currentoori documented my solution in that same forum post#2018-08-0420:26currentoorso since the public DNS is dynamically determined for me, how do you recommend scripting it?#2018-08-0420:29currentoordo i need to modify the *.properties file i pass to bin/transactor or is there a way to pass it as an argument?#2018-08-0119:18currentoorAm I supposed to make a security group called datomic?#2018-08-0207:06dominicmI'm not using ions, but I'm just curious (because I'm also using CodeDeploy). What role does AWS Step Functions take in the process?#2018-08-0207:27henrikI have no idea of the actual answer, but would like to invite you to #ions-aws.
We set it up because there’s a lot to figure out with AWS/Ions, and we’re kind of saturating the #datomic channel. 🙂#2018-08-0210:22henrikWhen creating an entity where some parts may or may not be nil, what’s good practice? Do I transact regardless and store nil, or should I weed out the nils beforehand?#2018-08-0210:27alexmillerI believe it’s invalid to transact a nil value for an attribute#2018-08-0210:28alexmillerIdeally in Clojure it’s best to just avoid having nil values in an entity in the first place#2018-08-0210:34henrikYep, it blew up ^_^
I’m considering writing a protocol that just rips out anything that’s nil and wrapping all transactions in it.
Can you think of any reason why this would be a bad idea?#2018-08-0211:31henrikResult: https://gist.github.com/eneroth/81fe5acf0aab82c355889f28887e08ca#2018-08-0211:57henrikActually, blows up on UUIDs.#2018-08-0213:43alexmillerI promise you that you’ll be happier 6 months from now if you spend the time to avoid making nil attribute values in the first place#2018-08-0213:43alexmillerthen blowing up on nils is a feature, not a problem#2018-08-0213:46henrikI’m not sure I get that choice. I’m converting wildly varying XML blobs from scholarly publishers and sticking them in Datomic. Sometimes stuff is missing. Sometimes VITAL stuff is missing.
I only get the choice of checking attribute by attribute, or the entire thing at the same time.#2018-08-0213:49alexmillerif stuff is missing, just don’t make an attribute?#2018-08-0213:52alexmilleryou may need to rework some ingest code, but I promise you from experience this is a path that will result in less code and less pain in the long run#2018-08-0213:53henrikYeah, maybe my pipeline is wonky.
So I do,
1. XML -> EDN (generic)
2. EDN -> EDN (for Datomic)
3. Transact.
So step two, can look like this, for example:
(defn prepare-data [{:keys [external-ids issn website title description]}]
(let [online-issn (:online issn)
print-issn (:print issn)]
[{:journal/id (java.util.UUID/randomUUID)
:journal/external-ids (mapv prepare-external-id external-ids)
:journal/issn-print {:identity/issn print-issn}
:journal/issn-online {:identity/issn online-issn}
:journal/website {:internet/URL website}
:journal/title title
:journal/description description}]))#2018-08-0213:54henrikEssentially, take a bunch of stuff and stick them in a template.#2018-08-0213:55henrikSo, is it preferable to wrap each attribute in a conditional?#2018-08-0213:56henrikOr a list of conditional assocs?#2018-08-0214:18alexmillercond-> tends to be very helpful in stuff like this#2018-08-0214:20alexmiller(cond-> {:journal/id (java.util.UUID/randomUUID)} ;; etc, the part that's always there
;; check each optional thing and assoc if needed
print-issn (assoc :journal/issn-print {:identity/issn print-issn})
online-issn (assoc :journal/issn-online {:identity/issn online-issn}))#2018-08-0214:22alexmillerwhich you can read as:
start with an init map
if print-issn exists, assoc it into the map (otherwise pass along)
if online-issn exists, assoc it into the map (otherwise pass along)#2018-08-0214:22alexmillerthe starting object threads into the first arg of assoc#2018-08-0214:23alexmilleronce you’ve seen this form a couple times, it becomes very easy to read#2018-08-0214:28Joe LaneI've also been using this before transacting data (into {} (remove #(nil? (second %)) some-map))#2018-08-0214:28Joe Lanecould probably be converted to work with nested data#2018-08-0214:29alexmilleras I said above, I promise you the better long-term strategy across your app is to avoid ever creating or passing around nil attributes in the first place#2018-08-0214:30alexmillerit requires a little more up-front care but Clojure and Datomic both come from an aesthetic where that is preferred#2018-08-0214:31Joe LaneAgreed about just not making nils. I made the mistake initially and have regretted it in record time. Luckily I may still have time to fix it.#2018-08-0214:37henrikI would love to, but to do that I would have to make my case with those third parties 🙂#2018-08-0214:37henrikWhy is cond-> preferable to just wiping out the nils in one go? Is it more obvious what’s going on?#2018-08-0214:40alexmillerinevitably you need to handle nested nils too - then you’re doing recursive walks and modification of your data#2018-08-0214:40alexmillerinstead of making something, then removing parts of it, it is better to just make what you want in the first place#2018-08-0214:41alexmillerif you’re taking data from external sources, then that’s not an option of course#2018-08-0214:57henrikThank you for the advice @U064X3EF3 🙂#2018-08-0214:42Mark Addlemani have a datalog query against datomic cloud that takes in a large number of entity ids as a parameter. the :in clause has a binding like [?entid ...] and each iteration produces a result that is independent of the other iterations (there’s no join across the entity ids in the in clause - i’m pretty sure that’s not possible anyway).
so, from a performance standpoint, which is better: executing as a single query or breaking the entity ids up into batches and executing the queries in parallel (this risks overloading datomic and getting back :cognitect.anomalies/busy and retrying).#2018-08-0214:42Mark Addlemani’ve been trying different query strategies but datomic’s awesome caching makes getting consistent timings across runs pretty difficult 🙂#2018-08-0214:44Mark Addlemanif the datomic query planner doesn’t already do this, it would be awesome if it could detect this situation and execute the query in parallel automatically 🙂#2018-08-0216:30favilaDon't quote me on this but I am pretty sure clauses of a query are evaluated in parallel already#2018-08-0216:31favilaunless you can get parts of the query running on different machines I don't think there's any advantage to splitting#2018-08-0216:32favilamake sure the parameter you destructure as [?entid ...] is a true vector#2018-08-0216:32favilanot a set or seq or something that doesn't allow direct index access#2018-08-0216:40Mark Addleman> evaluated in parallel
that is in line with my early exploration. increasing the parallelism on the client side does not seem to improve overall query performance.#2018-08-0216:41Mark Addleman> make sure the parameter you destructure as [?entid ...] is a true vector
interesting. why does direct index access matter? i would have thought simply being a seq would be sufficient#2018-08-0216:51favilaunder the hood is the clojure.core.reducers/CollFold protocol, which has an efficient parallel implementation for vectors but not for seqs#2018-08-0216:52favilathe parallelism in query is most likely implemented with reducers#2018-08-0217:59currentoorhas anyone here set up datomic with AWS RDS postgres? i’ve been struggling for two days on this issue 😅
AWS doesn’t give you a postgres user, so I modified the setup SQL scripts to do this
CREATE DATABASE datomic
WITH OWNER = currentoor
TEMPLATE template0
ENCODING = 'UTF8'
-- TABLESPACE = pg_default
LC_COLLATE = 'en_US.UTF-8'
LC_CTYPE = 'en_US.UTF-8'
CONNECTION LIMIT = -1;
CREATE TABLE datomic_kvs
(
id text NOT NULL,
rev integer,
map text,
val bytea,
CONSTRAINT pk_id PRIMARY KEY (id )
)
WITH (
OIDS=FALSE
);
ALTER TABLE datomic_kvs
OWNER TO currentoor;
GRANT ALL ON TABLE datomic_kvs TO currentoor;
GRANT ALL ON TABLE datomic_kvs TO public;
CREATE ROLE datomic LOGIN PASSWORD 'datomic';
Just replaced postgres owner with my user and commented out the table space line (it defaults creating datomic to pg_default anyway).
All this appears to work, but trying to spin up a transactor locally connected to this postgres instance results in
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver ...
System started datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver
Critical failure, cannot continue: Lifecycle thread failed
java.util.concurrent.ExecutionException: org.postgresql.util.PSQLException: ERROR: relation "datomic_kvs" does not exist
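The root cause surfaces a few messages later: datomic_kvs had been created in mydatabase instead of in datomic. A sketch of the corrected psql steps, reusing the DDL from the script above:

```sql
-- Sketch: connect to the datomic database first so the table is created
-- there, not in whatever database the psql session started in.
\c datomic
CREATE TABLE datomic_kvs (
  id  text NOT NULL,
  rev integer,
  map text,
  val bytea,
  CONSTRAINT pk_id PRIMARY KEY (id)
);
```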
#2018-08-0218:09currentoorin the psql console i see this
mydatabase=> \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
------------+------------+----------+-------------+-------------+---------------------------
datomic | currentoor | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
mydatabase | currentoor | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
rdsadmin | rdsadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | rdsadmin=CTc/rdsadmin
template0 | rdsadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/rdsadmin +
| | | | | rdsadmin=CTc/rdsadmin
template1 | currentoor | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/currentoor +
| | | | | currentoor=CTc/currentoor
(5 rows)
mydatabase=> \d
List of relations
Schema | Name | Type | Owner
--------+-------------+-------+------------
public | datomic_kvs | table | currentoor
(1 row)
#2018-08-0218:10currentoorso datomic_kvs definitely exists#2018-08-0218:16currentooroh looks like i might have figured out my mistake#2018-08-0218:17currentoori was creating the relation datomic_kvs in mydatabase, i needed to make it in the datomic database#2018-08-0219:33ghadiWe're having trouble starting another Datomic Cloud cluster -- we have an older one, but the new one in US-East-2 is running into CF resource failures#2018-08-0219:34ghadiIs there a better place to ask questions? I have CF failure screenshots.#2018-08-0220:27Joe Lane@ghadi Silly question, do you have more than 5? I ran into an issue where I wasn’t able to have more than 5 at a time (never resolved it, just spun down some clusters)#2018-08-0221:07eoliphantare cross db queries working in cloud?#2018-08-0404:29henrikNot yet#2018-08-0222:14kennyHow do you get the basis-t in Datomic cloud?#2018-08-0222:15solussd@kenny lookup :t in your db val#2018-08-0222:16solussde.g., (:t db)#2018-08-0222:23kenny@solussd Thank you.#2018-08-0314:48rhansenAnyone else here have a problem when using cast/initialize-redirect :stdout in cider?#2018-08-0314:48rhansenworks when using the repl from CLI tools, fails within cider 😕#2018-08-0320:17matthaveneris it possible to create an entity in datomic with no attributes, only refs to it?#2018-08-0320:19matthavenerEg I have two entities, 123 and 456, and a :db.cardinality/one :db.type/ref attribute. If I transact [[:db/add 123 :my/ref "foo"] [:db/add 456 :my/ref "foo"]] it fails#2018-08-0320:39Joe LaneWhat would it be a ref to?#2018-08-0320:40Joe LaneAren’t you just saying entities 123 and 456 have the string "foo"?#2018-08-0320:40Joe LaneAt that point isn’t "foo" just a value instead of an entity?#2018-08-0320:44matthavenerIt'd be a ref to an empty entity. I'm guessing the answer is just "you can't have an empty entity with only incoming references".#2018-08-0320:48Joe LaneYeah, I mean thats your answer, because it would be kind of like a null entity.
I was more questioning what in your domain model would this empty ref represent and, if its a string like you proposed, should you think of "foo" as a static value instead of an entity. I reserve entities for things that change over time but still need a stable id of some sort.#2018-08-0320:50marshallEntities are just a number. You can definitely have an "empty" entity with only 'incoming' refs#2018-08-0320:50Joe LaneOh! Huh.#2018-08-0320:50Joe LaneSorry for adding confusion and misinformation then.#2018-08-0320:50marshallTry your example using the map syntax#2018-08-0320:53marshall{:db/id "foo"}
{:db/id 123
:my/ref "foo"}
{:db/id 456
:my/ref "foo"}
#2018-08-0320:55marshallYou could also do list form. Add to your example above:
[:db/add "foo" :db/id "foo"]#2018-08-0320:55marshallI believe that last will work. I'm on my phone so I cant check it atm#2018-08-0320:56marshall@matthavener ^^#2018-08-0320:57marshallNote to self: figure out how to run a REPL on my phone :D#2018-08-0320:59matthaveneruser=> (d/with db' [[:db/add 17592186046851 :my/ref "foo"] [:db/id 17592186046852 :my/ref "foo"] [:db/add "foo" :db/id "foo"]])
IllegalArgumentExceptionInfo :db.error/not-a-data-function Unable to resolve data function: :db/id datomic.error/arg (error.clj:57)
#2018-08-0320:59matthavenertrying map syntax#2018-08-0321:00marshallYou need a db/add in the middle list#2018-08-0321:00matthaveneroh! geez#2018-08-0321:00matthaveneruser=> (d/with db' [[:db/add 17592186046851 :my/ref "foo"] [:db/add 17592186046852 :my/ref "foo"] [:db/add "foo" :db/id "foo"]])
IllegalArgumentExceptionInfo :db.error/not-an-entity Unable to resolve entity: :db/id datomic.error/arg (error.clj:57)
#2018-08-0321:01matthavenermap version#2018-08-0321:01matthaveneruser=> (d/with db' [{:db/id "foo"} {:db/id 17592186046851 :my/ref "foo"} {:db/id 17592186046852 :my/ref "foo"}])
IllegalArgumentExceptionInfo :db.error/tempid-not-an-entity tempid used only as value in transaction datomic.error/arg (error.clj:57)#2018-08-0321:02matthavenerfwiw this is not a huge issue, we can add a token attribute onto "foo". I just thought it was weird 🙂#2018-08-0322:11favilaYou can also try reversing the attribute. i.e. redesign the attribute so that the currently empty entity contains forward-ref to other things#2018-08-0322:11favilathere's no difference in size or speed between forward and reverse attrs#2018-08-0322:12favilathe only differences are ergonomic, * in pull expressions and d/touch won't look for them#2018-08-0322:12favila(but my suggestion won't work if you're trying to enforce cardinality)#2018-08-0321:02marshallIt should work. I'll try to look into it this weekend #2018-08-0321:03marshallCloud or onprem? And version?#2018-08-0321:03matthavenerom-prem, 0.9.5561#2018-08-0321:03marshallOk#2018-08-0322:05favilano this is correct, you can't use a tempid without using it as an :e in some assertion in the tx#2018-08-0322:06favilaconsider reversing the attribute?#2018-08-0322:06kennyWhy does datomic.client.api/transact not throw an error when called like this?
(d/transact conn [{:db/id bob-id
:user/name "bob"}])
This has bitten me so many times.#2018-08-0322:06favilawhat is bob-id?#2018-08-0322:06favilaI'm guessing it resolves to some value that would be legal there?#2018-08-0322:07kennyA :db/id. Doesn't matter for this. transact needs to provide feedback on the incorrect call.#2018-08-0322:07favilawhat is incorrect?#2018-08-0322:07kennyShould be:
(d/transact conn {:tx-data [{:db/id bob-id
:user/name "bob"}]})
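Until the client API validates its argument shape itself, a small check up front catches the peer-style call before it silently transacts nothing. A minimal sketch (`valid-tx-arg?` is a hypothetical helper, not part of the Datomic API):

```clojure
;; Hypothetical helper: validate the arg-map shape the Cloud client's
;; d/transact expects before handing anything to it.
(defn valid-tx-arg? [arg-map]
  (and (map? arg-map) (sequential? (:tx-data arg-map))))

(valid-tx-arg? {:tx-data [{:db/id "x" :user/name "bob"}]}) ; correct client shape
(valid-tx-arg? [{:db/id "x" :user/name "bob"}])            ; peer-style vector, rejected
```

Wrapping `d/transact` to throw when this returns false turns the silent no-op into an immediate error.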
#2018-08-0322:07favilaoh cloud api#2018-08-0322:07favilasorry#2018-08-0322:08kennyYes. And I'm so used to the peer API I always forget to call it correctly.#2018-08-0322:08favilaforever confused that they use the same prefix by convention#2018-08-0322:08favilayes that is terrible#2018-08-0322:08favilait should error#2018-08-0322:08favilamap? predicate at least#2018-08-0322:09favilaif you give a non-sequential to the peer d/transact it errors#2018-08-0322:09favilacloud should do likewise if it gets non-map#2018-08-0322:09kennyI agree 100%. Right now the transaction succeeds, only transacting :db/txInstant.#2018-08-0411:09rhansenIt seems that when returning a header value that is a seq of strings, a web app in datomic ion will crash. I filed a bug with ring (their wrap-cookie middleware returns a seq of strings for the Set-Cookie header), but they said that it's proper ring behaviour to do that.
Is it within ions scope to support this?#2018-08-0415:45Mark AddlemanI don’t know the answer but several Ions related people are hanging out on #ions-aws . You might want to repost there#2018-08-0704:37sho@U0H2CPW6B You can get around that cookie issue with [this library](https://github.com/euccastro/expand-headers) by @U65FN6WL9.#2018-08-0419:23Desmondcan someone please recommend a migration tool to manage running schema migrations exactly once? also what is the drawback to running these migrations more than once? i occasionally do it with my dev database and i haven't noticed any problems.#2018-08-0502:01donaldballSome folk like conformity for this. Assuming your database schema contains the schema datoms you’re transacting, the only consequence of which I’m aware is taking some transactor time and a noop txn in your history.#2018-08-0502:09DesmondThose seem like trivial costs. I think i'll stick with my redundant schema migrations. thanks @U04V4HWQ4#2018-08-0507:31val_waeselynck@U7Y912XB8 Datofu also provides helpers for that https://github.com/vvvvalvalval/datofu#managing-data-schema-evolutions#2018-08-0605:02Desmond@U06GS6P1N thanks i'll check it out#2018-08-0419:56Desmondhow does the :db/id used in schema alteration (https://docs.datomic.com/cloud/schema/schema-change.html) relate to the :db/ident in schema definition (https://docs.datomic.com/cloud/schema/defining-schema.html)?#2018-08-0419:57Desmondwould i use the value from :db/ident in the definition for :db/id in the alteration?#2018-08-0419:58Desmondi suppose i could just go check a schema datom's :db/id#2018-08-0421:16Desmondthe :db/id is just a number...#2018-08-0421:17Desmondon another unrelated note: during restoration I'm getting a throughput error: The level of configured provisioned throughput for the table was exceeded. 
Consider increasing your provisioning level with the UpdateTable API.#2018-08-0421:17DesmondIs there a way to slow down backup-db?#2018-08-0503:48DesmondI'm finding pull syntax dramatically slower (4x) than binding all the return values i want with where clauses. Any ideas why that might happen?#2018-08-0512:24rhansenAnyone experienced this error?
No implementation of method: :value-size of protocol: #'datomic.cloud.tx-limits/ValueSize found for class: java.lang.Integer
It only happens in cloud during a specific transaction. I cannot reproduce this locally 😕#2018-08-0614:16Joe LaneI ran into this exact thing and it cost me a day of confusion not understanding what I was doing wrong. I think maybe we should aggregate these into a Common Error Messages FAQ?#2018-08-0514:43marshall@rhansen that's a known issue with transacting a java Integer. Fix pending. #2018-08-0517:25rhansenAhh, ok. Should be an easy fix 😊#2018-08-0518:01rhansenI realize now that the previous statement can sound a bit rude. I meant it should be easy to work around on my end, not that it should be an easy bug for you to fix on your end 😅#2018-08-0519:07austinbirchHello all, Hopefully simple question about Datomic Cloud.
If I’m understanding the Datomic Cloud documentation on pricing correctly, you can run Datomic Solo Topology for $1/day, but if you want to scale (for either performance/capacity or HA) the next step would be to switch to Datomic Production Topology which would start at about $15/day (two i3.large instances). Is that reading correct?#2018-08-0522:06DesmondSo I want to pass a pattern to a pull using :in $ pattern and the pattern has a list of attributes like [:input/name :input/value] except only the first attribute is coming back. If i hardcode the pattern with pull ?e [:input/name :input/value] everything works as expected. I just upgraded to version 0.9.5703 after reading @marshall's comments in this thread: https://clojurians-log.clojureverse.org/datomic/2018-03-15#2018-08-0522:08Desmondultimately my goal is to pull using an om-next ast sent from the client. not sure if there are workarounds here but i'm surprised that passing the pattern as an argument works differently than hardcoding the pattern.#2018-08-0523:07rhansen@austinbirch yes. But if you use reserved instances that price goes down by a noticable amount.#2018-08-0600:49euccastroFWIW IMHO that's still a bit of a leap. there are applications that don't need a lot of processing power but where you still want high availability. I think there's room for a plan like the production topology just with smaller instances#2018-08-0608:34austinbirchThanks @rhansen. @euccastro Yeah, agree. Thinking that I’ll end up in the situation you describe.#2018-08-0609:02jmingtanDoes Datomic cloud licensing permit downgrading the instances after they have been launched? 
Or likewise to upscale the instances#2018-08-0609:03jmingtanI could only find this forum thread https://forum.datomic.com/t/datomic-cloud-provisioning-options/364/5 that mentioned that launching the AMI by hand is possible, although not officially supported#2018-08-0613:58jaret@jmingtan Datomic Cloud is only supported, and will only function correctly, when run on the instance types offered as options in our cloud formation templates, e.g. solo will only run on t2.small and prod on i3.large#2018-08-0614:03jmingtan@jaret I see, thank you!#2018-08-0614:20Joe Lane@captaingrover I unfortunately don’t have an answer for you, however I’m really interested in your om-next ast project. Is it for a public project?#2018-08-0803:52Desmondno worries. the issue is resolved now. classic user error. the project is not public but if i get things figured out i might make an open-source demo version.#2018-08-0803:54Desmondi'm really inspired by Clients in Control https://docs.datomic.com/on-prem/videos.html#2018-08-0803:56Desmondi work with a lot of enterprise clients that have this weird instinct to always push logic deeper into the system. i find the opposite, putting logic as close to the user as possible, much more conducive to agility in response to user feedback.#2018-08-0615:05timgilbertJust noticed that the datomic logo in the top-nav of http://docs.datomic.com links to https://datomic.com/ which seems to have an incorrectly-configured HTTPS certificate#2018-08-0615:08conanthink i'm missing something, but if i'm running on-prem using the cloudformation template, is there any way i can make changes to my cluster? for example, i want to enable cloudwatch, or add memcached; how do i do this without just creating an entirely new cluster? 
or is that what i should do?#2018-08-0615:58conando i just create a new template using the command-line tools and paste that into the Update Stack function in the AWS console?#2018-08-0617:33leblowlHey I have run into this problem a couple times now, basically I want to get a list of entities and their values for a specific :db.cardinality/many attribute. Given a query like this
(d/q '[:find ?e ?v
:in $ [?e ...] ?a
:where [?e ?a ?v]]
db
eids
attrib)
It should return a list of 2-tuples, eg: [[1 "yo"] [1 "dude"] [2 "cat"] [2 "nip"]]. However I would like to retrieve the data like so: [[1 ["yo" "dude"]] [2 ["cat" "nip"]]]. Is there any way to do this with vanilla DatomicQL, or will I need to accumulate results like this in my app code? Thanks!#2018-08-0618:36chrisblom(d/q '[:find ?e (set ?v)
:in $ [?e ...] ?a
:where [?e ?a ?v]]
db
eids
attrib)
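The aggregate above collapses each group inside the query itself; if you would rather keep the plain relation result, accumulating in app code is a few lines. A sketch (`group-vals` is a hypothetical helper):

```clojure
;; Group [entity value] 2-tuples into {entity [values...]},
;; preserving the encounter order of the values.
(defn group-vals [tuples]
  (reduce (fn [acc [e v]] (update acc e (fnil conj []) v))
          {}
          tuples))

(group-vals [[1 "yo"] [1 "dude"] [2 "cat"] [2 "nip"]])
;; => {1 ["yo" "dude"], 2 ["cat" "nip"]}
```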
#2018-08-0618:36chrisblomUsing vector or list might also work (instead of set)#2018-08-0619:23leblowl@chrisblom Thanks, interesting that that works. I was trying something like :find ?e [?v ...]#2018-08-0619:52curtosisis there something like d/get-some available in transactions? I need to set up a ref while loading some data where in this data I know the lookup value resolves to a unique entity, even though the attribute itself isn’t unique.#2018-08-0705:22olivergeorgeWhat's a good way to reset the database to a known state for integration testing for an Ions app? Thinking it through I could use (do (delete-database...) (create-database ...) (transact fixtures ...)). I wondered if there was something like a (clone-database ...) or (reset-db-to tx-id) which might simplify restoring to a known state. Clearly this is somewhat at odds with providing an immutable db.#2018-08-0709:59steveb8nI use the datomock lib. it makes a big difference in speed when integration testing. only works with peer/mem db. it’s one option that works for me#2018-08-0710:01olivergeorgeYeah. Something like that for cloud would be useful. #2018-08-0800:00steveb8nI found a way to have both. You can proxy the client conn/db and use mem underneath. then you can use datomock with client/cloud code.#2018-08-0800:00steveb8nI created a gist for this and there’s an early lib out there as well#2018-08-0705:29orestisI’ve been discussing Datomic with my CTO and his biggest fear is Cognitect being a relatively small company combined with the closed source nature of Datomic. Is there any documentation that tries to address this kind of fear? Is there way to get the data out in a usable form? Can we sign a source code access agreement?#2018-08-0705:29orestisThis is more specific to Datomic cloud, I guess. 

#2018-08-0711:41alexmillerDatomic sales would be the best people to talk to about all that#2018-08-0712:12orestisThanks @alexmiller#2018-08-0715:20okocimIn some of the examples on Github, I see that the client and connections are
memoized as such:
(def get-client
(memoize #(d/client
{:server-type :ion
:region "us-east-1"
:system "stu-8"
:query-group "stu-8"
:endpoint ""
:proxy-port 8182})))
(def get-http-client (memoize #(http/create {})))
(def db-name "ion-event-example")
(def get-conn (memoize #(d/connect (get-client) {:db-name db-name})))
Is memoizing the call to d/client ok to do in general, or is this just something
that was done for development purposes?
Likewise, would one want to memoize the call to d/connect?#2018-08-0716:21marshall“Datomic connections do not adhere to an acquire/use/release
pattern. They are thread-safe and long lived. Connections are
cached such that calling datomic.api/connect multiple times with
the same database value will return the same connection object.”#2018-08-0716:21marshallhttps://docs.datomic.com/on-prem/clojure/index.html#datomic.api/connect#2018-08-0716:22marshallhttps://docs.datomic.com/cloud/client/client-api.html#connection#2018-08-0716:23marshall@okocim ^#2018-08-0716:24okocim@marshall thank you#2018-08-0719:40curtosisI find myself building up a lot of tx maps with (merge m (when x {:attr-x x}) (when y {:attr-y y})) … this must be a common datomic pattern? how do y’all normally handle that idiomatically?#2018-08-0719:40curtosisI guess that’s not datomic-specific, but it comes up a lot in building txes.#2018-08-0719:46alexmillercond-> is your friend#2018-08-0719:46okocim@curtosis I find myself using this pattern quite a bit as well. I personally prefer using:
(cond-> {}
x (assoc :attr-x x)
y (assoc :attr-y y))
over merge, but it seems like it's essentially the same. I've also created some macros in the past to expand this for me, if I'm able to extract a common pattern#2018-08-0719:46alexmillerjinx#2018-08-0719:47curtosislol @okocim I had in the back of my head that it looked a little cond->ish#2018-08-0719:47alexmillercond-> both reads better and is faster so definitely preferred#2018-08-0719:47curtosisthx#2018-08-0719:48curtosisok, now a definitely datomic-related question… I'm using the "best practice" async tx-pipeline and sometimes I'll get no actual elements transacted, even though the batch looks fine.#2018-08-0719:49curtosisordinarily I'd assume they were already-asserted items, but I don't see them in the datomic console#2018-08-0719:49curtosiswhat should I be looking for to troubleshoot?#2018-08-0722:20csmin on-prem, can you call a transaction function from within a transaction function? My use case was to extend a transaction function :foo/bar with a :foo/bar2, which took the results from the original :foo/bar, but added more to it#2018-08-0722:44marshallYes, you can @csm #2018-08-0813:33tlimaIs it possible to use environment vars like DATOMIC_LICENCE_KEY and DATOMIC_PROTOCOL, instead of the .properties file?#2018-08-0815:14joshkhdoes anyone know what this error might indicate?
clojure.lang.ExceptionInfo: java.lang.NoClassDefFoundError: javax/xml/bind/DatatypeConverter {:cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message "java.lang.NoClassDefFoundError: javax/xml/bind/DatatypeConverter"} (java 10 with [com.datomic/client-cloud "0.8.56"])#2018-08-0815:16joshkhwell, less of what it indicates and more of why it's happening after upgrading the client version number 😉#2018-08-0815:17marshall@joshkh what are you doing when you get that error?#2018-08-0815:20joshkhit's coming from a client library. let me take a look and confirm.#2018-08-0815:27joshkhit's coming from d/connect:
(def conn (d/connect client {:db-name "my-db"}))
CompilerException clojure.lang.ExceptionInfo: java.lang.NoClassDefFoundError: javax/xml/bind/DatatypeConverter #:cognitect.anomalies{:category :cognitect.anomalies/fault, :message "java.lang.NoClassDefFoundError: javax/xml/bind/DatatypeConverter"}, compiling:(form-init5907823285585856170.clj:1:11)
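For background: javax.xml.bind left the default module path in Java 9 (and was removed outright in Java 11), so code that still relies on DatatypeConverter needs the JAXB API put back on the classpath, e.g. as an explicit dependency. A deps.edn sketch:

```clojure
;; deps.edn fragment: restore the JAXB classes no longer shipped
;; on the default module path in Java 9+.
{:deps {javax.xml.bind/jaxb-api {:mvn/version "2.3.0"}}}
```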
#2018-08-0815:43joshkhi do get some warnings when building the client. repl history as follows: https://gist.github.com/joshkh/0b388c8e04533a461e1d2bbbc165a10d#2018-08-0816:03alexmillerjavax.xml.bind was pulled from the JDK in Java 9 and put into a separate module#2018-08-0816:06alexmillerI’m not sure what is actually using javax.xml.bind.DataTypeConverter. Is it transit maybe?#2018-08-0816:06alexmillerI know there’s a pending issue there#2018-08-0816:08alexmillerif at the end of your gist, you do (pst), would like to see that#2018-08-0816:18alexmilleras a temporary workaround, you could add javax.xml.bind/jaxb-api {:mvn/version "2.3.0"} dep#2018-08-0816:18joshkhon it!#2018-08-0816:20joshkhsilly question, but i noticed that the datomic cloud getting started documentation now uses :ion for the :server-type in the example config. is :cloud still an option, or should we be using :ion even if we're not making use of Ions? https://docs.datomic.com/cloud/getting-started/connecting.html#creating-database#2018-08-0816:28joshkh@alexmiller here's the result of (pst) https://gist.github.com/joshkh/5814f64414986ff357ccc036446de64f , and i can confirm that adding jaxb-api as a dep fixes the problem. cheers 🙂#2018-08-0816:28alexmillerif you’re not using Ions, use :cloud. If you are, use :ion#2018-08-0902:19steveb8nseems like Datomic Cloud is well timed. The “Database as a service” race is heating up https://techcrunch.com/2018/08/08/oracles-database-service-offerings-could-be-its-last-best-hope-for-cloud-success/?utm_source=tctwreshare&sr_share=twitter#2018-08-0911:25tlimaDoes anyone know if we can set Transactor properties (like license-key, protocol, alt-host, etc) using environment variables? If so, what is the naming convention for those variables? 
I tried the Java approach (`DATOMIC_LICENSE_KEY`, for instance) but it doesn’t seem to work…#2018-08-0913:42marshall@t.augusto those are not configurable as envars - that would be a good suggestion for a feature that I’d encourage you to log at our feature request portal (“suggest features” link in the top nav of http://my.datomic.com)#2018-08-0913:46mgrbyteyou can supply them as java properties on the command line though IIRC? (e.g: java -Ddatomic.licenseKey=...)#2018-08-0913:48marshallpossibly; i can’t recall which ones that will work for#2018-08-0913:48matthavenertlima: we just use a script that injects env vars into a properties template using sed#2018-08-0913:48marshallI’d probably use the same approach as @matthavener ^ since if you’re deploying anywhere with VMs, you’ll need to dynamically populate host and alt-host anyway#2018-08-0913:51matthavenerif you look at the scripts in here, its pretty similar https://github.com/opengrail/heroku-buildpack-datomic#2018-08-0917:06eoliphantYeah, we use ansible’s templating to setup our on-prem stuff#2018-08-0917:41eoliphantIs there a new version of Cloud? Just noticed there’s a field to set a map of inputs for ion/get-env#2018-08-0919:20ghadiIon Question: Is there any mechanism for conveyance of permissions from Lambda to the proxied code?#2018-08-0919:24dominicm@ghadi I asked this previously, I think the asg has to be given permissions.#2018-08-0919:44ghadithanks @dominicm ... I wonder if there is a Better Way#2018-08-0920:19dominicm@ghadi within the constraints of aws it's hard to think of a better way it could be done without ending up in heavy lambda territory#2018-08-0920:32brycecovertHas anyone measured the point at which datomic transactor does not scale well? The context here is that I am about to recommend using it for a transactional system containing around 1 billion entities. I’m expecting throughput of about 1 million transactions on those entities per day.#2018-08-0920:41eoliphantIn general no @ghadi. 
AWS permissions get applied only to the ‘aws thing’ in question. Giving a lambda perms to say read an S3 bucket, won’t in the case of ions, have anything to do with the EC2 instances where your actual ion code is running. You really just want to add whatever permissions your ions are going to need to the role associated with the EC2 instances, since lambdas are just ‘glue’ for ions#2018-08-0920:49ghadiyeah I want to convey STS temporary credentials through to the compute cluster#2018-08-0920:50eoliphantAhhhh#2018-08-0920:53eoliphantWell, hmm… I mean, if you have the token, I guess you could include it in the payload, then when you’re calling AWS Service X, via the api you’d have to manually setup the credential provider#2018-08-0920:55ghadii don't have the token, I want to acquire it during execution of the lambda#2018-08-0920:55ghadiKinda like how Netflix BLESS works.#2018-08-0920:56ghadiStrongly authenticate the lambda execution itself#2018-08-0920:58eoliphantGotcha, I think the problem is that for ions, the actual lambda code is basically opaque. Your first opportunity to actually do your assumeRole’ing, etc would be the entry point into the clojure/datomic function. But you’d like to have that happen prior#2018-08-0921:02steveb8nTake a look at SSM parameters, that seems to be the best way to do env vars in the Ion world#2018-08-0921:03ghadiI believe SSM parameters are static#2018-08-0921:04ghadiI'll clarify my use-case a bit more, but it's per-request credentials#2018-08-0921:13steveb8nAh ok, in that case I don't have experience to share. #2018-08-0922:40misha@okocim @curtosis beware of valid false values in cond->#2018-08-1000:23eoliphantSo @ghadi, another approach would be just wrapping your ‘real’ ion code with a higher order function ‘authorizer’. That might give you what you need.. But just out of curiosity, how are you identifying the ‘principal’ to whom the temp token is issued? we have a pretty complex AWS setup, and avoid IAM users, etc. 
We issue tokens for say AWS API access, via integration with Keycloak and LDAP. You mentioned bless which is similar, in that you need to have a ‘session’ in order to get the cert issued.#2018-08-1004:31spiedeni wanted to copy some entities between dbs, and came up with the hack of pulling them with [*] and applying this:
(defn tempify-entity [entity partition]
(walk/prewalk (fn [node]
(if (and (instance? MapEntry node)
(= :db/id (key node)))
(MapEntry. :db/id (d/tempid partition (- 0 (val node))))
node))
entity))
this worked and resolved all references, but the tempid docs say only to use ns from -1 to -1000000. can i cause some type of tempid collision in datomic land by doing this?#2018-08-1020:13matthavenerDatomic supports strings as tempids now, so you could just do (str (:db/id thing)) instead#2018-08-1014:25ghadi@eoliphant I thought about this a bunch last night, I think the "right thing" to do is generate an STS session token before the invoke of a lambda, and pass the temp credentials+token through to the Ion, then AssumeRole inside the code#2018-08-1014:55eoliphantThat sounds like a plan#2018-08-1015:02ghadithe only downside is that session tokens are minimum 15 minutes in length#2018-08-1100:13eoliphantyeah, they are more geared to stuff like handing a user one for API access or something#2018-08-1014:53tlimaSorry if this is a silly question, but how should I pack my own Java static method with the Transactor, so that cassandra-cluster-callback works?#2018-08-1015:01octahedrionthe Datomic Cloud documentation on functions & java methods suggests you can return variables created in queries from the :find clause, but it's not working for me: I have ?d , a :db/txInstant which I'm trying to return from :find [?i] using [(.toInstant ^java.util.Date ?d) ?i] but the query hangs & eventually returns a Datomic client exception#2018-08-1015:08marshall@octo221 what’s in your datomic Cloud logs#2018-08-1015:29jdkealyHi, I've been running into transaction timeouts (the transactions do end up succeeding). I've been using transact and am reading i need to switch to transact-async. What would the benefit be of calling transact-async and then deref'ing it right away? Is it that I could boost the timeout time for these instances? 
If my transactor had a timeout of 5 seconds, and I deref'd the async tx with a 5 second limit, would the result be the same ?#2018-08-1015:40jdkealyI believe, however, i need to rethink my implementation, batch the transactions, return some kind of temp id and then query for them. That's the right way to deal with this ?#2018-08-1112:35eoliphantYou can boost the timeout, or use smaller batches. Have you checked out the core.async/ pipelining approach in the best practices section? That might also fit your usecase#2018-08-1413:50jdkealyYes, it's just unclear to me when making a record, how to return the record to the user when they make a new post.#2018-08-1015:47brycecovertWhen you're building new schema in development mode, is there a way to retract schemas? Simple example is that I accidentally used bigint instead of long, but it seems like the only way to undo my mistake is to create a whole new database.#2018-08-1015:49brycecovertIn the documentation it says, "breakage is at best a dev convenience" — i need that convenience. 😉#2018-08-1015:49Joe LaneA Datomic Schema is just a series of attributes, which themselves, can be retracted. If it's a dev mode schema just retract the attribute. Take a look at the day of datomic cloud source repo on github (dont have the link handy rn)#2018-08-1015:50brycecovertI'll take a look at it. thanks.#2018-08-1016:02brycecovertHmm. It seems like you can't retract schema: @(d/transact (d/connect uri)
[[:db.fn/retractEntity :vendor/mistake]])#2018-08-1016:02brycecovert:db.error/invalid-alter-attribute Error: {:db/error
:db.error/unsupported-alter-schema, :attribute :db/cardinality,
:from :db.cardinality/one, :to :disabled#2018-08-1018:40Joe Laneare you using cloud or on prem?#2018-08-1018:41Joe LaneOh, wait, do you have any records using :vendor/mistake?#2018-08-1018:42marshallschema cannot be retracted#2018-08-1018:42marshallif absolutely necessary, you can rename an attribute#2018-08-1018:42marshallthen create a new one with the original name and whatever change you were wanting#2018-08-1018:43marshallthis is squarely in the realm of dev-time; i wouldn’t recommend doing that in any kind of prod environment#2018-08-1018:46Joe Lane:+1: good to know, thanks marshall.#2018-08-1018:56curtosisprobably forgetting something dumb, but do I need to do anything special to pass an eid into a query?#2018-08-1018:56curtosisI’m getting back :db.error/invalid-lookup-ref Invalid list form: []#2018-08-1018:57curtosisbut my query is :where [?ref :org/location ?eid] and I’m passing in a known-good eid as ?eid.#2018-08-1100:16eoliphantAre there any notes on the new release?#2018-08-1100:18johnjwhich new release?#2018-08-1100:23eoliphantah. sorry, cloud. Looks like there’s a 407 in the AWS marketplace#2018-08-1103:21brycecovertthanks @marshall. That’s my intention. I’m onboard with not breaking schema unless I fatfinger something 😉#2018-08-1210:24joshkhIt seems that I can't call select-keys on the result of a transaction. Am I overlooking something?
; Good
(-> conn
(d/transact (update {} :tx-data conj data))
(get :db-after)
(get :t))
=> 24
; Not so good
(-> conn
(d/transact (update {} :tx-data conj data))
(get :db-after)
(select-keys [:t]))
IllegalArgumentException find not supported on type: datomic.client.impl.shared.Db clojure.lang.RT.find (RT.java:863)
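select-keys goes through clojure.lang.RT/find, which the client Db (an ILookup, not a map) does not support, hence the error above. Picking the keys out with plain `get` works. A sketch (`db-keys` is a hypothetical helper, demonstrated against an ordinary map standing in for the Db):

```clojure
;; select-keys needs a real map; the client Db only supports `get`,
;; so build the submap one lookup at a time.
(defn db-keys [ilookup ks]
  (into {} (map (fn [k] [k (get ilookup k)])) ks))

(db-keys {:t 24 :next-t 28 :db-name "demo"} [:t :db-name])
;; => {:t 24, :db-name "demo"}
```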
#2018-08-1212:04eoliphantjust looked at the client docs, Db only supports ILookup (`get` or keyword access), as opposed to being an actual Map or Associative#2018-08-1212:33joshkhcheers @eoliphant, that's what i suspected#2018-08-1212:33eoliphantnp 😉#2018-08-1212:35joshkh+1 for more of a mappy personality though!#2018-08-1212:35joshkhit caught me off guard#2018-08-1217:12alex-dixonIs there a limit on the size of a string valueType that can be stored in Datomic?#2018-08-1217:35okocim4096 characters#2018-08-1217:35okocimit’s here:
https://docs.datomic.com/cloud/schema/schema-reference.html#2018-08-1217:36alex-dixonNo idea how I missed that. Thank you 😊 #2018-08-1218:17orestisHow do people go over that 4k String limit? Kind of annoying to have to bring in an external data store for this. Use case would be anything that stores user-entered text. #2018-08-1223:56blandinwis there anybody from the Datomic core team here? our database is going crazy, generating a seemingly unlimited amount of :kv-cluster/create-val even though there's almost no txs going on, we don't know what to do#2018-08-1300:02marshall@blandinw sounds like it is running an index job#2018-08-1300:03marshallIs the transactor failing over or just writing a lot to storage?#2018-08-1300:06blandinwwriting a lot to storage, then dying because of some error while creating a tempdir#2018-08-1300:06blandinwand failing over#2018-08-1300:06blandinwrinse and repeat#2018-08-1300:07blandinwtrying to enable more logs in datomic.{index,kv-cluster} atm#2018-08-1300:07blandinwwe're also trying to give the machines more ram/disk but it does not help so far#2018-08-1300:09blandinwwe went from 8GB of data to 12GB in the span of ~12 hours#2018-08-1300:21blandinwsorry, 80GB to 120GB 😞#2018-08-1300:25blandinw@marshall any advice? it's a black box to us, this looks like an indexing bug#2018-08-1300:35marshallIf the transactor dies during an indexing job it will create unrecoverable garbage#2018-08-1300:35marshallNeed to see the error you're getting#2018-08-1300:36marshallHave you changed OS permissions or anything of that sort?#2018-08-1300:36marshallError with temp dirs is often a problem with directory ownership/permissions #2018-08-1300:39blandinwno OS permission should have changed, I will double check.
In the meantime, here is the error
2018-08-12 15:49:01.193 WARN default datomic.update - {:message "Index creation failed", :db-id "prod-15fa998f-08bc-4e72-a0c8-cb02a2116b20", :pid 495, :tid 420}
java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method) ~[na:1.8.0_152-gcc-4.9-glibc-2.20-fb]
at java.io.File.createTempFile(File.java:2024) ~[na:1.8.0_152-gcc-4.9-glibc-2.20-fb]
at datomic.common$create_temp_directory.invokeStatic(common.clj:481) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.common$create_temp_directory.invoke(common.clj:477) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.fulltext$create_indexing_job.invokeStatic(fulltext.clj:176) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.fulltext$create_indexing_job.invoke(fulltext.clj:170) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.fulltext$do_indexing_job.invokeStatic(fulltext.clj:241) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.fulltext$do_indexing_job.invoke(fulltext.clj:237) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.fulltext$build_index$fn__5048$fn__5074.invoke(fulltext.clj:351) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.fulltext$build_index$fn__5048.invoke(fulltext.clj:345) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.fulltext$build_index.invokeStatic(fulltext.clj:334) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.fulltext$build_index.invoke(fulltext.clj:324) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.index$merge_db_STAR_.invokeStatic(index.clj:1726) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.index$merge_db_STAR_.invoke(index.clj:1596) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.index$merge_db$fn__6053.invoke(index.clj:1790) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.index$merge_db.invokeStatic(index.clj:1781) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.index$merge_db.invoke(index.clj:1771) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.update$process_request_index$fn__25502$fn__25503$fn__25506.invoke(update.clj:189) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.update$process_request_index$fn__25502$fn__25503.invoke(update.clj:184) ~[datomic-transactor-pro-0.9.5561.56.jar:na]
at datomic.update$process_request_index$fn__25502.invoke(update.clj:180) [datomic-transactor-pro-0.9.5561.56.jar:na]
at clojure.core$binding_conveyor_fn$fn__6757.invoke(core.clj:2020) [clojure-1.9.0-alpha15.jar:na]
at clojure.lang.AFn.call(AFn.java:18) [clojure-1.9.0-alpha15.jar:na]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_152-gcc-4.9-glibc-2.20-fb]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_152-gcc-4.9-glibc-2.20-fb]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_152-gcc-4.9-glibc-2.20-fb]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_152-gcc-4.9-glibc-2.20-fb]
#2018-08-1300:41blandinwdatomic runs as root (in a container) and disk space is at least 25GB available#2018-08-1300:41marshallWhat has changed recently#2018-08-1300:41blandinwI'll force the java.io.tmpdir property#2018-08-1300:46blandinwnothing obvious changed, we transacted a dbfn on Friday, but that was a routine operation#2018-08-1300:50marshallYou could restore a current backup to local (dev) storage and run a local transactor to see if you can complete an indexing job#2018-08-1301:39blandinw@marshall we'll get a huge machine and try to tweak the memory-index-* transactor settings + restore a backup to see if the job completes#2018-08-1301:40blandinw@marshall apart from that I have no idea what to do, we are at a real risk of shutting down our product by tomorrow if it continues like this, because we have no way to stop it#2018-08-1303:00johnjscary#2018-08-1303:01johnjI'll stick with postgresql for now where it is much easier to find help for common problems 😉#2018-08-1303:01blandinwIf anybody from Cognitect is wondering, we are in contact with a Datomic engineer (J* B**rd) via support ticket #1878#2018-08-1303:01johnjdatomic has some nice features but if you don't have 24 hour support things can get frustrating I mean#2018-08-1303:02johnjoh good#2018-08-1303:03blandinw@lockdown- to be fair, we've been using Datomic for years and its unique features have come in handy (historical queries, etc.), but we've been burnt by its unique features as well... so yeah, debate is open ^—^ I think it depends on the use case#2018-08-1303:04johnjI very much like the integration between datomic and clojure but yeah, one has to measure well if it is worth it,#2018-08-1303:05johnjwith some effort, postgres can have point in time queries too I have read#2018-08-1309:39val_waeselynckPoint in time queries are really a minor part of Datomic's story compared to other advantages.
The database as a value and universal schema are the real advantages IMO.#2018-08-1304:06blandinwIncident over, the indexing job was failing in a loop because the filesystem it tried to write to was read-only (and not the tmp directory) and properties to customize the path were not taken into account (tmpdir and indexWorkDir). This has been the case for a long while, so maybe it had been failing for a while and decided to blow up last night. Anyway, thanks for your support and have a good night#2018-08-1318:13ignorabilishi, Datomic Cloud just stopped working for no apparent reason - for the EC2 instance the CPU utilization hasn't gone past 60% (for a moment only), both Disk Reads & Writes are very low since we are still in dev mode
What is the proper way to restart Datomic Cloud?
And more importantly - how do we check why this thing happened in the first place? We have some CloudWatch dashboards, created by default, but there are no errors.#2018-08-1318:21ignorabilisRestarting the EC2 instance does not seem to have an effect#2018-08-1318:21marshall@ivan.yaroslavov have you searched your logs https://docs.datomic.com/cloud/operation/monitoring.html#searching-cloudwatch-logs#2018-08-1318:27ignorabilisthanks, @marshall ; I just checked them; the last log is from 2 hours ago; however there are no alerts - what else should we look for?#2018-08-1318:59stuarthalloway@ivan.yaroslavov are you running Solo or Prod?#2018-08-1321:24ignorabilisSolo#2018-08-1400:16stuarthallowayWhat is the specific symptom you are seeing?#2018-08-1407:12ignorabilisI'm not sure, Datomic Cloud just stopped working and we are trying to figure out why#2018-08-1411:39stuarthallowaywhat does it look like when it is not working?#2018-08-1400:37steveb8n@stuarthalloway any chance you could comment on this discussion about Ion Lambdas? https://clojurians.slack.com/archives/CC0A7PUHF/p1533611391000066#2018-08-1400:40stuarthalloway@steveb8n what is the performance requirement?#2018-08-1400:42steveb8nit’s arbitrary i.e. I’m determining what is acceptable but multiple 15 second waits is not gonna work#2018-08-1400:42steveb8nhence I’m considering options to work-around it but would prefer to avoid extra complexity#2018-08-1400:42stuarthallowaywho is calling the lambdas? 
events don't care...#2018-08-1400:43steveb8napi-gateway calls from SPA client#2018-08-1400:43steveb8nit’s fine for mutations but not for reads#2018-08-1400:43steveb8n“fine” because optimistic update#2018-08-1400:44stuarthallowayapi-gateway has independent timeout and retry capability, so why tie yourself to lambda?#2018-08-1400:45steveb8nclarifying to ensure we are talking about same thing: re-frame client, xhr request to API-GW which invokes Ion Lambda.#2018-08-1400:45stuarthallowayso if API gateway has a 5 second timeout you will never see a 15 second delay from lambda#2018-08-1400:45steveb8n15 sec wait on first invocation for every GW endpoint#2018-08-1400:46stuarthallowayhow many endpoints are we talking about? how often are they invoked?#2018-08-1400:46steveb8ntrue that but that’s a failure. worse than slow#2018-08-1400:46stuarthallowayhow many 9s are you promising?#2018-08-1400:47steveb8nThe Ion architecture seems to suggest 1 lambda per read or write fn so many for a SPA app#2018-08-1400:48stuarthallowaycategorically:#2018-08-1400:48steveb8n#9's not set at this stage but this is not downtime but perf#2018-08-1400:48stuarthalloway1. Lambdas will become less latent#2018-08-1400:49stuarthalloway2. There are other options for wiring API Gateway that don't go through Lambda at all, and to the extent that #1 is a problem for applications we can investigate and implement them#2018-08-1400:50stuarthallowayAPI gateway timeout is not (by itself) downtime, because API gateway should automate retry as well#2018-08-1400:52steveb8ninteresting. it’s been suggested by others that the cold-start is likely due to ENI init for Lambdas in VPCs which is a well known problem. on that basis, not using Lambdas would help#2018-08-1400:53stuarthallowayI would like to see an actual app, under actual load, with API gateway timeout and retry configured, that has a problem meeting a particular SLA. 
I say this not in the "I don't believe you" sense, but in the "particular example we can use to make Datomic better" sense#2018-08-1400:53steveb8nfull disclosure - I’m giving the e.g. of API-GW but in my case I’m using AppSync and not API-GW. same end result#2018-08-1400:53stuarthallowayI know less about AppSync, does it have timeout/retry config?#2018-08-1400:53steveb8nok that’s a good target for me (or others) to aim for#2018-08-1400:54steveb8nOnce I can show a better example, I’ll record the UX and pass it along#2018-08-1400:54steveb8nnot sure about AppSync features at that level yet. will dig deeper#2018-08-1400:55stuarthallowayI understand this could be a problem, I just want to grab it hard by the specifics and not some generalized concern. We are committed to doing the best thing possible under AWS.#2018-08-1400:55steveb8nbut FWIW graphql as a service without code is great. Fits really well with Ions as well. Hence I’m really happy with this stack#2018-08-1400:55steveb8nworth saying 🙂#2018-08-1400:56stuarthallowayGood to hear, and when you have the concrete perf problem in hand we will jump on it!#2018-08-1400:57steveb8nI appreciate your position also. Others are interested also. What form would be best to present the perf profile to you/team? is a video of client good enough?#2018-08-1400:59stuarthallowayeven just some prose about the objectives and numbers here for starters#2018-08-1400:59steveb8nlast point: my arbitrary requirement is that 15 secs is too long for users but 2-3 secs is fine. now that Clojure (tools.deps) starts in 2-3 secs, this would be “arbitrarially” acceptable for my use case#2018-08-1401:00steveb8nok. I’ll see if I can get a few others on board to form up a description of stack/UX/requirements#2018-08-1401:01steveb8nnot sure we will be able to produce “prose” but will do our best#2018-08-1401:01stuarthallowaynot looking for more than the kind of prose you are producing now#2018-08-1401:02steveb8ngreat. thanks. 
we’ll get on it#2018-08-1401:02stuarthallowaythank you for using Datomic!#2018-08-1401:03steveb8nit’s been a pleasure so far. looking forward to more fun with it#2018-08-1407:13ignorabilisWhat is the proper way to restart Datomic Cloud? All of a sudden it stopped working for us and simply restarting the EC2 instance did not work#2018-08-1413:16alexkAlso curious if there’s a best practice. When DynamoDB throttles, the transactor dies and does NOT come back. At that point I stop the entire CloudFormation and start it again. Now I err on the side of too much DynamoDB capacity, unfortunately.#2018-08-1415:06stuarthalloway@U8ZA3QZTJ Datomic Cloud does not have transactors. Are you talking about On-Prem?#2018-08-1415:14alexkOops I got confused. It’s cloud from my perspective but it’s not Datomic Cloud. My comment above isn’t relevant, then.#2018-08-1415:18stuarthalloway@U8ZA3QZTJ if you run transactors in an ASG (as our template does) you should have HA in On-Prem also: https://docs.datomic.com/on-prem/ha.html#2018-08-1415:19stuarthallowayalso @U8ZA3QZTJ Cloud puts much less pressure on DynamoDB, so that would be less of (or not) a problem there#2018-08-1411:40octahedrioncan I confirm that retracting a :db.type/ref attribute of an entity that's isComponent true will only retract the entity's immediate sub-components, not recursively their sub-components ?#2018-08-1412:29stuarthalloway@U0CKDHF4L retract is primitive and does exactly what you tell it. 
retractEntity works per https://docs.datomic.com/cloud/transactions/transaction-functions.html#sec-1-1, "components of the given entity are also recursively retracted."#2018-08-1413:07octahedrionah so :db/retract will not recursively retract the component's subtree#2018-08-1411:41octahedrion(or retracting the entity itself)#2018-08-1411:42octahedrion- in Cloud - I just tried and I get only one-level of retraction which is what I'd expect and what I want#2018-08-1413:18Andreas LiljeqvistIs there any sort of counter or time for a given entity that is updated whenever there is a transaction with that entity?#2018-08-1413:18Andreas LiljeqvistNot for a certain attribute, but for every attribute used with the entity#2018-08-1413:19Andreas Liljeqvistsomething like (last-modified eid)#2018-08-1413:19marshall@alqvist Not ‘built-in’. You can use a combination of query and the log to determine that or you can track it explicitly yourself#2018-08-1413:25Andreas Liljeqvist@marshall Ok, thanks. Probably will add another attribute since I am unsure of the performance implications#2018-08-1414:14markbastianI have an interesting issue. I am connecting to my local datomic transactor using this connection string: "<postgresql://localhost:5432/datomic?user=...&password=...>". When I uberjar the application and run it, I get this exception: java.sql.SQLException: No suitable driver. Any ideas? For my edification, where is the driver specified for the peer in the first place (i.e. why does it even work in the REPL)?#2018-08-1414:15ghadi@markbastian https://docs.datomic.com/on-prem/storage.html#jdbc-drivers#2018-08-1414:15ghadineed to add the PG drivers to your app#2018-08-1414:17ignorabilisAfter Datomic Cloud just stopped we checked the logs to see what went wrong; we did not find any alerts. The log just ends suddenly, with normal messages regarding Datomic's compute stack.
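An aside on the `(last-modified eid)` question above: marshall's "combination of query and the log" can be sketched with a query over a history database. This is a sketch only, assuming the on-prem peer API (`datomic.api`) and an existing database value; it is not a built-in Datomic function.

```clojure
;; Sketch: latest :db/txInstant among all transactions that ever touched
;; an entity. Assumes `db` is a database value, e.g. (d/db conn).
(require '[datomic.api :as d])

(defn last-modified
  [db eid]
  (ffirst
   (d/q '[:find (max ?inst)
          :in $ ?e
          :where
          [?e _ _ ?tx]               ; any datom about ?e, bound to its tx
          [?tx :db/txInstant ?inst]] ; resolve that tx's wall-clock time
        (d/history db) eid)))
```

Because it queries the history db, retractions count as modifications too; and as noted above, tracking an explicit attribute avoids scanning every datom of a frequently updated entity.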
What should we look for?#2018-08-1415:05stuarthallowayHow do you distinguish stopped from working?#2018-08-1508:54ignorabilis@stuarthalloway our application could no longer connect to Datomic - we started seeing
clojure.lang.ExceptionInfo: Connect Timeout
cognitect.anomalies/category: :cognitect.anomalies/unavailable
cognitect.anomalies/message: "Connect Timeout"
#2018-08-1508:59ignorabilisWe saw the same error both on our dev machines and also in the logs of a deployed version (uses the same dev database). The load is fairly low as we are in development and there are only several users. It has been working for months before that.#2018-08-1516:34stuarthallowayare you connecting through the socks proxy?#2018-08-1517:58ignorabilisfor simplicity we have just exposed the port and we are connecting directly#2018-08-1713:27ignorabilis@stuarthalloway so what would be the difference in the logs in that case?#2018-08-1414:21markbastianThis is in my pom:
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.1.1</version>
</dependency>
And both the transactor and peer work fine if the peer is a REPL.#2018-08-1414:24markbastianSo, the inclusion of the jar in the project should be sufficient, though? If that is the case I'll do some poking around in my uberjar to make sure everything was included.#2018-08-1414:26ghadiyeah, should be. java -cp your-uberjar.jar clojure.main -e org.postgresql.Driver <-- does that error out?#2018-08-1414:35markbastianNope, it produces "org.postgresql.Driver".#2018-08-1414:36ghadithat's the expected result#2018-08-1414:36ghadiyou'd get a resolution error otherwise. So the driver is in your uberjar#2018-08-1414:36markbastianIf multiple sql driver implementations are on the classpath will that cause any issues (e.g. postgres and the ms driver)?#2018-08-1414:38ghadiunlikely, but there may be interactions with ServiceLoaders (a JVM mechanism):
What does this return:
unzip -l your-uberjar.jar | grep 'META-INF/services'
#2018-08-1414:42markbastianThese are the entries in META-INF/services:
com.fasterxml.jackson.core.JsonFactory
io.undertow.protocols.alpn.ALPNProvider
io.undertow.predicate.PredicateBuilder
io.undertow.attribute.ExchangeAttributeBuilder
io.undertow.server.handlers.builder.HandlerBuilder
io.undertow.client.ClientProvider
org.xnio.XnioProvider
javax.servlet.ServletContainerInitializer
io.undertow.servlet.ServletExtension
javax.websocket.ContainerProvider
io.undertow.websockets.jsr.WebsocketClientSslProvider
javax.websocket.server.ServerEndpointConfig$Configurator
java.sql.Driver
javax.validation.spi.ValidationProvider
org.apache.commons.logging.LogFactory
javax.json.spi.JsonProvider
com.fasterxml.jackson.core.ObjectCodec#2018-08-1414:43markbastianThe java.sql.Driver's contents are "org.h2.Driver"#2018-08-1414:44markbastianShould I be seeing the postgres classname in the java.sql.Driver file?#2018-08-1414:50ghadiI am not 100% sure if how Datomic loads drivers. I'll defer to someone more knowledgeable.#2018-08-1414:57markbastianInteresting, when I build the uberjar with lein vs. mvn I get the other drivers in my java.sql.Driver file. I'm guessing it's my mvn assembly configuration that's the problem. I'll poke on that. Thanks!#2018-08-1414:57ghadicheers#2018-08-1415:11markbastianClosing the loop on this, it looks like it's the way the maven assembly plugin merges (or doesn't merge) the services file. The solution is to do something along these lines https://maven.apache.org/plugins/maven-assembly-plugin/examples/single/using-container-descriptor-handlers.html or use the shade plugin. Thanks again for the help.#2018-08-1415:59okocimhas anyone experienced a StackOverflowError while trying to redirect the (cast/…) functionality? I thought I followed the guide, but I’m seeing the following error when I try to call cast/dev in the repl:
StackOverflowError
clojure.lang.RT.seq (RT.java:530)
clojure.core/seq--5124 (core.clj:137)
clojure.core/drop/step--5646 (core.clj:2919)
clojure.core/drop/fn--5649 (core.clj:2924)
clojure.lang.LazySeq.sval (LazySeq.java:40)
clojure.lang.LazySeq.seq (LazySeq.java:49)
clojure.lang.RT.seq (RT.java:528)
clojure.core/seq--5124 (core.clj:137)
clojure.core/take/fn--5630 (core.clj:2876)
clojure.lang.LazySeq.sval (LazySeq.java:40)
clojure.lang.LazySeq.seq (LazySeq.java:49)
clojure.lang.RT.seq (RT.java:528)
#2018-08-1416:49cap10morganWith Datomic on-prem, is it possible to connect to a DynamoDB table owned by a different AWS account? If so, what does the URI look like for that?#2018-08-1417:25joshkhMaybe someone can help me stretch the datalog part of my brain?
I'm trying to write a generic query that returns a particular attribute called :global/uuid for any referenced entities that are directly related. And for silly reasons I need to act exactly like a normal pull:
({:global/uuid #uuid"d7becc50-b43a-4873-805d-890a612174ea",
:person/email "gNJf4D4OJv6I4oc",
; Example A:
:person/manages [#:db{:id 1697645953286222}]})
Except that {:db/id} is replaced with an entity's :global/uuid:
({:global/uuid #uuid"d7becc50-b43a-4873-805d-890a612174ea",
:person/email "gNJf4D4OJv6I4oc",
; Example B:
:person/manages [#:global{:uuid #uuid"e9c98197-9cc0-45fd-8dce-59356d780dbd"}]})
I could easily get the value back if the reference was a component, but that's not the behaviour I want in the schema.
I could use two pulls, but that returns :global/uuid in its own collection and I want it to be in the value of the :person/manages key.
I could use Map Specifications, but the query needs to be generic. So wild card support like: (pull ?a [* {* {:global/uuid}}]) which I don't think exists.#2018-08-1419:24okocimWhat part of the query are you trying to make generic here, the pull spec, or the conditions?#2018-08-1421:23joshkhin my case every entity in datomic has a :global/uuid, and i want to always return the uuids for entities referenced by whichever entity is being pulled.#2018-08-1421:24joshkhi managed to do it using spec and a function to build queries from that spec but it feels like an abomination#2018-08-1421:25joshkh(assoc :find (vector (list 'pull '?root (conj without-refs map-spec)))) where map-spec is a map specification built from a collection of attributes defined in a spec somewhere#2018-08-1422:45Desmondi have all my data in datomic but i want to use google's natural language service. does anyone have tips for migrate data from datomic to google cloud sql (or any other sql db)?#2018-08-1423:49johnjdevelop the representation of your data for a RDBM and perform an ETL#2018-08-1500:18Desmond@lockdown- any ETL tools that you recommend?#2018-08-1500:19Desmondand any strategies for how to design a sql schema that simplifies the ETL#2018-08-1500:19Desmondi haven't used google's natural language service but i think for our first iteration we probably won't need a very sophisticated schema. we'll probably just be going through one column.#2018-08-1500:47johnjdon't know of any tools for this, very new to datomic. Can datomic backup/export to edn? if its simple enough, why just not do it in clojure? 
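For joshkh's pull question above, one alternative to generating map specifications is to run an ordinary pull and post-walk the result, swapping each bare `{:db/id e}` ref map for the entity's `:global/uuid`. A sketch, where `id->uuid` is a hypothetical lookup function you would back with `d/pull` or a prebuilt map:

```clojure
(require '[clojure.walk :as walk])

(defn replace-refs-with-uuids
  "Replace every bare {:db/id e} map in a pull result with
  {:global/uuid (id->uuid e)}. id->uuid is any fn from entity id to uuid."
  [id->uuid pull-result]
  (walk/postwalk
   (fn [x]
     (if (and (map? x) (= (keys x) [:db/id]))
       {:global/uuid (id->uuid (:db/id x))}
       x))
   pull-result))
```

Applied to Example A with a lookup that maps `1697645953286222` to its `:global/uuid`, this produces the shape of Example B, without making the reference a component attribute.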
convert part of the edn you need to csv and load/import the data into the correct columns in the sql server.#2018-08-1500:49johnjI would just use a query to dump the data you need to edn#2018-08-1500:55chris_johnsonIt would be even simpler, probably, to just query your datomic db for the data you want in a batch, and emit rows directly into the SQL datastore via yesql or korma or whatever the current hotness is in clj SQL libraries. If your data is “finished”, that would be the end of it (one big batch ETL job), but if you want to maintain a living datastore you could also add a transaction fn to your db that writes each new (relevant) transacted fact out to your gcs database using the same method#2018-08-1519:16Desmondsounds good. i'll check out those sql libraries. thank you!#2018-08-1500:56chris_johnsonNo need to abandon your datomic db just to also have your data elsewhere (JSON data lake for BI, SQL, etc.) :)#2018-08-1501:24steveb8nyou might find this useful https://www.youtube.com/watch?v=oOON--g1PyU#2018-08-1506:08henrik@alexmiller The links under this heading in the FAQ lead nowhere: https://www.datomic.com/cloud-faq.html#_can_datomic_be_used_by_applications_running_on_any_os#2018-08-1512:08alexmillerBest to ping @U05120CBV and @U1QJACBUM for that, I’m not on Datomic team#2018-08-1512:34henrikSorry, I that’s because I watched REPL-driven development yesterday 🙂 You were declared the Grand Master of Documentation.#2018-08-1513:20jaretThanks @U06B8J0AJ we’ll get on these#2018-08-1514:43marshallFixed. Thanks for letting us kno.#2018-08-1514:43marshallknow#2018-08-1510:34steveb8nQuestion: what do you use for schema migrations? I’ve been using Conformity but I wonder if it’s worth the effort. Since schema txns are idempotent, I could just stop using it and re-transact on every restart. 
Does anyone have other benefits they can see from using Conformity or other migration strategies?#2018-08-1513:17val_waeselynck> Since schema txns are idempotent, I could just stop using it and re-transact on every restart
That's my default approach as well, but keep in mind that you may also need to migrate data in addition to schema (e.g populating a new attribute with a default value), and more generally some migrations that are not idempotent. Those are the reason I still use something like Conformity (see also Datofu: https://github.com/vvvvalvalval/datofu#managing-data-schema-evolutions)#2018-08-1511:20eoliphantI've used conformity myself on quite a few projects. Still sort of having the same debate lol. I know part of it is just "psychological" having used stuff like flyway and liquibase to manage schema for what seems like decades, i had sort of an existential fear of not using something to manage the process. For most of my cases though it hasn't really been strictly necessary as we've always tried to follow the guidance on non-breaking schema growth. One thing that i haven't had to do so far. But I think would make me more comfortable to manage with something like conformity is true "migration" when you need to make some bulk data update where you're say doing some transformations to take advantage of or pre populate some new schema attributes.
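The "re-transact the schema on every restart" approach discussed above can be as small as the sketch below (peer API assumed; the `schema.edn` resource name is illustrative). As val_waeselynck notes, this only covers idempotent schema growth, not data migrations.

```clojure
(require '[datomic.api :as d]
         '[clojure.java.io :as io]
         '[clojure.edn :as edn])

;; Re-asserting identical schema datoms is a no-op for Datomic,
;; so running this on every startup is safe.
(defn ensure-schema!
  [conn]
  (let [schema (edn/read-string (slurp (io/resource "schema.edn")))]
    @(d/transact conn schema)))
```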
Having said that, it's a bit of a moot point at the moment for us as were moving as much as possible to cloud, and it doesn't support the client api as of the last time i checked#2018-08-1511:39tlimaIs it possible to transact some data using a date in the past, as the timestamp, so that I could use as-of to navigate that history?#2018-08-1511:41tlimaCould I just manipulate the :db/txInstant attribute?#2018-08-1514:44marshallthe specific docs regarding tx/Instant are here https://docs.datomic.com/on-prem/best-practices.html#set-txinstant-on-imports#2018-08-1511:49eoliphantYes, @t.augusto but there are limitations. I believe the date you set can't be older than anything currently in the db. Theres a note about imports on the "best practices" page. Also, make sure thats what you really want to do. @val_waeselynck has an excellent blog post on the optimal use of tx time vs your "domain" time. https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2018-08-1511:52tlimaThanks, @eoliphant#2018-08-1511:55eoliphant@captaingrover check out http://www.onyxplatform.org/
We use it to stream transactions out of datomic and into stuff like elasticsearch, etc#2018-08-1514:32tlimaIs there a way to make the Transactor run some storage setup code, before starting (Cassandra keyspace/table creation, for instance), or I must ensure everything is in place before starting it?#2018-08-1516:40stuarthallowayThe latter. The great thing about On-Prem is you get to set up storage yourself, exactly the way you like it. The terrible thing about On-Prem is you have to set up storage yourself, exactly the way you like it. 🙂#2018-08-1516:35jarethttps://forum.datomic.com/t/datomic-cloud-version-409-and-ion-params-release/567#2018-08-1516:36jarethttp://blog.datomic.com/2018/08/ion-parameters.html#2018-08-1517:37henrikWhat's the least unidiomatic way of creating an ordered set of references in Datomic?#2018-08-1520:09val_waeselynckSome ideas here: https://github.com/vvvvalvalval/datofu#implementing-ordered-to-many-relationships-with-an-array-data-structure. You can also consider linked lists. Don't bother about being idiomatic, but you should consider the read and write patterns to choose.#2018-08-1517:47brycecovertI’m recommending datomic for a use case that will have >10billion datoms within 1 year. Is there an easy way to do excise datoms, so long as it is not the most recent datom for that entity? i.e., I want to delete data that’s older than 6 months, but not if it hasn’t been updated in the last 6 months#2018-08-1518:50johnjIf you don't mind, why are you recommending datomic? What features do you need of it?#2018-08-1520:14val_waeselynckExcision seems like an extreme solution for this problem. Have you considered moving some attributes to an auxiliary store?#2018-08-1520:37eoliphantYeah one of datomic’s key value props is the history etc. If you’re anticipating excision at the outset, might not be the best fit or datomic + something else..#2018-08-1521:11brycecovertGoing back in time at an entity level is a key feature my client needs. 
It would preferably keep all history, but given that there is a bit of a ceiling on datoms, I’d like to understand ways of mitigating the problem of having 10 billion datoms within a year.#2018-08-1521:12brycecovertI have considered moving them to a different store, but at the very least, I’m talking about 1 billion entities within one year, with each entity having ~10 attributes#2018-08-1523:21eoliphantFair enough. I think the 10 billion thing is a matter of what they've tested out to. Perhaps some testing at your expected scale is in order#2018-08-1517:51brycecovertThis is based on my understanding that 10 billion datoms is something of a soft limit in datomic#2018-08-1518:59johnjIs a soft limit, but I understand performance will degrade greatly#2018-08-1519:00johnjForcing you to create more DBs, or move to a completely different DB#2018-08-1519:56Joe Lane@U1ZP5SMA6 on-prem or cloud?#2018-08-1521:13brycecovert@U0CJ19XAM We would be open to either. We are on AWS now, but hope to move towards GCP in the next 2 years or so. As such, I imagined using on-prem.#2018-08-1523:24eoliphantAlso dbs are "cheap" afaik. Can you potentially use them as a sort of partitioning?. I think the limit is db not storage#2018-08-1601:34henrikUnfortunately, it doesn't seem that you can join dbs in queries yet for Cloud.#2018-08-1614:02brycecovertThanks for the help. There is a good chance that there are such partitioning schemes that can work for us.#2018-08-1615:20eoliphantAh hell @U06B8J0AJ really? I missed that. Going to have some scenarios where I'll need that soon#2018-08-1615:28henrikThat's what I understood. @U05120CBV?#2018-08-1616:55marshall‘performance will degrade greatly’ is not accurate#2018-08-1616:56marshallperformance of what? writes? reads?
the specific behaviors of “very large” databases will depend greatly on the data model and access patterns#2018-08-1616:57marshallthe “dbs are cheap, make more” advice is good, but only true for Cloud. If you’re using on-prem you should have one transactor per operational DB#2018-08-1616:59marshallalso, correct, there is currently no cross db join using the client#2018-08-1617:00marshalldepending on ‘how’ you want to use multiple DBs you may be fine to query the individual dbs separately then join your application#2018-08-1619:08brycecovert@U05120CBV thanks for some of that clarification. I’ll definitely see if it’s possible to shard into separate databases. I expect regular updates to ~50million entities, each with about 10 attributes. My math shows being at 10 billion datoms within a year, with a fairly static number of entities. The ability to go back in time is very desirable for my use case, but only in recent months. If the database can’t support that scale, I’d be willing to excise old data, so long as that data isn’t the most recent for the entity.#2018-08-1619:09brycecovertDo you have any recommendations on how to go about capacity planning given that use case?#2018-08-1619:41marshallDon’t use excise for size control#2018-08-1619:41marshallthat’s not what it was designed for, and it will not work well for that use case#2018-08-1619:42marshallbroadly, I’d recommend sharding by time with a rolling window#2018-08-1619:43marshallcreate a new db every X months and only write to it#2018-08-1619:43marshallkeep the ‘older’ db around long enough to support your queries against the older data#2018-08-1619:47brycecovertPerfect. Good call @U05120CBV. 
Appreciate the feedback#2018-08-1619:49marshallif you “need” to handle entity continuity across the DBs you can write a tool that “moves” some subset of active entities from the old db into the new one#2018-08-1619:49marshallif you can get away with just moving all writes to the new db and leaving the old db(s) there for read, you can probably get away with a single transactor#2018-08-1619:50marshallthe transactor-per-db rule is largely based on write-side behavior#2018-08-1619:50marshallso if you can move all your writes to the new db, you’re usually pretty OK to have one or two older read-only dbs in the same system#2018-08-1518:35tlimaWhen using a Cassandra cluster as the storage layer, how should the Console app be configured? Tried to add -Ddatomic.cassandraClusterCallback=com.example.MyClass.myStaticMethod to the DATOMIC_JAVA_OPTS envvar, to no avail… 😕#2018-08-1519:29tlima@U072WS7PE or @U05120CBV, could any of you guys help here?#2018-08-1519:30marshallyou should be able to launch the console against Cassandra the same way you do against any storage#2018-08-1519:30marshalluse the cassandra storage URI#2018-08-1519:44tlimaYou mean this URI: datomic:cass://<cassandra-server-host>:<cassandra-port>/<cassandra-table>?user=<usr>&password=<pwd>&ssl=<state>? If so, this is what I’m using.
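marshall's rolling-window sharding above needs some convention for picking the "current" database. A minimal sketch (the naming scheme is illustrative, not a Datomic feature) is to derive the db name from the write date:

```clojure
;; Sketch: one database per month, e.g. "app-2018-08". Writes always go
;; to the current period's db; older dbs stay around read-only for history.
(defn period-db-name
  [^java.util.Date inst]
  (str "app-" (.format (java.text.SimpleDateFormat. "yyyy-MM") inst)))
```

Queries against older data then target `(period-db-name older-date)` directly, joining results in the application since cross-db joins are unavailable through the client API.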
The thing is <cassandra-server-host> is a “cluster gateway”, if we can say so, which has its own API to retrieve the actual cluster nodes.
This is why I’d like to use the same cluster callback I’m already using with the transactor. Is it possible?#2018-08-1520:09tlima@U05120CBV?#2018-08-1520:09marshallI don’t believe you can specify a cluster callback using the console#2018-08-1520:09marshalli’ll have to look into it#2018-08-1520:45tlimaOk, @U05120CBV. May I ping you again, later this week?#2018-08-1521:08marshallsure#2018-08-2018:57tlimaHi, @U05120CBV. Any updates here?#2018-08-2021:33tlimaOne more thing: does the cluster callback gets used by the commands like backup-db and restore-db?#2018-08-2111:56tlima@U072WS7PE or @U05120CBV, could any of you guys help me?#2018-08-1519:19Desmondhow can i query datomic from nodejs? according to the peer language support docs https://docs.datomic.com/on-prem/languages.html i should call the REST api. according to the REST api docs https://docs.datomic.com/on-prem/rest.html, however, the REST server is no longer supported. what's the move here?#2018-08-1520:39stuarthalloway@captaingrover your best options right now are:#2018-08-1520:39stuarthallowayin Cloud: expose a lambda or REST service via API Gateway#2018-08-1520:40stuarthallowayOn-Prem: expose your own REST service from a peer#2018-08-1603:48Desmond@U072WS7PE hi Stu, yeah i'm using On-Prem. When you say your own REST service you mean wrap some http around the client library as opposed to using the bin/rest standalone service?#2018-08-1611:22stuarthalloway@captaingrover around the peer library would be better, no process hop#2018-08-1521:23cap10morganDoes the automated AWS setup still not support custom VPCs (i.e. any VPCs other than the default)?#2018-08-1521:24ghadilike classic VPCs @cap10morgan?#2018-08-1521:25cap10morganno, not classic, just VPCs other than the default#2018-08-1523:17eoliphantNope. It's still self contained. @cap10morgan we've just gone to using the vpc endpoint approach into our existing vpcs, working through some hacks to deal with the facts that endpoint ips are a bit of AWS magic. 
They're fine for access from the client vpc. But if you're say using a vpn for dev access to the vpc, those ips aren't directly accessible, even though they're in the accessible range#2018-08-1523:19cap10morganI’m... asking about something much simpler than all that. I just want the ability to deploy the transactors into a non-default VPC. I’m not familiar with the VPC endpoint approach, though.#2018-08-1523:30eoliphantYeah, you can't at this point. The CF scripts include the vpc creation. We even looked at hacking them to do what you describe but wasn't worth it from an effort or support impact perspective.
With the vpc endpoint, you'll install datomic, let it create its vpc and associated other bits. Subsequent to that, you create an endpoint ip address for the datomic system in your existing vpc where your client apps are located. There's a description, and another supporting CF script in the docs under something like operations -> client applications#2018-08-1611:04olivergeorgeDatomic Ion Parameters look good. All makes sense and I'm glad to see it added. +1#2018-08-1611:07olivergeorgeOne thing which got my attention was the example. It shows using it to discover the db-name. That struck me as unnecessary since there can only be one ion app per datomic ion stack.
Can someone give me an example of why it would be useful to have a configurable db name?#2018-08-1611:21stuarthallowayhi @U055DUUFS -- separate db names for dev, staging, CI, and production#2018-08-1611:36olivergeorgeHi Stuart. I think dev, staging and production can't share a cloud deployment. Can you elaborate with a simple use case. #2018-08-1611:38stuarthallowaycreate two systems: one for dev and one production, both sharing the same code deploy application#2018-08-1611:38stuarthallowaypush/deploy against the dev system until you are happy, then deploy (no new push!) to the production system#2018-08-1611:44olivergeorgeI guess my point is that in your example it's not useful for the two systems to have different db-names. There's never a conflict. #2018-08-1611:46stuarthallowayah, right#2018-08-1611:46stuarthallowaythis becomes much more interesting with query groups#2018-08-1611:47stuarthallowaywhen there are N deploy targets in the same system#2018-08-1611:48stuarthalloway(guess what I am working on...)#2018-08-1611:48olivergeorge:-)#2018-08-1611:48olivergeorgeCool#2018-08-1618:08Joe Lane@U055DUUFS I use multiple db-names in a multi-tenant system and its a dream.#2018-08-1618:09Joe LaneThat is going to become very exciting in our architecture and is an example where thats useful.#2018-08-1623:23olivergeorge@U0CJ19XAM can see value in separate dbs in a multi-tenant system. Nice to know that's proven useful. #2018-08-1617:39lambdamHello, I'm trying to connect the Datomic console to an in memory database (local dev). Is it feasible?#2018-08-1617:43manutter51Found this from 3 yrs ago, says “Console cannot connect to in-memory” http://datomic.narkive.com/4RyKpVMd/can-the-console-be-used-against-either-an-in-memory-or-dev-database#2018-08-1618:05manutter51There’s no such thing as read-only Datomic, right? Like if App A wants to look at App B’s data without ever modifying it (e.g. 
a reporting front end), I can’t just give App A read-only access to App B’s data.#2018-08-1618:28val_waeselynckI think you could do this at the storage level#2018-08-1618:31favilaNo, in the end, everyone connects to a transactor, and the transactor needs write-access to the storage#2018-08-1618:32favilathe peers can have read-only access to storage; but since they can send txs to the transactor that doesn't matter#2018-08-1618:33faviladatomic cloud may be different; I am only familiar with on-prem#2018-08-1619:49manutter51Ok, that’s what I was thinking, just wanted to confirm, thanks.#2018-08-1618:10johnjnot that I know of, your app must enforce this#2018-08-1618:10lambdamThanks @manutter51.
That is sad since I have a very smooth dev workflow with integrant. I can reset the whole database with fresh dev data after a simple cider-refresh. Also I do everything in the same JVM instance.
I wonder if there is a similar dev workflow that makes it possible to also easily have the datomic console.#2018-08-1618:13marshall@dam https://docs.datomic.com/on-prem/dev-setup.html#2018-08-1618:13marshallyou’d need to use dev storage#2018-08-1618:13marshallnot mem#2018-08-1618:13marshallbut you gain console and disk-based persistance#2018-08-1618:27lambdamThanks @marshall. I saw that do but it doesn't seem easy to reset the database from the REPL. I made a mistake in my last post. I don't use cider-refresh but a custom reload-with-dev-data from the dev namespace with integrant.
(defn reload-with-dev-data []
  (halt)
  (go)
  (dev-data/load-dev-data! @db-conn-ref))
(halt) calls (d/delete-database db-uri) and (go) calls (d/create-database db-uri). It's instantaneous.
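For context, the init/halt pair behind (go) and (halt) might look roughly like this with integrant and an in-memory peer URI (a sketch only; the component key ::db-conn and the URI are hypothetical, not from the original messages):

```clojure
(require '[datomic.api :as d]
         '[integrant.core :as ig])

(def db-uri "datomic:mem://dev") ; hypothetical in-memory dev URI

;; (go) runs ig/init, which creates a fresh database and connects to it
(defmethod ig/init-key ::db-conn [_ _opts]
  (d/create-database db-uri)
  (d/connect db-uri))

;; (halt) runs ig/halt!, which releases the connection and drops the database
(defmethod ig/halt-key! ::db-conn [_ conn]
  (d/release conn)
  (d/delete-database db-uri))
```

Swapping the URI scheme from mem to dev storage should keep the same reset flow while making the database visible to tools like the console.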
Do you think that I can have something similar with the dev storage without restarting the JVM process?#2018-08-1618:32marshallsure; call the same functions#2018-08-1618:40timgilbert@dam for dev purposes you might also want to check out datomock: https://github.com/vvvvalvalval/datomock#2018-08-1618:44lambdamThanks @timgilbert, I'll check that. Also I see Valentin often during Paris Clojure Meetups. I'll discuss about that with him directly next time (ping @val_waeselynck (Salut Val !)).#2018-08-1620:02eoliphanthi. have a datomic cloud question for any of the cognitect folks who might be around. We have a pretty complex VPC setup, and one of our requirements is that traffic from any of our ‘internal’ VPCs (dev,test,etc and associated datomics) has to traverse a transit VPC to get out to the world. It looks like the IGW’s that get created are used for license checking or something? We tried just disabling one and things just broke 🙂 . In any case, we wanted to get some input from you guys on how we might best accomodate this..#2018-08-1620:11marshall@eoliphant Datomic uses the internet gateway to access some AWS services#2018-08-1620:11marshallI don’t recall which ones off the top of my head, but any AWS service for which there are not VPC endpoints available or which we haven’t yet been able to transition to using VPC endpoints#2018-08-1620:16eoliphantah ok#2018-08-1620:16eoliphantlet me let my guys know. we’ll try to figure it out but if you guys could scare up a list that’d be great#2018-08-1620:17marshalleverything but ddb and s3#2018-08-1620:17marshalli suspect Cloudwatch, possibly codedeploy#2018-08-1620:19marshallmaybe IAM#2018-08-1620:21ghadiSNS / SFN @marshall?#2018-08-1620:21marshallprobably so#2018-08-1620:21ghadiEFS probably is internal traffic only... 
administrative endpoints notwithstanding#2018-08-1620:22marshallyeah, i think EFS is strictly scoped to VPC#2018-08-1620:22marshalllambda for ions uses SNS#2018-08-1620:23marshalllambda service itself may also be a public endpoint#2018-08-1620:24eoliphantyeah i was wondering about the lambdas. and yeah I think my dev noticed the issue during a deploy. SO codedeploy is a likely place to look#2018-08-1620:25marshalldeploy will definitely require lambda and step functions#2018-08-1620:32shaunxcodewith datomic cloud supporting :limit and :offset is there planned support for :sort-by perhaps?#2018-08-1620:44shaunxcodenevermind apparently the sort aggregate is a thing (but I can not see it in the docs?)#2018-08-1710:11steveb8nI'd like to know more about this "sort aggregate". If you find something, can you share it?#2018-08-1712:59eoliphantAfaik, there's no in built sort aggregate. The custom aggregate example finds the mode, which of course requires sorting. At this point you'll still have to roll your own. I hope they add something. It's pretty easy to write your own, but it gets hinky if you want to use datomic's :limit and :offset. You usually want the sort applied first, then limit and offset that result. But using the native stuff its effectively reversed. In those situations i just use my own sort, limit, offset logic#2018-08-1713:24dustingetzHey you guys how would you explain why Datomic is more powerful than SQL in one slide to a business person? Pictures or words. Basically you get about one sentence and one picture and they have to "ah ha" or we failed#2018-08-1713:24dustingetzHey you guys how would you explain why Datomic is more powerful than SQL in one slide to a business person? Pictures or words. Basically you get about one sentence and one picture and they have to "ah ha" or we failed#2018-08-1713:29val_waeselynck1. More leverage 2. Less anticipation / more agility 3. Enforces / educates to IT best practices#2018-08-1713:29val_waeselynckAnd 4. 
Integration heaven#2018-08-1713:32val_waeselynckNot sure if they can «ah ha», I mean if they are not technical all dbs will look the same to them I think#2018-08-1713:33Mark AddlemanI would do it in two pictures: first picture is a two timelines showing the steps and latencies of an cycle with datomic vs SQL (a cycle would be developing the schema, migrating, creating queries, optimizing, etc)#2018-08-1713:33Mark Addlemanthe second picture would be the number of times that you go through a cycle in a typical development project.#2018-08-1713:34Mark Addlemanthen, it’s just multiplication to show time saved#2018-08-1713:37val_waeselynckI would usually address risk before productivity though (they will usually trust you about productivity)#2018-08-1717:05jonahbentonWhat's the business? It entirely depends on what their concerns are. If it is a business where audit and history matters, one slide mapping that feature to those specific concerns is enough#2018-08-1717:33jonahbentonNot all businesses care about that, but the ones that do really do#2018-08-1717:33jonahbentonIn any event, the best case is where concerns/painpoints map directly to features#2018-08-1718:56dustingetzI am trying to explain to venture capitalists why http://www.hyperfiddle.net/ is dependent on Datomic and why that is our greatest strength, not a weakness#2018-08-1719:01jonahbentonAh. May be better as a DM conversation, but I wonder: is that a question about Datomic, or is that a question about the applicability/leverage of the hyperfiddle concept?#2018-08-1719:15dustingetzThe rest of the deck is about Hyperfiddle, its the specific Datomic dependency which has been a hang up thus far#2018-08-1720:04eoliphantHmm, yeah not sure how to articulate this positively, but I mean, I’ve messed around with hyperfiddle. 
I’m having difficulty even visualizing how one might do it with say MySql#2018-08-1720:17Mark AddlemanWhy is an implementation detail like the database a VC concern?#2018-08-1723:12eoliphantWell sometimes they've folks who will dig in on your tech stack. Been through that more than once#2018-08-1923:45chris_johnsonIt’s really difficult to boil down to a pitch-deck slide, I think, because there’s like a 1.5-2 minute talk you can give that makes it abundantly clear how Datomic lets you differentiate yourself from the same team in an alternate universe stuck using SQL, but when you zoom out far enough to fit all those points into a single slide/~20 seconds talking over a slide, it tends to dither out to “we use it because it is awesome next question” (which I’m sure you’re familiar with)#2018-08-1923:46chris_johnsonHere is the text of the “useful properties of Datomic” slide from a meetup talk I gave about Datomic Cloud a few weeks ago:#2018-08-1923:46chris_johnson- Data modeling is (almost all) deferred to query time
- Accumulate-only, immutable stream of facts means that you can create arbitrary "views" over your data just by writing (or refining) a query
- Reads can scale arbitrarily - peers read directly from storage and don't need to coordinate with anything to pull reads
- Writes create a consistent state that can be replayed, queried, etc. ("what was every value of the db at the time this bug occurred")#2018-08-1923:48chris_johnsonI’m sure you’re aware of it, but this blog post also gave me a lot of raw material for the ~20 seconds of talking about that slide: https://augustl.com/blog/2018/datomic_look_at_all_the_things_i_am_not_doing/#2018-08-1717:20eoliphantHi, given that in cloud we’ve lost the bytes type and strings are now limited to 4K are there any recommended strategies for dealing with requirements for more ‘midsize’ strings, etc. Just say longer text vs something that should more appropriately be a ‘file’ in S3 or something#2018-08-1718:58dustingetzI'm planning on using a foreign KV store from a transaction function (the foreign write will be optimistic, not atomic – e.g. upload your image again if it failed without rolling back the rest)#2018-08-1720:08eoliphantOk, yeah, I’d started thinking about stuff like that. We’re doing S3 for images, etc. Wanted something ‘easy’ lol, for this scenario. I’d really like bytes back 🙂#2018-08-1720:38timgilbertThis is one possibility: https://vvvvalvalval.github.io/posts/2018-05-01-making-a-datomic-system-gdpr-compliant.html#2018-08-1720:39timgilbert(Not the GDPR stuff, but augmenting datomic with an external KV store for the actual data)#2018-08-1721:19eoliphantah right. readers to the rescue..#2018-08-1819:30dangercoderIs there a problem in datomic cloud with gdpr (EU)? I want to use datomic for my app but since it will be used all around the world I am not sure if I should roll with Datomic..#2018-08-1819:32dangercoderThis might be something. 
https://vvvvalvalval.github.io/posts/2018-05-01-making-a-datomic-system-gdpr-compliant.html#datomic_excision,_and_its_limitations#2018-08-1821:47olivergeorgeAn interesting comment here: https://www.reddit.com/r/Clojure/comments/8gfpb3/making_a_datomic_system_gdprcompliant/dybgot7/#2018-08-1905:07dominicmHow is your company handling backups?#2018-08-1905:07dominicmThey contain user data after all. #2018-08-2211:25olivergeorgeLate reply but I meant to send a link. I don't have answers really - just getting across things myself. I did enjoy this post https://techblog.bozho.net/gdpr-practical-guide-developers/#2018-08-2211:26olivergeorgeThere's a bit about backups in there... What about backups? Ideally, you should keep a separate table of forgotten user IDs, so that each time you restore a backup, you re-forget the forgotten users.
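On on-prem, the "re-forget after restore" step described above could be sketched with excision (a sketch; the function name and the idea of keeping the forgotten ids in a separate durable list are assumptions, and note that Datomic Cloud does not support excision):

```clojure
(require '[datomic.api :as d])

;; Sketch: given a durable list of entity ids that were previously
;; "forgotten", excise them again after restoring a backup (on-prem only).
(defn re-forget! [conn forgotten-eids]
  @(d/transact conn
               (for [eid forgotten-eids]
                 {:db/id     (d/tempid :db.part/user)
                  :db/excise eid})))
```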
#2018-08-2211:39dominicmYes, different lawyers are interpreting the requirements there differently. But datomic and backups function largely the same.#2018-08-1902:11olivergeorgeG'day from down under. Seems like Datomic Cloud is coming to Sydney - should make a big difference to latency for Australian users.
https://docs.datomic.com/cloud/releases.html#409-8407#2018-08-1902:12olivergeorgeI didn't think this was possible because Sydney doesn't seem to provide AWS Auto Scaling based on this page: https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/#2018-08-1905:06dominicmI don't know what datomic uses, but auto scaling groups are part of ec2, not the auto scaling service, afaik.#2018-08-1905:38olivergeorgeRight, perhaps I'm reading the marketplace details wrong. here's what it says:
Required AWS Services: APPLICATIONAUTOSCALING, AUTOSCALING, CLOUDFORMATION, CODEDEPLOY, DYNAMODB, EC2, EFS, ELASTICLOADBALANCINGV2, IAM, LAMBDA, LOGS, ROUTE53, S3
#2018-08-1906:26dominicmI guess it uses it then... #2018-08-1906:27dominicmI might be wrong about auto scaling anyway 😊 #2018-08-1906:28dominicmAmazon EC2 Auto Scaling has moved
You have several options for scaling with AWS.
Do you only need to manage Amazon EC2 instances? #2018-08-1906:28dominicmIt's probably there for dynamodb scaling#2018-08-1902:14olivergeorgeI can see ap-southeast-2 in the RegionMap of the compute templates but Sydney isn't a region I can pick on the AWS Marketplace page for Datomic - wondered if that was related or just an issue of getting marketplace updates approved by Amazon.#2018-08-1913:32stuarthalloway@U055DUUFS you should be able to use ap-southeast-2 with the 409-8407 release. I will check into what is happening on the marketplace page.#2018-08-2010:22olivergeorgeThanks @U072WS7PE. Look forward to trying it out.#2018-08-2013:24jaret@U055DUUFS Where were you not seeing Ap-southeast-2? I checked on marketplace and was able to see the region when I had 409 selected…#2018-08-2013:25jarethttps://previews.dropbox.com/p/thumb/AAIq-zqGVYA_ZxPQjxuSY-HLRBXZInM95Pv6zcJTLDG6Pcug8XBSh-QUGC7DpOFkU8Kw8fooc_2HopiEDpqXoZnaI4l94NnYUoOC8PK2AlKv1_Ox3kuBpHU5u7l1JYnPcJiyPlXFDftRYwJv1wCeetTRBcEHZwk2DMiWpxHgVf4Hp7RJv9Rqll1L8dRZzL3AqFcrCTp3cncmIdOIAfvWh1xVGC9VC6zip6FjbkJkl3EEXw/p.png?size=800x600&size_mode=3#2018-08-2016:58olivergeorgeSounds like I got punked by "estimating your costs" here https://aws.amazon.com/marketplace/pp/prodview-otb76awcrb7aa #2018-08-2210:35olivergeorgeWorks like a charm.#2018-08-1909:16vladexHi! What would be an idiomatic way to query last asserted datom? My idea is to add :created-at and use max on it. does this sound reasonable?#2018-08-1909:44vladexor get [?e ?created-at] relations and sort-by…#2018-08-1911:44henrikIs pull-many a thing in Datomic Cloud?#2018-08-1913:33stuarthallowayhi @U06B8J0AJ, pull is a convenience over query, you can always do what you want with a query.#2018-08-1913:33stuarthallowayThis gets asked a lot, it may be worth it to add pull-many just to spare confusion.#2018-08-1913:35henrikI see, thank you. I figured I could probably create my own pull-many, but I saw it in an example and went looking for it.
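Since pull is a convenience over query, henrik's home-grown pull-many for the client API could be a one-liner (a sketch; the helper name mirrors the peer API but is otherwise an assumption):

```clojure
(require '[datomic.client.api :as d])

;; Sketch: pull `pattern` for each entity id in `eids` via a single query.
(defn pull-many [db pattern eids]
  (map first
       (d/q '[:find (pull ?e pattern)
              :in $ [?e ...] pattern
              :where [?e]]
            db eids pattern)))
```

One caveat: unlike the peer's d/pull-many, query results are set-like, so the output order is not guaranteed to match eids.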
I guess if for nothing else, for backwards compatibility with existing guides that may be out there, it might be good to have it in.#2018-08-1912:49eoliphant@vladex I haven’t had to do this personally, but you should be able to easily drive this from the datomic log apis. If you do something like -> (tx-range {:start <some num> :end nil}) (last) (look at datoms asserted by tx) that should get you what you need#2018-08-1913:14vladex@eoliphant do you think that this approach is more performant assuming that I have an index for :created-at?#2018-08-1913:19eoliphantnot sure, I’d maybe (time) both approaches#2018-08-1914:03favila@vladex you said “last asserted datom”, that’s impossible with a created-at attr because it can only be asserted on entities not datoms. Can you be more precise about what you are trying to do?#2018-08-1914:37vladexyeah, you are right 🙂. Hypothetically, I use datomic for a newborns registry at a hospital. I assert an entity with :name and :born-at. What is the most idiomatic way to answer what was the name of the last newborn?#2018-08-1917:03eoliphantah ok.. That’s clearer. So you want the ‘domain’ most recent time, not the ‘system’ time when the transaction took place. There may be a more idiomatic approach, but right off the top of my head, you can just do a :find (max ?ba) :where [ .. :born-at ?ba] ... then use that value to query for the entity you want#2018-08-1917:09vladexyeah, that was my initial idea. “My idea is to add :created-at and use max on it”.
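The two-step approach eoliphant sketches in prose might look like this (a sketch; the attributes :born-at and :name come from the hypothetical newborn example above):

```clojure
(require '[datomic.api :as d])

;; Step 1: find the latest :born-at value with an aggregate.
;; Step 2: use that value to fetch the matching name.
(defn last-newborn-name [db]
  (let [latest (ffirst (d/q '[:find (max ?ba)
                              :where [_ :born-at ?ba]]
                            db))]
    (d/q '[:find ?name .
           :in $ ?ba
           :where [?e :born-at ?ba]
                  [?e :name ?name]]
         db latest)))
```

Note this does a full scan over :born-at values, which is the cost favila's negated-timestamp trick below avoids.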
Should be more clear next time 🙂#2018-08-1917:15vladexbut anyway seems strange, with SQL semantics is like ORDER BY LIMIT 1… maybe I should look also into raw indexes…#2018-08-1918:01favilaIf you want to be really efficient, you should store the reverse index of what you want#2018-08-1918:04favilaEg if you care about “newest” timestamp, store an attr with the negated ms value#2018-08-1918:04favilaThen it’s always first item in d/datoms#2018-08-1918:05favilaDatomic doesn’t have efficient reverse walking of its indexes, so even if you use raw indexes you will have to do some array bisection to find the last value#2018-08-1918:33vladexinteresting approach, thank you!#2018-08-1918:37vladexbut I see this pattern quite often: latest news, most recent posts, most liked comments… is datomic a good fit in this case? Or should I use third party indexing tools?#2018-08-1923:42favilaNot a good fit for a naive implementation, which is why it’s even a bit difficult to express succinctly in datalog#2018-08-1923:43favilaThere’s lots you can do short of an external index, but if you can’t afford an index scan you do need to do something#2018-08-2000:33favilaEg you could make a derived attr that orders more naturally; you could cache your “top x” list; you could read the Tx queue and keep the “top x” values incrementally in memory; you could write a bisection algorithm that uses index-scan to find the latest values without a full scan#2018-08-2104:52vladexthank you, do you know some useful resources where I can dig into this topic?#2018-08-1923:46joshkhMaybe someone can offer advice on modelling permissions in Datomic? I have a Permission entity that represents how any one entity can interact with another. It's essentially an edge with some data stored on it (source, target, access level).
Works just fine, but a little hard to manage when either its source or target ref is retracted. The Permission entity still hangs around. I don't think I can solve this with components - that would require every “type” of entity to have a component reference to some Permission entity. Is there something tricky I can do during the retraction of some entity to also retract any related Permission entities?#2018-08-1923:46joshkh#2018-08-1923:50chris_johnsonCould you not write a transaction fn that looks at every transaction (via :tx-report-queue or by registering a fn in Cloud) and when seeing a retraction, does a query to see if any Permissions reference the retracted thing, then does some cleanup?#2018-08-2123:58Petrus TheronDoes Datomic Cloud support DB functions yet?#2018-08-1923:51chris_johnsonI don’t guarantee that this would be uh, performant but you could try it and see if it works well enough#2018-08-1923:51joshkhAh, I haven't looked into registering transaction functions in Cloud (which is where I'm operating).#2018-08-1923:52chris_johnsonI haven’t done it in anger yet - I’m still in the “hey let’s get this existing infrastructure to scale successfully before we rewrite all our code to be Cloud-aware” phase of life - but I know you’re supposed to be able to do so#2018-08-1923:53joshkhIs that a purely Ions thing? Or something I could use with the Cloud client?#2018-08-1923:53chris_johnsonmight be purely Ions, I haven’t really looked at the Cloud client at all tbh#2018-08-1923:54chris_johnsonI am the living embodiment of that Squidward “fuuuuuuutuuuuuurrrreeee” .gif about Datomic Cloud#2018-08-1923:54chris_johnsonI only care about the stuff you can just barely trick it into doing, things that are “supported”? Meh!#2018-08-1923:55joshkhI'm living in the Cloud environment. Somewhere between the old and the new. 😉#2018-08-1923:56joshkhWhich I think is lacking in transaction functions? Or only supports a subset of what on-prem and ions does?
Still wrapping my head around it.#2018-08-1923:59joshkhI know I can do this easily in Cypher...#2018-08-2000:00joshkhAnywho, off to bed. Thanks for the advice!#2018-08-2000:08eoliphantas of ions becoming available, transaction functions are good to go in cloud, but they do have to be ions#2018-08-2001:24steveb8n@joshkh FWIW I use a combination of transaction fns and https://cjohansen.no/referentially-transparent-crud/ to ensure writes have all the correct associated effects. works well (but only with Ions or on-prem for txn fns)#2018-08-2007:36ckarlsenHey, looks like the certificate for http://my.datomic.com expired today!#2018-08-2008:04geoffkyip. creating problems for us too#2018-08-2009:32mkvlrsame here#2018-08-2010:05geoffkthere's apparently no easy way to get lein to ignore this either#2018-08-2010:06joseayudarte91We have the same problem! Let me know if you find a workaround, please#2018-08-2010:09biscuitpantsyou can force trust the certificate locally, but that won’t work for CI boxes#2018-08-2010:10geoffki hope Cognitect can provide information on how this happened and what they suggest the community of Datomic users do to prevent being blocked like this again. It's not convenient as it stands.#2018-08-2010:12joseayudarte91Mm I can trust in the browser, but how I do that with Leiningen?#2018-08-2010:14biscuitpantsyou need to force-add the certificate to your truststore (different per OS)#2018-08-2010:15joseayudarte91Ah ok! I thought I had already done that with IntelliJ. It was asking about that, but I will double check it, thanks#2018-08-2011:56stuarthallowayThis should now be fixed.#2018-08-2012:00lenthanks :+1:#2018-08-2010:33Petrus TheronI set up a solo Datomic Ion system last month, but now I'm having trouble getting datomic-socks-proxy to connect to the bastion, despite verifying that my stack is running and bastion instance is running. I've set AWS keys with aws configure and manually with export AWS_ACCESS_KEY=... and secret key.
When I run bash datomic-socks-proxy my-system, I get (redacted IP and system name):
download: s3://..../datomic/access/private-keys/bastion to ../../.ssh/datomic--my-system-bastion
OpenSSH_7.6p1, LibreSSL 2.6.2
debug1: Reading configuration data /Users/my-user/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug1: Connecting to 34.249... port 22.
ssh: connect to host 34.249... port 22: Operation timed out
What am I doing wrong here?#2018-08-2010:46Petrus TheronFixed by adding an inbound rule for SSH port 22 🙂#2018-08-2010:40lenThe SSL cert on http://my.datomic.com has expired ?#2018-08-2010:41lenwe can’t build at the moment 😞#2018-08-2010:42biscuitpantsthere’s a thread above about this. no message from Cognitect/Datomic team yet#2018-08-2010:46geoffk@len we can't build either 😞 i imagine many are having problems at the moment.#2018-08-2011:57lenmy.datomic is back :+1:#2018-08-2011:59geoffkyip#2018-08-2012:12biscuitpantshopefully we get some sort of message here 🙂#2018-08-2012:21stuarthallowaySorry for the outage! The problem has been resolved and should not recur. Please use the support system https://www.datomic.com/support.html for time-sensitive issues.#2018-08-2016:16Petrus TheronHow to do in-memory Datomic testing with datomic.client.api, e.g. for testing Datomic Ions? Do I need to pull in datomic.api as well?#2018-08-2016:38kennyWe have been using this for about a month now and it has made development much faster and CI testing far easier: https://github.com/ComputeSoftware/datomic-client-memdb#2018-08-2016:55Mark AddlemanThis looks cool and seems to support on-prem Datomic. Have you thought about Cloud support?#2018-08-2016:57kenny@UAMEU7QV7 Not sure what you mean. The purpose of this library is to support local development & testing when you use Datomic Cloud in production 🙂 Sorry for the lack of documentation on the lib - haven't had time to write anything.#2018-08-2016:57Mark AddlemanOh. I quickly glanced over the code and thought it was on-prem. This looks great! Thanks!#2018-08-2016:59kennyIt essentially provides the Datomic Client API for the on-prem memory DB.#2018-08-2109:44Petrus TheronHi @U083D6HK9, thanks for this. I'm having a hard time querying the LocalDb, though. I can transact and retrieve the raw datoms I transacted, but when I query an instance of LocalDb with (dc/q ...), no results are being returned.
It seems that the implementation of q from client-impl/Queryable is not being called. Also, transact returns datomic.db.Db in :keys [db-after db-before] instead of LocalDb, which can't be queried directly.#2018-08-2110:05Petrus TheronHnnng, resolved: I was calling (dc/transact conn [{:db/id -1 :contact/name "Test Name"}]) (which returned just fine) instead of (dc/transact conn {:tx-data [{:db/id "new-entity" :contact/name "Test Name"}]}). Queries now run 🙂 I hope Spec will detect this kind of thing#2018-08-2110:18Petrus Theron@U083D6HK9 submitted PR for LocalDB casting in transact: https://github.com/theronic/datomic-client-memdb/pull/1#2018-08-2116:36kenny@U051SPP9Z Yeah, the change in the transact format has bitten me many times. I've commented in this channel about it before and got no response :man-shrugging::skin-tone-2:.
Hmm, that is a bug. Also a downside to implementing it the way I did. Can you open that PR against the repo?#2018-08-2116:47kennyOriginally I implemented the lib using extend-type on the peer Connection and Db but I ran into the issue of being unable to store extra information on the object (`:db-name` and :t), so I went with the deftype approach. The downside there is that now we need to coerce all returned dbs into LocalDb in order to use them with the Client API.#2018-08-2117:04kennyThe other problem was I could not add an implementation of clojure.lang.ILookup to datomic.db.Db because ILookup is an interface, not a protocol. Not sure if there is a workaround for that one.#2018-08-2018:56tlimaQuestion: if I transact the same data twice (same entity, same values) will Datomic create a new transaction for that? i.e. will the transaction counter be updated?#2018-08-2019:11favilaevery successful transact call makes a transaction entity#2018-08-2019:12favilathe smallest possible transaction has one datom: [the-new-tx-id :db/txInstant the-transaction-time true the-new-tx-id]#2018-08-2019:30tlimaI see. Thanks, @U09R86PA4#2018-08-2110:35Petrus TheronOut-dated link in Datomic Cloud v0.8.54 query exception: (datomic.client.api/q '[:find [?e ...] :where ...] cloud-db) throws Only find-rel elements are allowed in client find-spec, see , which links to on-prem documentation that does support find-coll return, which was confusing. Exception should probably link to https://docs.datomic.com/cloud/query/query-data-reference.html#2018-08-2111:26Petrus TheronDoes datomic.client.api/with work for prospective tx on Datomic Cloud?
(d/transact cloud-conn {:tx-data []})
=> {:db-before {:database-id "57ad2773-...", :db-name "my-db", :t 3, :next-t 4, :type :datomic.client/db}, :db-after {:database-id "57ad2773-...", :db-name "my-db", :t 4, :next-t 5, :type :datomic.client/db}
(d/with (d/db conn) {:tx-data []}) ;; also throws for actual tx-data values
=> Exception: `clojure.lang.ExceptionInfo: Datomic Client Exception {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :datomic.client/http-result {:status nil, :headers nil, :body nil}}`
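For what it's worth, in the client API d/with takes a with-db obtained from d/with-db rather than a plain database value, which may explain the anomaly above (a sketch; the example tx-data is hypothetical):

```clojure
(require '[datomic.client.api :as d])

;; Speculatively apply tx-data without affecting the real connection.
(let [with-db (d/with-db conn)              ; branch point for speculation
      {:keys [db-after]} (d/with with-db
                                 {:tx-data [{:db/id  "tmp"
                                             :db/doc "speculative"}]})]
  ;; db-after reflects the speculative write and can be queried as usual
  (d/q '[:find ?e :where [?e :db/doc "speculative"]] db-after))
```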
#2018-08-2111:40stuarthallowayHi @U051SPP9Z! At first glance seems like that should work. What do the logs say? https://docs.datomic.com/cloud/operation/monitoring.html#searching-cloudwatch-logs#2018-08-2111:49Petrus TheronHi @U072WS7PE. I can't seem to find any logs related to this error, but I'll PM you my log group name. I see other errors about trying to use a missing db function though. (would be cool if I could stream errors related to my queries into REPL)#2018-08-2111:55stuarthallowayWe are improving error messages with each release. Are you on the latest client and CFT?#2018-08-2112:42Petrus TheronI'm on com.datomic/client-cloud {:mvn/version "0.8.54"} and CFT version 402.#2018-08-2112:44stuarthallowayso the first thing I search in any Datomic Cloudwatch log is "Alert - Alerts" -- that text will find all the potentially scary stuff, if any#2018-08-2112:45stuarthallowayExample at https://docs.datomic.com/cloud/operation/monitoring.html#finding-alerts#2018-08-2113:09Petrus TheronTook me about a minute to find the little page icon for CloudFormation template URL on https://docs.datomic.com/cloud/releases.html, which you can't right-click on. So I had to go to web inspector to find it. (It's not obvious that clicking on it puts something in your clipboard.)#2018-08-2113:17Petrus TheronAttempting to create a new CloudFormation stack from template triggers Rollback with reason: No export named mysystem-CatalogTable found. Rollback requested by user. (I did not initiate anything)#2018-08-2113:21Petrus TheronHm, I must have done something wrong - probably didn't tick "Reuse existing storage" to yes, or it didn't detect when I pasted JSON template URL#2018-08-2113:26Petrus TheronStack started up again, but bastion instance not created#2018-08-2113:31Petrus TheronWhy would I not see an "Enable bastion?" option for the latest Solo CloudFormation template (v409)?#2018-08-2113:35Petrus TheronHmm...maybe I triggered the Storage only template. 
Trying to fix it via "Update Stack" with the Solo template and enabling bastion triggers a rollback again, with reason Export mysystem-Subnet1 cannot be deleted as it is in use by mysystem sigh I'm going to delete the whole stack and start over again#2018-08-2113:38Petrus TheronCan't recreate stack with same name: No export named "mysystem-Subnet2" found. Rollback requested by user.#2018-08-2113:51Petrus TheronAfter attempting an upgrade to 409 by deleting my stack as per docs (maybe it wasn't a "first upgrade"?) I had to go via the marketplace to avoid the subnet error and get the VPC options again; so far it doesn't seem to be rolling back. This cloud state management stuff is so complicated.#2018-08-2114:18stuarthallowayWow, that is a bummer. Will investigate.#2018-08-2114:29henrikWith Ions, I’m getting random stack overflows in my code when pushing to AWS,
"java.lang.StackOverflowError, compiling:(buddy/core/bytes.clj:72:15)"
Usually something in buddy, but not the same file every time. This started happening after adding Amazonica to do some S3 stuff.
This is on solo, so is it likely to be OOM errors?#2018-08-2115:11stuarthalloway@U06B8J0AJ this is definitely solo not having enough mem#2018-08-2115:14henrikPoor solo. Can I edit the official cloud-formation template to use t2.medium or something instead?#2018-08-2115:26stuarthallowayNo. Solo runs only on t2.smalls. I am considering other options.#2018-08-2115:26stuarthallowayIf the problem is in compilation then deploying compiled bits may help.#2018-08-2115:26stuarthallowayThe last time I checked, Amazonica is enormous, and only has a monolithic deploy.#2018-08-2115:32henrikYeah, I tried my best to rip out large parts of it, but no go. I will consider other options. Perhaps the REST API instead, for now.#2018-08-2115:35henrikI’m making a prototype at my own expense to convince my colleagues to Do The Right Thing, so I’d rather avoid taking on the cost of the production topology at the moment.#2018-08-2115:39henrikBut also of course to learn because it’s awesome.#2018-08-2116:08okocimI had similar issues with amazonica myself. It’s a nice library, but I find it a bit too clever and bulky. I ended up just writing some small functions to do what I needed in S3 using interop and the DefaultAWSCredentialsProviderChain. It works great for my needs, and I’m happy to share the code with you if you’d like.#2018-08-2117:20henrik@UBBBNAS6T That would be fantastic, thank you!#2018-08-2119:44okocim(ns util.aws.s3
  (:import [com.amazonaws AmazonServiceException, SdkClientException]
           [com.amazonaws.auth DefaultAWSCredentialsProviderChain]
           [com.amazonaws.services.s3 AmazonS3, AmazonS3ClientBuilder]
           [com.amazonaws.services.s3.model ObjectMetadata, PutObjectRequest]))

(defn default-s3-client []
  (AmazonS3ClientBuilder/defaultClient))

(defn stack-trace-as-str [e]
  (let [sw (java.io.StringWriter.)
        pw (java.io.PrintWriter. sw)
        _ (.printStackTrace e pw)]
    (str sw)))

(defn put-object-string
  ([bucket-name key-name value]
   (put-object-string (default-s3-client) bucket-name key-name value))
  ([s3-client bucket-name key-name value]
   (try
     (let [field-puller (fnil
                         (juxt (memfn getContentMd5)
                               (memfn getETag)
                               (memfn getExpirationTime)
                               (memfn getExpirationTimeRuleId)
                               (memfn getMetadata)
                               (memfn getVersionId)
                               (memfn isRequesterCharged))
                         {:response-obj "no data!"})
           [md5-hash
            etag
            expiration-time
            expiration-time-rule-id
            metadata
            version-id
            requester-charged?] (field-puller
                                 (.putObject s3-client bucket-name key-name (str value)))]
       (cond-> {}
         md5-hash (assoc :md5-hash md5-hash)
         etag (assoc :etag etag)
         expiration-time (assoc :expiration-time expiration-time)
         expiration-time-rule-id (assoc :expiration-time-rule-id expiration-time-rule-id)
         version-id (assoc :version-id version-id)
         true (assoc :requester-charged? requester-charged?)))
     (catch AmazonServiceException e (stack-trace-as-str e))
     (catch SdkClientException e (stack-trace-as-str e)))))
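As a side note, the `fnil`/`juxt` combination used for `field-puller` above can be seen in isolation, with plain keywords standing in for the `memfn` getters and a made-up default map (hypothetical data, no AWS involved):

```clojure
;; fnil substitutes the supplied default when the first argument is nil;
;; juxt applies several getters and returns a vector of their results
(def puller (fnil (juxt :a :b) {:a "default-a" :b "default-b"}))

(puller {:a 1 :b 2}) ;; => [1 2]
(puller nil)         ;; => ["default-a" "default-b"]
```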
#2018-08-2119:45okocimI only ever needed stuff on the put side, so there’s not much to this, but here you go anyway 🙂#2018-08-2120:02henrik@UBBBNAS6T Do you manually rip out all AWS libraries you don’t use, or do you find it possible to include the entire SDK?#2018-08-2120:05okocimfor s3, I’ve been relying on the dependency that’s transitively included with an ion project, so I didn’t explicitly put it in my deps.edn, but as a rule, I’m only including the parts of the SDK that I am using.#2018-08-2120:05okocimcom.amazonaws/aws-java-sdk-sqs {:mvn/version "1.11.382"}#2018-08-2120:05okocimfor example#2018-08-2120:07okocimin this case, com.amazonaws/aws-java-sdk-s3 1.11.314 is coming through transitively because of the ion-dev dependency#2018-08-2114:30henrikBoth deploys AND rollbacks are failing.#2018-08-2115:53stuarthallowayDid you deploy to a strong name (git SHA) or a weak one?#2018-08-2115:53stuarthallowayIf you are deploying to the same weak name over and over, rollback cannot work because you are overwriting the thing to roll back to#2018-08-2115:53stuarthallowayThat said, the stack overflow thing can be intermittent, so it may be you got lucky the first time and now cannot get back.#2018-08-2117:48henrikNo, I consistently use git SHAs for deploy. I think you’re right in that I got lucky. Maybe AWS is rolling back to that one lucky build.
I’m worried I’ve screwed something up permanently though. I’m getting failed deploys for older builds even, where Amazonica definitely was absent.#2018-08-2122:10steveb8nI suffered that problem as well a while back. In my case, it was due to the dep warnings seen during the push step. Once I found the right combination, the stackoverflows went away. My errors were referring to Specter for the stackoverflow but that was not accurate, it’s just where it was executing when it ran out of mem.#2018-08-2114:38Petrus TheronHow do I refer to the tx-id in a transact call using Datomic Cloud?#2018-08-2114:42rhansen"datomic.tx" if I remember correctly. It's some magic string, you should be able to find it somewhere in the docs.#2018-08-2115:07Petrus TheronWhere do my Datomic Cloud queries actually run? Do they run on the t2.small EC2 instance, or is that just a transactor?#2018-08-2115:51stuarthallowayBwa ha ha. It is magic!#2018-08-2115:52stuarthallowayThere are no transactors. Nodes do all the things. https://docs.datomic.com/cloud/whatis/architecture.html#nodes#2018-08-2116:28Petrus TheronSo do the nodes live on my Solo topology's t2.small?#2018-08-2116:15okocimIs there any way to install the lambda proxy in a different region than the datomic cluster?
I’m trying to use the pre token generation Cognito trigger so that I can put additional claims for data from Datomic into the idToken that Cognito hands back, but it appears that Cognito triggers only work with lambdas that are in the same region as the user pool. Unfortunately, in this case, my user pool is in us-east-1, and my Datomic system is in us-east-2.
{:cognitect.anomalies/category :cognitect.anomalies/incorrect, :datomic.client/http-result {:status nil, :headers nil, :body nil}}
Do I need to cycle in my stack?#2018-08-2212:58jaretWhat version is your stack running server side? (i.e. what CFT did you launch your solo stack with). I will investigate further.#2018-08-2213:37jaret@U051SPP9Z ^#2018-08-2121:37tlimaIs there anything like an undocumented alt-port property that I could configure at my transactor, to let peers know that alt-host also uses a different port?#2018-08-2122:40stuarthalloway@UBY44N2JG No, just alt-host.#2018-08-2212:23tlimaWhere I can file a feature request?#2018-08-2212:54jaret@UBY44N2JG under the “suggest a feature portal” located in the top right of your http://my.datomic.com account after login. If you don’t have a http://my.datomic.com account, please let me know and I can create one for you.#2018-08-2212:54tlimaThanks, @U1QJACBUM#2018-08-2209:13joshkhback in june @denik asked the question
> "is there a story for a webapp that uses datomic ion and websockets?".
i'm very interested in this too. my (webapp) project is very websocket heavy, so AWS API Gateway isn't useful for me. is that a roadblock?#2018-08-2210:50Petrus TheronYou might be able to use AWS IoT for this and call out to your Ions as needed, as IoT supports MQTT over websockets: https://docs.aws.amazon.com/iot/latest/developerguide/protocols.html#mqtt-ws#2018-08-2213:19joshkhthanks, i'll look into it!#2018-08-2214:08denik@U0GC1C09L please keep me posted if you find a good way to make it work#2018-08-2316:29Joe Lane@U0GC1C09L that is exactly what you should do. I demo’ed this myself and it’s magnificent. I’m working on making a writeup but i need to set up the right blogging system with work first.#2018-08-2316:48Petrus Theron@denik @U0GC1C09L a validating use-case for AWS IoT + Datomic: https://twitter.com/JPLane3000/status/1017215326990274566#2018-08-2317:23joshkhcheers! this seems to be exactly what i'm looking for. very much looking forward to the tutorial. 🙂#2018-08-2318:43Joe LaneHaha, cool, y’all found my twitter account.#2018-08-2413:32eoliphantSweet. I actually have a couple upcoming user stories for exactly this lol
anyone have a link?#2018-08-2216:23jarethttps://forum.datomic.com/t/keeping-alive-socks-proxy/593#2018-08-2216:23jaretI just copied out the advice given earlier in slack (which I was able to find on an archive)#2018-08-2216:23jaretPlease feel free to add any experience reports to that thread ^#2018-08-2217:00joshkhcheers, thanks Jaret#2018-08-2215:30markbastianI am having an issue with transacting a large amount of data in which I keep getting the exception "db.error/transactor-unavailable Transactor not available". I've got along the lines of ~100k objects being transacted using transact-async, each one resulting in about 5-10 datoms. I'm partitioning the data so that I'm performing a large number of small transactions. I've tried a variety of sizes for "small" such that the transaction report queue shows txdata sizes along the lines of 100-10000 datoms depending on how I do my partitioning. It looks like the transactor is being saturated then once it recovers it starts writing again. I've tried setting a high timeout (-Ddatomic.txTimeoutMsec=120000) and that doesn't seem to work. The log does give a warning of "2018-08-22 07:58:07.869 WARN default o.a.activemq.artemis.core.server - AMQ222183: Blocking message production on address 'test-98bbb5cd-6025-4228-aee8-f0ee886d5a82.tx-submit'; size is currently: 263,836 bytes; max-size-bytes: 262,144." Could the problem be an artemis configuration? If so, how would I configure that for datomic? I can't seem to find any information on that. Any ideas? Thanks!#2018-08-2215:34markbastianAdditional data: I'm using datomic-pro-0.9.5697 with a postgresql backing store configured basically the way the datomic site says to configure it. My current transactor template file has the "recommended settings" for production enabled and write-concurrency=8.#2018-08-2216:00marshall@markbastian https://docs.datomic.com/on-prem/capacity.html#plan-for-back-pressure
If you’re hitting backpressure you need to slow your import and/or wait#2018-08-2216:01marshallDatomic OnPrem doesn’t provide “flow control” in the form of a queue or work limiting in the peer library; if you need that kind of upstream arbitration you need to supply it#2018-08-2216:06markbastianShould the use of transact-async just queue up the data in the artemis queues and get picked up eventually?#2018-08-2216:06markbastianThat was my understanding when I read that before, but I am still pretty new to datomic, especially the transactor.#2018-08-2216:12markbastianOk, I think I figured something out. If I call transact-async and let it do its thing it works very well. I can write hundreds of thousands of datoms in not much more time than it takes to process the data. The issue is when I dereference the future it returns. I want to be able to catch any exceptions in the transaction and it seems like the only way to do this is to eventually deref the future. I tried putting that in another future, but that still seems to block the transactor (or something that gives bad performance).#2018-08-2216:13markbastianSo, I think my only issue is how I can catch possible exceptions on the transaction without turning it into a blocking operation.#2018-08-2216:27conanWhat's the easiest way of reliably scheduling a call to backup-db for a datomic cluster built using the CloudFormation template?#2018-08-2216:34markbastianTo put it quite clearly, I can do this just fine and it is very performant:
(datomic/transact-async conn [[:my "data"]]) ;Assume for the moment that data may or may not be valid.
But if I do this it performs terribly:
(future
  (try
    @(datomic/transact-async conn [[:my "data"]])
    (catch Exception e (.printStackTrace e))))
Should I be using core.async as shown here: https://docs.datomic.com/on-prem/best-practices.html#pipeline-transactions? It still seems like this would dereference the future and you'd have the same issues.#2018-08-2216:51marshallif you don’t deref the future it’s fire and forget#2018-08-2216:52marshallif you need to know whether the transaction failed or not you either have to keep the future and deref it (only tells you if it succeeded, not if it potentially failed) or query for the things later to ensure they got in there#2018-08-2217:09markbastianYeah, my general case is fire and forget and it works very well. I just would like some facility to report an exception if and only if one occurs. I suppose I could put the futures on a queue and occasionally poll them to see if they've been realized and then check for exceptional behavior. Alternatively, if I suspect a problem I could enable derefing the future only in those cases. Thanks for the tips!#2018-08-2219:39eraserhdHey, wait... Datomic is reordering query clauses???#2018-08-2317:58jaretThere is no feature to re-order clauses. Clause order is up to you. If you’re seeing something unexpected here, please feel free to log a support case
https://docs.datomic.com/on-prem/query.html#clause-order#2018-08-2318:17eraserhdOk. It's specifically in calling a user function. We have a work-around, but I'll report there.#2018-08-2219:39eraserhdIs this new?#2018-08-2219:40eraserhd(On Prem)#2018-08-2223:42kenji_signifierHi, I’m looking at Ions to implement business specific layer on top of Datomic to support operations such as money transfer, stock trading (transfer share amounts between accounts), etc. It seems to be good fit, but I’d also like to publish post-commit events to Kafka topics. I think it could be achievable with datomic.Connection.txReportQueue in peer API, but it seems not exist in Client API. Are there alternatives to achieve this in Datomic Cloud?#2018-08-2301:29stuarthallowaynot at present, but you could poll#2018-08-2315:39dustingetzCan you write to a queue in a transaction function? Rich was talking about this at Conj party '18. (Be careful not to block in the transactor)#2018-08-2403:34kenji_signifier@U09K620SG thx for the idea and I read Ions launch day questions as well. In this case I’d like to capture post-tx so transaction function may not be the best place as I need to consider multiple invocations and tx failure case. @U072WS7PE Is it possible to connect Datomic Cloud DB via Peer API if I run it from VPC peered network to circumvent securities? Suppose it’s possible, would it be considered as “at-your-own-risk” or a proper approach?#2018-08-2413:40eoliphant@U1QA1G3UH we’re using http://www.onyxplatform.org/ to stream out transactions and do other stuff with them. @U0509NKGK has a great blog post on how they did it https://www.stuttaford.me/2016/01/15/how-cognician-uses-onyx/
The preferred approach to accessing your datomic endpoint from another VPC is using AWS’ VPC endpoint feature https://docs.datomic.com/cloud/operation/client-applications.html#separate-vpc#2018-08-2420:27kenji_signifierHi @U380J7PAQ thx for the info. Onyx datomic plugin had only Peer API until I contributed the PR to support for Datomic Client API and Datomic Cloud in Feb. so I presume you’re using Peer API 🙂 https://github.com/onyx-platform/onyx-datomic/pull/31 BTW, I decided to use Kafka Streams instead. (And now Distributed Masonry is a part of Confluent!) My question is if there is a way to access Datomic Cloud via Peer API. Yes, I know I can VPC peer, but I’d like to know if accessing Datomic Cloud via Peer API is a) possible, b) supported.#2018-08-2315:09dustingetzCas-retract – is there a nice idiom for this?#2018-08-2409:41tmulvaneyi've just started using ions to build a web app to explore our data. It's been great! I have a question though: Is there a way of getting the stage name of the API Gateway eg. "dev"?#2018-08-2409:42tmulvaneyIt would be handy for constructing hyperlinks.#2018-08-2410:58tmulvaneyok so the raw request json under the key: datomic.ion.edn/api-gateway-request has the stage name.#2018-08-2414:33ghadihttps://clojurians.slack.com/archives/C03RZMDSH/p1534868110000100#2018-08-2414:33ghadidid you ever workaround this okocim?#2018-08-2418:26okocimNo, I ended up just creating the user pool in the same region as the datomic cluster.#2018-08-2414:42ghadiwe are probably going to destroy our US-East-2 cluster and recreate in East-1 under a separate AWS acct, and deal with the cross-account permissioning#2018-08-2417:10eoliphantThis kind of stuff really annoys me. We had to do something similar because of some other services. 
They're trying to get more people out of east-1, but the lag on services is just too much#2018-08-2417:01mpenetlet's say you want to store an edn "blob" into an attribute, how would you do that?#2018-08-2417:56favilapr-str and mind the 4k limit#2018-08-2417:57favilamore sophistication would involve serializing to fressian or netty or binary serialization of your choice and storing as bytes; or storing in an external store if the blobs are large#2018-08-2417:58favilaNote that I think datomic cloud does not support the binary type#2018-08-2418:22henrikI'm storing a number of blobs for an entity, and I ended up sticking them on S3. As they're part of a details view of the entities in question, I figure the response time is sufficient. Seeing as S3 supports versioning as well, I reckon I can support time travel even though they're not sitting inside Datomic.#2018-08-2419:24favilaIt might be simpler to just write a new key to the bucket?#2018-08-2419:24favilaI'm not sure how to correlate a datomic T with a specific bucket blob version#2018-08-2419:25favilain a foolproof manner#2018-08-2420:41mpenetYeah I am heading that way for now (serialized as byte[])#2018-08-2420:42mpenetI am just playing for now, so not on cloud#2018-08-2417:01mpenetI need an edn "bag of anything" type#2018-08-2417:02mpenetseems like I might have to serialize it and store as "bytes". I am trying to convert a schema we have on postgres as an exercise, we use jsonb in these rare cases.#2018-08-2417:38ghadican't index an opaque bag, but yeah you could do that, or store the bag out-of-band in a K-V store and remember the K in datomic#2018-08-2420:45mpenetI dont need to have its content indexed. For now I ll just serialize it, but in a real world scenario I would likely do something like what you suggest #2018-08-2507:14henrik@mpenet @favila I’m exploring DynamoDB as an alternative (cheating with Amazonica for now). Here’s the code I’m trying out at the moment. 
It encodes and decodes the EDN blob using Transit in Messagepack mode.#2018-08-2507:15henrikThere’s a lot of casts and pprint in there for me to follow along with what’s happening just now, it can be stripped out.#2018-08-2507:16henrikI think for dealing with versioning, you’d use a new UUID for every time you update the blob, and keep all versions intact in DynamoDB. That way it can follow along with the versioning happening in Datomic.#2018-08-2507:49henrikFor Amazonica, I’m stripping out all AWS deps and pull DynamoDB back in at the same version that Ions uses.#2018-08-2516:04mpenetJust wondering, why the 4k string limit and lack of byte[] in cloud? Is that to artificially limit the memory usage of databases/entities?#2018-08-2520:33pfeodrippeHi folks, I'm having some problems using db/fn with datomic memory connections in which they don't give a exception or warning if a transaction function uses dependencies that are not part of datomic classpath (for a example, calling a datomic function using a dynamo connection gives a "not a classpath" error)#2018-08-2520:34pfeodrippeIs there some way to represent this kind of error using a datomic memory connection?#2018-08-2520:55mgString size limit is probably to keep segment size down#2018-08-2622:01johnjhttps://martintrojer.github.io/clojure/2015/06/03/datomic-dos-and-donts here says to keep strings under 1Kb for performance reasons.#2018-08-2703:31henrikSo with UTF-16, that means strings of no more than 64 characters.#2018-08-2703:49henrikThough that guide is from 2015, and the official best practices for Cloud don’t seem to mention it: https://docs.datomic.com/cloud/best.html#2018-08-2708:38Petrus TheronAre there are any long-term future plans to allow extending Datomic with custom data types, such as sparse matrices to d o the matrix math needed for collaborative filtering, or is this something that could be handled by database functions already + caching?#2018-08-2714:31stuarthalloway@U051SPP9Z the original design 
on Datomic included consideration of custom data types, but this is not an area of active development. Always happy to learn about your use cases.#2018-08-2712:59henrikHello @marshall, found a dead link. “ion :allow list” under https://docs.datomic.com/cloud/query/query-data-reference.html#calling-clojure-functions#2018-08-2713:16marshall@henrik fixed - thanks!#2018-08-2714:37manutter51Any conformity people here? I’m trying to use c/ensure-conforms to set up my db schema, and I’m getting “:db.error/not-a-data-function Not a data function: 10”. I don’t have “10" anywhere in my norms, and there’s no reference to any conformity code in the stack trace. This is on an in-memory db (datomic on-prem), so no previous schema in the db.#2018-08-2714:37manutter51Here’s the norm: {::fz.location
 {:txes [{:db/ident :fz.location/name
          :db/valueType :db.type/string
          :db/cardinality :db.cardinality/one
          :db/doc "Location name"}]}}#2018-08-2715:22manutter51Co-worker helped me figure this out: :txes has to be a vector of vectors of maps, not just a vector of maps.#2018-08-2717:28timgilbertHey, datalog question here. I have entities with a cardinality-many keyword attribute :x/colors. I want to write a datalog query that accepts a set of color keywords, and returns every entity for which the entire set of :x/colors for the entity is the same as the set I passed in.#2018-08-2717:29timgilbertI'm a little confused about how to do this - using collection binding I only match on a single :color at a time#2018-08-2718:35Mark AddlemanYes, this is the typical approach. You can accomplish this a couple of ways. You could issue one query per element in the set or you can pass the set into an iterable binding (I don’t think that’s the official term) doing this :in [?color...]#2018-08-2719:07timgilbertThe problem with that is that I only get, eg, :blue on any given match, and in that context I don't know whether :red was also passed in the input set#2018-08-2719:57dustingetzYou can pass in a set. This was asked in the Datomic forum, I'll try to find the link#2018-08-2719:59dustingetz@U08QZ7Y5S https://forum.datomic.com/t/exact-unordered-match-of-multi-valued-string-field/365#2018-08-2720:00dustingetzThe secret sauce is calling clojure.core/= for set equality#2018-08-2721:21timgilbertThanks, I'll take a look at that#2018-08-2717:30timgilbertAnd then in the context of a datalog clause I'm not sure how to get the entire set of :x/colors for a single entity e.#2018-08-2718:39bmaddyI'm not sure how the iterable binding approach would work, but I'd probably do something like this:
(defn get-colors
  [db eid]
  ;; pull returns a map like {:x/colors [...]}; take the value and make it a set
  (->> eid
       (d/pull db '[:x/colors])
       :x/colors
       set))

(d/q '[:find ?e
       :in $ ?search
       :where
       [?e :x/colors]
       [(foo/get-colors $ ?e) ?search]]
     (d/db conn) #{:blue :green})
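The `clojure.core/=` suggestion from the forum link can be shown without Datomic at all: equality on sets is exactly an unordered, duplicate-insensitive match. A plain-Clojure sketch with hypothetical data (the map stands in for the per-entity aggregation a query would produce):

```clojure
;; entity-id -> full set of colors for that entity (made-up data)
(def colors-by-entity
  {1 #{:red :blue}
   2 #{:blue}
   3 #{:blue :red}})

(defn exact-color-match
  "Ids whose complete color set equals search (hypothetical helper)."
  [search]
  (set (for [[id cs] colors-by-entity
             :when (= cs search)]
         id)))

(exact-color-match #{:red :blue}) ;; => #{1 3}
(exact-color-match #{:blue})      ;; => #{2}
```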
#2018-08-2719:09timgilbertI can see how that would work, but I'm still hoping for a way to do it inside of the query itself#2018-08-2719:10timgilbertI guess I'm hoping for a way to write an (every?) predicate or something#2018-08-2719:11timgilbertI found this StackOverflow answer which seems like it might be heading in the right direction: https://stackoverflow.com/questions/23352421/for-all-in-datalog#2018-08-2719:56cap10morganIs it possible to do a Datomic restore across AWS accounts? I've given the other account (i.e. the one that owns the destination dynamo table) access to the backup bucket in S3, and I can do aws s3 ls from the CLI w/ those creds, but Datomic's bin/datomic restore-db still crashes with S3 access errors.#2018-08-2719:57marshallyou’ll need to make sure the role/credentials used by the restore-db call (peer application) have read permissions to the s3 bucket#2018-08-2719:57marshallthat should be all that’s required#2018-08-2719:57marshallkeep in mind, aws s3 ls isnt’ the same as being able to read all the objects in that bucket#2018-08-2719:57marshallIAM has separate permissions for getting objects and listing bucket contents#2018-08-2720:06cap10morgan@marshall OK I think you clued me into the right thing here. I need to set this up using IAM, not ACLs.#2018-08-2720:08marshallyeah, it may be possible with ACLs, but I think you’ll have better luck / more control with IAM#2018-08-2720:10marshallalso, I believe S3 ACLs are a legacy tool that pre-dates IAM or bucket policies#2018-08-2720:21cap10morganHmm, the only stuff I'm seeing around this involves assuming roles. I'm not having any luck combining that with a Datomic restore-db command.#2018-08-2720:21marshallrun the instance with a role#2018-08-2720:21marshallwhatever instance you’re running the restore job on#2018-08-2720:22marshallalternatively, give the role to a user#2018-08-2720:22marshalland run with that user’s credentials#2018-08-2720:22marshalli.e. 
if you’re running the restore on your local machine#2018-08-2720:27cap10morganIs there a way to give a role to a user other than aws sts assume-role? I tried that, then tried using the creds it gave me, but those aren't working.#2018-08-2720:29marshallyou can assign a user to a group and put the role on that group#2018-08-2720:30marshallactually i may be thinking of policies#2018-08-2720:36cap10morganOh, I think I may have figured it out. I needed to set the AWS_SESSION_TOKEN env var too. And now it looks like I need to give this role permission to write to the destination dynamo table b/c it drops existing privs when you assume the role.#2018-08-2720:46cap10morganHmm, no. That doesn't work either. It tries to restore to the DynamoDB table in the account that the assumed role gave it access to. So I'm back at square one. I'm not sure how to grant one side of a restore access to another account's S3 bucket and the opposite side access to the primary account's DynamoDB table. 😕#2018-08-2723:08cap10morganUsing a bucket policy that gave permissions to the other account in the Principal field fixed this ^^^. And with no more need to mess with assuming roles.#2018-08-2805:28henrikWith the client API (on Ions), I’ve made a basic pagination function:
(defn paginate [offset limit results]
  (take limit (drop offset (sort-by second results))))

(d/q {:query '{:find [(fully.qualified/paginate 0 10 ?tuple)]
               :in [$]
               :where [[?id :article/id ?uuid]
                       [?id :article/title ?title]
                       [(vector ?uuid ?title) ?tuple]]}
      :args [(get-db)]})
The above works fine, but it goes nuts when I try to parameterize the query:
(d/q {:query '{:find [(fully.qualified/paginate offset limit ?tuple)]
               :in [$ offset limit]
               :where [[?id :article/id ?uuid]
                       [?id :article/title ?title]
                       [(vector ?uuid ?title) ?tuple]]}
      :args [(get-db) 0 10]})
ExceptionInfo Datomic Client Exception clojure.core/ex-info (core.clj:4739)
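One workaround, assuming the arguments to a function in the `:find` clause really must be literal constants: run the query without the aggregate and apply the same `paginate` function to the result relation in ordinary Clojure. A sketch with made-up tuples standing in for the `[uuid title]` pairs the query would return:

```clojure
(defn paginate [offset limit results]
  (take limit (drop offset (sort-by second results))))

;; hypothetical result relation, as if returned by d/q without the aggregate
(def results [[3 "Charlie"] [1 "Alpha"] [2 "Bravo"]])

(paginate 0 2 results) ;; => ([1 "Alpha"] [2 "Bravo"])
(paginate 1 2 results) ;; => ([2 "Bravo"] [3 "Charlie"])
```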
#2018-08-2805:29henrikWhat’s the correct way to pass in parameters for the pagination function?#2018-08-2805:29henrikOr more generally, what’s the correct way to paginate sorted results?#2018-08-2816:22markbastianAs I've been learning Datomic (and Datascript) I've come across a practice that seems to make a lot of sense but I don't see it in the examples so much so I wanted to see if it was considered good or bad form or "just another way to model the data." The practice is to make heavy use of refs, sometimes to the point where the data model consists of a number of atomic values/entities and the majority of the entities are aggregations of references to those values. For example, without the practice I'm describing you might model a movie like so (let's assume each field has a schema that refers to the field and its value type - e.g. :movie/title is a string):
{:movie/title "Ben Hur"
 :movie/year 1959
 ;; cardinality many on this one
 :movie/actors ["Charlton Heston" "Stephen Boyd"]}
However, you might recognize that the movie title, year, and actor names are all other values in the model. Instead, you might do this:
{:movie/title {:title/string "Ben Hur"}
 :movie/year {:year/value 1959}
 :movie/actors [{:actor/name "Charlton Heston"}
                {:actor/name "Stephen Boyd"}]}
In this case, every field is a ref out to another entity. The movie entities are defined logically and have no actual primitive value fields themselves. These referenced values can then be used to construct other movie (or other domain) entities in which they are used. For example, you could reference other movies or books with the same title or other events that happened in that year.
Is this considered good practice? Does it have any sort of negative implications on the size of your indexes?#2018-08-2816:54henrik@markbastian I've been looking at this for some attributes, but not all. Specifically those I want to enforce as unique throughout the DB, like email and URL.#2018-08-2816:55favilayeah, this generally makes no sense unless the value has some kind of identity#2018-08-2816:55favila(in your data model)#2018-08-2816:55favilae.g. actors have identity independent of anything asserted of them#2018-08-2816:56favilabut the number "1959"?#2018-08-2816:56favilaor the string "Ben Hur"?#2018-08-2816:56faviladepends on the domain but I think usually not#2018-08-2816:58favilaalternatively, if you want to use entities with value-ish semantics (so they are shared-by-value) then they should have a unique attribute or some kind of hash-derived id#2018-08-2816:59favilawe use this technique as a kind of compression and to get around datomic not having custom value types#2018-08-2817:05markbastianAs a title or year, I would think these things do have identity.#2018-08-2817:08markbastianThe number 1959 wouldn't be particularly special. There are an effectively infinite number of them. But movie release years are limited. Less than 150.#2018-08-2817:09markbastianAnd as a title, there are a limited number of works related to "Ben Hur" (one book, several movies, etc.)#2018-08-2817:12markbastianIn the year example, all of the references would have schemas along the lines of
{:db/ident :year/value
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
in which they would exist uniquely in the domain.#2018-08-2817:14markbastianIn the above case I am presenting an extreme, but the idea is that you may have a relatively finite number of values from which all other entities are built. Some things, such as movie revenue, would definitely not fall into this category as they could be effectively anything.#2018-08-2818:11dustingetzA value like 42 is its own identity, you don't need a second layer of identity on top of it#2018-08-2818:26markbastianHmmm, that makes a lot of sense. One thing I like about what I was doing was that I could do very fast queries along the lines of:
[:find [?e ...]
:in $
:where
[?t :title/string "Ben Hur"]
[?y :year/value 1959]
[?e :movie/title ?t]
[?e :movie/year ?y]]
As long as the set of titles and years were relatively small this will be quite fast. It should just be a set operation on the backreferences to the domain values. Essentially the domain values provide a gateway into the entities.
If, on the other hand, I did something like this
{:movie/title "Ben Hur"
:movie/year 1959
:movie/actors [{:actor/name "Charlton Heston"}
{:actor/name "Stephen Boyd"}]}
I would query with something like this:
[:find [?e ...]
:in $
:where
[?e :movie/title "Ben Hur"]
[?e :movie/year 1959]]
Wouldn't this option be dramatically slower for a large data set? It seems like I don't have a fast path to my movie entity. I don't really have a strong concept of identity. The best definition is probably "title+year". Any thoughts as to a better way to think about this?#2018-08-2820:25favila"Wouldn't this option be dramatically slower?" No, quite the opposite. Your first option has twice as many joins in it#2018-08-2820:31favilaare you aware of this? http://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2018-08-2820:32favilaIt feels like you are trying to optimize row size in a relational database#2018-08-2820:32faviladoing the same thing in datomic is usually going to increase storage and lookup times#2018-08-2820:33favila(unless done carefully)#2018-08-2821:01markbastianYeah, I've read Tonsky's post several times. I actually get great performance with the first query and worse (but not bad) performance with the second when using Datomic. Not the case, though, with Datascript since it doesn't seem to have backreferences built in to the indexes. I do want to emphasize, though, that in the first model the title/string and year/value are references to unique identities.
There is no concept of index or identity in the data model of the second query.#2018-08-2821:26favilathese attrs should all be indexed in either scenario#2018-08-2821:26favila(indexed by value)#2018-08-2821:27favilaif the second one is not indexed at all then that is why it is slower#2018-08-2821:27favilanot because they share entities for their values#2018-08-2821:28favila:movie/year 1959, if not indexed, will require a scan over :movie/year to get the matching value#2018-08-2821:28favilaif there is only one entity for any given :movie/year value obviously that will be a faster scan#2018-08-2821:29favilabut there's still a second lookup in the :vaet index to go from the movie-year entity to movies which reference it#2018-08-2821:29favila(you are still indexing by value--the automatic entity backref index)#2018-08-2821:30favilaasserting :movie/year on the movie entity directly when the attr is indexed removes this extra lookup#2018-08-2821:30favilanow it is simply an index-range scan over :avet where value = 1959#2018-08-2821:30favilaand attr = :movie/year#2018-08-2821:31favilathe e of the movie will be known without an extra index lookup#2018-08-2818:36markbastianBTW, I appreciate everyone's help on this. I've been trying to achieve "Datomic Enlightenment" for a while now and a few things, like establishing identity when there is no obvious primary key and a database function won't do, are still elusive for me. This was just something that I thought of that seemed to solve the problem of "weak identity". In other words, you know facts about something that, taken together, tell you exactly what you want, but the thing you want doesn't have a natural single ID.#2018-08-2819:06markbastianPerhaps setting :db/index true on :movie/title and :movie/year would accomplish what I am going for without adding any additional concept of identity to what are otherwise primitive values.#2018-08-2819:43ghadiI'm trying to enumerate tradeoffs on Datomic Ion placement. 
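[Editor's note] A sketch of the schema favila and markbastian converge on above: keep :movie/title and :movie/year as plain values asserted directly on the movie entity, with :db/index true so the :avet index serves the lookup in one scan. The attribute definitions here are assumed, following the movie example earlier in the thread.

```clojure
;; Direct-value schema with :db/index true -- a lookup like
;; [?e :movie/year 1959] becomes a single :avet index-range scan,
;; with no extra hop through a shared value entity.
[{:db/ident       :movie/title
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one
  :db/index       true}
 {:db/ident       :movie/year
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one
  :db/index       true}]
```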
Our main production east1 account (A) is not the same as the Datomic Cloud east1 account (B).
Assuming I need to consume a Kinesis Stream with an Ion-backed lambda, do I:
1. Place the Stream in account A and the Lambda/Ion in acct B
2. Place both Stream and Ion in account B, produce to the stream remotely from acct A#2018-08-2819:52ghadiaccount A and B can never be the same because we have a legacy EC2 VPC#2018-08-2909:13dangercoderIs it possible to make it so that queries that take too long automatically fail after X seconds?#2018-08-2909:27foobarHow does d/squuid work?#2018-08-2909:28foobarDo you create a random UUID and overwrite the significant 32 bits?#2018-08-2909:29foobarOr is there more to it?#2018-08-2911:42manutter51@jarvinenemil Yes, check out (doc d/query)#2018-08-2912:38foobar? It doesn't seem to add anything#2018-08-2912:39foobarOh sorry my bad#2018-08-2912:40marshall@foobar see the “sequential IDs” section of this: https://github.com/clojure-cookbook/clojure-cookbook/blob/master/01_primitive-data/1-24_uuids.asciidoc#2018-08-2912:58foobarThanks#2018-08-2913:05conan@foobar squuids aren't really necessary any more (but i love writing the word, so i still use them): https://forum.datomic.com/t/general-product-questions/309/2#2018-08-2918:39oscarI have a problem with client-cloud where it blows up if I pass it a transaction that contains a bigint. It gives me the error
clojure.lang.ExceptionInfo: Cannot write 10000 as tag null
cognitect.anomalies/category: :cognitect.anomalies/incorrect
cognitect.anomalies/message: "Cannot write 10000 as tag null"
datomic.client-spi/context-id: "70f24fc6-922b-400e-89ae-6cbef0317ef1"
I'm betting it's a dependency issue but I don't know where to look.#2018-08-2919:03oscarHmm. I tore out all of my deps except for com.datomic/client-cloud {:mvn/version "0.8.63"} and org.clojure/tools.reader {:mvn/version "1.3.0"} and I'm still getting the error. My :tx-data is just [{:example/bigint 10000N}].#2018-08-3015:12okocimfwiw, I see the same behavior as you when I try this. I also tried it with a java.math.BigInteger to no avail.#2018-08-3020:16rustam.gilaztdinovHello! Trying to work with datomic-free with console and have some issues
Steps for running datomic-free(v.0.9.5703)
-> bin/maven-install
-> bin/transactor config/samples/free-transactor-template.properties
;; Starting datomic:, storing data in: data ...
;; System started datomic:, storing data in: data
Then, in datomic-console(v.0.1.216) folder:
-> bin/install-console ../datomic-free-0.9.5703
;; Installing Datomic Console...
;; Copying files...
;; Done!
-> bin/console -p 8080 dev datomic:
;; this errors
bin/console: line 3: bin/classpath: No such file or directory
Error: Could not find or load main class clojure.main
#2018-08-3021:17johnj@rustam.gilaztdinov datomic-free doesn't come with the console stuff#2018-08-3021:18johnjor doesn't support it#2018-08-3021:19rustam.gilaztdinov#2018-08-3021:19rustam.gilaztdinov> Datomic Free users can download it below.#2018-08-3021:21johnjah ok#2018-08-3021:23johnj@rustam.gilaztdinov have you tried changing dev to free ?#2018-08-3021:24johnjbin/console -p 8080 free datomic:<free://localhost:4334/>#2018-08-3021:25rustam.gilaztdinovit's just alias#2018-08-3021:25rustam.gilaztdinovbut I try, doesn't help#2018-08-3021:26johnjwhere did the console installed?#2018-08-3021:26rustam.gilaztdinovactually, it's not affected at all
-> bin/console
bin/console: line 3: bin/classpath: No such file or directory
Error: Could not find or load main class clojure.main
#2018-08-3021:26johnjare you running ./bin/console from inside the datomic-free dir?#2018-08-3021:27rustam.gilaztdinovno, from console dir#2018-08-3021:28rustam.gilaztdinovyes, u right#2018-08-3021:28johnjwhere is the console dir located?#2018-08-3021:28johnjin relation to the datomic-free dir#2018-08-3021:28rustam.gilaztdinovi switched to datomic dir and it's worked now#2018-08-3021:28johnjok#2018-08-3021:28rustam.gilaztdinovthx!)#2018-08-3111:43olivergeorgeCan someone link me to an example of restricting/filtering query pull based on the users access.
Eg “users shouldn’t be able to see comments on jobs unless they wrote them”. Using a pull query I can find jobs and pull the related comments but then need to check access for each comment. #2018-09-0102:03olivergeorgePerhaps I can wrap the pull function and do the additional access checks #2018-09-0105:45olivergeorgeVague idea would be
(defn pull
  "Like d/pull but checks has-read-access? on relations before including them"
  [db uid eid selector]
  (let [props  (remove map? selector)
        joins  (apply merge (filter map? selector))
        result (d/pull db eid props)]
    (reduce-kv (fn [result k sub-selector]
                 (let [fids (d/q '[:find [?fid ...]
                                   :in $ ?eid ?attr ?uid
                                   :where
                                   [?eid ?attr ?fid]
                                   [(app.perms/has-read-access? $ ?uid ?fid)]]
                                 db eid k uid)
                       attr-data (mapv #(pull db uid % sub-selector) fids)]
                   (assoc result k attr-data)))
               result
joins)))#2018-08-3114:15dominicmA filtered db can be handy at times like this 🙂#2018-09-0102:00olivergeorgeCan you point me at something related to this please. Sounds interesting.#2018-09-0105:30dominicmhttps://docs.datomic.com/on-prem/filters.html#2018-09-0105:41olivergeorgeThanks. That's pretty wild.#2018-09-0105:44olivergeorgePossibly not part of the cloud / client api just yet. Interesting idea for user access control though. Would need to get familiar with practicalities and assess performance considerations too. Food for thought though.#2018-09-0105:52dominicmI was something about cloud#2018-09-0105:52dominicmhttps://docs.datomic.com/cloud/time/filters.html#2018-09-0105:53dominicmOnly pre expected ones, disappointing#2018-08-3119:47henrik@olivergeorge isn't that just another vector in the where clause? :job/author ?user-id or something. Then pull in the find clause.#2018-08-3121:04olivergeorgeIts the relations in the pull data I can’t filter via a :where clause#2018-09-0106:38henrikIt sounds like it would be easiest to handle in a separate query then.#2018-09-0102:28lilactownI’m trying to go through the getting started steps and running into this when connecting to the SOCKS server:
debug1: Connection to port 8182 forwarding to socks port 0 requested.
debug1: channel 2: new [dynamic-tcpip]
channel 2: open failed: administratively prohibited: open failed
debug1: channel 2: free: direct-tcpip: listening port 8182 for port 8182, connect from ::1 port 55954 to ::1 port 8182, nchannels 3
#2018-09-0102:54stuarthalloway@U4YGF4NGM did you try https://docs.datomic.com/cloud/troubleshooting.html#connection-failure#2018-09-0102:56lilactownI got that message while connecting using the bastion test#2018-09-0102:56lilactownI’m unsure what it means by :endpoint configuration#2018-09-0102:57lilactownI’m wondering if it has to do with me using the cloudformation template in us-west-2?#2018-09-0102:59stuarthallowaymaybe? what is your endpoint#2018-09-0102:59stuarthallowayand are you picking up the right AWS creds (and region) from your environment?#2018-09-0103:00lilactown:face_palm: :face_palm: :face_palm: fixed a typo and it works#2018-09-0103:00stuarthallowaybtw endpoint is doc'ed at https://docs.datomic.com/cloud/getting-started/connecting.html#creating-database#2018-09-0103:01lilactownI appreciate it. working my way through the getting started guide. I appreciate the prompt replies!#2018-09-0121:49lilactownhow can I see message cast with cast/dev from inside a deployed ion?#2018-09-0202:05henrik@lilactown wrap it in your own function, then call that one instead.#2018-09-0202:06lilactownNot sure what you mean. I was hoping that a message sent with cast/dev would show up in cloudwatch, but I can't find it#2018-09-0204:30henrikSorry, too early, I’m spouting nonsense.#2018-09-0202:37oscar@lilactown I think you want to use cast/event. If I understand the docs correctly, you can only see cast/dev locally.#2018-09-0203:08lilactown🤔 I’m trying to see what the input is from an SQS event while developing. 
is there a better way to do this?#2018-09-0300:10jdkealymy queries are getting really slow and i took a look at memcache, this would appear that memcache is not being used at all right ?#2018-09-0310:18stuarthallowayHi @U1DBQAAMB -- that looks odd, what Statistic are you graphing there?#2018-09-0310:18stuarthalloway"getting really slow" implies change over time, what else (if anything) is changing over time?#2018-09-0313:21jdkealyThe statistic is just "Memcache"#2018-09-0313:21jdkealyit's in cloudwatch, under the datomic namespace#2018-09-0313:22jdkealyaccording to docs https://docs.datomic.com/cloud/operation/monitoring.html#metrics this metric shows the number of cache hits#2018-09-0313:24jdkealyin regards to slowness... i have two queries that do a regex on a word to find a partial match... like an autocomplete, the queries have gotten so slow that sometimes it goes over 60 seconds and gets taken off the load balancer and then i have a range of other problems. I believe they're getting slower because the growing number of records#2018-09-0313:25jdkealybut actually... there's well under 100k records in the query, so i feel like it shouldn't be that slow#2018-09-0313:35stuarthallowaysearching 100k records should be blazing fast#2018-09-0313:35stuarthallowaythis is On-Prem?#2018-09-0313:37stuarthallowayI would love to see the query. Also, how big are the records? Are they bits of text? Documents? Tomes?#2018-09-0313:39jdkealyemail addresses#2018-09-0313:39jdkealywhat's on-prem#2018-09-0313:40stuarthallowayvs. Cloud#2018-09-0313:40stuarthallowayhttps://docs.datomic.com/on-prem/moving-to-cloud.html#2018-09-0313:43jdkealyoh right#2018-09-0313:44jdkealyyes i set it up before cloud was offered#2018-09-0313:44jdkealy[{:find [?u],
:in [$ % ?orgid],
:where [(find-relationship ?u ?orgid)]}
#2018-09-0313:44jdkealyi guess there is some other stuff that makes it more complicated#2018-09-0313:46jdkealyoh wait sorry this is missing the email query#2018-09-0313:48jdkealy[{:find [?u],
:in [$ % ?orgid [?attrs ...] ?stext],
:where [(find-relationship ?u ?orgid)
[?u ?attrs ?val]
[(im.models.user.queries/includes-i? ?val ?stext)]]}
#2018-09-0313:48jdkealy(defn includes-i? [in out]
(clojure.string/includes? (clojure.string/lower-case in)
                          (clojure.string/lower-case out)))
#2018-09-0313:48jdkealysorry this query references a lot of rules#2018-09-0313:48jdkealyi had another query that didn't... but i took it out because it brought my site down#2018-09-0313:59stuarthalloway@U1DBQAAMB to be seeing the slowdown you are describing, I would imagine that the query is considering several orders of magnitude more than 100K records#2018-09-0314:04stuarthallowaythe two most likely causes for this (if your data set is small) are covered in the docs here:#2018-09-0314:04stuarthalloway1. https://docs.datomic.com/on-prem/best-practices.html#most-selective-clauses-first#2018-09-0314:04stuarthalloway2. https://docs.datomic.com/on-prem/best-practices.html#join-along#2018-09-0314:07stuarthallowaywhen rules are involved, you can ensure that things are bound in the order you expect by using the "required variable" shape when creating rules https://docs.datomic.com/on-prem/query.html#rules#2018-09-0314:23jdkealyOk but wouldn’t a malformed cache config ration also lead to slowness?#2018-09-0314:27stuarthallowaymaybe, but that slowness would impact everything (all queries)#2018-09-0314:27stuarthalloway100k records should fit in memory, at which point the memcache is not being consulted ever#2018-09-0314:28stuarthallowayyou could look at the ObjectCache sum and average values#2018-09-0314:28stuarthallowaythe sum would tell you how much cache you need#2018-09-0314:28stuarthallowayand the average would tell you how often you were hitting memory, without ever needing memcache#2018-09-0416:08jdkealythanks stuart!#2018-09-0416:08jdkealythere's well over 100k records in the whole db more like 10M.... sorry i wasn't being clear, i meant 100k records of from the point of the first where clause#2018-09-0416:09jdkealy( i was thinking of my "users" like a SQL table, forgot it's nothing like that in datomic)#2018-09-0416:18jdkealyso... 
i had a wicked simple query take 53 seconds today#2018-09-0416:26jdkealyobject cache is getting hit, memcache isn't, no ?#2018-09-0418:51jdkealysorry to keep bothering you 😐 I just wanted to point out that the exact same query is slow in one instance and blazing fast in another, i don't know if that's helpful#2018-09-0419:04stuarthalloway@U1DBQAAMB if you are restarting processes, then the query could be slow the first time, and fast on subsequent times#2018-09-0419:04stuarthallowayhow hard is it for you to experiment with memcached disabled?#2018-09-0419:04stuarthalloway^^ if that makes things better then you certainly know you have a memcached config problem#2018-09-0419:22jdkealyi'm not sure if it's even enabled#2018-09-0421:49jdkealyi mean... i have no way of seeing that it's being touched. there are in fact 80M datoms, and i believe i followed the instructions correctly for memcached support ( i used the cloud formation template and i pass datomic.memcachedServers as args to the JVM). I would just like to see one single read or validate the connection is used at all... otehrwise i would just spin down the cache and use more peers#2018-09-0304:21clariceHello, I have created a new database with a schema and am trying to add data to it
(require '[datomic.api :as d])
(def db-uri "datomic:)
(d/create-database db-uri)
(def conn (d/connect db-uri))
(def schema
[{:db/ident :movie/title
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "The title of the movie"}
{:db/ident :movie/genre
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "The genre of the movie"}
{:db/ident :movie/release-year
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one
:db/doc "The year the movie was released in theaters"}])
@(d/transact conn schema)
(def movies [{:movie/title "The Goonies"
:movie/genre "action/adventure"
:movie/release-year 1985}
{:movie/title "Commando"
:movie/genre "action/adventure"
:movie/release-year 1985}
{:movie/title "Repo Man"
:movie/genre "punk dystopia"
:movie/release-year 1984}])
(d/transact conn {:tx-data movies})
When I run it in the datomic REPL, I get ClassCastException clojure.lang.PersistentArrayMap cannot be cast to java.util.List datomic.api/transact (api.clj:92). Any chances that you can see what I am doing wrong?#2018-09-0304:57steveb8nI think you are using the peer api instead of the client api. for peer txns you don’t put the entities in a map with :tx-data. try (d/transact conn movies) or try switching to the client api instead of datomic.api#2018-09-0306:14clariceThanks for your insight! I am going to have to learn more about the differences between the two. 🙂#2018-09-0307:49steveb8nyeah, all the little differences caught me out too when I migrated from peer to client. I think of it like peer is v1 and client/cloud is v2#2018-09-0310:08stuarthallowayHi @UCM1FJA4E & @U0510KXTU! Let us know if you find something absent or confusing at https://docs.datomic.com/on-prem/moving-to-cloud.html.#2018-09-0310:11clarice:+1: Thanks, will do.
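[Editor's note] The peer/client difference steveb8n points out, side by side (a sketch; conn is a connection created with the respective API, and movies is the vector defined above):

```clojure
;; Peer API (datomic.api): transact takes the tx-data collection directly
;; and returns a future, so it is usually deref'd.
@(d/transact conn movies)

;; Client API (datomic.client.api): transact takes an arg-map with :tx-data.
(d/transact conn {:tx-data movies})
```

Passing the client-style arg-map to the peer API is what produces the PersistentArrayMap-cannot-be-cast-to-List error above.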
When I read the documentation again I noticed that it mentioned the Client API so that made sense. simple_smile#2018-09-0313:52stijnhi, I see that for versions >= 402 you can choose t2.small for the datomic instances instead of i3.large, but the documentation still mentions i3.large for production topology. we have 1% cpu utilization on the i3.large instances, so having smaller ones makes sense I guess. What are the recommendations here?#2018-09-0314:09stuarthallowayhi @U0539NJF7 -- the i3s are doing more for you than just CPU, e.g. the SSD caches keep your data hot across ion deploys#2018-09-0314:10stijnok, I see. So, is there a way to measure what type of instance would be sufficient?#2018-09-0314:11stuarthallowayalso the primary group instances have to do all the housekeeping work for Datomic, which is bursty and unrelated to your transaction load#2018-09-0314:12stuarthallowayall that said, we do not currently support anything smaller than i3.large for the production topology#2018-09-0314:13stuarthallowayare you (yet?) 
hosting your application code as ions?#2018-09-0314:14stijnno, we are still at the first version and now looking to do the upgrades#2018-09-0314:15stuarthallowayif/once you move to ions, your next tier out will disappear entirely (eliminating those instance costs, if any), and that load will be on the i3.larges, increasing the CPU utilization there#2018-09-0314:17stijnI understand, but we need quite a refactor for that as we're running on elastic beanstalk right now#2018-09-0314:17stuarthallowayit is definitely our intent that the baseline prod topology be a cost-effective way to host an entire application -- if you are not finding it so, I would be happy to look at the details -- you can email me <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>#2018-09-0314:21stijnno, I think the cost is not extreme for our production usage right now, just wondering why the marketplace mentions t2.small and was looking if that would be an option#2018-09-0314:22stuarthallowaybackstory: the marketplace was originally designed to sell ec2 instances (not Cloud-native solutions) and then gradually upfit to support systems that use CFT and run multiple services#2018-09-0314:23stuarthallowayas a result, the marketplace UI thinks EC2 instances are the center of the universe, and does not anticipate different topologies with different capabilities#2018-09-0314:24stuarthallowayso there are various places where the UI suggests instance/CFT combinations that are not practicable, nor permitted#2018-09-0314:26stuarthallowaysorry for the confusion! we are working on it#2018-09-0314:27stijnok, thanks for the explanation! 🙂#2018-09-0319:56lilactownI’m inexperienced in both AWS and Datomic, so sorry if this is dumb: I’m currently using the solo config and seeing a bunch of alerts about INSUFFICIENT_DATA. anything I should be doing when I see those?#2018-09-0320:01stuarthallowaygenerally not a problem! 
https://stackoverflow.com/questions/33639642/avoiding-insufficient-data-in-cloudwatch#2018-09-0320:02stuarthallowayif the system is running (e.g. you are able to follow the instructions and connect) then you are fine#2018-09-0320:03lilactownthanks! I’ll avoid disabling them and just ignore them for now. I appreciate the reply stuart#2018-09-0322:47kennyIs it possible to pull transaction attributes from an entity?#2018-09-0400:23lilactownI have a lambda ion type, and one thing I’ve noticed is that my ion will continuously execute if it does not respond with a value (e.g. an error occurs, or I return nil)#2018-09-0400:44stuarthallowayhi @U4YGF4NGM what do you mean by "continuously execute"?#2018-09-0400:45lilactownmy ion seems to be called over and over again#2018-09-0400:47lilactownI’m actually now not sure if it’s the fact that I have it connected to SQS; it might be that it’s putting it back in the queue when the lambda doesn’t return a success#2018-09-0400:47lilactownI don’t see any messages in the queue as it’s happening, but that might not be accurate I guess?#2018-09-0401:08stuarthallowayI would guess that the over-and-over is an AWS retry#2018-09-0401:08stuarthallowayyou might prove that by invoking it from the CLI or UI instead#2018-09-0401:09stuarthalloway... but if if in the end you think that the ion plumbing is behaving badly, let us know!#2018-09-0401:26lilactownthanks for that. It’s not happening when I invoke the lambda from the UI. so I’m 97% sure it’s SQS retrying. Ion plumbing seems fine :+1:#2018-09-0400:23lilactownis there a way to control this behavior?#2018-09-0406:58TwanIf I'd save an emoji in an entity, I get ? on retrieving that value. It does not happen when running the database in memory, only when doing it on Postgres. Do you have a suggestion what could cause this?#2018-09-0502:03jaretHow are you storing the emoji’s? (what data/type-- unicode strings?) What is the schema for the emoji? How are you querying?#2018-09-0410:03lambdamHello. 
Does someone know if it is possible to alias the namespace part of a namespaced keyword?
In other words (or codes), from this
(ns foo.bar.baz
(:require [clojure.spec.alpha :as s]))
(s/def :company.sub-category.sub-sub-cateory/field-a keyword?)
(s/def :company.sub-category.sub-sub-cateory/field-b string?)
(s/def :company.sub-category.sub-sub-cateory/field-c integer?)
to something like this
(ns foo.bar.baz
(:require [clojure.spec.alpha :as s]))
(kwd-alias 'css 'company.sub-category.sub-sub-cateory)
(s/def ::css/field-a keyword?)
(s/def ::css/field-b string?)
(s/def ::css/field-c integer?)
or even
(ns foo.bar.baz
(:require [clojure.spec.alpha :as s]))
(with-kwd-alias ['css 'company.sub-category.sub-sub-cateory]
(s/def ::css/field-a keyword?)
(s/def ::css/field-b string?)
(s/def ::css/field-c integer?))
The regular alias function doesn't work since the company.sub-category.sub-sub-cateory namespace doesn't exist in the code.#2018-09-0410:11stijn(ns foo.bar.baz
(:require [clojure.spec.alpha :as s]
[company.sub-category.sub-sub-cateory :as css]))
(s/def ::css/field-a keyword?)
(s/def ::css/field-b string?)
(s/def ::css/field-c integer?)#2018-09-0410:13stijn@dam ☝️#2018-09-0410:17lambdamIt throws
1. Unhandled java.io.FileNotFoundException
Could not locate company__init.class or company.clj on classpath.
The company.sub-category.sub-sub-category (example here) exists in Datomic, not in my code.#2018-09-0410:55alexmillerYou can’t alias a namespace that doesn’t exist in loadable form. You can use create-ns to fake that before you alias it though#2018-09-0413:38lambdamThanks @alexmiller
1. Is it idiomatic?
2. Do you think that with create-ns it is possible to do it in a scoped way, like in the (with-kwd-alias ['foo 'bar] & forms) example, or a side effect on the global environment is inevitable (I took a quick look at the source)?#2018-09-0415:45alexmillerRich has some stuff in mind for managing keyword aliases - future work#2018-09-0414:21favilahttps://dev.clojure.org/jira/browse/CLJ-2123#2018-09-0414:21favilaIt's not "idiomatic" if by that you just mean "common"#2018-09-0414:23favilayou can unmap aliases with ns-unalias, so it seems like scoping is possible. (Not sure of the caveats there.)#2018-09-0414:24favilaalso none of this is possible for CLJS because it doesn't have create-ns (or real namespaces)#2018-09-0414:33Ben HammondI am trying to connect to a datomic where both the Postgres storage and transactor are inside an AWS private network.
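[Editor's note] alexmiller's create-ns workaround, spelled out (a sketch; the namespace name follows lambdam's example and is never loaded from disk):

```clojure
;; create-ns makes an (empty) namespace object exist, which is all that
;; alias needs as a target; no company/... file is required on the classpath.
(create-ns 'company.sub-category.sub-sub-cateory)
(alias 'css 'company.sub-category.sub-sub-cateory)

;; ::css/field-a now reads as the fully qualified keyword:
::css/field-a
;; => :company.sub-category.sub-sub-cateory/field-a
```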
I am outside that network
I have ssh tunnels set up into the Bastion for port 5432, forwarding to Postgres storage
and 4334, forwarding to the transactor
I have /etc/hosts configured to point the postgres machine name at 127.0.0.1
and point the transactor machine name at 127.0.0.1
and yet when I attempt to connect locally I see the error
`org.apache.activemq.artemis.api.core.ActiveMQSecurityException: AMQ119031: Unable to validate user
clojure.lang.ExceptionInfo: Error communicating with HOST ip-10-0-1-228.eu-central-1.compute.internal on PORT 4334
clojure.lang.Compiler$CompilerException: clojure.lang.ExceptionInfo: Error communicating with HOST ip-10-0-1-228.eu-central-1.compute.internal on PORT 4334 {:alt-host nil, :peer-version 2, :password "<<excised>>", :username "SE3P59I1dDLm/A6MH5zKpJqluSf1qOae3LrPKPVMfwc=", :port 4334, :host "ip-10-0-1-228.eu-central-1.compute.internal", :version "0.9.5561.50", :timestamp 1536071491707, :encrypt-channel true}, compiling:(NO_SOURCE_FILE:1:9)#2018-09-0414:33Ben HammondI can connect a Socket to that address/port though
(bean (Socket. "ip-10-0-1-228.eu-central-1.compute.internal" 4334))
=>
{:closed false,
:localAddress #object[java.net.Inet4Address 0x4997a018 "/127.0.0.1"],
:remoteSocketAddress #object[java.net.InetSocketAddress
0x48aade08
"ip-10-0-1-228.eu-central-1.compute.internal/127.0.0.1:4334"],
#2018-09-0414:34Ben Hammondso I am at a loss as to what else I need to do#2018-09-0414:35favilacould it just be a bad password?#2018-09-0414:35Ben Hammondwell I have specifically cut thata out of the post#2018-09-0414:35Ben Hammondits just a randomly generated string as far as I can see#2018-09-0414:35favila"unable to validate user" part is strange#2018-09-0414:35Ben Hammondhave no clue as to whether its good or bad ...#2018-09-0414:36favilathat's transactor communication#2018-09-0414:36favilaso it reached storage#2018-09-0414:36Ben Hammondyes, postgres comms never seem to be the problem#2018-09-0414:37Ben HammondI can watch the log on the transactor process#2018-09-0414:37Ben Hammondthat contains username/passwords#2018-09-0414:37favilaI've done this kind of forwarding too (both storage and txor over ssh tunnels with etc/hosts tricks) and never had this problem; I don't know how to proceed#2018-09-0414:45Ben Hammondi can see a Sep 04 14:40:07 transactor_i-0150df410b59045b6 datomicTransactor[1755]: {:event :transactor/remote-ips, :ips #{"10.0.2.55" "10.0.101.200" "10.0.1.115"}, :pid 1755, :tid 35}
pop up on the txtor#2018-09-0414:46Ben Hammondthat's when I achieve a connection from within the private network#2018-09-0414:46Ben Hammondas the 10.0.2.55 address#2018-09-0415:01Ben Hammondoh I was running a dev transactor locally, which was already squatting on TCP port 4334#2018-09-0415:02Ben HammondIts working now. Thanks for your help#2018-09-0415:32lambdamThanks @favila, that's exactly the point.#2018-09-0417:14jdkealyHi, I'm very confused about memcache. This would indicate that my memcache is not being used at all right?#2018-09-0417:15jdkealyi get these warnings in my datomic logs
2018-09-03 02:17:34.381 WARN default n.spy.memcached.MemcachedConnection - Could not redistribute to another node, retrying primary node for 5b8c95a3-691a-436b-a048-3f85e3546bbe.
2018-09-03 02:17:34.392 WARN default n.spy.memcached.MemcachedConnection - Could not redistribute to another node, retrying primary node for 5b8c99be-dc0a-41c1-b4f0-d273a6aa0e14.
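[Editor's note: for context on how on-prem memcached is wired up (per the on-prem caching docs): memcached is enabled with the `memcached` transactor property, and on peers with the `datomic.memcachedServers` system property. A sketch, with illustrative hosts:]

```properties
# transactor properties file -- enable memcached (hosts are illustrative)
memcached=10.0.1.10:11211,10.0.1.11:11211
```

On the peer JVM the same list goes in `-Ddatomic.memcachedServers=10.0.1.10:11211,10.0.1.11:11211`. The spymemcached warning above generally means a configured node could not be reached, so verifying that each listed host:port is reachable from the process is a reasonable first step.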
#2018-09-0509:48steveb8nI’m seeing this Ion-specific error :db.error/invalid-rule-fn The following forms do not name predicates or fns: (clojure.string/starts-with?) when using rules in a query. the same clause works fine when not in a rule. This query runs fine when invoked from laptop to cloud but fails with the above when deployed as an Ion. It previously worked fine as a non-rule clause in an Ion. Any suggestions?#2018-09-0513:26stuarthalloway@steveb8n that seems not-right 🙂. We will try to repro.#2018-09-0513:28steveb8nthanks. let me know if I can provide more info#2018-09-0513:30stuarthallowaythat was a good problem description already!#2018-09-0612:37jaret@steveb8n I am working on reproducing this error for the team to review. Would you be able to put your rule and query in a gist? I was able to invoke a rule without running into an error and I want to look at the specifics of your case and incorporate them into my testing.#2018-09-0612:57steveb8nSure thing#2018-09-0514:39jdkealyI'm having scaling issues that are coming to a head. I believe it's Datomic related. I have queries that are intermittently taking a long time to execute. Like 3-5 seconds for a query that used to take a few milliseconds. Any advice on where to look and try to figure this out ?#2018-09-0514:42stuarthallowaycan you reliably reproduce?#2018-09-0514:44jdkealyno#2018-09-0514:44jdkealyi've taken some endpoints away from datomic and used elasticsearch and that seems to have relieved some of the pressure#2018-09-0514:49jdkealyi can reproduce a few slow queries, but can't reproduce the ones that are supposed to be fast but are acting slow#2018-09-0514:51jaretHey @U1DBQAAMB this is getting a little hard to track. We’d like to ask that you log a support case with all of the points you’ve run into so far (memcached, query performance, etc.). 
I also reached out so we can set up a call, but it will be great to have all of this information in a case we can track and update.#2018-09-0514:52jdkealyok thanks Jaret, i'll summarize in an email, much appreciated#2018-09-0515:28jdkealyI sent an email. The gist is datomic queries are taking too long, my site keeps going down. My clients are ok with adding more servers and such especially for this week (this is their highest volume week of the year). I'm replacing datomic endpoints with elasticsearch endpoints, it's relieving some of the pressure, but some dead simple endpoints that sometimes take 5MS to load are intermittently taking over 5 seconds, sometimes as long as a minute.#2018-09-0515:30jdkealyWhen they hit a minute, then containers get taken off the load balancer, and i start having lots of issues. I've already added another server. I'm using Elastic Container services. There are 3 containers per server and 4 servers in total.#2018-09-0515:31jdkealyI'm seeing that there are only 4 peers connected (i guess peers work by IP? which would maybe explain why i'm not getting a performance bump by adding more containers ? )#2018-09-0515:55jdkealyand just had one 314922 MS query#2018-09-0514:48jdkealyi do have one slow endpoint i can reproduce, but that's because of a slow query...#2018-09-0516:44lilactownhow do I figure out why an ion deploy failed?#2018-09-0517:27stuarthallowayhi @U4YGF4NGM I would start with https://docs.datomic.com/cloud/troubleshooting.html#check-alerts-first#2018-09-0518:09lilactownI have no alerts to speak of#2018-09-0516:55lilactownit seems like my ion deploys fail randomly, and changing an inconsequential thing and push/deploying again fixes it. 
Trying to figure out why#2018-09-0516:56lilactownokay, I just tried running a deploy from the codedeploy dashboard, and I’m seeing that the deploy-validate script is failing#2018-09-0516:57lilactownthe rollback succeeds#2018-09-0516:58lilactownthe log tail for the deploy-validate is just [stdout]Received 503
[stdout]Received 503
[stdout]Received 503
...
#2018-09-0517:08lilactownand now this time, pushing a new revision and deploying again did not fix it#2018-09-0518:10steveb8nYou might be experiencing out of memory errors like me and others. Causes intermittent deploy fails on solo. You would see this in the logs as a stackoverflow#2018-09-0518:10steveb8nIf so, there is a workaround by editing the CF template but you should verify it first#2018-09-0518:17lilactown@steveb8n do you know offhand what exactly I should be searching for in my cloudwatch logs?#2018-09-0518:18lilactownoffhand I don’t see any exceptions, errors or alerts related to stack overflows#2018-09-0518:19steveb8nYou should be able to see stackoverflow in the logs for http invocations. If not, then maybe that's not it#2018-09-0518:21lilactown> in the logs for http invocations
sorry, I’m very new to both AWS and datomic. I’m not sure what you mean by “http invocations.” All I’ve been doing is running the deploy commands, and they fail.
The rollback succeeds and my app works as expected on the last successfully deployed revision
I am on the solo topology, so this is the best lead I have so far.#2018-09-0518:29steveb8nAh that's a different behaviour to what we saw if I'm recalling correctly#2018-09-0518:31steveb8nBut good to rule it out since it's another intermittent one#2018-09-0518:50lilactownokay, I am seeing stack overflows#2018-09-0518:50lilactown:datomic.cluster-node/-main failed: java.lang.StackOverflowError, compiling:(riddley/compiler.clj:12:3)#2018-09-0518:51lilactownsearching for StackOverflow (capitalization!)#2018-09-0518:52lilactown@steveb8n you said there was an underlying cause & fix?#2018-09-0518:52steveb8nYep, the cause is lack of memory so you can ignore the line number#2018-09-0518:53steveb8nIn my case I fixed it by overriding the libs listed in the push warning#2018-09-0518:54steveb8nAnother team found a more reliable solution by editing the CloudFormation template and increasing the JVM mem params#2018-09-0518:55steveb8nBut to do that, you have to upgrade the initial setup so you have access to the CF templates#2018-09-0518:55steveb8nI haven't done this but the instructions seem pretty straightforward#2018-09-0518:56lilactown🤔 I’m trying to understand why overriding libs would fix it. do you mean forcing datomic to load your specified versions?#2018-09-0518:57lilactownor did you add the versions specified by the push warning to your deps.edn?#2018-09-0519:04steveb8nYep, that's exactly what I did. I still occasionally see stackoverflows but then I just redeploy#2018-09-0519:04steveb8nThe mem fix will be more reliable#2018-09-0519:04lilactownwhich one?? 😂#2018-09-0519:05steveb8nI don't understand why the deps versions would affect memory either#2018-09-0519:06steveb8nThe JVM fix was discussed by Stu about a week ago so you might find it by scrolling back, if Slack retains enough history. Otherwise ping Stu for specifics#2018-09-0519:07lilactown@steveb8n I’m going to assume you meant you overrode them locally. 
I asked two questions 😛#2018-09-0519:08steveb8nYes overrode locally in deps.edn#2018-09-0519:08lilactownthanks I’m trying it now#2018-09-0519:08steveb8nNP#2018-09-0519:14lilactowndoesn’t seem to have affected anything#2018-09-0519:25lilactowni still can’t deploy#2018-09-0519:43stuarthalloway@U4YGF4NGM I answered on the ticket. You need to increase your stack size from -Xss256k to -Xss512k#2018-09-0519:44lilactown:+1::+1:#2018-09-0523:36lilactownthat fixed it. I also had some code errors, not sure if the two were related at all. but at least I was able to see logs of my errors now#2018-09-0523:37lilactownI’m trying to access query params in my api-gateway ion. I see that there is a :query-string key passed in via the input map, although it’s not documented in the reference#2018-09-0523:38lilactownI also see there’s a :datomic.ion.edn.api-gateway/data entry with a map containing :queryStringParameters and key-value pairs of the params, which is ultimately what I’d like#2018-09-0523:41lilactownI’m guessing I should just use :query-string for now? the :datomic.../data key looks not for public use#2018-09-0600:11cap10morganDoes a transactor failover event count as "restarting it" for the purposes of what the docs recommend after doing a restore?#2018-09-0602:34jaretYes, HA failover would work for a restore. However, I have to ask, did the “primary
transactor” fail during restore?#2018-09-0603:53cap10morganno it didn't. this is just a planning hypothetical. thanks jaret!#2018-09-0612:59ninjaHi, when I want to call a custom function (A) from within a database function (B) what is the recommended way of doing so?
Should one install A as a database function and call it via invoke from within B or let B require it using :requires and call it like any other function?#2018-09-0613:05val_waeselynckDepends on whether A is on the classpath#2018-09-0613:20ninjaSo let's say A is not on the classpath. In this case installing the function as a database function would allow me to call it from B, right?#2018-09-0613:47val_waeselynckYes via d/invoke#2018-09-0613:50ninjaOk. So, if on the other hand it is on the classpath it should be made accessible to B using :requires and called in a "normal" fashion.#2018-09-0613:54ninjaSo is it recommended to split database functions into smaller functions using either of these approaches? This would be pretty neat, since a named function contributes to better readability and it could be re-used.#2018-09-0614:00stuarthalloway@U8MH51GHZ unless you have a specific reason to install database functions (e.g. managing the classpath is operationally challenging) I would prefer classpath functions. https://docs.datomic.com/on-prem/database-functions.html#classpath-functions#2018-09-0614:13ninja@U072WS7PE this is a valid reason. But this requires me to have control over the transactor (i.e. how it is started)#2018-09-0614:38stuarthallowayyep#2018-09-0615:00johnjHow does the transactor know of a classpath function that is added to the peer? or does it not have to know in this case?#2018-09-0616:09stuarthallowayclasspath management is up to you, put things on the classpath for processes where you need them#2018-09-0715:57kennyIs it safe to always set the :server-type to :ion?#2018-09-0716:00kennyi.e. if your app is deployed remotely, the :ion :server-type would connect as a client, and if your app is deployed as an ion, it would use the in-memory client.#2018-09-0716:19stuarthalloway@kenny that understanding is correct. 
The only reason :ion is even a separate thing is the (very) unusual case where you might have one Datomic app be a client to others#2018-09-0716:20kennyThanks.#2018-09-0716:14lilactownis there anything wrong with calling datomic.client.api/connect multiple times?#2018-09-0716:21stuarthalloway@U4YGF4NGM connect roundtrips with a server when remote, but that isn't right or wrong without context#2018-09-0716:24lilactownI think I asked the wrong question. I’m trying to work around an issue I’m having where if my REPL loads a ns at startup that calls datomic.client.api/client, the REPL hangs#2018-09-0716:26lilactownif I defer running the client function until I have a REPL, it loads just fine#2018-09-0716:27lilactownI copied the get-client and ensure-dataset functions from the starter repo and was running those in a top-level (def client (get-client)), but I’m moving them into the functions that need them now to avoid hanging my REPL#2018-09-0716:28lilactownalso I can see it connect successfully in my socks proxy terminal, though it also doesn’t error out if it can’t connect#2018-09-0716:39kennyI have a development k8s cluster deployed in an existing VPC. Because VPC endpoints are only supported in a production tolopology, is my only option for running tests on my k8s cluster that use Datomic to have a production topology system deployed? Seems like an expensive way to run tests when I don't have any HA requirements for tests.#2018-09-0720:06stuarthalloway@kenny can you test through the socks proxy & bastion to a solo node?#2018-09-0720:06stuarthallowaySome of our tests work that way.#2018-09-0720:09kennyPerhaps. Do you run the socks proxy in the background?#2018-09-0722:42stuarthallowaysome of our test suites launch the socks proxy from the CLI#2018-09-0722:46kennyThat is a blocking script - you'd have to run it in the background somehow. 
You'd also want to ensure that the socks proxy script connects prior to your tests running.#2018-09-0723:28sparkofreasonBarring a local option, which could be run as a test jig or containerized, VPC peering to the solo topology would be very helpful.#2018-09-0813:03stuarthalloway@kenny both points true, we have craptaculous CLI automation for that stuff#2018-09-1017:06kenny@U072WS7PE Those tools are expensive to develop, especially on a small team. We need to focus on product development, not tools to work with our database. If a database requires a craptaculous amount of CLI automation to use it in common use cases, it should provide that automation to its users.#2018-09-0718:42linus_gvWe’ve discovered that any call to a user function in a not-join will cause something to be recompiled every time we invoke the query. We know because criterium doesn’t converge. Is this expected?#2018-09-0718:42linus_gv(`not` works fine.)#2018-09-0718:55kennyDo you need to remove :proxy-port from the client arg-map in production?#2018-09-0719:02stijn@kenny: I think the proxy-port should only be there if you connect through the socks proxy. we don't have that parameter in our solo deployments either, only for local dev#2018-09-0719:06kennyI have :proxy-port set in my config and I'm getting Connection refused, not the SOCKS4 tunnel failed message.#2018-09-0719:35kennyRemoving :proxy-port fixed it. I suggest adding that to the docs.#2018-09-0719:36kennyDeploying an ion is failing at the ValidateService step. The Log Trail in CodeDeploy has a bunch of [stdout]Received 503.#2018-09-0719:38lilactown@kenny check your cloudwatch logs#2018-09-0719:38kennyAre there docs on what to look for? There's a lot of stuff in there.#2018-09-0719:39kennyThere are a bunch of these under the Alerts search.#2018-09-0720:11kennyTurns out my datomic config was not correct, causing those misleading alerts. 
It'd be much more obvious if the regular anomaly was thrown saying it cannot connect.#2018-09-0720:32jarethttps://forum.datomic.com/t/datomic-cloud-version-441-and-query-groups/608#2018-09-1020:03eoliphantYou guys are the best lol#2018-09-0720:42kennyIf my Ion returns a 500 and there are no Exceptions in the log, where else should I look?#2018-09-0723:10kennyIf I run my Ion via the API Gateway, I receive this in the logs:
Fri Sep 07 23:08:30 UTC 2018 : Endpoint response body before transformations: {"headers":{"Content-Type":"application\/transit+json","Set-Cookie":["ring-session=a8d0bf20-fe5b-4a3a-9d4d-9299003ce1dd;Path=\/;HttpOnly"]},"statusCode":200,"body":"WyJeICJd","isBase64Encoded":true}
Fri Sep 07 23:08:30 UTC 2018 : Endpoint response headers: {Date=Fri, 07 Sep 2018 23:08:30 GMT, Content-Type=application/json, Content-Length=198, Connection=keep-alive, x-amzn-RequestId=ef88cd93-b2f2-11e8-8a46-7fe9b475b814, x-amzn-Remapped-Content-Length=0, X-Amz-Executed-Version=$LATEST, X-Amzn-Trace-Id=root=1-5b9304ee-cb4b86778c5717156acdceed;sampled=0}
Fri Sep 07 23:08:30 UTC 2018 : Execution failed due to configuration error: Malformed Lambda proxy response
Fri Sep 07 23:08:30 UTC 2018 : Method completed with status: 502
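[Editor's note: a workaround sketch for the string->string header requirement diagnosed just below (hypothetical middleware, not an official Datomic or Ring API). It stringifies collection-valued headers before the response reaches the ion machinery. Joining multiple Set-Cookie values with a comma is itself lossy, but with a single session cookie, as in the log above, it works:]

```clojure
(require '[clojure.string :as str])

;; Hypothetical Ring middleware: API Gateway's proxy integration rejects
;; non-string header values, but Ring middleware such as wrap-cookies may
;; emit {"Set-Cookie" ["..."]}. Flatten such values to plain strings.
(defn wrap-flatten-headers [handler]
  (fn [request]
    (let [response (handler request)]
      (update response :headers
              (fn [headers]
                (reduce-kv (fn [m k v]
                             (assoc m k (if (coll? v) (str/join "," v) (str v))))
                           {}
                           headers))))))
```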
#2018-09-0723:12kennyIt may be the headers. I'm just using the regular Ring cookies middleware.#2018-09-0723:18kennyThat must be it. The web docs say :headers is a map string->string https://docs.datomic.com/cloud/ions/ions-reference.html#web-code which does not adhere to the Ring Spec for responses https://github.com/ring-clojure/ring/blob/master/SPEC#L117-L122 😞#2018-09-0723:24kennyLooks like others have run into this too: https://github.com/ring-clojure/ring/issues/338#2018-09-0812:40eraadHi! Does anyone have experience changing a Datomic Cloud system name for a prod environment? Is it only a matter of updating the compute and storage environment stacks?#2018-09-0813:05stuarthalloway@U061BSX36 system names cannot be changed#2018-09-0813:07eraad👍 thanks!#2018-09-0813:08eraadCan I change the application name?#2018-09-0814:20stuarthallowayunfortunately no#2018-09-0814:42eraadcool, thanks for the quick answer!#2018-09-1008:03sreekanthHi, can I use keywords with a nested hierarchy for idents (e.g. :sample.some/state), and will I be able to pull the :sample.some/state datom's value, if both are written with the same tx id?#2018-09-1009:24kirill.salykinHi,
How can I fulltext-filter on multiple values? E.g. I want to filter where :person/name matches both John and Smith, so "John Smith" will match the criteria and "John West" won't?
(d/q '[:find ?c
:in $ ?filter
:where
;; [?c :changeset/date ?date]
[(fulltext $ :person/name ?filter) [[?c _ _ _]]]
]
db
["John*" "Smith*"])
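[Editor's note: the snippet above is the non-working attempt. As favila suggests later in the thread, the filter string is handed straight through to Lucene, so one way to get AND semantics is a single combined filter instead of a collection binding. A sketch against the same query shape, assuming the on-prem peer API as in the original snippet:]

```clojure
(d/q '[:find ?c
       :in $ ?filter
       :where
       [(fulltext $ :person/name ?filter) [[?c _ _ _]]]]
     db
     "John* AND Smith*")  ; Lucene boolean syntax, passed through verbatim
```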
#2018-09-1009:25kirill.salykinSomething like this doesn't work#2018-09-1009:27kirill.salykinand collection binding uses or, but I need and#2018-09-1013:10favilaMost efficient thing is to combine into one filter: https://lucene.apache.org/core/3_5_0/queryparsersyntax.html#Boolean%20operators#2018-09-1013:39kirill.salykinSo datomic uses lucene underneath
thanks!#2018-09-1014:26kirill.salykin@U09R86PA4 should I write something like
(fulltext % :person/name "John AND Smith")
?#2018-09-1014:27kirill.salykinhow datomic will understand this is lucene part?#2018-09-1014:28faviladatomic passes it straight through#2018-09-1014:28favilafulltext is a function which issues a query against the lucene index for the attr#2018-09-1014:29favilait's a query in a query; the fulltext query itself is opaque to the rest of datomic#2018-09-1014:29kirill.salykinor should I use "\"John\" AND \"Smith\""
#2018-09-1014:29kirill.salykinI think the latter is correct
Thanks!#2018-09-1013:31steveb8nthere are hints that Cloud will soon have an integration to ElasticSearch. It’s probably better to use that than fulltext in Datomic since it is not very good for search and has other limitations#2018-09-1014:48kirill.salykinwith pull api can I make nested attributes flat?#2018-09-1014:48kirill.salykinwith :as maybe?#2018-09-1014:53favilano; pull can only do renaming and limiting results, not changes in shape#2018-09-1015:05kirill.salykinclear, thanks#2018-09-1022:59pvillegas12I don’t understand what (d/pull db {:eid ident :selector [:db/ident]}) means#2018-09-1023:18favilaThis means exactly the same thing as (d/pull db [:db/ident] ident)#2018-09-1023:20favilai.e. get entity indicated by ident (whatever that value is, I guess you mean a :db/ident keyword like an attribute name) and give me a map like {:db/ident X} with X being that entity's value for :db/ident#2018-09-1023:01pvillegas12Relatedly, I’m trying to use pull to get data for multiple entities. For example, I want all entities with :inv/name#2018-09-1023:21favilapull only walks a graph from a supplied point. You need a query#2018-09-1023:21favila[?e :inv/name] then pull whatever you want from ?e (if anything)#2018-09-1023:22favila(d/q '[:find (pull ?e [:db/id :inv/name]) :where [?e :inv/name] db)#2018-09-1023:22favilafor example#2018-09-1103:47dpsuttoni'm issuing a pull with a reverse rel (d/pull db [{:item/_patient [:db/id ...]}] patient-id) and i'm getting back {:item/_patient {:db/id}} where i would expect to see {:item/_patient [{:db/id ...} ...}]} am i missing something obvious here?#2018-09-1103:48dpsuttoni thought reverse rel's always came back with vectors#2018-09-1107:05favilaNot if the attr is IsComponent #2018-09-1109:08kirill.salykinplease advice how i can do re-find with dynamic pattern on some attribute?
(d/q '[:find ?e
:in $ ?pattern
:where
[?e :user/id]
[?e :user/name ?name]
[(re-find ?pattern ?name)]]
db (re-pattern "John.*Smith"))
doesn't work for me#2018-09-1109:09kirill.salykinthis is on-prem#2018-09-1109:13kirill.salykinactually it is#2018-09-1109:13kirill.salykinsorry#2018-09-1113:49eraserhd@kirill.salykin that works for me. I assume you aren't putting a regex literal in the query because then Datomic fails to cache the query? (That's my experience.)#2018-09-1113:50kirill.salykin@eraserhd it worked for me as well, after I posted the message
my "sorry" is at the end#2018-09-1113:50kirill.salykinthanks for the help
https://docs.datomic.com/cloud/query/query-data-reference.html#find-specs
https://docs.datomic.com/cloud/query/query-data-reference.html#arg-grammar#2018-09-1114:39lilactown😞#2018-09-1114:39lilactownI guess I was confused since it was simply elided, instead of called out as a difference#2018-09-1114:02lilactown'[:find ?user .
:in $ ?id
:where
[?user :user/authentications]
[_ :user.authentication/id ?id]]#2018-09-1115:11markbastianIs there a way to set the transactor sql url as a Java property? Something along the lines of -Ddatomic.sqlUrl=jdbc:<postgresql://localhost:5432/datomic> rather than in the passed in properties file? I've looked here (https://docs.datomic.com/on-prem/system-properties.html#transactor-properties) and don't see a property listed.#2018-09-1115:36octahedrionmy db has entities representing users of the system, and I want to be able to query for which users transacted which assertions. How do people do this ? I'm thinking of including an assertion about a user (such as :last-active-timestamp) with every transaction so that I can join on the txid#2018-09-1115:50Joe Lane@octo221 Like an audit log?#2018-09-1115:53octahedrionwell it's for the users themselves to query for what they did#2018-09-1116:16kennyDo Ions work with :local/root in your deps.edn? I tried making one of my deps :local/root, did an unreproducible push, and got a CompilerException saying it cannot locate a file in the dependency I made :local/root.#2018-09-1116:20kennyI also don't get cognitect.s3-libs.s3/upload logged for the dependencies that are :local/root. Being able to push a :local/root dependency would be quite helpful for debugging. At the very least, it should warn that the local dependency will not be included in the uploaded zip.#2018-09-1116:23kennyHowever, if I look in the unrepro zip located in target/datomic/apps/<system>/unrepro/<name>.zip/app, I see a directory name __local-1455 which includes the source files for my local dependency.#2018-09-1116:25kennyNo idea how it actually works, but the local directory __local-1455 is in the bundle-classpath.edn file as the value for my local/root dependency.#2018-09-1116:34kennyHmm. It appears an older version of my dependency was getting put on the CP. 
Deleting .cpcache seems to have fixed it.#2018-09-1116:34alexmilleryou can always -Sforce to force recomputation of the cached classpath#2018-09-1116:44kennyWould it be possible to update the AWS SSM dependency to at least 1.11.375? That release adds getARN and getLastModifiedDate to Parameter.#2018-09-1117:35stuarthalloway@U083D6HK9 I will bump to latest for the next release, presuming it causes no errors in testing.#2018-09-1118:25eoliphantany thoughts on 'inter app' comms? i've started mucking around with query groups. and thinking through approaches#2018-09-1121:31kennyWhat are the minimum IAM permissions required to push and deploy an Ion?#2018-09-1201:58stuarthalloway@U083D6HK9 we have not doc'ed these yet#2018-09-1122:11kennyIs there an easy way to call the Ion push and deploy functions from Clojure?#2018-09-1205:00olivergeorgeTreat these as nothing but rough notes but it’s pretty easy to script it into a single-step deploy. https://gist.github.com/olivergeorge/f402c8dc8acd469717d5c6008b2c611b#2018-09-1216:50kennyExactly what I was after! Docs for those functions would be nice.#2018-09-1300:32olivergeorgeYeah, I guessed they existed and from that point the documentation on the ions reference page provided the necessary details.
https://docs.datomic.com/cloud/ions/ions-reference.html#push#2018-09-1204:02pfeodrippeHi, I'm using datomic on-prem with dynamoDB as backend and I have a function c/uuid which creates a random uuid (which namespace is not at transactor classpath!). I must pass some uuids to a transactor function tx-f, could I use a lazy-seq created with repeatedly as arg (e.g uuids) to tx-f and, as a example, use (take 4 uuids) inside tx-f?#2018-09-1205:00olivergeorge#2018-09-1214:42ninjaHi, question about retracting an entity:
Using a pull query in the form of (d/pull db '[*] eid) after having the entity retracted using the same eid yields the :db/id. Since I was expecting it to return nil, why does it not?#2018-09-1214:46henrik@atwrdik Are you using the DB from before it was deleted?#2018-09-1214:47henrikYou need to reacquire the DB from the connection to get the current one.#2018-09-1214:50favila@atwrdik this is an implementation detail#2018-09-1214:51favilaif an entity ever had assertions, you will get {:db/id x}; otherwise you will get nil#2018-09-1214:52favilaentity ids don't exist or not exist; the only meaningful question is whether anything is asserted of them#2018-09-1215:07ninja@henrik I'm already using the DB after it was retracted.#2018-09-1215:10ninja@favila So retracting only "removes" all attributes and their corresponding values from the entity (as well as references)? So an entity not having an attribute can be treated as if it is not existent?#2018-09-1215:21favilathe only thing that exists is assertions [e a v tx op]#2018-09-1215:22favilahttps://docs.datomic.com/cloud/whatis/data-model.html may help#2018-09-1215:22favilaAlso this: http://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2018-09-1309:24ninjaThank you for the clarification. The links helped a lot.
The last paragraph in the documentation here (https://docs.datomic.com/on-prem/entities.html#basics) also was quite helpful.#2018-09-1307:07olivergeorgeI'm puzzling over how to layer on some data integrity constraints. I see that using custom transactor functions is a way to add logic but I haven't seen any conventions around using it to ensure data integrity robustly.
Simple example might be: all People entities must have a :person/name.
This is easily enforced when creating but the other case to consider is a different piece of code retracting the :person/name attribute on an entity. Clearly the first issue is that entities don’t have a type.
I wondered about giving entities an optional :constraint/spec keyword attribute and using a generic transactor helper fn to ensure all entities affected are still valid data using s/valid?#2018-09-1307:08olivergeorgeHow do you ensure data integrity?#2018-09-1307:28dominicm@olivergeorge if all people must have a person/name, maybe you can't find a person if they don't have a person/name?#2018-09-1310:10greywolveQuestion about datomic cloud. So if i'm reading things correctly, the only way to run your client applications, is in the same VPC as datomic cloud? Ie there's no way to have say, an app running in Heroku access datomic cloud via the client lib?#2018-09-1310:10greywolveand if that's the case, how are people that run the transactor on aws, and their peers on heroku, managing to properly secure the transactor?#2018-09-1310:18steveb8n@olivergeorge I use spec but in the client instead of in a transaction fn#2018-09-1310:24stijn@greywolve there's a section in the docs that describes how to run clients in another vpc https://docs.datomic.com/cloud/operation/client-applications.html#separate-vpc#2018-09-1310:38greywolveThanks, do you know if it's possible to get this right if you're hosting on Heroku in the same aws region?#2018-09-1311:39stijnsorry, i have no experience with heroku...#2018-09-1311:46jeff.terrellI'm facing this same question myself. I'm thinking it would theoretically be possible either to have a proxy host within AWS that could mediate traffic between Heroku and Datomic Cloud, or else open up a relevant security group to allow traffic from Heroku's network.#2018-09-1311:47jeff.terrellBut I'm seeing a couple of concerns. First, it really wouldn't be a great idea to have the Heroku instances across the country (or the world) from the Datomic instances. 
That'd make the site really slow I would think.#2018-09-1311:47jeff.terrellSecond, maybe this kind of approach isn't allowed for Solo?
> VPC Endpoints are only supported in a Production Topology.#2018-09-1311:50jeff.terrellSo I'm weighing the costs of exploring a solution to bridge Heroku and Datomic Cloud against the costs of reinventing some Heroku functionality (especially review apps) in AWS with the code integration tools like Code Pipeline. Leaning towards the latter, to be honest, but I expect it'll take some work.#2018-09-1314:13greywolveHow do you secure your proxy host though?#2018-09-1314:14greywolveFrom my limited research, it doesn't seem like a good idea to whitelist ips, since the heroku ips are dynamic, and the range is too big. The other option is to fork out for one of the static ip addons, but they seem pricey#2018-09-1314:15jeff.terrellYeah, that's a good point about Heroku IPs being dynamic. This further inclines me to try to learn the Code* tools in AWS.#2018-09-1314:15greywolveYeah I'm starting to feel like the only sane approach is probably just to move over to AWS if we want to use Datomic#2018-09-1314:16greywolveIt's a pity Heroku private spaces cost so much ($1000 / month)#2018-09-1314:16jeff.terrellI feel the same way. It's a little discouraging considering how easy it is to deploy a Clojure app to Heroku, plus all the nice features Heroku provides, but oh well.#2018-09-1314:17greywolveYeah 😞 I didn't expect it to be this challenging to setup Datomic there#2018-09-1314:18greywolveI wonder if setting up the VPC endpoint, and connecting via Heroku on the same region would work, seems a bit fragile though#2018-09-1315:12idiomancyhey, I know this must be a FAQ, but...
okay, I'm just getting my feet wet with Datomic via datascript in the browser, and even though I can rip out and reschema my entire system with one version update, I am terrified by the notion of "correct" Data modeling with Datomic.. I don't know what should be namespaced, how to namespace them. There's so much flexibility that I have a serious "blank canvas" problem.
Can anyone point me towards where to start in modeling your domain from a blank canvas with Datomic?#2018-09-1316:45stuarthalloway@U250T6MFA the mbrainz example is a good starting place https://github.com/Datomic/mbrainz-sample#2018-09-1316:46idiomancyokay, great, I'll take a look!#2018-09-1319:04val_waeselynck@U250T6MFA table/column or class/method is a good starting point, you'll figure the paterns soon enough, and Datomic gives you enough flexibility to recover from your mistakes#2018-09-1514:20dustingetz@U250T6MFA +1 for table/column. You will be able to figure this out as you go. Just get started.#2018-09-2118:44eraadIn addition to table/column, I like to think about the schema as an attribute catalogue that I can use to create entities. So don´t worry about having ALL possible attributes upfront. Just add the minimal attributes your domain needs and follow the best practices https://docs.datomic.com/cloud/best.html#datomic-schema#2018-09-1315:13idiomancyIF things work out, moving to datomic proper might not be inconceivable so...#2018-09-1320:09linus_gvIs there like a Merkle hash for a database?#2018-09-1321:12okocimwhat is the proper way to read the body on an api-gateway ionized fn? I’m getting nil in the body on a POST, but the json is appearing on a :datomic.ion.edn.api-gateway/json key#2018-09-1321:12okocimthe ion-example doesn’t ionize any fns that actually read the POST body from what I can tell#2018-09-1321:26kenny@okocim I am using Ions in a production environment and the body is available under the :body key in my Ring handler that I pass to ionize.#2018-09-1321:48okocimthanks @kenny. Turns out I was passing a bad parameter on my test requests :face_palm:#2018-09-1408:10mpingHi, anybody using datomic for ETL? We have a low throughput webapp, and we’d like to start collecting events for some kind of BI. We don’t need fancy lambda architecture, just a collector (eg: kafka) and an ingestor. 
My only concern is that we want do to some heavy-ish aggregations and I’d like to do it straight from datomic#2018-09-1412:56stuarthalloway@U0MQW27QB I do some ETL with Datomic. Often nothing more complex than a functional entry point that transforms and transacts. Do you have a particular concern?#2018-09-1413:27mpingyes, how it will handle aggregations#2018-09-1413:28mpingwe basically want to do slice and dice, rollups and such on the fly - we’re pretty low volume, if don’t have to design a data warehouse that would be great#2018-09-1413:28mpinglow volume meaning a couple of events a sec on peak, at most#2018-09-1413:28mpingso in fact, besides the etl I’d like to query datomic directly#2018-09-1414:41stuarthalloway@U0MQW27QB this seems totally fine. I would consider putting this stuff in its own db.#2018-09-1414:42mpingyeah we will have a source of truth in sql, then emit some events, and ingest it to datomic#2018-09-1414:42mpingbacked by pg probably#2018-09-1414:42mpingthanks, will do a spike!#2018-09-1408:12mpingI also took a look at onyx but we’re low throughput so don’t need that right now#2018-09-1408:33dominicmI only have 1 data point for StorageGetMsec, and I'm a little confused by that. Shouldn't it be triggered fairly regularly?
Does Memcache have an effect on this metric?#2018-09-1412:51stuarthalloway@U09LZR36F caching is multi-layered, check out https://docs.datomic.com/on-prem/caching.html#2018-09-1414:42dominicmI didn't understand the difference between storage and object, that clears it up, thanks! #2018-09-1409:10stijnquestion on ions: is it a good idea to handle asynchronous workloads on the datomic cluster (with core.async). the use case is: based on some update in the database, another HTTP api needs to be called. the process initiating the db update should not wait on the HTTP response of the other API. I think I can use vars or atoms to keep references to channels so the core.async machinery doesn't get garbage-collected. Are there any issues with this approach? Or is it better to hand off this functionality to a physical queue, with e.g. lambdas processing the queue (which is quite a different level of operational complexity 🙂 )?
We haven’t explored Ragtime by @weavejester yet, but in case it’s useful to some of you, here it is:
https://github.com/magnetcoop/stork#2018-09-1415:53ghaskinshello…im having a strange (but probably simple PEBCAC) problem with unique identities. I have a transaction that includes something like:
[{:db/id pid :person/email email}
{:db/id mid :club/member pid}]
#2018-09-1415:53ghaskinswhere :person/email has the unique attribute in the schema#2018-09-1415:54ghaskinsinsert works fine, but upsert fails, and I dont understand why#2018-09-1415:54ghaskinsthe probem seems to be in referencing the entity-id of the existing item#2018-09-1415:55ghaskins(e.g. if I take the second add away, the upsert itself “works”#2018-09-1415:55ghaskinsi just dont know how to reference it in the subsequent add#2018-09-1415:56ghaskinsthe error I get is:
java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: :db.error/not-a-data-function Unable to resolve data function: :db/id
#2018-09-1415:57ghaskinsany guidance appreciated#2018-09-1417:03marshall@ghaskins what’s the exact data you are transacting when you get that error? that’s usually caused by missing a seq wrapper around the tx-data#2018-09-1417:04marshalli.e. (d/transact conn {:tx-data [your-data-here]}) instead of (d/transact conn {:tx-data your-data-here})#2018-09-1418:52spiedenis anyone aware of a library that enables something similar to pull expressions but for EDN data?#2018-09-1418:55spiedenhmm, seems like this maybe https://github.com/juxt/pull#2018-09-1419:22ghaskins@marshall in this case, its happening inside a txn-function, so harder to get the exact set its using#2018-09-1419:22ghaskinsill see if I can tease that out#2018-09-1419:35ghaskins@marshall good call, that was it#2018-09-1419:35ghaskinsty#2018-09-1419:36ghaskinsthe txn-fcn was returning a sequence in the initial case and a map (with one :db/id it it) for the update case#2018-09-1419:58john@spieden This is also worth checking out https://github.com/halgari/odin#2018-09-1420:31spieden@john cool, thanks#2018-09-1421:07cjmurphy@spieden and perhaps https://github.com/wilkerlucio/pathom, although it's probably overkill for parsing just EDN.#2018-09-1423:49grzmI'm looking at upgrading Datomic cloud and have a couple of questions: (a) I'm unsure of whether a system has been upgraded before, so I'm trying to determine whether I should follow the First Upgrade path. (b) If it is a First Upgrade, it tears down the stack and creates a new one with "Reuse existing storage on create" set to True. Both "Upgrade" and this setting imply that existing data will be preserved. Is this correct?#2018-09-1502:11stuarthallowayhi @U0EHU1800 It is always safe to do a first upgrade#2018-09-1502:12stuarthallowayAny and all upgrades are data safe.#2018-09-1504:42grzmThanks for confirming @U072WS7PE . Perhaps it's just me being overly cautious. Is that explicitly documented somewhere? 
If not, I know I would have found it helpful.#2018-09-1511:07stuarthallowayit is documented (but not highlighted) at https://docs.datomic.com/cloud/operation/deleting.html. I will improve it.#2018-09-1514:06grzmThanks!#2018-09-1511:55miridiusHas anyone tried to programmatically provision/bring up/bring down query groups? I'm thinking for the purposes of doing something like review apps (automatic deployment of git feature branches to their own instances). The reason I ask is I'm comparing the pros/cons of switching from Datomic On Prem + Kubernetes app deployments to Datomic Cloud + Ions apps.#2018-09-1613:57andrea.crottiis anyone using Datomic on Heroku?#2018-09-1701:31caleb.macdonaldblackIs it recommended we use the peer? I have some old terraform that creates Datomic infrastructure for a peer. At first glance the docs seem to be in favour of using the client API these days. As far as I know though, querying in the client API is more limited than in the peer. Like for example the peer can run defined functions in data log queries. Is it fine to continue using the peer API and to be using this way?#2018-09-1702:59clariceI am going through the examples in the docs (https://docs.datomic.com/on-prem/query.html#function-expressions) and came across
(d/q '[:find ?celsius .
:in ?fahrenheit
:where
[(- ?fahrenheit 32) ?f-32]
[(/ ?f-32 1.8) ?celsius]] db 212)
which gives me a ClassCastException datomic.db.Db cannot be cast to java.lang.Number clojure.lang.Numbers.minus (Numbers.java:137)
Do you know why this could be happening? I hope it is just me that typed something in incorrectly.#2018-09-1703:38bkamphaus@cbillowes first arg to :in clause when present needs to be data source (i.e. the db), so using :in $ ?fahrenheit should fix that. (right now since db is first query arg and no data source is in the :in query will use var db for ?fahrenheit#2018-09-1704:08clariceAh I see, thanks. It works.#2018-09-1705:51caleb.macdonaldblackCan I automate the creation of Datomic Cloud infrastructure? For example through bash or terraform? We want to be able to spin up infra without needing to navigate through the AWS Marketplace#2018-09-1714:01jeroenvandijkI'm curious about this too#2018-09-1715:55jeroenvandijk@U3XCG2GBZ I'm guessing you can use the template url provided in the cloudformation interface as a nested stack in a cloudformation template. Not sure if that url get's updated a lot. I'm assuming it shouldn't as it would mean there is a change in Datomic Ion configuration and you have to update anyway (=> getting a new url). That being said i'm curious for the official answer#2018-09-1716:04jeroenvandijkIt seems the public Datomic Ion template url is publically available (just did a test). This means the above is very likely to work#2018-09-1717:04stuarthalloway@U0FT7SRLP we understand we can do more in this area and await your experience reports!#2018-09-1723:50caleb.macdonaldblack@U0FT7SRLP Thanks for the response. That’s what I’m going to do. 
Pull that cloud formation template url and use it in terraform.#2018-09-1808:36jeroenvandijk@U3XCG2GBZ @U072WS7PE Here's the code needed to automate via Cloudformation https://gist.github.com/jeroenvandijk/e92cf7333f6bbfee981accde4c9ec66f#2018-09-1712:57mpenethttps://www.youtube.com/watch?v=thpzXjmYyGk#2018-09-1719:25kennyWhat is the recommended way to handle async operations in an Ion?#2018-09-1719:29kennyFor example: a service that needs to interact with an external service to create the proper response.#2018-09-1719:34kennyI see in the event example project that <!! is used: https://github.com/Datomic/ion-event-example/blob/master/src/datomic/ion/event_example.clj#L106. Is this the recommended approach? Will you end up locking up the system in a high load situation?#2018-09-1719:45spiedencould exhaust the core.async thread pool for sure#2018-09-1719:46spiedensome kind of core.async/CSP support baked into ions would be nice#2018-09-1720:00kennyThat's what I figured. This seems like a pretty common use case and would certainly be a blocker for ours.#2018-09-1720:19slipsetHi, sorry totally datomic cloud n00b#2018-09-1720:19slipsetI’ve been following the tutorial for setting up datomic cloud, and I’ve gotten to where I want to (d/create-database client {:db-name "users"})#2018-09-1720:20slipsetMy config cfg looks like:#2018-09-1720:20slipset(def cfg {:server-type :ion
:region "eu-west-1" ;; e.g. us-east-1
:system "datomic-test"
:endpoint ""
:proxy-port 8182})
;; => #'datomic-test.core/cfg
datomic-test.core> (def client (d/client cfg))
2018#2018-09-1720:20slipsetbut#2018-09-1720:20slipset(d/create-database client {:db-name "users"})
ExceptionInfo Forbidden to read keyfile at . Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile. clojure.core/ex-info (core.clj:4739)
datomic-test.core>#2018-09-1720:20slipsetwhat am I missing?#2018-09-1720:21slipsetI have my socks proxy up and running and#2018-09-1720:21slipset22:17 $ curl -x .$DATOMIC_SYSTEM.$
{:s3-auth-path "datomic-test-storagef7f305e7-1bpzuyaf5d-s3datomic-1wgl6uvtl9bei"}✔ ~/Documents/datomic-test
22:21 $#2018-09-1720:22slipsetthe socks proxy seem to be ok.#2018-09-1720:22marshallthe environment you’re running the REPL needs AWS credentials#2018-09-1720:22marshallare you using Cursive or how are you launching your REPL?#2018-09-1720:22slipsetemacs/cider#2018-09-1720:23marshallso however you setup your env vars / AWS credentials, you’ll need to make sure the env that you use to launch the REPL has them#2018-09-1720:23marshallwhether that’s sourcing in your shell before launching or whatever your tooling supports#2018-09-1720:23slipsetOk, got it, I thought that was sortof taken care of by the socks proxy#2018-09-1720:23marshallnope. the socks proxy env needs creds too#2018-09-1720:24marshallbut for different things#2018-09-1720:24slipsetok. could be a thing to mention in the docs.#2018-09-1720:25eraserhdWe need to associate some computed data with each successive database value, but I can't find a way to do it. It's needed for performance reasons.#2018-09-1720:26eraserhdWe are using Clara Rules for checking database constraints for a Datomic on-prem system. Loading the whole database into Clara is too slow, so we want to incrementally add facts to Clara.#2018-09-1720:26eraserhdWe are using d/with, so we can't use basis-t as a unique identifier of a database value.#2018-09-1720:54slipset@marshall setting the environment variables did the trick, thanks!#2018-09-1801:07caleb.macdonaldblackIn datomic on-prem, why does the transactor need the peer role? This is painful because I’d like multiple services/applications to be given different roles but the transactor only accepts one. Or am i confusing the peer server with a peer in an application? Is this role just for the peer server? Ideally I’d like to use the peer library over the client library so I don’t think I want to run a peer server.#2018-09-1801:29johnjare you talking about AWS roles? 
but yes, confusingly, the peer server and the peer library are two separate things, if you don't want clients you don't need the peer server#2018-09-1802:41caleb.macdonaldblack@lockdown- Yea AWS IAM roles. The transactor configuration requires a single peer iam role https://docs.datomic.com/on-prem/storage.html#dynamodb-transactor-properties#2018-09-1802:43caleb.macdonaldblackBut if I had two services/application with separate roles (I want to give permissions to different things but share datomic access) how would I set this if the transactor configuration only accepts one?#2018-09-1808:08fmnoiseOfficial docs https://docs.datomic.com/cloud/schema/schema-change.html say You cannot change the :db/valueType of an attribute
but I found a workaround to achieve this via renaming.
I had a :node/children which was EDN string and I migrated it to list of ref with 2 transactions:
[{:db/id :node/children
:db/ident :node/legacy-children}]
[{:db/ident :node/children
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many}]
It works just fine, but I worry what are downsides of this approach.#2018-09-1812:31stuarthalloway@U4BEW7F61 changing the meaning of a name breaks existing users (rule 6 at http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html). That is the biggest downside.#2018-09-1812:41fmnoise@U072WS7PE thanks for the answer, while I'm fine with that (meaning wasn't actually changed, it's the same data but now I just return it to client "as is" without clojure.edn/read-string, there are no datomic specific technical downsides, right?#2018-09-1812:52stuarthallowaycorrect#2018-09-1813:51fmnoisethanks!#2018-09-1815:58mpingHi all, I’m getting a noob error when transacting:
java.lang.ClassCastException: clojure.lang.PersistentArrayMap cannot be cast to clojure.lang.Named
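(This cast failure generally means a map ended up where Clojure expected something Named, i.e. a keyword or symbol, which is exactly what mis-nested tx data produces. A minimal, purely illustrative reproduction, not the actual tx data from this thread:)

```clojure
;; `name` requires a clojure.lang.Named argument (keyword or symbol);
;; passing a map throws the same ClassCastException:
;; clojure.lang.PersistentArrayMap cannot be cast to clojure.lang.Named
(name {:db/id 1})
```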
#2018-09-1815:58mpingthis is on the logback log file#2018-09-1815:58mpingthe transact call just says there’s a server error#2018-09-1816:21mpingnvmind, bad parinfer indentation#2018-09-1819:35timgilbertSay, I upgraded my local dev transactor from 0.9.5561.62 to 0.9.5703 and now when I try to connect to it with a peer, I'm seeing this message:
org.h2.jdbc.JdbcSQLException: Remote connections to this server are not allowed, see -tcpAllowOthers [90117-171]
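(Context for this error: the security update linked below made the dev-storage H2 instance refuse unauthenticated remote connections. A sketch of the transactor-properties keys involved, assuming the key names from the 0.9.5697 release announcement; the password values are placeholders you choose:)

```
## dev-storage settings introduced by the 0.9.5697 security update
## (key names assumed from the release announcement; passwords are placeholders)
storage-access=remote-peer
storage-admin-password=<choose-an-admin-password>
storage-datomic-password=<choose-a-peer-password>
```

Peers then supply the storage password as part of their connection URI.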
#2018-09-1819:39stuarthalloway@U08QZ7Y5S sounds like https://forum.datomic.com/t/important-security-update-0-9-5697/379#2018-09-1819:40timgilbertAha, look like that's it, thanks! I'll take a look.#2018-09-1819:36timgilbertThis is running inside a docker container. Has anything changed with regards to this? I didn't see anything obvious in the release notes#2018-09-1908:21rhansenWill datomic cloud upgrade to the t3 instances for solo topology? I'm thinking of reserving soon, so would be nice to know 🙂#2018-09-1908:54mpingI’m having trouble thinking in datomic terms; suppose I have a schema with datoms that have two instant (start and end) and I want to do a histogram across a time period (eg: count per day, where day between the datom’s start and end) . Some of my data may not have an entry for a given day. Is this kind of query possible in datomic? Or I’d have to query for each day?#2018-09-1909:04steveb8n@mping can’t you fill in the gaps using a fn in the Datomic client system?#2018-09-1909:05mping@steveb8n I was reading about that just now, I guess I can but to fill the gaps I’d have to do a query to know where the gaps are. I can live without gaps filled right now#2018-09-1909:20steveb8nor just query for all data, put in a map by date. then in client fn “get” each value in a loop that generates all dates, using a default value of zero in the get#2018-09-1911:20claricePlease can someone help me? I am following the docs (https://docs.datomic.com/on-prem/query.html#calling-java) and came across the Java method calls that I can't seem to get working. I get a FileNotFoundException Could not locate System__init.class or System.clj on classpath. clojure.lang.RT.load (RT.java:463) when I run
[:find ?k ?v
:where [(System/getProperties) [[?k ?v]]]]
I did try using java.lang.System/getProperties but it did not work.
I created a question on StackOverflow (https://stackoverflow.com/questions/52378601/using-system-getproperties-in-a-datomic-query-throws-a-filenotfoundexception) with more information.#2018-09-1911:33stuarthallowayhi @UCM1FJA4E! Are you running on-prem or cloud?#2018-09-1911:35clariceon-prem#2018-09-1911:40stuarthallowayI believe that is a bug introduced with classpath function support. Datomic is incorrectly treating that symbol as a Clojure name#2018-09-1911:40stuarthallowaythe workaround is to give it a Clojure symbol to work with, e.g. (defn get-props [] (System/getProperties))
(d/q '[:find ?k ?v
:where [(user/get-props) [[?k ?v]]]])
#2018-09-1911:40stuarthallowaythanks for the report!#2018-09-1911:45clariceAwesome! Thanks Stuart. That works.#2018-09-1912:52Roman TsopinHi there, is there way to use Datomic ions with websocket or other real time communication?#2018-09-1913:04henrikI have seen people talking about using AWS IoT for websocket stuff. I’m not familiar with the details, though.#2018-09-1913:44Joe Laneyeah, use aws IoT topics. I haven’t had the time to do a writeup yet (had to shave a few yaks) but since a lambda function can be a consumer and / or a producer to an IoT topic you can call an Ion whenever a message is published to an IoT topic (from a web browser or an app, for example)#2018-09-1913:56eoliphant+1 to this. IoT's websocket support makes this super easy. Have actually spent more time on figuring out the best approach to spitting out events. Been playing around with Onyx as well as just ions read the log and publish events in conjunction with lambda scheduling#2018-09-1914:06Roman TsopinThanks! Will check IoT#2018-09-1914:10johnjIs it posible to create a new system as a query group only? for having a system with just a t2.medium#2018-09-1914:13stuarthallowayno, query groups extend a production system https://docs.datomic.com/cloud/whatis/architecture.html#query-groups#2018-09-1914:16johnjyeah saw that but was no completely sure, thanks.#2018-09-1914:10mpingHi, what’s the syntax for 2-arity query functions on a peer? I have the following but it spits out a stacktrace:
(da/q '[:find (sql-events.datomic/occupation ?s ?e)
:where
[_ :booking/start ?s]
[_ :booking/end ?e]]
(da/db api-conn))
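(For reference: a custom aggregate in the peer API is called by fully qualified name and receives a single collection argument, so only one query variable can be aggregated. A sketch reusing the names from this thread, with the `occupation` body elided as pseudocode:)

```clojure
;; Sketch: only the last argument of a custom aggregate may be a
;; query variable; the fn receives one collection of the bound values.
(defn occupation [starts] ...)

(da/q '[:find (sql-events.datomic/occupation ?s)
        :where [_ :booking/start ?s]]
      (da/db api-conn))
```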
#2018-09-1914:11mping(defn occupation [s e] ...)#2018-09-1914:12mpingnvmind, it’s an aggregation and it can only have one variable#2018-09-1915:56grzm(Moving to #datomic from #ions-aws) When naming ion.cast/metric values, here's the behavior I'm seeing:
;; following the metric example
(cast/metric {:name :MyMetricName ,,,}) ;; => Cloudwatch metric: Mymetricname
;; following the keys example for events:
(cast/metric {:name :my-metric-name ,,,}) ;; => Cloudwatch metric: My-metric-name
In at least one of these cases I'd expect to get a Cloudwatch metric name of MyMetricName. How should I adjust my expectations?#2018-09-1916:48stuarthalloway@U0EHU1800 are you running on Solo?#2018-09-1916:49stuarthallowayCould be "In order to reduce cost, the Solo Topology reports only a small subset of the metrics listed above: Alerts, Datoms, HttpEndpointOpsPending, JvmFreeMb, and HttpEndpointThrottled." -- https://docs.datomic.com/cloud/operation/monitoring.html#2018-09-1916:50stuarthallowayhm, better link at https://docs.datomic.com/cloud/ions/ions-monitoring.html#metrics#2018-09-1916:55grzmI'm running production (2 × i3.large) (on 441)#2018-09-1916:56grzmI'm not missing metrics: they appear, they're just not spelled the way I'd expect looking at the transformation rules in the documentation.#2018-09-1916:56grzm{:name :MyMetricName} -> Mymetricname
{:name :my-metric-name} -> My-metric-name#2018-09-1916:59grzm(Considering now two people have misinterpreted my explanation, clearly I should have explained it better. My apologies.)#2018-09-1917:55stuarthallowayare you seeing those metric names printed in the AWS CloudWatchlog?, or as metric names in the metrics console, or both?#2018-09-1918:30grzmThose are the metric names in the Metrics (sub) console in CloudWatch. I actually don't see them when searching the CloudWatch log groups (using ${.Type="Metric"} as my filter. ${.Type="Event"} and ${.Type="Alert"} returns expected log messages).#2018-09-1918:31grzmShould I expect metrics to be reported in the CloudWatch log messages?#2018-09-1918:50stuarthallowaythe report in the log is an operational report about sending the messages#2018-09-1918:50stuarthallowayand cannot be trusted on this issue because it has its own necessary to-json transformation#2018-09-1918:50stuarthallowaywill investigate further#2018-09-1919:03jaret@U0EHU1800 Stu has asked that I investigate this further. I am opening a case/ticket to track this. I would like to add you as a contact on the ticket. Could you PM me a good e-mail to add to the ticket so that I can notify you of our findings?#2018-09-1917:27madstapWhat is the most convenient to, given a t value, find the previous t? The use-case is to query the history db as-of before transaction x happened.#2018-09-1917:41faviladec#2018-09-1917:42favilayou don't usually need the exact previous t, you just need a t less than the target#2018-09-1917:42favilaso you can just subtract one (from t or tx) and use that#2018-09-1917:54madstapPerfect, thanks!#2018-09-1917:42idiomancycould someone help me understand the difference between :db.unique/identity and :db.unique/value?
I'm just having a little trouble wrapping my head around it right now. Didn't get much sleep 😅#2018-09-1917:43favilathe only difference is what happens in txs like this:#2018-09-1917:44favila{:db/id "new-id"
:my-unique-value-attr "foo"}#2018-09-1917:44favilaif "foo" already exists on a db id somewhere, this tx will throw an exception#2018-09-1917:44favila{:db/id "new-id"
:my-unique-identity-attr "foo"}#2018-09-1917:45favilathis, however, will substitute "new-id" with the entity id of whatever has the "foo" assertion on it#2018-09-1917:45favilathis is what they mean by "upsert" in the datomic docs#2018-09-1917:45idiomancyohhhh, okay, that's what it means. the value is unique regardless of the key its associated with#2018-09-1917:45favilayes, but they both share that#2018-09-1917:46favila:db.unique/identity goes the extra step of inferring a db/id when a transaction doesn't have one#2018-09-1917:46favila:db.unique/value will not do that, you must explicitly say what entity you are writing to#2018-09-1917:47idiomancyhuh... so what would be a valid transaction using a value?#2018-09-1917:47favilathe first one is valid if foo does not exist#2018-09-1917:47favilaif it does exist, just don't use a tempid#2018-09-1917:48favilae.g. {:db/id [:my-unique-value-attr "foo"] :some-other-attr "bar"}#2018-09-1917:48favilabut you have to do a read before you write to determine which case it is#2018-09-1917:48favilacreating vs updating#2018-09-1917:48favila:db.unique/identity you can treat both the same#2018-09-1917:48idiomancyinteresting... the docs give social security number as an example use case, but I don't see how that doesn't fall under unique/identity 😕#2018-09-1917:49favilayou probably don't want to upsert on social security#2018-09-1917:50favilaif you are intending to add a new person record, and that person happens to have the same ssn as an existing person, it's unlikely that the desired behavior is to update the existing record#2018-09-1917:50favilathe desired behavior is probably "freak out"#2018-09-1917:52idiomancyintersting. I keep going back and forth conflating each concept with the other. I should treat myself to more sleep.
for quick reference right now, and to check my understanding, I'm going to be associating crypto-currency balances to crypto-currency account addresses, so the account address, the long uuid associated with the crypto account, is probably going to be a unique/value right?#2018-09-1917:53idiomancybecause no other account can ever be added with that same address, and that address will never be updated in any way..#2018-09-1918:06johnjthis an identity case, you want to identify accounts by their address (uuid)#2018-09-1918:10idiomancyso, I like the way you expressed that thought in terms of the verb "to identify". I want to identify an account by its address. Cool, I can get behind that.
would you happen to have a similar phrasing in mind for the value case? because I think that's the bit I'm missing#2018-09-1918:17johnjthe opposite 😉#2018-09-1918:18johnjwhen you want a unique identifier that doesn't need to identify an entity#2018-09-1918:18favila:db.unique/identity is precisely to identify the entity#2018-09-1918:18favilawhere upsert behavior is never a sign of trouble#2018-09-1918:18johnjyeah, I was talking about :db.unique/value#2018-09-1918:18favila:db.unique/value is merely that the value must be unique#2018-09-1918:19idiomancywhy would I ever have a unique identifier that doesn't identify an entity?#2018-09-1918:19favilathe value may not identify the entity at all#2018-09-1918:19favilae.g. other system's identifiers#2018-09-1918:20favilaor, values that may change on an entity; but you don't care you just want to make sure two entities don't have the same one at any time#2018-09-1918:20favilae.g. perhaps the value represents some limited resource#2018-09-1918:21favilathat the entity is "using" by asserting#2018-09-1918:21idiomancy^^ you're talking about ident or value right now?#2018-09-1918:21favilavalue#2018-09-1918:21favila> why would I ever have a unique identifier that doesn't identify an entity?#2018-09-1918:21favilaI am answering that question with examples#2018-09-1918:21idiomancyI see, wow, okay#2018-09-1918:21idiomancythe resource example might be the thing that does it for me#2018-09-1918:21idiomancyI think that is starting to make it clear#2018-09-1918:23idiomancyso its just a rule that two things can't have the value at the same time. it can be moved, removed, or changed, provided it doesn't change to a value that something else is already associated with.#2018-09-1918:23johnjmaybe you have some range of numbers you have to use to label something but you can only use a number from the range once#2018-09-1918:24idiomancyyeah, that's interesting. 
considering it in terms of a small pool of unique values that need to be shared is a good way of making it concrete for me#2018-09-1918:25idiomancythanks @U4R5K5M0A, @U09R86PA4, this was really helpful#2018-09-1918:28favilait may be better just to think of it in terms of desired behavior#2018-09-1918:29favila:db.unique/value: must be unique a+v assertion db-wide; asserting on a different entity will throw#2018-09-1918:29favila:db.unique/identity: must be unique a+v assertion db-wide; asserting on a tempid will assert on the existing "owning" entity, otherwise it will create a new one#2018-09-1918:30favilathe only difference between them is how transactions that assert the attribute on a temporary entity id will respond#2018-09-1918:30idiomancyyeah, okay, that makes sense.#2018-09-1918:30favilafor value, they will throw (tx rejected); for identity, they will "upsert" (resolve the tempid to the existing entity id)#2018-09-1918:31idiomancyvalue is about asserting distinct cases, identity is about resolving to the same case#2018-09-1919:23johnj@U09R86PA4 do you use datomic for public facing web apps?#2018-09-1919:23favilayes#2018-09-1919:24johnjcurious if you have cross yourself with response times/latencies issue that you couldn't handle easily (from the datomic side)#2018-09-1919:25favilanot really?#2018-09-1919:25johnjits my number one fear, response times increasing as the userbase grow#2018-09-1919:26favilaresponse for what?#2018-09-1919:26johnjqueries#2018-09-1919:26favilawhat kind of queries?#2018-09-1919:27johnjjust business data, say 100K rows#2018-09-1919:27johnjwhat storage do you use most of the time?#2018-09-1919:30favilawe use google cloud mysql#2018-09-1919:39johnjinteresting, hear everywhere dynamo is the most polished storage for datomic and more performant since reads and write don't fight each other for resources#2018-09-1919:39johnjI guess it depends on your needs, thanks#2018-09-1919:41favilawe had other constraints, it's not a great choice at 
all#2018-09-1919:41favilabut memcached makes read speed nearly irrelevant#2018-09-1919:42favilamysql traffic is nearly all writes#2018-09-1919:42idiomancyhow does dynamo overcome the transactor write bottleneck?#2018-09-1919:42favilait doesn't#2018-09-1919:43favilathe transactor itself, not the storage, is the bottleneck#2018-09-1919:43idiomancyhuh, yeah, then I don't see the advantage, considering peers can be scaled horizontally, right?#2018-09-1919:43favilathe advantage is operational#2018-09-1919:44idiomancyfair enough#2018-09-1919:44favilayou don't have to size any storage, you don't have to run regular maintenance#2018-09-1919:45favilamysql innodb has terrible garbage problems with datomic's workload#2018-09-1919:45favilaand it doesn't have an on-line space reclaimer like postgres vacuum#2018-09-1919:45favila"optimize tables" locks#2018-09-1919:46idiomancymm, yeah I could see that#2018-09-1919:46favilabut memcached provides basically unlimited read-scalability#2018-09-1919:46favilaand the write load is always limited by the transactor anyway#2018-09-1919:46johnj@U09R86PA4 your data doesn't fit on the peers?#2018-09-1919:46favilaand the queries are dead simple#2018-09-1919:47favilaeven if it did, it's got to get to the peer#2018-09-1919:48favilamany peers reading one mysql is worse than reading memcaches#2018-09-1919:48idiomancythere's a hot cache of frequently indexed accessed data, and then whenever you reach beyond that it has to pull more from storage. 
Ones entire data set is rarely ever present on the peer in its entirety#2018-09-1919:49idiomancyso the docs tell me, anyway#2018-09-1919:49favilayeah there are two levels#2018-09-1919:49favilathe object cache, which is datoms-as-java-instances, after decoding from storage#2018-09-1919:50favilathen there's the encoded blocks, up to 60-ish k in size I think, containing potentially thousands of datoms; those are what is in storage and cacheable by memcached#2018-09-1919:50favilathe storage is always just a key-value store from datomic's perspective#2018-09-1919:51idiomancygotcha. I'd not really understood the object cache#2018-09-1919:51favilaIn fact I don't understand why they didn't offer any explicitly key-valuey storages#2018-09-1919:51favilabdb for example would be a great fit#2018-09-1919:53favilawhen they say the entire db "fits in peer memory" that means that all blocks can be fully decoded into instances and fit in the object cache#2018-09-1919:53favilathat's considerably more memory than what storage uses because what's in storage is compressed and encoded#2018-09-1919:54johnjah good point#2018-09-1919:55johnjhave you seen http://myrocks.io/ ? 
may help you#2018-09-1919:55johnjoh nevermind, remembered you are in google cloud#2018-09-1919:55favilayeah#2018-09-1919:56favilathat's another pain point#2018-09-1919:56faviladatomic is very aws-centric#2018-09-1919:56favilawe can't even do backups to buckets#2018-09-1919:57johnjis their mysql backup not enough?#2018-09-1919:57johnjor you prefer at the datomic level?#2018-09-1919:57johnjhaven't read anything about datomics backup yet#2018-09-1919:58favilamysql backup can only restore to mysql#2018-09-1919:58johnjlooks like dynamo is the least headache from an operational point of view#2018-09-1919:59faviladatomic backups are storage-agnostic#2018-09-1919:59favilawe do both#2018-09-1920:07johnjok, backing to a file system instead to some cloud storage thing doesn't sound nice if you are already in the cloud#2018-09-1918:08idiomancyphew, alrighty then. good to know I'm correct in assuming that I'm still wrong.#2018-09-1918:46souenzzowhen I retract a noHistory value, this last will stay there forever or will be "cleaned"?#2018-09-1922:36iambrendonjohnI’m looking at migrating a production system to Datomic over time.#2018-09-1922:37iambrendonjohnDoes anyone have recommended readings of people that have done this before? I’m hoping to read about how they validated and justified the migration to their team and how the migration went.#2018-09-1922:37iambrendonjohnReally, I’m wanting to read some critical thinking and reflection on the decisions that were made.#2018-09-1922:54marshall@iambrendonjohn I’d be happy to discuss what we’ve seen from customers/users WRT that kind of migration. You can email me at marshall @ http://cognitect.com#2018-09-1923:05iambrendonjohnThanks Marshall, will do 👍#2018-09-1922:58matthew.grettonHi - I'm coming back to Datomic, after a year or so, and am interested in building an app using datomic cloud. Given there is no in-mem peer available, how do you generally test the validity of transaction data? 
Writing unit tests using the in-mem peer implementation meant that in the past I could test a large amount of my app's functionality through unit tests.#2018-09-1922:59marshallYou can use d/with #2018-09-1922:59marshallCould also use a tear off db in cloud#2018-09-1923:00marshallI tend to do both#2018-09-1923:00matthew.grettonI assume I'd still need to connect to an external client to use d/with?#2018-09-1923:00marshallYes#2018-09-1923:01marshallWhich is why I tend to use tear off db#2018-09-1923:01marshallI.e. uuid name#2018-09-1923:01marshallUse it for the tests then delete it#2018-09-1923:01matthew.grettonOk - But still creating and deleting the database externally?#2018-09-1923:02marshallIf you need isolation, you can run a separate solo system for that #2018-09-1923:03marshallShould be fine for functional tests. Spin up a prod system if you need to test perf#2018-09-1923:04matthew.grettonSo they're somewhere between unit and integration tests in the traditional sense.#2018-09-1923:04marshallProbably so, yea#2018-09-1923:04marshallGet a bit more "reality" than unit#2018-09-1923:05marshallBy using the actual db#2018-09-1923:05matthew.grettonYup - That was the great thing about the in-mem impl. It gave you "reality" in a true unit test.#2018-09-1923:06matthew.grettonGiven there is an on-prem version still on offer, I suppose you could still use the in-mem for testing, are they compatible from a transaction data standpoint?#2018-09-1923:06marshallThis might be illuminating as well https://docs.datomic.com/cloud/operation/planning.html#2018-09-1923:07marshallMostly. There are some differences #2018-09-1923:08matthew.grettonThanks. Will take a look at the link, think it's been added since the last time I looked.#2018-09-1923:09marshallDifferences are noted here https://docs.datomic.com/on-prem/moving-to-cloud.html#2018-09-1923:13matthew.grettonThanks - Have you had any problems with the tear off db approach? E.g.
increased time it takes for tests to run in-mem vs over the wire? Network flakiness breaking tests, etc...#2018-09-1923:37matthew.grettonAnyway, thanks for your answers, they have been really helpful#2018-09-2016:55kenny@U6M20CPK2 Because we wanted the ability to run offline by running in-mem dbs only, we have been using this https://github.com/ComputeSoftware/datomic-client-memdb to run our unit tests.#2018-09-1923:37marshallHaven't really seen much issue with that sort of thing#2018-09-1923:38marshallMem is obviously super quick, but most tests I use for tear off db stuff aren't "huge" so it's been fine#2018-09-2007:20seantempestaIs anyone else experiencing intermittent Dynamo DB errors? My datomic instance will run for a week or so and then will crash with a message saying Critical failure, cannot continue: Heartbeat failed.#2018-09-2007:54seantempestaIs there a recommended restart protocol to recover from these errors?#2018-09-2013:05stuarthallowayHi @U06B77GQ6! Are you running in the HA configuration? That will recover automatically.#2018-09-2013:05stuarthallowayAlso, make sure your DDB provisioning matches your needs, you can start by looking at https://docs.datomic.com/on-prem/capacity.html#dynamodb.#2018-09-2108:29seantempestaHi @U072WS7PE! No, I am not using HA. I’m still in alpha testing for my app and we have a tiny amount of traffic and data (the entire database backed up is 336k w/o compression). I’ll take a look at those docs. Maybe we are pushing the defaults?#2018-09-2108:29seantempestaThanks for the quick response. 🙂#2018-09-2109:43stuarthallowayif you are still early days I would look at Datomic Cloud -- it is less subject to this problem, and your overall AWS cost will almost certainly go down if you use ions.#2018-09-2109:45seantempestaYeah, I’d like to transition there eventually. It would require rewriting my whole backend though.#2018-09-2109:49seantempestaI haven’t had any stability issues with the dev storage.
Would it be a terrible idea to just set up frequent backups to S3 for the time being?#2018-09-2111:15stuarthallowayProbably 😂. I would recommend just turning up the DDB knobs.#2018-09-2111:16seantempestaWill do. Thanks for the advice @U072WS7PE. 🙂#2018-09-2013:48asier#2018-09-2013:56matthavenerhttp://datomic.com works for me#2018-09-2013:56matthavenerah, it's an http-only redirect#2018-09-2013:55alexmillerhttps://www.datomic.com is the site#2018-09-2017:05joshkhhow might i go about writing a query to pull all entities?#2018-09-2017:06donaldball(map :e (seq (d/datoms db :eavt))) is a thing you could do#2018-09-2017:07donaldballProbably pour it into a set#2018-09-2017:07joshkhah, d/datoms! i was just looking at that. thanks.#2018-09-2017:11joshkhin case anyone else needs the same, :eavt needs to be in a map (seq (d/datoms (d/db conn) {:index :eavt}))#2018-09-2017:12donaldballhuh#2018-09-2017:12joshkh(at least for cloud?)#2018-09-2213:35ho0manthanks noisesmith
you are right but I wanted to know aside from go blocks blocking threads, in a system heavily dependent on go blocks should I be concerned about separating critical sections' thread pool from others?#2018-09-2323:30noisesmithnothing critical should be in a go block#2018-09-2323:30noisesmiththe number of threads used for go blocks is small, and doesn't grow, they are a method of coordination, not a utility for execution / parallelism#2018-09-2416:16ho0manThanks again#2018-09-2416:27ho0manBut can you elaborate a little more about "nothing critical should be in a go block" and "they are a method of coordination, not a utility for execution / parallelism"#2018-09-2416:27ho0manAren't they supposed to be lightweight threads in a sense?#2018-09-2416:35ho0manYour comment kinda scared me, cause quite a few of my applications are actually a network of channels with async/go blocks doing almost all of the work in between them#2018-09-2416:35noisesmiththey are for lightweight coordination of state between tasks, anything that uses blocking IO or is CPU intensive should not be in a go block#2018-09-2416:35noisesmithasync/thread is OK for blocking IO or CPU intensive tasks#2018-09-2416:36noisesmithit returns a channel that a go block can park on#2018-09-2416:37noisesmithwhat happens is that when resource-intensive tasks are in go blocks, they can starve the channel operations, since the number of threads for go blocks is limited#2018-09-2417:08ho0manThanks a lot#2018-09-2017:12donaldballinteresting, I didn't realize the api varied thusly#2018-09-2017:16joshkhit also seems to max out at 1,000 values. hmmm...
(count (map #(nth % 3) (seq (d/datoms (d/db conn) {:index :eavt}))))
=> 1000
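For reference, the 1,000-datom cap above is the client API's default :limit; a sketch (assuming the Cloud client API, datomic.client.api, and an existing conn) of lifting it:

```clojure
;; Sketch, assuming the Cloud client API and an existing `conn`.
;; d/datoms takes an arg-map; :limit defaults to 1000, and per the
;; client docs :limit -1 requests the full result (chunked under
;; the hood).
(require '[datomic.client.api :as d])

(defn all-entity-ids
  "Set of every entity id in db, via a full :eavt scan."
  [db]
  (into #{} (map :e) (d/datoms db {:index :eavt :limit -1})))

;; usage: (all-entity-ids (d/db conn))
```

Bear in mind a full :eavt scan pulls every segment to the client, so this is only reasonable on small databases.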
#2018-09-2017:43favilatry :limit and :offset in your arg map#2018-09-2017:44joshkhyup, that did it. the api docs refer to the top of the namespace for further docs. is the client open source?#2018-09-2101:23stuarthalloway@joshkh @U09R86PA4 Datomic client is Apache 2 license, license file and source code are present in the .jar#2018-09-2101:24favilaah didn't know that#2018-09-2017:45favilajust scroll up#2018-09-2017:45favilano, nothing datomic is open#2018-09-2017:46joshkhah. there it is. thanks.#2018-09-2017:46favilaa query would probably overwhelm the instance, but you could also do [:find ?e :where [?e]]#2018-09-2017:46favilait would dedup for you, but you still have to be able to fit all entity ids in memory#2018-09-2017:46favilain a set#2018-09-2017:46joshkhExceptionInfo Insufficient binding of db clause: [?e] would cause full scan clojure.core/ex-info (core.clj:4739)#2018-09-2017:46favilaah#2018-09-2017:47favilawell, that's good#2018-09-2017:47favilathere's no efficient way to do what you are doing#2018-09-2017:47favila...why are you doing it?#2018-09-2017:47joshkhthat's what i'm worried about. basically, all i want is to clone a (cloud) database, and i don't care about history.#2018-09-2017:48joshkhschema migration was the easy part.#2018-09-2017:49favilaoh, in this case you do want all datoms#2018-09-2017:51favila@joshkh https://gist.github.com/favila/785070fc35afb71d46c9#2018-09-2017:51favila#2018-09-2017:52favilaThis code is on-prem, and also predates mem dbs having history#2018-09-2017:52favilabut it may help you get started#2018-09-2017:52favilahonestly reading the tx log and transacting each one into a new db is the simplest and safest thing#2018-09-2017:58joshkhfantastic. thanks a lot. 🙂#2018-09-2102:50idiomancyhmmm, so... I want a query that returns a list of all items that match a predicate and a list of all items that do not match that same predicate... 
is that something that's possible with the datomic datalog?#2018-09-2102:51idiomancyas in, with a single query#2018-09-2102:55donaldballI think the most common advice would be to do the split outside of the query, e.g. using group-by and your predicate.#2018-09-2102:56favilaIt’s possible with a single query using or-join, but two queries is fine #2018-09-2102:57favila(Or rules)#2018-09-2102:57idiomancyinteresting.. or-join#2018-09-2102:57idiomancywill be studying that#2018-09-2103:08idiomancyhmm... anyone know what the problem here is?#2018-09-2103:08idiomancy#2018-09-2103:08idiomancy#error {:message "Cannot parse rule-vars, expected [ variable+ | ([ variable+ ] variable*) ]"#2018-09-2115:02marshallThe first argument to an or-join needs to be the set of variables to join with the rest of the query#2018-09-2115:03idiomancyHmm..#2018-09-2115:04marshallin your case it would be [?address]#2018-09-2110:38mpingHi, is it possible to insert a datom for a given db instant? I want to backfill historical data#2018-09-2113:26favilaYou can set the tx instant of a tx in the past, but not before other tx instants#2018-09-2113:26favilaSo if you already have newer data you can’t change history#2018-09-2113:27favilaIf this is a one-time thing you can replay the tx log of the old db into a new one, injecting your historical txs as appropriate#2018-09-2113:28favilaThis technique is called “decanting” for google#2018-09-2113:29favilaBut if making stuff in the past is a regular thing you might have a separate domain of time and you should model that explicitly#2018-09-2114:57mping@favila yes, I can inject in chronological order, how can I supply the tx instant when transacting?#2018-09-2115:00favilahttps://docs.datomic.com/on-prem/transactions.html#explicit-db-txinstant#2018-09-2115:01mpinggeez thanks, I searched for that#2018-09-2115:01favilae.g.
{:db/id "datomic.tx" :db/txInstant #inst"2018-01-01"}#2018-09-2116:38mpingthanks again!#2018-09-2117:41idiomancyis there a good way to guarantee that an attribute cannot appear on an entity unless another attribute is present?#2018-09-2117:43johnjin your application code or a transaction function#2018-09-2117:46favilathe only types of consistency datomic can maintain by itself are attribute value-type, unique-value (unique a+v asserted among all datoms), and any precondition (NOT postcondition!) you can test in a transaction function#2018-09-2118:23idiomancygotcha, that makes sense#2018-09-2119:22grzmWhat's the best way to filter with a date range in a Datomic query? :where [(.after ?time-a ?x)]? something else?#2018-09-2120:35csmyou can use < and > with dates, e.g. :where [(> ?time-a ?x)]#2018-09-2121:05grzmcool. thanks!#2018-09-2122:18ezmiller77I have an app that I migrated to use datomic cloud, but have only used on my local dev machine so far. This is mostly just a toy app. What's the easiest, most advisable way to run it elsewhere? I've been using a docker stack running on a single machine to run the app, and I thought I might try to run the datomic-socks-proxy script in a docker container, but this is proving more challenging than I thought. Any suggestions?#2018-09-2206:37rhansenuse datomic ions#2018-09-2209:03henrikAs above, use https://docs.datomic.com/cloud/ions/ions.html
It uses CodeDeploy under the hood. From a usage perspective, it’s just a push command and a deploy command to have your app up and running in the same instance as Datomic.#2018-09-2209:06henrikFor serving web pages, you set up API Gateway to call a lambda, which in turn calls a function in your app to serve the page.#2018-09-2315:09ezmiller77Hi, Sorry I missed this. I'll take a look at the ions. Just saw Rich Hickey's talk about it in NYC. But I had the impression that it would require significant re-engineering especially around the api layer.#2018-09-2315:11rhansenIf you're using ring, you're more or less ion ready.#2018-09-2315:23ezmiller77Ahh okay. I'll give it a shot then.#2018-09-2417:51ezmiller77@U0H2CPW6B I remembered you need to be using tools.deps to use datomic ions. Currently, this project just uses project.clj for deps.#2018-09-2417:55henrikPorting deps from Lein to tools.deps is pretty straightforward.#2018-09-2418:00ezmiller77Maybe this is a good bridge?#2018-09-2418:00ezmiller77https://github.com/RickMoynihan/lein-tools-deps#2018-09-2418:00ezmiller77Just a bit wary here because in moving to datomic cloud by far the most trouble I had related to dependency conflicts.#2018-09-2418:10henrikYeah, if you're using overlapping dependencies, Datomic is going to override them, and inform you. You might as well stick to the versions it's using.#2018-09-2419:02ezmiller77Got the application to run locally anyway with lein-tools-deps. Only oddness is that I get errors if I declare clj-time in the deps.edn file and not in project.clj. (lein-tools-deps merges deps from deps.edn with dependencies specified in project.clj).#2018-09-2122:30andy.fingerhutIs there any published version of Rich's slides from his Sep 12 clojure/nyc talk on Datomic Ions?#2018-09-2122:34andy.fingerhutHmm. Looks like at least some of them are the same from Stu's video on http://datomic.com, but that recording makes them more easily readable.
Reason for asking: I was planning to publish a transcript of Rich's Datomic Ions talk, and wanted to include images for at least some of the slides, similar to the transcripts that are already available here: https://github.com/matthiasn/talk-transcripts/tree/master/Hickey_Rich#2018-09-2213:30hmaurer@andy.fingerhut how was the talk?#2018-09-2217:30andy.fingerhutI thought it was quite good. I've really never developed a database-backed app believe it or not (mostly embedded and low level software), and never deployed in the cloud, so I am an oddball in the software world these days, I expect. If you are desperate to read a transcript soon, before it has all of the slide images included, you can see it at the following link, but I plan to add those images before it is published in the repo that I forked from: https://github.com/jafingerhut/talk-transcripts/blob/master/Hickey_Rich/DatomicIons.md I will post another link to the final version here and in the announcements channel when it is done, if you want to wait for that version.#2018-09-2309:47rhansenWhich version of Java does datomic cloud run? 8?#2018-09-2318:01andy.fingerhutThere is now a published transcript for Rich Hickey's talk on Datomic Ions that he presented at clojure/nyc on Sep 12, 2018. If you have a better source for images of some of the slides, or know what he was saying in a few parts I couldn't understand marked "tbd", please open an issue or PR to improve it: https://github.com/matthiasn/talk-transcripts/blob/master/Hickey_Rich/DatomicIons.md#2018-09-2323:27iambrendonjohnThat’s amazing. Thanks!#2018-09-2321:29mpingHi, out of curiosity is there any reason datomic chose gzip over any other compression format?#2018-09-2407:23rhansen@mping my guess is that:
1) It's well supported
2) It gives good compression while still having good performance and relatively low memory usage.
3) Storage is cheap, so the differences in compression format might not be worth the time spent on finding the absolutely most optimal format.#2018-09-2407:30mpingI was wondering because if you want to do a lookup by id, I believe datomic will fetch and decompress the whole segment; maybe this isn’t a problem anyway#2018-09-2416:53favilaI suspect it's nothing more than GZIPInputStream and GZIPOutputStream being in the jdk already (no dependencies) and better than Deflate (the only other jdk option)#2018-09-2508:33mpingyeah, in hadoop stack sometimes people use lzo/snappy/bz2 because they can be splittable and decompressed without reading the complete file but again maybe that isn’t a problem#2018-09-2417:30kennyCan a first deployment of Datomic Cloud use the individual compute and storage templates or do I need to go through the marketplace template?#2018-09-2417:47kennyIs KeyName a required parameter for the Datomic compute stack?#2018-09-2421:34kennyIt appears KeyName is required. Why is this so?#2018-09-2501:05zkAnybody run into ion deploys failing silently?#2018-09-2501:06zkAt least I think silently, maybe I’m not looking in the right place but nothing’s showing up in CloudWatch > datomic-<stack-name> log#2018-09-2509:11stopahey all -- noob q -- is there a maximum length limit for a :db/type :string? if so, what is it?#2018-09-2509:22henrikFor Cloud, there seems to be a limit of 4096 characters: https://docs.datomic.com/cloud/schema/schema-reference.html
On-prem doesn’t seem to have this limitation.#2018-09-2509:29stopaThanks @U06B8J0AJ! Indeed we have on-prem -- I am guessing it's the size of a sql blob. If someone confirms would be awesome 🙂#2018-09-2509:51lenCan I get a conn from a db ?#2018-09-2513:48mping(d/db conn) using the peer lib#2018-09-2514:21Joe LaneI think he is asking the other way around.#2018-09-2515:06mpingyep, my bad#2018-09-2515:11mping(let [db (d/db conn)]
(d/connect (:id db)))#2018-09-2515:11mpingapparently (:id db) gets the uri#2018-09-2515:11mpingundocumented, though afaik#2018-09-2517:35cap10morganDo the recent Datomic transactor EC2 images enable enhanced networking?#2018-09-2519:10souenzzois it a good idea to compose d/as-of with d/since to make queries about a "time range"?
does it work on the client API too?#2018-09-2519:14souenzzoso I can do
(users-that-changed-name db) => all users that changed name
(users-that-changed-name (d/as-of db #inst"2018")) => users that changed name before 2018
(users-that-changed-name (d/since (d/as-of db #inst"2018") #inst"2017")) => users that changed name just in 2017#2018-09-2519:12idiomancySo, I'm thinking about a thing. It's probably crazy, and if someone can tell me off the bat "hey, that's crazy" I'd appreciate it.. I want to analyze log files stored on S3, and Amazon provides its own query analysis tool using a redshift cluster, but you know what would be really cool?
Using datomic as the query engine#2018-09-2519:13idiomancyLet's assume that I have total flexibility over how the logfiles and S3 buckets are structured#2018-09-2519:15idiomancyThis might be a bit of a pipe dream, but can everyone just stop for a second and consider how cool it would be to make datomic a log aggregator?#2018-09-2519:22souenzzoI already made a big data product with datomic
(with on-prem)
I received some GB of CSV files + a running SQL database, then I made a pipe to transact this data into a temporary DB-URI
Once I finished this "raw" import, I made some queries in this db and inserted into the "real" db-uri, which is used by the application
This "real" db-uri grows huge very quickly, so monthly I create a new one; the application sometimes needs to query the current + all older dbs.
A possible optimization is to use CSV+SQL+older dbs to create the "current" db.#2018-09-2519:26idiomancyHuh, fascinating. It might take me a while to fully parse that, but I'm pretty sure that sounds like what I had in mind. Hmm. I think what I would do is possibly use an SNS event to trigger a transform on s3 raw log files. It would reformat them to something that could be ingested by datomic?#2018-09-2519:26idiomancyBut either way, the basic idea is to have a datomic view of imported raw files#2018-09-2519:16idiomancyI mean, the model intuitively seems to map so well! It is, after all, a log aggregator by nature#2018-09-2520:55Daniel HinesI'd like to make what is essentially a to-do list app to coordinate daily jobs between a team of people, and I want to use Datomic, but I'm not sure how to model my scenario in Datomic. Every day, users need to check off whether they've completed each task on the list, but the list will evolve over time. When, say, an admin user adds another item to the daily to-do list, should I then update the schema to include that item as an attribute belonging to a to-do list, or is there a better way to model it?
If you're querying, just ask [?list :list/todo ?todo]. With that in mind, if you want the constraint that each todo is a member of only one list, I would instead have a cardinality one reference on the todo (`:todo/list`) and query for all todos pointing to a given list when you need it. If you're pulling, you can back-reference by saying (pull db [:list/name {:todo/_list [:todo/name :todo/completed]}] <list id>). Look up :as in the pull docs if you want to rename the back-ref. Are you planning on resetting the completed bools daily?#2018-09-2521:34timgilbertSay, is there a practical difference between '[:find (count ?e) . :in ...] and '[:find (clojure.core/count ?e) . :in ...] ? I'm messing around with backtick and it (rather irritatingly) namespace-resolves all my symbols#2018-09-2521:36timgilbertI'm assuming the aggregation function looks for the literal symbol 'count, just curious if anyone knew whether that's true or not#2018-09-2601:22favilaThe “builtin” query aggregators and functions match on a symbol with no namespace #2018-09-2601:22favilaLike count, sample, get-else, missing?#2018-09-2601:24favilaThis is a thing that may help https://github.com/brandonbloom/backtick/blob/master/README.md#2018-09-2609:30joshkhbefore i continue down the road of a custom migration tool, does Cognitect ever plan on supporting a feature to copy/clone a Datomic Cloud database?#2018-09-2609:31henrikWell, even if they do, could you wait until the arbitrary point in time when it is released?#2018-09-2609:32joshkhthat's the reason i'm asking. if the feature was around the corner then i'd make do.#2018-09-2609:33joshkh..but can't avoid it forever 😉#2018-09-2612:15val_waeselynck@U0GC1C09L maybe an adaptation of this would work for you? https://gist.github.com/vvvvalvalval/6e1888995fe1a90722818eefae49beaf#2018-09-2612:16joshkhthanks @U06GS6P1N.
i guess i'm just curious if copying/cloning a database will ever be a cloud supported feature 🙂#2018-09-2612:16joshkh(officially)#2018-09-2613:17jeff.terrellThere's a feature request app you could check. Try this page, then click on the 'feature request' button. https://www.datomic.com/support.html#2018-09-2704:12henrikIf you want to just copy/clone (not do a migration), isn't the implementation basically just:
1. Create new DB with the same schema,
2. Write a loop that sources data from one and transacts it to the other.#2018-09-2617:47eoliphantHi, i'm trying to upgrade a Cloud 402 system to the latest version, and following the 'first upgrade ever' instructions. The stack deletion failed because it looks like the TxGroupRole still has policies attached. Is it ok to fix this by hand and retry the deletion?#2018-09-2618:00eoliphantSolved the problem, looks like the delete is only pulling off the policies that were added via the stack creation, the policies i added by hand were gumming up the works. Might be a useful FYI for the docs, and/or perhaps a future rev can grab the full list of policies and remove them#2018-09-2620:29jaretThanks! I didn’t think of that. I’ll add it to the first upgrade docs. It shouldn’t be a problem for upgrades after that point#2018-09-2623:03eoliphantquick question, I'd like to match an attribute's value if it exists, and nil if it doesn't. Tried or'ing my clause with a missing but that makes datalog frown lol because of the var mismatch. or-join doesn't help as I need the value
(or [?o :otuabund/accepted ?a]
[(missing? $ ?o :otuabund/accepted )])
I've a query aggregate function that looks at the collection of these and returns a value based on whether they say they're all true, false, mixed, or 'missing'#2018-09-2623:18favilaget-else with a sentinel value#2018-09-2623:18favilanil seems to be toxic to datalog, i always use sentinels instead#2018-09-2623:19favila[(get-else $ ?o :otuabund/accepted :missing) ?a] for example#2018-09-2623:40eoliphantah, that's simple. in the meantime, i realized it was better modeled with enums anyway, but will keep that in my back pocket#2018-09-2623:42eoliphantrandom question, :db/index is automatic in Cloud? I've a schema generator dusted off and I just realized it wasn't adding it to schema attributes, and of course it blew up, when i added it#2018-09-2716:24oscarYeah. It's always on in Cloud.#2018-09-2710:43Petrus TheronIs Datomic a good choice for IoT time series sensor readings? Esp. for intermittent connectivity, where a sensor might only connect once a day.#2018-09-2717:52Mark AddlemanWe use Datomic for time series monitoring data from software environments. We are pretty pleased with the result. We store 300k+ metric streams of data every 30 minutes.#2018-09-2717:55Mark AddlemanWe encode each 30 minute metric timeslice as a b64 compressed string.#2018-09-2717:56Mark AddlemanI’m familiar with infrequent metric streams (like you describe) and I can’t see why that would be a problem. Does the sensor report a series of data points or a single point when it connects?#2018-09-2713:52stijnwe are migrating from client to ions, but in the meantime we need to let our code run on elastic beanstalk for a while to give clients the opportunity to move to api gateway (we can't keep url's the same).
Now, we have a codebuild project setup for the elastic beanstalk that creates a docker container, but I'm wondering what are the exact credential requirements for being able to download the ion jar from s3?#2018-09-2713:53stijncan we fix this with iam instance profiles on the build server?#2018-09-2714:14stijnHmmm, nevermind, I should remove that dependency from the deps.edn profile that tries to create the beanstalk build #2018-09-2718:48donaldballI probably need to handle this a layer up in my app stack, but: there’s no effective way to put a datomic connection into read-only mode, is there?#2018-09-2808:55ninjaHi, is it possible to add multiple JARs to the transactor classpath? The documentation only covers the case where a single jar is added. There has to be some kind of separator...#2018-09-2811:34ninjaJust found a solution. For the record:
One needs to copy the JARs that should go on the transactor's classpath into the /lib directory of the transactor. There are already several other jars present like ant and so on. Nothing else needed to be done (at least for datomic free).
This is something I was not able to find in the docs. Maybe some lines about this topic should be added.#2018-09-2809:47Andreas LiljeqvistSometimes I want to use multimethods on a pulled entity.
Problem is that I have no explicit type to dispatch on.
Any words of wisdom regarding this?
One solution is to add :db/entity-type or something to all my schemas#2018-09-2813:26pvillegas12You can also dispatch on the presence of specific combination of attributes#2018-09-2816:05Andreas LiljeqvistI fail to see how, without leaking knowledge to the dispatch fn?#2018-09-2816:46steveb8nI have an :all/type attr on all entities#2018-09-2810:02joelsanchezwell. I use an :schema/type attribute and all my entities have a type#2018-09-2810:03joelsanchezI think it's pretty common practice with datomic, to have an attribute for that#2018-09-2810:17dazldif you have a shared attribute, does that cause any issues with indexes?#2018-09-2810:17dazldsay, :sys/created-at or something too#2018-09-2813:20octahedrionwhen Datomic says "tempid used only as value in transaction" is there any way to get the specific assertion that caused it ?#2018-09-2816:39joelsanchezthat's one of the most cryptic error messages in Datomic imo. it can mean using a tempid of an entity that's not in the transaction, or trying to use a string where a ref should've been#2018-09-2816:39joelsanchezin particular, the second case isn't obvious at all#2018-09-2819:09mishaladies and gentlemen,
1) do you add your custom attributes to datomic schema to specify the domain-type of a ref-type attribute's value(s)?
For example, :foo/bars is a :type/ref :cardinality/many attribute, and it is supposed to contain :bar entities. But vanilla db-schema does not specify the type of the objs it points to.
2) how often do you have the same :type/ref attributes pointing to the objs of different domain type? E.g. :foo/bars pointing to 2 :bar objs and 1 :baz obj.#2018-09-2819:20joelsanchez1) yes I specify the type of a ref, but I don't validate it
2) almost always, a ref attr has values that refer to entities of the same type#2018-09-2819:21mishahow your (1) helps then? isn't some "name convention" enough then?#2018-09-2819:22joelsanchezit helps with ES indexing, but not much else#2018-09-2819:22misha3) how do you give a name to a domain obj, if there is no reified "entity" notion in datomic, because any id can "contain" any attributes out of schema#2018-09-2819:23mishato lookup clojure-to-ES mappings?#2018-09-2819:24joelsanchezI have a :schema/type attribute. every entity has this attribute. an example value is :schema.type/user. when I create a ref attribute, I usually set a :schema/refEntityType, for example :schema.type/user.#2018-09-2819:24joelsanchezin the case of ES, it helps me to identify special cases, like i18n entities#2018-09-2819:24joelsanchezI'm just saying, it's not a bad idea, it can be helpful, but it's not that needed#2018-09-2819:26joelsanchezfor example, in ES I can't just save :product/name as a product/name field in ES. I need to generate one for every lang, such as product/name__en_US for indexing purposes#2018-09-2819:26joelsanchezwhen I encounter a ref with a refEntityType of :schema.type/i18n, I do that#2018-09-2819:27joelsanchez> any id can "contain" any attributes out of schema
this is helpful, but it's also helpful to know the type of the entities, and to know what type a ref should point to 🙂#2018-09-2819:28joelsanchezthere are attrs that are specific to an entity, like :user/name, but also global ones#2018-09-2819:40mishasweet, thank you for the reply#2018-09-2902:56drewverleeIs there anything that can take the grammar https://docs.datomic.com/on-prem/query.html#sec-4-1 and tell you, given an example, how the data corresponds to the symbols in the grammar?#2018-09-2911:00solussd@misha I avoid explicit entity “type” attributes. Oftentimes an entity can be of many “types”, e.g., an entity may have a :customer/orders attribute, meaning it is a customer that has placed orders and :twitter/id attribute meaning it is a twitter user. A query, containing, [?e :twitter/id ?twitter-id] … and returning ?e is about entities in the database of “type” twitter user.#2018-09-3000:11hmaurer@solussd don’t you find it problematic to be completely duck-typed in this way?#2018-09-3000:16solussdI wouldn’t call it duck typing, per se. Those attributes have type and they’re part of the datomic schema and mean something. I.e., a datomic attribute should only have one semantic meaning.
Practically though, no, it’s worked quite well for me. :)
W.r.t. the entities possibly being of multiple entity “types”, it’s set membership / set semantics for typing.#2018-10-0108:40octahedrionwe've found the same thing: instead of thinking in terms of types think in terms of qualities of things -- everything is a composite of multiple qualities#2018-10-0108:41octahedrionstill, it's convenient to have a :kind attribute for some things#2018-09-3000:32hmaurer@solussd oh; what I meant is that, with this approach, it seems to me quite hard to ensure that specific attributes are present on certain entities. i.e. that a “first name” and “last name” is provided for an entity that acts as a “person”, etc#2018-09-3000:32hmaurer(but my experience is very limited)#2018-09-3000:33solussd@hmaurer ah, I suggest using transaction functions as entity constructors. That’s the idiomatic way of ensuring a particular shape for a particular kind of entity in datomic.#2018-09-3000:42hmaurer@solussd oh interesting; thanks! I’ll look into this#2018-09-3000:42hmaurerwhen you say idiomatic, can you point at some articles/talks/projects that do this?#2018-09-3000:42hmaurerout of curiosity#2018-10-0109:54stijnis it possible to 'change' the name of a datomic cloud system? i.e. create a new stack, but use the storage of another one that has a different name? (we made a mistake on naming conventions)#2018-10-0117:24notanonHello all. I am beginning the arduous process of attempting to do a POC for datomic cloud at my very large company. Our cloud security team is very concerned with potential problems and I'm hoping you guys can help answer their questions.#2018-10-0117:24notanon1) How do they handle security/patch updates to the servers they allocate. We’ve seen AWS Marketplace offerings ignore critical patches on EC2 instances for example.
2) We do not wish for Datomic to create its own VPC, could they provide a cloudformation template without a VPC or give us the template so we can try to modify it ourselves.#2018-10-0117:27notanonI've seen in my own POC work that the answer to 2) is yes, we're free to modify the templates and they're available on the datomic cloud releases page. Are there any concerns with us modifying the template to run in an existing VPC?#2018-10-0118:15marshall@notanon You are free to modify the CFT as necessary. Be aware that future updates may not work seamlessly if you’ve modified the CFT to use your own VPC, however, and that is technically an ‘unsupported’ deployment
As far as updates - we intend to keep our AMIs as up to date as possible.#2018-10-0118:15marshallOf course, you always have the ability to SSH to your system instances if necessary#2018-10-0118:18notanonthanks. Is there anything specifically datomic cloud is doing that requires its own VPC? Assumptions it's making perhaps?#2018-10-0118:18notanonIs there any guarantee/SLA around keeping the AMIs fully patched?#2018-10-0118:19notanonI'm not very familiar with AWS, do you mean by having the ability to SSH into the instances you're implying we can patch them ourselves if we needed to?#2018-10-0118:25marshallprobably not a great option#2018-10-0118:25marshallsince they should be considered to be ephemeral#2018-10-0118:25marshalli.e. if one goes down, it’s going to restart from the base AMI#2018-10-0118:25marshallThere are a number of reasons/assumptions included in the creating a VPC approach#2018-10-0118:26marshallin particular, network configuration, LBs, etc#2018-10-0118:37notanonSo the response to our cloud security team for 1) would be, the datomic team plans to keep the AMIs as up to date as possible, but there's no guarantee/contract ensuring this. It's also impractical to attempt to patch them ourselves.
And for 2) We are free to edit the templates as much as we want, but this would make our install unsupported (I don't know if enterprise support is included in datomic cloud, does this mean we'd be unable to have it in any case). We would likely need to handle all of the network configuration ourselves (this was expected by our team when we discussed it).#2018-10-0118:37notanonI'm not sure I understand the comment about load balancers, how does the VPC assumption play into that?#2018-10-0118:51eoliphant@notanon we initially wanted to do the same thing, had even started playing around with the CF scripts, etc. though as @marshall points out, you then run into potential support issues. We have a pretty sophisticated setup with accts/vpc's for each lifecycle stage, as well as dedicated ones for in/outbound network traffic, a vault vpc for logs, etc etc. While the VPC as the 'unit of deployment' can be understandably off putting and/or counterintuitive at first, a datomic cloud 'system' is comprised of a lot of stuff, to the point that you probably don't want it in 'your' VPC. We've basically just setup a mirror Datomic system/vpc for each post dev lifecycle stage , and N per developer ones that are peered into our lab/dev VPC#2018-10-0119:09notanonInteresting. I'll definitely pass this along. Though I doubt they'll be very receptive. We have 1000+ servers, hundreds of databases, machine learning stuff, basically the entire AWS offering lol all in one VPC. I doubt they're going to see datomic spinning up 10s of aws offerings as a drop in the VPC bucket.#2018-10-0119:11notanonIs there any kind of documentation out there on best practices for VPCs on AWS? 
Something to at least support the idea of isolating things like you've outlined and datomic cloud can't seem to live without?#2018-10-0120:45eoliphantNot that I know of, and yeah a more 'normal' database, that's 1-N servers, an ASG and a LB, etc, or even say Datomic On-prem, is more easily (normally) plopped into an existing VPC setup. So yeah, doing it this way is certainly atypical, but after mucking around with it a bit, I think it certainly makes more sense, based again, on the complexity of the various bits.
in practice, it's not really been an issue. We use terraform for our infrastructure, so we've just wrapped the datomic CloudFormation stuff into those scripts. So we just apply a 'build a "test2"' env and it spins up the AWS account, the 'main' and the datomic vpc, sets up the peering, etc etc.#2018-10-0120:48eoliphanthey we've loaded a good chunk of data into a datomic solo system, and since then, we're having issues even connecting to it, we've tried restarting the instance, etc to no avail, and there's nothing useful in the logs. Any ideas?#2018-10-0123:20stuarthalloway@U380J7PAQ how much is a good chunk?#2018-10-0123:27eoliphantSorry about that 🙂 Got this second hand from one of my devs. Ok, just checked his dashboard looks like around 2M datoms#2018-10-0123:29eoliphantand there's no data (except CPU) from the last couple days, at all. He did the load, and ran into the issue on friday#2018-10-0123:31eoliphantAlso, that stack is a little older, v407#2018-10-0314:37jaretHey @U380J7PAQ could we move this conversation to a ticket? I’d also like to see if we could get read-only access to your Cloudwatch logs to look closer at your inability to connect. That’s better done over a ticket.#2018-10-0314:37jaretIf you can give me a good e-mail address, I can start a case and copy this conversation into a ticket for us.#2018-10-0315:01eoliphanthi @U1QJACBUM [email redacted] is good#2018-10-0315:02eoliphantThe stack is being upgraded to the latest, so e#2018-10-0315:02eoliphantwe'll try again after that#2018-10-0315:10jaretGreat!
I created a case and sent you an e-mail with instructions for a ReadOnly Cloudwatch account in the event you’re able to provide us access and you still can’t connect on the latest stack.#2018-10-0120:57marshallcan you define “good chunk”#2018-10-0120:58marshalland what does your dashboard show?#2018-10-0210:08jarppeIs there a way to use client api and have dynamically created in-memory databases for testing?#2018-10-0210:10jarppeIn testing it has been really nice to have an in-memory database created in test fixture, but I guess that is not possible if we use the client api#2018-10-0210:39steveb8n@jarppe yes! if you want to roll your own you can start with https://gist.github.com/stevebuik/9b219090a2d10cc4fb06d62ee928ca7e#2018-10-0210:40steveb8nor for a more refined solution https://github.com/ComputeSoftware/datomic-client-memdb#2018-10-0210:40steveb8nI rolled my own because I wanted interceptors in that layer as well. I have not tried the OSS lib#2018-10-0210:41Hadii have a query like that and im using datomic free. i want to display tx time with given entity id. but i currently only have tx id as the result. is it possible to get the time values? (FYI datomic pro can use :db/txInstant to make the time )#2018-10-0213:28favilaI'm not aware of any limitation in datomic free except for peer counts. It should just work. Try it.#2018-10-0213:29favilaYour query could be rewritten as#2018-10-0213:30favila(d/q
'[:find ?e ?attrname ?v ?txinst ?added
:in $ ?e
:where
[?e ?a ?v ?tx ?added]
[?a :db/ident ?attrname]
[?tx :db/txInstant ?txinst]]
(d/history (db))
eid)#2018-10-0304:11Hadithankyou. after i re-run the repl it solves the :db/txInstant. maybe it's a bug#2018-10-0211:40jarppe@steveb8n Great! Precisely what I'm looking for, thanks.#2018-10-0215:47kennyI wrote that lib because we needed it for probably similar reasons that you need it. LMK if you have any questions.#2018-10-0213:00steveb8nQuestion: looking at this lib I learned about all the different kinds of uuids. In my ion code I'm simply using java.util.UUID but I'm wondering if there's any value in using other uuid types? Any uuid experts out there?#2018-10-0213:32favilaName-based uuid (version 5) has some nice properties#2018-10-0213:32favilajava.util.UUID can represent all uuid versions, though, it's not a matter of needing a different type#2018-10-0214:20steveb8nCool. It's this lib https://github.com/danlentz/clj-uuid/blob/master/README.md#2018-10-0215:14jarppeI agree with Francis, v5 UUIDs are great. I use them when I need to map something like user ID to UUID so that the same ID always maps to same UUID.#2018-10-0216:28steveb8nThat is a good tip. I can imagine some use cases e.g. saving a lookup by name for entities from a db when the name is immutable. Are there other less obvious scenarios where it's handy?#2018-10-0217:43jarppeThis is not relevant to datomic, but I used it when I generated a test fixture to MongoDB, basically exactly what temp-ids are in Datomic.#2018-10-0218:33favilacan also use v5 uuids for key-by-value situations#2018-10-0218:33favilacompound indexes, hashes, etc#2018-10-0218:48steveb8nI had not thought of the compound index. I'll try that in datomic. Thanks!#2018-10-0218:50favila(it won't be a sorted index)#2018-10-0307:03steveb8nGood point.
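[Editor's sketch] The name-based UUIDs favila and jarppe recommend above are deterministic: the same input always yields the same UUID. clj-uuid provides the SHA-1-based version 5; the JDK itself only ships the MD5-based version 3, but that is enough to demonstrate the property. The `name-uuid` helper below is ours, not from the discussion.

```clojure
;; java.util.UUID has a built-in *name-based* constructor (version 3, MD5).
;; clj-uuid's v5 uses SHA-1 instead, but the key property is the same:
;; equal input bytes always yield the identical UUID.
(import 'java.util.UUID)

(defn name-uuid
  "Deterministic UUID for a string name (hypothetical helper)."
  [^String s]
  (UUID/nameUUIDFromBytes (.getBytes s "UTF-8")))

(= (name-uuid "user-1234") (name-uuid "user-1234")) ;; => true
(.version (name-uuid "user-1234"))                  ;; => 3
```

This determinism is what makes a v5 UUID usable as a stable mapping from an external id (or a compound key) to a UUID.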
I'll stick to my txn fns for that#2018-10-0216:13donaldballHey, when applying a txn to a dev transactor earlier, my coworker hit this error:
WARN [2018-10-02 11:59:52,036] clojure-agent-send-off-pool-8 - datomic.connector {:message "error executing future", :pid 72084, :tid 243}
org.apache.activemq.artemis.api.core.ActiveMQObjectClosedException: AMQ119017: Consumer is closed
at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.checkClosed(ClientConsumerImpl.java:962)
Google reveals scant results. Any ideas what’s up?#2018-10-0217:44jarppeWhat about transaction functions with client api? What's the client api counterpart for datomic.api/function?#2018-10-0217:59kennyhttps://docs.datomic.com/cloud/transactions/transaction-functions.html#custom#2018-10-0218:18jarppeThat's for cloud and says that only the built in functions and classpath functions are supported.#2018-10-0218:19jarppeDoes this mean that the cloud api does not support functions like peer api with datomic.api/function does?#2018-10-0218:19jarppeThe documentation is.... vague#2018-10-0219:16rgorrepatiHi, I ran into the same issue as @donaldball.. Deja vu there, almost happened at the same time, and we are not co-workers 😉#2018-10-0219:16rgorrepati[clojure-agent-send-off-pool-803] WARN datomic.connector - {:message “error executing future”, :pid 43, :tid 87854}
org.apache.activemq.artemis.api.core.ActiveMQObjectClosedException: AMQ119017: Consumer is closed
at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.checkClosed(ClientConsumerImpl.java:962)
at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.receive(ClientConsumerImpl.java:194)
at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.receive(ClientConsumerImpl.java:406)
at datomic.artemis_client$fn__1607.invokeStatic(artemis_client.clj:169)
at datomic.artemis_client$fn__1607.invoke(artemis_client.clj:162)
at datomic.queue$fn__1363$G__1356__1368.invoke(queue.clj:18)
at datomic.connector$create_hornet_notifier$fn__7866$fn__7867$fn__7870$fn__7871.invoke(connector.clj:195)
at datomic.connector$create_hornet_notifier$fn__7866$fn__7867$fn__7870.invoke(connector.clj:189)
at datomic.connector$create_hornet_notifier$fn__7866$fn__7867.invoke(connector.clj:187)
at clojure.core$binding_conveyor_fn$fn__4676.invoke(core.clj:1938)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)#2018-10-0219:17donaldballha ha nice#2018-10-0219:18donaldballIn our case, we’ve tentatively discovered that batching the forms of the txn into separate txns gets it to transact. Unfortunately, the original txn is only 77 forms, like 27k in size, not especially large, so it’s a little bit surprising.#2018-10-0219:26donaldballIt’s unsettling to note that downgrading from java10 to java8 fixes the problem#2018-10-0219:33donaldballSpecifically, downgrading the peer from java10 to java8 fixes the problem. Are there known issues with datomic peer and java10?#2018-10-0220:28rgorrepati@donaldball Do you mean to say you can reproduce it reliably?#2018-10-0220:29donaldballIt seems so, yes.#2018-10-0220:34rgorrepati@donaldball I was under the impression it is a connection issue between peer and transactor or peer and storage#2018-10-0221:00johnjon-prem has some very very old deps#2018-10-0221:01johnjusing anything past java 8 is asking for trouble#2018-10-0223:20kennyMy datomic storage stack pretty consistently fails to delete due to the Vpc not deleting. If I manually go into my VPCs and delete the datomic-created VPC, it works. This is pretty annoying. Is there a fix for this?#2018-10-0314:08stuarthallowayHi @U083D6HK9. We do not consider deleting a storage stack to be part of any regular workflow, so I am curious why you are doing this?#2018-10-0315:27kennyWe allow our developers to provision Datomic Cloud stacks when they need them. They then delete them when they no longer need them anymore. We end up with lots of stale VPCs and failed stack deletions. Also, because it is launched via a CloudFormation template, one would only expect for it to work with the regular CloudFormation operations, including Delete Stack.#2018-10-0316:26kenny@U072WS7PE a lot of those headaches would be mitigated if devs have a local instance to work against. 
That being said, the Datomic CFT deletion should work as expected.#2018-10-0316:28stuarthallowayI totally agree, and we test deletion in our regression suite. Can you send us more information about the error you are seeing?#2018-10-0316:28stuarthalloway@U083D6HK9 are developers recreating storage stacks against existing storage?#2018-10-0316:29stuarthallowayor have you handrolled something to deal with all of https://docs.datomic.com/cloud/operation/deleting.html#deleting-storage ? We left this manual on purpose to discourage people from deleting their data.#2018-10-0316:30kennyThe Events tab in the CF UI says:
> The vpc 'vpc-0931d229f45a061a1' has dependencies and cannot be deleted. (Service: AmazonEC2; Status Code: 400; Error Code: DependencyViolation; Request ID: 8d2f7d93-f945-4002-b424-c47aba885b04)
and then DELETE_FAILED:
> The following resource(s) failed to delete: [Vpc].
New storage each time. We have a custom script to delete all those resources. I'm planning on adding it to a public gist as I'm sure others have this workflow as well.#2018-10-0316:30stuarthallowayWhy not leave the storage stack up all the time, and just provision compute when needed? That is certainly what we do internally.#2018-10-0316:31stuarthalloway@U083D6HK9 the whole architecture is designed so that you can leave storage up and just reconnect to it. Is there some benefit to doing this extra work that I am not seeing? If there is some isolation we fail to support I would like to make it first class.#2018-10-0316:31kennyBecause developers want to ensure they are working in a clean environment with empty DBs.#2018-10-0316:32kennyWe at first tried the approach of suffixing DBs with UUIDs, but that became a real pain.#2018-10-0316:33stuarthallowayOK, that is good input, thanks! Will discuss with the team.#2018-10-0316:37stuarthalloway@U083D6HK9 Do you take a similar approach with AWS resources, e.g. automating the creation of 1-off DDB, S3, etc. as needed?#2018-10-0316:38kennyYes. We use Pulumi (similar to Terraform) which has the concept of a stack. A stack consist of any number of resources. When created, a stack provisions all the resources with a unique name. This allows us to spin up entire instances of our infrastructure for any given environment: prod, dev, qa, kennys-prod-test, etc.#2018-10-0316:39stuarthallowayare the unique names Pulumi makes better than DB+UUID suffix in some way?#2018-10-0316:48kennyYes. It makes our dev workflow much easier. As an example, our application calls for DBs named accordingly: admin, customerdb-<cust UUID>1, customerdb-<cust UUID>2, ..., customerdb-<cust UUID>N. When working in a REPL, we know that all we need to do is connect to the admin DB, not admin-<UUID>. We ended up writing a wrapper for the connect function that auto-suffixed the DB name with the current UUID suffix. 
But then it became a problem when you wanted to run a development system (i.e. http server for UI dev) and run your tests using a clean DB all in the same REPL. This was the primary motivator for writing https://github.com/ComputeSoftware/datomic-client-memdb. The workflow we followed when working with the peer library was identical and it worked really well. We didn't need to think about what the current binding for the db-suffix was.#2018-10-0316:51kennyUltimately it boiled down to less code we need to maintain. Writing and testing code against the peer library was intuitive and easy.#2018-10-0316:52stuarthallowaythanks @U083D6HK9! This is very helpful input.#2018-10-0314:13luchiniSuper dumb question: on Datomic Cloud, do system names have to be globally unique? By “globally” I mean unique even across completely different AWS accounts.
Context: I have a stack failing to create and the only thing that seems to be a potential source of confusion is that I’m recycling a system name I had used in a different AWS account.#2018-10-0314:17eoliphantdo you know where it's failing? something like a collision on the S3 bucket name might be an issue#2018-10-0314:19stijn@luchini: we have datomic systems in different accounts with the exact same name, so I think the answer is no, but if you try to recreate one in the same account with the same name, it's a bit more work if you don't want to reuse the existing storage#2018-10-0314:58Joe Lane@luchini I’ve had stack creations fail at various times with brand new names as well.#2018-10-0315:22luchini@eoliphant this is what we got "The following resource(s) failed to create: [CodeDeployApplication, LoadBalancer, DatomicCodeBucket, HostedZone, BastionInstanceProfile]."#2018-10-0315:24luchiniThanks @stijn and @lanejo01… I’ll try a few more times. Thanks for disproving my theory 😄#2018-10-0315:50marshall@luchini there are some resources that don’t get destroyed when you delete a stack. you’ll want to look in the tag editor to search for anything from that system name to delete it explicitly#2018-10-0315:51marshallhttps://docs.datomic.com/cloud/operation/monitoring.html#tags#2018-10-0316:06luchiniThanks @marshall#2018-10-0317:22kennyWe automate the creation and deletion of Datomic Cloud stacks. As you probably know, deleting a Datomic stack does not delete all the resources that the stack created. You need to follow these steps to entirely delete the stack https://docs.datomic.com/cloud/operation/deleting.html#deleting-storage. We wrote this clj script to automate the deletion of all the resources a Datomic Cloud system creates and thought the community may find it useful. 
https://gist.github.com/kennyjwilli/55007211070a260044c8e6abcb54dd5b.#2018-10-0318:08stijnI think I also had to delete an IAM policy (datomic-admin-datomic-eu-central-1), which isn't mentioned in the docs @marshall#2018-10-0318:09okocimAre there any recommendations around modeling schemas for dealing with many-to-many relationships? I’m trying to match up information to a main entity from three different data sources, and each of the integrations has their own id for the main entity. However, there is some ambiguity in the matching such that one id from a given data provider can be many ids from another provider. I’m trying to determine whether it’s better to model as refs with a cardinality of many on the main entity, or if I should create ‘linkage’ records that are effectively tuples of the 3 ids that might go together. At the end of the day, I have to pare down the results so that the main entity has exactly one reference to the data from each of the other integrations, by doing a ‘best match’ with some code logic.
All of that is admittedly a bit abstract, so I guess my question boils down to whether there are any recommendations for doing many-to-many relationships among more than two entities using refs or values.#2018-10-0518:20eoliphantwell the good thing about datomic is that you can pretty easily experiment, given the 'universal relation' there's frequently no one correct way.
both of your ideas seem workable, the 'right' answer is probably going to be more dependent on the specifics of the resultant queries, etc. The many cardinality thing seems like a good idea to start, you could even perhaps model it in stages. Where you 'promote' it once you've done your paring down process#2018-10-0408:25staskhi, is there a limitation for number of databases in single datomic cloud system?#2018-10-0408:52steveb8nI'd also like to know this limit.#2018-10-0420:29stuarthalloway@U11FG9Z7Z and @U0510KXTU there is no fixed limit, but the thought (and testing) is around a fairly small number#2018-10-0420:30stuarthallowayi.e. database per customer would be a problem for most customer bases#2018-10-0420:30stuarthallowayWhere would the limit (if there was one) impact a design decision?#2018-10-0500:31steveb8nI considered 1 per customer but avoided it after thinking it through. For me this was just for interests sake so no effects on my design#2018-10-0510:12staskI’m thinking about having a database per customer (each customer has multiple users, so i’m talking about up to a thousand databases per system).
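[Editor's sketch] The two schema shapes okocim weighs above could look roughly like this; every attribute name here is hypothetical, not from the original discussion.

```clojure
;; Option 1: cardinality-many refs straight from the main entity.
[{:db/ident       :main/provider-records
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/many}

 ;; Option 2: a separate "linkage" entity per candidate match, holding one
 ;; tuple of provider ids; the best-match pass later pares these down so the
 ;; main entity keeps exactly one link per provider.
 {:db/ident       :link/main
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :link/provider-a-id
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}
 {:db/ident       :link/provider-b-id
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}
 {:db/ident       :link/provider-c-id
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}]
```

Option 2 keeps every candidate match as data, which matches eoliphant's suggestion of doing the pare-down as a later "promotion" stage.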
It will simplify things like moving customer data between systems (for example when moving a customer from US region to EU region) or removing a customer’s data from the system.#2018-10-0522:38steveb8nAgreed. 1 per customer makes org delete easier. I'm betting that there will be a better excision solution (in cloud) in future to address this. If delete could be done easily, would you still choose 1 db per customer? I'm curious#2018-10-0611:47staskIt would still be simpler to have a db per customer for moving between systems.#2018-10-0413:46khardenstineIs on-prem ok with java9+? I finally got around to updating my dev environment from jdk8 to 11 and I'm getting netty illegal reflection warnings with datomic free on startup#2018-10-0413:56donaldballIf you scroll up, you’ll see that a couple of us have discovered a substantive problem with standard datomic peers in jdk10 running at least against a dev mode transactor. I have yet to confirm the problem against a production transactor, or to reduce the problem to a simple test case, alas.#2018-10-0414:11khardenstineOoph, that's disheartening. Thanks#2018-10-0414:14donaldballIt’s worth noting that we could be outliers and Cognitect is committed to on-prem moving forward, so you may want to have a conversation with their sales or support folk!#2018-10-0423:42johnjCurious, how do you know Cognitect is committed to on-prem?#2018-10-0501:11donaldballhttps://www.reddit.com/r/Clojure/comments/9gmss7/rich_hickey_on_datomic_ions/e696nk0/#2018-10-0416:42kennyIs there a way to pass datomic.ion.dev/push and datomic.ion.dev/deploy the ion-config map as a data structure and not have them implicitly read it in from a classpath location?#2018-10-0420:28stuarthalloway@U083D6HK9 no -- the config map becomes part of the deployed artifact, so reproducibility would take a hit if it was in a command
Not sure how reproducibility takes a hit.#2018-10-0418:44dfcarpenterI'm just starting out with datomic local dev and trying to connect with a client in the repl. When I try to setup the connection I get the error CompilerException clojure.lang.ExceptionInfo: Datomic Client Exception {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :datomic.client/http-result and i'm not sure why. I have the transactor running, I created a database, and I have the peer server running as well#2018-10-0420:45dfcarpenterRealized I made a mistake when starting the peer. All good now.#2018-10-0420:46dfcarpenterCan anyone point me to open source clojure codebases which use dataomic. Im struggling to learn good schema design approaches#2018-10-0421:23eoliphantHi, I'm trying to optimize a query I have. It's relatively straightforward, I have a tree-structure and a recursive rule that says for a given node, I'll match on a parent of a given type. Works fine, returns in around 30ms for a single one. I'm using a collection binding for the node id, and the response time appears to be more or less linear, which is not a big deal for a few of them, but I just ran into a test case where it needs to match ~1000 and it's taking about a minute to complete. any ideas/suggestions on speeding this up?#2018-10-0501:12pvillegas12You can change your data shape to a flat parent child mapping (repeats data obviously) for querying#2018-10-0508:02eraserhdNot sure whether this applies to you, but I had tree structured data where I queried whether two nodes were in the same tree. Rephrasing it to whether two nodes have the same root made it much better.#2018-10-0508:04eraserhdAlso, I have some super complicated queries running all together in under 18ms. They jump to 30ms when Datomic can’t cache the compiled query. This happens when (not= (hash query) (hash (read-string (pr-str query)))).#2018-10-0508:05eraserhd(Regular expressions are the usual culprit.)#2018-10-0514:59eoliphantFigured it out. 
One of my devs had gotten a bit too modular with his rules lol. I rewrote it more simply, and it's 20x faster and no longer displays that linearity#2018-10-0500:47eoliphant@dfcarpenter have you looked at the mbrainz db?#2018-10-0502:32dfcarpenter@eoliphant I will take a look#2018-10-0502:32dfcarpenterThanks#2018-10-0503:50dfcarpenterHow do I turn off the logging in the repl when running datomic?#2018-10-0504:16csm(.setLevel (org.slf4j.LoggerFactory/getLogger "datomic") ch.qos.logback.classic.Level/WARN) may do it#2018-10-0513:06marshall@dfcarpenter Datomic on-prem? You can edit your logback.xml to adjust level and logger target#2018-10-0515:27Andreas LiljeqvistCan I check if an input has a specific attribute - That is, give entity ?e as an input arg and return ?match if attribute ?e :whatever#2018-10-0515:28Andreas LiljeqvistWhere (= ?match ?e)#2018-10-0517:11okocimdid you figure out what you need here? I’m not exactly sure what you’re going for, but I think you can do it more simply than by using the ‘=’ predicate#2018-10-0515:33Andreas LiljeqvistI want to do something like (d/q '[:find [?match ...] :in $ [?e] :where [?e :schema/type :logger] [(= ?e ?match)]])#2018-10-0515:43Andreas LiljeqvistIt can be done in an easier way just by using d/entity and filter, but still?#2018-10-0515:58Andreas Liljeqvistidentity can be used to force the var#2018-10-0516:50eraserhdWhat does it mean when a peer logs at INFO "datomic.kv-cluster: {:event :kv-cluster/get-pod-meta, :pod-key "...", ...} with the same pod-key lots and lots and lots of times in a row?#2018-10-0517:30okocimHas anyone found a preferred way to compose or at least parameterize pull expressions in queries? I find using syntax quote to be a bit awkward, because of the need to var-unquote all of the other symbols in the query with ~'#2018-10-0517:58eraserhdI was doing this for a while with clojure.walk/postwalk.
Just something like (clojure.walk/postwalk #(get params % %) expr), and then you can replace some symbol with a value.#2018-10-0517:58spiedeni just quote the individual things that need it like '*#2018-10-0517:59eraserhdOf course, use rules first, if you can. And pass extra parameters to q and bind with :in. if you can.#2018-10-0518:17okocimThanks for all of the replies. I think I’ll try the postwalk approach. This may be a bad idea, but I’m thinking of using a symbol prefixed with ! (e.g. !customer-with-address) and postwalk to replace those from a params list#2018-10-0518:25khardenstineYou can just pass pull expressions as inputs to your query:
(d/q '[:find (pull ?attr my-pull) .
:in $ my-pull
:where [:db.part/db :db.install/attribute ?attr]]
db ['* :db/doc])#2018-10-0518:28okocimoh, well thanks. I think that’s the best one of all 🙂#2018-10-0521:21kennyIs it ok for me to add my own keys to datomic/ion-config.edn?#2018-10-0521:54kennyI performed a parameter upgrade on my Datomic cluster and it has been updating for the past 25 mins. Is this normal?#2018-10-0521:59kennyAh, it's because my Ions threw an exception. Seems like that should fail the parameter upgrade.#2018-10-0608:12Petrus TheronHey Guys 🙂 I need some advice on reusing the Datomic transaction format (or at least EDN) for serializing power station measurements coming out an STM32 microcontroller's serial bus (via an FTDI chip) that will remain human-readable while gaining machine-readability. I looked at the Datomic Cloud wire protocol, and I wanted to ask about the evolution of the design to save myself future hassle.
At the moment, the controller is spitting out a string like:
START;
Voltage Reading: 3.3V
Current Reading: 25mA
Last start time:
...more readings
END;
Which I'd like to replace with something more extensible and sane, like:
[{:my-company.v1/input-voltage 3.3M
:my-company.v1/timestamp #inst 12312312312
:my-company.v1/output-voltage 5.5M
:my-company.v1/cell-current 0.00005M
:my-company.v1/started-at 1231313123
...} {:my-company.v2/input-voltage 3.36M, ...}]
Specifically, I see that the Datomic Cloud protocol has a tx-data {:tx-data [tx1 tx2 ...]} key when passing txs around.
Any advice on growing the schema (flat vs. nested values), versioning? Stream-ability of the feed? Pub/Sub considerations?#2018-10-0700:53grounded_sageI’ve got an app which I would like to prototype with Datomic Cloud Ions. Though it is a chat not so I’m curious what the cold start/latency story is like. I’m having trouble finding information regarding this. Datomic Ions may not be suitable which is fine but if it is ok then I would prefer to use it. #2018-10-0721:47dominicmThe startup time is good. Clojure isn't run on the lambda, it runs in the auto scaling group, the lambda is a lightweight rpc proxy. #2018-10-0815:35grounded_sageCould you explain what the rpc proxy is, like the work it is doing? I’m still learning all of this. #2018-10-0815:35grounded_sage@U09LZR36F btw thanks for responding. Was thinking there is little activity in here and would go without a response#2018-10-0815:44dominicmI shouldn't have said rpc. I made that bit up, I meant to say "it acts as an rpc". Basically you can think of it as doing a HTTP request to your auto-scaling group, forwarding on the data that went to the lambda to your asg.#2018-10-0815:44dominicmI'm approximating a little bit here 🙂#2018-10-0703:55Joe Lanechat not or chat bot?#2018-10-0704:44grounded_sageChat bot haha#2018-10-0710:20misha@petrus http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html#2018-10-0719:01Joe LaneIt’s not a big deal. Once a conversation starts the lambda is warm, use one lambda for all bot calls and it will always be warm. #2018-10-0809:36grounded_sage@jarppe I get that. But I’m assuming Ions handles all the Lambda work so you essentially just write Clojure code? I’m not up on all of it. I’m in the front end world#2018-10-0809:36stijnquestion on ions push: it complains that there is a :local/root dependency and hence you have to specify a uname. However, what if this local dependency is in the same git repo, shouldn't that use the git commit then? 
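[Editor's sketch] Petrus's EDN-per-reading idea above round-trips with nothing but clojure.edn: one self-describing, namespaced map per line, which can grow by accretion (new keys, a new .v2 namespace) without breaking old consumers. The key names echo the sketch in the question; everything else is assumption.

```clojure
(require '[clojure.edn :as edn])

;; One reading per line; consumers ignore keys they don't understand, so
;; adding :my-company.v2/* keys later never breaks an old reader.
(def reading
  {:my-company.v1/input-voltage 3.3M
   :my-company.v1/cell-current  0.00005M})

;; The emitter side is just pr-str, one map per line...
(def line (pr-str reading))

;; ...and the consumer parses each line independently, so the stream stays
;; trivially resumable after a dropped serial connection.
(def parsed (edn/read-string line))

(= parsed reading) ;; => true
```

Note that BigDecimal literals (3.3M) round-trip exactly, which is one reason to prefer them over doubles for measurements.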
(we are migrating to ions, but have to keep the existing API available, so we have introduced multiple deps.edn projects in the same git repo)#2018-10-0815:42unbalancedlooking for some advice on data modeling best practices. I'm currently using a compound key to track information but my intuition tells me that this is an anti-pattern#2018-10-0815:44unbalancedObviously it would be ideal if I could omit the :item/key and have the uniqueness of the datoms be predicated on :item/name, :item/category, and :item/subcategory but predictably if I make those :db.unique/identity it's steam-rolling other datums#2018-10-0815:48pvillegas12Have you looked at https://github.com/arohner/datomic-compound-index? It does not solve the problem but sheds light into it#2018-10-0816:08unbalancedinteresting. Well at least I'm not the first person to run into this 🙂#2018-10-0816:19marshall@goomba There’s nothing inherently “wrong” with modeling compound uniqueness as a munged-key#2018-10-0816:19marshallyes, it involves redundant data#2018-10-0816:20marshallbut, if you actually require compound uniqueness semantics, then you need to do something like that#2018-10-0816:20unbalancedyayyyy okay great.#2018-10-0816:21marshallis the type of :item/key string?#2018-10-0816:21unbalancedvector#2018-10-0816:21unbalancedof keywords#2018-10-0816:21marshallvector is’nt a datomic db.type#2018-10-0816:21marshallah, it’s cardinality many?#2018-10-0816:21marshalli would probably avoid that#2018-10-0816:22marshallin fact, you cant do that#2018-10-0816:22unbalancedahh sorry I'm actually putting the hash value of the vector but for omitted that for simplicity#2018-10-0816:22marshall" Only (:db.cardinality/one) attributes can be unique.”#2018-10-0816:22marshallok#2018-10-0816:22unbalanced"simplicity"#2018-10-0816:22marshallyeah, a hashed values is probably fine#2018-10-0816:23marshallalthough it does drop one potential advantage#2018-10-0816:23marshallwhich is index locality#2018-10-0816:23marshalli.e. 
[:a :aa :aaa] will hash very differently (maybe) than [:a :aa :ccc]#2018-10-0816:24marshallbut if you made them something like compound strings, they would sort more ‘realistically’#2018-10-0816:24marshall":a:aa:aaa" and ":a:aa:ccc"#2018-10-0816:25marshallalso human readable#2018-10-0816:25marshallwhich is nice for debugging and/or error handling#2018-10-0816:25unbalancedahh good point. yeah I'm at the dev phase where I just throw everything at the wall and see what ticks#2018-10-0816:25unbalancedbut that's a better idea#2018-10-0816:26unbalancedthank you 🙂#2018-10-0816:26marshallnp#2018-10-0819:28ghaskinshi all, im struggling to find out how to determine the txinstant of the last commit to the db#2018-10-0819:28ghaskinsi can get (basis-t) of course#2018-10-0819:28ghaskinsbut then im not sure how to get t -> txinstant#2018-10-0819:31souenzzo@ghaskins
(defn t->inst
  [db t]
  (:db/txInstant (d/pull db [:db/txInstant] (d/t->tx t))))
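A hypothetical usage sketch (not from the thread) tying this helper back to ghaskins' question, i.e. resolving the basis-t of the current db value to the wall-clock time of the latest commit; assumes the peer API and a connected `conn`:

```clojure
;; Sketch: assumes (require '[datomic.api :as d]) and the t->inst fn above.
;; d/basis-t yields the t of the most recent transaction in this db value;
;; t->inst then pulls that transaction's :db/txInstant.
(let [db (d/db conn)]
  (t->inst db (d/basis-t db)))
```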
#2018-10-0819:32ghaskinsawesome, thank you @souenzzo#2018-10-0821:49eggsyntaxIs there any way to query and get one or a few results that doesn't require retrieving a large amount of data (assuming a cold peer, in this case my local [on-prem] REPL)? I think (is this correct?) that both the :find ?e . find specification and the sample aggregation function retrieve the complete result set before cutting it down. It's fairly common (for me at least) to want to get a couple of representative matching entities in a fast way, and it seems like there would be some way to achieve that with a query.#2018-10-0821:55Joe Laneasync query that returns a channel? take N results from channel, then close?#2018-10-0821:58eggsyntaxIs there such a thing as an asynchronous query (for on-premise)? I thought queries all went through datomic.api/query, which as far as I know is fundamentally eager.#2018-10-0821:59Joe LaneI’m very unfamiliar with on-prem.#2018-10-0821:59Joe LaneAll I know is the client api.#2018-10-0821:59eggsyntaxWhereas I'm pretty unfamiliar with the client api 😉#2018-10-0822:00eggsyntaxIncidentally, I think I can achieve something like this by hitting the indexes instead of querying. It just seems like there would be a way to do it via query, since (I imagine) it's a common need.#2018-10-0901:03ozWhat's the proper way to backup a Datomic Cloud database? I started down the route of trying to restore from a Dynamo snapshot, create a new storage stack, then update the existing compute stack to use that new storage stack system.#2018-10-0911:28stuarthallowayCloud databases are redundantly stored on multiple storages that are themselves redundant. There is not (currently) a backup/restore as with On-Prem.#2018-10-0913:12oz:+1:#2018-10-0901:04ozHowever it doesn't look like that will work as I was originally thinking.#2018-10-0908:30val_waeselynck@eggsyntax I don't see how you can do that via Datalog, but maybe you can use the index API to that end. 
Also, have you set up memcached on your local machine ? You can get a lot of speedup with no extra code this way#2018-10-0913:10eggsyntaxYeah, I think indexes end up being the way to go here. Thanks!
I just started on a new team recently, so I haven't tried to set up memcached yet, but definitely planning to. I haven't tried before to put it between my local machine and a remote DB (which is what I'm querying in this case) -- but it seems like that would be possible?#2018-10-0914:26val_waeselynckIt is totally possible - it's just a matter of starting your local REPL with the right JVM option (and the memcached server started somewhere else on your machine)#2018-10-0914:56eggsyntaxThanks Val, I appreciate it 🙂#2018-10-0915:37jocrauIs there a best practice solution for importing data from a large CSV file residing in an S3 bucket into Datomic Cloud? I currently use iota (https://github.com/thebusby/iota) in combination with tesser (https://github.com/aphyr/tesser) to read from a local file and parallel process chunks.#2018-10-0918:54ro6I'm just getting started with tools.deps + Datomic Ions today. Where should I post issues or unintuitive things as I find them so they will be useful to others?#2018-10-0919:17jaretYou can send them to me (<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>) or if you think it would make a good community post, you can share it on our forums. https://forum.datomic.com/#2018-10-0919:33ro6@U1QJACBUM Thanks! I think this first one is more tools.deps specific, where should that go? I saw there's no "issues" section on the GitHub repo#2018-10-0920:10dominicmJira is linked from the contributing.md or readme#2018-10-0921:29jarethttps://forum.datomic.com/t/datomic-0-9-5783-now-available/642#2018-10-0922:52kennyDoes anyone have any examples of running the socks proxy script on CircleCI?#2018-10-1012:59mpingis there a way of retrieving a set of entities grouped by a certain key?#2018-10-1013:00mpingI know I can use a custom aggr function#2018-10-1013:46val_waeselynck@mping Maybe with distinct ? 
https://docs.datomic.com/on-prem/query.html#aggregates-returning-collections#2018-10-1013:58mpinggonna give it a try#2018-10-1014:29ro6How are people handling middleware with Ions since routing happens at the API Gateway layer?#2018-10-1014:42Joe LaneNothing stops you from using ring middleware with ions. put the middleware around your app inside your call to apigw/ionize#2018-10-1014:47ro6@lanejo01 That's what I was thinking, but then aren't I wrapping each handler individually, even if they all share common middleware?#2018-10-1014:47Joe Lanemake 1 handler#2018-10-1014:47Joe Lanedo your routing in your app#2018-10-1014:48ro6Oh, so just a bare API Gateway proxy on "/", then everything the same as a usual Clojure webapp from there?#2018-10-1014:50Joe Laneit can be, if you want to build it that way.#2018-10-1014:50Joe LaneI havent done it that way so I may be wrong, but that is my understanding.#2018-10-1014:50ro6Well, I don't want to duplicate routing at two layers....#2018-10-1014:54ro6Delegating everything to the app feels like a misuse of API Gateway, but I'm just getting started with this entire stack. I'm basically here asking for best practices, or stories of pain so I can avoid. Maybe those haven't really congealed around Ions yet.#2018-10-1015:09eoliphantHave 'ionized' 3 apps at this point, run every thing through a single API GW for two of them, and the last we have two for a 'hard' separation of user and admin functions. I've been back and forth on it as well, but I'm leaning more towards using APIGW, more or less like ions use lambda, more of an AWS'ey way to 'lift' stuff into the clojure/datomic world as early as possible in a given flow, then just take it from there.#2018-10-1015:13ro6Thanks for the response. I feel much more comfortable drawing from at least some experience rather than making it up as I go.
Just to clarify, you mean two "routes" in APIGW mapped to two Ion handlers?#2018-10-1017:52stijnwe went with one proxy resource on / in apigw too, we're migrating from datomic cloud client on elasticbeanstalk, so this was a question of get it working with the least rework.#2018-10-1017:52stijnmaybe one day, i'll try to separate the routing, but then it still makes sense to do that in clojure and use something like bidi to generate the swagger for api gw#2018-10-1023:48eoliphanthey @U8LN9KT2N yep, two separate ion/lambda handlers, and actually 2 separate endpoints, as opposed to 2 routes in a single endpoint. That was Probably more laziness than anything else lol. But may look at doing the routing in a single APIGW at some point#2018-10-1015:22ro6I'm definitely drawn to the "one APIGW entry proxying to one Ion handler per app" approach since then I can reuse the established Clojure webapp patterns and track all my routing/etc.. in code rather than a separate structure versioned in AWS. I've been second-guessing though since the core team's tutorials (eg https://docs.datomic.com/cloud/ions/ions-tutorial.html#webapp) guide towards "one Gateway entry and Ion handler per operation".#2018-10-1015:22jocrauHow can I grant a lambda ion read access to an S3 object?#2018-10-1015:27Joe LaneCreate a policy in IAM with read access to said S3 object, copy the arn for that policy, then do a parameter upgrade on your datomic cloud stack. At the bottom of the first ( I think) page there is a section that says effectively “attach the arn for the policy you want these nodes to have”. Paste the arn of the policy you created, complete the parameter upgrade, and that should do it.#2018-10-1015:27Joe Lane(Just did this yesterday on the 4th project we have with ions)#2018-10-1015:51jocrauThat worked. Thanks! For reference: My basic misconception was that I added an inline policy to the Lambda execution role. 
But the Lambda function created is just a thin layer for invoking the Clojure code inside the Datomic compute node.#2018-10-1015:57jocrau(@stuarthalloway talks about this about 21 minutes into his intro video https://www.youtube.com/watch?v=3BRO-Xb32Ic)#2018-10-1016:07Joe LaneYeah, the docs are correct, however I believe that part is buried in “Operation > Access Control” nowhere near the rest of the ion tutorial. https://docs.datomic.com/cloud/operation/access-control.html#authorize-ions#2018-10-1113:24adamfreywhen I write a script that's supposed to run and exit and that script connects to a datomic cloud db via the client api, my script always hangs around when it's finished. And I have to kill it with Ctrl-c. The only way I've found to get it to shutdown on its own is by using (System/exit 0), which is pretty extreme.
Even (shutdown-agents) doesn't do anything#2018-10-1113:26adamfreyI wrote a test script that does nothing but connect to datomic cloud and these are the threads that exist when it hangs:
[#object[java.lang.Thread 0x794fbf0d "Thread[async-dispatch-7,5,main]"], #object[java.lang.Thread 0x504f2bcd "Thread[qtp1841195153-13,5,main]"], #object[java.lang.Thread 0x3c7e7ffd "Thread[qtp1841195153-14,5,main]"], #object[java.lang.Thread 0x18c2b4c6 "Thread[async-thread-macro-1,5,main]"], #object[java.lang.Thread 0xd5d9d92 "Thread[async-dispatch-2,5,main]"],
#object[com.amazonaws.http.IdleConnectionReaper 0x347c7b "Thread[java-sdk-http-connection-reaper,5,main]"],
#object[java.lang.Thread 0x372b2573 "Thread[qtp1841195153-18,5,main]"],
#object[java.lang.Thread 0x12bb3666 "Thread[async-dispatch-5,5,main]"],
#object[java.lang.Thread 0xaad1270 "Thread[Signal Dispatcher,9,system]"],
#object[java.lang.Thread 0x126f428e "Thread[async-dispatch-1,5,main]"],
#object[java.lang.Thread 0x41a372c1 "Thread[async-dispatch-6,5,main]"],
#object[java.lang.Thread 0x45d28ab7 "Thread[qtp1841195153-15,5,main]"],
#object[java.lang.Thread 0x3b75fdd0 "Thread[qtp1841195153-12,5,main]"],
#object[java.lang.ref.Finalizer$FinalizerThread 0x7e64b248 "Thread[Finalizer,8,system]"],
#object[java.lang.Thread 0x7e4f5062 "Thread[main,5,main]"],
#object[java.lang.Thread 0x66fd9613 "Thread[qtp1841195153-19,5,main]"],
#object[java.lang.Thread 0x461e9b31 "Thread[async-dispatch-3,5,main]"],
#object[java.lang.Thread 0x78652c15 "Thread[qtp1841195153-17,5,main]"],
#object[java.lang.ref.Reference$ReferenceHandler 0x6e5dc02d "Thread[Reference Handler,10,system]"],
#object[java.lang.Thread 0x1c71d704 "Thread[async-dispatch-4,5,main]"],
#object[java.lang.Thread 0x623e578b "Thread[clojure.core/tap-loop,5,main]"],
#object[java.lang.Thread 0x414a3c7d "Thread[clojure.core.async.timers/timeout-daemon,5,main]"],
#object[java.lang.Thread 0x74efc394 "Thread[async-dispatch-8,5,main]"],
#object[java.lang.Thread 0x201a84e1 "Thread[
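No cleaner mechanism surfaced in the thread; a common workaround sketch for one-shot scripts is to make the exit explicit once the work is done, since the client's HTTP and core.async threads listed above are non-daemon (`do-work` is a hypothetical entry point, not from the thread):

```clojure
;; Sketch only: run the script body, then force the JVM down.
(defn -main [& _args]
  (try
    (do-work)             ; hypothetical: whatever the script does via the client API
    (finally
      (shutdown-agents)   ; stop agent/future thread pools
      (System/exit 0))))  ; otherwise the client's non-daemon threads keep the JVM alive
```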
#2018-10-1113:27adamfreyis there any way other than System/exit to fix this?#2018-10-1115:59jocrauI am trying to parse and import a 25GB CSV file into Datomic Cloud (prod topology with two i3.large instances). I get “clojure.lang.ExceptionInfo: Busy indexing”. Before I start to implement a retry strategy on the client side, what are the dials and knobs to improve the indexing performance? (I already set :db/noHistory to “true” on all my attributes)#2018-10-1116:22kvltIt seems that I cannot bind nil to a var in datomic, resulting in this failing:
(d/q '[:find [?tx ...]
       :in ?log ?since ?til
       :where [(tx-ids ?log ?since ?til)
               [?tx ...]]]
     (d/log conn) #inst "2018-10-11T15:53:51.974-00:00" nil)
Exception Unable to find data source: $__in__3 in: ($__in__1 $__in__2 $__in__3) datomic.datalog/eval-rule/fn--5763 (datalog.clj:1450)
While this works:
(d/q '[:find [?tx ...]
       :in ?log ?since
       :where [(tx-ids ?log ?since nil)
               [?tx ...]]]
     (d/log conn) #inst "2018-10-11T15:53:51.974-00:00")
[13194187931476 13194187931455 13194187931456]
Why is that?#2018-10-1116:25Joe Lane@jocrau If you look at https://github.com/Datomic/mbrainz-importer you can find examples of how to pipeline async transactions which should yield much higher performance of writes. Are you doing any reads when you’re importing the data or is it pure writes?#2018-10-1116:26Joe LaneYou can look at the cloudwatch dashboard to find different bottlenecks in your system. Sometimes it may be cpu, memory, or DDB allocated write-throughput units.#2018-10-1116:28Joe LaneThe dashboards are very helpful in getting started with perf. That being said, I highly recommend setting up retries on your writes. Things happen and its a good idea to program defensively. Maybe queue things up in kinesis? Thats one approach we took.#2018-10-1117:09kennyI am getting this exception when trying to connect to a DB running the production topology:
(def cust-conn (d/connect client {:db-name "cust-db/591b632f-6c14-4807-af7b-da30929d5791"}))
clojure.lang.ExceptionInfo: Datomic Client Exception
clojure.lang.Compiler$CompilerException: clojure.lang.ExceptionInfo: Datomic Client Exception {:cognitect.anomalies/category :cognitect.anomalies/forbidden, :datomic.client/http-result {:status nil, :headers nil, :body nil}}, compiling:(form-init3645246680023467651.clj:1:16)
The strange thing is that if I try to connect to another DB, it works:
(def admin-conn (d/connect client {:db-name "admin"}))
=> #'dev.system/admin-conn
Any idea what is going on here?#2018-10-1117:10kennyInteresting...
(d/create-database client {:db-name "foo"})
=> true
(d/create-database client {:db-name "foo/bar"})
=> true
(d/connect client {:db-name "foo"})
=> {:db-name "foo", :database-id "55e543d1-14f0-4c9a-b3a9-8fa089a730e9", :t 3, :next-t 4, :type :datomic.client/conn}
(d/connect client {:db-name "foo/bar"})
clojure.lang.ExceptionInfo: Datomic Client Exception
Are db names with a / not allowed??#2018-10-1117:22jocrau@lanejo01 Thanks for your help. I have studied the mbrainz importer. It makes the processing CPU bound by parallelizing it by using pipeline-blocking. I have used that approach in the past but switched to Kyle’s tesser library which works nicely (and I find easier to reason about). The ratio between actual and provisioned write capacity in DynamoDB is healthy. The current problem is that the indexing (which happens asynchronously in the background afaik) can’t keep up with the transaction throughput. And I wonder whether there is a configuration option to tune this in Datomic Cloud.#2018-10-1117:36Joe Lanegot a link to tesser? what does it buy you?#2018-10-1117:38jocrauhttps://github.com/aphyr/tesser#2018-10-1117:39jocrauI use it to execute composed functions in parallel.#2018-10-1117:45favilaare you using tesser in such a way that it propagates backpressure?#2018-10-1117:47favilaI don't know for sure that datomic client acts the same way, but with the datomic peer api as long as you deref your transact somewhere you won't get exceptions. Your application may slow to nothing, but eventually the transactor will catch up#2018-10-1117:48favilatesser is designed for cpu-level parallelism, but transacting with high throughput requires io pipelining with a bounded depth and blocking to receive backpressure#2018-10-1117:49faviladoesn't mean you can't use tesser but there has to be some care about how it is using the transactor#2018-10-1118:46jocrauThe call to the synchronous transact blocks, but in case of a transactor being busy indexing returns an ex-info map right away. The behavior differs from the on-prem client library which returns a future.#2018-10-1118:52jocrauOne way to handle that is to retry the failed transaction. 
On the other end, I am still trying to reduce the fails by increasing the indexing performance.#2018-10-1118:54favilahttps://docs.datomic.com/cloud/client/client-api.html#busy#2018-10-1118:54favilalooks like that is by design#2018-10-1118:55favilalooks like they want you to do something like this:#2018-10-1118:55favilahttps://github.com/Datomic/mbrainz-importer/blob/master/src/cognitect/xform/batch.clj#L70-L92#2018-10-1217:20jocrauI found https://github.com/BrunoBonacci/safely. It seems to be a great tool to implement a retry strategy.#2018-10-1218:16favilathank you for that link!#2018-10-1121:25grzmAny issues running Clojure 1.10.0-RC1 on Datomic Cloud?#2018-10-1121:56csmI just set up a new datomic cloud instance, can start the socks proxy, but get ExceptionInfo com.amazonaws.services.s3.AmazonS3Client.beforeClientExecution(Lcom/amazonaws/AmazonWebServiceRequest;)Lcom/amazonaws/AmazonWebServiceRequest; clojure.core/ex-info on trying list-databases or create-database#2018-10-1122:25csmaha, had the wrong version of aws-java-sdk-core from another dependency#2018-10-1122:00luchiniThe EC2 instances of my query groups started shutting down and terminating non-stop recently. It seems that the culprit is this:#2018-10-1122:00luchini#############################################################
/dev/fd/11: line 1: /sbin/plymouthd: No such file or directory
initctl: Event failed
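Returning to the earlier busy-indexing exchange: favila's mbrainz-importer link shows batched transacting with retries; a minimal sketch of that idea (function and option names illustrative, Datomic client API assumed) might look like:

```clojure
;; Sketch: retry a transaction while the transactor reports a
;; :cognitect.anomalies/busy anomaly, with linear backoff.
;; d/transact here stands for datomic.client.api/transact.
(defn transact-with-retry
  [conn tx-data {:keys [max-tries wait-ms] :or {max-tries 10 wait-ms 1000}}]
  (loop [tries 1]
    (let [result (try
                   (d/transact conn {:tx-data tx-data})
                   (catch clojure.lang.ExceptionInfo e
                     (if (= :cognitect.anomalies/busy
                            (:cognitect.anomalies/category (ex-data e)))
                       ::busy          ; transactor busy indexing: back off and retry
                       (throw e))))]   ; any other anomaly: rethrow
      (if (= ::busy result)
        (if (< tries max-tries)
          (do (Thread/sleep (* tries wait-ms))
              (recur (inc tries)))
          (throw (ex-info "gave up retrying busy transactor" {:tries tries})))
        result))))
```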
#2018-10-1122:00luchiniAnyone else with this problem?#2018-10-1213:24jaret@U4L16CHT9 I might know what’s going on here. Could you copy out the lines above the “#” break line? The entire block delimited by the “#” break line rows.#2018-10-1517:22luchiniThese are the lines I get before the #:
Calculating memory settings
No cache configured
/opt/datomic/export-environment: line 31: {:retry: command not found
#############################################################
DATOMIC EXITING PREMATURELY
Error on or near line 31; exiting with status 1
Environment:
S3_VALS_PATH=primary-storagef7f305e7-35z8d8t26waz-s3datomic-xxjwrytensid/primary/datomic/vals
DATOMIC_INDEX_GROUP=primary
DATOMIC_TX_GROUP=primary
TERM=linux
DATOMIC_APPLICATION_PID_FILE=/opt/datomic/deploy/image/pids/application.pid
DATOMIC_CLUSTER_NODE=true
DDB_CATALOG_TABLE=datomic-primary-catalog
DATOMIC_CODE_DEPLOY_APPLICATION=red-robin
DATOMIC_PRODUCTION_COMPUTE=primary-Compute-N688L1Z9E3CF
JVM_FLAGS=-Dclojure.spec.skip-macros=true -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -XX:MaxDirectMemorySize=256m
DATOMIC_CACHE_GROUP=primary
DISABLE_SSL=true
DATOMIC_HOSTED_ZONE_ID=Z38EPCPQ9NXY6O
DATOMIC_XMX=2582m
PATH=/sbin:/usr/sbin:/bin:/usr/bin
OS_RESERVE_MB=256
EFS_VALS_PATH=datomic/vals
S3_AUTH_PATH=primary-storagef7f305e7-35z8d8t26waz-s3datomic-xxjwrytensid
RUNLEVEL=3
runlevel=3
AWS_DEFAULT_REGION=us-east-1
PWD=/
LANGSH_SOURCED=1
DATOMIC_QUERY_GROUP=sandbox
DATOMIC_ENV_MAP=<REDACTED>
LANG=en_US.UTF-8
KMS_CMK=alias/datomic
FS_VALS_CACHE_PATH=/opt/datomic/efs-mount/datomic/vals
PREVLEVEL=N
previous=N
PCT_JVM_MEM_FOR_HEAP=70
HOST_IP=10.213.21.224
CONSOLETYPE=serial
SHLVL=2
CW_LOG_GROUP=datomic-primary
UPSTART_INSTANCE=
UPSTART_EVENTS=runlevel
EFS_DNS=
DDB_LOG_TABLE=datomic-primary
UPSTART_JOB=rc
S3_CERTS_PATH=primary-storagef7f305e7-35z8d8t26waz-s3datomic-xxjwrytensid/primary/datomic/access/certs
PCT_MEM_FOR_JVM=100
_=/bin/env
#############################################################
#2018-10-1713:17jaretThanks! We’re working on a fix for this.#2018-10-1200:46jarethttps://forum.datomic.com/t/datomic-cloud-441-8505-critical-update/645#2018-10-1209:08stijnto be clear, there's no update of the storage stack required, right?#2018-10-1212:59marshallCorrect#2018-10-1214:34ozIn CFT 441 you switched to YAML, now in 441-8505 you are using json again. It's not a big deal this time, but in the storage stack we have a couple of modification that get around the issue with running in a AWS account that has EC2 classic support. So we have to cherry pick those changes by hand into any storge CF template upgrades. I did this just recently from 297 to 409 and it wasn't too bad, but the version 441 was a bit harder due to the change from json to yaml. Again it's a non-issue for 441-8505 since it's only a compute stack change, but going forward can you please distribute one format or both yaml and json?#2018-10-1214:38marshallthe YAML change was a marketplace artifact; we did not choose that and we intend to use json#2018-10-1214:53oz👌#2018-10-1420:56eoliphantI posted this in the main thread as well. I'm applying this to a solo setup. the compute upgrade worked fine, but I'm getting a Error creating change set: The submitted information didn't contain changes. Submit different information to create a change set. back from CF when I try to apply the update for storage#2018-10-1200:53csmdoes upgrading really require deleting the stack and creating it again?#2018-10-1200:55stuarthallowayHi @U1WMPA45U, it depends on what you mean by "the stack".#2018-10-1200:56stuarthallowayIf you are running in the recommended two stack shape, then no: https://docs.datomic.com/cloud/operation/upgrading.html#compute-only-upgrade#2018-10-1200:57csmI launched the CF template and got three CF stacks in total — so update the stack named “compute”, yes?#2018-10-1200:58stuarthallowayUnfortunately, no. 
AWS's marketplace rules are in direct conflict with AWS's CloudFormation best practice guidelines. The "deleting" path takes you from Marketplace-land to CF-best-practice-land.#2018-10-1200:58stuarthallowayAfter you do this once, you will be in CF-best-practice-land and never have to do it again.#2018-10-1200:59csmgot it, thanks!#2018-10-1200:59stuarthallowaySee also https://docs.datomic.com/cloud/operation/upgrading.html#why-multiple-stacks#2018-10-1201:56jocrauJust a quick note on the update: I had to re-adjust the capacity settings of the compute stack autoscaling group. The update seems to reset this to “desired 2, min 2, and max 3”.#2018-10-1209:16stijnsame issue here#2018-10-1213:00marshallthose are the default values; had you changed them to something else?#2018-10-1213:01marshallnote: https://docs.datomic.com/cloud/operation/scaling.html#database-scaling#2018-10-1213:01marshallyou should not be using autoscaling on the primary compute group#2018-10-1213:38jocrauI have two use-cases to change autoscaling of the primary compute group: First, to save money while experimenting with the prod topology (I set them to 0 during times I am not working on it), and second to try to increase transaction performance for large imports (that might be a brute force approach, see also https://forum.datomic.com/t/tuning-transactor-memory-in-datomic-cloud/643).#2018-10-1213:40jocrauThe documentation is a bit confusing. The two sentences “If you are writing to a large number of different databases, you can increase the size of the primary compute group by explicitly expanding its Auto Scaling Group.” and “You should not enable AWS Auto Scaling on the primary compute group.” seem to contradict each other. 
Am I missing something?#2018-10-1213:41marshallThose should not be autoscaling events; Scaling the group will not affect throughput for a single DB#2018-10-1213:41marshallyou can adjust the size of the group explicitly#2018-10-1213:41marshallyou shouldn’t use AutoScaling#2018-10-1213:42marshalli.e. Autoscaling events == things that AWS does for you triggered based on some metric/event#2018-10-1213:43marshallChanging the “min” “max” and “desired” explicitly is OK, but should be a fairly infrequent human-required action#2018-10-1213:58jocrauYou are right that it does not make sense to change the “desired” setting of the autoscaling group to adapt to spikes (that’s what the “auto” in autoscaling is for). But to increase the “max” and “desired” seems to be the best option currently to increase transaction (and indexing) performance in case of a known, large import.#2018-10-1214:38marshallif the import runs against a single database, changing the size of the compute group will not affect throughput#2018-10-1214:43marshallyou can change to using an i3.xlarge (instead of the default i3.large)#2018-10-1214:43marshallin your compute group#2018-10-1214:43marshallthat will improve import perf#2018-10-1214:55jocrauOk. I will give that a try.#2018-10-1215:06jocrauDoes the number of nodes influence indexing performance (on a single database)?#2018-10-1215:12marshallno#2018-10-1215:17stuarthalloway@jocrau @marshall as long as you have at least two, no#2018-10-1507:15stijnour use case is to have a prod setup for our staging environment, but without HA. so we set the 3 values (min, max, desired) to 1.#2018-10-1214:33jocrauA deployment to the compute group seems to be performed sequentially (see attached graph which shows the incoming network traffic to 5 nodes; mostly the JAR files I assume). 
Can this be done in parallel to speed up the deployment?#2018-10-1214:44marshall@jocrau No, that is specifically the way that rolling deployments work to maintain uptime and enable rollback#2018-10-1214:45jocrau@marshall Makes sense. Thanks.#2018-10-1216:29unbalancedDoes anyone happen to know where I could find some good marketing materials on the Datomic value prop for non-technical executives?#2018-10-1216:36val_waeselynck@goomba I tried to answer that very question here: https://medium.com/@val.vvalval/what-datomic-brings-to-businesses-e2238a568e1c#2018-10-1216:36val_waeselynckNote that the value prop of Datomic Cloud is a bit different#2018-10-1216:36unbalancedHa! How serendipitous!#2018-10-1216:37unbalancedthat's fine, due to the nature of the data/work it would have to be self-hosted anyway.#2018-10-1217:16csmI don’t need to recreate my VPC endpoint again when I perform my first upgrade, do I?#2018-10-1217:21csmI think I have an answer to that: I can’t delete the datomic cloud root stack since the VPC endpoint stack depends on resources#2018-10-1217:28csmthe “compute” stack just failed to delete for me, with reason The following resource(s) failed to delete: [HostedZone].#2018-10-1217:31csm…which was because I had a record set for my VPC endpoint…#2018-10-1217:44grzmI'm setting up a new query group. The stack created without error. When doing the first deploy to the query group, it's failing ValidateService with ScriptFailed. The events details show [stdout]Received 503 a number of times, and then finally [stdout]WARN: validation did not succeed after two minutes. Ideas on where to start debugging this?#2018-10-1218:11kenny@grzm Check the CloudWatch logs - there was probably an exception.#2018-10-1218:15grzm@kenny cheers. thanks for the kick in the right direction.#2018-10-1317:52idiomancyhey, this is a data*script* so I apologize for asking it here, but there's a knowledge overlap and the datascript channel is deaaad.
is anyone aware of any performance considerations / best practices for building derived databases in datascript?
For instance,
would it be generally faster to do something like
(db-with (empty-db) (d/q [:find (pull all entities I care about)]))
or to do something with d/filter?#2018-10-1318:00val_waeselynckWell, this depends on how much you read compared to how much you write. How often do you need to create a derived database, and what performance expectations do you have when reading it?#2018-10-1318:00idiomancythis would be for reading only#2018-10-1318:00idiomancyso, a materialized view#2018-10-1318:00idiomancythat can be queried with datalog semantics#2018-10-1318:01idiomancythe ideal optimization would be space complexity, honestly#2018-10-1318:01idiomancyso a better solution would take up less memory than other available equivelant solutions#2018-10-1318:02idiomancyI'm not sure how possible that is for datascript though#2018-10-1318:09val_waeselynckFiltering has essentially no space cost, it just slows down queries a bit#2018-10-1318:10idiomancyoh really!? that's great, so it's sharing the structures from the reference value?#2018-10-1318:12idiomancyohhh#2018-10-1318:12idiomancyI see#2018-10-1318:12idiomancyso its really just querying the same value with an additional predicate#2018-10-1318:12idiomancyfascinating#2018-10-1321:41Daniel HinesI'm trying to follow the Datomic on-prem tutorial in from a cider repl in emacs. When I eval (def client (d/client cfg)), I got the following error#2018-10-1321:42Daniel Hines2. Unhandled clojure.lang.Compiler$CompilerException
Error compiling datomic/client/impl/shared.clj at (349:13)
1. Caused by java.lang.RuntimeException
Unable to resolve symbol: int? in this context
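(The diagnosis arrives a few messages later: `int?` is a Clojure 1.9 predicate.) A sketch of the fix, pinning the Clojure version explicitly in a Leiningen project.clj (project name illustrative):

```clojure
;; project.clj sketch: the Datomic client libraries expect Clojure 1.9+,
;; so override an older version pulled in by a `lein new` template.
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.9.0"]])
```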
#2018-10-1321:42Daniel HinesAdmittedly, I'm not sure whether this is a beginner, cider, or datomic question.#2018-10-1322:01dpsuttonWhat's your clojure version. That's a 1.9 predicate#2018-10-1322:01dpsuttonAny chance you're on 1.8 from lein new or some other reason?#2018-10-1323:14Daniel HinesThat's exactly what it was! Thanks @dpsutton#2018-10-1409:21avfonarevCan datomic ions handle file upload?#2018-10-1410:21henrikYep, no problem.#2018-10-1420:52eoliphantthough, given that you're in AWS, you have other options. We do all of our uploads to S3, then have a lambda ion handle the s3 notifications#2018-10-1508:09henrikI made a small UI to visualise the output of processing a file, a sort of visual debugger. This I handled by uploading a file directly to the Ion. For production processing, S3 is definitely the way to go.#2018-10-1420:51eoliphantHi, I'm trying to apply the 441-8505 patch. it worked fine for my solo compute stack, but i'm getting a Error creating change set: The submitted information didn't contain changes. Submit different information to create a change set. when I try to apply the update to the storage stack#2018-10-1511:02stuarthallowayHi @U380J7PAQ -- 441-8505 is a compute only update, so there is no change to storage. @U1QJACBUM if "compute only" is shown only in the summary column of the table, we should add it to the text description of each update so one does not have to look in two places.#2018-10-1513:56eoliphantAh, ok great, yeah wasn't quite clear from the release notes#2018-10-1515:33mgrbyteHi. Using datomic on-prem, just upgraded to 0.9.5703, now unable to connect via a repl. Getting connection timed out, but transactor running on the same port mentioned in the logs. (ddb-local) - Anyone seen anything similar?#2018-10-1515:34mgrbyteerror on connection fail is:
CompilerException clojure.lang.ExceptionInfo: Error communicating with HOST localhost on PORT 4334 {:alt-host nil, :peer-version 2, :password "xxxx", :username "xxxx", :port 4334, :host "localhost", :version "0.9.5703", :timestamp 1539617467299, :encrypt-channel false}, compiling:(00209e77b10857cd356c6f8ff55888c36688ab74-init.clj:57:40)
#2018-10-1517:37jaret@U08715BSS what version were you upgrading from? Did you upgrade your transactor or peer first? I am going to look at reproducing.#2018-10-1608:59mgrbyte@U1QJACBUM I was previously running datomic-pro-0.9.5697#2018-10-1609:00mgrbyteThis is just locally for dev atm, with ddb-local.
Stopped everything, ran the transactor.
Bumped version in my deps.edn
Ran repl
then usual require and connect produces the above.
(on-prem)#2018-10-1609:10mgrbytefwiw, with a fresh ddb-local database and relevant changes to transactor config, I have it working.#2018-10-1609:14mgrbyteI've just had another go with the ddb-local database and previous config, and can no longer reproduce :shrug: 😕#2018-10-1613:00jaretThat’s very odd. I am going to keep poking at this. Thanks for the added information and report.#2018-10-1517:24luchini@val_waeselynck do you have any plans on porting datomock to the client library? Is that even possible?#2018-10-1519:00val_waeselynckNot clear to me that the forking abstraction is feasible there due to the potential transience of with'd dbs there. Maybe @U072WS7PE could tell us ? In any case, there's really not much to Datomock's implementation, so if you need it don't be afraid of writing it :)#2018-10-1604:13luchiniIt’s not an urgency for me as of now but definitely something I want to explore sooner rather than later so I’ll keep you posted!#2018-10-1613:27stijnwe would like to automatically push and deploy some of our branches to datomic cloud ions through CodePipeline/CodeBuild. What exact permissions does the codebuild instance profile need for being able to e.g. download the ions dependencies from the datomic maven S3 bucket? Also, I don't see any documentation on what is needed for pushing to codedeploy. Currently everything is happening as an admin user from a dev machine. Or is there a better way to set up CI for your ions?#2018-10-1615:00Joe LaneMy company has the exact same questions as @stijn. We are very interested in hearing about the best practices for CI/CD with Ions. After digging last night I found the top level codepipeline page seems to have my Ions application registered so maybe there is just manual exploration to be done?#2018-10-1615:05jeroenvandijk@stijn Not sure what exactly is needed, but as a first step you could have a permission that is allowed to forward the admin role to codebuild.
This will not give the admin permission to the dev machine#2018-10-1615:06jeroenvandijkHere is more background https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html#2018-10-1619:30grzmWe just saw a blip when deploying an ion:
ERROR, :message cryo is not a recognized vendor code (Service: AWSResourceGroupsTaggingAPI; Status Code: 400; Error Code: InvalidParameterException
There's no reference to cryo in our code. We saw it happen from two different remote laptops in two different states (MN and TN). Retrying the same deploy a few minutes later succeeded just fine. Any ideas? (I'm stepping away from my machine for a while, so won't be following up immediately, but happy to do so when I get back.)#2018-10-1619:39Joe Lane@grzm I ran into this last night on a different project, thought it was just a blip.#2018-10-1619:43jaret@grzm can you DM me the full error with request ID#2018-10-1619:44jaretI am going to log a case to AWS since you’ve both seen this. I’d like to see if they can track this down or provide any clues on what is unavailable.#2018-10-1619:46wilkes@grzm I sent @jaret the error message#2018-10-1620:05favilacan/should the same valcache dir be shared by multiple peer processes?#2018-10-1700:35jaret@U09R86PA4 multiple peers each with their own valcache. I’ll look to add that to the docs, but sharing is not supported.#2018-10-1700:45favilaThat’s too bad. Having shared big Valcache on a dev laptop (which is often multiprocess but same small set of remote txors) is the best use case I see. I run memcached for this now; shared Valcache would be much bigger, persist across reboots, and free up the ram now used for memcached#2018-10-1700:46favilaHow do Valcache and memcached interact if both are enabled?#2018-10-1719:42jaret@U09R86PA4 You can’t use Valcache and memcached together. Its one or the other. The tradeoffs are discussed here http://staging.docs.datomic.com/on-prem/valcache.html#vs-memcached#2018-10-1719:43favilaI am aware of the tradeoffs; I didn't realize they were mutually exclusive choices#2018-10-1719:43favilacould this be made clearer also?#2018-10-1719:44jaretYes. I agree. 
It needs to be made clearer in the docs.#2018-10-1719:47favilathat's also unfortunate, because a transactor can no longer eagerly populate memcached to shield storage from peer cache misses if the peer is using valcache#2018-10-1621:18grzmThanks @wilkes I just sent @jaret one that I got as well.#2018-10-1703:54dfcarpenterbeginner question. I am trying to use datomic free from within a luminus web project. In the repl I can't seem to find the datomic.api namespace. I am using datomic-free "0.9.5561" and have the transactor running using bin/transactor with the sample config#2018-10-1705:46Hadisince datomic schema only describe characteristic of data, im so curious about how "datomic data" can draw relationship between all entities (in purpose of reporting). At least i need something like "design mode" in MySQL so that i could tell what is happening on current entities in datomic. i was wondering to do something like "select distinct entities from db" is it possible ? it would be helpful if i could get a sample from each entities. 😕
For example, I want to uniquely retrieve entities with their attributes, like [ {:person/name :person/address} {:school/name :school/personlist} ...], based on the facts that are inserted in Datomic#2018-10-1712:12chris_johnson@stijn @lanejo1 - for our builds of Datomic on-prem in CodeBuild, to get the Datomic Pro JARs into the build classpath, we use the documented method for putting your Datomic Maven repository credentials into environment variables, and then we have our company username and pw as SecuredString SSM parameters that get passed into the Environment block of the CloudFormation template that builds the CodeBuild project. I should think that would also work for the Ions dependencies, using $your_favorite_dependency_manager.#2018-10-1714:16Joe LaneThanks Chris, I hadn’t even thought of the ion dependency being an issue.#2018-10-1713:48eoliphant@hadi.pranoto it's sometimes a pain to grok, especially if you're coming from say relational dbs, but in datomic there's no db concept of an 'entity definition' or even their relationships at the schema level. entities are just arbitrary bags of attributes, and refs are for lack of a better term 'anonymous'. any structure beyond attribute defs is up to you. That's part of datomic's power. You're able to do what you described in MySQL because a table does provide a fixed 'bag of attributes', the schema has a concept of fkey relationships between tables, etc.
To your question, If you already have data, some of this can be inferred, tools like this one (https://github.com/felixflores/datomic_schema_grapher) do this. You can use it directly or steal some of its code for your use. Other approaches include using naming conventions like your :person/.. examples, additional custom attributes for schema elements, spec, etc#2018-10-1714:42grzm@jaret We haven't been able to successfully push since yesterday afternoon due to the cryo issue. Happy to be available to work with someone to get this figured out. It's put a heavy damper on development.#2018-10-1714:46jaret@grzm can you log a case with AWS from your account? I’ve logged one from ours asking for more information on the error. I’d be happy to provide the case number for your reference, but we need to get AWS support’s input on what is unavailable/invalid.#2018-10-1714:47grzmSure thing. What relevant Datomic issues should we include in the case?#2018-10-1720:39kennyGetting this exception when calling push in my code:
Exception thrown: cryo is not a recognized vendor code (Service: AWSResourceGroupsTaggingAPI; Status Code: 400; Error Code: InvalidParameterException; Request ID: 8db867cb-d24c-11e8-bb2d-59cd3680b29e)
#2018-10-1720:40kennyAnyone seen this before? Not clear what is causing this.#2018-10-1720:42wilkes@kenny We’ve been seeing this as well. Cognitect has a ticket open, and we’ve opened up one as well. This appears to be related: https://forums.aws.amazon.com/thread.jspa?messageID=872875#2018-10-1720:45kenny@wilkes Thanks. Have you tried the workaround the aws forums suggest there?#2018-10-1720:46kennyActually that's probably hidden in the ion-dev code.#2018-10-1720:46wilkes@kenny I haven’t because I think that is buried in the ion push code#2018-10-1720:47wilkesUpside is that it has forced us to think about what we need to facilitate easier local dev 🙂#2018-10-1720:47kennyUgh. This is kinda a big blocker - we can't deploy code. Did the Datomic team say they'd push a release with the workaround?#2018-10-1720:50jaret@kenny are you US-WEST-2?#2018-10-1720:50kennyYes#2018-10-1720:52jaretI am going to add your error to our ticket. we’re waiting for AWS to provide specific instructions for the filtering solution discussed in the forum post.#2018-10-1720:53kennyHave you guys been able to reproduce the exception?#2018-10-1720:55jaretI have not. But I am still working on it. We have 3 separate AWS accounts reporting the error when pushing. One in US-EAST-1#2018-10-1720:57kennyOk.#2018-10-1721:39okocimFWIW, I’m getting this same error in us-east-2. I feel like it’s region-specific at this point.#2018-10-1721:54jaretI just re-created on US-WEST-2. I am going to look at the other regions.#2018-10-1722:58kennyJust tried deploying my Ion code again and it appears to be working now.#2018-10-1813:28stijnI have the following code for an API Gateway web ion:#2018-10-1813:29stijn(defn ring-handler
  [req]
  (do
    (cast/dev {:msg "RequestLog" ::request req})
    (handler req)))

(def ion-handler
  "API Gateway web service for the FMS API"
  (apigw/ionize ring-handler))
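(As Stu points out in the reply, cast/dev has no configured destination when running in Datomic Cloud, so nothing reaches CloudWatch. A minimal sketch of the same handler using cast/event instead — same names as the paste above; untested here, since it needs the ion libraries on the classpath:)

```clojure
;; Sketch: cast/event is delivered to CloudWatch on Datomic Cloud,
;; whereas cast/dev is not supported as a destination there.
(defn ring-handler
  [req]
  (cast/event {:msg "RequestLog" ::request req})
  (handler req))

(def ion-handler
  "API Gateway web service for the FMS API"
  (apigw/ionize ring-handler))
```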
#2018-10-1813:40stuarthalloway@U0539NJF7 use cast/event. "NOTE Configuring a destination for cast/dev when running in Datomic Cloud is currently not supported." -- https://docs.datomic.com/cloud/ions/ions-monitoring.html#2018-10-1813:43stijnok, I totally misunderstood that sentence 😄#2018-10-1813:45stijnthanks#2018-10-1813:29stijnalthough the handler generates a response, I cannot see any message with the RequestLog in Cloudwatch#2018-10-1813:30stijnshould I do something special to get these logged?#2018-10-1813:53stijnis it possible that the Content-Length header gets stripped away somewhere between API Gateway - Lambda - ionize? Because I'm definitely sending it, but it doesn't arrive on the ion function as a request header.#2018-10-1814:05jeff.terrellIs this a good place to mention broken links on the Datomic website?#2018-10-1817:17jeff.terrellWell, before I lose track of it, I'll mention it here. The first item under "Getting Started" in the FAQ [1] links here [2] and shows a "page not found" page.
[1] https://www.datomic.com/cloud-faq.html#getting-started
[2] https://docs.datomic.com/cloud/getting-started/get-datomic.html#2018-10-1814:39jeff.terrellIs it true, as this Hacker News comment states, that Datomic Free Edition does not support the client API?
https://news.ycombinator.com/item?id=16169118#2018-10-1817:14jeff.terrellAh, I finally found it, after searching for a while:
> The Datomic Free transactor is limited to 2 simultaneous peers and embedded storage and does not support Datomic Clients.
https://www.datomic.com/get-datomic.html#2018-10-1814:55eraserhdSo, what happens if we excise history from a database? Will the log have a squashed transaction? Will the log not show the history at all?#2018-10-1816:16val_waeselynckBy «excise history», do you mean "excise all retracted datoms"?#2018-10-1816:46stuarthallowayHi @U0ECYL0ET "The resulting index (and all future indexes) will no longer contain the datoms implied by the excision predicate(s). Furthermore, those same datoms will be removed from the transaction log." -- https://docs.datomic.com/on-prem/excision.html#2018-10-1913:58eraserhdThanks! AFAICT from those docs, nothing in excision allows targeting only retracted datoms. Is that an omission in the docs?#2018-10-1817:46jeff.terrellIn the solo topology of Datomic Cloud, I can still create more than one database (i.e. with d/create-database), right?#2018-10-1817:49marshall@jeff.terrell definitely#2018-10-1818:06kennyWhy am I getting this exception trying to transact some schema?
(let [conn (d/connect client {:db-name "foo"})]
  (d/transact conn {:tx-data [#:db{:valueType :db.type/instant, :cardinality :db.cardinality/one, :ident :session/last-used-on}]}))
clojure.lang.ExceptionInfo: Value of :db.install/attribute must be in :db.part/db partition, found :session/last-used-on
The db was just created and is completely empty.#2018-10-1818:07kennyI am running Datomic Cloud 441-8505 production topology and com.datomic/client-cloud {:mvn/version "0.8.63"}.#2018-10-1818:12kennyStrangely if I move :db/ident to be the first value in the map, the transaction works:
(let [conn (d/connect client {:db-name "foo"})]
  (d/transact conn {:tx-data [#:db{:ident :session/last-used-on :valueType :db.type/instant, :cardinality :db.cardinality/one}]}))
=>
{:db-before {:database-id "ae464fcb-0bc3-48f3-b3a4-3c8e9eff1d5a",
             :db-name "foo",
             :t 3,
             :next-t 4,
             :type :datomic.client/db},
 :db-after {:database-id "ae464fcb-0bc3-48f3-b3a4-3c8e9eff1d5a",
            :db-name "foo",
            :t 4,
            :next-t 5,
            :type :datomic.client/db},
 :tx-data [#datom[13194139533316 50 #inst"2018-10-18T18:08:00.642-00:00" 13194139533316 true]
           #datom[64 10 :session/last-used-on 13194139533316 true]
           #datom[64 40 25 13194139533316 true]
           #datom[64 41 35 13194139533316 true]
           #datom[0 13 64 13194139533316 true]],
 :tempids {}}
Relying on the order of keys in a map seems like a bad practice.#2018-10-1818:12marshall@kenny put the db/ident first#2018-10-1818:12marshallit’s a bug#2018-10-1818:12marshalli’ll pass it along#2018-10-1818:13kennyYuck. Ok thanks.#2018-10-1818:36favilayeah there's some order-dependence gotchas in schema-creation#2018-10-1818:38favilaa much older manifestation of the same thing: https://gist.github.com/favila/785070fc35afb71d46c9#file-restore_datoms-clj-L123-L134#2018-10-1818:38favila#2018-10-1818:38favilalines 123-134#2018-10-1818:39favilathe "install" assertions (which are implicit nowadays) must occur after the constraints they check#2018-10-1820:05jeff.terrellI'm getting the ExceptionInfo Forbidden to read keyfile error when I (d/create-database client {:db-name "test"}). The troubleshooting page gives this solution:
> Ensure that you have sourced the right credentials with all necessary permissions.
Can somebody unpack that a little? I get the notion of AWS credentials, and I have the access key and secret for a user with all IAM permissions in ~/.aws/credentials. So presumably something else is wrong. What specifically does 'sourced the right credentials' mean?#2018-10-1820:05marshallis it in a profile in your creds file?#2018-10-1820:05jeff.terrellIt's in the default profile.#2018-10-1820:05marshallexport AWS_PROFILE=default#2018-10-1820:05marshallin your environment#2018-10-1820:06marshallor whatever os equivalent is ^#2018-10-1820:06jeff.terrellThe environment that's running the datomic-socks-proxy <stack-name> process?#2018-10-1820:06marshallusually default gets grabbed automatically#2018-10-1820:06marshallboth envs#2018-10-1820:06marshallthe one that runs the socks proxy script needs it#2018-10-1820:06marshallbut so does the env that you’re using to connect from#2018-10-1820:07jeff.terrellOK, interesting. I'm launching from cider, and I'm not sure what's in that environment. But that's enough for me to go on, thanks!#2018-10-1820:07marshallyeah, not sure how you configure system envars in cider specifically, although exporting the envars before you start emacs should do it#2018-10-1820:08jeff.terrellBy the way, I don't think I missed that anywhere in the Datomic Cloud setup instructions, nor was it listed on that 'troubleshooting' entry. That might be worth adding for people like me in the future. simple_smile#2018-10-1820:08jeff.terrellAnd/or default the AWS_PROFILE lookup to default.#2018-10-1820:09marshallhttps://docs.datomic.com/cloud/getting-started/connecting.html#access-keys#2018-10-1820:09marshallyou have several optoins#2018-10-1820:09marshalloptions#2018-10-1820:09marshallyou can pass the :profile to the client in the connection map#2018-10-1820:09marshallor you can use system-level stuff#2018-10-1820:10jeff.terrellGotcha, so I did miss that…my bad. 
simple_smile#2018-10-1820:10marshallnp 🙂#2018-10-1820:24kennyThis section of the Ion docs says https://docs.datomic.com/cloud/ions/ions-tutorial.html#sec-5-3
> - Under your API in the left side of the UI, click on the bottom choice Settings.
> - Choose Add Binary Media Type, and add the */* type, then Save Changes.
Why do I need to do this? Will something not work if I skip this step?#2018-10-1908:19stijnyes, I forgot that, and body content was not properly encoded/decoded.#2018-10-1820:29jeff.terrell@marshall - Sorry, I'm still struggling here. I did export AWS_PROFILE=default and restarted my datomic-socks-proxy process. Then I did the same with my REPL. I can confirm that (System/getenv "AWS_PROFILE") returns "default", but I still get the Forbidden to read keyfile exception. (I also read the docs you linked me to, but didn't see anything amiss.) Is there something else I might be missing?#2018-10-1820:41jeff.terrellAh, figured it out. Apparently the ~/.aws/credentials file can't have comments. When I manually specified a :creds-profile value to the d/client call, that gave me a sufficiently explanatory error message to figure that out. (Or maybe the error was still on d/create-database, I don't remember now.)
The regular aws command tolerates comments in ~/.aws/credentials just fine. Is this a bug?#2018-10-1820:42jeff.terrell(Also, the regular aws command uses the default profile if AWS_PROFILE is not set, which is another way I was surprised at the behavior of Datomic, FWIW.)#2018-10-1821:14marshallDatomic uses the default credentials provider in the java SDK#2018-10-1821:14marshallit’s possible that behaves differently than version(s) of the aws CLI#2018-10-1821:15marshallDatomic never reads your ~/.aws/credentials file directly - that is always done by the AWS SDK#2018-10-1908:03stijnwe're seeing some exceptions during the compilation of an ion deploy. the instance terminates itself in this case and then we can deploy. it looks like it happens every other deploy. Anyone else have seen this?#2018-10-1909:21stijnmaybe, the answer lies in using a different http lib for making requests. the ion-event-example uses cognitect.http-client https://github.com/Datomic/ion-event-example/blob/master/src/datomic/ion/event_example.clj#L27#2018-10-1909:22stijnis there some documentation about that library? and since it's not in the dependencies, I assume it is available on ions by default?#2018-10-1912:37jeroenvandijkAre session credentials support in the dynamodb connection string? E.g.
"datomic:"
I've tried several combinations (aws_security_token, session_token, leaving it out), but no luck so far. The error I get is:
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The security token included in the request is invalid.
This is telling me that AWS supports it, but I can't tell if Datomic forwards this information.#2018-10-1912:55jeroenvandijkCool, found a working thing via system properties:
(defn with-system-properties [props f]
  (let [p (System/getProperties)]
    (try
      (doseq [[k v] props]
        (System/setProperty k v))
      (f)
      (finally
        (System/setProperties p)))))

(with-system-properties
  {"aws.accessKeyId" (.getAWSAccessKeyId credentials)
   "aws.secretKey" (.getAWSSecretKey credentials)
   "aws.sessionToken" (.getSessionToken credentials)}
  (d/connect uri))#2018-10-1912:41ChrisCan anyone advise if it's more performant to use Java method calls in a :where clause or Clojure? Many examples I see online use Java but it's not clear what the reason is.
e.g. [:find ?e :where [?e :person/name ?n] [(.startsWith ^String ?n "Jo")]] vs [:find ?e :where [?e :person/name ?n] [(clojure.string/starts-with? ?n "Jo")]]#2018-10-1912:43ChrisOr is it just for brevity because a function outside clojure.core needs to be fully qualified?#2018-10-1912:58jeroenvandijk@cbowdon clojure.string/starts-with? was added in clojure 1.8 [see 1]. So these are probably old examples [1] https://clojuredocs.org/clojure.string/starts-with_q#2018-10-1913:00Chris@jeroenvandijk Ah that makes sense, thank you#2018-10-1915:30jeff.terrellIf anybody has figured out how to do isolated dev environments with Datomic Cloud in a way that you're satisfied with, I'd be interested in your thoughts here:
https://forum.datomic.com/t/any-ideas-for-isolated-dev-environments-with-datomic-cloud/663#2018-10-1915:38marshall@jeff.terrell Have you looked at https://docs.datomic.com/cloud/operation/planning.html#2018-10-1915:40jeff.terrellYes, and I'm not sure that helps, but is there something in particular you're thinking about? Query groups maybe?#2018-10-1915:42jeff.terrellAnd also, are query groups only for the production topology? I couldn't figure that out from the docs (although maybe I'm just missing it).#2018-10-1915:44marshallYes, query groups can help with some of that. Yes, production only.#2018-10-1915:46jeff.terrellOK. I'm thinking that's more than I can afford at the moment. I don't suppose y'all have any plans to include the client API in Datomic Free Edition, do you?#2018-10-1915:51jeff.terrellAlthough, now that I'm checking the price (using the estimator on AWS marketplace), it looks like it might be as little as about $1.50/day all-in for a production topology with query groups. As a sanity check, does that sound realistic to you? (Maybe it matters that I'm still in the free tier on this account?)#2018-10-1915:53marshallI think it’d be a bit more than that
IIRC a “default” production topology with 2 compute nodes runs around $400 or so a month#2018-10-1915:53marshallinfrastructure + software#2018-10-1915:54jeff.terrellOK. Thanks for the sanity check. Not sure how I was estimating that so wrongly. simple_smile Thanks for all the help, by the way.#2018-10-1915:54marshallabsolutely#2018-10-1915:56ro6I guess Ions users are hanging out here more than #ions-aws? Doesn't seem to be much activity there.#2018-10-1915:57ro6Is there a way to specify to Ions that you want a certain set of tools.deps aliases to be used when you push/deploy? I guess by default the JVM process+classpath is constructed from the top level deps.edn specification without any aliases merged in?#2018-10-1916:45grzmIn Ions, is there a way to piggyback custom code on the validate step during deploy? For example, confirming that the equivalent of -main started without error?#2018-10-1918:52ro6Second. I'm wondering about the JVM init process in general with Ions.#2018-10-1918:55ro6For example, if I want to set a global uncaught exception handler (as recommended here: https://stuartsierra.com/2015/05/27/clojure-uncaught-exceptions), which I'd normally do once in -main, what's the best place to do that in Ions?#2018-10-2014:09luchiniI’ve been using the very entry-point function to do all sorts of global system setup and, wherever possible, memoizing things along the lines of the Datomic sample app.#2018-10-2014:09luchiniI’m not pretty sure I like this approach so I’m monitoring whether it scales well.#2018-10-1917:32jeff.terrellI notice that the latest version of com.datomic/client-cloud on Maven is v0.8.66 [1], but that version is not listed on the releases page of Datomic Cloud [2]. Is v0.8.66 not an officially supported version? Asking because I encountered a problem with it [3] (which may not be its fault; I dunno).
[1] https://search.maven.org/artifact/com.datomic/client-cloud/0.8.66/jar
[2] https://docs.datomic.com/cloud/releases.html#current
[3] https://github.com/ComputeSoftware/datomic-client-memdb/issues/2#2018-10-1917:54preDoes Datomic support SQL Server? The documentation includes three other sql databases.#2018-10-1919:34joshkha few hours ago our applications running on AWS lost their connection to datomic cloud, as did my ability to connect locally via the socks proxy. is there an easy way to debug this? we're getting the following error:
{:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message "Total timeout elapsed",...
the docs describe this as a likely configuration error but nothing has changed locally or internal to the VPC.#2018-10-1919:37joshkh(or the following.. but i'm not sure how to recover or why it killed our silo'ed applications https://docs.datomic.com/cloud/transactions/transaction-processing.html#timeouts)#2018-10-1921:35ro6Unable to deploy: $ clojure -Adev -m datomic.ion.dev "{:op :push :uname \"jvm-init-test-1\"}"
{:command-failed "{:op :push :uname \"jvm-init-test-1\"}",
 :causes
 ({:message "Map literal must contain an even number of forms",
   :class RuntimeException})}
#2018-10-1921:36ro6same error with $ clojure -Adev -m datomic.ion.dev '{:op :push :uname "jvm-init-test-1"}'#2018-10-1921:37Joe Lanetry without dashes in the uname OR remove the double quotes and try it as a symbol.#2018-10-1921:37Joe LaneLet me know how it goes @robert.mather.rmm#2018-10-1922:01ro6Not so good...#2018-10-1921:41ro6no dashes: "{:op :push :uname \"jvminittest1\"}" -> same fail
as keyword: {:command-failed "{:op :push :uname :jvminittest1}",
 :causes
 ({:message "Incorrect args for :push",
   :class ExceptionInfo,
   :data
   #:clojure.spec.alpha{:problems
                        ({:path [:uname],
                          :pred datomic.ion.dev.specs/name?,
                          :val :jvminittest1,
                          :via
                          [:datomic.ion.dev.specs/push-args
                           :datomic.ion.dev.specs/uname],
                          :in [:uname]}),
                        :spec :datomic.ion.dev.specs/push-args,
                        :value {:op :push, :uname :jvminittest1}}})}
-> Spec fail, probably the keyword in that position doesn't satisfy datomic.ion.dev.specs/name?#2018-10-1921:42ro6Maybe it's the way my shell is escaping the string?#2018-10-1921:44ro6sorry, you said symbol#2018-10-1922:00ro6as symbol: "{:op :push :uname jvm-init-test-1}" -> same fail (odd number of forms)
as symbol without dashes: "{:op :push :uname jvminittest1}" -> same fail#2018-10-1922:04ro6@lanejo01 If I want to escalate this a bit more, do you think the dev forum or the Cognitect support case system is better?#2018-10-1922:06Joe Laneclojure -A:dev -m datomic.ion.dev '{:op :push :uname "some-uname"}'#2018-10-1922:06Joe LaneWhen I push an Ion, it looks like this#2018-10-1922:07Joe LaneNote the single quotes and how i’m not escaping the double quotes.#2018-10-1922:07Joe LaneDoes this also not work for you? Because this works for me several times per day.#2018-10-1922:09ro6yep, that was my first attempt. I think it may be a shell issue. I'm running Bash on Debian on the Windows Subsystem for Linux (WSL)#2018-10-1922:10Joe LaneWeird, because the first one you posted here contains escaped double quotes.#2018-10-1922:11ro6Yeah, I had tried a few by that time#2018-10-1922:11Joe Lanewhat if you commit a WIP then push?#2018-10-1922:17ro6ok, now it's a for real problem: $ clojure -A:dev -m datomic.ion.dev '{:op :push}'
{:command-failed "{:op :push}",
 :causes
 ({:message "Map literal must contain an even number of forms",
   :class RuntimeException})}
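"Map literal must contain an even number of forms" is the Clojure/EDN reader complaining about a map literal with a dangling key somewhere on the read path — as the thread goes on to discover, a typo'd ion-config.edn. The message is easy to reproduce in isolation with plain Clojure:

```clojure
;; The reader throws this exact message for any map literal with an
;; odd number of forms, e.g. a key with a missing value in an edn file.
(require '[clojure.edn :as edn])
(try
  (edn/read-string "{:app-name}") ; one form inside the map -> invalid
  (catch RuntimeException e
    (.getMessage e)))
;=> "Map literal must contain an even number of forms"
```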
#2018-10-1922:18ro6@lanejo01 What is your com.datomic/ion-dev version?#2018-10-1922:20Joe LaneSomething tells me the issue isn’t in the library. 0.9.176 is the version.#2018-10-1922:20ro6I'm on "0.9.176" as well, which is also the one Stu used in the event example#2018-10-1922:20ro6I agree, just covering the bases#2018-10-1922:21ro6It's weird though, because I have pushed successfully before from this shell#2018-10-1922:22Joe LaneWhat were the last 5 things you did before trying to push? Can you eval them in the repl and confirm they dont have typos?#2018-10-1922:23Joe LaneDo you have a typo in your ion-config?#2018-10-1922:23ro6haha, yes I do.#2018-10-1922:24ro6@lanejo01 Thank you sir.#2018-10-1922:25ro6Error message definitely could have pointed a bit better, but I still feel stupid...#2018-10-1922:26Joe LaneDon’t feel stupid, happens to all of us.#2018-10-2000:02ro6@stuarthalloway Looks like a typo: https://github.com/Datomic/ion-event-example/blob/master/src/datomic/ion/event_example.clj#L48
"/datomic-shard/" should be "/datomic-shared/"#2018-10-2201:29jaretThanks for the report, I’ve corrected the typo.#2018-10-2014:06joshkhdoes anyone have experience running through the "first upgrade" of a datomic cloud formation? our datomic cloud instance crashed yesterday, and in an attempt to revive it i've started an upgrade process, but it fails when deleting the existing stack.#2018-10-2018:40eoliphantyeah we've done quite a few at this point. we've ~10 solo's (1/dev) as well as 3-4 production sized systems and we generally apply the new revs as they come out. So far not too many problems.
@joshkh do you have the specific thing that failed? I ran into an issue once, where a delete failed because of the additional policies i'd added to the IAM role, I had to manually remove those in order for it to succeed.#2018-10-2123:26joshkhthanks for the confirmation. my road blocks were some vpcs, subnets, gateways etc. deleting the old stack left a lot of configuration hanging around, however upgrading to separate compute/storage stacks got my cloud instance back up and running.#2018-10-2117:38eoliphantok.. I have a pretty weird problem. I have some ion code that runs a query and i'm getting different behavior on the server vs running the same code locally via a repl. I've done all sorts of testing, dumped my params map to a string, edn parsed it and re-run the same code in the repl, dumped the param types to make sure that nothing weird was happening there, but so far, again, calling the exact same function (which just calls a query and flattens the result) with the exact same parameters is behaving differently on the server (incorrectly) vs the client
Ok, I think this might be a bug. I noticed I had some other code with virtually the same query, that wasn't exhibiting the same behavior. Ultimately the only difference between the two was that the one that is misbehaving was only returning a bound entity id, where the other was using it in a pull such that basically
; returns seq of ids on client, empty seq on server for exact same :where, params, etc
(d/q '[:find ?o ...
; as expected identical behavior on client and server
(d/q '[:find (pull ?o [...])...
; working around the first with something like following works fine
(->>
(d/q '[:find (pull ?o [:db/id])...
flatten
(map :db/id)
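A note on why the pull-based workaround above behaves: clojure.core/flatten unwraps the nested result tuples but does not descend into maps, so the pull maps come through intact and :db/id can be mapped over them. In isolation, with hypothetical result data of the same shape:

```clojure
;; flatten recurses into sequential collections only, not maps, so a
;; seq of one-element tuples of pull maps flattens to a seq of maps.
(flatten [[{:db/id 1}] [{:db/id 2}]])
;=> ({:db/id 1} {:db/id 2})
(map :db/id (flatten [[{:db/id 1}] [{:db/id 2}]]))
;=> (1 2)
```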
#2018-10-2317:55eoliphant@U1QJACBUM did you guys see this?#2018-10-2318:35jaret@U380J7PAQ Hey! Just saw this. I am not sure I am following you on the use of “client” and “server” here. Are you saying that you are noticing a different behavior between invoking a query on Ions and a local repl? Or, are you referring to a difference between on-prem/peer-server and client? Do you have a full gist showing what you’re running and where?#2018-10-2318:36eoliphantsorry I wasn't clear. Yes, that's exactly it. Seeing this behavior in local REPL, tunneled to server, vs on the server itself.#2018-10-2318:37jaretOk so if I write some queries as above and invoke them with query and connected directly to the stack via REPL I should see different behavior?#2018-10-2318:38jaretIf that’s the case, I’ll go reproduce and figure out what’s going on here.#2018-10-2318:38eoliphantyeah that's what I was seeing. Here's the actual full code
(defn get-all-results
[db {:keys [run/number primer-pair/pair-id sample/extid customer/cust-id]
:as params}]
(cast/event {:msg "get-all-results"
::params [number pair-id extid cust-id]
})
(->>
(d/q '[:find #_?o (pull ?o [:db/id])
:in $ ?run-num ?pp ?samp-id ?cust-id
:where
[?r :run/number ?run-num]
[?p :primer-pair/pair-id ?pp]
[?s :sample/extid ?samp-id]
[?u :customer/samples ?s]
[?u :customer/cust-id ?cust-id]
[?o :otuabund/run ?r]
[?o :otuabund/primer-pair ?p]
[?o :otuabund/sample ?s]
]
db number pair-id extid cust-id)
flatten
(map :db/id))
)
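For readers following along, the two `:find` forms differ only in row shape, which is why the workaround ends with `flatten` and `(map :db/id)`. A pure-data sketch of the unpacking (row values hypothetical, not from the thread):

```clojure
;; :find ?o                 => rows are 1-tuples of entity ids,
;;                             e.g. #{[17592186045430]}
;; :find (pull ?o [:db/id]) => rows are 1-tuples of maps,
;;                             e.g. [[{:db/id 17592186045430}]]
;; flatten removes the tuple nesting (maps are not sequential, so they
;; survive), then :db/id is read from each map:
(defn unpack-pull-result [rows]
  (->> rows flatten (map :db/id)))
```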
#2018-10-2318:38eoliphantwith my 'fix'#2018-10-2318:39eoliphantit's super weird#2018-10-2318:39jaretRoger! I'll go digging in.#2018-10-2318:39eoliphantbut uncomment the ?o, comment the pull, take out the map, and I see the behavior#2018-10-2318:39eoliphantI ran a ton of tests#2018-10-2318:40eoliphantcreated a quick wrapper ion#2018-10-2318:40eoliphantso I could test it from the lambda console#2018-10-2318:40eoliphantetc#2018-10-2318:41eoliphantbecause it was being called from an API and what have you. so wanted to get that out of the loop. But once I got it stripped down, the pull version worked; if I pushed the one that just returned the id, it failed#2018-10-2123:14joshkhtotally inconsequential because it's not used, but there's a typo in the Datomic/ion-starter example. https://github.com/Datomic/ion-starter/blame/master/src/datomic/ion/starter.clj#L97 "contect" should be "context"? I tried to create a PR but I don't think they're accepted upstream so dropping a note here. 🙂#2018-10-2201:26jaretThanks for the report/catch. I've fixed it.#2018-10-2209:09joshkh:+1:#2018-10-2210:07mkvlrthere hasn't been any progress on https://groups.google.com/forum/#!topic/datomic/kOBvvc228VM has there? We'd also like to vote for getting more info on the exception. We'd like to know which attribute(s) cause the db.error/unique-conflict without having to parse the string…#2018-10-2219:42joshkhany ideas? java10 on a fresh ec2 instance: Caused by: java.lang.IllegalArgumentException: Can't define method not in interfaces: recent_db, compiling:(datomic/client/impl/shared.clj:304:1)#2018-10-2220:00eoliphantI think I've seen some other weirdness mentioned with j10. We're moving all of our stuff to cloud, but still using 8 for on-prem#2018-10-2220:00eoliphantah wait, is that client code?#2018-10-2220:14joshkhyup! client code.
my project compiles fine on my local machine (with the same JVM), but now I have a hunch that I've juggled quite a few deps via lein install and maybe can't reproduce their order on a remote machine. yikes.
wait a little bit, try again, and it works:
; A few seconds after an Ions deploy
$ aws lambda invoke --function-name my-compute-group-testfn /dev/stdout
{
"isBase64Encoded" : false,
"statusCode" : 500,
"headers" : {},
"body" : "java.io.IOException: Premature EOS, presumed disconnect"
}{
"StatusCode": 200,
"ExecutedVersion": "$LATEST"
}
; And then a few more second later
$ aws lambda invoke --function-name my-compute-group-testfn /dev/stdout
{"statusCode":200,"headers":{"Content-Type":"application\/edn"},"body":"(some working result)","isBase64Encoded":true}{
"StatusCode": 200,
"ExecutedVersion": "$LATEST"
}
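The pattern above (first post-deploy invocation fails, later ones succeed) suggests a simple retry when smoke-testing. A generic sketch, not Datomic-specific; the helper and the example function name are made up:

```clojure
;; Hypothetical helper: call thunk f up to n times, pausing between attempts,
;; rethrowing the last exception if every attempt fails.
(defn with-retries [n pause-ms f]
  (loop [attempt 1]
    (let [[status v] (try [:ok (f)]
                          (catch Exception e [:error e]))]
      (cond
        (= status :ok) v
        (< attempt n)  (do (Thread/sleep pause-ms)
                           (recur (inc attempt)))
        :else          (throw v)))))

;; e.g. (with-retries 5 2000 #(invoke-test-lambda)) ; invoke-test-lambda is hypothetical
```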
#2018-10-2420:45ro6Solo topology or production?#2018-10-2507:58stijnI'm seeing this too, both solo and production topology#2018-10-2508:00stijn(although we have only tested with production topology without HA). I can report about full production topology later. But it seems like this only happens on the first request.#2018-10-2410:33joshkhalso (unrelated) - how might one go about applying ring middleware to functions that have been ionized with datomic.ion.lambda.api-gateway/ionize? for example, i'd like to use ring.middleware.format.#2018-10-2410:52joshkhfigured out that i have to wrap each function individually before ionizing it. :+1:#2018-10-2410:41grzm@steveb8n this is a stackoverflow error in Cider, not on a deployed machine. datomic.ions.cast/initialize-redirect sends cast/event, cast/dev and cast/alert to somewhere other than Cloudwatch for local development. (https://docs.datomic.com/cloud/ions/ions-monitoring.html#local-workflow)#2018-10-2410:42steveb8nAh ok, my mistake#2018-10-2413:35joshkhi think there might be a bug in the ion/get-params fn. it's dropping the first letter of keys that are defined at the root level:
$ aws ssm put-parameter --name rootlevelparam --type String --value "somevalue"
{
"Version": 1
}
(ion/get-params {:path "/"})
=> {"ootlevelparam" "somevalue"}
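Note for later readers: as discussed further down the thread, parameters under the default `/datomic-shared/` prefix are readable out of the box, and nested paths avoid keying anything at the root. A sketch assuming that layout (the env and app segments here are placeholders):

```clojure
(require '[datomic.ion :as ion])

;; Parameter created with a nested name, e.g.
;;   aws ssm put-parameter --name /datomic-shared/dev/my-app/some-param \
;;     --type String --value "somevalue"
;; is then read back with the matching path prefix:
(ion/get-params {:path "/datomic-shared/dev/my-app/"})
```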
#2018-10-2414:29joshkhspeaking of which, should lambdas created via ions have access to SSM by default, presumably as part of their generated role? i'm getting the following error User: arn:aws:sts::accid:assumed-role/my-compute-and-region/someid is not authorized to perform: ssm:GetParametersByPath on resource: arn:aws:ssm:my-region:accid:parameter/location/here/ (Service: AWSSimpleSystemsManagement; Status Code: 400; Error Code: AccessDeniedException; Request ID: some-uuid#2018-10-2415:24amarHi. Has anyone come across this error before? Seems to happen if I am transacting using a transaction function.
java.lang.RuntimeException: Reader tag must be a symbol
File "NativeConstructorAccessorImpl.java", in sun.reflect/newInstance0
File "NativeConstructorAccessorImpl.java", line 62, in sun.reflect/newInstance
File "DelegatingConstructorAccessorImpl.java", line 45, in sun.reflect/newInstance
File "Constructor.java", line 423, in java.lang.reflect/newInstance
File "Reflector.java", line 180, in clojure.lang/invokeConstructor
File "form-init513985284846454305.clj", line 1, in user/[fn]
File "error.clj", line 135, in datomic.error/deserialize-exception
File "error.clj", line 117, in datomic.error/deserialize-exception
File "peer.clj", line 399, in datomic.peer.Connection/datomic.peer.Connection
File "connector.clj", line 169, in datomic.connector/[fn]
File "connector.clj", line 167, in datomic.connector/[fn]
File "MultiFn.java", line 233, in clojure.lang/invoke
File "connector.clj", line 194, in datomic.connector/[fn]
File "connector.clj", line 189, in datomic.connector/[fn]
File "connector.clj", line 187, in datomic.connector/[fn]
File "core.clj", line 2022, in clojure.core/[fn]
(f))
File "AFn.java", line 18, in clojure.lang/call
File "FutureTask.java", line 266, in java.util.concurrent/run
File "ThreadPoolExecutor.java", line 1149, in java.util.concurrent/runWorker
File "ThreadPoolExecutor.java", line 624, in java.util.concurrent/run
File "Thread.java", line 748, in java.lang/run
#2018-10-2421:31amarFor posterity, it seems the issue was related to using destructuring with namespaced keys. The transaction function had something like
(let [{:person/keys [age name]} data] ,,,)
which gets stored in datomic as
(let[#:person{:keys[age name]} data] ,,,)
changing to
(let [age (:person/age data) name (:person/name data)] ,,,)
was one fix. The root cause was an old version of tools.reader and/or another dependency. Upgrading dependencies independent of the code change resolves the issue.#2018-10-2418:16kennyI see the latest version of client-cloud is 0.8.66 on Maven (https://search.maven.org/search?q=g:com.datomic%20AND%20a:client-cloud&core=gav), but the Datomic releases page (https://docs.datomic.com/cloud/releases.html#current) says the current release of client-cloud is 0.8.63. Which is correct?#2018-10-2418:20jaret@U083D6HK9 the release page is correct. I’ll confirm with the team if we need to update to .66#2018-10-2419:33csmSo I created my first ion to hook up to API gateway, and everything works, except my HTTP response bodies are turning into base-64. Is there an obvious thing I messed up that would cause that?#2018-10-2420:49ro6Nope. As long as you set the binary encoding option in API Gateway to */* (under Deployment in the tutorial), you should get a normal response as you'd expect from the outside (eg using Curl). I guess the Gateway does the translation for you, but not from within the testing UI. Everyone seems to run into this! (such as: https://forum.datomic.com/t/base-64-encoded-response-body-in-api-gateway-method-tester/560)#2018-10-2421:22csmok, I think my problem was I deployed the API before setting the binary content type. A re-deploy seemed to fix that#2018-10-2420:41pvillegas12Is there any good example of updating a to-many relationship as one adds objects over time? Suppose I have a datom model that has a to-many ref rules. I’ll be adding rules over time, is the solution to this transact a new datom as
(d/transact conn {:tx-data [
{:model/name "name" :model/rules (conj past-rules {:rule/name "new-rule"})
]})
#2018-10-2500:41favilaThe map syntax is sugar for [:db/add ...]#2018-10-2500:42favilaIt’s not a “merge” or “reset”#2018-10-2500:42favilaThe conj is therefore unnecessary#2018-10-2501:01pvillegas12How would it be with :db/add?#2018-10-2501:04pvillegas12@U09R86PA4 would it be something like [db/add model-id :model/rules rule-id]?#2018-10-2501:05favilaYes#2018-10-2501:05favilaYou can still use the map syntax just understand what it is doing#2018-10-2501:07favila{:db/id a :many-ref [b]} expands to a single db/add, never ever any db/retract#2018-10-2501:07favilaThere is no map syntax for retraction#2018-10-2501:20pvillegas12Yeah, thanks! My conceptual problem was the ability to add a datom to a many-ref with a single datom on the many side#2018-10-2420:43matthaveneranyone used this library? https://github.com/RallySoftware/datomic-replication#2018-10-2421:32csmcan I have multiple ion “projects” per compute stack, or am I limited to one? That is, only one resources/datomic/ion-config.edn? I understand I can split things up into deps, I just want to understand the deployment strategy#2018-10-2600:54eoliphantAFAIK query groups are the only 'unit' of separation in a given system#2018-10-2422:00csmalso, I frequently get 502 errors on what looks like lambda cold starts (https://forum.datomic.com/t/api-gateway-internal-server-error/678)#2018-10-2600:56eoliphantwe've had some success by just pointing CW scheduled events at them#2018-10-2422:07kenny@csm I have also ran into that. I don't have a solution.#2018-10-2423:33luchiniWe are facing a fascinating problem with Datomic Ions. As our code base grew during the last couple of weeks, deploying to Datomic Ions started failing (CodeDeploy itself gives up and rolls back a previous version to the instance).
We initially thought it could be a problem in our setup (we were still on 441) so we updated to 441-8505 but the same behavior kept plaguing us.
After spending a considerable amount of time investigating, we found two ways to “solve” the issue but neither seems reasonable enough:
1) Hack Datomic’s EC2 instances and bump up the stack size of the JVM process
2) Keep the code base much simpler than we would need to 🙂#2018-10-2423:33luchiniNow that I come to think of it, it’s more like one solution 😄#2018-10-2423:35luchiniThe indication that increasing the stack would work was the exception thrown on starting up datomic.cluster-node: it was a stack overflow.#2018-10-2423:36luchiniI wonder about the reasoning behind keeping the stack at 256K. I’m pretty sure @marshall has a superb reason for it.#2018-10-2423:37marshallI have no such thing 😉#2018-10-2423:37marshallWe’re aware of the stack size issue#2018-10-2423:37marshallit was configured the way it was to make solo run well on a very small instance#2018-10-2423:38marshallwe’re working on general options all around, but editing the CFT to set the stack size higher is a reasonable option for now#2018-10-2423:38luchiniWe’ve got production topology and dedicated query groups… 🙂#2018-10-2423:38marshallyep, so you’re not going to hit the same memory issues as if you bumped the stack size on a small solo node#2018-10-2423:39luchiniI’m not an expert in CF, so what’s the implication of changing it on the CFT? Do we lose this setting when we upgrade the stack in the future?#2018-10-2511:04marshallYes, it may be that you’ll have to re-set it on upgrade. It depends a bit on what changes in the upgrade#2018-10-2515:05luchiniThank you!#2018-10-2609:29stijnis it this entry that you are talking about? "JvmFlags": "-Dclojure.spec.skip-macros=true -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -XX:MaxDirectMemorySize=256m"#2018-10-2819:33luchiniI believe so… it’s missing the -Xss though#2018-10-2423:39luchini(btw… we’ve currently changed it there, but I felt… dirty 😄 )#2018-10-2509:54joshkhbump from yesterday - any clues regarding ion lambda access to SSM? I see ion/get-params used in the datomic/ion-event-example project. I ended up adding a full SSM policy to our [compute-group]-DatomicLambdaRole (the execution role for the lambda) with no luck.
the lambda returns User: is not authorized to perform: ssm:GetParametersByPath on resource: .... any help is much appreciated. 🙂#2018-10-2510:28steveb8nI can maybe help. I’ve got a node.js lambda reading SSM parameters. here’s the IAM perms that were required#2018-10-2510:28steveb8n{
  "PolicyName": "root",
  "PolicyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        "Effect": "Allow",
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "ssm:GetParameter",
          "ssm:GetParameters"
        ],
        "Resource": {
          "Fn::Join": [
            "",
            [
              "arn:aws:ssm:",
              {
                "Ref": "AWS::Region"
              },
              ":",
              {
                "Ref": "AWS::AccountId"
              },
              ":parameter/",
              {
                "Ref": "Application"
              },
              "-*"
            ]
          ]
        }
      }
    ]
  }
}#2018-10-2510:29steveb8nin other words, try the two actions in that JSON for your IAM role#2018-10-2511:06marshallJosh, as mentioned here: https://docs.datomic.com/cloud/ions/ions-reference.html#parameters-example
there is a default datomic-shared parameter store that is readable by all Datomic nodes#2018-10-2511:07marshallany additional parameter stores would require you to set up your own IAM permissions#2018-10-2511:08marshallalso see https://docs.datomic.com/cloud/operation/access-control.html#authorize-ions#2018-10-2522:56grzm@U0510KXTU For an additional anecdatapoint: we've currently handled this with full read access (we're using application-specific config outside of datomic-shared), but are looking to whittle it down from there.#2018-10-3012:19joshkhthanks! just returning to say that /datomic-shared/ did in fact work out of the box, and the policy suggestion was really useful.#2018-10-2602:01pvillegas12What is the canonical way to represent integers in datomic?#2018-10-2602:02Joe LaneAs a long.#2018-10-2713:43Logan Powell👋 Hi everyone, I'm trying to find some resources on normalizing nested data structures for designing/loading data into Datomic... I'm a little confused about how to represent ordered collections like PersistentArrayMap or vectors. Any pointers?#2018-10-2713:46Logan PowellI'm currently thinking of batch loading (maybe with doseq) one transaction at a time using time as a way to order the values, but that seems complected #2018-10-2715:40eoliphantHi @loganpowell say attributes with many cardinality don't maintain any concept of ordering. If you need this, you'd need to provide additional attributes (`:next` refs for lists, :index or something for arrays) to capture that in the db. There are a couple libs that address some of this that might be helpful directly or give you some ideas https://github.com/vvvvalvalval/datofu#implementing-ordered-to-many-relationships-with-an-array-data-structure, https://github.com/dwhjames/datomic-linklist#2018-10-2715:41Logan PowellThank you @eoliphant!
I also saw this: https://github.com/pmbauer/datomizer I will definitely look into the libraries you've recommended!#2018-10-2717:45schmeeis there anything equivalent to Postgres UNNEST for cardinality many attributes?#2018-10-2717:45schmeeI have an entity that contains a cardinality many attribute required-tags, then I want to get all the tags in that array#2018-10-2717:46schmeenow I’m doing two queries, one to get the array, and one that takes the array as input to get the tags themselves#2018-10-2717:46schmeebut is there any way to do it with only one query?#2018-10-2718:36favilaCan you show an example of what you are doing? I don’t know the valuetype of required-tags or the structure of your entities, or precisely what you mean by the two queries#2018-10-2718:36favilaCan you show an example of what you are doing? I don’t know the valuetype of required-tags or the structure of your entities, or precisely what you mean by the two queries#2018-10-2718:45schmeesure!#2018-10-2718:45schmee;; What the list of tags looks like
user=> (d/q '[:find [?t ...]
:where [_ :file/required-tags ?t]]
@db)
["tag1" "tag2" "tag3" "tag4"]
;; What I wish would work
user=> (d/q '[:find ?e
:where [?e :tag/name ?t]
[_ :file/required-tags ?t]]
@db)
#{}
;; What does work, but uses two queries
user=> (d/q '[:find [(pull ?e [*]) ...]
:in $ [?names ...]
:where [?e :tag/name ?names]]
@db
(d/q '[:find [?t ...]
:where [_ :file/required-tags ?t]]
@db)))
[<a bunch of data>]
#2018-10-2720:05favilaWhat is it you want? What is “a bunch of data”?#2018-10-2720:06favilaI know you are doing some kind of join but I’m having trouble seeing what it is#2018-10-2720:07favilaI do t see that there is any difference between what you wish works and what does work#2018-10-2720:08favila:tag/name and :file/required-tags are both string attrs?#2018-10-2720:09favilaYou want entities with :tag/name values that are anywhere asserted as a :file/required-tags value?#2018-10-2723:09schmeein the second query, the :where essentially translates to [?e :tag/name ["tag1" "tag2" "tag3" "tag4"]]#2018-10-2723:09schmeeWhat I’m after is to “unnest” the array into something like
(or [?e :tag/name "tag1"]
[?e :tag/name "tag2"]
[?e :tag/name "tag3"]
[?e :tag/name "tag4"])
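As the replies below explain, datalog clauses match individual datoms, so no such expansion is needed; the cardinality-many values are already "unnested" at the datom level. A single-query sketch using the attribute names from the thread (the `@db` value is assumed, matching the earlier snippets):

```clojure
;; [_ :file/required-tags ?t] matches one datom per asserted tag string,
;; and ?t then unifies with each :tag/name assertion, joining in one query.
(d/q '[:find [(pull ?e [*]) ...]
       :where
       [_ :file/required-tags ?t]
       [?e :tag/name ?t]]
     @db)
```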
#2018-10-2800:25favilaNo that is incorrect that is not how catalog pattern matching clauses work#2018-10-2800:26favila*datalog#2018-10-2800:26favilaThey match individual assertions (datoms)#2018-10-2800:27favilaDid you actually try your second query? Is that output from an actual repl?#2018-10-2721:58idiomancyman. Cognitect really needs to partner with Salesforce. Everything that salesforce needs in order to accurately trend and analyze domain data is built into the core guarantees of datomic#2018-10-2723:44steveb8nI work a lot with SFDC. Analytic snapshots do this as well. They don’t work for your requirements?#2018-10-2721:58idiomancythat would skyrocket adoption#2018-10-2912:24stijnwhat does this error mean? what can we do to speed up the 'loading of the database'. This is when we do a first request to an ion function and are trying to open a connection.
{
"Type": "clojure.lang.ExceptionInfo",
"Message": "Loading database",
"Data": {
"CognitectAnomaliesCategory": "CognitectAnomaliesUnavailable",
"CognitectAnomaliesMessage": "Loading database"
},
"At": [
"clojure.core$ex_info",
"invokeStatic",
"core.clj",
4739
]
}#2018-10-2912:31marshall@stijn This indicates that you’ve hit a node that hasn’t yet loaded the requested database. You should only see this on ‘startup’ - are you getting the error in an ongoing system?#2018-10-2912:31stijnno, when deploying a new version of the ions#2018-10-2912:32stijnbut it takes quite a while (couple of minutes)#2018-10-2912:32marshallare you on production or solo?
also, how big is the database?#2018-10-2912:32stijnthis is a production topology, but with the desired instance count set to 1 (staging environment, we don't want HA)#2018-10-2912:33marshalli’m surprised it takes minutes; i would expect order of 15 seconds or so#2018-10-2912:35stijnthe error persisted for 11 minutes. i'll check the database size.#2018-10-2912:38stijnStorage size (in bytes) 259.36 MB
Item count 242,223#2018-10-2912:38stijnin the dynamo table#2018-10-2912:39marshallcan you get the db-stats from a REPL for that database? (https://docs.datomic.com/client-api/datomic.client.api.html#var-db-stats)#2018-10-2915:12mgrbyteHi, I'm looking for an example of the invocation syntax for using a classpath functions within a transaction - can anyone point me at one? (or have I misunderstood what they can do?) (on-prem)#2018-10-2915:56favilaIt's subtle. Use a symbol instead of a keyword for the transaction function#2018-10-2915:56favilain the transaction#2018-10-2915:56favila[my/tx-fn a b c] instead of [:my/tx-fn a b c]#2018-10-2915:57favilathe latter needs to be created with d/function or #db/fn and transacted in the database#2018-10-2915:57favilathe former just needs to be on the transactor's classpath#2018-10-2915:58mgrbytegreat, many thanks! out of curiosity, where did you find this? my google foo has deserted me (couldn't find it in datomic docs)#2018-10-2916:04favilahttps://docs.datomic.com/on-prem/database-functions.html#using-transaction-functions#2018-10-2916:05favilathey never call out specifically the difference that one is a keyword resolving to a :db/ident entry another is a symbol resolving to a clojure function on the classpath#2018-10-2916:05favilabut that shows the comparison in invocation#2018-10-2916:06favilaalso the section above shows how to add to the transactor's classpath via an environment variable#2018-10-2916:06favilainstead of adding more jars to libs/#2018-10-2916:24mgrbytethanks, I've seen that before but my brain failed me.. 😆 thx again#2018-10-2917:16ro6Can anyone give advice on debugging a Datomic Ion CodeDeploy failure (Solo topology)? It times out on the ValidateService step with nothing helpful showing up in the CloudWatch logs at all.#2018-10-2918:03ro6I think what's happening is that for whatever reason the app is starting really slow, so even though it's working, CodeDeploy gives up and rolls back. 
Is there a way to increase the timeout on that?#2018-10-2918:19ro6@jaret I'm unable to deploy to my Solo topology. CodeDeploy succeeds all the way to the ApplicationStart event almost immediately, then hangs on ValidateService until it times out (2 minutes). Then, at least 30 seconds later, I start seeing CloudWatch logs related to Datomic setting up the system (eg ClusterNodeCreate, CreateSoloComponents, and Registered solo node), followed by logs from my app indicating successful startup. I can't hit the new version though due to the CodeDeploy rolling back.#2018-10-2919:09jaretIf you’re not seeing any errors in cloudwatch, could you try a trick that I sometimes use to print the stack to console. Add a string to the end of your deploy command.
clojure -Adev -m datomic.ion.dev '{:op :deploy, :group jaret-lambda-test-compute, :uname jaret-testcommand}' <string here>
#2018-10-2919:12ro6Trying now...#2018-10-2919:26ro6Just FYI, I've been running deploys from the REPL using a modified version of https://gist.github.com/olivergeorge/cc0ca9a945cb372d35d97e45573656ee#2018-10-2919:26ro6now that I ran from the CLI again, I do see this: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See for further details.
#2018-10-2919:28ro6failed again with the same pattern#2018-10-2919:32jaretDid the stack print?#2018-10-3012:17joshkhhas anyone successfully deployed a CORS enabled api gateway for datomic ions? it's doing my head in. setting the API's Binary Media Types to */* returns my edn/json/transit as expected but breaks CORS. removing */* fixes CORS but all responses come back base64 encoded.#2018-10-3012:30joshkhI've dumbed it all down to json, the gateway's default:
(defn test-public-handler
"A test handler"
[{:keys [headers body]}]
{:status 200
:headers {"Content-Type" "application/json"
"Access-Control-Allow-Origin" "*"}
:body (json/write-str {:test 123})})
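On the earlier middleware question: wrapping is applied per handler, before ionizing. A sketch (the middleware here is a hand-rolled stand-in; substitute e.g. the ring.middleware.format wrappers):

```clojure
(require '[datomic.ion.lambda.api-gateway :as apigw])

;; Hypothetical middleware: decorates the ring response map. The important
;; part is the order: the middleware wraps the ring handler first, and
;; ionize then adapts the wrapped handler for API Gateway.
(defn wrap-extra-header [handler]
  (fn [request]
    (assoc-in (handler request) [:headers "X-Handled-By"] "ion")))

(def test-public-handler-ionized
  (apigw/ionize (wrap-extra-header test-public-handler)))
```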
#2018-10-3012:35stijnI'm trying to setup AWS Codebuild for continuous integration of ions but can't get ion push to work. Getting the following error: "VPC endpoints do not support cross-region requests (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: B45398A68E31CD5C; S3 Extended Request ID: ZKoYFM1MHiGru6sw5tyy3nVslwFKbnkVXp4PU4ASWRKjbzMzKiufyZWLOFSJhyHDQ2yKybhj/3g=)"#2018-10-3012:35stijnanyone any experience with this?#2018-10-3014:24souenzzodatomic-peer throws a NPE from datomic.db.ProcessInpoint.inject when I try to (d/with db [{nil "value"}]) by trying to call (namespace nil)#2018-10-3015:08Joe Lane@stijn it sounds to me like codebuild doesn’t have the right role to push.#2018-10-3015:10stijn@lanejo01 to try out, I gave it an assumed role with all permissions in the policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}#2018-10-3015:11stijnI think it has something to do with downloading dependencies from <s3://datomic-releases-1fc2183a/maven/releases/>#2018-10-3015:12Joe LaneIs codebuild in a different region from your system?#2018-10-3015:12stijnno#2018-10-3015:13Joe Lanehttps://stackoverflow.com/questions/39707923/aws-cli-copy-between-s3-regions-on-ec2#2018-10-3015:14Joe LaneInteresting problem, unfortunately I don’t have any answers…#2018-10-3015:22stijnyeah, I saw that too, but until I know what is the exact problem, it's difficult to start looking for solutions in the VPC world of AWS 🙂#2018-10-3017:39SalI’m having trouble writing a transaction to add an entity to an existing one to many relationship in datomic … something like this is failing me:#2018-10-3017:46SalCan someone help me or point me to a resource on how to add to a ref attribute that has cardinality many?#2018-10-3017:49joshkhare you getting an error somewhere?#2018-10-3017:52joshkhmaybe try wrapping the value of :user/docs in a vector? :user/docs [{:doc/id (d/squuid) :doc/name "Foo"}]#2018-10-3018:04marshall@fuentesjr 2 things. 1) what @joshkh said ^, cardinality/many assertions need to be in a vector, and if you’re asserting a lookup ref, which is a vector itself, you’ll need 2 vectors (i.e. [[:doc/id 12345]])
2) you’re mixing list and map form in your transaction. I would use the map form:
{:db/id 17592186045419 :user/docs [{:doc/id (d/squuid....]}#2018-10-3018:08Sal:message “:db.error/not-an-entity Unable to resolve entity: #:barber{:id #uuid \“5bd89d31-4fcf-4ce1-b82b-15ef047a8d6e\“, :name \“Foo\“} in datom [17592186045419 :barbershop/barbers #:barber{:id #uuid \“5bd89d31-4fcf-4ce1-b82b-15ef047a8d6e\“, :fname \“Foo\“}]#2018-10-3018:09SalOk let me try your suggestions#2018-10-3018:14joshkh@fuentesjr not a technical explanation, but whatever values you transact in the one-to-many vector are treated as additions, so you don't need to include previous values. pretty sure it's a Set in the background.#2018-10-3018:16joshkhbecomes obvious once you get the hang of it, but a slight gotcha if you're used to setting & replacing values.#2018-10-3018:29Sal@joshkh @marshall Thank you guys. I was able to commit the transaction after wrapping the doc entity in a vector and sticking to the map form#2018-10-3018:29joshkhmap form is the way to go. 💪#2018-10-3018:31SalBy the way, if I wanted to remove one of those documents at some point, would I retract the document?#2018-10-3018:33joshkhfor your one-to-many relationship you'd retract in the same way: supply a vector of ref id's to remove from the collection#2018-10-3018:34Saland how do i specify it’s a retract in map form?#2018-10-3018:34Salor would i have to use the list form?#2018-10-3018:35marshallretract is a list-form thing#2018-10-3018:35joshkhlisty#2018-10-3018:35marshallunless you’re using the built-in retractEntity function#2018-10-3018:35Salaww ok#2018-10-3018:35marshallwell, i guess that’s really list form too 🙂#2018-10-3018:36Sali think i would have spent hours trying to find the map form. really appreciate the help#2018-10-3018:37joshkhjust curious - are you on cloud/ions?#2018-10-3018:37Salon-prem#2018-10-3018:39joshkhcool 🙂#2018-10-3021:18ro6Is anyone from Datomic Support around? I'm having trouble deploying to Ions due to the CodeDeploy rolling back. 
Seems like the same issue as I worked on with @jaret yesterday.#2018-10-3021:30adamfreyI'm following the datomic ions tutorial and I can't pull the datomic ion dependency from s3:
aws s3 cp .
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
My aws cli tool is configured with my work AWS account, is there some permission that is expected that I don't have?#2018-10-3022:06kennyI often receive this error when running an Ion after recently deploying Ions:
java.io.IOException: Connection reset by peer: datomic.ion.lambda.handler.exceptions.Fault
datomic.ion.lambda.handler.exceptions.Fault: java.io.IOException: Connection reset by peer
Is there a way to fix this?#2018-10-3108:35stijn@kenny we have a support ticket logged for that problem (although we call API gateway, but get the same error returned in the body of the HTTP response). @jaret can tell you more about the status.#2018-10-3118:14kennyI opened a ticket as well.#2018-10-3115:45joshkh@stijn not sure if it's related, but to add to kenny's comment, i get this error the first time i run an ion from API Gateway: java.io.IOException: Premature EOS, presumed disconnect#2018-10-3115:45joshkhfirst time after each deployment, and no matter how long i wait to invoke it#2018-10-3118:14stopaHey ya'll -- noob datomic question:
I have a bunch of records called "intents". These should have very rarely been updated in the last 1-2 years.
I want to find all the records that have been updated in the last 2 years#2018-10-3118:20favila"record" means? "updated" means?#2018-10-3118:21favilayour query seems consistent with "record" meaning entity with :intent/uuid and "updated" meaning any assertion/retraction about that entity at all#2018-10-3118:15stopaI'm reading through time-rules
https://github.com/Datomic/day-of-datomic/blob/master/tutorial/time-rules.clj
and so far made something like this
(d/q '[:find (max ?t) (max ?inst)
:in $ %
:where
[?e :intent/uuid]
(entity-at ?e ?tx ?t ?inst)]
(d/history (db)) time-rules)
query is still running as I write this
If someone can help me craft this query would appreciate it 🙂#2018-10-3118:28ro6Has anyone come up with fns/rules to help detect when you're about to transact schema that would violate http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html? I'm aware of Conformity and the other migration libraries, but I'm more interested in detecting breaking schema changes.#2018-10-3119:07kenny@robert.mather.rmm I wrote this to detect breaking changes in our schema.#2018-10-3119:36ro6Beautiful! Thank you sir. Is db-schema-attributes dynamically bound? Or maybe private to your code and defined elsewhere?#2018-10-3119:37kennyOops. Edited the snippet.#2018-10-3119:38kennyNote that this marks the automatic schema alterations Datomic can perform as breaking.#2018-10-3119:39ro6Got it, thanks.#2018-10-3120:06schmeehow can I check if an entity is a component of another entity?#2018-10-3120:11favilaCheck if there's an assertion that connects the two entities via an is-component attribute#2018-10-3120:12favilaexample rule: [(is-component-entity? [?component] ?owner)
[?component ?a ?owner]
[(datomic.api/attribute $ ?a) ?attr]
[(:is-component ?attr)]]
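An editorial sketch (not from the thread) of the same check written as a rule with a direct datom match on :db/isComponent, avoiding the datomic.api/attribute call:

```clojure
;; Sketch: equivalent rule, matching the :db/isComponent attribute
;; datom directly instead of reifying the attribute entity.
[(is-component-entity? [?component] ?owner)
 [?component ?a ?owner]
 [?a :db/isComponent true]]
```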
#2018-10-3120:12schmeeneat, thank you!#2018-10-3120:13favilathat attribute call can be just [?a :db/isComponent true] instead#2018-10-3121:24ro6Transaction functions should return a seq of :db/{add|retract} statements right? So what's the right syntax for invoking an installed tx fn?
1. (d/transact conn {:tx-data '(my.namespace/my-fn! arg1 arg2)})
2. (d/transact conn {:tx-data ['(my.namespace/my-fn! arg1 arg2)]})
3. Something else?#2018-10-3121:25ro6I see style 2 in https://docs.datomic.com/cloud/transactions/transaction-functions.html#text-calling#2018-10-3121:26ro6But when I call using that style, I get this error: clojure.lang.ExceptionInfo: clojure.lang.LazySeq cannot be cast to java.lang.Number#2018-10-3121:30ro6I can test locally like: (d/transact conn {:tx-data (my.namespace/my-fn! db-val arg1 arg2)}) and it works fine#2018-10-3121:37ro6nevermind, the problem was elsewhere#2018-10-3122:13ro6it was because arg2 to my tx fn was a function call itself that needed to eval before sending to the transactor, and I forgot that quoting (`'()`) would recursively prevent evaluation inside the list. I switched to a vector and just quoted the tx fn symbol and it worked.#2018-10-3122:14ro6my mind isn't used to mixing local and remote evaluation like that#2018-11-0103:08kenji_signifierI see ap-southeast-2(Sydney) has only “Query Group” fulfillment option in AWS Marketplace page. Does it mean that a query group can join an existing Production topology running in a different region? Can I use it as an effective cache for read operation?#2018-11-0115:46eoliphantI'm fairly certain that that wouldn't work, but I'd obviously defer to the Datomic folks. a system is encapsulated by a vpc so, I don't think there's currently any cross-region support#2018-11-0118:19souenzzoI have a datomic-peer application that runs/scale well using c5.xlarge machines
If I buy Datomic Ions, how do I get this machine?
Should I buy m5 and then change to c5? Will I lose anything by changing the instance type? How easy is it to change?#2018-11-0303:19souenzzoI have a datomic-peer application that runs/scales well on c5.xlarge machines
If I buy Datomic Ions, how do I get this machine?
Should I buy m5 and then change to c5? Will I lose anything by changing the instance type? How easy is it to change?
BUMP#2018-11-0122:43johnjyou can't, it's not supported at least#2018-11-0200:00RustemI have a question about Datomic Ions.
First of all, I want to say that I like the ideas behind this technology, just as I liked AWS Lambda when I first met it.
But I have a problem when I add the dependency buddy/buddy-sign {:mvn/version "3.0.0"} to my project and require namespaces from this lib in my module (:require [buddy.core.keys :as keys]).
Before this, my lambda worked without problems on AWS. After this, it doesn't work.
It returns a misleading 502 error, but AFAIU the main problem is that Datomic Ions cannot compile my lambda on AWS because of dependency conflicts.
I also got this not very useful stack trace:
{"errorMessage":"java.net.ConnectException: Connection refused","errorType":"datomic.ion.lambda.handler.exceptions.Fault","stackTrace":["datomic.ion.lambda.handler$throw_anomaly.invokeStatic(handler.clj:24)",
"datomic.ion.lambda.handler$throw_anomaly.invoke(handler.clj:20)", "datomic.ion.lambda.handler.Handler.on_anomaly(handler.clj:139)", "datomic.ion.lambda.handler.Handler.handle_request(handler.clj:155)", "datomic.ion.lambda.handler$fn__4075$G__4011__4080.invoke(handler.clj:70)", "datomic.ion.lambda.handler$fn__4075$G__4010__4086.invoke(handler.clj:70)", "clojure.lang.Var.invoke(Var.java:396)", "datomic.ion.lambda.handler.Thunk.handleRequest(Thunk.java:35)"]}
I tried different workarounds, without success. I use the latest version of Datomic Ions.
Others in this channel have had similar problems. Quote: "It was working fine as of 0.9.160 and no cheshire. The house of cards started collapsing with the introduction of cheshire, alas."
So my questions are:
Is it true that Datomic Ions cannot work with some libraries, even popular ones like buddy?
Does anybody know when or how this problem can be resolved?
IMHO Datomic Ions is not production ready without the ability to work with other libraries.#2018-11-0200:42csmdo you get :dependency-conflicts from your push? Cf https://docs.datomic.com/cloud/ions/ions-reference.html#dependency-conflicts#2018-11-0200:47csmcheshire (jackson, actually) is a minefield generally with dependency conflicts; possibly you can force a version of cheshire or jackson that is compatible with datomic's classpath#2018-11-0203:44steveb8n@rustem.zhunusov I got conflicts when I started as well. As @csm says, you need to fix these warnings from the push and that will fix the connectivity problems. In my case, the s3-creds lib caused connection problems. Once fixed, Ions works fine with many libs. I have 7 non-Datomic libs working well in my project.#2018-11-0205:30dangercoderI am writing to Datomic. After 1 transaction of a certain type is done, I want to query the database for transactions of this type (with some predicates). This will happen frequently. Would it be good to use the transaction-report queue for this, or do I have to have a poller running every X seconds?#2018-11-0213:18ro6For local dev purposes, do I need to set up a Peer even to work against datomic:, or is there a way of doing that directly from the Client lib?#2018-11-0214:12eoliphant@jarvinenemil just to confirm, are you using on-prem? the report queue stuff isn't available in Cloud. Beyond that, as is frequently the case, the answer is 'it depends'. We do a lot of this by annotating the transaction with an :event-type, but the usecases vary. We have some code that uses the report queue, but all of our newer stuff is in cloud. So in that case we have 'pollers' that read the transaction log, save their offsets, etc. In both of those cases though we're interested in the linear, temporal stream of stuff that's happening.
If your need is more general, you can also just poll with regular queries#2018-11-0214:13dangercoder@eoliphant I am using Cloud, I guess polling it is 🙂.#2018-11-0214:14eoliphantok cool#2018-11-0214:14eoliphantso you can take advantage of AWS if that fits your usecase#2018-11-0214:14eoliphantyour 'poller' can be a lambda#2018-11-0214:15eoliphantthat gets invoked via a cloudwatch event#2018-11-0214:15eoliphantonly issue there is that the minimum interval is 1 minute#2018-11-0214:15dangercoderwriting that up. thank you. I will have a lot of writes/reads per second in this part of the system, it's a game queue with different alternatives that i need to "query" on.#2018-11-0214:15eoliphanthowever there's a trick#2018-11-0214:15eoliphantgotcha#2018-11-0214:15eoliphantso there's a 'hack'#2018-11-0214:16eoliphantto get around that if you need sub minute#2018-11-0214:16eoliphantpolling#2018-11-0214:16dangercoderoh#2018-11-0214:16eoliphanthttps://aws.amazon.com/blogs/architecture/a-serverless-solution-for-invoking-aws-lambda-at-a-sub-minute-frequency/#2018-11-0214:17eoliphantit's a little clunky to me but it works lol#2018-11-0214:20dangercoderI'll consider that one, I want to have close to instant queue-pop so I will look around 🙂#2018-11-0214:21dangercoderI used Core-async for this stuff before but now with the additional parameters that i need to choose players from I need to do something else.#2018-11-0214:21eoliphantgotcha#2018-11-0214:21eoliphantother alternative#2018-11-0214:21eoliphantof course#2018-11-0214:22eoliphantis to do it on your write side#2018-11-0214:22eoliphantarchitecturally of course, it's not 'reactive' if you do it that way#2018-11-0214:22dangercoderah yes#2018-11-0214:22eoliphantbut depends on your usecase#2018-11-0214:43dangercoderindeed, I will do some more research and try different things. I will build a game queue which batches different players based on their options with lightning fast speed. 
🙂#2018-11-0215:06eoliphantyeah if that's the case#2018-11-0215:06eoliphantpolling#2018-11-0215:07eoliphantis probably not what you want#2018-11-0215:08eoliphantyou can probably do something like wrap the transact api#2018-11-0215:08eoliphantand fire off messages or something once you get back a good tx#2018-11-0220:59jaawerthfigure this is good a place as any: just wanted to give a heads up that there's a now-broken link to the strangeloop preconf at the bottom of the datomic site 😉#2018-11-0221:00jaawerthalso I'm now sad that I missed it after tracking down the real URL, realizing it was in September, and seeing who the speakers were. It's all datomics fault ;_;#2018-11-0221:01alexmillerpage link?#2018-11-0221:01jaawerthOh, I suppose that WOULD be helpful lol#2018-11-0221:01jaawerthbottom of https://www.datomic.com/#2018-11-0221:01alexmillerah, I see it - that’s dead now#2018-11-0221:01alexmillerbut we’re doing the same(ish) training the day before Clojure/conj!#2018-11-0221:02alexmillerhttp://2018.clojure-conj.org/training/ https://ti.to/cognitect/clojureconj-2018/with/vpty5pljlzi#2018-11-0221:02alexmillerif you’re interested…#2018-11-0221:03jaawerthyeah this is a good time to get on it and see if I can get work to pay for it. I'm really bad about missing the boat on conferences.... thanks!#2018-11-0314:53eoliphant@souenzzo given that Datomic cloud has some architectural differences, I'd venture that there's no easy 1 to 1 comparison in terms of instance sizes and you'll probably need to do some benchmarking, etc . Also, while query groups have looser constraints, your primary group has to be an i3 class machine. Having said that, say the i3.large has fewer vCPU's but twice as much memory as a c5.xlarge, the i3.xlarge has the same number of CPU's and nearly 4 times the ram. So, again, I'd probably just start with the default i3.large's and do some load testing#2018-11-0314:58souenzzo@eoliphant tnks
My application is CPU intensive. My transactor machine is an i3 that stays at almost 0% even with 3x c5.xlarge peers at ~80%.
I already tried to scale in different ways, but I need the c5 machines because their single threads are faster, which enables a better user experience.#2018-11-0315:18RustemTo @csm @steveb8n : I had warnings about :dependency-conflicts and I removed all of them, but unfortunately your advice did not help me and the result is the same.#2018-11-0315:19eoliphantah gotcha. is your i3 using the valcache setup? So, in that case, though, again, you can probably give it a shot with the default production config and test, but based on what you're saying, you can just add a query group, and to your original question, yeah I guess I'd give the m5 a try to see how it runs.#2018-11-0315:51souenzzoI already tried to use m5. My worst case goes from 5s to 8s. My need for c5 is not related to Datomic.
Does Datomic really dictate the instance type that I use?#2018-11-0316:58eoliphanthmm ok, yeah that sucks. yes the query group config template currently only allows for I think t2.medium, m5.medium, and i3.large and xlarge#2018-11-0316:58eoliphanthave you tried the i3's in your query group instead of the m5? They are IO vs compute optimized, but they have the local SSD caching#2018-11-0315:33RustemI can reproduce this problem by doing this:
1) Add buddy/buddy-sign {:mvn/version "3.0.0"} to deps.edn
2) Create a new simple lambda 'echo-simple':
----------
(ns topmonks.aino.skills.lambdas_echo
  (:require
   [clojure.string :as strings]
   [buddy.core.keys :as keys]))

(defn echo-simple
  "simple echo"
  [{:keys [input]}]
  (str "echo lambda ---" input "---"))
-------
3) After deploying my project, I test my lambda from the AWS Lambda console (https://eu-central-1.console.aws.amazon.com/lambda)
And I get an error when "[buddy.core.keys :as keys]" is included
---------------
{
  "errorMessage": "java.io.IOException: Premature EOS, presumed disconnect",
  "errorType": "datomic.ion.lambda.handler.exceptions.Fault",
  "stackTrace": [
    "datomic.ion.lambda.handler$throw_anomaly.invokeStatic(handler.clj:24)",
    "datomic.ion.lambda.handler$throw_anomaly.invoke(handler.clj:20)",
    "datomic.ion.lambda.handler.Handler.on_anomaly(handler.clj:139)",
    "datomic.ion.lambda.handler.Handler.handle_request(handler.clj:155)",
    "datomic.ion.lambda.handler$fn__4075$G__4011__4080.invoke(handler.clj:70)",
    "datomic.ion.lambda.handler$fn__4075$G__4010__4086.invoke(handler.clj:70)",
    "clojure.lang.Var.invoke(Var.java:396)",
    "datomic.ion.lambda.handler.Thunk.handleRequest(Thunk.java:35)"
  ]
}
------------
I get NO error when "[buddy.core.keys :as keys]" is commented out
Could somebody try to reproduce this problem? Does it work in your environment or not?#2018-11-0316:06cap10morganI have a datomic-pro download in a dir (so it has the datomic-pro-version.jar at the root, a lib subdir with deps jars, etc.). I am trying to set up a tools-deps project to run some clojure code using datomic and all of its deps w/o triggering a download of them. Is there a more efficient way to do it than having to specify each individual jar in the :paths vector in deps.edn?
:paths ["src" "/path/to/datomic/datomic-pro-version.jar" "/path/to/datomic/lib/*"] was the answer. (Not "/path/to/datomic/lib/" nor "/path/to/datomic/lib/*.jar")#2018-11-0516:52lwhortonthis weekend i started a mini project to explore datomic cloud (solo topo). i accidentally used the wrong key-pair when setting up cloudformation. after the stack was created I deleted the parent stack (which also removed storage, compute stacks). i then followed https://docs.datomic.com/cloud/operation/deleting.html to remove all durable storage stuff, too. i used aws’ tag search feature to find anything residual created by cloudformation to make sure it was removed, too (the only thing still around were the old terminated ec2 instances).
when i try to follow the marketplace template again (this time picking the correct key pair) i keep getting a failure creating the new Storage stack:
The following resource(s) failed to create: [StorageF7F305E7]. . Rollback requested by user.
#2018-11-0516:52lwhortoninside of that storage failure, the error resources are:
The following resource(s) failed to create: [LogTableReadScalingPolicy, MountTarget1, LogTableWriteScalingPolicy, MountTarget2, MountTarget0, AdminPolicy].
#2018-11-0516:54lwhortoni’m not clear on why these steps failed, other than maybe they weren’t 100% cleaned up properly following the delete steps?#2018-11-0517:05marshall@lwhorton More likely you don’t have the IAM permissions required#2018-11-0517:05marshallare you signed in as an AWS admin?#2018-11-0517:07lwhortonyes, i have the adminaccess group which is allow #2018-11-0517:08lwhortonshould i even be able to re-create a stack with the exact same stack name (and app name, for that matter)?#2018-11-0517:08marshallin general yes, although it could be that some resources did not get deleted, which would cause the kind of issue you’re seeing#2018-11-0517:08marshallwell, could cause#2018-11-0517:08lwhortoni feel like it’s just lingering state somewhere, but it’s hard to find where#2018-11-0517:09lwhortoni’ve had issues with CF in the past where a stack gets stuck in some interminable state, and i’ve had to contact aws support directly to clear things up lol#2018-11-0517:10lwhortoni’ll keep digging around and will let you know if i find the root of the problem#2018-11-0517:52kennyI received this exception during the ValidateService while deploying my Ions:
{
  "Type": "clojure.lang.Compiler$CompilerException",
  "Message": "java.lang.StackOverflowError, compiling:(potemkin/walk.clj:10:13)",
  "At": [
    "clojure.lang.Compiler",
    "analyze",
    "Compiler.java",
    6792
  ]
}
This is the first time I have received this exception. It appears to be coming from clj-http. Upon redeploying the Ions, deploy succeeded. It smells like some sort of race condition. Any idea how I can avoid this problem in the future?#2018-11-0519:02steveb8n@kenny if you are using solo, it's likely a memory limit problem we all experience. The location in the exception is irrelevant. There is a workaround if you upgrade the datomic stack and edit the CF template to increase JVM memory Params. I haven't done this yet, I just rerun deploys and ignore it#2018-11-0519:03kennyI am using production.#2018-11-0519:09marshall@kenny version?#2018-11-0519:10kenny@marshall 441-8505.#2018-11-0519:23marshall@kenny i’m working on a doc improvement that should help suss that out and reproduce locally
I’ll ping you when i’ve got it put together#2018-11-0615:12marshall@kenny Take a look here: https://docs.datomic.com/cloud/ions/ions-reference.html#jvm-settings
Can you try running locally with the same JVM settings used in your stack and let us know if you don’t see the same behavior locally?#2018-11-0620:11kennyWhat exactly should I try running locally?#2018-11-0620:24marshallloading and invoking your ion code#2018-11-0620:24kennyVia clj?#2018-11-0620:25marshallclojure#2018-11-0620:25marshallbut yes#2018-11-0620:25kennyWhat -main should I specify?#2018-11-0620:26marshalljust require your ion ns#2018-11-0620:26marshallthe same way you would when you are testing your ion code locally#2018-11-0620:26marshallbefore deploying#2018-11-0620:26marshallthe idea is to load the ion code in a JVM with the same memory settings used by Datomic Cloud#2018-11-0620:28kennyOh. I test my Ions exactly as you described and have never hit the exception that was hit in production.#2018-11-0620:29marshallwere your JVM settings the same as those specified in that table?#2018-11-0620:30kennyProbably with far less memory. I haven't configured any JVM properties on my system and I'm guessing the defaults are really low.#2018-11-0620:31kennyOur Ion tests run via CI where the machine only has 4gb. The tests have been run thousands of times without hitting that exception.#2018-11-0620:33kennyAnd we are running production topology with i3.large. Unless you think having extra memory is the issue, I don't think that exception was due to RAM.#2018-11-0620:34marshall"Message": "java.lang.StackOverflowError, compiling:(potemkin/walk.clj:10:13)"#2018-11-0620:34marshallindicates you ran out of java stack space when compiling the code#2018-11-0620:37kennyGotcha. Our CI runs on 4gb and has not hit that exception. Datomic Ions run on i3.large and according to that chart that means they have 10.52gb available.#2018-11-0620:37marshallis CI a Datomic system?#2018-11-0620:38kennyNo, it is a CircleCI container. 
It runs the Ions by executing clojure, as you suggested, and has not hit the exception.#2018-11-0620:39marshallIs it using the same JVM settings (for instance -Xss) as Datomic?#2018-11-0620:44kennyIt is using the default clojure uses which I'd guess is the Java defaults. I was assuming, perhaps incorrectly, that the production topology configuration would configure that higher than the defaults. I do now see the GC flags which are not being used right now. I'll try a few runs with those flags.#2018-11-0620:47kennyNo failures out of 10 runs. I don't know what the failure rate in production is either because I have only hit this one time.#2018-11-0620:49marshallI suspect something that library was doing was using a lot of stack space (deeply recursive structure of some sort maybe). Your production instance happened to be doing something else at the time you tried to deploy and the combined load exceeded the -Xss#2018-11-0621:07kennyHmm ok. What is the solution if I hit it again?#2018-11-0813:12stijn@marshall why does the documentation mention these different instance types?#2018-11-0813:12marshall@U0539NJF7 They are the various instance types used by Datomic Cloud#2018-11-0813:12stijnwe can only select i3.large or i3.xlarge#2018-11-0813:13marshallQuery groups can use the others#2018-11-0813:13stijnok!#2018-11-0813:13stijnand the 1M is already there in version 411-8505?#2018-11-0813:14marshallthe settings in that table are accurate for the latest release (441-8505)#2018-11-0813:14stijnok. 
so, if that still fails with the StackOverflowError, what are the resolutions for that?#2018-11-0813:15marshallI would first want to see whether that failure occurs locally with the same stack settings#2018-11-0813:15stijnOK#2018-11-0813:15marshallbut, in general, the resolution is to alter the use, compiling, and loading of libraries that are causing it#2018-11-0813:16stijn(because we tried updating the jvm settings on the template, but it made the nodes not come up after termination - probably made a mistake)#2018-11-0813:17stijnok, I think that makes sense. we had the problem earlier with using the aleph http client and loading the netty namespaces was failing. replacing it with clj-http worked in this case#2018-11-0813:17stijnbut, some stuff is hard to remove from your code base 😄#2018-11-0813:17marshallagreed. you don’t always have to remove it, either.#2018-11-0813:18marshallsometimes it is about how/where/when you require#2018-11-0813:18marshalletc#2018-11-0813:18stijnok, are there any guidelines available?#2018-11-0813:20marshallthat would be non-datomic-specific#2018-11-0813:20marshalli.e. things like deeply nested lazy seqs / large recursive call stacks, etc#2018-11-0813:22marshallfor example: https://stuartsierra.com/2015/04/26/clojure-donts-concat#2018-11-0813:23stijnok, thanks#2018-11-0816:37kenny@U0539NJF7 interestingly for us, the lib that caused this problem in the first place was clj-http 🙃#2018-11-0816:41stijnhaha#2018-11-0600:23ro6On Datomic Ions Solo, I'm getting an error Unable to resolve entity: :db/index when trying to transact schema#2018-11-0600:24ro6oh, is :db/index not supported on Datomic Cloud?#2018-11-0600:29ro6I guess fulltext indexes aren't either...#2018-11-0605:53henrikAlas, no. And be careful with large strings as well.#2018-11-0612:27dazldhey all - how do you all deal with possible transaction timeouts? we’ve tried something like
(defn transact-timeout [conn tx timeout timeout-result callback]
  (future
    (callback (deref (d/transact-async conn tx)
                     timeout
                     timeout-result))))
but that doesn't seem to do the trick. any advice?#2018-11-0612:28dazldretrying would be a use case#2018-11-0614:37mpingthere's probably a timeout somewhere#2018-11-0614:38mping(d/transact conn {:tx-data [[:db/add "" :db/doc "might not succeed!"]]
                  :timeout 1})#2018-11-0615:04marshallYou might want to look at https://github.com/Datomic/mbrainz-importer/blob/c095cfcff2bfa17ec66d313153cf0d7c75d1c0e9/src/cognitect/xform/batch.clj#L78-L114#2018-11-0615:04marshallThat's an example impl of transact with backoff-retry using transducers#2018-11-0617:51dazldthank you both, will take another look#2018-11-0617:11lwhortoni'm having a rather hard time wrapping my head around datalog query/pull syntaxes. is there a resource other than the official guide that someone can point me toward?
clojure.lang.ExceptionInfo: Loading database
clojure.core/eval core.clj: 3206
...
dev.system/eval34859 REPL Input
compute.command-processor.streams.customer/create-customer customer.clj: 31
datomic.client.api/eval13775/fn/G api.clj: 127
datomic.client.api.sync/eval34382/fn sync.clj: 82
datomic.client.api.async/ares async.clj: 56
clojure.core/ex-info core.clj: 4739
clojure.lang.ExceptionInfo: Loading database
cognitect.anomalies/category: :cognitect.anomalies/unavailable
cognitect.anomalies/message: "Loading database"
What is the correct solution? Retry?#2018-11-0618:28kennyIf I delete the database and try again, I get:
clojure.lang.ExceptionInfo: :db.error/db-deleted 52ece0c2-7322-4eb7-af99-10f6dbc8ae5c has been deleted
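An editorial sketch (not from the thread) of one way to retry on the :cognitect.anomalies/unavailable category shown in the ex-data above, assuming the Client API's d/transact; the retry count and wait are arbitrary illustrative defaults:

```clojure
(require '[datomic.client.api :as d])

(defn transact-with-retry
  "Retry (d/transact conn arg-map) while the thrown anomaly category
  is :cognitect.anomalies/unavailable, up to `retries` attempts."
  [conn arg-map {:keys [retries wait-ms] :or {retries 5 wait-ms 500}}]
  (loop [n retries]
    (let [res (try
                {:ok (d/transact conn arg-map)}
                (catch clojure.lang.ExceptionInfo e
                  (if (and (pos? n)
                           (= :cognitect.anomalies/unavailable
                              (:cognitect.anomalies/category (ex-data e))))
                    {:retry true}
                    (throw e))))]
      (if (:retry res)
        (do (Thread/sleep wait-ms)
            (recur (dec n)))
        (:ok res)))))
```

Usage would look like (transact-with-retry conn {:tx-data tx-data} {}); non-retryable anomalies (such as the :db.error/db-deleted above) are rethrown immediately.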
#2018-11-0618:38marshall@kenny yes, retry on unavailable#2018-11-0620:16ro6Is it possible to retract an entire schema entity if its attributes haven't been used?#2018-11-0620:52marshall@robert.mather.rmm no, schema can’t be retracted#2018-11-0621:38stopahey team, noob question:
I am in the process of switching to a different db, in one of our usecases of datomic
I ran the switch in prod -- but I think there are still some stray reads happening, that I may have missed. Is there a quick way to check in datomic -- "what has been the last queried thing?"#2018-11-0622:54ro6Is it possible to run one transaction that both creates an entity and executes a tx fn involving that entity?#2018-11-0623:55chris_johnsonDoes anyone know what logging to expect from a Peer server talking to ElastiCache in AWS? As far as I can tell, this server has all the VPC networking/security group setup correct but it never seems to put any items into the EC cluster and also seems to log only one connection attempt event and nothing thereafter. #2018-11-0623:55chris_johnsonIs there a list somewhere of what I’d expect to see logged at which log levels? #2018-11-0710:00val_waeselynckDocumentation suggestion: document the :next-t key of Database values in clients https://docs.datomic.com/client-api/datomic.client.api.html#var-db
It's pretty useful for working with the Log API in particular#2018-11-0712:57quadrona stupid question: what is the clojure way of maintaining a certain relationship between datomic entities? Say there are three entities in the db, $PUBLIC_KEY $MESSAGE and $SIGNATURE such that the latter enjoys a mathematical relationship with regard to the first two. can datomic know about such a relationship or does it have to be programmed independently of datomic?#2018-11-0714:25val_waeselynckHas to be enforced externally by the writers - there are no equivalent to e.g column constraints in sql. See https://stackoverflow.com/questions/48268887/how-to-prevent-transactions-from-violating-application-invariants-in-datomic#2018-11-0714:12chouffeWhat is the story around integration testing and unit testing with datomic cloud? I would like to be able to run tests without internet connection. The only reference I found was this: https://forum.datomic.com/t/integration-testing/465#2018-11-0717:17kennyHi @UDDUSGGKB. Because this became a blocker for us, we ended up writing datomic-client-memdb [1] to facilitate local development and CI testing with Cloud. It’s a small library that wraps Datomic Free in-memory databases with the Datomic Client protocols.
I wrote the library so let me know if you have any questions.
[1] https://github.com/ComputeSoftware/datomic-client-memdb#2018-11-0719:53chouffeSweet! Thanks for the github link and the lib.
How do you deal with backups and restores with datomic cloud? Let say I want my in memory dev db to be a backup from prod?#2018-11-0719:55kennyI haven't needed to do that. I don't believe there's any built-in way to do that with Cloud 😞 Certainly would be helpful if the Datomic team provided that.#2018-11-0719:59chouffeDo you know if its possible to connect to one of the peer instances from the bastion server and run the datomic bin restore command?#2018-11-0720:00chouffeAlso, how can one restore a datomic db from the S3 buckets? I am still unsure how this all fits#2018-11-0720:31kennyI don't think there is anyway to backup/restore with Datomic Cloud.#2018-11-0720:31kennyhttps://forum.datomic.com/t/cloud-backups-recovery/370#2018-11-0716:28joshkhfor those of you who use ions / api gateway, do you ionize a handler for every REST end point? for example, a mostly 1-1 map: (/example/auth -> my-auth-ion, /example/colors -> my-colors-ion). my lambdas go cold after a while (to be expected), and so each visit to the web app is painfully slow as each supporting lambda heats up. just wondering in practice how you all deal with this? is this acceptable in your production applications? 
does anyone use one big lambda and then dispatch internally?#2018-11-0716:28joshkhin something like nodejs a cold start is barely noticeable, but as i fetch various assets i'm looking at 4-6 seconds per resource if it hasn't been touched in a while.#2018-11-0717:33Joe LaneI’m enjoying using one lambda, then dispatching with multimethods.#2018-11-0717:33Joe LaneThen it stays warm.#2018-11-0717:34Joe LaneI can always scale my lambda concurrency up, then set a timeout where if it takes more than 200ms (due to cold start when scaling up during a burst), retry and hit the other warm instance.#2018-11-0717:34joshkhso you have a single /{proxy+} integration?#2018-11-0717:37Joe LaneNo, not neccessarily.#2018-11-0717:38Joe LaneSome apps I do, some I dont.#2018-11-0717:38Joe LaneBut for the most part, a lot of the endpoints, even with different routes, all end up calling the same lambda.#2018-11-0717:39Joe LaneSo /some/route/foo calls my-ion-lambda and /some/route/bar ALSO calls my-ion-lambda#2018-11-0717:39Joe LaneInternally in my ion I can route based on the path.#2018-11-0717:39Joe Lanebut then my-ion-lambda stays warm.#2018-11-0717:40joshkhahhh i see. so do you dispatch on the url path?#2018-11-0717:40joshkhusing something like bidi?#2018-11-0717:41ro6yes, I'm doing the same. Just like a traditional Clojure webapp setup.#2018-11-0717:41Joe LaneI can. One of my apps uses AppSync + GraphQL so there I dispatch based on graphql stuffs.#2018-11-0717:41Joe LaneI know pedestal is coming out with an Ion based interceptor chain.#2018-11-0717:41joshkhinteresting#2018-11-0717:42Joe LaneHowever, I’ve also succeeded in calling an ion lambda directly from the web without fronting it with API Gateway.#2018-11-0717:42ro6There's some tricks to getting things like CORS to work with API Gateway, but once you wrap your head around it it's pretty simple. 
I'm actually going to write a quick post at some point about the little stuff I ran into and how to fix it.#2018-11-0717:42Joe LaneIll say it again. Call a lambda. From the Browser.#2018-11-0717:42joshkhyeah i ended up writing some custom wrappers to support cors and various mime types#2018-11-0717:43Joe Lanebrowser -> json -> ion -> json -> browser#2018-11-0717:43Joe LaneNo web layer.#2018-11-0717:43joshkhah, so you're not apigw/ionizing#2018-11-0717:43ro6Ha, I still need to learn from @lanejo01 how he's doing that. Basically the way the AWS SDKs do?#2018-11-0717:44Joe LaneHere is a scratch project. Its a bit under documented and not totally cleaned up. https://github.com/MageMasher/ion-web#2018-11-0717:44Joe Lane@joshkh In some cases.#2018-11-0717:45Joe LaneHowever after listening to rich’s talk at clojure nyc (theres a video on youtube) he made a comment that makes curious about sticking with apigw#2018-11-0717:49Joe LaneI’ve been toying with wrapping calls to ions with reagent atoms 🙂#2018-11-0717:49ro6you mean what he said about going direct from APIGW to your code w/o Lambda in between?#2018-11-0717:54joshkhi think my brain was warped by the datomic ions tutorial. it's just an odd place to leave the user: a single greedy end point pointing to a ring handler. i'm fairly new to AWS though, so maybe it's just me.#2018-11-0717:55ro6by "user", you mean the developer?#2018-11-0717:56joshkher yeah, trying to multitask in a meeting. 😉#2018-11-0718:02joshkhby the way, thanks for the discussion. finding best-practices or "real life" example setups of ions is tough. i'm planning to blog a few of my trials and tribulations along the way. might spare a few people some trouble.#2018-11-0718:02ro6yeah, it's early days. 
The community sharing is a big help.#2018-11-0718:02joshkh(it's mostly me not understanding what are probably basic principles but i'm sure i'm not alone)#2018-11-0718:03joshkhi'm used to whacking stuff on heroku or dokku and calling it a day#2018-11-0718:04ro6you have to keep in mind that API GW can be configured as an edge network and Lambdas probably have similar elastic scaling properties (though I'm not sure if they're bound to one AWS region), so even though it looks like you're piping everything through one "place", it's not really a bottleneck.#2018-11-0718:04ro6haha, me to. Heroku was always my goto for prototyping.#2018-11-0718:04ro6Ions isn't quite as ironed out (yet), but it's getting close, and with a much more powerful model#2018-11-0718:13joshkhagreed, and i'm certainly not complaining. it's a huge win for us to be able to push clojure code.#2018-11-0718:14ro6yeah, and as usual (in my experience), the difficulties come more from facts about how the JVM works (eg classloading and such), not Clojure proper#2018-11-0718:15joshkhyup. or in my case AWS configuration and infrastructure.#2018-11-0718:15ro6right, all the non-Clojure-itself stuff, ha#2018-11-0718:16ro6even still, it's much less AWS stuff than other deployment solutions#2018-11-0718:17joshkhfor instance i still get worried about co-authoring our ions project. if another developer is working on some newly exposed ions in their code base while i do the same, and we're both pushing and deploying, then are we removing each other's ions?#2018-11-0718:17joshkh(based on what's in ions-config.edn)#2018-11-0718:18ro6if you're pushing to the same system, I think so. 
I think the answer is for each developer to have their own Solo system for dev purposes#2018-11-0718:19joshkhso multiple compute stacks sharing a storage stack?#2018-11-0718:20ro6I was thinking an entire storage+compute stack for each#2018-11-0718:20ro6but maybe what you're saying would be cheaper and more efficient, if it's possible#2018-11-0718:21joshkhi guess so long as you're all in communication about schema changes that could work#2018-11-0718:21joshkhanyway that wasn't a huge concern. just an example of my unknowns. 🙂#2018-11-0718:22ro6right. I think if devs are doing a good job with schema attribute namespacing, and your project is organized sensibly, you're much more likely to have clean "schema merges" when you go to integrate than you would with eg relational schema, or even schemaless. This is all speculation on my part though, since I haven't gone through any of that yet.#2018-11-0718:24ro6Honestly though, Datomic's notion of schema, and the possibility that it has properties like that, is the main difference-making reason that I'm betting on it#2018-11-0718:24joshkhwe found that our schema changed quite a bit during the very start of the project. in a normal db we'd "take the good parts" and maybe start a fresh one, but we still haven't solved data dumps / loads in Cloud.#2018-11-0718:25joshkhbut we've also found that having such a factual and flexible schema means we can continue working with the same instance and chalk up changes to historical events#2018-11-0718:25joshkhwhich is great.#2018-11-0718:26joshkhanywho, gotta run. thanks for the chat!#2018-11-0718:29ro6I see. Again, I haven't gotten far enough to speak from experience, but my sense is that following the rules for non-breaking growth that Rich has been laying out across the Clojure ecosystem allows you to live with your past decisions/data without having to do massive structural migrations along the way. 
I think that makes us devs uncomfortable because we like to clean up our messes (eg cleaning up Git history, refactoring, schema migration, etc...), but that discomfort with messes is totally trumped by the pain of breakage at integration points.#2018-11-0721:01souenzzoHey I'm trying to create a datomic cloud from awsmarketplace and after some waiting on CloudFormation I get this error
The following resource(s) failed to create: [DhcpOptions, EnsureEc2Vpc].
Then it auto-rolls back to ROLLBACK_COMPLETE status#2018-11-0721:08souenzzoNow with
The following resource(s) failed to create: [LambdaTagDdbPolicy, DhcpOptions, EnsureEc2Vpc]. #2018-11-0721:09eoliphantIs your account EC2-VPC? I think I ran into that for that reason#2018-11-0721:32souenzzoFailed to create resource. See the details in CloudWatch Log Stream: 2018/11/07/[$LATEST]c6878226f3da4b06b3a803c8873b0262#2018-11-0721:32souenzzoThe following resource(s) failed to create: [LambdaTagDdbPolicy, DhcpOptions, EnsureEc2Vpc]. #2018-11-0814:07eoliphantYeah that's super weird#2018-11-0814:09eoliphantwhen I had that issue it was because my account in that region had both the EC2 (classic) & VPC under supported platforms. Not sure what else it might be, maybe hit the datomic guys with a support ticket#2018-11-0814:31souenzzoI found that my AWS is really old and has this VPC/EC2 problem. I will try to solve it next week.#2018-11-0814:31souenzzotnks#2018-11-0721:09souenzzoYes. @eoliphant#2018-11-0721:14souenzzoAny tips?#2018-11-0723:00lwhortonit’s not clear to me if you are ec2-vpc, or just vpc https://docs.datomic.com/cloud/operation/account-setup.html#ec2-vpc-only#2018-11-0816:54donaldballIs there a good pattern for doing a datomic query where one of the params might not be given, and if it’s not given, does not constrain the query? For instance, I might want to query for people with a given name and possibly also a birthdate.
I’ve done this before by using the map query form and adding clauses and params, but it feels like I’m swimming against the tide.#2018-11-0820:46val_waeselynckYou could use a rule which handles nil maybe#2018-11-0821:05donaldballYou can’t pass a nil value in, though you could pass in e.g. an empty set, or a ::missing sentinel value that can’t appear in your corpus. I’m just having trouble expressing an or rule that unites correctly.
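The map-query-form approach mentioned above can be sketched by building the query data conditionally before calling `d/q`. This is only an illustrative sketch, not donaldball's actual code: `:person/name` and `:person/birthdate` are hypothetical attributes, and it assumes the peer/client `d/q` that accepts map-form queries.

```clojure
;; Sketch: add the birthdate input and clause only when the optional
;; parameter is provided. Attribute names are hypothetical.
(defn find-people
  [db name birthdate]
  (let [base  '{:find  [?e]
                :in    [$ ?name]
                :where [[?e :person/name ?name]]}
        query (cond-> base
                birthdate (-> (update :in conj '?bd)
                              (update :where conj '[?e :person/birthdate ?bd])))
        args  (cond-> [db name] birthdate (conj birthdate))]
    (apply d/q query args)))
```

Because the query is plain data, growing it clause-by-clause avoids both the `nil`-binding problem and the sentinel-value workaround.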
Currently I’m just punting on doing it in q and filtering the results which is… fine, it just feels like a common-ish query need for which I lack a pattern.#2018-11-0818:13lwhortonhaving spent only a few days with datomic, but a lot of time with spec, it seems silly to me to write a spec which essentially duplicates a schema. it gets especially “silly” feeling when my specs start referencing ids of other specs, much like a datomic ref. yet a few minutes after not having a spec i already wish i had s/conform available to me. how are people handling this? generating specs from schema, or vice versa?#2018-11-0818:18ghadispecs don't duplicate the schema#2018-11-0818:19ghadispecs can express much more#2018-11-0818:35lwhortonyes, I agree they’re really powerful. maybe i haven’t figured out in my own head what i dont like about a schema right next to a spec. perhaps it’s that much like datomic entities can have different contexts and aggregate attributes accordingly (a person can be a user, or a client), specs have to do the same thing. and i think I just don’t like all that near-identicalness? .. just musing i suppose#2018-11-0818:50lilactownthere's probably some fundamental concerns/functionality of a Datomic schema and a spec (after all, they're both describing collections of data, usually, and information about the problem domain you are solving), but it might be harder to tease out their true nature enough to build an abstraction to share them#2018-11-0821:56kennyFWIW, we've been using this for about a year now, and it's been working nicely: https://github.com/Provisdom/spectomic#2018-11-0821:58lwhortonwhen modeling attributes related to time (usually a lot of entities deal with things like created, updated, etc) what’s a general approach? attrs like updated seem unnecessary due to the information model. but what about if you had two entities A and B, where both have a start-date and end-date? 
is it better to model these idents as :time/start-date, :time/end-date or more specific to the entity? :a/start-date, :b/start-date, ... etc?#2018-11-0822:00lwhortonon one hand i feel like a particular entity might in the future be modeled as an aggregate like :a/start-date, b/start-date in which case separate idents makes sense. but i’m not sure if this is ever really going to happen, and i’m not quite sure what the disadvantages are for this kind of model.#2018-11-0822:02ro6This is actually something I've been thinking about a lot lately. There's this natural tension when defining a spec or schema between "what's the most generic, open-ended thing I can assert about this granular value?" and "what do I want to say/guarantee about this value in a certain context? (eg when it's attached to an entity)?"#2018-11-0822:05ro6This: https://docs.datomic.com/on-prem/best-practices.html#use-aliases got me thinking about the cases where you might be able to do both. The trouble is that Datomic schema does more than just define structural properties of the value itself, it also says things about how the [attr value] pair relates to the containing collection, such as uniqueness/identity.#2018-11-0822:07ro6The case I was thinking about was something like :email vs :. I wanted to be able to say both things about the (string) value itself.#2018-11-0822:09ro6Now that I'm writing that though, I realize that perhaps this is a good chance to pick up a multimethod or Datalog rule to express a relationship, something like "all : qualify as :email, but the reverse may not hold"#2018-11-0822:10lwhortonmultimethods seem like a more extensible strategy there. they’re defined in the app code, whereas a rule may or may not hold true in the context of another app using the same db? maybe?#2018-11-0822:12ro6Depends on where that logical deduction needs to take place I guess#2018-11-0822:13ro6but delaying the decision and moving it out of the schema buys you flexibility. 
I think this is the “à la carte” polymorphism Rich is always mentioning. You just decide what things should count as "the same" when you define the operations/actions you need to perform with them.#2018-11-0822:21lwhortonas always in the clj ecosystem, so much to think about. the more you learn the more you realize there’s better ways to do everything.#2018-11-0822:21ro6Yep. That's why it's a programmer's dreamland. #2018-11-0821:59ro6I've felt similarly, but I think it comes down to the Clojure community bias towards decomplecting, even if convenience or short term experience makes it seem like two things are "the same"#2018-11-0911:28dlhey how are you?#2018-11-0911:29dlI have setup datomic cloud yesterday and trying to push the first ion#2018-11-0911:29dlI have downloaded the ion-starter repo, but even though I added the changed files to .gitignore I still receive this error:#2018-11-0911:29dl"You must either specify a uname or deploy from clean git commit"#2018-11-0911:29dlwhen pushing to git#2018-11-0911:32dlas a workaround I just added :uname NAME to the argument and it seems to work#2018-11-0911:33dlbut that's weird, the git commit command states that no changes added to commit and still I need to specify a uname#2018-11-0915:17timgilbertSay, running a restore from backup in on-prem datomic, I see a lot of messages resembling Copied 4848 segments, skipped 39282 segments. as the process runs. Is there a way to find out what the total number of segments is?#2018-11-0916:51grzmOn the releases page, (https://docs.datomic.com/cloud/releases.html), the top item is dated 2018-08-21, with version "0.8.66". Newest version number, but the date is older than the next item (2018-10-11). Typo? Or am I missing something?#2018-11-0917:21lwhortoni have a pretty basic data modeling question that i think would clear up some confusion i have about entities vs attributes:
a user entity has an id, so we define the schema
{:db/ident :user/id
 :db/valueType :db.type/long
 :db/cardinality :db.cardinality/one
 :db/unique :db.unique/identity}
but down the road we find out that there’s not really “an id”, but rather two disparate ids from separate systems. system A has a uuid for each user, system B has a different uuid for each user.
if we want to model our “unified user” in our own system, do we define a schema like:
{:db/ident :system-a/id
 :db/valueType :db.type/long
 :db/cardinality :db.cardinality/one
 :db/unique :db.unique/identity}
{:db/ident :system-b/id
 :db/valueType :db.type/long
 :db/cardinality :db.cardinality/one
 :db/unique :db.unique/identity}
at which point our complete user entity becomes
{:system-a/id 123 :system-b/id 456}
#2018-11-0917:22lwhortonOR do we model the entity where the user schema itself reflects the multiple ids:
[{:db/ident :user/system-a-id
  :db/valueType :db.type/long
  :db/cardinality :db.cardinality/one
  :db/unique :db.unique/identity}
 {:db/ident :user/system-b-id
  :db/valueType :db.type/long
  :db/cardinality :db.cardinality/one
  :db/unique :db.unique/identity}]
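Since both attributes above are `:db.unique/identity`, either id can address the unified user once it exists: transacting a map containing a known id upserts onto the same entity. A sketch, assuming the Cloud client API's `d/transact` shape and a hypothetical `:user/last-name` attribute:

```clojure
;; Sketch: :db.unique/identity makes these ids upsert keys.
;; First assertion creates the unified user entity:
(d/transact conn {:tx-data [{:user/system-a-id 123
                             :user/system-b-id 456}]})

;; Later facts can reach the same entity via a lookup ref on
;; either id (:user/last-name is a made-up attribute):
(d/transact conn {:tx-data [{:db/id [:user/system-a-id 123]
                             :user/last-name "Smith"}]})
```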
#2018-11-0917:24ro6@lwhorton I guess if it's a UUID you could get away with the first one, but the second makes more sense to me. You're trying to say it's a unique id among users, not necessarily everything that could ever come from system-b#2018-11-0917:26lwhortonah, okay. my next question was exactly that — do i have to elucidate further and say something like :system-a/user-id, :system-b/user-id#2018-11-0917:28lwhortonwhich would open the door for a “system-a user schema”
{:db/ident :system-a.user/id
 :db/valueType :db.type/long
 :db/cardinality :db.cardinality/one
 :db/unique :db.unique/identity}
which seems like a whole can of worms and is not a good way to approach it.#2018-11-0917:33ro6yeah, as far as functional differences between :, :id.system-a/user, :system-a.user/id, etc..., I'm not sure. I'm not sure if there's implementation details in Datomic that treat the different segments of a namespace differently. I think generally :keywords are just interned strings.#2018-11-0917:36ro6meaning that equality checks amount to reference equality, so more efficient#2018-11-0917:52dpsuttonis there a good guide on how to remove everything after following a datomic cloud tutorial? Went through it last month for a few days and now i've got an $18 bill from last month and I'm on track for $30 this month#2018-11-0917:53ro6use the CloudFormation part of AWS I think#2018-11-0917:54ro6that should show you your compute and storage stacks, as well as the "parent" stack that subsumes those two. I think there's a way to clean up all the resources by removing those. After it's done, you could go double check that there aren't any EC2 instances left running#2018-11-0917:54dpsuttonthanks. that seems to be the entry point to begin this. much appreciated#2018-11-0917:55ro6np#2018-11-0917:57kenny@dpsutton Datomic Cloud leaves around a bunch of other resources as well. Those extra resources shouldn't cost much, but no reason to keep them around after the tutorial. I wrote this clj script https://gist.github.com/kennyjwilli/55007211070a260044c8e6abcb54dd5b that follows the steps in the Deleting a System docs https://docs.datomic.com/cloud/operation/deleting.html.#2018-11-0917:58ro6@kenny Those aren't handled by the CloudFormation delete?#2018-11-0917:58kennyNo#2018-11-0922:46benoit@lwhorton I tend to prefer :system-a/id (or :system-a.user/id if system A has other types of entities). Particularly if you will have other types of entities that can have an id in system A. Let's say you have groups of users. And those groups have ids in system A. 
Then you will have to define 2 attributes: :user/system-a-id and :group/system-a-id. This will make your code unnecessarily complex to handle both cases when manipulating this identifier. With :system-a/id, you can just reuse the same attribute for users and groups.#2018-11-0923:47lwhortoni feel like i’m also missing something fundamental about getting values out of the db. surely query results can be converted back into a collection of entity maps where the keys are the schema idents?#2018-11-0923:49lwhortonis this where the pull api comes in? define the shape of the query result in a datalog query?#2018-11-0923:51ro6yes. I regularly use a query to get a sequence of entities, then map over that with d/pull to get the data I want#2018-11-0923:53ro6I'm no old vet though, so I'd wait for a second opinion.#2018-11-0923:53dselphIs the Datomic forum dead? Getting search results linking to http://forum.datomic.com pages, but virtually a blank page when followed. For example: https://forum.datomic.com/t/running-with-vpc-peering/441#2018-11-1214:08marshalli’m not having any issue reaching that URL
do you potentially have ghostery or something like that blocking ?#2018-11-1214:08marshalloh, sorry, just saw this was already discussed in the room 🙂#2018-11-0923:54ro6works for me. Do you have browser extensions that block things?#2018-11-0923:55dselphHmm, maybe? I use other forums without any problem, but let me try a different browser.#2018-11-0923:59dselphLooks like something not available through the DNS provider - weird. VPN allowed it to load. Thanks @robert.mather.rmm!#2018-11-1023:34dustingetzhttps://github.com/mtnygard/datoms-say-what#2018-11-1208:38val_waeselynckNew article: "Datomic: Event Sourcing without the hassle". https://vvvvalvalval.github.io/posts/2018-11-12-datomic-event-sourcing-without-the-hassle.html#2018-11-1317:07ro6@val_waeselynck Just finished the article, definitely echoes my own thoughts about Datomic. One of my favorite aspects of working with it is just how little you need to know about your domain model to make incremental progress. I am curious about the way you described transaction handling in traditional ES. I haven't built a system like that myself, but from the literature my sense of best practice is to do CQRS+ES, meaning you'd have some sort of command processor protecting the internal consistency constraints of each aggregate. Is that what you were getting at? #2018-11-1209:04kirill.salykin@val_waeselynck I would argue a little: datomic stores current state snapshots, I would not call it truly event sourcing
event sourcing stores events that lead to current state
so it is a bit different in my perception#2018-11-1209:06val_waeselynck> I would not call it truly event sourcing
@kirill.salykin have you read this paragraph? https://vvvvalvalval.github.io/posts/2018-11-12-datomic-event-sourcing-without-the-hassle.html#cost-benefit_analysis#2018-11-1209:07kirill.salykinoeps, sorry 😞#2018-11-1212:24fmjreyNice snarky phrase indeed:
>Any sufficiently advanced conventional Event Sourcing system contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half Datomic.#2018-11-1212:31val_waeselynck🙂 hope you didn't find it offensive#2018-11-1212:50fmjreyNot at all, I quite agree too. I think the datom is the right abstraction for 95% of data storage needs, I'm convinced.#2018-11-1212:52fmjreyJust skimmed through the article, as I'm at work doing other stuff, and like very much the case you're making.#2018-11-1214:13benoit@val_waeselynck Thanks for the article. I have a question. Isn't event sourcing usually reactive on the events? It seems like your design requires regular polling via the log API to see what changed.#2018-11-1215:05val_waeselynckI think Event Sourcing is agnostic of that, you can consider it an implementation detail. By the way, Datomic on Peers enables you to be reactive too#2018-11-1215:22benoitI'm assuming you are talking about the report queue. Unfortunately the report queue is not really practical. All peers receive the tx report so you need to implement a mechanism to ensure only one of the peers reacts to the event. You also have to implement a mechanism to support peer restarts and ensure that all your tx reports are processed.
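For reference, the log-polling approach being discussed here can be sketched with the on-prem peer Log API (`d/log` and `d/tx-range`). This is a sketch only: durably storing the last processed `t` and coordinating a single consumer are deliberately left out, since that is exactly the bookkeeping at issue.

```clojure
;; Sketch: poll the transaction log for everything after last-t.
;; Returns the new last-t; persisting it (and ensuring only one
;; consumer runs) is up to the caller.
(defn process-new-txs!
  [conn last-t handle-tx!]
  (let [log (d/log conn)]
    (reduce (fn [_ {:keys [t data]}]
              (handle-tx! data)   ; data is the seq of datoms in this tx
              t)
            last-t
            (d/tx-range log (inc last-t) nil))))
```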
I agree that it might be considered an implementation detail but this is an important detail in my opinion if you try to implement event sourcing with Datomic.#2018-11-1215:43val_waeselynckEven if that's important to you, it's not like Datomic prevents you from doing that more than any other data store. You can for instance just plug in Onyx and there, you have reactive processing.#2018-11-1215:44val_waeselynckThere are tons of aspects for which you can say "it's not really Event Sourcing because X", but the whole point of the article is to encourage thinking about ES in a different way :)#2018-11-1217:31benoitI agree with you about thinking about ES in a different way. And I think you make a compelling case for Datomic tx over application-specific events.
I was just wondering if you thought about the architectural implications of using the log API in the "Aggregates". Something we often need in this kind of system is speed (reactivity) and coordination across the "Aggregates".#2018-11-1217:39val_waeselynckI did, I think if you want high speed convergence you will probably need the help of some external coordination service. That probably takes some effort to set up, but insignificant at the scale of a project imo. That being said, Datomic is not especially designed for low-latency writes, I would not expect to react in under 10ms#2018-11-1320:48lwhorton> maybe slack isn’t the place to post this and i should take it to the datomic forum. if so, please let me know and i’ll clean it out.
tldr; how do i more-simply configure/load/structure my database when foreign-keys rule my schema?
---
after playing around with datomic for a few days in a “real” domain, i feel like i still have a fundamental conceptual misunderstanding around entities, attributes, and refs. i’m attempting to load two disparate datasets into datomic in a way that is easy to work with. “forecast”, a dataset for scheduling personnel resources into the future, assigning them to projects, etc., and “harvest”, which is the other half of forecast dealing with personnel logging hours worked to a particular project, etc.
the issue i’m having starts just at the fundamental schema level - how do i model my own schema to either attempt to unify the two disparate datasets under common entities, or keep the systems separate but setup the refs properly so working with them is easy?
a real example is probably the clearest way to get across my confusion:
- harvest has a “user”, which has a uuid
- harvest has a “project”, which has a uuid
- harvest has a “time entry”, which has a uuid, points to a user uuid, and points to a project uuid
- forecast has a “user”, which has a uuid AND a “harvest-user-id”, which points to a harvest user
- forecast has a “project”, which has an id AND a “harvest-project-id”, which points to a harvest project
- forecast has an “assignment”, which points to a user id and points to a project id
my first approach was to invoke “one big fn” that extracted all this data from a dataset, picked out the relevant attributes, and transacted everything into a db. after playing around with it, it’s really cumbersome to query anything meaningful because I have to keep working from “the top down” and chase attrs across entities based on some foreign keys.
for example, if i’m trying to find out which user is assigned to which projects:
(defn assigned-to-projects [db lname]
  (d/q '[:find (pull ?p [*])
         :in $ ?lname
         :where
         [?u :user/last-name ?lname]
         [?u :user/forecast-id ?fid]
         [?a :assignment/user ?fid]
         [?a :assignment/project ?pid]
         [?p :project/forecast-id ?pid]]
       db lname))
every query i write resolves to some version of getting the id from harvest/forecast and tracing it through the domain entities until we can link up database entities. the way the foreign keys are linked in the domains and how they interact with datomic just feels wrong, like i’m doing something stupid. additionally, these arbitrary foreign key linkages aren’t really reified anywhere in my schema, and it’s like i have to go searching for “out of band” documentation somewhere to understand how to query the system.
is this just a consequence of my domain? of bad schema modeling? is it my responsibility to better shape this data before transacting into datomic? do i load specific entities first (such as users) and then assign other entities (such as assignments) to the datomic identifiers so the refs are entity-id based and not foreign key based?#2018-11-1321:04lwhortonperhaps i need to think harder about https://codeandtalk.com/v/clojure-conj-2016/simplifying-etl-with-clojure-and-datomic-stuart-halloway#2018-11-1321:11benoit@lwhorton As you suspected, your main issue is that you try to link your entities using your generated identifiers instead of actually linking the entities in Datomic with a ref attribute.#2018-11-1321:13lwhortoni’m not sure i follow, so if you dont mind i want to keep with the example so maybe i can understand “in the small”:
(def assignment
  [{:db/ident :assignment/id
    :db/valueType :db.type/long
    :db/cardinality :db.cardinality/one
    :db/unique :db.unique/identity}
   {:db/ident :assignment/user
    :db/valueType :db.type/ref
    :db/cardinality :db.cardinality/one}
   {:db/ident :assignment/project
    :db/valueType :db.type/ref
    :db/cardinality :db.cardinality/one}])
#2018-11-1321:15lwhortonis this fundamentally just a bad schema? should i declare idents assignment/user or assignment/project to instead be ? (what)? a more explicit schema like assignment/forecast.user.id? my relational-biased brain is trying to think that I can point my assignment/user to an entity that is a user, but really there’s no such thing, there’s just a collection of attributes that represent a user entity.#2018-11-1321:17lwhortonis “the way” to configure a user schema :db/ident :forecast.user/id, load all the users first, and then (while loading assignments) point each :assignment/user at a db attr :forecast.user/id? if so, given a “assignment” entity (from the external dataset) with a forecast-user-id, wouldn’t it be required to perform a lookup for the db’s ref to the forecast.user/id for each “assignment”?#2018-11-1321:25Joe Lane@lwhorton I’d read up again on datomics universal schema. I usually only have Id’s for external systems and then have entities point to each other with refs. That should help with modeling. sorry I dont have more time right now to help.#2018-11-1321:34lwhorton> then have entities point to each other with refs
do you mean have an attribute belonging to an entity point to another attribute belonging to a different entity? weird sentence, but maybe that’s where i dont understand.#2018-11-1321:37benoitAttributes do not belong to entities, they're relationships between two entities (ref attributes) or 1 entity and 1 value (all attributes other than ref).#2018-11-1321:46lwhortonso does that mean when transaction assertions, in order to assign all facts to the related entity id, you have to assert 1 fact first, grab the eid, then use the eid to assert future facts? i know there’s shorthand map syntax for doing this, which means the T part of ETL should be responsible for shaping (joining) these maps before transacting?#2018-11-2016:56ro6Working my way through this conversation after the fact, so sorry if this got resolved later, but I think this is what temp-ids are for in transactions. If you want to assert a bunch of facts about the same entity, and the entity doesn't exist yet, you just provide a placeholder string in place of the entity id (the same string in each fact).#2018-11-1321:47benoit@lwhorton your schema is fine. It is I think your query that was not right.#2018-11-1321:48lwhortonif i do as i have done so far, which is just “dump a big old pile of facts into datomic all at once”, i can’t utilize any entity refs for queries, and have to resort to foreign keys… i think?#2018-11-1321:49lwhortongah i don’t get it. i have to go watch more videos and read more docs.#2018-11-1321:49benoitYou should not have to resort to foreign keys.#2018-11-1321:50benoitThis should give you the projects for a user with a given last name:
(defn assigned-to-projects
  [db lname]
  (d/q '[:find (pull ?p [*])
         :in $ ?lname
         :where
         [?u :user/last-name ?lname]
         [?a :assignment/user ?u]
         [?a :assignment/project ?p]]
       db lname))
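Getting the data into that ref-based shape at load time can be sketched with string tempids, so an assignment can reference the user and project created in the same transaction instead of carrying foreign-key attributes. The attribute values below are invented for illustration:

```clojure
;; Sketch: string tempids ("user-1", "proj-1") let the assignment's
;; ref attributes resolve to the entities created in this same tx.
[{:db/id "user-1"
  :user/last-name "Smith"
  :user/forecast-id 42}          ; external id kept only as data
 {:db/id "proj-1"
  :project/forecast-id 7}
 {:assignment/id 1
  :assignment/user "user-1"      ; ref, resolved to the new user's eid
  :assignment/project "proj-1"}] ; ref, resolved to the new project's eid
```

After a load like this, the query above joins on entity ids directly and the external ids are only needed when talking to the source systems.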
#2018-11-1321:51benoitWhen you transact data, Datomic identifies entities for you: https://docs.datomic.com/on-prem/transactions.html#identify-entities#2018-11-1321:54favila@lwhorton what are your goals here? if the goal is to keep separate (possibly messy) datasets as-is, what you are doing (joining via values on attrs) is appropriate. You can even keep them in separate datomic dbs if you want and query across them#2018-11-1321:55favilaif your goal is to harmonize them and produce a consistent view of the universe, then your schema will look more like refs and a single entity will have attributes from both datasets. but you have a bit of an ETL job ahead of you to harmonize them#2018-11-1321:55favilayou can also use the first to help you build the second#2018-11-1321:56lwhortoni think that is perhaps my problem: my schema and etl->{transacting data} is not correct. assuming datomic generates temporary ids and automatically joins them to setup proper refs, i think i just have missing refs in the schema (as well as bad rename-keys, etc. from the external data).#2018-11-1321:57favilafundamental question: is a thing one entity or multiple#2018-11-1321:57lwhortoni do like the idea of using a temporary datomic db to hold all the data as an intermediary ETL step#2018-11-1321:58favilaif a thing is always one entity you can still have assertions about that thing from multiple sources. they are joined by entity id so that you know the assertions are all about the same "thing"#2018-11-1321:58favilaif a thing may be many entities, you are siloing assertions by provenance#2018-11-1321:59lwhortonthat makes sense. thanks @me1740 and @favila for sticking with me.
my root misunderstanding is just that i’m missing steps to unify two disparate data sources into a single datomic schema from which i can perform EID-based queries instead of attribute-based joins.#2018-11-1322:01favilajoin-by-value is absolutely appropriate if you can't trust that all their assertions can be "said-of" the same entity#2018-11-1322:01favilajoin-by-ref is what you want when you have created a consistent view of the world#2018-11-1416:53lwhortonthere’s no way to forward-declare an entity ref, right? i can’t tx data into a db like {:foo/bar [:bar/id 123]} where bar/id (and the associated entity) don’t yet exist in the db?#2018-11-1416:59marshallyou can assert a nested entity, thereby creating it#2018-11-1416:59marshall{:foo/bar {:bar/id 123}}#2018-11-1416:59marshall@lwhorton ^ however, if you are using a lookup ref then yes, the entity needs to exist already#2018-11-1417:02lwhortonah, cool. i guess i need to order my etl properly#2018-11-1417:07mgrbyte@lwhorton see https://docs.datomic.com/cloud/transactions/transaction-data-reference.html#sec-7-1 about referencing other entities created within the same transaction#2018-11-1417:08lwhorton👍 i just ran some experiments with this and bumped into the “entity [ref val] doesn’t exist” problem#2018-11-1417:10mgrbytei've been using val (from your above example) as a tempid when I know it's unique within a transaction and linking that way.#2018-11-1417:13lwhortonyea, i’m going to have a tiny ETL process that xforms external data into tx-data with using “tempids” that are really external foreign keys. the problem right now is just tracing my schema’s refs so i can ensure correct import order (and avoid trying to lookup-ref some entity that doesnt yet exist)#2018-11-1420:39grzmIs there a dictionary of Datomic errors? This one is currently inscrutable to me:
{
  "Type": "clojure.lang.ExceptionInfo",
  "Message": "Server Error",
  "Data": {
    "CognitectAnomaliesCategory": "CognitectAnomaliesFault",
    "CognitectAnomaliesMessage": "Server Error",
    "DatomicAnomaliesError": {
      "Error": "Unable to create catalog entry",
      "Cause": {
        "Conflict": true
      }
    }
  }
}
Is this a user-level issue? (e.g., attempting to transact tx-data that would cause a unique conflict) Or is it something lower down, say, at some interaction with DynamoDB (implied by the "catalog entry")?#2018-11-1420:40marshall@grzm solo or production?#2018-11-1420:42grzmsolo, but seen in production as well#2018-11-1420:46marshallcan you file a ticket on the support portal so we can investigate?#2018-11-1420:47grzmWill do.#2018-11-1421:49lwhortonwhat’s the conventional “last step” after a query w/ entity pull to get the entities as a collection instead of a collection of single-entry collections? do we just use mapcat?#2018-11-1421:52lwhorton(i ask because this seems like the pull syntax does something important wrt laziness, and i would hate to overflow-bomb myself trying to map over a huge entity result)#2018-11-1500:26favila:find [?x ...] gets you a flat coll of ?x#2018-11-1500:27favila(map peek) would work too as a post-processing step#2018-11-1500:27favilaeverything in query is eager#2018-11-1500:28lwhortonah, okay. if it’s eager it won’t matter. but i do like peek better#2018-11-1421:51steveb8n@lwhorton re forward declaration: I learned a trick for this. all my entities have a shared uuid attribute. just before the save, I copy the str version of the uuid into the :db/id as a “temp-id”. this means that all references to that entity can now also use the string version in their ref attrs and Datomic will resolve them#2018-11-1421:53lwhortonhmmm. interesting. do you have the uuid (squuid?) generator as some out of band process elsewhere before you try to transact?#2018-11-1422:12steveb8nI don’t bother with squuids anymore (not required) so I just create the uuid in calling code for insert (it’s always read first for update)#2018-11-1422:12steveb8nbtw only works in apis that allow string tempids. not sure if the older peer apis support this#2018-11-1514:36marshallYes, I like this approach as well. 
I’ve used it successfully in several projects#2018-11-1517:41lwhortoni’m not sure i follow completely, could you provide a few lines of code as an example? do you mean each entity has a :db/ident :my/uuid? where do you store refs to your uuids when they’re created? or is this just a trick for single large transactions and not multiple txs?#2018-11-1503:08bmabeyIs it possible to develop ions using a standalone/local datomic (in memory or otherwise) DB? (I don't always have internet access when programming 🚌)#2018-11-1510:27lwhortoni was thinking about this the other day, but i don’t know. if i had to guess, no. we’re working with something called “_ cloud” which i figure is a pretty good excuse to not have an offline story#2018-11-1614:26Joe Lane@U5QE6HQK0 I mean, you can develop your codebase, you just can’t connect to the database. If you wanted to mock things out you practice tdd to work on Biz Logic.#2018-11-1514:23dpsuttoni've got some attributes like the following:
{:response/id 4
 :response/items [...]
 :response/request {:request/id 4
                    :request/items [...]}}
and i'm trying to query for response/request pairs where the item counts are different, i.e. the request talks of 5 items but the response talks of only 2.
My first stab at it has been like so
(d/q '[:find ?request
       :in $
       :where
       [?response :response/items ?resp-items]
       [(count ?resp-items) ?resp-item-count]
       [?response :response/request ?request]
       [?request :request/items ?req-items]
       [(count ?req-items) ?req-item-count]
       [(!= ?resp-item-count ?req-item-count)]]
db)
but this is trying to take the count of a long, which makes sense, but i'm not sure how to tweak it to count the collection#2018-11-1514:34marshallEvery clause operates on each individual element that matches a given logic variable.
So you’re counting the entityIDs (which are bound to resp-items)#2018-11-1514:35marshallI would use a nested query for this case#2018-11-1514:35marshallput the count in the find clause of the nested query#2018-11-1514:36marshallexamples: https://forum.datomic.com/t/subquery-examples/345 and https://stackoverflow.com/questions/23215114/datomic-aggregates-usage/30771949#30771949#2018-11-1514:37dpsuttonthank you very much marshall#2018-11-1514:38marshall👍#2018-11-1514:29schmeedoesn’t look like it’s trying to take the count of a long? are you getting any errors or just no results?#2018-11-1514:37dpsutton?resp-items is an id and (count 4) blows up#2018-11-1517:30marshallDatomic 0.9.5786 is now available https://forum.datomic.com/t/datomic-0-9-5786-now-available/696#2018-11-1521:38uwohere’s a naive little utility fn for pull expressions that I find myself reaching for during development. let me know if there’s another (likely better and more full) implementation out there! https://gist.github.com/uwo/9ae7a83737093dd59b6a1c6086cc0837#2018-11-1522:27favilaI'm not clear on the input to this. You give it the result of a pull (maps of keywords to vectors-of-maps) and it gives you the pull expression that would pull the same data?#2018-11-1523:33dustingetzHere is something similar: pulled-tree-derivative https://github.com/hyperfiddle/hyperfiddle/blob/master/src/contrib/datomic.cljc#L69 and test case https://github.com/hyperfiddle/hyperfiddle/blob/master/test/contrib/datomic_test.cljc#L81-L85#2018-11-1523:35dustingetzAs to whether to drive datomic utils by polymorphism or by schema, I have learned the hard way to just always use schema and stop thinking about it. There are too many edge cases. We have a whole graveyard of implementations with comments like "todo isComponent" and "todo lookup refs" etc.#2018-11-1614:33uwo@U09R86PA4 yep. 
for some reason I often have a concrete tree of data on hand and don’t want to spend a bunch of time writing the pull expression that would generate it. It usually just serves as a starting point, not something I do dynamically in production code#2018-11-1614:34uwothanks @U09K620SG !#2018-11-1614:39favilaI have a similar one that makes a pull from a let map destructure #2018-11-1619:38uwonice! gist?#2018-11-1600:33lwhortonwhat’s “the better way” to handle things like aggregation (of values), comparison (of dates), etc. should it be done in the db query? in post-processing code afterward? does it even matter if you’re not using a peer?#2018-11-1600:35lwhortonin my head i guesstimate that it’s better to not use the db’s resources for such concerns, but that’s probably wrong because 1. you can have as many queriers as you want, so you’re not “clogging up the db process” 2. the data is right there and it’s likely faster for the query engine to handle things like sorting, aggregation, etc.#2018-11-1614:16dustingetz@lwhorton I don't think there is a one size fits all answer here. In the peer/ions model, db and application share resources so i wouldn't worry about that.#2018-11-1614:17dustingetzIn the peer/ions model, queries compose like code, so for a first pass, i would just do what makes my code simpler. Splitting complex operations into multiple queries that compose together is perfectly idiomatic in the peer/ions model.#2018-11-1622:27kennyGetting this exception with Datomic Cloud production topology:
java.lang.RuntimeException: No such var: p/list-databases, compiling:(datomic/client/api/async.clj:78:3)
#2018-11-1622:28kennyRunning with com.datomic/client-cloud {:mvn/version "0.8.54"}.#2018-11-1622:28solussdtypo, maybe? did you mean d/list-databases ?#2018-11-1622:28kennyNo#2018-11-1622:28kennyThe last call in my code before the trace goes into datomic is (d/client datomic-config).#2018-11-1622:29solussdah#2018-11-1622:56kennyIt appears to be some sort of race condition with Datomic's require. Has anyone experienced this?#2018-11-1823:14kennyAny idea what could cause this? java.lang.ClassCastException: datomic.client.impl.shared.Db cannot be cast to datomic.client.impl.shared.QueryArg#2018-11-1900:58Vincent CantinCould you show the expression that causes this exception? I guess that the parameters are not at the right place in your function call, w.r.t. your query.#2018-11-1901:45kennyThis is the query that causes it.
(d/q '[:find ?cust-id
       :in $ ?metric
       :where
       [?metric :metric/integration ?integration]
       [?customer :customer/integrations ?integration]
       [?customer :entity/id ?cust-id]]
     db [:entity/id metric-id])
#2018-11-1901:45kenny@vincent.cantin ^#2018-11-1904:56gws@kenny is it possible that metric-id is bound to a database value instead? (e.g. swapped function args during a refactor)#2018-11-1918:25kenny@gws I don't think so. I tried it locally and the exception is different.#2018-11-1921:49kennyI'm pretty sure the above exception is due to a bug in Datomic related to require not being thread safe.#2018-11-1921:58marshall@kenny is this in an ion or a client app?#2018-11-1922:15kennyClient app#2018-11-1923:26joshmillerIf I have user-groups with many users, and a user has a boolean attribute like :user/activated, how can I compose a query to find just user groups where all users have that attribute true? Something like [:find ?group :where [?group :user-group/users ?u] [?u :user/activated true]] just finds me groups with at least one activated user. I’ve tried using (not ...) and looked for some kind of all predicate without any success.#2018-11-1923:30joshmillerI can get the desired effect with (count-distinct ?activated) and filtering out groups with counts of 2, but that doesn’t seem like a great solution.#2018-11-1923:39marshall@joshmiller have a look at the missing? predicate: https://docs.datomic.com/cloud/query/query-data-reference.html#missing#2018-11-1923:39marshallor do they all have the attr. 
but it’s set to false?#2018-11-1923:39joshmillerYeah, it’s set for all, but some are true and some are false.#2018-11-1923:42marshalli’d have to poke at it a bit, but i think you can use a :with grouping https://docs.datomic.com/cloud/query/query-data-reference.html#with#2018-11-1923:43marshallwhich will get you the “bag not set” of values that you can then do an aggregation on#2018-11-1923:43marshallit might be something better handled with a rule or a query function, though#2018-11-1923:44marshallI think you could also do it with a not-join https://docs.datomic.com/cloud/query/query-data-reference.html#not-join#2018-11-1923:56joshmillerCool, I’ll take a look at :with#2018-11-1923:57marshallYou may have to do nested queries using with#2018-11-1923:57joshmillerI tried not-join without any success, via something like: (not-join [?group ?user] [?user :user/activated false]) but that didn’t do it for me#2018-11-1923:58marshallYeah, I'd have to work thru it but I'm on my cell right now :) #2018-11-1923:58joshmillerNo worries, thanks for the leads#2018-11-2002:35benoit@joshmiller not-join should work
(d/q '[:find ?group
       :where
       [?group :group/id]
       (not-join [?group]
         [?group :user-group/users ?u]
         [?u :user/activated false])])
You need a clause outside of the not-join to bind the ?group to something that is a group (here I guessed :group/id). Otherwise you will get all entities that don't have users that are not activated ... which is probably a lot 🙂#2018-11-2003:26joshmiller@me1740 Ah, awesome, that did it. I was missing the binding of users to groups inside the not-join. Thanks!#2018-11-2006:22MMHello everyone, I have a question regarding datomic transactor in AWS#2018-11-2006:23MMI am deploying it via cloudformation, following this guide: https://docs.datomic.com/on-prem/aws.html#2018-11-2006:23MMbut I cannot find an official way to inject my classpath functions jar file#2018-11-2006:23MMany idea on how to do this?#2018-11-2006:29steveb8n@mmuallem with on-prem the idiomatic way to include custom fns is to install them as db or transactor fns. it is possible to add your jar to the classpath for the datomic peer or transactor but that’s up to your devops. Cloud has a much nicer story i.e. it reads your deps.edn and deploys that classpath to all the nodes. In other words it handles the devops for you. So you should use this as one way to pick between on-prem vs cloud#2018-11-2006:33MMhey @steveb8n, thanks for getting back to me. Unfortunately, the decision of going with on-prem vs. cloud has been made in favor of our own deployment. I am picking up the OPs from the old person, and I am new to this. Currently, we are using a modified cloudformation template and start scripts to inject the jar file directly into our bin folder. It works, but it requires a recreating of the CF stack to pick up the new jar file. I need to a find a way that's more scalable. Can you please shed a bit more light on this: it is possible to add your jar to the classpath for the datomic peer or transactor but that’s up to your devops. Many thanks!#2018-11-2006:37steveb8nSadly I can’t. I have not deployed on-prem using CF so I’m shallow on knowledge there. 
I read somewhere that it was possible to augment the JVM classpath for on-prem but have not done it myself. Maybe someone else here (when they wake up) has more experience with this?#2018-11-2006:38MMThanks man, appreciated#2018-11-2007:21steveb8njust in case: do you need a jar or could your custom fns be installed instead? using 3rd party libs I presume? if not, you can avoid all this#2018-11-2011:43benoit@mmuallem https://docs.datomic.com/on-prem/database-functions.html#classpath-functions#2018-11-2012:51yogidevbearHi everyone 👋 I hope this doesn't sound like a silly question, but...
Is there a way to construct a value out of two other columns within a datalog query? Similar to how you would achieve the following in T-SQL:
SELECT t1.foo + '_' + t1.bar AS foobar
FROM t1
#2018-11-2013:04manutter51Something like this seems to work: (d/q '[:find ?e ?str
       :where
       [?e :company/name ?name]
       [?e :company/group ?gname]
       [(str ?name ", " ?gname) ?str]]
(db))#2018-11-2013:08yogidevbearThanks @U06CM8C3V I'll give this a try#2018-11-2013:15yogidevbearI think that will do the trick. Thanks again @U06CM8C3V 🙂#2018-11-2013:07henrik@yogidevbear I can’t recommend http://www.learndatalogtoday.org/ enough to learn the basics of Datalog.#2018-11-2013:08yogidevbearAlways good to find recommended learning resources other than official docs 🙂 Sometimes helps to fill in gaps of knowledge#2018-11-2013:16yogidevbear@U06B8J0AJ another post someone shared with me that was very easy to digest was http://gigasquidsoftware.com/blog/2015/08/15/conversations-with-datomic/#2018-11-2013:17stijnis there a way to get the datoms transacted between 2 't' values (in the client api (cloud))?#2018-11-2013:31val_waeselynck@U0539NJF7 the Log API? (https://docs.datomic.com/cloud/time/log.html)
Be careful, it's start-inclusive and end-exclusive, whereas the inverse is what you would usually need.#2018-11-2013:33stijn@U06GS6P1N ok. I don't even know why I was asking this. Confusion about the 'live' transaction log not being available in cloud I guess 🙂#2018-11-2013:33stijnthanks#2018-11-2015:20eraserhdHave I asked this before? We sometimes have a peer lose connection to the transactor, and since it mostly operates correctly as a read-only clone, we don't discover this for a while. Is there something we can do to check if we are still connected that doesn't transacting?#2018-11-2016:38val_waeselynckMaybe use d/sync, or submit a transaction that throws an exception via a function#2018-11-2017:50eraserhdOh, I like the second one. Thanks!#2018-11-2015:20eraserhdFor purposes of system monitoring.#2018-11-2016:22ro6Is the idea with datomic.ion.cast that dev events should show up in CloudWatch automatically? I may be searching incorrectly, but it seems like mine aren't making it.#2018-11-2017:34benoit@robert.mather.rmm I think dev events are just on your local machine.
Dev is information of interest only to developers, e.g. fine-grained logging to troubleshoot a problem during development. Dev data can be much higher volume than events or alerts.
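For reference, emitting a dev cast is a one-liner (a sketch; the map contents here are arbitrary, and per benoit's reading above dev casts stay local during development while event/alert are the ones shipped to CloudWatch):

```clojure
;; Sketch: a dev-level cast; only :msg is required, extra keys are free-form.
(require '[datomic.ion.cast :as cast])

(cast/dev {:msg "TransformQuery" :q-length 42})
```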
#2018-11-2018:00ro6Got it, thanks! #2018-11-2022:00kennyI get a couple reflection warnings when creating a Datomic client:
Reflection warning, cognitect/hmac_authn.clj:80:12 - call to static method encodeHex on org.apache.commons.codec.binary.Hex can't be resolved (argument types: unknown, java.lang.Boolean).
Reflection warning, cognitect/hmac_authn.clj:80:3 - call to java.lang.String ctor can't be resolved.
#2018-11-2107:38mavbozoI tried using clojure 1.10.0-beta7 and datomic 0.9.5703 but during the uberjar runtime, I got this exception:
java.lang.IllegalStateException: Attempting to call unbound fn: #'datomic.common/requiring-resolve#2018-11-2107:40mavbozoI upgraded my uberjar from clojure 1.10.0-beta5 to clojure 1.10.0-beta7. looks like requiring-resolve was added to clojure.core in clojure 1.10.0-beta6#2018-11-2110:03hawkeyhi, is there any Datomic Console alternative?#2018-11-2112:30dustingetzHyperfiddle is coming but we aren't launched for self-host yet http://www.hyperfiddle.net/#2018-11-2115:19dpsuttonwould a feature request like this have interest from the cognitect side? we've had some queries return way more than expected because typos caused clauses to not unify. Would there be interest for some type of "strict" mode that would throw an error if an identifier other than _ or a logical variable that starts with "_" (like ?_customer) did not unify?#2018-11-2115:20dpsuttonthis current one was from ?schedule and ?schedlue not unifying#2018-11-2118:37jchenIs there a way to shrink the size of blobs written to storage? Our transactors are trying to write 27MB at a time to MySQL (5.6.34), which throws the MySQL exception The size of BLOB/TEXT data inserted in one transaction is greater than 10% of redo log size. Increase the redo log size using innodb_log_file_size. We can't change that parameter without incurring downtime -- I'm hoping there's a knob we can turn in datomic to split that 27mb blob up. As it is, that exception means our transactors cycle every 2-3 minutes because they can't finish indexing#2018-11-2119:00ambroiserelated to above, can I query in datalog for all transactions that have a size greater than N?#2018-11-2119:03souenzzovia log
(filter (fn [{:keys [data]}]
          (> (count data) 3))
        (d/tx-range (d/log @config/conn) nil nil))
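A slightly fuller sketch of the same idea, for the "transactions with size greater than N" question (assumes the on-prem peer API and a connection `conn`; the threshold and names are placeholders):

```clojure
;; Sketch: scan the transaction log and return [t datom-count] pairs for
;; transactions carrying more than `n` datoms, largest first.
(require '[datomic.api :as d])

(defn large-txes [conn n]
  (->> (d/tx-range (d/log conn) nil nil)
       (keep (fn [{:keys [t data]}]
               (when (> (count data) n)
                 [t (count data)])))
       (sort-by second >)))
```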
#2018-11-2119:04ambroisethanks will try!#2018-11-2119:48marshall@jchen you should not have segments that large. are you making very large transactions?#2018-11-2119:49marshallDatomic should not be used to store BLOBs or other large unstructured data#2018-11-2119:50marshallAlso, you should consider this a critical issue @jchen if your system can’t complete an indexing job it will eventually reach a point where it is no longer available for writes#2018-11-2119:50marshallYou need to get the system past this indexing job and resolve the underlying issue that is causing the large segments#2018-11-2119:52marshallthe most common causes for large segments are: storing large blob values, making very large transactions (many datoms), or storing large strings that are frequently updated in a way that the leading bits are unchanged (i.e. we’ve had users storing serialized HTML or CSS in a string and updating it frequently with updates that only alter content somewhere deep into the string)#2018-11-2122:08dustingetz@marshall, how large strings are we talking (since this is Onprem and no 4k limit)#2018-11-2122:08marshalli’d avoid anything that smells like a serialized value#2018-11-2122:09marshalldon’t have a specific size limit, but if it isnt a “fact” it doesnt belong in a datom#2018-11-2119:59jchenThanks for the reply @marshall. The index job eventually succeeded: we noticed that the :kv-cluster/create-val bufsize peaked at around 27.8mb while throwing exceptions, then slowly dropped to 26.8mb and succeeded. This is the second time in two days we've had this issue -- the first time it also seemed to resolve itself without any intervention.#2018-11-2120:01marshall@jchen i would definitely look for any instances of the types of usage patterns i mentioned above
Do you have a paid Datomic On-Prem license or is this with Starter?#2018-11-2120:03jchenWe've got a paid license. I think in general we don't have many large transactions (the largest tx we found in the last 2 days was 300kb), and we have some 50kb blob attributes#2018-11-2120:04marshall@jchen i would suggest you file a support ticket
Are the blob attributes frequently updated?#2018-11-2120:06jchenhow do I file a ticket? I don't see a way to do it on the datomic website#2018-11-2120:06marshallhttp://support.cognitect.com#2018-11-2209:58tslockeBeginner question: is it idiomatic to work directly with numeric entity-ids, e.g. in a URL like /user/:id? I've seen example code where an attribute like :user/id is used, but I'm not sure what the advantage of using such an attribute would be over the entity-id.#2018-11-2210:07val_waeselynckStrongly not recommended to expose entity ids externally (especially in a durable way).#2018-11-2210:09val_waeselynckThere are various reasons in the evolution of your database which may cause you to renumber your entities, so don't rely on entity ids being stable.#2018-11-2210:11tslockeThanks v much for the help. So, If I'm going to create e.g. :user/id, is there any support for generating those IDs?#2018-11-2210:15val_waeselynckYou could use UUIDs, or if that's not suitable use or take inspiration from https://github.com/vvvvalvalval/datofu#generating-unique-readable-ids#2018-11-2210:18tslockeUUID seems like overkill for something that only needs to be unique within this database#2018-11-2210:32val_waeselynckOverkill in what sense? UUIDs are essentially the least thought-consuming id generation strategy, it's the last thing I'd call overkill 🙂#2018-11-2211:13tslockeI guess I meant unnecessarily large, but I take your point.#2018-11-2214:01val_waeselynckOh I see. Well, 128 bits of entropy, could be worse. I think Datomic has efficient storage and memory layouts for it. You could also represent it in text by encoding it in base64, this way you can represent it with 22-chars Strings, not that bad.#2018-11-2214:27dustingetzhttps://github.com/tonsky/compact-uuids @U0GQG7ZV5#2018-11-2214:28dustingetzYou can find much shorter for not-quite-uuids#2018-11-2216:14tslocke@U09K620SG thanks for the pointer. However:
> Many UUID generators produce data that is particularly difficult to index, which can cause performance issues creating indexes. To address this, Datomic includes a semi-sequential UUID generator
(https://docs.datomic.com/on-prem/identity.html)
I think I'll just accept the long URLs 🙂#2018-11-2216:14dustingetzIs squuid still a thing? I thought Datomic has since solved this with adaptive indexing#2018-11-2216:15tslockeHmm that could well be true. I have no idea#2018-11-2216:16dustingetzhttps://forum.datomic.com/t/why-no-d-squuid-in-datomic-client-api/446#2018-11-2216:17tslockeFound something similar at the same time. Thanks#2018-11-2308:28grzmI’m working on CI with ions so am looking to use datomic.ion.dev in CodeBuild. What are the minimal permissions are required to access the datomic-cloud S3 repos? (Or perhaps I’m approaching this wrong?)#2018-11-2310:25stijn@grzm: according to my knowledge it's currently impossible to use CodeBuild unless you are in the same region as the datomic-releases S3 bucket. CodeBuild connects to the outside world through a specific AWS managed VPN endpoint and that doesn't allow cross-region S3 requests. I have filed a support ticket with Cognitect (@jaret). If you are in the same region (is it us-east-1 , I don't know), I believe it should be possible. On the permissions, I have no answer, we were testing with admin permissions first, and will narrow it down once everything works. It would be good to have some documentation though.#2018-11-2315:59grzmThanks, @U0539NJF7 Can you add more to “impossible to use CodeBuild”? I’m now thinking of maybe creating a Docker image with ion-dev jars and using that in the CodeBuild environment.#2018-11-2316:20stijn@grzm: this is the feedback I got from AWS Support#2018-11-2316:20stijnFor security and performance reasons, the traffic from CodeBuild is configured to egress via a VPC Endpoint only. The VPC endpoints used are the VPC endpoints of the AWS Service. Therefore, even if you have not configured the CodeBuild to use a VPC endpoints, the traffic gets routed via VPC Endpoint of AWS Service. And unfortunately, VPC Endpoint service do not support cross-region request[1].
If we want to access the S3 bucket in a different region, then our best option will be using Cross Region Replication with a destination bucket in the same region as our CodeBuild project. When an object is created in the source bucket, it will automatically be replicated in the destination bucket by S3 and therefore there is no need to manually copy the object to the destination bucket. Although the operation is asynchronous, objects are typically replicated nearly instantly. Please see this documentation[2] for more information about cross-region S3 replication, and this documentation[3] for information on how to set it up.
#2018-11-2317:07grzmThanks for that. Good stuff to think about. How are you currently doing CI?#2018-11-2312:24mkvlrwe’re starting to work on https://nextjournal.com/mk/datomic a runnable article about datomic. The goal is to enable others to learn datomic without having to do any setup. I’ve included the datomic free license at the end of the article. If possible, I’d like to get confirmation from someone at Cognitect that it’s ok to do this. The way I read the license it should be but would be great to get confirmation.#2018-11-2314:57dustingetzso cool#2018-11-2312:25mkvlrTo be clear: datomic free is downloaded in this article and turned into an docker image which is later reused without having to download it again.#2018-11-2314:37marshall@mkvlr Yes, Datomic Free is redistributable, so that is fine#2018-11-2314:44mkvlr@marshall awesome, thanks!#2018-11-2316:09ChrisI'm considering an event sourcing application. Although Datomic is a good fit for most of the application's needs, the rate of events is likely to be high, maybe 1000/s and I understand this is not ideal given the transactor. Would it help to batch these events so there are fewer writes per second, even if they are larger writes and the overall data rate remains the same?#2018-11-2316:33dustingetzIt is my understanding that the transactor is not actually the bottleneck here, but the total number of datoms → size of the db indexes. the rule of thumb per https://www.datomic.com/cloud-faq.html is 10 billion datoms. 10,000,000,000 / 365 / 24 / 60 / 60 = 317 datoms per second average throughput#2018-11-2411:17ChrisThanks, that’s very helpful. It seems I probably can’t use Datomic for this application then, it is likely to exceed that average. A shame, it was my first choice. #2018-11-2411:53samcgardnerYou can safely write well over 300 Datoms/s assuming that many of your events modify existing Datoms (which most use-cases primarily do). 
If you really do want to mostly append new keys C* is probably a closer fit#2018-11-2414:42dustingetz@U8S4V8JE5 Do you have evidence of this? (I understand the reasoning – the index size for present-time queries should reflect the total number of datoms under consideration – but a comment from marshall suggested this may not be the case – http://tank.hyperfiddle.net/:dustingetz.storm!view/~entity('$',17592186047105) )#2018-11-2414:48samcgardnerSo I think the issue represented there isn't the same thing - that's just saying that you have to perform very large commits to your backing storage if you have massive transactions or blobs, and that's generally problematic for most backing stores. I'm just making the point that there's an enormous difference between 300 inputs per second of any kind and 300 new writes per second - I know anecdotally of usage that's well above 300/s, so it depends on the eventual number of datoms in the DB, not the number of writes/updates#2018-11-2414:49dustingetzThank you @U8S4V8JE5#2018-11-2414:52samcgardnerI don't think I added anything to what you said, was just clarifying for Chris as he seemed to take your comment as a "no" for Datomic in his use-case, which wasn't clear to me from what he said#2018-11-2416:14ChrisThanks for the further detail. I’m struggling a bit to keep up but it seems maybe Datomic could work. The use case is building a graph with edge weights incremented based on a stream of events, plus the occasional new node. The number of nodes would be a few thousand, and I’d expect them to average 100 edges each, so maybe only a million datoms total. But the rate of updates is pretty high - 1000 events/s, with each event updating 10-100 edges. I think these could be batched to an extent, but couldn’t say how much that would help. 
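Batching the edge-weight updates described above could be sketched like this (everything here is hypothetical: `conn`, `event->tx-data`, and the batch size are placeholders, and only measuring at the REPL will show whether batching actually helps throughput):

```clojure
;; Sketch: fold a high-rate event stream into fewer, larger transactions.
;; `event->tx-data` is a hypothetical fn turning one event into tx-data.
(require '[datomic.api :as d])

(defn transact-batched! [conn event->tx-data events batch-size]
  (doseq [batch (partition-all batch-size events)]
    ;; one transactor round-trip per batch instead of one per event
    @(d/transact conn (vec (mapcat event->tx-data batch)))))
```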
#2018-11-2422:32dustingetzYou might be able to just test it at the repl#2018-11-2412:01samcgardnerQuestion about the Datomic model - does Datomic seek to ensure that every node is at the same point in the transaction history? i.e. if I append to an existing Datom, is it legal for one node to return the Datom before append (and eventually converge on the new value as it reads more of the log) and another to simultaneously have read the log and return the appended value?#2018-11-2414:46dustingetz@samcgardner I think all queries have a time-basis, so you will get strongly consistent results (i'm not sure what happens if the node doesn't have that time-basis, my system calls d/sync a lot to wait for catchup when necessary) https://docs.datomic.com/client-api/datomic.client.api.html#var-sync#2018-11-2414:50samcgardnerI believe you can use (datoms db arg-map) to request things without a time-basis if you want to#2018-11-2414:55samcgardnerIf you use a single db you will of course observe consistency in your usage, I'm just interested in whether or not you're guaranteed to get the same db whichever backend ends up servicing your request at a time T#2018-11-2415:22dustingetzOh! d/db will return a dbval with the most recent time-basis known to the node. If this is important to your app, can you extend the time-basis all the way down to your api client, so the node can d/sync to exactly the right time basis every time?#2018-11-2515:48samcgardnerI'm just curious about the implementation in Datomic. I assumed it was going to be along those lines, since enforcing immediate consistency across all nodes seems needless according to the model, but wanted to validate my assumption. Thanks!#2018-11-2615:39ro6I have a long plane ride later today (heading to NC for the Conj!), so I'm hoping to do some offline development while I can't connect to my Datomic Cloud instance. How do I approach setting that up? I still want to use the Client API for consistency. 
Do I run a local transactor+peer then use Client to connect to that?#2018-11-2620:33dustingetzI have seen a lib that wraps datomic free into the client api, is that sufficient for your need?#2018-11-2620:33dustingetzOr just use OnPrem#2018-11-2620:33dustingetzThe client APIs might be a little different#2018-11-2621:46kenny@U8LN9KT2N We have been using this lib [1] for offline usage of the Datomic Client API. It may be the lib that @U09K620SG is referencing.
[1] https://github.com/ComputeSoftware/datomic-client-memdb#2018-11-2700:44ro6@U09K620SG Yes, that should be fine. Ideally I'd like to be able to use the Client API (since that's what I'm coding against for Ions in prod) locally (offline) against either an in-memory db or a filesystem db. #2018-11-2700:44ro6@U083D6HK9 Thanks, I'll check that out. #2018-11-2701:50dustingetzsee u in NC! at the datomic workshop?#2018-11-2701:56ro6@U09K620SG I wish. Signed up late and never made it off the wait list. #2018-11-2716:35bhurlowI understand that Datoms are stored in segments, how does Datomic know which segments to fetch for a given eid or attr?#2018-11-2716:49marshall@bhurlow https://docs.datomic.com/cloud/whatis/data-model.html#indexes and https://docs.datomic.com/on-prem/indexes.html#2018-11-2722:46lwhortonis there a mechanism for pulling just the raw db/ident value instead of #db{:id 123 :ident :my.custom/ident}?#2018-11-2722:49lwhortonit’s not that huge a deal, i just figured before i go speccing some stuff (which is kind of difficult to think about wrt (s/keys :req [:db/ident])), i should make sure there’s not something obvious i’m overlooking from the docs#2018-11-2723:27dustingetz@bhurlow This helped me understand what's going on under the hood http://www.dustingetz.com/:datomic-performance-gaare/#2018-11-2723:28dustingetz@bhurlow omg i want db/ident to come down instead of db/id when available too! 100% of cases it is what i want. The entity api does this, but i think it surprised beginners#2018-11-2723:32dustingetz@bhurlow I have a function "smart-lookup-ref" that takes "what i've got" and turns it into the most idiomatic lookup ref, e.g. 
db/ident when available, but it's not really that much of an improvement, you still need to ensure that each pull follows the same idioms#2018-11-2723:56dustingetzhttps://aws.amazon.com/blogs/aws/new-amazon-dynamodb-transactions/#2018-11-2812:44asierAn audit/accounting firm has approached us looking for a centralized alternative to Blockchain, and my first thought has been Datomic (immutability). I have shallow understanding of Blockchain, so do you have links/use cases where Datomic could solve what Blockchain does.
Thanks in advance.#2018-11-2813:05hkjelsI think a centralized blockchain defeats it’s purpose, it’s then just a database and as such, Datomic would be a good option. Namely due to immutability as you yourself mention#2018-11-2814:51val_waeselynckIf distributed / trustless control is not a requirement, there's probably hardly any use case than won't be better served by Datomic than by a blockchain#2018-11-2814:53val_waeselynckNote that immutability is not a particularly exotic need in data management - most use cases need it, but most people are just accustomed to accepting the fact that traditional databases don't match this need very well#2018-11-2815:07benoitWe stopped overriding our production code by FTPing the source to the server without version control systems. Maybe we will stop doing that for our data too 🙂#2018-11-2815:17ro6haha, well put. Don't you miss those days though?#2018-11-2817:19idiomancy"give me all accounts whose order values sum up to more than $1000"
Is that a thing you can do with a single query? Can I aggregate values directly in the where clause to be used in a subsequent predicate?#2018-11-2817:20marshallI would use a nested query for that @idiomancy#2018-11-2817:21idiomancyoh, okay TIL#2018-11-2817:23idiomancyhuh, I see, basically by calling a query from within the query itself, you're ensuring the work is done directly on the read peer.#2018-11-2817:26dustingetzBlockchain and Datomic both run compute in the application, so you can query them together!#2018-11-2817:26idiomancywait, what?#2018-11-2817:27idiomancyI think I need some dots connected on that one#2018-11-2817:27ro6I think that was related to an earlier question#2018-11-2817:29idiomancyahh, I see#2018-11-2818:35Peter WilkinsHi all, I’m using on-prem and trying to find a way to automate deleting a db on dev so I can restore from prod daily backup. I do this manually with the repl shipped with datomic. Running a script with ansible would be perfect. Feels like overkill to create a clojure project just for this small task. any suggestions?#2018-11-2823:32kennyIs there a way to get the SHA or uname of the deployed Ions on a Datomic system?#2018-11-2823:36steveb8nQ: are we able to use Clojure v1.10 in an Ions project? What controls which versions we can use?#2018-11-2900:07souenzzoWhich JAVA version is use in Ions? We should care about that? How updates are handled? The new model from Oracle will affect ions somehow?#2018-11-2900:36lwhortonhow close can one get to datomic with use of triggers into audit tables via a sql relational db? 💥
i’m trying to survey the landscape to see what giant gaping holes i’m going to encounter by not using datomic to manage time#2018-11-2900:44lwhortonsome immediate issues i can see:
1. queries into the audit tables are just plain going to stink
2. a proliferation of CREATE OR REPLACE FUNCTION {trigger_fn} for each table, and those trigger functions are ad-hoc, and will change each time the schema evolves
3. trigger functions themselves don’t keep around a history, so how does one query older versions of a schema? you probably have to keep versions of these functions around (somewhere, maybe in git?)
4. at what point does performance become an issue double-dipping every transaction? and how large can an audit table grow before we’re in trouble?
5. a lot of ad-hoc decisions to make: which extra fields to track in an audit table. tracking “who updated” certain rows (can you even keep track of who updated at the column level?). easy to get wrong, very punitive if you get it wrong#2018-11-2909:29val_waeselynck@U0W0JDY4C This is a bit similar to 1., but just because you have a sequence of changes doesn't mean you have much leverage over it. Datomic gives you an indexed, relational persistent data structure - you get consistent, queryable snapshots of the db at every point in time. This offers much more leverage than just "keeping all past versions of each record".#2018-11-2909:31val_waeselynckThis article I wrote recently may help you in this reflection; although it's not about audit triggers, I believe similar issues arise: https://vvvvalvalval.github.io/posts/2018-11-12-datomic-event-sourcing-without-the-hassle.html#2018-11-2916:00lwhorton👍 will take a look#2018-11-2900:49lwhorton6. past a certain volume of data is it possible to copy the data and its history accurately into other sources (e.g. for analysis queries)?#2018-11-2900:53lwhorton7. (more generally) i’m going to miss the flatness of a universal schema
8. similarly to 5, 7; there’s a whole lot of up-front decisions around time modeling
9. i foresee problems with pagination as it relates to consistency#2018-11-2901:32hueypis there a way to wrap a datomic database? I implement datomic.Database and proxy to the datomic.db.Db but datomic.api/q is not happy with it 🙂#2018-11-2903:01kennyIt sounds just like a lib I wrote to use the Datomic peer mem db with the Client API: https://github.com/ComputeSoftware/datomic-client-memdb#2018-11-2903:02kennyPerhaps something like this? https://github.com/ComputeSoftware/datomic-client-memdb/blob/f011f3096900c993de3c0ba4d48d37217fce8247/src/compute/datomic_client_memdb/core.clj#L45-L49#2018-11-2902:14steveb8n@hueyp does this help? https://gist.github.com/stevebuik/9b219090a2d10cc4fb06d62ee928ca7e#2018-11-2902:16hueypthat looks promising 🙂#2018-11-2902:19hueyphm, I’m using the datomic.api vs datomic.client.api but still gonna check it out .. thanks!#2018-11-2902:31steveb8nit has worked great for me. I added the Pedestal interceptor lib to it and now have cross-cutting / middleware in all my db api calls#2018-11-2903:03dustingetzwait what did you do exactly?#2018-11-2904:02steveb8nyou can use pedestal interceptors as a standalone middleware lib i.e. add middleware to anything#2018-11-2904:02steveb8nso I used it to add middleware to datomic api calls#2018-11-2904:02steveb8nimagine api call logging/ops#2018-11-2904:03steveb8ntransparent query transformations#2018-11-2904:03steveb8nresult filtering etc#2018-11-2902:34hueypI’m still using datomic.api/q not the client stuff … everything works great in my wrapping database except for d/q which just proceeds to find nothing ;/#2018-11-2902:35hueypI had to implement Iterable / ISeq to get d/q to not throw, so now it doesn’t throw, but it just doesn’t find a dang thing 😜#2018-11-2902:35hueypI don’t know if there is some other marker interface I need to tell it to treat it as a db#2018-11-2904:05steveb8nthat’s the limit of my ability to help. 
I didn’t try this with the peer api because it doesn’t have a protocol in front of it, like the client api does#2018-11-2904:24hueypyah — there is the datomic.Database interface, but it doesn’t seem to cover d/q 😜#2018-11-2904:24hueypthanks again tho!#2018-11-2905:23steveb8n@hueyp you might find an idea or two in here https://github.com/stevebuik/ns-clone which does work with the peer api. it doesn’t proxy but delegates so pretty similar#2018-11-2909:20jeroenvandijkThis one is interesting for Datomic though (I hope): https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/#2018-11-2917:06johnjThat's great for on-prem users that are on aws#2018-11-2917:36jeroenvandijkJust on-prem, not also datomic cloud?#2018-11-2917:56johnjI don't think there will be any difference for users, since this is already handled for them. Maybe good news for the implementors of datomic cloud.#2018-11-2917:56johnjCloud doesn't use dynamo in the same way as on-prem#2018-11-3009:02jeroenvandijkOk I can't say, but my guess would have been that it will also have an effect on potential write speed (transactions) in Cloud#2018-11-3009:02jeroenvandijkWe're using on-prem so I'm not able to check#2018-11-2917:15abdullahibraHello guys#2018-11-2917:16abdullahibrathere is something that confuses me: how much does it cost to use Datomic for a starter project?#2018-11-2917:16abdullahibraanother question: how is Datomic different from RDF triple stores?#2018-12-0102:09ro6Any triplestore in particular? I think the primary difference in the data model is that Datomic stores all historical states of the database, so you can query a snapshot of the database as of any transaction, or even run queries that aggregate over time. I don't have experience with RDF beyond researching it, but Datomic has several other unique capabilities as well. 
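To make the triple-store comparison concrete: a Datomic datom is an RDF-style [subject predicate object] triple extended with a transaction id and an assert/retract flag. A sketch of that shape, where all ids and attribute names below are purely illustrative:

```clojure
;; RDF triple:     [subject         predicate      object]
;; Datomic datom:  [entity          attribute      value   transaction     added?]
(def example-datom
  [17592186045418   ; e  - entity id (the "subject")
   :person/name     ; a  - attribute (the "predicate")
   "Ada"            ; v  - value (the "object")
   13194139534312   ; tx - the transaction entity that asserted this fact
   true])           ; added? - true for an assertion, false for a retraction
```

The extra tx/added? components are what make the as-of and history views possible.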
#2018-12-0319:28eoliphantaside from not ‘speaking’ the RDF spec out of the box, and as @U8LN9KT2N mentioned, the native concept of time, the basic information model is more or less identical. You could, say, easily build an RDF server as a probably-small set of functions on top of datomic#2018-11-2917:40lilactownDatomic cloud or the self-deployed version?#2018-11-2917:41lilactownthe self-deployed version (if i recall correctly) has a free offering with fewer features and no support. great for a starter project.#2018-11-2917:41lilactownfor Datomic Cloud, I'm using the solo topology and it's ~$25/mo at the moment. I haven't been using it super actively, but I don't imagine it would scale up much more even if I was#2018-11-2919:40hueyphow often does the datomic forum get looked at? 😜#2018-11-2921:36marshallDaily @hueyp #2018-11-2921:36marshall:)#2018-11-2922:24lwhortonis there a document somewhere that explains how to connect your non-ion based app to a datomic cloud instance? presumably the app code has to execute in an EC2 somewhere inside the datomic VPC, but that’s about all i know 😞 . (do i setup my own EC2, do i use beanstalk, what about autoscaling, etc. etc.)#2018-11-2923:55kennyGetting this exception when deploying my Ion:
java.lang.NoClassDefFoundError: org/slf4j/event/LoggingEvent
Datomic appears to be using an old version of slf4j (1.7.14). The latest is 1.7.25. Could this dependency be bumped?#2018-11-3000:05kennyThis is a problem for anyone who has a library that uses logback 1.1.4 or greater and introspects a Logger instance. This forces us back to logback 1.1.3 (as per https://logback.qos.ch/news.html) which was released 03/24/2015 - over 3 years ago 😬#2018-11-3006:02arnaud_bosHello all, I'm following the on-prem "getting started" guide to, well, "get started" on datomic, with the end goal of migrating a side project app to ions later.
https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html
When starting my repl I get Could not find artifact com.datomic:client-pro:jar:0.9.5786 in central ().
Here's my ~/.m2/settings.xml file:
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd">
<servers>
<server>
<id>my.datomic.com</id>
<username>
And my deps.edn file:
{:deps
{some-other-libs {:mvn/version "blah"}
com.datomic/client-pro {:mvn/version "0.9.5786"}
}
:mvn/repos
{"my.datomic.com" {:url "https://my.datomic.com/repo"}
}
}
For some reason I can't see what I did wrong, "it" doesn't seem to try fetching the dependency from "http://my.datomic.com" but rather gets stuck after not finding it on maven central. Can anyone help?#2018-11-3006:03arnaud_bosp.s. I gotta go to work, I'll check-in later, so I think a thread is appropriate for grouping answers, yes?#2018-11-3008:49thomasHi, we are going to run Datomic with Postgres as a backend and our sysadmin is wondering which of the two to back up? At the datomic level or doing a pg_dump ? what would you advise?#2018-11-3009:07Christian JohansenDatomic backups#2018-11-3009:08Christian Johansena postgres backup will not guarantee a working datomic database, as postgres does not know about datomic transactions etc#2018-11-3009:09asierwe do datomic backups too.#2018-11-3010:04thomasok, thank you for the advice!#2018-11-3019:29jdkealyHey @marshall @jaret... a few months ago i was posting to the channel talking about transaction timeout issues... Finally i figured it out and just wanted to point out what the issue was. Transaction functions! I was trying to ensure that a particular entity type had a unique name/type combo, and my transaction function would do a lookup for the name/type. The function would get called sometimes a few thousand times in a minute and i guess it created a bottleneck.#2018-12-0118:56abdullahibraHi guys,
I'm in a situation where I need to build database tables on demand. For example, how do I model the data storage when user1 wants to create a table to save student name and age, and user2 wants a table for x, y and z, where x is a string, y is a timestamp, and z is an integer?#2018-12-0118:57abdullahibraWhat I want to do is to let the user create his own tables on demand; how can I model this in datomic?#2018-12-0118:58potetmI mean, you can let them transact db attrs.#2018-12-0118:58potetmIf you’re into that kind of thing.#2018-12-0118:59mdhaney@abdullahibra Yes, it’s very easy in Datomic. The only schema you define is for attributes, and tables (or entities, as Datomic calls them) can have any mix of attributes you want.#2018-12-0118:59potetmYou don’t really want 1 million attrs though.#2018-12-0119:00potetmSo it’s a little more complicated than that.#2018-12-0119:00potetm(the same as you don’t want 1 million db tables)#2018-12-0119:01abdullahibraWell, so I need to make the schema first?#2018-12-0119:01potetmSo, is there a fixed number of “columns” you’re interested in?#2018-12-0119:02potetmstudent name, age, time, grade, addr, etc. But there’s like 20 of them instead of “whatever your client can imagine.”#2018-12-0119:02abdullahibraNope, this should certainly be up to the user#2018-12-0119:03potetmSeems like you might want some entities like: {:db/id 1, :column/name "foo"} {:db/id 2 :row/column 1 :row/value "bar"}#2018-12-0119:04potetmso a “column” entity and rows can refer to it#2018-12-0119:04mdhaneySo you want the user to define attributes as well? 
In that case, you almost certainly need some metadata so your code will know how to deal with those attributes, so I would add a meta layer where user attributes are defined as entities.#2018-12-0119:04ChrisWhat @potetm said seems right to me, plus record the type the user asked the column to be so you can do any conversions necessary #2018-12-0119:06abdullahibraThat's good thanks guys#2018-12-0119:07abdullahibraAnother thing: could Apache Jena be a close alternative to Datomic?#2018-12-0119:08abdullahibraIs Datomic considered an RDF fact store + other?
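The column/row entities suggested above, plus the meta layer for column types, could be sketched as Datomic schema. All attribute names here are illustrative assumptions, not from the thread:

```clojure
;; Hypothetical meta-schema: users define "columns" as ordinary entities,
;; and each "cell" references its column, so user-created tables need no
;; per-user schema installation.
(def meta-schema
  [{:db/ident       :column/name          ; user-facing column name
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}
   {:db/ident       :column/type          ; e.g. :string, :instant, :long
    :db/valueType   :db.type/keyword
    :db/cardinality :db.cardinality/one}
   {:db/ident       :cell/column          ; which column this cell belongs to
    :db/valueType   :db.type/ref
    :db/cardinality :db.cardinality/one}
   {:db/ident       :cell/value           ; stored serialized; application
    :db/valueType   :db.type/string       ; code coerces using :column/type
    :db/cardinality :db.cardinality/one}])
```

Storing values as strings and coercing via `:column/type` is one of several possible designs; one attribute per value type is another.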
#2018-12-0119:09abdullahibraOther is defined as some features #2018-12-0119:09potetmcan of worms that^#2018-12-0119:09potetm😛#2018-12-0119:10potetmI honestly don’t know enough about rdf to answer the question. It’s an [entity attribute value] store (which seems similar to rdf to my inexperienced eye)#2018-12-0119:10mdhaney@abdullahibra There was a talk at the Conj years ago that you should check out. It’s not exactly what you’re trying to do, but should give you ideas for how to dynamically build this meta layer for your user attributes.
https://youtu.be/sQCoTu5v1Mo#2018-12-0119:10potetmbut it’s immutable, so all previous versions are available#2018-12-0119:11potetmso it’s like an index + log + [e a v] store#2018-12-0119:12abdullahibra@mdhaney thanks for the link#2018-12-0119:13abdullahibra@potetm so seems triple store + other is correct#2018-12-0119:13potetmseems like it#2018-12-0119:14potetmhighly read scalable, write constrained are the other important bits I can think of#2018-12-0119:14abdullahibraYep#2018-12-0119:14abdullahibraThanks guys#2018-12-0121:35lwhortonare the client libraries open source? i can’t seem to find them anywhere. the reason i ask is i’m trying to see what effort would be involved in an elixir version. unfortunately my company is neck deep in elixir and there’s a strong impedance mismatch between clojure and elixir, so i’m trying to work around it#2018-12-0121:45potetmnone of datomic is open source#2018-12-0121:46marshallClient lib source is in a src jar in maven central#2018-12-0121:57lwhortonso … not open source?#2018-12-0121:58lwhortonit’s all probably AOT’d classes right?#2018-12-0121:47marshall@lwhorton ^#2018-12-0121:48potetm@marshall only available on the JVM tho, right?#2018-12-0121:48potetmoh wait, nm#2018-12-0121:48potetmI see#2018-12-0121:58lwhortonhmm, bummer dude. we have a client with more or less a perfect use case for datomic cloud but i can’t convince the rest of the team to walk into the promised land.#2018-12-0122:00lwhortonand building an elixir client against the http implementation (which i’m pretty sure is deprecated now?) seems like a bad idea#2018-12-0302:58kvltAnyone know where I can find examples of ions with a ring/pedestal handler?#2018-12-0308:02rnandan273How do we justify Datomic vs Amazon QLDB? Some features look similar. Any ideas?#2018-12-0309:16jeroenvandijkMaybe you can elaborate on how they look similar? 
I don't see the similarities so much#2018-12-0309:19rnandan273@jeroenvandijk immutable, history, sure datomic supports rich querying capabilities,#2018-12-0310:07jeroenvandijkI would surely miss those rich querying capabilities, the flexible data modelling, integration with Clojure etc. I would also be very curious for the first production experience reports on the new Database and how people are using it.#2018-12-0312:01val_waeselynckWrites expressiveness and speculative writes are also a big deal#2018-12-0312:24chris_johnsonI hope this doesn’t come across as overly cynical; I haven’t had my coffee yet this morning. That said, if someone is asking you to “justify” Datomic over QLDB as a choice for your system of record, my immediate tack would be to investigate why I was stuck trying to get all my architecture decisions past someone whose default position is “but AWS service exists and it’s close, use that.”#2018-12-0312:41jeroenvandijkI think the reverse question is more in order at the moment. That being said, I wouldn't feel the need to justify anything lol 🙂#2018-12-0312:25chris_johnsonThe second thing I would say, though, is “QLDB is clearly, both in terms of released features and in terms of AWS’ own documentation about it, meant to be the backing store for an immutable ledger, e.g., a managed, centralized deployment of “blockchain!“. We aren’t building “blockchain!” with this effort, are we? Please tell me because if we are I need to go short my RSUs.”#2018-12-0312:41potetmqldb appears to be indexed#2018-12-0312:41potetmnot the same as blockchain#2018-12-0312:42potetmAlso: All of your decisions should be justifiable compared to other options.#2018-12-0312:43potetmThere’s nothing wrong with someone asking questions. Those are usually helpful when trying to think through your options.#2018-12-0312:45jeroenvandijkMy first answer would be they are two different things and you should try both to see the difference, if you don't see the difference already. 
I would for instance have to try QLDB to fully understand its implications#2018-12-0312:55chris_johnson@potetm I was being snarky and reading perhaps too much into the original question’s formulation of “How do I justify Datomic vs. QLDB” - which seems to me to describe less an honest question and more a “nobody ever got fired for buying AWS” reaction#2018-12-0312:58chris_johnsonAnd my dim view of all things ledger-y is of course also just snark and deep suspicion of hype. There are plenty of great applications for ledgers and I’m sure that if the OP is building one of those, QLDB is in the running for a backing technology of choice. My initial take was that either they aren’t and someone is asking them, essentially, “but you said Datomic is great because it’s immutable; well this new QLDB thing is immutable too, so why can’t we use that instead?” or that they are building a ledger and then my question is “do you know why you need a ledger for this application or are you on the ’chain train?” 😇#2018-12-0315:30rapskalianI don’t think being indexed necessarily disqualifies QLDB as a blockchain technology (see nanocurrency).
On the surface, it seems like one of the main practical differences between Datomic and QLDB is the ability to cryptographically verify your ledger history out of the box. If the application needs this sort of functionality it might be worthwhile to forgo the read/write expressiveness etc that Datomic offers and use something more tailor-fit for that use case. #2018-12-0315:31rapskalianThat said, this could probably be implemented on top of Datomic, but that’s additional development effort. #2018-12-0315:31rapskalianDatomic’s datom limit may also be something to consider when choosing Datomic #2018-12-0315:31rapskalian@rnandan273 #2018-12-0315:59Joe Lane@petr https://github.com/pedestal/pedestal-ions-sample , https://github.com/pedestal/pedestal.ions#2018-12-0316:01Joe Lane@cjsauer What Datom limit are you referring to? Can you add a reference to where you read this?
https://forum.datomic.com/t/datomic-cloud-datom-limits/473#2018-12-0316:08rapskalianHCA gave a talk at this year’s Conj that showed how they were hot swapping Datomic databases after they had “filled up”. But as @marshall says in that post, it may be a “data shape/usage” constraint. #2018-12-0316:10rapskalianFound the talk: https://youtu.be/AyWbB52SzAg#2018-12-0316:10Joe LaneHmm, I must have missed that part of the conj. I’ve run multi-tenant datomic cloud databases in production and its pretty seamless.#2018-12-0316:11Joe LaneThanks for the reference, was the talk referring to on-prem or cloud?#2018-12-0316:12alexmillerI think it was on-prem, and a lot of the things they were encountering are unlikely to be issues w/cloud#2018-12-0316:13rapskalianYeah I believe it was on-prem as well, but don’t quote me. #2018-12-0316:15rapskalian> a lot of the things they were encountering are unlikely to be issues w/cloud
This is good to know. Do you have a specific example @alexmiller? I ask so that I don’t propagate any misinformation regarding the latest and greatest. #2018-12-0316:24alexmillerwell with db size, a single cloud system can support many dbs, so the size thing has more ways to mitigate#2018-12-0316:24alexmillerand I think they were talking about restart time or something, which is a non-issue#2018-12-0316:24alexmillersorry, I didn’t catch the whole talk#2018-12-0316:26rapskalianThat makes sense. Hot swapping to a new db might just be a config change in Cloud. #2018-12-0319:16alexmillerI don’t think you even need to do that - one cloud instance can host many dbs so don’t think you need to change any config at all, just prob an app issue.#2018-12-0319:53grzmTalking with @marshall at the Conj, I recall him mentioning a potential issue using the new aws api with Datomic cloud until the next version of Datomic is released, though I think it was limited in scope. I don’t remember the details, however. Can anyone fill in the gaps in my memory?#2018-12-0319:55marshallThat’s Correct.
The currently available release of Cloud uses an older version of an http client that will not work with aws-api#2018-12-0319:55marshallThus you can’t run the aws-api in ions right now#2018-12-0319:56grzmThanks, @marshall for confirming. Any word from AWS on when the next version might drop?#2018-12-0319:57marshalli don’t have an update there#2018-12-0319:57marshallas soon as possible#2018-12-0321:41Jonathan@solussd I saw your thread about no such var errors in datomic client api from few weeks ago#2018-12-0321:42solussdhmm, do you have a link? I must have gotten past it.#2018-12-0321:43solussdoh, here in the datomic channel#2018-12-0321:43solussdThat was someone elses issue, iirc.#2018-12-0321:42JonathanDid you ever find a resolution there? I’m having a very similar issue with a basic datomic connection#2018-12-0321:44solussd@jonathan617 what's your issue?#2018-12-0321:50JonathanFor dependencies I have
[[org.clojure/clojure "1.8.0"]
[com.datomic/client-pro "0.8.28"]
[com.datomic/client-impl-shared "0.8.28"]]
#2018-12-0321:50JonathanAnd then I require like
(ns datomic-client-example.core
(:require [datomic.client.api :as client-api]))
#2018-12-0321:52JonathanAnd then do this,
(def cfg {:server-type :peer-server
:access-key "..."
:secret "..."
:endpoint "localhost:9001"})
(client-api/client cfg)
#2018-12-0321:53Jonathanthe first time the error was CompilerException java.lang.RuntimeException: No such var: p/recent-db, compiling:(datomic/client/api/async.clj:101:9) #2018-12-0321:53marshallTake out the dep on client-impl-shared#2018-12-0321:53Jonathanbut now it’s CompilerException java.lang.Exception: namespace 'datomic.client.api.async' not found, compiling:(datomic/client/api/sync.clj:16:1) every time#2018-12-0321:55marshall@jonathan617 Take out the dep on client-impl-shared#2018-12-0321:55marshallAnd restart your repl#2018-12-0321:57Jonathan@marshall thanks, I did that, and now it does this
datomic-client-example.core=> (client-api/client cfg)
2018-12-03 15:57:14.299:INFO::nREPL-worker-0: Logging initialized @11630ms
CompilerException java.lang.RuntimeException: Unable to resolve symbol: int? in this context, compiling:(datomic/client/impl/shared.clj:349:13)
datomic-client-example.core=> (client-api/client cfg)
CompilerException java.lang.Exception: namespace 'datomic.client.impl.shared' not found, compiling:(datomic/client/api/sync.clj:16:1)
datomic-client-example.core=>
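The Unable to resolve symbol: int? error above is a Clojure-version symptom: int? was added to clojure.core in Clojure 1.9, and the client library's code uses it, so it cannot compile under 1.8. A dependency vector along these lines (versions otherwise as pasted earlier in the thread) should avoid it:

```clojure
;; client-pro compiles code that calls clojure.core/int?, which only
;; exists as of Clojure 1.9 - so 1.8.0 must be bumped:
[[org.clojure/clojure "1.9.0"]
 [com.datomic/client-pro "0.8.28"]]
```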
#2018-12-0321:59marshallThat client version may require clojure 1.9 not sure#2018-12-0322:00Jonathanoh geez let me try that#2018-12-0322:02Jonathanyep that was it#2018-12-0322:02Jonathanthanks 🙂#2018-12-0322:05dustingetzhttps://github.com/pedestal/pedestal.ions#2018-12-0322:44steveb8nQ: do component entities maintain a sort order? I suspect so and I want to rely on it but I can’t find any docs on this#2018-12-0323:11Joe Lane@dustingetz (and anyone who cares) I just used that library in combination with https://github.com/walmartlabs/lacinia-pedestal to serve graphql from api-gateway using pedestal for interceptors and web related stuff and then lacinia-pedestal for my resolvers and gql stuff. Datomic cloud for the database the resolvers work upon.
Works against apigw with curl but graphiql doesn’t work because apigw hasn’t released their new websocket support yet. Soon I’m sure.
That being said, because I’m using pedestal I can run graphiql locally and have this whole stack running on my laptop (except for datomic cloud of course, may be able to mock it but haven’t invested in that yet.)#2018-12-0323:17steveb8n@lanejo01 that’s good to know. I’m using AppSync with Ions but I suspect lacinia-pedestal might have less overall complexity. I’m curious why graphiql needs websocket support? I’m using it in my re-frame client and I don’t see any websocket requests#2018-12-0323:17steveb8ndo you mean for subscriptions?#2018-12-0323:18Joe LaneThe lower complexity is exactly why I wanted to do this.#2018-12-0323:18Joe LaneI’ll confirm the behavior#2018-12-0323:18Joe LaneCan I ask what re-frame client you’re using?#2018-12-0323:18Joe LaneI’ve been eyeing re-graph#2018-12-0323:20steveb8nI’m currently using artemis but I’m gonna spike re-graph and sub-graph as a replacement - again to reduce complexity. artemis does not appear to be originally designed for re-frame#2018-12-0323:20steveb8nand using shadow-cljs to add graphiql via npm#2018-12-0323:21Joe LaneOnce apigateway releases websocket support (:crossed_fingers: soon) I’ll probably write my own subscription streamer and have a serverless graphql stack.#2018-12-0323:23Joe LaneI think my problem is that I’m not serving the assets from the right place, thanks for the feedback. I saw websocket errors and assumed thats why it wasn’t working. Just confirmed I get those locally as well so I may be able to still get it to work!#2018-12-0323:26steveb8nagreed, it should work. it could be the first request which is an OPTIONS instead of a GET that is breaking via apigw#2018-12-0323:26steveb8nI think that request reads the meta-data for your GQL schema#2018-12-0323:27steveb8nI’d really like to know if you can make that work. 
it’ll be one more data-point to convince me to switch over#2018-12-0323:43Joe LaneI’m gonna hack on this, will let you know tomorrow or later tonight.#2018-12-0323:46Joe LaneI think its possible, i’m just not correctly exposing the assets directory#2018-12-0323:46steveb8ngreat thanks. btw if you need subscriptions sooner, you could always use AppSync just for the subs and accept the extra complexity in a constrained area#2018-12-0323:46steveb8nalthough I haven’t tried appsync subscriptions with Ions so not sure if it works#2018-12-0323:46Joe Lanegraphiql is being included from lacinia-pedestal on my behalf. Its configured on rn but I can turn it off in prod mode.#2018-12-0323:47Joe Laneappsync + ions works, the subscriptions only work if you mutate through appsync though. Thats why I want this to be backed by lacinia, my ions can then send outbound updates if changes occur outside the purview of appsync’s graphql api.#2018-12-0405:34rnandan273Thanks everyone for your insights on my question on datomic vs qldb#2018-12-0406:44Andreas EdvardssonAm I thinking to boldly about Datomic, or is it possible to have a entirely purely functional non-trivial buisiness logic, similar to how one could do it with a plain closure map? That is, passing in the state (database) at the top, then adding, updating, retracting, getting etc. in any combination, and lastly do something similar to a "swap!" or "reset!"?
Preferably I'd like to get a "new" database instance back after every step, so that I can continue to query and alter the database state until I'm happy. Also, if something goes wrong in the middle I obviously don't want anything committed to the actual database.
I imagine transactor functions as in Datomic ions ought to make this possible?
#2018-12-0407:55val_waeselynckThat's not exactly how it works, but you can essentially achieve the same power. Datomic Connections and Database values essentially correspond respectively to Clojure Agents and values. However, unlike regular Clojure values, you don't update a Database value by calling conj, assoc etc. - you do it by emitting Datoms. Transaction functions enable you to accept a 'present' Database value, optional additional arguments, and to emit an arbitrary set of Datoms which will be added to form the next Database value.#2018-12-0407:56val_waeselynckSo the model is not that you fabricate intermediary database values and 'commit' to the last one. However, you can use speculative writes (aka db.with()) to 'preview' what Database value will be yielded by adding the Datoms you're considering.#2018-12-0412:09Andreas EdvardssonThank you @U06GS6P1N for the reply!
It remains then to figure out a reasonable architecture that lets me output the datoms in a sensible way. :)
Also, regarding unit testing, is there any way to get an in-memory equivalent to a Datomic cloud/ions database value that supports the same query/pull API? The idea is to transact (plain) datoms to a clear database, and then query it using the function I'm presumably testing. Would either the datomic Free database or an in-memory peer with a wrapper like https://github.com/ComputeSoftware/datomic-client-memdb/blob/master/README.md do the trick?
By the way, I found your awesome blog the other day. I have not yet managed to digest all the datomic-specific parts, but I have still learned a lot; especially the post about event sourcing was enlightening! 👏#2018-12-0413:06chris_johnsonAt my job we are using ‘on-prem’ Datomic run on AWS EC2 because we considerably pre-date the advent of Cloud and Ions, so we haven’t had a need to wrap the in-memory peer, but otherwise we do exactly as you describe for unit tests and CI. We have a manageable amount of test data that just lives in source control as datoms in EDN, and before a test run we stand up an in-mem DB, transact our schema to it, and transact in the test data.#2018-12-0413:09chris_johnsonA nice knock-on effect is that we also have a server-dev Boot task which stands up that same “test” in-mem DB and also starts up our backend services locally, pointed to it. One fully-operational but local backend stack, ready to attach to a REPL, please! 😄#2018-12-0414:27Andreas EdvardssonThat's awesome! 😄 I hope that I'll be able to construct something similar.#2018-12-0416:42Joe Lane@UEJ28A9PH I think you can attempt that by using d/with-db and d/with, although the bookkeeping of what data is tx-data would be your responsibility.#2018-12-0420:42Andreas EdvardssonThanks for the suggestion @U0CJ19XAM ! However, the scenario I asked for in the first post does not seem to be very idiomatic, so I believe I'd be better off just doing it "the right way" instead. I guess the d/with and d/with-db could be helpful in some situations though, especially in tests maybe? #2018-12-0416:59Andreas LiljeqvistIn our SPA we do a lot of local ui-bookkeeping resulting in a couple of keys that won't be accepted by Datomic for transact.
What is a good way to dissoc them? Or selecting relevant keys.
We have a spec for what a complete entity should be.#2018-12-0417:00shaun-mahood@andreas862: Does https://clojuredocs.org/clojure.core/select-keys work?#2018-12-0417:01Andreas LiljeqvistIt sure would, but I would rather not repeat them again#2018-12-0417:02Andreas LiljeqvistAlso the changes are nested in some cases#2018-12-0417:03Andreas Liljeqvistcomponents can also have local additions#2018-12-0417:04Andreas LiljeqvistI suppose the only way would be with some spec magic, but I think it is frowned upon#2018-12-0417:06shaun-mahoodMaybe take a look at https://github.com/metosin/spec-tools - it has tools for coercion, transformations and walking nested specs (including stripping out extra keys)#2018-12-0417:11Andreas LiljeqvistSeems interesting, thanks!#2018-12-0418:10lwhortonhaving worked with datomic cloud for the last 3 weeks i’m ruined to postgres. i want to thank and shame everyone responsible for this. /rant#2018-12-0418:51lwhortonnow i have to go find a way to reify transactions, write triggers into an audit table, and compose some horrific transactions and queries#2018-12-0420:02grzmWhen using the Datomic cloud client, (d/db ,,,) returns a :database-id value as well as the db-name. This :database-id value isn’t returned by (d/db ,,,) when running in the cloud AFAICT. Can someone confirm this? I’m looking to get a globally unique, consistent value for the database across connections. Alternative ideas welcome.#2018-12-0420:08eraserhdYou mean, two databases are equal when they have equivalent facts?#2018-12-0420:09eraserhdI've used db values themselves as keys, although there could be two equivalent databases.#2018-12-0420:10eraserhdAnd just in case, since I'm trying to get my company to open source this library, what are you doing?#2018-12-0420:12grzmNope, making sure I’m connected to the same database I thought I was before. 
We churn through databases (and Datomic Cloud stacks, for that matter) and I want to be able to confirm I’m connected to the same database I was before.#2018-12-0421:15matthavener@andreas862 we namespace all the ui artifact keys with :ui/, and then we just use clojure.walk to remove all the keys that (= "ui" (namespace k))#2018-12-0510:37Andreas LiljeqvistThanks, I think that will be the simplest solution#2018-12-0421:16matthavenerfwiw, you could also build a list of all idents the db knows about and and then walk the transaction data to remove anything invalid#2018-12-0511:36hanswhi all#2018-12-0511:39hanswI have a query that looks like this:
[:find (pull ?e [*])
:in $ [?veh ...]
:where
[?e :vehicle/plate ?veh]]
This returns one value, where multiple rows are squashed together into one map, which I don't understand.#2018-12-0511:44souenzzo:find [(pull ?e [*]) ...] works?#2018-12-0511:45hanswyeah#2018-12-0511:45hanswbut it's squashing multiple rows into one map#2018-12-0511:46hanswso the result shape is [[{}]]#2018-12-0511:46hanswwhere there are multiple rows inside the {}#2018-12-0511:47hansweg.
{:vehicle/a "foo" :vehicle/b "bar" :vehicle/a "more" :vehicle/b "other"}#2018-12-0511:48hanswso I can't distinguish 'rows' from each other#2018-12-0511:50hanswWhat I need is:
[{:vehicle/a "foo" :vehicle/b},
{:vehicle/a "more" :vehicle/b "other"}
]
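For reference, the two find specs in play here differ only in result shape. A minimal sketch, assuming datomic.client.api is on the classpath, db is a database value, and :vehicle/plate is a cardinality-one string attribute (the plate values below are made up):

```clojure
;; Sketch only: assumes datomic.client.api on the classpath, db is a
;; database value, and :vehicle/plate is a cardinality-one string attr.
(require '[datomic.client.api :as d])

;; Relation find spec: one tuple per matching row, each wrapping the pulled map.
(d/q {:query '[:find (pull ?e [*])
               :in $ [?plate ...]
               :where [?e :vehicle/plate ?plate]]
      :args [db ["AAA-111" "BBB-222"]]})
;; e.g. [[{:vehicle/plate "AAA-111", ...}] [{:vehicle/plate "BBB-222", ...}]]

;; Collection find spec: the single-element tuples are flattened away.
(d/q {:query '[:find [(pull ?e [*]) ...]
               :in $ [?plate ...]
               :where [?e :vehicle/plate ?plate]]
      :args [db ["AAA-111" "BBB-222"]]})
;; e.g. [{:vehicle/plate "AAA-111", ...} {:vehicle/plate "BBB-222", ...}]
```

Either way, each matching entity comes back as its own map; a single map containing the same key twice is not valid Clojure and cannot come out of q, which turns out to be the tell in the exchange that follows.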
#2018-12-0511:53hanswi think it broke after upgrading my datomic client...#2018-12-0511:53souenzzowell. let's wait someone from datomic team 🙂#2018-12-0513:51hanswmaybe there were subtle changes when the client-api changed from datomic.client to datomic.client.api#2018-12-0513:52hanswin any case the q function changed from requiring a conn parameter#2018-12-0513:53hanswto just the argmap#2018-12-0513:57thegeezCould you paste the output you are seeing? Your first example is not valid clojure with the same :vehicle/a key appearing multiple times in the same map#2018-12-0513:58hanswi made it shorter for brevity sake#2018-12-0513:58hanswone moment#2018-12-0513:59hanswooh wait yes#2018-12-0513:59hanswhaha#2018-12-0514:01hanswit seems that i was in desperate need of coffee when i pasted this#2018-12-0514:16hanswok, this was another case of confessional debugging.#2018-12-0514:22thegeezRemote rubber ducking works 🙂#2018-12-0514:26hanswthnx for listening 🙂#2018-12-0516:08kvltHey there. Last night I was trying to stand up a pedestal service on IONS.
I have been following: https://github.com/pedestal/pedestal-ions-sample
However, when trying to deploy, as such:
clojure -Adev -m datomic.ion.dev '{:op :deploy, :group "ion-pet-Compute-1Q6752A2P837M", :uname "pet-service-sample"}'
{:execution-arn
arn:aws:states:us-east-2:272695641059:execution:datomic-ion-pet-Compute-1Q6752A2P837M:ion-pet-Compute-1Q6752A2P837M-pet-service-sample-1543987389875,
:status-command
"clojure -Adev -m datomic.ion.dev '{:op :deploy-status, :execution-arn arn:aws:states:us-east-2:272695641059:execution:datomic-ion-pet-Compute-1Q6752A2P837M:ion-pet-Compute-1Q6752A2P837M-pet-service-sample-1543987389875}'",
:doc
"To check the status of your deployment, issue the :status-command."}
I get the following
clojure -Adev -m datomic.ion.dev '{:op :deploy-status, :execution-arn arn:aws:states:us-east-2:272695641059:execution:datomic-ion-pet-Compute-1Q6752A2P837M:ion-pet-Compute-1Q6752A2P837M-pet-service-sample-1543987389875}'
{:deploy-status "FAILED", :code-deploy-status "FAILED"}
Looking at the logs it's clear that I have given the wrong :deployment-group:
2018-12-05T05:23:20.008Z 059ddc53-7801-4f4c-b688-60918314b781 DeploymentGroupDoesNotExistException: No Deployment Group found for name: ion-pet-Compute-1Q6752A2P837M
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:48:27)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)
at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:685:12)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
My question is where do I find that? I copied it directly from the cloudformation name. Is that incorrect?#2018-12-0516:14Joe LaneWhat version of clojure are you using?#2018-12-0516:15kvlt1.9.0#2018-12-0516:15Joe LaneAlso, have you deployed the datomic cloud system correctly, but can you confirm it by connecting to that system before this push?#2018-12-0516:16kvltI believe I have deployed it correctly. I verified that by testing following the getting started docs#2018-12-0516:18Joe LaneWhen I deploy something I don't have quotes around the group.
clojure -Adev -m datomic.ion.dev '{:op :deploy, :group myproject-dev-compute, :rev "2fee444890ccf58d4629294f3904bf1c38bb762q"}'#2018-12-0516:19Joe LaneI’ve gotta run but I hope that's helpful.#2018-12-0516:24kvltThanks for the input. My group needs to have the random identifier, as in ion-pet-Compute-1Q6752A2P837M, or it can't find the group and will not do anything#2018-12-0516:25kvltI still get the error: 2018-12-05T05:23:20.008Z 059ddc53-7801-4f4c-b688-60918314b781 DeploymentGroupDoesNotExistException: No Deployment Group found for name: ion-pet-Compute-1Q6752A2P837M#2018-12-0516:25kvltCan anyone tell me how to find my deployment group? The docs are pretty bad around this. https://docs.datomic.com/cloud/ions/ions-reference.html#deploy#2018-12-0516:26kvltBeing that there is no :deployment-group I used $(SystemName)-Compute-$(GeneratedId) but the docs suggest that that is for group#2018-12-0517:09jaret@petr to deploy you replace $(GROUP) with the name of your compute stack. are you sure the C is capitalized in “Compute”? You can see the name in the CF console https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks?filter=active#2018-12-0517:12kvltThe C is definitely capitalized. I copied the name of the stack from the UI in cloudfront#2018-12-0517:13kvltThis is my last failed attempt:
clojure -Adev -m datomic.ion.dev '{:op :deploy, :rev "674e654d429ebc5092be8008b1463617720a1a7c", :uname "pet-service-sample", :group "ion-pet-Compute-1Q6752A2P837M", :region "us-east-2"}'#2018-12-0517:25kvltIf i omit the random identifier, it doesn't even accept the request#2018-12-0517:25jaret@petr what is your output from your Push command?#2018-12-0517:26jaretit will have a :deploy-command that is generated for you#2018-12-0517:26kvltclojure -A:dev -m datomic.ion.dev '{:op :push :region "us-east-2" :uname "pet-service-sample"}'
Downloading: com/datomic/java-io/0.1.11/java-io-0.1.11.pom from
(cognitect.s3-libs.s3/upload "datomic-code-8fe8a54a-0daf-48b9-a4b8-bbdf996b81ae" [{:local-zip "target/datomic/apps/ion-pet-service/unrepro/pet-service-sample.zip", :s3-zip "datomic/apps/ion-pet-service/unrepro/pet-service-sample.zip"}] {:op :push, :profile "devthenet", :region "us-east-2", :uname "pet-service-sample"})
{:uname "pet-service-sample",
:region "us-east-2",
:deploy-groups (),
:dependency-conflicts
{:deps
{commons-codec/commons-codec #:mvn{:version "1.10"},
org.clojure/tools.analyzer.jvm #:mvn{:version "0.7.0"},
com.fasterxml.jackson.core/jackson-core #:mvn{:version "2.9.5"},
org.clojure/tools.reader #:mvn{:version "1.0.0-beta4"},
org.clojure/core.async #:mvn{:version "0.3.442"}},
:doc
"The :push operation overrode these dependencies to match versions already running in Datomic Cloud. To test locally, add these explicit deps to your deps.edn."},
:deploy-command
"clojure -Adev -m datomic.ion.dev '{:op :deploy, :group <group>, :uname \"pet-service-sample\", :region \"us-east-2\"}'",
:doc
"To deploy, issue the :deploy-command, replacing <group> with a group from :deploy-groups"}
#2018-12-0517:27kvltclojure -Adev -m datomic.ion.dev '{:op :deploy, :rev "674e654d429ebc5092be8008b1463617720a1a7c", :uname "pet-service-sample", :group "ion-pet-Compute-1Q6752A2P837M", :region "us-east-2"}'
Is what I got when I took the deploy-command and added the group#2018-12-0517:29jaretwhats in your ion-config.edn?#2018-12-0517:32jaretAlso, @petr what version of Cloud are you running?#2018-12-0517:32kvlt{:allow [ion-sample.ion/app]
:lambdas {:app {:fn ion-sample.ion/app :description "Exploring Ions with Pedestal"}}
:app-name "ion-pet"}
#2018-12-0517:32kvltI deployed it yesterday, so the latest version#2018-12-0517:35kvltSo I realised that I hadn't committed that change. So I committed, pushed, and deployed. It's still failing, but this time cloudwatch doesn't give anything useful
17:31:29
START RequestId: cdd511ba-86b4-4242-9abb-e587c536b404 Version: $LATEST
17:31:29
2018-12-05T17:31:29.248Z cdd511ba-86b4-4242-9abb-e587c536b404 { event: { codeDeploy: { deployment: [Object] }, lambda: { cI: 0, c: [Object], uI: -1, u: [], dI: -1, d: [], common: [Object] } } }
17:31:30
END RequestId: cdd511ba-86b4-4242-9abb-e587c536b404
17:31:30
REPORT RequestId: cdd511ba-86b4-4242-9abb-e587c536b404 Duration: 1271.95 ms Billed Duration: 1300 ms Memory Size: 128 MB Max Memory Used: 38 MB
No newer events found at the moment. Retry.
#2018-12-0517:39kvltclojure -Adev -m datomic.ion.dev '{:op :deploy-status, :execution-arn arn:aws:states:us-east-2:272695641059:execution:datomic-ion-pet-Compute-1Q6752A2P837M:ion-pet-Compute-1Q6752A2P837M-pet-service-sample-1544031395342, :region "us-east-2"}'
{:deploy-status "FAILED", :code-deploy-status "FAILED"}
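For anyone retracing this thread, the overall push/deploy loop assembled from the commands above looks roughly like this (the group, uname, and region are this conversation's values; take yours from the :deploy-groups and :deploy-command keys that :push prints):

```shell
# Commit first: :push operates on the committed git revision.
git add ion-config.edn && git commit -m "update ion-config"

# Push; the output contains :deploy-groups and a templated :deploy-command.
clojure -A:dev -m datomic.ion.dev '{:op :push :region "us-east-2" :uname "pet-service-sample"}'

# Deploy, filling in a group from :deploy-groups.
clojure -Adev -m datomic.ion.dev '{:op :deploy, :group "ion-pet-Compute-1Q6752A2P837M", :uname "pet-service-sample", :region "us-east-2"}'

# Poll the :status-command returned by :deploy until it reports SUCCEEDED or FAILED.
```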
#2018-12-0517:39marshall@petr can you try putting in the region explicitly#2018-12-0517:39marshallas a :region key#2018-12-0517:39marshallin the deploy map#2018-12-0517:39marshalloh, you did#2018-12-0517:40marshallhrm#2018-12-0517:41jaretI think you’re on the right track though. This has to be a permissions/creds or region thing. I just tested on a new stack and my deploy command was populated with the compute stack as the $group#2018-12-0517:41marshallwell, actually, did you provide an “application name” in your CFT when you launched?#2018-12-0517:41marshallif you did, the group name is that not your comput group name#2018-12-0517:41kvltI am an admin on this account. I did provide an application name#2018-12-0517:42kvltion-pet#2018-12-0517:42marshalllook in the outputs of the stack in the CF dashboard#2018-12-0517:42marshallsorry, by that i mean the app name#2018-12-0517:43kvltSystemName ion-pet System Name#2018-12-0517:43jaret#2018-12-0517:43marshallcan you look in your cloufformation dashboard#2018-12-0517:43marshallfind the stack you launched (the compute stack)#2018-12-0517:43marshalland in outputs find CodeDeployDeploymentGroup#2018-12-0517:43kvltCodeDeployDeploymentGroup ion-pet-Compute-1Q6752A2P837M CodeDeploy Deployment Group#2018-12-0517:44jaretwhat do you have under “AvailabilityZone1”?#2018-12-0517:44kvltAvailabilityZone1 us-east-2b AvailabilityZone1#2018-12-0517:45marshallyou have a :rev and a :uname -> i dont think that would cause this, but i would expect you only to have one or the other#2018-12-0517:46kvltThe example I was following specified one:
To push the project to your Datomic Cloud environment, execute the
following command from the root directory of the sample project:
`clojure -A:dev -m datomic.ion.dev '{:op :push :uname "pet-service-sample"}'`
We provide a `:uname` key because the sample has a `:local/root` dependency.
This command will return a map containing the key
`:deploy-command`. Copy the value and execute it at the command line
to deploy the ionized app. You will need to unescape the `:uname` value.#2018-12-0517:47marshallah right, the local dep#2018-12-0517:47marshalln/m#2018-12-0517:47marshallhttps://console.aws.amazon.com/codesuite/codedeploy/home?#2018-12-0517:47marshall^ go there#2018-12-0517:47marshallyou should be able to look at your list of codedeploy groups#2018-12-0517:48kvltI can see 2 attempted deployments#2018-12-0517:48kvltCould it be that the example code does not work and its' failing because it's unable to start the app?#2018-12-0517:49marshallnot if the error is about not finding the group#2018-12-0517:49marshallif that’s not the error, then yes#2018-12-0517:49marshallif you click on the latest failed deployment you can look at the reported cause#2018-12-0517:50kvltSo once I made i commited and pushed the change to ion-config.edn, the deployment group error went away#2018-12-0517:50jaretah!#2018-12-0517:50kvltBut now it just tells me that the deployment failed#2018-12-0517:50jaretyeah, then that means the code is failing/unable to start the app#2018-12-0517:51kvltOh man, that sucks#2018-12-0517:51jaretBut hey! We found the deployment group 🙂#2018-12-0517:51jaretIt was right where you said it was 🙂#2018-12-0517:51kvltYeah! Thanks for that!#2018-12-0517:52kvltSo do you think it's that https://github.com/pedestal/pedestal-ions-sample is just not compatible with the new cloud or there is just a mistake somewhere?#2018-12-0517:55marshallDo you see an error in your cloudwatch logs#2018-12-0517:57kvltI do not#2018-12-0517:58kvltAt least not in the compute#2018-12-0518:05marshalllatest log group? 
the redeploy usually creates a new log group within the stream#2018-12-0518:08jaret@petr these docs show an example of navigating the log group for exceptions#2018-12-0518:08jarethttps://docs.datomic.com/cloud/troubleshooting.html#http-500#2018-12-0518:08kvltThanks!#2018-12-0518:09kvltI think I'm going to try to redeploy with the exact name of the system used in the example (later today). I'll report back#2018-12-0519:44lwhortonam i right in thinking that not null requirements on a standard sql column is a recipe for never being able to evolve your schema into the future? if i had foo varchar not null and a year later we deprecate foo to instead use bar, i dont want my application layer to have to carry the baggage of “filling in” old foos as well as bars?#2018-12-0519:47lwhortoni just want to leave the app code alone which deals with foos, and everything new uses bars instead.#2018-12-0519:51eraserhdPostgres, at least, allows efficient dropping of a "not null" constraint.#2018-12-0520:00lwhortonis it more useful to have nothing marked not null than* to pick and choose and in the future drop it?#2018-12-0520:00lwhortonor am i being silly by leaving everything nilable?#2018-12-0520:02eraserhdIt depends?#2018-12-0520:05eraserhdIn practice, I've seen a lot of Java code that has to check if every thing it touches is null. This comes from not having a coherent idea throughout the system of what null means (error? not yet filled in?). This is wasteful and super painful. The "not null" constraint on fields might be useful to communicate to other devs, who cannot push code which violates contracts. That said, there are other ways to validate contracts, and which is best depends on the domain.#2018-12-0520:05eraserhdAnd if you don't have other developers, it's moot. 
Unless you have an old, dodgy brain like me.#2018-12-0520:06lwhortonhaha, 👍#2018-12-0520:10alexmillerDatomic (and Clojure) strongly embrace the idea that you should say things that you know and omit things that you don’t#2018-12-0520:10alexmilleror omit stating the absence of a thing, if that makes sense#2018-12-0520:11alexmillerso, I do not think it is silly#2018-12-0520:11lwhortoni just watched rich’s last talk “maybe not” and was trying to apply it to this case where i dont have datomic but want datomic in postgres#2018-12-0520:11alexmillerbut as with anything, it depends :)#2018-12-0520:13lwhortoni do like the idea of nil means ‘i dont know’ not ‘empty to satisfy a constraint’#2018-12-0520:14alexmillerif you don’t know, then why say anything at all? :)#2018-12-0520:17lwhortonwell, i guess to be more clear (arghh the english language): i like the idea of nil meaning “look person, i dont even know what you’re talking about…“, which currently cant be represented in postgres (unlike datomic where you can simply omit something). and maybe the best way to do that is simply nil the field#2018-12-0520:17eraserhdYou could always make one Postgres table called "facts" with "e" "a" and "v" columns .... (jk, don't do this)#2018-12-0520:18lwhortonbetween triggers, log tables, and a reified transactions table with some cleverness we’re getting close to about 25% of the power of datomic#2018-12-0520:18ro6Somewhere I read that the Drupal CMS basically does that, never confirmed it though.#2018-12-0520:20shaun-mahood@lwhorton: Anecdotally, I've worked with systems that blow up hard if there's a null in the wrong place - particularly where it is either a vendor database and so I can't change anything, or if there are legacy or external systems that I'm integrating one of my databases with. 
I'm seriously considering stripping out nulls as part of my JDBC queries as a step towards migrating to Datomic and to make the Clojure code a little more consistent (though this may be a terrible idea).#2018-12-0520:24lwhortonthanks for the insight @shaun-mahood. i’m working in an elixir system and also very much considering modifying the db connector to auto-strip nulls too. as alex said, if we dont know it why bother even declaring we dont know it.#2018-12-0613:49dustingetzHistory modeling question. Say you have :post/url-slug :keyword :identity. And when you change the slug, you want to prevent breakage by remembering the old slugs. Is slug :cardinality :many, or should we use history for this?#2018-12-0617:11val_waeselynckI'll keep shouting it: don't use history to implement your application logic! https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html 😛#2018-12-0617:11val_waeselynckMaybe have 2 attributes, one which is the current slug, another which keeps track of past slugs?#2018-12-0617:16benoitAgree with @U06GS6P1N. I would store multiple slugs in this case, especially if you don't care which one is the last one and only use them to resolve URLs. But if I could take a step back I would try to put an immutable identifier in this URL for resolution.#2018-12-0617:38lwhortonthinking about your post there @U06GS6P1N… if you’re concerned about the earth-time that something actually happened (and you probably should be) would it not be better for pretty much every entity to have an created-at in addition to the automatic tx instant?#2018-12-0617:39lwhortonthat would enable clients or really any other process (for example, offline-mode clients) to hold onto the when and not conflate the when-it-actually-happened with when-i-learned-about-it#2018-12-0617:40lwhortonthough i suppose it depends on the needs of your application. 
how important the “event time” is compared to the “recording time” also seems like a domain concern#2018-12-0621:01dustingetzIt is not clear to me that preventing breakage of public identities over time should be considered application logic#2018-12-0621:23benoitI think val's "don't use history to implement your application logic" is a shortcut. Sometimes it makes sense to use history for application logic when real-world time = database time. In your case, I'm guessing that this slug started to exist when it was created in the database so the two times coincide. So it would be correct to use history for that. I think it all depends whether you want the :url/slug attribute to mean "the last slug for this resource and the one to use when publishing this URL" or "all the slugs that redirect to this URL".
Another thing to consider is that if you use those slugs to identify resources you might want to ensure unicity. You might not want to look at the whole history when you create a new slug to detect collisions. A cardinality many with a "identity" flag seems easier.#2018-12-0621:29lwhortonwhen real-world time = database time seems to me to affect two scenarios*: any sort of ‘offline mode’ feature, and any time you have a queueing system to process heavy load, where ‘real world event time’ != ‘database time’ (by a significant enough margin to matter)#2018-12-0621:54benoit@U0W0JDY4C Not sure I understand what you mean. The difference between the domain time and the database time is more general I think. If you record a fact that person P joined company C at a certain date then it is "domain time" and datomic history will not help you with that. But val's article showed that even if what you model are entities that could coincide with database time (what happens to a blog post is what gets recorded to the database), it is still not a good idea to rely on the history functions to implement features.#2018-12-0621:55lwhortonyes, sorry for the confusion. we are on the same page-- datomic doesn’t magically handle time related specifically to domains, and if you need domain time it’s important to model that explicitly.#2018-12-0616:14marshallANN:
If you are running Datomic Cloud Production topology and are using a VPC Endpoint (as detailed here: https://docs.datomic.com/cloud/operation/client-applications.html#create-endpoint), we are considering improvements that impact this service and would like to hear from you.
Please email us (<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>) or respond on the forums with a short description of your use case for the VPC Endpoint.
https://forum.datomic.com/t/requesting-feedback-on-vpc-endpoint-use/721#2018-12-0621:15kennyIf I delete a large Datomic DB (10-50m datoms), Datomics DDB read provisioned spikes to 25 and read actual to 250 for a while. Is there a reason for this?#2018-12-0621:18kennyFurther, shouldn't Datomic have auto scaled the DDB reads up to at least 100?#2018-12-0621:19kennyRead actual stayed up at 250 for about 15 mins.#2018-12-0717:09SalI was reading datomic’s tutorial on transacting schemas referenced here: https://docs.datomic.com/on-prem/getting-started/transact-schema.html#2018-12-0717:11SalBut I notice in other sites or examples, there are two additional attributes specified that are not in the tutorial: :db.install/_attribute and :db/id#2018-12-0717:12SalIs there a reason those attributes are not presented in the tutorial?#2018-12-0717:12favila@fuentesjr those used to be required, but now are not#2018-12-0717:12favilathe other examples you are seeing are older#2018-12-0717:13Salare those attributes no longer necessary, or do they simply have default values that can be overriden?#2018-12-0717:14favilathe attributes are there, you just do not have to supply them#2018-12-0717:14favilain every case db id should be tempid in :db.user/db partition and :db.install/_attribute should be :db.user/db#2018-12-0717:15favilathey just went from requiring them for installation to inferring them#2018-12-0717:15favila(inferred from the presence of :db/type and :db/cardinality probably)#2018-12-0717:16favilawhat those did was add the new attribute to the :db.user/db entity (entity id 0) on its :db.install/attribute attribute#2018-12-0717:16favilanow that happens without you asking for it#2018-12-0717:17favilahttps://docs.datomic.com/on-prem/changes.html#0.9.5530#2018-12-0717:18favilareleased 2016-11-28#2018-12-0717:24Salso by default now all schemas are created in the db.part/user partition when those attributes are not specified?#2018-12-0717:25SalAlso, are the entity ids unique across 
all of the db or are they unique only within the partition they reside in?#2018-12-0717:26ro6@fuentesjr I think so. It seems like partition control in general has fallen out of common practice. I'm not sure it's even possible in Datomic Cloud. I think I read somewhere that they found it wasn't being used much and may not be "worth its weight" in the API.#2018-12-0717:27ro6Not sure about your second question.#2018-12-0717:28favilaI'm pretty sure schema is still put in the db.part/db partition#2018-12-0717:29Salinteresting#2018-12-0717:29favilathis was mostly an interface change (you no longer need to include those assertions in your tx data) not a functionality change#2018-12-0717:29favilaif you look at the datoms added after the tx I am pretty sure they are the same#2018-12-0717:30ro6My understanding is that partitions are primarily a means of improving performance by controlling which of your datoms get indexed/cached "closer" to each other. I'm not sure if they have other functional/semantic implications.#2018-12-0717:30favilathey don't, but schema is supposed to go in db partition and I don't think they changed that#2018-12-0717:31favilapractically speaking this just means that the entity id of attributes will be smaller#2018-12-0717:32SalI see. So by looking at the presence of :db.type and/or :db/cardinality, they store those datoms in db.part/db otherwise … they store them in db.part/user#2018-12-0721:44benoitThis "best practice" in the Datomic docs might need to be updated. https://docs.datomic.com/cloud/best.html#optimistic-concurrency
I'm getting a java.util.concurrent.ExecutionException containing an ExceptionInfo with {:db/error :db.error/cas-failed}. Is that how to detect CAS failures now?#2018-12-0809:14jwkoelewijndoes anyone have some tips regarding troubleshooting a hanging Push with datomic-ions? The last thing i see is Downloading: com/datomic/java-io/0.1.11/java-io-0.1.11.pom from and then it hangs while having 99% cpu usage#2018-12-0809:19tomjackhmm, I was just struggling with the same symptom#2018-12-0809:19tomjackhttps://dev.clojure.org/jira/browse/TDEPS-79#2018-12-0809:19tomjackbut unrelated to datomic#2018-12-0810:18jwkoelewijnhmmm, interesting, will have a look, thanks#2018-12-0909:13jwkoelewijnI seem to have found the culprit: for some reason I had "target" in my :paths section. Removing this removed the hang and enabled me to push and deploy#2018-12-0904:07steveb8nSeeking your opinion: after following the Ions tutorials where each fn is its own lambda, then trying out a single request handler fn/Ion, the single Ion seems much better. The main reason is cold starts: with a single Ion, there are a lot fewer cold starts for users. It means using less of the API Gateway machinery but this is actually a good thing if you want a local dev server. So that’s two compelling reasons to use a single entry-point. What am I missing in this assessment?#2018-12-1017:46ro6That's been my conclusion so far as well. There may be tasks around long-term API maintenance that the Gateway features help with, but I haven't reached that problem yet.#2018-12-1017:50Joe LaneSecurity through cognito is certainly one usecase. It would allow you to disentangle biz logic from the auth(z) code.#2018-12-1017:51Joe LaneIf you can isolate all that stuff at the boundary it can simplify quite a lot of stuff.
But thats kind of a design+biz tradeoff for whether you want to separate auth(z) from biz code.#2018-12-1017:52Joe LaneOn the one hand you could trust that the functions are only run by properly authorized roles if you have a mechanism to ensure all function invocations are piped through cognito.#2018-12-1017:53Joe LaneOn the other hand, whats the consequence for getting it wrong because of a typo if you decouple them? Does a user in a game get to do something they shouldnt? nbd. Does your firm have a catastrophic hippa violation? Company ends with lawsuits burning it to the ground.#2018-12-1017:54ro6in that case, perhaps you'd complect on purpose#2018-12-1018:55joshkhfor what it's worth i started by deploying many "atomic" functions behind API Gateway routes, and then eventually folded them in to one proxy resource to avoid cold starts. my reasoning was that some end points are very important but not used often, and the cold start of those end points resulted in a poor user experience.#2018-12-1018:58joshkhi think the Ions tutorial leaves readers in a funny place - on one hand Ions advertises itself as atomic functions in the cloud, yet the tutorial steers readers to internal routing without demonstrating how to do so. you're left to choose one path or the other without knowing the consequences.#2018-12-1020:54steveb8nThanks for the thoughts. Re Cognito, I am using it already and I learned that with 1 interceptor I can replicate the checks that are done by API-GW. However I had to make an extra AWS call because the Cognito ID token doesn’t contain the roles but it’s the one used for decorating requests. Instead the auth handler needs to extra the role from the Access Token i.e. a bit of extra complexity at Auth time. Not a high price to pay. At this point I’m pretty much ready to not use Cognito roles and implement it myself because the local dev server can use that as well#2018-12-1108:34stijnI'll give my 2 cents: we have even given up on API GW as a proxy. 
There's 2 reasons.#2018-12-1108:36stijn1/ if you call the datomic lambda and that fails (e.g. after an ion deploy it happens frequently), you'll get back an internal server error, but api gw doesn't let you change the response on proxy methods. we would like to add some headers for CORS and set the response to e.g. 503, because a retry makes sense in these cases. you could solve that by adding another lambda in front i guess#2018-12-1108:40stijn2/ if you have large requests (> 6MB = the lambda limit), you have to find another way to get your data in/out. if you go the serverless way that would mean something like using presigned S3 urls for both upload and download. Also the max timeout for api gateway is 30s. maybe we are misusing all this, but file uploads / downloads is kind of crucial to our application#2018-12-1108:41stijnif you don't have any of these requirements I think api gateway is good, but i'd still use it with 1 proxy endpoint, 1 lambda and do the routing in the ion.#2018-12-0918:49bbloomin the context of datomic, what patterns to people tend to use to deal with “unknown values” (ie missing datums) and “known unknown values” (ie explicit nils) given that datomic doesn’t support the latter?#2018-12-0918:56favilaKnown unknowns are common in healthcare#2018-12-0918:56bbloomyeah, anything with a form that permits an “N/A” - which i’ve dealt with a lot#2018-12-0918:56favilaUsually there is some code that expresses it in the same coding system as whatever expresses a positive value #2018-12-0918:57favilaIn fields that have less extensive coding I’m not sure how to handle it without having two attributes#2018-12-0918:57favila(Because the type will be different)#2018-12-0918:57bbloomlike foo and foo_known?#2018-12-0918:57bbloomi’ve done that a bunch, but it tends to be fiddly#2018-12-0918:57favilaYeah and then a constraint that you have one or the other not both#2018-12-0918:57favilaYes it is fiddly#2018-12-0918:58bbloomi’m curious why Datomic is the way it 
is, especially given the context of the recent Maybe Not talk#2018-12-0918:58bbloomi wonder if there are some technical reasons having to do w/ indexing or the datalog implementation - or if it’s just an oversight, but i’d be skeptical of the latter#2018-12-0918:59favilaAnother pattern that I use for polymorphic attrs in general in datomic is this#2018-12-0919:00favila{:attr/base :attr/baseTYPE :attr/baseTYPE VAL}#2018-12-0919:00favilaI’ve never thought of using this to express known unknowns but it seems possible#2018-12-0919:01favila:attr/nameUnknown and then the value is an enumeration to kind of known#2018-12-0919:01bbloomjust a generalized tagged union? yeah seems like that’s perfectly reasonable#2018-12-0919:01favilaEssentially, but a way that cooperated well with daatalog and datomic’s model#2018-12-0919:02favila[?e :attr/base ?a][?e ?a ?v]#2018-12-0919:02bbloomah - clever#2018-12-0919:04bbloomseems like maybe it’s intentional for known/unknown to be encoded “one level up”#2018-12-0919:05favilaNil is convenient for simple cases of “I know about this attr but I don’t know the value” but there are more dimensions of unknownness#2018-12-0919:06bbloomfor sure, i’ve encountered many different variants of nil 🙂#2018-12-0919:06favilaNil can blur those the same way using a Boolean vs a enum can#2018-12-0919:06bbloomyup#2018-12-0919:06bbloombut a variant type might be nice#2018-12-0919:07favilaFor fun google “hl7 nullflavor”#2018-12-0919:07bbloomie string or keyword, so you could do ‘Brandon’ or :unknown, or :not_yet_named or whatever#2018-12-0919:07favilaExtreme example of this#2018-12-0919:07bbloomheh, i’ve seen this 🙂 fuuuun times#2018-12-0919:08bbloomalthough this brings up a related problem i’ve encountered a bunch: the “when” of classification#2018-12-0919:08bbloomie do i have just one nil value? or do i have 10 different keywords? 
i may need to distinguish, but i may also want to just treat them all the same#2018-12-0919:08bbloomand can’t put metadata on nil 😉#2018-12-0923:02idiomancyIf anyone has time, I could use help structuring this query a little better 😕
The issue is it's an or join situation, but each branch is conceptually joined to the branch that came before it. So..
What I'm trying to say is get the event where:
the event is a link issued (has a link) that was sent to this recipient-address,
OR the event is a session creation (has a session) that originated with (has reference to) said link issued event
OR the event is a session joined (has a session) that originated from the aforementioned session creation event
phrased another way, given the following events with the following keys:
------------------
link/created: [link, recipient-address]
session/created: [session, link]
session/joined: [session]
I want all events that relate to that recipient address.#2018-12-0923:03idiomancyMy best effort so far has produced this:#2018-12-0923:03idiomancy(patch/q
  '[:find [?e ...]
    :in $ ?email
    :where
    [?e ::tid ?tid]
    (or-join [?email ?tid]
      (and [?e :recipient_address ?email]
           [?e :magiclink ?magic]
           [?e ::tid ?tid])
      (and [?e :recipient_address ?email]
           [?e :magiclink ?magic]
           [?e2 :session ?session]
           [?e2 :magiclink ?magic]
           [?e2 ::tid ?tid])
      (and [?e :recipient_address ?email]
           [?e :magiclink ?magic]
           [?e2 :magiclink ?magic]
           [?e2 :session ?session]
           [?e3 :session ?session]
           [?e3 ::tid ?tid]))]
  db
"#2018-12-0923:17idiomancyokay, I've gotten it running... but this can't possibly be the best way to do it#2018-12-0923:17idiomancyediting the above^^^#2018-12-0923:19idiomancyso that's the rawest of the raw ways to do that, and doesn't at all take advantage of the fact that the steps are kind of an accumulation of the previous steps plus something else#2018-12-1000:55benoit@idiomancy The logic makes sense to me. As for performance, I'm not sure. You could try to separate the queries to see for yourself. You could get all the events representing when the links were issued and then get the related events in two other queries. But I'm not sure that would necessarily speed up anything. I would be interested to know what you find.#2018-12-1000:58idiomancyhmm. I feel like I must be missing something.#2018-12-1000:59idiomancyI guess they are referring to separate groups of entities 🤔#2018-12-1001:00benoitOh you might have a logic issue in the second or clause. The ?session var is used only once.#2018-12-1001:01idiomancymein got! 😱 good catch!#2018-12-1001:11idiomancyhahaha, interesting! yeah, you actually made me realize that those joins are superfluous!#2018-12-1001:12idiomancyit doesn't matter that ?e2 has a session in the second case or indeed that ?e has a magiclink in the first case!#2018-12-1001:34benoitYou also likely don't need the tid, you can just join on the ?e events you're looking for.#2018-12-1012:54arnaud_bosIf anyone wants to help, I'm still struggling with the datomic getting-started guide.
I've finally retrieved datomic-pro dependency from the repo (using leiningen, deps still doesn't work) and now I'm seeing a weird exception when opening my repl:
I've set up the smallest repro case I could here: https://github.com/arnaudbos/thisisnotalovesong
This is basically just
(require '[datomic.client.api :as d])
(def cfg {:server-type :peer-server
          :access-key "myaccesskey"
          :secret "mysecret"
          :endpoint "localhost:8998"})
(def client (d/client cfg))
And then java.lang.IllegalArgumentException: Unable to load client, make sure com.datomic/client is on your classpath#2018-12-1013:36mping@arnaud_bos I guess you are missing the datomic client lib#2018-12-1013:36mpinghttps://github.com/arnaudbos/thisisnotalovesong/blob/master/deps.edn#2018-12-1013:24thegeez@arnaud_bos the client lib to connect to a running datomic instance is a separate library: com.datomic/client-pro {:mvn/version "0.8.28"} https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html#2018-12-1013:30arnaud_bosAh, I see, I did mix info from the getting started guide and from the http://my.datomic.com/account page...#2018-12-1013:30arnaud_bosThank you!#2018-12-1015:06grzmI’ve run into an issue running (datomic.ion.cast/initialize-redirect :stdout) on CIDER. Works when redirecting to :stderr or to a file. I’ve posted a repro case in the hopes someone with more CIDER-fu might be able to figure it out more quickly than I can: https://github.com/grzm/cider-ion-cast-stackoverflow (Also posted in #cider)#2018-12-1015:43m_m_mHi all. Am I right that Datomic in a free version is only based on memory not SSD?#2018-12-1015:45benoitIt only supports local storage (disk). The PRO starter supports all storages.#2018-12-1015:50m_m_mdo you know what is the minimal price for the pro version? I can't find it on their site. There is only some AWS calculator.#2018-12-1015:51marshallDatomic On-Prem information: https://www.datomic.com/get-datomic.html#2018-12-1016:58dustingetzIdea: pull through relations as if ref https://gist.github.com/dustingetz/cfd6882e2acae6e8b48759ec24c4de0a#2018-12-1018:48joshkhmy Ions lambdas are returning the following:
java.net.ConnectException: Connection refused
and my local SOCKS connection to my cloud instance is returning the following stack trace. any clues?
:cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message SOCKS4 tunnel failed, connection closed, :cognitect.http-client/throwable
#error {
:cause SOCKS4 tunnel failed, connection closed
:via
[{:type java.io.IOException
:message SOCKS4 tunnel failed, connection closed
:at [org.eclipse.jetty.client.Socks4Proxy$Socks4ProxyConnection onFillable Socks4Proxy.java 165]}]
:trace
[[org.eclipse.jetty.client.Socks4Proxy$Socks4ProxyConnection onFillable Socks4Proxy.java 165]
[org.eclipse.jetty.io.AbstractConnection$ReadCallback succeeded AbstractConnection.java 281]
[org.eclipse.jetty.io.FillInterest fillable FillInterest.java 102]
[org.eclipse.jetty.io.ChannelEndPoint$2 run ChannelEndPoint.java 118]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill runTask EatWhatYouKill.java 333]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill doProduce EatWhatYouKill.java 310]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill tryProduce EatWhatYouKill.java 168]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill produce EatWhatYouKill.java 132]
[org.eclipse.jetty.util.thread.QueuedThreadPool runJob QueuedThreadPool.java 762]
[org.eclipse.jetty.util.thread.QueuedThreadPool$2 run QueuedThreadPool.java 680]
[java.lang.Thread run Thread.java 844]]},
:config {:server-type :cloud, :region .., :system .., :query-group .., :endpoint http:// entry.. /, :proxy-port 8182, :endpoint-map {:headers {host entry..}, :scheme http, :server-name entry.., :server-port 8182} } }
#2018-12-1019:19marshall@joshkh if you’re using ions your server type should be :ion not :cloud#2018-12-1019:42joshkhstarting a thread because... slack! i'm seeing a CloudWatch alarm for ConsumedReadCapacityUnits <750 for 15 datapoints, and ConsumedWriteCapacityUnits < 150 for 15 datapoints#2018-12-1019:46joshkhand a cliff edge in the metrics that went from N capacity units to 0.#2018-12-1019:51marshallThe CW alarms for capacity are not relevant. they’re used by autoscaling policies internal to AWS (i.e. dynamodb) to trigger scaling up or down#2018-12-1020:00joshkhah, that's good to know. thanks Marshall. i'll comb through the logs and look for something useful. nothing in the local or remote config has changed - things just stopped working, although i know that another dev has been running some transactions (but no config changes). we had a similar problem a few months ago when our :cloud went down for 24 hours until we solved it by upgrading from a very early release to the split compute/storage stacks.#2018-12-1020:04marshallWhat version and what deployment *(solo or prod)#2018-12-1020:04marshallAlso, can you take a look at your CloudWatch dashboard for that system and see what the instance CPU usage looks like?#2018-12-1020:06joshkhis it okay if we take it to a DM for privacy reasons? happy to post any useful results after. 🙂#2018-12-1020:08marshallyup#2018-12-1019:27joshkhhmm, it is :ion in my code despite what the exception says.#2018-12-1019:28joshkhand i'm seeing channel 2: open failed: connect failed: Connection refused in my proxy connection#2018-12-1019:30joshkhi noticed the problem locally, then hit the remote ions via my API gateway and saw they were down as well without deploying any changes. i can't say for sure but it feels like something tipped over.#2018-12-1019:49marshallhave you committed and pushed your ion code? 
is it possible you’re running code that is using an older config?#2018-12-1019:50marshallif not, i would look in your CloudWatch Logs in the log group named datomic-<yourSystemName>#2018-12-1019:50marshalland see if the ions are firing and reporting any errors there#2018-12-1021:54jaretDatomic Cloud 454 and Ions release
https://forum.datomic.com/t/datomic-cloud-454-and-ions-release/732#2018-12-1109:15stijnnice release! preloading databases during deploy is a big improvement to us#2018-12-1109:15stijnwhat is considered an 'active database'?#2018-12-1114:14ro6This is great stuff! The longer CodeDeploy timeout means I can switch back to using Mount (which I like for development reloading) and still eager load things like the db connection.#2018-12-1023:07grzm@jaret Does that include an update of the Cognitect HTTP library to allow the use of the new (wonderful) AWS API in Ions as well? @marshall indicated that might be in this version. (Please say yes! Please say yes!)#2018-12-1103:26henrikIs this in reference to https://github.com/cognitect-labs/aws-api ?#2018-12-1023:10marshallYep @grzm #2018-12-1023:17grzmI think if you look out your window towards the upper midwest you’ll see the glow from my beaming smile 🙂#2018-12-1023:18marshallYes, i see it shining through the foot deep snow drifts in NC :D#2018-12-1112:49joshkhwondering if someone from cognitect is around to help? yesterday we upgraded from solo to production after our ec2 instance tipped over. the upgrade went well and we can connect to our cloud instance, but we can't deploy our existing ion functions.#2018-12-1112:55alexmillerI’m from Cognitect, but probably not qualified to help. But if I were, I would ask what “can’t deploy” means#2018-12-1113:06joshkhwhoops. i solved the problem shortly after asking. that's how it works right? 😉 the upgrade path to production failed so we had to delete the existing compute stack and install as-new (using the previous app-name). our bastion cloud connections came back, however the already-deployed ions were throwing an internal server error, and re-deploying them resulted in {:deploy-status "FAILED", :code-deploy-status "FAILED"}. i tested the lambdas via the AWS console which worked as expected, and a third code push finally succeeded. 
i still had to edit our API gateway, reselect the proxy resource and authorizer functions, then deploy the API.#2018-12-1113:07joshkhjust a wild guess, but i'm assuming that the new stack with the old app name crossed some wires. problem solved.#2018-12-1114:25ro6Is there a common practice for purposefully triggering rollback of the CodeDeploy from within the app? Eg if a Datomic schema migration fails in production or another startup condition isn't met. What condition is CodeDeploy polling to determine that "the service is up"?#2018-12-1115:16marshall@robert.mather.rmm The Datomic process not starting is the most common cause of rollback
usually caused by a bug or deps conflict in an ion that throws when the ns is loaded#2018-12-1115:30ro6@marshall Can my app explicitly signal an issue and cause CodeDeploy to roll back?#2018-12-1115:30Joe LaneThrow an exception when trying to load a namespace.#2018-12-1115:48Joe Lane@robert.mather.rmm ^^ Do you think this covers your use case?#2018-12-1115:58ro6Probably. I'll have to try it out. I generally set an uncaught exception handler (ala https://stuartsierra.com/2015/05/27/clojure-uncaught-exceptions), but maybe I can do that last in my startup process.#2018-12-1116:16stijn@robert.mather.rmm so, you are loading a bunch of stuff at compile time instead of at first request time?#2018-12-1117:38ro6Initially yes. The Code Deploy was timing out and rolling back because it only gave 2 minutes for startup. I switched to first request and doing everything lazy, I was hoping to switch back. Is the time to establish the Datomic connection proportional to data size or something? #2018-12-1117:39ro6To me it's quite desirable to be able to do schema transaction/migration and check everything worked before exposing the new instances to the world. #2018-12-1215:16stijnyes, I think it would benefit our use case too, just checking if it is possible. i'll try it out#2018-12-1116:17stijnhow does that work out with the 'database loading'?#2018-12-1117:17val_waeselynckFailing to deploy on Ion because of a mysterious error thrown when calling d/client: "Assert failed: cfg". Does anyone know what this could mean?
The error is thrown by:
(d/client {:server-type :ion
           :region "eu-central-1"
           :system "linnaeus"
           :query-group "linnaeus"
           :endpoint ""})
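The resolution that emerges later in this thread is to delay client creation until first invoke instead of connecting while the namespace loads. A minimal sketch of that pattern against the config above (untested; the empty :endpoint is reproduced from the original paste and the db name "linnaeus-db" is a placeholder):

```clojure
;; Sketch, assuming the same config as the failing paste above.
;; Wrapping d/client in a memoized fn means no connection is attempted
;; as a side effect of loading the namespace; the first invoke pays the
;; cost once and later calls reuse the cached client.
(require '[datomic.client.api :as d])

(def get-client
  (memoize
   (fn []
     (d/client {:server-type :ion
                :region      "eu-central-1"
                :system      "linnaeus"
                :query-group "linnaeus"
                :endpoint    ""}))))

(defn get-conn []
  ;; "linnaeus-db" is a placeholder db name, not from the thread.
  (d/connect (get-client) {:db-name "linnaeus-db"}))
```

Handlers then call `(get-conn)` per request rather than dereferencing a top-level `(def client ...)`.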
#2018-12-1117:17val_waeselynckHere's what the error looks like in Cloudwatch (reported via cast/alert):#2018-12-1117:18val_waeselynck#2018-12-1117:18val_waeselynckRunning on com.datomic/ion {:mvn/version "0.9.26"} and org.clojure/clojure {:mvn/version "1.9.0"} on a freshly-updated stack.#2018-12-1117:58marshall@val_waeselynck Can you go to the latest ion? (0.9.28) and also, what version of ion-dev are you using?#2018-12-1117:59val_waeselynck@marshall running on com.datomic/ion-dev {:mvn/version "0.9.176"}#2018-12-1118:00val_waeselynckLet me try updating ion#2018-12-1118:00marshallion-dev also#2018-12-1118:00marshallis now com.datomic/ion-dev “0.9.186”#2018-12-1118:00marshallhttps://docs.datomic.com/cloud/releases.html#2018-12-1118:00marshallworking on fixing the duplicate rows in that table now#2018-12-1118:01marshallalso, what version of client?#2018-12-1118:01val_waeselynckcom.datomic/client-cloud {:mvn/version "0.8.71"}, but that's a :dev dep#2018-12-1118:02marshallyeah, that’s fine; should be overridden on the instance anyway#2018-12-1118:02val_waeselynckdeploying, stay tuned...#2018-12-1118:03marshallyou re-pushed yes?#2018-12-1118:04val_waeselynckYes#2018-12-1118:04val_waeselynckNope, still same error 😕#2018-12-1118:06marshallThis happens on deploy or invoke>#2018-12-1118:07marshall?#2018-12-1118:07val_waeselynckdeploy#2018-12-1118:07val_waeselynckNote that I'm making this call at ns-load time#2018-12-1118:07val_waeselynckDon't know if that's OK#2018-12-1118:08marshallthe call to client?#2018-12-1118:08val_waeselynckyes#2018-12-1118:08marshalllike in a def ?#2018-12-1118:08val_waeselynckas in (def client (d/client ...))#2018-12-1118:08val_waeselynckyup#2018-12-1118:08marshallright#2018-12-1118:08marshallwell, i wouldnt expect it to be an issue; could you put it in a memoized fn instead?#2018-12-1118:09marshalli.e. 
https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L12#2018-12-1118:09marshallif that fixes the issue i can explore the reasons further and then either fix or at least document#2018-12-1118:09val_waeselynckI could, but it could take some time for me to be able to reproduce the issue then. I'll try that tomorrow. Is there a particular rationale for this memoization anyway?#2018-12-1118:10val_waeselynckOK you just answered#2018-12-1118:10marshall🙂#2018-12-1118:10marshallI’ll also try doing a similar thing in a test without the memoized client call#2018-12-1118:10marshallto see if i can reproduce the behavior you see#2018-12-1118:11val_waeselynckI think (and this is totally not rigorous reporting) that it used to work on a previous deploy - I'm suspecting our stack is in a pathological state, will re-create it tomorrow#2018-12-1118:12marshallok. let me know if you’d like some help investigating at all#2018-12-1118:12val_waeselynckBy the time our timezones meet, I will either have succeeded or be in need for more help 🙂#2018-12-1118:15marshallfair enough#2018-12-1118:23marshallVal, what version of Cloud are you running?#2018-12-1118:23marshallThis should be something that is fixed in recent releases#2018-12-1118:24marshall@val_waeselynck ^#2018-12-1118:37marshall@val_waeselynck I’ve confirmed that a change in the latest version loads user namespaces earlier in the process to reduce instance cycle time; one consequence is that you can’t connect to a database as a side effect of ns loading
I would recommend following the pattern shown in the example I provided (delay connection until first invoke). I will also look into making this more evident in documentation#2018-12-1122:44marshallhttps://docs.datomic.com/cloud/troubleshooting.html#assert-failed#2018-12-1206:38val_waeselynck@marshall thanks, initializing the connection lazily did work. I suggest adding a comment with that link in the tutorial's repo as well.#2018-12-1119:27timeyyythe link for A running Datomic System is broken -> https://docs.datomic.com/cloud/getting-started/connecting.html#2018-12-1119:34timeyyythe link for datomic-cloud repository is broken -> https://docs.datomic.com/cloud/releases.html#2018-12-1119:57marshall@timeyyy_da_man thanks, i’ll fix it#2018-12-1121:02timeyyyThanks marshall.#2018-12-1121:04timeyyyJust a suggestion, the free tier of amazon is for t2.micro, when creating the solo topology`t2.micro` is not in the dropdown, i had to go figure out how to update the CF stack. Would be nice to have it accessible from the dropdown.#2018-12-1122:09marshallyou can’t run solo nodes on t2.micro instances#2018-12-1122:36timeyyyhmm, you are correct, the template doesn't create successfully.#2018-12-1204:15ro6@marshall @jaret Just thought you guys should know, I tried switching to the new CloudFormation console view when Amazon prompted, and when I tried to upgrade to 454 via https://s3.amazonaws.com/datomic-cloud-1/cft/454-8573/datomic-storage-454-8573.json (it's my first upgrade), I couldn't find the Reuse Existing Storage option. Once I switched back to the old console UI, it was there again.#2018-12-1204:18ro6Actually, I think what happened is they changed the way they compose the UI from your configuration, because now it looks like this:
Other parameters
Restart
Set to 'false' only when initially creating a system, must be set to 'true' every time thereafter
Much less descriptive than before. I'm trying to complete the process using the new UI and I'll report back...#2018-12-1204:28henrikI ran into the same problem. I had to go back and read everything carefully to find it.
Other than that, the new UI is quite nice.#2018-12-1212:53joshkhi have a single ion function that routes my API, and quite often i get back a response: java.io.IOException: Connection reset by peer. a second or third try sometimes solves the problem. not knowing much about CloudWatch logs, how can i debug this?#2018-12-1212:57joshkhthat's without any new deployments#2018-12-1213:02Oleh K.Can I receive the last time an attribute was updated in datomic?
Something like this, but this doesn't work:
(d/q '[:find (max ?tx-time)
       :in $ ?user
       :where
       [?user]
       [?user :user/password ?password]
       [?password _ _ ?tx _]
       [?tx :db/txInstant ?tx-time]]
     (d/db conn) 123))
#2018-12-1213:19joshkhi think you can just do
(d/q '{:find [(max ?tx)]
:in [$]
:where [[?p :user/email "#2018-12-1213:23joshkhwhoops, that might not be exactly what you asked for.#2018-12-1213:25Oleh K.yeah, it's not)#2018-12-1216:19Oleh K.I've found:
(d/q
 '[:find (max ?tx-time)
   :in $ ?e
   :where
   [?e ?a _ ?tx _]
   [?tx :db/txInstant ?tx-time]
   [?a :db/ident :user/password]]
 (d/history (d/db conn)) 123)
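The query above can be written a little more directly by putting the attribute ident straight into the datom clause, which drops the :db/ident join. A sketch (untested, assumes the same :user/password attribute and an open connection `conn`):

```clojure
;; Most recent transaction time that touched :user/password on entity 123.
;; The history db is used so that a transaction whose final operation was
;; a retraction is also counted, not just the currently-asserted value.
;; The trailing `.` in the find spec returns a scalar instead of a set.
(d/q '[:find (max ?tx-time) .
       :in $ ?e
       :where
       [?e :user/password _ ?tx]
       [?tx :db/txInstant ?tx-time]]
     (d/history (d/db conn))
     123)
```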
#2018-12-1221:26joshkhthat's handy. thanks for sharing.#2018-12-1217:06ro6Does anyone know what configuration is required (or recommended) when creating a Datomic Client instance while running on an Ion in the cloud?#2018-12-1217:07kennyWhat configuration are you referencing? The args passed to the client function?#2018-12-1217:07ro6yes#2018-12-1217:08kennyWe pass this:
{:region "us-west-2"
 :server-type :ion
 :system "..."
 :endpoint "..."}
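For contrast with the in-cloud :ion config above, reaching the same system from outside the VPC (e.g. through the SOCKS proxy seen earlier in this log) uses :server-type :cloud plus a :proxy-port. A sketch in which every value is a placeholder, modeled on the :config map from the SOCKS error paste above:

```clojure
;; Sketch only -- region, system name, endpoint host, and port are all
;; placeholders; substitute the values for your own Datomic Cloud system.
(def dev-cfg
  {:server-type :cloud
   :region      "us-west-2"
   :system      "my-system"
   :endpoint    "http://entry.my-system.us-west-2.datomic.net:8182/"
   :proxy-port  8182})
```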
#2018-12-1217:08ro6great, thanks#2018-12-1217:07ro6- https://docs.datomic.com/cloud/ions/ions-reference.html#server-type-ion looks relevant for creating a local client to connect to a cloud system
- https://docs.datomic.com/client-api/datomic.client.api.html#var-client covers the cloud and peer cases#2018-12-1217:23val_waeselynckI'm suddenly failing to start my development environment, it seems the Datomic-Cloud private Maven repo is refusing access:#2018-12-1217:24val_waeselynck#2018-12-1217:27val_waeselynckMy deps.edn is:
{:paths ["src"
         "resources"]
 :deps {org.clojure/clojure {:mvn/version "1.9.0"}
        com.datomic/ion {:mvn/version "0.9.28"}}
 :mvn/repos {"datomic-cloud" {:url ""}}
 :aliases
 {:dev
  {:extra-paths ["dev" "dev-resources"]
   :extra-deps {com.datomic/ion-dev {:mvn/version "0.9.186"}
                com.datomic/client-cloud {:mvn/version "0.8.71"}}}}}
#2018-12-1217:28val_waeselynckStrangely this happens only when I start my REPL from an EC2 instance - it's fine on my laptop.#2018-12-1217:28marshallWhat region?#2018-12-1217:29val_waeselynck@marshall eu-central-1#2018-12-1217:39marshalldoes your EC2 instance have S3 read permissions via IAM?#2018-12-1217:40marshallthe deps resolution will access the s3 repos required with AWS java API calls, not just HTTP, so running on an ec2 instance will require that instance to have IAM permissions to use S3#2018-12-1217:41val_waeselynckIt's got permissions to use s3 indeed, although not that repository#2018-12-1217:41marshallone second. i’m tracking down something that may actually be the issue#2018-12-1217:42val_waeselynckBut it's surprising that this issue would arise just now#2018-12-1217:50henrikIs this the same error? https://dev.clojure.org/jira/browse/TDEPS-20
Symptoms seem similar.#2018-12-1217:58marshallhrm. doesnt appear to be the issue i thought it was#2018-12-1218:00marshall@U06GS6P1N I saw a related issue yesterday when my instance role didnt have read * for s3; not sure if that’s a viable option for you, maybe at least add read permissions for the datomic-releases-1fc2183a bucket#2018-12-1218:03val_waeselynckI did and it worked, although I can't tell if adding the permissions solved it or if it was a transient error - not easy to tell as there must be caching involved#2018-12-1217:35benoitHow do people usually evaluate the performance differences of Datomic queries? (without being impacted by the caching)#2018-12-1218:10ro6I'm getting "Assert failed: cfg" when trying to create a client instance in the Ion Cloud. I'm passing the config map:
{:server-type :ion
 :system "my-system-name"
 :region "us-west-2"
 :endpoint ""}
#2018-12-1218:18marshall@robert.mather.rmm https://docs.datomic.com/cloud/troubleshooting.html#assert-failed#2018-12-1218:29ro6@jaret If I'm reading that correctly, it means I can't run migrations and check things worked before the instances are exposed to the world, which is what I've been hoping to do all along.#2018-12-1220:22jaret@robert.mather.rmm I am sorry for not catching you on this earlier, I thought we had figured that out on the previous ticket by removing “defstate.” I didn’t get a chance to look at your new ticket until just now. But Marshall is right, connections as side effects of loading a namespace are not supported.#2018-12-1220:24ro6No worries. I knew that removing the defstates worked, but never got to the root of why I guess.#2018-12-1220:25jaretYeah I think we left that hanging once removing it worked and never found root cause for you, but looking back it makes sense now.#2018-12-1220:26ro6Is the team thinking about a way to support sanity checks during the deploy phase? That would be very nice.#2018-12-1218:24marshall@lwhorton I missed your reply a while back - the client API source is in the jar file#2018-12-1219:31eoliphantHi, i’m having a weird problem (on 441-8505). Trying to transact a bigint of any form 1N (bigint 1) (biginteger 1) causes a “ExceptionInfo Cannot write 1 as tag null” error#2018-12-1221:53markbastianis :db/index not a valid schema attribute in datomic cloud?#2018-12-1221:54markbastianIf I try to transact this:
(d/transact conn {:tx-data [{:db/ident :foo/bar
                             :db/valueType :db.type/string
                             :db/cardinality :db.cardinality/one
                             :db/index true}]})
I get "Unable to resolve entity: :db/index".#2018-12-1222:10marshallCorrect, everything is index true#2018-12-1222:10marshall@markbastian ^#2018-12-1222:10markbastianeverything is indexed in the cloud?#2018-12-1222:11markbastianI can't wait to try this out.#2018-12-1222:11marshallEverything is indexed in on prem as well#2018-12-1222:11markbastianis that true for in-mem?#2018-12-1222:12marshallThat flag tells datomic to include that attribute in the AVET index#2018-12-1222:12marshallBut all datoms are in eavt#2018-12-1222:12marshallAnd aevt#2018-12-1222:13markbastianright, so can I put the attribute in avet?#2018-12-1222:14marshallAvet is "on" always in cloud#2018-12-1222:14markbastianok, so for on-prem avet is configurable, but cloud is always on?#2018-12-1222:14marshallYup#2018-12-1222:14markbastianok, that makes sense#2018-12-1222:14markbastiansuper cool#2018-12-1222:26johnjalso, :db/unique also implies :db/index in on-prem#2018-12-1222:27markbastianyep, got the unique part#2018-12-1304:28henrikIs it possible to specify a creds profile for S3 deps? I.e.,
:mvn/repos {"datomic-cloud" {:url ""}}#2018-12-1304:34henrikApparently, no. The access/secret key have to be inserted into ~/.m2/settings.xml.#2018-12-1305:26alexmillerDocs at https://clojure.org/reference/deps_and_cli, search for maven s3#2018-12-1305:34steveb8nQ: what’s everyone doing for composite unique keys in Datomic these days? I’m thinking of using a unique string attribute with value concatenated to let the transactor check it but wondering if anyone has uncovered a better trick lately?#2018-12-1419:08matthavenerThat’s the best I’ve seen.. and some transaction function helpers for upserting or conflict checking#2018-12-1312:15joshkhsorry for the repost, but i was just wondering if anyone has experienced their ions intermittently throwing java.io.IOException: Connection reset by peer exceptions behind an API Gateway? in my case it's a single ion that routes web requests. i've checked the CloudWatch logs for the routing ion, but nothing stands out.#2018-12-1314:04val_waeselynckSame problem here, no explanation.#2018-12-1315:20joshkhuhg. it's affecting our production setup.. no bueno.#2018-12-1312:18joshkhit seems to happen every few minutes, and randomly for various REST requests#2018-12-1312:52benoit@steveb8n I usually implement those kind of checks in a transaction function.#2018-12-1322:15steveb8nThanks. Yes I have the same thing currently but it is pretty complex and can be difficult to debug so I’m exploring other options.#2018-12-1322:16steveb8nAlso, it’s harder to convert these exceptions into friendly user-facing messages so that’s another reason to find an alternative. Right now I’m thinking a combination of a query in the peer and a unique attribute as a fallback#2018-12-1313:06stijnis there a reason that query timeouts generate an exception with :cognitect.anomalies/category :cognitect.anomalies/incorrect instead of :cognitect.anomalies/interrupted? 
or am I misinterpreting what 'interrupted' means?#2018-12-1314:50marshallThat is likely a bug; i will investigate#2018-12-1317:58stijnOk, if needed I can provide you with a stacktrace #2018-12-1319:20marshallI see sandbox.core=> (<!! (d/q {:query '[:find ?e
                                   :where [?e :db/doc]]
                          :timeout 1
                          :args [(d/db conn)]}))
#:cognitect.anomalies{:category :cognitect.anomalies/interrupted, :message "Datomic Client Timeout"}
sandbox.core=> #2018-12-1319:20marshallwhen getting a timeout query#2018-12-1319:20marshallboth with sync and async apis#2018-12-1319:21marshallwhat version of client?#2018-12-1319:21marshall(this is against cloud)#2018-12-1319:29marshallnevermind I’ve seen the behavior in another context; we will investigate#2018-12-1314:18val_waeselynck@marshall So now we have implemented lazy client initialization as recommended, and the deploy succeeds, but we're getting a #:cognitect.anomalies{:category :cognitect.anomalies/unavailable :message "Loading database"} upon lambda invocation (reported by alert: "IonLambdaException").
What can we do about this?#2018-12-1314:22val_waeselynckThe exception is thrown after about 1 min, and the trace contains:
[
"datomic.client.impl.local.Client",
"connect",
"local.clj",
85
]
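As noted below in this thread, an `:cognitect.anomalies/unavailable` "Loading database" anomaly is transient and should be retried. A small retry helper along those lines (a sketch; `unavailable?`, the backoff constants, and the commented example call are illustrative, not from the thread):

```clojure
;; Sketch: retry a thunk while it throws transient anomalies such as
;; :cognitect.anomalies/unavailable ("Loading database").
(defn unavailable? [ex]
  (= :cognitect.anomalies/unavailable
     (:cognitect.anomalies/category (ex-data ex))))

(defn with-retry
  "Calls thunk, retrying up to max-tries times with linear backoff
  when it throws an exception recognized by retry?."
  [thunk {:keys [max-tries sleep-ms retry?]
          :or   {max-tries 5 sleep-ms 1000 retry? unavailable?}}]
  (loop [attempt 1]
    (let [res (try
                {:ok (thunk)}
                (catch Exception e
                  (if (and (< attempt max-tries) (retry? e))
                    {:retry e}
                    (throw e))))]
      (if (contains? res :ok)
        (:ok res)
        (do (Thread/sleep (* attempt sleep-ms))
            (recur (inc attempt)))))))

;; e.g. (with-retry #(d/connect client {:db-name "my-db"}) {})
```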
#2018-12-1314:42marshallSolo or production stack?#2018-12-1314:42val_waeselynckproduction#2018-12-1314:43val_waeselynckI'm thinking this may be due to switching to on-demand provisioning on DynamoDb#2018-12-1314:45marshallI have no experience with that; I would hesitate to use it until we’ve tested it with Datomic#2018-12-1314:45marshallit would not surprise me, however#2018-12-1314:46marshallloading a database requires reading from dynamo to retrieve the log tail; if there’s a delay there (i don’t know how the new provisioning thing works) it would slow that process#2018-12-1314:46marshallin general the unavailable due to loading database will resolve and should be retried#2018-12-1314:47val_waeselynckGot it, thanks. What are the recommended Dynamodb settings fro production ?#2018-12-1314:47marshallerm. whatever was in the template 😄 I’d have to go look#2018-12-1314:52marshall#2018-12-1314:52marshallVal ^ that’s a prod system i stood up yesterday#2018-12-1314:52marshallread @ 200 min, write @ 25 min#2018-12-1314:53marshall2000 and 500 max, respectively#2018-12-1314:53marshallit seems that slack isn’t letting me put in a screenshot#2018-12-1314:58val_waeselynckGreat, thank you! I'm seeing the screenshot all right#2018-12-1314:58marshallOh good. Slack sent me some message about file upload and free plan. 
shrug#2018-12-1315:03henrikIf you need some write capacity upfront, because you're ingesting a ton of data, is it worth fiddling with minimum write capacity, or should you leave it well enough alone and wait for auto scaling to kick in?#2018-12-1315:03marshallThe former#2018-12-1315:03marshallAt least in my experience #2018-12-1315:04marshallScaling will get you there eventually but if you know you're doing an import, you can bump it a bit and get a head start#2018-12-1315:04marshallUsually means less headache around making sure your import has long enough retry backoff etc#2018-12-1315:05marshallIve rarely needed more than 300ish for a bulk import#2018-12-1315:05marshallCloud uses a lot less ddb throughput than on prem#2018-12-1315:06val_waeselynckGood to know#2018-12-1315:06val_waeselynckSwitching back to provisioned capacity solved it, thanks @marshall#2018-12-1315:06marshallGood deal.#2018-12-1315:32markbastianDoes moving from solo to prod or some other better tier of datomic cloud help with ::cognitect.anomalies/busy issues when transacting data into the db, or is the transactor rate limited in that direction? The documentation at https://docs.datomic.com/cloud/troubleshooting.html#busy indicates that expanding the capacity of your system only helps with reads, not writes. Is this correct?#2018-12-1315:45marshallMoving to prod will definitely increase your write throughput#2018-12-1315:45marshallyou’ll be moving from a very small (t2.small) instance to a fairly substantial one (either i3.large or i3.xlarge)#2018-12-1315:45marshallyou also get larger and faster caching on the local instance#2018-12-1315:46marshallwhich will make indexing jobs run faster#2018-12-1315:53markbastiangreat, thanks#2018-12-1321:13ro6Is there supposed to be anything that needs to be done with API Gateway or Lambdas after my first CloudFormation stack upgrade?#2018-12-1321:23ro6nevermind, figured it out. 
The Lambda had changed name and needed to be rebound from API Gateway.#2018-12-1321:24ro6is it possible to just deploy the separate Compute and Storage CloudFormation stacks from the start, rather than starting with the "unified" template from AWS Marketplace? When we switch to the production topology I'd like to start off that way unless there's strong reasons not to.#2018-12-1322:29stuarthallowayYes, that is fine!#2018-12-1322:30stuarthallowayVideo from the Day of Datomic at Strange Loop is available at https://www.youtube.com/watch?v=yWdfhQ4_Yfw&list=PLZdCLR02grLoMy4TXE4DZYIuxs3Q9uc4i#2018-12-1420:41ro6I'm trying to write a transaction function that links two entities together via a :ref, but one or both of the entities may not already exist in the db at the time this runs. I was hoping the upsert behavior would allow me to ignore whether they exist or not, but I'm getting java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity:
I'm using a lookup ref on an identity/unique attribute in the entity and value positions.#2018-12-1420:45ro6if I can't abstract over entity existence, what's the idiomatic way to check if the target entity existed from the result of a pull expression?#2018-12-1421:04tomjackyou just can't use lookup refs#2018-12-1421:04tomjackbut you can do e.g. [[:db/add "e1" :ident1 "id1"] [:db/add "e2" :ident2 "id2"] [:db/add "e1" :ref "e2"]]#2018-12-1421:06ro6so basically those first two are just setting up temp ids?#2018-12-1421:06tomjackor even [{:ident1 "id3" :ref {:ident2 "id4"}}]?? 🙂#2018-12-1423:20ro6is that :ref thing really valid syntax?#2018-12-1716:19matthavenerO#2018-12-1716:19matthavenerI’m just using the schema from tomjack. :ref is a :db.type/ref attribute#2018-12-1421:07tomjackright. in the old days we would've had to pick a partition there for each entity#2018-12-1421:07ro6sorry, the :ident1 stuff is throwing me a bit#2018-12-1421:07tomjack[{:db/ident :ident1
:db/cardinality :db.cardinality/one
:db/valueType :db.type/string
:db/unique :db.unique/identity}
{:db/ident :ident2
:db/cardinality :db.cardinality/one
:db/valueType :db.type/string
:db/unique :db.unique/identity}
{:db/ident :ref
:db/cardinality :db.cardinality/one
:db/valueType :db.type/ref}]
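The two forms tomjack describes, written out as tx-data (a sketch; the idents follow the illustrative :ident1/:ident2/:ref schema above):

```clojure
;; Tuple form: string tempids ("e1", "e2") name entities within the tx,
;; so the ref can point at an entity asserted in the same transaction
(def tx-with-tempids
  [[:db/add "e1" :ident1 "id1"]
   [:db/add "e2" :ident2 "id2"]
   [:db/add "e1" :ref "e2"]])

;; Nested-map form: because :ident1 and :ident2 are :db.unique/identity,
;; each map upserts onto an existing entity or creates a new one
(def tx-with-maps
  [{:ident1 "id3"
    :ref {:ident2 "id4"}}])
```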
#2018-12-1421:07ro6those would be the identity/unique val?#2018-12-1421:07ro6got it#2018-12-1421:09ro6that's interesting. I wonder if there's a technical reason why lookup refs can't work in this case.#2018-12-1421:12ro6I wanted to do it that way so the tx fn could accept anything valid to identify an entity, so I guess lookup ref or entity id at this point, and pass it through to Datomic without introspecting to figure out which type it was#2018-12-1421:12tomjackgiven the existence of the "default partition" (and lack of partitions in cloud) it would seem reasonable to me#2018-12-1421:13ro6maybe just a "not yet implemented" on that one#2018-12-1421:16matthavenerro6: I’ve always wished idents would eval to temp ids as well as entity ids, would be really cool#2018-12-1421:17matthavenerthere’s a case where the semantics might be confusing though (imo)#2018-12-1421:17ro6what's that? I haven't thought deeply about it#2018-12-1421:18matthavener[[:db/retract [:ident1 "thing"] :ident1 "thing"]
{:thing1 "thing" .. other stuff ..}
{:thing1 "other" :ref [:ident1 "thing"]}]#2018-12-1421:19matthavenercurrently, that ref would always point to the “old” thing#2018-12-1421:59ro6hm, I'll have to stare at that later...#2018-12-1423:37mikethompsonWhat do I miss out on if I use cloud Datomic? I'm aware of:
1. No excision
2. No backup/restore
3. Lambda cold starts (> 10 seconds)
What else could be a pain point?
I'm not trying to be negative. Everything is a tradeoff. I just want to go in with my eyes open.#2018-12-1423:38ro6Nothing in the Peer library that isn't also in the Client lib, so the entity api and multi-database queries#2018-12-1423:40ro6The cold start thing is surmountable. In the early stages you can just hit the endpoint periodically to keep it warm (I actually have an endpoint called /keep-warm), and I think they are working on a direct link from API Gateway to the EC2 instances that would bypass Lambda, and make $ sense as you scale.#2018-12-1423:40lilactownI really like the lambda idea, but I think a lot of the slowdown is because the lambda is inside the VPC. could be wrong#2018-12-1523:14steveb8nI’ve tested this and you can see the ENI init is approx 8 secs, the other 8 is the lambda. I consistently see 16 sec cold starts.#2018-12-1523:15steveb8nafter the first one, other lambdas start in approx 8 secs i.e. ENI is already in place#2018-12-1523:16steveb8nif Ions stop using Lambda we will still have ENI cold start but that will be once and, with any active app, that will be almost never#2018-12-1523:16lilactownhmm yeah. 8s still sounds rather slow#2018-12-1523:17lilactownpresumably that’s just by how Lambda, the Ultimate is implemented and could be optimized a lot further#2018-12-1523:20lilactownit seems like the lambda code itself would be the simplest part to optimize since it’s just a data transform layer depending on the type of event.#2018-12-1523:21lilactownI wonder if the new “layers” stuff announced at re:Invent would help. my completely naive hypothesis is that a lot of the cold-start time is loading the dependencies necessary for speaking to the Ion code in the right format#2018-12-1523:22lilactownI’m surprised it’s that long outside of the ENI init, though#2018-12-1523:53steveb8nagreed. 
I have some cljs lambdas not in a VPC and they cold start in 3 secs#2018-12-1523:53steveb8nI suspect the 8 secs is due to my Ion deps#2018-12-1523:54lilactownwell the Ion deps shouldn’t affect cold start time#2018-12-1523:54steveb8nyou are right, they are not in the Lambda. my mistake#2018-12-1523:55steveb8ncan’t wait for that direct access architecture. will be a big upgrade for all of us#2018-12-1423:41ro6true, I've read that#2018-12-1423:41ro6takes it from 2 sec to 15 or so maybe#2018-12-1423:56lilactownI guess we can cross our fingers that AWS helps ameliorate the cold start time in a VPC. the security model of datomic makes it hard to do anything else#2018-12-1500:02mikethompsonSolutions are good! But I'm kinda hoping to first come up with the best list of potential pain points.#2018-12-1613:06claudiuhi 🙂 trying to get datomic cloud all set up but it keeps failing 😞 anybody else had problems with the CloudFormation setup ?#2018-12-1613:06ro6@claudiu No, do you have more specifics about the failure?#2018-12-1613:06claudiu15:06:01 UTC+0200 ROLLBACK_COMPLETE AWS::CloudFormation::Stack myapp
15:06:00 UTC+0200 DELETE_COMPLETE AWS::CloudFormation::Stack StorageF7F305E7
15:05:26 UTC+0200 DELETE_IN_PROGRESS AWS::CloudFormation::Stack StorageF7F305E7
15:04:58 UTC+0200 ROLLBACK_IN_PROGRESS AWS::CloudFormation::Stack myapp The following resource(s) failed to create: [StorageF7F305E7]. . Rollback requested by user.
15:04:58 UTC+0200 CREATE_FAILED AWS::CloudFormation::Stack StorageF7F305E7 Embedded stack arn:aws:cloudformation:us-east-1:772499141725:stack/myapp-StorageF7F305E7-GKBTUNLZOV60/0aa62010-0133-11e9-b872-1273bfab49fc was not successfully created: The following resource(s) failed to create: [DhcpOptions, EnsureEc2Vpc].
15:03:56 UTC+0200 CREATE_IN_PROGRESS AWS::CloudFormation::Stack StorageF7F305E7 Resource creation Initiated
15:03:53 UTC+0200 CREATE_IN_PROGRESS AWS::CloudFormation::Stack StorageF7F305E7
15:03:49 UTC+0200 CREATE_IN_PROGRESS AWS::CloudFormation::Stack myapp User Initiated
#2018-12-1613:33claudiuworked for US East (Ohio), will try again in other regions#2018-12-1613:08claudiuSeems to fail at storage. A bit new to aws (mostly google cloud till now), trying to figure it out 🙂#2018-12-1616:07timeyyyI'm using the latest datomic cloud. I'm trying to do the getting started tutorial (https://docs.datomic.com/cloud/getting-started/connecting.html). I have a connection to the bastion server running. When I run (d/create-database client {:db-name "movies"}), I get ExceptionInfo Datomic Client Exception clojure.core/ex-info (core.clj:4739). Repl started with clj -A:dev. Fairly inexperienced with clojure, not sure how to debug further.#2018-12-1616:56dustingetzWhen to use dash in attribute names, and when to use dot or camel case?
:dustingetz.gist/src-clojure
:dustingetz.gist/src.clojure
:db/valueType
All camel case - weird!#2018-12-1617:02alexmillerNever use dot in the name, only in namespace#2018-12-1617:04alexmillerhttps://clojure.org/reference/reader implies this is invalid#2018-12-1617:04alexmillerBut not directly stated#2018-12-1617:04alexmillerWould overlap with class symbols#2018-12-1616:56dustingetz#2018-12-1616:58rapskalian☝️ I’ve wondered this myself...#2018-12-1616:59rapskalianIn code I control, I just 100% always use dots in the namespace portion, and kebab case in the keyword portion. #2018-12-1617:01rapskalianDatomic’s schema keywords are one of the few anomalies to that rule...#2018-12-1617:01dustingetzyeah, i adopted the same style as you. It could be that Datomic was designed to make java interop tolerable#2018-12-1617:01rapskalianAh that would make sense #2018-12-1617:03rapskalianEven JS interop is very camel case heavy...”the web” seems to favor it, so maybe that’s also part of it #2018-12-1617:05alexmillerUsually we use camel only when dealing with java interop#2018-12-1617:05alexmillerDatomic was originally designed to be Java friendly so that may be why, but just guessing #2018-12-1618:49rapskalianI have a tricky query that I'm struggling to express using my limited experience with datalog. I have a domain (video game) in which matches reference many rosters which reference many players. I'm trying to formulate a query that would find all rosters containing ALL of the given players. In other words, find all rosters in which players [p1 p2 p3 ... pN] all participated in together. Right now my query is:
:find ?r
:in $ [?name ...]
:where
[?p :player/name ?name]
[?r :roster/players ?p]
but this is pulling all rosters that contain any of the passed in players' names. How do I achieve the all semantics that I'm after?#2018-12-1618:55tomjackI am sorry to ask, how would you want the query to actually work? pull down player list for all rosters and find supersets of given player set? pull down roster list for each player and take set intersection of them all?#2018-12-1619:00rapskalian@tomjack no need to apologize, I'm still very new to datalog. I think maybe what you're suggesting is that I'm placing too large of an expectation on the query itself, and instead I should just pull down all rosters and then filter on those that are supersets of names?#2018-12-1619:00tomjackwell, depending on the sizes of various sets, different ways to implement this will perform better#2018-12-1619:01tomjackif (say) you have a few players to search in your query, many more rosters, many more rosters than rosters containing all of the query players... then the second impl I suggested (intersection of roster set for query players) seems reasonable#2018-12-1619:02tomjackand, you don't need to try to stuff everything in datalog like SQL, yes 🙂. so maybe :find ?p (distinct ?r) and then clojure.set/intersection is a reasonable approach?#2018-12-1619:03tomjackmaybe if you are a client you might want to stuff it in, it should be doable too...#2018-12-1619:05rapskalian>it should be doable
I'm very interested in an example of how it might be achieved completely inside of the query, purely as an excuse to expand my datalog vocab#2018-12-1619:07rapskalianDoes it require the ability to bind a ?var to an entire "unification set" (I'm not sure what to call this)? Right now ?p above is only bound to one player at a time, but I think a query of this nature requires the ability to bind ?pset, if that makes sense...#2018-12-1619:08rapskalianSomething like (pseudo-code):
:where
[(into #{} ?p) ?pset]
#2018-12-1619:10tomjackI mean worst case you can call q again inside the q (or use a custom query function) https://forum.datomic.com/t/subquery-examples/345#2018-12-1619:11tomjacksomeone asked this kind of question before and I think got some answers, don't remember where...#2018-12-1619:19rapskalian>so maybe :find ?p (distinct ?r) and then clojure.set/intersection is a reasonable approach?
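tomjack's two-step suggestion (one query collecting the roster set per player, then an intersection on the peer) can be sketched in plain Clojure; the roster data here is made up for illustration:

```clojure
(require '[clojure.set :as set])

;; Hypothetical result of a :find ?name (distinct ?r) query:
;; for each queried player, the set of rosters they appear on
(def rosters-by-player
  {"p1" #{1 2 3}
   "p2" #{2 3}
   "p3" #{3 4}})

;; Rosters containing ALL of the given players = set intersection
(apply set/intersection (vals rosters-by-player))
;; => #{3}
```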
@tomjack I think this is really close...but would indeed require two queries (not a big deal, but I'm still curious if datalog can express this idea in one)#2018-12-1619:22tomjackyou want like a 'pure datalog' solution? when you have custom db fns in query, you can express anything 🙂#2018-12-1619:30tomjackalso, it's not two queries. one query, then you do the intersection on the peer/client#2018-12-1619:50dustingetzCalvin, you want to call set logic from clojure.core inside the query, search the datomic forum#2018-12-1619:51dustingetz@cjsauer https://forum.datomic.com/t/exact-unordered-match-of-multi-valued-string-field/365/2#2018-12-1619:53rapskalian@dustingetz aha! Thank you for the link. It was this line that I was missing:
[(datomic.api/entity $ ?e) ?fat]
I was blanking on how any query function could be useful without being able to operate on the entity/set itself.#2018-12-1619:54dustingetzyea, Datomic is pretty different, having access to all of clojure.core inside our queries is very sideways from what we are used to#2018-12-1619:54dustingetzits the clojure.core/= on a typed set#2018-12-1619:55rapskalianIt certainly takes some getting used to, but having a language's core library available inside of queries is a super power...#2018-12-1619:55rapskalianOne question though, d/entity is not available in Cloud, correct?#2018-12-1619:56dustingetzYeah you'll have to get the set in some other way#2018-12-1619:56rapskalianIs d/pull available for use inside the query?#2018-12-1619:56dustingetzyou can call clojure.core/set and d/pull#2018-12-1619:56dustingetzI mean this is begging to be an Ion, but you'll be able to make it work with datalog#2018-12-1619:57dustingetzSee also http://www.dustingetz.com/:datomic-ion-launch-day-questions/#2018-12-1619:58rapskalianAwesome, thank you (and thank you @tomjack as well). Ions were going to be my go-to deployment strategy for this, so I'm glad to hear it 🙂#2018-12-1700:56steveb8njust spotted this on the AWS channel https://www.nuweba.com/AWS-Lambda-in-a-VPC-will-soon-be-faster#2018-12-1708:03val_waeselynckAre reversed attributes (e.g :order/_customer) not supported in Datomic Cloud transactions?#2018-12-1708:16val_waeselynckMore precisely, it seems they're not supported in a to-many form, e.g:
[{:customer/id "cust-id-1"
:order/_customer
[{:order/id "order-id-1"}
{:order/id "order-id-2"}]}]
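If the reverse-attribute form is rejected, the same relationship can be written in the forward direction (a sketch; :order/customer is assumed here to be the forward counterpart of :order/_customer, and :customer/id and :order/id are assumed to be :db.unique/identity):

```clojure
;; Forward direction: each order points at its (upserted) customer.
;; :order/customer is an assumed attribute name, not from the thread.
(def forward-tx
  [{:customer/id "cust-id-1"}
   {:order/id "order-id-1"
    :order/customer {:customer/id "cust-id-1"}}
   {:order/id "order-id-2"
    :order/customer {:customer/id "cust-id-1"}}])
```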
#2018-12-1717:39souenzzois it supported in datomic peer?
I think that I also had this "problem" in datomic peer.#2018-12-1718:04val_waeselynckI haven't checked recently, but I believe it is. It could be that I'm getting this impression from DataScript.#2018-12-1714:03Oleh K.Please help
when I try this on some data (not all data produces the error) I get a StackOverflow error despite the fact that there are only 11 entities in total in the cardinality-many field:
(d/q '[:find [(pull ?e [:db/id
:user/name
:user/description
:user/account
:user/creator
:user/images ;;cardinality many entity
:user/subscriptions
]) ...]
:in $ ?creator
:where [?e :user/creator ?creator]]
(d/db conn) (Long. (:creator args)))
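One thing worth trying for a pull like this: put the ref attributes in map specs so the pull names exactly what to fetch for each ref, rather than listing them bare (a sketch of the pattern only; :image/url is a hypothetical attribute standing in for whatever :user/images entities actually carry):

```clojure
;; Map specs bound what is pulled for each ref attribute,
;; so entities like :image/creator back-refs are never expanded
(def user-pull-pattern
  '[:db/id
    :user/name
    :user/description
    :user/account
    :user/creator
    {:user/images [:db/id :image/url]} ; :image/url is hypothetical
    :user/subscriptions])
```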
#2018-12-1715:14val_waeselynck@U5JRN2PU0 could it be that there's a cycle in your data graph?#2018-12-1715:15val_waeselynckah wait that doesn't make sense, there's no recursion here#2018-12-1715:21Oleh K.:user/images entities also have :image/account and :image/creator and these are the same entities as :user/creator and :user/account#2018-12-1715:22Oleh K.accordingly#2018-12-1715:29val_waeselynckit's weird that your ref-typed attributes are not in maps, I didn't know that was allowed#2018-12-1714:05Oleh K.if I do like this all is ok:
(let [in (d/entity (d/db conn) (:db/id x))]
{:id (:db/id in)
:name (:user/name in)
:account (:user/account in)
:creator (:user/creator in)
:images (:user/images in)
:description (:user/description in)
:subscriptions (:user/subscriptions in)})
#2018-12-1714:08Oleh K.The error occurs only if the field :user/images is present in the pull vector, (cardinality many field)#2018-12-1714:08Oleh K.And there are only 7 main entities in my database and only 11 :user/images#2018-12-1714:18eproziumthe wikipedia page of Datomic seems quite empty. Any chance having an update?#2018-12-1717:07Joe LaneFor only $1 a day! 🚀#2018-12-1717:13eraserhdright?? That's actually super cheap. Last time I looked (quite a while back), you couldn't get an EC2 instance that runs Java for that.#2018-12-1717:32ro6has anyone come up with a clever way to test tx fns locally without having to rewrite the syntax of the :tx-data back to a local fn invoke (and explicitly pass the db again)? I'm using Cloud and Ions.#2018-12-1718:29ro6When I throw an error from inside a transaction fn with eg ex-info, is there a way for me to get the ex-data back as data from the Datomic Cloud Client? All I'm getting right now is:
{....
"cognitect.anomalies/category": "cognitect.anomalies/fault",
"cognitect.anomalies/message": "My exception message",
....}
none of the data I associated with the exception from the transactor.#2018-12-1718:52ro6Do I need to serialize all the error data as a string and pass through the message?#2018-12-1719:40Joe Lane@robert.mather.rmm Looks like you’ve got 2 questions. The first is around testing ion tx functions and the second is around exceptions. Have you attempted returning your own anomalies map? I have no idea if returning your own anomalies map is best practice/a good idea/even possible. Can you provide a sketch of what you’re looking for with problem 1?#2018-12-1816:02ro61) When developing locally, I write something like: (d/transact conn {:tx-data (my.tx/fn db {:some "args"})})
and when I want to switch to running it remotely on the Cloud transactor, I change that syntax to: (d/transact conn {:tx-data [['my.tx/fn #_db {:some "args"}]]})
That's not a big deal, but it seems like something pretty natural to abstract over. Just wondering if anyone else had approached that.#2018-12-1816:28Joe LaneLooking on https://docs.datomic.com/cloud/transactions/transaction-functions.html it looks very different from what you showed above.#2018-12-1816:29Joe LaneDo you need to transact the data when testing? Are these integration tests?#2018-12-1816:34mgrbyteto answer the original question:
1) is executing locally, not within the transaction, whereas 2) is within the transaction.
Think you should use 2) when developing locally as well (shouldn't be any difference between local and remote). my.tx/fn just needs to be on the transactor classpath.#2018-12-1816:35ro6You mean the last section on that page? It looks different because all their args are literals and sometimes I have bound values or fns where the semantic I want is "evaluate this stuff to values locally, then send to the transactor". When you use the '(my.tx/fn locally-bound-name) syntax, locally-bound-name is sent as a literal rather than getting evaled, right?#2018-12-1816:40ro6@U08715BSS I develop locally using Datomic Cloud Client against an in-memory db using https://github.com/ComputeSoftware/datomic-client-memdb, so when developing locally I get :db.error/not-a-data-function Unable to resolve data function when using the syntax in (2).#2018-12-1816:43Joe Laneuntested, but does `(my.tx/fn ~locallybound-name)
work?#2018-12-1816:53ro6It does, just tested. I think that and the way I do it evaluate similarly before being sent. I do think your syntax is a bit more intuitive/explicit about what's going on, but semantically they both work.#2018-12-1816:54Joe LaneCool, well I hope this was helpful Robert#2018-12-1816:58ro6I appreciate the thoughts for sure. The thread of this whole thing for me is about seamlessly switching between local and transactor while developing at the REPL, and for that, my second question is actually the one that bugs me more.
Throwing data-rich errors, pulling contextual info from the site where they occur (as data) is common practice for me. If all I'm able to get from the transactor when throwing an error is a single message string, I'm going to end up
1) Using EDN or Transit to put everything I need in that string
2) Just returning an error code and the db tx info, so the caller can construct a good error to propagate by running its own queries.
Either approach requires me to treat error handling differently when running tx fns locally vs on the transactor, which seems to me like pointless mental overhead. I was surprised with all the focus on data, data, data that when I threw an ex-info with data from the transactor, it dropped my data and returned a string.#2018-12-1817:03ro6I realize that not pretending a remote call is a local call is also a thing in Clojureville, but I'm not sure if the specific reasons for that apply in this case.#2018-12-1817:07Joe LaneSo, to be clear, you’re intentionally throwing the exception in your tx function, right?#2018-12-1817:09ro6Yes#2018-12-1817:13ro6It's an explicit statement like (if some-condition
(throw (ex-info "Can't complete tx because...."
{:type ::an-error-code
:context {:some "relevant"
:info "here"}})))#2018-12-1817:13ro6ha, layout failed there. Guess I should use the code thing in Slack#2018-12-1817:14ro6oh, it doesn't seem to be available in a thread like this#2018-12-1817:16ro6@U0CJ19XAM Does your Ions usage not involve stuff like this? Maybe you end up using CAS more, or just don't have as many operations that need atomicity?#2018-12-1817:18Joe LaneI’ve been questioning how much I should or shouldn’t be using tx-fn’s for atomicity. I don’t have many hard requirements for atomic tx-fn’s.#2018-12-1817:20ro6That makes sense. My app has many critical operations that need transactions for correctness. Are you questioning due to write throughput concerns or just generally?#2018-12-1817:20Joe LaneHaha, no I am more questioning if I should use them more 😉#2018-12-1817:22Joe LaneA test project i’m going to spin up over the holidays will be designed where the primary group only has an application of exclusively tx-fns, then all reads will come off a query group. I want to see how building an application like that feels.#2018-12-1817:22Joe Lane(last example of the “Planning your system” page on the operations seconds of the docs)#2018-12-1817:26ro6Ah! Minus the stuff we've been talking about here, I'd say an enthusiastic "yes" to tx fns. It's a killer capability relative to any other system I'm aware of.#2018-12-1817:27ro6"My database is an atom, and I can call swap! on it" is such a clean mental model.#2018-12-1817:28Joe LaneAre you using namespaced keywords in your exception?#2018-12-1817:29ro6not exclusively, and not for the keys#2018-12-1817:29Joe LaneFrom https://github.com/cognitect-labs/anomalies
“Extensible: As maps are open, applications can add their own context via namespaced keywords. That said, try to do as much as possible by dispatching only via ::anom/category.” (emphasis mine)#2018-12-1817:31ro6interesting, good thought. Maybe I need to cozy up to cognitect.anomalies to get my data through. I'll play with that#2018-12-1817:33Joe LaneAre you using the Synchronous api? and Do you have a centralized call to transact or are they dispersed through your application? If they are dispersed then the calling code KNOWS the context of the possible errors right? The generic anomalies errors could be enough and then the calling code could enrich / interpret the error and provide meaning.#2018-12-1817:39ro6Synchronous and dispersed. What you're saying is what I meant by option (2) above. I think the default error includes db ref info so I could grab whatever I need from the same snapshot the tx fn saw and combine that with local info to deduce whatever I need, but the throw from within the tx fn still feels like the best place to do it from a coding perspective.#2018-12-1817:41ro6and again, it's different when developing local vs on-transactor#2018-12-1812:44m_m_mHi. I have an idea to hold marketplace data in datomic. Each item is quite unique and can be resold a lot of times (it sounds like a perfect fit for datomic). In the future it will be fully open source. Would datomic be a good choice? and is it possible to use the free version for that? I have to save all data on disc (is the free version memory-only?)#2018-12-1812:58claudiu@U4PCP37B8 Have you looked into datomic cloud on aws marketplace ? Experimenting with the solo version now, costs like 1$/day, pretty happy with it + ions so far#2018-12-1812:59m_m_mto be honest...I hate AWS...and if it is possible at the beginning I would like to use something "not on the cloud" 😄#2018-12-1813:02claudiumakes sense. Also not a fan of aws...
until now only used google appengine 🙂 The only reason I gave it a try is ions since it configures everything and seems like little overhead for me.#2018-12-1813:04rhansen@U4PCP37B8 free version does store to disc.#2018-12-1813:05m_m_m@U0H2CPW6B what is the main difference between free and pro version ?#2018-12-1813:06rhansenFree version is limited to 2 simultaneous peers (pro is unlimited) and, I think, can't target all types of storage (uses H2 as storage backend I believe, which is the Java equivalent of SQLite)#2018-12-1813:52m_m_m@U0H2CPW6B you mean two simultaneous readers ?#2018-12-1814:01rhansenPeers. At most two instances can be connected to your datomic instance.#2018-12-1814:11claudiuHi 🙂 datomic cloud has solo/production, are there any pitfalls if I change the instance types ? Ex: production config but with t2.small#2018-12-1814:12marshall@claudiu You can’t run production with smaller instances#2018-12-1814:12marshallthe architecture is designed around certain aspects of the instances#2018-12-1814:12marshallfor instance, Valcache on the local NVMe SSDs#2018-12-1814:13claudiuIs there any solution to have solo instances but with a load balancer ? 🙂 (no downtime on deploy and to work without aws lambdas when the feature is released)#2018-12-1814:13marshallno, HA / no-downtime-deploy is a feature of production#2018-12-1814:23claudiuOK, so it's solo or production, no in between? Solo should be just fine, but still need to find out what's the SLA for the project.#2018-12-1814:54igrishaeva dull question: does Datomic Free have any limitations when working with in-memory databases? I’ve got a special case when I need a memory-only database. So would it be enough to link just Datomic Free but not the paid version?#2018-12-1815:54ro6What do you mean "any limitations"? You get the full programming and data model.
If you want to develop against the Client API (rather than Peer) so your code is compatible with Datomic Cloud, https://github.com/ComputeSoftware/datomic-client-memdb works.#2018-12-1815:55igrishaevI meant, any connection limitations. I know that Free might have two simultaneous connections at once.#2018-12-1815:56igrishaevBut I believe it isn’t related to in-memory databases, is it?#2018-12-1817:35fmjreyThe link to https://github.com/ComputeSoftware/datomic-client-memdb was new to me, thanks. Is there something similar that can abstract differences from datascript and datomic?#2018-12-1817:37Joe LaneWhat are those differences?#2018-12-1817:42fmjreyDifferent deps#2018-12-1817:42fmjreyat the very least#2018-12-1817:56igrishaevCan I pass a query result into another query as a data source?#2018-12-1818:01eraserhdigrishaev: Yes. (In fact, I've passed all sorts of neat things as a data source.)#2018-12-1820:43vnctaingHi, I’m new to datomic. Given a situation where “I have store called Auchan”, it gets renamed “Carrefour Market”. Is there a way in datomic to express “find me any store that is called Auchan or has been called Auchan at any point of time” ? Does my question make sense ?#2018-12-1820:50shaun-mahoodhttps://docs.datomic.com/cloud/tutorial/history.html#sec-2 seems like it should have what you want#2018-12-1820:50Joe LaneAhh, beat me to it shaun!#2018-12-1909:23igrishaevDoes anyone know how to resolve refs in nested maps when transacting data?
Say I have a vector of maps:
{:account/id 42
:account/likes [{:like/id [:account/id 5166]}]}
But when I load the whole dataset, Datomic says
Unable to resolve entity: [:account/id 5166] in datom [-9223301668109571139 :like/id [:account/id 5166]]
Yet the account 5166 really exists in the vector.
What should I do?#2018-12-1909:33kirill.salykinit's supposed to be :db/id, isn't it?#2018-12-1909:33tomjackon-prem docs have the following note which I think is relevant#2018-12-1909:33tomjack> When used in a transaction, the lookup ref is evaluated against the specified attribute's index as it exists before the transaction is processed, so you cannot use a lookup ref to lookup an entity being defined in the same transaction.#2018-12-1909:33tomjackhttps://docs.datomic.com/on-prem/identity.html#2018-12-1909:34igrishaevwhat would be the best way to load nested likes then?#2018-12-1909:34kirill.salykindo you use pull api?#2018-12-1909:35kirill.salykinare they referenced not by :db/id?#2018-12-1909:35igrishaevNo, I’m talking about data creation#2018-12-1909:36tomjackperhaps you can get away with using {:account/id 5166} instead of [:account/id 5166]#2018-12-1909:37igrishaev@tomjack I’ll try, but what exactly does it mean?#2018-12-1909:38igrishaevis it another form of ref lookup?#2018-12-1909:38tomjackno, it is just another nested map, "an entity with account/id 5166"#2018-12-1909:38igrishaevor is that a temp id?#2018-12-1909:39tomjackif account/id has db.unique/identity then the upsert behavior will happen#2018-12-1909:39tomjackno temp ids in sight, but you could use them#2018-12-1909:41igrishaevwait, that’s something different. I need to lookup a ref inside a nested map#2018-12-1909:41igrishaevan account has a list of likes, where each like references another account#2018-12-1909:42igrishaevso how can I reference a user who has been liked? I have all the IDs I need#2018-12-1909:43kirill.salykinare you sure the id is correct?#2018-12-1909:43kirill.salykincan you pull data with this lookup?#2018-12-1909:44igrishaevyes. for example, I don’t load likes at all, but then I can pull a user with such an id#2018-12-1909:44tomjackit's not different.
you don't need to lookup a ref#2018-12-1909:44tomjacktry it 🙂#2018-12-1909:45tomjackI'm assuming :account/id is :db.unique/identity, though#2018-12-1909:45kirill.salykinotherwise you cant lookup, right?#2018-12-1909:45igrishaev@tomjack oh my god that works!#2018-12-1909:45tomjacklookup refs made more sense (in tx data) when tempids could not be omitted#2018-12-1909:46igrishaevnow I’ve got the idea, thank you!#2018-12-1909:46kirill.salykininteresting, thanks for that case 🙂#2018-12-1909:46kirill.salykindidnt know datomic can do like this#2018-12-1909:47igrishaevas I see it, the idea is to create nested accounts of the fly and then update them#2018-12-1909:48tomjacklookup refs are also useful for db.unique/value, I guess#2018-12-1909:48tomjack(and can be handy generally and outside of tx data..)#2018-12-1912:59avfonarevI'm getting a ssh: connect to host [ip] port 22: Connection refused on a fresh Datomic Cloud stack. Other ssh connections work. Is there a way to trouble shoot?#2018-12-1913:10avfonarevOk, that was a problem of parallelism. I was being clever and set up the permissions as soon as they appeared in the AWS Console, but before the stack formation completed... Restarting the bastion server helped...#2018-12-1916:48markbastianI'm trying to understand the estimated costs of Datomic Cloud using the calculator at https://aws.amazon.com/marketplace/pp/prodview-otb76awcrb7aa?ref=vdr_rf. The "Datomic Cloud" choice is kind of confusing. If I choose a particular EC2 type (e.g. i3.xlarge) it has a Software/hr and an EC2/hr price along with a Total/hr price (e.g. $0.312, $0.312, $0.624). This selection is then reflected in the "Estimated Infrastructure Cost," but only the "1/2 value" is used (e.g. $0.312). What is the actual number I should use to estimate my costs for a month of usage with a particular config?
To keep it simple, let's say I choose i3.xlarge (costs shown above) and have one instance running 24/7 for a 30 day period. Would I expect to pay 24 x 30 x $0.312 = $224.64 or twice that? We can ignore data storage for the moment. It looks like that is just $0.10/GB-month.#2018-12-1917:26johnjTwice that, since production requires at least two nodes, so $224 x 2 for the i3.large, for the i3.xlarge would be $449 x 2#2018-12-1917:26johnjdon't forget storage and data transfer costs and other aws services datomic cloud uses 😉#2018-12-1917:35markbastianThe two nodes aren't baked into the config? Is the second for HA?#2018-12-1917:35johnjbut I don't think you can start with i3.xlarge though, these instances are only enabled as additions to the production i3.large ones#2018-12-1917:36johnjyes, for HA but both do work at the same time, it leverages both nodes.#2018-12-1917:37johnjthe price you see in aws is per node, production requires two#2018-12-1917:05shaun-mahood@markbastian The estimate per hour for the solo topology comes out the same as production if you set them both to the same instance type, so I would expect to estimate twice what the tool shows for the production topology. I think it's a limitation in the estimation tool - hopefully someone who knows for sure will give you a more certain answer.#2018-12-1917:13markbastianYeah, the choice of solo vs. prod vs. anything else in the "Fulfillment Option" dropdown doesn't seem to have any effect on the details. I think it's what you choose. Even then, though, there isn't any "bottom line" number. The "Datomic Cloud" choice you make only reflects the Software/hr price and the "Estimated Infrastructure Cost" seems to reflect the EC2/hr price. I'm guessing the actual cost is the Total/hr, but I don't know if you get billed again for the Estimated Infrastructure Cost item.
If solo is $1/day the answer that agrees best with that would be $0.035/hr (Total/hr).#2018-12-1917:41johnja single DB uses only one node for transactions though#2018-12-1917:45johnjlet's hope on-premise doesn't get abandoned 😉#2018-12-1920:08donaldballIs it incorrect to assert :db/doc values on entities that have :db/ident values but are not actually attribute entities?#2018-12-1920:13favilano#2018-12-1920:13favilait's not incorrect to assert :db/doc on any entity#2018-12-1920:13tomjackit is so correct 🙂#2018-12-1920:13favilado it#2018-12-1920:14favilaI often add db/doc to transactions (like a commit message)#2018-12-1920:15benoitAs long as it respects the semantics of the attribute, which is "Documentation string for an entity".#2018-12-1921:35Joe LaneHey friends, I'm x-posting from the datomic forum. Looking for guidance on Cognito claims information in the ion apigw request object. Is it stable? Was this documented and I missed it? https://forum.datomic.com/t/datomic-ions-using-cognito-user-pool-apigateway-authorizer/748?u=jplane#2018-12-1923:05steveb8nI looked into this as well. I'm currently using AppSync which means I have an extra VTL transform layer so I can't help with API-GW specific knowledge#2018-12-1923:06steveb8nbut I did learn about the difference between the 2 tokens that Cognito generates#2018-12-1923:07steveb8nit's annoying because you need the access token to make your request but that token doesn't contain claims#2018-12-1923:07steveb8nnot sure if this helps but I thought I'd mention it since it's related#2018-12-1923:10Joe LaneI was going down the appsync route and backed out for lacinia#2018-12-1923:10Joe LaneThere were so many problems I ran into that didn't even throw errors, I just dropped it.#2018-12-1923:10Joe LaneThat was a few months ago.#2018-12-1923:11steveb8nI agree with that. 
I’m planning to move to Lacinia as well#2018-12-1923:12steveb8nIt’s been working well but not enough upside#2018-12-1923:14Joe Lanelacinia pedestal + pedestal.ions works really well.#2018-12-1923:14Joe Lane(so far anyways)#2018-12-1923:14steveb8ncool. I’m looking forward to it#2018-12-1923:14Joe Lanebut I’m much farther and moving faster than I was with appsync#2018-12-1923:15steveb8nI think AppSync is good if you want to use Amplify.js but otherwise it’s extra complexity for not much extra value#2018-12-1922:43johnjIs it recommended to give every entity an external unique identifier?#2018-12-1922:45Joe LaneI’ve been giving a UUID to all my graphql entities#2018-12-1922:45Joe LaneI think maybe it could be thought of as dont expose datomic db/id’s externally#2018-12-1922:47johnjOk, I'll think this a bit more#2018-12-1922:55johnjI guess your index strategy comes into play#2018-12-1923:00favilait's ok to share :db/id for short-lived uses (interactive, in-memory); it's ok to not give an external identifier to entities you don't want to be directly addressable from outside your system.#2018-12-1923:00favilaentities at the other end of an is-component attribute usually should not have an external id#2018-12-1923:08johnjexactly the case I was figuring it out for (entities that belong to others by means of :db.type/ref and isComponent, and not directly addressable), couldn't think of a use case for given them external keys, thx.#2018-12-1923:09steveb8nI have a UUID on every entity in my db. It is useful for a lot of reasons so I consider this a best practice now#2018-12-1923:10johnjdo you need AVET for those entities?#2018-12-1923:10steveb8nIn some cases I also derive (type 3) UUIDs from entities so that I can count on stable ids, even when not persisted#2018-12-2003:26dustingetzwhat? 
how do you do this#2018-12-1923:11johnjonly reason I can think of#2018-12-1923:11steveb8nI have not added any extra indexing but I’m also not prod/live yet#2018-12-2007:17lwhortonwhat is the right approach to handle as-of queries as the schema evolves? if you do it wrong, i imagine that adding a datom to an entity then updating a query to look for that datom is essentially breaking any functionality as-of might offer? the entity as of the time the new attribute was appended can no longer be queried via as-of since it doesn’t actually have that attribute at any point in the past, right? unless i’m mistaken and there’s an equivalent/corollary “join with an entity already known to the db” that since filters have to consider.#2018-12-2012:50val_waeselynck@U0W0JDY4C you may want to look at this: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2018-12-2017:08lwhortonhmm. oops.#2018-12-2017:09lwhortonso is there no way to join an entity (prev unavailable attrs with newly added attrs) or use something like get-else or just or?#2018-12-2017:16val_waeselynck@U0W0JDY4C not sure that answers your question, but you can put the actual db you want at the beginning of a Datalog clause [$past ?e :first-name ?past-first-name] [$current ?e :first-name ?present-first-name]#2018-12-2017:26lwhortoni suppose that could work — select with an as-of db the entity with the new value, and join on the entity with an as-of db where the attribute doesn’t yet exist#2018-12-2017:34val_waeselynckCool - but again, you're probably doing something you shouldn't attempt in the first place IMHO - your application code should not use as-of#2018-12-2017:36lwhortonyea, i think you’re right, im still trying to wrap my head around it though. 
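Val's pattern above of naming two data sources can be written out as a full query map. This is only a sketch: the :first-name attribute is an assumed example, and the two database values are supplied positionally to d/q so that the first binds $past and the second binds $current.

```clojure
;; Sketch: compare an attribute across two database values.
;; Intended call shape: (d/q two-db-query (d/as-of db t) db)
(def two-db-query
  '{:find  [?e ?past-first-name ?present-first-name]
    :in    [$past $current]
    :where [[$past ?e :first-name ?past-first-name]
            [$current ?e :first-name ?present-first-name]]})
```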
kind of a bummer because i thought i could implement a “novelty” report wrt revenue projections, etc.#2018-12-2009:51claudiuis there any way to distinguish in aws "cost explorer" between prod/solo stacks ?#2018-12-2012:42val_waeselynckI'm really struggling to get a batch import job to work reliably with Datomic Cloud.
I'm running it from an EC2 instance in the same VPC as my Cloud stack (prod, i3.xlarge), with a client arg-map looking like so:
{:server-type :cloud
:region "eu-central-1"
:system "linnaeus"
:endpoint ""
:timeout (* 1000 60 5)}
Connecting to the db (via d/connect) fails 3 out of 4 times with a Datomic Client Timeout error, and even when that works d/transact fails after a few transactions with a Service Unavailable error:#2018-12-2012:42val_waeselynckExponential backoff doesn't help, even after several minutes. The EC2 nodes of the Datomic stack exhibit a near-zero utilization in terms of CPU, memory and network, while the Cloudwatch metrics show a constant HttpEndpointAsyncTimeout of 1.0.
What's driving me crazy is that these failures seem so random. Everything works fine for a dozen txes, and then after virtually no load I get 100% failure.
What might be causing this?#2018-12-2013:33marshallVal, can you file a support ticket and we can help troubleshoot #2018-12-2013:35val_waeselynckSure#2018-12-2013:39val_waeselynck@U05120CBV here you go: https://forum.datomic.com/t/difficulties-connecting-to-production-system-for-batch-imports/750#2019-12-3112:34eoliphantany update on this? We’ve run into something similar but more sporadically#2018-12-2013:29dustingetz:where [?e :crm/tag ?tag] (not [(#{:lead} ?tag)]) results in error "Join variables should not be empty" {:error :parser/where, :form (not [(#{:lead} ?tag)])} but removing the not works fine, whats up#2018-12-2013:30val_waeselynckTo stop wondering about this stuff, I've personally decided to always use not-join over not 🙂#2018-12-2013:50dustingetzThanks, not-join made the problem super obvious#2018-12-2015:16dustingetzWhy is :db.fn/cas -> :db/cas but :db.cardinality/many is not :db/many ?#2018-12-2017:15lwhortonsemantically isn’t one a function and the other an ident?#2018-12-2017:25lwhortonnot really an answer, but maybe cardinalities were namespaced for clarity and functions remain top level for convenience?#2018-12-2017:26dustingetzit's either historical, or there is a deep reason i dont see#2018-12-2017:26dustingetzbut if something is going to be changed, why change :db.fn only#2018-12-2020:07igrishaevI’ve got an account model that has interests, which is an array of strings (db.type string with cardinality = many). Then another list of interests come from request. How can I query accounts what have ALL the specified interests?#2018-12-2020:09igrishaevIn query, Datomic compares values one-by-one so I’ll have accounts who have at least one interest from the list. 
But I need all the interests#2018-12-2021:03dustingetzinvoke clojure.core/= on a set instance #2018-12-2022:38eraserhd@igrishaev You need double negation: find an account such that there is not an interest on the request that is not on the account.#2018-12-2105:10igrishaevCould you give me a tip please? What I have is
'[:find ?id
:in $ ?a [?interest ...]
:where
[?a :account/id ?id]
(not-join [?a]
[?a :account/interests ?interest])]
but I got stuck with that. It returns accounts without the passed interests. How can I invert the logic?#2018-12-2022:51tomjackhah, does that perform well assuming the desired strategy is to scan through all accounts? I guess I can try it myself#2018-12-2023:02dustingetzAnyone who likes types want to help me understand what the heck is going on in here?
(defn normalize-result [qfind result]
; This function confuses me. I am trying to explode it to its essence.
; mapv is coupled to vector, how could this be generalized to reactive collections?
(let [vector vector
mapv mapv]
(when result ; unclear if nil result is server error
(condp = (type qfind)
FindRel result
FindColl ((partial mapv vector) result)
FindTuple ((partial mapv vector) result)
FindScalar ((comp vector vector) result)))))
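A possible untangling of the snippet above: the four find-spec types just decide how to coerce the raw result into a uniform vector of row tuples. Here is a plain sketch with keywords standing in for the FindRel/FindColl/FindTuple/FindScalar record types (note that the original's FindTuple branch wraps each element rather than the whole tuple, which may be part of the confusion):

```clojure
;; Normalize the four :find result shapes into a vector of row vectors:
;; :rel    [[a b] [c d]] -> unchanged
;; :coll   [a c]         -> [[a] [c]]
;; :tuple  [a b]         -> [[a b]]
;; :scalar a             -> [[a]]
(defn normalize-result [find-kind result]
  (when result
    (case find-kind
      :rel    (vec result)
      :coll   (mapv vector result)
      :tuple  [(vec result)]
      :scalar [[result]])))
```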
#2018-12-2104:46igrishaev@dustingetz how is that possible in Datomic I wonder?#2018-12-2104:47igrishaev@eraserhd that makes sense! the second time you give me a hand, thank you!#2018-12-2106:23kardanWhile reading up on Web Ions I see that they are supposed to return a ring map, does this mean that anything like server-sent events or websockets are not supported?#2018-12-2107:39rhansen@kardan I haven't tested it myself, but ions run behind aws gateway, which recently got support for websockets. So I think you can support websockets that way.#2018-12-2107:58kardanI was thinking about the Ion web API (https://docs.datomic.com/cloud/ions/ions-reference.html#sec-7). But I have to say that I have not chewed on Ions enough to understand how it all hangs together. But I got the impression that the Ion itself wanted a ring request / response workflow.#2018-12-2107:59kardanAnyway, I was just checking it's no show stopper for me before reading up more 🙂#2018-12-2108:20rhansenSo ions are simple clojure functions which are inserted into your datomic cluster.
AWS Gateway can send events to your ions as a ring request.
AWS Gateway now supports websockets, which means that it in theory can send a request on user connection, user message and user disconnect. It essentially converts websockets communication to a request/response thing. I would assume that you can give each user an ID and so do asynchronous message sends, but I haven't looked too closely at it yet.#2018-12-2108:35kardanI see, thanks for explaining. I will look a bit more 🙂#2018-12-2108:51rhansenSure, no problem :9#2018-12-2108:51rhansen:)=#2018-12-2108:51rhansen🙂#2018-12-2109:19claudiuHi 🙂 anybody know what's the role of "preload database" and what it adds on top of the deploy life-cycle? in the deploy reference (https://docs.datomic.com/cloud/ions/ions-reference.html#deploy-outputs) it says Ensure that active databases are present in memory before routing requests to newly updated nodes.#2018-12-2111:14igrishaevI still cannot solve my problem. Imagine I have a model: an account with a set of interests.
{:db/ident :account/id
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
{:db/ident :account/interests
:db/valueType :db.type/string
:db/cardinality :db.cardinality/many
:db/index true}
Now I need to query all the accounts that have ALL the interests I’ve got from request. What I tried (for a single account):
(d/q
'[:find ?id ?interest
:in $ ?a [?interest ...]
:where
[?a :account/id ?id]
(not [?a :account/interests ?interest])]
(d/db conn)
[:account/id 1]
["foo" "bar" "baz"])
or
(d/q
'[:find ?id ?interest
:in $ ?a [?interest ...]
:where
[?a :account/id ?id]
(not-join
[?a ?interest]
(not [?a :account/interests ?interest]))]
(d/db conn)
[:account/id 1]
["foo" "bar" "baz"])
It works partially yet I still have wrong data. How can I improve that?#2018-12-2111:50octahedrion@igrishaev - I have a similar problem: I have a thing with a :db.cardinality/many attribute & I need to query for all having two kinds of values e.g. this thing can have multiple colours and I want all the things having red and green colours#2018-12-2112:24benoit@igrishaev One way to do it would be to dynamically generate an and clause with all the interests you're looking for:
(and
[?a :account/interest ?i1]
[?a :account/interest ?i2]
...)
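Benoit's suggestion can be mechanized: since a Datalog query is just data, the :where vector can be built from the interest list (map form of the query; the helper name and the :account/* attribute names are illustrative, taken from the schema posted above).

```clojure
;; Build a map-form query requiring an account to have EVERY interest:
;; each interest contributes its own :where clause, and all clauses
;; must unify on the same ?a.
(defn all-interests-query [interests]
  {:find  '[?id]
   :in    '[$]
   :where (into '[[?a :account/id ?id]]
                (map (fn [i] ['?a :account/interests i]) interests))})
```

Calling (d/q (all-interests-query ["foo" "bar"]) db) would then return only accounts asserting both values.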
#2018-12-2113:39octahedrionwhat if :account/interest was another entity which you wanted to ensure there were 2 instances of which each had attributes ?i1 and ?i2#2018-12-2113:58benoitYou can add clauses with intermediate vars for those entities. If that gets too long you can abstract the set of clauses with a rule.#2018-12-2114:31octahedrionI tried that, but it didn't work#2018-12-2112:25igrishaev@me1740 hm, makes sense…#2018-12-2112:27igrishaeva friend of mine suggested me to pull entity and intersect sets#2018-12-2112:27igrishaev[(d/entity $ ?e) ?ent]
[(:interests ?ent) ?interests]
[(subset? #{"cars" "games"} ?interests)]
#2018-12-2112:30benoitYou're not taking advantage of your indexes with something like that. You have to pull every single account and do the subset? operation on it.#2018-12-2112:30igrishaevright, that would be too slow#2018-12-2112:30igrishaevthe idea of multiple conditions is much better#2018-12-2112:33anderswhich aws instance types are supported by datomic on-prem? ensure-cf fails for m5.large#2018-12-2113:56jaret@U0ESP0TS8 I believe all legal instance types are supported ( https://aws.amazon.com/ec2/instance-types/). I just tested by ensuring a my-cf.properties file with aws-instance-type=m5.large in us-east-1 and the ensure did not fail.
$ bin/datomic ensure-cf my-cf.properties my-cf.properties
{:success my-cf.properties}
#2018-12-2113:57jaretWhat error are you seeing?#2018-12-2113:58anders$ bin/datomic ensure-cf cf-template.properties cf-template.properties
java.lang.Exception: Key not found: m5.large
at datomic.common$getx.invokeStatic(common.clj:191)
at datomic.common$getx.invoke(common.clj:184)
at datomic.memory$aws_transactor_settings.invokeStatic(memory.clj:174)
at datomic.memory$aws_transactor_settings.invoke(memory.clj:170)
...
#2018-12-2113:58anderseu-central-1 region, datomic-pro-0.9.5786#2018-12-2114:04jarethmmm I am not seeing that at all. Would you mind logging a ticket to support or sharing your properties file with me? I’d like to try with your file.#2018-12-2114:05jaret<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> e-mail should generate a ticket that I can look at#2018-12-2114:11anderswill do#2018-12-2114:18andersjaret, have you tried against eu-central-1?#2018-12-2114:19jaretI tried with ensure on eu-central-1.#2018-12-2114:20jaretBut just ensure. I didn’t start the process from the beginning.#2018-12-2114:23jaretPlease let me know when you send the e-mail, I want to make sure I jump on this when it comes in. 🙂#2018-12-2113:21benoit@igrishaev In case you don't know about it. It's probably easier to dynamically generate queries using the map form. https://docs.datomic.com/on-prem/query.html#list-vs-map#2018-12-2114:25igrishaev@me1740 sure, I used to compose complicated queries, for example: https://github.com/igrishaev/highloadcup/blob/master/src/highloadcup/db.clj#L161#2018-12-2114:31dustingetz@igrishaev @me1740 Rules to test if a collection of players are a subset of the players
in a roster https://gist.github.com/dustingetz/66e493e87a99b9656e2cfe96bf6a51cc cc @cjsauer#2018-12-2114:31eraserhd@igrishaev here's what I meant (I haven't tested, but it should work): https://gist.github.com/eraserhd/c918cd1fa8cf06694b071c135e532125#2018-12-2114:33benoit@dustingetz yes, recursive rule should work too#2018-12-2114:34dustingetzI am collecting this stuff in https://www.reddit.com/r/datomic/ btw#2018-12-2114:34igrishaev@eraserhd thanks a lot, let me check it#2018-12-2114:39benoitI had a bad surprise with using clojure core functions on the transactor a while back. Never got an answer as why it fails https://forum.datomic.com/t/conj-in-a-rule-fails-on-the-transactor/413#2018-12-2114:40benoitI would be very curious to figure out why this fails. And since then I try to stay away from this kind of queries that build clojure data.#2018-12-2115:43dustingetzinteresting#2018-12-2115:43dustingetz"This only happens in the transactor and not with d/with.
"#2018-12-2117:53eraserhdI had an issue much like this. I thought Datomic was reordering clauses, which they tell me they don't do.#2018-12-2117:53eraserhdIn my case, a bound variable was nil and I got an NPE.#2018-12-2117:54eraserhdIt had two characteristics of yours: calling a function that takes $, followed by calling regular clojure code.#2018-12-2117:56eraserhdIn my case, IIRC (I've lost the commit hash), the function which takes $ checked a precondition and failed, but the clojure code following it seem to run before the precondition was checked.#2018-12-2117:56eraserhder, by "and failed", I mean like Prolog's cut, not like it actually failed.#2018-12-2117:48joshkhfriday pre-holiday brain melt.. can i test a transaction against a db using d/with-db? it returns a db, but d/transact requires a connection.#2018-12-2117:55tomjackso what? why do you want to put the square peg into the round hole?#2018-12-2118:05joshkhno, of course not. just explaining my disconnect. i recall someone testing transactions against a with-db'ed version of the database and then checking the results, but only in memory.#2018-12-2118:09tomjackyes, you can do that. I don't understand what the connection argument to d/transact has to do with it#2018-12-2118:09tomjackyou just use with-db#2018-12-2118:10tomjackahem, with#2018-12-2118:11joshkhthere's my disconnect. 🙂#2018-12-2118:12tomjackif you want to later transact the transaction and be sure to get the same thing... you have to do something about it#2018-12-2118:15joshkhyup. got it. forgot that with should replace transact, that's all. thanks!#2018-12-2308:38igrishaevSay I have an account model with cardinality-many attribute:
{:db/ident :account/id
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
{:db/ident :account/interests
:db/valueType :db.type/string
:db/index true
:db/cardinality :db.cardinality/many}
Now I’d like to update the account with new interests. I don’t want to merge them but replace old interests with new ones. Is there a quick and simple way to do that? I found a SO answer but the explanation looks a bit vague to me.
https://stackoverflow.com/questions/42786046/how-to-update-overwrite-a-ref-attribute-with-cardinality-many-in-datomic#2018-12-2309:46val_waeselynckYou can get inspiration from this https://github.com/vvvvalvalval/datofu#resetting-to-many-relationships#2018-12-2311:47igrishaevlooks nice, just to clarify: what is the meaning of the last boolean flag of clear-to-many function? It’s a bit unclear to me#2018-12-2311:49igrishaevnervermind, that’s clear now. Thank you!#2018-12-2414:25rlhkI watched a few Datomic Ions videos and actually tried the solo topology. So far so good.
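The approach both links above take (retract what is stale, add what is new, in one transaction) can be sketched as a pure diff over the old and new values; the helper name is hypothetical and the current values would come from a query against the db.

```clojure
;; Tx-data that resets :account/interests to exactly `new-vals`:
;; retract values no longer wanted, add values not yet asserted.
(defn reset-interests-tx [eid current-vals new-vals]
  (let [cur (set current-vals)
        nxt (set new-vals)]
    (vec (concat
           (for [v cur :when (not (contains? nxt v))]
             [:db/retract eid :account/interests v])
           (for [v nxt :when (not (contains? cur v))]
             [:db/add eid :account/interests v])))))
```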
However I’m looking for a guide on multi developer story that’s similar to traditional CI/CD pipeline so that we can use hosted git repo (say github) for code review and trigger deployment on selected branches. All docs seem to be showing pushing & deploying code from a developer’s local dev machine. Am I missing anything? Thanks!#2018-12-2416:24rapskalian@rlhk.open I haven’t set this up yet, but I always imagined those push/deploy commands could easily be added to a CI build script. They are, after all, just Clojure functions. #2018-12-2416:26lilactownI imagine the challenge would be around getting the appropriate credentials in your CI pipeline to execute the push and deploy commands#2018-12-2416:29lilactownand setting it up to push/deploy to the correct compute query group#2018-12-2416:29rapskalianI’ve used environment variables for AWS_ACCESS_KEY and the like to do that. I had a special “Heroku” user in one of my projects that would perform the commands. It wasn’t Ions-based though. Maybe there is an included role that ships with Datomic for attaching to CI users. #2018-12-2416:30lilactownyeah. it doesn’t sound more difficult than the way we build and deploy to AWS without ions#2018-12-2416:32rapskalianIndeed. I can imagine a small Clojure script containing a map of git branch to compute group, calling the properly configured push/deploy commands, all based on environment variables. I know CircleCI exposes a GIT_BRANCH var for doing this kind of scripting, for example. #2018-12-2416:33rapskalianThe datomic.ion.dev namespace should offer pretty flexible scripting capabilities. #2018-12-2416:46lilactownone use case that I think datomic feels very well suited for is managing content. 
am I wrong?#2018-12-2418:02rapskalianDefinitely sounds like a solid use-case to me.#2018-12-2416:46lilactownthinking about requirements like auditing/tracking changes, reverting and rolling back changes, diffing changes…#2018-12-2416:48lilactownplus the open information model seems extremely well suited to having the capability to define ad-hoc content “models” (e.g. an Article model, LandingPage, etc.)#2018-12-2416:48lilactownI’ve tried to do the latter in a more static, relational way and you always end up just throwing some JSON in a column and call it a day 😕#2018-12-2416:52rlhk@cjsauer & @lilactown thanks for the thoughts, which are reasonable. Didn’t investigate myself but hopefully not too difficult to be able to hook up Github directly from within AWS services.#2018-12-2417:12Lennart BuitHey, I tried asking in #beginners, but apparantly too specific of a question. I am starting to look at datomic using the datomic client API and am looking at how to manage my schema over time. Now, when using the peer API I could use conformity for schema management but that doesnt support the datomic client API… Now I was wondering how other people managed their schema on datomic. It can’t be that you manually transact the schema and never migrate right?#2018-12-2422:41val_waeselynckYou should ask this in a place where it's searchable, such as StackOverflow#2018-12-2417:31lilactownI think one reason you’re not finding as much info about migrations using Datomic vs. 
other relational DBs is because they’re not as strictly necessary with Datomic’s open information model#2018-12-2417:31lilactownI’m speaking from little experience, though, so take this with a grain of salt#2018-12-2417:33lilactownthe datomic cloud documentation has stuff about changing your schema: https://docs.datomic.com/cloud/schema/schema-change.html#2018-12-2417:36lilactownsince there’s a programmatic API to datomic and you already have the ability to atomically transact & revert, I could see a pretty simple wrapping API to ensure those changes have been made#2018-12-2417:37lilactownyou might even get away with just updating your schema definition and transacting that each time your app starts up ¯\(ツ)/¯#2018-12-2417:38Lennart BuitYeah, I’d see so too, but it feels like this is a problem I should not be the first one to encounter so to say. Managing schemas is a very “core” problem, don’t you think?#2018-12-2417:39Lennart Buityeah, but migrations are not only about adding fields, its also about maintaining consistency. Or — it is in the most frameworks I have seen. I may be looking at it wrong again tho#2018-12-2417:42lilactownI think schema migrations cover several problems that are worth thinking about:
1. Updating the database schema to reflect new information we may want to track
2. Updating rows/columns with derived or filler data in order to match the new schema
3. Tracking changes to the schema for audit purposes
4. Allow easy revert / rollback of changes
5. Propagating these changes across databases (e.g. moving from staging to prod, we want to ensure the same changes are made)#2018-12-2417:46Lennart Buitalso, a reconstructable schema as it is the sum of all migrations. And, migrations themselves serve as a history of your database schema, which I feel is a different level than history of its contents.#2018-12-2417:50lilactownyeah, I guess that’s what I was thinking with #2 #3#2018-12-2417:51Lennart BuitBut, these problems are solved in most database frameworks right. What I am saying is, I think that these issues are either not solved in the datomic world (unlikely!), or there is a reason so compelling that they don’t have to be solved. I am looking for either, either a library or workflow that solves these concerns, or a compelling reason why we shouldn’t care about these issues.#2018-12-2417:53Lennart BuitAnd, given the focus datomic appears to have (from its documentation) on a schema, I currently don’t see such a compelling reason. But; once again, I am a newbie in clojure so I must miss the bigger picture here#2018-12-2417:54Lennart Buit(anyway, biking home, thanks for the suggestions, will be back here soon)#2018-12-2417:55lilactownI think that because Datomic operates at the attribute (not table) level, most changes end up being backwards compatible. 
So you can just have a schema.clj somewhere that transacts your currently-used schema on start, updated as needed and checked into git, which covers the 80% case.#2018-12-2417:56lilactownthere’s another library, called datofu, which also has mechanisms for migrations, but also speaks to why you might not need them: https://github.com/vvvvalvalval/datofu#managing-data-schema-evolutions#2018-12-2417:57lilactownbut anyway, I should probably let other people with more experience guide you since I’m still dipping my toes into Datomic as well 🙂 merry christmas eve!#2018-12-2418:07Lennart Buithave a nice christmas as well!#2018-12-2418:14Lennart BuitI must say, I am always very thankful for the help that is provided here ^^#2018-12-2418:05rapskalian@lennart.buit I ported schema ensure from the peer API rather easily. Here is a gist of the code and a small sample:
https://gist.github.com/cjsauer/4dc258cb812024b49fb7f18ebd1fa6b5#2018-12-2418:08rapskalianThis was adapted from an example that Stu has available, but I’m on mobile. #2018-12-2418:16rapskalianThe ensure-schema function would presumably be run on every app startup. Transactions only occur if the schema doesn’t already exist in the database. As said in the datofu read me linked above tho, schema transactions are idempotent, so there’s really no penalty to just transact the schema on every app start...#2018-12-2418:28Lennart Buitright, that would solve #1 and #5 (partially) of @lilactown’s list then?#2018-12-2418:38rapskalian3 is checked naturally by Datomic’s historic capabilities I would think...as well as being checked into source control.
4 is sort of contrary to Datomic’s “accrete only” best practice, but I might be misunderstanding.
2 is left unchecked, yes. I imagine the ensure function’s contract could be augmented to include derived data...I haven’t attempted that myself though. @lennart.buit #2018-12-2418:44Lennart Buit3 refers to the change of schema instead of the data the database contains. That is indeed (partially) stored in a vcs, but only for attributes added. Changes made to the data in the database as a result of the schema being changed are not recorded. I personally see a difference between schema and data migrations, and sometimes they even go hand in hand.#2018-12-2418:44Lennart Buit4 is usually used for undoing borks. “Oh god deploy broke production, better revert”#2018-12-2418:47Lennart Buit2 is a hybrid schema/data migration which I think is most interesting. For example when you split a field in two (think “full-name” -> “first-name” / “last-name”) you would want to have a structured way to do so across stages right (dev/staging/production) other than “just doing it in the repl”.#2018-12-2418:47Lennart BuitEspecially because “doing things in the repl” is not very peer-reviewable#2018-12-2418:50Lennart Buitand, when you would do Continuous Deployment, exec’ing code in a repl is … ehh#2018-12-2418:51rapskalianI believe one of the core principles of Datomic is that you don’t change the schema, but instead only grow it. If you realize you modeled the data in a less-than-ideal way, you should deprecate (but leave untouched!) the old schema and content.
If for example an attribute is found to be a one-to-many relationship, rather than one-to-one, that is a new attribute, and the old schema and data must remain in order to guarantee backward compatibility.
This may be why migrations don’t get as much attention around Datomic...#2018-12-2418:51rapskalianI do agree that REPL changes are not the way to go. #2018-12-2418:52rapskalianFor derived data*#2018-12-2418:53rapskalianSchema accretion and “data derivation” are orthogonal in my opinion though. #2018-12-2418:55Lennart Buithow so? splitting, without removing, full-name is something that is both a schema change (add last-name/`first-name`) and a derivation of data right?#2018-12-2418:55Lennart Buit(lets skip over the fact that splitting a name to first/last is a very hairy problem)#2018-12-2419:07rapskalianWell, the schema change is one operation, and going forward, your UI might start prompting new users to enter separate first/last names. So in that sense, the schema change is “simple”.
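For the split-a-field example discussed here, the accretive route is two separate steps: transact the new first/last-name attributes (plain schema growth), then optionally derive data for old entities. A sketch of the derivation, with all attribute and helper names hypothetical and a naive split on the first space:

```clojure
(require '[clojure.string :as str])

;; Derive first/last-name assertions from an entity that only has the
;; old full-name attribute. The old datom is left untouched (accretion).
(defn split-name-tx [{id :db/id full-name :user/full-name}]
  (let [[first-n last-n] (str/split full-name #" " 2)]
    (cond-> [[:db/add id :user/first-name first-n]]
      last-n (conj [:db/add id :user/last-name last-n]))))
```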
Reshaping the old data is, as you mentioned, a very hairy (and separate) problem. The simplest solution in my opinion for a problem like that is documentation...it's now just a known fact in the system that users created before December 24th, 2018 were using the full name field. It might not be the most attractive solution, but it's surely "correct". #2018-12-2419:11rapskalianSo, I suppose it's a different philosophy, and one that permeates through Clojure rather deeply. It's why you see libraries like clojure.spec remain in alpha for a very long time...design work is difficult, and it's important to tease apart the data model upfront, because "fixing" it retroactively is nigh impossible. #2018-12-2419:13rapskalianAnd when I say "documentation" above, Datomic will help you with that. :db/deprecated is a first class attribute, and you can transact a reason for deprecation, as well as a "use first/last name instead for new dev" as well. #2018-12-2419:13Lennart Buitwell, they are definitely separate problems in that sense, one operates on a schema and one on the data. However, in traditional database design, these changes may operate on a different level (one on the "meta" schema level, the other on data), they are usually executed simultaneously and atomically. As if the world changed right under the feet of the application. Coming from such a background, I see all sorts of crazy coming from not maintaining the strong invariants I would enforce in a traditional relational db. Think: "everyone has a first name and a last name".#2018-12-2419:15Lennart Buitbut it wouldn't be the first time my perspective on software engineering would differ from what is custom in the clojure world#2018-12-2419:18Lennart Buit(please don't think I am discrediting your points! I enjoy this discussion)#2018-12-2419:19rapskalianIn Datomic you'd have a much easier time shifting your view to "everyone has a first and last name as of December 2018, and a full name before that".
I absolutely sympathize with the desire to “fix” the old data, however...”legacy” is a four letter word in software development haha. #2018-12-2419:27rapskalian> (please don’t think I am discrediting your points! I enjoy this discussion)
Of course :)
I myself came from the “table/SQL” world, and so Datomic and Clojure practices are still a fun learning experience. #2018-12-2419:28Lennart Buityeah, legacy is … uhh, the bane of our existence. But I have conflicting interests in trying to get things working, and realising I misunderstood the problem before. If I would ‘move fast and break things’, I would inevitably make lots of mistakes and therefore would need to migrate/deprecate. If I would not move fast, shipping may suffer.#2018-12-2419:30Lennart BuitI used to think, before coming to Clojure, that the only way to ‘solve’ these issues is by accepting that errors happen and have processes in place to break and not be stuck with all that legacy#2018-12-2419:40rapskalianYep, the eternal struggle...perfection is always at odds with shipping. My entire career feels like one big lesson in finding the “sweet spot”. Clojure certainly approaches it.
I think “move fast and break things” is still a valid strategy behind the curtain (non-prod), and this is where the REPL really shines. Hammock time coupled with quick experiments at the REPL is an awesome development flow. I find myself in “the zone” pretty often this way. #2018-12-2419:42Lennart Buit80% of my mistakes never ship, 20% comes back to bite me ^^#2018-12-2419:49rapskalianRich gave a good talk where he hinted at versioning at the function level. Something like “if you plan on breaking the foo function’s contract...don’t. Just create foo2, leave a note, and get on with your life. You don’t need a new name, and you don’t need to break anyone...just allow them to migrate at their leisure.”
Paraphrasing of course, but by the third watch I finally started to come around to the idea...our egos can invent problems so easily. #2018-12-2419:54Lennart BuitI understand the concept, but I didn’t get to the point yet where I can readily accept these mantras. My inner perfectionist will rage at the sight of such impurities. Anyhow, thanks for the nice discussion and enjoy the holidays ^^!#2018-12-2420:33rapskalianYou too, happy holidays!#2018-12-2619:41whilohi#2018-12-2619:41whilois there a reason why function calls in queries can only get one data source?#2018-12-2619:43whilo#2018-12-2619:44whiloThis yields: AssertionError Assert failed: Can't have more than one data source in expression
(< (count sources) 2) datomic.datalog/expr-clause (datalog.clj:1171)#2018-12-2910:14whiloAny ideas to work around this issue?#2018-12-2918:39dustingetzyour original comment with the query is lost but can you split it into two queries?#2019-12-3112:38eoliphantAre you using the client api. AFAIK it still doesn’t support cross-database joins#2018-12-2619:44whilopointing to the passing of $ and $foo to the nested query.#2018-12-2619:45whiloany function hits this restriction, but this subquery demonstrates the use case i am trying to cover#2018-12-2619:47whiloi do not see a technical necessity for this right away. in datahike/datascript it works fine.#2018-12-2716:13adamfreyHi, everyone. I'm trying to go through the datomic ions tutorial but I'm hitting a permissions issue when trying to download the ions dependency from the datomic s3 maven repo. I get:
Could not transfer artifact com.datomic:ion:pom:0.9.28 from/to datomic-cloud (): Access Denied
I've tried many permutations of setting the AWS credentials through the environment vars as well as the ~/.aws/credentials file but nothing's changed. Any ideas?#2018-12-2716:19marshall@adamfreywhat step are you seeing this?#2018-12-2716:20adamfreyjust running clj in the ion-starter repo#2018-12-2716:20marshalllocally on your machine or on an EC2 instance?#2018-12-2716:20adamfreylocally#2018-12-2716:21marshallinteresting
usually that ^ occurs when you’re on an instance that doesn’t have IAM permissions to use S3#2018-12-2716:23marshallhave you run aws configure or otherwise set up your default AWS credentials?#2018-12-2716:28adamfreyyes, my laptop had AWS credentials for my work account in the ~/.aws/credentials file. I created a new AWS account for this tutorial and put that user name and password under the [datomic-tutorial] header in that file. But it doesn't seem to work with either credentials#2018-12-2716:29adamfreyshould I be able to do this?:
aws s3 ls
#2018-12-2716:29adamfreybecause I get access denied trying to do that as well#2018-12-2716:33adamfreyoh @marshall I just fixed it using your comment. My new IAM user needed S3FullPermissions to be attached#2018-12-2716:33adamfreythanks for your help!#2018-12-2716:33marshallGreat!#2018-12-2716:34marshallNp#2018-12-2720:01mrgHey, I'm running into java.lang.IllegalStateException: Attempting to call unbound fn: #'datomic.common/requiring-resolve with clojure 1.10.0 and datomic-free-0.9.5703#2018-12-2720:02mrgCould anyone point me in the right direction?#2018-12-2720:04mrgit's possible for me to (in-ns 'datomic.common) (def requiring-resolve clojure.core/requiring-resolve) but that doesn't seem right 🙂#2018-12-2720:15mrgoh, I thought this happened on transaction, but actually this is the offending function:#2018-12-2720:15mrg#2018-12-2720:25mrgAh, got it. Clojure/string is not part of the transactor and I need to use .toLowerCase instead. I'm coming from datahike where this worked#2018-12-2800:26rboydbesides datomic console, are there any notable tools to analyze/report on my database? specifically I'd like to understand how my db is growing or which entities account for the most used storage#2018-12-2820:18dogenpunkCould someone explain to me how to transact entities with components using the client API? I seem to have a critical hole (or two) in my understanding.#2018-12-2820:20dogenpunk{:db/ident :booking/rrule
:db/cardinality :db.cardinality/one
:db/valueType :db.type/ref
:db/isComponent true}
{:db/ident :rrule/frequency
:db/cardinality :db.cardinality/one
:db/valueType :db.type/long}
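For orientation: with a :db/isComponent ref like :booking/rrule above, the client API accepts the child entity as a nested map, and giving both parent and child explicit string tempids keeps the intent unambiguous. A minimal sketch, assuming a live connection conn and the schema pasted above; the thread below works through variants of this shape that failed:

```clojure
;; Sketch: transact a parent entity with a nested component entity.
;; "booking" and "rrule" are arbitrary string tempids; the attribute
;; names come from the schema above.
(d/transact conn
  {:tx-data [{:db/id         "booking"
              :booking/rrule {:db/id           "rrule"
                              :rrule/frequency 1}}]})
```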
#2018-12-2820:22dogenpunkWhen I try transacting
{:booking/rrule {:rrule/frequency 1}}
I get errors re: tempids as only value#2018-12-2820:23dogenpunkBut when I transact
{:booking/rrule {:rrule/frequency 1 :_booking tempid}}
I get fault anomalies when I try to pull :booking/rrule attributes#2018-12-2820:24marshallwhat is the schema for :rrule/attr#2018-12-2820:25marshallhttps://github.com/cognitect-labs/day-of-datomic-cloud/blob/b4103e4a8f14518ed3f6d7f66f56cbf863117974/doc-examples/tutorial.clj#L100 is an example of transacting several component entities#2018-12-2820:27dogenpunkOk, I looked that over and thought that
{:booking/rrule {:rrule/frequency 1}}
would work, however, I keep getting “tempid used only as value” errors#2018-12-2820:28dogenpunkDo components have to be wrapped in a vector even if :db.cardinality/one?#2018-12-2820:29marshallI don’t believe so#2018-12-2820:29dogenpunkOr are components required to have unique ids aside from :db/id?#2018-12-2820:30marshalltry adding :db/id "foo" to the inner entity#2018-12-2820:30dogenpunkShould “foo” refer to the parent :booking entity tempid?#2018-12-2820:30marshallit shouldn’t “need it” but i’m wondering if there is an edge case here#2018-12-2820:30marshallno, a random tempid#2018-12-2820:30marshalldo you have a parent entity tempid?#2018-12-2820:31marshallif that’s a truncated ^ version of your transaction, can you share the full thing please?#2018-12-2820:33dogenpunkHere’s the full transaction:
{:booking/duration "PT1H", :booking/recur-set {:recur-set/rdate [], :recur-set/exdate [], :db/id "bar"}, :booking/student 60842575434612841, :booking/studio 16958867346817130, :booking/dtstart #inst "2015-04-06T21:30:00.000-00:00", :db/id "de92a84f-257c-47b9-bb14-6059bc534c4f", :booking/rrule {:rrule/frequency 1, :rrule/interval :rrule.interval/weeks, :db/id "foo"}, :booking/dtend #inst "2015-04-06T22:30:00.000-00:00", :booking/status :booking.status/scheduled, :booking/instructor 41539549297377384}#2018-12-2820:34marshalland :booking/duration is db.type string?#2018-12-2820:34dogenpunkYes#2018-12-2820:34marshallrrule and recur-set are cardinality 1?#2018-12-2820:34dogenpunkYes#2018-12-2820:35dogenpunkIf I remove :booking/rrule and :booking/recur-set the transaction succeeds#2018-12-2820:35marshallif you leave either one it is still an issue?#2018-12-2820:35dogenpunkYes, I have to remove both#2018-12-2820:36dogenpunkIf I replace the :db/id in the recur-set and rrule with the parent db/id it succeeds, but then I get faults when querying those attributes#2018-12-2820:36marshallright#2018-12-2820:37marshalltry making them “unnested” for testing:#2018-12-2820:37marshall{:booking/duration "PT1H",
:booking/recur-set "bar",
:booking/student 60842575434612841,
:booking/studio 16958867346817130,
:booking/dtstart #inst "2015-04-06T21:30:00.000-00:00",
:db/id "de92a84f-257c-47b9-bb14-6059bc534c4f",
:booking/rrule "foo",
:booking/dtend #inst "2015-04-06T22:30:00.000-00:00",
:booking/status :booking.status/scheduled,
:booking/instructor 41539549297377384}
{:recur-set/rdate [], :recur-set/exdate [], :db/id "bar"}
{:rrule/frequency 1, :rrule/interval :rrule.interval/weeks, :db/id "foo"}#2018-12-2820:40dogenpunk(let [{:keys [instructor student studio dtstart dtend duration status rrule recur-set]} booking-two
booking "baz"
tx-booking #:booking{:instructor instructor
:student student
:studio studio
:dtstart (java.util.Date/from dtstart)
:dtend (java.util.Date/from (t/>> dtstart duration))
:duration (.toString duration)
:status ((fnil keyword "booking.status" "scheduled") "booking.status" status)
:db/id booking
:recur-set "bar"
:rrule "baz"}]
(d/transact conn {:tx-data [tx-booking
{:rrule/frequency 1
:rrule/interval :rrule.interval/weeks
:db/id "baz" }
{:recur-set/rdate []
:recur-set/exdate []
:db/id "bar"}]}))
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:56).
tempid used only as value in transaction#2018-12-2820:45dogenpunkJust to be sure:
(d/transact conn {:tx-data [{:booking/duration "PT1H",
:booking/recur-set "bar",
:booking/student 60842575434612841,
:booking/studio 16958867346817130,
:booking/dtstart #inst "2015-04-06T21:30:00.000-00:00",
:db/id "de92a84f-257c-47b9-bb14-6059bc534c4f",
:booking/rrule "foo",
:booking/dtend #inst "2015-04-06T22:30:00.000-00:00",
:booking/status :booking.status/scheduled,
:booking/instructor 41539549297377384}
{:recur-set/rdate [], :recur-set/exdate [], :db/id "bar"}
{:rrule/frequency 1, :rrule/interval :rrule.interval/weeks, :db/id "foo"}]})
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:56).
tempid used only as value in transaction#2018-12-2820:46marshalli would try leaving one of the 2 components in (rrule or recur-set) and then remove one attr at a time from it#2018-12-2820:46marshallsee if you can narrow it to a specific one#2018-12-2820:46marshallthis is almost always caused by either type mismatch or cardinality issue#2018-12-2820:47dogenpunkOk, makes sense. I’ll see if I can get a minimal case#2018-12-2820:47dogenpunkBut, nesting a map for a component like this is supported?#2018-12-2820:49marshallit should be#2018-12-2820:49marshalli’m testing it also#2018-12-2821:02dogenpunkOk, this works:
(d/transact conn {:tx-data [{:booking/duration "PT1H",
:booking/student 60842575434612841,
:booking/studio 16958867346817130,
:booking/dtstart #inst "2015-04-06T21:30:00.000-00:00",
:db/id "de92a84f-257c-47b9-bb14-6059bc534c4f",
:booking/dtend #inst "2015-04-06T22:30:00.000-00:00",
:booking/status :booking.status/scheduled,
:booking/instructor 41539549297377384}
{:recur-set/rdate [], :recur-set/exdate [], :db/id "de92a84f-257c-47b9-bb14-6059bc534c4f", }
{:rrule/frequency 1, :rrule/interval :rrule.interval/weeks, :db/id "de92a84f-257c-47b9-bb14-6059bc534c4f", }]})#2018-12-2821:13marshall(def client (d/client cfg))
(d/list-databases client {})
(d/create-database client {:db-name "marshall-test"})
(def conn (d/connect client {:db-name "marshall-test"}))
(def schema [;; person
{:db/ident :person/email
:db/valueType :db.type/string
:db/unique :db.unique/identity
:db/cardinality :db.cardinality/one}
{:db/ident :person/name
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one
:db/isComponent true}
;; name
{:db/ident :name/first
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
{:db/ident :name/last
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}])
(d/transact conn {:tx-data schema})
(def data [{:person/email "#2018-12-2821:13marshall@dogenpunk ^ seems to work fine#2018-12-2821:17ronnyCould somebody help howto setup datomic cloud lambdas and logging?#2018-12-2821:53ronnyI tried lambda-logging together with clojure.tools.logging but I don’t find any documentation howto bring it to work.#2018-12-2823:00dogenpunk@marshall Thanks, I’ll dig in more.#2018-12-2900:01Joe Lane@ronny622 Use ion/cast. Sends info to cloudwatch.#2018-12-2917:19ronnyThanx a lot.#2019-12-3000:21jaihindhreddyTypo at
sed 's/iProcessing/Processing/g'#2019-12-3019:48jaihindhreddyAlso, typo at
sed 's/HMAC-SHA26/HMAC-SHA256/g'#2019-01-0218:04marshallI’ve fixed these. Thanks!#2019-12-3017:23ronnyIs there a way to write unit tests with datomic cloud? I tried to mock the database name to be an in-memory db for each test but it didn’t work.#2019-01-0217:23kennyHi @UEUB9VA30. We have been using this lib for writing unit tests with the Datomic Client API: https://github.com/ComputeSoftware/datomic-client-memdb#2019-12-3017:48jaihindhreddyWhy does :db/unique require :db.cardinality/one? A person can have multiple emails and each email can still uniquely identify a person.#2019-12-3022:34favilaI'm not aware of any such restriction? I just made one in a mem db, no problem.#2019-12-3022:39jaihindhreddy@U09R86PA4 here's where it says so:
#2019-12-3022:39jaihindhreddyJust setting up ions. I'm yet to try Datomic.#2019-12-3022:43favilaThis might be cloud specific#2019-12-3022:43favilaOn prem doesn’t care#2019-12-3022:44favilaAs to why, donno. It can cause confusion (known by personal experience) when trying to upsert different entities in the same transaction#2019-12-3022:45favilaOr more precisely, what you think are different entities#2019-12-3022:46favilaSo either semantically they had a greenfield and decided it was a good idea but couldn’t do it on on prem for backward compat; or the impl of cloud drove them to it; or the docs are wrong#2019-12-3017:57jaihindhreddyIs this restriction due to technical or architectural reasons, or is multi-cardinality unique attributes a bad idea in some way I'm blind to?#2019-12-3018:02lilactownI would imagine there’s greater chances of checking uniqueness to be quite slow if it could be multi-cardinality. but honestly I don’t know#2019-12-3019:00lilactownhas anyone put REBL into their Ions app yet?#2019-12-3019:00lilactownabout to try it, wondering if there’s an example floating around#2019-12-3019:00lilactownI’m using Emacs/CIDER so I imagine I’ll have to futz with that 😕#2019-12-3019:43Adrian SmithI'm trying datomic for the first time what does this error mean? bin/transactor config/samples/dev-transactor-template.properties
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:, storing data in: data ...
System started datomic:, storing data in: data
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0 (file:/Users/adriansmith/Datomic/datomic-pro-0.9.5786/lib/netty-all-4.0.39.Final.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.PlatformDependent0
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
#2019-12-3019:45lilactownI don’t think that should affect operation at all#2019-12-3019:48Adrian Smithah ok#2019-12-3019:57Adrian Smithshame we can't do brew install datomic with cask then brew services to control datomic#2019-12-3020:12dogenpunk@marshall circling back on my issue from Friday. Want to say thanks for your time investigating. The issue seems to be connected to the empty vectors in the components. Once I removed those, the components were properly stored and retrievable.#2019-01-0213:35marshallGlad you got it resolved!#2019-12-3020:16lilactowndoes the current datomic client API implement a nav-igable protocol for data returned by queries / etc.?#2019-12-3020:17lilactownI’m trying to use it with REBL and, e.g. when I pull an entity that has a ref to another entity, it doesn’t appear to have a custom nav implemented on it. just trying to figure out if PEBKAC or if it’s not out yet#2019-12-3100:58jaihindhreddyTo build something like Google Groups, because the posts can cross the 4096 character limit Datomic strings have, how/where would you store these?#2019-12-3101:01johnjin another database#2019-12-3101:01johnjin datomic you save the reference#2019-12-3101:03johnjon-prem doesn't mention this limit but I have heard long strings make on-prem slow, maybe that's why they set a limit in cloud#2019-12-3101:03jaihindhreddylike S3?#2019-12-3101:05johnjlike postgres, dynamo, or some other key-value store#2019-12-3101:09johnjs3 should work too I suppose but a database seems more fit for your use case#2019-12-3101:14lilactownI’m working on a thing for my writing and am thinking of using S3#2019-12-3101:15lilactownit’s cheap and reasonably fast#2019-12-3112:48eoliphantwhat JDK version do Datomic cloud instances (most recent, 454-8573) use?#2019-12-3120:39rboydwill datomic peers use memcached even if the transactor isn't?#2019-12-3120:53rboydI added "memcached=server:port" to a datomic pro starter cloudformation template, and based on memcached stats
(bytes/cur_connections) I think it's using it correctly, but it hasn't added any new metrics to cloudwatch#2019-01-0213:36marshallYes, peers can use memcached even if the transactor isn’t using it
You’d only get cloudwatch metrics if you’ve configured Peer-level cloudwatch reporting via your own callback for metrics#2019-12-3122:39favilayes they will#2019-12-3122:39faviladonno about metrics#2019-01-0116:30augustlfor storing strings larger than 4k in datomic cloud, there's a number of ways to get that wrong, I suppose..? 🙂#2019-01-0116:31augustlwould this work? Generate a squuid, store the string in external storage along with that squuid, wait for external storage to report A-OK, store that squuid in datomic?#2019-01-0116:34lilactownI was thinking of using the hash of the string as the filename#2019-01-0116:37rapskalianWatch out for filename collision when two strings happen to be equal#2019-01-0116:37augustlsounds better to create an unique ID every time you want to create a fact for that string, if you override the old one, and the transaction fails, then external storage and datomic is out of sync#2019-01-0116:37augustland that 🙂#2019-01-0116:38lilactownI mean, if two strings are equal - then no need to store it again? 😄#2019-01-0116:39lilactownthere’s the chance of hash collisions but it should be fairly low#2019-01-0116:40rapskalianYeah that’s true...external storage would need immutable/accrete-only semantics then yeah? Every modification creates a new, for example, S3 object. #2019-01-0116:41lilactownyep exactly#2019-01-0116:42lilactownotherwise you couldn’t use historical queries to read past strings#2019-01-0116:42rapskalianRight. I need an app that shocks me every time I revert back to mutable place oriented thinking. #2019-01-0116:43lilactown😂#2019-01-0116:43lilactownwell, I’m working on a blog-esque type app right now so these problems are at the forefront of my mind#2019-01-0118:31dustingetzPing me if you make progress here, hyperfiddle is going to integrate a foreign string store soon too#2019-01-0118:55lilactownwill do. 
it’s one of my stretch goals, once I get the rest of the app up and running#2019-01-0121:15dustingetzi would appreciate that thank you!#2019-01-0116:44rapskalian@augustl I think this would prevent the out of sync issue you mentioned #2019-01-0116:45augustlyeah, seems like it would#2019-01-0116:45augustlonly downside I can think of is to have to create hashes for potentially large strings 🙂#2019-01-0116:45augustlbut for something like a blog that probably shouldn't be a problem#2019-01-0118:55lilactownanyone set up an Ion as a custom authorizer for API Gateway?#2019-01-0201:57olivergeorgeShould I see cast/dev in Cloudwatch Logs? Both cast/alert and cast/event come through okay. Perhaps "fine-grained logging to troubleshoot a problem during development" implies that it's not something intended to be logged after deployment.
https://docs.datomic.com/cloud/ions/ions-monitoring.html#dev#2019-01-0202:40lilactown> NOTE Configuring a destination for cast/dev when running in Datomic Cloud is currently not supported.#2019-01-0202:40lilactownSounds like#2019-01-0202:41lilactownNo, you currently can't see them#2019-01-0202:41lilactownI've been using event to do dev logging because of that 😕#2019-01-0206:14johanatananyone else experiencing a problem where Datomic Cloud insists on version 0.1.23 of com.cognitect/s3-creds but that version isn't found in any maven repos when running locally. local run works fine with 0.1.22. if i try pushing and deploying 0.1.22, after getting the warning that my 0.1.22 was overridden by the cloud's 0.1.23 version, it gets stuck in ValidateService (times out after 5 minutes).#2019-01-0206:17johanatan[this is with a fresh project created in the last few days following the latest templates and advice per the Getting Started guide]#2019-01-0217:11johanatananyone?#2019-01-0217:14marshall@johanatan Are you using s3-creds directly?#2019-01-0217:14johanatannope#2019-01-0217:14johanatanit's one of the 8 or so deps that Datomic Cloud is adding#2019-01-0217:15marshallis your Ion doing something with another AWS lib of some sort? or are you just trying the basic ion tutorial?#2019-01-0217:15johanatannope, it's very basic#2019-01-0217:16johanatanhere's the ns form for my code:
(ns core
(:require [datomic.client.api :as d]
[datomic.client.api.async :as da]
[aleph.http :as http]
[manifold.deferred :as deferred]
[manifold.time :as mt]
[manifold.stream :as st]
[byte-streams :as bs]
[clojure.data.json :as json]
[clj-time.core :as t]
[clj-time.local :as l]
[clj-time.format :as f]
[clj-time.coerce :as c]
[com.rpl.specter :as s]))
#2019-01-0217:17marshallah. you said it gets stuck in validate service#2019-01-0217:17marshallyou mean the deploy step fails?#2019-01-0217:18marshalland eventually the codedeploy times out?#2019-01-0217:27marshall@johanatan https://forum.datomic.com/t/loadionsfailed-caused-by-stackoverflowerror-in-clj-antlr/747/3#2019-01-0217:48johanatan@marshall yes, that’s right #2019-01-0217:49marshallSolo or Production?#2019-01-0217:49marshallnot that it matters overly much, but you can set the Java thread stack size per my last comment on that ^ thread#2019-01-0217:50marshallI suspect you’re hitting a thread stack overflow given the largeish set of dependencies you listed there#2019-01-0217:50marshallI should also note that you can’t use the datomic async client in an ion#2019-01-0219:23johanatanSolo#2019-01-0219:23johanatanWhy no async?#2019-01-0219:24johanatanAlso, even if the thread size tweak fixes this, how can I get 0.1.23 locally? Do I need to add another repo?#2019-01-0219:25marshallthe deps mismatch is not related to the issue you’re hitting#2019-01-0219:30marshall@johanatan Did you look in your CloudFormation logs for your datomic stack to see the specific error responsible for the failure? i suspect it’s the stack overflow#2019-01-0220:13johanatanLet me check #2019-01-0220:16johanatanI have three stacks (two are nested under the first): datomic, datomic-Compute-XXX, and datomic-StorageXXX. all three are in CREATE_COMPLETE state and have never had an "UPDATE" attempted on them.#2019-01-0220:16johanatan@marshall ^#2019-01-0220:26grzmHow does one get the “basis t” value of a db returned by since? 
The value of :t that I see is equivalent to the :t of the current database.#2019-01-0220:33marshall@johanatan Sorry I meant CloudWatch logs#2019-01-0220:33marshalltypo#2019-01-0220:33marshallhttps://docs.datomic.com/cloud/operation/monitoring.html#searching-cloudwatch-logs#2019-01-0220:34marshallgo to the CloudWatch logs dashboard and find the log group named “datomic-<yourSystemName>”#2019-01-0220:34marshallthen you can search for “Exception”#2019-01-0221:22johanatan@marshall cool, thx!#2019-01-0300:50johanatan@marshall this is what is contained in the CloudWatch logs:
"Error": "Fault",
"CognitectAnomaliesMessage": "java.lang.AssertionError: Assert failed: cfg, compiling:(core.clj:25:1)"
#2019-01-0300:51johanatanline 25 is the last line of the following block:
(defonce system "datomic")
(defonce region "us-east-1")
(defonce cfg {:server-type :ion
:region region
:system system
:creds-profile "personal"
:endpoint (format "" system region)
:proxy-port 8182})
(defonce client (d/client cfg))
#2019-01-0304:59johanatantried inlining the cfg as follows and am still getting the same error (although there is no longer a binding named cfg):
(defonce client (d/client {:server-type :ion <== error points to this line
:region region
:system system
:creds-profile "personal"
:endpoint (format "" system region)
:proxy-port 8182}))
#2019-01-0305:04lilactowndoes this work locally for you?#2019-01-0305:04lilactowne.g. in a REPL, connecting to the system through the SOCKS proxy?#2019-01-0305:04johanatanyep, it works locally. i just tried without the cred-profile because that is needed for local only#2019-01-0305:04johanatanbut i just found that the ion-starter has a :query-group specified which i am missing#2019-01-0305:04johanatanhttps://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L20#2019-01-0305:05johanatanperhaps that is the problem?#2019-01-0305:05lilactownyeah that’s the only real difference I can see between what you have and what my Ions code looks like#2019-01-0305:05johanatan:+1:#2019-01-0305:05johanatanok, i'll try it#2019-01-0305:16johanatansame error with :query-group#2019-01-0305:18lilactownit’s an assertion error?#2019-01-0305:18johanatanyep#2019-01-0305:19johanatan"Msg": ":datomic.cluster-node/-main failed: java.lang.AssertionError: Assert failed: cfg, compiling:(core.clj:17:1)",
"Ex": {
"Cause": "java.lang.AssertionError: Assert failed: cfg, compiling:(core.clj:17:1)",
"Via": [
{
"Type": "clojure.lang.ExceptionInfo",
"Message": "java.lang.AssertionError: Assert failed: cfg, compiling:(core.clj:17:1)",
"Data": {
"CognitectAnomaliesCategory": "CognitectAnomaliesFault",
"DatomicAnomaliesException": {
"Cause": "Assert failed: cfg",
"Via": [
{
"Type": "clojure.lang.Compiler$CompilerException",
"Message": "java.lang.AssertionError: Assert failed: cfg, compiling:(core.clj:17:1)",
"At": [
"clojure.lang.Compiler",
"load",
"Compiler.java",
7526
]
},
{
"Type": "java.lang.AssertionError",
"Message": "Assert failed: cfg",
"At": [
"datomic.client.impl.local$create_client",
"invokeStatic",
"local.clj",
97
]
}
],
"Trace": [
[
"datomic.client.impl.local$create_client",
"invokeStatic",
"local.clj",
97
],
[
"datomic.client.impl.local$create_client",
"invoke",
"local.clj",
94
],
[
"clojure.lang.Var",
"invoke",
"Var.java",
381
],
[
"datomic.client.api.impl$dynarun",
"invokeStatic",
"impl.clj",
19
],
...
#2019-01-0305:19johanatanwould it be a problem that this file has a -main defined?#2019-01-0305:19johanatan[i'm using that main to run locally from command line]#2019-01-0305:20lilactownoooo probably#2019-01-0305:20johanatanhmm, let me try removing it#2019-01-0305:21lilactownI’m not sure how ions get loaded, but I could see that mucking with it#2019-01-0305:22johanatanyea, me too 🙂#2019-01-0305:23johanatanbummer. same problem without the -main#2019-01-0305:23lilactownwould you be willing to post your source file?#2019-01-0305:24johanatanlet me see how much i can strip from it to still reproduce the problem#2019-01-0305:29johanatan#2019-01-0305:30johanatan{:allow [
;; lambda handlers
core/ion-func
]
:lambdas {:load-chains
{:fn core/ion-func
:description "A description."}
}
:app-name "datomic"}
#2019-01-0305:30johanatan{:paths ["src" "resources"]
:extra-paths ["resources"]
:deps
{clj-time {:mvn/version "0.15.0"}
com.rpl/specter {:mvn/version "1.1.2"}
aleph {:mvn/version "0.4.6"}
org.clojure/clojure {:mvn/version "1.9.0"}
com.datomic/ion {:mvn/version "0.9.28"}
com.datomic/client-cloud {:mvn/version "0.8.71"}
org.clojure/data.json {:mvn/version "0.2.6"}
com.cognitect/transit-java #:mvn{:version "0.8.311"}
com.datomic/client-api #:mvn{:version "0.8.12"}
org.msgpack/msgpack #:mvn{:version "0.6.10"},
com.cognitect/transit-clj #:mvn{:version "0.8.285"}
com.cognitect/s3-creds #:mvn{:version "0.1.22"}
com.amazonaws/aws-java-sdk-kms #:mvn{:version "1.11.349"}
com.amazonaws/aws-java-sdk-s3 #:mvn{:version "1.11.349"}}
:mvn/repos {"datomic-cloud" {:url ""}}
:aliases
{:dev {:extra-deps {com.datomic/ion-dev {:mvn/version "0.9.186"}}}}}
#2019-01-0305:30johanatan^^ that should be the entirety of it.#2019-01-0305:34lilactownhm, I wonder if it could be the fact that you’re creating the client when your code is first run#2019-01-0305:35lilactownin the Ions tutorial / example (which I pretty much copied), they define get-client:
(defonce get-client
;; "This function will return a local implementation of the client
;; interface when run on a Datomic compute node. If you want to call
;; locally, fill in the correct values in the map."
(memoize #(d/client {:server-type :ion
:region "us-west-2"
:system "datomic"
:query-group "datomic"
:endpoint ""
:proxy-port 8182})))
I made it a defonce for REPL-ing but it’s still a function, that has to be invoked when your Ion is first invoked#2019-01-0305:37lilactownI know there’s a bunch of spinning up and down that the Datomic system does when new code gets deployed. For example, a lot of times the first request I send after a deployment fails because it can’t connect to the database#2019-01-0305:52johanatanah yea. that could be it#2019-01-0306:01johanatanyep, that was it.#2019-01-0306:02johanatanthanks for your help!#2019-01-0306:02lilactownsure thing!#2019-01-0306:30johanatanbtw, the docs at: https://docs.datomic.com/cloud/ions/ions-reference.html don't mention the need to delay the client creation#2019-01-0306:30johanatan/ has the code I was trying to run initially#2019-01-0311:12stijn@johanatan this page mentions the specific error you got https://docs.datomic.com/cloud/troubleshooting.html#assert-failed#2019-01-0311:13stijnit changed recently, because previously you could do this, although it wasn't recommended, but now, with the preloading of the active databases, you can't do this anymore#2019-01-0320:18johanatanOh ok. It might be a good idea to update the rest of the documentation (linked to previously) so that people don’t continue going down this path. #2019-01-0314:27dmarjenburghHi, is there a way to pull :db/ident values (like enums). E.g. taking the tutorial example of colors and inventory items (https://docs.datomic.com/cloud/tutorial/assertion.html):
(d/pull db
{:selector [:inv/type :inv/size :inv/color]
:eid [:inv/sku "SKU-60"]})
; =>
; #:inv{:type #:db{:id 15617463160930376, :ident :shirt},
; :size #:db{:id 29304183903486023, :ident :xlarge},
; :color #:db{:id 32330039903125571, :ident :yellow}}
I would like to retrieve: #:inv{:type :shirt :size :xlarge :color :yellow} without transforming the query result. Is this possible?#2019-01-0314:43markbastianAnyone know if there are plans to bring the datomic cloud find-spec up to date with the on-prem version? For example, support for . and ... to return scalars and vectors?#2019-01-0315:29Jules WhiteI have a strange issue. The following rule works fine when invoked via the REPL, but fails when invoked via a Lambda in Datomic Cloud. However, if I deploy, invoke the rule via the REPL, and then invoke it via Datomic Cloud Lambda, it will work from then on when invoked via Lambda.
Code:
[(foo ?x ?y)
 [(my.namespace/foo ?x ?y) [?q ?r]]]
Initial error when invoking from Datomic Cloud:
The following forms do not name predicates or fns: (my.namespace/foo)#2019-01-0317:24timgilbert@dmarjenburgh: basically no, to my knowledge, though you could use [{:inv/type [:db/ident]} {:inv/size [:db/ident]}] as your pull expression to elide the :db/id stuff out of there. One possible alternative is to just use keywords as your data types for enum values, though there are tradeoffs.#2019-01-0320:38dmarjenburghOk, thanks#2019-01-0514:54eoliphantit’s not gonna work, AFAIK with pull, as they’re just regular distinct entities as far as datomic is concerned, even though we group them together ‘mentally’. We do what you’re after with regular queries. Like the following returns all of our :action/.. enums
:where
[?a :db/ident ?i] ;;finds all :db/idents
[((comp #{"action"} namespace) ?i)]
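For reference, the two approaches mentioned above could be sketched as follows (attribute names come from the tutorial example; `db`, `d`, and the flattening helper are assumptions, not part of the original messages):

```clojure
;; Sketch 1: the nested-selector workaround — pull :db/ident through
;; each ref attribute, then flatten the one-key maps in app code.
(def selector
  [{:inv/type  [:db/ident]}
   {:inv/size  [:db/ident]}
   {:inv/color [:db/ident]}])

;; hypothetical helper:
;; {:inv/type #:db{:ident :shirt}, ...} => {:inv/type :shirt, ...}
(defn flatten-idents [m]
  (into {} (map (fn [[k v]] [k (get v :db/ident v)])) m))

;; Sketch 2: eoliphant's fragment as a complete query — all idents
;; whose namespace is "action".
(d/q '[:find ?i
       :where
       [?a :db/ident ?i]                     ;; every ident in the db
       [((comp #{"action"} namespace) ?i)]]  ;; keep only :action/* keywords
     db)
```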
#2019-01-0317:26timgilbertThere's a bit of background on the difference between how those approaches behave here, although this is in a datomic on-prem/peer context, not a cloud/client context:
http://docs.workframe.com/stillsuit/current/manual/#_stillsuit_enums#2019-01-0319:12idiomancyhey, I assume the answer to this question is "no, that's not a thing, you have to use a query", but is there any way to specify the combination of a key and a value as a unique identifier. so, only one user at a time (who I would like to have easy access to) will have ::role ::admin but many users might have ::role ::moderator. so I'd love to be able to use [::role ::admin] as an eid.#2019-01-0319:29favilawhat is the difference between what you want and making ::role a unique attribute?#2019-01-0319:36idiomancyAs in a db.unique/identifier? Well the fact that for certain values of ::role, multiple distinct entities can share the same value#2019-01-0319:37lilactownare you sure you only ever want one admin role? I don’t know your use case but almost everywhere I look that has a role-based permission system has the capability to add more than one administrator#2019-01-0319:38idiomancyHonestly role is the wrong semantics to signal. We have a smart contract that ensures the existence of one and only one admin. #2019-01-0319:38idiomancyBut i was trying to find a way to describe that which could be general purposed#2019-01-0319:39idiomancyBecause making ::admin true a unique identifier seems weird#2019-01-0319:40idiomancyTechnically there is one admin and one 'owner' where the only thing the owner can do is assign a new admin#2019-01-0319:40idiomancySo, its a unique value#2019-01-0319:41lilactowncould the owner and admin be the same person?#2019-01-0319:41idiomancyUnfortunately, yes#2019-01-0319:41lilactownsounds like the best way would be to use a bool value then IME. ::admin true ::owner true#2019-01-0319:42idiomancyYeah#2019-01-0319:42lilactownfor ease of querying#2019-01-0319:42idiomancyI think youre right#2019-01-0319:42idiomancyThats what I was kind of landing on myself#2019-01-0319:42idiomancyThanks!#2019-01-0319:44lilactownsure thing! 
🙂#2019-01-0320:30johanatanhi, is there an env var i can check the presence of to determine if my ion is running in the cloud or locally?#2019-01-0320:30johanatan[or some other/better way to do this?]#2019-01-0320:43johanatani ended up going with:
(clojure.core/string? (System/getenv "AWS_LAMBDA_FUNCTION_NAME"))
(which should work fine)#2019-01-0320:45johanatani am getting an error when trying to query my solo datomic instance while a process is inserting data in the bkg:
1. Unhandled clojure.lang.ExceptionInfo
Datomic Client Exception
{:cognitect.anomalies/category :cognitect.anomalies/busy,
:http-result
{:status 429,
:headers
{"content-length" "9",
"server" "Jetty(9.3.7.v20160115)",
"date" "Thu, 03 Jan 2019 20:44:43 GMT",
"content-type" "text/plain"},
:body nil}}
is this because my solo server can't handle the load I'm trying to place on it?#2019-01-0320:45johanatando i need to change it to production?#2019-01-0320:51marshallhttps://docs.datomic.com/cloud/troubleshooting.html#busy
@johanatan You should either retry or potentially, yes, move up to production#2019-01-0320:52johanatan:+1: thx!#2019-01-0320:52johnjsolo its just a demo#2019-01-0320:52johanatanwhat's the easiest way to upgrade from solo to production?#2019-01-0320:52marshallnot sure i’d categorize it as a demo
lots of workloads are totally feasible on solo#2019-01-0320:52marshallbut if you need more than that - then yes, production#2019-01-0320:53marshall@johanatan https://docs.datomic.com/cloud/operation/upgrading.html#2019-01-0320:53johanatanthx#2019-01-0321:21johanatanfyi.. i did a simple distribution of my load (~20 reqs spaced 250 ms apart) and it fixed my "busy" issue:
(defn- distribute-load [funcs stagger]
  (map-indexed (fn [idx itm] (mt/in (* idx stagger) itm)) funcs))
#2019-01-0321:21johanatanmt is manifold.time#2019-01-0321:35marshall👍#2019-01-0321:49johanatananyone know what's going on with this?
13:49 $ clojure -A:dev -m datomic.ion.dev '{:op :push :creds-profile "personal"}'
{:command-failed "{:op :push :creds-profile \"personal\"}",
:causes
({:message
"Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 98AED4A6FE429CBB; S3 Extended Request ID: LsWqYlVK8gDQAwmUPwh32CFjQp6B2UCos0IDDnNGzwnxokfkPJkeJLrw4wiurwi517UY42Rho8g=)",
:class AmazonS3Exception})}
#2019-01-0321:50johanatansomewhere i can look for additional diagnostics?#2019-01-0321:53marshall@johanatan try explicitly adding :region#2019-01-0321:54johanatansame result#2019-01-0321:55johanatan[and the one without :region has been working for me for the last week or so]#2019-01-0321:55marshallwhat changed?#2019-01-0321:55johanatani added a new lambda to ion-config.edn and tweaked the code a bit. nothing drastic#2019-01-0321:56marshalli’d check to be sure the ion-config.edn is formatted properly#2019-01-0321:56johanatanyea, that was my thought too. i've double checked. but i'll triple check it#2019-01-0321:58johanatanlooks good to me:
{:allow [;; lambda handlers
         core/load-chains
         core/volatility-skews]
 :lambdas {:load-chains
           {:fn core/load-chains
            :description "Loads latest option chain data from td ameritrade into datomic."
            :timeout-secs 900}
           :volatility-skews
           {:fn core/volatility-skews
            :description "Returns any current volatility skew opportunities."
            :timeout-secs 60}}
 :app-name "datomic"}
#2019-01-0321:58johanatanand the code's buffer is loading fine in CIDER#2019-01-0322:02marshallnot sure what else could be responsible if your creds profile file is correct and all#2019-01-0322:02johanatanhmm, maybe my creds have expired or something#2019-01-0322:02johanatani'll look into that#2019-01-0322:02marshallyou can submit an AWS support ticket with the Request ID and the extended request ID and they might be able to tell you what the actual error cause was#2019-01-0322:03marshallusing latest ion-dev and all that?#2019-01-0322:03johanatanit could be intermittent AWS issues (but that seems unlikely)#2019-01-0322:03marshallhttps://docs.datomic.com/cloud/releases.html#current#2019-01-0322:04johanatanyep, i have all of those versions#2019-01-0322:37johanatanthis works so the creds seem valid:
14:37 $ aws s3 --profile personal ls
2018-12-28 13:07:14 datomic-code-2025dbc4-e342-4b10-99d8-24ce8346fec1
2018-12-28 13:03:05 datomic-storagef7f305e7-ulwi6f7m5ipi-s3datomic-1xpzc6j152563
2017-05-02 19:20:12 numerai-data
#2019-01-0323:37johanatanare there additional diagnostic steps I can take? I’m not sure if I can call AWS support as this is just a personal playground account #2019-01-0323:52johnjIf I'm not wrong, you are entitled to cognitect support: https://support.cognitect.com/hc/en-us/requests/new#2019-01-0323:54johnjah solo doesn't have standard support https://www.datomic.com/pricing.html#2019-01-0400:07johanatanit has "developer forum" wherever that is? (I assume not here)?#2019-01-0400:07marshallhttp://Forum.datomic.com#2019-01-0401:54dustingetzIf cloud doesn’t officially support cross database query, but does expose raw index access, cannot I implement it myself?#2019-01-0415:28jaretHi All! We’re looking to add some community created Datomic Cloud/Ions examples to our documentation. If you have a project you’d like to share and a repository, or blog/video demoing Datomic Cloud/Ions we can link to please let us know. Feel free to DM me or send an e-mail to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>.#2019-01-0418:55grzmI’m getting a “Datafiable does not exist” error when including the cognitect.aws.client.api with Datomic cloud#2019-01-0418:55grzm#2019-01-0418:57grzmJust deployed a solo stack today using the latest and greatest versions featured on https://docs.datomic.com/cloud/releases.html#2019-01-0418:57marshallwhat version of clojure in your deps.edn?#2019-01-0418:58grzm#2019-01-0418:59grzm1.10.0#2019-01-0419:02lilactownwhen I last built + deployed my Ions, I tried to use 1.10 but was told that it was overridden and (AFAICT) my compute nodes are using 1.9 still#2019-01-0419:04lilactownit also looks like cognitect.aws.client.api depends explicitly on clojure.datafy, which was introduced in 1.10. 
so it is not compatible with 1.9#2019-01-0419:05lilactownhopefully once people are back from the holidays we’ll get a compute update to Clojure 1.10 😬#2019-01-0419:05grzmWell, we’re back from the holidays 🙂 And @marshall’s here to keep us company 😄#2019-01-0419:06grzmGuess I’ll stub back Amazonica. The code looks so nice using the aws client api.#2019-01-0419:08lilactownyeah. AFAICT the actual functionality of the aws-api library doesn’t depend on clojure.datafy#2019-01-0419:09lilactownso in an ideal world it would detect whether the Datafiable protocol was available and optionally extend the protocol#2019-01-0419:09marshallI believe I’ve used the aws-api from ions#2019-01-0419:09marshallhowever the current release does indeed use clojure 1.9#2019-01-0419:09marshallit will be moved to 1.10 on the next release#2019-01-0419:11rapskalianI may have spotted a small typo in the docs: https://docs.datomic.com/cloud/transactions/transaction-data-reference.html#Transaction
db-fn and db-fn-arg should potentially be tx-fn and tx-fn-arg.#2019-01-0419:11lilactownthe Datafiable bits were added in November 29th#2019-01-0419:13lilactownyou could probably clone the project and delete the Datafiable line and be good-to-go tbh#2019-01-0419:24grzm@lilactown good idea. I’ll give that a go.#2019-01-0421:56grzm@lilactown that worked just fine. Thanks!#2019-01-0421:56grzmhttps://github.com/Dept24c/aws-api/commit/967f0d639c61e32a39c2e6b2ce97aa64f735bcde#2019-01-0422:48timgilbertSay, does datomic on-prem with a DynamoDB back-end support encryption at rest?#2019-01-0422:49timgilbert(via the AWS stuff, eg https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/EncryptionAtRest.html)#2019-01-0422:57timgilbertAh, I now see that our current ddb tables are using it, so the answer is yes (although I'm not sure how actually useful that is, seems like it just protects us from somebody taking down amazon and stealing their hard drives)#2019-01-0423:39steveb8nIt’s most useful when your product needs to pass security review from your customers. If you build a SAAS product for business/enterprise customers, this will come up a lot#2019-01-0423:36steveb8nQ: I have an attribute in all my entities called :common/version which allows me to do app level data migration. I wonder if this is a bit of a contradiction of the “accretion only” design idea. what are the pros/cons of this idea that I am missing?#2019-01-0513:25dmarjenburgh@steveb8n Not exactly sure what you intend to do, but isn’t the version derivable from the transaction that last updated an entity?#2019-01-0523:08steveb8nInteresting idea. 
In that case the txn would need an annotation with something like a version so it seems like I would just be moving the complexity elsewhere#2019-01-0523:10steveb8nwhat I want to do is make it simple/easy to upgrade a database prior to a release or at app read/write time#2019-01-0607:44dmarjenburghI was more thinking of having a single :app/version attribute that gets updated once per deploy. If the history for this attribute shows [100 :app/version "1.0"] [200 :app/version "2.0"], then transactions with ids between 100 and 200 happened on version 1.0 and all transactions after 200 happened on version 2.0.#2019-01-0515:05eoliphantwe use something similar for some of our apps, where there’s a business need for ‘versions’ of entities, where it’s a more typical, monotonically increasing int/long. We’ve also used something similar in transaction functions where we needed to make sure the transact is based fresh data. I don’t think it violates the accretion philosophy, as you’re not ‘breaking stuff’ as you evolve your schema, it just happens to be a sometimes useful (meta?) attribute,#2019-01-0523:09steveb8nthat’s my thinking as well. glad to know I’m not crazy 😉#2019-01-0600:44rboydif memcache and valcache properties are both set for a peer does it prefer one? does it use both? maybe fetch from valcache only if memcache misses?#2019-01-0600:45marshall@rboyd you should only use one or the other, using both valcache and memcached is not supported #2019-01-0600:46rboydok thanks#2019-01-0600:50rboydis there a way to warm the cache with the entire index+log?#2019-01-0601:05marshallRun large queries#2019-01-0601:05marshallOr walk indexes with the datoms api#2019-01-0614:12lambdamHello,
I struggle to find how to query against a collection.
Something like this (just to show the intention):
(d/q '[:find ?e
       :in $ ?names
       :where [?e :user/team (in-collection ?names)]]
     db
     #{:foo :bar :baz})
Does someone know how to do that?#2019-01-0614:56pvillegas12Take a look at http://www.learndatalogtoday.org/chapter/3#2019-01-0615:02lambdamThanks. I see it in the Collection section.#2019-01-0614:56dmarjenburgh@dam You can bind inputs to collections and tuples (https://docs.datomic.com/cloud/query/query-data-reference.html)
(d/q '[:find ?e
       :in $ [?name ...]
       :where [?e :user/team ?name]]
     db
     #{:foo :bar :baz})
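The linked reference also describes tuple bindings; a sketch alongside the collection binding above, where :user/role is a made-up attribute for illustration:

```clojure
;; Tuple binding: [?team ?role] destructures one input tuple
;; positionally, binding its first and second elements.
(d/q '[:find ?e
       :in $ [?team ?role]
       :where
       [?e :user/team ?team]
       [?e :user/role ?role]]
     db
     [:foo :admin])
```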
#2019-01-0614:59lambdamOh great. Thanks.
My brain fits a bit more to Datalog than 10 min ago.#2019-01-0618:38Roman TsopinIs there any guide about connecting websocket gateway api with ions? #2019-01-0619:46hlolliI was doing the datomic cloud setup a week ago, and I was able to get it all working and running, but now I'm hit with an error that I didn't get before when I'm connecting with the client api (def client (d/client config)); the SOCKS proxy is running on a given port as a systemd service.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See for further details.
Syntax error (ClassNotFoundException) compiling at (form-init2453726782059748262.clj:12:13).
org.eclipse.jetty.util.thread.ThreadPoolBudget
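For what it's worth, the SLF4J lines are a separate, benign warning about a missing logger binding; the ClassNotFoundException for Jetty's ThreadPoolBudget usually points at mismatched dependency versions on the classpath instead. A sketch deps.edn fragment for silencing the SLF4J warning (version illustrative):

```clojure
;; deps.edn fragment: add an SLF4J binding so logging is not a no-op.
;; This does NOT fix the ThreadPoolBudget ClassNotFoundException,
;; which is a Jetty/classpath version issue.
{:deps {org.slf4j/slf4j-simple {:mvn/version "1.7.25"}}}
```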
#2019-01-0619:50hlolliah nevermind, I think this is an old story of root -> user conflict with using systemd#2019-01-0716:21hlolliAh ok, as I changed the systemd service to user, found out my problem is totally unrelated. I start emacs from program runner and I'm guessing some environment variables are missing. Which explains why it works when I do it from the terminal. What (aws) variables do I need to include to connect to my datomic cloud?#2019-01-0717:00marshall@hlolli depending a little on your setup, AWS_PROFILE can be sufficient. if you dont have profiles set up you’d need to have your Access key and secret key in envars
one sec i’ll find you the aws docs#2019-01-0717:01marshallhttps://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
The first 2 there (access key id and secret access key)#2019-01-0717:02marshallusually easier to run aws configure and set up your profiles, then use AWS_PROFILE envar in the particular context you’re connecting from#2019-01-0717:02hlolliOk (on the train). I did provide the aws profile in the socks scripts. Now its just user space localhost tunnel, but i guess the config needs to know some vars. I repost in a bit.#2019-01-0717:03marshallyou can also pass a profile to the client configuration map#2019-01-0717:03hlolliYes I thought I did that. Maybe it cant find ~/.aws #2019-01-0717:03marshallhttps://docs.datomic.com/client-api/datomic.client.api.html#var-client (the :creds-profile key)#2019-01-0717:03hlolliExactly#2019-01-0717:04marshallthe class not found error looks more like a deps issue#2019-01-0717:05marshalli.e. too old a version of clojure maybe?#2019-01-0717:20hlollithe current error
2019-01-07 18:18:07.907:INFO::nREPL-worker-0: Logging initialized @23460ms to org.eclipse.jetty.util.log.StdErrLog
2019-01-07 18:18:07.929:WARN:oejusS.config:nREPL-worker-0: No Client EndPointIdentificationAlgorithm configured for
my config
(def config
  {:server-type :ion
   :region "eu-west-1"
   :system "visitor"
   :creds-profile "hlolli-visitor"
   :endpoint ""
   :proxy-port 8182})
and my profile
$ AWS_PROFILE=hlolli-visitor aws configure list ~
Name Value Type Location
---- ----- ---- --------
profile hlolli-visitor manual --profile
access_key ****************PJ6Q shared-credentials-file
secret_key ****************m5Ze shared-credentials-file
region <not set> None None
#2019-01-0717:20hlollithis all works if I run this trough $ clojure in the terminal btw, so environment is the only variable that's different.#2019-01-0717:22marshalli don’t use nrepl, no idea how that would work with it#2019-01-0717:25marshallthat error does appear to be jetty related - makes me suspect version differences again
how are you launching your repl when you’re not just running clojure in the terminal?#2019-01-0717:40hlollithen with lein!#2019-01-0717:40hlolliok, so that could be a factor#2019-01-0717:40hlollibooting cider now with clojure-cli...#2019-01-0717:42hlollimy deps.edn includes only datomic, but leiningen whole bunch of server stuff#2019-01-0717:45hlollithe same, it's either cider-nrepl problem or env problem
here's with cider + clojure-cli
2019-01-07 18:43:21.942:INFO::nREPL-worker-0: Logging initialized @13446ms
ExceptionInfo Unable to connect to localhost:8182 clojure.core/ex-info (core.clj:4739)
#2019-01-0717:46marshallyour socks proxy running on 8182?#2019-01-0717:46marshallit does occasionally fail#2019-01-0717:46marshallworth checking that it’s still up#2019-01-0717:52kardanDon’t you need a query-group in your config?#2019-01-0717:53kardanhttps://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L20#2019-01-0718:08hlolliI need to get more into this ion world, probably for those cases. I'm just starting out with datomic on a project.#2019-01-0719:44kardanI just done the Ion tutorial, so my knowledge is a “bit” thin. Also spent almost all my time on Google cloud so all this Aws docs is entertaining 🙂#2019-01-0718:01hlolliyeh, need to investigate, my journal
ssh -v -i /home/hlolli/.ssh/datomic-eu-west-1-visitor-bastion -CND 8182
sudo netstat -tulpn ~
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8182 0.0.0.0:* LISTEN 7986/ssh
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 937/sshd
tcp6 0 0 ::1:8182 :::* LISTEN 7986/ssh
#2019-01-0718:03hlolliand here's my terminal where everything works fine
➜ visitor git:(master) ✗ clojure ~/Documents/visitor
Clojure 1.9.0
user=> (ns visitor.datomic.core
(:require [datomic.client.api :as d]))
nil
visitor.datomic.core=> (def config
{:server-type :ion
:region "eu-west-1"
:system "visitor"
:creds-profile "hlolli-visitor"
:endpoint ""
:proxy-port 8182})
#'visitor.datomic.core/config
visitor.datomic.core=> (def client (d/client config))
2019-01-07 19:03:11.036:INFO::main: Logging initialized @23502ms
#'visitor.datomic.core/client
visitor.datomic.core=> client
#object[datomic.client.api.sync.Client 0x2fc698a9 "#2019-01-0722:08sekaofor datomic ion, is there any way to get a bigger instance than t2.small without going to production topology? or is the solo topology hard coded to use t2.small?#2019-01-0723:05Joe Lanesolo topology is built to use t2.small's#2019-01-0723:26sekaogotcha, thanks#2019-01-0808:37dmarjenburghIs there a way to subscribe to the live index feed in datomic cloud? I really want this and I believe this is possible in the on-prem version.#2019-01-0815:57Joe LaneCurrently no, not without rolling your own solution. There are several possible ways to do it with the result of a transaction by placing that onto a queue or topic of some sort.#2019-01-0816:28Ben HammondI'm investigating {:db/error :db.error/transactor-unavailable} errors that seem to be triggered by a series of 9kB (237 datom) transactions to an on-prem txtor (hosted on t2.large connecting to db.t2.small RDS Postgres)
and I want to make sure I've covered my bases of Things To Think About.
Are there any resources I can be pointed at? so far I've only really found
https://groups.google.com/d/msg/datomic/88eo9lV8jXE/NTjuxk1oBwAJ#2019-01-0816:52markbastianDoes transaction size affect indexing time? For example, if I have 10,000 datoms does Datomic perform best when it comes to indexing if I transact 1 datom x 10,000 transactions, 100 datoms x 100 transactions, or a single 10,000 datom transaction?#2019-01-0816:56alexmillerI have no knowledge of the correct answer, but I’d place my bet strongly on the middle one :)#2019-01-0816:57alexmillerand secondly on the first one#2019-01-0816:57marshall@markbastian yes and you should prefer the middle one#2019-01-0816:58marshall@ben.hammond how many is a series? in general that message indicates the transactor is busy - i would look at metrics to see whether you’re hitting storage backpressure, wouldn’t surprise me with a small RDS instance#2019-01-0816:59Ben Hammondi see a surprising volume of writes on the Postgres end ~91Mb#2019-01-0817:00marshallyou’re likely in an indexing job, which will write significant amounts of data to storage#2019-01-0817:02Ben Hammondaround 3 of these transactions running simultaneously#2019-01-0817:02Ben Hammondtwice a minute
Seems like you'd want > 1 to keep the transactor busy, but a much larger number will eventually saturate the transactor and cause indexing or other issues. I'm currently using 4 async threads, but am going to try dialing it down to two. Just wondering if anyone had any particular advice on this as well. Thanks!#2019-01-0817:23alexmilleryou might try asking on the forums too, I think this has been discussed there in the past#2019-01-0817:24markbastiancool, thanks!#2019-01-0817:25alexmillerand https://docs.datomic.com/on-prem/capacity.html#data-imports#2019-01-0817:28markbastian"Pipelining 20 transactions at a time with 100 datoms per transaction is a good starting point for efficient imports." - I think that's what I'd seen before. I'll try dialing down the size of my writes and do some experiments with number of transactions as well.#2019-01-0818:21marshallYes, that ^ is a good starting point for Cloud (despite being in on-prem docs)#2019-01-0818:41tony.kayHi, I’ve got an existing on-prem Datomic instance, and want to spin up another app that can just use the datomic client library so that it is a much lighter weight process. For deployment cost purposes, it would be nice if one of my existing Peers (which is also a running application) could act as the peer server. I can figure out how to start the peer (just get the classpath right, and invoke the peer server main), but I’m wondering if this is going to cause problems. I’m hoping the peer server will share the same resources with the Peer it is running within (connections are singletons if I remember right), but there is nothing in the docs about running it this way, so I’m wondering if someone that has more “internal authority” can tell me if that is a “sane” thing to do.
TL;DR; Will a peer-server running within another Datomic Peer (application) share the database resources (if the peer server is providing API for the same db that the in-process Peer application is using)?#2019-01-0818:47marshall@tony.kay do you mean run the peer server and your peer application on the same instance as two separate JVMs?#2019-01-0818:48marshallor run them both in the same JVM?#2019-01-0818:48tony.kaysame JVM#2019-01-0818:49tony.kayliterally add the peer server stuff to classpath and run it on alt port from what else is running in JVM#2019-01-0818:50marshallinteresting
Never considered it. I suspect it will work, but it’s an unsupported configuration
Part of the reason for process isolation in Datomic is for reliability and problem isolation. If you run them together, then a problem in one will affect the other#2019-01-0818:50marshalli.e. a runaway query, memory issue, etc#2019-01-0818:54tony.kaysure…in this particular case the API server is going to be very lightly used at first…if it becomes more important, then of course spinning up another VM will make more cost sense.#2019-01-0818:54marshalldepending on your instance size/type I might consider at least running 2 separate JVMs#2019-01-0818:56tony.kaythat’s the thing…we’re already running 2 for the main app for redundancy…and since the peer server is going to be used very little, spinning up 2 more for it seems like a lot of peers from a provisioning cost perspective#2019-01-0818:56marshallbut in answer to your initial question, yes connections are shared and thread-safe#2019-01-0818:56tony.kayesp. since they are high-memory instances#2019-01-0819:09tony.kayThanks @marshall#2019-01-0819:14johanatandoes anyone have experience with/ tips for using a datomic instance as a substack of another CF template? is it possible?#2019-01-0820:27marshallCloud or On-Prem?#2019-01-0819:46johanatan#2019-01-0820:04markbastianIs index memory (IndexMemMb) in Datomic Cloud only used for generating indexes or for all index storage? Meaning, once all data has been transacted does it ever go down or does it grow with your data? As I write large amounts of data it flatlines at what looks like 1.1GB in the Cloudwatch mgmt console and then returns "Busy Indexing" failures from there on out. I'm wondering if I have a fixed limit on my index capacity at this point or if I can back off for a while and it will recover. 
The docs at https://docs.datomic.com/cloud/operation/monitoring.html#metrics lead me to believe that's where all the indexes are stored, but that seems like a pretty limiting factor on how much data datomic can store.#2019-01-0820:28marshallthe memory index works just like it does in Datomic On-Prem
The memory index holds novelty that has been transacted until it gets merged into the persistent disk index via an indexing job#2019-01-0820:33marshallYou should definitely implement exponential backoff#2019-01-0820:34marshallDatomic will return busy indexing messages until it has merged enough novelty into the persistent disk index to free up memory index space for more transactions#2019-01-0820:36marshallhttps://docs.datomic.com/on-prem/indexes.html#efficient-accumulation#2019-01-0822:05markbastianThanks for the info. Based on what you've said and what I read I think I need to wait longer for the in-mem indexes to be written to disk. Here's a plot of my current index db mem. When I hit the flat line at 1.1GB I reliably stop writing and wait for indexing. I have waited 10s of minutes for that to occur, though. Does that seem reasonable? How long should it take for the indexing process to catch up?#2019-01-0822:08markbastianFYI, the time span from about 19:40-20:05 was to write 100,000 records. The next run was an attempt to write another (separate) 100,000 records after waiting a few minutes in the hopes that indexing would stop.#2019-01-0822:25markbastianIs there a way to determine when indexing has completed? That seems to be my issue. Even an hour or so after my last attempt to transact if I try a new transaction it still reports the "Busy Indexing" anomaly.#2019-01-0914:13markbastianHere's some new data: I ran 2 async threads that write about 150 datoms/transaction. It was able to successfully write ~1.4 million records in a couple hours with very few indexing delays. 
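Marshall's exponential-backoff advice could be sketched roughly like this (assumes the Cloud client API; function name, retry limit, and delays are illustrative, not from the original discussion):

```clojure
(require '[datomic.client.api :as d])

;; Sketch: retry a transaction with exponential backoff when the
;; system reports :cognitect.anomalies/busy (e.g. during indexing).
(defn transact-with-backoff
  [conn tx-data {:keys [max-retries initial-ms]
                 :or   {max-retries 8 initial-ms 200}}]
  (loop [attempt 0]
    (let [result (try
                   (d/transact conn {:tx-data tx-data})
                   (catch clojure.lang.ExceptionInfo e
                     (if (= :cognitect.anomalies/busy
                            (:cognitect.anomalies/category (ex-data e)))
                       ::busy        ;; busy: fall through to retry below
                       (throw e))))] ;; anything else: rethrow
      (if (= ::busy result)
        (if (< attempt max-retries)
          (do (Thread/sleep (long (* initial-ms (Math/pow 2 attempt))))
              (recur (inc attempt)))
          (throw (ex-info "still busy after retries" {:attempts attempt})))
        result))))
```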
Thanks again for all the help in getting this to work.#2019-01-0914:17markbastianAnd for completeness, here's the IndexMemMb profile for the import.#2019-01-0914:30marshallIs this a solo system?#2019-01-0914:30marshallI have a couple of suspicions about why it behaves that way#2019-01-0914:31marshallif it is indeed solo, the initial waiting was probably due to waiting for dynamo db scaling - you can look at your ddb graph to see if that’s the case#2019-01-0914:31marshallthe other thing you can do to speed this up is to move to a production system#2019-01-0914:32markbastianThis is a prod system (i3.large).#2019-01-0914:33marshallok. then yes I think it’s just a matter of providing enough time for the system to do the indexing job(s)#2019-01-0914:34markbastianYeah, I saw the dynamodb scaling issue on a prior attempt to load the data. I think what I was seeing yesterday that got me confused was some extremely long wait times for the indexing jobs, but I think I was just doing way too many writes so it couldn't catch up.#2019-01-0820:08meanderingcodeI am new to clojure and datomic, and I am having the darndest time finding information about how to manage schema migrations.
The project I am playing with has a function init-database in a file dev/user.clj. This requires running (init-database) in lein repl, and is obviously not ideal for initializing a "production" database or running migrations on a "production" instance.
But my google-fu has not surfaced clear guides on how to manage deployments, initialization, migrations.
Do people put migration functions that auto-fire into their core code, like many frameworks in other languages? Is there a good script-based way to do it? I thought about the possibility to modify the code to allow something like lein run -m user --init-database, as i've seen some other functions in this project do [from a different file, env/dev/user.clj]#2019-01-0820:09meanderingcodeDoes anyone have advice or a simple post/guide that I just missed?#2019-01-0820:10lilactown@meanderingcode I think the 80+% case is you just transact the schema every time your app starts up#2019-01-0820:11lilactowntransacting the schema is idempotent; if the schema attributes already exist, nothing happens#2019-01-0820:12lilactownthere are a few operations that aren't idempotent. you should endeavor not to use them 🙂 if you do need them, then they're pretty special and you should write some special case code to handle them appropriately#2019-01-0820:15eraserhdWhat @lilactown said. We used a migration tool and it caused more pain than it was worth. We now just compute our schema from our internal description and transact it. This sometimes fails, but only when existing data doesn't fit the new schema shape. We fix the data manually and redeploy.#2019-01-0820:16johanatananyone? https://clojurians.slack.com/archives/C03RZMDSH/p1546974846429300#2019-01-0820:17lilactownI haven't done it#2019-01-0820:17johanatan:+1:#2019-01-0821:16chrisblomI've used CFN's export functionality to expose a datomic instance to other stacks#2019-01-0821:17chrisblomThis was with a custom CFN template though, not the one provided with Datomic#2019-01-0821:23chrisblomi'd recommend against using substacks for this#2019-01-0821:25chrisblomi'd prefer using Exports and Fn::ImportValue for sharing#2019-01-0820:26meanderingcodeThanks @lilactown @eraserhd.
Is there an example or doc about where and how to load that? Completely new to clojure, over here 🙂#2019-01-0821:24johanatan@meanderingcode it's one step in the "getting started" tutorial#2019-01-0822:00meanderingcode@johanatan That makes sense, i'm just not familiar enough with clojure to really understand the application lifecycle and where to call that.#2019-01-0822:03johanatan@meanderingcode call it on startup. You’re going to want your db object to be lazy initted. So call the schema update when your db is constructed #2019-01-0822:08meanderingcodeAlright. I think this is where it would go:
In src/clj/myapp/system.clj
(defn system [env]
  (component/system-map
    :conn
    (datomic/new-datomic
      (if-let [datomic-url (:datomic-url environ/env)]
        (str datomic-url "?aws_access_key_id=" (environ/env :datomic-access-key) "&aws_secret_key=" (environ/env :datomic-secret-key))
        (when true #_(= :dev env)
          (println "WARN: no :datomic-url environment variable set; using local dev")
          "datomic:)))
#2019-01-0822:12meanderingcodehmmm, maybe not#2019-01-0822:15meanderingcodeI might go on the next line as a function call, or maybe after the db connection is created in
src/clj/myapp/datomic.clj
(ns orcpub.datomic
  (:require [com.stuartsierra.component :as component]
            [datomic.api :as d]))

(defrecord DatomicComponent [uri conn]
  component/Lifecycle
  (start [this]
    (if (:conn this)
      this
      (do
        (assoc this :conn (d/connect uri)))))
  (DB SCHEMA TRANSACTION HERE)
  (stop [this]
    (assoc this :conn nil)))

(defn new-datomic [uri]
  (map->DatomicComponent {:uri uri}))
#2019-01-0822:16meanderingcodeI really appreciate the guidance. Clojure is my first lisp, other than simple changes to config in Emacs I am so new to.#2019-01-0822:26johanatanYes that’s right #2019-01-0822:26johanatanShould be fine there #2019-01-0822:28meanderingcodeThanks!
Looks like I got the parens wrong, I'm guessing. Take two off the preceding line and put them at the end of the transaction line?
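For what it's worth, a minimal corrected sketch of the component above, using the same requires as the snippet and a hypothetical schema var holding the schema tx-data (per the earlier advice that transacting schema is idempotent, so doing it on every start is safe):

```clojure
;; Sketch only: `schema` is a hypothetical var holding schema tx-data.
;; The schema transaction moves inside `start`, after connecting.
(defrecord DatomicComponent [uri conn]
  component/Lifecycle
  (start [this]
    (if (:conn this)
      this
      (let [conn (d/connect uri)]
        @(d/transact conn schema)
        (assoc this :conn conn))))
  (stop [this]
    (assoc this :conn nil)))
```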
Going to test when i get a minute#2019-01-0911:35Ben HammondI have a datalog query that returns 10e6 rows.
I wish to process these lazily
I've been using the sample datalog function to work on subsets of data at a time
but now I'm timing how long the sample takes and I'm not sure it's any faster than a simple (take 256 (d/q ...))#2019-01-0912:02Ben Hammondit's not any faster#2019-01-0913:05eoliphantq doesn't return a lazy seq, but datoms does#2019-01-0913:09Ben HammondI was attempting to adhere to https://docs.datomic.com/on-prem/best-practices.html#prefer-query_#2019-01-0913:09Ben Hammondbut now I've tried the datalog approach#2019-01-0913:10Ben HammondI shall get down and dirty with the datoms#2019-01-0913:12eoliphantyeah q is generally what you want to start with, but in your case, sounds like just traversing one of the indices might make more sense. Are you using on-prem or cloud? If it's the former, you can, say, filter the db, then grab your datoms#2019-01-0913:15Ben Hammondon-prem#2019-01-0913:15Ben Hammondooh I hadn't considered doing that. interesting#2019-01-0913:36m_m_mHi. I have a problem with my query: https://pastebin.com/bGRMKfxm it is really small. The error is: Caused by: java.lang.ClassCastException: mount.core.DerefableState cannot be cast to datomic.Connection I don't understand what is wrong with that. Any help? I am using datomic with the Luminus framework.#2019-01-0914:13Ben Hammondyou are passing something unexpected as the parameter conn ?#2019-01-0914:13Ben Hammondit needs to be a datomic.Connection#2019-01-0914:14Ben Hammondbut it is getting a mount.core.DerefableState
whatever that is#2019-01-0914:14Ben Hammondoh, it's probably a system state, isn't it#2019-01-0914:15Ben Hammondso you would need to pluck your datomic Connection value out of the mount Derefable state#2019-01-0914:16Ben Hammondor perhaps https://github.com/yogthos/clojure-error-message-catalog/blob/master/lib/mount/derefablestate-cannot-be-cast-to-ifn.md#2019-01-0915:58Joe LaneHi friends, when reading the docs (https://docs.datomic.com/cloud/ions/ions-reference.html#parameters-example) it states NOTE The datomic-shared prefix is readable by any Datomic system. If you want more granular permissions, you can choose your own naming convention (under a different prefix!), and explicitly add permissions to the IAM policy for your Datomic nodes. 
{
"Effect": "Allow",
"Action": [
"ssm:GetParametersByPath"
],
"Resource": [
"arn:aws:ssm:us-east-1:879742242852:parameter/datomic-shared/*"
],
"Sid": "DatomicSharedParameters"
}
]#2019-01-0916:09marshalland if you look at the <stack-name>-<region> IAM role you’ll see a similar permission ^ under get-parameters#2019-01-0916:09marshall"Statement": [
{
"Action": [
"ssm:GetParametersByPath"
],
"Resource": [
"arn:aws:ssm:us-east-1:879742242852:parameter/datomic-shared/*"
],
"Effect": "Allow"
}
]#2019-01-0916:09marshallnotice, scoped to ARN#2019-01-0916:09marshallall nodes will get a similar role#2019-01-0916:33ennI'm attempting to work through the Datomic on-prem tutorial here: https://docs.datomic.com/on-prem/tutorial.html. The example of d/transact throws a ClassCastException for me:
(d/transact
conn
{:tx-data [{:db/ident :red}
{:db/ident :green}
{:db/ident :blue}
{:db/ident :yellow}]})
If I get rid of the containing map and the :tx-data and just pass the vector of idents, it seems to work as expected. Am I misunderstanding something?#2019-01-0916:34matthavenerthat tutorial seems wrong to me#2019-01-0916:35matthavenerunless there’s some newer datomic on-prem API that accepts a map instead of tx data#2019-01-0916:37ennOK, that was my instinct as well.#2019-01-0916:37matthavenerI’m running ‘0.9.5561.50’ and I get the same ClassCastException#2019-01-0916:37ennThank you.#2019-01-0916:37ennYes, I'm running 0.9.5661.#2019-01-0916:38matthavenerI think that must be a copy/paste error from a tutorial written for the client API https://docs.datomic.com/client-api/datomic.client.api.html#2019-01-0916:39ennahh#2019-01-0916:42marshallyes, that’s client syntax#2019-01-0916:42marshalli’ll fix the example. thanks for catching it#2019-01-0916:42marshalloh.
actually that tutorial is intended for use with the client lib#2019-01-0916:43marshallbased on the prior page that gets you connected with a client#2019-01-0916:43marshallhttps://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html#2019-01-0916:43marshalla similar walkthrough for using the peer API can be found here: https://docs.datomic.com/on-prem/peer-getting-started.html#2019-01-0916:43marshall@enn ^#2019-01-0916:44matthavenerthe confusion probably came from the URL which contains “on-prem” https://docs.datomic.com/on-prem/tutorial.html#2019-01-0916:44marshallOn-prem supports both client and peer API access#2019-01-0916:44matthavenerooh, didn’t know that 🙂#2019-01-0916:44ennAh, yeah, I just googled "datomic on-prem tutorial" so I didn't see the previous page.#2019-01-1016:59m_m_mI've spent so much time on that. What is wrong with that query: https://pastebin.com/TvgmasTw my error is: (q__36748 ?block_nr), message: processing clause: [ :blocknr ?block_nr], message: :db.error/not-an-entity Unable to resolve entity: :item/block_nr what is wrong with that entity? I just want a simple aggregation.#2019-01-1017:08marshallthe error suggests that :item/block_nr is not an installed attribute name in your database#2019-01-1017:08marshall@m_m_m ^#2019-01-1017:24m_m_m@marshall which is strange because my schema is: https://pastebin.com/CUwY6JqP so there is :item/block_nr 😞#2019-01-1017:27marshallcan you pull it or query for it?
i.e. (d/pull (d/db conn) '[*] :item/block_nr)#2019-01-1017:31marshalluser=> @(d/transact conn [{:db/ident :item/block_nr :db/valueType :db.type/string :db/cardinality :db.cardinality/one :db.install/_attribute :db.part/db}])
{:db-before #2019-01-1017:31marshall^ I’d suspect that your schema didn’t get transacted for some reason#2019-01-1017:33marshall{:db/ident :item/amount
:db/valueType :db.type/number
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}#2019-01-1017:33marshall:db.type/number is not a valid db type @m_m_m#2019-01-1017:34marshallhttps://docs.datomic.com/on-prem/schema.html#required-schema-attributes#2019-01-1017:34m_m_mThank you!#2019-01-1017:34m_m_mwhat a stupid mistake!#2019-01-1017:34marshallhappens to all of us 🙂#2019-01-1018:22apseyIs there an official docker image for the Datomic Transactor from Cognitect?
Has anyone used this one? https://hub.docker.com/r/pointslope/datomic-pro-starter/dockerfile/
Use case is: running transactors inside kubernetes to avoid running standby transactors (half the cost without the ec2 boot time) and better embrace resource usage within kubernetes (having less idle resources).#2019-01-1201:08AndresAnybody know how to use missing? with a reverse lookup? Is this supposed to work?
[(missing? $ ?l :account/_licenses)]#2019-01-1201:10AndresAlso, is there a reason why I can't use 2 missing? statements in an or?
(or [(missing? $ ?a :account/licenses)] [(missing? $ ?a :account/email)])#2019-01-1201:11AndresError CompilerException java.lang.RuntimeException: Unable to resolve symbol: $ in this context#2019-01-1201:15jdkealy@andres paste the full query#2019-01-1201:15Andres(d/q '[:find [?a ...]
:in $
:where
(or [(missing? $ ?a :account/licenses)]
[(missing? $ ?a :account/email)])]
db)
#2019-01-1201:21Andresah, @jdkealy are you referring to the Error or missing? reverse lookup?#2019-01-1201:27AndresI would like to use the reverse lookup to find the transitions (?t) that do not have a current license (?l).
[?t :license-transition/current-license ?l]
(d/q '[:find [?l ...]
:in $
:where
[?l :member-license/status _]
[(missing? $ ?l :license-transition/_current-license)]]
db)
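As far as I know, reverse-attribute syntax (:ns/_attr) works in pull patterns and entity navigation but is not supported inside missing? or other query clauses. An untested sketch that expresses the same intent with not-join, using the attribute names from the question:

```clojure
;; Licenses with no transition pointing at them, via not-join instead of
;; a reverse-attribute missing? clause.
(d/q '[:find [?l ...]
       :where
       [?l :member-license/status _]
       (not-join [?l]
         [_ :license-transition/current-license ?l])]
     db)
```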
#2019-01-1217:05AndresCan anybody help?#2019-01-1217:13marshall@andres you don't need reverse lookup in query, just turn the clause around #2019-01-1217:14AndresHow would you do that with missing??#2019-01-1217:16AndresSorry, I don't quite see it. Do you mind showing me with an example or use my own query above?#2019-01-1217:17marshallI'm not sure what you're trying to do. I'd need to understand the data model and entity relationships #2019-01-1217:20AndresI would like to use the reverse lookup to find the transition entities (?t) that do not have current license entities (?l).
Ex. Relationship [?t :license-transition/current-license ?l]
Desired query. Is this a valid way to use missing?
(d/q '[:find [?l ...]
:in $
:where
[?l :member-license/status _]
[(missing? $ ?l :license-transition/_current-license)]]
db)
Does this clear up the model and its relationships?#2019-01-1217:22marshall[(missing? $ ?t :license-transition/current-license)]#2019-01-1217:22marshallFind all entities that don't have that attr set ^#2019-01-1217:23AndresAh#2019-01-1217:24AndresCool, what about the second issue#2019-01-1217:24marshallThen do whatever additional joins you need to narrow down those entities#2019-01-1217:24marshallWhat's the second issue?#2019-01-1217:41AndresA query with or with two missing? brings up an error.
Account entities have license entities and an email string.
(d/q '[:find [?a ...]
:in $
:where
(or [(missing? $ ?a :account/licenses)]
[(missing? $ ?a :account/email)])]
db)
Error CompilerException java.lang.RuntimeException: Unable to resolve symbol: $ in this context#2019-01-1217:55Andres@marshall sorry it took me a bit, was going through airport security.#2019-01-1218:18AndresIs there a reason why I can't have two missing??#2019-01-1218:18AndresIn an or#2019-01-1218:24jaihindhreddy^ Shouldn't the (or ...) be wrapped in a vector?#2019-01-1218:29AndresHmm, I've usually done without.#2019-01-1218:31AndresDoesn't seem like you need a vector.#2019-01-1218:36AndresHmm, might be because or implicitly targets the db $, and missing? gets confused??? (guessing here) Similar to how you can't have embedded anonymous functions that use %.
Not sure how that would be resolved though...#2019-01-1219:39marshallI’ll have to look into that one next week#2019-01-1219:39marshallit may be related to the implicit $ though#2019-01-1220:12benoit@andres Not sure why the query returns an error but but you will need to bind ?a to something outside of the or anyway and it looks like it is working with or-join:
(d/q '[:find [?a ...]
:where
[?a :account/id]
(or-join [?a]
[(missing? $ ?a :account/licenses)]
[(missing? $ ?a :account/email)])]
db)
I bind ?a with :account/id but you should replace this clause with whatever you need to only return account entities.#2019-01-1221:25AndresCool! I'll take a look at or-join and try it out. Thanks!#2019-01-1302:22lilactownIn a talk a couple years ago, David Nolen talked about work on a Client API library for JavaScript. I’m guessing that was discontinued?#2019-01-1312:20dmarjenburghDue to account restrictions in our company, we are not allowed to create our own VPCs. There is one provided VPC which we must use. Is there a way to set up datomic cloud inside of this existing VPC?#2019-01-1315:05Oleh K.I'm getting an error when I try to connect to Datomic GUI, does anybody know how to repair it?
Console started on port: 8080
dev = datomic:
Open in your browser (Chrome recommended)
java.lang.NullPointerException
at clojure.core$namespace.invokeStatic(core.clj:1600)
at clojure.core$namespace.invoke(core.clj:1594)
at clojure.core$comp$fn__6823.invoke(core.clj:2542)
at clojure.core$group_by$fn__9430.invoke(core.clj:7031)
#2019-01-1315:07Oleh K.the error occurs when I select my database from the list in the GUI#2019-01-1321:45jaret@okilimnik what version are you on? Do you still see this error with the latest? You can copy the full exception out from the logs (in the datomic/logs directory if you’re running local).#2019-01-1411:00Oleh K.It happens with the latest Datomic, too (0.9.5786). I've attached the log#2019-01-1411:01Oleh K.I've removed some sensitive data from the log#2019-01-1411:14Oleh K.the full error#2019-01-1413:20Oleh K.I've found that the reason of the error is this type of migration:
(def foo-bar-v1
[{:db/ident :foo/bar
:db/valueType :db.type/ref ;;any type
:db/cardinality :db.cardinality/many
:db.install/_attribute :db.part/db}])
(def retract-foo-bar
[[:db/retract :foo/bar :db/ident :foo/bar]])
(def foo-bar-v2
[{:db/ident :foo/bar
:db/valueType :db.type/string ;;new type
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}])
How to repair the database now?#2019-01-1411:46m_m_mHi. I'm trying to find some datomic benchmarks (on real cases). I am wondering how fast I can write messages into datomic (writes/s). I can only find "it is not so fast at writing", "writing is not the strongest point of datomic"...cool, but is this closer to 10 writes/s or 10,000 writes/s compared to other databases?#2019-01-1413:21dustingetzRule of thumb is keep database size smaller than 10 billion datoms, which if you reach that in a year is 317 datoms per second. At that rate you are very likely to be read constrained, not write constrained#2019-01-1413:22dustingetz“datomic is not fast at writing” lacks sufficient nuance to be useful at all#2019-01-1413:25dustingetzIf you transact 10k datoms per second, you’ll have 10 billion datoms in 10 days#2019-01-1413:31m_m_mWow. Thank you. Now I see.#2019-01-1415:37Joe Lane@m_m_m You’re not finding the results because of a few different reasons:
1. It's against the license
2. writing of different sizes, shapes, constraints, batches of data will yield different results.
3. It’s harder to compare to other systems due to query groups allowing for reads to come for free and letting your primary groups focus exclusively on writes.
My understanding with datomic cloud is that if you run into a write performance constraint you can crank up the knob on dynamodb’s write throughput and use an i3.xlarge as your primary group.
When to do that and what it will produce I can’t answer, as I haven’t had to do that.
Can I ask what the usecase is?#2019-01-1415:41m_m_m@lanejo01 Sure, I would like to store items from multiple games. Those items have content and state which can be changed in time (the same with owner). Datomic can give me the whole history of each of those items which is nice....but the more important feature is easy access to the newest state of each of those items. Hmm, is there a huge difference between the performance of the pro version and the free version of datomic?#2019-01-1415:43Joe LaneWell Free version of datomic is, I believe, in memory and on-prem. I was referring to datomic cloud.#2019-01-1415:43m_m_mHow about performance of the "on-prem" version?#2019-01-1415:50Joe LaneMy 1 and 2 are still relevant, but 3 falls off and so does the i3.xlarge/knob-cranking. I don’t have the experience with on-prem. Is the “access the whole history of each item” part of the game or just a development convenience?#2019-01-1416:03m_m_mdevelopment convenience. There should be a possibility to see history of the item, price changes, owner changes, stats changes in time 🙂#2019-01-1418:21eraserhdDoes anyone here have an on-prem Dockerfile, maybe with sqlite jdbc backend for CI from a known-good database? #2019-01-1418:55johnj@m_m_m you shouldn't rely on datomic's history features to model your domain, it's really mostly just an audit thing, add your own time as needed#2019-01-1419:01johnjhttps://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2019-01-1421:36m_m_mThank you. Now I can see it more clearly. So each time when I would like to change one param in my item I have to create a new item with each of my params and change one of them? That's a lot of unnecessary data.#2019-01-1419:24lilactown@lockdown- I'm a bit confused by the author's "revision entities" example#2019-01-1419:24lilactownhow does it route around the fact that the tags don't exist in the previous versions?#2019-01-1419:28val_waeselynckAuthor here. 
In the example, the tag would have been retroactively added to the previous revision when the intern backfilled the data manually. Of course, that's a business decision from the blog platform, they could have chosen not to backfill the tags.#2019-01-1419:40lilactownhm OK. so walk me through what this does.
- finds the version-t. how is this different than t? what is version-t supposed to represent?
- finds version-eid. Is this the eid as-of the correct version for time t?#2019-01-1419:41lilactownand then finally, the assumption is that the intern has gone back not only to the entity, but every previous version of the entity, and added the tags?#2019-01-1419:51val_waeselynck> finds the version-t. how is this different than t
version-t is a domain-specific timestamp, representing the creation time of an Article Revision - nothing to do with Datomic's t.
version-eid is the Entity Id for the Article Revision.
Keep in mind we're not using Datomic's historical features at all in this code, that's the whole point - no as-of, no t, etc.#2019-01-1420:06lilactownso each "revision" is its own entity; when you make a change to the latest revision, would you transact a new entity? or do you transact against the existing entity?#2019-01-1420:21val_waeselynckNormal usage would mean transacting a new Entity (after all, the whole point of a revision is to be immutable); Revisions would typically be updated in data migrations.#2019-01-1420:22lilactownOK, I see. I was still stuck on using datomic's history API for tracking non-breaking modifications 😵 that explains it.#2019-01-1420:22lilactownthank you!#2019-01-1419:24lilactownmaybe I'm just not reading the queries correctly#2019-01-1421:39dazld(def con! (d/connect "datomic:))
Syntax error (ClassNotFoundException) compiling at (form-init3668295776313994723.clj:1:11).
com.amazonaws.services.dynamodbv2.model.PutItemRequest
^ any ideas what the issue is?#2019-01-1421:39dazldguess i’m missing a dependency somewhere#2019-01-1421:40dazld(using a plain deps.edn with just datomic pro in there)#2019-01-1421:42gwsyou probably need the dynamodb dependency from the AWS Java SDK, like this: https://docs.datomic.com/on-prem/storage.html#dynamodb-aws-peer-dependency#2019-01-1421:45dazldthanks gws, that was exactly it#2019-01-1509:51olivergeorgeShould it be possible to use the aws cloudformation cli to spin up a datomic solo stack? I've not had much luck trying.#2019-01-1509:51ChrisDoes Datomic on-prem support Java 11? This page says not but I find that surprising: https://docs.datomic.com/on-prem/get-datomic.html#2019-01-1513:37marshall@cbowdon not yet. we’re working on supporting newer java. hopefully in a release soon#2019-01-1513:49ChrisThanks 🙂#2019-01-1514:23marshall@olivergeorge it should be, although I’ve never tried it myself#2019-01-1522:19olivergeorgeThanks. My exploration was somewhat chaotic. I'll try again. Would be nice to have a "one step solo/dev setup" of an app.#2019-01-1523:45olivergeorgeYeah, works fine. I'd managed to screw up my aws account somehow.
aws cloudformation create-stack --stack-name MYAPP --template-body --parameters --capabilities CAPABILITY_NAMED_IAM
With params of
[
{"ParameterKey": "KeyName", "ParameterValue": "MYAPP"},
{"ParameterKey": "EnvironmentMap", "ParameterValue": "{:env :dev}"}
]
#2019-01-1615:19marshall👍#2019-01-1622:35olivergeorgeIn case it's of interest. I'm able to reproduce the problem I had before.
As a developer, I want to be able to set up and tear down datomic stacks, including reusing an old stack name.
Steps to repeat:
• aws cloudformation create-stack --stack-name MYAPP ... (as above)
• (wait)
• aws cloudformation delete-stack --stack-name MYAPP
• (wait)
• aws cloudformation create-stack --stack-name MYAPP ... (as above)
Expect: stack comes up cleanly
Actual: stack creation fails, storage-related event reports: Embedded stack arn:aws:cloudformation:ap-southeast-2:826491830380:stack/actinium-StorageF7F305E7-GDBDL32025UD/d9de1d70-19dd-11e9-b73f-0a7bf1960652 was not successfully created: The following resource(s) failed to create: [DatomicCmk, CatalogTable, FileSystem, LogGroup, LogTable].#2019-01-1622:36olivergeorgeIt's a slow process so hopefully I'm not leading you astray.#2019-01-1714:09marshallYou’d have to check the “keep existing storage” the second time you create#2019-01-1714:09marshallwhich i assume you can pass as a parameter, but not sure how#2019-01-1714:09marshallerr. reuse storage#2019-01-1714:09marshallwhatever it’s called#2019-01-1714:09marshallalso, is this using the master template or are you launching storage and compute separately?#2019-01-1802:52olivergeorgeOkay, thanks. I suspect there is still something buggy in the delete-stack side of things. Another case which seemed repeatable was (create stack A, delete stack A, create stack B) but the error was different that time. Not hurting me enough to fight anymore as it requires recreating a fresh account each time it gets stuck.#2019-01-1802:52olivergeorgeI was using the solo master template.#2019-01-1802:53olivergeorgeThanks for the tip about needing to specify when reusing existing storage. I'll read up.#2019-01-1516:58rapskalianAttempting to delete a solo CloudFormation stack, and it’s stuck trying to delete resource with logical ID AvailabilityZones of type Custom::ResourceQuery. I’m afraid I may have tampered with something I shouldn’t have while it was in progress…any tips for manually deleting this resource?#2019-01-1517:25rapskalianI may be doomed to waiting a few hours for the stack to fail to delete:
https://forums.aws.amazon.com/thread.jspa?threadID=234657#2019-01-1517:30kennyWhy does Datomic pull return cardinality many values as a vector instead of a set?#2019-01-1519:10faviladepending on the sub-pull, the entries may not be unique#2019-01-1519:10favilayou get one entry per entity#2019-01-1519:21favila@kenny (d/pull d [:db/id {:my-many-ref-attr [:db/id :my-not-unique-str-attr]}] 17592277488932)
=>
{:db/id 17592277488932,
:my-many-ref-attr [{:db/id 17592277488933,
:my-not-unique-str-attr "not-unique-value"}
{:db/id 17592277488934,
:my-not-unique-str-attr "not-unique-value"}]}
(d/pull d [:db/id {:my-many-ref-attr [:my-not-unique-str-attr]}] 17592277488932)
=>
{:db/id 17592277488932,
:my-many-ref-attr [{:my-not-unique-str-attr "not-unique-value"}
{:my-not-unique-str-attr "not-unique-value"}]}
#2019-01-1519:22favilaif it were a set, the second result would not be possible#2019-01-1519:23kennyOh, that only applies for ref attributes, correct?#2019-01-1519:27favilayes#2019-01-1600:01marcolHow do we use the push command for datomic.ion.dev through lein?#2019-01-1604:16marcolSo once the repl is running, I manage to call the push function in datomic.ion.dev, (datomic.ion.dev/push {}), although I'm getting the following error IOException CreateProcess error=2, The system cannot find the file specified java.lang.ProcessImpl.create (ProcessImpl.java:-2)
Not sure how to debug this and what could be wrong, has anybody ever encountered this?#2019-01-1611:18ChrisIs there a way to define which bits of a schema a peer server can use? It looks like filters (https://docs.datomic.com/on-prem/filters.html) could be used for this but it doesn't seem to be their intended use case. Is there a better option? The best practices guide briefly mentions aliasing schema elements for different applications, is that a better direction?#2019-01-1615:14marshall@marcol ions require the use of a deps.edn project#2019-01-1615:20marshallhttps://docs.datomic.com/cloud/ions/ions-reference.html#requirements#2019-01-1615:43dustingetzAre reverse navigations always cardinality many#2019-01-1615:49dustingetzunless they are :unique#2019-01-1616:07favila@dustingetz isComponent#2019-01-1616:08favilaisComponent implies a 1-1 relationship, so the reverse of an isComponent attr will be cardinality-one#2019-01-1616:08dustingetzah yes#2019-01-1616:08favilayou're right though, in theory a unique ref should have the same property#2019-01-1616:09favilaI wonder if every isComponent should also assert unique just to get the extra constraint checked#2019-01-1616:10favilanm, a unique ref doesn't imply that no other attr points to the entityvalue#2019-01-1616:10dustingetzis unique ref a thing?#2019-01-1616:10favilasure#2019-01-1616:10Ben Hammondit is possible to transact a ref to someone else's component entity#2019-01-1616:11Ben Hammondalthough frowned upon#2019-01-1616:11favilait's a constraint that datomic doesn't enforce#2019-01-1616:12favilaif you don't enforce it yourself retractEntity behavior may surprise you#2019-01-1616:13dustingetzYeah. 
so unique scalars aren’t refs so you can’t pull them backwards; unique refs are :many; so it’s just component refs that are :card/one#2019-01-1616:17dustingetzThank you#2019-01-1616:40favila@dustingetz I think card-many isComponent will also have reverse-ref card-one#2019-01-1616:41favilaif you have extra they will just be dropped#2019-01-1616:41favila(using d/entity or d/pull api)#2019-01-1617:57mssquick q re: https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/shutdown#2019-01-1617:58msswhat does will release Clojure resources mean specifically? kill the process with the peer? or just terminate things like the core async thread pool?#2019-01-1617:59favilaafaik it just calls shutdown-agents#2019-01-1617:59favilaso the agent thread pool is killed#2019-01-1617:59favilaunless you have another non-daemon thread running your process will probably die after that#2019-01-1618:00mssmakes sense. thanks for the clarification#2019-01-1618:00favilaI think (shutdown true) is mostly for java users of datomic#2019-01-1618:01favilawho don't know/care about the clojure system and just want all of it to shut off#2019-01-1618:56tjgGiven a datomic DB or schema, what do people use to generate an entity-relationship diagram? escherize/drytomic, or perhaps Hodur can be used this way?#2019-01-1619:40lilactownI tried hodur but it was a bit too opinionated/simple for my taste#2019-01-1619:41lilactownI tried it specifically because I wanted to get diagrams for free 😛#2019-01-1622:30dpsuttonsetting up schema at work. i was prototyping with :db/valueType ref but now i prefer to go to string. The docs say there is no way to do this. What should i do at this point? Must I come up with a new name for my attribute? Can i do some renaming shenanigans? 
This is all on our test db#2019-01-1622:31favilarename attribute to something else (or retract its db/ident completely) then make a new attribute#2019-01-1622:33favilaonce you make an attribute you cannot change its type or remove it#2019-01-1622:34favilayou also cannot excise attributes#2019-01-1622:34dpsuttonwhat's the difference between retracting and excising#2019-01-1622:34favilaretracting adds a retraction datom#2019-01-1622:34favilaexcising deletes old datoms#2019-01-1622:36dpsuttoni see. thanks#2019-01-1710:49jaihindhreddy
Excision is a painful surgery that changes the past (not without trace. Datoms about excisions are un-excisable and are retained)
#2019-01-1711:45jarppeI'm looking for a way to query the most recent transaction that has changed a specific entity. What would be the best way to achieve this?#2019-01-1711:53jarppewhen I make transactions, I add user id and other information to the tx, and then I'd like to show "Last changed by <user> at <time>" on frontend#2019-01-1712:00jaihindhreddy@jarppe Like this?
[:find ?user ?updated-at
:in $ ?e
:where [?e _ _ ?t]
[(max ?t) ?last-txn]
[?last-txn :user/id ?user]
[?last-txn :db/txInstant ?updated-at]]
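Note that in a :where clause, [(max ?t) ?last-txn] is evaluated as a plain function call per binding rather than as an aggregation, so the query above would yield every transaction instead of the latest one. A working sketch (peer API, untested; assumes transactions are annotated with :user/id as described) moves the aggregate into the :find spec, since the fourth position of a datom pattern is already the transaction entity id:

```clojure
;; Find the newest transaction that touched entity e directly,
;; then pull the reified-transaction attributes from it.
(defn last-changed [db e]
  (let [tx (ffirst (d/q '[:find (max ?tx)
                          :in $ ?e
                          :where [?e _ _ ?tx]]
                        (d/history db) e))]
    (d/pull db [:user/id :db/txInstant] tx)))
```

Querying (d/history db) also catches transactions that only retracted datoms on the entity.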
#2019-01-1712:03jaihindhreddyBy "changed a specific entity", I hope you mean, changed an attribute of the entity directly, and not transitively changed something owned by an entity.#2019-01-1712:05jaihindhreddyIf you're looking at something like "changed an attribute on this entity, or another entity that this entity owns" for some definition of owns, take a look at this talk by the folks at Nubank.
EDIT: The above does not work. I was mistaken. Sorry for the noise.#2019-01-1712:13jarppeThanks @jaihindh.reddy, that's exactly what I'm looking for#2019-01-1712:28jaihindhreddyBTW, a side note, thanks a lot for the talk about lupapiste. Is the "state machines with guards" approach documented or available as a library somewhere?#2019-01-1713:51favilaDid you test this? I’m pretty sure that “max ?t” is not doing what you think it is. You can’t perform aggregations in datalog where clauses#2019-01-1815:27jaihindhreddyWow. didn't test it. Thanks for clarifying this.#2019-01-1815:39favilayou can either introduce a nested query or you can process the results#2019-01-1712:13jarppeDo you have any idea what's the performance with (max ?t)?#2019-01-1712:27jaihindhreddyI just set up my ions recently. Am a Datomic neophyte. Never used at work or in anger.
So sorry, no idea 😄#2019-01-1713:23m_m_mI have a case: I have an item whose state is changed by a series of events. For example state = amount of items. Then I get an update that the state is equal to 7 (for example it was 10) cool...that update was with timestamp 1234, next I am getting the next event which should have happened earlier, so the state was 8 with timestamp 1233. Is it possible to put something in the past of my item? I would like to put 8 before 7 without changing 7 as my actual state value.#2019-01-1713:24m_m_mI understand that I can not remove anything from Datomic database (which is super cool) but can I add new states as a past of my actual state? 🙂#2019-01-1713:48Ben Hammondit is only possible to assert new data#2019-01-1713:48Ben Hammondit is not possible to assert old data#2019-01-1713:50Ben Hammondit might be a clue that you need to model 'data insertion as-of' as a data item in your domain#2019-01-1713:50Ben Hammondrather than expecting to use the datomic :db/txInstant timestamp, which records the actual time the data was transacted into datomic#2019-01-1713:52Ben Hammond(I don't have a time machine - time only moves forwards for me. I assume that's the same for most people here)#2019-01-1715:50m_m_mIt is a little bit complicated. Those events are from a third-party API. It is possible that a "cancel" event which is cancelling an offer will be delivered faster than a "trade" event (last trade before "cancel"). At the end I have to have all of those "change state" events in the right order in my db because I have to render a list of active orders with their actual state. But at the end I can ask datomic to give me only actual orders with the good state asking for the "last" state from each of my items?#2019-01-1715:53Joe LaneYou need to model time as a first class attribute on your “trade”s. There are two different time models here. There is the logical time (datomic’s time) and then there is temporal time (information about your domain).
This will allow you to add facts about things that happened in the past.#2019-01-1715:54Joe LaneDatomic’s logical time is extremely helpful for debugging, operations, and verifying why decisions were made.#2019-01-1715:55Joe LaneI’ve learned not to try and gather domain information from the mechanics of how datomic stores the information. You could consider them two different concerns.#2019-01-1716:00m_m_mGreat. Now I see. Thank you @U0CJ19XAM and @U793EL04V!#2019-01-1720:12lwhortonthis helped me wrap my head around it https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2019-01-1803:20joshkhi've created a blog post about how I handle routing in the context of Datomic Ions and AWS API Gateway. for anyone interested, am i missing something? https://medium.com/@rodeorockstar/datomic-ions-aws-api-gateway-and-routing-d20a1bb086dd#2019-01-1803:21joshkhit's meant to pickup where the Datomic Ions Tutorial stops#2019-01-1807:46fdserrhi there, is this very bad?
WARNING: requiring-resolve already refers to: #'clojure.core/requiring-resolve in namespace: datomic.common, being replaced by: #'datomic.common/requiring-resolve
and how can I resolve the conflict (using deps, not lein)
TIA#2019-01-1814:02alexmillerno, it’s not bad (that’s why it’s a warning)#2019-01-1814:03alexmillerI expect a newer version of something on the datomic side probably fixes it, but I’d defer to someone from the datomic team to verify that#2019-01-1816:54johnjI think requiring-resolve was introduced in 1.10, hence the name clash#2019-01-1817:25alexmilleryes, but name clashes are quite intentionally not a bug#2019-01-1817:26alexmillerso nothing is broken here, but newer versions can or do either silence the warning (by intentionally excluding it) or by switching to the version now in Clojure#2019-01-1817:27alexmiller(I’m not sure which of those potential actions has already been taken)#2019-01-1819:09jjfineis there a logical/performance difference between these two:
[(missing? $ ?foo :bar)]
(not [?foo :bar])
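For context on the two forms (a sketch, not an answer from the thread; attribute names are hypothetical), both express the same absence test when the entity var is already bound:

```clojure
;; Both queries find entities that have :person/name but no
;; :person/email (illustrative attributes only).

;; Predicate form: missing? is called with the db source and entity.
(d/q '[:find ?name
       :where
       [?p :person/name ?name]
       [(missing? $ ?p :person/email)]]
     db)

;; Negation form: not removes bindings that satisfy the inner clause.
(d/q '[:find ?name
       :where
       [?p :person/name ?name]
       (not [?p :person/email])]
     db)
```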
#2019-01-1819:54mdhaneyWith Datomic Cloud, if I need to fail/abort a transaction (i.e. validation failed) within a transaction function, can I just throw an exception? That's how it works with on-prem, but the docs for Cloud don't mention anything about aborting transactions from within a transaction function.#2019-01-1820:48Joe Lane@mdhaney yup#2019-01-1823:02apseyIs there an official docker image for the Datomic Transactor from Cognitect?
Has anyone used this one? https://hub.docker.com/r/pointslope/datomic-pro-starter/dockerfile/
Use case is: running transactors inside kubernetes to avoid running standby transactors (half the cost without the ec2 boot time) and better embrace resource usage within kubernetes (having less idle resources).
cc @marshall#2019-01-2113:48marshallThere is not an official docker image.#2019-01-1904:23dpsuttonin a query, i want to find things by year and month and i plan to pass this to frequencies. here ?start is an instant. Can someone help me unify ?date-string with the string concatenation of the year and month? I'm stumbling for some reason
(d/q '[:find ?date-string
:in $ ?encounter
:where
[?encounter :fhir.Encounter/period ?period]
[?period :fhir.Period/start ?start]
[?date-string (str (.getYear ?start) "-" (.getMonth ?start))]]
db encounter-id)
#2019-01-1904:29dpsuttonif anyone has the same problem, its because i was doing way too much work in one unification clause. split the get years and string concatenation and bob's your uncle#2019-01-1905:23favilaYou have the parts of the clause backwards#2019-01-1905:24favila
[(str (.getYear ?start) "-" (.getMonth ?start)) ?date-string]
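Applied to the original query, and split into single calls per clause (the split dpsutton reported working), that might look like the following sketch:

```clojure
;; Sketch: in a function expression the call comes first and the
;; output binding last; each call gets its own clause.
(d/q '[:find ?date-string
       :in $ ?encounter
       :where
       [?encounter :fhir.Encounter/period ?period]
       [?period :fhir.Period/start ?start]
       [(.getYear ?start) ?y]
       [(.getMonth ?start) ?m]
       [(str ?y "-" ?m) ?date-string]]
     db encounter-id)
```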
#2019-01-1905:26favilaAlso you will need a :with if you want a count of dates. (Remember the results are a set unless you use with)#2019-01-1905:27favilaOr you can do this instead:#2019-01-1905:27favila:find ?date-string (count ?encounter-id)#2019-01-1905:27favilaNow you don’t need frequencies#2019-01-2023:30okocimI don’t quite understand the tradeoffs between creating enumerations as :db/idents then accessing them as attribute-refs vs. creating attributes of :db.valueType/keyword. I saw a blanket statement in the docs to model enumerations as refs (i.e. the first way I’m talking about), but
I don’t really understand any tradeoffs that this might create. I was wondering if anyone might know when using idents would be better than using keywords or vice-versa. Or, if someone could point me at some further reading. At the end of the day, I’ll stick to the published advice, but I’d like to be able to reason more about my schema choices.
Typical of enumerations, these are low-cardinality attributes that will be on many, many entities.#2019-01-2101:48timhttps://forum.datomic.com/t/enums-vs-keywords/356#2019-01-2122:17johnjDatomic is a slow DB use mostly for low load internal apps, using ident for enums improves performance by the way of reducing memory and storage use.#2019-01-2122:44okocimThanks for the reading material. The discussion makes sense. I didn't notice any appreciable difference in perf between the two approaches (take that with a grain of salt, it was for my specific use case), so I ended up going with the keywords because I plan on adding more values in the future, and don't see the need to require a schema change to do so in my case.
Still, I do appreciate the help that folks have given me to gain better understanding. #2019-01-2122:53tim> using ident for enums improves performance by the way of reducing memory and storage use.
I don’t see how. Idents still store dbids per entry.#2019-01-2122:58johnj@UF1LL8Y95 https://docs.datomic.com/cloud/best.html#idents-for-enumerated-types#2019-01-2123:02timYeah I read that, but how is storing a keyword any different memory wise than storing a pointer? you still have to store something per entry. That was the question I was posing in the original post, but no one from the datomic team bothered to respond.#2019-01-2123:21tim@U4R5K5M0A it’s probably worth reading the last post by anton here: https://groups.google.com/forum/#!topic/datomic/KI-kbOsQQbU#2019-01-2123:45johnjfair, this needs more info from them#2019-01-2123:48johnjI don't think they answer stuff like this in detail to encourage users to buy support.#2019-01-2123:48johnjfrom what I have seen#2019-01-2214:27marshallall idents are stored in memory#2019-01-2214:28marshallin particular#2019-01-2214:29marshall“Idents are designed to be extremely fast and always available. All idents associated with a database are stored in memory in every Datomic transactor and peer.
”#2019-01-2214:29marshallhttps://docs.datomic.com/on-prem/identity.html#idents#2019-01-2214:29marshallif you’re using Cloud, all nodes in your compute groups and query groups (instead of transactor and peer)#2019-01-2216:06tim@U05120CBV fair enough. I’m not sure how different it would be performance wise when you consider the jvm hotspot internals against cached data. I think it’s fair to say enums are optimal, but the question was really how much so… if it’s a nominal difference then enums don’t look too good when they require schema changes for additions and lack cardinality support.#2019-01-2216:07marshallthere are tradeoffs of both approaches; i suspect the perf difference is quite minimal; sometimes it’s nice to have your “enumerated options” live in the same place as your data model definitions (schema); sometimes it’s nice to be able to build up more complex data structures around/about your enums (i.e. more attrs on them);
on the flip side, not dealing with refs can be more straightforward (i.e. just have a kw value)#2019-01-2212:27jeroenvandijkA difference between enumerations and keywords are that the enumeration keyword is only saved once (in the schema), where as a keyword is saved on every transaction (and giving you the option of freeform input). So if you have a limited set of keywords and you know them in advance you should use enums for better read, write and storage behaviour (so pointer to keyword in the schema = 1 datom VS pointer to keyword and keyword = 2 datoms)#2019-01-2212:30jeroenvandijk@lockdown- I'm curious where you got this from Datomic is a slow DB use mostly for low load internal apps ?#2019-01-2213:19mishayeah, slow and low need something to be contrasted with. And why internal?#2019-01-2213:59tim@jeroenvandijk > a keyword is saved on every transaction
so is the entity-id for each enum reference pointer. the question, as I understand it from antons post, is not about disk space, as we know enums means we store less data. it’s not about transactions, because we know there’s a write either way. it’s about reads, memory footprint and performance. I could care less about storing the extra data to disk.#2019-01-2214:02jeroenvandijkI think the difference comes down to:
1. the enum is indexed, the keyword is not (by default)
2. the enum is one datom read, the keyword is two
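For reference, a minimal sketch of the two modelings being compared (hypothetical attribute names, shown as alternatives):

```clojure
;; Enum style: the value is a ref to an ident entity; the keyword
;; itself is stored once, in the schema.
[{:db/ident       :order/status
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident :order.status/open}
 {:db/ident :order.status/closed}]

;; Keyword style: the value is a plain keyword stored on each datom;
;; no schema change is needed to introduce a new value.
[{:db/ident       :order/status
  :db/valueType   :db.type/keyword
  :db/cardinality :db.cardinality/one}]
```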
I don't know how this translates into your specific performance numbers, but enum should always be better#2019-01-2214:10anderswhat is the suggested strategy for backing up dbs using dynamodb as storage? is enabling backups of dynamodb tables a safe route or should i rather use the datomic cli?#2019-01-2214:31marshallDatomic On-Prem or Cloud?#2019-01-2215:38anderson-prem#2019-01-2215:44marshallYou should use datomic backup#2019-01-2215:45marshallhttps://docs.datomic.com/on-prem/backup.html#2019-01-2215:45marshallddb backup will not work: https://docs.datomic.com/on-prem/ha.html#use-datomic-backup#2019-01-2215:45marshall“Replication or backup of eventually consistent storages cannot (by definition) make consistent copies and is not suitable for disaster recovery.
”#2019-01-2216:00andersthanks 🙂#2019-01-2312:12Ben Hammondis there a nice way to pass sample size into a datalog query as a parameter?
(d/q '[:find (sample ?sample-size ?eid) . :in $ ?sample-size :where [?eid :organisation/id]]
db
3)
Execution error (ClassCastException) at datomic.aggregation/sample (aggregation.clj:63).
class clojure.lang.Symbol cannot be cast to class java.lang.Number (clojure.lang.Symbol is in unnamed module of loader 'app'; java.lang.Number is in module java.base of loader 'bootstrap')
has problems with variable binding#2019-01-2312:20Ben HammondI can do something like
((fn [sample-size]
(d/q {:find [(list 'sample sample-size '?eid) '.]
:in '[$]
:where '[[?eid :organisation/id]]}
db))
3)
=> [17592186045475 17592186045480 17592186045485]
but seems like a pretty ugly solution#2019-01-2312:25Ben HammondI guess
((fn [sample-size]
(take sample-size
(shuffle
(d/q '[:find [?eid ...] :in $ :where [?eid :organisation/id]] db))))
3)
=> (17592186045484 17592186045483 17592186045485)
is my best bet ..?#2019-01-2312:30Ben Hammondabstracted as
(defn sampled-query
"return no more than n items from datomic query, shuffled randomly"
[n & qargs]
(take n
(shuffle (apply d/q qargs))))#2019-01-2314:26favilaAfaik each item in find can only use one bound var and each bound var can only be used once in find#2019-01-2314:27favilaSo for eg you can’t put a pull expression in a binding then do (pull ?eid ?pull-expr) (violates first rule)#2019-01-2314:28favilaNor can you do :find ?eid (pull ?eid [:my-attr]) (violates second rule)#2019-01-2314:29favilaAnd the error message you get will be utterly mysterious#2019-01-2314:29favilaSo you can’t parameterize sample size#2019-01-2312:47souenzzo@ben.hammond try
(d/q '[:find (sample sample-size ?eid) .
:in $ sample-size
:where [?eid :organisation/id]]
db 3)
#2019-01-2312:48Ben Hammond(d/q '[:find (sample sample-size ?eid) .
:in $ sample-size
:where [?eid :organisation/id]]
db 3)
Execution error (ClassCastException) at datomic.aggregation/sample (aggregation.clj:63).
class clojure.lang.Symbol cannot be cast to class java.lang.Number (clojure.lang.Symbol is in unnamed module of loader 'app'; java.lang.Number is in module java.base of loader 'bootstrap')#2019-01-2312:48Ben Hammondsample wants a number
we've only given it a symbol#2019-01-2319:51spiedenhmm, i have a confounding situation where pull isn’t behaving as i expect. shouldn’t i see the same two :demux/id values in both situations below?
(q '{:find [?did]
:where [[?f :flowcell/id "HW27TBBXX"]
[?d :demux/flowcells ?f]
[?d :demux/id ?did]]})
=> #{["demux-id-two"] ["demux-id"]}
(pull [{:demux/_flowcells [:demux/id]}]
[:flowcell/id "HW27TBBXX"])
=> #:demux{:_flowcells #:demux{:id "demux-id-two"}}
(`q` and pull are just partial applications of the fns from datomic.api)#2019-01-2320:38favila:demux/flowcells is isComponent cardinality-one?#2019-01-2320:39favilaisComponent card1 reverse-refs are card-one#2019-01-2320:39favilaif this is a data shape you expect (multiple "demux"es sharing the same "flowcell") then :demux/flowcells should not be isComponent=true#2019-01-2320:40favilaif you retractEntity a demux entity the flowcell it points to will also be retracted#2019-01-2323:18spieden@U09R86PA4 that was it! thanks. looks like i can’t alter the schema to make flowcell no longer a component, but i need to rethink this data model now anyway.#2019-01-2323:18favilareally? that should be possible#2019-01-2323:19favilahttps://docs.datomic.com/cloud/schema/schema-change.html#sec-3#2019-01-2323:19favilawhy do you think you can't alter this schema?#2019-01-2323:20favilahere for on-prem: https://docs.datomic.com/on-prem/schema.html#altering-component-attribute#2019-01-2417:13spiedeni get: {:errors ({:db/error :db.error/incompatible-schema-install,
:entity :demux/flowcells,
:attribute :db/isComponent,
:was true,
:requested false}),
:db/error :db.error/invalid-install-attribute}#2019-01-2417:14spieden(on prem)#2019-01-2417:14spiedeni think i’m actually going to keep it as a component, though, and create new flowcell entities each time#2019-01-2417:17spiedeneh, this would be a major change actually with a big cascade#2019-01-2417:27spiedenoh i see, i’m just doing it wrong#2019-01-2417:27spiedenit wants a retraction instead of an assert false#2019-01-2417:27spiedenthanks!#2019-01-2419:33spiedenhmm, actually no the docs say it should work the way i’m trying#2019-01-2419:44spiedenseems like this is the first place where i can’t just :db.install/_attribute over what i have but need to detect update vs create and do :db.alter/_attribute instead#2019-01-2419:52spiedenah nevermind. i just switched to using neither and seems good#2019-01-2320:32kennyWhy does an Ion parameter update request the creation of new physical resources?#2019-01-2408:50henrikThe logic is managed by CloudFormation. Certain parameters can't be updated in place, but mandates construction of a new resource. It has to do with AWS rather than Ions itself.#2019-01-2321:31lilactowndo I need to do a restart after updating the IAM policy attached to my compute node(s)?#2019-01-2321:43lilactownwelp, it looks like after I re-deployed it works now so… I guess so?#2019-01-2321:49kennyI had invalid EDN in my Ion parameters and I got an exception that looks like this:
"Msg": "LoadIonsFailed",
"Ex": {
"Cause": "EOF while reading",
"Via": [
{
"Type": "clojure.lang.Compiler$CompilerException",
"Message": "java.lang.RuntimeException: EOF while reading, compiling:(config.clj:10:14)",
"At": [
"clojure.lang.Compiler$InvokeExpr",
"eval",
"Compiler.java",
3700
]
},
It would've been great if I got an error saying that ion/get-env failed due to an EOF error.#2019-01-2323:40okocimis it possible to make use of a src-var in an aggregator, or is that just for datomic to know which source to use for passing in the coll to the aggregator? I’m trying to write a “best” aggregator that takes in a collection of entity ids, pulls in some further attributes from those entities, calculates a composite score, and returns the best composite.
Can I access the database specified by the src-var passed into the aggregator, or is what I’m trying to do only possible in-memory on the client side?#2019-01-2323:45okocim(d/q '[:find ?i (offer.calcs/best-composite $ ?o)
:where
[?s :store/id "demo-customer-shop"]
[?q :quote/store ?s]
[?q :quote/product-suite ?i]
[?q :quote/offer ?o]]
(db/latest-db))
;; here, :quote/offer is a composite ref with cardinality many
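One fallback sketch (an assumption, not the thread's answer): since a custom aggregate is handed a plain collection, the db can instead be used on the client side, where it is in scope. `composite-score` here is a hypothetical scoring fn standing in for `offer.calcs/best-composite`:

```clojure
;; Sketch of the in-memory approach: query for [suite offer] pairs,
;; then pick the best-scoring offer per suite outside the query.
(let [db    (db/latest-db)   ; helper assumed from the question above
      pairs (d/q '[:find ?i ?o
                   :where
                   [?s :store/id "demo-customer-shop"]
                   [?q :quote/store ?s]
                   [?q :quote/product-suite ?i]
                   [?q :quote/offer ?o]]
                 db)]
  (for [[i suite-pairs] (group-by first pairs)]
    ;; composite-score is hypothetical: (fn [db offer-eid] => number)
    [i (apply max-key #(composite-score db %) (map second suite-pairs))]))
```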
#2019-01-2323:45okocimsomething like that#2019-01-2402:01tjgI'm connecting to someone's Datomic On-Prem DBs, backed by DynamoDB. It's very slow to connect, and some tiny queries seem to eat RAM & never complete. Any ideas?
(My current theory: the Peer is filling its cache with way too many things.)
;; Tested under:
;; [com.datomic/datomic-pro "0.9.5561"]
;; [com.datomic/datomic-pro "0.9.5786"]
;; "Elapsed time: 132848.045478 msecs"
(defonce ^:private db-prod
(-> "datomic:ddb://..." d/connect d/db time))
;; "Elapsed time: 37830.972005 msecs"
(defonce ^:private db-dev
(-> "datomic:ddb://..." d/connect d/db time))
;; Despite commenting out `sample` & `count`, query still fails on db-prod.
(defn request-that-eats-memory [db]
(d/q '[:find [(rand 1 ?e) #_(sample 1 ?v) #_(count ?e)]
:where [?e :foo/bar ?v]]
db))
;; :foo/bar has only 5 entities in db-dev.
;; "Elapsed time: 2263.876834 msecs"
(time (request-that-eats-memory db-dev))
;; Consumes RAM at the rate of 70 MB/min.
;; Runs a few minutes before I abort.
(time (request-that-eats-memory db-prod))
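As a cheap sanity check (a sketch, not from the thread): the AEVT index can be walked lazily with d/datoms to see whether db-prod has vastly more :foo/bar datoms than expected, without an aggregate query realizing the whole result set:

```clojure
;; Lazily take a handful of datoms for the attribute; a surprisingly
;; large population here would explain the memory consumption.
(->> (d/datoms db :aevt :foo/bar)
     (take 10)
     (map :e))
```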
#2019-01-2404:28favilaMaybe A long time since last reindex? Do transactor logs complain about indexing failures?#2019-01-2404:31favilaThinking out loud here, if index is very old then the log data in Txor might be big, leading to slow conn times as the peer got the log and slow queries because the unindexed portion of data is so big#2019-01-2413:48marshall@U050TF6A1 I agree with Francis here - long connection times usually indicate an issue with log tail size; do you have any exceptions in your transactor logs recently?#2019-01-2413:49marshallanother possibility is vastly underprovisioned storage#2019-01-2413:51tjgThanks fellows, I'll get access to their transactor logs to check any complaints...#2019-01-2408:14okocimI was wondering if anyone has seen this before:
* I have a bunch of rules defined (example below)
* Rules 1, 3, 4, 6, & 7 all work fine
* Rules 2, 5, & 8 all fail
* The failing rules contain ‘fn-expr(s)’
* I am running on datomic cloud
* ALL rules work when running the query through the bastion (i.e. locally at the repl)
* I get the following exception message when running through API Gateway (i.e. in my environment):
clojure.lang.ExceptionInfo:
:db.error\/invalid-rule-fn
The following forms do not name predicates or fns:
(* * -)
{:cognitect.anomalies/category
:cognitect.anomalies/incorrect,
:cognitect.anomalies/message
"The following forms do not name predicates or fns: (* * -)",
:symbols (* * -)
:db\/error :db.error\/invalid-rule-fn}
* I have clojure.core/* and clojure.core/- in the :allow key in my ion-config
* I tried using the fully qualified fn name in the rule (No luck either)
* The attribute_groups example in day of datomic cloud appears to be doing the same thing.
(https://github.com/cognitect-labs/day-of-datomic-cloud/blob/229b069cb6aff4e274d7d1a9dcddf7fc72dd89ee/tutorial/attribute_groups.clj#L28)
MY RULES BELOW:
(def sort-calcs
'[;; WORKS
[(rule-1 [?p] ?value)
[?p :q.p/t :my-val]
[?p :m.p/p ?value]]
;; FAILS
[(rule-2 [?v ?p] ?value)
[?v :f/m ?ms]
[?p :q.p/t :my-val]
[?p :m.p/rp ?r]
[?p :m.p/p ?pa]
[?p :q.p/t ?tl]
[(* ?r 0.01 ?ms) ?ra]
[(* ?tl ?pa) ?tot]
[(- ?tot ?ra) ?value]]
;; WORKS
[(rule-3 [?p] ?value)
[?p :q.p/t :my-val-2]
[?p :m.p/p ?value]]
;; WORKS
[(rule-4 [?p] ?value)
[?p :q.p/t :my-val-2]
[?p :m.p/r ?value]]
;; FAILS
[(rule-5 [?v ?p] ?value)
[?v :f/m ?ms]
[?p :q.p/t :my-val]
[?p :m.p/rp ?r]
[(* ?ms ?r 0.01) ?value]]
;; WORKS
[(rule-6 [?v] ?value)
[?v :f/sp ?value]]
;; WORKS
[[rule-7 [?v] ?value]
[?v :v/dis ?value]]
;; FAILS
[(rule-8 [?v ?p] ?value)
[?v :f/sp ?pr]
[?p :q.p/t :my-val-2]
[?p :m.p/ma ?a]
[(- ?a ?pr) ?value]]])
I could use some guidance on what to do next.#2019-01-2408:22okocimI posted this in the forum too, sorry if that feels spammy; I’m still feeling my way for what to put where… :shrug:#2019-01-2415:06lilactownmy attempts to reach S3 in my Ions succeeds for a couple hours after deploying, but if I come back hours later, fails completely#2019-01-2415:07lilactowndoing another (unrelated) deploy then brings up it’s ability to access S3 again#2019-01-2415:09lilactownwhen I execute the lambda for my Ion, I get this error:
{
"errorMessage": "Cannot open <nil> as a Reader.",
"errorType": "datomic.ion.lambda.handler.exceptions.Incorrect",
"stackTrace": [
"datomic.ion.lambda.handler$throw_anomaly.invokeStatic(handler.clj:24)",
"datomic.ion.lambda.handler$throw_anomaly.invoke(handler.clj:20)",
"datomic.ion.lambda.handler.Handler.on_anomaly(handler.clj:139)",
"datomic.ion.lambda.handler.Handler.handle_request(handler.clj:155)",
"datomic.ion.lambda.handler$fn__4075$G__4011__4080.invoke(handler.clj:70)",
"datomic.ion.lambda.handler$fn__4075$G__4010__4086.invoke(handler.clj:70)",
"clojure.lang.Var.invoke(Var.java:396)",
"datomic.ion.lambda.handler.Thunk.handleRequest(Thunk.java:35)"
]
}
#2019-01-2415:13marshall@lilactown do you have some kind of long-lived connection in your S3 ion?#2019-01-2415:14lilactownOK, I could be dense but if this:
(def s3 (aws/client {:api :s3}))
creates a long-lived connection, then yes#2019-01-2415:15lilactownwhich would explain a lot tbh#2019-01-2415:16lilactownI’m using cognitect’s aws-api#2019-01-2415:17marshallyou probably don’t want to do that in a def directly#2019-01-2415:17marshallyou’ll want something like a memoized “get client” function#2019-01-2415:17lilactownthat makes sense! I didn’t realize it was actually creating a connection#2019-01-2415:17marshalli.e. sort of how the ion starter project handles datomic connections https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L13#2019-01-2415:18marshallthat error looks like trying to do something with a closed connection; which would make sense after it sits for a while#2019-01-2415:18marshalland the redeploy of another ion would cycle the process#2019-01-2415:19marshallcausing your namespace-loading side-effect of creating the client again#2019-01-2415:19lilactownmystery solved 😄#2019-01-2415:19lilactownI’m a bit naive about how memoize actually works; if my connection drops, my assumption is that it would not gracefully reconnect but throw an error#2019-01-2415:19marshallyou dont really even need it to be memoized#2019-01-2415:19marshallyou can just have a ‘get client’ function#2019-01-2415:19marshallthe key is not to create the client as a side effect of ns loading (via def)#2019-01-2415:19marshallbut instead create it (or refresh it) when you invoke#2019-01-2415:20marshalllike this: https://github.com/pedestal/pedestal-ions-sample/blob/master/src/ion_sample/service.clj#L45#2019-01-2415:22lilactownshould I bother trying to avoid calling (aws/client {:api :s3}) on each invocation?#2019-01-2415:28marshalli dont’ know exactly what the overhead is creating the client for that library; if it is cheap i wouldnt worry about it; if it isnt you could put it in a memoized function#2019-01-2415:29lilactownI’m wondering that as well 🙂 I didn’t understand that it had some sort of long-lived connection#2019-01-2415:29lilactownwhich 
begs the question how I should clean it up#2019-01-2415:30lilactownif I’m opening a new connection to S3 on every invocation, and those connections are just lying around, I’m afraid I’ll end up taking up a ton of resources after awhile#2019-01-2418:18marshallnot if it goes out of scope#2019-01-2417:13adammillerCurious if anyone has had experience with utilizing Ions in a ring style app with CORS enabled? My issue is related to using the binary media type */* as the Ions tutorial recommends along with the CORS setup in API Gateway as apparently they don't play well together. If both are enabled, the preflight OPTIONS request generates an internal server error. Removing the */* binary type fixes the preflight request but then the body of all operations are returned base64 encoded. Any suggestions on what the right solution is to this?#2019-01-2417:22okocim@adammiller I had the same problem. I landed on setting up a single API gateway proxy ion endpoint instead of one per ion. Then I wrote a universal router that will handle the OPTIONS requests and CORS details.
Here is an article written by Joshua Heimbach a few days ago that talks a little more about this approach. Mine is slightly different in detail, but conceptually the same. (I am using a different router)
https://medium.com/@rodeorockstar/datomic-ions-aws-api-gateway-and-routing-d20a1bb086dd#2019-01-2417:27adammillerYeah, I'm already using one endpoint to route to my lambda which is served as ring app (basically) but I think handling cors at the app layer has some downsides 1) would be cost as you will be invoking lambdas for preflight requests (not huge deal), 2) Not sure it's possible to use amazon security this way, again not totally positive on this, just what I've found searching this problem where others talked about having the app layer handle cors.#2019-01-2417:32okocimOk, yeah sorry I didn’t catch that you were on a single proxy ion from your description. I wasn’t happy with those tradeoffs either, but I decided to defer that issue for a little while because I had to move on to other ones 😅. If you come up with a solution that you like, I’d appreciate it if you share.#2019-01-2417:34adammillerI definitely will, thanks for your input! I've been going back and forth on making those concessions myself as I can't spend much more time on this!#2019-01-2418:51adammiller@okocim I found the answer (after a lot of searching). You have to run the following commands (apparently no way to change this in the Console):
aws apigateway update-integration \
--rest-api-id <api-id> \
--resource-id <resource-id> \
--http-method OPTIONS \
--patch-operations op='replace',path='/contentHandling',value='CONVERT_TO_TEXT'
aws apigateway update-integration-response \
--rest-api-id <api-id> \
--resource-id <resource-id> \
--http-method OPTIONS \
--status-code 200 \
--patch-operations op='replace',path='/contentHandling',value='CONVERT_TO_TEXT'#2019-01-2418:52adammillerWould probably be nice for this to be in the Ions documentation somewhere as I'm guessing it will be a common problem for anyone who decides to host a full webapp (or api) inside Ions.#2019-01-2418:52adammillerdocumentation related to setting up CORS, that is.#2019-01-2418:56lilactownfor now sounds like a good blog post 😄#2019-01-2419:05adammillerGood idea, I'll try to write one up....if nothing else I'll know where to find it next time I run into this!#2019-01-2419:01okocim@adammiller Thanks! That’ll allow me to do one of my favorite things: delete some code 🙂#2019-01-2504:49codelimnerIs there a way to turn off datomic.process-monitor info logs from the repl?#2019-01-2504:50codelimnerI get plenty of these:#2019-01-2504:50codelimnerINFO datomic.process-monitor - {:MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :AvailableMB 5970.0, :ObjectCacheCount 0, :event :metrics, :pid 18037, :tid 127}#2019-01-2514:27marshallThe settings for metrics reporting are all configured with your logback.xml file#2019-01-2514:28marshallhttps://docs.datomic.com/on-prem/configuring-logging.html#2019-01-2505:43brycecovertAre there any updated recommendations on how to do sorting/pagination on large datasets? Most of the posts I’ve seen about this are 5 years old by now. My approach has been to do two queries. The first applies filters, and pulls just the ids and sort field. I sort in-process, and then do another query to fetch the collection of desired entities by id. This is still pretty slow (500ms for 30k entities).#2019-01-2505:44favilaIf there is a single pretty-selective attribute you can use, you can use it with index-range to pull subsets then feed into queries#2019-01-2505:45favilaAnother alternative is to use a secondary database as an index, either polling or watching the tx queue to update#2019-01-2505:48brycecovertInteresting. What do you mean by pretty-selective attribute?
Could you give an example?#2019-01-2507:13favilaIt’s usually the first clause in your :where#2019-01-2507:14favilaSuppose you want all things posted on a certain day and also a bunch of other stuff#2019-01-2507:15favilaSo your first :where clauses assert that the thing is in the date range (because that is most selective) then a bunch of other clauses check other stuff before finally deciding if it’s in the result set#2019-01-2507:16favilaBut there’s 100000 things in that date range and you only need the first 10 that match#2019-01-2507:17favilaSo you don’t want those first few clauses to look at everything just to get extra results you won’t use#2019-01-2507:19favilaSo you can either divide up the range in the query itself, or you can use d/datoms or d/index-seek to lazily fetch a subset of that range and feed those entity ids to the query (to test the other stuff) as input#2019-01-2507:19favilaIf you get less than your desired limit in results, advance the range then repeat#2019-01-2507:21favilaThe important thing is that this attribute test/range must by itself ensure that a thing may or may not be in the result set. Otherwise you may advance the range and end up with repeated results#2019-01-2507:22favilaIe the attribute test if subsetted must produce non-overlapping result sets#2019-01-2507:33brycecovertThanks, that’s helpful. 🙂#2019-01-2518:40tony.kaySo, I’ve been working with on-prem Datomic for a while now, but now I have a client that is using the client API…with on-prem my testing is super easy since I can leverage datomock to make a db with sample data in it, transact/query/etc, and see the result.
With client I have an external server dependency…how are others testing against that?#2019-01-2519:43Lennart BuitI started using Datomic client memdb #2019-01-2519:44msshey all, playing around locally with datomic 0.9.5385 using the dev storage protocol. trying to transact a schema, which looks something like the following:
(def my-schema [{:db/ident :user/id
:db/valueType :db.type/uuid
:db/cardinality :db.cardinality/one}
{:db/ident :user/email
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
{:db/ident :user/first-name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
{:db/ident :user/last-name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}])
when I call (d/transact my-conn my-schema) I get an error ":db.error/entity-missing-db-id Missing :db/id". I was under the impression that datomic assigned db/ids automatically once an entity was transacted. is that not the case?#2019-01-2519:44Lennart Buithttps://github.com/ComputeSoftware/datomic-client-memdb#2019-01-2519:48Lennart BuitIt suits me in sort of small-scoped integration testing #2019-01-2519:50Lennart BuitNot sure how commonly used it is, or if there is anything better, but it at least works for me#2019-01-2519:45msswas following the docs here fwiw: https://docs.datomic.com/on-prem/getting-started/transact-schema.html#2019-01-2519:46marshall@mss what version of datomic?#2019-01-2519:46mssdatomic-pro 0.9.5385 w the dev storage protocol#2019-01-2519:47marshallthe implicit db/id was introduced in 5530#2019-01-2519:47mssah I see. well that explains that 😂#2019-01-2519:47mssappreciate the help#2019-01-2519:47marshallhttps://docs.datomic.com/on-prem/transactions.html#temporary-ids#2019-01-2519:47marshallnp#2019-01-2521:11idiomancywhats the difference between Datomic Cloud and Utility Bastion per https://aws.amazon.com/marketplace/pp/prodview-otb76awcrb7aa?ref=vdr_rf ?
Or, more succinctly, wth is Utility Bastion?#2019-01-2521:22idiomancyI'm trying to make some purchasing decisions today lol#2019-01-2521:36osiis it https://docs.datomic.com/cloud/operation/bastion.html ?#2019-01-2521:44lilactownthe bastion is a node that you can connect to to give you access to the Datomic VPC in order to do local development#2019-01-2521:46lilactownnormally, the only things that can talk to the Datomic DB have to be within the same VPC#2019-01-2521:46lilactownmainly for security reasons#2019-01-2521:48lilactownthe bastion is an EC2 instance that you can connect to from your computer so that you can access the Datomic Cloud system without pushing your code to AWS#2019-01-2521:48idiomancyAhh I see#2019-01-2521:48idiomancyGotcha, that makes sense#2019-01-2600:35lilactownso I got a spike of traffic to my solo system and it’s just not working#2019-01-2600:35lilactownI’m trying to throw up a CDN, but in the meantime how do I troubleshoot and bring it back online?#2019-01-2601:08lilactownrestarting the compute instance seemed to do the trick#2019-01-2718:19ronnyThe function will be called with the current date (Date.). On the repl I get 2 entries but on the server I get zero. This is running on a datomic-cloud solo instance. Could anybody tell me what I am doing wrong?#2019-01-2718:27ronnyOn both is the same version running (deployed)#2019-01-2808:16ronnydate is an instant#2019-01-2811:09ronnyI found the problem, but I have no idea why this was the problem?
I removed the (flatten) and it worked.#2019-01-2811:10ronnySeems something on the server side is inconsistent…#2019-01-2816:09marshallWhat is the type and cardinality of :rule/id ?#2019-01-2803:41henrikHaving created a CloudFormation template that produces an ElasticSearch cluster, isolated in its own VPC and exposed via an endpoint service, hooked up to CodeDeploy for configuration updates, I have a newfound respect for the people who put Datomic Cloud together.
Making AWS hook things up programmatically for you is like arguing contract details with a bureaucrat from a national telecom.#2019-01-3016:58eoliphantWe use terraform, it's head and shoulders above cloud formation for this kind of stuff#2019-02-0218:54henrik@U380J7PAQ Interesting, I'll have to look into it.#2019-01-2816:51Oleh K.I don't understand how I can upload my current datomic database to the cloud, can anybody tell me?#2019-01-2816:53Oleh K.the restore-db function doesn't recognize the cloud type url#2019-01-2816:54benoitThe on-prem storage is not compatible with cloud's.#2019-01-2816:55marshall@okilimnik https://docs.datomic.com/on-prem/moving-to-cloud.html#2019-01-2816:55marshallWe don't currently have tooling for migrating between On-Prem and Cloud#2019-01-2816:57Oleh K.thanks for the clarification#2019-01-2818:14benoitThat might be obvious to some people here but it was not for me until this morning so I thought I would share. The top paragraph at https://docs.datomic.com/on-prem/transactions.html#identify-entities is a bit misleading because you can assert facts about entities that are not "already in the database". The facts get added even if the :db/id does not exist or the entity was retracted. That might not be a problem with one peer but with multiple peers you can end up in situations where an entity gets retracted by one peer and updated by another right after. I'm guessing that's a good argument to use lookup refs instead of :db/ids in transactions. Otherwise you would have to check that the entity id already exists in a tx function every time you want to assert new facts. Did I miss something?#2019-01-2818:51bkamphausI agree that this is a reason to use lookup refs: if you intend the transaction to succeed only when the entity to which the facts refer can be found in the database already.
I think it’s worth taking care in the language to note that Datomic does not have a separate notion of modeling entity existence — just whether or not there are facts about the entity.#2019-01-2819:09benoit@bkamphaus I think it's worth clarifying this in the docs and not mention things like "an existing id for an entity that's already in the database"#2019-01-2820:27favilathere is a sort of "entity existence" check in datomic. The "t" is a db-wide counter incremented for each tempid (those that don't resolve to an entity id). if an entity id's "t" bits exceed the db's current t, datomic may say the entity doesn't exist#2019-01-2821:24grzmWhat are people doing wrt development of Datomic Cloud/ions on Windows? We’ve got a partner where one of the developers uses Windows.#2019-01-2821:43dustingetz@me1740 you can use :db.fn/cas to detect a concurrent modification and fail a transaction#2019-01-2909:28stijnanyone seen this and knows what the meaning / cause of this error is? (datomic cloud)#2019-01-2909:28stijn{:type clojure.lang.ExceptionInfo
:message Next offset 3000 precedes current offset 2000
:data {:datomic.client-spi/request-id 8e372a82-abc5-4a52-805e-fe90786c82f5, :cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message Next offset 3000 precedes current offset 2000, :dbs [{:database-id fa80ec7a-d124-4cf2-971e-e43c8d7e8516, :t 1885, :next-t 1886, :history false}]}
:at [datomic.client.api.async$ares invokeStatic async.clj 56]}#2019-01-2911:00joshkhjust curious what's going on here. 🙂 when i run my (datomic) clojure project i consistently get messages that some of my deps are being downloaded from datomic's s3 releases bucket. always the same ones.
$ clj -Stree
Downloading: com/amazonaws/aws-java-sdk-cloudwatch/maven-metadata.xml from
Downloading: com/amazonaws/aws-java-sdk-dynamodb/maven-metadata.xml from
Downloading: com/amazonaws/aws-java-sdk-kinesis/maven-metadata.xml from
Downloading: com/amazonaws/amazon-kinesis-client/maven-metadata.xml from
#2019-01-2912:55alexmillerThose are the version metadata files for each of the artifacts. You should only need to download them when there are new releases (which I'd guess are about weekly right now), but they should be cached in your ~/.m2/repository#2019-01-2912:56alexmillerAny chance that's getting cleaned between builds?#2019-01-2912:57alexmillerClasspaths get cached in your local dir under ./.cpcache too - that should be in front of the m2 stuff assuming your deps.edn isn't getting updated#2019-01-2914:37joshkhinteresting. if i don't update deps.edn then i can see the cache at work: when i run clj -Stree for a second time there are no downloads. however, launching to Ions still triggers the download process.#2019-01-2911:26joshkhi'm guessing it's because i've included the aws sdk, amazonica, and ions, and that there's a version mismatch. might this affect the size of my ions push to code deploy? i've seen a big increase in the overall time to deploy.#2019-01-2912:58alexmillerMight be something with that, prob something the Datomic team could answer better than I#2019-01-2913:34joshkhthanks, alex. in the meantime i'll play around with some exclusions and see where i end up.#2019-01-2913:29Per WeijnitzHi! I've recently started with Datomic, so please bear with me. Is there a way to perform a full scan of the database (with the purpose of studying the internals while learning)? I've tried things like
(d/q '[:find ?e ?a ?v ?tx :where [?e ?a ?v ?tx]] (get-db))
but Datomic refuses with
:db.error/insufficient-binding Insufficient binding of db clause: [?e ?a ?v ?tx] would cause full scan.#2019-01-2913:40joshkha few people have been looking into this for the purpose of cloning a datomic (cloud) db, at least until Cognitect provides official support (please please please). if you're interested in the inner workings and mappings then maybe d/datoms might be of interest?
(seq (d/datoms (client/db) {:index :eavt}))#2019-01-2913:41joshkheven if you could query for everything, i think it would timeout#2019-01-2915:00Per Weijnitz@U0GC1C09L d/datoms looks very useful to me in my studies, thanks! Let's hope Cognitect adds support for full scan soon!#2019-01-2915:04joshkhno problem! full scan probably isn't what we want 🙂 maybe a clever way to copy over tables and s3 buckets (although i appreciate it's not that easy). just curious, are you using datomic cloud?#2019-01-2917:28souenzzoadd [?a :db/ident ?ident] then you can query#2019-01-2923:31favilahttp://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2019-01-3008:14Per Weijnitz@U2J4FRT2T @U09R86PA4 Thanks for the advice! I'll dig into it today and see what I can learn.#2019-01-3008:19Per Weijnitz@U0GC1C09L I see, that seems practical indeed. No, I use on-prem with the dev backend. Hmm... that makes me think. Did you ask this because it may be possible inspect the datoms directly by inspecting the database backend? (postgres table contents for example)#2019-01-3009:04Per Weijnitz@U2J4FRT2T That does indeed work!#2019-01-3011:23souenzzoIt's like
- "I can't query all data"
- "Can you query only valid data?"
- "Sure I can!"#2019-01-2917:19eraserhdIs there a protocol I can implement to be able to use a data structure with d/q?#2019-01-2917:28benoitYou should be able to pass any collection of relations to d/q. (d/q '[:find ?b :in $ ?a :where [?a :a ?b]] [['a :a 'b]] 'a)#2019-01-2918:37eraserhdbenoit: I know about that, but this is a Clara Rules session. I could pass the results of a query, I think, but the fact that d/q supports databases and vectors suggested to me that maybe there's a protocol that I can implement.#2019-01-3011:34souenzzoI'm interested in that too#2019-01-3012:17souenzzoI think that #datascript may be a better fit for that.#2019-01-2919:16crowlHi, can I pull many entities in one query with the datomic client api?#2019-01-2920:52joshkhhey @U44C8GM7T, how do you mean? the client api returns as many different entities as matched in your :where clause. can you elaborate?#2019-01-2920:59joshkhwithout knowing more this might not answer your question, but here's an example of pulling many (all) "item" entities that have a sku and are on sale, and returning all of the entities' attributes:
(d/q '{:find [(pull ?item [*])]
:in [$]
:where [
[?item :item/sku _]
[?item :item/on-sale? true]
]}
db)
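If the entities are known up front (for instance by sku), a collection binding also pulls many entities in one query. A sketch only, reusing the hypothetical :item/sku attribute from the example above and the same multi-arg d/q calling style:

```clojure
;; Sketch: the [?sku ...] collection binding unifies ?sku with each
;; element of the supplied collection and unions the results, so one
;; query pulls all of the matching entities. :item/sku and the sku
;; values here are illustrative, not real.
(d/q '{:find  [(pull ?item [*])]
       :in    [$ [?sku ...]]
       :where [[?item :item/sku ?sku]]}
     db
     ["sku-1" "sku-2" "sku-3"])
```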
#2019-01-3007:24kommenjfyi, the link to SQUUID in https://docs.datomic.com/on-prem/best-practices.html#unique-ids-for-external-keys is a 404#2019-01-3013:46marshallFixed. Thanks ^ !#2019-01-3014:59Oleh K.I'm trying to develop via datomic-socks-proxy, but most of my queries fail with a request timeout. I'm on satellite internet with high latency. How can I fix it?#2019-01-3015:17okocimif you use the 1-arg form of d/q, you should be able to set a timeout that suits your needs:
(d/q
{:query '[:find ...] :args [db ...]
:timeout 60000}) ; or whatever you need
#2019-01-3018:06Oleh K.Thanks, I just thought that if there is no timeout then it defaults to the maximum, and therefore the problem is not in my queries#2019-01-3018:07Oleh K.Are there some defaults?#2019-01-3018:09Oleh K.Seems strange to be forced to use the 1-arg form for local dev#2019-01-3019:55Oleh K.I've checked, this timeout has nothing to do with my error
{:status -1, :status-text "Request timed out.", :failure :timeout}
#2019-01-3019:56okocimwhat’s the query that you’re running?#2019-01-3020:23Oleh K.I'm sorry, the reason really was in my queries (I'm migrating from on-prem to the cloud), thanks for your time#2019-01-3017:01eoliphantHi. Running into an issue where cloud deployments are timing out that seems to be related to the number of lambda ions (~30). Temporarily Pulling some of them out of the config makes the problem go away. Is there any way to override the timeout?#2019-01-3018:03joshkhi'm curious about recursive pull syntax. in the doc's example, we grab names of friends:
[:person/firstName :person/lastName {:person/friends ...}] whereby the ... refers back to the attributes in the parent data structure (i think?)
in my case my root entity has a relationship to a node, and that node has a relationship to another node, and then a third node which is where i want to recur. i guess it would be like (:person/diary)->(:diary/contacts)->(person) instead of (:person/friends)->(:person/friends). is this possible?#2019-01-3018:46miridiusHi there, I've got a datomic cloud instance running and I'm trying to connect to it from a client application running (on EB Tomcat) within the datomic VPC subnets. But when trying to connect I get the following exception:
Unable to connect to localhost:8182 {:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message "Connection refused", :config {:server-type :cloud, :region "eu-west-1", :system "recipes-dev", :proxy-port 8182, :endpoint "", :endpoint-map {:headers {"host" ""}, :scheme "http", :server-name "", :server-port 8182}}}
Is it me or does it think it's running in ions and is trying to connect to localhost?#2019-01-3018:59marshall@miridius You need to remove :proxy-port from your config map#2019-01-3019:07miridiusah thanks, that would make sense since it's not using the socks proxy! I couldn't find anything about it in the docs though. Does the endpoint url stay the same?#2019-01-3019:09marshallyep endpoint is the same#2019-01-3021:15johanatanhi, does anyone have an example of including the datomic-pro peer library in deps.edn format (including the repository and login information for http://my.datomic.com) ?#2019-01-3021:16johanatanhttps://my.datomic.com/account only includes it for maven and leiningen#2019-01-3021:17okocimIs there any way to replicate the functionality of tx-report-queue in datomic cloud?#2019-01-3021:33johanatani suppose it would be the "Maven authenticated repos" section on the following page:
https://clojure.org/reference/deps_and_cli
giving that a try#2019-01-3021:39johanatanhmm, getting 401 unauthorized even though i put my username and password in ~/.m2/settings.xml as directed#2019-01-3021:39johanatani tried both my-datomic-com and for server-id (matching in both server.xml and deps.edn)#2019-01-3021:44johanatanany ideas/guidance?#2019-01-3021:45alexmillerdo you have any AWS env vars set?#2019-01-3021:50alexmillerif not, try setting any valid AWS creds#2019-01-3021:52johanatanyes, those are set to valid values#2019-01-3021:52johanatani get an AWS error otherwise#2019-01-3021:52johanatanthis is my actual invocation:
13:40 $ AWS_ACCESS_KEY_ID=`aws configure get personal.aws_access_key_id` AWS_SECRET_KEY=`aws configure get personal.aws_secret_access_key` clj -m core
#2019-01-3021:53alexmillerultimately here you’re trying to access a jar in an S3 bucket, and the deps resolver needs to access the bucket, which requires checking the bucket location (region), which is subject to IAM, and this is something I’ve seen be problematic before#2019-01-3021:53johanatanis there an s3:// url i could try for it?#2019-01-3021:54johanatana la the "maven s3 repos" section#2019-01-3021:54ghadiyou might need to export those vars#2019-01-3021:54ghadiclj launches a subprocess#2019-01-3021:54johanatanah, good call#2019-01-3021:55johanatannope. same result#2019-01-3021:56alexmillerwhat do you have for your datomic repo in deps.edn?#2019-01-3021:58johanatan{"" {:url ""}
#2019-01-3021:59johanatanhmm#2019-01-3022:00johanatanthis is interesting:
✘-1 ~/Documents/SourceCode/option-scanner/options-datomic [master|●1✚ 2]
13:59 $ aws s3 cp deps.edn
upload failed: ./deps.edn to An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
#2019-01-3022:00johanatanperhaps my access keys need more s3 perms#2019-01-3022:01alexmillerno, I don’t think that’s it#2019-01-3022:02alexmillerwhat do you have the AWS keys set for?#2019-01-3022:02alexmillerwhat you should have is a ~/.m2/settings.xml that looks like:#2019-01-3022:03johanatani am now just using the "default" aws credentials profile as specifed in ~/.aws/credentials#2019-01-3022:03marshallare you running this locally on your laptop or on an EC2 instance?#2019-01-3022:04johanatanhere's my settings.xml:
<servers>
<server>
<id>my.datomic.com</id>
<username>redacted</username>
<password>redacted</password>
</server>
</servers>
#2019-01-3022:04alexmilleryes, good#2019-01-3022:04johanatan@marshall locally on laptop#2019-01-3022:04alexmillerand deps.edn should look like:#2019-01-3022:04alexmiller{:mvn/repos
{"" {:url ""}}
:deps
{com.datomic/datomic-pro {:mvn/version "0.9.5786"}}}#2019-01-3022:04alexmillerand you should need no AWS settings#2019-01-3022:05johanatanyep, that's what i've got#2019-01-3022:06johanatanhere's the full deps.edn:
{:paths ["src" "resources"]
:extra-paths ["resources"]
:deps
{clj-time {:mvn/version "0.15.0"}
com.rpl/specter {:mvn/version "1.1.2"}
aleph {:mvn/version "0.4.6"}
org.clojure/clojure {:mvn/version "1.9.0"}
com.datomic/datomic-pro {:mvn/version "0.9.5786"}
org.clojure/data.json {:mvn/version "0.2.6"}
com.cognitect/transit-java #:mvn{:version "0.8.311"}
org.msgpack/msgpack #:mvn{:version "0.6.10"},
com.cognitect/transit-clj #:mvn{:version "0.8.285"}
com.cognitect/s3-creds #:mvn{:version "0.1.22"}
com.amazonaws/aws-java-sdk-kms #:mvn{:version "1.11.349"}
com.amazonaws/aws-java-sdk-s3 #:mvn{:version "1.11.349"}}
:mvn/repos
{"" {:url ""}}
;:aliases
;{:dev {:extra-deps {com.datomic/ion-dev {:mvn/version "0.9.186"}}}}
}
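For reference, the minimal shape alexmiller sketched above looks like this with the repository URL written out (the URL is the standard one shown in the my.datomic.com docs; the version is the one used in this thread):

```clojure
;; Minimal deps.edn for the peer library. The my.datomic.com repo is
;; authenticated: the credentials from your account dashboard go in
;; ~/.m2/settings.xml, as discussed later in this thread.
{:mvn/repos
 {"my.datomic.com" {:url "https://my.datomic.com/repo"}}
 :deps
 {com.datomic/datomic-pro {:mvn/version "0.9.5786"}}}
```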
#2019-01-3022:06johanatan[i'm no longer using ion so those two lines are commented out]#2019-01-3022:06marshallwhat's the error you get when you try to run it?#2019-01-3022:07johanatanbasically Unauthorized 401#2019-01-3022:07marshallCould not transfer artifact com.datomic:datomic-pro:pom:0.9.5786 from/to http://my.datomic.com (https://my.datomic.com/repo): Unauthorized (401)#2019-01-3022:07marshallthe creds you put in your m2 settings are the ones from your http://my.datomic.com account dashboard?#2019-01-3022:08johanatanyep#2019-01-3022:08johanatancould it be a problem with those creds somehow on the server side? i generated them about 4 weeks ago and haven't tried them until today#2019-01-3022:09johanatanthe username is an email address and the password is a UUID#2019-01-3022:09alexmilleryou can test directly in the browser#2019-01-3022:10alexmillergo to https://my.datomic.com/repo/com/datomic/datomic-pro/0.9.5786/datomic-pro-0.9.5786.jar and enter those in the basic auth user/password dialog#2019-01-3022:10johanatanit worked with wget#2019-01-3022:10johanatanso that's not the problem#2019-01-3022:10alexmillerwhat version of clj are you using?#2019-01-3022:10alexmillerclj -Sverbose#2019-01-3022:11alexmillerlatest is 1.10.0.411#2019-01-3022:11johanatan14:11 $ clj -Sverbose
version = 1.10.0.411
install_dir = /usr/local/Cellar/clojure/1.10.0.411
config_dir = /Users/jonathan/.clojure
config_paths = /usr/local/Cellar/clojure/1.10.0.411/deps.edn /Users/jonathan/.clojure/deps.edn deps.edn
cache_dir = /Users/jonathan/.clojure/.cpcache
cp_file = /Users/jonathan/.clojure/.cpcache/4079603067.cp
Clojure 1.10.0
#2019-01-3022:12johanatani have to step out now. i'll check back again later#2019-01-3022:13alexmillerfor grins, I wouldn’t mind seeing you run that without the AWS creds#2019-01-3022:16alexmilleralso curious if that was your full settings.xml or just a snippet. full should have some stuff around what you posted#2019-01-3022:16alexmiller<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns=""
xmlns:xsi=""
xsi:schemaLocation="">
<servers>
<server>
<id>my.datomic.com</id>
<username>redacted</username>
<password>redacted</password>
</server>
</servers>
</settings>#2019-01-3023:29johanatanThis was the problem. The missing outer <settings> which isn't mentioned on: https://clojure.org/reference/deps_and_cli
I was creating this file from scratch as it didn't yet exist on my system.#2019-01-3022:22alexmillerbtw, your deps.edn worked for me and downloaded everything#2019-01-3022:22alexmillerso I’m going to suspect settings.xml for now#2019-01-3022:35lilactownat work, we're considering testing out Datomic On-Prem on our AWS stack. are there any recommendations on what storage service (rds, ddb, etc.) to use?
Also, we currently are not allowed to use CloudFormation in our accounts; are there any resources on how to set up a transactor and peers outside of using CF?#2019-01-3022:39alexmillerasking the dumb question - do you have a good reason to avoid Datomic Cloud?#2019-01-3022:40alexmillerthe datomic team could probably answer better but generally I think we would nudge you towards ddb over the others#2019-01-3022:42donaldballJuxt has some terraform recipes: https://github.com/juxt/pack-datomic#2019-01-3022:45donaldballFair warning; running on-prem is moderately operationally complex. If none of the cloud restrictions are deal-breakers, I'd think you'll probably have a better time with it esp. for experimenting.#2019-01-3022:46lilactown@alexmiller I would love to use Datomic Cloud, but I have a feeling that it will be politically as well as operationally complex#2019-01-3022:47lilactownour AWS account is not wholly owned by my team; we share one account for each environment (dev/qa/prod/etc.) with the entire organization 🙃#2019-01-3022:49alexmillerThat's not necessarily an obstacle, might be worth considering as cloud is really the better choice technically and operationally between these options#2019-01-3022:52lilactownI'm going to try and approach it on both fronts. I want to also explore the on-prem deploy in case I can't make headway with the cloud version#2019-01-3022:53lilactownlike I said, even using CF is not allowed for us atm 😕 the entire datomic cloud infra setup needs to be vetted by the team that handles the AWS accounts, services and architecture to see if it will behave well and should be allowed
Would love to kill that instance and just use the Datomic Cloud infrastructure going forward.
What would you suggest?#2019-01-3115:10marshallis your processing triggered by something external? i.e. something you can hook up to an SNS queue?#2019-02-0107:36danierouxThe long running process is a Kafka consumer, consuming a few thousand messages a second.
I'm not sure it fits in with an external trigger?#2019-01-3117:38JonathanHi, has anyone tried reading datomic data into a spark RDD? I saw the Nubank video/slides and I wonder if that’s still the state of the art.#2019-01-3117:531zaakHi @stuarthalloway, when can we expect Datomic Cloud will be available in ap-southeast-1?#2019-01-3117:53stuarthalloway@1zaak in the next release#2019-01-3117:561zaakcool thanks!#2019-01-3118:12joshkhhow y'all deal with (possibly) nil query parameters?
for example, a broken query to collect all public catalogue items and those catalogue items for which a user has particular access
(d/q '{:find [(pull ?catalogue-item [*])]
:in [$ ?person-id]
:where [
[?catalogue-item :view/visibility :public]
(or-join [?catalogue-item ?person-id]
[?person-id :access/owns ?catalogue-item])
]}
db
; could be nil:
<some-person-id>
)
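One way to handle the possibly-nil input, in the build-the-query-as-data spirit suggested later in the thread, is to branch on the input and only include the or-join when a person id is present. A sketch with a hypothetical helper; the attribute names are the ones from the question above:

```clojure
;; Sketch: returns an arg-map suitable for the 1-arg d/q. When
;; person-id is nil, only public catalogue items are matched and nil
;; never reaches the query; otherwise items the person owns are
;; included via or-join. :view/visibility and :access/owns are the
;; hypothetical attributes from the question above.
(defn catalogue-items-query [db person-id]
  (if person-id
    {:query '{:find  [(pull ?catalogue-item [*])]
              :in    [$ ?person-id]
              :where [(or-join [?catalogue-item ?person-id]
                               [?catalogue-item :view/visibility :public]
                               [?person-id :access/owns ?catalogue-item])]}
     :args [db person-id]}
    {:query '{:find  [(pull ?catalogue-item [*])]
              :in    [$]
              :where [[?catalogue-item :view/visibility :public]]}
     :args [db]}))

;; usage: (d/q (catalogue-items-query db person-id))
```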
#2019-01-3118:44grzmWhat strategies are people using to effectively SIGHUP all nodes in a query group, say, to restart/reload with a new configuration (but no code change)?#2019-01-3118:50marshall@grzm you should be able to do an ion deploy to force process restart#2019-01-3118:50marshalldo you need to down the instances or just the processes?#2019-01-3118:54grzmJust the processes.#2019-01-3118:54marshallyeah, i'd just redeploy your ion#2019-01-3118:55grzmGotcha.#2019-01-3118:55marshallthat will restart instances in a rolling fashion#2019-01-3122:09mynomoto@stuarthalloway What about sa-east-1, when will Datomic cloud be available there?#2019-02-0117:12m_m_mHi. Is it possible to build a pipeline from datomic queries? I have a lot of filters in my app, so I would like to create a protocol Filter and implement that protocol for each of my filters (using a different datomic query in each of them), then build a pipeline from them.#2019-02-0123:08currentoorwhen using datomic.client.api/q is it possible to have optional inputs? and only apply some where clauses when those inputs are specified?#2019-02-0205:50favilaIt's possible but awkward. Use a special sentinel value (not nil) and make rules or or-clauses that assert it matches or doesn't match the sentinel#2019-02-0205:52favilaUsually it's easier to dynamically build up the query clauses appropriate to your inputs using cond-> instead of pattern matching on the value of "empty" sentinel input values#2019-02-0220:13currentoormakes sense, thank you!#2019-02-0123:22Christian JohansenI don't think that's possible. But queries are data - just build the query dynamically before passing it to q#2019-02-0202:11johanatanwhat's the best way to reconcile "across time" queries to singular points in time across that series? i.e., return values adhering to relations as they existed at single points in time across large swaths of time.
is the best we can do: create as-of snapshots across all of time (i.e., at each tx-id) and run the query against each of those? or is there something that can be done with a join/filter involving ?tx and :db/txInstant in a historical query?#2019-02-0202:12johanatanmy naive approach is essentially resulting in a cross-join of all values of particular entity x with all values of attribute of related entity y across all of time (which is obviously not what I need).#2019-02-0205:54favilaIs your puzzlement about doing all of this in a query?#2019-02-0305:39johanatan@U09R86PA4 my problem is that a join (on a d/history db) seems to join across all of time; i.e., all attr A for entity Y values related to entity X that occur after X comes into existence are returned. a la a "cross join". I want to get attr A for related entity Y at each consistent point in time across all of history (preferably in a single query without mapping over each transaction ID and doing as-of on each tx-id [which would be very costly]). does that make sense?#2019-02-0205:55favilaI’m not sure what you mean by “reconcile to single points in time”#2019-02-0300:38Brian AbbottHey, cool!#2019-02-0300:38Brian AbbottGlad I found this channel#2019-02-0300:38Brian AbbottI have a lot of questions RE Datomic#2019-02-0300:50lilactownshoot! 
🙂#2019-02-0302:14Ian Fernandezguys, Google Cloud has a Datomic instance?#2019-02-0316:26hmaurerYou could run it by yourself on google cloud, but it will have to be datomic on-prem, that you manage, and not datomic cloud#2019-02-0302:24alexmillerNo#2019-02-0303:05Ian Fernandez=/#2019-02-0303:05Ian Fernandezthanks#2019-02-0305:41johanatanso, in other words, if i were to map over transaction IDs and do the query as-of each of them, 99.9% would be duplicate values for any particular entity X, related entity Y and attr A (because each transaction changes only a single entity typically).#2019-02-0305:43johanatanso another solution to this would be to "de-dup" the transaction IDs for a particular entity X and related entity Y before doing the queries and then do the as-of queries for each transaction ID actually involving a change to entity X or entity Y.#2019-02-0323:04favilaOr you could return the tx id from the query and pull later#2019-02-0323:05favilaOr you could create an as-of db and use it as an argument to a sub query#2019-02-0323:06favilaCould you be more concrete about the desired input and output you want? Tx is available to datalog so with some range filtering you can do some things in one query, but I don’t know if it’s worth it without a more specific problem#2019-02-0422:46johanatanWell the bigger problem is the sheer # of transactions that would be irrelevant. I need to find a list of txids where let’s say a particular entity (or any of its related entities) actually changed value. Then it is straightforward to do the query I care about only at those transition points. It would be infeasible to run this query on every txid.#2019-02-0519:56johanatan@U09R86PA4 ^^#2019-02-0523:44favila[?e ?a ?v ?tx ?added] clauses don't look at every tx, only those which involve ?e ?a ?v. 
I think you need a concrete example (some code) to suggest something more concrete#2019-02-0523:44favilait could be that you can't do what you want efficiently, but we're talking too abstractly for me to say for sure#2019-02-0523:45favilamaybe you could make a stackoverflow question?#2019-02-0721:14johanatanyea, honestly i didn't really understand [?e ?a ?v ?tx ?added] clauses. where can I find more information on that?#2019-02-0721:15johanatanalso, i can create a concrete example if I explore the provided material and still have an open issue.#2019-02-0721:16johanatanbut it does sound like my answer will lie in understanding that clause.#2019-02-0721:37faviladatomic is datoms#2019-02-0721:38faviladatoms are a tuple: [entity-id attribute-id value tx-id assert-or-retract-bool]#2019-02-0721:39faviladatalog clauses are datom pattern matches#2019-02-0721:39favilathis may help:#2019-02-0721:39favilahttp://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2019-02-0721:39favilathe maps are an illusion#2019-02-0721:40favilait's datoms at the bottom#2019-02-0721:42favilaand "time travel" is filtering datoms by tx#2019-02-0819:17johanatanok, thanks!#2019-02-1105:46johanatanone question: for a typical clause [?e :some/attribute ?some-value]: is this just a truncated form of the [?e ?a ?v ?tx ?added] clause? i.e., are ?tx and ?added optional? also what exactly does ?added mean, is it true for only the single point in time where an association is originally made?#2019-02-1105:46johanatan[and false at all other times]#2019-02-1113:24favilaOmission is truncation#2019-02-1113:25favilaIe match anything#2019-02-1113:26favilaAdded is true when the datom is asserted, false when retracted#2019-02-1113:26favilaYou will only see “false” in history dbs.#2019-02-1117:23johanatan:+1::skin-tone-2:#2019-02-0315:36adamfeldman@d.ian.b On Google Cloud, you can run Datomic On-prem yourself, and I believe you may use a managed service like Google Cloud SQL as the backing store. 
But yes, no Datomic Cloud on GCP, that product is tightly built around AWS services#2019-02-0406:20cl0jurianHello #datomic Is it possible to open bin/console for a url like datomic:mem://... ?
When I try that I get: Removing storage with unsupported protocol: dev = datomic: No storages specified
Any comments please?#2019-02-0412:25alexmillerI don't think the console works with mem dbs.#2019-02-0413:33cl0jurianYeah, that's more likely. Thanks @alexmiller#2019-02-0414:11danierouxTrying my first Ion push:
Dependency X has a :local/root. You must specify a :uname argument to push.
Does that mean that I have to upload X to a maven repository to be able to push? It's a local Java lib, and we just build it locally right now.#2019-02-0414:40alexmillerit means that because there is local state, the git status of your current project is not sufficient to capture its state, so you must specify a :uname to "name" the release#2019-02-0415:13danierouxSo I have to set up a mvn repo to refer to, to have a "strong" push?#2019-02-0415:17alexmilleror private git repo#2019-02-0415:17alexmilleror just supply a uname#2019-02-0415:20danierouxIt's a java lib, a private git repo won't help?
I'll set up a private repo on s3 and push there. Eventually I'd want a reproducible push.#2019-02-0415:31alexmilleroh, yeah, you'll need a published artifact for a java lib#2019-02-0414:42henrikCan you reference a local git repo to satisfy the demand?#2019-02-0414:44alexmillerno, no support for that (other than via :local/root) right now#2019-02-0417:06Oleh K.I've followed this tutorial https://docs.datomic.com/cloud/operation/client-applications.html#separate-vpc
but it doesn't work, the server crashes on start because it cannot connect to the database. How can I learn what is wrong with my setup?#2019-02-0508:58tslockeClient api question. Given ident is not available, how can I make sense of the 'a' in the :tx-data datoms after a transaction?#2019-02-0514:02favilaAnother query#2019-02-0514:27Peter ÖnnebyHello - I've got what seems like a jetty dependency issue between ring and the datomic client library#2019-02-0514:27Peter Önneby[com.datomic/client-pro "0.8.28"]
[ring "1.7.1"]
#2019-02-0514:28Peter Önneby#error {
:cause org.eclipse.jetty.http.CookieCompliance
:via
[{:type clojure.lang.Compiler$CompilerException
:message java.lang.NoClassDefFoundError: org/eclipse/jetty/http/CookieCompliance, compiling:(ring/adapter/jetty.clj:49:5)
:at [clojure.lang.Compiler analyzeSeq Compiler.java 7010]}
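A workaround often reported for this class of conflict is to make the Jetty artifacts explicit at the top level so both libraries resolve to one consistent Jetty version. A sketch only; the versions below are illustrative (the Jetty 9.4 line that ring 1.7.x uses), not verified against client-pro 0.8.28:

```clojure
;; Illustrative project.clj :dependencies fragment: pinning the Jetty
;; artifacts explicitly so ring's jetty adapter and the Datomic client
;; agree on one Jetty version. Versions are examples, not verified.
[org.eclipse.jetty/jetty-server "9.4.12.v20180830"]
[org.eclipse.jetty/jetty-http   "9.4.12.v20180830"]
[org.eclipse.jetty/jetty-util   "9.4.12.v20180830"]
[org.eclipse.jetty/jetty-io     "9.4.12.v20180830"]
```

This mirrors the kind of top-level-dependency fix described in the links that follow in the thread.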
#2019-02-0514:29Peter ÖnnebySome people have had similar issues - but no real solution found#2019-02-0514:31Peter ÖnnebyMust be a pretty common stack - is there a recommended exclusion to use here?#2019-02-0515:24Peter ÖnnebyI found a fix here: https://forum.datomic.com/t/dependency-conflict-with-ring-jetty/447/5#2019-02-0515:55johnj@ponneby also https://docs.datomic.com/cloud/troubleshooting.html#dependency-conflict#2019-02-0515:56Peter ÖnnebyThanks, I passed by there too but that did not work#2019-02-0516:45lilactownI actually switched to http-kit just to avoid all the deps conflicts#2019-02-0516:45lilactownsince I was only using it for local dev#2019-02-0516:49Peter ÖnnebyYeah I’ve read about that one#2019-02-0516:51Peter ÖnnebyFor info I was including datomic client in the reagent lein template which depends heavily on ring#2019-02-0516:51Peter Önnebynot sure if it is easy enough to replace jetty with http-kit in this context#2019-02-0516:59lilactownI think it is, but I haven't used that template#2019-02-0517:00lilactownlooks like there's some stuff here: https://www.http-kit.org/migration.html#2019-02-0517:00lilactownbut hey if it works ¯\(ツ)/¯#2019-02-0517:00lilactownI just ran into multiple conflicting deps when I was doing Ions development. YMMV#2019-02-0517:02Peter ÖnnebyThanks for the pointer, I will try that out#2019-02-0604:43TwiceHey guys! Sorry for a stupid question: how should I omit certain :db/ids in my query? Tried with (not [?e :db/id ?currId]) but that clearly doesn't work.#2019-02-0605:23favila[(!= ?e ?blacklisted-eid)] (or a literal)#2019-02-0605:24favilaFor larger sets (not [(contains? 
?bad-eid-set ?e)])#2019-02-0605:25favilaCan be faster#2019-02-0605:26favilaToo clever: (not [(identity ?e) ?blacklisted-eid])#2019-02-0605:27favilaAlso you can repeat the binding clause with a literal#2019-02-0605:29favila(not [?bound1 ?bound2 ?blacklisted-eid]) [?bound1 ?bound2 ?e]#2019-02-0605:30favila(This is probably going to be slower)#2019-02-0605:35TwiceThanks! Now it seems obvious 😁#2019-02-0611:17thomasHi Everyone... we are currently running Datomic on-prem version 5067 and management is wondering what it would take to upgrade to the latest version? Any idea? Any Incompatibilities? (we use postgres as a DB).#2019-02-0612:06chrisblomNormally you can just stop the transactor, update and start the updated transactor, and then update all the clients, but some versions have had special cases. 5067 is pretty old so better check https://docs.datomic.com/on-prem/changes.html#2019-02-0611:17thomasand what would be a good way to do this? do a Datomic DB dump... install new version and import?#2019-02-0612:07chrisblomWith dynamodb stop,update,start worked fine, but i'd do a backup before just to be sure#2019-02-0612:11chrisblomSee https://docs.datomic.com/on-prem/deployment.html#upgrading-live-system#2019-02-0613:40favilaWe’ve upgraded for 5 years (without changing storage) and have rarely had to do anything other than restart the txor with the newer version#2019-02-0616:29John MillerIon question: Can you deploy multiple projects to a single query group? We have several workloads that peak at different times and so could efficiently share the same compute resources. But they are different projects.#2019-02-0616:37marshall@jmiller Each query group can only be the target of a single Code Deploy application, so you could serve multiple applications from a single QG, but the code for all of them would need to be in the same Ion repository#2019-02-0616:38marshallor use a “master” repository that had deps on all the others#2019-02-0618:01John MillerThanks. 
That was how it looked but I wanted to make sure.#2019-02-0620:11tylerI’m running into some unexpected behavior using CAS, I was hoping someone might be able to point me in the right direction. I’m trying to transact a payload:
[[:db/add "account" :account/email (:email identity)]
 [:db/cas "account" :account/organization nil "org"]
 [:db/add "org" :organization/name (:name args)]
 [:db/add "org" :organization/id id]]
Where :account/organization is cardinality one and :account/email is unique identity. I’d expect that if I ran this transaction twice, the :db/cas would fail as the organization is no longer nil, however, I am able to transact this many times without failure. Feels like I’m missing something fundamental but I cannot see it.#2019-02-0620:41dustingetztempids#2019-02-0620:41dustingetzThose string tempids identify a new entity each time the transaction runs#2019-02-0620:42dustingetz:account/email maybe is not set to :identity ?#2019-02-0620:42tylerAh, so if I break it into two steps: Upsert the account and then upsert the organization it should work?#2019-02-0620:43dustingetzYou shouldn’t have to do that#2019-02-0620:44tylerIf I upsert the account first and then transact:
[[:db/cas [:account/email (:email identity)] :account/organization nil "org"]
 [:db/add "org" :organization/name (:name args)]
 [:db/add "org" :organization/id id]]
It behaves as expected.#2019-02-0620:45dustingetzIt could be a bug#2019-02-0620:46dustingetzIs the first version dangling a bunch of entities or something? Is there any evidence that an upsert didn’t resolve?#2019-02-0620:46tylerI’m trying that right now with a fresh db. I don’t think so but going to confirm.#2019-02-0620:50tylerThe accounts behave as expected, but a new organization entity gets created each time#2019-02-0620:51dustingetzhuh#2019-02-0620:52dustingetzAnd in the second form @ 3:44?#2019-02-0620:58marshall@tyler can you try with the map form instead of list form?#2019-02-0620:58marshallfor the things that arent the call to cas#2019-02-0620:59tylerYeah one sec#2019-02-0620:59marshall{:db/id "account" :account/email (:email identity)}
{:db/id "org"
 :organization/name (:name args)
 :organization/id id}
[:db/cas "account" :account/organization nil "org"]#2019-02-0620:59marshallsomething like that#2019-02-0621:06tylerIt has the same behavior with the map form it looks like.#2019-02-0621:07tylerFwiw it seems to only be a problem when there is a tempid in the entity position of :db/cas#2019-02-0621:10tylerI also tried hacking it with a boolean attribute to rule out weirdness with two entities and it still doesn’t work. This is on datomic cloud btw.#2019-02-0621:30marshallinteresting. i’m guessing there is an issue with resolving the tempid passed to a db fn#2019-02-0622:31favilaIn my experience db fns do not get their tempids resolved (at least for strings this seems impossible, since you don't know what type the db fn expects: a real string or a tempid), and it is impossible to resolve a tempid within a transaction. So in practice you can't use tempids with tx fns unless you can expand to something where it doesn't matter#2019-02-0622:36favilaA larger question is when is tempid resolution performed? It seems impossible in the most general case to know that a tempid resolves to an eid because of an indexed/upserting attribute+value. You can't see those assertions until all expansion is done, and you can't expand everything without running the tx fns.#2019-02-0623:27marshallAgreed. #2019-02-0623:28marshallIm not sure if that's well documented. Ill look into improving that#2019-02-0623:29marshallThe docs do say an entity id specifically https://docs.datomic.com/cloud/transactions/transaction-functions.html#sec-1-2#2019-02-0623:30marshallBut it may be worth making it more obvious somewhere #2019-02-0623:31tylerThanks this explains what I was seeing.#2019-02-0703:27favilaTx fns need to be written defensively. 
If you need a real eid (eg to read from the db vs just expanding to something template-like) use d/entid on input to coerce it, check for nils and strings and throw if you can’t make sense of it#2019-02-0703:28favilaIMO if cas can’t work with tempids it should throw when it encounters them#2019-02-0703:28favila(Also it would be nice if the replacement value could be nil to mean “retract”)#2019-02-0712:39dustingetzCas should be able to look at the valueType of the argument to know if a string is a ref or scalar#2019-02-0713:24dustingetz“In my experience db fns do not get their tempids resolved (at least for strings this seems impossible, since you don’t know what type the db fn expects: a real string or a tempid), and it is impossible to resolve a tempid within a transaction. So in practice you can’t use tempids with tx fns unless you can expand to something where it doesn’t matter” This doesn’t entirely follow for me, can you say more#2019-02-0715:09favila:db.fn/cas is fundamentally a function call#2019-02-0715:10favila[:db.fn/cas "some-string" :a "b" "c"] --- is "some-string" a tempid placeholder or a literal string#2019-02-0715:11favilawithout fancy type info how can datomic know?
so datomic just invokes the fn as-is, with a string#2019-02-0715:12favila(with tempid records--the old tempid method--at least it had a type so conceivably datomic could do deep walks of all tx fn input and replace tempids with eids)#2019-02-0715:13favilasecond problem: tempids that resolve to existing eids need to do so by datomic detecting a [:db/add "tempid" :unique-attr "unique-value"] pattern in the tx data#2019-02-0715:14favilahowever, you can only look for those patterns after all tx expansion is done (including those by tx fns)#2019-02-0715:15favilaso how can you substitute a tempid for a real id in a tx fn call if you can't know what substitutions to make until all tx fns are evaluated?#2019-02-0715:17favilathere might be hacky half-measure ways through this (double expansion for eg, or doing substitution eagerly during expansion instead of after, or implicit restrictions on tx fn behavior) but datomic doesn't do that#2019-02-0715:19favilaso bottom line, tx fns get as arguments exactly what was present as parameters (no transformation) and it's up to the tx fn to interpret it and throw if it can't do it's work#2019-02-0715:21favilad/entid can help with coercing lookup ref and ident sugar to real eids and you should use it in tx fn bodies where you want to allow such sugar (and check for nil)#2019-02-0715:21favilaand if your tx fn needs a real eid e.g. because it wants to do a db read and check some invariant, it can't accept a tempid#2019-02-0715:22favilait should throw#2019-02-0719:45dustingetzYou lost me in the first part, i think the “fancy type info” is perfectly well defined, e.g. 
this cas is on a :db.valueType/long which is known because :account/balance has schema [[:db.fn/cas 42 :account/balance 100 110]]#2019-02-0719:50favila:db.fn/cas is a function invocation#2019-02-0719:50favilait is not special or privledged#2019-02-0719:51dustingetzthe function knows the type of its arguments#2019-02-0719:51favilain this case, the caller would need to know the type of its arguments#2019-02-0719:51favila(the caller is the TX runner/applier itself)#2019-02-0719:51dustingetzi dont see why#2019-02-0719:52favilahow do you know that "bar1" in [:myfoo "bar1"] is a tempid or just the string "bar1"?#2019-02-0719:53dustingetzwhy does the caller need to know that, i would implement cas to pass the entity name to d/entity or some such#2019-02-0719:54favilathat moves to the second problem#2019-02-0719:54favilatempid resolution can't be done until all the ops are expanded#2019-02-0719:55favila[[:create-fancy-thing "foo"][:db.fn/cas "foo" :a nil 1]]#2019-02-0719:56favilasuppose create-fancy-thing expands to [:db/add "foo" :thing-type :fancy][:db/add "foo" :upserting-thing-id 123]#2019-02-0719:56favilasuppose :upserting-thing-id exists already, so "foo" should resolve to an existing eid#2019-02-0719:56favilahow can :db.fn/cas know that?#2019-02-0719:57dustingetzi dont see why it matters, :db.fn/cas knows “foo” can be treated as a ref, and in proper Clojure form allows us to shoot foot off if malformed#2019-02-0719:57favila:db.fn/cas can never assert the precondition#2019-02-0719:57favilanot that it can assert it true or false, it can't assert it at all#2019-02-0719:58favilawhat eid is "foo"?#2019-02-0719:58dustingetzi see#2019-02-0719:58favilaunknown and unknowable#2019-02-0719:58dustingetzso the effect has to be delayed, which means txfns need an expansion phase and an effect phase#2019-02-0719:58favilanot just that#2019-02-0719:58favilasuppose you expand, find the tempids#2019-02-0719:59favilawhat if the tx fn is not pure?#2019-02-0719:59favilayou'll have to 
expand again, this time with the tempid mapping available (e.g. smuggled in through d/entid)#2019-02-0719:59favilabut if txfns are not pure, the expansion will be different#2019-02-0720:00dustingetzby not pure you mean (launch-missles!) or something different#2019-02-0720:00favilad/squuid is sufficient#2019-02-0720:00dustingetzah#2019-02-0720:00favilajust that calling with same arguments+db does not produce exactly the same value every single time#2019-02-0720:00favilayeah that's stronger than "pure"#2019-02-0720:00favilaidempotent?#2019-02-0720:01favilathere are perfectly legit reasons to have that, eg issuing new ids#2019-02-0720:01favilanow to ensure retriability you need blessed randomness sources that are reseedable#2019-02-0720:01dustingetzThis is all onprem anyway, Cloud doesn’t even have this, right?#2019-02-0720:01favilathere's no bottom to this problem#2019-02-0720:02favilaanother approach: with a fancy enough type system you could discover all the places in the un-expanded tx that had tempids, then you could make a flow graph#2019-02-0720:03favilaif you expand them in the right order maybe it will all work out#2019-02-0720:03favilaah, but what if a tx fn itself makes a new tempid#2019-02-0720:03favilaanyway#2019-02-0720:04favilacloud doesn't have cas?#2019-02-0720:04favilait certainly has tx fns doesn't it?#2019-02-0720:04dustingetzi think cloud has a closed set of txfns? 
NOt sure#2019-02-0720:04favilayou can write cas yourself#2019-02-0720:04favilain fact cas has two annoying problems which caused me to write one myself anyway#2019-02-0720:04favilaone is not throwing on tempids#2019-02-0720:04favilathe other is not doing retractions#2019-02-0720:05favila[:db.fn/cas e :attr "x" nil] should work IMO#2019-02-0720:05favilahttps://docs.datomic.com/cloud/transactions/transaction-functions.html#2019-02-0720:05favilanope, cloud can do anything#2019-02-0720:06dustingetz“classpath transaction functions” you are right#2019-02-0720:06dustingetz“Transaction functions must be pure functions, i.e. free of side effects.”#2019-02-0720:06favilatrue for onprem too#2019-02-0720:07favilaI think you can't guarantee how many times they will run#2019-02-0720:07dustingetzso squuid is disallowed (and not a thing anymore anyway)#2019-02-0720:07faviladoes "pure" mean "same input same output"?#2019-02-0720:07favilaor just "no side effects"#2019-02-0720:07dustingetzi think they mean the former#2019-02-0720:08favilaperhaps, but I doubt they hold to it#2019-02-0720:08favilaif they were super strict about it they could do double-expansion like I talked about#2019-02-0720:08favilabut I doubt they would ever consider that#2019-02-0720:09dustingetzThanks for the deep dive#2019-02-0720:09favilanp, it's a really subtle gotcha#2019-02-0720:13favilanm double expansion won't work#2019-02-0720:14favilathe tx fn can't produce the same value for the same input (or at least can't without severely limiting its usefulness)#2019-02-0720:14favilathe first time it's called (d/entid "foo") will be nil#2019-02-0720:14favilathe second time it won't be#2019-02-0720:14favilahow can a fn produce the same output both cases?#2019-02-0720:14favilaeven cas can't do that#2019-02-0720:15favilaso double expansion is also a no-go#2019-02-0621:42kennyDatomic free uses io.netty/netty-all 4.0.39.Final. We are using other dependencies that require Netty 4.1. 
If I try excluding io.netty/netty-all from Datomic, I get an exception:
Execution error (ClassNotFoundException) at java.net.URLClassLoader/findClass (URLClassLoader.java:382).
io.netty.handler.codec.http.HttpRequest
Is there a way to make Datomic work with a newer version of Netty?#2019-02-0621:47kennySeems like this has occurred before: https://clojurians-log.clojureverse.org/datomic/2016-07-04. Any solutions?#2019-02-0621:49kennyI updated to org.apache.activemq/artemis-core-client {:mvn/version "1.5.0"} and it appears to work.#2019-02-0622:02spiedendo ions have any story for application initialization? e.g. ion-starter just does a lazy initialization of the db in get-connection — presumably there’s no lifecycle to hook into for application initialization like this?#2019-02-0623:10devnhttps://github.com/Datomic/ion-event-example -- would anyone in here mind cloning and running clojure deps to verify it's not just me this is broken for?#2019-02-0623:16lilactown@devn clojure deps what are you expecting to happen when you run that command, and what are you getting?#2019-02-0623:17devnorg.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for com.datomic:ion:jar:0.9.26
Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.datomic:ion:pom:0.9.26 from/to datomic-cloud (): Unable to load AWS credentials from any provider in the chain
Caused by: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain
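The "provider chain" in this error is the AWS SDK's default credential lookup: environment variables first, then the profile named by AWS_PROFILE in ~/.aws/credentials, then a [default] profile, then instance metadata. As the thread works out, with no [default] profile the profile name has to be exported explicitly; a sketch, where the profile name is a placeholder:

```shell
# Point the AWS default credential provider chain at a named profile
# before resolving deps. "my-profile" is illustrative -- use a profile
# actually defined in ~/.aws/credentials.
export AWS_PROFILE=my-profile
echo "dependency resolution will use AWS profile: $AWS_PROFILE"
```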
#2019-02-0623:18marshall@devn you need aws creds in your env#2019-02-0623:18devni believe i have creds, i will double check#2019-02-0623:18marshallDoesnt matter what they are, as that repo is publicly readable, but they need to be available #2019-02-0623:19devnstill no dice-- 🤔#2019-02-0623:19marshallEither envars or profile set up with aws-configure#2019-02-0623:20devni have creds in ~/.aws/config -- two profiles, with no default. i will try to s3 ls that bucket.#2019-02-0623:20devnerr creds in ~/.aws/credentials#2019-02-0623:21marshallIf no default then you need to set your profile name#2019-02-0623:21marshallVia AWS_PROFILE envar#2019-02-0623:21devnah, yeah, that's the ticket#2019-02-0623:21devnthanks @marshall#2019-02-0623:21marshallNp#2019-02-0623:24devnactually, turned out that it wasn't a default missing, but instead that i hadn't updated my session credentials. my bad! thanks again.#2019-02-0703:09kennyHow would I pass the proxy host when calling client? I only see a way to pass the :proxy-port: https://docs.datomic.com/client-api/datomic.client.api.html#var-client#2019-02-0713:17marshallthe proxy host is always localhost (for the socks proxy script)#2019-02-0716:01kennyWhy does it have to be? What if my service is running in a docker locally and I’m running the socks proxy locally? The docker can’t connect to it because it assumes localhost. The host should be host.docker.internal. #2019-02-0716:39kennyHaving the ability to set the proxy host would solve this problem.#2019-02-0712:33danierouxNew to Datomic Cloud, I cannot find this log entry:#2019-02-0712:34danieroux (cast/event {:msg "SQS Lambda invoked" ::json input})#2019-02-0713:17marshallcast sends messages/events to the cloudwatch stream for your datomic system: https://docs.datomic.com/cloud/operation/monitoring.html#searching-cloudwatch-logs#2019-02-0713:26danierouxI'm stuck on "Click on the Log group". 
I don't have that log group.#2019-02-0713:26danierouxI filed a support ticket with more details, I think the CloudFormation went awry.#2019-02-0719:49ozHello, we are using Datomic Cloud in production and looking to start making use of IONs. We got a couple of questions we need some help on.#2019-02-0719:52oz1. We currently run multiple Datomic Cloud installs one for each of our code environments test, dev, stage, prod. It's was brought up that maybe this isn't the ideal way to handle this. Instead it's suggested that maybe we should be using 1 install and use different db's and query groups to allow for multiple environments in a single Datomic Cloud install. Anyone have any insights on the suggestion of a single Datomic Cloud install to service multiple environments?#2019-02-0719:56oz2. Assuming the Clojure code deployed in an ION requires an system level dependency (a linux specific library or installed cli application) how do we install that in the EC2 instance that ION is deployed to?#2019-02-0720:00marshall@ozanzal regarding #1 - either is fine, the decision probably depends on your specific isolation and resource sharing requirements. I’d tend to keep prod as a separate system itself and potentially run stage/qa as a merged system, and let developers have individual query groups or solo systems for dev#2019-02-0721:07johnj@ozanzal #2 is not supported, you'll have to hack it yourself to achieve that to overcome whatever they have in place to prevent it,#2019-02-0721:07johnjor be a nice citizen and use lambdas#2019-02-0721:24oz@lockdown- gotcha#2019-02-0721:26ozThanks y'all!#2019-02-0803:10dogenpunkAre requests for documentation updates handled through JIRA?#2019-02-0815:12jaretYou can ping me or ship them into <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>. 
Probably best to use the e-mail as it would generate a case that the whole team could see.#2019-02-0815:28dogenpunkGreat, thanks!#2019-02-0809:39stijnI'm trying to use REBL with datomic, but the navigation into :db/id just shows an integer. I'm using com.datomic/client-cloud {:mvn/version "0.8.71"}. Am I missing something?#2019-02-0817:54lilactownAFAICT nav support hasn't landed for datomic yet#2019-02-0817:54lilactowncould be wrong. but i also couldn't get it to work#2019-02-0816:03johnjIf you need an email entity :email/address :email/type and a generic attribute to reference it, what would you call it? :email/emails ?#2019-02-0817:29marcolUsing delete-database on datomic cloud does not seem to remove the datoms, is this normal? Shouldn't it?#2019-02-0818:10marshallIf you mean datoms as reported in the dashboard under the Datoms metric, they will be cleaned up, but not immediately.#2019-02-0818:14marcolActually I meant more the data that seems to remain in DynamoDB that is associated to the deleted database#2019-02-0821:30marshallThose segments will get garbage collected at a future point, but not immediately#2019-02-0900:53marcolPerfect, Thanks for the answer!#2019-02-0817:56okocimis there any way to show the “query plan” in datomic cloud? It would be nice to see how the query will be executed against the indexes if only to get a better sense of how to put queries together. I may be thinking about this the wrong way, but I’ve always found the “explain” functionality of traditional RDBMSs to be quite useful in practice.#2019-02-0818:23favilaIt’s not very elaborate#2019-02-0818:24favilaFor eg clauses are never reordered#2019-02-0818:25favilaThere was a third party query explainer. Third party was possible because the optimizations are very simple#2019-02-0818:25favilaThere is none built in#2019-02-0818:33okocimcool thanks. I a third party one here:
https://github.com/dwhjames/datomic-q-explain
which should be simple enough to adapt for the client api.
appreciate the help#2019-02-0818:56okocimlooks like this doesn’t handle the parsing the queries very well for things like (or-join) and (and) etc. :shrug:#2019-02-0819:26favilathose are new features#2019-02-0819:26favilaprobably didn't exist when it was written#2019-02-0819:26favilathey are just sugar for rules though#2019-02-0819:26favilayou can rewrite as rules#2019-02-0913:34Oleh K.https://docs.datomic.com/cloud/whatis/overview.html#sec-8
here they say about backup and export features for Datomic Cloud, but I see none of them in the documentation. Do they exist?#2019-02-0922:25johnj@okilimnik I think your best option now is https://docs.datomic.com/cloud/query/raw-index-access.html#2019-02-0922:28johnjI understand backups in cloud are no a priority for them, their rationale is: data is stored redundantly in aws (dynamo, s3, EFS, local SSD).#2019-02-1000:13marcol@lockdown- @okilimnik Wouldn't it be possible to use the DynanoDB's backup feature to backup Datomic Cloud?#2019-02-1004:04johnjNope, you can't restore from there#2019-02-1013:13marcol@lockdown- Sorry for my ignorance but why wouldn't it work?#2019-02-1109:21Ben Hammondis it possible to query against an as-of db within a datalog query; something like
(d/q '[:find ?e ?attr-name ?v ?tx ?user-id ?product-initial ?ingested-at
       :in $ [[?e ?a ?v ?tx]]
       :where
       [?e :user/id ?user-id]
       [(d/asof $ ?tx) ?e :historical/product-initials ?product-initial]
       [?tx :db/txInstant ?ingested-at]
       [?a :db/ident ?attr-name]]
     datomic-db
     previously-found-datom)
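As the replies note, a database value cannot be constructed inside :where; every data source must arrive through :in. One way to apply the subquery suggestion is to run the as-of lookup from application code, one inner query per datom. A sketch only, reusing the names from the question and assuming the peer API with a live connection:

```clojure
;; Sketch (assumed names, peer API): build the as-of db outside the query
;; and pass it in as the data source, rather than calling d/as-of in :where.
(require '[datomic.api :as d])

(defn product-initial-as-of
  "For a datom [e a v tx], read :historical/product-initials as of tx."
  [db [e _a _v tx]]
  (d/q '[:find ?product-initial .
         :in $historic ?e
         :where
         [$historic ?e :historical/product-initials ?product-initial]]
       (d/as-of db tx)
       e))
```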
#2019-02-1113:30favilaNo. All dbs must be :in parameters in datalog and cannot be bound dynamically.#2019-02-1113:30favilaYou could however invoke a subquery with your dynamically made db#2019-02-1109:21Ben Hammonddo I have to pass the historical db as a parameter to the datalog?#2019-02-1113:31favilaYes#2019-02-1109:24Ben Hammondlike (d/q '[:find ?e ?attr-name ?v ?tx ?user-id ?product-initial ?ingested-at
       :in $current $historic [[?e ?a ?v ?tx]]
       :where
       [$current ?e :user/id ?user-id]
       [$historic ?e :historical/product-initials ?product-initial]
       [$current ?tx :db/txInstant ?ingested-at]
       [$current ?a :db/ident ?attr-name]]
     datomic-db
     (d/as-of datomic-db (:tx previously-found-datom))
previously-found-datom)#2019-02-1109:57steveb8nQ: I’m getting ready to add a circuit breaker into my Ions for http calls. I’m considering https://github.com/sunng87/diehard and https://github.com/Netflix/Hystrix/tree/master/hystrix-contrib/hystrix-clj. Anyone got any experience with this pattern in Ions yet?#2019-02-1115:14mpenethystrix is somewhat deprecated I think fyi#2019-02-1115:29Joe LaneHystrix team seems to recommend https://github.com/resilience4j/resilience4j going forward.#2019-02-1116:44dazldthat looks pretty interesting - do you know of any open source CLJ projects that use it, to see what the integration looks like? especially things like retrying..#2019-02-1118:14Joe LaneJust found it today, so unfortunately no.#2019-02-1121:37steveb8nthanks. I didn’t know Hysterix was EOL#2019-02-1122:24johanatandoes anyone know why the following would only seem to return the first 3-tuple rather than all of them across time (when run against (d/history db):
(def price-history-for-underlying
  '[:find ?price ?time ?tx
    :in $ ?sym
    :where
    [?und :underlying/symbol ?sym ?tx]
    [?und :underlying/price ?price ?tx]
    [?q :quote/underlying ?und ?tx]
    [?q :quote/time ?time ?tx]])
do i need an aggregating function somewhere? (these two related entities are always inserted in the same call to transact and thus should always have the same ?tx across them).#2019-02-1122:25johanatan[not only are these inserted in the same call to transact but the underlying entity is actually nested under the quote entity via it's underlying attribute (both maps)].#2019-02-1122:28favila:find returns a set#2019-02-1122:28johanatan[the one wrinkle is that there are many quotes with the same underlying value all transacted at once. but that should be fine as long as everything inside the collection of maps passed to transact receives the same tx-id]#2019-02-1122:28favilaif [?price ?time ?tx] is the same across all you will get only one item#2019-02-1122:28johanatanyes, i'm getting a set of one 3-tuple rather than a set of many 3-tuples#2019-02-1122:29favilaif you include ?sym or ?q does it change?#2019-02-1122:29johanatanlet me try#2019-02-1122:30favilayou're also joining all ?tx which seems odd#2019-02-1122:30favilaunless you actually do transact all at once#2019-02-1122:30johanatani do actually transact all at once#2019-02-1122:30johanatanwhat i want is for tx to be fixed to various points in time#2019-02-1122:30johanatanok, adding ?sym did not get many but adding ?q did#2019-02-1122:31favilayou want tx-time = time-of-record?#2019-02-1122:31favilaor domain-time?#2019-02-1122:31johanatandoesn't matter. tx-id or time both should be identical across all of these#2019-02-1122:31favila:quote/time = tx time?#2019-02-1122:31johanatan:quote/time is roughly wall clock time at the point of the insertion#2019-02-1122:31favilawhy not == tx time?#2019-02-1122:32johanatanbecause they're not equal 🙂#2019-02-1122:32favilathat's the red flag to me. 
generally it's really really hard to make tx time anything but a git-commit like audit history#2019-02-1122:32favilayou can set :db/txInstant#2019-02-1122:32favilaas long as you don't set it before the most recent one#2019-02-1122:32johanatantime is just a piece of data. doesn't matter if it is skewed by a few hundred milliseconds. it is the time that the server quoted the price to me#2019-02-1122:32johanatanso, txInstant will be several hundred milliseconds later#2019-02-1122:33johanatani am more interested in knowing that the price in question was the price at the moment that the server reported it for#2019-02-1122:33johanatanhence its inclusion in the query#2019-02-1122:33favilaok#2019-02-1122:34favilaso winding back, unless you include ?q you are not guaranteed a result per ?q#2019-02-1122:35favilayou can use :with to include a binding for the purposes of computing the result set but do not want to include it in the result set#2019-02-1122:36favilahttps://docs.datomic.com/on-prem/query.html#with#2019-02-1122:36johanatanhmm, ok. with may be the answer here. including q only got me the fan-out at a particular moment in time. i am still not getting the fan-out across all moments in time#2019-02-1122:36favila:with ?q would ensure you got at least one result per q#2019-02-1122:36johanatani am more interested in getting at least one result per tx#2019-02-1122:37johanatanand including ?tx in the :find doesn't seem to be doing that#2019-02-1122:37favilaand you are sure there is a unique ?q and ?und per ?tx#2019-02-1122:37johanatanpretty sure. if that isn't true then much bigger problems are here. 
how can i verify it?#2019-02-1122:38favilajust remove the tx#2019-02-1122:38favilafrom query and find#2019-02-1122:38favilasee what you get#2019-02-1122:38favila(include the q)#2019-02-1122:39favilaor you can do a clause at a time#2019-02-1122:39favila[:find ?q ?und ?tx :where [?q :quote/underlying ?und ?tx]#2019-02-1122:39johanatanbut if i'm not including the ?tx, how can i know how many ?tx's are accounted for in the result?#2019-02-1122:40favilaremoving the tx is to be sure you actually have multiple entities#2019-02-1122:40johanatanah, right. sorry, lol#2019-02-1122:40johanatanwell this query is taking a long time#2019-02-1122:40johanatanso, it is likely gonna return a lot of data#2019-02-1122:41johanatani have to leave now. i'll play more with this later. thx!#2019-02-1123:36johanatanso, i got OutOfMemoryError on that query so i think it's safe to say that there are multiple entities#2019-02-1123:52superancetreHi, I'm sure you've got the question a thousand time before#2019-02-1123:52superancetrebut I am trying my hands with datomic#2019-02-1123:52superancetrewas trying to setup the dev storage#2019-02-1123:52superancetreI put my key in the properties files#2019-02-1123:53superancetrelaunch a transactor with .\bin\transactor .\config\dev-transactor.properties#2019-02-1123:54superancetreI get back a message System started datomic:, storing data in: data#2019-02-1123:54superancetreThen I try to launch a peer-server like in the tutorial using in memory storage#2019-02-1123:55superancetreCommand is bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d mydb,datomic:#2019-02-1319:36marshallThe peer server can’t create databases in dev storage. you’ll need to connect a REPL and create the database first, then launch the peer server#2019-02-1319:37marshalli.e. 
https://docs.datomic.com/on-prem/dev-setup.html#create-db#2019-02-1123:55superancetreAnd I get an error Exception in thread "main" java.lang.RuntimeException: Could not find mydb in catalog#2019-02-1123:56superancetreSo I guess I need to create a catalog called mydb but how I am supposed to do?#2019-02-1123:56superancetreI'm sorry I am very confused by the Datomic Architecture right now#2019-02-1123:58johanatan@favila why is this only returning one transaction id?
core=>
(def price-history-for-underlying
  '[:find ?tx
    :with ?und
    :in $ ?sym
    :where
    [?und :underlying/symbol ?sym ?tx]
    ; [?und :underlying/price ?price ?tx]
    ; [?q :quote/underlying ?und ?tx]
    ; [?q :quote/time ?time ?tx]
    ])
#'core/price-history-for-underlying
core=>
core=> (d/q price-history-for-underlying (d/history (db)) "OEX")
[[13194139534793]]
core=>
I was hoping to get all tx's in which that symbol is involved#2019-02-1123:58johanatan[and i think this is the root of the other problems i'm having]#2019-02-1123:59favilabecause there is only one ?und?#2019-02-1123:59favilawhat does a transaction look like#2019-02-1123:59johanatanthere is only one ?und but across time there are many#2019-02-1200:00johanataneach time there is a transaction, a new one replaces the existing one#2019-02-1200:00favilaIt looks here like you have one ?und per ?sym#2019-02-1200:00johanatanyes, correct. and many :quotes per ?und#2019-02-1200:01favilawhy do you expect many tx if you have only one und per sym?#2019-02-1200:01johanatanbecause many calls to transact for that und/sym pair are occuring#2019-02-1200:01johanatanroughly one every 10 minutes#2019-02-1200:02johanatanthe database is also growing so that "history" is there somewhere#2019-02-1200:02favilaassertions which don't change the value that is there already do not add datoms#2019-02-1200:02johanatanoh, gotcha. in this case, the price is the part that should be changing#2019-02-1200:03favilaif :my-attr is card-one, [:db/add 123 :my-attr "my-value"] will only add a datom the first time#2019-02-1200:03johanatanhmm, so perhaps i need ?tx on the :price and the :time attrs to tie these together#2019-02-1200:03favilawhy are you joining on tx?#2019-02-1200:04johanatanbecause i want the data to be consistent per some point in time#2019-02-1200:04johanatani.e., the tx#2019-02-1200:04johanatantx is a synch point#2019-02-1200:04favilayour data is consistent by virtue of your DB value#2019-02-1200:05johanatanif these were one entity rather than two, no need for join#2019-02-1200:05johanatanbut since it is split into two related entities, i need to fix them to the same point in time no ?#2019-02-1200:06johanatanyes, this worked!#2019-02-1200:06johanatanthe answer was:
(def price-history-for-underlying
  '[:find ?price ?time ?tx
    :with ?q ?und
    :in $ ?sym
    :where
    [?und :underlying/symbol ?sym]
    [?und :underlying/price ?price ?tx]
    [?q :quote/underlying ?und]
    [?q :quote/time ?time ?tx]])
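Anticipating favila's advice further down the thread (naming history dbs $h and matching only assertions), the same query against a history db could be sketched like this; it is illustrative and not verified against a live db:

```clojure
;; Sketch: the query above, with the history db named $h in every data
;; clause and the fifth (added?) position bound to true so retraction
;; datoms in the history db do not match.
(def price-history-for-underlying
  '[:find ?price ?time ?tx
    :with ?q ?und
    :in $h ?sym
    :where
    [$h ?und :underlying/symbol ?sym]
    [$h ?und :underlying/price ?price ?tx true]
    [$h ?q :quote/underlying ?und]
    [$h ?q :quote/time ?time ?tx true]])
```

As plain data this is just a quoted vector; it would be passed to d/q along with (d/history (db)).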
#2019-02-1200:06johanatani am now getting all prices across all time#2019-02-1200:07johanatanthanks!#2019-02-1200:07favila [?und :underlying/price ?price ?tx]#2019-02-1200:07favilaI don't understand this#2019-02-1200:07johanatanit ensures that the ?und and the ?q that we select are "paired" together, married together if you will, at a single point in time, namely at ?tx point in time.#2019-02-1200:08favilais $ a history db or an ordinary db?#2019-02-1200:08johanatanhistory#2019-02-1200:08favilathis will include retractions too#2019-02-1200:09favila:underlying/price is cardinality-one and you keep writing over it?#2019-02-1200:09favilaor is it cardinality-many?#2019-02-1200:09johanatanyea, it's one#2019-02-1200:09johanatanand yea keep writing over it#2019-02-1200:09favilaso, if your price doesn't change from one-tx to the next, then you won't join#2019-02-1200:09johanatanthat's ok#2019-02-1200:10favilathe TX of the price will be from the previous tx#2019-02-1200:10johanatani only care when prices actually change#2019-02-1200:10favilaok#2019-02-1200:10johanatanthat's fine#2019-02-1200:10johanatanso it will effectively just "skip" over those#2019-02-1200:10favilait's conventional to mark history databases with $h since they usually change the nature of a query substantially#2019-02-1200:11favilainstead of just $#2019-02-1200:11johanatanah, thx#2019-02-1200:11favilayou should also probably include true to ensure you are only matching assertions#2019-02-1200:11favilahistory dbs also include retractions#2019-02-1200:12favilaIn this case set-wise results will make it all work out but it's easy to imagine getting surprised by duplicates with some query changes#2019-02-1200:12johanatanhmm, what is a "retraction" in this context? like a deletion?#2019-02-1200:14johanatanbtw, $h isn't working for me.
it says "did you forget to pass a database?"#2019-02-1200:14johanatanbut just $ works fine#2019-02-1200:15favilayou put $h in front of every clause#2019-02-1200:16favila[$h ?e ?a ?v]#2019-02-1200:16favilaretraction is a deletion yes#2019-02-1200:16johanatanhmm, ok#2019-02-1200:16favilayou can use d/datoms to see the actual stuff these clauses are matching against#2019-02-1200:17favilaif you use a history db, you will notice some datoms that end with "false"#2019-02-1200:17favilathose are retractions#2019-02-1200:40johanatanCool, thx#2019-02-1213:13m_m_mHi. What do I have to do to "update" my item in the Datomic database and not add another one?#2019-02-1213:19benoit@m_m_m Your transaction must include one of the attributes that identifies the entity you want to update. https://docs.datomic.com/on-prem/identity.html#2019-02-1213:31superancetreHi, I'm having trouble finding which dependency to add to my project.clj to complete the dev setup#2019-02-1213:31superancetrenotably requiring datomic.api like (require '[datomic.api :as d])#2019-02-1213:31superancetreIn the getting started the dependency is [com.datomic/client-pro "0.8.28"]#2019-02-1213:32superancetrebut it doesn't work when using (require '[datomic.api :as d]), any pointer please?#2019-02-1319:39marshallhttps://docs.datomic.com/on-prem/integrating-peer-lib.html#2019-02-1319:39marshallthe peer library needs to be in your deps.edn, project.clj, or maven pom file#2019-02-1601:18superancetreThanks!#2019-02-1214:17superancetreAlso doing bin\maven-install then mvn install:install-file -DgroupId=com.datomic -DartifactId=datomic-pro -Dfile=datomic-pro-0.9.5786.jar -DpomFile=pom.xml generates an error#2019-02-1214:19superancetremy bad, on windows you have to escape things, so the correct command is mvn install:install-file -DgroupId="com.datomic" -DartifactId="datomic-pro" -Dfile="datomic-pro-0.9.5786.jar" -DpomFile="pom.xml"#2019-02-1214:23danierouxI'm on cloud:
export DATOMIC_ENV_MAP="{:env :local}"
(ion/get-env) => "{:env :local}"
... it does not get read as EDN. What am I missing?#2019-02-1214:35danierouxAh.
repl: export DATOMIC_ENV_MAP={:env :local}#2019-02-1214:35danierouxAnd not:
repl: export DATOMIC_ENV_MAP="{:env :local}"
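The difference between the two export forms comes down to the exact string the process ends up seeing in the variable; a minimal pure-Clojure sketch of the parse (values taken from the thread):

```clojure
(require '[clojure.edn :as edn])

;; With export DATOMIC_ENV_MAP={:env :local}, the process sees the bare
;; braces, and the value reads as an EDN map:
(edn/read-string "{:env :local}")        ;; => {:env :local}

;; If literal double quotes survive into the value (as can happen with
;; Makefile quoting), the same read yields a string, not a map:
(edn/read-string "\"{:env :local}\"")    ;; => "{:env :local}"
```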
... Makefiles are weird.#2019-02-1220:09johanatandoes anyone know how to use a java.util.Date instance in a predicate expression clause:
e.g.,
[(<= ?time $before)]
when $before is an instance of java.util.Date, I get the following error:
ClassCastException java.base/java.util.Date cannot be cast to clojure.lang.Symbol clojure.lang.Symbol.compareTo (Symbol.java:105)#2019-02-1313:20favilaHuh I just use the operators all the time. Where is $before from? It is named like a db#2019-02-1505:50johanatanOops, typo. ?before#2019-02-1505:52johanatanInterestingly enough it seemed to work with vars named that way#2019-02-1505:52johanatanMaybe it's just convention#2019-02-1220:15johanatanah, i think i need .before#2019-02-1220:15johanatanper: https://groups.google.com/forum/#!msg/datomic/X3I0Ozd4d0I/EkANn6HRn3UJ#2019-02-1221:02mishaNot actually a spec/datomic question, but closely related.
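A side note on comparing java.util.Date values in plain Clojure, relevant to the .before conclusion johanatan reaches below: clojure.core/<= is numeric, while Date exposes its own ordering methods. A small interop sketch (the dates are made up):

```clojure
(import 'java.util.Date)

;; clojure.core/<= only takes numbers; Date has .before/.after, and
;; .getTime yields a long that numeric operators can compare.
(let [earlier (Date. 1000)    ;; 1 second after the epoch
      later   (Date. 2000)]   ;; 2 seconds after the epoch
  [(.before earlier later)
   (<= (.getTime earlier) (.getTime later))])
;; => [true true]
```

In a query, this style would appear as a predicate clause such as [(.before ?time ?before)].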
Do you have any tricks to alleviate some pain of "broken sugar" for long namespaces of a
... Datomic ident style naming convention where :datomic.client.protocol/response enumerated values are like :datomic.client.protocol.response/body, which breaks the sugar
other than "suck it up"?
This often eliminates most of let map destructuring too.
(from this short sub-thread: https://www.reddit.com/r/Clojure/comments/8ousfs/namespaced_keywords_question/e0844vk)#2019-02-1221:08benoitNamespaces are usually the solution to this. Clojure uses them. XML too. I have no idea if this notion of namespace will come to spec though.#2019-02-1221:09benoitBut no I have no trick except to not do that for every single piece of data inside your system 🙂#2019-02-1221:09mishanot sure what do you mean, @me1740. like "ns declared in some header"?#2019-02-1221:09benoitGlobal names make the most sense for public interfaces.#2019-02-1221:10benoitYes, I meant a mechanism to declare the namespace for a given context.#2019-02-1221:12lilactownthe problem is that key namespaces don’t compose with the normal keyword operations. you can’t derive :datomic.client.protocol.response.error/status directly from :datomic.client.protocol.response/error#2019-02-1221:12mishayeah, but as soon as you have several origins for the "same" key-val, having short implicit ns for your app key, and long global ns for external key – is kinda eew e.g.: :myapp/email and :oauth2.gmail/email (assuming ofc gmail ns would be much longer :) )#2019-02-1221:14mishayeah, this is probably what I'm fishing for, @lilactown. some approach to juggle these "hierarchical"-but-not-really keywords#2019-02-1221:15mishawhich does not break various destructuring sites, go-to-spec-definition in IDEs, etc. along the way#2019-02-1221:16lilactownyeah. you can probably come up with a clever helper function for your s/def’s, but it falls over when trying to destructure#2019-02-1221:18mishaspeaking of destructuring: as soon as you have :foo/id and :bar/id in the same map – you often have to go w/o destructuring too :(#2019-02-1221:18lilactownyeah. it’s really annoying how destructuring completely loses the keyword ns. 
merging maps is easy but tearing off data requires more work#2019-02-1221:19mishawhich, I hate to admit, makes :foo-id and :bar-id more appealing.#2019-02-1221:19lilactownI usually end up doing something like:
{foo-id :foo/id bar-id :bar/id} big-map
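A concrete sketch of the trade-off being discussed, with made-up keys: namespaced :foo/keys destructuring binds only the name part of the keyword, so :foo/id and :bar/id collide, and the map-form shown above becomes the workaround:

```clojure
(def big-map {:foo/id 1, :bar/id 2})

;; :foo/keys binds the name part, dropping the namespace:
(let [{:foo/keys [id]} big-map]
  id)
;; => 1

;; with both :foo/id and :bar/id present, the bound names would collide,
;; so the verbose map-form is needed to keep both values:
(let [{foo-id :foo/id, bar-id :bar/id} big-map]
  [foo-id bar-id])
;; => [1 2]
```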
#2019-02-1221:20mishawhich is often longer than just (:bar/id big-map) once later in the fn
:datomic.client.protocol.response/error#2019-02-1221:24mishafrom short and sweet hello world form to enterprise in 2 seconds opieop#2019-02-1221:36mishaalso clojurescript does not have the alias fn (?)#2019-02-1307:17dmarjenburghQuestion about setting up Datomic Cloud. If everything is setup in your own account with CloudFormation templates, how is the instance usage for billing monitored? I’ve set it up in two different accounts and noticed the cloudformation templates are exactly the same. Are the instances communicating with the marketplace?#2019-02-1319:35marshallThis is handled via AWS marketplace internal monitoring. The AMI itself has marketplace-related modifications that allow them to bill hourly#2019-02-1319:35marshallfor any more specifics about that you’d have to look at marketplace docs and/or support#2019-02-1315:21grzm'lo all. I'm back at trying to help a colleague use ions on Windows with IntelliJ/Cursive. AFAICT, deploying ions requires using the command line clj tool and clojure.tools.deps, which isn't supported on Windows. Does anyone have experience with such a setup?#2019-02-1315:26grzmMy current strategy is to use Windows Subsystem for Linux (WSL) and run clj there. That works (i.e., we've successfully gotten a REPL running in WSL using clj). The next trick is connecting from Cursive to a repl started from a WSL repl. From what I can tell, the "remote" option from Cursive expects an nREPL connection, not a socket repl connection, so when I've attempted to create a "remote" REPL connection from Cursive, it hangs with "Connecting to remote nREPL server". (I have been able to successfully connect to the clj-repl from Emacs.) 
Any suggestions or pointers most welcome!#2019-02-1316:08johnjI would ask in #cursive if someone knows how to connect to the built-in socket repl#2019-02-1316:08johnjI would ask in #cursive if someone knows how to connect to the built-in socket repl#2019-02-1316:48grzmYeah, I've already cross-posted.#2019-02-1317:27johnjCool, why not just start an nrepl from clj ?#2019-02-1317:29johnjhttps://nrepl.org/nrepl/0.6.0/usage/server.html#_using_clojure_cli_tools#2019-02-1318:08grzmThat's an idea. @U0CJ19XAM brought up https://github.com/mfikes/tubular as well, which I was able to get working.#2019-02-1323:41grzm.@U0567Q30W confirmed tubular is the way to go until he provides native support.#2019-02-1322:30johanatanis it possible to have :find return an associative data structure rather than a tuple (i.e., to provide "keys" for the values being returned) ?#2019-02-1322:44johanatanis the answer a "map specification pattern" for the pull expression?#2019-02-1323:49johanatanfor anyone reading along at home, another way (which I ended up going with) is to have:
[(hash-map :key-1 ?val :key-2 ?val2) ?your-map-name]
as one of your where clauses and then just :find the ?your-map-name (where ?val and ?val2 are the products of other clauses).#2019-02-1405:17amarjeetQuestion regarding Transaction - I have 3 attributes, say x, y, & z, where z = (+ x y), and z will be evaluated and committed only if it gets the values of both x and y. Now, at t=0, my db has just x = 3. At t=1, y=5 gets into the system, so z gets evaluated and becomes z=8 and now both y and z are getting committed in a read & write transactions (composed together). At the same time (at t=1), some other thread changed x to 4. But, since Datomic doesn’t have any read transaction that can be composed with the write transaction, how can I make sure that my read & write transaction should retry. Note that I am not committing x in my write transaction.#2019-02-1412:16amarjeetI got my answer#2019-02-1417:36johnjWhat solutions are used for online hot backups? for on-premise#2019-02-1417:45johnjI understand using the storage tools is not good enough?#2019-02-1421:13favilawhat is an "online hot backup"?#2019-02-1421:13favilathe backup-db tool can backup at any time, and can do incremental backups#2019-02-1421:13favilathis is a storage-agnostic backup#2019-02-1421:14favilayou can also do storage backups, but to be safe they need to be transactional#2019-02-1421:28yogidevbearHi, I have a syntax/feature question. Is there an equivalent in Datomic d/q for a T-SQL in clause?#2019-02-1421:50favilathere's nothing built in. use clojure or destructuring#2019-02-1421:51favila[(ground [1 2 3]) [?match ...]] [?e ?a ?match] is better when ?a is indexed and ?match is selective#2019-02-1421:51favila[?e ?a ?match] [(contains? #{1 2 3} ?match)] when filtering would be faster#2019-02-1421:52yogidevbearThanks Francis, will give it a look#2019-02-1421:29yogidevbeare.g. 
:where [?e :col in [1,2,3]#2019-02-1421:47johnj@favila they are real time backups not just snapshots#2019-02-1421:48favilayou can use them if they are consistent#2019-02-1421:50johnj@favila what none consistent storages are available for datomic?#2019-02-1421:51johnjah dynamo has both strongly and eventually consistent options#2019-02-1421:52favilaI'm not talking about the storage but the backups#2019-02-1421:52favilae.g. I don't know that every possible type of mysql backup will give me a consistent snapshot#2019-02-1421:52favilamaybe it has dirty reads in it#2019-02-1421:52favila(for e.g.)#2019-02-1421:56johnj@favila I see, so how do you deal with the "real time backup" issue? I can make backups every night but what about the data that is being inserted the next day before the backup?#2019-02-1421:57favilayou can run backup-db continually; or use a storage tech that can do this#2019-02-1421:58favilaI don't think most production systems have up to the second backups#2019-02-1421:58favilaif they need that much redundancy they do replication#2019-02-1421:59johnj@favila Guess one can do both, datomic's backup and storage#2019-02-1422:00favilastorage is still a backup#2019-02-1422:00favila"backup" I understand to mean "offline, periodic snapshot"#2019-02-1422:00favilano service needs to be running to keep it#2019-02-1422:00johnjyeah, by storage I mean something like aws dynamo and postgresql point-in-time recovery features#2019-02-1422:01favilaif you want "if this machine catches on fire I don't lose data" that's clustering/replication territory not backups#2019-02-1422:01favilaso dynamodb, you just trust will never lose your data#2019-02-1422:01favilaand you keep periodic backups in case someone accidentally deletes production#2019-02-1422:01johnjdoes datomic's backup create any load/performance issues?#2019-02-1422:02favilait's the same as having another peer#2019-02-1422:02favilaactually it's less#2019-02-1422:02favilano transactor load at 
all#2019-02-1422:02favilabut it is storage load, because it's reading blocks#2019-02-1422:03johnjthat's good#2019-02-1422:06favilaIt looks like postgresql point in time recovery will make something safe to restore directly to postgresql#2019-02-1422:07favilaI am not an expert though#2019-02-1422:10favilabut usually I think of a backup as something to either stand up the same data in another storage system or to correct some catastrophic developer error#2019-02-1422:10favilae.g. I deliberately do not want to restore to an up-to-the-second backup because an hour ago someone fatfingered a DELETE#2019-02-1422:11johnjyeah, I conflating terms, this is mostly clustering/replication territory not backups#2019-02-1422:11favila(this is less an issue with datomic obvs)#2019-02-1422:11johnjas you said#2019-02-1422:12favilawe keep both kinds of backups, mysql (done automatically by google's hosted mysql backups) and datomic#2019-02-1422:12favilawe've only ever used the datomic ones though#2019-02-1422:12johnjthis is mostly "I don't want to lose todays work because I only run backups at night" issue#2019-02-1422:12favilayeah for that I think you can run backups more frequently#2019-02-1422:13faviladatomic backups#2019-02-1422:13favilaor trust postgresql if that's a faster+easier restore#2019-02-1422:13faviladatomic restore of an empty storage will likely be much slower than that storage's native backup method#2019-02-1422:13johnjyeah, looking for less OPs, both dynamo and rds have point in time recover#2019-02-1422:14johnjso if you trust them with the data as you said, they will handle this#2019-02-1422:15johnjanyway, running incremental backups every 30 mins using datomic doesn't sound that bad#2019-02-1422:16favilathe real advantage of the datomic backup is storage-independence#2019-02-1422:17favilain the era of hosted, clustered, SLAed database-as-a-service I don't really see the point of the storage's own backup for something like datomic#2019-02-1422:17favilaother than a 
faster restore if the storage falls over#2019-02-1422:17favilabut the datomic backup is "your" copy, you know you can use it even if your storage host goes up in flames#2019-02-1422:18favilaand you can switch to a different storage#2019-02-1422:19johnjgood point, but if you are all in in a cloud provider is hard to see why you will change storages, but who knows#2019-02-1422:19favilathe cloud provider comes up with a better storage?#2019-02-1422:19favilawell, we can dream#2019-02-1422:20johnjand then you have to wait for datomic to leverage it 😉#2019-02-1422:20favilae.g. google has postgresql now#2019-02-1422:20favilawe could switch to that if we weren't lazy#2019-02-1422:20favilaif datomic someday supports google's dynamo-alike natively we would switch to that#2019-02-1422:21favilabut there's other scenarios too#2019-02-1422:21favilacreate a db on a laptop, backup, restore to a production storage to stand it up#2019-02-1422:21johnjbut dynamo is getting nicer features every year, like on-demand capacity mode recently#2019-02-1422:23johnjhave to go, really helpful, thanks!#2019-02-1510:10Per WeijnitzHi! Can someone recommend an example project demonstrating unit testing of functions that write/read to datomic, please? I read a blog post about Yeller's testing, which seems fine, but as a beginner I would benefit from actual code to study (they refer to a function empty-db which is not included).#2019-02-1512:32val_waeselynckClients or Peers ?#2019-02-1512:41Per WeijnitzHi! I am still a bit unsure about this terminology, as a peer server, iirc, sits between a traditional client and the datomic server. The functions I need to test boil down to calls to "(d/transact (get-conn) {:tx-data ...data..})" and "(d/q some-query (get-db))". 
I thought it could be a suitable unit test to verify that the program can assert and read back stuff using these functions as expected.#2019-02-1512:46Per WeijnitzBut perhaps it is better to refactor these functions to leave out these mutable calls, and write tests only for these immutable bits (that go before the actual datomic calls).#2019-02-1512:49Per WeijnitzStill, if it were feasible to create unit tests that could use datomic (without actually destroying anything in your real datomic instance), it would be possible to do testing of functions that compose a number of datomic transactions.#2019-02-1513:39val_waeselynckConsider using d/with as well#2019-02-1513:44Per WeijnitzThanks! Yes, d/with seems like the way to go. If I understand it correctly d/with can make safe unit testing possible, if you provide a connection to an actual datomic instance. It could be handy to be able to setup/teardown throwaway in-memory databases as well, so your main instance does not need to be available to run the tests.#2019-02-1513:53val_waeselynckYou want Peers to do that#2019-02-1513:53val_waeselynckhttps://vvvvalvalval.github.io/posts/2016-01-03-architecture-datomic-branching-reality.html#2019-02-1513:57Per WeijnitzAh! That is very, very interesting, thank you!#2019-02-1515:49twl8nIt seems like Datomic Ions grabs clojure.tools.log which prevents userspace code (my ion function) from using normal logging. What is the recommended way to do logging from an Ion function?#2019-02-1515:56lilactown@tlaudeman what I've done is use cast/event 😕 I know it's not exactly what it's meant for, but cast/dev can't currently be shown in the cloudwatch logs#2019-02-1515:57lilactownif there's a more official answer I'd like to know. 
by "not what it's meant for" I mean that I've been using it to occasionally troubleshoot certain issues, not log actual events to cloudwatch#2019-02-1515:57marshallhttps://docs.datomic.com/cloud/ions/ions-monitoring.html#2019-02-1515:57marshall^ cast is the “official” answer 😉#2019-02-1515:58lilactownFWIW I find the biggest struggle with ions atm is how opaque the system can be when things aren't quite working#2019-02-1515:59lilactownis the issue in the lambda? in the api gateway? in the Ion? where do I get the feedback for it?
Having a way to direct cast/dev somewhere would be an improvement#2019-02-1515:59marshall^ that should get a feature request 🙂#2019-02-1516:00marshalli’ll see about making one for it#2019-02-1516:01lilactown😄 you're right. thank you!#2019-02-1516:01marshallany reason cast/event isn’t suitable?#2019-02-1516:01marshallincidentally, that is what we use internally#2019-02-1516:01lilactownah, it has been once I figured it out... just per the docs it seemed like not what it was meant for#2019-02-1516:01marshallfor that kind of investigation/debugging#2019-02-1516:02marshallmaybe i should work on improving that - that is definitely the intention#2019-02-1516:02lilactown> An event is an ordinary occurrence that is of interest to an operator, such as start and stop events for a process or activity
vs.
> Dev is information of interest only to developers, e.g. fine-grained logging to troubleshoot a problem during development#2019-02-1516:02marshalldev is for local dev; event is for getting things wired up#2019-02-1516:02marshallYeah, I think we could improve that description#2019-02-1516:03twl8nSoo... I guess we should have a thin logging abstraction layer. I'm bringing over legacy code that is rife with (log/infof "Important thing happened to id: %s" my-id). There are also implications for local unit tests outside of Datomic Cloud.#2019-02-1516:03marshallI wouldn’t hesitate to use cast/event for that purpose; you can always ‘switch’ it on the env or parameters#2019-02-1516:03marshallif you don’t want to remove it once you’re done using it#2019-02-1516:05lilactownso e.g. for some reason my api stops working after I deploy the change. in reality, it's a problem with coercing the response payload properly.
I'd like to log the response payload. I'd initially reach for cast/dev since that sounds more applicable, but in actuality I want cast/event.#2019-02-1516:05lilactownI gotcha. so it is supported, just I misinterpreted the intent of the tools#2019-02-1516:06marshalli’ll look at how we might improve the docs
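twl8n's "thin logging abstraction layer" could be sketched as one entry point with a swappable backend, so code full of log/infof-style calls can target local logging in dev and ion cast in prod. Everything below is illustrative: cast-event! merely stands in for datomic.ion.cast/event, local-log! for clojure.tools.logging, and sink for the real outputs.

```clojure
;; Illustrative facade: callers use log-event everywhere; the backend is
;; swapped per environment. The sink atom stands in for real outputs.
(defonce sink (atom []))

(defn cast-event! [m]   ;; stand-in for datomic.ion.cast/event
  (swap! sink conj [:cast m]))

(defn local-log! [m]    ;; stand-in for clojure.tools.logging
  (swap! sink conj [:log m]))

(def ^:dynamic *backend* local-log!)

(defn log-event
  "Single entry point callers use everywhere."
  [msg & {:as data}]
  (*backend* (assoc (or data {}) :msg msg)))

(log-event "important thing happened" :id 42)
;; in prod: (binding [*backend* cast-event!] (log-event "..."))
```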
i would consider dev for things you’d never want to end up in your prod system, events are for anything that you would want there, either for monitoring or troubleshooting#2019-02-1516:06marshalli kind of think of dev as the ion equivalent of println 😉#2019-02-1516:07lilactown:+1:#2019-02-1519:59msshey all, beginner datomic/clojure related question. I have an attribute I want to use to do a lookup, a :user/id that’s a datomic uuid. schema looks like:
{:db/ident       :user/id
 :db/valueType   :db.type/uuid
 :db/unique      :db.unique/value
 :db/cardinality :db.cardinality/one}
when actually using the attr to do a pull, I’m running into an issue where datomic can’t recognize the uuid as such without the #uuid reader literal, but using the reader literal breaks the compiler because the actual value is a var. rough example here:
(defn my-var-fn []
  "users-uuid")

(let [user-id (my-var-fn)]
  (datomic/pull (datomic/db conn)
                '[*]
                [:user/id user-id])) => nil

(let [user-id (my-var-fn)]
  (datomic/pull (datomic/db conn)
                '[*]
                [:user/id #uuid user-id])) => compiler error

(datomic/pull (datomic/db conn)
              '[*]
              [:user/id #uuid "users-uuid"]) => pulls data properly
any tips for how to get around this?#2019-02-1520:02Lennart Buit(UUID/fromString ...) instead of using the reader tag, that would at least solve your issue 😛#2019-02-1520:02mssexactly what I was looking for. knew there had to be some dead simple solution to this like that 😂#2019-02-1520:02mssthanks for the help#2019-02-1520:03Lennart Buitreader literals work… well only with literals ^^#2019-02-1520:55pvillegas12Trying to upgrade my datomic cloud system. Currently have three stacks, two of which are nested (compute and storage stacks under a common ancestor). When upgrading the compute stack I get a warning telling me I should upgrade the parent instead. Does it matter?#2019-02-1522:10johanatanare backups created by backup-db compressed? I'm seeing my dynamo backing store usage at 236.94MB but the s3 backup created with bin/datomic backup-db ... is only 50MB. Is this normal?#2019-02-1522:14marshall@johanatan https://docs.datomic.com/on-prem/capacity.html#garbage-collection#2019-02-1522:15johanatanthere shouldn't be any garbage in this system. i care about retaining the full history#2019-02-1522:15favilato expand on what he said, backup-db follows the roots to the branches so it doesn't have garbage#2019-02-1522:15favila(a fresh backup db; an incremental one will have garbage)#2019-02-1605:09johanatanso... after running gcStorage, my dynamo db is still: 189.30MB yet the backup is 50MB. is this normal?#2019-02-1615:15favilaIt seems high but not bonkers high#2019-02-1615:15favilaWhat was your ago time?#2019-02-1615:15favilaAre there any other dbs in there?#2019-02-1615:15favilaDid you delete any dbs? (There’s a separate command to reclaim deleted db space)#2019-02-1622:52johanatanmy ago time was: 1 day#2019-02-1622:52johanatanno other dbs#2019-02-1622:52johanatani don't think i've ever created any other dbs so shouldn't be any dbs to reclaim#2019-02-1623:41johanatanhmm, dynamo now says 76.79MB. 
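Circling back to the #uuid question above: reader literals are expanded at read time, so #uuid only accepts a string literal, while UUID/fromString handles runtime values, which is Lennart's suggestion. A pure-Clojure sketch with a made-up uuid string:

```clojure
(import 'java.util.UUID)

;; #uuid "..." is resolved when the code is *read*, so it cannot take a
;; var or expression. For runtime strings, construct the UUID directly:
(defn ->uuid [s]
  (UUID/fromString s))

;; the result is the same type a #uuid literal produces:
(= (->uuid "91b4b463-95eb-4f0e-a9a0-0b0a2f7d5a10")
   #uuid "91b4b463-95eb-4f0e-a9a0-0b0a2f7d5a10")
;; => true
```

The runtime-constructed UUID can then be used in a lookup ref like [:user/id user-id].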
I think it may have been the delayed stats update#2019-02-1522:16marshall sorry, I meant to type more but I was literally on my way out the door#2019-02-1522:16johanatanhmm, what is "garbage" in this context? if i only ever write new values, will garbage still accumulate?#2019-02-1522:16favilayes#2019-02-1522:16faviladatomic stores datoms in compressed blocks of fressian#2019-02-1522:16favilathey are linked together in a wide tree structure#2019-02-1522:17johanatanah, ok. so there's some striping going on?#2019-02-1522:17favilaeverytime the transactor reindexes, it updates this tree, some blocks become garbage#2019-02-1522:17favilano, not really#2019-02-1522:17favilahttp://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2019-02-1522:17favila^^ read this#2019-02-1522:18johanatanah ok. so do i need to explicitly invoke gcStorage periodically in order to free space on the dynamo or will it happen automatically at some point under just normal operation?#2019-02-1522:18favilayou need to invoke it#2019-02-1522:18johanatan:+1:#2019-02-1522:18johanatandoes it need to be during a time when the transactor is not busy?#2019-02-1522:19johanatani assume not based on: "Garbage collection does not lock, block, nor even read any of the segments representing live data."#2019-02-1522:20favilait adds operation load on the storage#2019-02-1522:20favilathat's it#2019-02-1522:20favila(you are deleting blocks after all)#2019-02-1522:20johanatanok, in this case the storage is dynamo so that is amazon's problem ?#2019-02-1522:20johanatani.e., not a problem 🙂#2019-02-1522:21faviladoesn't dynamo make you set capacity limits?#2019-02-1522:21favilaoperational capacity not storage#2019-02-1522:21johanatani have it set to on demand#2019-02-1522:21johanatanrather than provisioned#2019-02-1522:21johanatanbecause my load is tiny#2019-02-1522:21favilaok, then no, it will scale up whenever#2019-02-1522:21johanatancool. 
thx!#2019-02-1522:21favilathen scale down I presume#2019-02-1522:21johanatanright#2019-02-1522:22favilai'm trying to find the "operational guide" to datomic on prem#2019-02-1522:22favilaI thought it covered this maintenance stuff you need to do#2019-02-1522:23johanatanit probably does. i have a rather scattered method of consuming docs#2019-02-1522:23favilano I can't find it#2019-02-1522:23favilait may not exist#2019-02-1522:24favilabah#2019-02-1522:24favilabig oversight imo#2019-02-1522:24favilaanyway, you should run gcstorage after a big import right after indexing#2019-02-1522:25favilaand you should run it regularly (weekly or daily) with an "ago time" that is generous enough to encompass any peer's last d/db call#2019-02-1522:25favilaI think the baseline recommendation is run it weekly for a time one week ago#2019-02-1522:26favilaif you have lots of writes, run it more often to recover storage faster#2019-02-1522:26favilaif you have long-running jobs, set the "ago" time to something longer#2019-02-1522:27favilasorry, I mean not encompass any peer's last d/db call#2019-02-1522:28favilae.g. if you have a peer still using a db from a d/db call it made a week ago, your gcstorage ago time should be more than one week#2019-02-1522:28favilaor you may be collecting blocks that that peer still has references to#2019-02-1522:51pvillegas12I am getting a dependency conflict on
{:deps
{com.cognitect/transit-java #:mvn{:version "0.8.311"},
which conflicts with other libraries#2019-02-1522:52pvillegas12is there a way to tell datomic cloud to use a more up to date library of transit?#2019-02-1600:40marcolI'm getting the following error NoClassDefFoundError com/amazonaws/AmazonWebServiceResult datomic.ddb/fn--6205 (ddb.clj:47) when trying to connect to a transactor on DynamoDB although I have [com.amazonaws/aws-java-sdk-dynamodb "1.11.6"] in my clojure dependencies. What am I missing?#2019-02-1601:27pvillegas12Looks like it was updated https://forum.datomic.com/t/new-client-release-for-pro-and-cloud/591#2019-02-1601:27pvillegas12however I’m on the most up to date version of the solo architecture and still get an old version of transit-clj#2019-02-1601:47pvillegas12Narrowed down the transit problem to transit-java version "0.8.311"#2019-02-1611:08Brian AbbottHi - I was wondering if anyone might now if I might be able to get the slides - most especially the diagrams from Rich Hickey's talk given on 9/12/2018 (https://www.youtube.com/watch?v=thpzXjmYyGk)#2019-02-1611:08Brian AbbottAlso... has anyone thought of or is anyone working on a book for Datomic?#2019-02-1611:10Brian AbbottAlso, I havent been able to find anything online but, is the datomic source available? 
Has or will Cognitect make Datomic Open Source?#2019-02-1611:24Brian AbbottThis is an exceptional talk BTW - TY so much Rich Hickey for both creating Clojure and, this, what appears to be a very cool DB arch./platform.#2019-02-1618:41dustingetz@briancabbott I believe those slides are on reddit clojure from a day or two after the talk#2019-02-1815:25Brian AbbottAwesome, thank you @U09K620SG!!#2019-02-1815:38Brian AbbottHrmmm I dont see it in there#2019-02-1815:38Brian AbbottThere is a link to the video on YouTube but, no slides that I am able to see#2019-02-1816:47dustingetzAh I#2019-02-1816:47dustingetzSorry i was thinking of this: https://github.com/stuarthalloway/presentations/raw/master/2018-06-DatomicIons.pdf#2019-02-1817:49Brian AbbottHey, thank you so much @U09K620SG!! That has a lot of the diagrams that I was looking for!#2019-02-1813:39TwiceHi, given some entity-id and current db is there a way to check if this entity is in "retracted (via :db/retractEntity) state"? Or should I just check if this entity has no attributes - hence was retracted?#2019-02-1813:56ncgEntities only exist implicitly, insofar as there are datoms talking about them. If you are interested in controlling an entities visibility to users, you could model that explicitly as an :entity/visible? attribute.
If you check whether an entity exists in the current snapshot, you cannot distinguish between entities that were retracted and entities that never existed in the first place.#2019-02-1915:20okocimGood Morning 🙂. I vaguely remember coming across some information a discussion at one point mentioning that when using multiple databases in a datomic cloud system, there is some overhead cost for each database, which can impact the system when there are very many such datbases.
I was wondering if the source of this overhead is the idents, which are stated in the datomic cloud documentation to be available in-memory on all compute nodes. If so, is there any way to mitigate this cost for a set of databases which ostensibly will have the same exact idents (logically).
I'm asking because I'm trying to figure out the feasibility of using a separate db per tenant in a multi-tenanted system (with roughly same-sized tenants), in an effort to avoid having too many datoms for any one system.#2019-02-1915:59ghadiI think it's about cache locality @okocim. If you have a lot of different databases on the same query group, usage of one might evict others from cache#2019-02-1916:00ghadiyou may want to spin up several query groups and use specific DBs in specific query groups#2019-02-1916:00ghadior if you have a particularly oversized tenant db, you put it in its own query group#2019-02-1918:17okocimThanks, that makes sense. However, I would expect the cache locality issue to also be there to some degree if I had all of my tenant data in a single database. What I mean is that I'd expect the data from one tenant to be evicted over another. Of course this probably has more to do with segments, so I expect that you're right in that this situation is more likely to occur with different databases in the same query group.#2019-02-1921:30steveb8nI wonder if this will be how we stop using Lambdas (i.e. cold starts) with Ions? https://www.alexdebrie.com/posts/aws-api-gateway-service-proxy/#2019-02-2000:51Brian AbbottHi! I just wanted to report a small defect in the Datomic Documentation. On this page: https://docs.datomic.com/on-prem/getting-started/query-the-data.html - The link about 1/3 of the way down, titled "Seeing the History of the Data" points here: https://docs.datomic.com/on-prem/getting-started/seeing-the-history-of-the-data.html which gives a 404 error. Maybe we could correct that; it would be nice to have the link!! 🙂#2019-02-2000:52Brian AbbottAt the very bottom is perhaps the correct link for the earlier one, which is: https://docs.datomic.com/on-prem/getting-started/see-historic-data.html#2019-02-2000:55Brian AbbottHas anyone considered authoring a book on Datomic?
Conversely, would anyone be interested in reading one?#2019-02-2000:58Brian AbbottHas Cognitect considered Open Sourcing the Datomic products (on-prem, cloud, Ions)? I think it would go a long way to expediting adoption within the industry.#2019-02-2000:59Brian AbbottIs there any path toward viewing/accessing the source outside of regular Open Source channels?#2019-02-2001:06Joe LaneI've considered authoring a guerrilla guide to ions and aws. I would be interested in reading one. I HIGHLY doubt cognitect will opensource datomic since it's their product.#2019-02-2002:12dpsutton> To those who think that Datomic ought to be open source: We don't see a viable economic model there. If you think otherwise, come up with the money to buy the IP and make a go of it. If you can't, then recognize your arguments for the hot air of entitlement they are.
https://www.reddit.com/r/Clojure/comments/73yznc/on_whose_authority/do1olag/#2019-02-2002:14dpsuttonThere's a bit of pepper in the response based on the circumstances that elicited it. But the idea is there are mortgages to pay and mouths to feed. If you can point them towards a better model that will do these things well, I believe they are all ears.#2019-02-2007:53Brian AbbottThank you for that link @dpsutton, I really appreciate it! As we talked in DM, it misses the motivation behind the suggestion. It's really not to make another step in some kind of idealistic march toward a kind of software socialism - most definitely not - personally, I march with Adam Smith. The reasoning is to produce an uptick in adoption and acceptance of Datomic. Many of the lists, blogs, and conversations in the dev community focus on or around the Open Source data platforms, with the OS part of it being merely the token cost for entrance into the room - it's funny: whenever you ask most devs to cruise the source of a project with you to find an answer to a programming question, the responses make you wonder how OS ever happened in the first place - or highlight the spread we have in our minds between our words and our actions... Nevertheless, I feel that in order to make Datomic (semi-) mainstream (which would be nice for those banking on it), two things would need to happen - 1) the OS suggestion, or at least open sourcing part of the core engine - that pays the token cost; it gets written about and considered outside of the Clojure community, etc. - and 2) 1st-tier support for client APIs for all major programming languages. If those two things happened, I believe that Datomic would see far greater adoption quite rapidly.#2019-02-2008:46Chris@briancabbott much as I'm with you on the goal of promoting Datomic, I don't think 1st tier support for other langs would pay off.
Are there that many prospective Datomic users who aren't interested in using Clojure or can't change their client language?#2019-02-2008:50Brian AbbottYes. 🙂#2019-02-2008:51Brian AbbottIt's the key to being a first-tier DB#2019-02-2008:54Brian Abbottand really... JDBC (which we have) and ODBC get you a long way there, but having a totally supported native client API for every major platform opens the doors with the DB and Dev guys. OS opens the doors for the community. Then it's what we make of it, but ATM we're out in the cold.#2019-02-2009:23ChrisFair enough - it's not been my experience, but I can only guess at what's normal. And it's true that Ops teams would prefer a client they can use more easily. (Telling Ops about Datalog reminds me a bit of this webcomic, actually, please excuse the profanity: http://howfuckedismydatabase.com/nosql/)#2019-02-2015:04okocimis there any way to start a "system" inside the ec2 instances of a query group after deployment? I'm trying to reliably start some SQS message listeners without having to invoke a lambda from outside the system.#2019-02-2016:44pvillegas12Looking at schema design https://docs.datomic.com/cloud/best.html#group-related-attributes, it looks like the best practice is to namespace attributes. However, is it not better to leverage the universal schema (especially with spec in mind)? Why would you prefer :release/name, :artist/name, and :movie/name over something generic like :model/name, where you would define one spec instead of 3 different ones? Want to hear the tradeoffs between both approaches (1. Namespace all attributes pertaining to an entity in your business domain (explosion of attributes), 2. Leverage the universal schema to have shared attributes through your entities)#2019-02-2018:56ro6I thought about this tension a lot when I came on board with Datomic and Spec. I think the wise choice is to use small, specific, sufficiently namespaced names for stuff in durable storage.
There are mechanisms available for adding and changing abstractions over time, but un-abstracting (i.e. separating things you once tried to treat as functionally equal) is much harder, especially when your abstractions are reified in the data model. I think this topic deserves a book. The stuff that helped me sort it out was Zach Tellman's writing about abstraction in "Elements of Clojure" and "Data and Reality" by William Kent. Also, Rich's talks about Spec and namespaces make it pretty clear that the idea is to nail down enduring semantics and meaning at the attribute level. By being overly broad, something like :entity/name actually gives you less useful information.#2019-02-2019:28osi"Data and Reality" sounds neat, thanks for the pointer @U8LN9KT2N#2019-02-2016:53johnjI don't know spec yet but I do try to use generic attributes where I can, but curious how would you use spec here?#2019-02-2016:55johnjspec would check the entity you are referring to? like, this model is release, artist, etc...#2019-02-2016:59dustingetz:facebook/person-name has different validation rules than :linkedin/person-name. If they have the same semantics and validation rules, then use the same attribute. That way they can also share code that implements some semantics. Another example is :commonmark/markdown vs :github/markdown#2019-02-2017:05dustingetzI think attributes that originate from the same system (which is pretty much most attributes) generally benefit from sharing semantics. For example, :amazon/product-title is probably useful on both books and electronics, but :amazon/isbn is not. If you're just tagging stuff with a string for display to the user, i see benefit from having the same attribute, and see pointless cost from differing it. In the future you can & will change this stuff anyway#2019-02-2017:47pvillegas12A downside I see from going universal schema with all attributes is as follows: I created a company entity which had address information and business information.
All of these attributes are shared so they don't have the :company/ prefix. When I'm going to query for all companies in datalog, I have no way of specifying an attribute which is exclusively company-based. #2019-02-2017:51pvillegas12Another area of interest is that of refs. If I have an attribute that references, let's say, an invoice, why would I not always want an :invoiceable/invoice attribute vs a :company/invoice or :inventory/invoice?#2019-02-2018:56benoit@pvillegas12 It's a hard question and very much related to what computation your system makes in my opinion. It's about the abstractions in your system and how you decompose the computation. Choosing whether you share a set of attributes across entity types is a bit like choosing to define a clojure protocol that will be implemented by different types. To take the invoice example, there is no reason to have multiple attributes like :company/invoice, :person/invoice... Or if you have the inverse relationship, you would have :invoice/company and :invoice/person. But the recipient of an invoice should be abstracted (maybe with something like :invoice/recipient) and then you can refer to any type of entity with it.#2019-02-2019:01benoitI think it helps to think in terms of relationships and protocols rather than types.#2019-02-2019:24ncgI would also recommend staying specific. Rules can help with implementing something like a generic notion of user, sourced from various specific user attributes in different namespaces.#2019-02-2019:25ncgAttributes are the granularity at which you can define all your important semantics.
And as ro6 says, generalizing later is easier than specializing later (an advantage of the universal relation over tables in the first place).#2019-02-2019:26ncg(there are also indexing benefits to using specific namespaces, but that's an implementation detail)#2019-02-2019:38dustingetz@pvillegas12 a company-specific entity needs only one company-specific attribute, it doesn't have to be all of them#2019-02-2019:40pvillegas12@dustingetz in this case all attributes are shared!#2019-02-2019:44lilactownsome things might have similar attributes but semantically they are different#2019-02-2019:45benoitI don't think it makes sense to go all shared or all type-specific. If you identified abstractions in your system then share attributes for those. If you haven't then don't share attributes. If you have 10 entity types sharing all the same attributes but you still need to be able to distinguish each type, nothing prevents you from having an attribute like :entity/type to indicate the type. But most often some entity types will be involved in relationships and others won't, so you often have different attribute namespaces.#2019-02-2019:45lilactowne.g. I would differentiate between a :person/name and a :company/name, because they are semantically different things#2019-02-2019:46dustingetzConsider also :company/address vs :natzip4/address (http://www.zipinfo.com/products/natzip4/natzip4.htm) – natzip4 is far more semantic and yet flexible enough to be decoupled from company#2019-02-2019:46benoit@lilactown It all depends on your system. If you manage invoices and you don't care whether your invoice is for a person or a company, then they might share the same attribute. It is hard to have discussions like this without actual system requirements 🙂#2019-02-2019:49lilactownright. I think a safe default is to be as specific as possible#2019-02-2019:50lilactownit's easy to assoc a new attribute to an entity that's more general.
it's harder to sort your data and make it more specific after the fact#2019-02-2019:53benoitI hear that advice often but that's not my experience. I've seen more systems fail because of an explosion of complexity due to special cases than because of a bad design. I still think it's better to have a bad plan than no plan at all.#2019-02-2020:01benoitThat said, I have also seen systems where a rigid type system was put on top of datomic and prevented this kind of mix and match of namespaced attributes. Each entity could be of only one type. That made me sad. So you can definitely shoot yourself completely in the foot with bad abstractions. Especially if you artificially restrict power for no good reason.#2019-02-2110:57joelsanchezit's possible to have a type system which assigns one type to each entity, but allows you to use different namespaces for the attributes. that doesn't restrict power and allows you to know that a user is a user, and have a user spec etc#2019-02-2112:21benoit@joelsanchez Agree#2019-02-2119:30joshkhcan someone help me with dates in queries? in this example i'm looking for entities that have transactions made to their :item/available attribute before some date (now).
*edit:
(d/q '{:find [?e]
       :in [$]
       :where [[?e :item/available? _ ?t]
               [?t :db/txInstant ?inst]
               [(< ?inst (java.util.Date.))]]}
     db)
=> ExceptionInfo processing clause: [?t :db/txInstant ?inst], message: java.lang.ClassCastException clojure.core/ex-info (core.clj:4739)
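For reference, a sketch of the corrected query: Datomic's built-in comparison predicates do work on instants, but the date has to be bound through :in rather than constructed inline (same hypothetical db value and :item/available? attribute as the example above):

```clojure
;; Bind "now" as a query input instead of calling the java.util.Date
;; constructor inside the where-clause (a fn expression may not nest
;; another function call in Datomic datalog).
(d/q '{:find  [?e]
       :in    [$ ?now]
       :where [[?e :item/available? _ ?t]
               [?t :db/txInstant ?inst]
               [(< ?inst ?now)]]}
     db
     (java.util.Date.))
```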
#2019-02-2119:35joshkhwait that's all screwy. updating...#2019-02-2119:37joshkhand flipping < to > runs successfully with no results, which makes sense because nothing has been transacted in the future. => []#2019-02-2119:58benoit@joshkh Use the methods .before or .after on j.u.Date#2019-02-2119:59benoit> should have thrown a ClassCastException too.#2019-02-2120:01joshkhcheers @benoit, that was driving me nuts. thanks.#2019-02-2120:07benoitI thought the ClassCastException was on the < clause, expecting a Number instead of a Date, but apparently this is somewhere else: processing clause: [?t :db/txInstant ?inst], message: java.lang.ClassCastException: clojure.lang.PersistentList cannot be cast to java.util.Date and you're right, it returns an empty set with >. Very surprising behavior.#2019-02-2120:08joshkhyeah.. i started erroneously heading down the type hinting route with no luck#2019-02-2120:09joshkh(of course)#2019-02-2120:11benoit@marshall I know the query was malformed in the first place but shouldn't the error indicate a problem with using < on a Date? Does the ClassCastException on PersistentList make sense to you?#2019-02-2121:19bkamphausyou can do comparison on dates in Datomic datalog clauses. You want to pass a date as a parameter into the :in portion of the query, not instantiate one in-line with the java.util.Date constructor.#2019-02-2121:39joshkhthat was my next thought. i know i've used comparisons in the past.#2019-02-2122:03benoit@bkamphaus Good catch. That's likely why I didn't remember this behavior.#2019-02-2122:06benoitIt is a little surprising that the 2 queries don't have the same behavior whether you pass the date or inline it. I would like one day to understand why that is.#2019-02-2122:10bkamphausI’d guess that the way it errors is considered undefined behavior b/c it’s not valid Datomic datalog. A fn-expression can be an expression-clause which can be a where-clause but you can’t nest functions this way. 
A fn expression itself is only valid as [[fn fn-arg+] binding]#2019-02-2122:10bkamphausas per: https://docs.datomic.com/on-prem/query.html#grammar#2019-02-2122:36furkan3ayraktarI have a datomic ions related question. Is there a way to include sources and dependencies from extra-paths and extra-deps defined within an alias in deps.edn while creating a new ions revision? I’ve tried to run push command with including my alias but it didn’t help: clojure -A:dev:my-alias -m datomic.ion.dev '{:op :push}'. I’ve checked the contents of the resulting revision zip file and it didn’t contain the sources defined in extra-paths.#2019-02-2510:19furkan3ayraktarI’ve seen a similar post in datomic forum https://forum.datomic.com/t/tools-deps-aliases-in-ion-deployments/667. Is there anyone who can help about this?#2019-02-2618:08ro6That was my post, still haven't heard anything.#2019-02-2620:52furkan3ayraktarAha! I thought it would work pretty straightforward, and I was wrong 🙂. Hope someone has answers for us!#2019-02-2620:53ro6Definitely #2019-02-2200:06benoit@bkamphaus Of course, I'm tired tonight. Thanks 🙂#2019-02-2218:02jarethttps://forum.datomic.com/t/datomic-cloud-version-470-8654/854#2019-02-2218:02jarethttps://forum.datomic.com/t/datomic-cloud-version-470-8654/854#2019-02-2219:09pvillegas12@U1QJACBUM was https://forum.datomic.com/t/datomic-transit-java-dependencies/846 fixed in this release?#2019-02-2219:11jaretno unfortunately we had already submitted the CFT to AWS at the time of your report. We are still investigating that issue.#2019-02-2305:23henrikOn eu-central-1, I'm getting the following error for updating query groups:
1 validation error detected: Value '' at 'imageId' failed to satisfy constraint: Member must have length greater than or equal to 1
#2019-02-2305:24henrikI'm guessing there's something going on with this bit in the template:
"ImageId": {
  "Fn::FindInMap": [
    "RegionMap",
    {
      "Ref": "AWS::Region"
    },
    "Datomic"
  ]
},
#2019-02-2305:25henrikYeah, no AMIs for most regions. Is this intentional?
"RegionMap": {
  "us-east-1": {
    "Datomic": "ami-069156466c1112347"
  },
  "us-east-2": {
    "Datomic": ""
  },
  "us-west-2": {
    "Datomic": ""
  },
  "eu-west-1": {
    "Datomic": ""
  },
  "eu-central-1": {
    "Datomic": ""
  },
  "ap-southeast-2": {
    "Datomic": "",
    "Bastion": ""
  }
}
#2019-02-2316:43jaret@U06B8J0AJ is this only with the query group template? Or have you identified it in another template?#2019-02-2316:43jaretAh Yes! We’ve confirmed an issue with the query-group template on the Marketplace page#2019-02-2316:43henrikOnly the query group as far as I can see. Storage and compute updated just fine.#2019-02-2316:43jaretwe have the correct template on our releases page and will contact AWS to update#2019-02-2316:43jarethttps://docs.datomic.com/cloud/releases.html#2019-02-2316:44jaret@U06B8J0AJ use the template from the releases page for now#2019-02-2316:54henrikNow it works. 🙂#2019-02-2220:19johnjEnhancement: Improved valcache cleanup algorithm. <- will Pro get this?#2019-02-2223:59marshallYes. Likely in the next release#2019-02-2221:13mssis there a way to pull the actual ident value as opposed to a :db/id if that ident is a ref on another fact? I have something like :todo/type and where type is a ref to an ident. pulling :todo/type returns the :db/id of the ref, but I’d like it to return e.g. todo.types/chore#2019-02-2221:15favilapull :db/ident#2019-02-2221:15mssah it’s just :db/ident, makes sense#2019-02-2221:15mssyep thank you!#2019-02-2221:15favilaunfortunately if you want a d/entity-style representation of the ref you will need to postprocess the result of the pull#2019-02-2221:16favilathere's no way to make the pull expression do it#2019-02-2415:38pvillegas12How can I remove an entity using datomic cloud? I know how db/retract specific attributes. Is it to construct the right db/retract array with all attributes with a given db/id?#2019-02-2415:40pvillegas12:db/retractEntity 😄#2019-02-2507:44henrikA fairly common occurrence with Datomic Cloud is,
"java.io.IOException: Connection reset by peer"
This is when waking an HTTP endpoint during a cold start. A second request returns normally. I saw this with Solo, and now it's presenting on Production as well. Something seems to be getting ahead of itself.
Has anyone seen this? I'm struggling to find anything in CloudWatch that indicates an error.#2019-02-2508:04henrikFound a thread on this here: https://forum.datomic.com/t/api-gateway-internal-server-error/678/8#2019-02-2512:45joshkh@U06B8J0AJ according to a ticket i opened in December, Cognitect knows about the problem and they're working on a release to prevent it from happening. in the mean time they recommend implementing retry logic in your code.#2019-02-2512:45henrikI love the idea of Lambdas, but a way to get around it sounds great right about now.#2019-02-2512:49joshkhfor sure. we eventually implemented retry logic, but it felt a little dirty to commit extra code to handle a predictable downtime after each deployment. deploying Ions behind AWS API Gateway is part of the official Ions tutorial.#2019-02-2512:52joshkhwe wanted zero downtime deployment and instead we got one guaranteed to happen 😉#2019-02-2512:52joshkh(but we love Ions and Datomic so our brand loyalty won in the end)#2019-02-2514:52henrikYeah, for what it's worth, the issues with hooking up lambdas like this affect everyone who tries it, not just Cognitect.
As far as AWS goes though, it's a bit disappointing that they still haven't solved some of the basic issues with lambdas, given how they want to position them as glue between all their services.#2019-02-2512:15joshkhis there a safe/sane way to bump the sync-libs 120 second timeout when deploying to ions?#2019-02-2513:27Laurence ChenHi, I have a question about datomic.query.EntityMap. I posted my question on stackoverflow.
https://stackoverflow.com/questions/54863210/
Any hints will be appreciated.#2019-02-2514:10benoitd/touch is not involved. You can simply reproduce with (seq (d/entity (db) 1)). I'm not sure of the rationale for not returning the special :db/id attribute when "seqing" an entity. My guess is that the semantics of the operation is to just return all the attributes defined on the entity, excluding the special attribute :db/id and all reversed attributes.#2019-02-2515:48brycecovertHas anyone used something like datomic.api/filter to implement access control? I.e., is there a straightforward way to create a view of the database that only contains what a user has access to see?#2019-02-2516:06favilaYes, it has been done#2019-02-2516:07favilaThat is the most straightforward way available. Because it runs your predicate once per datom, make sure that predicate is fast#2019-02-2516:34johnjI'm not in this situation, but how do you document your entities if you reach, for example, 400 attributes? ex: what attributes belong to what entities; namespace prefixes are not enough since a single entity can have many (belong to more than one group of related attributes). do you write a function to check/validate each new entity? seems like sql is a win in that you have a central point that explicitly documents this part.#2019-02-2516:50benoitWith that many attributes, you likely need a proper documentation process in place. Since Datomic attributes are themselves entities, you can define your own Datomic attributes and organize all your attributes however you want to support your documentation needs. You can also define specs for each set of namespaced attributes and s/merge them to validate your entities.#2019-02-2516:54favilaif your entities are "typed", you can add a simple meta-schema to your schema#2019-02-2516:56favila:entity/type ref to an entity-type entity which has e.g. 
:entity.schema/required-attrs (ref to attribute entities) :entity.schema/optional-attrs (ref to attrs also) and documentation (:db/doc)#2019-02-2516:56favila:entity/type could even be cardinality-many, giving you something trait-like#2019-02-2517:06johnjguess there are many ways of going about this, both your ways are valid. another way could be to just always prefix all attributes an entity might have with the same namespace and use references, but this increases the number of attributes.#2019-02-2517:08johnjOn the other hand, tables give you all this for free (at maybe a cost in other parts)#2019-02-2518:14octahedrionhi, will Datomic ever support :db.type/vector ?#2019-02-2518:14octahedrion(or list)#2019-02-2518:16marshallstay tuned 😄#2019-02-2518:27Mark AddlemanThis is exciting!#2019-02-2519:46souenzzo:db.type/edn 🙏#2019-02-2518:33benoit:drum_with_drumsticks:#2019-02-2613:57p14nIs there a way of referencing an entity at a point in time? I want a record to reference the value of an exchange rate at the time it was created. I can create a copy of the various bits of info, but I just wondered if I could instead simply reference the entity id and db and have that realised in the query#2019-02-2614:32benoit@p14n You could store the transaction id or the date of the transaction of the point in time you're interested in.#2019-02-2614:35p14nYup, I was thinking along those lines. Means that I'd need to perform 2 queries (one to get the tid and one to get the value)#2019-02-2614:35p14nThanks#2019-02-2614:38benoitYou will have to query with a database value at the time you're interested in anyway, I think. But if this value is going to be queried a lot, I would store it separately in the database instead of having to query the old database value every time.#2019-02-2614:44p14nI think you're right. 
I was just checking as referencing another entity at a point in time seemed like a 'natural' fit for datomic#2019-02-2616:02favilaif you need to store this reference in datomic itself, you can create a ref attribute that points to the tx#2019-02-2616:03favilae.g. {:snapshot/entity eid :snapshot/time tx-eid}#2019-02-2618:37Joe LaneDoes anybody know the current query group instance size options?#2019-02-2618:37Joe LaneI can't find them documented anywhere (I'm sure they're subject to change, which may be why)#2019-02-2618:40jaret@joe They are in the QG CF template. You can choose from the sizes there. Let me pull it up.#2019-02-2618:40jaret"InstanceType": {
"Description": "Cluster node instance type",
"Default": "t2.medium",
"AllowedValues": [
  "t2.medium",
  "m5.large",
  "i3.large",
  "i3.xlarge"
#2019-02-2618:44Joe LaneThanks @jaret#2019-02-2618:52Joe Lane@jaret Follow up question. If I were interested in using ~10 GB of that tasty 500 GB NVMe SSD on the production i3.large instances (or from an m5.large query group) are there any datomic cloud issues around that?
Can I write to a directory on disk from an ion?
The use case is a Lucene index where documents are filtered by data coming back from datomic.
I know this is a potential can of worms if I don't keep the index size managed well, but let's assume this can be done.
Asking for a friend who is extremely dissatisfied with ElasticSearch…#2019-02-2618:54Joe LaneSaid friend is also happy to do the index management off on a query group, so as to not affect the primary groups whatsoever.#2019-02-2619:19jaret@joe You should be able to do something like that. We make minimal use of disk, and have done nothing to get in the way of using the boxes (that we know of). The important caveat here is we have not tested/vetted writing to a directory from an Ion or implementing Lucene etc. So I cannot officially recommend the approach and it remains untested/unsupported. All that being said, you should try/test it and let us know how it goes. 🙂#2019-02-2619:21Joe LaneMy "friend" is extremely excited#2019-02-2620:55steveb8n@U0CJ19XAM what was it about elastic search that your friend didn't like? I'll need this in future so really useful to know. #2019-02-2621:34Joe LaneCan you ping me about it later tonight?#2019-02-2621:37steveb8nWill do#2019-02-2703:27Joe LaneWe have some search scenarios which require joins and subselections of data. Right now we have a system that encodes this stuff in elasticsearch, and operationally it's been difficult to keep everything in sync (we are doing Change Data Capture off a mysql binlog). When it was originally built we didn't have datomic as a stable time basis, so we were losing data whenever there was downtime. It was a nightmare. Now we have datomic cloud as a stable basis for our indexing. Our usecase is small enough and specialized enough that everything can fit on a single machine for now, and we benefit greatly from first doing a datalog query to limit the documents we are going to search against. Hope that helps @U0510KXTU. I'm happy to talk more over DM.#2019-02-2704:02steveb8nYes that does help, thanks. I think there might be an OOTB elastic search integration coming for Datomic cloud as well. 
Will be interesting to compare these two options#2019-02-2715:25Joe LaneInteresting, where did you hear that?#2019-02-2720:19steveb8nwas mentioned in some Cognitect talk somewhere. can’t remember specifics but I remember thinking I’ll wait to see how that looks#2019-02-2717:03lambdamHello,
I'm struggling with Datalog to express this: give me all the user ids that are not in the list.
Here are two versions that I tried and that don't work:
(d/q '[:find [?e ...]
       :in $ [?excluded-id ...]
       :where
       [?e :user/email]
       (not-join [?excluded-id]
         [?excluded-id :user/email])]
     db
     [5678 9123])
;; => []
;; no id
(d/q '[:find [?e ...]
       :in $ [?excluded-id ...]
       :where
       [?e :user/email]
       [(not= ?e ?excluded-id)]]
     db
     [5678 9123])
;; => [1234 5678 9123 ...]
;; all the ids
Does someone know how to do that?
Thanks#2019-02-2717:56favilaYou don't want to join against your excluded-id collection. I.e. for any id ?e, even if it matches one of the excluded ids it can't match all of them; the not-join and not= will only exclude ?e if it matches all of them#2019-02-2717:57favilajust use a set#2019-02-2717:58favila'[:find [?e ...]
:in $ ?excluded-id-set
:where
[?e :user/email]
(not [(contains? ?excluded-id-set ?e)])]#2019-02-2718:03favilaanother possibility is to negate any successful join#2019-02-2718:03favilayou do this by pushing the destructuring down into the join#2019-02-2718:03favila(d/q '[:find [?e ...]
:in [?e ...] ?not-e+
:where
(not-join [?e ?not-e+]
[(identity ?not-e+) [?e ...]])
]
[1 2 3 4] [2])
=> [1 3 4]#2019-02-2718:04favilausing a set with contains? is usually faster and clearer#2019-02-2718:04favilaactually not-join is unnecessary, you can just (not [(identity ?not-e+) [?e ...]])#2019-02-2802:44Keith HarperI will say that using a collection binding [?excluded-id ...] causes a logical OR to happen. #2019-02-2802:47Keith HarperLogical AND can be enforced by using a double`not-join` as shown here: https://stackoverflow.com/a/43808266#2019-02-2811:40lambdamThanks for the solution. I was almost there with a version that I didn't put:
(d/q '[:find [?e ...]
       :in $ ?excluded-ids
       :where
       [?e :user/email]
       [(not (contains? ?excluded-ids ?e))]]
     db
     (set excluded-ids))
Version that doesn't work either. It doesn't filter and returns all the ids.
Do you know why?#2019-02-2811:54favila[(not (contains? ?excluded-ids ?e))]]#2019-02-2811:54favilaonly the first level of an expression is evaluated#2019-02-2811:55favilayour not is negating a literal list#2019-02-2811:55favilathe list (contains? ?excluded-ids ?e)#2019-02-2811:55favilayou can do (not [(contains? ?excluded-ids ?e)]) like I did#2019-02-2811:56favilaor you can do two clauses [(contains? ?excluded-ids ?e) ?excluded] [(not ?excluded)]#2019-02-2812:22lambdamOk thanks. Now I get why my nested version wasn't working.
Thank you very much @U09R86PA4 and @U424XHTGT#2019-02-2718:23Joe Lane@jaret IMO a solution/example/explanation of the above problem statement should be documented somewhere other than slack, as I've run into this before and it was extremely difficult to find an answer. Eventually I had to phone a friend from a different corp so he could walk me through it. This was after going through all datomic cloud documentation, all day of datomic cloud, day of datomic documentation, and every post I could find.
If I missed something, then please ignore this documentation request.#2019-02-2720:25jaret@lanejo01 Thanks! I'll bring this up with the team and find the best place for it. I'll cross post to the thread once we have it up.#2019-02-2814:25folconHey, I've been using datomic on-prem for a while now and been reasonably happy with it, and am now looking at the feasibility of running one of our products using datomic cloud/ions. What I've not seen is how well it integrates with existing aws search options, as :fulltext is not available. I've seen the odd mention that suggests using cloudsearch, although my understanding is that aws elasticsearch is presently considered to be the better offering. Anyone able to shed some light here?#2019-02-2822:13johnjshouldn't (d/pull (d/db conn) '[*] lookup-ref) return all the attrs of an entity?#2019-02-2822:29pvillegas12yes#2019-02-2822:35johnjah yeah, thanks, made the mistake of not putting all the attrs inside the same map in the transaction#2019-03-0100:02souenzzo(datomic.ion.dev/-main "{:op :push}")
{:command-failed "{:op :push}",
:causes ({:message "INSTANCE", :class NoSuchFieldError})}
Process finished with exit code 1
Why can't I push from the REPL?#2019-03-0112:37souenzzoBump
Context: I'm doing a script that calls push and does some other stuff#2019-03-0203:15souenzzoBump#2019-03-0203:22kenny@U2J4FRT2T This might help: https://gist.github.com/olivergeorge/f402c8dc8acd469717d5c6008b2c611b#2019-03-0100:40tylerIs there something that would cause an ion deployed to a query group to perpetually retrieve a cached db value?
I have a deployed ion that I am able to transact with and I am logging the time value of the db-after and any time I call (d/db (conn)). The db-after time is increasing (as expected) after every transaction but I always see the same (d/db (conn)) :t value despite transactions occurring. The only thing that will bring the db up to date is a re-deploy.
So far as I can tell I'm not caching anything on my side (the client, connection and d/db are all called without memoization).#2019-03-0102:05marshallThis issue was fixed in the latest release#2019-03-0102:05marshallI would recommend updating to it and seeing if that helps#2019-03-0114:52tylerProblem solved, thanks marshall.#2019-03-0415:59marshall👍#2019-03-0100:41tylerEventually it seems to pick up a fresh :t but it takes ~5 mins to do so.#2019-03-0102:00tylerAnother data point to add is when I try to query against the :db-after of a transaction I get exceptions like Database does not yet have t=257. Is this expected? From the docs on this page https://docs.datomic.com/cloud/whatis/client-synchronization.html it would seem like the latency should be less than a second before being able to query against the new value but I'm seeing latencies on the order of minutes.#2019-03-0108:44PiotrHi, I wonder what is it that I am doing wrong:
(def limited
(let [sdb (db/db)
mess (d/q {:query '[:find ?mc
:where
[?e :message/content ?mc]]
:args [sdb]
:limit 2
:offset 0})]
mess))
(def non-limited
(let [sdb (db/db)
mess (d/q '[:find ?mc
:where
[?e :message/content ?mc]]
sdb)]
mess))
The limited one will complain that find clause does not exist.
All I want to do is to read data in chunks.
1. Caused by java.lang.IllegalArgumentException
No :find clause specified
#2019-03-0109:34mdhaneyAll the variables in the :where clauses that aren’t part of the :find clause will be unified. Since you only have 1 :where clause, there is nothing for ?e to unify with. Replacing ?e with a _ (underscore) should fix it.#2019-03-0111:36Piotr@U0JPBB10W did you mean this: (def limited
(let [sdb (db/db)
mess (d/q {:query '[:find ?mc
:where
[_ :message/content ?mc]]
:args [sdb]
:limit 2
:offset 0})]
mess))
still no luck#2019-03-0111:37Piotr1. Caused by java.lang.IllegalArgumentException
No :find clause specified
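Editor's note: as the thread goes on to establish, the peer API's `d/q` does not accept the client API's `:limit`/`:offset` keys. A sketch of chunking the eager result with plain seq functions instead; `page` is an illustrative helper, not a Datomic API.

```clojure
;; datomic.api's d/q returns an eager collection, so paging can be
;; applied afterwards with ordinary seq functions.
(defn page
  "Items offset..(offset+limit) of coll, under a stable sort order."
  [coll offset limit]
  (->> coll sort (drop offset) (take limit)))

;; usage with the query from the thread (sdb assumed bound):
;; (page (d/q '[:find ?mc :where [_ :message/content ?mc]] sdb) 0 2)
```

For result sets too large to realize at once, the lazy, index-ordered `d/datoms` / `d/seek-datoms` are the usual alternative.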
#2019-03-0113:35benoit@UBC1Y2E01 What API are you using? client (`datomic.client.api`) or peer (`datomic.api`)?#2019-03-0113:36benoitIt looks like you might be passing the argument map of the client api to the peer api.#2019-03-0114:15PiotrI use datomic.api#2019-03-0114:25Piotr@U963A21SL I think I know what is going on. I use q, but it looks like I should be using query from datomic.api as in here: https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/q
Thanks for pointing me in the right direction!#2019-03-0114:44benoitOk. The datomic.api does not support :limit and :offset according to the doc.#2019-03-0114:48PiotrYes, I will figure out another way 🙂 Thanks!#2019-03-0115:57Brian AbbottI found this channel: https://www.youtube.com/user/datomicvideo/#2019-03-0115:57Brian AbbottI think it is from Cognitect directly#2019-03-0115:57Brian AbbottI would be cool if it could host pointers to other Datomic videos and chats as well... maybe through playlists?#2019-03-0115:58Brian AbbottIt looks really old though - 6 years ago#2019-03-0115:59alexmillerI'm not sure if that is official stuff or not. the most recent official series of datomic videos is at https://www.youtube.com/playlist?list=PLZdCLR02grLoMy4TXE4DZYIuxs3Q9uc4i#2019-03-0116:00alexmillerand clojuretv has many clojure conference videos about datomic https://www.youtube.com/user/ClojureTV/search?query=datomic#2019-03-0119:48okocimIs there an idiomatic way to “abandon” a transaction from within a transaction function? Say I am processing an external update feed, and in so doing, I want to go ahead and transact the message from a feed only if the message represents data that is more current than what’s in the db. If I return an empty list, a transaction still happens, which is fine, but I’m not sure what the implications are of adding an empty transaction to the system.#2019-03-0120:15benoit@okocim Throwing will cancel the transaction.#2019-03-0120:38okocim😅 thanks#2019-03-0120:44mdhaneyI think @okocim question is due to a problem with the documentation. The on-prem docs mention throwing an exception from a transaction function to abort the transaction, but I found no mention of it in the Cloud docs. I had to ask on here a few weeks ago if it still works for Cloud. 
I think an update to the Cloud docs would be useful.#2019-03-0122:34timDoes Datomic support on-prem (with non-hosted storage) capability to evolve from a single node to a cluster as the need arises?#2019-03-0122:39timAs I understand it, this was not supported, but I haven’t followed all the changes.#2019-03-0122:48johnjyou can't do distributed writes if that is what you mean#2019-03-0122:57timNo, If I remember correctly the team said it’s not ok to have the transactor and the storage on the same node…. It’s been a long time since I’ve looked at all this..#2019-03-0123:02johnjthat's only (my guess) because you don't want both fighting for the same resources, but then, I don't understand what your asking, by node, you mean a transactor?#2019-03-0123:06timno by node I mean server. I just remembered why I couldn’t use datomic… I would need to support customers whom only want to run a single db server node, but may need to evolve. I was somehow hoping Datomic was capable of this, but I’m now remembering this was not a good fit.#2019-03-0123:34favilawhat is a "single db server node"?#2019-03-0219:05timI have many customers. I need a model where each can have their own DB and for that DB to be isolated to a single machine on my network. At the same time some customers may require their data to exist in house (their house), but are not going to be interested in managing a cluster (particularly at the start).#2019-03-0219:07timEssentially, Datomic is a good fit for my product as an SAAS, but when it comes to client installs, it’s too much. And I would rather choose a DB that can support both models so that I am not doubling my efforts.#2019-03-0220:48favilaDev transactor isn’t good enough for in-house use?#2019-03-0220:56favilaYou might even be able to use the free datomic (check the license)#2019-03-0221:46timThe name Dev alone doesn’t give me a good feeling for a production install which is why I asked if it was a supported model. 
btw, thnx for the responses.#2019-03-0221:52timI’ll look into the free version, I remember it was once memory only, but I think they changed that… thnx#2019-03-0300:11favilaDev = transactor itself serves an embedded h2 (sql) db. This is the same as the “free” storage#2019-03-0300:12favilaWe use it internally for a shared staging server. It’s fine#2019-03-0300:13favilaIf you exceed what that can handle then you should have a separate storage anyway. Even a MySQL server can be used#2019-03-0300:13favilaYou need to get an estimate of your read and write capacity#2019-03-0300:40timI see, that really helps, so thnx. As for capacity it’ll depend on each client and their data so that would need to be assessed at install time, but that would be the case with any db.#2019-03-0123:35favilayou mean they want their data in a different storage?#2019-03-0123:37favilathe limitation with datomic is that a transactor cannot manage multiple storages#2019-03-0123:38favilaso if you want different datomic dbs to be segregated at the storage level (e.g. different sql tables), you need at least one transactor per each storage#2019-03-0123:39favilabut one storage can have many datomic dbs in it, and can have multiple transactors connecting to it (one will be active and the rest will be hot spares), and can have as many peers as your storage can handle#2019-03-0123:41faviladoes that clarify @tim792?#2019-03-0308:05Vincent CantinHi. I am trying to write a spec for queries of Datomic and I found 2 different grammars on the website. Which one is the correct one?
- https://docs.datomic.com/cloud/query/query-data-reference.html
- https://docs.datomic.com/on-prem/query.html#2019-03-0308:37dmarjenburghThe first one is for Datomic Cloud, the second one for Datomic On-Prem. I believe the valid Cloud queries are a subset of the valid on-prem queries. E.g. collection binding is not supported in Datomic Cloud queries#2019-03-0308:38Vincent CantinSo they are both valid? I see. Thx#2019-03-0312:09Keith Harper@U05469DKJ collection binding is supported in Datomic Cloud queries: https://docs.datomic.com/cloud/query/query-data-reference.html#binding-forms#2019-03-0313:58dmarjenburghYeah, it’s supported for inputs. I meant it’s not supported in the :find spec.#2019-03-0401:13steveb8nDid you find this in your travels? https://github.com/lab-79/datomic-spec#2019-03-0407:37Vincent CantinI did not know this project. I will still go ahead with my own spec, but thank you for the link.#2019-03-0408:16Nolanis there a way to directly pull the :db/ident of a :db.type/ref? e.g. (pull ?e [:the/ref]) results in {:the/ref #:db{:id 123 :ident :ident-value}}, and i’m looking for a way to just get {:the/ref :ident-value}#2019-03-0414:40favilaThere is no way without postprocessing (see clojure.walk/walk) but please upvote this feature request if it strikes your fancy: https://receptive.io/app/#/case/49752#2019-03-0408:21Nolaneasy to do with scissors afterwards, just wondering if theres syntax in pull to remove the extra step. separately: datomic is nuts#2019-03-0413:11PiotrWhat is the idiomatic way to query entities for dates? I want to get all the posts with created-at inst that fall in range of dates. 
Would that be database asof and since filters or something else?#2019-03-0414:15favilahttps://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2019-03-0414:34favilaThat article is important for getting clear what datomic's time features can and cannot do#2019-03-0414:35favilaI would say this: if you want "created-at" to be a business domain concept, make an indexed created-at attribute and query it#2019-03-0414:35favilasigns that it is a business domain concept: 1) you want to be able to change it ("oops, I got the created-at time on this thing wrong, it's supposed to be last week")#2019-03-0414:36favila2) Its value comes from another system ("I need to copy this record's created-at time from the inventory to the accounting db")#2019-03-0414:37favilaIf created-at is really more like a git "commit" time, then you can use some immutable attribute on the entity, and use its TX as the "created at" time#2019-03-0414:38favilanote that it will be a bit trickier to get that value, because TX entities are not reachable via pull expressions#2019-03-0413:23benoit@piotr.kurnik One option is to index the created-at attribute and use a query.#2019-03-0413:25benoitIf you're interested in transaction time, you have the log API Datomic on-prem.#2019-03-0416:51ozWe are seeing intermittent timeouts from our client trying to connect to datomic cloud. 2019-03-04 08:56:49 ERROR middleware:286 - Exception encountered processing request
clojure.lang.ExceptionInfo: Datomic Client Timeout {:cognitect.anomalies/category :cognitect.anomalies/interrupted, :cognitect.anomalies/message "Datomic Client Timeout"}#2019-03-0416:52ozAnyone else seen this before and can advise on a course of action?#2019-03-0419:12ozIf you run into this error it appears you'll need to restart the datomic cloud compute instances.#2019-03-0419:52johanatanhi, is it possible to connect to the localhost transactor running on port 4334 for the ddb database type without going to dynamo to look up the IP? (i want to use ssh port forwarding from remote 4334 to local 4334 and then connect using the ddb protocol to the local 4334 port). is that possible?#2019-03-0419:53johanatani tried using datomic:dev://... but got Connection refused (likely due to either ddb v dev protocol mismatch or missing authentication or ??? )#2019-03-0419:54johanatanoh, oops. i see the mention of ddb-local in the transactor.properties file#2019-03-0419:54johanatani will try that#2019-03-0419:55johanatanoh, nope. i bet that refers to an actual dynamo running locally#2019-03-0419:55marshall@johanatan no; datomic writes the address of the transactor into storage#2019-03-0419:55marshalland the peer looks it up before connecting to the transactor#2019-03-0419:55marshallhttps://docs.datomic.com/on-prem/deployment.html#getting-connected#2019-03-0419:55johanatanso even if i know the transactor is running on localhost, there's no way to connect to it directly?#2019-03-0419:55marshallthe database URI is a storage address#2019-03-0419:56marshallthe peer will connect to storage (whatever it is, in this case ddb), to look up the transactor endpoint#2019-03-0419:56johanatanyes i know all of that. i know how it "normally" works.
but what i want to do here is abnormal.#2019-03-0419:56johanatanmy dev is not in any VPC#2019-03-0419:56johanatanso cannot connect to dynamo to get the ip#2019-03-0419:56johanatanand even if it could, that ip would not be reachable#2019-03-0419:56johanatanwould rather not have to set up a VPN for this use case#2019-03-0419:57marshallthe address of your transactor is configured via the transactor properties file#2019-03-0419:57marshallthe peer must be able to read storage#2019-03-0419:57marshallit can’t work otherwise#2019-03-0419:57johanatanyes, that is all true#2019-03-0419:57johanatanmy transactor and peer are running in ec2#2019-03-0419:57johanatanmy local laptop is on my desk#2019-03-0419:57johanatanthere is no vpn between them#2019-03-0419:57johanatan🙂#2019-03-0419:58marshallif you’re trying to connect to your cloud database from your laptop, you’re running a peer (or client)#2019-03-0419:58johanatanah, sure. it's in the repl i suppose#2019-03-0419:58marshallif you’re using peer it must be able to read storage#2019-03-0419:58johanatantechnically i'm using datomic.api/connect#2019-03-0419:58johanatanwith a datomic:... 
db-uri#2019-03-0419:59marshallwhich is the peer library#2019-03-0419:59johanatanright#2019-03-0420:00johanatanso what's the easiest way to accomplish the end goal i have here?#2019-03-0420:00johanatanis setting up a vpc/vpn my only option?#2019-03-0420:01marshallis your transactor running in a private vpc?#2019-03-0420:01johanatanit's running on a lightsail instance#2019-03-0420:01marshallor something otherwise inaccessable remotely?#2019-03-0420:01johanatani doubt there is any vpc#2019-03-0420:01johanatanbut i can ssh into it#2019-03-0420:01johanatanand port forward ports#2019-03-0420:02marshallif you can ssh to it, it must have a publicly accessible IP address (presumably the same address your cloud peer uses to connect)#2019-03-0420:02marshallwhich is already written to storage#2019-03-0420:02marshallso all you need to do is export AWS credentials locally that allow you to access dynamo#2019-03-0420:03johanatanhmm, it would seem that it wrote a "private"/internal IP to storage. i would also rather not open my database port to the world (seems like asking for trouble)#2019-03-0420:05marshallyou may need to use HOST & ALT-HOST to accomplish this if you cant resolve the address it is using from your laptop; alternatively you might be able to do something like a bastion server that handles port forwarding and remote dns resolution, but you’d have to have an instance in your VPC (or whatever) to do that with#2019-03-0420:05johanatanmm, ok. alright. thanks so much for your input. some things for me to ponder.#2019-03-0420:44okocimis it possible to run a query that returns multiple different counts in a single query, where each count is for the specific attribute value, and not for the set of attribute values together, sort of like frequencies does in clojure.core?#2019-03-0421:07okocimso I ended up doing this:
(d/q '[:find ?attr ?value (count ?i)
:where
[?i :inventory/store ?s]
[?s :store/short-code "demo-store"]
(or
(and
[(ground :model) ?attr]
[?i :item/model ?value])
(and
[(ground :size) ?attr]
[?i :item/size ?value]))]
(d/db (db/get-conn)))
Not sure if i’d be better off using rules in such a situation, or something else, so I’d appreciate any feedback if anyone has done the same sort of thing before. Ultimately, this code is going to move to a search engine, but I need it for the time being.#2019-03-0421:18favilaThat is how you would do it. Rules would encapsulate the logic better and make the intent clearer but it's no different from this ultimately. (or, or-join, and, and-join are ultimately just sugar for inlining anonymous rules)#2019-03-0502:00okocimThanks for replying/validating my logic. I ended up switching to rules since it’s a bit clearer, as you mentioned.
Sometimes it takes a bit of thinking to get to what you want, but I’m finding datalog queries to be incredibly expressive.#2019-03-0421:13shaun-mahoodAre there any recommendations for batch size when doing a one-off data import to Datomic Cloud? I'm looking at about 100K entities that are logically part of the same group.#2019-03-0423:46johanatan@marshall i think the following will do the trick (with my ssh tunnel in place) without having to open my db port to the world:
http://www.vincecutting.co.uk/web-development/redirect-ip-address-back-to-localhost-on-mac-osx/#2019-03-0423:58johanatani tried it with the AWS internal IP first and it worked like a charm.#2019-03-0423:58johanataninteresting that reads don't require the transactor#2019-03-0423:58johanatanonly when attempting a write was this necessary#2019-03-0501:04Nolancurrently calling the client api from an aws lambda. has anyone received java.lang.OutOfMemoryError: Metaspace {:cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message "java.lang.OutOfMemoryError: Metaspace"} while trying to do the same? it's a super basic query: [:find ?e :in $ ?t :where [?e :some/attribute ?t]] that works without issue from the repl, and runs as quickly as expected. im suspicious of throwing more memory at the lambda—i have others that are more involved than this one, which have never exposed this issue#2019-03-0501:19Nolanfor others: throwing more memory at the lambda did erase the issue, but i remain confused as to why that query would gobble more than my others.#2019-03-0502:51favilais the result large? (queries are eager and must hold the entire result set in memory)#2019-03-0502:51favilais :some/attribute maybe mistakenly not indexed by value?#2019-03-0504:28Nolanno, the result is only 1 entity, and the entity is quite small. still didn't totally understand why that was happening but it seems to be performing correctly now#2019-03-0510:51Lennart BuitCan I in datomic, update a :db.unique/identity datom, but only if it previously existed. So I don't want to assert a new one if it didn't exist, I just want to swap the value that previously existed if it was there#2019-03-0511:07favilause the lookup ref as the db id#2019-03-0511:08favilayour cas is correct#2019-03-0511:09favilaalthough, if you can "change" the value of this attribute on an entity, can it really be said to be a :db.unique/identity attribute?
could it just be :db.unique/value?#2019-03-0511:52Lennart BuitRight, so you are saying that it is not unreasonable to expect a CAS failure when the “old” value was not present in the database to begin with#2019-03-0511:57favilathis is not restricted to cas#2019-03-0511:58favila{:db/id [:unique-attr "val-that -does-not-exist] :some-other-attr "other-val"} has the same "problem"#2019-03-0511:59Lennart Buitright yeah, that gives an error that the entity was not found right?#2019-03-0511:59favilathat is the same error the cas will give#2019-03-0510:53Lennart BuitI tried something like: (db/transact conn {:tx-data [[:db.fn/cas [:attr "old"] :attr "old" "new"]]}), but it doesn’t feel right to catch an exception for when the cas fails because it could not find the entity#2019-03-0514:14tylerIs there any way to build a filtered db in datomic cloud https://docs.datomic.com/cloud/time/filters.html seems to imply there is but d/filter is no longer in the api.#2019-03-0517:57eoliphantDo Cloud/Ions support 1.10 yet?#2019-03-0518:02marshall@eoliphant yes, the latest release uses clojure 1.10#2019-03-0518:03eoliphantsweet 🙂 thanks. Was thinking it might be good to put info like that and the java version in the release matrix or something?#2019-03-0518:03marshallagreed; i’ll look into adding that#2019-03-0518:06eoliphantanyone seen this before in the step function log? I have a deployment that’s crapping out here
{
"error": "States.DataLimitExceeded",
"cause": "The state/task 'arn:aws:lambda:us-east-1:946021084982:function:dat-GW-compute-CreateLambdaFromArray-12972SFV85YCH' returned a result with a size exceeding the maximum number of characters service limit."
}
#2019-03-0521:49pvillegas12Do ions strip out query params on the URL to API Gateway?#2019-03-0521:52lilactownI believe you need to configure the API gateway to pass them through#2019-03-0523:22pvillegas12@U4YGF4NGM I already configured them in the Request, but not getting through, do I have to redeploy the API or something?#2019-03-0600:59johanatanCan someone explain how to use as-of and since bounds on the history function? per this doc (particularly the * delimited part):
history
function
Usage: (history db)
Returns a special database containing all assertions and
retractions across time. This special database can be used for
datoms and index-range calls and queries, but not for entity or
with calls. ***as-of and since bounds are also supported.*** Note that
queries will get all of the additions and retractions, which can be
distinguished by the fifth datom field :added (true for add/assert)
[e a v tx added]
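Editor's note: the starred sentence means the time filters compose with `history`. A sketch with the peer API, assuming `conn` plus illustrative `t0`/`t1` values (t values or `java.util.Date` instants):

```clojure
;; as-of/since bound a history view by filtering the db value first
;; (datomic.api aliased as d).
(let [db (d/db conn)]
  ;; assertions and retractions up to t1:
  (d/history (d/as-of db t1))
  ;; assertions and retractions after t0:
  (d/history (d/since db t0)))
```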
#2019-03-0601:16johanatanI would expect a query containing the following three clauses to return exactly one (or none) results. Yet I'm seeing a bunch of them (with different values for ?max-time):
[?a :quote/time ?time]
[(max ?time) ?max-time]
[(= ?time ?max-time)]
#2019-03-0601:16favilamax makes no sense here#2019-03-0601:17johanatanyea, i was just thinking that#2019-03-0601:17favilaIt's the same as identity#2019-03-0601:17johanatanwhat i'm trying to do is get the entity with the largest value for ?time#2019-03-0601:17johanatan[these are time values within the data; not relying on datomic's own sense of time]#2019-03-0601:18johanatanis there any way to do this without using a (d/history) database ?#2019-03-0601:19johanatanwhat i need is to do a "function expression" binding at the :find level rather than within the :where clauses#2019-03-0601:19johanatanbut i bet that isn't possible/supported#2019-03-0601:20johanatanas it would create a sort of circular dependency between the :find and the :where (not that that would be insurmountable)#2019-03-0601:21johanatanoh, hmm. perhaps a nested query is the answer#2019-03-0601:21favilaSubquery is the easiest#2019-03-0601:21johanatanyea, exactly 🙂#2019-03-0601:21johanatanany reference demonstrating a sub query?#2019-03-0601:22johanatanfound it: https://forum.datomic.com/t/subquery-examples/345#2019-03-0601:22johanatanthx!#2019-03-2617:29favila#2019-03-2617:38johnjNot to sound negative, but why do you keep up with datomic when stuff like this is never fixed?#2019-03-2617:45favilaSame as sticking with Clojure I guess? the alternatives are worse for what we do.#2019-03-2617:46favilaand using mysql with datomic is something you do only under duress. For us it was the only compatible hosted google storage option available#2019-03-2617:51johnjOk I guess "the alternatives are worse for what we do." is a good enough reason.
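Editor's note: the subquery approach johanatan found (via the forum link above) might be sketched like this, using scalar `.` finds; `:quote/time` is from the thread, `db` is assumed bound.

```clojure
;; Sketch: an inner query computes the max time, the outer query finds
;; an entity carrying it (datomic.api aliased as d).
(let [max-time (d/q '[:find (max ?time) .
                      :where [_ :quote/time ?time]]
                    db)]
  (d/q '[:find ?a .
         :in $ ?max-time
         :where [?a :quote/time ?max-time]]
       db max-time))
```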
Just find it weird that they never fixed this, since mysql is still listed as a supported storage#2019-03-2617:52johnjwith no warning about this#2019-03-2617:52favilathe supported storage is "sql", and they happen to include some sql scripts for setting up the tables#2019-03-2617:52favilain each of mysql, postgresql, and oracle dialects#2019-03-2617:53favilaI'm not sure that's the same as "mysql supported"#2019-03-2617:53johnjis included in the distribution package#2019-03-2617:54johnjso it's ok to ship bad sql scripts?#2019-03-2617:55favilaI'd like it if they were better, but I don't feel like they're promising very much#2019-03-2618:00johnjDo you need the history/time features of datomic?#2019-03-2618:02favilalife is pain without them#2019-03-2618:03favilait's really hard to go back#2019-03-2618:04favilaany mutable storage we use as a disposable one#2019-03-2618:04johnjIs that because you need high audit ability or something else?#2019-03-2618:06favilawe're in medicine so there's a baseline that's required, but datomic alone is not a full solution to that anyway#2019-03-2618:06favilayou need read logs too, and the datomic write log is too granular to make sense to an end user#2019-03-2618:07favilabut the difference for developers is incredible#2019-03-2618:07favilait's like the difference between vcs vs non-vcs code#2019-03-2618:08faviladbs are consistent snapshots even on small time scales, so lots of distributed system problems go away#2019-03-2618:08favilaif we make buggy writes to the db, we can find them and fix them#2019-03-2618:08johnjYeah, I can see how it can also help a ton with debugging#2019-03-2618:10favilawe end up answering "how the heck did this happen" questions we never could have otherwise, or which would have been too much work#2019-03-2618:10favilaand having a graph-shaped db with extensible schema is a big plus too#2019-03-2618:11favilamuch easier to model and change than table-shaped or document-shaped dbs#2019-03-2618:11johnjyes, those are
all very handy and nice features of datomic. I guess my initial concern about the sql script was more about the future of datomic on-premise#2019-03-2618:11favilanot as good for fast searches but because the db is so consistent and has a granular, incremental change queue it's easy to make derived views in other systems#2019-03-2618:12favilaoh, yeah, I don't know if on-premise has much of a future#2019-03-2618:12favilathat's the way the whole world is going for better or worse#2019-03-2618:12favila"run this service yourself" is getting increasingly niche#2019-03-2618:13johnjheh true#2019-03-2618:14favilaI think it's just a Clojure thing generally though#2019-03-2618:14johnjwhat do you mean?#2019-03-2618:14favilastuff in clojure's constellation tends to be well designed and "simple" (in the sense of decomposable pieces) but have terrible fit-and-finish#2019-03-2618:15favilawith experience you don't see it anymore#2019-03-2618:17johnjvery true, mirrors my experience, get stuck too much in these "terrible fit-and-finish" details#2019-03-2618:17johnjLike "this is great" but then this little thing puts you off#2019-03-2618:18johnjlike a bad sql script heh#2019-03-2618:19johnjgoing to try it with pg, thanks for the mysql warning#2019-03-2619:07mdhaneyRegarding on-prem, I don't see it going away anytime soon, if ever. It's still their "flagship" product - Cloud is a more constrained version that is great if it happens to fit your particular requirements, but that only hits a small subset of use cases. To replace on-prem completely, they would first need to achieve feature parity in Cloud as it is missing some features that are critical requirements for a lot of use cases, like excision (pretty much anyone operating in the EU would need this capability). They would also need to extend Cloud to other platforms, like GCP and Azure, which would probably be a substantial effort.
And even then, you would still have a non trivial number of customers that would demand deployment on their own infrastructure, either self hosted or on a private/hybrid cloud.
Also look at the difference in licensing between the two - on-prem has got to be more profitable for Cognitect. I think Cloud was always meant to be a low friction way to bring new users to Datomic, and then up sell them to on-prem if/when their requirements exceed what Cloud can do. I doubt it was ever intended to replace on-prem completely.#2019-03-2619:37favilaI'm sure on-prem will continue to exist, but it looks to me like cloud is getting their energy and attention in a way on-prem no longer will. On-prem has actually lost rather than gained supported storage backends. Cloud is a much bigger market. I frequently hear people ask about migrating from on prem to cloud and never the other way around, even granted the feature disparity. And cloud is closely associated with datomic ions, which is a big feature advantage vs on-prem and will likely never have any on-prem equivalent because of how deeply it is wedded to aws and the datomic cloud system#2019-03-2619:40favilaI'm pretty sure clojure's tools.deps/deps.edn tooling was developed to further the goals of datomic ions; if true that means ions is even bending clojure roadmaps into its orbit#2019-03-2619:43johnj@U09R86PA4 are you using mysql for other stuff that you can't trivially switch to pg ?#2019-03-2619:48favilano, we're only using it for datomic, but it would be a hassle#2019-03-2619:48favilaswitching is never trivial#2019-03-2619:49johnjI though datomic backups made this easy, but I"m not experienced with it#2019-03-2619:49favilabackup-restore the db, figure out, set up and distribute new credentials, change all connection strings everywhere, rebuild and redeploy all images, etc#2019-03-2619:49favilayeah, but moving the data is just one part#2019-03-2619:49favilait's embedded in a running production system now. 
we could do it but without a clear tangible benefit it's just pointless busy-work#2019-03-2619:50johnjah true#2019-03-2619:51favilaand like I said, most of the load is borne by memcached#2019-03-2620:51mdhaney@U09R86PA4 you may be right. Just as a counter argument, I see Ions a bit differently. Ions were needed to make Cloud feasible for non-trivial applications. For me at least, initially Cloud was just one of those “oh, neat” things - it wasn’t until Ions were available that I could really take it seriously. Without local querying, Datomic loses quite a bit of its power IMO. That’s the real purpose of Ions, to restore local querying, and the rest - like running your back end on Datomic’s infrastructure- is just gravy (and honestly, I can’t think of another way to make local querying work without going that route). So, that could explain why so much focus and resources were put into Ions, tool.deps, etc. - it was critical functionality needed to make Cloud a viable product.
Even if the long term roadmap is away from on-prem, it will take some time. Given the pace of releases in the past, it will probably be at least a year before Cloud reaches feature parity with on-prem (unless Cognitect decides to double down and invest a lot more resources into Cloud). And more platforms will take time - AWS is the king now, but it’s hard to ignore the gains Microsoft have made in the last 2 years, to where Azure is on the verge of being a serious competitor (if not already). Point being, if the death of on-prem is imminent, we’re talking years before that happens.#2019-03-2620:53johnjwell sure, they have to support their existing customers, but all the hot/new stuff is going to happen in cloud for sure#2019-03-2619:45stijnHi, if a call to (d/db conn) is consistently throwing a timeout (anomalies/interrupted), where can I start looking for solutions? The only thing that is abnormal in the Cloudwatch dashboard is OpsPending, OpsThrottled, OpsTimeout reporting OpsTimeout numbers > 0#2019-03-2622:11Brian AbbottHas anyone gotten errors like this before:#2019-03-2622:37benoitIt means your transaction contains a tempid as value but does not create a new entity with the same tempid so there is nothing to point this attribute to.#2019-03-2623:37joshkhwhen i see this it's usually because i forgot that some reference value should be more nested than i thought:
{:tx-data [;; wrong
           {:player/name "Brian"
            :player/color :green}
           ;; right
           {:player/name "Brian"
            :player/color {:color/rgb :green}}]}
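A related note: newer Datomic versions also accept plain strings as tempids, which gives another way to make the ref resolve; a minimal sketch, reusing the attribute names from the example above:

```clojure
;; Sketch: a string tempid used as a ref value must also appear as a
;; :db/id in the same transaction, otherwise Datomic throws
;; :db.error/tempid-not-an-entity ("tempid used only as value").
{:tx-data [{:db/id "green-color"
            :color/rgb :green}
           {:player/name "Brian"
            :player/color "green-color"}]}
```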
#2019-03-2701:41Brian AbbottThank you for the replies guys - I really appreciate it.
It turned out.... basically it was the wrong type. We were setting an id to another object as a string when it should have been a Long.#2019-03-2701:42Brian AbbottBut... the error message was confusing#2019-03-2701:43Brian AbbottIt should do some kind of "is this a valid ref format" check before hand. Also... I think I discovered a small bug/defect in datomic. When it dumped its stacktrace, for every method it dumped the full text of its API documentation.#2019-03-2712:44benoitStrings are valid ref formats. You can now use simple strings instead of temp ids. I don't think Datomic can return a better error here.#2019-03-2712:44benoithttps://docs.datomic.com/on-prem/transactions.html#creating-temp-id#2019-03-2622:11Brian Abbott:db.error/tempid-not-an-entity tempid used only as value in transaction#2019-03-2714:50quadronif in a domain every single event is timestamped (time being the integral part of every 'entity'), then is there a point in the database itself keeping a record of 'transaction history'?"#2019-03-2714:54Joe Laneyes#2019-03-2714:54Joe LaneYou have multiple time models, wall clock time, and logical time. Think of it as 2 axis on a graph.#2019-03-2714:55Joe LaneIf you don’t have a logical clock time you could never learn anything new about something in the past or correct any information.#2019-03-2714:56Joe LaneThat being said, datomic is a good fit for your transactional database, not as great a fit in the time-series analytics department when you have tons of datoms.#2019-03-2715:04quadrondoes the fact that datoms are timeless in general, make the order of transactions dependent on the local ordering of a particular datomic instance? what if two datomic instances receive those datoms in different orders?#2019-03-2715:08benoit@veix.q5 Datoms are not "timeless". Each datom points to the transaction that asserted it. The order of datoms in a transaction does not matter. 
Transactions are atomic.#2019-03-2715:09benoitAnd there is only one "datomic instance", one transactor processing transactions serially.#2019-03-2806:37johanatanHow does this scale? What happens when a single transactor gets overloaded?#2019-03-2807:14val_waeselynckThe authors of Datomic made the explicit tradeoff of not making it write-scalable, and for many, many use cases that's fine. Systems tend to read much more than they write, and to give more importance to consistency than write throughput. Another DB making this tradeoff is VoltDB.
Consider that, in a typical RDBMS such as PostgreSQL, reads slow down writes (due to locking, required by mutability), making the typical OLTP workload even less scalable.#2019-03-2821:28mdhaneyIf you eventually outgrow a single transactor, one thing you can do is split into multiple databases, each running their own transactor. On the read side, you can query across multiple databases (with on-prem only, Cloud doesn’t support this). The video below discusses how the Nubank guys designed their system with the anticipation of eventually splitting into multiple databases.
https://youtu.be/7lm3K8zVOdY#2019-03-2821:43johanatan@U0JPBB10W interesting, thanks!#2019-03-2715:20quadron@me1740 what do you call a 'datom' before being asserted?#2019-03-2715:25Joe Lanedata#2019-03-2715:25Joe LaneVolatile data#2019-03-2715:25Joe LaneIf it hasn’t been asserted it doesn’t exist as far as datomic is concerned.#2019-03-2715:29benoit@veix.q5 What joe said + a transaction is a collection of datoms https://docs.datomic.com/cloud/whatis/data-model.html#datoms#2019-03-2715:31quadronthanks for clearing this up, so a bunch of datoms are tied together in a transaction and datomic itself is the total ordering of those txs. right?#2019-03-2715:32benoityes#2019-03-2715:42favilaThe easiest analogy is to git#2019-03-2715:42favilaA datomic transaction is like a git commit containing datoms#2019-03-2715:43favilayou can store a time value in the commit itself, but that is saying something different from the metadata about the commit itself#2019-03-2715:44favila(this analogy breaks down a bit because datomic datoms all occupy the same data space, vs git where there's source and then another layer on top of it)#2019-03-2715:45favilabut generally everything you would use git for on your source you would use transaction metadata and the time features of databases (d/history, d/as-of d/since) on your data#2019-03-2715:47favilaThis is an excellent guide to datomic's internals: https://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2019-03-2715:48favilaAnd this is an excellent article explaining why you usually can't use transaction time as the only time in your data (or vice-versa) and why datomic is not a great fit for time-series data: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2019-03-2715:50favila(the other reason is datomic is optimized for reads not writes)#2019-03-2717:43souenzzohow to :find [?e ...] 
in datomic cloud?#2019-03-2720:31dmarjenburghThe find-coll specification is not supported in the cloud version (yet?)#2019-03-2718:53souenzzothere is docs about how to redirect cast to stdout (at my repl/localhost)#2019-03-2719:14zalkyHi all, I have a datomic on-prem transactor deployed via cloudformation (as per the docs) and I'm looking into how to add classpath functions. Any advice or pointers to documentation?#2019-04-0420:23timgilbertThe main docs are here: https://docs.datomic.com/on-prem/database-functions.html#2019-03-2723:01shaun-mahoodAny chance the correct solution for the problem at https://forum.datomic.com/t/dependency-conflict-with-ring-jetty/447/7 could be added to the Datomic Cloud docs? It didn't even cross my mind that there might be a conflict between these, and my normal workflow would be to check the dependencies or issues on github which doesn't really work here.#2019-03-2809:35quadronlet's say I have a json map with 10 fields; each json map is modeled as an entity in a datomic schema and the json fields map neatly to entity attributes. does that mean that asserting every json map requires at least 9 datoms? (one being the identity index)#2019-03-2813:06benoitIf you need to update all 10 fields, yes. But usually you just update a subset of the fields.#2019-03-2809:58misha+1, if you save each map as a separate empty transaction#2019-03-2815:03eoliphanthi, I’m running into an issue where cloud/ion deployments are failing, and I’ve tracked the issue to the failure of one the lambdas in the deployment step function. I’m getting the following error back
{
  "error": "States.DataLimitExceeded",
  "cause": "The state/task 'arn:aws:lambda:us-east-1:xxxx:function:dat-NZ-Compute-CreateLambdaFromArray-1915A1Q1QXEG8' returned a result with a size exceeding the maximum number of characters service limit."
}
any ideas what might be causing this?#2019-03-2815:41Joe Lane@eoliphant I know this is weird, but check if shortening the description of your ion fixes it. I may have run into something similar in the past and that fixed it.#2019-03-2815:41Joe Lane(Not sure how long the description is, if its super short then maybe thats not the obvious fix)#2019-03-2815:42eoliphantYeah, i’ve seen that before as well, didn’t think any of the new ones were longer than ones that were working but will double check#2019-03-2815:44dangercoderMany thanks for the great Datomic Cloud tutorial. 🙂✌️
https://docs.datomic.com/cloud/setting-up.html#2019-03-2817:25souenzzocan I send cast to stdout?#2019-03-2818:14shaun-mahood@jeff.terrell So I guess that error message is there already - https://docs.datomic.com/cloud/troubleshooting.html#dependency-conflict#2019-03-2818:17jeff.terrellAh, great! Thanks for letting me know.#2019-03-2818:52dangercoderAnyone with any tips and tricks on how I can get an overview of all schemas in a datomic cloud database? I used to use Datomic console for this before when I was using a peer.#2019-03-2818:59dangercoderSorted it with some queries. I guess I could build some private tool to get an overview of it 🙂#2019-03-2821:36mdhaneyI haven’t tried it yet, but you could look into REBL.
https://youtu.be/c52QhiXsmyI#2019-03-2819:34holyjakI believe I found a mistake in the Datomic tutorial https://docs.datomic.com/on-prem/tutorial.html but I surely just missed something. They 1. Transact incorrect inventory counts, 2. Retract one, 3. Update the other, 4. Look at the DB as-of #1, so I'd expect to see what was added, i.e.
[[:db/add [:inv/sku "SKU-21"] :inv/count 7]
 [:db/add [:inv/sku "SKU-22"] :inv/count 7]
 [:db/add [:inv/sku "SKU-42"] :inv/count 100]]
but instead the query shows
(d/q '[:find ?sku ?count
       :where [?inv :inv/sku ?sku]
              [?inv :inv/count ?count]]
     db-before)
=> [["SKU-42" 100] ["SKU-42" 1000] ["SKU-21" 7] ["SKU-22" 7]] Why is sku 42 there twice when the cardinality of inv/count is one and when it was only updated from 100 to 1000 in the last tx #3? Can anyone be so kind and explain? #2019-03-2903:33johnjelinekdoes anyone store encrypted PII data in their datomic cloud dbs (for GDPR)? Where do you store your keys?#2019-03-2903:40johnjelineknvm, just learned about this https://docs.datomic.com/on-prem/excision.html#2019-03-2903:41steveb8nNo excision in Cloud (yet) but here’s a good description of what’s required https://vvvvalvalval.github.io/posts/2018-05-01-making-a-datomic-system-gdpr-compliant.html#2019-03-2906:35steveb8nyou can store the keys as encrypted SSM params and read them using ion/get-params. just make sure they start with “datomic-shared” or they won’t be accessible without extra IAM perms (this caught me out)#2019-03-2908:24asierhttps://github.com/magnetcoop/secret-storage.aws-ssm-ps#2019-03-2908:24asierAWS System Manager Parameter#2019-03-2908:24asierhttps://medium.com/magnetcoop/gdpr-right-to-be-forgotten-vs-datomic-3d0413caf102#2019-03-2913:15dmarjenburghWhat is the importance of the KeyName parameter on the CloudFormation template? It's not required to connect to the bastion host and you never connect to the compute nodes. Is it used by CodeDeploy or something?#2019-03-2914:44johnjelinekI thought it was required to connect to the bastion host#2019-03-2917:20dmarjenburghThe startup script of the bastion generates a keypair and uploads the public key to s3 which the proxy script downloads. So the ec2 keyname is actually not used. #2019-03-2919:21ghadithere are ssh keys used for the Datomic nodes themselves -- I think that's what it's for#2019-03-2919:21ghadithey're distinct from the bastion key#2019-03-3004:37NolanCurious if anyone has any recommendations on managing datomic connections in an aws lambda. Currently I essentially do this:
(def client (delay (d/client ...)))
(def conn (delay (d/connect @client {:db-name ...})))

(def q '[:find ...])

(defn somefn [db]
  (let [data (d/q q db)]
    ...))

(defn -handleRequest [_ is os _]
  (somefn (d/db @conn))
  ...)
It works most of the time, but occasionally a lambda will spin up and only ever encounter an anomaly on every invocation: Unable to execute HTTP request: Connect to <storage bucket>:443 failed: connect timed out. It’s as if the connection was never made from the get-go, and until that lambda dies, it will only fail. Do i need to be handling any sort of expiration or refreshing of either the client or the connection? Are there any artificial or hard limits on number of connections in either solo or prod? Would be interested in anyones experience with using datomic in lambda, and how they managed making and maintaining the connection.#2019-03-3013:55Daniel HinesI have a database of square and edge entities. Each square has 4 refs to an edge, and squares may share the same edge. Given a square’s ident A, how can I query for every other square B that shares an edge with the A, or every square C that shares an edge with B, or every square D that shares an edge with C… etc. until there are no more connected squares?
To make it slightly more concrete, given the database:
[{:db/ident :A :edge/right :e1}
 {:db/ident :e1}
 {:db/ident :B :edge/left :e1 :edge/right :e2}
 {:db/ident :e2}
 {:db/ident :C :edge/right :e2}
 ...]
How can I recursively query for the set of entities whose values for the attributes :edge#{top bottom left right} are the same (in this example db, the result should be #{:A :B :C})#2019-03-3014:07mg@d4hines Datalog rules can do that. You might do something like,
[[(connected-square ?a ?b)
  [?a :edge/right ?e]
  [?b :edge/left ?e]]
 [(connected-square ?a ?b)
  [?a :edge/left ?e]
  [?b :edge/right ?e]]
 [(connected-square ?a ?b)
  [?a :edge/top ?e]
  [?b :edge/bottom ?e]]
 [(connected-square ?a ?b)
  [?a :edge/bottom ?e]
  [?b :edge/top ?e]]
 [(connected-square ?a ?b)
  (connected-square ?a ?s)
  (connected-square ?s ?b)]]
#2019-03-3014:07mghttps://docs.datomic.com/on-prem/query.html#rules talks about how to use them#2019-03-3014:09Daniel HinesThanks @michael.gaare ! I’ll try that out.#2019-03-3014:12mgYou need to pass that rule into the query, and then you can find all the connected squares with something like:
[:find ?connected :in $ % ?square :where (connected-square ?square ?connected)]
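Putting the pieces together, the full call might look like this; a sketch assuming `db` is a database value, `rules` is bound to the rule set above, and `:A` is the square ident from the example:

```clojure
;; Rules are passed as the % input; the recursion is seeded with one square.
(d/q '[:find ?connected
       :in $ % ?square
       :where (connected-square ?square ?connected)]
     db rules :A)
```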
#2019-03-3014:14benoit@michael.gaare’s solution miss :C I think. It seems that squares can be included in other squares. :B and :C share the same right edge.#2019-03-3014:16mgit doesn't handle squares that have identical edges, no. Given that they're squares, if they share one edge that's the same side, aren't they by definition the same square?#2019-03-3014:17benoitMaybe 🙂#2019-03-3014:18mgif you needed to extend to encompass that idea, could write another rule that does edge comparison#2019-03-3014:21Daniel Hines@me1740 is correct - squares that share the same exact edge on the same attribute are not necessarily the same.#2019-03-3014:21Daniel HinesThe trick is that edges don’t have length - they’re lines, in the mathematical (infinitely extended) sense.#2019-03-3014:22mglike maybe,
[[(shares-edge ?e ?square)
  [?square :edge/right ?e]]
 [(shares-edge ?e ?square)
  [?square :edge/left ?e]]
 ;; ... etc
 ]
#2019-03-3014:23mgthen connected-square rule clauses instead look like, [(connected-square ?a ?b) [?a :edge/left ?e] [(shares-edge ?e ?b)]]#2019-03-3014:25Daniel HinesThanks, let me try that out.#2019-03-3014:26mgthese sound more rectanglish to me then 😄#2019-03-3014:26Daniel HinesThey are. I didn’t think the geometry would matter for the Datalog 😅#2019-03-3014:27mgI wanted to make simplifying assumptions to enable my own laziness, see#2019-03-3014:27mgI guess you could write a function to output these#2019-03-3014:28Daniel HinesYeah, we have a recursive function that uses db/entity to do this, but I wanted to see if it was possible to do it in pure datalog.#2019-03-3014:28mgA function to write the rules I mean#2019-03-3014:28Daniel HinesOh? Do tell.#2019-03-3014:28mgcuz it's super tedoius#2019-03-3014:29Daniel HinesIndeed 😛#2019-03-3014:30Daniel HinesWhat’s the most effective way to do that? Do I need to use splicing and things like in macro’s?#2019-03-3014:32mgthis should output what you want for shares-edge:
(let [edges #{:edge/right :edge/left :edge/top :edge/bottom}
      edge-sym (symbol "?e")
      square-sym (symbol "?s")]
  (for [e edges]
    [(list 'shares-edge edge-sym square-sym)
     [square-sym e edge-sym]]))
#2019-03-3014:33mgthose sym bindings probably not necessary either#2019-03-3014:34mghere, even smaller:
(for [e #{:edge/right :edge/left :edge/top :edge/bottom}]
  [(list 'shares-edge '?e '?s)
   ['?s e '?e]])
#2019-03-3014:41mgthen for my own sense of completeness, the connected-square rules can be built like this I think:
(cons
 [(list 'connected-square '?a '?b)
  (list 'connected-square '?a '?s)
  (list 'connected-square '?s '?b)]
 (for [e #{:edge/right :edge/left :edge/top :edge/bottom}]
   [(list 'connected-square '?a '?b)
    ['?a e '?e]
    (list 'shares-edge '?e '?b)]))
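The two generated collections can then be combined into one rule set; a sketch, where `shares-edge-rules` and `connected-square-rules` are hypothetical names for the two expressions above:

```clojure
;; One collection of rules, suitable for passing as the % query input.
(def rules
  (vec (concat shares-edge-rules connected-square-rules)))

;; usage sketch:
;; (d/q '[:find ?b :in $ % ?a :where (connected-square ?a ?b)] db rules :A)
```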
#2019-03-3014:43mgputting the recursion first might be bad for performance, though, so look out for that when you're playing with this#2019-03-3014:51benoitNot tested and I don't know how efficient it is but something like this might work:
'[;; define what an edge is
  [(edge ?s ?e)
   (or [?s :edge/top ?e]
       [?s :edge/right ?e]
       [?s :edge/bottom ?e]
       [?s :edge/left ?e])]
  ;; two squares are directly connected if they share an edge
  [(directly-connected-square ?s1 ?s2)
   (edge ?s1 ?e)
   (edge ?s2 ?e)]
  ;; base case: directly connected squares are connected
  [(connected-square ?s1 ?s2)
   (directly-connected-square ?s1 ?s2)]
  ;; the recursion
  [(connected-square ?s1 ?s2)
   (directly-connected-square ?s1 ?s)
(connected-square ?s ?s2)]]#2019-03-3019:14Daniel HinesHow do I query for the value of an attribute that may or may not exist? I suppose I could do an or clause on two queries where one had the attribute and the other didn’t, but is there a short-hand for that?#2019-03-3020:27val_waeselynckI think there's a get-else function#2019-03-3020:27Daniel HinesYeah, I eventually found that.#2019-03-3020:27Daniel HinesThanks!#2019-03-3019:16Daniel HinesOh, maybe I just have to put the one potentially non-existent attribute in the or…#2019-03-3019:18Daniel HinesThat didn’t quite work.#2019-03-3020:14mgThe attribute isn't even in the schema you mean?#2019-03-3020:28Daniel HinesNo, it’s in the schema.#2019-03-3020:27Daniel HinesThe get-else function did the trick 👌#2019-03-3020:29Daniel Hines@michael.gaare I’m using your for expression and it’s working beautifully. What’s the easiest way to compose that into a larger set of rules? I’m getting tripped up with quoting.#2019-03-3020:30mgJust concat them all together, pass it as the rules#2019-03-3020:31Daniel HinesI guess I’m maybe shaky on some Clojure basics here… why do the queries start off in a quoted vector? Does it have to be quoted?#2019-03-3020:31mgIt's quoted because the symbols will be evaluated otherwise#2019-03-3020:32mgAlso the list forms#2019-03-3020:34Daniel HinesOk. This also seems to be doing what I expect:
#2019-03-3019:14Daniel HinesHow do I query for the value of an attribute that may or may not exist? I suppose I could do an or clause on two queries where one had the attribute and the other didn’t, but is there a short-hand for that?#2019-03-3020:27val_waeselynckI think there's a get-else function#2019-03-3020:27Daniel HinesYeah, I eventually found taht.#2019-03-3020:27Daniel HinesThanks!#2019-03-3019:16Daniel HinesOh, maybe I just have to put the one potentially non-existent attribute in the or…#2019-03-3019:18Daniel HinesThat didn’t quite work.#2019-03-3020:14mgThe attribute isn't even in the schema you mean?#2019-03-3020:28Daniel HinesNo, it’s in the schema.#2019-03-3020:27Daniel HinesThe get-else function did the trick 👌#2019-03-3020:29Daniel Hines@michael.gaare I’m usuing your for expression and it’s working beautifully. What’s the easiest way to compose that into a larger set of rules? I’m getting tripped up with quoting.#2019-03-3020:30mgJust concat them all together, pass it as the rules#2019-03-3020:31Daniel HinesI guess I’m maybe shaky on some Clojure basics here… why do the queries start off in quoted vector? Does it have to be quoted?#2019-03-3020:31mgIt's quoted because the symbols will be evaluated otherwise#2019-03-3020:32mgAlso the list forms#2019-03-3020:34Daniel HinesOk. This also seems to be doing what I expect:
(let [big-rule (for ...)]
  `[~big-rule
    [?e ?a ?v]
    ;; other rules...
    ])
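One thing to watch with the syntax-quote above: it namespace-qualifies bare symbols, which breaks datalog variables. A quick sketch (evaluated in the `user` namespace):

```clojure
`[?e ?a ?v]  ;; => [user/?e user/?a user/?v] -- symbols get qualified
'[?e ?a ?v]  ;; => [?e ?a ?v]
```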
#2019-03-3020:34Daniel HinesIs that typical?#2019-03-3020:34Daniel HinesWhat would your way look like?#2019-03-3020:35mgGenerally you construct rules as one thing, pass them as a query argument, and call it % in the inputs#2019-03-3020:36mgI'm not sure what you were doing with that macro, give me a second and I'll show you how I would do the shared edge thing we talked about earlier#2019-03-3020:46mg#2019-03-3020:46mgSomething like that#2019-03-3020:46Daniel Hines(def rules
  (let [connected (vec (for [[a1 a2] opposite-edges]
                         [(list 'connected '?panel1 '?panel2)
                          ['?panel1 a1 '?edge]
                          ['?panel2 a2 '?edge]]))]
    `[~connected
      [(connected-recursive ?p1 ?p2)
       (connected ?p1 ?p)
       (connected-recursive ?p ?p2)]]))
#2019-03-3020:46Daniel HinesThat’s where I’m at so far.#2019-03-3020:48Daniel Hines(squares got renamed to panels)#2019-03-3020:52Daniel Hines(This isn’t working, btw.#2019-03-3020:52mgProbably don't want to use syntax quote (`) here#2019-03-3020:53mgThat's gonna mess up all the symbols#2019-03-3020:53Daniel HinesAh.#2019-03-3020:54mgJust use concat there#2019-03-3020:58mgso you could make that work at least syntactically by doing:
(let [connected ...] ;; what you're doing here already seems fine
  (concat
   connected
   '[[(connected-recursive ?p1 ?p2)
      (connected ?p1 ?p)
      (connected-recursive ?p ?p2)]]))
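The quoted and hand-constructed styles can be mixed freely because they build equal data; a quick sketch:

```clojure
;; A quoted literal and a list built at runtime compare equal:
(= '[(connected ?p1 ?p)]
   [(list 'connected '?p1 '?p)])
;; => true
```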
#2019-03-3020:59Daniel HinesThat works! Thanks. Much better than messing with quote/unquote 😛#2019-03-3021:00mgyou could also selectively quote symbols and construct lists if you want#2019-03-3021:01mgquoting the whole form is doing two things for you:
1. without quoting, if the clojure compiler sees a symbol like connected-recursive or ?p1 it will try to resolve that symbol to its value in the current namespace, and throw an exception most likely because it's not going to be there
2. if the clojure compiler sees an unquoted list (like (connected ?p1 ?p)) it will try to turn that into a function call, which will also fail#2019-03-3021:05mgYou can achieve the same result by individually quoting the datalog symbols (e.g. '?p1 rather than ?p1), and constructing lists by using the list function or by quoting the whole list#2019-03-3021:09Daniel HinesOk, that makes sense. I think where I got tripped up is I just assumed '[] meant something different than [].#2019-03-3101:47shaun-mahoodHas anyone got any good resources on general AWS stuff that would be applicable to ions? I’d love to read up a bit more from sources other than the docs that anyone would recommend.#2019-03-3116:28dangercoderI am working on a problem that I've never solved using a database before because I've always had this state locally. Let's say I have a "worker-entity" with a :worker/current-jobs-counter attribute. Whenever I start a job I will pick a worker where current-jobs-counter is below 10, and increment it by 1. Would that involve a transaction function in datomic?#2019-04-0114:01favilaYou can do this with a transaction function: transaction functions essentially have a lock on the entire database so there's no possibility of stale reads or conflicts.#2019-04-0114:03favilaBut you may also be able to do it with speculative writes that retry if a conflict was detected. Maybe you can use the builtin cas (compare-and-swap) transaction function: https://docs.datomic.com/on-prem/transactions.html#dbfn-cas https://docs.datomic.com/cloud/transactions/transaction-functions.html#sec-1-2#2019-04-0217:47dangercoderi guess a transaction function in the cloud becomes an ion#2019-04-0218:35favilasorry I don't know cloud as well#2019-04-0219:12dangercoderNo worries, I am very thankful for your replies.
Made some good progress conceptually 🙂#2019-04-0115:02shaun-mahoodHas anyone else run into the issue of their first call to an web service ion timing out, while the rest work fine?#2019-04-0115:17adamfeldmanYou might be running into something related to cold starts: https://blog.octo.com/en/cold-start-warm-start-with-aws-lambda/, https://epsagon.com/blog/how-to-minimize-aws-lambda-cold-starts/#2019-04-0115:19shaun-mahoodYeah, I was thinking it was something along those lines. Is that a common thing for ions? I haven't seen anyone specifically discussing cold starts with ions, so I wasn't sure if I was doing something weird or if it was expected behavior. Thanks for the links!#2019-04-0116:05adamfeldmanSorry, I don't use Ions in particular. Many AWS Lambda users use hacks to keep their Lambdas warm -- often that is managed by the serverless framework you're using (https://serverless.com/blog/keep-your-lambdas-warm/). I wonder if Ions does or ought to do something similar#2019-04-0116:08shaun-mahoodYeah, I had this impression that ions had a way around the cold start. Thanks for the help!#2019-04-0116:22Joe Lane@U054BUGT4 What we have done at work is create a cloudwatch event that passes some json to the lambda function to keep it warm once per minute. It’s pretty trivial to implement.#2019-04-0116:24shaun-mahoodPerfect, that sounds like exactly what I need. I thought I might be missing some obvious datomic specific thing (or that my ion had something wrong with how I set it up. Thanks!#2019-04-0117:06hadilsI am using Datomic Cloud ions. I have keys in the SSM, and I am using the code in the documentation to retrieve those parameters. I am getting an error "Too many results" when trying it locally or after a push. The thing is, it used to work. 
Any thoughts?#2019-04-0117:07hadilsI have reduced the failure to this line (ion/get-params {:path "/datomic-shared/dev/stackz-dev/"})#2019-04-0117:08hadilsThe parameters appear in the SSM.#2019-04-0122:15grzmI'm running across a similar issue this afternoon when using Omniconf to populate our config. I'm not seeing a "too many results" error, though one could be masked by the Omniconf SSM code: we're just not getting values populated from SSM. In our case, too, nothing has changed in the config either in the SSM parameter store or in our config-calling code. In our case we're in us-west-2.#2019-04-0202:17hadilsSSM has a max limit of 10 parameters to load into Datomic. I wrote a workaround to load all the parameters into my Clojure call from SSM.#2019-04-0213:47grzmInteresting. Do you have a reference for this issue? Where this limit is documented?#2019-04-0219:00grzmFixed the Omniconf issue. The AWS API returns 10 parameters and a NextToken value when there are more. https://github.com/Dept24c/omniconf/blob/ssm-recursive-next-token/src/omniconf/ssm.clj#2019-04-0201:56Daniel HinesI have a fact along the lines of: “The value a is opposite value b“. I’d like to be able to use that fact in my database queries. The fact feels like a datom: [:a :meta/opposite :b]. How do I assert that as a fact in my db, such that I can do queries like [[?e1 ?a1 _] [?a1 :meta/opposite ?a2] [?e2 ?a2 _]]. I’m getting tripped up because :a is a value, not an entity.#2019-04-0203:19mg@d4hines it needs to be an entity for that to work#2019-04-0203:21mgYou can't assert it unless :a is an entity, and you'd probably want :b to be an entity as well so you can get reverse referencing. If you absolutely cannot use entities for some reason, then you could perhaps embed that fact in a rule or in a database function#2019-04-0205:59Daniel HinesHmm. I also have facts like, “entity 1 has an attribute a with a value of 100” and “entity 1 has an attribute b with a value of 50", e.g [[1 :a 100] [1 :b 50]]. 
How would i model that if a and b were entities?#2019-04-0210:45benoit@d4hines Datomic attributes are entities and you can add your own attributes to them.#2019-04-0210:47benoitSo if :a, :b, and :meta/opposite are all attributes, then [:a :meta/opposite :b] is a valid fact.#2019-04-0211:02benoitAlso you have to be careful with such "commutative" relationships. You might need to define a rule to be able to traverse the relationship in both directions :meta/opposite and :meta/_opposite. Unless people here have better ways to model this kind of relationship in Datomic?#2019-04-0217:16joshkhin datomic cloud, do deployed query functions have to be predicate functions, or can they return other values such as a filtered set of entities?
https://docs.datomic.com/cloud/query/query-data-reference.html#deploying#2019-04-0217:19marshallThey can return whatever you’d like: https://docs.datomic.com/cloud/ions/ions-reference.html#signatures#2019-04-0217:20marshallthis one https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L50 returns a set of values#2019-04-0217:21joshkhhow'd i miss that? 🙂 thanks#2019-04-0217:21marshallnp#2019-04-0217:21marshallalso: https://docs.datomic.com/cloud/query/query-data-reference.html#calling-clojure-functions#2019-04-0217:22joshkhare query functions more performant than manipulating the query results outside of the query?#2019-04-0217:24marshallnot intrinsically, but when using Cloud query functions run on your Datomic instances#2019-04-0217:24marshallso they’re running with the data in memory on the instance#2019-04-0217:25marshallit depends a lot on what you’re doing with the query function(s) as to whether you will see a difference in performance doing it inside the query or returning a set of results and doing the manipulation afterwards#2019-04-0217:26joshkhfor example, i have a query where i want to find the latest transacted entity (A), and then pull some information about an entity related to it (B). i can't use max on the tx values while simultaneously pulling the id of B. instead i have to pull A and B and then sort outside the query.#2019-04-0217:26joshkh> query functions run on your Datomic instances
gotcha!#2019-04-0217:27marshalli think you could do what you’re asking with a nested query#2019-04-0217:27marshallhttps://stackoverflow.com/questions/23215114/datomic-aggregates-usage/30771949#30771949#2019-04-0217:28marshallfind the max tx in the inner query#2019-04-0217:28joshkhdo nested queries have to be deployed query functions? i've tried nesting queries and get back something like datomic.api/q or datomic.client.api/q is not allowed. adding it to the allow list doesn't seem to make a difference.#2019-04-0217:29marshallthen use the outer query to get things related to it#2019-04-0217:29marshallno, they should be possible just generally#2019-04-0217:29marshallwhat version of Datomic?#2019-04-0217:29joshkhlatest cloud#2019-04-0217:29marshallhrm. there might be a whitelisting issue. you can deploy an empty ion that just has datomic.client.api/* in the allow list#2019-04-0217:31joshkhokay i'll look into that#2019-04-0217:31joshkhfor reference:
(->> (client/db)
     (d/q
      '[:find ?name ?c :in $ :where
        [?e :community/name ?name]
        [?e :community/category ?c ?maxtx]
        [(datomic.client.api/q
          '[:find (max ?tx) . :where
            [_ :community/category _ ?tx]]
          $)
         ?maxtx]]))
ExceptionInfo 'datomic.client.api/q' is not allowed by datomic/ion-config.edn clojure.core/ex-info (core.clj:4739)
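For reference, the allow list marshall mentions lives in the ion-config.edn deployed with the application; a minimal sketch (the :app-name is hypothetical):

```clojure
;; resources/datomic/ion-config.edn
{:app-name "my-app"
 :allow    [datomic.client.api/q]}
```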
#2019-04-0217:31marshallyeah, try with the allow and I’ll look into registering that for a future fix#2019-04-0217:33joshkhwill do. can you clarify what you mean by an empty ion? i have an allow list in my existing project which i guess applies to all ions deployed to the code deploy target#2019-04-0217:37marshallyep just add it to your existing one#2019-04-0217:37marshallif you werent using ions at all you’d need to make an ion-config.edn in a project and deploy it, but it wouldnt need any code or anything#2019-04-0217:37joshkhgotcha#2019-04-0217:46joshkhthat did the trick. thanks for the help @marshall#2019-04-0217:47marshallnp. glad it got you sorted#2019-04-0217:53souenzzoany plans about websockets in datomic ions?#2019-04-0218:25Joe LaneYou can already use ions with websockets#2019-04-0300:06steveb8nI didn’t know about this. Thanks!#2019-04-0218:26Joe LaneCreate an ion that handles apigatewayV2's websocket onConnect onDisconnect and onMessage and you’re good to go.#2019-04-0218:26Joe Lane@souenzzo ^^#2019-04-0218:27souenzzoThere is docs about that?
Where I call onConnect ?
It's via #pedestal .ions or via "raw" ions?
@lanejo01#2019-04-0218:29Joe LaneThere don’t need to be docs on it, the aws docs cover how to call a lambda, that’s good enough. There is no difference between a “pedestal.ions” ion vs a “raw” ion, other than the web handling interceptors that pedestal.ions lets you opt into.#2019-04-0300:54weiRich mentioned that ion deployment would roll: “So as long as you are not doing something really crazy, like updating in place your functions and things like that.”
Is there any escape hatch for redefining functions? If not, is there a doc covering best practices around accretion?#2019-04-0314:47vemvCan I restore a database backup into a local datomic instance, but discarding all datoms later than timestamp T?
Would enable coarse-grained debugging. e.g. I can repeat the process many times, allowing one to make mistakes and roll them back cleanly#2019-04-0411:46benoitDatomic morning quiz: what is the difference between these 2 transactions?
[{:user/email "
and
[{:db/id [:user/email "
(`:user/email` is a :db.unique/ identity attribute)#2019-04-0412:31joelsanchezI guess the first one creates the entity if it doesn't exist, and the second one doesn't#2019-04-0412:31joelsanchezor am I missing the point? 😛#2019-04-0412:35benoitNo, that's it, but I still forget this after x years of Datomic 🙂#2019-04-0412:22Petrus TheronConnecting to remote Datomic Pro dev storage works in one Clojure project via, (d/connect "datomic:dev://.../mydb?password=somepass) but on another project it throws Syntax error (AbstractMethodError). When I print last exception, I see io.netty.util.concurrent.MultithreadEventExecutorGroup.newChild(Ljava/util/concurrent/Executor;[Ljava/lang/Object;)Lio/netty/util/concurrent/EventExecutor; `clojure.lang.ExceptionInfo: Error communicating with HOST <myhost> on PORT 4334."
Suspected invisible dependency version conflict, but after an hour of comparing lein deps :tree between the two projects and making the non-working project as near-identical to the working project, I can't figure out why I can't connect. Clojure 1.10 & com.google.guava/guava 2.0.#2019-04-0412:31Petrus TheronLonger exception:
clojure.core/eval core.clj: 3214
...
user/eval13015 REPL Input
...
com.theronic.data.datomic/eval13019 datomic.clj: 60
datomic.api/connect api.clj: 15
datomic.Peer.connect Peer.java: 106
...
datomic.peer/connect-uri peer.clj: 751
datomic.peer/get-connection peer.clj: 669
datomic.peer/get-connection/fn peer.clj: 673
datomic.peer/get-connection/fn/fn peer.clj: 676
datomic.peer/create-connection peer.clj: 490
datomic.peer/create-connection/reconnect-fn peer.clj: 489
datomic.peer.Connection/create-connection-state peer.clj: 225
datomic.peer.Connection/fn peer.clj: 237
datomic.connector/create-transactor-hornet-connector connector.clj: 320
datomic.connector/create-transactor-hornet-connector connector.clj: 322
datomic.connector/create-hornet-factory connector.clj: 142
datomic.connector/try-hornet-connect connector.clj: 110
datomic.artemis-client/create-session-factory artemis_client.clj: 114
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory#2019-04-0415:27holyjakThis is surely trivial but, given a DB of movies with an id attribute, how do I find those whose ids are in #{11 22 33}? Thank you!
find.. where ?id in idset?
(use case: I get a list of invoices and want to keep those not in the DB meaning not yet processed. Or should I just get the (thousands) of invoices already in DB and do the set/diff locally?)#2019-04-0415:46benoit@holyjak https://docs.datomic.com/cloud/query/query-data-reference.html#collection-binding#2019-04-0415:48holyjakThanks a lot!#2019-04-0416:01holyjakQuestion 2: since I'm using the peer library, the query runs locally and thus needs to fetch all the ids anyway, so it would be just as efficient to ask Datomic for all the ids and do set/diff manually. Or is using collection binding more efficient thanks to indices/something?#2019-04-0416:10benoitYes, your query will likely take advantage of Datomic indexes to not download all the ids on the peer.#2019-04-0419:34mbjarlandHigh level noob question: would using datomic ions for a normal page serving ring-type app (i.e. not an api or event triggered code etc) be a bad fit for some reason?#2019-04-0420:28Joe LaneNope#2019-04-0420:28Joe LaneNot any better or worse than running an ec2 instance#2019-04-0611:22holyjakDoesn't that depend on whether there is any performance penalty for going through Lambda? If there is just 1 Lambda and it is in a language with a fast cold start then it is perhaps negligible. If there is 1 Lambda per Ion and especially if it is in Java/Clojure I suspect you will run into the cold start delay regularly, which can reportedly be half a second to a few seconds...#2019-04-0814:11Joe LaneWe implemented a single cloudwatch event which sends a heartbeat every minute keeping all our lambdas warm. It’s really not as big a deal as people say.
And when you need to scale up and have concurrent lambdas, you should set a low timeout on your api so it then retries against an existing warm lambda, while warming the new concurrent lambda.#2019-04-0501:12bherrmannFYI: https://sites.sju.edu/plw/datalog2/#2019-04-0507:24dmarjenburghI just noticed the limit on string sizes is 4096 characters (https://docs.datomic.com/cloud/schema/schema-reference.html#sec-9-1). I transacted some data with more characters (> 10k) and it stores and retrieves it just fine. It’s perfectly reasonable to set a limit on string sizes, but 4KB is often too small for our use case (The ddb limit is 400KB). How do you best deal with larger text values?#2019-04-0515:20rapskalianI’ve seen others in this channel mention making use of some external blob store (e.g. S3) to store “large” values, and then only storing the key/identifier/slug of that value in Datomic.
To retain immutability, the object in blob storage should never be directly modified, but instead should be copied-on-write. This way datomic’s history will always point to valid blobs. Hope this helps.#2019-04-0515:43dmarjenburghThanks, I was thinking about that too, and combining it with cloudsearch for querying inside the docs.#2019-04-0516:13dustingetzI believe large blobs impact performance which is the reason for the 4k limitation in Cloud#2019-04-0511:01teodorluHello!
Basic question. I want datomic clients on a different machine than the peer server. Can I just start the peer server remotely, let :8998 through the firewall and connect to it from my clients with the access credentials I set? Will that expose my access key and secret over the network? Will normal traffic be encrypted over the network? Or do I have to tunnel this myself if I want it encrypted?
I'm working with the docs here: https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html
Thanks!#2019-04-0511:03teodorlu(if my question is stupid because of X reason, please do shout out; I'm figuring this out for the first time)#2019-04-0512:13teodorluSlight update: I think I'm going the safe route; keeping the peer server running behind the firewall, and using an SSH tunnel for connection. Then I can keep SSH as the only means of access. I'm still not sure though, so replies are welcome.#2019-04-0521:08tylerDoes anyone have an opinion around the best practices for access control in datomic cloud? In the past I had used (d/filter ...) in the peer model to provide a filtered view of the database using middleware such that the risk of leaking data from a poorly written handler was low but it doesn’t appear that there is a straightforward solution to this problem in the client model.#2019-04-0901:37steveb8nI did this by ensuring all queries/pulls/writes go through a decorator pipeline (I used interceptors but plain fns would work) before they hit the client api. it works well as long as you ensure all client api calls are proxied by the pipeline#2019-04-0901:39steveb8nunfortunately I can’t share the code because it has proprietary design included but it’s not magical, just adding where conditions etc#2019-04-0901:40steveb8npulls are trickier. in that case you have to check the results after they come back from the client api call#2019-04-0815:24Joe LaneHas anyone here successfully downgraded from a datomic cloud production topology back to a solo topology? I tried this weekend and it never finished. Ultimately I just rolled the update back and am now stuck with a production topology.#2019-04-0818:01currentoori noticed, in transaction functions sometimes data structures are vanilla clojure, like a vector, but then sometimes they are something like a java.util.ArrayList, is there any pattern to this?
i had a call to vector? in my transactor function that started failing sporadically. are vector and java.util.ArrayList the only two, or can there be other types too?#2019-04-0818:16marshall@currentoor Datomic doesn’t make any guarantees about Java/Clojure type preservation across the wire#2019-04-0818:17marshallif you need to know that you have a vector, you’ll want to check for it#2019-04-0818:22currentoor@marshall, that sounds fine but i just need to know all the things that a vector, from the peer, can be converted to? is java.util.ArrayList it? or can it be other types as well?#2019-04-0907:58Ivar Refsdal@currentoor I had the same issue, and while I don't recall the exact details of it, I solved it using clojure.walk/prewalk and checking for java.util.List inside my db fn:
(clojure.walk/prewalk
 (fn [e] (cond
           (and (instance? java.util.List e) (not (vector? e))) (vec e)
           :else e))
 x)
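A self-contained, runnable version of Ivar's snippet above, as a sketch — the helper name `->vectors` is ours, not from the thread:

```clojure
(require '[clojure.walk :as walk])

;; Recursively convert any java.util.List (e.g. an ArrayList handed to a
;; transaction function) into a Clojure vector, leaving vectors untouched.
(defn ->vectors [x]
  (walk/prewalk
   (fn [e]
     (if (and (instance? java.util.List e) (not (vector? e)))
       (vec e)
       e))
   x))

(->vectors (java.util.ArrayList. [1 2 3]))
;; => [1 2 3]
```

The same conversion applies inside nested maps, and it also covers the `java.util.Arrays$ArrayList` case Ivar mentions reproducing with `(java.util.Arrays/asList (into-array [1 2 3]))`.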
#2019-04-0908:12Ivar Refsdal@currentoor I believe I've also encountered java.util.Arrays$ArrayList, at least that is what my tests are using to reproduce the behaviour encountered in production. I'm using e.g. (java.util.Arrays/asList (into-array [1 2 3])) in my tests to test the convert function#2019-04-0916:16currentoor@UGJE0MM0W thanks#2019-04-0911:13jaihindhreddyI have a service that I use, and I want to store it's response (EDN) under a particular key in Datomic. AFAIK Datomic doesn't currently support this document like storage? How should I do something like this? Store them externally, and put their IDs in Datomic?#2019-04-0911:45benoitYes, I usually store blobs like these in S3 and the key in Datomic.#2019-04-0911:14jaihindhreddyThe API response is highly dynamic, doesn't use namespaced keywords, and most of the time, I don't foresee needing to use Datalog into it.#2019-04-0911:14jaihindhreddyI'm fine with it being an opaque value.#2019-04-0916:43johnjyou can store it as a string but datomic has performance issues with big strings >1Kb#2019-04-1012:09henrikLike benoit said, store it in somewhere like S3, DynamoDB or another KV-store. You can serialize the EDN with Transit (you can use the msgpack version of Transit for this).
If you generate a unique key (UUIDs are fine for this) every time you update the content, you can effectively preserve the immutability of Datomic, as earlier versions of the same entity will have UUID keys pointing to a different value in the KV store.#2019-04-1019:50rplevyI'm trying to figure out what I'm doing wrong setting up datomic free locally.
I've installed and started H2, I'm running a transactor and a console, I've uncommented h2-port=4335 in transactor.properties, but yet:
user=> (d/create-database "datomic:)
...
ActiveMQNotConnectedException AMQ119007: Cannot connect to server(s). Tried with all available servers. org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:799)
#2019-04-1019:57benoit@rplevy I believe Datomic free only works with in-memory and disk storage.#2019-04-1019:58rplevyWell I'm trying to do a fairly straightforward setup and just running it at all, and that's what I get#2019-04-1019:58rplevyI thought maybe I had to set up H2 for it to work but I get the same result either way#2019-04-1020:00benoitI had issues recently with Datomic Free and recent Java versions. Maybe you will have more luck with Java 1.8 or Java 1.9.#2019-04-1020:00rplevyLooks like I'm running openjdk 12#2019-04-1020:00rplevyMaybe I need to downgrade...#2019-04-1020:42rplevyInteresting, datomic doesn't work with the newer JDKs, downgraded to 1.9#2019-04-1020:43rplevyAt first I thought maybe it was because I was on the latest datomic free version but earlier versions failed in the same way, until I downgraded java#2019-04-1020:44rplevythanks @me1740!#2019-04-1117:14hadilsHow do I find out when a datom was entered into the database? I think I have to use :db/txInstant but I don't know how to select.#2019-04-1117:16marshall@hadilsabbagh18 https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/log.clj#L28-L40 that section shows a minimal example of finding the transaction and the wall clock time that “something” happened in the db#2019-04-1117:17hadilsThanks @marshall#2019-04-1117:17marshallif you have the datom of interest in hand, you can pull the txInstant from the transaction entity, whose entity ID is the tx of the datom you’ve got#2019-04-1117:24hadilsThat works! @marshall#2019-04-1219:40joelsanchezhow would you query the ids of all the entities that reference a given entity, including transitively?
i.e. I have entity A, which has a :db.type/ref attribute whose value points to B, and B also has a ref attr whose value points to C, and I want to go from C to A
this is trivial to do if you know how many steps there are, but implementing it for the general case seems difficult to me without resorting to complicated graph traversal algos#2019-04-1219:41joelsanchezso is there a simpler way or do I need to do it the hard way? (i.e. graph traversal custom fn)#2019-04-1219:43joelsanchezjust to make my case clearer, this is to detect when I need to reindex an entity in elasticsearch. if a subentity (a component, usually, but not always) is changed I'll need to reindex the parent entity, but the link isn't always direct#2019-04-1219:49benoitIt seems like Datomic rules would work great for this. The question is whether you can have a list of attributes that you can lookup in these rules or whether you should consider any attribute of type ref.#2019-04-1219:51benoitAbsolutely not tested but I would try something in this spirit:
[(parent ?p ?e)
[?p ?attr ?e]
[?attr :db/valueType :db.type/ref]]
[(ancestor ?a ?e)
(parent ?a ?e)]
[(ancestor ?a ?e)
(parent ?p ?e)
(ancestor ?a ?p)]
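For intuition, the ancestor rule above is just a transitive closure over a parent relation. A plain-Clojure analogue, for illustration only (the function name and the map-based parent relation are ours — this is not how the Datomic rule executes):

```clojure
;; parent-of: map of child -> set of entities that directly reference it
(defn ancestors-of
  "All entities reachable from e by repeatedly following parent links.
  Handles cycles because visited parents are never re-queued."
  [parent-of e]
  (loop [frontier (get parent-of e #{})
         acc #{}]
    (if (empty? frontier)
      acc
      (let [p    (first frontier)
            more (remove acc (get parent-of p #{}))]
        (recur (into (disj frontier p) more) (conj acc p))))))

;; C is referenced by B, which is referenced by A:
(ancestors-of {:C #{:B} :B #{:A}} :C)
;; => #{:B :A}
```

The two rule bodies map directly onto this loop: the base case seeds the frontier with direct parents, and the recursive case keeps following parent links until none remain.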
#2019-04-1502:49Daniel HinesI spent a good chunk of time on this channel bothering you guys before arriving at an identical rule. Do the Datomic docs show this example, and I just missed it? Vaguely googling for phrases like "transitive query Datomic" leads to the mbrainz example which only goes to some depth specified upfront or this forum post which never arrives at this rule https://forum.datomic.com/t/how-to-do-graph-traversal-with-rules/132 If the Datomic docs don't have this example, it may be expedient to add it, or perhaps an authoritative blog post. This one rule is so impressively powerful, I think it deserves whatever hype it can get. "Traverse your graphs instantly with this one weird trick..."#2019-04-1607:58joelsanchezcompletely agree, and I had the same experience with the googling#2019-04-1607:58joelsanchezI was very close to implementing this nightmare https://hashrocket.com/blog/posts/using-datomic-as-a-graph-database#2019-04-1607:59joelsanchezthankfully, rules saved my day#2019-04-1219:55benoitYou might not even need the clause on the type of the attribute.#2019-04-1220:07joelsanchezI'm absolutely blown away, I never used rules before, but this works and I'm very grateful for your help#2019-04-1220:09benoitNo problem. Usually if you have a recursive problem, rules help.#2019-04-1221:11ghadi@joelsanchez as noted it will be much faster if you specify the exact attribute you need#2019-04-1221:11ghadi[?p ?attr ?e] <- not binding the attribute causes a much larger scan#2019-04-1221:22benoit?e should be bound to the child entity so the scan should not be much bigger, unless there are a lot of attributes on each entity they don't care about.#2019-04-1221:23benoitYou should of course call parent and ancestor with a bound ?e to find ?p. Not try to retrieve the whole database 🙂#2019-04-1221:46joelsancheznah, they are small entities, and they don't have that many ref attributes. 
since the child entities are usually components, they aren't referenced by more than one entity, and the depth is always lower than 3#2019-04-1409:44sooheonHey guys, are there any other diagrams like the [codeq schema](https://github.s3.amazonaws.com/downloads/Datomic/codeq/codeq.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAISTNZFOVBIJMK3TQ%2F20190414%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190414T072234Z&X-Amz-Expires=300&X-Amz-SignedHeaders=host&X-Amz-Signature=75fe7d45baee7d4d418e4d3086fff1483beb480d3cb45a769d6e74242c1effd8) floating around for reference?#2019-04-1513:55dmarjenburghI'm trying to push an unreproducible deployment using clj on-windows. I'm getting the following error:
{:command-failed "{:op :push :uname daniel-test}",
:causes
({:message "Data did not conform",
:class ExceptionInfo,
:data
#:clojure.spec.alpha{:problems
({:path [:local :home],
:pred clojure.core/string?,
:val nil,
:via
[:cognitect.s3-libs.specs/local
:cognitect.s3-libs.specs/home],
:in [:local :home]}),
:spec
#object[clojure.spec.alpha$map_spec_impl$reify__1997 0x52f8a6f4 "
It fails on :local {:home nil} which is not set by me.#2019-04-1515:48Joe LaneTry it without a hyphen#2019-04-1515:49Joe Lanemake it danieltest and see if that fixes the issue.#2019-04-1516:03shaun-mahoodIs there a way to get :db/tx (or :db/txInstant) from the pull API?#2019-04-1517:30ghaditransactions are entities, you can pull them @shaun-mahood#2019-04-1517:30ghadiI'm guessing you mean pull through a non transaction entity to a transaction entity?#2019-04-1517:32shaun-mahood@ghadi - Yeah, that's what I was trying - I have a nested list near the bottom of my pull, and I want to get the transaction for those entities.#2019-04-1517:35ghadii don't think that is possible#2019-04-1517:35shaun-mahoodOk, that's kind of what I figured. Thanks!#2019-04-1519:45favilaPart of the problem here is entities don't have transactions, only assertions/retractions do (entity + attribute + value combo)#2019-04-1519:45favilapull api doesn't operate at that granularity#2019-04-1521:11shaun-mahoodOh, interesting - I didn’t realize there was a distinction there.#2019-04-1611:16marcolI getting a weird error on AWS trying to get the DB of a datomic cloud instance: {:clojure.error/phase :compile-syntax-check, :clojure.error/line 1, :clojure.error/column 1, :clojure.error/source "datomic/client/impl/shared.clj"}#2019-04-1611:31marcolBy isolating the problem I now receive: Unable to resolve entry point, make sure you have the correct version of com.datomic/client on your classpath
when trying to get the client#2019-04-1613:59ghadiwhat is your dependency in your deps.edn or project.clj? @marcol#2019-04-1614:05marcolcom.datomic/client-cloud "0.8.71"
#2019-04-1614:38Laurence ChenHi, I've run into a Datomic design decision -- "How do you design for a situation where we need generalized compare-and-swap semantics in Datomic?"
I have written up my question on Stack Overflow.
https://stackoverflow.com/questions/55706444/how-to-design-at-the-situation-that-we-need-generalized-compare-and-swap-semanti
I would really appreciate it if anyone can give me some hints. Thx.#2019-04-1614:53benoitI would consider the UX of the system. Does it make sense for admins to work on the same request at the same time? If not, I would implement a lock mechanism with CAS so the admins get immediate feedback on whether they should work on an item or someone else just started to work on it. It has the drawback of requiring one more click for the admin before starting work on a request, but the advantage of not spending time on a request if someone is already working on it.#2019-04-1614:54benoitIf the extra click is a problem you can always automatically lock when delivering the request to an admin and expire the lock if there is no activity after a certain period of time.#2019-04-1614:58benoitBut if you don't mind wasting admins' time, I would just do the modifications of the request in a transaction function to ensure atomicity.#2019-04-1715:18Laurence ChenHmmm, interesting answer. I deliberately created this story, but I never thought this problem could be solved from the UX side.
Thank you.#2019-04-1621:56sooheonHi guys, if I’m attempting to model the equivalent of a multi-column primary key for uniqueness in Datomic (say I have a unique entity or “row” for each Player + Season + Team and a bunch of stats attributes), should I create a derived ID column that is (str player team season) and put db.unique/identity on that, or is there a way to specify that those three columns together represent a unique identity that should be upserted on?#2019-04-1622:14benoitWhen you want to ensure any kind of constraints across attributes, you should write a transaction function.#2019-04-1622:20sooheonThanks, this makes sense!#2019-04-1815:39adamfreyI'm looking for code that I've seen before in I believe the day-of-datomic repos or something similar where Stu Halloway wrote a series of queries showing how to debug a datalog query to see how many entities were being resolved at each step#2019-04-1815:40adamfreydoes anyone have a link to what I'm talking about? I haven't been able to find it#2019-04-1815:46adamfrey@jaret do you know what I'm talking about?#2019-04-1815:47jaret@adamfrey are you talking about decomposing a query?#2019-04-1815:52adamfreythis is it, thanks!#2019-04-1815:47jarethttps://github.com/Datomic/day-of-datomic/blob/master/tutorial/decomposing_a_query.clj#2019-04-1815:48jarethelps you find the optimal clause ordering with little domain knowledge ^#2019-04-1815:50zalkyHi all, is there a benefit to using datomic.api/attribute over datomic.api/entity?
Now I'm trying to find unconfirmed images. Is there a performance difference between these two clauses?
(not [?i :image/confirmed-at])
[(missing? $ ?i :image/confirmed-at)]#2019-04-1817:55dmarjenburgh@lanejo01 I’ve tried various things, but keep getting the same error.#2019-04-1817:56dmarjenburghHas anyone gotten an ion push/deploy working on windows?#2019-04-1818:37danierouxclojure -A:ion -m datomic.ion.dev '{:op :push}'
{:command-failed "{:op :push}",
:causes ({:message nil, :class NullPointerException})}
How do I debug this?#2019-04-1818:40pvillegas12Look at your code-deploy in aws, it should have the failed deploy and show you more information in the details page#2019-04-1818:44danierouxThere's nothing there - I am push-ing, so did not expect anything there yet.#2019-04-1819:15danierouxI fiddled with deps.edn, and not it's working 😕#2019-04-1819:16Joe LaneWhat happens when you use a prior commit, one that worked? I’ve had issues before and it ended up being that my code wasn’t compiling.#2019-04-1819:19danierouxIn this case, I just removed dependencies, and moved some to -Adev that doesn't get included in -Aion#2019-04-1819:19danieroux*and now it is working#2019-04-1918:33Joe LaneGlad to hear its working now!#2019-04-1818:37danieroux(it has worked before)#2019-04-1904:02sooheon{:command-failed "{:op :push :creds-profile \"sportspedia\"}",
:causes
({:message
"Unable to find a unique code bucket. Make sure that you are running\nDatomic Cloud in the region you are deploying to.",
:class ExceptionInfo,
:data {"datomic:code" nil}})}
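The resolution sooheon reports a few messages later is to pass :region explicitly rather than relying on the AWS profile to supply it. A hedged sketch of such push args — the region string here is a hypothetical placeholder:

```clojure
;; Explicit :region alongside :creds-profile in the ion-dev op map,
;; instead of letting region inference from the profile fail.
(def push-args
  {:op            :push
   :creds-profile "sportspedia"
   :region        "us-east-1"})   ; hypothetical region value

(:region push-args)
;; => "us-east-1"
```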
#2019-04-1904:02sooheonHas anyone seen this error before?#2019-04-1904:02sooheonI’m able to connect and dev against Datomic Cloud, so it is running.#2019-04-1904:22sooheonI gave explicit :region key and it seems to work — apparently wasn’t picking up region from the profile.#2019-04-1907:28sooheonI’m having trouble understanding how ions work with component / mount. Is there a post about this anywhere?#2019-04-1907:41p14nI do an explicit mount/start on first request#2019-04-1908:38sooheon@p14n Ah I see. If you have N different endpoints, you just put the mount/start in each endpoint?#2019-04-1908:43p14nI only have one graphql one, so that's convenient. Thinking of putting the startup behind a special URL I call after deploy tho#2019-04-1908:44sooheonI see. Are you using lacinia and hooking up the graphql one to lacinia/execute?#2019-04-1908:46p14nYup#2019-04-1919:28staskhi, having an issue with ions tutorial, cant fetch ions dependency, getting following error:
Error building classpath. Failed to read artifact descriptor for com.datomic:ion:jar:0.9.28
org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for com.datomic:ion:jar:0.9.28
...
Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.datomic:ion:pom:0.9.28 from/to datomic-cloud (): Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: B62D357DA6A66B13; S3 Extended Request ID: GPC1UKcSUCjPudHXFq8r/krZOp03kN6L9DH717Sj3J91t/GLNvepfoV2g/0+dFRQmtRMnt6CVTw=)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:422)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:224)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:201)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:261)
... 25 more
the deps.edn is taken from the documentation:
{:deps {com.datomic/ion {:mvn/version "0.9.28"}}
:mvn/repos {"datomic-cloud" {:url ""}}
:aliases
{:dev {:extra-deps {com.datomic/ion-dev {:mvn/version "0.9.186"}}}}}
#2019-04-1919:45staskfigured it out, had to specify AWS_PROFILE environment variable#2019-04-2111:54joshkhAnyone up for helping me with a Datalog query? 🙂 Given a collection of "group" entities with references to "item" entities:
[{:label "group1"
:items [{:kind :hat} {:kind :scarf} {:kind :shoe}]}
{:label "group2"
:items [{:kind :shoe}]}
{:label "group3"
:items [{:kind :hat} {:kind :shoe}]}]
Is it possible to bind all group entities, and any related entities of :kind :hat, resulting in something like:
=>
[
["group1" [{:kind :hat}]]
["group2" [nil]]
["group3" [{:kind :hat}]]
]
I can't do :where [?group :items ?items] [?items :kind :hat] because that excludes groups without :hat items, and I want all groups regardless.
I could use map specifications in the pull syntax and then filter items for just :hats outside of the query, but I'm curious if there's a way to handle the scenario in pure datalog.#2019-04-2119:53johanatanis anyone running datomic on java 11? i had been resolving the java.xml.bind missing issue with DATOMIC_JAVA_OPTS="--add-modules java.xml.bind" but that seems to no longer be working. any other workarounds/fixes?#2019-04-2121:25favilaThis trick only works up to 10. At 11 that package is removed from the jdk and you need the actual jar#2019-04-2121:26johanatanSo I need to download the jar, place it on the filesystem and modify the classpath to reference it?#2019-04-2121:27favilaYes#2019-04-2121:28favilaYou are likely to have other issues though#2019-04-2121:30johanatanLike what?#2019-04-2121:30favilaWe run peers on java 11 but gave up running the txor on 11. We are sticking to 8 until it’s explicitly supported#2019-04-2121:31favilaThere was some ssl netty issue, details hazy#2019-04-2121:31favilaProbably a dep update would have fixed#2019-04-2121:31johanatanHmm, well the weird part here is I have no idea how my underlying Java was even upgraded. It’s a light sail instance that’s been up for months and I didn’t explicitly reboot or ask it to update java#2019-04-2121:32johanatanI suppose Java 8 is somewhere still on the system so I could conceivably explicitly reference it from my cron tab #2019-04-2121:32johanatan(For nightly backups to s3 via bin/datomic)#2019-04-2121:37favilaWhat is the base OS?#2019-04-2121:37johanatanHuh?#2019-04-2121:38favilaOf a Lightsail instance. 
What is everything running on top of?#2019-04-2121:41johanatanUbuntu#2019-04-2121:45favilaupdate-java-alternatives -l will show you what Javas are installed and let you pick a different default#2019-04-2121:45johanatanAh thx!#2019-04-2121:46favilaWarning, what Ubuntu calls openjdk11 is actually java 10 for...reasons#2019-04-2121:48johanatanLol#2019-04-2121:48johanatanSeems like 8 is the safest choice here though yea?#2019-04-2121:50favilaYes#2019-04-2219:12brycecovertWhat happens when the year-long datomic starter edition expires?#2019-04-2219:18benoitYou lose support and updates.#2019-04-2220:45johnjstarter doesn't include support AFAIK#2019-04-2220:51johnjWhat method is recommended for handling the connection object (d/connect db-uri) ? do you pass it as an arg to every function that does a query or transaction?#2019-04-2220:54favilahttps://docs.datomic.com/on-prem/best-practices.html#consistent-db-value-for-unit-of-work#2019-04-2220:55favilaprefer making a DB once and passing as a value to an entrypoint or family of functions#2019-04-2220:55favilaif you need to tx, obviously pass a connection, but try to separate these concerns#2019-04-2220:56faviladon't do (d/db con) in the body of every function that does querying, for eg#2019-04-2220:56favilapass db in as an argument#2019-04-2220:58favilaE.g. in an http-like request handler, it makes sense for it to get a conn and db once, then pass that db around#2019-04-2220:58favilaassuming an http request-response is a single unit of work (which it is most of the time)#2019-04-2221:51johnjhelpful, thanks#2019-04-2221:53johnjchecking this, I think I did this by common sense without knowing, but verifying anyway#2019-04-2303:49vemvCan I create a Datomic database with a past, fixed basis rather than just (now)?
Use case: I create Datomic databases once per unit test. A given unit test must be able to create an entity with a :db/txInstant of (-> 10 minutes ago).
But if the database was created just now, I won't be able to add such an entity: Time conflict: Tue Apr 23 05:31:11 CEST 2019 is older than database basis#2019-04-2308:12favilaAn untransacted db starts at the epoch (1970) @vemv #2019-04-2308:13favilaIf you are getting this error then you transacted something after that without overriding the txInstant into the past. Maybe schema?#2019-04-2308:23vemvright, thanks for the pointer!
I think it's some intricacy of this project#2019-04-2318:48enocomI’m writing a query to support pagination for a web client. The q API supports :offset and :limit which seems perfect, but I’m having trouble ensuring a sort order for the results. How do people handle pagination for web apps when using Datomic?#2019-04-2319:02favilaQuery all, then sort+limit the results before delivering to the client#2019-04-2319:03favilaI think limit+offset is only somewhat reliable when reusing the same db (anchored to specific t) in fairly rapid succession#2019-04-2319:03favilai.e., in a loop in an application, not in a url#2019-04-2319:04favilaif your result is a set, you should expect a repeatable order if the results are the same#2019-04-2319:04favilaif the results are not a set (e.g. using :with) I'm not sure you can#2019-04-2319:06favila(the repeatable order is due to hashing order, so additional caveat that the result items are hashable)#2019-04-2319:07enocomThat’s about where I landed as well. Here I’ve been worried about memory usage, but that’s probably unnecessary. Thanks!#2019-04-2319:09favilawell you should worry about it#2019-04-2319:10favilaone thing I've found is pulls run before limiting#2019-04-2319:11favilaeven in on-prem, going :find (pull ?x [*]) . to get only one result will pull everything, then take the first item#2019-04-2319:11favilarather than take the first item then execute the :find#2019-04-2319:11favilaso try to return as little as you can manage from your "big" query and follow up with additional queries or pulls later, using that result as input#2019-04-2319:12favilaalong the same lines, if you have the kind of query which has a large known set based on a raw datomic index, you can use d/datoms to select a subset, then feed that as input to your query#2019-04-2319:13enocomYeah, good idea.
We’re doing a pull of eids and txInstant, sorting, dropping and taking depending on the pagination, and then querying for the full data when we know just the page’s amount.#2019-04-2319:13enocomI’ll take a look at the raw indices. That’s an interesting idea.#2019-04-2319:13favilathis only works for query shapes where each item in the first "clause" of the query is merely filtered out by subsequent clauses#2019-04-2319:14favilaotherwise your limit and offset will get messed up again#2019-04-2319:14favilaor the same find result may show up in multiple offsets#2019-04-2319:15favila(the query doesn't see the entire result so it can't do the deduping normally done by the result set)#2019-04-2319:18enocomThat makes sense — the raw indices don’t seem to fit just right in our use case, but it’s a good idea.#2019-04-2319:20favilayeah you have to use it carefully#2019-04-2319:20favilamight even not be as big a win on datomic cloud, since you have more going over the wire#2019-04-2408:08akielHi why Datomic doesn’t have extensible data types? I need date-times with precision and physical quantities among other types. I could build them myself with byte arrays, but Datomic with fressian could do that for me already.#2019-04-2415:08favilaI don’t know. It’s something they’ve promised for years#2019-04-2415:08favilaCareful with byte arrays though, you can’t index them by value#2019-04-2416:27akielAh they promised it? I didn’t know that. Thanks. I know that I can’t index them by value.#2019-04-2513:12donaldballI’d be satisfied if they’d just introduce a few java.time types tbh.#2019-04-2419:33Diego MelendezHow can I connect to on-prem with datomic.client.api?#2019-04-2419:33marshall@diedu89 you need to start a peer-server
see https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html and https://docs.datomic.com/on-prem/peer-server.html#2019-04-2419:57Joe LaneI’ve been working on a way to optionally transact something in datomic cloud and noticed that if my tx-data vector is empty (like {:tx-data []} the response from d/transact still includes a single Datom in the :tx-data.
{:db-before {:database-id "4c842e8a-aee2-4b52-9361-fd5e7183c02c",
             :db-name "2018-11-13",
             :t 733484,
             :next-t 733485,
             :type :datomic.client/db},
 :db-after {:database-id "4c842e8a-aee2-4b52-9361-fd5e7183c02c",
            :db-name "2018-11-13",
            :t 733485,
            :next-t 733486,
            :type :datomic.client/db},
 :tx-data [#datom[13194140266797 50 #inst"2019-04-24T19:42:43.232-00:00" 13194140266797 true]],
 :tempids {}}
I’m assuming that datom is the transaction entity itself. Is there any way to prevent the transaction entity from being created?
Is throwing an exception inside a transaction-function sufficient? If so, are there problems with doing that frequently (Hundreds of times per second) assuming wrap the transaction in a try-catch?#2019-04-2514:15favilaIf you throw, you will prevent the assertion of a transaction datom; however you will still advance the "t" counter#2019-04-2511:49benoit@lanejo01 Throwing would work but why can't you detect that your tx is empty and not send it to the transactor in the first place?#2019-04-2513:15Cas ShunI have a datomic cloud database with a bunch of {:db/ident :loc/gb} {:db/ident :loc/us} etc... used as enumerable values. How can I find out all available values in the :loc namespace that are available in the db? i.e. I want to find all countries in the existing schema.#2019-04-2513:44Ivar RefsdalI think
(d/q '[:find [?attr ...]
       :in $
       :where
       [?e :db/ident ?attr]
       [(datomic.Util/namespace ?attr) ?ns]
       [(= ?ns "loc")]]
     (d/db conn))
will do the job#2019-04-2513:44Cas Shunwon't work in cloud#2019-04-2513:45Cas Shundatabase functions not supported in cloud (yet?)#2019-04-2513:49Ivar RefsdalOh, sorry, I'm not familiar with cloud. Maybe someone else is...
Would this work: ?
(->> (d/q '[:find [?ident ...]
            :in $
            :where
            [?e :db/ident ?ident]]
          (d/db conn))
     (filter #(= "loc" (namespace %))))
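A small extension of the filter above (not from the thread): once the full ident list is in hand, a `group-by` on `namespace` enumerates every enum-style namespace in one pass. This is plain Clojure over the query result; the `idents` vector is a hypothetical stand-in for what `d/q` returns.

```clojure
;; Hypothetical stand-in for the [:find [?ident ...]] query result.
(def idents [:loc/gb :loc/us :status/active])

;; One pass over the idents yields every enum-style namespace and its values.
(group-by namespace idents)
;; => {"loc" [:loc/gb :loc/us], "status" [:status/active]}
```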
#2019-04-2513:50Cas Shunah, i think that's the right idea yes#2019-04-2513:50Cas Shunthank you#2019-04-2513:50Ivar RefsdalNo problem#2019-04-2514:17favilaConsider making an "enum entity" that references all these idents#2019-04-2514:17favilaor the other way around#2019-04-2514:18marshallCloud does indeed support db functions if you prefer that route: https://docs.datomic.com/cloud/query/query-data-reference.html#deploying#2019-04-2514:18marshallgrr sorry wrong link#2019-04-2514:18marshalledited#2019-04-2514:18favila{:db/ident :enum/loc :enum/members [:loc/us ...]}#2019-04-2514:19favilaor {:db/ident :loc/us :enum/member-of [:enum/loc ,,,]}#2019-04-2514:19marshallfwiw, I like @U09R86PA4’s approach 🙂#2019-04-2513:17Cas ShunI know how to query for everything used, but I'd like to know what's available#2019-04-2515:57Joe LaneGreat question @me1740, I’m hoping to write a single transaction function which will either return nothing to transact or some data to update.
The advantage here is leveraging the transactor for its atomicity. In the case where there is nothing to transact I’d like to avoid creating an empty transaction entity. Having nothing to transact is the common path. The whole reason for the question is to avoid increasing the number of datoms in the system with empty transaction entities.
It’s a system with N timers firing once per second, where N could scale to thousands (eventually). The callback attached to each timer will check a predicate and, if true, will transact some new state into Datomic; if false, it should do nothing.
I could augment this system to have a single timer that checks the predicate against all candidate entities, but that runs the risk of overrunning the 1 second timer (eventually) if the number of candidate entities grows large (may not be problematic in practice, need to measure).
Throwing to avoid an extra transaction datom may work, but I worry about exception throwing as flow control slowing the transactor down (exceptions are notorious perf killers on the JVM), ESPECIALLY since having nothing to transact is the normal case, not the exceptional case.#2019-04-2516:06Joe LaneI think I’ll try to detect if a tx is needed, then try my tx function.#2019-04-2522:36julianI am trying to run d/sync on client and getting IllegalArgumentException No implementation of method: :sync of protocol: #'datomic.client.api/Connection found for class: datomic.client.impl.shared.Connection clojure.core/-cache-protocol-fn (core_deftype.clj:583)
client is com.datomic/client-pro "0.8.17"#2019-04-2615:22jaretDatomic Cloud 470-8654.1 now available. https://forum.datomic.com/t/datomic-cloud-470-8654-1-critical-update/955#2019-04-2616:48souenzzoI tried to delete my stack, as described here https://docs.datomic.com/cloud/operation/upgrading.html#first-upgrade
Then I got a permission error
Now I'm "admin of AWS" (since it's undocumented which permissions are needed)
Tried again to delete: DELETE_FAILED
Stack status reason
The following resource(s) failed to delete: [StorageF7F305E7].
#2019-04-2617:00souenzzoI tried again and now my-stack has been deleted, but my-stack-Compute-XYZ still shows DELETE_FAILED
Should I delete it?
When I try to delete it, it asks me if I want to delete 10+ things#2019-04-2617:15bbssI am running into problems with deployment of an ion app on my solo installation, and I am not really sure where to find what's wrong. The step ValidateService keeps timing out and all the log says is a bunch of [stdout]Received 000. The rollback also seems to fail.#2019-04-2617:35jaretHi @U09MR0T5Y if you’re failing on ValidateService that is symptomatic of a timeout in loading your deps or your app at that step. I’d be curious what changed between deployments that worked and this one?
FYI
Ion monitoring docs:
https://docs.datomic.com/cloud/ions/ions-monitoring.html#local-workflow
Ion troubleshooting docs with known errors:
https://docs.datomic.com/cloud/troubleshooting.html#troubleshooting-ions#2019-04-2617:37jaretYou’ll want to check your Cloudwatch logs and system logs for alerts. If anything was syntactically wrong you may see a syntax error in the system logs.#2019-04-2617:37bbssThanks @U1QJACBUM for the quick response. I figured it out already, had an error in my code that didn't pop up when running it locally.#2019-04-2617:37jaretAh cool! Usually that will error in the system logs too.#2019-04-2617:37jaretso you can see the syntax error.#2019-04-2617:37bbssit wasn't a syntax error I think, but I still hadn't found the root error. I'll dig around some more to get familiar.#2019-04-2617:38bbssin googling my error I found a clojurians log that had you give a nearly identical answer by the way 🙂#2019-04-2617:38jaretlol I copied it over 😉#2019-04-2617:40bbssso far I'm pretty happy to work with datomic ions, it was quite some work to get aws with cognito working with a clojurescript-spa+cloudfront+custom domain. But I'm almost there with all the authentication and configuring.#2019-04-2617:40bbssAnd once that is set-up I'll have an awesome dev set-up with a local running version to dev against, and minimal steps to deploy front-end and back-end 🙂#2019-04-2618:55bbssI found the logs for the specific lambdas on CloudWatch very helpful.#2019-04-2618:56bbssNow everything came together nicely: my oauth hosted-ui authorization flow is working and my cljs front-end is getting authorized data from datomic. I'm happy! Thanks again 🙂#2019-04-2707:19henrikHas anyone found a workaround for the java.io.IOException: Connection reset by peer errors when working with Ions?#2019-04-2707:59steveb8nwhen is this happening for you? I’m curious in case I encounter the same thing but also, maybe you have a locked-out node like I have in the past. 
If that’s true, you can terminate the node in the EC2 console and the ALB will create a new one#2019-04-2709:04henrikI've set up three separate Datomic Cloud systems (2xSolo, 1xProduction).
All of them, including separate query groups created on the production system, exhibit this problem intermittently.
I assumed it was a fact of Ions.#2019-04-2709:04henrikRunning the latest CF template and libs etc.#2019-04-2709:06henrikI could cycle all of them I guess, but I doubt that's the issue given the prevalence.#2019-04-2709:11henrikOne app is an API. For this one, we've solved it by wrapping API calls in retries. One is serving HTML though, so that particular workaround doesn't work obviously.#2019-04-2717:03Joe LaneSo the datomic cloud system is serving HTML? One way around that is to serve static assets from Cloudfront or S3 and use D.C. purely for the API.
We started always setting up a Cloudwatch Event to healthcheck / warm the Lambdas and try to hit this problem with a CW event instead of a customer.
It's not perfect but I expect this will be fixed in the future due to the work done on connecting from apigateway directly to the ec2 machines instead of through a lambda proxy.#2019-04-2717:41henrikThe API plus the data wrangling magic is deployed to a query group. The app (front + backend) sits on another query group and talks to the API. I'll probably stick it in a non-Ion environment to get rid of the issues.
Which is a shame, you do still get a lot of nice affordances by running it as an Ion, even if it's not talking to Datomic directly.#2019-04-2717:44henrikYeah, hopefully it'll get worked out.
We have an ElasticSearch cluster, which we obviously can't run as an Ion. But in an effort to align it with the rest of the system, I created it using CloudFormation and wrote a set of tools that replicates some of the stuff that comes with Datomic Cloud (such as push/deploy etc.)
Datomic is obviously nice, but I find the tooling around it rather nice to work with as well.#2019-04-2719:54Joe Lane@U06B8J0AJ I’m experimenting with lucene on datomic cloud via ions. Just curious, what order of magnitude are your elasticsearch indexes? 100s of gbs or 1000s of gbs?#2019-04-2803:40henrik@U0CJ19XAM Interesting!
We're currently at about 2.3TB in total. The individual indices are around 500-800GB.#2019-04-2803:43henrikIt's all growing and likely to continue growing though.#2019-04-2804:16Joe LaneCool! Sounds like a lot of info, unfortunately it may be too large for the problem space. I was hoping to use the disk space on the i3large or i3 xlarge nodes but only for cases where the data will stay under 500 GB.#2019-04-2807:12henrikYeah, that's definitely too large, and ES has requirements with regards to cluster behaviour that I don't think would be immediately compatible with Datomic Cloud anyway.
If Datomic could run on i3.2xlarge, the disk space might not be a problem. In the name of compartmentalisation, it might be prudent to leave the transaction nodes alone though.#2019-04-2817:36Joe LaneYeah, I was planning on putting the nodes on a different instance 🙂#2019-04-2913:42Keith HarperIs there any noticeable performance cost associated with using query rules vs. building the rules into the where clause?#2019-04-2917:29marcolHow do I get the next chunk when using the async api? Whenever a take is made on the channel, the channel is then closed so I can't take again. Is this how to get the next chunk, through a succession of takes?#2019-04-2917:41marcolI had to add the :limit option in order for it to not close the channel, although I am now getting an error of :cognitect.anomalies/unavailable ... Specified iterator was dropped, what does that mean and can I do something about it?#2019-04-2919:03dogenpunk#2019-04-2919:06marshall@dogenpunk generally, yes, as we recommend you split your stack into independent storage and compute stacks as part of your first upgrade: https://docs.datomic.com/cloud/operation/upgrading.html#2019-04-2919:07dogenpunkOk, so if I have a nested Storage and Compute stack, then I haven’t upgraded it previously?#2019-04-2919:07marshallmost likely that is the case, yes#2019-04-3015:41hadilsHi! How do I append data to a cardinality/many datom? Do I have to fetch it first, append it then transact it? I am using Datomic Ions.#2019-04-3015:46Joe Lane@hadilsabbagh18 https://docs.datomic.com/on-prem/transactions.html#cadinality-many-transactions#2019-04-3015:47Joe LaneYou just transact a vector containing the new things you want to add.#2019-04-3015:48Joe LaneIt does not replace the value; it conj's those new values onto the existing collection.#2019-04-3015:48hadilsThanks @lanejo01! Should I be reading the on-prem docs as well as the cloud docs?#2019-04-3015:49Joe LaneI’ve often asked myself that question haha.
For fundamental things like the above I think its ok. There will be some subtleties. Usually I look at Cloud docs, then on-prem docs if I can’t find something in the Cloud docs.#2019-04-3015:50hadilsGood to know @lanejo01! I will use that approach.#2019-04-3016:20dustingetz@hadilsabbagh18, you definitely want to watch the day of datomic talks which cover the core information model, which is the same in both onprem and cloud#2019-04-3016:21hadilsGood suggestion @dustingetz. I have watched one of the talks and understand the basics. I need to dive deeper I think.#2019-04-3016:37timgilbertSay, is it necessary to specify :db/index true on :db.type/ref entities in order to get the backrefs to be indexed?#2019-04-3016:54marshall@timgilbert nope. all ref type attributes are automatically indexed in both directions#2019-04-3016:55timgilbert:+1: Thanks!#2019-05-0115:35donaldballDoes anyone have a recipe for deploying a datomic pro jar to a private s3 maven that works with STS credentials e.g. granted by using aws-vault?#2019-05-0116:16drewverleeso
db.unique/identity means no duplicate EAVs right? e.g. inserting 1 color blue, 1 color blue wouldn't be allowed.
whereas db.unique/value means no duplicate AVs, so 1 color blue and 2 color blue wouldn't be allowed.
I'm not sure my last statement is right; nothing in the docs makes it clear to me that once you assign db.unique/value to an entity it's not the only AV you can use to refer to the entity. Just confirming that this is indeed the case 🙂
secondly, assuming the above isn't true, it would seem a good thing to use db.unique/value as it's rare that you want to return two entities if you're using a key, so it would be nice to enforce that. Again, that's assuming db.unique/value doesn't close that entity to one AV (primary key).#2019-05-0116:26Ben Hammondand sometimes a nasty surprise#2019-05-0116:44uwoIs it always best practice to dereference the return value of d/transact, in order to realize and handle exceptions that only occur after .get is called on the returned Future? Or is it acceptable to call d/transact without a dereference, fire-and-forget style?#2019-05-0116:46favilaThis question is really "is it acceptable not to inspect for and handle errors?" Probably not?#2019-05-0116:47favilaWithout deref, you will only get a timeout exception thrown from d/transact (d/transact-async will never throw)#2019-05-0116:47favilaso you will never know about tx problems without deref-ing#2019-05-0116:54uwoThank you!!#2019-05-0119:19yedihey all, so the following will return all values of ?e where age is 42
[:find ?e
:where [?e :age 42]]
#2019-05-0119:19yediwhat if I wanted to determine if there was any ?e in the dataset that had age set to 42? without having to check the entire unified dataset#2019-05-0119:20yediso basically clojure's some#2019-05-0119:22yedinot sure if this question even makes sense from a logic programming perspective#2019-05-0119:33alexmillerI don't know if this is the best answer, but Datomic gives you raw access to the indexes#2019-05-0119:33alexmillerand this is basically a question you can ask of the VAET index via the datoms api https://docs.datomic.com/client-api/datomic.client.api.html#var-datoms#2019-05-0119:35alexmiller(d/datoms db {:index :vaet, :components [42 :age]}) gives you an iterable of those entities that have :age 42, which you check for emptiness#2019-05-0119:37yedimmm, but unfortunately you wouldn't be able to do that within a datomic rule#2019-05-0119:37yedimy use-case is trying to speed up a query by not having to check all ?es if the first ?e already passes the clause#2019-05-0119:38alexmillerwell, not sure I'm qualified to answer that#2019-05-0119:38yedino worries, thanks for attempting#2019-05-0120:06yedianother question - can someone confirm that ors in datalog don't short-circuit?
e.g. both fast-clause and slow-clause will run even if fast-clause is true
(or [(fast-clause? e)]
[(slow-clause? e)])
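As benoit explains further down the thread, an `or` in Datalog is a union of the sets matched by each branch rather than a short-circuiting conditional, so both branches are evaluated in full. A plain-Clojure picture of that semantics (the sets are invented for illustration):

```clojure
(require '[clojure.set :as set])

;; Entities matched by each branch of the `or`. Both branches are
;; computed in full and then unioned; nothing short-circuits.
(let [fast-matches #{1 2 3}
      slow-matches #{3 4 5}]
  (set/union fast-matches slow-matches))
;; => #{1 2 3 4 5}
```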
#2019-05-0120:21Keith HarperI’ve been wondering the same thing myself. I suspect that they do not short-circuit, but I would love clarification on that.#2019-05-0120:22yedifrom some testing on my end, it does not appear to do so#2019-05-0120:31donaldballHave you tried using rules? I’d be curious to see if they have the same behavior.#2019-05-0120:34yediwhat i tested was using rules#2019-05-0120:39donaldballwell then! 🙂#2019-05-0120:34benoit@yedi
This should return only one value:
[:find ?e .
:where [?e :age 42]]
Note the period after ?e.#2019-05-0120:35yedithats true, but the clause im looking at is within a rule#2019-05-0120:37grzm@jaret @marshall Have you considered exporting the DatomicLambdaRole name as an export from the Cloudformation templates for Datomic Cloud? It would make adding policies to the role easier instead of looking it up and supplying it as a parameter. Small thing, but I've wished for this more than once.#2019-05-0120:37yediso instead of [?e :age 42] that would bind all the results to ?e it'd be nice to have something like [?b (some? e? :age 42)]#2019-05-0120:38yedior something like that idk#2019-05-0120:40benoitRegarding the or, you should interpret or as a union of sets. Datomic will find all entities matching the fast-clause and the slow-clause and "set union" them.#2019-05-0120:42yediok that makes sense and is a good way to look at it (so basically from an procedural perspective: no short circuiting)#2019-05-0120:43benoitIMO the best way is to not take the "procedural perspective" at all because it's going to cause more confusion.#2019-05-0120:43yediyea that's fair#2019-05-0120:43benoitFor your query, keep in mind that you can call clojure functions in your clauses.#2019-05-0120:43benoit[(datomic.api/datoms ...) ?result]
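benoit's `[(datomic.api/datoms ...) ?result]` shorthand, expanded into a fuller sketch: `d/datoms` can be called inside a `:where` clause and its datoms destructured as bindings, which also speaks to the earlier "does any ?e have :age 42" question without unifying over the whole dataset. This is a peer-API sketch (the `:age` attribute is carried over from the example above) and assumes a bound `db` and an indexed attribute, so it is untested here:

```clojure
;; Sketch: bind entities straight from the AVET index (peer API).
;; Assumes :age has :db/index true so it appears in AVET.
(d/q '[:find ?e
       :in $
       :where
       [(datomic.api/datoms $ :avet :age 42) [[?e]]]]
     db)
```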
#2019-05-0120:44yediok i think i might need to do that. basically im dealing with a performance problem when using queries with multiple deeply nested rules#2019-05-0120:44yediwhere rules with the same variables are showing up tons of times in the query beacuse theyre included within other rules#2019-05-0120:45benoitThat said, the perf issues I encountered in clauses where most often due to bad ordering.#2019-05-0120:45yedihm not sure if that made sense, or even if that is the source of the performance problems#2019-05-0120:45yediyea our team already went through and made a bunch of ordering changes that def helped#2019-05-0120:46yedibut now weve limited the remaining perf problems to specific queries that leverage a ton of rules#2019-05-0120:52benoitA few heuristics I found helpful:
1. Make sure to have most specific rules first (cardinality of the matching set should be pretty low)
2. Make sure you can navigate from one rule to the next. If a rule starts doing something completely different from the rule before it, it basically means you will do a cartesian product)#2019-05-0120:53yediok interesting, ill have to do some thinking around those heuristics#2019-05-0120:54yediand if you kinda want more clarity into the case im looking at, i have a rule that looks something like this:
(visible-instances [?person ?group] ?instance)
[?instance :instance/group ?group]
(or-join [?person ?group ?instance]
(group-admin? ?person ?group)
(person-can-see-instance? ?person ?instance))
#2019-05-0120:54yediwhere the rule person-can-see-instance? also has the clause (group-admin? ?person ?group)#2019-05-0120:55yediso if there are like 5000 instances, that group-admin rule might be getting applied 5000 times, even if its already been done once#2019-05-0120:55yediand im not sure if thinking about datalog in this way even makes sense#2019-05-0120:55benoitAH AH I optimized this kind of rules for permissions last week in our system.#2019-05-0120:56benoitAre you trying to check whether a given user has permission to view a given instance?#2019-05-0120:57yediwell the example above returns all instances the user has the permission to see#2019-05-0120:57yediperson-can-see-instance? does what you say, checks for a specific instance#2019-05-0121:02yediso in visible-instances if the user is the group-admin, i don't want it to resolve the clause (person-can-see-instance? ?person ?instance) because i already know it can see all instances in that group#2019-05-0121:03yedibut based off your explanation above, it is gonna resolve both clauses and merge them#2019-05-0121:03benoit[(visible-instances [?person ?group] ?instance)
[?instance :instance/group ?group]
(group-admin? ?person ?group)]
[(visible-instances [?person ?group] ?instance)
(not (group-admin? ?person ?group))
(person-can-see-instance? ?person ?instance)]
#2019-05-0121:03yediinto a set#2019-05-0121:03yediooooooo#2019-05-0121:04benoitI don't know whether it solve your overall perf issues but it should bypass your expensive clauses..#2019-05-0121:05yedii think this is the correct version: [(visible-instances [?person ?group] ?instance)
[?instance :instance/group ?group]
(group-admin? ?person ?group)]
[(visible-instances [?person ?group] ?instance)
[?instance :instance/group ?group]
(not (group-admin? ?person ?group))
(person-can-see-instance? ?person ?instance)]
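The split-rule pattern above generalizes to three or more alternatives by guarding each successive body with negations of the earlier branches. A sketch with a hypothetical third branch, `group-moderator?`, which is not from the thread:

```clojure
[(visible-instances [?person ?group] ?instance)
 (group-admin? ?person ?group)
 [?instance :instance/group ?group]]
[(visible-instances [?person ?group] ?instance)
 (not (group-admin? ?person ?group))
 (group-moderator? ?person ?group)   ; hypothetical extra branch
 [?instance :instance/group ?group]]
[(visible-instances [?person ?group] ?instance)
 (not (group-admin? ?person ?group))
 (not (group-moderator? ?person ?group))
 [?instance :instance/group ?group]
 (person-can-see-instance? ?person ?instance)]
```

The bodies are mutually exclusive, so their union contains no duplicates and the expensive clause only runs for people who fail both of the cheap checks.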
#2019-05-0121:05yedibut yea, that seems suuuuuper promising#2019-05-0121:05benoitcorrect#2019-05-0121:09yediwell i guess the group-admin? clauses should go above the ?instance clauses, so that it makes the set go to 0 before it tries to find all the instances for that group#2019-05-0121:09benoitgood point#2019-05-0121:13benoitBut I'm not sure why you need two different rules for visible-instances and person-can-see-instance. It seems those could be one rule and only take the ?person and the ?instance. If you call the rule without binding ?instance it will bind all visible instances. If you bind ?instance it will let you know whether the person can see this specific instance.#2019-05-0122:23yediall true, except it won't be limited to visible instances in that group, unless you added that binding prior to the rule#2019-05-0122:24yediwhich you could do, but in the effort of DRY, making a rule that already encapsulate that group binding seems preferred#2019-05-0121:16weiis retractEntity not available in datomic cloud? I'm getting the following exception trying to remove an entity in a transaction: ExceptionInfo Unable to resolve data function: :db.fn/retractEntity clojure.core/ex-info (core.clj:4739)
#2019-05-0121:17weioh nevermind it's been renamed#2019-05-0121:18weiwas looking at the on-prem docs, in cloud it's db/retractEntity#2019-05-0122:20yedi@me1740 thanks a lot, that helped for that one query#2019-05-0122:22yedibut it appears that this strategy may get unwieldy. like if your "or" clause is supposed to have 3 or more diff branches instead of 2#2019-05-0122:22yedii would have to include multiple negation clauses at the top level of the rule (similar to how we did (not (group-admin? ?person ?group))) instead of just the one#2019-05-0201:30drewverleeI'm trying to setup datomic cloud, i might eventually roll this into a "production" server, but it's doubtful that will happen anytime soon (months if ever).
https://docs.datomic.com/cloud/setting-up.html
The docs make this comment:
> If you are setting up a Datomic Cloud System for production or doing exploratory work for such a purpose then consider creating separate storage and compute stacks.
Should I go that route and follow the instructions here? https://docs.datomic.com/cloud/operation/new-system.html
I would guess not, as it probably a bit more complex and i can upgrade to one later right?#2019-05-0201:35souenzzo@drewverlee cloudformation is the central point
You can create storage, solo, production and query from it.
To create a system, you just need to copy the URL from https://docs.datomic.com/cloud/releases.html and paste it into "create from JSON URL"
the storage stack, usually my-system, stores the data. It will never be deleted
the solo stack, usually my-system-XXXX, can be deleted and recreated (on create, it will ask for the storage name)
you can delete a solo and create a production, pointing to the same storage#2019-05-0212:47Ivar RefsdalI'm using datomic pro 0.9.5786 on-prem and got the following error in my logs (as WARN level) from the datomic.common logger:
o.a.a.a.a.c.ActiveMQNotConnectedException: AMQ119007: Cannot connect to server(s). Tried with all available servers.
at o.a.a.a.c.c.i.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:799)
at d.artemis_client$create_session_factory.invokeStatic(artemis_client.clj:114)
at d.artemis_client$create_session_factory.invoke(artemis_client.clj:104)
at d.connector$try_hornet_connect.invokeStatic(connector.clj:110)
at d.connector$try_hornet_connect.invoke(connector.clj:95)
at d.connector$create_hornet_factory.invokeStatic(connector.clj:142)
at d.connector$create_hornet_factory.invoke(connector.clj:132)
at d.connector$create_transactor_hornet_connector.invokeStatic(connector.clj:322)
at d.connector$create_transactor_hornet_connector.invoke(connector.clj:317)
at d.connector$create_transactor_hornet_connector.invokeStatic(connector.clj:320)
at d.connector$create_transactor_hornet_connector.invoke(connector.clj:317)
at d.p.Connection$fn__10138.invoke(peer.clj:237)
at d.peer.Connection.create_connection_state(peer.clj:225)
at d.peer$create_connection$reconnect_fn__10213.invoke(peer.clj:489)
at c.core$partial$fn__5841.invoke(core.clj:2623)
at d.common$retry_fn$fn__697.invoke(common.clj:514)
at d.common$retry_fn.invokeStatic(common.clj:514)
at d.common$retry_fn.doInvoke(common.clj:497)
at clojure.lang.RestFn.invoke(RestFn.java:713)
at d.peer$create_connection$fn__10215.invoke(peer.clj:493)
at d.r.Reconnector$fn__9445.invoke(reconnector2.clj:57)
at c.core$binding_conveyor_fn$fn__5756.invoke(core.clj:2030)
at clojure.lang.AFn.call(AFn.java:18)
at j.u.c.FutureTask.run(FutureTask.java:266)
at j.u.c.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at j.u.c.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Is there a way to change this message to ERROR level?
Is there a way for my app to detect that this has happened?
Edit: I got this as a one-off error; usually everything works fine.#2019-05-0212:59matthewdanielI’m going to play around with datomic cloud but will only have a couple days a week to do so. Does it work to just stop the ec2 instances and turn them on when i’m learning it? Or do i have to tear down the whole stack each time?
i’m trying to setup a javascript connection planning to use https://github.com/limadelic/datomicjs#2019-05-0222:58Joe LaneUnfortunately I don't believe there is.#2019-05-0222:58Joe LaneDatomic cloud isn't exposed over the internet#2019-05-0215:43Rafael Namiyep, having the same error for the dev setup on prem as @ivar.refsdal when I try to create a database#2019-05-0215:43Rafael Namidev setup === local dev setup#2019-05-0223:16Rafael Namimemory works fine#2019-05-0223:16Rafael Namitrying with datomic + dynamodb local too#2019-05-0223:16Rafael Namisame error#2019-05-0223:16Rafael NamiActiveMQNotConnectedException AMQ119007: Cannot connect to server(s). Tried with all available servers. org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:799)#2019-05-0223:16Rafael Namiwhen trying to create a database#2019-05-0307:25Ivar RefsdalDid you read/try the alt-host setting?
https://docs.datomic.com/on-prem/storage.html#connecting-to-transactor
I got this error as a one-off. My app has been running fine in production for a while now#2019-05-0321:11Rafael NamiThanks Ivar, will try that one - tried with dev protocol and ddb-local - both failing with the same message and not creating the database#2019-05-0321:12Rafael Namiwill try with the peer connecting with the transactor (was connecting using REPL)#2019-05-0223:23mrchanceHey, how can I do a conditional over an aggregate in datomic? I tried something along the lines of
[(max ?date) ?last-date] [(before ?last-date ?ref-date)] but it looks like that's always a max over one item in the query, which is kind of expected#2019-05-0223:28benoitYou can use subqueries. https://forum.datomic.com/t/subquery-examples/345#2019-05-0223:31mrchanceAh, nice, thank you!#2019-05-0313:04dmarjenburghWhat do you use to automate the datomic ion push, deploy, and deploy-status checks in an automated pipeline? Do you parse the output of the cli commands manually?#2019-05-0313:27danierouxIs this sort of what you are looking for? https://gist.github.com/olivergeorge/f402c8dc8acd469717d5c6008b2c611b#2019-05-0313:33dmarjenburgh@danie Great! :+1::skin-tone-3: I was trying to call ion-dev code from clojure, but without docs or source code I feared it's not a public api. Also, calling it from windows gave me git issues, but it works on unix#2019-05-0320:35rlanderIs there a minimum amount of RAM for running the transactor and the peer? Would they run on a droplet with 1gb of ram?#2019-05-0321:51hadilsI am getting this error: ExceptionInfo Two datoms in the same transaction conflict: {:d1 #datom[158 40 20 13194139533328 true], :d2 #datom[158 40 23 13194139533328 true]} clojure.core/ex-info (core.clj:4739)
How do I troubleshoot this?#2019-05-0322:04hadilsNvm, I figured it out.#2019-05-0412:21matthewdanielif created-at is used by consumers would it be best to store it or is there some datomicy way to do that which isn’t expensive#2019-05-0414:02val_waeselynckCould be related: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2019-05-0414:10val_waeselynckProbably best to have an attribute dedicated to that.#2019-05-0414:13matthewdanielyeah, i started moving that direction. thanks#2019-05-0714:35conanit's probably too big a change for you to consider, but Juxt's new bitemporal db is worth considering here
https://juxt.pro/crux/index.html#2019-05-0700:08jdkealyHi, what would cause Transactor not available errors from Datomic ?#2019-05-0719:03gwsI've seen these during HA transactor failovers and occasionally on unreliable networks. if you're transacting often, even if the transactions are small as you say, you're more likely to see this purely because of statistics#2019-05-0909:05Ivar RefsdalI'm fairly certain I have gotten Transactor not available as a red herring (false positive). I reproduced this locally three times this morning.
The real issue was that a cross product query was being run concurrently and starving memory/CPU.
This led to the application freezing/slowing and d/transact failing.
In my case the cross-product query took about 10-15 seconds. Three concurrent queries starved the machine's resources, and after about 3 minutes of run time the Datomic AQ connection died and d/transact failed with Transactor not available.
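The failure mode described here comes from :where clauses that share no variables. A minimal sketch of such an accidental cross product, using made-up :order/id and :user/email attributes:

```clojure
;; Made-up attributes, for illustration only. The two clauses bind
;; disjoint variables, so the result set is the full cartesian product
;; (N orders x M users) and is realized in peer memory.
'[:find ?order ?user
  :where
  [?order :order/id _]
  [?user :user/email _]]
```

Adding a clause that joins ?order and ?user (or splitting the work into two queries) keeps the result proportional to the actual matches.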
Hope that helps! And apologies if I'm wrong.
Best regards.#2019-05-0700:09jdkealyIt happens when i'm trying to transact relatively small transactions.#2019-05-0700:12jdkealyit's too infrequent to recreate, but happening enough to cause problems#2019-05-0705:04hadilsI have a design question. I am still new to Clojure/Datomic. I am thinking about passing around maps instead of records and, lately, I have been thinking to use the Datomic key names directly, instead of repeatedly renaming them. I have a GraphQL interface and external APIs which need to rename the keys anyway; I don't see why I can't just use the Datomic :db/ident names. Any thoughts on this subject? The only downside I see is that if I refactor the schema, I have to change it in a lot of places. Thanks.#2019-05-0714:28donaldballYou might consider adding an attribute to those attributes you want to expose on graphql indicating their public name, e.g. :schema/published.ident#2019-05-0714:28conani've had good experiences with aligning clojure.spec and datomic schema, and passing this right the way up to the frontend of my app. I even wrote a bit about it here:
https://conan.is/blogging/clojure-spec-tips.html#datomic
there are some libraries that attempt to bridge the gap between lacinia (for graphql) and datomic schema, although i haven't had a chance to try them yet:
https://github.com/workframers/stillsuit
https://github.com/workframers/catchpocket
https://juxt.pro/blog/posts/through-the-looking-graph.html#2019-05-0714:29conani think there are other libraries, so be sure to do some digging#2019-05-0714:29conanmaybe @U0654RQ1F has made this work#2019-05-0720:26hadilsThanks all for your input. This is very helpful.#2019-05-0808:07Ivar RefsdalI've used the same names for Datomic and GraphQL fields. And I've instructed cheshire (the JSON serializer) to strip the namespace part of the keys (as well as :db/id) to make the response as GraphQL expects.
This way I can work with the data as datomic pull returns it, and when the web response is given it is finally transformed into the GraphQL format.
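A sketch of that transformation (not the poster's actual code): recursively drop :db/id and the namespace part of every keyword key in a pull result, so a plain JSON serializer produces the field names GraphQL expects:

```clojure
(require '[clojure.walk :as walk])

;; Sketch, assuming pull-style nested maps: walk the result bottom-up,
;; removing :db/id entries and stripping the namespace from keyword keys.
(defn strip-ns [pulled]
  (walk/postwalk
   (fn [x]
     (if (map? x)
       (reduce-kv (fn [m k v]
                    (if (= :db/id k)
                      m
                      (assoc m (if (keyword? k) (keyword (name k)) k) v)))
                  {} x)
       x))
   pulled))

(strip-ns {:db/id 17
           :user/name "Ada"
           :user/accounts [{:db/id 18 :account/iban "NL00BANK0123456789"}]})
;; => {:name "Ada", :accounts [{:iban "NL00BANK0123456789"}]}
```

Because postwalk visits nested maps first, the stripping applies at every level of the pull result.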
This has worked well for me#2019-05-0907:43val_waeselynck@UGNMGFJG3 controversial opinion: don't use Clojure's namespacing convention, use one that can be supported by GraphQL and all the places where your data will end up (e.g. x_y_z_k instead of the more clojury x.y.z/k). You'll still get the essential benefits of namespacing, and only lose a bit of syntactic sugar.
In my experience on real-world Datomic projects: having a ubiquitous namespacing convention is more important than having an idiomatic one. Data traceability is more valuable than concision or aesthetics.
https://clojureverse.org/t/should-we-really-use-clojures-syntax-for-namespaced-keys/1516#2019-05-0914:02hmaurerHmm interesting. I’ve also been annoyed at the fact that namespaced keywords are a great feature of clojure that isn’t present in other languages, which inevitably leads to the question of what to do with namespaces as soon as you interact with another lang.#2019-05-0918:53henrikAre there systems that don't support uppercase letters?
I have encountered databases that don't support lowercase letters in column names.#2019-05-0919:00val_waeselynck@U06B8J0AJ I believe SQL names are case insensitive, so you get lowercase output#2019-05-1413:22bhurlow@U5ZAJ15P0 though not very satisfying, we use the string representation of clj keywords in our javascript like: ":user/firstname". This still has the benefit of being able to merge safely#2019-05-1604:29timur@U0FHWANJK what happens to your string keywords when they get JSON parsed? That is, once they arrive into the Javascript world over the wire, they are usually parsed from a string into a key.#2019-05-1605:50val_waeselynck@U07HW6PNW any string can be used as a key in js, you just could not use that one with dot notation.#2019-05-1606:21timurconst foo = {“foo/bar”: 42} will work because it’s a string, but my question was what happens to these string keys that contain forward slashes when the containing objects get JSON parsed?#2019-05-1606:33val_waeselynckThey remain the same? I may not understand the question.#2019-05-1609:17henrik@U07HW6PNW Do you mean if the contents of the string is interpreted as JSON? I'd say all bets are off, as they indeed are with any string.#2019-05-1613:26bhurlow@U07HW6PNW in our app we simply leave them as string keys which means like val noted we cannot use the dot syntax. It looks like entity[":user/username"]. This is certainly a downgrade from actual keyword value types, but we still retain some of the advantages. 
We did consider at one point camel casing into userUsername but this we felt was not necessarily stronger#2019-05-1619:01timurRight, so anywhere in the system between boundaries and across wire where you might not have control, and some code is doing the automatic JSON.parse on the payload, you’re bound to run into issues, I would think.#2019-05-1717:34conansorry if the topic has moved on, but this might be of interest https://github.com/workframers/catchpocket#2019-05-1718:03donaldballWe looked at that and at least one other take on “generate graphql schema from datomic” system, but given that we want our graphql schema to be our public contract, we decided we were happier using a literal graphql schema expressed as lacinia-compatible edn that we annotate with datomic bindings and some higher order niceties.#2019-05-1806:26val_waeselynckYou could also derive both the Datomic and GraphQL schemas from a common source of truth: https://vvvvalvalval.github.io/posts/2018-07-23-datascript-as-a-lingua-franca-for-domain-modeling.html#2019-05-0720:04joshkhdoes the [?a ...] find specification syntax for returning collections work on Datomic Cloud?
i get the following on Cloud when running the on-prem example [1]:
(d/q
'[:find [?release-name ...]
:in $ ?artist-name
:where [?artist :artist/name ?artist-name]
[?release :release/artists ?artist]
[?release :release/name ?release-name]]
(client/db) "John Lennon")
ExceptionInfo Only find-rel elements are allowed in client find-spec, see clojure.core/ex-info (core.clj:4739)
[1] https://docs.datomic.com/on-prem/query.html#find-specifications
(also i think the link in the Exception is outdated)#2019-05-0720:05kennyIt is not supported.#2019-05-0720:06joshkhdarn. okay, thanks @U083D6HK9#2019-05-0804:16currentooris there any harm in asserting an already true fact in a transaction? as in will it cause extra noise/overhead?#2019-05-0816:20Lennart BuitI think its not even asserted? Asserting something that is already asserted as the same value is filtered#2019-05-0816:20Lennart BuitLet me see if I can find a resource to back my claim#2019-05-0816:21Lennart BuitOh here we go! https://docs.datomic.com/cloud/transactions/transaction-processing.html#redundancy-elimination#2019-05-0816:25Lennart BuitSo your history remains “clean”#2019-05-0816:28currentoorawesome thanks @UDF11HLKC#2019-05-1217:04eoliphantBut you still get a transaction, which can be annoying in some use cases. We’ve been generally fine with this as-is, but recently our UX folks wanted to add an autosave feature to one of our apps. We’re still trying to decide the best way to handle it, especially since we do some ES-lite with datomic and tag our transactions with biz event markers. SO we’re ending up with a ton of ‘:updated-foo’ transactions with no actual updates.#2019-05-1217:14Lennart BuitCorrect, I can understand how that is annoying, but it feels consistent. Every time you call d/transact a new transaction entity is asserted. Regardless of whether that transaction contains new domain ‘facts’.#2019-05-0813:47matthewdanielanyone know if progress has been made on console for cloud? looks like stu said to stay tuned over a year ago.#2019-05-0813:49souenzzoI think that #rebl is kind of frontend for cloud
I have plans to create a "client frontend", to list my databases, create, delete, see last tx, first tx, datoms count..#2019-05-0814:00alexmillermore coming on rebl + datomic for sure#2019-05-0815:14dangercoderAre there any tutorials on how to use an in memory database for integration tests using the Datomic Client (Cloud)? It is a problem I am going to solve after work.#2019-05-0815:36Joe Lane@jarvinenemil Why do you need an in memory database? Why can’t you spin up a blank db when you start your tests and tear it down after?#2019-05-0816:18dangercoderI decided to try out
https://github.com/ComputeSoftware/datomic-client-memdb#2019-05-0819:44souenzzoI'm using it.#2019-05-0820:47Daniel HinesHow do people usually enforce things like “This particular type of entity must conform to this particular spec” in Datomic? In SQL DB’s, I could say, “Column X is required for Table Y”. In DB’s/CRM’s with ref types, I can also say “Column A refers to a row in Table B”. I get that a transaction function is the right “when”, but I’m wondering what the common idioms are for “how”.#2019-05-0821:23dustingetz“entity types” Maybe have a look at how https://github.com/luchiniatwork/hodur-engine does it#2019-05-0821:23dustingetzVal has a blog post about modeling with entity types as well i believe#2019-05-0907:49val_waeselynck@U09K620SG @U8QTB156K I don't really have a blog post on that, nor strong opinions.
Avoid doing unnecessary work in the Transactor. I'd recommend doing most validation (e.g. validating attribute values) upstream of transacting.
See also: https://stackoverflow.com/questions/48268887/how-to-prevent-transactions-from-violating-application-invariants-in-datomic/48269377#48269377#2019-05-0913:57hadilsThanks @val_waeselynck! This is an excellent idea. I already named my keys and I don't have the opportunity to go back and rename them. Also, there is no single name for each key that satisfies GraphQL and my external APIs. I will keep this in mind for my next project.#2019-05-0914:08Daniel HinesIsn't mapping namespaced keywords to some sort of concatenated string a bijective transformation, such that you can really simply convert from one to the other?#2019-05-0914:09Daniel HinesKind of like camel case to hyphenated, etc.#2019-05-0914:17favilaIt's not bijective. :a_b.c -> :a_b_c -> :a.b.c#2019-05-0914:18favilayou would have to add some unambiguous escaping mechanism to know that a character was in the original rather than the result of a transformation. Or reject certain input, also an option#2019-05-0914:31Daniel HinesAh, good point#2019-05-0914:20danierouxI settled on :a.b/c -> a_b__c - it makes SQL happy, and it doesn't offend my sensibilities too much.#2019-05-0919:05val_waeselynckEven when converting the names is easy, I don't think it's beneficial. When you do that, you lose the ability to instantly trace the lifecycle of a piece of data across your whole system, with a basic text search.#2019-05-0919:11val_waeselynckIt's OK for key names to not be Clojure-idiomatic. (Just like it's OK for your language to use parens instead of curly braces... sounds familiar? :) ).#2019-05-0919:14val_waeselynckActually, a key name that doesn't respect Clojure's namespacing convention signals that it's meant to travel across the system, in contrast e.g. with component or algorithm configuration.
This contrast adds clarity.#2019-05-0919:15Daniel HinesThat’s a good point - data “from the system” would stick out for having the odd_naming__conventions.#2019-05-0920:14calebpHey all, the cloud docs have instructions for “First Upgrade Ever” but these instructions don’t apply if I used the instructions here https://docs.datomic.com/cloud/operation/new-system.html and created the stacks individually, right? It looks like I can just go forward with the regular stack upgrade instructions.#2019-05-0921:18marshall@calebp correct#2019-05-0921:19marshallif you’re already running a split stack you can go right to the regular upgrade#2019-05-0921:19calebpGreat thanks @marshall#2019-05-0923:08weiis there a way I can rewrite this pull query so I can get nil when the query doesn't find any results?
=> (d/pull (db/db) '[*] [:account/email "doesn't-exist"])
#:db{:id nil}
#2019-05-0923:30marshallhttps://docs.datomic.com/on-prem/pull.html#default-option @wei#2019-05-0923:31marshall@wei ^#2019-05-0923:32marshallYou want the default option for your pull Express#2019-05-0923:32marshallExpression#2019-05-0923:40weithanks @marshall. to clarify, I want this output:
=> (d/pull (db/db) '[*] [:account/email "doesn't-exist"])
nil
It's currently returning #:db{:id nil}#2019-05-0923:40weialso, this works for the cloud client api right?#2019-05-1000:13marshallYes.
https://docs.datomic.com/cloud/query/query-pull.html#default-option#2019-05-1000:14marshallNot sure you'll get nil like that#2019-05-1000:14marshallPull returns maps#2019-05-1000:28weii see. what's the reasoning for returning #:db{:id nil} vs {} or just nil for no matching result?#2019-05-1000:29weimakes it harder to do nil punning, although that's easy enough to add a helper function for#2019-05-1005:42snurppa@(d/transact (user/conn) [[:db/add :application/address :db/fulltext true]])
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/deserialize-exception (error.clj:124).
:db.error/invalid-alter-attribute Error: {:db/error :db.error/unsupported-alter-schema, :attribute :db/fulltext, :from :disabled, :to true}
So I guess the docs are right about “You cannot alter :db/valueType or :db/fulltext”…? Initially I hoped that maybe this means you can’t remove :db/fulltext after you have set it to true, but I guess hopes were slightly too high.
What would be the best way to continue? Create new entity attribute :application/address2 with :db/fulltext true and migrate entities to use that? Any other suggestions? Thanks! 🙂#2019-05-1007:45Ivar RefsdalHow about renaming :application/address to :application/addressOld and then copy the old to the new attribute that has :db/fulltext true?#2019-05-1010:25snurppaYeah that could be option, thanks!#2019-05-1014:42stijnis there a JVM or Linux setting that might cause datomic client to not properly read UTF-8 characters (e.g. é) from the database?#2019-05-1014:43stijnthis is for the cloud platform#2019-05-1014:44stijnI can properly read in an ion, also on localhost when connecting to the bastion, but we run a Beanstalk application which uses datomic client api and that one consistently reads UTF8 characters wrongly#2019-05-1014:49stijnwe're using Java 8 running on 64bit Amazon Linux/2.8.1#2019-05-1014:50stijnthe weird thing is, the data comes in on the same instance through an API call, gets properly written to datomic (we can validate that at an ion or local dev), but when reading it back on the same instance, it doesn't have the proper encoding#2019-05-1019:33weiwhat's a good schema for a message body with versions in multiple languages? I was thinking something like:
:message/en
:message/zh
:message/fr
...
but I'd have to modify the schema every time I add a new language. curious if there are any other options.#2019-05-1019:36kennyAnother option could be
:message/body: string
:message/language: keyword#2019-05-1019:39weithanks! that would work too#2019-05-1023:00drewverleeDo Ident and :db.unique/identity always go together? Like, can you use ident when you're declaring it unique? It would seem so from the docs, but I don't understand what the relationship is then#2019-05-1023:07drewverleeAh ok, so ident is for the attribute, which is an entity itself, but to say this attribute plus a value is a pointer to an entity you have to declare it unique. Right?#2019-05-1023:08drewverleeAnd unique value means there can only be one such pointer to the entity#2019-05-1023:18benoit:db/ident can be added to any entity, not just attribute. And it is used to be able to refer to the entity with a keyword.#2019-05-1023:18benoithttps://docs.datomic.com/on-prem/identity.html#idents#2019-05-1023:21drewverleeOk, what makes an attribute an attribute?#2019-05-1023:33favilaAn entity is an attribute by being asserted on :db.part/db :db.install/attribute#2019-05-1023:33favila:db.part/db is entity 0#2019-05-1023:33favila:db.install/attribute is a special bootstrapped attribute created in the first three transactions of a fresh db#2019-05-1023:38favilayou used to have to assert :db.install/attribute explicitly when adding an attribute but it's implicit now#2019-05-1023:23drewverleeAll attributes are entities right? So we must have to add information to make it an attribute.#2019-05-1023:26drewverleeThis is assuming I understand the idea of an entity, which I take to be the set of facts/datoms connected to this entity id.#2019-05-1023:30drewverleeOk right, so if you don't use ident then you can only refer to it by its id#2019-05-1023:31drewverleeSo if you give it unique identity but not an ident then you could use the entity id plus a value to look up?#2019-05-1023:34drewverleeLike saying :name is a unique identity, but without ident, would mean I would have to use the entity id plus the name.
[5677 "Drew"]#2019-05-1217:08eoliphantLooking forward to putting Cloud on some of these 🙂 https://aws.amazon.com/blogs/aws/new-the-next-generation-i3en-of-i-o-optimized-ec2-instances/#2019-05-1304:11henrikLooks like pretty good machines for ElasticSearch as well.#2019-05-1407:41mkvlrdoes datafy for rebl only work with the datomic.client.api not with datomic.api?#2019-05-1408:05mkvlrseems to be the case, but a good and easy exercise to add it#2019-05-1408:33dazldgood morning! I’d like to migrate some entities into their own partition - at the moment, everything is in user - is retracting the entity and recreating it the only option?#2019-05-1409:15danierouxI think someone here mentioned that API Gateway websockets do work through Datomic Cloud, maybe @lanejo01? Can’t search the logs right now#2019-05-1414:41Joe LaneAre you running into problems with it?#2019-05-1415:45Joe LaneThis is absolutely possible. Just make 3 ions, one for onConnect, another for onDisconnect, and a third for onMessage (or whatever they call it).#2019-05-1410:21dmarjenburghI'm running into problems trying to do an ion-push from a CodeBuild container.
[Container] 2019/05/14 10:10:08 Running command clojure -A:dev -m datomic.ion.dev '{:op :push}'
{:command-failed "{:op :push}",
:causes
({:message
"VPC endpoints do not support cross-region requests (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 1CC05267DFB09AEF; S3 Extended Request ID: MufLOTeSWSQTUQK+kmRM43+VZatPzPGDlKR3JoAFnWm8C9V+TArQnig6xthyWoNZ/dgBNI7Tyxs=)",
:class AmazonS3Exception})}
I'm not sure why the error occurs, as there are no VPC settings for the build project and it can download dependencies from the https://repo1.maven.org/maven2/ repo just fine. I suppose part of the reason is that a :push also downloads implicit dependencies every time from the datomic-releases-1fc2183a/maven/releases repo, like com/datomic/java-io/0.1.11/java-io-0.1.11.jar. Weird thing is that the following command succeeds just fine: aws s3 cp .. Anyone have any ideas?#2019-05-1410:22dmarjenburghI'm running in eu-west-1 while the datomic-cloud repo is (very likely) in us-east-1#2019-05-1411:03marshall@dmarjenburgh https://forum.datomic.com/t/ions-push-deployments-automation-issues/715
this is a known issue; I haven’t been able to get it to work from us-east-1 either, though#2019-05-1411:03marshallso it’s not limited to the s3 bucket region issue#2019-05-1418:54woxhi, does anybody have experience deploying datomic on-prem on multi-master storage that spans multiple datacenters (for HA reasons)? Something like MariaDB Galera cluster or CockroachDB. Any tips? Or is this at all feasible?#2019-05-1419:21marshall@wox https://docs.datomic.com/on-prem/ha.html#moving-across-data-centers#2019-05-1419:22marshallby definition eventually consistent cross-datacenter replications systems are unsuitable for Datomic storage#2019-05-1419:26woxyeah, I know this, but those solutions are not eventually consistent#2019-05-1419:23dmarjenburgh@marshall I have narrowed down the possibilities quite a bit and posted my findings in the thread (https://forum.datomic.com/t/ions-push-deployments-automation-issues/715). Interesting that it didn’t work for us-east-1, as my minimum reproducible case did work for buckets in the same region.#2019-05-1419:25marshall@dmarjenburgh it may be worth following up with AWS support directly with the specific error code that you get; they may be able to determine what permission or issue is causing the failure#2019-05-1501:37Aaron CummingsReasons I heard for not using Datomic Cloud (all on a single 30 minute meeting):
Never heard of it.
The DBA team won't be able to help you.
We won't be able to hire for that.
That's not a real database.
Functional programming is trending downwards anyway.
🙄#2019-05-1503:33drewverleeNew ideas tend to take a long time to settle in. Which is a good thing.
If it’s really worth the effort, keep trying. I tend to think you get the most mileage by explaining the general ideas behind existing tech first. Demonstrate mastery over what you're using now and people will come to trust you.#2019-05-1503:34drewverleeFP is trending up, it's just being tacked on to OOP languages. Which again, is ok. We're not here to win language wars.#2019-05-1505:20val_waeselynckLook at the bright side: your competitors are likely to think like that.#2019-05-1511:31Aaron CummingsThanks for allowing me to gripe. 🙂 I have a small, isolated application that would be a great place to trial Datomic, but my IT folks have been reluctant to approve anything that isn't a vanilla AWS offering. I'll keep pushing on it though.#2019-05-1512:35conanI've approached the subject by talking about Datomic as an application that runs on top of dynamo, rather than a database itself in the past. Maybe framing it in those terms might make it easier to understand? Also be sure to mention that as an application it's convenient to deploy, as it all comes via a CloudFormation configuration, which also may be more familiar.#2019-05-1808:35val_waeselynck@U7GUPH6D9 you could show them this one too: https://augustl.com/blog/2018/datomic_look_at_all_the_things_i_am_not_doing/#2019-05-1515:31Mark AddlemanA while ago a little bird mentioned that tuple support was coming to Datomic (Cloud?). Any update on that?#2019-05-1515:34Joe LaneHey friends, has anyone used a multimethod for their transaction function in cloud?#2019-05-1516:05Mark AddlemanHi. What's the issue you're running into?#2019-05-1516:07Joe LaneI ended up going with a different approach so it's a moot point now.#2019-05-1516:07Joe LaneThere was no issue, I was just hoping someone would say they had done it before. Didn’t want to invest the time trying it only to find out it didn’t work.#2019-05-1522:11drewverleeHow would we go about defining a relationship in Datomic?
E.g. my parent's brother is my uncle?#2019-05-1522:13Joe Lanerules is the first thing that comes to mind.#2019-05-1522:13Joe Lanedefining a rule as a predicate#2019-05-1612:51Daniel Hines@U0CJ19XAM, you mean something like this, right:
[(is-uncle ?me ?uncle)
[?parent :child ?me]
[?parent :brother ?uncle]]
#2019-05-1612:52Daniel HinesThat rule wouldn’t merely be a predicate, because it needs bindings, right?#2019-05-1612:53Daniel HinesWhereas
[(is-uncle [?me] [?uncle])
[?parent :child ?me]
[?parent :brother ?uncle]]
is merely a predicate, because you must bind each variable.
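For reference, a rule like the first form above is supplied to a query through the rules input %. A sketch assuming hypothetical :person/child and :person/brother attributes:

```clojure
;; Hypothetical attributes (:person/child, :person/brother), sketching
;; how the rule above would be defined and invoked.
(def rules
  '[[(is-uncle ?me ?uncle)
     [?parent :person/child ?me]
     [?parent :person/brother ?uncle]]])

;; Either variable can be bound at query time; the rule expresses a
;; relation, not a one-way predicate:
(comment
  (d/q '[:find ?uncle
         :in $ % ?me
         :where (is-uncle ?me ?uncle)]
       db rules me))
```

Binding ?uncle instead of ?me in the :in clause would run the same rule "backwards" to find nephews.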
Just trying to get my head around what I can express and how.#2019-05-1718:57timgilbertYeah, it’s complicated and I’ve had to try to train myself not to think of these as predicates, but rather things that express a relation#2019-05-1718:57timgilbertIn your first example you could bind ?uncle and then find all of the nephews#2019-05-1718:58timgilbertSo the name is-uncle is a bit deceptive#2019-05-1718:58timgilbertIn your second example, I’m pretty sure the syntax is actually (is-uncle [?me ?uncle]), BTW#2019-05-1719:00Joe LaneI agree with tim on the last syntax point.#2019-05-1601:41hadilsIs there a way to do nested transactions, for example, i want to add several bank accounts for a customer and then put their :db/ids in the customer/accounts (cardinality many attribute).#2019-05-1602:57donaldballI think you’d generally use temporary ids for this:
[{:db/id "a1" :account/name "A1"}
{:db/id "a2" :account/name "A2"}
{:customer/accounts ["a1" "a2"]}]#2019-05-1604:01hadilsI was thinking that, but the issue is that I don't know the number of accounts in advance. I know how to craft this, but I was looking for a more elegant way…#2019-05-1613:19donaldballAccumulate the accounts in a set alongside the txn you’re building and then conj the customer accounts op onto the txn?#2019-05-1614:32lilactownwhen using q, does the db always need to be the last argument?#2019-05-1614:48Lennart BuitI think it is whatever position you have $ at in your :in#2019-05-1614:57Lennart BuitOr … looking through my own code, I appear to have it both as last and as first argument, which coincides with where the $ is in the :in#2019-05-1615:14favilacorrect; if :in is not explicit, it's assumed to be [$] and you may supply a db as the only arg. If :in is explicit then arg order corresponds to whatever :in says, you can put db anywhere. $ must be the name of the magic implicit db though#2019-05-1615:14favilaNot sure if it's absolutely necessary, but at least strong convention dictates all db names should start with $ also#2019-05-1615:55lilactowngotcha. thanks all!#2019-05-1616:00conanWith a peer-server, how can i make it reconnect after deleting and re-creating the database without killing the process?#2019-05-1616:26hadilsI am having a problem with a Datomic Cloud transaction function. Here's the function:
(defn xact-coll
"Transaction function that commits a collection, then puts the entities
of the collection into attr"
[db coll attr1 k attr2]
(let [m (into [] (map-indexed #(assoc %2 :db/id (str %1)) coll))
c (assoc {}
attr1 k
attr2 (into [] (map #(str %) (range (count coll)))))]
(concat m [c])))
and here's the invocation:
(defn save-funding-sources [customer-url funding-source-coll]
(let [coll (remove-nil-keywords funding-source-coll)
tx-data ['(stackz.db/xact-coll
coll
:customer/url customer-url
:customer/funding-sources)]]
(d/transact (schema/get-connection) {:tx-data tx-data})))
And here's the error message: ExceptionInfo Don't know how to create ISeq from: clojure.lang.Symbol clojure.core/ex-info (core.clj:4739)
#2019-05-1617:58hadilsI fixed this problem, now I'm getting ExceptionInfo tempid used only as value in transaction clojure.core/ex-info (core.clj:4739)
How do I refer to an entity id in the same transaction?#2019-05-1618:00ghadithat error means you have a tempid on the right hand side somewhere, without any attributes on it#2019-05-1618:00favilathe problem here is you have an assertion like [:db/add 123 :attr "value"] without any other use of "value" in the "e" slot#2019-05-1618:00ghadi^#2019-05-1618:04hadilsIf you look at the transaction function, I am using the tempids as values to a cardinality many attribute. Do I have to break it up into two separate transactions?#2019-05-1618:04ghadiyou cannot create empty entities on the right hand side#2019-05-1618:04ghadiwhether card-many or card-one#2019-05-1618:45hadilsI thought that the :db/id values in the collection were on the left-hand side, then they would be on the right-hand side when I am putting them into :customer/funding-sources.#2019-05-1621:33marshallDatomic Cloud 4777-8741 now available.
https://forum.datomic.com/t/datomic-cloud-477-8741-http-direct-ion-0-9-34-and-ion-dev-0-9-229/982#2019-05-1701:34steveb8nthis is awesome#2019-05-1714:18souenzzoDoes DynamoDB's continuous backup service serve as a Datomic backup? (on-prem)
Can I restore it?#2019-05-1714:21marshall@souenzzo no https://docs.datomic.com/on-prem/ha.html#other-consistent-copy-options - DDB backup does not guarantee transactional ordering so is not a suitable option for Datomic backup#2019-05-1714:28kardanAnyone know from the top of their head why my web Ion respond with {“message”:“Missing Authentication Token”} ?#2019-05-1714:29marshall@kardan you need to add a path after your invoke url#2019-05-1714:29marshalli.e. add /datomic to the url you invoke#2019-05-1714:29marshallhttps://docs.datomic.com/cloud/ions/ions-tutorial.html#deploy-apigw “Append “/datomic” to the URL, and call your web service via curl”#2019-05-1714:31kardanAh#2019-05-1714:31kardanThanks#2019-05-1714:31kardanIs that always needed, you can’t serve to root of a domain?#2019-05-1714:32marshallI think you have to configure that via API gateway#2019-05-1714:32kardanOk, I’ll read on. Thanks for getting me past this hurdle!#2019-05-1718:04eraserhdI'm getting this error, and I'm kind of baffled: https://gist.github.com/eraserhd/6428493dc8e77342800e111d445d2fa9#2019-05-1718:05eraserhdI... think the only possibility is that there's an attribute in the database that hasn't been installed? So it can't be altered? Is that possible?#2019-05-1718:38eraserhdI ran a (d/transact conn [[:db/add :db.part/db :db.install/attribute :taskexec/depends-on]]) and retried, getting the exact same error.#2019-05-1721:41johnjCan the attributes of an entity that has an attribute with :db.unique/value be updated using map form?#2019-05-1721:56favilalike this: {:db/id [:unique-value "the-value"} :more-attrs 123}#2019-05-1721:57johnjah right! thanks#2019-05-1807:23weiis there a way to give my EC2 instance running outside the Datomic Cloud access to Datomic?#2019-05-2315:50holyjakyou mean in a different VPC?
then I guess you need to tunnel through the bastion just as when doing local dev, or configure VPC peering or some newer VPC tricks#2019-05-2407:46weiright, now I'm thinking for my system it'd be easier to communicate via SQS or some other message bus than direct access to the DB#2019-05-1807:46kardanI have an Ion running with an API Gateway custom domain, accessible at https://example.com/d but I can’t seem to understand how to go about hosting the naked domain. Has anyone been able to host a site from within an Ion on the naked domain? I’m playing around to learn Ions & API Gateway and thought it would be handy to host the entire SPA from the Ion. But I’m starting to think API Gateway really wants to host an API 🙂
Also makes serving static resources from S3 optional since they will be sitting cached in the edge network anyway.#2019-05-1819:16henrikI've also managed to set it up with Route53 pointing directly to AG, but I don't remember the settings right now.#2019-05-1819:18henrikI'm serving static resources directly from an Ion using Reitit by the way, so Pedestal is optional.#2019-05-1906:41kardanI did a quick test with a baby in my lap (so no extensive testing) and by adding a method to the initial / (not proxy) it seems to work#2019-05-1906:41kardanthanks for the pointers#2019-05-1810:54euccastroI did that, but I no longer have that datomic account running and I don't remember the details, sorry 😕#2019-05-1810:55euccastrothat was with the old API gateway to AWS lambda bridge; I haven't done this with the new HTTP Direct thing#2019-05-1810:59euccastroIIRC you need an ANY method execution both at the root (/) and at /{proxy+}#2019-05-1811:01euccastroboth with the same configuration, pointing at your Lambda#2019-05-1821:41currentoorIn the :where clause of a d/q are we allowed to call any function we have defined in our code? Or only the ones in clojure core?#2019-05-1821:43currentoorI'm running into the same issue as this guy#2019-05-1821:43currentoorhttps://forum.datomic.com/t/custom-functions-in-queries-not-getting-resolved/980#2019-05-1908:46lxsameerhey folks, how do peers find out about new transactions? i mean let's say a peer contains facts till tx 200, what happens when another peer submits tx 201?#2019-05-1915:06favilaTransactor pushes each successful tx to all peers#2019-05-1915:06favilaYou can see this queue with d/tx-report-queue#2019-05-1915:07favilaIf you need cross-peer coordination (eg one peer tells another to read a result it wrote at time t) use d/sync#2019-05-1915:55lxsameerthanks mate, so transactor keeps a connection open with each peer, right?#2019-05-2010:37favilaYes#2019-05-1911:14joshkh+1 for the just released HTTP Direct feature!
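The transaction push favila describes above is exposed in the peer API as `d/tx-report-queue`. A minimal consumer sketch, assuming `conn` is a live peer connection:

```clojure
(require '[datomic.api :as d])

;; The transactor pushes each successful tx to every connected peer;
;; tx-report-queue exposes those pushes as a blocking queue of tx-report maps.
(def tx-queue (d/tx-report-queue conn)) ; assumption: `conn` is a live peer connection

(future
  (loop []
    ;; .take blocks until the next transaction report arrives
    (let [{:keys [tx-data db-after]} (.take ^java.util.concurrent.BlockingQueue tx-queue)]
      (println "tx with" (count tx-data) "datoms; basis-t now" (d/basis-t db-after))
      (recur))))

;; Stop delivery to this peer's queue when finished:
;; (d/remove-tx-report-queue conn)
```

Each report map also carries `:db-before` and `:tx-data` as raw datoms, which is what makes this queue useful for cross-peer coordination alongside `d/sync`.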
i suspect this solves the Connection reset by peer error?#2019-05-1911:15joshkhand another advantage being an API that doesn't suffer from cold-starts?#2019-05-1911:33henrikI'm trying out HTTP Direct right now, and it certainly seems to do away with connection resets and the cold start problem.
However, I'm running into another problem (which might have to do with misconfiguration). Any URL invoked transmits as / to the handler.
The below is what I received for /hello/world.
Does anyone have a clue why this might be? I've been checking and re-checking API Gateway, but I can't see that I've done anything not specified in the tutorial.#2019-05-2012:57marshallYou need to include {proxy+} in your endpoint address in api gateway#2019-05-1911:43joshkhi just moved over to HTTP Direct without a problem and my api is happily routing specific endpoints#2019-05-1911:45joshkhis your configured http-direct handler fn a single handler function, or is it more ring-like in that it matches patterns?#2019-05-1911:47henrikYeah, it's more ring-like.#2019-05-1911:54joshkhalso, just a shot in the dark… in your API Gateway, does your load balancer URL have a trailing /{proxy}?#2019-05-1911:56joshkh#2019-05-1911:57henrikIt doesn't. Should it?#2019-05-1912:00henrikYes it should. Thank you very much 🙂#2019-05-1912:00joshkhyup#2019-05-1912:00joshkhno problem!#2019-05-1912:07henrikGood nose#2019-05-1913:53rapskalianHey all, is there an obvious reason why my CircleCI build would run into AccessDenied errors when trying to procure the com.datomic/ions artifact? Is this intended to work, or can the dep only be obtained by the same AWS user that subscribed via the marketplace? #2019-05-1914:00joshkhi'm no expert, but a colleague ran into a similar situation and i think we solved it by making sure that whichever IAM user is associated with your CircleCI AWS credentials has read access to S3#2019-05-1914:05alexmillerI think actually you just have to have any AWS credentials at all#2019-05-1914:14joshkheven easier 🙂#2019-05-1914:15rapskalianHm...my circle box is configured with AWS creds. I can try tweaking some S3 permissions.#2019-05-1914:19joshkhanother long shot here, but i think CircleCI has (recently?) changed how your projects interact with AWS. 
we're on 2.1, so our AWS credentials are stored in the "orb" fashion and not via their AWS Credentials tab of the project#2019-05-1914:20joshkh#2019-05-1914:22joshkhat least in our case we have to use the Environment Variables tab and include an entry for AWS_ACCESS_KEY_ID, AWS_REGION, and AWS_SECRET_ACCESS_KEY#2019-05-1914:25rapskalian@U0GC1C09L I just added Get/List read access to all S3 resources and now I get access denied on this artifact com/google/guava/guava/20.0/guava-20.0.jar, but it did successfully grab the ions dep!#2019-05-1914:26rapskalianI'm using that same orb and cred config method to deploy my cljs project to S3
testing against a safely silo'ed version of production implies that not only do tests pass, but they pass with minimally mocked data on top of the real stack.#2019-05-1915:07joshkhi've seen plenty of tests that pass because their edges are mocked, but the mocks don't reflect now because they haven't been updated :man-shrugging:#2019-05-1915:14rapskalianThat makes perfect sense. I’m quite new to ions/datomic, so I was mostly just probing for what I have in store 🙂
Is your SOCKS workaround open source by chance?#2019-05-1915:15joshkhdatomic provides us with d/with-db, and that can be an incredibly powerful tool to run low level tests against deployed data without actually committing transactions. it's really neat.#2019-05-1918:55drewverleeDoes datomic have the concept of a "rule" in the rules engine sense. e.g If X => Y. I'm thinking of a situation where in this case X could be some change in the database and Y could be some side effect.#2019-05-1919:17lilactownI remember reading there was some way to setup a listener on the transaction log. But it doesn’t have a built in ability to define rules and react when they are true, no#2019-05-1919:18alexmillerthere's a conj talk from Paula Gearon about doing this with Datomic#2019-05-1920:56drewverleeThanks, yep, i just watched her talk. Then watched it again while taking notes. Its really an amazing talk, that must have been a ton of work to put that together. It really ties together some topics and ideas i have heard by never had time to look into.
It would seem that datomic, in some way (waves hands), shares similar underlying structure to what you need in a RETE network, but not all of it, as it doesn't support rules?
Is the difference that supporting rules necessitates storing intermediate data so you can calculate the relationships till they terminate? does that question even make sense?#2019-05-1920:58alexmillerI think it makes sense and is correct#2019-05-2011:05conanFor some reason I can't import datomic.client.impl.shared.Connection and datomic.client.impl.shared.Db, although these are the types that are returned to me when creating connections and dbs. Is there some reason this won't work, or do I just have a weird dependency problem?#2019-05-2011:09conan(import datomic.client.impl.shared.Connection)
Execution error (ClassNotFoundException) at java.net.URLClassLoader/findClass (URLClassLoader.java:382).
(type (conn))
=> datomic.client.impl.shared.Connection
(type (d/db (conn)))
=> datomic.client.impl.shared.Db
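An editorial note on the resolution conan reaches below: classes generated by `deftype` only exist after their defining namespace has been loaded, so `import` fails until that namespace is required. A sketch, with the namespace name inferred from the printed class names:

```clojure
;; datomic.client.impl.shared.Connection is a deftype-generated class,
;; so load the namespace that defines it before importing the class name.
(require 'datomic.client.impl.shared)
(import '(datomic.client.impl.shared Connection Db))

(instance? Connection conn) ; assumption: `conn` was created via the client API
```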
#2019-05-2011:18conanoh i see, it's because Datomic's client classes are defined using deftype, whereas presumably the on-prem ones are simply java classes. i needed to require datomic.client.impl.shared in order to get those types#2019-05-2012:24favilaI think on-prem is just fully AOTed#2019-05-2011:44conanIs there a widely-used solution for managing schema migrations in Datomic Cloud? I used to use conformity with on-prem and was very happy with it#2019-05-2014:14rapskalianThis doesn’t fit your “widely used” criteria, but I was able to rather easily adapt one of Stu’s examples to DCloud:
https://gist.github.com/cjsauer/4dc258cb812024b49fb7f18ebd1fa6b5#2019-05-2014:16rapskalianIIRC the only change was replacing the use of d/entity with d/pull#2019-05-2014:19rapskalianFound the original inspiration: https://github.com/stuarthalloway/day-of-datomic/blob/master/src/datomic/samples/schema.clj#2019-05-2016:19conanoh nice, thanks. i'll give that a shot#2019-05-2012:24rolandHi, is there a way to lazily iterate through datoms of an index in a reverse order ? I'm using d/datoms but can only get an iterator from it#2019-05-2117:26stuarthallowayNot at present.#2019-05-2212:58rolandok thanks#2019-05-2017:39drewverleeIn https://docs.datomic.com/cloud/tutorial/assertion.html#org0a2909b, sample-data is missing a closing parens right? seems minor but just in case its easy to fix 🙂#2019-05-2018:28marshallyes it appears to be @drewverlee I’ll fix it - thanks#2019-05-2018:51drewverleethe docs so far have been great!#2019-05-2115:27maleghastAnyone in here got a step by step to connect to Datomic Cloud from Heroku?#2019-05-2117:30stuarthallowayif your app is written in Clojure, you don't need anything other than Datomic Cloud to run it.#2019-05-2220:07joshkhi'm also curious about this. wouldn't you need to run some (SOCKS) proxy to the cloud VPC in order for the Heroku container to execute queries via datomic.client.api?#2019-05-2220:08joshkhotherwise you need some AWS architecture in the middle, such as API Gateway to manage queries?#2019-05-2115:53lilactownnot a step by step guide. but the approach is probably to expose some sort of HTTP API and call that via a service running in Heroku#2019-05-2118:59weiwhere does datomic cloud get its load path from? I'm getting the error
:datomic.cluster-node/-main failed: Could not locate ion_sample/ion__init.class, ion_sample/ion.clj or ion_sample/ion.cljc on classpath.
but I can't seem to find that reference anywhere in my project#2019-05-2210:02fmnoisehi everyone. I'm getting this warning
WARNING: requiring-resolve already refers to: #'clojure.core/requiring-resolve in namespace: datomic.common, being replaced by: #'datomic.common/requiring-resolve
clojure 1.10.1-beta3, datomic 0.9.5786#2019-05-2308:47conanI also get this with clojure 1.10.0 and the same datomic#2019-05-2210:03fmnoiseI understand that's nothing critical, but annoying log item#2019-05-2212:12alexmillerI’ll let people know#2019-05-2309:07Ivar RefsdalHow fast/slow is datomic.api/as-of supposed to be? What affects its performance when d/pull is used?
Without as-of (using the current database) my d/pulling of 6K entities takes about 10 secs.
With as-of, it takes about 3 minutes.
Is this expected, or does this indicate something else is wrong (too little memory?)?
Thanks#2019-05-2314:56Joe Lane@ivar.refsdal Is this for On-Prem? I don’t have any experience with On-Prem specifically but I can imagine the performance here depends on several factors such as memory, caching, client vs peer, number of datoms, etc. Do you have any additional information you can provide?#2019-05-2410:55Ivar RefsdalThanks for replying! Yes, this is On-Prem and I'm using the peer library.
In VisualVM I see that quite some time is being spent inside Fressian/readUTF8Chars (or something like that). Does that mean it is accessing the network?
How would I count the total number of datoms?
I did
(format "%,3d" (reduce (fn [cnt _] (inc cnt)) 0 (d/datoms (d/history db) :eavt)))
=> "37,605,542"
What is considered a big amount of datoms?
Edit: I'm doing a pull star on the as-of-databases. Would it considerably improve performance if this was narrowed?
Thanks.#2019-05-2413:39favilaFressian/readUTF8Chars is just decoding a string from a block of fressian-encoded values#2019-05-2413:40favilado you maybe have any very large or numerous string values that are only seen in as-of?#2019-05-2413:40favilaotherwise I think this is a red herring#2019-05-2315:25marshallAnnouncing HTTP Direct for Datomic Cloud http://blog.datomic.com/2019/05/http-direct-for-datomic-cloud.html
Check out the interactive tutorial https://docs.datomic.com/cloud/livetutorial/http-direct.html#2019-05-2315:37jeroenvandijkThanks! Are there case studies for datomic ions out there?#2019-05-2315:39benoit@marshall FYI it was not clear to me you had to press the spacebar to start the tutorial#2019-05-2315:41Joe Lane@marshall The livetutorial is fantastic. Kudos#2019-05-2315:41benoitOk I didn't see the controls at the bottom 🙂#2019-05-2315:42alexmillerhopefully the first of many...#2019-05-2317:02johnjWe need this for solo! 😉 is there any reason the NLB can't be added ?#2019-05-2317:13Joe Lane@marshall I see in the latest ion-starter the clojure version was bumped from 1.9 to 1.10. Does that imply datomic cloud now supports clojure 1.10?#2019-05-2317:47marshallYes#2019-05-2318:40Joe LaneGreat! Thanks.#2019-05-2317:44joshkhis :db/fulltext supported in Datomic Cloud? i didn't see it in the schema reference docs, but i thought i'd ask just in case.#2019-05-2317:45marshallNo fulltext in Cloud#2019-05-2317:46joshkhokay, thanks Marshall 🙂#2019-05-2317:49joshkhi think i've seen some on-prem examples where some loop function was used to stream values from the transaction log. is something like that possible with Cloud?#2019-05-2317:49ghadiyes you can do that with d/tx-range#2019-05-2317:49ghadiwhich has slightly different arguments in cloud#2019-05-2317:50ghadiIn fact, I am building a pump from the tx-log to ElasticSearch for full-text searching @joshkh#2019-05-2317:50joshkhthat's exactly what i'm trying to do!#2019-05-2317:50Joe LaneThat is strangely what I am also doing, but into lucene.#2019-05-2317:50ghadibasically filter all the datoms where the value is a string#2019-05-2317:51ghadiand index raw datoms into ElasticSearch#2019-05-2317:51ghadi{"e": 42, "a": "whatever/attr.txt", "v": "full text"}#2019-05-2317:52ghadithen when you get hits you have to know how an attribute relates to a larger entity#2019-05-2317:52joshkhhow might that tx-range work efficiently? 
are you grabbing chunks and storing the latest :t somewhere? wouldn't you have to provide it with new starts and ends?#2019-05-2317:52Joe LaneSo you’re going the approach of making the Datom the document instead of reifying the entity into a document?#2019-05-2317:52ghadiyes you have to keep track of highwater marks @joshkh#2019-05-2317:52ghadii'm going to put them in a dynamo table#2019-05-2317:53ghadi@lanejo01 yeah the advantage of raw datom is that you never have to change your indexing code#2019-05-2317:53joshkhbrilliant. thanks @ghadi#2019-05-2317:53ghadiif you were making full documents in ES, you'd have to rematerialize and re-index everytime the needs changed#2019-05-2317:53ghadiit's a Rich design, I take no credit#2019-05-2317:54ghadiwhen the needs change, you can change the query side that knows how to turn "leaf" hits into the larger entity#2019-05-2317:55ghadiit might not work for all needs, but this is what I'm trying first#2019-05-2317:55joshkhi was working on a little trick to rematerialize entities but without references or components (which can be massive). worked out pretty well when i was manually moving entities to elasticsearch.#2019-05-2317:56ghadiyou have to handle retractions too, beware#2019-05-2317:56ghadi[e a v t false] -> means delete the document in ES#2019-05-2317:56joshkhright#2019-05-2317:57ghadiand cardinality many attrs might make several documents with the same e + a#2019-05-2318:00joshkhi was imagining storing the high water mark in datomic but realised that would cause a loop 😉#2019-05-2318:01ghadiit's possible, but yeah i'd put it elsewhere#2019-05-2318:01ghadinote that S3 doesn't guarantee read-your-writes when you update the same object#2019-05-2318:02ghadiso Dynamo > S3 for this 🙂#2019-05-2318:07joshkh:+1:#2019-05-2320:22souenzzoNew datomic video-tutorial is awesome! thanks!#2019-05-2400:10steveb8nQ: I have a micro-service in another VPC that I want to reach from an Ion. 
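The tx-log pump ghadi sketches above might look roughly like this with the Cloud client API. `load-mark`, `save-mark!`, and `index-datom!` are hypothetical helpers; a real system would back the high-water mark with DynamoDB, as discussed:

```clojure
(defn pump-once!
  "Index every string-valued datom transacted since the stored high-water mark."
  [conn load-mark save-mark! index-datom!]
  (doseq [{:keys [t data]} (d/tx-range conn {:start (or (load-mark) 0) :end nil})]
    (doseq [[e a v _tx added?] data
            :when (string? v)]
      ;; added? = false is a retraction: the indexer should delete the document
      (index-datom! {:e e :a a :v v :added added?}))
    ;; advance the mark only after the whole tx is indexed
    (save-mark! (inc t))))
```

Run this on a schedule (or off a trigger); because the mark advances per transaction, a crash mid-pump at worst re-indexes one transaction's datoms.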
I was assuming that I need to use VPC peering for this but the new video seems to suggest that a VPC link might also work. Does anyone have any advice about this?#2019-05-2400:22ghadithe micro-service in the other VPC is not an Ion, right? @steveb8n#2019-05-2400:23ghadiactually the answer doesn't matter#2019-05-2400:23ghadiYou want to peer the VPCs#2019-05-2400:23ghadiensure both VPCs do not share the same address space#2019-05-2400:23steveb8nyep, the microservice is Fargate with an ALB#2019-05-2400:24steveb8nok, thanks. I’ll stick with peering#2019-05-2400:24ghadithat's the simplest solution#2019-05-2400:24steveb8nI am simple. That works for me 🙂#2019-05-2400:24ghadisimple_smile#2019-05-2414:20akielIs there a possibility to travel the history from recent to past? If I use the following, I get the oldest datoms first:
(d/datoms (d/history (d/db conn)) :eavt eid :version)
I have a :version attribute on each entity which is incremented on each change.#2019-05-2414:57souenzzo@akiel you can query the history#2019-05-2415:03akielYes, but a query returns a set or a vector which is unordered. That’s even worse, there I have to sort the result.#2019-05-2415:03akielI like to get the n newest transactions on a particular entity in an efficient way.#2019-05-2709:34Ivar RefsdalNot sure if this is efficient, but how about something like:
(d/q '[:find ?vv ?tx
:in $hist $curr ?ref ?n
:where
[$curr ?e :e/ref ?ref]
[$curr ?e :e/version ?v]
[$hist ?e :e/version ?vv ?tx true]
[(- ?v ?n) ?vmin]
[(> ?vv ?vmin)]]
(d/history db)
db
"my-ref"
10)
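Since a query's result set is unordered, another way to get the n newest changes is to sort by transaction id after the fact. A hedged sketch against the history db; the function and argument names are illustrative:

```clojure
(defn newest-versions
  "Return the n most recent assertions of attr on entity eid, newest first."
  [db eid attr n]
  (->> (d/q '[:find ?v ?tx
              :in $ ?e ?a
              :where [$ ?e ?a ?v ?tx true]]
            (d/history db) eid attr)
       (sort-by second >) ; tx ids increase monotonically, so this is newest-first
       (take n)))
```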
#2019-05-2620:07hadilsHi! I am trying to use lacinia-pedestal as a Lambda function datomic ion. Does anyone have any suggestions on how to do this? Here's some example code:
(defn web-handler*
[]
(-> (load-schema)
(service-map {:graphiql false})
http/create-server
http/start))
(def web-handler
(apigw/ionize web-handler*))
The error message I get is:
Wrong number of args (1) passed to: stackz.graphql/web-handler*
This is because web-handler* is being passed the request argument.#2019-05-2620:25kardanI know nothing about lacinia-pedestal but I would guess that you don’t need to start any http server but just go straight to the web handler. Maybe that means to skip the last two threaded function calls so what you ionize is (service-map …)#2019-05-2620:29kardanAnd I guess web-handler* should not be a fn but something like (def web-handler* (service-map (load-schema) {:graphiql false})#2019-05-2620:29kardanMight be totally wrong. I had wine and should go to bed 😇#2019-05-2620:34hadilsThanks! Would this be true for http direct as well?#2019-05-2620:36lilactownAfaict yes. The api gateway essentially serves as the http layer in this case#2019-05-2620:37lilactownYou’ll need to do some work to map the Lacinia service-map to the ions req/res format#2019-05-2620:39lilactownYou might want to look at using lacinia directly instead of lacinia-pedestal#2019-05-2620:40lilactownhttps://github.com/walmartlabs/lacinia#2019-05-2620:41hadilsThanks a lot! I might just use the interceptor functionality in pedestal #2019-05-2620:48lilactownyes, that would be good too I just couldn’t find an easy example of that#2019-05-2620:55hadilsMaybe if I just don’t start the server...#2019-05-2621:53hadilsThere is code in the pedestal/pedestal github repo for AWS Lambda functions with API Gateway…#2019-05-2622:06hadilsActually, I'm following this: https://github.com/pedestal/pedestal.ions#2019-05-2622:17Joe Lane@hadilsabbagh18 You’re going to want something like this for your ion, then you’ll need a dev namespace so that in dev you create a local jetty server.
(def the-schema
(-> (io/resource "graphql/schema.edn")
slurp
edn/read-string
(util/attach-resolvers {
:query/Moments moments ;;a resolver function
:mutation/create-moment create-moment ;; another resolver that does mutations
})
schema/compile))
(def service (-> the-schema
(lacinia/service-map {:graphiql true})
(assoc ::http/chain-provider provider/ion-provider
::http/resource-path "/public"
::http/allowed-origins {:creds true})))
(defn handler
"Ion handler"
[service-map]
(-> service-map
http/default-interceptors
http/create-provider))
(def app (apigw/ionize (handler service)))
#2019-05-2622:19hadilsThanks a lot @lanejo01! I really appreciate the effort you put into this answer!#2019-05-2622:20Joe LaneHaha, well thanks. I happen to be working on an app that uses lacinia.pedestal + pedestal.ions at this very moment so it was mostly copy+paste. LMK if you get stuck I’ll try to be available.#2019-05-2622:22hadilsI am very lucky indeed!#2019-05-2800:00hadilsHi @lanejo01. I am having classpath difficulties with Jetty. Did you run into this problem and, if so, how did you fix it?#2019-05-2800:01Joe Lanehttps://docs.datomic.com/cloud/troubleshooting.html#dependency-conflict#2019-05-2800:02Joe Lane@hadilsabbagh18 ^^#2019-05-2800:03hadilsThanks @lanejo01!#2019-05-2800:15hadilsHi @lanejo01 here's my dev code:
(ns stackz.dev
(:require [stackz.graphql :as graphql]
[com.walmartlabs.lacinia.pedestal :as lacinia]
[io.pedestal.http :as http]))
(def service (lacinia/service-map (graphql/load-schema) {:graphiql true}))
(defonce runnable-service (http/create-server service))
Here's what's in my REPL:
(http/start dev/runnable-service)
NoSuchMethodError org.eclipse.jetty.http.pathmap.PathMappings.put(Lorg/eclipse/jetty/http/pathmap/PathSpec;Ljava/lang/Object;)Z org.eclipse.jetty.servlet.ServletHandler.updateMappings (ServletHandler.java:1430)
Any suggestions?#2019-05-2800:18hadilsNvm I added the jetty-servlet into my :dev dependency and it works!#2019-05-2800:19Joe LaneGreat!#2019-05-2805:32kardanAnyone have an idea why I’m seeing error like
"Cause": "No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.api-gateway/ToBbuf found for class: java.io.File"
https://docs.datomic.com/cloud/ions/ions-reference.html#signatures made me think that a response below would be ok
{:status 200,
:headers
{"Content-Length" "69911",
"Last-Modified" "Mon, 27 May 2019 10:22:28 GMT",
"Content-Type" "text/javascript"},
:body
#object[java.io.File 0x585ec33a "/..../public/js/main.js"]}
But I guess the “File” reference is for a lambda ion type and this is a “web service”. But I’m unsure the “spec” of a “web service”.#2019-05-2805:35kardanI’m trying to serve compiled cljs from resources hosted in the Ion via API Gateway and lambda (no network load balance)#2019-05-2805:41kardanhttps://docs.datomic.com/cloud/troubleshooting.html#org846f16f “Lambda ions must return a String, InputStream, ByteBuffer, or File. Function signatures for all ion types can be found in the ion reference.” <- make me confused#2019-05-2806:23Joe Lane@kardan any reason you don’t want to just serve the content out of a cdn like cloudfront backed by s3 instead?#2019-05-2806:24Joe LaneThats the solution we’ve used at work and it’s worked very well.#2019-05-2806:27kardanMaybe not a good reason. Just started this way and thought it would be nice to have more control in terms of maybe SSR only one Ion to deploy to upgrade.. etc. But I’m dabbling with Ions & cljs at this point. We have everything in K8s and behind a lot of infrastructure at work so the idea to have only one thing to think about sounded “nice”. But maybe the “API Gateway” should only host the API… 🙂#2019-05-2905:52henrikFWIW, I'm doing exactly that. There's a Clojure/ClojureScript webapp running as an Ion, hooked up to API Gateway via HTTP Direct. Rather than serving resources from S3, I've stuck the resultant API Gateway URL in CloudFront, which means that the CloudFront edge network is taking care of caching and serving static resources, rather than S3.
Updating the app is just a matter of pushing and deploying the Ion and invalidating the CloudFront cache.#2019-05-2807:00steveb8n@kardan I do the same as Joe is suggesting. My CI server uploads my CLJS artifact to s3 and updates an SSM param with the new url. the host page (and Ion) reads the SSM param and that’s it. I do this to set long cache expiration headers on the CLJS file so it’s only downloaded once#2019-05-2807:13kardanok, thanks for the pointers. Need to read up on SSm params.#2019-05-2808:12kardanFor what it’s worth I did a little of slurping on File types in an interceptor and got things to work for now. Maybe S3 would be better for proper production usage but since I’m only exploring I might not do the investment#2019-05-2811:16Ben HammondI've been using io.rkn/conformity {:mvn/version "0.5.1"} to manage Schema updates on a local dev datomic
I'm experimenting with datomic cloud; conformity relies upon peer api; so I don't expect it to work
Is there an equivalent for datomic cloud?#2019-05-2816:01joshkhno big deal, just pointing out a dead link on:
https://docs.datomic.com/cloud/ions/ions.html#how-bond
the very last bullet's HTTP Direct link returns a 404:
https://docs.datomic.com/cloud/ions/ions-http-direct.html#2019-05-2816:04marshall@joshkh thanks i’ll fix it#2019-05-2820:15ChristianHola, I'm trying to export an entire table to csv (blasphemous, I know) and I'm having a little trouble. I'm using https://github.com/bostonaholic/datomic-export, but my tables are a couple of gigs in size and eventually my process dies with a heap error.#2019-05-2820:43jaretNo experience using datomic-export, but can you give the process more memory and see if you can get through? I am assuming you’re going OOM.#2019-05-2820:59favilawhat do you mean by "an entire table"?#2019-05-2821:01favilalooking at it's "entity puller" it looks like it reads all entities into memory#2019-05-2821:01favilahttps://github.com/bostonaholic/datomic-export/blob/master/src/datomic_export/entity_puller.clj#L10-L15#2019-05-2821:01favilathe "distinct" there#2019-05-2821:02favilaso your :e set may be very big#2019-05-2821:02favilaset of entity ids#2019-05-2821:03favilaafter that it's lazy though, so probably with enough memory you could do it#2019-05-2821:04favilahowever you say "tables" so I suspect you have something particular in mind which you might be able to do in a fully incremental manner#2019-05-2821:21ChristianWell, yeah, we're definitely not doing anything idiomatically, so we're just treating each... database? as its own table.#2019-05-2821:21ChristianAs in, all entities in a given db have the same set of attributes.#2019-05-2821:22ChristianSorry, I'm new to this project and my first task is to 'get data out' so I'm still learning the vocabulary.#2019-05-2821:22ChristianMy lisp is also about 15 years rusty so it's a slog.#2019-05-2821:24Christian@U09R86PA4 Does that make sense as a way to iterate through the set of entities? Pulling distinct of :e? It seems a bit weird to me, wouldn't the index already have the data? 
Couldn't you just reconstruct the entities after the d/datoms call?#2019-05-2821:30favilait's getting all entities which have any of the specified attributes, so that's why it walks over :aevt indexes and why it gets :e#2019-05-2821:31favilaif there's one attribute you know all the entities you are interested in have and only those entities have, you can do this:#2019-05-2821:36ChristianAhh, I see. We've been calling it with a list of all 'columns' (attributes) we expect on a given entity.#2019-05-2821:40favilaentities have no "schema" like tables do, so it isn't generally safe to assume anything about what entities an attribute has#2019-05-2821:41favilaI think it's probably unusual for all user-partition entities of a db to all have the same attrs on them#2019-05-2821:41ChristianRight, the folks who put together the initial system basically treated it like a traditional table.#2019-05-2821:41ChristianDefinitely there's nothing properly idiomatic in this system. I think you'd be horrified.#2019-05-2821:42ChristianSorry though, you were going to say what I could do if there was a single attribute contained on all entities? (which there is)#2019-05-2821:42favilasorry got interrupted#2019-05-2821:42favila(with-open [csvfile (clojure.java.io/writer the-file)]
(clojure.data.csv/write-csv csvfile [["column1" "column2" "etc"]])
(->> (d/datoms db :aevt :the-cardinality-one-attr)
(map (fn [[e]]
(d/pull db my-pull-expression-with-all-attrs e)))
(map (juxt :attr-in-coll-1 :attr-in-coll2,,,))
(clojure.data.csv/write-csv csvfile)))#2019-05-2821:42favilaat the end of the day that project is a fancy flexible wrapper around this core process#2019-05-2821:43favilasince you can make a simplifying assumption about what entities to pull, you can do it with a single index seek and no set-construction#2019-05-2821:44favilathis sketch won't have the largest possible throughput, but it will have very bounded memory use#2019-05-2821:45favilaThe d/datoms gets the entities you are interested in based on the attr you know they all have an assertion for. If it's cardinality-one, you know that the entities are unique already as you seek#2019-05-2821:45favilathe first map gets all the attrs#2019-05-2821:45favilathe second map formats the result of the pull for the csv (i.e. arranges into a flat list of columns)#2019-05-2821:46favilathat's all the customization you need#2019-05-2821:48ChristianOhhhhh excellent, I see.#2019-05-2821:49ChristianI don't need throughput (or a fully realized sketch) but what would you do conceptually if throughput was important here?#2019-05-2821:49favilaadd batching and parallelization#2019-05-2821:51favilae.g. seek over datoms, group them into large bundles, perform d/pull-many in parallel over many bundles at once#2019-05-2821:51favilaanything to get datomic to do as much IO as possible#2019-05-2821:53ChristianSo you'd still crawl the index in that case?#2019-05-2821:55favilayes#2019-05-2821:55favilaif parallelizing the pull was not enough I'd try to partition that first index seek, but I think most of the work will be in the d/pull#2019-05-2821:56ChristianInteresting.#2019-05-2821:59favila(->> (d/datoms db :avet :the-attr)
(reduce (fn [c _] (inc c)) 0)) will give you a quick count of how many entities you are dealing with#2019-05-2821:59favilaalso an idea of how long it takes just to seek the index without doing any other work#2019-05-2821:59ChristianI made an attempt to do this as a pull expression passed to find, but I got even worse memory performance. Conceptually, what's different in that case?#2019-05-2822:00ChristianI'm having a hard time conceptualizing performance here compared to a trad db.#2019-05-2822:00favilaqueries work in parallel agressively, but they hold the entire result-set in memory#2019-05-2822:00favilaI would not hold the entire result set in memory#2019-05-2822:00ChristianAhh.#2019-05-2822:00favilad/datoms is lazy#2019-05-2822:00favilaas is d/index-range#2019-05-2822:01favilad/query is not#2019-05-2822:01favilaresult set must fit in memory#2019-05-2822:02favilaso high throughput with bounded memory involves doing something inbetween#2019-05-2822:06ChristianFascinating. I wish I had more time with this rather than my first project being to put tabular data into a table#2019-05-2820:15ChristianAny suggestions on how to do this?#2019-05-2823:01hadilsHi @lanejo01. I cannot get my Datomic ion provider to work with Lacinia. Did you use lacinia or lacinia-pedestal. If the former, don't you have to write your own interceptor? I get a 404 error on / and a 500 error on the /graphql extension#2019-05-2905:55henrikThe 404 error on / might be because you don't have a method configured for / in API Gateway.#2019-05-2823:43Joe LaneI used the latter#2019-05-2900:26hadilsThanks @lanejo01#2019-05-2916:35Ben Hammondhi guys. I'm trying datomic cloud for the first time.
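favila's batching suggestion could be sketched as below: lazily walk one attribute's index, bundle entity ids, pull bundles in parallel with `d/pull-many`, and write rows as they stream through. This assumes the peer API and `clojure.data.csv` on the classpath; the function and argument names are illustrative:

```clojure
(defn export-csv!
  "Stream all entities bearing key-attr to a CSV file with bounded memory."
  [db the-file key-attr pull-expr row-fn]
  (with-open [out (clojure.java.io/writer the-file)]
    (->> (d/datoms db :aevt key-attr)         ; lazy walk of one attribute's index
         (map :e)                             ; entity ids only
         (partition-all 1000)                 ; bundle ids to amortize pull cost
         (pmap #(d/pull-many db pull-expr %)) ; pull bundles in parallel
         (map #(mapv row-fn %))               ; shape entities into column vectors
         (run! #(clojure.data.csv/write-csv out %)))))
```

`pmap` keeps a bounded lookahead, so memory stays proportional to the bundle size times the number of cores rather than the whole result set.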
I have used datomic on prem for several years.
I am transacting a schema that I developed against an 'on prem' database and I get this error
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:56).
Unable to resolve entity: :db/index
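Editor's note on the fix that comes out in the replies below: Datomic Cloud indexes every attribute, so options like `:db/index` (and `:db/fulltext`) are simply not part of Cloud schema. A minimal, hedged sketch of stripping those keys from an on-prem schema before transacting it against Cloud — the attribute shown is hypothetical:

```clojure
;; Editor's sketch: Datomic Cloud indexes every attribute, so on-prem-only
;; options such as :db/index and :db/fulltext are not installed there.
;; Strip them from an on-prem schema before transacting against Cloud:
(defn cloud-compatible
  "Remove attribute options Datomic Cloud does not accept."
  [schema]
  (mapv #(dissoc % :db/index :db/fulltext) schema))

;; Hypothetical attribute:
(cloud-compatible
 [{:db/ident       :inv/sku
   :db/valueType   :db.type/string
   :db/cardinality :db.cardinality/one
   :db/index       true}])
;; => [{:db/ident :inv/sku, :db/valueType :db.type/string,
;;      :db/cardinality :db.cardinality/one}]
```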
#2019-05-2916:36Ben HammondI know there was regret about the :db/index true attribute in datomic schema#2019-05-2916:36Ben Hammondhas this been eliminated in datomic cloud?#2019-05-2916:37Ben HammondI definitely do want this attribute to be in AVET#2019-05-2916:38ghadihttps://docs.datomic.com/cloud/schema/schema-reference.html#org8d7d7c2
@ben.hammond there are slight differences#2019-05-2916:38ghadilike :db/index#2019-05-2916:39Ben Hammonddoes that page describe the differences?#2019-05-2916:40ghadino that's the ref for cloud#2019-05-2916:42Ben Hammondoh there's no [:db.fn/cas neither 8(#2019-05-2916:42Ben HammondCaused by: clojure.lang.ExceptionInfo: Unable to resolve data function: :db.fn/cas
at datomic.client.api.async$ares.invokeStatic(async.clj:56)
at datomic.client.api.async$ares.invoke(async.clj:52)
at datomic.client.api.sync$eval18317$fn__18322.invoke(sync.clj:83)
at datomic.client.api.protocols$eval16202$fn__16238$G__16187__16245.invoke(protocols.clj:58)
at datomic.client.api$transact.invokeStatic(api.clj:172)
at datomic.client.api$transact.invoke(api.clj:155)
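Editor's note: as the next replies point out, the compare-and-swap data function survives in Cloud under a new name. A hedged before/after sketch, with hypothetical entity id and attribute:

```clojure
;; On-prem compare-and-swap data function:
[[:db.fn/cas account-id :account/balance 100 110]]

;; Datomic Cloud equivalent (renamed):
[[:db/cas account-id :account/balance 100 110]]
```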
#2019-05-2916:43Joe LaneIt's under a different key, which may be the point you’re making.#2019-05-2916:43favilarenamed to :db/cas#2019-05-2916:43Joe LaneBeat me to it 🙂#2019-05-2916:43favila:db/index doesn't exist because all attrs are indexed by default now#2019-05-2916:43favilaIOW it's always :db/index true#2019-05-2916:43Ben Hammondthat was what I hoped#2019-05-2916:43Ben Hammondthank you#2019-05-2916:44Ben Hammondis there a specific doc describing the differences between cloud and on-prem?#2019-05-2916:44Ben Hammondeveryone must have gone through these pain points once#2019-05-2916:44Joe Lanehttps://docs.datomic.com/on-prem/moving-to-cloud.html#2019-05-2916:45Joe LaneEh, actually that's probably not helpful for you.#2019-05-2916:45Ben Hammondah doesn't quite cover these details though#2019-05-2916:45Ben HammondI don't feel so bad about asking then#2019-05-2916:47hadilsCan I use lacinia-pedestal with HTTP Direct? I keep getting a 500 error when it worked fine as a Lambda function…#2019-05-2916:48Joe Lane@hadilsabbagh18 Can you show the handler function and the ion-config?#2019-05-2916:49Joe LaneLet's start a thread#2019-05-2916:49hadilsSure.
{:allow [;; transaction functions
         stackz.db/inc-attr
         stackz.db/xact-coll
         ;; query functions
         ;; lambda handlers
         stackz.schema/build-database
         ;; web applications
         stackz.dwolla/web-handler
         stackz.plaid/web-handler]
 :lambdas {:build-database
           {:fn stackz.schema/build-database
            :description "Builds the database if not already installed."}
           :dwolla
           {:fn stackz.dwolla/web-handler
            :integration :api-gateway/proxy
            :description "Handles POST webhook calls from Dwolla"}
           :plaid
           {:fn stackz.plaid/web-handler
            :integration :api-gateway/proxy
            :description "Handles POST webhook calls from Plaid"}}
 :http-direct {:handler-fn stackz.graphql/web-handler}
 :app-name "stackz-dev"}
(defn sm
  [interceptors schema graphiql?]
  (lacinia/service-map schema {:graphiql graphiql? :interceptors interceptors}))

(def service
  (-> (load-schema)
      (lacinia/default-interceptors {:graphiql false})
      (lacinia/inject jwt-interceptor :replace ::lacinia/inject-app-context)
      (sm (load-schema) false)
      (assoc ::http/chain-provider provider/ion-provider
             ::http/resource-path "/public"
             ::http/allowed-origins {:creds true})))

(defn handler
  "Ion handler"
  [service-map]
  (-> service-map
      cast-log
      http/default-interceptors
      http/create-provider
      cast-log))

(defn web-handler
  [request]
(handler service) request)#2019-05-2916:50hadilsOops sorry.#2019-05-2916:52Joe LaneWell it looks like your web-handler is just returning the request object#2019-05-2916:53souenzzo@hadilsabbagh18 try to print/log (get ctx :headers) before the last leave
Could be related to this:
https://github.com/pedestal/pedestal.ions/issues/3#2019-05-2916:56hadilsDo I do (apply (handler service) [request]?#2019-05-2916:58Joe Lane@hadilsabbagh18 Can I see your original ionized version?#2019-05-2916:59hadilsSure. (def web-handler
(handler service))
#2019-05-2917:00Joe LaneSo you never called apigw/ionize?#2019-05-2917:02hadilsOh sorry, just a minute.#2019-05-2917:02hadils(def web-handler
(apigw/ionize (handler service)))
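Editor's note tying together the resolution of this thread: with HTTP Direct the `apigw/ionize` wrapper goes away, the handler is a plain ring-style function, and (as comes out further down) the request `:body` arrives as an `InputStream` that must be read before use. A hedged sketch — the handler name follows the snippets above, and the body handling is illustrative, not the project's actual code:

```clojure
;; Sketch of a plain ring-style handler for :http-direct {:handler-fn ...}.
;; No apigw/ionize here; the request map comes straight from HTTP Direct.
(defn web-handler
  [{:keys [body] :as request}]
  ;; :body is an InputStream (e.g. BufferedInputStream), not a String,
  ;; so read it before parsing:
  (let [body-str (some-> body slurp)]
    {:status  200
     :headers {"Content-Type" "text/plain"}
     :body    (or body-str "")}))
```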
#2019-05-2917:03hadilsDo I still call ionize? Seems like I have to…#2019-05-2917:03Joe Lanenot for direct#2019-05-2917:04Joe Lanehttps://github.com/Datomic/ion-starter/blob/1cd565000329ac02ac6a75747496ebab88a88233/src/datomic/ion/starter.clj#L141#2019-05-2917:04Joe Lanehttps://github.com/Datomic/ion-starter/blob/1cd565000329ac02ac6a75747496ebab88a88233/resources/datomic/ion-config-sample.edn#L23#2019-05-2917:05Joe LaneLooking at the second link, it has the ion-config from the ion-starter repo. That references items-by-type directly, NOT items-by-type-ionized.#2019-05-2917:05Joe LaneSo in your case I think you would want (def web-handler (handler service))#2019-05-2917:06hadilsThat didn't work when I tried it, because it takes the request argument explicitly (as a defn).#2019-05-2917:07Joe LaneDidn’t work locally or didn’t work when deployed?#2019-05-2917:07hadilsWhen deployed.#2019-05-2917:08hadilsWhen I did the apply it gave a result that the body was an InputStream, not a String. That's progress…#2019-05-2917:09hadilsI think I need to add an interceptor to expand the body…#2019-05-2917:10Joe LaneI think you should try to deploy it right now, regardless of what it says locally with apply, and see what the results are. This is on a production topology correct? http-direct is only on a production topology, not a solo topology.#2019-05-2917:15hadilsI did. Here's the results:
{"message":"Invalid request: java.io.BufferedInputStream cannot be cast to java.lang.String"}#2019-05-2917:20marshallhttps://github.com/Datomic/ion-starter/blob/1cd565000329ac02ac6a75747496ebab88a88233/src/datomic/ion/starter.clj#L137#2019-05-2917:20marshallYou have to read the input stream#2019-05-2917:21marshallAs done on line 144#2019-05-2917:21hadils@marshall You're correct. I need to add an interceptor to read the body.#2019-05-2917:39hadils@lanejo01 @marshall I believe that I don't need the Ion provider anymore, but I don't know how to disable Jetty from being used here. The Ion provider is expecting data from an ionized request, not the raw request from API Gateway.#2019-05-2918:51hadilsI think the solution is to use Pedestal’s ‘direct-apigw-provider’#2019-05-2918:57henrikFor input-streams, a simple slurp is usually enough.#2019-05-2921:31hadilsTrue @henrik, but I am trying to use Pedestal and want a provider that works with API Gateway.#2019-05-2922:15kennyHow would I migrate the data from an existing Datomic Cloud system to a new Datomic Cloud system?#2019-05-3016:00kardanI’m trying to get a mental model on how to hook up CI with cljs with Ions. Does Ions push rely on a Git (& .gitignore) if uname is specified or is it more a snapshot on what’s in the directory?#2019-05-3016:09kardanIgnore me, I can of course try this locally 😇#2019-05-3017:35hadilsHi @lanejo01. I am working on writing a provider for HTTP Direct.#2019-05-3017:36Joe LaneWhat about the existing providers is failing to meet your needs? Does pedestal.ions library not work for some reason?#2019-05-3017:37hadilsI have not gotten it to work for me.#2019-05-3018:02souenzzo@kardan
I do
clj -A:cljsbuild ## outputs app.js
aws s3 cp app.js
echo $CURRENT_COMMIT > resources/js-ref
git commit ...
clj -A:ions ...#2019-05-3018:14hadils@lanejo01 I am trying a simple test again.#2019-05-3018:30hadilsI have made progress, I'm not writing my own provider but I do need an interceptor for BufferedInputStream.#2019-05-3022:16grzmThe Datomic Cloud schema change docs (https://docs.datomic.com/cloud/schema/schema-change.html#changing-db-ident) include this caveat:
> We don't recommend repurposing an old :db/ident, but if you find you need to re-use the name for a different purpose, you can define the name again as described in attribute-definition. This repurposed :db/ident will cease to point to the entity it was previously pointing to and ident will return the newly installed entity instead.
Are there any known issues with re-use other than bad practice?#2019-05-3111:58joshkhwe noticed a behaviour today with ion/get-params that has potential to knock down live services. get-params throws an exception if a key has more than 10 (or maybe it's 9) values:
(ion/get-params {:path (str "/datomic-shared/")})
ExceptionInfo Too many results clojure.core/ex-info (core.clj:4739)
is that by design? one can innocently add a key to Parameter Store, and then any deployed services that call get-params at run time will fail.#2019-05-3113:32grzmI believe this is on Cognitect's radar: the underlying issue is that the AWS call returns SSM parameters in batches of 10, along with a token that indicates whether there are more to fetch. ion/get-params doesn't check to see if there are more. They're not the first library to make this mistake: I found the same thing with Omniconf https://github.com/grammarly/omniconf . I believe it's fixed in Omniconf's 0.3.3-SNAPSHOT. Another workaround is to call the AWS api directly (@U0ENYLGTA’s AWS client library makes this a breeze: https://github.com/cognitect-labs/aws-api)#2019-05-3113:35grzmHere's some code that can get you started on doing that:#2019-05-3113:35grzmhttps://github.com/Dept24c/omniconf/blob/e4c37a93e7916d1fc2a95687c3ac31e69c43ca8d/src/omniconf/ssm.clj#L6-L20#2019-05-3114:19hadilsHere's source code that works for me. It gets all the params.#2019-05-3114:37joshkhcheers to both of you. it just caught me by surprise how easy it was to tip over our API. yikes. i'll stick to aws-api.#2019-05-3117:15souenzzoIs possible to use cast without datomic ions?
I'm on an on-prem app that we are working to move to ions.#2019-05-3117:19lilactownyou might be able to create your own cast impl in the meantime#2019-05-3117:46johnjWhen you have an entity or attribute that can be shared with other entities, what's considered best practice? to create a reference to the entity or attribute in order to have all the attributes of an entity with only one namespace, or just shove that entity/attr directly into the entity?#2019-05-3118:13benoitEntities can and should have attributes from different namespaces. Namespaces scope a set of attributes, not "entity types".#2019-05-3118:19kennyIs there a way to make Datomic Cloud actually clean up the resources it creates? It's a big pain to chase down everything it does. It also makes recreating a stack with the same name impossible.#2019-05-3118:27kennyThe CF stack does not delete an IAM policy (AdminPolicy) it creates.#2019-05-3118:27grzmThere are a number of other resources that are kept around as well.#2019-05-3118:28grzmThere are scripts out there that make this easier to handle in an automated fashion.#2019-05-3118:28kennyYep: https://docs.datomic.com/cloud/operation/deleting.html#deleting-storage. We have a script to delete those.#2019-05-3118:28kennyIt's super annoying that Datomic doesn't provide this.#2019-05-3118:29grzmI sympathize. Though I try to set aside the things I find really annoying to those I'm not able to handle with a script.#2019-05-3118:27kennyThat will block future stacks from being created with the same name with a CREATE_FAILURE due to resource already exists.#2019-05-3118:30grzmI've been seeing these errors in lambda logs for Kinesis ions.
{:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message "Connection refused", :clojio/throwable :java.net.ConnectException, :clojio/socket-error :connect, :clojio/at 1559312137252, :clojio/remote "10.213.43.136", :clojio/queue :queued-input, :datomic.ion.lambda.handler/retries 2}
They're not very informative. I'm guessing it's due to the lambda not being able to connect to the application on the server instance. I've been bouncing the applications to make them go away, but that's unsatisfying. Any strategies out there for digging into these?#2019-05-3119:37feelextraHi! i'm interested in Datomic, but haven't had a chance to try it out yet.
I would like to know more about the available operations permitted on data on the clients in a system with Datomic on the backend:
# Read access on clients:
Does a client have read access to an in-memory Datomic locally? e.g. being able to query without requiring a network request to the persistent Datomic instance on the backend.
# Write access on clients:
Is it true that a client doesn't have write access to an in-memory Datomic? I'm thinking that this is the case since I've seen Datomic described as Strongly Consistent as opposed to Eventually Consistent (which would have enabled clients to maintain their own transaction log and ultimately synchronize with the backend when reconnected).#2019-05-3120:10dustingetzwrites serialize through transactor and thus require network; reads require network but can often deliver local performance (depending on your topology, Ions vs Cloud vs Onprem etc)#2019-05-3120:16feelextra@dustingetz thank you for the reply 🙂 so on an Onprem config the performance would be local but Ions/Cloud would depend on availability of AWS servers in the vicinity of the client#2019-05-3120:18dustingetzyou will always need the transactor and storage to be reachable#2019-05-3120:27feelextraright.#2019-06-0120:25daniel.spanielHi, I have a question about free vs client ( ion ) versions. I installed the latest free version by putting this in the deps.edn file#2019-06-0120:25daniel.spanielcom.datomic/datomic-free {:mvn/version "0.9.5697"}#2019-06-0120:26daniel.spanielthen I notice that with datomic.api.d/transact I have to transact like this#2019-06-0120:26daniel.spaniel(d/transact conn some-single-schema)
#2019-06-0120:27daniel.spanielinstead of usually with datomic.client.api/transact i do this#2019-06-0120:27daniel.spaniel(d/transact conn {:tx-data some-single-schema})#2019-06-0120:28daniel.spanielI guess I wonder why those api's would be different. and wonder if I have old version or if that is on purpose that the peer api does not use the tx-data map to pass in data ?#2019-06-0120:44daniel.spanielI am trying to setup testing ( to use the free peer locally ) and I can setup the local peer database ( create it ) fine, but was hoping to be able to call the same functions to push data to the local database as I do with the client db#2019-06-0122:31johnjyes, the APIs are different#2019-06-0122:32johnjyou can wrap them, and there's also the client on-premise which is similar to the cloud client#2019-06-0123:18daniel.spanielright ... good point about the on premise being similar to cloud, and i found a library that wraps it like you said ( sorta wraps it ) but it is just interesting that the apis don't match when it would be super nice if they did ( no hackery needed to switch )#2019-06-0203:57johnjI mean, the client is the same API for both on-prem and cloud IIRC#2019-06-0210:25joshkhquestion about query groups and "planning your system" on ions. the diagram here [1] suggests different query groups for each stage (dev/stg/prod) of some application, and each stage is running a different code deploy revision. does each query group have a different application name? and if so, how can i deploy different revisions to query groups of the same system without updating and committing my ion-config.edn's app-name: [query group name] to point to each query group? [1] https://docs.datomic.com/cloud/operation/planning.html#stages#2019-06-0306:27henrikYour query groups will get their own CodeDeploy application names. When you deploy, you'll give that name to indicate with query group to deploy to. 
Query groups can indeed run different revisions.#2019-06-0210:32joshkhdoes $GROUP become the name of the app stage query group, such as my-app-stg, rather than the main compute group? clojure -A:dev -m datomic.ion.dev '{:op :deploy :rev $(REV) :group $(GROUP)}'#2019-06-0210:43joshkhwait that's not right, $GROUP needs to be a compute group#2019-06-0314:04marshall@joshkh yes, you deploy to a group which can be a query group or a primary compute group#2019-06-0314:24joshkhgotcha. so if we want different deployment stages for an ions related project we'd create two query groups with the same existing system name (storage) stack and different application names such as myapp-dev and myapp-prod?#2019-06-0314:26marshallyep#2019-06-0314:26marshallwell#2019-06-0314:26marshallyes, you could do that#2019-06-0314:26marshalland/or you could leave the application name the same#2019-06-0314:26marshalland deploy separately to them#2019-06-0314:27marshallthe group will be the stack name of the query group stack#2019-06-0314:27joshkhokay, so then here's the part that's confusing me: what does :app-name represent in my project?#2019-06-0314:27marshallapp-name defines the codedeploy target group#2019-06-0314:27marshallfor the ion deployment#2019-06-0314:27marshallso having the same app-name in multiple query groups and/or primary compute groups allows you to deploy the same ion project to any/all of them#2019-06-0314:28marshalli.e. 
if you wanted to have one QG that got testing/beta builds of your app#2019-06-0314:28joshkhah!#2019-06-0314:28marshallyou’d deploy the same ion project to it, but different versions (shas)#2019-06-0314:29marshallthat way when you’re ready to promote a build/version to production you dont have to change your code at all#2019-06-0314:29marshallin fact, you wouldnt even need to re-push#2019-06-0314:29marshallyou can just deploy it to additional/other QGs#2019-06-0314:30marshallwhereas if you use different application names, you’d need to change the app name in the config and re-push to be able to ship it to your other group(s)#2019-06-0314:30joshkhokay, so let's say for my myapp project i've created two query groups, one for dev and one for prod, and i've given them the same Application Name aka code deploy target. how do i choose to deploy to only one of them if they have the same group name?#2019-06-0314:31joshkh> whereas if you use different application names, you’d need to change the app name in the config and re-push to be able to ship it to your other group(s)
that's exactly what i'm trying to avoid 🙂#2019-06-0314:31marshallwhen you push you’ll see a list of possible groups to deploy to#2019-06-0314:31marshallin the return message from push#2019-06-0314:32joshkhah. this is what i get for automating my deployments 😉#2019-06-0314:32joshkhthanks a lot marshall. i'll have a look.#2019-06-0314:32marshalland, in particular, that group name is the name of the stack#2019-06-0314:32marshallso if you have my-qg1 and my-qg2#2019-06-0314:32marshallwhen you deploy you replace $group in the command with whichever one you want to deploy it to#2019-06-0314:35joshkhgreat, thanks for your help.#2019-06-0314:35marshallnp#2019-06-0320:24joshkhfinally got around to testing and it worked like a charm. thanks again @U05120CBV#2019-06-0316:57eoliphantHi quick question, had to do some work on an older on-prem project, and forgot to add :db/index true to some new attrs, I’ve since fixed it, but was just curious about the behavior for datoms that were already transacted. do they get indexed in the background or something?#2019-06-0318:33jaretYes turning db/index true on will add them to the next indexing job. Turning it on can be expensive if there is a lot of data. If you can, I always recommend testing in an environment before rolling out to production. Also.. worth checking to see if an indexing job has completed since you made the change as that would indicate they’ve already been added to the index.#2019-06-0320:31daniel.spanielLet's say I have a datomic db instance that I have by doing (let [db (d/db conn)] is there a way to go back and get the conn from the db i just made .... (let [conn (?some-method-to-extract-conn? db)])#2019-06-0320:34ghadino#2019-06-0320:35alexmillerremember that db instances are values, connections are stateful#2019-06-0320:35daniel.spanieloh well .. got it ..#2019-06-0320:36daniel.spanielkind of goofy thing to do you might think, but if you say why i wanted to do it you might thing .. oh ok .. 
not so goofy#2019-06-0322:21esp1how would i query for all of the datoms in a given transaction (equivalent to drilling in to the Transactions tab in the Datomic Console)? if i try this: (d/q '[:find ?e ?a ?v ?t
:in $ ?t
:where
[?e ?a ?v ?t]]
(d/db conn)
13194139536455) i get an error: Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:57).
:db.error/insufficient-binding Insufficient binding of db clause: [?e ?a ?v ?t] would cause full scan
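Editor's note: the replies below give two working approaches via the log, sketched here for reference — `conn` and the tx id are from the question, and on-prem peer-API signatures are assumed:

```clojure
;; 1. Log API directly: all datoms asserted/retracted in one transaction.
(def tx 13194139536455)
(:data (first (d/tx-range (d/log conn) tx (inc tx))))

;; 2. The log inside query, via the tx-data binding form:
(d/q '[:find ?e ?a ?v ?op
       :in $ ?log ?tx
       :where [(tx-data ?log ?tx) [[?e ?a ?v _ ?op]]]]
     (d/db conn) (d/log conn) tx)
```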
#2019-06-0322:25ghadi@esp1 one way is (first (d/tx-range conn {:start tx :end (inc tx)}))#2019-06-0322:28esp1cool, thanks!#2019-06-0322:26ghadiI'm not sure how/whether it can be done in a query#2019-06-0323:07benoit@esp1 tx-data, See https://docs.datomic.com/on-prem/log.html#log-in-query#2019-06-0323:10esp1oh this is perfect - thanks!#2019-06-0400:56drewverleewhen i create a proxy to datomic via the datomic-socks-proxy.sh script, it seems to timeout after about 5 minutes if i don't use it. Is that normal#2019-06-0413:21jaret@U0DJ4T5U1 It’s normal for socks proxy to go down. I personally get lots of broken pipe because I have bad network. I now use auto SSH to keep my socks-proxy alive per a user’s recommendation in slack: https://forum.datomic.com/t/keeping-alive-socks-proxy/593#2019-06-0414:06drewverleethanks, i'll look into that.#2019-06-0518:20souenzzoIs possible to overwrite datomic internal clock? on-prem#2019-06-0518:35marshall@souenzzo https://docs.datomic.com/on-prem/best-practices.html#set-txinstant-on-imports#2019-06-0518:35marshallyes, but only under specific circumstances#2019-06-0520:09kvltWhat is the upper limit of how many datoms datomic can handle?#2019-06-0520:23favilahttps://groups.google.com/d/msg/datomic/iZHvQfamirI/RANYkrUjAEwJ#2019-06-0520:25favilathere are some harder entity limits#2019-06-0520:25favila2^40 "minted" entity ids, something like that#2019-06-0520:34Daoudahey Folks, how can i achieve something like that in datomic query: [((complement contains? ?type) ?event-types)]?
?event-types is a vector.#2019-06-0520:50favilado you mean ((complement contains?) ?event-types ?type)?#2019-06-0520:51favilasets are usually faster:#2019-06-0520:51favila(not [(contains? ?legal-vals ?val)]) in datomic#2019-06-0520:51favilabut you can also test each one#2019-06-0520:52favila(not [(identity ?legal-vals) [?val ...]])#2019-06-0520:52favilaor (ground [1 2 3]) instead of identity if you have a literal of vals#2019-06-0520:57Daoudasorry for taking long to answer, i need to exclude result matching with ?event-types#2019-06-0613:25DaoudaThank you very much @U09R86PA4, you’ve helped me a lot with your suggestions 🙂#2019-06-0607:59octahedrionwhy does Datomic Cloud not return query results in the minutes following a transaction of a few hundred datoms ?#2019-06-0608:29val_waeselynckYou may be querying the wrong database value#2019-06-0609:55octahedrionsorry I should have been clearer - it's that the query isn't returning - it just hangs then eventually times out#2019-06-0612:17alexmillerDoesn’t sound normal - maybe share the query? Does query work other times?#2019-06-0613:38octahedrionit's regardless of the query - any query - yes those queries work normally#2019-06-0613:38octahedrionafter around 15 minutes it resolves and queries return immediately as usual#2019-06-0613:39alexmillerthat does not seem like normal behavior#2019-06-0613:39alexmillersorry I can't help more in debugging it but I guess I would look at logs and alarms to see if something weird is happening#2019-06-0613:40octahedrionthanks - will do#2019-06-0613:47Joe LaneIs it stuck indexing?#2019-06-0613:54alexmillerqueries should still work regardless#2019-06-0616:02ghadiCheck the dashboard too#2019-06-0616:31tylerWhat version of datomic are you running. We experienced this when using query groups on a version previous to the latest.#2019-06-0615:35jjfinei’m just starting to look into an issue with our app where we’re seeing a lot of gc.
it seems to be due to datomic queries doing a lot of memory allocation. sometimes on the order of 100MB/query. anyone have any tips?#2019-06-0615:38alexmillerif that's peer, then there are settings to tune memory use#2019-06-0615:42jjfineyeah it's a peer. i’ll look into that#2019-06-0708:20Ivar RefsdalIs it possible to supply a default value in a historical query?
I'm looking for something like
[(get-else $ ?e :a/b ?tx true ?default-value) ?val]
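Editor's note: `get-else` itself takes no transaction argument — its shape is `[(get-else src ent attr default) ?val]`. One way to approximate the above (a hedged sketch; `:a/b` is from the question, `entity-ids` is a hypothetical input) is to run the plain `get-else` against an as-of view of the database:

```clojure
;; get-else against a point-in-time view of the database:
(d/q '[:find ?e ?val
       :in $ [?e ...]
       :where [(get-else $ ?e :a/b true) ?val]]
     (d/as-of db tx)   ; db value as of transaction/t tx
     entity-ids)       ; entities to check
```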
#2019-06-0711:43fdserrDo I miss a jar?
Could not locate datomic/s3backup__init.class, datomic/s3backup.clj or datomic/s3backup.cljc on classpath.
We're on paid Pro. Thx.#2019-06-0712:57marshallAre you running bin/datomic backup-db from the root of your unzipped datomic distribution?#2019-06-0714:02GuillaumeNo, we are trying to run the backup function from (:require [datomic.backup])#2019-06-0714:03marshallbackup is a command line tool: https://docs.datomic.com/on-prem/backup.html#backing-up#2019-06-0714:17GuillaumeYes with the command line it's working fine. I thought i could use (datomic.backup/backup [from-conn-uri to-storage-uri sse? progress differential?]) directly in my code.#2019-06-0714:18marshallThere is currently no supported API entry for running backup programmatically#2019-06-0714:19GuillaumeOk thanks for the confirmation 🙂#2019-06-0716:27jjfineDid some tweaking of object-cache-max and gc settings on my peer. Tweaking these settings doesn’t seem to impact the metrics that are worrying me. However, maybe what I’m seeing is completely normal for a datomic peer: lots of churn in eden space due to allocations from datomic.api/q. Overall, gc time seems insignificant but new gen gc count per second seems high. Objects look like they’re being allocated and immediately garbage collected. Is this normal?
Additionally, it doesn’t seem like the peer is pulling new segments during this time. Does this mean object-cache-max is large enough to hold the data being queried?#2019-06-0718:20daniel.spanielis there a recommended library for doing ( handling / running ) schema/data migrations for datomic ? i see the library called conformity and curious if there is a better one than that?#2019-06-0720:53Joe LaneAnybody running into issues with an IonHttpDirectFailedToStart alert in datomic cloud?#2019-06-0720:53Joe LaneIt appears to be running into a spec error#2019-06-0720:54Joe Lane#2019-06-0720:56Joe LaneMy Ions appear to still be working.#2019-06-0721:26Daniel HinesOn the following page: https://docs.datomic.com/on-prem/dev-setup.html, it reads
> To create a connection string, simply replace <DB-NAME> with a database name of your choice, e.g. “hello”
However, it’s not clear where I replace <DB-NAME>. I don’t see that anywhere in the .properties file.#2019-06-0721:35Daniel HinesOk, I see now that that’s answered here: https://docs.datomic.com/on-prem/dev-setup.html#peer-server
Just a note, that particular information quoted above seems out of order and caused me some confusion.#2019-06-0808:41Brian AbbottHi, I’m looking to be able to synchronize cloud to dev-prem. How can I do this? Has anyone done it before? #2019-06-0816:41daniel.spanielI am curious about installing a transaction function on my in memory test database. I think this is same proceedure as on-prem documentation which says to install like this#2019-06-0816:41daniel.spaniel;; tx-data to install the function
[{:db/ident :add-doc
  :db/fn #db/fn {:lang "clojure"
                 :params [db e doc]
                 :code [[:db/add e :db/doc doc]]}}]
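Editor's aside on the `#db/fn` literal used above: on-prem, the same function value can also be built programmatically with `datomic.api/function`, which avoids embedding the reader literal in source (and sidesteps the "Can't embed object in code" error that comes up further down). A hedged sketch, assuming `conn` is a peer connection:

```clojure
;; Construct the function value with d/function instead of #db/fn,
;; then transact it like any other datom:
@(d/transact conn
   [{:db/ident :add-doc
     :db/fn (d/function
              {:lang   "clojure"
               :params '[db e doc]
               :code   '[[:db/add e :db/doc doc]]})}])
```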
#2019-06-0816:42daniel.spanielbut i already have the code setup in a function somewhere like my.package/my-function#2019-06-0816:43daniel.spanielso i am curious how to install it ..#2019-06-0816:48daniel.spanieli am trying things like#2019-06-0816:48daniel.spaniel(d/transact conn {:tx-data [{:db/ident :my-function
:db/fn `my.package/my-function}]})#2019-06-0816:49daniel.spanielbut that not making datomic very happy ( says `my.package/my-function is not a valid :datomic/fn for attribute :db/fn )#2019-06-0816:56daniel.spanielalso the example does not even work in the in-memory db I get error#2019-06-0816:56daniel.spanielSyntax error compiling fn* at (middleware_test.clj:13:1).
Can't embed object in code, maybe print-dup not defined: cl
#2019-06-0816:56daniel.spanielwhen i do this#2019-06-0816:56daniel.spaniel(d/transact conn
{:tx-data [{:db/ident :add-doc
:db/fn #db/fn {:lang "clojure"
:params [db e doc]
:code [[:db/add e :db/doc doc]]}}]})#2019-06-0817:02mssTrying out datomic cloud for the first time and noticed that there’s not a connection shutdown function the same way there is in the on-prem version of datomic. I assume then that there’s no cleanup that needs to be done when I’m ready to shut down my client/system?#2019-06-0817:17hanswhi all#2019-06-0817:18hanswBack for another go at Datomic 🙂#2019-06-0817:18hanswIn the tutorial it says:
(def items-by-type-ionized
"Ionization of items-by-type for use with AWS API Gateway lambda
proxy integration."
(apigw/ionize items-by-type))
What does 'ionize' mean in this context?#2019-06-0821:16okocimAPI Gateway passes the request to and expects a response from the application in a specific json format. I think it’s describe here, but I might be wrong: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#api-gateway-simple-proxy-for-lambda-input-format
The handler function in your app adheres to the clojure ring spec.
The ionize function takes care of the conversions to make those two things compatible. It’s probably just a lot of boring code filled with map key renaming and type conversion…#2019-06-0906:50hanswYou call it boring, I call it 'magic glue' , i'm a platform-guy 🙂#2019-06-1201:07NolanHaving trouble getting a Lambda-proxy Ion to respond with a non-base64-encoded body. Is there a way to force "isBase64Encoded": false? The ion returns a map:
{:body "<some transit>"
:headers {"Content-Type" "application/transit+json", ...}
:status 200}
But API Gateway logs show that the response body is base64 encoded prior to API Gateway’s response transformations. The request also includes {:headers {"Accept" "application/transit+json"}, ...}. Is this an API Gateway setting?#2019-06-1209:34dmarjenburgh@nolan Yes, you can configure it in API Gateway. https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-console.html Easiest is to add */* as binary media type. API Gateway will base64 decode the response for you#2019-06-1210:16octahedrionIs there a list of ways to optimize queries ? And if it's possible to apply functions to :where clauses to optimize them, could we make a library that automates the process ? Because ideally queries should be written for readability#2019-06-1213:40dmarjenburghThere are some best-practices for getting the most out of queries https://docs.datomic.com/cloud/best.html#most-selective-clauses-first. I suppose the query engine could optimise the query for you, but AFAIK it doesn’t do such a thing atm#2019-06-1212:55unbalanceddoes anyone happen to know what versions of MySQL and corresponding jdbc and mysql adapter are supported by latest datomic on prem??#2019-06-1214:52faviladatomic's use of sql is minimal. it's basically a key-value blob store#2019-06-1214:53favilawe use mysql 5.7 with connectorj 5.1.47 but have used other combinations in the past#2019-06-1214:54favilaI also got it running on sqlite once just for perf comparison with "dev" transactor (which uses h2)#2019-06-1215:13unbalancedalright I'll give that a shot. So you have the connectorj in both the lib of your datomic installation and in your deps.edn/`project.clj`/`build.boot`?#2019-06-1215:13unbalancedor just in the lib?#2019-06-1215:15unbalancedalso can I ask what version of datomic you were using? 
And which version of com.datomic/datomic-pro and which version of com.datomic/client-pro?#2019-06-1215:16unbalancedand which version of org.clojure/java.jdbc (if you are using that :P)#2019-06-1214:28unbalancedAlso is there any documentation on how/why AMQ is involved with Datomic? 😛 Because apparently if you're doing stuff on docker it becomes important#2019-06-1214:29unbalancedExecution error (ActiveMQNotConnectedException) at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl/createSessionFactory (ServerLocatorImpl.java:799).
AMQ119007: Cannot connect to server(s). Tried with all available servers.
#2019-06-1214:51favilait's how peers and the transactor communicate#2019-06-1215:53unbalancedokay for the "Datomic Pro Starter Edition" are we limited to one download of the transactor
com.datomic/datomic-pro {:mvn/version "0.9.5786"} or something? Trying to deploy to a docker container and I'm getting a 401#2019-06-1215:59alexmillerare your maven settings properly set up for authentication?#2019-06-1215:56eoliphantAre cross-db joins working in the client api now?
saw this is in the api doc, but still getting weird errors if i try it
The query list form is [:find ?var1 ?var2 ...
:with ?var3 ...
:in $src1 $src2 ...
:where clause1 clause2 ...]
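For reference, the multi-source query shape quoted above can at least be exercised outside the Client API with DataScript, which accepts the same `:in $src1 $src2` form and source-prefixed data patterns. Everything below (the schema, attributes, and data) is invented for illustration, and whether the Client API itself supports this is exactly the open question here:

```clojure
(require '[datascript.core :as d])

;; Two independent databases, joined on a shared email value.
(def users-db
  (d/db-with (d/empty-db {:user/email {:db/unique :db.unique/identity}})
             [{:user/email "a@example.com" :user/name "Ada"}]))

(def orders-db
  (d/db-with (d/empty-db)
             [{:order/email "a@example.com" :order/total 42}]))

;; Each data pattern names its source ($users / $orders); the join
;; happens on ?email exactly like a single-source join.
(def result
  (d/q '[:find ?name ?total
         :in $users $orders
         :where
         [$users ?u :user/email ?email]
         [$users ?u :user/name ?name]
         [$orders ?o :order/email ?email]
         [$orders ?o :order/total ?total]]
       users-db orders-db))
;; result => #{["Ada" 42]}
```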
#2019-06-1216:17unbalancedis it the expected behavior of datomic.api/create-database to throw a Syntax error (NullPointerException) on a malformed db string?#2019-06-1216:17unbalancedor does that indicate something else is going on?#2019-06-1216:50ghadineed more info @goomba: what call did you make?#2019-06-1216:50ghadiarguments#2019-06-1216:52unbalanced#2019-06-1216:52unbalanced#2019-06-1216:53unbalanced(line 10 above corresponds with db.clj:36 in the stack trace)#2019-06-1216:55unbalancedI can't tell if this has something to do with the dependencies in my deps.edn, if the connection string is malformed, if my transactor.properties is incorrect, or if I don't have the correct .jar file in the <datomic-install>/lib, or if it's something else entirely#2019-06-1216:57unbalancedif it helps, I get this same error whether or not the transactor is running#2019-06-1216:57unbalancedI'll be moving the rest of my comments regarding this to this thread#2019-06-1217:21souenzzoit's a missing jdbc driver?#2019-06-1218:07unbalancedProbably but missing from where? In the application?#2019-06-1218:52souenzzoI'm just a datomic user that never used it with jdbc. But from the exception, it looks like you need to add [mysql/mysql-connector-java "8.0.16"] to your deps#2019-06-1218:58unbalancedIt is, unfortunately 😕 I'm just racking my brain because this is such an opaque error message#2019-06-1218:59unbalanced:deps {
org.clojure/clojure {:mvn/version "1.10.0"}
datascript {:mvn/version "0.18.2"}
org.clojure/data.codec {:mvn/version "0.1.1"}
compojure {:mvn/version "1.6.1"}
http-kit {:mvn/version "2.3.0"}
ring-cors {:mvn/version "0.1.13"}
vvvvalvalval/supdate {:mvn/version "0.2.3"}
org.clojure/core.match {:mvn/version "0.3.0"}
com.novemberain/langohr {:mvn/version "5.1.0"}
com.novemberain/monger {:mvn/version "3.1.0"}
tempfile {:mvn/version "0.2.0"}
org.clojure/data.csv {:mvn/version "0.1.4"}
org.clojure/java.jdbc {:mvn/version "0.7.9"}
com.datomic/client-pro {:mvn/version "0.8.28"}
com.datomic/datomic-pro {:mvn/version "0.9.5786"}
mysql/mysql-connector-java {:mvn/version "8.0.16"}
com.taoensso/sente {:mvn/version "1.14.0-RC2"}
buddy {:mvn/version "2.0.0"}
clj-http {:mvn/version "3.9.1"}
org.clojure/tools.logging {:mvn/version "0.4.1"}
redis-async {:local/root "../redis-async"}
clj-jedis {:local/root "../clj-jedis"}}
}#2019-06-1218:59unbalancedAll I can figure out is "something" is wrong#2019-06-1219:25unbalancedgood lord... finally got it .... conflicting dependencies oy ...#2019-06-1219:25unbalancedapparently another library was pulling in datomic-free#2019-06-1220:13daniel.spanielare transaction functions supported in datomic-free? i have not been able to get them to work ( someone mentioned 2 years ago about there not being a transactor in the free version ) but this seems tough on people using free version .. though maybe that is the point#2019-06-1220:18johnjFWIW, the CHANGES.md file inside datomic-free-0.9.5703.zip says classpath functions were added but I haven't tried them#2019-06-1220:22johnjHow would I modify (d/pull db '[{:a [:b :c]}] eid) to return {:b value :c value} instead of {:a {:b value :c value}} ?#2019-06-1220:53daniel.spanieli have tried to load datomic-free-0.9.5703 from deps.edn and it can't be found .. sadly#2019-06-1220:53daniel.spanielbut that is a good pointer @lockdown-#2019-06-1220:53daniel.spanieli will check that#2019-06-1220:54daniel.spanielhttps://forum.datomic.com/t/free-0-9-5703-on-clojars-org/572#2019-06-1220:55daniel.spanielinteresting discussion that is super confusing#2019-06-1221:15johnjyeah, why would 5703 free include that changelog then no idea#2019-06-1221:19daniel.spanielpuzzler ( but it probably was a mistake, because it refers to on-prem ) still .. a puzzler nonetheless#2019-06-1221:19johnjyes, that version is not in clojars but you can use the maven-install script that is inside the zip#2019-06-1221:19daniel.spanielinteresting ..
and kind of odd also#2019-06-1221:22johnjwell, looks like they are targeting the enterprise only and want their existing/new users to move to the cloud version#2019-06-1221:32daniel.spanielthat figures, but we using free only for testing ( and using cloud version for real ) so this kills the buzz for testing a bit and for working on laptop while on bus and that sort of thing ( with no wifi )#2019-06-1221:34ghadiyou probably know already but free & cloud are not drop-in replacements#2019-06-1221:34ghadi@dansudol#2019-06-1221:34johnjand neither is pro#2019-06-1221:35ghadifree/pro are#2019-06-1221:35ghadion-prem == #{free pro} . vs cloud#2019-06-1221:35ghadihttps://docs.datomic.com/on-prem/moving-to-cloud.html#other#2019-06-1221:36ghadisyntax is different on some schema and tx-related things#2019-06-1221:36johnjyeah, I mean pro and cloud are not compatible#2019-06-1221:51daniel.spanielwe using this library to bridge that gap ( since we using cloud api for real ) https://github.com/ComputeSoftware/datomic-client-memdb.git#2019-06-1221:51daniel.spanielwith this you can do mem-db ( with free ) and use same api as cloud ( BUUUUUT .. no transactions ) what a party pooper that was#2019-06-1303:54drewverleeis it possible to create a lookup ref that relies on more than one attribute? The answer seems to be no, but it would not be hard for me to just write the query for the entity id and use that i suppose.#2019-06-1313:38favilaconsider creating a composite unique-value attribute#2019-06-1414:05eoliphantI do that, plus usually an associated tx-fn#2019-06-1304:18alexmillernot currently#2019-06-1313:05unbalancedso, if I'm running (datomic.api/connect db-uri)... am I supposed to have AMQ running in the background?#2019-06-1313:05unbalancedBecause I'm getting this#2019-06-1313:05unbalancedAMQ119007: Cannot connect to server(s). 
Tried with all available servers.#2019-06-1313:06unbalancedfor reference this was attempted with MySQL and the transactor running in docker containers and was attempting to connect to transactor from host machine#2019-06-1313:06unbalanced(attempting to troubleshoot why my ring server is having trouble connecting to datomic)#2019-06-1313:12marshall@goomba no. see https://docs.datomic.com/on-prem/deployment.html#peer-fails-to-connect
most likely your host and alt-host values in your transactor properties file are the issue#2019-06-1313:12unbalancedhaha, it's even highlighted on the page 😄 thanks @marshall, I'll take a look#2019-06-1313:50unbalancedalright this time I think it actually is the peer#2019-06-1313:50unbalancedring_1 | ERROR: AMQ214016: Failed to create netty connection
ring_1 | javax.net.ssl.SSLException: handshake timed out
ring_1 | at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source)
ring_1 |
ring_1 | Jun 13, 2019 1:36:04 PM org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector createConnection
ring_1 | ERROR: AMQ214016: Failed to create netty connection
ring_1 | javax.net.ssl.SSLException: handshake timed out
ring_1 | at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source)
ring_1 |
ring_1 | Exception in thread "main" Syntax error compiling at (db.clj:42:11).
ring_1 | Error communicating with HOST 172.20.0.4 on PORT 4334
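For anyone hitting this kind of docker setup, the transactor typically binds `host` inside the container and advertises `alt-host` to peers outside it, and the transactor port must be published (e.g. `-p 4334:4334`, plus 4335/4336 when using the dev storage protocol). An illustrative transactor.properties sketch; the addresses and values here are assumptions, not the poster's actual config:

```properties
protocol=sql
# Address the transactor binds to inside the container
host=0.0.0.0
# Address peers use to reach the transactor; must be routable
# from the peer (e.g. the docker network IP or the docker host)
alt-host=172.20.0.4
port=4334
```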
#2019-06-1313:50unbalancedI assume this is because I'm not exposing a port in docker properly#2019-06-1313:51unbalancedwill look into it#2019-06-1313:54cgrandI've got a weird corner case where datomic and datascript agree: tempids can unify together but not a tempid and an eid:
[[:db/add "foo" :db/ident :foo] [:db/add "bar" :db/ident :foo]] ; works fine and resolve to a single eid
[[:db/add <existing-eid-or-lookup-ref> :db/ident :foo] [:db/add "bar" :db/ident :foo]] ; doesn't work
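A purely illustrative, data-only sketch of the "postprocess the fully expanded tx-data" idea cgrand raises later in this thread. `unify-tempids` is a made-up helper, not part of any Datomic API: it rewrites each string tempid that asserts an already-seen unique attribute+value pair onto the first id seen for that pair (which may be a real eid, giving the tempid/eid unification asked about here, or another tempid, matching the tempid/tempid unification Datomic already performs):

```clojure
;; Hypothetical helper: map every [unique-attr value] pair to the first
;; id that asserts it, then rewrite string tempids asserting the same
;; pair onto that canonical id.
(defn unify-tempids [unique-attrs tx-data]
  (let [canon (reduce (fn [m [_op e a v]]
                        (if (and (unique-attrs a) (not (contains? m [a v])))
                          (assoc m [a v] e)
                          m))
                      {}
                      tx-data)]
    (mapv (fn [[op e a v]]
            (if (and (string? e) (unique-attrs a))
              [op (canon [a v]) a v]
              [op e a v]))
          tx-data)))

(unify-tempids #{:db/ident}
               [[:db/add 17592186045426 :db/ident :bar]
                [:db/add "bar" :db/ident :bar]])
;; => [[:db/add 17592186045426 :db/ident :bar]
;;     [:db/add 17592186045426 :db/ident :bar]]
```

The duplicate assertion in the result is harmless, since identical datoms collapse in a transaction.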
#2019-06-1314:20favilaseems to work for me?#2019-06-1314:20favilaat least using mem db#2019-06-1314:20favila(-> "datomic:" d/connect
(d/transact [[:db/add (d/tempid :db.part/user) :db/ident :foo] [:db/add "bar" :db/ident :foo]]) deref)
=>
{:db-before datomic.db.Db @5e4de82f,
 :db-after datomic.db.Db @b6765e2a,
:tx-data [#datom[13194139534312
50
#inst"2019-06-13T14:19:50.056-00:00"
13194139534312
true]
#datom[17592186045417 10 :foo 13194139534312 true]],
:tempids {-9223350046623220289 17592186045417, "bar" 17592186045417}}
(-> "datomic:" d/connect
(d/transact [[:db/add (d/tempid :db.part/user) :db/ident :foo] [:db/add "bar" :db/ident :foo]]) deref)
=>
{:db-before datomic.db.Db @b6765e2a,
 :db-after datomic.db.Db @276d2771,
:tx-data [#datom[13194139534314
50
#inst"2019-06-13T14:20:00.447-00:00"
13194139534314
true]],
:tempids {-9223350046623220291 17592186045417, "bar" 17592186045417}}
#2019-06-1314:21favilamaybe the tempid is in a different partition?#2019-06-1314:22favilaactually that works too, at least where [:db/ident :foo] exists already#2019-06-1314:23favilaseems to work for initial write also, but honestly the partition difference seems like it should be an error. I don't know how it would choose a partition#2019-06-1314:23favila(-> "datomic:" d/connect
(d/transact [[:db/add (d/tempid :db.part/db) :db/ident :bar] [:db/add "bar" :db/ident :bar]]) deref)
=>
{:db-before datomic.db.Db @28ac123,
 :db-after datomic.db.Db @a8ad04f9,
:tx-data [#datom[13194139534319
50
#inst"2019-06-13T14:22:26.281-00:00"
13194139534319
true]
#datom[63 10 :bar 13194139534319 true]],
:tempids {-9223367638809264717 63, "bar" 63}}#2019-06-1314:15alexmiller"doesn't work" means? error?#2019-06-1314:35cgrandyes error#2019-06-1314:44cgrand@favila @alexmiller fuller repro:
1/ transacting with two tempids and one ident works fine
user=> (d/transact conn [[:db/add "foo" :db/ident :foo] [:db/add "bar" :db/ident :foo]])
#object[datomic.promise$settable_future$reify__4751 0x3386c206 {:status :ready, :val {:db-before
2/ transacting the same ident on an existing eid AND a tempid fails:
user=> (d/transact conn [[:db/add :foo :db/ident :bar] [:db/add "bar" :db/ident :bar]])
#object[datomic.promise$settable_future$reify__4751 0x2321e482 {:status :failed, :val #error {
:cause ":db.error/datoms-conflict Two datoms in the same transaction conflict\n{:d1 [:foo :db/ident :bar 13194139534323 true],\n :d2 [17592186045428 :db/ident :bar 13194139534323 true]}\n"
:data {:d1 [:foo :db/ident :bar 13194139534323 true], :d2 [17592186045428 :db/ident :bar 13194139534323 true], :db/error :db.error/datoms-conflict}
:via
[{:type java.util.concurrent.ExecutionException
:message "java.lang.IllegalArgumentException: :db.error/datoms-conflict Two datoms in the same transaction conflict\n{:d1 [:foo :db/ident :bar 13194139534323 true],\n :d2 [17592186045428 :db/ident :bar 13194139534323 true]}\n"
:at [datomic.promise$throw_executionexception_if_throwable invokeStatic "promise.clj" 10]}
{:type datomic.impl.Exceptions$IllegalArgumentExceptionInfo
:message ":db.error/datoms-conflict Two datoms in the same transaction conflict\n{:d1 [:foo :db/ident :bar 13194139534323 true],\n :d2 [17592186045428 :db/ident :bar 13194139534323 true]}\n"
:data {:d1 [:foo :db/ident :bar 13194139534323 true], :d2 [17592186045428 :db/ident :bar 13194139534323 true], :db/error :db.error/datoms-conflict}
:at [datomic.error$argd invokeStatic "error.clj" 77]}]
...
where I would have expected the tempid to be assigned the existing eid (upsert semantics)#2019-06-1314:45favila[:foo :db/ident :bar]#2019-06-1314:45favilayour tx is malformed#2019-06-1314:46favilayou meant [:db/add :foo :db/ident :bar]?#2019-06-1314:48cgrandthis one, I edited the above post#2019-06-1314:47favilaor [:db/add (d/tempid :db.part/user) :db/ident :bar]?#2019-06-1314:49favilaok, with corrected tx + response#2019-06-1314:50favilaI see correction#2019-06-1314:50favilaThis is unavoidable#2019-06-1314:50favilathe :foo resolves to the before-tx eid#2019-06-1314:50cgrandyou can replace :foo by an explicit eid#2019-06-1314:51favila:db/ident :bar doesn't exist yet#2019-06-1314:51favilaI'm not sure how to get the same datom out of both of these tx ops#2019-06-1314:52cgranduser=> (d/transact conn [[:db/add 17592186045426 :db/ident :bar] [:db/add "tmp" :db/ident :bar]])
#object[datomic.promise$settable_future$reify__4751 0x2964511 {:status :failed, :val #error {
:cause ":db.error/datoms-conflict Two datoms in the same transaction conflict\n{:d1 [:foo :db/ident :bar 13194139534323 true],\n :d2 [17592186045428 :db/ident :bar 13194139534323 true]}\n"#2019-06-1314:52favilawhy would "tmp" unify to the current value of :foo and/or the eid 17592186045426 ?#2019-06-1314:53favila"tmp" may unify to current value of [:db/ident :bar] if it existed#2019-06-1314:56favilaresolution of a tempid to a possible real id is done using the db-before value; for this to work as expected, it would have to be done with a db-after value#2019-06-1314:57favilai.e., [:db/add "tmp" :db/ident :bar] would have to rewrite "tmp" to 17592186045426 before it knew that 17592186045426 had the ident :bar#2019-06-1314:57favilabecause 17592186045426 doesn't have the ident :bar until the tx is complete#2019-06-1314:57cgrandWhat about this one (less heavy on db/idents)?
user=> (d/transact conn [{:db/ident :u/k :db/unique :db.unique/identity :db/valueType :db.type/keyword :db/cardinality :db.cardinality/one}])
user=> (d/transact conn [[:db/add :u/k :u/k :foo] [:db/add "tmp" :u/k :foo]])
(exception)#2019-06-1314:58favilasame issue#2019-06-1314:58favilacan you get anything from the db using (d/entid db [:u/k :foo])#2019-06-1314:59cgrandbut I don’t use [:u/k :foo] as a lookup ref (it’s not transacted yet)#2019-06-1314:59favilaexactly#2019-06-1315:00favilait's not transacted yet, so "tmp" can't be replaced with its eid#2019-06-1315:01cgrandthen how come
(d/transact conn [[:db/add "tmp1" :u/k :bar] [:db/add "tmp2" :u/k :bar]])
works?#2019-06-1315:01favilaI don't know#2019-06-1315:02cgrandbtw I don’t need any db to figure this out: it’s a purely local unification; I could postprocess the fully expanded tx-data to perform the unification....#2019-06-1315:02cgrandsadly my tx-data involves several tx fns...#2019-06-1315:04favilaThis seems like a tricky thing to want to do#2019-06-1315:04favilaI'm still suspicious of trying to unify against something that hasn't been written yet#2019-06-1315:05favilaI'm pretty sure I exploit lack of unification in cases like this to detect real conflicts#2019-06-1315:06favilaI can sort of see why [[:db/add "tmp1" :u/k :bar] [:db/add "tmp2" :u/k :bar]] might be allowed to unify because :u/k :bar doesn't exist yet#2019-06-1315:11cgrandhow do you know it doesn’t exist yet?
In fact it works even if it preexists:
user=> (d/transact conn [{:u/k :preexisting}])
#object[datomic.promise$settable_future$reify__4751 0x62cd562d {:status :ready, :val {:db-before #2019-06-1315:13favilapreexisting makes sense because both unify to the eid#2019-06-1315:14favilaasserting ":u/k :bar" on a tempid would trigger replacement of tempid with realid#2019-06-1315:14favilathen all future uses of that tempid would also be replaced with realid#2019-06-1315:15favilasimilarly, if [:u/k :bar] didn't exist and was asserted, kinda makes sense to say that every other tempid trying to assert that would unify to the same newly-minted eid#2019-06-1315:17favilabut when :u/k :bar could unify to a real eid, to ask a different tempid asserting a new [:u/k :foo] to unify to the what-is-now :bar but what will be :foo seems like too much mind-reading#2019-06-1315:06favilabut that already makes me a little nervous#2019-06-1315:06favilasuppose one eid had multiple lookup refs#2019-06-1315:07favilayou could have a tempid that could potentially unify against multiple eids#2019-06-1315:10favilae.g. {:db/id 12345 :refa :refa1 :refb :refb1} {:db/id 67890 :refa :refa2} tx [[:db/add "t1" :refa :refa2][:db/add "t2" :refb :refb1]]
@(d/transact conn [[:db/add "t3" :refa :a1] [:db/add "t3" :refb :b3]])
=>
{:db-before datomic.db.Db @20fead49,
 :db-after datomic.db.Db @c13cf2d2,
:tx-data [#datom[13194139534316
50
#inst"2019-06-13T15:43:29.444-00:00"
13194139534316
true]
#datom[17592186045418 64 :b3 13194139534316 true]
#datom[17592186045418 64 :b1 13194139534316 false]],
:tempids {"t3" 17592186045418}}#2019-06-1315:44favilaIMO this should be an error#2019-06-1315:45favilait works because resolution to a real eid happened first#2019-06-1315:46favilathis btw is also the tx ops from expansion of the tx map `{:db/id "t3"
:refa :a1
:refb :b3}`#2019-06-1315:46favilaI think this syntax makes the ambiguity more clear#2019-06-1315:47favilathis was probably either a mistake or too-clever code#2019-06-1315:53favilaIf I were going back in time, I think I would make upserting attributes work like lookup refs that may not resolve against the db-before#2019-06-1315:54favilae.g. {:db/id [:refb :b3] :refa :a1} would (if :b3 didn't exist) reliably make a new eid, assert :refb :b3 on it, and assert :refa :a1 on it#2019-06-1315:55favilatempids would always make new eids#2019-06-1316:04favilaactually you may be able to replicate some of that behavior by consistently hashing the same ref lookup to the same string-for-tempid#2019-06-1315:13favilaor even worse [[:db/add "t1" :refa :refa2][:db/add "t1" :refb :refb1]]: was I making a new :refa2 or changing :refa1 to :refa2?#2019-06-1315:16cgrandit’s an integrity violation you are merging two existing entities#2019-06-1315:19favilaif :refa2 existed prior, would it still be an integrity violation?#2019-06-1315:20favilayou have to decide whether forms like [:db/add "t1" :refa :refa2] are primarily a lazy way of resolving tempids or a way to assert a new ident#2019-06-1315:20cgrandyou mean if it didn’t exist? because it exists {:db/id 67890 :refa :refa2}#2019-06-1315:20favilawhen both are possible, allowing it to guess doesn't seem like a good idea#2019-06-1315:27favilain general when I want my tx to be an update rather than a upsert, I will use the ident or lookup ref as the eid#2019-06-1315:28favilaI will not rely on unification through the assertion#2019-06-1315:28cgrand> you have to decide whether forms like [:db/add "t1" :refa :refa2] are primarily a lazy way of resolving tempids or a way to assert a new ident
Neither (or both), they are the upsert semantics and static analysis of the tx-data is enough
[:db/add eid :ref :A] [:db/add "tmp" :ref :A]
• :A doesn't exist yet in the db, tmp resolves to eid
• :A does exist but on another eid -> it's a unicity conflict
• :A does already exist on this eid -> tmp resolves to eid
In all non-conflicting cases we get the same output.#2019-06-1315:29favilayes but in case b, it's possible I made a mistake in my tx#2019-06-1315:29favilawait, what is case 2#2019-06-1315:30favilais that [:ref :A] does not resolve to eid?#2019-06-1315:31cgrandit’s [:ref :A] resolves to another-eid#2019-06-1315:31favilaso you would expect a conflict? then why the bug report?#2019-06-1315:31favilaI expect a conflict too#2019-06-1315:32favilaI thought you expected "tmp" to resolve to eid (not another-eid)#2019-06-1315:32favilai.e. the value :ref :A will resolve to after [:db/add eid :ref :A] is applied#2019-06-1317:17rapskalianHello all, I have a question due to lack of conceptualization: why is :db/cas necessary if the following is true for a Datomic system:
>The transactor queues transactions and processes them serially.
Serial processing seems to imply no need for check-and-set, but I'm surely missing something.#2019-06-1317:18Joe LaneIf the value you're about to set is dependent upon its previous value (like a bank account) then you want cas.#2019-06-1317:20rapskalianThat makes sense. So then I think I've been assuming that the transaction functions themselves are run serially, but in reality, they might be run (expanded) in parallel, and their resulting datoms are what actually get sent to the transactor for serial processing. Is that correct?#2019-06-1317:34favilaNo#2019-06-1317:34favilaA transaction is data describing the change you want done#2019-06-1317:35favilaTo prepare that data, you may have read values out of the db#2019-06-1317:35favilaso your changes are prepared assuming a certain state of the db#2019-06-1317:35favilathe problem is by the time that tx data gets to the transactor, your assumptions may be wrong#2019-06-1317:36favilathus invalidating your transaction#2019-06-1317:36favila:db/cas is a way to assert that something still has the value you read at the moment the write occurs#2019-06-1317:37favilathe transactions themselves are applied serially, but the transaction data was not prepared serially (i.e. it was prepared by uncoordinated peers reading whatever they read)#2019-06-1317:40rapskalianI see, thank you @favila. So then, in a Cloud system, the "compute group" actually can prepare datoms (i.e. 
run tx fns) in an uncoordinated fashion, but the actual mutation of storage is always serial?#2019-06-1317:43rapskalianThis actually would explain why :db/cas is a built-in, because there must be some storage-level magic happening to ensure that CAS’s promise is kept.#2019-06-1317:44favilaI don't think tx functions are run outside a tx#2019-06-1317:44favilaunless this is a cloud vs prem difference#2019-06-1317:44favilait's a surprising difference if so#2019-06-1317:44marshallthey are not#2019-06-1317:44marshallthe same description for the use/need/purpose of CAS for on-prem that Francis provided above is true for Cloud#2019-06-1317:47rapskalianOkay so to check my own understanding, if I prepare the tx outside of a tx function, then :db/cas might be required. However, if I query for the dependent data inside a tx function, then I shouldn’t need CAS, correct?#2019-06-1317:47marshallmore or less yes#2019-06-1317:47marshalltake a look at the link i dropped in the other thread#2019-06-1317:26Joe LaneHmmm. I’m not sure, that’s a very good question though.#2019-06-1317:29rapskalianHere’s actually an interesting example from the docs: https://docs.datomic.com/cloud/transactions/transaction-functions.html#creating
Notice that inc-attr does not use :db/cas even tho it depends on the previous value…#2019-06-1317:47marshall:db/cas is a transaction function
you can use it within a custom transaction function you write, but you don’t have to
you can also reimplement it (or something like it) as a transaction function yourself
However, the general use of CAS is more frequently for optimistic concurrency applications - i.e. https://docs.datomic.com/cloud/best.html#optimistic-concurrency#2019-06-1317:47favilaThis is because transactions run serially. The transaction data expansion and application to make the new DB value begins with the previous db value. All tx functions receive that previous db value. No other transactions are expanded/applied during this process (in essence, the transaction has a lock on the entire database). The datoms that result are applied to the previous db value to make the next db value. Then the next tx is processed#2019-06-1317:48favilathere is no opportunity for a tx function to get a stale read#2019-06-1317:48marshallright ^#2019-06-1317:48marshallthe tradeoff is that whatever work you're doing in your transaction function is happening in the single-writer-thread of Datomic#2019-06-1317:49marshallso if you try to do something expensive (like call a remote service — eek!) from within the transaction function, all writes are going to wait on that work#2019-06-1317:50marshallif you instead do that work locally in your client (or peer), you can avoid that cost on the transaction stream but you need to ensure that no one has changed your relevant data out from under you in the meantime, so you can often use CAS for that#2019-06-1317:50favilacas (and its general technique of "assert what I read hasn't changed") allows the opposite tradeoff: possible parallel tx preparation, but a stale read is expensive to recover from. (You need to catch the tx error, detect it was a CAS error, and reprepare your tx using a newer db, and reissue hoping you don't race with some other write)#2019-06-1317:54rapskalianAhhh okay, that link, coupled with these explanations, has totally cleared up my confusion. I was totally missing the fact that concurrency was in the hands of the developer. 
So putting simple query logic into a tx function is totally valid, but when that logic becomes expensive (e.g. remote call as @marshall said), it might make more sense to perform that work outside the tx fn to keep the tx stream clear, and rely on CAS to uphold consistency.#2019-06-1317:55marshallyep#2019-06-1317:55rapskalianThank you both very much#2019-06-1318:00favilaIf you are familiar with clojure atoms: this is roughly the difference between (swap! db apply-ops (inc-something db)) and (swap! db (comp apply-ops inc-something))#2019-06-1321:22rapskalianMakes sense. Former performs the inc transformation outside the swap, and the latter performs the inc/apply inside the swap (iiuc).#2019-06-1321:32joshkhare there best-practices or guides for query optimisation? we're discovering queries that run orders of magnitude faster after reordering just one of their constraints.#2019-06-1321:33joshkhwe've always followed the "most specific first" rule, but it seems there are others patterns that help as well#2019-06-1321:36marshall@joshkh https://docs.datomic.com/on-prem/best-practices.html#join-along#2019-06-1321:42joshkhthanks @marshall! that's exactly what we found via some quick trial-and-error#2019-06-1321:43joshkhand also moving some top level constraints down into or-joins#2019-06-1414:38conanHiya, we're using Datomic Cloud and need to open it to the internet. What's the easiest way to do this in a production topology? We have a solo topology where we add ingress rules to the node security group, but in production there's an internal NLB that presumably routes requests to the nodes; if we swap that out for an internet-facing NLB then CloudFormation will probably kill it?#2019-06-1414:43Joe Lane@conan api gateway. In solo, use apigw lambda proxy, in production use the new http-direct integration with api-gateway.#2019-06-1414:45conanok great, we'll take a look at that. 
thanks!#2019-06-1414:49conanactually one question: what exactly would we connect the api gateway to? is it the contents of the node security group (i think that's where the peers are, which is what we need for db access)?#2019-06-1415:13Joe LaneI think the larger architectural problem is that you’re running your application in heroku and trying to access your database in a different datacenter. Independent of datomic cloud, this is likely going to result in slower performance than you like because you’ll need to be streaming data out over the open internet. Is there a reason you chose to use datomic cloud but not elastic beanstalk instead of heroku?#2019-06-1416:54cichliIf I understand correctly HTTP Direct is only relevant for Ions, right? i.e. it’s not for providing access to client applications in general#2019-06-1417:08Joe LaneThat is my understanding as well.#2019-06-1418:18cichliThanks 🙂#2019-06-1414:43Joe LaneDon’t expose your database over the internet directly, you want api gateway to handle security, throttling, etc.#2019-06-1414:45conanno, we'll be protected by static IP#2019-06-1414:48conanour app runs on heroku but it has static ips, so we only allow access from those#2019-06-1414:44ghadi^^#2019-06-1414:50ghadiclients with static IP or not, the traffic flowing over the NLB does not have TLS @conan#2019-06-1414:53conanoh, what happens if i use an nlb with tls?#2019-06-1414:54ghadiNLBs don't have TLS#2019-06-1414:54ghadiNLB is layer 4 load balancing#2019-06-1414:54conanwhat i mean is when i create an nlb, i select this#2019-06-1414:55ghadithat does not do what you think it does#2019-06-1414:55conanhaha ok#2019-06-1414:56conanok so i need to terminate tls somewhere in front of the nlb#2019-06-1414:57ghadii'm not going help you put your database on internet 🙂#2019-06-1414:57conanwe have no choice ¯\(ツ)/¯#2019-06-1414:58conanthe socks tunnel we use is encrypted, so long as we can terminate that somewhere in AWS we're fine#2019-06-1414:59ghadiusing socks to 
the bastion is fine if you can get that running in Heroku#2019-06-1414:59conanyeah but we don't want to be running all our db traffic over a low-availability bastion server#2019-06-1414:59marshallhttps://forum.datomic.com/t/keeping-alive-socks-proxy/593#2019-06-1415:01conanwe haven't had any problems with the tunnel so far tbh#2019-06-1415:01conanwe aren't using the datomic-socks-proxy script though#2019-06-1415:01marshall“low availability bastion”?#2019-06-1415:02conanas in, it's a single ec2 instance. there doesn't seem to be much point running a high-availability production topology instance of datomic cloud if we run all the traffic over a single point of failure like the bastion#2019-06-1415:02conanam i misunderstanding how the bastion works?#2019-06-1415:02marshallthen use API Gateway#2019-06-1415:02conan(i haven't spent much time thinking about it, the docs very much present the bastion as a dev tool rather than a production resource)#2019-06-1417:07DaoudaHey Folks, can you tell me which time attribute is used by datomic.api/since to return a given version of a database?
https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/since#2019-06-1417:30favilaI'm not sure what you mean?#2019-06-1417:31favilasince can accept t, tx or an instant#2019-06-1418:18Daoudayeah, but in case t form time i guess is used, which time value does it check internal to know the database version which fulfill the requirement?#2019-06-1418:27favilait's always transaction time#2019-06-1418:29favilaevery transaction has a datom [tx :db/txInstant instant]. Tx is the transaction's entity id; t is just that id with partition bits stripped off (use d/tx->t and d/t->tx to convert between them). instant is a java.util.Date corresponding to whatever the transaction time is#2019-06-1418:31favilaif you supply a time (rather than t or tx) to since, as-of, tx-range, etc, it just looks for the txid at or before that moment#2019-06-1418:44DaoudaThank you very much, those information really helped 😄#2019-06-1500:13NolanI haven’t been able to get a lambda proxy ion to return plain application/transit+json, but instead only receive base64 encoded transit. I’ve tried enabling binary support in API Gateway by adding */* as a binary media type to the API Gateway integration (following this https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-console.html), but that didn’t seem to cause API Gateway to base64 decode the response prior to sending it back to the client. Has anyone run into this before or have any ideas?#2019-06-1501:47hadils@nolan Are you testing it in the AWS console? Try using your app — the console seems to return base64 no matter what I tried, but my app worked just fine.#2019-06-1518:07Nolan@hadilsabbagh18 I’ve been testing using the application, which receives the base64 encoded response body. It seems like it’s Ion middleware that’s base64 encoding the response body and any fix at the API Gateway level is treating a symptom rather than the cause. 
I’m sure it’s something silly that I’m doing with headers or otherwise, because the Ion tutorial responds with application/edn without any complication.#2019-06-1518:29joshkhhey @Nolan, i return transit+json from my Ions and API Gateway without a problem. maybe i can help?#2019-06-1523:27eoliphantHi, I’m running into an issue on cloud, I have some on prem queries where I’ll pass in some tx-data (from tx-range, etc)
(d/q '[:find ?xxx
:in $ [[?e ?a ?v ?tx]]
:where
...
db tx-data)
This works fine in on-prem (peer api), but in cloud I’m getting a transit marshalling error
*Execution error at com.cognitect.transit.impl.AbstractEmitter/marshal (AbstractEmitter.java:194).
Not supported: class datomic.client.impl.shared.datom.Datom*
Is this the expected behavior?#2019-06-1601:41favilaThere is no serializer in their transit for the Datom class (an instance of a datom from d/datom, tx data, etc)#2019-06-1601:41favilaYou will have to coerce to eg vector#2019-06-1614:25lboliveiraHello!
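[Editor's aside] favila's advice above to coerce Datoms before binding them could look like this sketch (the query shape is illustrative):

```clojure
;; The client API's transit has no handler for Datom records, so turn
;; them into plain vectors before passing them as a query input.
(defn datom->vec [d]
  [(:e d) (:a d) (:v d) (:tx d) (:added d)])

(d/q '[:find ?e
       :in $ [[?e ?a ?v ?tx ?op]]
       :where [?e ?a ?v]]
     db
     (map datom->vec tx-data))
```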
Last Friday I issued some :db/excise commands on a database that has 50M entities.
I issued 200 transactions with 5000 :db/excise each. No individual attributes were excised, only whole entities.
After that I called (d/request-index conn) and (d/gc-storage conn #inst "2018").
The original transactor, which had 2 CPUs and 4GB of RAM, stopped serving new requests. I stopped it and requests began to be served by the backup transactor.
The backup transactor worked for 10 minutes and then stopped answering new requests.
This pattern kept repeating until I set up a new transactor with 8 CPUs and 64GB of RAM.
The 8 CPUs ran at 100% for 2 or 3 minutes; then they went idle except for 1 CPU that stayed stuck at 100%.
The new transactor was able to satisfy new requests and no timeouts were logged on the peer anymore.
Since then, one CPU has been fully consumed, and it does not seem that the entities are being removed.
Is there something I can do? Will this process ever end? Is there a way to find out how much of this job is complete? Should I consider a plan B? I can't afford the new transactor's bill for much longer.
I am using datomic-pro-0.9.5561.50.
java -server -cp resources:lib/*:datomic-transactor-pro-0.9.5561.50.jar:samples/clj:bin -Xmx60000m -Xms60000m -XX:+UseG1GC -XX:MaxGCPauseMillis=50 clojure.main --main datomic.launcher /opt/datomic-pro-0.9.5561.50/config/transactor.properties
# config/transactor.properties
memory-index-threshold=20g
memory-index-max=40g
object-cache-max=2g
#2019-06-1712:53marshall@lboliveira https://docs.datomic.com/on-prem/excision.html#performance
you should not excise more than a few thousand datoms at the same time (max)#2019-06-1712:54marshallit is possible that your excision job will complete, but it is also possible that it may be too large
5000 * 200 is a very very large excision#2019-06-1712:55marshallgenerally, your options are to provide the transactor a large amount of memory and CPU and wait, try running the excision locally (i.e. on a restored backup) then backing up and restoring from there, or restore back to a state prior to issuing the excision and try to run it in MUCH smaller pieces#2019-06-1713:17lboliveira@marshall Thanks for your answer. I agree that too many datoms were sent.
The backup and restore operations are already taking too long to complete.
What do you think about a d/log -> d/tx-range -> filter -> d/transact strategy?#2019-06-1713:18marshallit is a reasonable approach for rebuilding a database
it can take a significant amount of time as well
also, there are a few details that need to be managed carefully, particularly mapping entity IDs when doing that process#2019-06-1713:19lboliveiraparticularly mapping entity IDs when doing that process
Is there any reference about this?#2019-06-1713:19marshalli believe there are some community implementations of the process, but no, not specifically covered in the official docs#2019-06-1713:25lboliveiraThank you. I think I will rebuild the database. It does not seem likely that the index rebuild will finish first.
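[Editor's aside] An untested outline of the d/log -> d/tx-range -> filter -> d/transact rebuild discussed above. `old-uri`, `new-uri`, `keep-datom?`, and `ref-attr?` are hypothetical helpers you would supply; the tempid-to-eid mapping returned by each transact must be carried forward so that ref-typed values can be remapped too, which is the tricky part marshall warns about:

```clojure
;; Old entity ids become string tempids in the new db; eid-map remembers
;; how each old eid resolved so later refs to it can be rewritten.
(let [conn-old (d/connect old-uri)
      conn-new (d/connect new-uri)]
  (reduce
   (fn [eid-map {:keys [data]}]
     (let [resolve-eid #(get eid-map % (str %))
           tx (for [[e a v _tx added?] data
                    :when (keep-datom? e a v)]        ; your filter
                [(if added? :db/add :db/retract)
                 (resolve-eid e)
                 a
                 (if (ref-attr? a) (resolve-eid v) v)])
           {:keys [tempids]} @(d/transact conn-new tx)]
       (into eid-map
             (for [[tempid new-eid] tempids]
               [(Long/parseLong tempid) new-eid]))))
   {}
   (d/tx-range (d/log conn-old) nil nil)))
```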
We also need to excise 48M more entities. Do you believe that we could do it locally in a timely fashion using fewer datoms per transaction?#2019-06-1713:25marshallprobably not; with that many datoms you’re almost certainly better off rebuilding with a filter#2019-06-1713:25marshallhow big is the total db?#2019-06-1713:26lboliveira19.36 GB#2019-06-1713:27marshallin datoms#2019-06-1713:31lboliveiraone moment…#2019-06-1713:38lboliveiraabout 51M#2019-06-1713:39marshalland you want to excise 48M of them?#2019-06-1713:40lboliveirayes#2019-06-1713:41marshallyou’ll definitely be better off transacting the ones you want to keep into a new empty DB#2019-06-1713:51lboliveirathanks a lot @marshall#2019-06-1806:28seantempestaQuick question, is there a way when doing a pull to get back the :db/ident keywords rather than the :db/ids?#2019-06-1807:05steveb8nIn the peer API you could use d/entity for this. With the client API this is not possible but it’s easy to write 1 fn to decorate a pull/query with the extra nesting and another to de-nest the values from any pull result. Not ideal but not too bad in the grand scheme#2019-06-1813:29Joe LaneUse :as from the pull grammar#2019-06-1814:17jarethttps://forum.datomic.com/t/new-cloud-client-release-0-8-78/1024#2019-06-1814:52souenzzo@jaret wil on-prem receive it?#2019-06-1814:53jaretYep. Coming soon to on-prem#2019-06-1916:43johnjWith datomic's "the database as a value" approach, do you treat your functions as "pure"? (defn some-query [db args....] 
and pass the db everywhere?#2019-06-1916:45johnjtaking this into account of course: https://docs.datomic.com/cloud/best.html#consistent-db-value-for-unit-of-work#2019-06-1917:07Joe Lane@lockdown- Yeah, it also makes testing a breeze once you dynamically create and destroy databases.#2019-06-1923:35steveb8n@lockdown- it also makes the use of libs like scope-capture super useful since you can interact with it using the REPL even after a test/http request has run#2019-06-2005:30Brian AbbottFrustration. I have the following query:#2019-06-2005:31Brian Abbott(def get-dev-user-q '[:find ?e
:where [?e :user/email "[email protected]"]]))
(defn mig-user-role-on-dev-user [conn]
(let [devuser-id (d/q get-dev-user-q (d/db conn))]
(d/transact conn {:tx-data {:db/id (ffirst devuser-id)
:user/role 1}})))#2019-06-2013:46favilaYour tx data is a map not a vector. Should be [{:db/id ...}]#2019-06-2013:49favilaStepping back: why query at all? Either user email is unique, in which case why not mark it so and use :db/id [:user/email "whatever"], or what you are doing is unsafe because you are essentially picking a user at random that happens to have the same email#2019-06-2005:31Brian Abbottit throws this error:#2019-06-2005:32Brian AbbottCaused by: java.lang.IllegalArgumentException: Don't know how to create ISeq from: xhealth_web.db.migrations$mig_user_role_on_dev_user#2019-06-2005:32Brian AbbottI have decided that my brain for some reason does not possess the processing power to solve this. 😞#2019-06-2005:32Brian AbbottDoes anyone know what my problem might be??#2019-06-2005:32Brian Abbott(with the code that is - not my head) 😛#2019-06-2005:33Brian Abbottaren't I supposed to put datalog queries in a list ahead of time, i.e. with '?#2019-06-2006:00Brian AbbottAnd.... a lesson in.... arrogance of mind? or rigidity of mind perhaps.... the first part of the stack trace I dismissed as being... not relevant -- it was someone else's code - it had to work correctly. But, given how we roll through our migrations, usually it's data attributes, not functions, that perform queries and subsequent updates.... So, in the config I had to handle mine slightly differently.....#2019-06-2015:40franquitoI'm going through Datomic's HA documentation, it says something about a "Datomic transactor appliance", in context:
> If you are using the Datomic transactor appliance, you can run two transactors in a single stack by setting a property in your CloudFormation template
> https://docs.datomic.com/on-prem/ha.html#enabling
Does anyone know what this means?#2019-06-2015:42ghadi@franquito https://docs.datomic.com/on-prem/aws.html <-- that would be if you're using a Cloudformation template to spin up a datomic transactor automatically#2019-06-2015:43ghadiyou can tweak a property and get two transactors running in HA#2019-06-2015:44ghadiBut if you're on AWS, and haven't chosen how to run datomic yet, I would encourage you to look at Datomic Cloud#2019-06-2015:44ghadi(disclaimer: I work for Cognitect, but not on the Datomic team)#2019-06-2015:46ghadiI think there is less stuff to manage with Datomic Cloud, even though the total number of pieces is greater#2019-06-2015:46ghadi(but on-prem and cloud are not drop-in replacements for each other)#2019-06-2015:48franquitoOh, thanks for the clarification. I'm running Datomic in AWS, but I don't think that whoever deployed it used the Datomic Cloud facilities. I do have a CloudFormation template, but the property to adjust the number of transactors is called GroupSize instead of aws-autoscaling-group-size.#2019-06-2017:40jaretHi @franquito the difference you are seeing is an artifact of the provided CFT process. For running on-prem in AWS we supply a CFT process, described here.
https://docs.datomic.com/on-prem/aws.html#create-cloudformation-template
That process takes a transactor properties file and a my-cf.properties file to generate the final CFT.#2019-06-2017:40jaretIn the my-cf.properties file you supply an aws-autoscaling-group-size.#2019-06-2017:41jaretIf you have your own CFT then yes, you can modify with the groupsize property.#2019-06-2017:41jaret@UJK3U9L68 @U050ECB92 ^#2019-06-2415:40franquitoThank you! I double checked just in case. Apparently we did use the CFT sample Datomic provides and it transforms the aws-autoscaling-group-size property to a parameter GroupSize of the template.#2019-06-2016:00ghadi@jaret might know more ^#2019-06-2017:08Jacob FordHey all, I work with @franquito. Following up on his question:
We're pretty deep in On-Prem now even though it is on AWS—it's already running in production backed with DynamoDB. Unless I'm misunderstanding https://docs.datomic.com/on-prem/moving-to-cloud.html, it seems at this point it'd be pretty painful to switch to Cloud.
But my question is: does aws-autoscaling-group-size in the CloudFormation .template file translate directly to GroupSize in the Template JSON I'm seeing in AWS CloudFormation console now, after it was successfully created?
Excerpt from our CloudFormation Template:
"GroupSize":
{"Description":"Size of machine group",
"Type":"String",
"Default":"2"},
#2019-06-2017:10Jacob FordOr is the lack of aws-autoscaling-group-size in my Template JSON a sign that we've improperly configured our cluster for High Availability?#2019-06-2017:12ghadiSpeaking without actually verifying: it should correspond to the ASG size#2019-06-2103:35Brian Abbott... if I am using Datomic Cloud, can I inspect/understand its performance characteristics using AWS X-Ray?#2019-06-2103:35Brian AbbottHas Datomic incorporated X-Ray into Cloud or Ions?#2019-06-2106:59steveb8n@briancabbott if you use pedestal as your web server then you can use the open-tracing sample, although that doesn’t save to x-ray. I want to do the same thing. I’m planning to use the aws client to save spans in my Ion code#2019-06-2115:14Brian Abbottthank you @steveb8n#2019-06-2117:30Katie LefevreHi Folks. We have an old datomic/cassandra cluster and are seeing some weird behavior. We're using https://github.com/bostonaholic/datomic-export to export a csv. It's working for most tables but not for a few very small reference tables. This is probably not enough info, so please let me know what other details I should provide. Appreciate any and all help!#2019-06-2117:40favilaWhat is a "table"? what does "working" and "not-working" look like? (i.e. what is your get vs expect?)#2019-06-2117:45Katie LefevreAh, yes. We definitely are new to datomic#2019-06-2117:45Katie LefevreA particular set of attributes#2019-06-2117:46Katie Lefevreare empty? I suppose? 
when we see that they are populated if we check the entity id#2019-06-2117:47favilaare you seeing rows in the csv you don't expect to see, or with unexpected results, or...?#2019-06-2118:04Katie LefevreNo we're seeing no rows#2019-06-2118:05Katie LefevreI think we have a way around it using the ui...#2019-06-2118:10favilaui of what?#2019-06-2118:11faviladatomic-export is best used with a specific list of attributes to export#2019-06-2118:12favila(the "include" argument)#2019-06-2118:12favilayou will then get only those entities with any of those attributes. if no entities have any assertions on those attributes, you will get no rows#2019-06-2118:13favilaif you don't supply an "include" argument, you're going to get every entity with any attribute, plus you're going to eat a lot of memory while it figures out what those entities are#2019-06-2118:18Katie Lefevrethanks favila!#2019-06-2123:53johnjIf you retract an entity that has :db.unique/value, that value can't be asserted again?#2019-06-2200:09johnjpebkac#2019-06-2213:41tdantashey guys, quick question
suppose I have my application running (3 instances) using the datomic peer library. If I push data on application #1 and read data from application #2, will the read be consistent with the latest data transacted? (reading my own write)#2019-06-2216:04favilaThere’s possible network propagation delay. A peer will always see a consistent view for a specific t (no half-committed or stale reads), but one peer may know about a TX before another one#2019-06-2216:05favilaThe sync function is to solve this problem#2019-06-2216:06favilahttps://docs.datomic.com/on-prem/client-synchronization.html#2019-06-2219:17tdantasHey @U09R86PA4 thanks ... #2019-06-2221:19tdantas@U09R86PA4 how do you usually let users read their own writes on your system? sticky sessions? (assuming we are not using the peer server)#2019-06-2222:34favilaReturn a t to the client; let the client use that t for future reads#2019-06-2222:34favilaOr just return the new record#2019-06-2222:34favilaIn the POST that does the write#2019-06-2222:34favila(Or whatever)#2019-06-2308:52tdantasyeah, I’m returning the new record, but if the user refreshes the page they could reach a different application instance -> different db -> different attribute value#2019-06-2313:44favilaYou can sync by time also#2019-06-2313:45favilaThis is usually a difference of milliseconds not multiple seconds#2019-06-2213:44tdantasfrom the documentation, datomic says “ACID compliant and fully consistent.” I’m assuming the transactor is going to publish the changes to all peers before ‘committing’ the value#2019-06-2213:45tdantasis that correct?#2019-06-2318:50Yehonathan SharvitHow can one handle optional values in datomic queries?
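[Editor's aside] favila's return-a-t approach above, sketched with the peer API (`conn`, `tx-data`, and `query` are assumptions). Commits do not wait for peers to catch up; sync is how a peer catches up on demand:

```clojure
;; Writer: transact and hand the basis t back to the caller.
(def t
  (let [{:keys [db-after]} @(d/transact conn tx-data)]
    (d/basis-t db-after)))   ; include this t in the write response

;; Reader (possibly another peer): block until this peer has seen t,
;; then query that db value.
(let [db @(d/sync conn t)]
  (d/q query db))
```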
For instance I have movies with a mandatory title and an optional release-year.
I’d like to retrieve all the movies, their titles and their release-year, or nil if release-year is not set#2019-06-2319:04Yehonathan SharvitI found it. We need to use get-else#2019-06-2322:50hadilsHow do I get Cursive to load an s3 repo for ion jars? I am setting up a new computer and don't have the old one for reference.#2019-06-2322:54hadilsI know, I made a mistake...#2019-06-2322:58hadilsNvm, I am beginning to figure it out.#2019-06-2409:24Matheus Moreirahello, all! i am learning datomic and am unsure how to work with :db.cardinality/many relationships when adding new entities to the collection. e.g.: i have a rule entity that can have multiple :rule/executions (a ref type). from time to time i have to add a new execution to the list. what is the appropriate way of doing it?#2019-06-2409:25Matheus Moreiraone option would be to create an attribute :execution/rule instead of :rule/executions and create new independent executions when necessary. but i am not sure how to do it when i have :rule/executions…#2019-06-2409:50snurppaIf you are just looking for how to transact it, something like this should work:
[[:db/add existing-rule-id :rule/executions "new-exec"]
{:db/id "new-exec"
:exec/attribute :some-value
:exec/another 123
...}]
So you just add a new ref (here temp-id "new-exec") to the :db.cardinality/many attribute (`:rule/executions` in your case). And in same transaction, define that new execution entity as map with correct :db/id.#2019-06-2409:52snurppaAlthough you have many relationship for an attribute, you can “append” new refs as “single value” for the attribute.#2019-06-2411:24steveb8nalso watch out for if you need an ordered :many relationship. this is not automatic and non-trivial currently in Datomic#2019-06-2416:30Matheus Moreiranice, thanks for the answers!#2019-06-2417:18spiedenis anyone aware of a library that can convert a raw datum to its symbolic/transaction form given a dbval? bonus for resolving its :db/id to a any unique-identity attributes on the corresponding entity#2019-06-2417:19ghadiexample @spieden?#2019-06-2417:21ghadiwhat is a "symbolic/transaction form"?#2019-06-2417:21spiedensure something like: #datum [1010101 2323232 454545 626262 true] -> [[:some/unique-identity "human-readable-id"] :some/rel-attr [:some/other-unique-identity "also-human-readable"] 626262 true]#2019-06-2417:22spiedenbasically find idents for the e/a/v numeric ids#2019-06-2417:22favilathere's not going to be a universal way to do that#2019-06-2417:23spiedenfor the unique-identity attrs you mean?#2019-06-2417:23favilayes#2019-06-2417:23spiedensure, there can be ambiguity there#2019-06-2417:23favilafor the A it's just (d/ident db-after a)#2019-06-2417:23spiedenok yeah#2019-06-2417:24favilabut attrs don't necessarily need to be named#2019-06-2417:24spiedeni see#2019-06-2417:24spiedenthis is just for debugging the result of applying a transaction#2019-06-2417:24spiedeni generate big ones and let datomic prune the redundant stuff for me#2019-06-2417:25spieden.. 
and am in a situation where i need to see the “diff” that was applied in a human readable form so i can understand what’s going on#2019-06-2417:25favilagotcha#2019-06-2417:26favilaIf there's an attr you have in mind you can try to resolve it specifically#2019-06-2417:26favilaanyway short answer is no, I'm aware of no lib#2019-06-2417:27favilaI think because it's hard to come up with a universal solution, and it's not very hard to roll your own#2019-06-2417:27spiedenok i might make a best effort general one#2019-06-2417:27spiedenthere are enough different kinds of entities floating around#2019-06-2417:27spieden.. and the relations are a big part of it#2019-06-2417:28favilain my mind I am imagining mapping some-fn over E A V#2019-06-2417:28favilaand making a list of fns that take db and eid and return either nil or some other representation#2019-06-2417:29favilabottom out with the eid itself (or maybe even the tempid)#2019-06-2417:30favila(map (fn [[e a v tx op] ((some-fn unique-attr1 unique-attr2 d/ident #(do %2)) db-after e)) ,,,))#2019-06-2417:30spiedenyeah that makes sense#2019-06-2417:30favilaone for each EAV of course#2019-06-2417:31spiedenthis reminds me, is something like this possible in datomic?
'{:find [?e ?es ?v]
:in [$ [?es ...]]
:where [[?e ?es ?v]]}#2019-06-2417:31favilayep#2019-06-2417:31ghadibetter to use the datoms API for that#2019-06-2417:32favilajust because of result size#2019-06-2417:32spiedeni was thinking you could query out the ids of all the unique-attrs and pass them there as ?es#2019-06-2417:32spiedenoh yeah, i would constrain in more#2019-06-2417:32favilayou can feed the entire result in, implement the some-fn thing with rules#2019-06-2417:33spiedenoh i see#2019-06-2417:33favilayou'll get duplicates if they match multiple ways#2019-06-2417:33spiedensure yeah#2019-06-2417:33favilabut, you can include an index per datom too, then group-by the result#2019-06-2417:33favilaso, it's like seeing all possibilites at once#2019-06-2417:34spiedenyeah that would be fine for just making something legible#2019-06-2417:34spiedencool thanks for the advice guys#2019-06-2420:56kvltI am about to import ~2m file entities into datomic that are all a component of different companies. Is it best practice to partition by "company" or by "file"?#2019-06-2420:56kvltOr is there another option I hadn't considered#2019-06-2522:34spieden@favila @ghadi here’s what i came up with for that problem yesterday — it’s helping https://gist.github.com/spieden/1f559ab5508d44b837647ab894e4795a#2019-06-2522:39favilaconsider using d/attribute for attribute metadata#2019-06-2522:40spiedenok thanks#2019-06-2522:46favilathis looks handy. 
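[Editor's aside] ghadi's suggestion above to reach for the datoms API instead could look like this (sketch; `unique-attr-ids` is an assumed collection of attribute entity ids):

```clojure
;; One lazy :aevt index walk per attribute, instead of materializing
;; a potentially huge query result set.
(mapcat (fn [attr] (d/datoms db :aevt attr)) unique-attr-ids)
```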
Do you mind if I fork it and play around with it?#2019-06-2522:46spiedenof course, feel free =)#2019-06-2616:12okocim(into #{} (comp (map :make) (map str/lower-case)) [{:make "Volvo"} {:make "Audi"} {:make "volvo"}])#2019-06-2616:12shaun-mahoodFor https://docs.datomic.com/cloud/query/query-data-reference.html#return-maps, is there a way to return namespaced keys?#2019-06-2616:14favilaI assume namespace/name?#2019-06-2616:14favilaI haven't tried it#2019-06-2616:15shaun-mahoodYeah, that works - thanks!#2019-06-2616:15Joe LaneOh awesome!!!#2019-06-2616:15faviladoublecheck that (namespace the-kw) and (name the-kw) give what you expect#2019-06-2616:16favilaand it's not doing something silly like making the name part "ns/name" (i.e. with a slash in it)#2019-06-2616:16favilathat is a bug I can imagine happening#2019-06-2616:18shaun-mahoodEverything seems to work right! Thanks so much, I couldn't think of what syntax to even try with it 🙂#2019-07-0412:12holyjakNeither would have I. It would be helpful to get an example of this added to the docs.#2019-06-2616:21shaun-mahood@jaret Might be worth adding how to return namespaced keys in return maps to the docs, I wasn't able to figure it out without help#2019-06-2616:25jaretThanks! I’ll take a look at adding an example.#2019-06-2620:22ozNot sure how active #ions-aws. Maybe this is a better channel for this issue. https://clojurians.slack.com/archives/CC0A7PUHF/p1561577826007800#2019-06-2622:07shaun-mahoodI think I just crashed my datomic cloud - I retracted an entity, then tried to pull the entity and everything hung (using datomic-socks-proxy to connect). My cloudwatch sat stuck at 6 pending ops for a while and I couldn't reconnect to the socks proxy so I tried rebooting the EC2 instances. When I tried to restart the compute instance, it went from "starting up" to "shutting down" and was then terminated by AWS - I don't think I told it to do that!
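[Editor's aside] A sketch of the namespaced return-maps syntax confirmed in the exchange above: a :keys symbol containing a slash should yield a namespaced keyword (attribute names are illustrative):

```clojure
;; :keys with qualified symbols -> namespaced map keys.
(d/q '[:find ?make
       :keys car/make
       :where [_ :car/make ?make]]
     db)
;; each result map should be keyed by :car/make
```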
My socks proxy instance started back up without a problem, but of course it can't do anything. I'm going to try a compute-only upgrade and see if things come back.#2019-06-2622:10shaun-mahoodThis is all with the solo topology.#2019-06-2622:17ghadiwhat version of Datomic solo template @shaun-mahood?#2019-06-2622:29shaun-mahoodI'm not sure which version I was on, it looks like I originally created the compute stack on 2019-03-28, and I think that was an upgrade from the original combined stack. So I probably used CFT 470-8654, is there a way I can find out? It was pretty recent, but I just upgraded to 477-8741. Both my compute node and socks proxy are running again but the socks proxy can't connect now.#2019-06-2622:31ghadimake sure your bastion security group allows ingress#2019-06-2622:31ghadisee if you have the same error on 477#2019-06-2622:34shaun-mahoodBastion security group looks ok to me, I just followed the instructions on the getting started page and nothing changed. The bastion is hung on the line client_input_global_request: rtype #2019-06-2623:09shaun-mahoodI tried updating it to remove and put back on the bastion, still nothing. Going to put in a ticket for it.#2019-06-2715:20ennIs it possible to override the path where the Datomic peer library looks for transactor-key.jks?#2019-06-2715:40favilajdbc?#2019-06-2715:40ennSorry, I don't follow#2019-06-2715:41ennIn this case the data store is Dynamo ... so I don't think there is JDBC involved ... but I could definitely be mistaken#2019-06-2715:41favilathere's no datomic specific credentialing or jks#2019-06-2715:41favilait's all storage-specific#2019-06-2715:42favilaI'm only aware of jdbc ssl needing client and/or server keystores#2019-06-2715:43favilain the jdbc case it goes into the connection string#2019-06-2715:43ennI believe this key is used for communication with the transactor, not the data store#2019-06-2715:43faviladid you make it? 
how did you come across it?#2019-06-2715:43ennThe last Datomic item in the stacktrace when it doesn't find the file is datomic.artemis-client/create-session-factory#2019-06-2715:45ennI didn't make it. It's packaged with Datomic. I learned about it when I got an exception trying to start the peer library in an environment where the class path does not map to the filesystem in the way that the code seems to expect and it could not find the file.#2019-06-2715:47favilaI've been using datomic for 6 years and I've never come across this#2019-06-2715:47favilathe closest I can find is this sample code using hornet:#2019-06-2715:47favilahttps://github.com/Datomic/datomic-java-examples/blob/master/src/java/hornet/samples/PingProducer.java#2019-06-2715:47favilanot datomic#2019-06-2715:47favilaim going to check a running app and see if it/s on the classpath#2019-06-2715:50enn$ jar -tf datomic-pro-0.9.5661.jar | grep transactor-key
datomic/transactor-key.jks
#2019-06-2715:51favilaok then#2019-06-2715:52favilaI suspect you can't move it#2019-06-2715:52favilachanging it would mean providing a netty option#2019-06-2715:52favilapiercing through datomic->artemis->netty#2019-06-2715:54favilahow did this get messed up? uberjaring? strange repackaging?#2019-06-2715:55favilait's always been a "just works" thing for me#2019-06-2715:55ennThis is on an AWS Lambda, which seems to put everything from the jar under /var/task#2019-06-2715:55ennSo this file lives at /var/task/datomic/transactor-key.jks but Artemis/Netty is still looking for /datomic/transactor-key.jks#2019-06-2715:56favilabut it should be a resource path not a literal path#2019-06-2715:57ennYes, but it looks like Artemis is bypassing the resource API#2019-06-2715:57favilaso if /var/task is on the classpath, it should find it#2019-06-2715:57ennbasing that on this part of the stacktrace I see:
Caused by: java.io.FileNotFoundException: /datomic/transactor-key.jks (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at java.net.URL.openStream(URL.java:1045)
at org.apache.activemq.artemis.core.remoting.impl.ssl.SSLSupport.loadKeystore(SSLSupport.java:104)
#2019-06-2715:57favilaif that were true, wouldn't it never work?#2019-06-2715:58favilatransactor-key is always in a jar#2019-06-2715:58ennYes, that is something I've wondered#2019-06-2715:58favilaI mean, normally#2019-06-2716:00ennLooking at code it does look like Artemis gets that URL by calling findResource ... so, yeah, I'm not sure what is going wrong on the lambda that that isn't working.#2019-06-2716:00ennhttps://github.com/apache/activemq-artemis/blob/master/artemis-core-client/src/main/java/org/apache/activemq/artemis/core/remoting/impl/ssl/SSLSupport.java#L283-L299#2019-06-2716:01favilamaybe artemis/netty version fighting?#2019-06-2716:01faviladatomic deps on an artimis "uberjar" which includes netty-all#2019-06-2716:01favilaactually that's imprecise#2019-06-2716:02faviladatomic deps on an older version of artemis which deps on netty-all, which includes every netty class#2019-06-2716:02favilait's not a pure dependency pom#2019-06-2716:02favilaso getting two netty versions on the classpath was a common problem#2019-06-2716:02favilaperhaps that's what's happening?#2019-06-2716:05ennAh, interesting, I will look at the deps tree#2019-06-2716:05favilaThis is what I kinda blindly do now:#2019-06-2716:05favila[com.datomic/datomic-pro "0.9.5786"
:exclusions [org.slf4j/slf4j-nop org.slf4j/slf4j-log4j12
;; netty-all is a netty "uberjar". When combined
;; with other things that depend on netty sub-deps
;; directly, we get duplicate netty classes of
;; different versions on the classpath.
;; In our case, artemis-client-core depends on
;; netty-all 4.0; google-cloud-logging-logback
;; transitively depends on specific netty packages
;; (not netty-all); when combined, we get
;; duplicate netty classes of different versions
;; on the classpath.
;; Solution: exclude netty-all and depend on what
;; artemis *actually* requires. Then maven
;; can resolve the dep tree correctly.
io.netty/netty-all]]#2019-06-2716:05favilaI got burned so hard#2019-06-2716:06favilaI have even considered repackaging datomic peer lib with a different pom that fixes this issue#2019-06-2716:07favilaelsewhere I will include this:#2019-06-2716:07favila;; Source for artemis netty deps:
;;
;; artemis-commons
[io.netty/netty-buffer "4.1.34.Final"]
[io.netty/netty-transport "4.1.34.Final"]
[io.netty/netty-handler "4.1.34.Final"]
;; artemis-core-client
[io.netty/netty-transport-native-epoll "4.1.34.Final" :classifier "linux-x86_64"]
[io.netty/netty-codec-http "4.1.34.Final"]
#2019-06-2716:07ennI don't think I have anything else requiring netty#2019-06-2716:08favilacould lambda be doing it?#2019-06-2716:08favilaI guess this wouldn't fix the issue if it were#2019-06-2716:08ennLooking more at that Artemis validateStoryURL function I am thinking that the DWIM nature of that might explain why it works someplaces and not others#2019-06-2716:08favilaah#2019-06-2716:08ennit tries to handle URLs, file paths, and resource paths#2019-06-2716:10favilaI think it's ok to escalate to a datomic support ticket at this point#2019-06-2716:10ennYeah, I have, just trying to dig a little while I wait for their response#2019-06-2716:10ennI wonder if this is what's happening:#2019-06-2716:11ennin normal environments, file.exists() in validateStoreURL returns false so it falls through to the call to findResource, which finds it in the jar, and all is well#2019-06-2716:13ennLambda unzips the jar into /var/task which may also be the working directory of the process (not sure) so maybe that causes file.exists() to return true, so it returns the result of file.toURI().toURL() instead of calling findResource#2019-06-2716:14ennI don't have an explanation for why, if that's true, the subsequent call to java.net.URL.openStream fails#2019-06-2717:29favilame neither#2019-06-2720:40okocimthis may not be the right place to ask this question, but does anyone in here have a link to docs or refrence for using the cognitect http-client? I am trying to make use of this library since it’s already in my project, but I cannot for the life of me figure out how to send a request body on an http post. I am setting a :body key on the request that is a java.nio.ByteBuffer, and the API that I’m calling is complaining to me that my request doesn’t have a body. 
I’d like to avoid switching to another library if I can, but I am a bit nonplussed about how to use this library.#2019-06-2720:47alexmillerI don’t think there are any docs, but the source is in the jar file#2019-06-2721:11ghadi@okocim it's very very barebones#2019-06-2815:57okocimThanks for your help. You were right, my buffer positions were incorrect#2019-06-2721:12ghadimake sure that when you hand in the ByteBuffer, the "read head" (.position) is not at the limit#2019-06-2721:17NolanHow have others approached transitive dependency conflicts in datomic cloud? I’m using a library that depends on commons-codec@1.11, while 1.10 (the datomic cloud dependency) causes a compilation error. I recognize it’s more of a JVM soft-spot than anything, I’m mainly curious how others have broached this in cases both where the conflicting library can be forked and when it cannot.#2019-06-2721:43ghadiis it a public library that needs 1.11 @nolan?#2019-06-2721:44NolanThis instance is, yes. So in the worst case I could fork and maintain a version that works on 1.10. But I’m also curious about how I’d go about resolving the problem if that weren’t an option. It seems like that would be a difficult position to be in.#2019-06-2721:45NolanIs AOT relevant here? Is that a viable solution path? 
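[Editor's aside] An illustration of ghadi's ByteBuffer point above (plain Clojure, no Datomic needed): a freshly written buffer has its position at the end of the data, so a consumer sees nothing until you flip it:

```clojure
(let [buf (java.nio.ByteBuffer/allocate 16)]
  (.put buf (.getBytes "hello" "UTF-8"))
  ;; position is now 5: a reader starting here sees no body
  (.flip buf)          ; limit -> 5, position -> 0
  (.remaining buf))    ; => 5 readable bytes
```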
I’m not familiar enough with the JVM to know whether the conflict would still be experienced#2019-06-2721:46NolanI guess the package names would still conflict.#2019-06-2721:46ghadiAOT isn't relevant#2019-06-2721:46ghadi(although if a library AOT's its code, that is not good)#2019-06-2721:47ghadigenerally a library might ask for latest (1.11) but not need the things from latest#2019-06-2721:47ghadiwhich library, if I may ask?#2019-06-2721:47ghadiIn general, with Ions it has to prefer the Ion's version of a conflict, where there is a conflict#2019-06-2721:48NolanIn this case (`clj-multiformats`) it’s a super trivial conflict—`clj-multiformats` uses encodeHexString with 2 arguments, which isn’t in the 1.10 API.#2019-06-2721:48ghadiheh#2019-06-2721:49ghadithat's a "heh" of empathy 🙂#2019-06-2721:49ghadihex formatting is one thing that isn't in the Java stdlib directly... yet#2019-06-2721:49ghadi(but anyways, Datomic Ions are Java 8)#2019-06-2721:50ghadiwe have base64 covered in the JVM now ....#2019-06-2721:50ghadia fork would probably be appropriate in that case#2019-06-2721:50ghadiwhich is really really trivial with git dependencies#2019-06-2721:50ghadinothing to compile. just fork, commit, change your dependency sha#2019-06-2721:51ghadi(and nothing to mvn release, etc.)#2019-06-2721:54NolanRight on. Agreed. The whole thing is just a minor bummer, since its such a surface level incompatibility (I believe 1.11 even accretes appropriately)#2019-06-2721:55NolanI’d be curious if bumping commons-codec would ever make it onto the cloud roadmap. It appears that version bumps are something that releases occasionally cover, but commons-codec is also a pretty fundamental thing.#2019-06-2721:56NolanBut thanks a million for your input @ghadi. Forking won’t be a huge hurdle here#2019-06-2721:57ghadinp, cheers#2019-06-2814:38jaretWe have released a new version of Client-pro, Datomic On-prem, and Datomic Cloud. See announcements below for more info:
https://forum.datomic.com/t/new-client-pro-release-0-9-37/1041#2019-06-2814:38jarethttps://forum.datomic.com/t/datomic-0-9-5927-now-available/1040#2019-06-2814:39jarethttps://forum.datomic.com/t/datomic-cloud-480-8770/1042#2019-06-2814:40stuarthalloway@nolan if we bump codec, any reason not to bump to 1.12 (i.e. do you need 1.11 specifically?)#2019-06-2815:27Nolan@stuarthalloway no specific need for 1.11. 1.12 would probably make most sense if you guys were to bump it#2019-06-2815:28stuarthallowayWill test ASAP, this is an easy change if they didn't break anything.#2019-06-2816:09NolanRight on, thanks for looking into this Stu! Would be huge for us, but I also recognize there’s a lot beyond my line of sight here. Will be following closely!#2019-06-2815:03robert-stuttafordwahey! \o/ tuples! specs!#2019-06-2815:08shaun-mahoodAm I reading it right that tuples can be used as composite keys? If so, that's fantastic! Are there any non-obvious surprises that I would run into if I treat them like I would a SQL composite key?#2019-06-2815:10robert-stuttafordyep! no idea on the SQL thing. i'm also curious about how the performance changes#2019-06-2815:11robert-stuttaford@shaun-mahood http://blog.datomic.com/2019/06/tuples-and-database-predicates.html#2019-06-2815:17favilaQuestion: for composite attrs, list of allowed types does not mention refs, but the example in the docs uses a ref#2019-06-2815:18favilacan you put a ref in a composite attr? is the system aware of it's ref-ness? this seems like an easy way to accidentally introduce an internal weak eid reference#2019-06-2815:18stuarthallowayHm, I feel tricked, this seems like more than one question 🙂#2019-06-2815:19stuarthallowayYes, yes, and probably yes.#2019-06-2815:17shaun-mahoodReading through all the changes right now - that's some amazing timing! 
I added https://receptive.io/app/#/case/17932 3 years ago after asking @stuarthalloway about it at my first Datomic training, and I just finished writing the last required feature in my first real Datomic application for work yesterday. It's the same domain that I wanted composite keys for, so I should be able to add them in before we move it to full production mode. Days like this make me feel like the universe (or the Datomic team) really has my back. 🙂#2019-06-2815:25stuarthalloway"The day after you asked for it" would probably have been better timing... 🙂#2019-06-2815:29shaun-mahoodOk, next time I want something big and new I'll expect it right away then! 🙂#2019-06-2815:26fmnoisehey guys, small question, if I have 2 databases on transactor, are transactions serializable in scope of each db, or in scope of transactor?#2019-06-2816:48akielIn scope of each DB. There is no coordination between DBs. #2019-06-2818:10fmnoisethanks!#2019-06-2818:21fmnoiseso if I have multi-tenant app, then I can scale with a database for each tenant 🤔#2019-06-2816:37souenzzoAs a datomic-cloud user, please bump datomic-free. There is some tooling that uses it.#2019-06-2816:46akielI can only second this. I’m a paying on-prem user and like to see all flavors of Datomic at the same version. I also will release the free Docker image ASAP.#2019-06-2817:01val_waeselynckSoooooo theoretically, couldn't you assert an Entity Spec that ensures you can't ever modify the entity, including retracting the Entity Spec itself? Looks like a wonderful prank. 😈#2019-06-2817:07stuarthallowayNo.
"Specs must be requested explicitly per transaction+entity" -- https://docs.datomic.com/cloud/schema/schema-reference.html#entity-specs#2019-06-2817:09val_waeselynck@stuarthalloway thanks, here's the part of the docs that confused me:
> Entity specs can be asserted or retracted at any time, and will be enforced starting on the transaction after they are asserted#2019-06-2817:10val_waeselynckMight want to change to "and will be enforceable starting on the transaction after they are asserted"#2019-06-2817:11favilaMy takeaway was :db/ensure is a magic attribute, it doesn't actually add any datoms#2019-06-2817:11favilait adds a postcondition to the current tx, not the entity#2019-06-2817:13favilaso, does that mean it won't show up in tx logs? "decanting" will lose the ensure assertions?#2019-06-2818:06stuarthalloway@U09R86PA4 nails it. ":db/ensure is a virtual attribute. It is not added in the database; instead it triggers checks based on the named entity." -- https://docs.datomic.com/cloud/schema/schema-reference.html#entity-specs#2019-06-2817:24kennyIn Datomic Cloud, attribute predicates and entity predicates just need to be on the classpath of your application, not installed as Ions, correct?#2019-06-2817:41stuarthallowayDatabase predicates must be on the classpath where transactions execute. In Cloud this means ions. In On-Prem, it is up to you to start a transactor with the appropriate code on the classpath.#2019-06-2818:06souenzzocan I use many :db/ensure ?
like [[:db/add :souenzzo :name "Enzzo"] [:db/add :souenzzo :db/ensure :op1] [:db/add :souenzzo :db/ensure :op2]] ?#2019-06-2819:55dangercoderHow do you guys handle schemas in production, do you let your application code-base run through your schema updates (functions) on startup and send them to datomic or do you have an external application which takes care of it?#2019-06-2819:56dangercoderright now I just transact my schemas as a part of my application init.#2019-06-2914:15akielI do the same. Transacting my schema on every startup.#2019-06-2919:11csmwe wrote a tool that we run to update the schema. We have lots of microservices using datomic, so transacting on startup doesn’t make sense for us#2019-06-2919:13hadilsWe transact the schema on startup. When we move to microservices, we won't be doing that anymore. @UFQT3VCF8 -- how did you architect the use of datomic with microservices?#2019-06-2920:15csmthere’s probably a lot I could talk about, but overall it’s simple — services just connect to DynamoDB and the transactor. 
We tried using the peer-server and client API, but the peer API was much more efficient under load#2019-06-3017:18eoliphantSame here, used to use conformity as it felt weird to not have some kind of "schema migration" coming from the relational world, but have since gone to just transacting stuff in#2019-06-2907:16robert-stuttafordi'm keeping some notes about Datomic's new tuples stuff here https://gist.github.com/robert-stuttaford/e329470c1a77712d7c4ab3580fe9aaa3#2019-06-2917:33eoliphantHey @jaret @stuarthalloway and co, based on the cloud 480-8770 release, I think that “NOTE Return maps are new in client 0.8.78, and coming soon for Datomic Ions.” text here: https://docs.datomic.com/cloud/query/query-data-reference.html#return-maps should be removed right ?#2019-06-3000:30johnjHas anyone got java.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity: :db/tupleAttrs when doing a query after upgrading to 5927 for on-premise?#2019-07-0100:12marshallhttps://docs.datomic.com/on-prem/deployment.html#ugprading-schema#2019-07-0100:49johnjIs dynamo supported for :uri? is failing for me (after upgrading both the transactor and peer)#2019-07-0100:07mjmeintjesHi. I'm new to datomic, so not sure if I'm missing anything: I recently tried upgrading to the latest version of datomic (5927) in order to use tuples. However, when I specify the schema, I get the following error: :db.error/not-an-entity Unable to resolve entity: :db/tupleTypes#2019-07-0100:10mjmeintjesThis is when running the following code (d/transact c
[{:db/ident :player/location
:db/valueType :db.type/tuple
:db/tupleTypes [:db.type/long :db.type/long]
:db/cardinality :db.cardinality/one}])#2019-07-0100:10marshall@mjmeintjes https://docs.datomic.com/cloud/operation/upgrading.html#datomic-schema#2019-07-0100:11marshallErr thats the cloud doc#2019-07-0100:11marshallOne sec#2019-07-0100:12marshallhttps://docs.datomic.com/on-prem/deployment.html#ugprading-schema#2019-07-0100:12marshallOn prem ^#2019-07-0100:14mjmeintjes@marshall Excellent, thanks, that worked! I assumed it was something to do with upgrading the schema but couldn't find the right function to call.#2019-07-0100:14mjmeintjes(d/administer-system {:uri "DB-URI"
:action :upgrade-schema})#2019-07-0100:16marshall👍#2019-07-0100:54johnj#2019-07-0100:55marshallWhat's in your deps.edn#2019-07-0100:56johnj#2019-07-0100:58marshallCan you create and use a new db on that system?#2019-07-0100:58marshallAnd you have upgraded the transactor to the latest?#2019-07-0101:00johnjyes, transactor is latest, let me try creating a new db#2019-07-0101:01johnjthe transactor crashes after running d/administer-system#2019-07-0101:02marshallCan you open a ticket with support and include your transactor logs? I can have a look at it tomorrow morning#2019-07-0101:02johnjcreating database does work sure, thanks#2019-07-0101:03marshallOk. Glad that works.#2019-07-0116:16fmnoiseis there any way to call a peer function from a transactor function?#2019-07-0116:22favilatransaction functions run on the transactor whereas peer functions run on the peer. They have different environments#2019-07-0116:22favilayou could include the peer function on the transactor's classpath, but that's something you need to arrange ahead of time#2019-07-0116:23favilahttps://docs.datomic.com/on-prem/database-functions.html#classpath-functions#2019-07-0116:24fmnoiseI'm just thinking how I can call datomic api functions inside the transactor functions using the d/... alias but the same doesn't work with any other namespace#2019-07-0116:25favilayou could also install your peer function into the db and call it from your tx function with d/invoke#2019-07-0116:25fmnoisehow does it know that d/...
means datomic/api#2019-07-0116:25faviladatomic api functions are on the transactor's classpath#2019-07-0116:26fmnoiseah I see#2019-07-0116:26favilathere's no magic to tx fns#2019-07-0116:27favilaaliases and requires are syntatic, it's not automatically shipping code#2019-07-0116:27favilathe only thing shipped is a pr-str of the code body#2019-07-0116:27favilaeverything else needs to be available to the transactor's runtime#2019-07-0116:28favilaclasspath functions IMO are a better solution most of the time#2019-07-0116:29fmnoiseI just thought for some reason that code which uses transactor is in its classpath by default#2019-07-0116:29favilayou maybe are confused because query doesn't work this way?#2019-07-0116:29fmnoiseyep, probably#2019-07-0116:30favilafunctions in a query run on your peer, so the same namespaces are available#2019-07-0116:30favilayou can invoke a fn in a query that you just def-ed a second ago in a repl#2019-07-0116:37fmnoisethanks @U09R86PA4#2019-07-0116:18fmnoiseeg I have myproj.utils.datetime ns with function shift-date
and I want to call it from datomic tx function
currently I have
Execution error at datomic.error/deserialize-exception (error.clj:154).
Syntax error compiling at (0:0).
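favila's point about the `pr-str` of the code body can be seen without a transactor at all. A minimal sketch (the `myproj.utils.datetime/shift-date` symbol is the hypothetical helper from the question above):

```clojure
;; Minimal sketch of why a tx fn body cannot see peer namespaces: the
;; body is shipped as plain text (a pr-str of the form) and re-read on
;; the transactor, so symbols resolve only against the transactor's
;; classpath -- datomic.api is there, your peer namespaces are not.
(def code-body
  '(myproj.utils.datetime/shift-date the-date 3))

;; What actually travels to the transactor: just this string.
(def shipped (pr-str code-body))
;; shipped is "(myproj.utils.datetime/shift-date the-date 3)"

;; Reading it back yields bare symbols; no aliases, requires, or vars
;; from the sending peer's environment survive the round trip.
(= code-body (read-string shipped))
```

This is why favila's classpath-function suggestion works: the symbol resolves fine once the namespace is available in the transactor's own runtime.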
#2019-07-0117:24grzmI see that the on-cloud Cloudformation templates have been bumped across the board for 480-8770, however only Compute is mentioned in the Release history. To be clear, if I'm on storage-470-8654.1, I don't need to update storage, correct? (doing a quick diff of the two storage templates indicates they aren't identical, though I haven't gone further to see if it's only whitespace)#2019-07-0117:29marshall@grzm Correct. You’ll see here: https://docs.datomic.com/cloud/operation/upgrading.html#how-to-upgrade
that you can check when the latest storage update was, and anything more recent than that is compute-only#2019-07-0117:33grzmThanks for confirming. Is there anything I need to do to coordinate ion library releases with the upgrades? Or do I only need to update the ion library when I want to use the new features?#2019-07-0117:35marshallwhen you upgrade Datomic the version of the ion libraries running on your datomic nodes will be updated to whatever is the latest at that time#2019-07-0117:35marshallbut there shouldn’t be any forward-breaking changes#2019-07-0117:35marshallif/when you push/deploy you may see that your deps are overridden#2019-07-0117:35grzmOh, right. Silly me. I think you owe me a playful jab for that one the next time we see each other in person.#2019-07-0117:29marshallexcluding anything listed as a critical release#2019-07-0117:42dlhello guys, I am trying to figure out how to make use of websockets with Datomic Cloud. So that pushes are directly sent to the user#2019-07-0117:42dljust like when using Datomic's tx-report-queue#2019-07-0118:15Joe Lane@dlorencic1337 What have you tried?#2019-07-0120:00rapskalianIs this anything to be alarmed about (pun intended)? It seems like CloudWatch is having trouble locating the auto-scaling policies for the datomic DynamoDB tables. I’ve updated my stack once or twice via CloudFormation…maybe they’ve been lost somehow?#2019-07-0120:21Joe LaneI’m reading through the pull documentation and it’s referring to a pull syntax like
(d/pull db [_release/artists] led-zeppelin)
but when I attempt it with
(d/pull the-db [_user/recommends] 11263397115183903)
I get No such namespace: _user, however with
(d/pull the-db [:user/_recommends] 11263397115183903)
I get
#:user{:_recommends [#:db{:id 66353327713036842}]}.
Does anyone have an example with the [_user/recommends] syntax that the documentation is referring to? Am I misunderstanding how it works?#2019-07-0120:38favilalink?#2019-07-0120:38favilathat looks like a typo to me#2019-07-0120:38Joe Lanehttps://docs.datomic.com/cloud/query/query-pull.html#reverse-lookup#2019-07-0120:39favilaI think that is just a typo#2019-07-0120:39Joe LaneOk thanks favila.#2019-07-0206:40dl@lanejo01 I have not much experience with Clojure and Datomic, just have tried the tutorials and created the first AWS API Gateway. Just curious if there is a way on how I can push from Datomic to the client via Websockets just like when using Sente with Clojure (https://github.com/ptaoussanis/sente)#2019-07-0209:55fdserrHi there. Who's got the best Dockerfile and Helm chart for Datomic Pro and wants to share them?#2019-07-0213:11rapskalianIs it possible to augment the solo topology with an NLB to take advantage of HTTP Direct? Just ran into Java cold start on my startup project, and I’m not quite at the point that I can justify the prod topo cost. Obviously the NLB would only have a single target, but it might let me bypass lambda.#2019-07-0213:59Joe Lane@cjsauer Set up a cloudwatch event to ping your lambda. It’s waaay simpler than I thought it would be.#2019-07-0214:06rapskalian@lanejo01 okay, that was plan B. I had a little deflambda macro in mind that could check for a “keep warm” header value. That way I could decorate all my ions with that short-circuit logic. #2019-07-0214:08rapskalianI’ll have quite a few lambda functions, so maybe I should write a “keep warm” lambda that pings all the others. Could register them in an atom as part of deflambda perhaps 🤔 #2019-07-0214:09rapskalianDoesn’t really solve the fact that cold starts affect every concurrent execution, but at that point I think the prod topo becomes viable anyway. #2019-07-0214:14rapskalianActually ion-config.edn could just be read to find all the lambdas that need warming. Much simpler. 
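A hypothetical sketch of that "read ion-config.edn" idea: the `:lambdas` shape follows the ion-config format, but the entry names and ion fns below are invented, and actually pinging the lambdas is left out.

```clojure
;; Hypothetical sketch: list the lambda names declared in an
;; ion-config.edn so a keep-warm pinger knows what to hit. The entry
;; names and ion fns here are made up for illustration.
(require '[clojure.edn :as edn])

(defn lambda-names
  "Return the lambda names declared in a parsed ion-config map."
  [ion-config]
  (vec (keys (:lambdas ion-config))))

(lambda-names
 (edn/read-string
  (str "{:lambdas {:get-items {:fn myapp.ions/get-items}"
       "           :put-item  {:fn myapp.ions/put-item}}}")))
;; => [:get-items :put-item]
```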
#2019-07-0214:18Joe LaneDo it in data. You can annotate the ion-config.edn however you want. Or make your own file.#2019-07-0215:03souenzzoAny plans for a fast/forkable memdb on datomic.client.api ?#2019-07-0215:08souenzzoI'm using datomic-client-memdb, which uses datomic-free. But now with com.datomic/client-cloud {:mvn/version "0.8.78"} it does not work anymore
https://github.com/ComputeSoftware/datomic-client-memdb
datomock is also a great tool but does not work on Cloud 😞
https://github.com/vvvvalvalval/datomock#2019-07-0215:19souenzzoAre there docs about :server-type :local?#2019-07-0215:27shaun-mahood@plexus Does https://github.com/lambdaisland/metabase-datomic support Datomic Cloud? I couldn't find any information on it either way.#2019-07-0215:50souenzzo@shaun-mahood probably not. It's hard (and expensive) for a FOSS developer to access Datomic Cloud, since to test it you need to pay for AWS infrastructure 😞
"datomic-peer" has an awesome feature: it's simple and accessible. Just (d/connect "datomic:") anywhere and it's ready to develop tools
"datomic-cloud" you need to set up AWS machines, connect proxies, stay online (harder to run CI)... 😣
https://github.com/lambdaisland/metabase-datomic/blob/master/deps.edn#2019-07-0215:54shaun-mahoodThat's kind of what I figured - there's a lot of awesome things about running local Datomic, and the kind of things Metabase do seem pretty well geared towards it. Thanks for letting me know!#2019-07-0215:57fdserrI can confirm metabase-datomic is (so far) for Datomic on-prem only.#2019-07-0216:01plexus@shaun-mahood you rang 🙂#2019-07-0216:06plexusas mentioned metabase-datomic uses the peer api, so it's not currently compatible with Datomic Cloud. I do think it's doable, and perhaps not even that much work, but I haven't looked into it so far. If there's commercial interest I'd be happy to look into it and make an estimate. The development so far has been funded by http://runeleven.com (@fdserr et al) who don't have a need for Datomic Cloud support at this point. There's still quite a bit that could be improved in general as well, so if more companies would be willing to pitch in this could be something everyone would benefit from.#2019-07-0216:10shaun-mahoodMakes sense - nice to hear that Datomic Cloud sounds doable! I assumed it would be much harder without the peer api. Hopefully there will be enough commercial interest to keep improving things. Thanks for the answer!#2019-07-0216:20hadilsWhat version of com.cognitect.aws/ssm is in the {:url ""}?#2019-07-0216:29alexmillerProbably none, that should be in maven-central#2019-07-0216:30hadilsMy release is not working, Alex.#2019-07-0216:30hadils{:deploy-status "ERROR",
:message "Could not find artifact com.cognitect.aws:ssm:jar:697.2.391.0 in datomic-cloud ()"}#2019-07-0216:31hadilsThis used to work...#2019-07-0216:32hadilsDoes this not work anymore?
(defn release
"Do push and deploy of app. Supports stable and unstable releases. Returns when deploy finishes running."
[args]
(try
(let [push-data (ion-dev/push args)
deploy-args (merge (select-keys args [:creds-profile :region :uname])
(select-keys push-data [:rev])
{:group group})]
(let [deploy-data (ion-dev/deploy deploy-args)
deploy-status-args (merge (select-keys args [:creds-profile :region])
(select-keys deploy-data [:execution-arn]))]
(loop []
(let [status-data (ion-dev/deploy-status deploy-status-args)]
(if (= "RUNNING" (:code-deploy-status status-data))
(do (Thread/sleep 5000) (recur))
status-data)))))
(catch Exception e
{:deploy-status "ERROR"
:message (.getMessage e)})))#2019-07-0216:32alexmillerThe error is misleading - it checks every repo but just reports the last error #2019-07-0216:33alexmillerssm is https://mvnrepository.com/artifact/com.cognitect.aws/ssm#2019-07-0216:33hadilsI upgraded to 480--8770#2019-07-0216:33alexmillerWhat version of tools.deps.alpha are you using?#2019-07-0216:34alexmillerOr clj?#2019-07-0216:34hadilsOh, I am using a new computer. How do I install tools.deps.alpha onto MacOS?#2019-07-0216:35alexmillerJust back up and tell me from the beginning what you’re doing#2019-07-0216:36hadilsOk, I have a new MacOS laptop. I am pushing my code to Datomic Production topology for the first time since getting this computer.#2019-07-0216:37alexmillerWhich uses clj right?#2019-07-0216:37hadilsYes. The version is 1.10.1.458.#2019-07-0216:43hadilsI just changed tools.deps.alpha to 0.7.516.#2019-07-0216:43hadilsStill doesn't work.#2019-07-0216:45alexmillercould you humor me on trying something?#2019-07-0216:45hadilsOf course!#2019-07-0216:46alexmillerbrew uninstall clojure
curl > clojure.rb
brew install clojure.rb
#2019-07-0216:46alexmillerbasically a forced downgrade to an older version#2019-07-0216:46alexmillerthen try it and see if it works#2019-07-0216:46hadilsOk.#2019-07-0216:50hadilsThat works! Thanks Alex.#2019-07-0216:50markbastianI'm really liking the new tuple features, especially for defining composite keys. Thanks! One question, though. It appears that if you use a composite key and want to update it you'll need to explicitly add that key to the transaction after the entity has been initially installed. Here's an example:
;Relevant composite key in schema. Other fields (person, time, balance) are primitives with cardinality one
{:db/ident :person+time
:db/valueType :db.type/tuple
:db/tupleAttrs [:person :time]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
Now, given a connection I transact my schema and a single initial entry:
@(d/transact conn schema)
@(d/transact conn [{:person "Mark" :time #inst "2000-02-01" :balance 200}])
So far, so good. Now I want to correct my balance as of the above time:
@(d/transact conn [{:person "Mark" :time #inst "2000-02-01" :balance 100}])
I now get an error:
Syntax error (Exceptions$IllegalStateExceptionInfo) compiling at (src/datomic_playground/understanding_time.clj:61:1).
:db.error/unique-conflict Unique conflict: :person+time, value: ["Mark" #inst "2000-02-01T00:00:00.000-00:00"] already held by: 17592186045418 asserted for: 17592186045425
However, I can do either of these to update my entity:
;Works because I am explicitly creating the identity key
@(d/transact conn [{:person+time ["Mark" #inst "2000-02-01"] :balance 100}])
;Also works, as expected
@(d/transact conn [{:person+time ["Mark" #inst "2000-02-01"] :person "Mark" :time #inst "2000-02-01" :balance 150}])
Is this behavior (needing to explicitly create the id key after the initial transaction) expected? Is there a way to auto-imply the unique key after the first transaction?#2019-07-0219:52stuarthallowayHi @U0JUR9FPH! A couple of thoughts here.#2019-07-0219:54stuarthalloway1. You can alway use the actual entity id, so you don't need to use any identity to perform an update. (I presume you know this but am including it for completeness.)#2019-07-0220:02stuarthalloway2 If you do want to identify an entity by a unique key, you must indeed specify that unique key (not its constituents). This is clean and unambiguous.#2019-07-0220:58markbastianHey @U072WS7PE, thanks for getting back to me! Yes, case 1 makes total sense (and perhaps I should have put that as a third "works" example for completeness. For anyone following the conversation this would be 17592186045418 in this particular case.). I was indeed looking at case 2. My intuition was that transacting two schema-compliant items with the same constituent elements that form a unique key would resolve to that unique key and either insert or update as appropriate. However, I do see your point that future updates could cause ambiguity. Easy enough to handle once you know the behavior. Thanks!#2019-07-0216:50hadilsThanks @alexmiller#2019-07-0216:57alexmiller@hadilsabbagh18 thanks, I will follow up with the datomic team (I suspect latest version of clj has changed assumptions ion-dev is relying on)#2019-07-0216:58hadils@alexmiller I appreciate the time you took to help me. I know you are very busy.#2019-07-0216:58alexmillerwell, I think I'm the one that caused it :)#2019-07-0302:07dlwhat is the best way to use websockets in datomic cloud, just like in datomic with sente?#2019-07-0303:40rapskalian@dlorencic1337 API Gateway recently announced WS support: https://aws.amazon.com/blogs/compute/announcing-websocket-apis-in-amazon-api-gateway/
I haven’t tried it myself yet, but it seems like a promising ion integration. I was planning on experimenting with it using Cognitect’s aws-api lib: https://github.com/cognitect-labs/aws-api
David recently added support for the ApiGatewayManagementApi. #2019-07-0303:50dlyeah I have heard that news message but didnt find of any tutorials on how to implement it with websockets#2019-07-0303:50dlthats why I asked, thank you man!#2019-07-0303:51dlI am curious because I have read that the Transaction Report Queue is only available with the peer inteface#2019-07-0303:52dlhow would you then go ahead to building an alternative that notifies the api gateway on changes?#2019-07-0304:01rapskalianI was thinking it might be possible to wrap transact! by reading the resulting :tx-data and placing it into a queue (maybe SQS or even a core.async channel). Then some other process would actually interface with APIGW. Build your own report queue basically. #2019-07-0304:02dlok interesting.#2019-07-0304:02dlI will look into it#2019-07-0309:04robert-stuttaford@stuarthalloway @jaret what do i need to do to get an existing database to use the new tuple stuff? transactor and peer are both on the new version. i can make a new database and transact tuple attrs, via the same peer, transactor and storage. i can't transact any tuple attrs to the existing database - it complains that :db/tupleAttrs doesn't exist.#2019-07-0312:55jaretYou’ll need to upgrade your schema with:
https://docs.datomic.com/on-prem/deployment.html#upgrading-schema#2019-07-0312:55jaretOh there is a typo in that anchor link ^ I am going to fix that.#2019-07-0319:52robert-stuttafordthanks @jaret - suggestion 🙂 include this bit of news in any blog post that announces features :+1:#2019-07-0319:34souenzzoI'm still getting
:dependency-conflicts
{:deps
{org.clojure/clojure #:mvn{:version "1.9.0"} ...
when I {:op :push}
I just deployed a fresh cloudformation today using 480-8770 both solo and storage.#2019-07-0319:34Joe LaneAre you running with clojure 1.9 in your code base?#2019-07-0319:35souenzzo1.10.1 in my deps.edn#2019-07-0319:35jarethttps://forum.datomic.com/t/datomic-0-9-5930-now-available/1060#2019-07-0412:40ivanaHello. I try to run figwheel project from re-frame template, everything works fine until I add datomic to deps. In this case lein figwheel dev falls with
Figwheel: Cutting some fruit, just a sec ...
Syntax error (NoSuchMethodError) compiling at (figwheel_sidecar/repl.clj:1:1).
com.google.common.base.Preconditions.checkState(ZLjava/lang/String;Ljava/lang/Object;)V
I tried to exclude some deps and set exact versions (I found this on the internet)
[com.datomic/datomic-pro "0.9.5927"
:exclusions
[org.eclipse.jetty/jetty-http
org.eclipse.jetty/jetty-util
org.eclipse.jetty/jetty-client
org.eclipse.jetty/jetty-io]]
;; directly specify all jetty dependencies
;; ensure all the dependencies have the same version
[org.eclipse.jetty/jetty-server "9.4.12.v20180830"]
[org.eclipse.jetty.websocket/websocket-servlet "9.4.12.v20180830"]
[org.eclipse.jetty.websocket/websocket-server "9.4.12.v20180830"]
but the problem is still the same. What can I do?#2019-07-0412:48souenzzo@ivana both #datomic and #clojurescript use the guava lib
https://mvnrepository.com/artifact/com.google.guava/guava
Usually, the clojurescript version is higher than the datomic version
Excluding it from datomic should fix it#2019-07-0413:15ivanaThanks a lot!
:exclusions [com.google.guava/guava]
solves the problem!#2019-07-0522:54eoliphantHi, I’m running into a situation on Cloud where we’re persistently getting busy indexing anomalies. upgraded to the latest rev, and have killed the transactors, but the problem hasn’t gone away#2019-07-0523:13marshall@eoliphant Can you look in your CloudWatch logs for any Alerts#2019-07-0523:14marshallif there are some, can you please share the text of the alerts#2019-07-0523:21eoliphantyeah there are some. trying to pick out stuff that might be relevant, vs our apps messages#2019-07-0523:33marshall@eoliphant https://docs.datomic.com/cloud/operation/monitoring.html#searching-cloudwatch-logs
search the Datomic system logs for “Alert - Alerts”#2019-07-0523:37eoliphantnothing is really jumping out. that reminds me, lol, i meant to submit a feature request. it would be nice to keep the datomic system stuff in a separate log group.
Ok, i’ve updated my filter, still nothing jumping out from datomic itself, it’s almost entirely our alerts that we’re logging when we retry/fail etc#2019-07-0523:39marshallAre there any alerts at all that are not from your own use of cast?#2019-07-0523:40eoliphantok,#2019-07-0523:40eoliphantthink i have something
{
"Msg": "IndexerJobException",
"Ex": {
"Via": [
{
"Type": "clojure.lang.ArityException",
"Message": "Wrong number of args (2) passed to: datomic.excise/pred",
"At": [
"clojure.lang.AFn",
"throwArity",
"AFn.java",
429
]
}
],
"Trace": [
[
"clojure.lang.AFn",
"throwArity",
"AFn.java",
429
],
[
"clojure.lang.AFn",
"invoke",
"AFn.java",
36
],
[
"clojure.core$partial$fn__5824",
"invoke",
"core.clj",
2624
],
[
"clojure.core$map$fn__5851",
"invoke",
"core.clj",
2755
],
[
"clojure.lang.LazySeq",
"sval",
"LazySeq.java",
42
],
[
"clojure.lang.LazySeq",
"seq",
"LazySeq.java",
51
],
[
"clojure.lang.RT",
"seq",
"RT.java",
531
],
[
"clojure.core$seq__5387",
"invokeStatic",
"core.clj",
137
],
[
"clojure.core$seq__5387",
"invoke",
"core.clj",
137
],
[
"datomic.index$merge_db$fn__21535",
"invoke",
"index.clj",
1635
],
[
"datomic.index$merge_db",
"invokeStatic",
"index.clj",
1621
],
[
"datomic.index$merge_db",
"invoke",
"index.clj",
1615
],
[
"datomic.indexer$merge_db",
"invokeStatic",
"indexer.clj",
185
],
[
"datomic.indexer$merge_db",
"invoke",
"indexer.clj",
181
],
[
"datomic.indexer$maybe_queue_index_job$fn__28554",
"invoke",
"indexer.clj",
250
],
[
"clojure.core$binding_conveyor_fn$fn__5739",
"invoke",
"core.clj",
2030
],
[
"datomic.async$daemon$fn__10439",
"invoke",
"async.clj",
146
],
[
"clojure.lang.AFn",
"run",
"AFn.java",
22
],
[
"java.lang.Thread",
"run",
"Thread.java",
748
]
],
"Cause": "Wrong number of args (2) passed to: datomic.excise/pred"
},
"DatomicIndexerDbId": "5f06733b-f7c1-4a6f-9aab-3c665b7d498d",
"Type": "Alert",
"Tid": 595,
"Timestamp": 1562369932970
}
#2019-07-0523:42eoliphantI’ve a lambda pulling stuff off of kinesis, so I turned that back on to generate some activity, these are popping up pretty frequently now#2019-07-0523:56jaret@eoliphant I am going to open a ticket up in your name and copy this info over#2019-07-0523:56eoliphantok thx#2019-07-0523:57eoliphantFYI the storage and compute are on 480-8770#2019-07-0523:57jaretProduction templates?#2019-07-0600:00eoliphantyep#2019-07-0600:01jaretSo this is a prod outage? And are you deploying ions?#2019-07-0600:03jaretI will ask some more questions on the case so the whole team can see.#2019-07-0600:04eoliphantnot a prod outage fortunately, but one of my teams is wrapping a product increment on monday, and this is impacting that, and yes we’re using ions#2019-07-0600:54stuarthallowaywe'll get you sorted ASAP#2019-07-0604:20steveb8nFWIW it’s great to see this kind of support response out in the open. builds confidence for me#2019-07-0604:52fdserrHas anyone got a workaround to enable the ping endpoint in a containerised Datomic without the thing blowing up? (0.9.5930, dev protocol)
docker run -v /config/:/config/ my-docker/transactor
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Critical failure, cannot continue: Error starting transactor
java.lang.RuntimeException: Unable to start ping endpoint localhost:9999
...
Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=3 < max=3 for QueuedThreadPool[qtp1630087575]@61292997{STARTED,3<=3<=3,i=3,q=0}[
I know it is an upstream issue (Jetty). Poking Datomists for a possible hint to enable a transactor health check on K8s/Prom.... TIA!#2019-07-0605:18favilahttps://forum.datomic.com/t/jetty-max-threads-error-when-enabling-ping-health/603/21#2019-07-0605:20favilaWe worked around it by using one of the magic cpu limit numbers (k8s environment)#2019-07-0605:22favilaUnsure of root cause, if is jetty or java8#2019-07-0606:37fdserrGolden, just works. Can't imagine the sweat you put in this. May I ask how you found out the existence of this hidden key? Thanks a bunch @U09R86PA4!#2019-07-0606:42fdserrjust saw the next-next post with it 😃#2019-07-0606:54favilaActually what happened to us was we had it working fine (by accident it turns out) then we adjusted the limits later and it failed. We couldn’t believe it but we found the forum post as confirmation#2019-07-0606:55favilaI think we had 8 and lowered to 4 or something#2019-07-0607:19fdserr> We couldn’t believe it but we found the forum post as confirmation
😃#2019-07-0604:53fdserrBTW congrats Datomic team, the June release is packed with awesome features 🙏#2019-07-0621:44pvillegas12In order for a transaction function to not be applied (in the atomicity sense), do you need to raise an exception?#2019-07-0700:52favilaYes directly or indirectly#2019-07-0715:13fdserr@U6Y72LQ4A Throwing is the way to stop a TX, AFAIK. Throwing clojure.lang.ExceptionInfo helps us deal with explicit business constraints (userland/maybe-recoverable) and we let the rest blow up ("system" error).#2019-07-0715:35pvillegas12Perfect, thanks @U09R86PA4 @U05164QBS for confirming#2019-07-0818:09Nolancurious how others would approach building this, or if this smells:
(make-query {:ns/attr1 "v1" :ns2/attr2 :v2})
;; => [:find ?e :in $ :where [?e :ns/attr1 "v1"] [?e :ns2/attr2 :v2]]
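[Editorial aside: a minimal sketch of how such a make-query could be written. This is an illustration, not nolan's syntax-quote version; it assumes each map entry becomes one :where clause on the same ?e.]

```clojure
;; Sketch: turn {attr value, ...} into a single-entity query.
;; Each entry of the map becomes a [?e attr value] :where clause.
(defn make-query [attr->val]
  (into '[:find ?e :in $ :where]
        (map (fn [[attr value]] ['?e attr value]))
        attr->val))

(make-query {:ns/attr1 "v1" :ns2/attr2 :v2})
;; => [:find ?e :in $ :where [?e :ns/attr1 "v1"] [?e :ns2/attr2 :v2]]
```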
in english, it takes a map, attribute => value, and produces a query for a single entity that has the given value for each attribute. I've implemented make-query using syntax-quote, and also experimented with the :in clause to do a similar thing, but didn’t get too far with it. would love some additional perspective#2019-07-0915:43hadils@nolan I like it. It seems elegant.#2019-07-0915:44hadilsAnyone solved the problem of using an aggregate (max n ?e) where you want to specify n from a function argument? Do I have to build the query up programmatically?#2019-07-0919:16jarethttps://forum.datomic.com/t/datomic-cloud-480-8772/1071#2019-07-1009:46DanielHi, I'm trying to follow the datomic tutorial and connect to a running datomic server. But the following code gives an exception: Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58). No subject alternative names present error
(require '[datomic.client.api :as dc])
(def config {:server-type :peer-server
:access-key "myaccesskey"
:secret "mysecret"
:endpoint "127.0.0.1:8998"})
(defn connect! []
(let [client (dc/client config)]
(dc/connect client {:db-name "hello"})))
(connect!)
I'm using the latest version of datomic-pro-starter 0.9.5930#2019-07-1010:16cichliMaybe try localhost instead of 127.0.0.1?#2019-07-1009:48DanielI've tried latest versions of AdoptJDK's OpenJDK 11, 12, 8 with no success. Running on MacOS.#2019-07-1009:49Daniel=> (pst)
ExceptionInfo No subject alternative names present {:cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message "No subject alternative names present", :cognitect.http-client/throwable #error {
:cause "No subject alternative names present"
:via
[{:type javax.net.ssl.SSLHandshakeException
:message "No subject alternative names present"
:at [sun.security.ssl.Alert createSSLException "Alert.java" 131]}
{:type java.security.cert.CertificateException
:message "No subject alternative names present"
:at [sun.security.util.HostnameChecker matchIP "HostnameChecker.java" 137]}]
#2019-07-1019:38pvillegas12I want to reference an entity with a tempid for :db/id for different datoms. Is this possible?#2019-07-1019:51ghadiif you talk about the same tempid in different datoms within a transaction, it will end up becoming the same entity when the transaction commits#2019-07-1019:52ghadi@pvillegas12 [[:db/add TEMPID a v t] [:db/add something-else a TEMPID]]#2019-07-1019:52ghadi^ create an entity and point to it -- in the same tx#2019-07-1019:53ghadi[{:db/id "tempA" ....}
{:db/id [:lookup/ref 42]
:points/to "tempA"}]#2019-07-1023:39drewverleeSo you can't upgrade a "solo" running Datomic stack?
https://docs.datomic.com/cloud/operation/upgrading.html#first-upgrade
> This upgrade process converts your Master Template Stack to the setup described in the Production Setup documentation.#2019-07-1023:55drewverleeI ask because i'm just currently using my setup to learn and i'm assuming the production setup is more expensive.#2019-07-1100:03Joe LaneYou cant downgrade#2019-07-1100:26ghadiyou can upgrade it no problem#2019-07-1100:26ghadiyou can also move from solo -> prod topology#2019-07-1100:27drewverleeboth of those would seem to be true, but the docs imply that you can upgrade from solo -> solo#2019-07-1100:27drewverleeupgrade the CTVersion#2019-07-1100:27drewverleethe goal here is to get the newer features#2019-07-1100:27ghadiyes the templates are public#2019-07-1100:27ghadihttps://docs.datomic.com/cloud/releases.html#2019-07-1100:27ghadijust got a version bump a few days ago#2019-07-1100:27ghadiyesterday maybe#2019-07-1100:28ghadiI am testing the latest release on a dev cluster before rolling it out to prod in a few days#2019-07-1100:29drewverleeso does "upgrading" in the docs i link refer to moving from solo->prod, and not changing template versions?#2019-07-1100:30drewverleeor put another way, if i just wanted the latest features for my solo topology. what would i do 🙂#2019-07-1108:30stask@U0DJ4T5U1 if it’s your first upgrade, just follow the steps in https://docs.datomic.com/cloud/operation/upgrading.html#first-upgrade
Storage template is the same for both Solo and Production as far as i know. Just choose Solo for the Compute template.#2019-07-1112:39drewverleeBut you have to choose a larger instance, I assume overall this is more money per month#2019-07-1113:49Jacob O'Bryant@U0DJ4T5U1 I did the "first upgrade ever" (stayed on solo), and my datomic set up currently consists of a t2.small and a t2.nano instance--I think this was the same as before, but I don't remember. What instances do you have right now? In any case, my monthly aws billing estimate is still the same (though I only upgraded ~1 week ago)#2019-07-1113:55Joe Lane@U0DJ4T5U1 Maybe the language is overloaded when discussing "upgrading" (transitioning) from a solo topology to a production topology. If we use the word "upgrading" to mean getting latest features (version increment) it is absolutely possible to "version increment" a running solo system while keeping it a solo topology, i've done it a dozen times.#2019-07-1201:41drewverleethanks everyone. i understand now that i can still choose the solo topology and have done so.
Currently i'm not seeing the option to reuse existing storage as specified here: https://docs.datomic.com/cloud/operation/upgrading.html#first-upgrade#2019-07-1110:07conanHi, we need to run some data migrations in Datomic Cloud. How would you go about this? We're thinking that Ions may be the solution, is that the right approach?#2019-07-1116:01shaun-mahood@conan I've done a bunch of migrations from local databases to the cloud just using the socks-proxy. https://youtu.be/oOON--g1PyU is a great reference for how you could approach the problem.#2019-07-1116:05conanSo I need to transform data in the way I would do using db functions in on-prem. If i read data, calculate txes and write them, i may leave my data in an inconsistent state#2019-07-1121:24chrisblomwould using :db/cas be an option, to check that the state is still valid?#2019-07-1116:05conanIt's not entirely clear to me how I do this in cloud#2019-07-1116:22shaun-mahoodAhh - check out https://docs.datomic.com/cloud/transactions/transaction-functions.html#classpath and see if that gives you what you need.#2019-07-1201:50drewverleeim upgrading my datomic stack for the first time, the instructions say to set "resuse existing storage" to True. But i dont see this option anywhere.#2019-07-1204:40eoliphantYou’ve pasted in the URL on the Create Stack Dialog, and are on the first page of inputs for the storage stack? It’s the second option down#2019-07-1205:51drewverleeI did#2019-07-1222:46drewverlee#2019-07-1222:47drewverlee#2019-07-1222:57drewverleeok, thats what you get if i enter the URL from solo topology on https://docs.datomic.com/cloud/releases.html
but if i enter the one for storage i get the option to re-use existing storage#2019-07-1222:58drewverleethe instructions say
> for the Storage Stack you want from the release page
I thought solo, production, storage were all examples of "storage stacks". if not, then what else is one?#2019-07-1216:49Mark Addleman@jaret fyi - I just tried to deploy a Datomic Solo topology from the AWS Marketplace. The AWS Marketplace UI allowed me to NOT enter a Key Pair. Subsequently, the Cloudformation Create Stack operation fails with a somewhat obscure message. Not sure if this is something you have control over#2019-07-1216:56jaretYeah, sorry Mark that’s a limitation on AWS’s side. We’ve asked/lodged requests to be able to require that field to launch, but its not allowed.#2019-07-1220:01Mark AddlemanNo worries. That's what I figured#2019-07-1219:26Jacob O'Bryant@jaret I'd really appreciate it if you/someone could take a look at this, unless I'm mistaken it's a very serious bug with the new composite tuple feature: https://forum.datomic.com/t/upsert-behavior-with-composite-tuple-key/1075/3
thanks. I'm guessing that bug is the root cause of this too: https://forum.datomic.com/t/bug-in-db-ensures-boolean-attr-handling/1073#2019-07-1616:08jaretThank you for the report. Thanks to your example, we have identified an issue with the treatment of false in tuples. We have a fix in the works for the next release. However, upsert does require that you specify the unique key. You can use the entity id or if you do want to identify the entity by a unique key then you have to specify the key (not its constituents). We’re going to update the docs to better address this. I have also updated your posts.#2019-07-1319:18drewverleehttps://docs.datomic.com/cloud/operation/upgrading.html#org4ebe4b2 has a broken link "environment map". i already emailed support, if by any chance someone knows what it should be i would appreciate knowing so i can keep moving forward 🙂#2019-07-1403:10drewverleeThey fixed the link.#2019-07-1420:57joshkhi have a RESTful API written in Clojure with no dependency on Datomic Cloud (although many of services make use of Cloud and Ions, so the infrastructure exists). is a new Query Group, http-direct, and Ions combination still a good solution for deploying it?#2019-07-1421:03joshkhthe crux (ha ha) of my question is regarding micro services. in this case it's a simple API and Ions makes it so easy to deploy, but for every micro service i face another ~$8 a month for a single t2.medium to support its new Query Group, which is the minimum EC2 instance size. with many services (including dev, stg, and prod targets) it adds up.#2019-07-1503:27eoliphantFor us part of the value is not jumping to microservices right away. We’ve a convention for separating out what are essentially bounded contexts into separate projects, and a few scripts to check for architectural conformance. So we start with a ‘managed monolith’, but can pull stuff out if and only if it’s really necessary. 
Each one uses it’s own db, etc so breaking them out into QG’s when we hit that point isn’t typically a big deal#2019-07-1514:35dmarjenburghDoes datomic ions support cross-account deployments? We have a dev/test account and an acc/prod account. We want to do an ion push in dev/test and deploy the generated artifact in acc/prod#2019-07-1514:37marshallno, you’d need to push to prod#2019-07-1516:07calebpHi, I’m looking for information on error recovery practices for Datomic cloud. Not sure that’s the right terminology, but one problem that particularly worries me is what if someone accidentally calls delete-database? I couldn’t see a way in https://docs.datomic.com/cloud/operation/access-control.html to prevent this and if I didn’t have an backup, my company’s data would just be gone. I’ve been following this thread https://forum.datomic.com/t/cloud-backups-recovery/370/12, but haven’t seen any details there.#2019-07-1607:11TuomasI’m trying out datomic cloud and I’m pretty new to this kind of stuff. Tried to launch to eu-west-1, but failed. After digging around I discovered the compute ami in solo-compute-template-8772-480-ci.template Mappings.RegionMap.eu-west-1.Datomic is invalid. Decided to launch to eu-central-1 because it’s ami mapping seems correct, but thought I should also report this. Any idea where these kind of reports should go to?#2019-07-1616:11marshallWe have filed a support ticket with AWS regarding that issue @koivistoinen.tuomas#2019-07-1618:36pvillegas12I’m using Ions and Datomic Cloud. I would like to understand how many requests I can handle per second. Is the clojure web app I expose as an ion a single process? Will it be threaded in some way? Can I configure this behavior?#2019-07-1618:37Joe LaneThat depends entirely upon what those requests do.#2019-07-1618:38pvillegas12A request may do a 10s job, so trying to understand if that will block the entire API I am exposing.#2019-07-1708:03sooheonWhat, if any, are the differences between missing? 
and not in the following?#2019-07-1708:04sooheon(d/q '[:find (pull ?feed [*])
:where
[?feed :rss-feed/url]
[(missing? $ ?feed :rss-feed/etag)]]
db)
(d/q '[:find (pull ?feed [*])
:where
[?feed :rss-feed/url]
(not [?feed :rss-feed/etag])]
db)
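[Editorial aside: the not form above can equivalently be spelled with not-join, which makes explicit that ?feed is the only variable unified with the outer clauses. A sketch against the same :rss-feed attributes, for running in a connected REPL like the pastes above:]

```clojure
;; Equivalent negation with not-join: only ?feed unifies with the
;; outer clauses; any other variables inside would be scoped locally.
(d/q '[:find (pull ?feed [*])
       :where
       [?feed :rss-feed/url]
       (not-join [?feed]
         [?feed :rss-feed/etag])]
     db)
```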
#2019-07-1708:05sooheonThe only thing I’ve noticed is that missing? doesn’t complain when you ask for a never-transacted attribute (i.e. (missing? $ ?e :random-made-up-or-misspelled-kw))#2019-07-1709:11dmarjenburghCan you configure the path to ion-config.edn? I have slightly different configurations per environment.#2019-07-1710:28holgeris it possible that two jvm processes share the same valcache (directory) for a short period of time? our deployment first starts a new version before it stops the old one#2019-07-1918:00jaretYes, I believe valcache directory can be shared, but it has not been tested. Not to put you out on a limb, but if you notice any issues could you send them to me in support? And I encourage you to test in non-production first.#2019-07-1918:01jaretI’d also be interested to see metrics from your test system. Given that we haven’t tested this configuration it remains unsupported, but theoretically could work.#2019-07-2215:57holgerThanks! In case we decide to go that route, I'll let you know!#2019-07-1820:57grzmAnyone have a reference or experience setting up a Pedestal ion endpoint somewhere other than at the root of the domain? Frex, I want to set up the endpoint at "/api/v1/{proxy+}" rather than at "/". From my cursory trials, looks like I end up needing to include "/api/v1" in each of my routes. Wondering if there's a way around that.#2019-07-1821:09Joe Lane@grzm Are you able to use http-direct?#2019-07-1821:11grzmHaven’t tried yet: still using a lambda/ion#2019-07-1821:12Joe LaneWanna zoom?#2019-07-1821:14grzmWhat is this, a Nissan commercial?#2019-07-1906:54Simon O.Beginner Q: Given the sample schema, is it possible to just retract entity [:step/name "step2"] in line27 from entity [:lad/name "lado"]without maybe having to removing :db/ident :lad/step, adding it back, and subsequently filling it with needed collection of :step/name? 
and how...#2019-07-1913:07donaldball[[:db/retract [:lad/name "lado"] :lad/step [:step/name "step2"]]] will retract the datom asserting a ref between lado and step2.
[[:db/retractEntity [:step/name "step2"]]] should retract all datoms that refer to step, including that ref from lado#2019-07-1914:01Simon O.First step does it. thanks#2019-07-1918:13daniel.spanielDoes anyone know how to configure my ion to issue an http request to sendgrid? Do I need configure a NAT or Egress-only Gateway from my VPC? And if so, is there some example of doing that?#2019-07-1918:14Joe Laneclj-http should be able to make an outbound http request.#2019-07-1918:15daniel.spanielfrom within my ion code @lanejo01?#2019-07-1918:15Joe Laneyup#2019-07-1918:17Joe LaneI do that in one of my production systems with twilio#2019-07-1918:17daniel.spanielyeah, same here#2019-07-1918:17daniel.spanielIt might be a delay then from twilio#2019-07-1918:19daniel.spanielthanks @lanejo01!#2019-07-1918:19Joe Lanenp, have fun!#2019-07-1918:21Joe Lane@dansudol One thing I ran into was if the ion was cold then sometimes twilio would timeout on callbacks because twilio has a 5 second timeout.#2019-07-2216:21ghadinot terrible at all, have you seen codeq?#2019-07-2216:53hadilsHi! Any suggestions on testing Lambda/APIGW ions on a local MacOS laptop? I am trying out AWS SAM. How do I build a deployment package for my application locally? Is it just the zip file that I would deploy to Datomic Cloud?#2019-07-2312:51eoliphantThere’s no ‘package’ per se with ions, though the ‘push’ does shoot your code as is + dependencies up to an S3 bucket/folder.
I’d check out Stu’s videos on the typical workflow. But in general it’s very much in line with your typical REPL-driven/oriented workflow. You can test/exercise your generic pure funcs as is, you can connect to the db in question from the repl; most ion/datomic funcs that can be ‘embedded’, like transaction functions, are pure and can (and should) be tried out directly. At that point, you can then push, then interactively exercise them on the server.
I’m literally doing that right now. Helping one of my devs optimize some stuff, so I created a new transaction func, spec’d and tested it totally client side, then ‘allowed’ and pushed it, and ran some actual transactions that referenced it.#2019-07-2317:48hadilsSo you avoided the whole SAM local workflow, then?#2019-07-2915:59eoliphantsorry just saw lol, yeah, there’s less need for it IMO. while tx, query, etc funs do have to be on the server at some point. You can generally do a ton of testing, etc with them locally, so by the time you actually push them, you’re pretty confident that they’re doing what you expect.#2019-07-2217:57joshkhmust query functions be deployed to the main (cloud) Compute node, or can they be deployed as part of Query Groups?#2019-07-2218:01joshkhupdate: yes. found it in the docs 🙂#2019-07-2218:01ghadiAnywhere :)#2019-07-2218:06joshkhhmm, are you sure? i have a query group that makes use of query functions. i recently removed them from the main compute group and now the query groups fail.#2019-07-2218:08joshkhby removed i mean that the query functions were defined in both Ions projects due to forking the code base. when i removed them from the main compute group's configuration and deployed to the main compute group the query groups then failed.#2019-07-2218:09marshall@joshkh failed to do what?#2019-07-2218:09marshallyou can definitely have a different set of query functions on your primary group than on a given query group#2019-07-2218:10marshallbut you can only invoke them if you are connected to that query group#2019-07-2218:10joshkhyup, that's what i'm thinking. 
i might be deploying to the query group but connecting to the main compute group.#2019-07-2219:05joshkhupon further testing, it looks like the client's :endpoint value overrides the client's :query-group value, but only when running locally.#2019-07-2221:26kvlt@ghadi I have not seen codeq#2019-07-2312:42eoliphantHey guys, I’m running into an issue trying to deploy from my CI/CD server, I’m getting the following error
“Unable to find a unique code bucket. Make sure that you are running Datomic Cloud in the region you are deploying to”
Definitely in the same region, so not sure what else to look at#2019-07-2312:47AdrienThat's because Datomic releases are on an S3 bucket in another region and you cannot do cross-region S3 copies.
There is a discussion concerning this issue on the forum: https://forum.datomic.com/t/ions-push-deployments-automation-issues/715#2019-07-2312:48AdrienIn the last answer you have a workaround using VPC with a NAT gateway#2019-07-2312:53eoliphantRight, right I’ve seen that, but in my case, everything is in us-east-1. The script actually works fine for my dev env, but is throwing that for our int-env. Our ‘shared’ account/vpc, as well as the ones for dev and integration are all in us-east-1#2019-07-2316:54calebpI inadvertently converted one of my solo systems to a production system by upgrading it with the production compute template. Is it OK to leave the storage stack, delete the compute stack and recreate the compute stack with the solo compute template?#2019-07-2316:58calebpAssuming this is OK since solo and production use the same storage stack template#2019-07-2320:17matthavenerIs there any performance benefit of pull-many vs mapping over pull ?#2019-07-2401:13favilaPull-many will parallelize if it can; map over pull will not#2019-07-2401:13favilaPull many is like pull in a query#2019-07-2417:29matthavenerthanks as always#2019-07-2407:28fdserrOn-prem pro: is it possible and 200% safe to use a single set of infra for several transactors (with different dbs) ? DDB table, role set, log bucket.#2019-07-2421:42genekimHello! After six months of dabbling with Datomic Cloud on my laptop, I'm ready to use it in a personal project or two! But after almost two hours of Googling, I've hit a problem...
What is the easiest way to connect to a Datomic Cloud instance from something like Heroku? There's not an easy/obvious way to use datomic-socks-proxy...
In the ideal, I'd love to be able to connect to Datomic on Heroku without a need for a proxy or sidecar (e.g., like a simple call to connect() with a Postgres/MySQL-style connection string?). I'm trying to simplify my life, so not having to set up a docker container or Kubernetes sidecar would be awesome.
For similar reasons, not having to learn API Gateway and IAM at the same time as learning Datomic would be a plus. :) (Because the AWS feature screen has always scared me, I've stuck to Heroku and Google GCP/GKE, for better or worse.)
Any advice? Many thanks in advance!!!! #2019-07-2422:44eoliphantThere are possible ways, but it may be more trouble than it’s worth. It’s definitely going to be easier to work on something in AWS. if you run elsewhere you’d need to have AWS credentials, run the proxy, etc etc.
What kind of app are you planning? If you use ions, there’s a pretty easy, one-time setup for API Gateway that’s clearly outlined in the docs. Once it’s set up for your ‘entry point’ ion there’s no need to mess with it.
If you’re planning a separate app that just connects as a client to the db, then again, you’re gonna need to do some stuff around the networking and what have you, as well as manage aws access keys on heroku. Not 100% sure, but I’d bet the complexity of getting all that working on heroku might be equal to or even more than just getting ions, etc going#2019-07-2717:39genekim@U380J7PAQ Thanks for the thoughtful question — was pondering this, because I think it does inform what the right decision is.
I have an app that collects lots of data on books that’s been running for 4 years, data currently stored in MySQL (originally totally accessed thru Ruby ActiveRecord). Now all the data is collected and accessed thru a Clojure web app.
I want to store new book metadata, like publishers, categories, in Datomic, because I’m fatigued by SQL database migrations. And I think the fluid way that the schema can be changed in Datomic is super appealing to me.
I imagine that one Ion REST API entry point invoke could be used to do operations like :add-publisher, :update-publisher, :delete-publisher, etc...
And then that endpoint is called by an app that runs anywhere, maybe authenticated by a certificate, secret or something?
Is that thinking reasonable? Did I miss anything huge? Thx for the great question!#2019-07-2717:47genekim@U380J7PAQ Am I correct in thinking that I’d call the API Gateway endpoint with something like this?
https://github.com/jerben/clj-aws-sign#2019-07-2717:47eoliphantHmm.. So Ions absolutely make your last bit far more palatable. you could leave everything else as-is, then just create and deploy your API, exposing it via API gateway, that can be called from anywhere, and API gateway natively supports stuff like API keys for access without too much additional fuss.
So is your plan to migrate from the MySQL/Ruby stuff? Wasn’t clear on whether this new piece is solely complementary or your new direction overall. I can tell you for sure, that if you’ve already drunk the Clojure/Datomic kool-aid, that you’ll eventually end up with far fewer moving parts if you just move it all that way#2019-07-2717:48eoliphantAPI gateway supports a variety of authentication methods. If you just need to secure it ‘system to system’ it probably makes more sense to just use an API key. Then there’s no need to sign, etc, you just send the key in a header#2019-07-2717:49eoliphantif the API needs to ‘know’ say the identity of the user making the call, then that’s when you get into more sophisticated use cases, like passing JWT’s around or something#2019-07-2717:51eoliphanthere’s the info on adding that: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html#2019-07-2717:52eoliphantthere’s a bunch of stuff in there that supports the more tpyical use case of issuing them to ‘customers’ etc, but in your case it’s really just a single one for your app running in heroku#2019-07-2717:58genekimHoly cow, @U380J7PAQ — this is SOOOO helpful. I owe you drinks for remainder of my lifetime for this — just say when and where! I can’t tell you how many mysteries you’re solving for me!
(But first on MySQL: I’m inclined to leave all the data there. There’s GBs of it, and no real reason to change it — lots of code read and write to it just fine.)
Wow, that link is great! The idea of just passing in a secret string is just my speed. 🙂
I’m looking at the “EXAMPLE: Create a Request-Based Lambda Authorizer Function” example right now at https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html
Am I on the right track? Thx again!!!!#2019-07-2718:04eoliphantlol, no problem at all. This community is awesome. I’ve gotten tons of help Just pay it forward 🙂
Ok cool regarding your existing data, that’s certainly a viable approach, and you can also go ‘full AWS’ even with that setup, dropping your existing MySQL db into MySQL on AWS RDS or even their MySQL compatible Aurora if you need the scalability. Nice thing about Datomic being a bit of an ‘un-database’ is that its also easy to munge up results from other sources if necessary, that may or may not be applicable to your use case
yeah check out the authorizer, though in your case it may not be necessary, they are more useful when say you’re handing out keys to customers, and they perhaps have different levels of service. Where say, the user who corresponds to key XXX has a basic subscription and only gets 1000 API calls a month and only certain API’s vs the user for key YYY that has unlimited access. In your case, you just want to prohibit ‘open’ access which the key more or less does on its own#2019-07-2718:13genekimThis is awesome! I’ll give it a shot this weekend — thanks again!!!! I can’t tell you how timely and spot-on it is! And I’ll keep you posted, hopefully with a screenshot of a successful CURL request and response! 🙂#2019-07-2718:14eoliphantno probs at all, let me know if you need any more info#2019-07-2718:52genekimWow!!! I got my first lambda function running, and managed to get an API key associated with it! Amazing! THANK YOU!
2015-MBP genekim$ curl -H 'x-api-key: xxx'
"Hello from Lambda!"
Next step… Follow the Ions tutorial! Wow! :exploding_head:#2019-07-2808:01genekimThanks to @U380J7PAQ encouragement and help, I’ve gotten an AWS API Gateway and my first tables and queries set up. But I’m having problems getting the com.datomic/ion deps downloaded. (I’m using the ion-starter/deps.edn file.)
I’m getting an error very similar to what was reported here: https://forum.datomic.com/t/issue-retrieving-com-datomic-ion-dependency-from-datomic-cloud-maven-repo/508
I can list my own S3 buckets, but I get a permission denied error when I try to list the needed S3 bucket where the ion deps are stored:
2015-MBP:hodur-books genekim$ aws s3 ls
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
I’m sure this is easy, but I’m just too new to AWS to see it… Thank you!!!#2019-07-2808:13genekimWait… hang on… I can download the .jar file…#2019-07-2808:34genekimOkay, doing my first push and deploy! (And then heading to bed. I have an early start tomorrow! 🙂 This is super exciting, @U380J7PAQ!#2019-07-2915:56eoliphantGlad to hear it’s working 🙂#2019-07-2421:48John MillerThe Datomic cloud client accepts a set for a collection binding, but not a sorted-set. That seems like a bug? I’m using com.datomic/client-cloud {:mvn/version "0.8.78"}#2019-07-2422:38shaun-mahood@genekim The only easy-ish option I know of is using Ions - I think I would love to use HTTP Direct but it's only available with a production topology#2019-07-2522:16genekimThanks @U054BUGT4! Based on this, I’m likely going to run this inside of GKE, with the socks proxy running inside the container.
(Right now, I’m super-parsimonious about anything new I learn. Avoiding yaks altogether, let alone shaving them. :)#2019-07-2522:17shaun-mahoodOh yeah, yak-avoidance is so important.#2019-07-2522:19shaun-mahoodI'm using the socks proxy to run a local server, which connects to my datomic cloud instance and a local database, and I've only had one issue with it over the past few months - our network blipped a bit and I had to reset some network gear to fix it. No idea how it's going to handle running inside GKE, though.#2019-07-2522:26shaun-mahoodI'm gradually migrating things to ions, though, moving functions from my local ring server to ions one at a time.#2019-07-2512:04keesterbruggeI'm trying to do a nested upsert, but this doesn't seem to be possible. I found one technique to do a nested insert, and I found a different technique to do a nested update, there doesn't seem to be a technique that will do an insert or an update (upsert) depending on the state of the database. Is this correct?
Given the following schema
(def schema
[{:db/ident :day
:db/unique :db.unique/identity
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}
{:db/ident :metric/day
:db/unique :db.unique/identity
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :rev
:db/valueType :db.type/double
:db/cardinality :db.cardinality/one} ])
;; And some setup
(require '[datomic.api :as d])
(def db-uri "datomic:")
(d/create-database db-uri)
(def conn (d/connect db-uri))
@(d/transact conn schema)
With an empty database, the following inserts 3 datoms
@(d/transact conn [{:metric/day {:day 3} :rev 1.4}])
When I want to do an update to the previously created entries, the following works:
@(d/transact conn [{:metric/day [:day 3] :rev 1.5}])
However, the previous insert structure doesn't work and gives an error:
@(d/transact conn [{:metric/day {:day 3} :rev 1.5}]) ;=>
<error-here>
So this last variation is only a way to do a nested insert not an update. If we try the nested update variation on an empty database this fails too:
@(d/transact conn [{:metric/day [:day 3] :rev 1.5}])
<error-here>
The more verbose version with tempids doesn't work either:
(let [day-id (d/tempid :db.part/user)]
@(d/transact conn [{:db/id day-id :day 3}
{:metric/day day-id
:rev 1.1}]))
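One workaround sketch for the case above (an editor's assumption, not something the thread confirms): split the upsert into two transactions, first upserting the unique :day entity, then referring to it with a lookup ref, which resolves on both empty and populated databases:

```clojure
;; Sketch: two-step nested upsert. {:day 3} upserts via the
;; :db.unique/identity attribute; the lookup ref [:day 3] then
;; resolves whether or not the entity already existed.
@(d/transact conn [{:day 3}])
@(d/transact conn [{:metric/day [:day 3] :rev 1.5}])
```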
Am I missing how upserting could work in this nested situation or is this a limitation of Datomic by design? Any help is greatly appreciated!#2019-07-2515:04donaldballI have written my own fns to assert a possibly existing tree of data into the database.#2019-07-2515:07donaldballI’m curious about datalog rules. Sometimes I have 4-5 rules with the same name, and it’s not clear which one(s) are matching, and it’s a little tedious to debug. A more debuggable form might be to give each rule a distinct name and use an or clause for each case in the general rule. Is anyone aware of the performance implications of this approach?#2019-07-2520:44drewverleeWould it be correct to say datomic employs forward chaining logic?#2019-07-2521:09Joe LaneI don't think so#2019-07-2521:09Joe LaneThat would be a rules engine, if i'm not mistaken.#2019-07-2521:14drewverleeRight, I meant to say backwards chaining 🤔#2019-07-2612:24timeyyyHi.
I'm curious as to the purpose of the created route53 dns entries when using datomic cloud.
Is this for billing or something? Why is it created under http://xyz-datomic.net?
Is this designed to be configured for my application use?#2019-07-2614:23Joe Lane@timeyyy_da_man I think those are private routes for datomic to resolve machines within the vpc.#2019-07-2614:24Joe LaneI don't believe it's for application use.#2019-07-2620:56fmnoiseis there a way to list datoms by given transaction id in datomic on-prem?#2019-07-2621:19souenzzo(d/q '[:find ?e ?a ?v ?tx ?op
:in $ ?tx
:where
[?i :db/ident ?a]
[?e ?a ?v ?tx ?op]]
(d/history db) tx)
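For on-prem, the Log API is another way to get a transaction's datoms without running a query; a sketch assuming a peer connection `conn` and a transaction id or t value `tx`:

```clojure
;; Sketch: d/tx-range returns a lazy seq of maps with keys :t and :data,
;; where :data is the seq of datoms asserted/retracted in each transaction.
(let [log (d/log conn)]
  (:data (first (d/tx-range log tx (inc tx)))))
```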
#2019-07-2621:25fmnoisethanks @U2J4FRT2T, I was using a similar query but got an Insufficient binding error; [?i :db/ident ?a] does the trick 🎉#2019-07-2621:27souenzzoNot sure whether, in the peer API, it's faster to access it from raw datoms or some other API
ATM I'm on client API#2019-07-2621:33benoit@U4BEW7F61 I generally use tx-data https://docs.datomic.com/on-prem/log.html#log-in-query#2019-07-2912:08jaihindhreddyIf a Datomic DB contains [e :a v 40 true], is asserting [e :a v 42 true] a no-op?#2019-07-2912:17souenzzo@jaihindhreddy it will generate just the [e :db/txInstant #inst"now" true]#2019-07-2912:21jaihindhreddyGot it. And that would mean, we would know that fact(s) that were already true were reasserted, but not which ones exactly.#2019-07-2912:21jaihindhreddyMakes sense.#2019-07-3015:10tony.kayIs anyone aware of Datomic performance issues with latest JDK 11? We’re seeing some very poor query performance after moving from JDK 8 to 11 on Datomic 0.9.5697#2019-07-3015:12matthavenercould be https://bugs.openjdk.java.net/browse/JDK-8219233 ?#2019-07-3015:12tony.kayI looked at that#2019-07-3015:12marshall@tony.kay I would recommend moving to the latest release
several dependency updates (https://docs.datomic.com/on-prem/changes.html#0.9.5927) include changes to libraries that may impact jdk11 support#2019-07-3015:15tony.kayok, we’ll try that.#2019-07-3015:21alexmillerit's highly unlikely to be that jdk issue - that primarily affects code loaded via user.clj#2019-07-3015:22alexmillerbut fyi, Clojure 1.10.1 includes a Clojure-side mitigation for that#2019-07-3018:28joshkhwhat's the difference between the com.datomic/ion-dev and com.datomic/ion libraries? i tend to only use the ion-dev library in my projects, and given that ion has its own release cycle i'm wondering if i missed something in the docs.#2019-07-3018:31marshall@joshkh both are required in your ion project
ion-dev is used for push/deploy/etc
‘ion’ is required for Ion projects and also includes the parameter helper functions, the cast namespace, etc#2019-07-3018:32marshallalso the ionize function#2019-07-3018:32marshallhttps://github.com/Datomic/ion-starter/blob/master/deps.edn#2019-07-3018:32marshalland https://github.com/Datomic/ion-event-example#2019-07-3018:33joshkhah, thanks marshall. i must have dropped ion when i switched to http-direct#2019-07-3018:39joshkhno, that's not true. i'm still using it to fetch environment parameters. i must have crossed some mental wires when upgrading my various query groups. 🙂 thanks again#2019-07-3115:37hadilsAnyone have any experience with using core.async with Lambda Ions? I would like to know if there are any issues with Lambdas timing out with processes running in the background. Thanks!#2019-07-3115:58jarethttps://forum.datomic.com/t/datomic-0-9-5951-us-now-available/1103#2019-07-3115:59jaretDatomic On Prem 0.9.5951 Now available.#2019-07-3116:37grzmI've noticed a dramatic increase in BeforeInstall times when deploying Datomic Cloud (from ~ 1 minute to over 2 minutes). Everything else is on the order of a second. Any thoughts on what might have caused that? The commit when it changed was only a change in deps.edn, where I updated deps.edn to reflect the conflicts reported when deploying.#2019-07-3117:01Joe Lane@jaret @marshall Doc suggestion related to the cloud tuples example.
The name of the ident is :reg/semester+course+student but the actual order of the tuple is different, it’s [:reg/course :reg/semester :reg/student], and I found it difficult to keep the differing orders straight in my head when learning tuples.
Found at:
https://docs.datomic.com/cloud/schema/schema-reference.html#composite-tuples
{:db/ident :reg/semester+course+student
:db/valueType :db.type/tuple
:db/tupleAttrs [:reg/course :reg/semester :reg/student]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
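To illustrate the point (an editor's sketch with hypothetical entity ids): a lookup ref on the composite attribute must order the tuple value by :db/tupleAttrs, not by the order the ident's name suggests:

```clojure
;; Sketch: the tuple value follows :db/tupleAttrs order
;; [:reg/course :reg/semester :reg/student], despite the ident
;; being named :reg/semester+course+student. Ids are hypothetical.
(let [course 1001, semester 1002, student 1003]
  (d/pull db '[*] [:reg/semester+course+student [course semester student]]))
```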
#2019-07-3117:01jaretI’ll switch that around in the example#2019-07-3117:01Joe LaneThanks!#2019-08-0100:43QuestI'm just trying to get Datomic Console running against a local dev transactor -- but whenever I hit localhost:8080/browse, I get a Jetty 503.
I noticed this nasty exception in the Datomic logs. Does anyone recognize it?#2019-08-0100:45QuestRunning on OSX, datomic-pro-0.9.5930 with datomic-console-0.1.216 installed#2019-08-0103:08QuestFigured it out -- missed the meaning of this text at https://my.datomic.com/downloads/console
The Datomic Console is included in the Datomic Pro distribution. Datomic Free users can download it below.
= installing this old version on top of a Datomic Pro release will break the console.
Reinstalled Datomic to undo the damage, console is working fine now 👍#2019-08-0120:49Quest^ Scratch the report on datomic-pro 0.9.5951 failing to download -- couldn't repro after blowing away my .m2, so guessing it was something odd on local.#2019-08-0214:14matthavenerIs the “single parent” policy implied by isComponent true attributes enforced by datomic? It seems like I can add another parent to a child, and then the backref behavior is really strange#2019-08-0214:22matthavenerhere’s what i’m seeing (both asserts pass)#2019-08-0214:22matthavenerhttps://gist.github.com/matthavener/4e61cf3db97fde90cde56af0d556ba6b#2019-08-0214:22souenzzoYes, you can
If :foo/bar isComponent and you insert [2 :foo/bar 1] and [3 :foo/bar 1]
- if you retractEntity 3, 1 will be retracted
- if you retractEntity 2, 1 will be retracted
- in pull/entity API if you ask for :foo/_bar from 1, it will return just 2 or just 3 "randomly"
- in query, it should not be affected#2019-08-0214:23matthavenerthanks @souenzzo 🙂, that’s exactly what I’m seeing but the semantics were just confusing at first#2019-08-0214:25souenzzoI think that it is discouraged, but this behavior will not change AFAIK#2019-08-0214:27matthaveneryeah, having a “consistent view” of the db and a backref that is “random” doesn’t exactly jive#2019-08-0214:27matthavenerjust have to add more validation to my txns to ensure every child only has one parent#2019-08-0214:29souenzzonew ensure / spec features should help
important to say that it's a stable "random"#2019-08-0214:56matthavenerstable for a given value of db?#2019-08-0214:56matthaveneruntil a reindex or something?#2019-08-0215:12souenzzoI'd rather leave it to someone on the datomic team to answer that.#2019-08-0216:03nilpunningDoes anyone know if datomic.api/gc-storage collects just on the database specified in the connection or across all databases in the Datomic deployment?
https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/gc-storage#2019-08-0222:41rgorrepati#2019-08-0222:41rgorrepatiCan someone tell me why the above wouldn’t work?#2019-08-0300:46souenzzo@rgorrepati this exception is because this value isn't printable
if you try to print/str x it should throw too#2019-08-0300:46rgorrepati@souenzzo I can print x though#2019-08-0300:47rgorrepati#2019-08-0302:22souenzzoreally weird. No idea.#2019-08-0302:22souenzzoIs it a "raw" REPL or IntelliJ/nREPL/CIDER?#2019-08-0514:38matthavenerthe docs for untuple reference something called :db/composite, but I don’t see it on the reference page https://docs.datomic.com/on-prem/query.html#untuple https://docs.datomic.com/on-prem/transactions.html#dbfn-composite#2019-08-0516:19marshall@matthavener that’s a typo. i’ll fix it#2019-08-0516:28marshallfixed#2019-08-0613:08eoliphantHi, running into a weird issue on datomic cloud, where comparisons on float fields are failing to match. In the sample below from one of my devs, test-var is something like 123456.0. Even grabbing it directly from the db and handing it right back in the next query isn’t matching.
(def test-var (first (ffirst (d/q '[:find (take 10 ?nga)
:where [_ :nga-can/nga-id ?nga]]
db))))
(d/q '[:find ?e
:where [?e :nga-can/nga-id test-var]]
db)
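A sketch of the second query with the value bound through :in rather than embedded in the quoted form (inside the quote, test-var is read as a plain symbol, not the var):

```clojure
;; Sketch: bind the value as a query input so it is actually compared.
(d/q '[:find ?e
       :in $ ?nga
       :where [?e :nga-can/nga-id ?nga]]
     db test-var)
```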
#2019-08-0613:14souenzzotest-var is quoted in this code.#2019-08-0613:17eoliphantah hell, copied and pasted that from him, I'm on my phone lol. missed that, thanks, will dbl check#2019-08-0613:13alexmillercomparisons on floats are generally problematic due to imprecision (this is a generic issue in any language using IEEE 754)#2019-08-0613:13alexmillerthat may not be your problem, but just something to be aware of#2019-08-0613:16eoliphantyeah I thought that that might be the case. But, even using the ‘same’ value seems to be problematic.#2019-08-0613:27matthavener@eoliphant could it be NaN? that will fail even self equality tests#2019-08-0613:43eoliphantlooks like it’s good ol IEEE floats at the end of the day#2019-08-0614:11jarethttps://twitter.com/datomic_team/status/1158742202270572546#2019-08-0617:17m0smithHi all, how do I retract all entities matching some filter? For example, the entity has a :date attr and I want to retract all entities where :date is in a given date range#2019-08-0617:36matthavener(d/transact conn (map (fn [e] [:db.fn/retractEntity (:db/id e)]) (filter #(in-range? (:date %)) entities))) ?#2019-08-0617:37matthavenerthere’s not really any notion of DELETE FROM table WHERE date >= ... if that’s what you’re looking for#2019-08-0618:05John MillerCould somebody sanity check a model I came up with?
We need to use a full-text search to find entities, and then return a bunch of attributes that are stored in Datomic. So I have created a query function as an ion that calls out to cloudsearch and returns a set of ids and scores. I then look up the id normally. The query looks something like this:
(d/q '[:find (pull ?e [*]) ?score
:in $ ?query
:where [(ions.cloudsearch/find-by-query ?query) [[?id ?score] ...]]
[?e :generic/id ?id]]
db
query)
So two questions - Is this a reasonable model? If so, are there any resources we might need to worry about if CloudSearch takes a long time to respond (e.g. JVM threads? Memory? Certainly not CPU, since it would be IO-bound). We’re expecting potentially 10s of requests per second, and CS typically responds in 10s of ms but sometimes stalls and takes multiple seconds. When that happens, 40-50 requests might build up before CS responds.
We saw a rash of :busy anomalies once since we started trying to make this work. It may be unrelated, but I wanted to see if anybody knew of any potential pitfalls before we go too far down this path.#2019-08-0618:28m0smith@matthavener Thanks. I wish google were better at finding this information#2019-08-0618:31m0smithI would do a separate query to get the "entities"?#2019-08-0618:37csmyes, you would use a separate query in that case. Though remember that those entities aren’t removed from the database, the history is intact; what you’re trying might be better done in your queries with a rule that filters :date after the date you’re interested in#2019-08-0618:37csmif you’re attempting to get rid of old data to save space, datomic might not be the right fit for your problem#2019-08-0619:00m0smithFor now I think I do want to retract them. We want the history so we can always trace the data. Thanks again#2019-08-0619:05Joe Lane@jmiller If your cloudsearch request blocks I would say it’s a bad idea to do that in the query. Instead, why not issue the cloudsearch query, then take the results and pass them into the datomic query? We have a homegrown datomic cloud lucene integration that does something similar to this. Granted, the lucene indexes are relatively small (3gb) so it’s low overhead.#2019-08-0620:31John MillerThanks for the response, Joe. It generally only stalls for a couple seconds so my hope is that Datomic handles it fine. I’ve successfully done similar things in UDFs in MySQL, but that isn’t built on Java so I am concerned that the gotchas are different.#2019-08-0701:04puzzlerThe datomic website says datomic requires jdk 7 or 8. Still true, or is website out of date?
https://docs.datomic.com/on-prem/get-datomic.html#2019-08-0712:06marshall@puzzler On-Prem ?#2019-08-0719:27jarethttps://forum.datomic.com/t/datomic-cloud-482-8794/1117#2019-08-0719:39jaretAlso an important notice on the latest release:#2019-08-0719:39jarethttps://forum.datomic.com/t/issue-with-t2-instances-important/1118#2019-08-0720:15joshkhIs there a performance / speed benefit when querying on attributes that reference a :db/ident rather than a keyword attribute value? For example:
; schema
[
; installed :db/ident
{:db/ident :season/winter}
; a reference attribute to point to the :season/winter :db/ident
{:db/ident :year/season-ref :db/cardinality :db.cardinality/one :db/valueType :db.type/ref}
; a generic keyword attribute
{:db/ident :year/season-kw :db/cardinality :db.cardinality/one :db/valueType :db.type/keyword}
]
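For reference, a sketch of how the two modelings above would be asserted (client-API style transact; an editor's illustration, not from the thread): the ref attribute resolves :season/winter to the ident's entity id, while the keyword attribute stores a plain keyword value:

```clojure
;; Sketch: same logical fact, two encodings.
(d/transact conn
  {:tx-data [{:year/season-ref :season/winter}    ; ref -> :season/winter entity
             {:year/season-kw  :season/winter}]}) ; stored as a plain keyword
```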
#2019-08-0722:09patChanges link for latest free release is broken#2019-08-0806:50karlmikkoWe just had an alarm from our datomic transactor AlarmLogWriteTimedOut, however I can't seem to find out from google searches what causes this alarm. All I have been able to find is https://docs.datomic.com/on-prem/monitoring.html#alarms which says to contact datomic support. I thought I would ask here in case others had seen this and to potentially share the knowledge once we find out what causes this.#2019-08-0813:27jaret@karlmikko I’d definitely log a case to support, especially if you’re still seeing errors. But generally that alarm indicates the transactor timed out waiting for storage to acknowledge a write, specifically to the transaction log.#2019-08-0822:49karlmikkothanks @U1QJACBUM - I will lodge a case today - the thing I was a bit confused about was the term log, as it could be the log file on disc or the transaction log.#2019-08-0900:46karlmikkoI managed to find the exception thrown in the logs and it was a timeout talking to dynamodb, and looking at dynamodb metrics at the time there were no failures and plenty of read/write capacity.#2019-08-0813:28jaretAs an update to the issue reported yesterday with t2 instances. We resolved the issue by working with AWS last night.#2019-08-0813:28jarethttps://forum.datomic.com/t/resolved-issue-with-t2-instances/1118#2019-08-0816:44m0smithI keep running into problems where library code trying to determine whether to use the
Peer or Client API gets it wrong. See https://software-ninja-ninja.blogspot.com/2019/08/datomic-ions-lget-does-not-exist.html My question here is, is there a well defined way to determine which API the code is using? Followup question: Are more people using the Peer or the Client? The Datomic Cloud seems to require the Client but are there a lot of people also using the Peer?#2019-08-0904:33QuestHi @U050VTWMB,
I found an issue that may be related in the onyx-kafka plugin. The modern Datomic Pro (peer lib) includes namespace datomic.client where I don't believe it used to. see https://clojurians.slack.com/archives/C051WKSP3/p1565070325058400 -- you may be able to make use of the :exclusions workaround#2019-08-0904:34QuestI fixed the auto-detection mechanism for onyx-datomic -- perhaps a similar fix is needed in one of the libraries you're consuming. https://clojurians.slack.com/archives/C051WKSP3/p1565111849063700#2019-08-0904:36Questcould be useful to run lein pom && mvn dependency:tree -Dverbose=true, should show you if anything besides datomic-pro is pulling in the client lib dependency.#2019-08-0913:12m0smithmany thanks#2019-08-0822:36m0smithAnother question: datomic.client.api/index-range seems to support :limit and :offset arguments but it looks like they are ignored. Is that the case or is there a good example of them being used? See https://docs.datomic.com/client-api/datomic.client.api.html#var-index-range#2019-08-0912:54Mark AddlemanTrying to use ions cast/event on a local desktop environment and am getting
Syntax error (IllegalArgumentException) compiling at (src/mtz/server/email/core.clj:46:3).
No implementation of method: :-event of protocol: #'datomic.ion.cast.impl/Cast found for class: nil
What am I doing wrong?#2019-08-0913:07Mark AddlemanOh, the calling code is (cast/event {:msg (str "Starting " daemon-name)})#2019-08-0913:36Joe Lane@UAMEU7QV7 All examples of my cast/event calls are using a string value as my :msg value.
(cast/event {:msg "VerifyAuthChallengeTrigger"
::correct correct?})
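A sketch for the daemon case, under two assumptions not confirmed in the thread: that the "No implementation of method: :-event ... class: nil" error above comes from cast not being initialized outside the Ion runtime, and that variable data belongs under namespaced keys rather than interpolated into :msg:

```clojure
;; Sketch (assumptions noted above): for local dev, redirect casts first;
;; otherwise cast/event has no Cast implementation to dispatch to.
(cast/initialize-redirect :stdout)

;; Keep :msg a static string; attach variable data under a namespaced key.
;; daemon-name is the variable from the question above.
(cast/event {:msg "DaemonStarting"
             ::daemon-name daemon-name})
```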
#2019-08-0913:37Joe LaneI’ve never seen one with (str "Starting " daemon-name). Maybe the right thing is to attach the daemon-name as a namespaced keyword and pick a static string as the :msg text?#2019-08-0916:01grzmWhat happens during the BeforeInstall phase of a Datomic Ion deploy?#2019-08-1000:15jplazaWe are testing datomic-cloud and are planning to start using it soon. We have a business SaaS product and currently use a single multi-tenant database. The question is, is there any recommended architecture for multi-tenant apps? Single database, multiple databases?#2019-08-1015:47Mark AddlemanI had success with the following architecture: design a schema for multi-tenancy but place each tenant in a separate database. We also had an admin or account database that served as a catalog of accounts.#2019-08-1015:49Mark AddlemanBy designing the schema for multi-tenants, we were able to more directly handle new business requirements around free-tier accounts (all of those went into a single db) and we anticipated that would be easier to handle requirements around subaccounts#2019-08-1110:30jplazaLet me see if I understand what you are saying. You kept an :account/id attribute for every record that needed that, but instead of using one db you used multiple dbs?#2019-08-1110:36jplazaI was considering using a different db for each tenant (account) to be able to get rid of the :account/id in every single record. So I wanted to know if there was some hard limit on the number of dbs you can create in datomic or if it’s not a best practice, etc#2019-08-1114:48Mark AddlemanYes, I kept :account/id and it (almost always) had the same value within the db.#2019-08-1114:50Mark AddlemanI don't believe there is a hard limit on number of dbs. However, Stu once said that an implementation detail kept multiple dbs from being as performant at concurrent transact operations as it theoretically could be.
I suggest you contact Datomic support if you are concerned about high throughput concurrent transactions across dbs.#2019-08-1123:00jplazaThanks a lot @UAMEU7QV7 for sharing your thoughts#2019-08-1002:29Sam FerrellBeginner question... using a datalog query, how would I assert the value is non-nil? [?e :my/attr ???]#2019-08-1012:09benoitYou can't have nil values in Datomic so you want to assert that an attribute exists for the entity, which you can do with [?e :my/attr].#2019-08-1213:42marshallyou can also use the missing? predicate: https://docs.datomic.com/on-prem/query.html#missing and https://docs.datomic.com/cloud/query/query-data-reference.html#missing#2019-08-1215:05Sam Ferrellthank you both!#2019-08-1220:26mafcocincobeginner question: My company is considering moving to Datomic in the next year or so. Can anyone point me to hard performance numbers? Specifically I'm interested in the volume of transactions the transactor can support. I know this will be dependent on the hardware that it is running on but I'm trying to get some "back of the napkin" estimates on what kind of volume we typically could push through the transactor.#2019-08-1221:29shaun-mahood@mafcocinco https://www.datomic.com/room-keys-story.html is the main example I used for scale when I did the same thing at my company - but we're doing small enough data that it really doesn't matter outside of having a number to point to as far as what is possible.#2019-08-1222:14mafcocincoThanks!#2019-08-1303:52xiongtxI’m not sure why the use of return keys in this example Datomic query isn’t working. It’s straight out of the return maps example: https://docs.datomic.com/on-prem/query.html#return-maps
(d/q '[:find ?artist-name ?release-name
:keys artist release
:where [?release :release/name ?release-name]
[?release :release/artists ?artist]
[?artist :artist/name ?artist-name]]
db)
I get a
2. Unhandled com.google.common.util.concurrent.UncheckedExecutionException
java.lang.IllegalArgumentException: Argument :keys in :find is not a
variable
1. Caused by java.lang.IllegalArgumentException
Argument :keys in :find is not a variable
This is datomic-pro-0.9.5930#2019-08-1413:47marshallAre you using peer or client? What version of client (if that’s what you’re using)
I just tested this exactly as pasted with 0.9.5930 and it works fine for me.#2019-08-1417:54xiongtx[com.datomic/datomic-pro "0.9.5561.50"], which I believe is the peer#2019-08-1417:56xiongtxSeems to work w/ 0.9.5930. Maybe this feature was introduced very recently?#2019-08-1417:58marshallthat’s correct#2019-08-1417:58xiongtx👌#2019-08-1417:58marshallit was added in 0.9.5930 i believe#2019-08-1314:27eoliphanthey @jaret quick note, in the latest cloud rev, you guys fixed tx-range’s result to return a :data key per the api doc, but the Log API discussion in the docs still refers to :tx-data https://docs.datomic.com/cloud/time/log.html#2019-08-1317:55jaretgood catch. I’ve fixed the table. should be visible on refresh.#2019-08-1314:33eoliphanthey @mafcocinco it’s probably nearly impossible to get something that would be meaningful for your use case without mocking a bit of it up. Everything from tx size, your use of tx functions, etc etc is going to affect any number. There are the more general guidelines, like it’s definitely not for ‘write scale’ apps, raw ingest of clickstream, IoT, etc etc data. But at least for us, it’s more than adequate for our typical OLTP scenarios. Nice thing though is given the ease of modeling, etc, even if you only have a rough idea of your use case, it’s gonna be pretty easy to create a benchmark that will give you some of what you need#2019-08-1314:35eoliphantand if you’re planning to run Cloud, getting the backend setup is just a few clicks in the marketplace#2019-08-1314:36mafcocincoThanks. That is kind of what I was thinking. Going to need to mock something up and see how things look.#2019-08-1314:41eoliphantnp, again, the nice thing i’ve found about the ‘universal info model’ is that it strikes a nice balance between a relational schema and the wild west of something like MongoDB.
You can define the attributes you think you need, then you have a pretty large degree of freedom to mess around with creating entities that you think are representative for your testing. Also, make sure you at least skim through the best practices section of the docs, as there are a few things in there that could affect your assessment if you’re not aware of them#2019-08-1322:07tylerIs there any way to tap into aws codedeploy hooks for ions deployments? Would like to run our own checks to rollback on failure.#2019-08-1322:33Joe Lane@tyler We made a codebuild script with different phases, one of which deploys ions, as well as other stuff.#2019-08-1322:34Joe LaneWe did that because we couldn't find a nice codedeploy hook for what you're describing.#2019-08-1322:35tylerInteresting. Will look into that approach, thanks.#2019-08-1404:15johnjelinektryna set up ions#2019-08-1404:15johnjelineklooks like something error'd:
clojure -A:dev -m datomic.ion.dev '{:op :deploy-status :execution-arn arn:aws:states:us-east-2:101416954809:execution:datomic-dev-Compute-784GREJAJTLX:dev-Compute-784GREJAJTLX-bd6deb15afeee59dd2dd16943cf3c0313f534c34-1565755664290}'
{:deploy-status "FAILED", :code-deploy-status "FAILED"}
#2019-08-1404:16johnjelinek{...
"status": "Failed",
"errorInformation": {
"code": "HEALTH_CONSTRAINTS",
"message": "The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems."
}
}
#2019-08-1404:16johnjelinekany idea how to troubleshoot?#2019-08-1404:33johnjelinekall my CloudWatch Logs look like:
START RequestId: a012efad-3c01-4a75-8b08-615978c5f177 Version: $LATEST
2019-08-14T04:11:53.124Z a012efad-3c01-4a75-8b08-615978c5f177 { event:
{ codeDeploy: { deployment: [Object] },
lambda: { cI: 4, c: [Array], uI: -1, u: [], dI: -1, d: [], common: [Object] } } }
END RequestId: a012efad-3c01-4a75-8b08-615978c5f177
#2019-08-1404:38johnjelinekmade an issue for this: https://github.com/Datomic/ion-starter/issues/5#2019-08-1413:42marshallDid you examine your Datomic system logs? https://docs.datomic.com/cloud/operation/monitoring.html#searching-cloudwatch-logs#2019-08-1413:42marshallyou need to determine why the instances are not starting up - usually caused by an error in the ion code that is preventing it from loading#2019-08-1423:21johnjelinekI was using the starter ion code#2019-08-1423:31marshallSearch your cloudwatch logs for the system you deployed to#2019-08-1423:31marshallSee if any errors or exceptions show up in there#2019-08-1501:28johnjelinekI posted the messages from cloudwatch logs above#2019-08-1501:35marshallThe logs from your datomic stack. Not codedeploy#2019-08-1501:36marshallTake a look at the link to the docs i posted. It includes details about finding the datomic stack logs#2019-08-1408:48mkvlrwe’ve been running with a shared valcache for about a week in production now. When deploying, valcache is briefly accessed by two instances which we heard from @jaret is not offically supported but should work. We’re now seeing EOFException pop up originating in datomic.index/reify every now and then. Is this a setup you plan to support or should we stop running like this?#2019-08-1408:53mkvlrthis is the stacktrace if it helps. Also happy to report this elsewhere if that’s better.#2019-08-1408:55mkvlrAnother issue we’ve seen is this stackoverflow error. This occured only once and due to recursion we don’t have the full stacktrace but there’s datomic.query/fn in the stacktrace. We’re thinking to increase the number of frames printed in our bug tracker. Any other advise on how to track this down? Thanks!#2019-08-1413:18jaret@mkvlr we do not currently have plans to specifically support valcache being accessed by two instances. 
I theorized that it should work based on seeing multiple separate services share valcache, but it appears to affect indexing with that EOFException.
Re: your other error. I’d be happy to look at your Datomic logs to see the error in query. If you’d like to open a case with support (the support email address) we can use that to share files and we won’t lose our communication to slack archiving. In general, I think it would be useful to look at the entire datomic log for both errors.#2019-08-1414:52mkvlr@jaret thanks. Talking to my colleagues we believe the EOFException did occur before we were running with two nodes. I guess we’ll reconfigure our nodes to use different valcaches and let you know if it does happen again. And will get in touch with support for the query error, thanks again!#2019-08-1414:53jaretOh interesting. It might be worth it to have us investigate the EOFException as well via the support portal.#2019-08-1414:53jaretEspecially if you’ve kept logs from before and after the switch to sharing valcache.#2019-08-1500:27tylerShould a fresh connection be retrieved for every request with datomic cloud or should you cache the connection?#2019-08-1500:28marshallhttps://docs.datomic.com/cloud/client/client-api.html#connection#2019-08-1500:31tylerHm that’s what I thought. Seeing something that looks like a memory leak though when retrieving a db connection and db value every request. Will dig into it more.#2019-08-1501:10kennyThe docs for as-of (https://docs.datomic.com/client-api/datomic.client.api.html#var-as-of) say:
> Returns the value of the database as of some time-point.
What is "time-point"? A Date? Epoch millis? Datomic's t?#2019-08-1501:22kennyI'm assuming it's similar to Datomic peer: https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/as-of
> t can be a transaction number, transaction ID, or Date.#2019-08-1506:40Brian AbbottIs anyone from Datomic Support currently available? We are having a critical outage at the moment.#2019-08-1507:12Brian AbbottPlease see Cognitect Support Ticket 2327 🙂 Thank you!#2019-08-1514:29jaretHi Brian, I responded on the ticket a few hours ago.#2019-08-1523:07Brian AbbottThank you again so much Jaret!#2019-08-1508:03mkvlr@jaret my colleague @kommen submitted the issue to the support email address. Got a request to create our zendesk accounts but ran into an XSS error when trying to set my password on …#2019-08-1514:30jaretIt looks like they were eventually able to create the ticket. Did they resolve the error? Was it transient or are they still having issues registering?#2019-08-1514:32mkvlrjust tried again, still doesn’t let me assign a password#2019-08-1514:32mkvlr
#2019-08-1514:34jaretWhat browser are you using and do you use adblock (can you try without if so to confirm)?#2019-08-1514:42grzmI have a transaction function that may return empty tx-data if certain criteria aren't met. I'd like to take a particular action only if there were datoms transacted. My understanding is that every transaction that does not fail will have at least a single datom in the tx-data of the result for the txInstant of the transaction. From what I've observed, I believe I can test whether the count of the datoms in the tx-data of the result is greater than one to determine if any additional datoms were asserted or retracted. Is this something I can rely on? Anyone have a better approach?#2019-08-1515:01mgrbyteAssuming 1 datom transacted, you could also check if :db/txInstant is the only attr asserted in the "transaction entity". If you're not using reified transactions then this additional check is probably moot.#2019-08-1516:02grzmYeah, I'd like to save doing an additional database lookup: I guess I could assume that the :db/id of the :db/txInstant attribute is unchanging and not do a lookup, but inspecting the datom attribute db/id in a tx-data that includes only a single datom seems redundant if I know that each non-anomalous transaction result will have at minimum a single :db/txInstant attribute. Thanks for thinking through this with me.#2019-08-1517:41tylerIs there a recommended way to hook up a java agent to a query group? Running into a strange memory issue with ions and we are having a hard time debugging with the datomic memory monitor on the provided dashboard.#2019-08-1520:08Laverne SchrockWe have one Datomic Cloud deployment running version 477-8741, and another deployment running 480-8772. On the older version we are able to make assertions about the entity with id 0, but in the newer version we cannot, due to the following error :
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message "Boot datoms cannot be altered: <actual datom omitted, let me know if you need it>",
:db/error :db.error/datom-cannot-be-altered}
This seems like reasonable behavior, but it doesn't seem to be documented in the changelog.
Does anyone know if this is an expected change?#2019-08-1607:46danierouxOn https://docs.datomic.com/cloud/ions/ions-monitoring.html I see that “You can send all alert, event, and dev output to registered taps by calling initialize-redirect with a target of :tap”
With com.datomic/ion {:mvn/version "0.9.34"} and (datomic.ion.cast/initialize-redirect :tap) I get:
initialize-redirect expects :stdout, :stderr, or a filename
What am I missing?#2019-08-1710:46danierouxNow works with com.datomic/ion {:mvn/version "0.9.35"}, thanks @U1QJACBUM#2019-08-1611:18dmarjenburghA question regarding the EULA. The passage that reads as follows:
> Upon termination of this EULA for any reason, you will erase or destroy all copies of the Datomic Cloud Software, or part thereof, in your possession, if any. Any use of the Datomic Cloud Software after termination is unlawful.
What exactly is meant by the ‘Datomic Cloud Software’? The publicly available `com.datomic/client-cloud` library? The AWS resources running in your account? Or the code running on the marketplace image?
I need to answer these questions to the ‘vendor management’ department. We really want to employ Datomic Cloud in the organization.#2019-08-1622:00xiongtxFrom the datomic documentation: https://docs.datomic.com/on-prem/storage.html#connecting-to-transactor
> If the transactor cannot bind to its publicly reachable IP address (e.g. the transactor is on a VM that doesn’t own or can’t see its external address), you will need to provide a value for alt-host on the transactor with the publicly reachable IP address in addition to the host property.
We’ve got the transactor deployed in a container behind a load balancer. Should the alt-host be the load balancer’s URL? Where do we provide the LB’s port?#2019-08-1700:15favilaAlt-host should be the balancer’s external hostname#2019-08-1700:15favilaPorts cannot differ #2019-08-1700:16favilaI hope you are not actually splitting traffic? Datomic wants to be able to address the transactor individually #2019-08-1700:48johnnyillinoisThe traffic is not split #2019-08-1700:48johnnyillinoisAny reason the ports are not allowed to differ#2019-08-1700:49johnnyillinoisOr do you know how big of a change it would be to allow the ports to differ?#2019-08-1700:49johnnyillinoisThe LB works just as a proxy #2019-08-1702:10favilaThe ports can’t differ simply because there’s no option to do so#2019-08-1821:31Chris SwansonHey does anyone use vase with datomic cloud? It wasn't obvious to me how to connect them; the datomic uri key vase uses is different from the datomic cloud client library which needs extra AWS info.#2019-08-1914:35Joe Lane@chrisjswanson I recently succeeded at this but ran into several small issues with it. Granted, I'm trying to deploy via ions and that was where the issues were.#2019-08-1915:06Chris Swanson@lanejo01 if you'd care to share details, I'm quite curious. Did you end up having to write a custom interceptor to add the datomic connection to the chain? Or modify the vase code to let it handle datomic cloud connection URIs?#2019-08-1915:08Joe LaneBoth#2019-08-1915:08Joe LaneI can share more later#2019-08-1915:09Chris SwansonThanks man, good to have that insight, I'd appreciate it#2019-08-1915:09Joe LaneUsing Ions or not?#2019-08-1915:11Chris SwansonYes but probably not to deploy vase, just custom query functions. Vase would likely go on k8s or lambda directly.
But I'm still exploring, so I'm also really curious how it ended up working for you on ions#2019-08-1915:11Chris SwansonIf I could just deploy vase straight as an ion that would be pretty nice#2019-08-2007:29Ivar RefsdalWhen I connect to a database with a URL like "datomic:", Datomic will by default log this password in plaintext. Is it possible to avoid this? Would it help to use the map syntax for the connect call?#2019-08-2009:49unbalancedIs this for corporate security policy? Are you running on a Linux server? Does the message appear at initialization?
Not a perfect answer, but if the answers to the above questions are "yes" you could always use awk to filter out the password-logging line ...
I've "solved" it using a timbre log middleware. Nevertheless I think it's bad practice by Datomic to log passwords in plaintext by default.#2019-08-2013:40marshallthe printConnectionInfo configuration https://docs.datomic.com/on-prem/system-properties.html#transactor-properties
will prevent Datomic from logging storage password on startup#2019-08-2108:12Ivar RefsdalThanks @U05120CBV
My problem was that the peer logged the password, not the transactor.
Or will putting this property also affect the peer?#2019-08-2113:25marshallAh, i misunderstood. I don’t believe that will affect the peer#2019-08-2113:25marshallThat seems like something that should be registered as an improvement request
Can you access the feature request portal (link in top menu bar of http://my.datomic.com dashboard)? If so, that would be a great one to add#2019-08-2015:31grzmI have a Datomic Ion which is called by a scheduled Cloudwatch event every 10 minutes. 8-9 times out of 10 I get the following error in the logs for the lambda:
{:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message "Connection reset by peer", :clojio/throwable :java.io.IOException, :clojio/socket-error :receive-header, :clojio/at 1566310809474, :clojio/remote "[ip redacted]", :clojio/queue :queued-handler, :datomic.ion.lambda.handler/retries 0}
The function called by the lambda seems to be executing fine: I cast an ion event during execution and can see that in the corresponding Datomic compute logs. Viewing the CloudWatch metrics for the lambda via the console also doesn't show these errors, so I think the ion is actually working fine. What do these anomalies indicate?#2019-08-2015:39m0smithIs there a Datomic Console deployed with Datomic Ions/Cloud?#2019-08-2015:46grzmNope. REBL can be very useful, particularly its ability to leverage the nav metadata that decorates Datomic results.#2019-08-2118:02m0smithHow do I get REBL running with ions?#2019-08-2118:04grzmHave you used REBL before? If not, take a look here: http://rebl.cognitect.com#2019-08-2118:06grzmIf you've got REBL up and running and your datomic-socks-proxy running, Datomic query results are navigable using REBL. It's been years since I used the Datomic console, so I can't provide a great comparison, but I've found REBL to be really useful, in general, and when coupled with Datomic.#2019-08-2118:06grzmThere's really nothing special about ions in particular wrt REBL.#2019-08-2015:40Joe Lane@grzm I think the aws lambda may have timed out (jvm startup) but the execution was still performed. What happens if you decrease the CW event to every 2 minutes?
CW events can issue http calls#2019-08-2016:19grzmOh, that's an idea.#2019-08-2112:50unbalancedI thought I remembered somewhere in the documentation that you can add a docstring to a transaction. Did I hallucinate that?#2019-08-2113:56marshallSure. You can put any attr (including :db/doc) on a transaction entity#2019-08-2113:58marshall{:db/id "datomic.tx" :db/doc "my transaction doc"}#2019-08-2116:04unbalancedcan you do that in list form too or just map form?#2019-08-2116:25marshallsure#2019-08-2116:25marshall[:db/add "datomic.tx" :db/doc "my transaction data"]
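A sketch expanding on the two forms above: annotating the current transaction with :db/doc while also asserting ordinary data. This assumes the peer API; `conn` and the :user/name attribute are made up for illustration.

```clojure
(require '[datomic.api :as d])

;; Sketch only: "datomic.tx" is the reserved tempid for the transaction
;; entity itself, so the :db/doc assertion lands on the transaction
;; being processed. `conn` and :user/name are illustrative placeholders.
@(d/transact conn
             [{:db/id  "datomic.tx"
               :db/doc "backfill from nightly import"}
              {:user/name "Ada"}])
```

The doc string then travels with the transaction and can be read back later like any other datom, e.g. via the transaction entity of anything the transaction asserted.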
#2019-08-2200:22unbalancedaha, ty sir!!!#2019-08-2113:13ivanaHello! get-else works with cardinality-one attributes only? how can I check if my cardinality-many attribute has at least one value or not, without dropping rows where this attr is not set?#2019-08-2113:31ivanathe trick with (or [?o :order/my-cardinality-many-attribute ?x] [?o :db/id ?x]) works, but it makes multiple lines when the attribute has many values#2019-08-2114:13marshall@ivana you could use the missing? predicate#2019-08-2114:14marshallAnd a similar or trick#2019-08-2114:15ivanayes, but how can I get all rows, both with the attr missing and non-missing, with just a check for missing and without multiplying lines?#2019-08-2114:21benoit@ivana It's not clear what query you're trying to write. Your or clause above will return all the entities in your database because of [?o :db/id ?x], is that really what you want?#2019-08-2114:23ivanaI simply want to check (!) if this entity has at least one value in its card-many attr or not - with the same entity lines as they are.#2019-08-2114:24ivanaE.g. I have 2 entities, one with 10 values in the many attr, and one with 0. I want 2 rows with a boolean flag#2019-08-2114:27ivanaNot 11 lines, not 1 line. Just 2 - the real number of my entities#2019-08-2114:30benoitThere might be a simpler approach but something like this could work:
[:find ?o ?has-many-attr
 :where
 [?o :other/attr]
 (or (and [?o :order/my-cardinality-many-attribute]
          [(ground true) ?has-many-attr])
     (and [(missing? $ ?o :order/my-cardinality-many-attribute)]
          [(ground false) ?has-many-attr]))]
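An alternative sketch that does the boolean check outside datalog: query for the entities, then pull the cardinality-many attribute and test it in Clojure. `db`, :other/attr, and the :order/* attribute are taken from the discussion above, not a real schema.

```clojure
(require '[datomic.api :as d])

;; One row per entity, with a boolean flag for "has at least one value".
;; Sketch only: `db`, :other/attr, and
;; :order/my-cardinality-many-attribute are placeholders from the thread.
(for [o (d/q '[:find [?o ...]
               :where [?o :other/attr]]
             db)]
  [o (-> (d/pull db [:order/my-cardinality-many-attribute] o)
         :order/my-cardinality-many-attribute
         seq
         boolean)])
```

This trades one extra pull per entity for a query that cannot multiply rows, which may be easier to reason about than the or/ground version.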
#2019-08-2114:31ivanathanks, I'll try it#2019-08-2114:33ivanaat first impression it is exactly what I need, thanks a lot! I'll play with it in real queries#2019-08-2115:20deadghostDatomic does not accept nil attribute values. If I have an entity
{:foo/id 101
 :foo/code :FOO
 :foo/type "bar"}
and I want to update it like so:
(d/transact conn [{:foo/id 101
                   :foo/code :LOL
                   :foo/type nil}])
it will throw an Exception.
If I exclude the nil attribute:
(d/transact conn [{:foo/id 101
                   :foo/code :LOL}])
:foo/type will remain "bar".
:foo/type seems like it needs to be explicitly retracted. I'm currently using the method detailed in https://matthewboston.com/blog/setting-null-values-in-datomic/ to do nil updates. It's a red flag that I need to hand-roll something to do this type of update and it suggests I am not doing things in the correct way. Another approach would be a full retract and insert but I get the feeling there are unexpected behaviors I have not thought about with that approach. How are you all approaching this?#2019-08-2115:30ghadiYou do not need to reassert the whole entity - just retract the part that is no longer true#2019-08-2115:30ghadiWithout even reading that article it is probably misconceived#2019-08-2117:00eoliphantyeah, I think it misses the point. entities are sets of arbitrarily related attributes, not tables; this takes some getting used to but it’s far more powerful once you get the hang of it. And the example of “So what if we need to set a value to null?“, isn’t really one.
This
(datomic/q '[:find ?id ?firstname ?lastname
             :in $
             :where
             [?id :user/firstname ?firstname]
             [?id :user/lastname ?lastname]]
           (datomic/db conn))
should be more like this:
(datomic/q '[:find ?id (pull ?id [:user/firstname :user/lastname])
             :in $
             :where
             [?id :user/firstname]
             ;; - or perhaps -
             [?id :user/id]]
           (datomic/db conn))
No need to ‘simulate null’, and you keep the clean semantics of simply retracting :user/lastname. Also ‘matching for values’ is less efficient (though probably not a big deal in this trivial case), as the engine has to do work to match each ‘clause’. To the extent possible, let your where do the selecting, then just pull what you need in terms of values#2019-08-2117:16eoliphantalso, the edit-or-create-user-txn example seems to conflate empty strings with null/nil. Now that may be desired behavior in certain circumstances, but “” is not nil/null/“not present”, even in a traditional say relational db. Also, figuring out retracts is pretty trivial. A set/difference on the keys of the incoming update and an existing entity will give you that directly. “” as nil, if necessary, can be tacked on with filter prior to the diff#2019-08-2122:54m0smithCalling cast/event from within CIDER results in a StackOverflowError#2019-08-2122:55m0smithExecution error (StackOverflowError) at cider.nrepl.middleware.out/print-stream$fn (out.clj:93). Has anyone else seen this?#2019-08-2123:02andrew.sinclairIs there a function in the Peer API that allows a user to programmatically determine the transactor’s port?#2019-08-2123:02andrew.sinclairWe are using the map uri, with a cassandra callback, so port is not present in the uri.#2019-08-2123:08m0smith(ns bug-demo
  (:require [datomic.ion.cast :as cast]))
(cast/initialize-redirect "/tmp/hamster")
(cast/event {:msg "ShouldNotCauseAStackOverflowErrorInCider"})#2019-08-2222:37telekidAre homogeneous tuples limited to 8 values? https://docs.datomic.com/cloud/schema/schema-reference.html#tuples#2019-08-2222:37telekidseems so, but just wanted to confirm#2019-08-2222:38kennyCan you rename a Datomic Cloud db?#2019-08-2306:25jaihindhreddyI want to extend codeq to analyze Python (2) code. What would that entail?#2019-08-2310:45jaihindhreddyWhere can I get the datomic client library?#2019-08-2312:46kirill.salykinThere seem to be no recent datomic-free releases?
https://clojars.org/com.datomic/datomic-free
0.9.5697 vs 0.9.5951 pro#2019-08-2312:51akiel@kirill.salykin Yes this is a big problem and I really don’t understand why. I already wrote directly to Cognitect but got no answer. Maybe we can write an email together?#2019-08-2314:40kirill.salykinhttps://my.datomic.com/downloads/free#2019-08-2314:40kirill.salykinyou can download latest here#2019-08-2314:40kirill.salykindatomic-free-0.9.5703.21.jar
seems like a peer library#2019-08-2314:47kirill.salykinbut still, it is very outdated#2019-08-2314:47akielI'm currently trying to run it.#2019-08-2315:17akielIt somehow runs, but then I don’t understand that version number and it’s not available on Maven Central.#2019-08-2315:17akielI wrote a new mail to Datomic support.#2019-08-2315:23kirill.salykinI think they won’t respond :(#2019-08-2315:24kirill.salykinYou can use starter edition btw#2019-08-2315:24kirill.salykinIt is free and supports updates for 1 year#2019-08-2315:25kirill.salykinPro starter I think it is called#2019-08-2315:25akielI’m a paying customer of Datomic. I’ll find a way that they respond.#2019-08-2315:25kirill.salykinI see )#2019-08-2616:34akielJust for this thread. @U1QJACBUM answered this question in another thread with:
> We are considering different options for Datomic Free, and would love to hear more about your use cases. You can share your use cases with me via <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>.#2019-08-2618:44kirill.salykinthanks for keeping me posted!#2019-08-2312:52kirill.salykinlast release from 2018…
> Maybe we can write an email together?
let's try, but I doubt it will help a lot#2019-08-2312:59akielMaybe someone else also likes to participate?#2019-08-2313:37joshkhis there a way to retrieve from a datomic cloud client the configuration that was used to create it?#2019-08-2315:44unbalancedAs much as I would love it, I don't really see a whole lot of incentive for them to update the free version 😕#2019-08-2315:44unbalancedThey need to make money somehow#2019-08-2323:05George CiobanuHi! I am trying to model a tree of components (Page, button group, tab, buttons etc) as a hierarchy of maps (each component has a bunch of attributes). It's very similar to the DOM in that if the user deleted a tab group that has buttons as children, the buttons need to be deleted as well. Of course, the user can also move a subtree of components to different nodes etc. It's a standard GUI editor.
I think the best way to model this is to make each node a component so that if any node is deleted its children are deleted as well. Does that make sense? Are there any subtleties I'm missing, and should I manage deletion and the whole hierarchy by hand using plain ref types?
Any thoughts much appreciated. A link to an article is also fine (I tried to RTFM but I never saw anyone use components for hierarchies and am wondering why).#2019-08-2401:01unbalancedIs this secretly a datascript question? Happy to help either way but it would be good to know which direction you're going with it#2019-08-2401:02unbalancedspecifically, whether you're doing in clojurescript or clojure is kind of important#2019-08-2401:08unbalancedmy thoughts are effectively you're going to need a DSL layer to interpret the meaning of the maps as they relate to the components, but you probably already knew that. If you're working in Clojure, you can't have dynamic (runtime) components since you probably don't want to ruin your DB by creating schema on the fly.
I would strongly consider checking https://en.wikipedia.org/wiki/Entity_component_system for some inspiration on how you can create "dynamic" behavior from predefined schema, using the entity component system.#2019-08-2401:15unbalancedRegarding the "deletion", the good news is that you don't really have to "delete" anything, you simply assert what the new structure is.#2019-08-2401:22unbalancedSo the challenge for you will be structuring recursive queries. There is a recursive pull syntax available, but you'd have to carefully structure your schema.
So the "illusion" of a recursive delete would be accomplished by doing a retraction near the root of your graph, aka, asserting an empty membership of children -- this would then break the recursion of your query#2019-08-2401:23unbalancedAnyway that's my two cents, best of luck to you!#2019-08-2403:00George CiobanuHi Goomba! Thank you so much for your help#2019-08-2403:00George CiobanuI'll process and reply once I get what you are saying#2019-08-2403:02George CiobanuNot secretly a Datascript question, I actually intend to store this data structure in Datomic#2019-08-2403:03George CiobanuIt can be either clj or cljs since both my backend and frontend are Clojure(script)#2019-08-2403:04George CiobanuRegarding the DSL layer I don't think I need it, in the sense that the number of component types is fixed and each has a unique schema that's mostly immutable (I might add properties over time but that's it)#2019-08-2403:05George CiobanuSo each map will map to one component#2019-08-2403:05George CiobanuAnd anything in its :children key will be subcomponents (in the GUI sense)#2019-08-2403:06George CiobanuRe deletion that makes sense#2019-08-2403:07George CiobanuAnd while I haven't fully understood recursive queries I'm not concerned since I saw several examples and they seem to make sense#2019-08-2417:35George CiobanuSorry for double posting I just wanted to see if anyone has thoughts on this#2019-08-2417:35George CiobanuHi! I am trying to model a tree of components (Page, button group, tab, buttons etc) as a hierarchy of maps (each component has a bunch of attributes). It's very similar to the DOM in that if the user deleted a tab group that has buttons as children, the buttons need to be deleted as well. Of course, the user can also move a subtree of components to different nodes etc. It's a standard GUI editor.
I think the best way to model this is to make each node a component so that if any node is deleted its children are deleted as well. Does that make sense? Are there any subtleties I'm missing, and should I manage deletion and the whole hierarchy by hand using plain ref types?
Any thoughts much appreciated. A link to an article is also fine (I tried to RTFM but I never saw anyone use components for hierarchies and am wondering why).#2019-08-2418:57favilaThis might be a fit, but generally IsComponent is used to reference an entity which doesn’t have an identity at all apart from its parent#2019-08-2419:00favilaDatomic assumes (but does not enforce) that if there’s an assertion [e iscomponentattr component-e], this is the only datom in the entire db with component-e in the v slot#2019-08-2419:02favilaAt least the d/entity API will also make the reverse-ref of an attr not-a-collection for this reason (even if there is in fact more than one entity pointing to it!)#2019-08-2419:03favilaYou should be careful with “reparenting” a node via an IsComponent attr because you can end up violating this constraint by accident #2019-08-2419:04favilaYou will need a transaction function#2019-08-2419:54George CiobanuThank you Favila much appreciated. I couldn’t find documentation about the assumption you mention, any chance you have a handy link?#2019-08-2420:47George CiobanuSpecifically I'm thinking of this: Components allow you to create substantial trees of data with nested maps, and then treat the entire tree as a single unit for lifecycle management (particularly retraction). All nested items remain visible as first-class targets for query, so the shape of your data at transaction time does not dictate the shape of your queries.
This is a key value proposition of Datomic when compared to row, column, or document stores.#2019-08-2420:47George Ciobanu"all nested items remain visible..."#2019-08-2515:06favilaIt’s not explicitly stated that way anywhere to my knowledge but it’s an inevitable consequence of the special behavior IsComponent attrs get: 1) retractEntity deletes them even if other entities reference them; 2) reverse-ref in entity and pull doesn’t show all reverse refs, only the first one; 3) pull * and d/touch eagerly follow and load the value of those references#2019-08-2419:42Mark AddlemanTrying to deploy an ion to a new Datomic Cloud instance in a new AWS account. The deploy step is failing and CloudWatch logs report
{
  "errorMessage": "No Deployment Group found for name: mbsolo-Compute-KLOG23BUPMGI",
  "errorType": "DeploymentGroupDoesNotExistException",
  "stackTrace": [
    "Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:51:27)",
    "Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:106:20)",
    "Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:78:10)",
    "Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)",
    "Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)",
    "AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)",
    "/var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10",
    "Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)",
    "Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:685:12)",
    "Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:116:18)"
  ]
Weirdly, when I look in the AWS CodeDeploy Console, I see mbsolo-Compute-KLOG23BUPMGI listed in the deployment group.#2019-08-2419:42Mark AddlemanAny thoughts?#2019-08-2419:42Mark AddlemanAny thoughts?#2019-08-2419:49Mark AddlemanJust noticed that the push operation reported an empty :deploy-groups list.
{:rev "5c79aede20d112c7ebbdc8a9a65514451a2a6f19",
 :deploy-groups (),
 :dependency-conflicts
 {:deps
  {com.cognitect/transit-java #:mvn{:version "0.8.311"},
   org.clojure/clojure #:mvn{:version "1.10.0"},
   commons-codec/commons-codec #:mvn{:version "1.10"},
   org.clojure/tools.analyzer.jvm #:mvn{:version "0.7.0"},
   com.fasterxml.jackson.core/jackson-core #:mvn{:version "2.9.8"},
   com.google.guava/guava #:mvn{:version "18.0"},
   org.msgpack/msgpack #:mvn{:version "0.6.10"},
   com.cognitect/transit-clj #:mvn{:version "0.8.285"},
   com.cognitect/s3-creds #:mvn{:version "0.1.23"},
   org.clojure/tools.reader #:mvn{:version "1.0.0-beta4"},
   org.clojure/test.check #:mvn{:version "0.9.0"},
   com.amazonaws/aws-java-sdk-kms #:mvn{:version "1.11.479"},
   org.clojure/core.async #:mvn{:version "0.3.442"},
   com.amazonaws/aws-java-sdk-s3 #:mvn{:version "1.11.479"}},
  :doc
  "The :push operation overrode these dependencies to match versions already running in Datomic Cloud. To test locally, add these explicit deps to your deps.edn."},
 :deploy-command
 "clojure -Adev -m datomic.ion.dev '{:op :deploy, :group <group>, :rev \"5c79aede20d112c7ebbdc8a9a65514451a2a6f19\"}'",
 :doc
 "To deploy, issue the :deploy-command, replacing <group> with a group from :deploy-groups"}#2019-08-2419:58Mark AddlemanI found this: https://forum.datomic.com/t/help-deploying-ion-example/717/7#2019-08-2419:58Mark AddlemanFrom the thread, it looks like the solution is to create a new AWS account, but it would be helpful to open an AWS support ticket.#2019-08-2419:59Mark Addleman@U1QJACBUM Can you confirm this is the same problem? If so, I could use some guidance on what to put in the support ticket#2019-08-2521:35Mark AddlemanI found the problem (spoiler: it's my own fault): I had two ion-config.edn files - one in the proper place and the one that I was editing in the wrong location. The one in the proper place had an app-name pointing to an old and deleted cloud stack.#2019-08-2601:57jaretAh! That would do it. Sorry I didn’t see this until today, Mark.#2019-08-2615:42Mark AddlemanNo worries 🙂#2019-08-2615:43Mark AddlemanI noticed there are a lot of resources not cleaned up when a stack is deleted. Do you guys have much control over that?#2019-08-2608:22robert-stuttaford@jaret the Changes link on http://my.datomic.com/downloads/free for "datomic-free-0.9.5703.21" is broken. Curious - why does pro not also have a release of this version? Ah, I see that pro has a far newer version! Will free get all the new tech at some point?#2019-08-2608:28steveb8nadding my perspective: I’m already paying for Cloud Solo and will soon pay for Cloud Prod but I still use free in my deps.edn to keep things simple. I know that I can add the creds etc and depend on the pro version but I like simple since deps are already full of complexity. However I still want all the new stuff in dev. My 2c#2019-08-2611:10akielI’ve written already to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> and @jaret. I’m also a paying customer of On-Prem Pro for several years.
Hopefully we get an answer soon.#2019-08-2616:28jaretWe are considering different options for Datomic Free, and would love to hear more about your use cases. You can share your use cases with me via <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>. Alexander, I’ve just replied to your Sales e-mail.#2019-08-2616:44kennyIs there a public answer to this question? I am also quite interested.#2019-08-2617:44souenzzoAs a pro and cloud user, it's very frustrating to not have a free version for tooling/prototyping#2019-08-2617:44kennyExactly#2019-08-2618:29tjgIs it common that installing Datomic Cloud Solo fails? (`The following resource(s) failed to create: [Storage...]. . Rollback requested by user.`)
I’ve followed instructions carefully 3 times. Each time, I delete the previous attempt and change the CloudFormation stack name. My account’s definitely “VPC only”.#2019-08-2618:32tjgI’ve read the troubleshooting guide, looked at the following thread, etc...
https://forum.datomic.com/t/datomic-cloud-solo-subscription-install-failing/654#2019-08-2618:37marshalllook at the failed stack#2019-08-2618:37marshallyou’ll have to select “failed” or “deleted” in the cloud formation dashboard#2019-08-2618:37marshallthen look for the Storage stack#2019-08-2618:37marshallFind the first thing that failed so you can determine what the cause was#2019-08-2618:42tjgAhhh thanks! Ok, looks like I need to give my account privileges to perform lambda:CreateFunction. Will do…
Very helpful, thanks again.#2019-08-2618:44tjg(Before, I only looked at the failed stacks. But naturally I needed to look at the deleted stacks…)#2019-08-2620:22kennyIs it ok to call d/client multiple times without memoizing?#2019-08-2621:25joshkhany update on this forum post? we seem to be experiencing the same problem. https://forum.datomic.com/t/insufficient-memory-to-complete-operation/1111#2019-08-2803:27jaretHi @U0GC1C09L, just saw this post. Yes, Gal and I resolved that issue after identifying a problematic query and optimizing the query for most selective clauses first. If you’re seeing this issue we can investigate further in the support portal as the error could be the result of several issues, not just query. Emailing <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> will start a case and we can begin reviewing.#2019-08-2621:35dpkpAny advice on how to approach / solve the backend reporting function in a Datomic focused architecture? For the SQL world, it is common to expose a db replica to business analysts who then leverage 3rd party tools like Tableau or Periscope or Looker and query the domain schema w/ SQL. I don't think any of these services support datomic directly. Allowing teams to explore datomic can be viewed as a non-starter in not-small orgs with an established BI infrastructure like this w/o a good answer to this side of the system. Has anyone here dealt with this?#2019-08-2704:54jaihindhreddyYou could write a program that subscribes to the Datomic TX log, writing the data to a SQL database (as you see fit), and leverage existing BI infra.
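With the peer library, that tx-log subscription can be sketched with d/tx-report-queue; the `write-row!` callback standing in for the SQL side is hypothetical.

```clojure
(require '[datomic.api :as d])

;; Sketch: follow the live transaction log and mirror each datom into a
;; reporting store. Assumes a peer connection `conn`; `write-row!` is a
;; stand-in for whatever SQL upsert your BI schema needs.
(defn mirror-tx-log! [conn write-row!]
  (let [queue (d/tx-report-queue conn)] ; blocking queue of tx-report maps
    (future
      (loop []
        (let [{:keys [db-after tx-data]} (.take queue)]
          (doseq [[e a v tx added?] tx-data]
            (write-row! {:e      e
                         :a      (d/ident db-after a) ; entid -> keyword ident
                         :v      v
                         :tx     tx
                         :added? added?}))
          (recur))))))
```

Note that tx-report-queue only sees transactions from the moment you subscribe; for a historical catch-up you would first walk the log with d/log and d/tx-range.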
I'm interested in this too. Would love to see other ideas.#2019-08-2713:48robert-stuttafordWe have a system which ETLs data to SQL for Metabase to use. This is because we're multi-tenant and we wanted to be totally sure that access is controlled in Metabase (we write a SQL db per client).
We're exploring https://github.com/lambdaisland/metabase-datomic for internal stuff at the moment.#2019-08-2711:01holyjakAny idea if there are any plans to make HTTP Direct available also for Solo topology?#2019-08-2711:07henrikThat would necessitate including an NLB with the Solo topology, which would increase the cost of running it.
Personally, I think it might be worth it, since it would make Solo more accurate for prototyping things that would ultimately run on Prod.#2019-08-2715:25kennyThough we don't use Ions anymore, I do agree that having the option to include an NLB for more accurate prototyping is quite valuable.#2019-08-2718:34holyjakI wish there was also a cheaper (and necessarily less performant) version of the prod topology...#2019-08-2713:05souenzzoanyone running datomic on-prem inside an ions setup? (I'm worried about deps conflicts)#2019-08-2713:07marshallThe peer library will not run in an ion#2019-08-2713:12souenzzoI can't set up my own transactor on AWS and connect to it from inside an ion?#2019-08-2713:13marshallNot at this time#2019-08-2714:23eoliphantHi, I just tried my first ‘from scratch’ install of the separate DC storage and compute stacks
go to outputs and find the name of the codeDeployApplication#2019-08-2714:33marshallmake sure that’s the same as what you’re using
also check that your AWS creds (and region) you’re using to push are correct for that account#2019-08-2714:39eoliphantdammit lol. had already checked all of those things. I’m running with God creds.. but sorta forgot to set AWS_PROFILE 😉#2019-08-2715:57eoliphantOk running into another (I think actual) issue. the log is full of streams that appear to be the result of instances cycling due to this msg
“:datomic.cluster-node/-main failed: Production Compute Stack not found for query group”.
I’ve only installed storage and compute#2019-08-2715:58marshallwhen you ran the compute stack, are you sure you used the compute template and not the query group template by accident?#2019-08-2715:59eoliphantI’ll retry it but 99.9% certain that I didn’t Is there anything unique in the QG outputs, that I could use as a quick sanity check?#2019-08-2716:00eoliphantwell actually, just reviewing the parameters. None of the QG stuff like “max query group instances” is there#2019-08-2716:01eoliphant#2019-08-2716:02marshallwhat version ar eyou launching?#2019-08-2716:03eoliphantlatest 482-8794#2019-08-2716:03marshalland the log errors are showing up in the datomic stream for this system?#2019-08-2716:04eoliphantand from the template
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "Creates compute resources needed to run Datomic.",
So looks right#2019-08-2716:04eoliphantyeah#2019-08-2716:04eoliphanti tried a deploy#2019-08-2716:04eoliphantand got that ‘failed because too many instances failed deployment’ msg#2019-08-2716:04eoliphantthen started digging#2019-08-2716:07eoliphantand the ASG activity history is full of Terminating/Launching messages#2019-08-2716:07marshallfrom the failed deploy
is the system up and running normally prior to doing a deploy?#2019-08-2716:08eoliphantyeah, this is the one I’ve been working on. The ‘cycling’ doesn’t appear to have started until I tried the deploy#2019-08-2716:09marshallwhat is the application name in your ion-config and what is the deploy command you’re running#2019-08-2716:09eoliphantand I was just using ion-starter to test the deployment#2019-08-2716:09eoliphant:app-name "ps-cp"}#2019-08-2716:09eoliphantsame as the stack#2019-08-2716:09marshallag#2019-08-2716:09marshallah#2019-08-2716:09marshallthe compute stack?#2019-08-2716:09eoliphantand didn’t override the app name#2019-08-2716:10marshallthat’s the issue i think#2019-08-2716:10eoliphantsorry the “system name”#2019-08-2716:10eoliphantok?#2019-08-2716:10marshalllook at the outputs of your compute group stack#2019-08-2716:10eoliphanti have ps-cp and ps-cp-compute#2019-08-2716:10eoliphantok#2019-08-2716:10eoliphantSystemName ?#2019-08-2716:10marshallso your CodeDeployApplicationName needs to match#2019-08-2716:11marshalldoes it?#2019-08-2716:11marshalland then when you run the deploy what’s the command?
the group needs to match the CodeDeployDeploymentGroup#2019-08-2716:12eoliphantah.. hmm. i was just using the output of :push#2019-08-2716:13eoliphantbut it’s setting the :group to the name of the compute stack#2019-08-2716:13marshallwhich should match the CodeDeployDeploymentGroup#2019-08-2716:13eoliphantah yes it does#2019-08-2716:13marshallhrm. guess that’s not the issue#2019-08-2716:14eoliphantCodeDeployApplicationName ps-cp
CodeDeployDeploymentGroup ps-cp-compute
SystemName ps-cp
#2019-08-2716:14eoliphantfrom the outputs#2019-08-2716:15eoliphantOver in CodeDeploy I have a ps-cp app with a ps-cp-compute deployment group#2019-08-2716:16marshallhrm#2019-08-2716:19eoliphantand the only thing that’s different (we’ve dozens of solo and prod sized stacks) in the large is that I did the separate stacks from the outset vs the marketplace->first upgrade, etc#2019-08-2716:20marshallthat definitely shouldn’t matter#2019-08-2716:20eoliphantyeah I didn’t think it would#2019-08-2716:22eoliphantand there’s not much in the logs. each stream is the same 4 lines of what looks like normal startup housekeeping, then the error#2019-08-2716:22marshallcan you paste the full error line?#2019-08-2716:23eoliphantsure#2019-08-2716:23eoliphant`#2019-08-2716:23eoliphant#2019-08-2716:25marshallit’s definitely trying to launch a query group#2019-08-2716:31marshallnotice: "IndexGroup?": false,#2019-08-2716:32marshall@eoliphant can you file a ticket and we’ll look further into it#2019-08-2716:38marshall@eoliphant can you try deleting the compute stack and re-creating it?#2019-08-2716:41eoliphanthey sorry stepped away for a sec will retry#2019-08-2716:42marshall@eoliphant also, can you provide the first lines of log from when you originally created the stack (when it succeeded originally)?#2019-08-2716:42marshallin particular, the one with the configuration information in it#2019-08-2716:47eoliphantsure#2019-08-2716:47eoliphantlet me find it#2019-08-2716:47eoliphantthere’s quite a few of them now lol#2019-08-2716:51eoliphantwell this is peculiar, the oldest one I can find is showing the same thing. ugh.. 
in any case deleting the compute now#2019-08-2716:52eoliphantwill delete all the streams, etc as well#2019-08-2717:07eoliphantok it’s taking forever, I’d added a VPC endpoint, and forgot to remove it before I started, I thought I caught it before it got there, but I suspect we’re in CF’s 60 min timeout#2019-08-2717:39marshall@eoliphant do you have a lot of ASGs in your account (like more than 50)?#2019-08-2717:50eoliphant55#2019-08-2717:50eoliphant🙂#2019-08-2717:50eoliphantand it’s back up and went right to blowing up#2019-08-2717:52eoliphantsome sort of AWS API paging issue?#2019-08-2717:53marshalli believe we have a bug when you have > 50 ASGs in an account#2019-08-2717:53marshallwe’ll work on a fix for the next release#2019-08-2717:53eoliphantwhat do I get? 🙂#2019-08-2717:53eoliphantok cool, thx#2019-08-2717:54eoliphantI’m going to see if i can do some cleanup#2019-08-2717:54eoliphantin the meantime#2019-08-2808:21tatutshould ion-dev maven dependency be available somewhere publicly? I want to run the ion :push as part of aws codebuild, do I need to set up my own s3 bucket as a repo#2019-08-2808:48Joe LaneIf you're going to do codebuild, you need to do it in us-east-1.#2019-08-2808:49Joe Lane@tatut Long story short, s3-maven repo that hosts the ion-dev dep is in us-east-1 and for now is otherwise unavailable in other regions.#2019-08-2814:30tatutso if my datomic cloud is running in eu-central-1, I should run my codebuild in us-east-1…#2019-08-2814:31tatutI’ll have to try it because now I’m getting VPC endpoints do not support cross-region requests (Service: Amazon S3; Status Code: 403; Error Code: …#2019-08-2814:58calebpI’ve apparently crashed the process on my solo node (trying to develop a tx log dump for backup purposes). Is there a process for restarting that?#2019-08-2815:03calebpI see it here https://docs.datomic.com/cloud/troubleshooting.html#troubleshooting-solo#2019-08-2817:09calebpDoes anybody have any wisdom to share on writing tx log dumps in Cloud? 
I tried first using the async API in a lambda I had set up outside my datomic cloud vpc (just because I had the infrastructure in place) and was getting network interruptions ( “Specified iterator was dropped”). I assumed this had something to do with the VPC Endpoint, because I can run it fine locally through the socks proxy. So I decided to try it in an Ion. It worked OK, but I saw a precipitous decline in HeapAvailablePercent when I ran it on my production topology (0.967 to 0.384) - even when looped to call tx-range with a small number of transactions at a time.#2019-08-2817:09calebpI can fiddle with the VPC endpoint or start another compute group to run the Ion, but I was wondering if anyone has a proven way of doing this#2019-08-2817:51hadils@calebp You can't use the Async API with Datomic cloud, unfortunately...#2019-08-2817:53calebpYou can’t use it in ions, but you can use it when connecting from elsewhere. At least that’s appears to be the case for me.#2019-08-2817:54calebpThat’s why I started with an external lambda, because I figured the async API was the way to go.#2019-08-2817:55calebpBut I’m not sure how important async is for the tx log, since you can already control chunks with the :start and :end params#2019-08-2820:07calebpThanks for the reply @hadilsabbagh18#2019-08-2900:34skuttlemanHello. Fairly new to datomic. I'm attempting to write a query that returns entities which have had attributes updated since a point in time. I can't figure out why the first two queries work great, but the third one just hangs for 30 seconds and return Server error.
(d/q '[:find (pull ?e [*])
       :in $all $new
       :where
       [$all ?e :customer/id]
       [$new ?e :customer/first-name]]
     db
     (d/since db t))

(d/q '[:find (pull ?e [*])
       :in $all $new
       :where
       [$all ?e :customer/id]
       [$new ?e :customer/last-name]]
     db
     (d/since db t))

(d/q '[:find (pull ?e [*])
       :in $all $new
       :where
       [$all ?e :customer/id]
       (or [$new ?e :customer/first-name]
           [$new ?e :customer/last-name])]
     db
     (d/since db t))
#2019-08-2913:01marshallCheck your datomic system logs to see if you had an exception or error in the 3rd one#2019-08-2912:19benoit@skuttleman I wonder if the or clause can only target one src-var. Did you try to put the $new before the or clause? I never encountered this case. Does it mean you cannot write a or clause across src-vars?#2019-08-2912:23benoitFor this kind of historical queries I would use the log API and check for the attributes I'm interested in in each transaction since t.#2019-08-2913:24skuttleman@me1740 @marshall Thanks for responding. I ended up getting it to work last night after finding this: https://docs.datomic.com/on-prem/query.html#how-or-clauses-work
The or clause can only target one src-var
(d/q '[:find (pull ?e [*])
       :in $all $new
       :where
       [$all ?e :customer/id]
       ($new or
        [?e :customer/first-name]
        [?e :customer/last-name])]
     db
     (d/since db t))
#2019-08-2913:29benoit@skuttleman Ok, that's it. Thanks for the relevant link to the docs.#2019-08-2917:00dominicmIn datomic ions, where would I put something I might normally do in a component/integrant system? e.g. create a postgresql hikari connection pool (yes, yes I know, but this is a clear example).#2019-08-2917:34Mark AddlemanI'm using Mount in ions. The only thing that is mildly special is that I have some code in the root of a namespace which checks the environment. If the environment is not the desktop (ie, either :dev or :prod), it mounts the system#2019-08-2917:54dominicmHow does code check the environment?#2019-08-2917:58marshall@dominicm https://docs.datomic.com/cloud/ions/ions-reference.html#parameters#2019-08-2919:01dominicmHow do parameters work in development?#2019-08-2919:05marshallParameters will still use the AWS parameter store in dev
get-env will use local environment variable when running locally#2019-08-2919:16dominicmDevelopers need to set that environment then? (Or I guess the application could default to dev, but that seems potentially dangerous depending on what you do in dev)#2019-08-2919:19marshalleither way, yes#2019-08-2919:02m0smithDoes datomic support a sort aggregate?#2019-08-2920:13timgilbertGenerally no, if you mean something similar to SQL's ORDER BY. In the peer model you'll typically do the sorting in your application code.#2019-08-2921:40m0smithI am running an Ion using the client model#2019-08-3003:13sooheonHi guys, is fulltext not available in datomic cloud?#2019-08-3010:58holyjakReportedly not, see https://clojurians.zulipchat.com/#narrow/stream/180378-slack-archive/topic/datomic/search/fulltext Mentioning:
> ... I’ve seen the odd mention that suggests using cloudsearch, although my understanding is that aws elasticsearch is presently considered to be the better offering. Anyone#2019-08-3010:15holyjakIs it possible to get HTTP Direct working with Datomic Solo?
I know it is not supported out of the box but what if I add the necessary Network Load Balancer and VPC Link? Then API Gateway can access my datomic instance - but what endpoint should it call? Any idea?#2019-09-0121:46Luiz SolHi everyone. I'm trying to run datomic pro using a local postgresql, transactor an peer. I'm able to start both the database and the transactor without any problem:
atlas-intelligence-db-storage | 2019-09-01 21:26:34.823 UTC [1] LOG: starting PostgreSQL 12beta3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 8.3.0) 8.3.0, 64-bit
atlas-intelligence-db-storage | 2019-09-01 21:26:34.823 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
atlas-intelligence-db-storage | 2019-09-01 21:26:34.823 UTC [1] LOG: listening on IPv6 address "::", port 5432
atlas-intelligence-db-storage | 2019-09-01 21:26:34.835 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
atlas-intelligence-db-storage | 2019-09-01 21:26:34.849 UTC [18] LOG: database system was shut down at 2019-09-01 21:25:15 UTC
atlas-intelligence-db-storage | 2019-09-01 21:26:34.852 UTC [1] LOG: database system is ready to accept connections
atlas-intelligence-db-transactor | Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
atlas-intelligence-db-transactor | Starting datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver ...
atlas-intelligence-db-transactor | System started datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver
(They're all running on containers with a host network_mode)
I think that these warnings may come from the fact that I'm using datomic as the user and the database name, but I'm not sure.
But then, when I try to start a peer server, I'm faced with the following error:
$ ./bin/run -m datomic.peer-server -h localhost -p 8998 -a datomic-peer-user,datomic-peer-password -d datomic,datomic:
Exception in thread "main" java.lang.RuntimeException: Could not find datomic in catalog
at datomic.peer$get_connection$fn__18852.invoke(peer.clj:681)
at datomic.peer$get_connection.invokeStatic(peer.clj:669)
at datomic.peer$get_connection.invoke(peer.clj:666)
at datomic.peer$connect_uri.invokeStatic(peer.clj:763)
at datomic.peer$connect_uri.invoke(peer.clj:755)
(...)
at clojure.main$main.doInvoke(main.clj:561)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.lang.Var.applyTo(Var.java:705)
at clojure.main.main(main.java:37)
I've already tried changing a bunch of configurations with no success. Can someone help me?#2019-09-0122:02Luiz SolThese are my transactor.properties file:
protocol=sql
host=localhost
port=4334
license-key=<license_key>
sql-url=jdbc:
sql-user=datomic
sql-password=datomic-password
sql-driver-class=org.postgresql.Driver
memory-index-threshold=32m
memory-index-max=256m
object-cache-max=256m
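A likely cause of the "Could not find datomic in catalog" error above: the peer server only serves databases that already exist, and neither the transactor nor the peer server creates one. A minimal sketch, assuming the peer library is on the classpath and the same sql-url/credentials as the transactor.properties above (the JDBC tail of the URI is elided here, as it is elsewhere in this thread):

```clojure
;; Sketch: create the "datomic" database once with the peer library,
;; then start the peer server against it. URI tail elided as above.
(require '[datomic.api :as d])

(d/create-database "datomic:sql://datomic?jdbc:...")
;; returns true on first creation, false if it already exists
```

After this, the `-d datomic,datomic:sql://...` argument to `datomic.peer-server` should resolve in the catalog.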
#2019-09-0207:11tatutdatomic cloud doesn’t support :db.type/bytes ?#2019-09-0213:47eoliphant@tatut Nope, and note the 4K limit on strings. We’ve mashed it up with ElasticSearch, Redis, etc to handle those cases#2019-09-0213:50tatutok, that means my plan to store geometry data in datomic is probably not recommended#2019-09-0213:53tatutnot really a problem as postgresql+postgis handles geometry data so well… but datomic is so nice to work with that one would like to keep everything in there 😄#2019-09-0213:56eoliphantyeah exactly lol#2019-09-0213:56eoliphantI’ve wondered about GIS+Datomic for a while 🙂#2019-09-0320:10BrianI'm just getting into Datomic and Datomic Cloud recently and I have a question on some theory/design of Datomic Cloud. I'm used to my database server having an api that we hit with requests. Would I essentially write lambdas for every type of query in the same way I'd add API endpoints on my old database?#2019-09-0320:16souenzzo@brian.rogers no
Probably you will use something like
https://github.com/pedestal/pedestal.ions
Export one lambda, make an ANY/* "proxy" API Gateway, then do all routing stuff in pedestal.#2019-09-0320:22Joe LaneOr HTTP-Direct#2019-09-0320:27ghadiyou can also do a regular API server tier (not using Ions)#2019-09-0320:58BrianThank you all for the information! You've made me realize I have other things I need to look into before I know what to ask next#2019-09-0400:58Brian AbbottSorry, I'm not able to search too far back in the chat history ATM but, does anyone know of a sure-fire way to determine which version of datomic cloud that an account is on?#2019-09-0401:13Brian AbbottBy sure-fire, I mean, programmatic interrogation of the instances via their APIs, not from some loose file simply because it has the textual values written in them --- i want to receive the version information from the running instance itself.#2019-09-0418:28eoliphant@briancabbott not sure about the instances themselves, but you can use the aws api to query the outputs of the associated cloud formation stack#2019-09-0421:06Jacob O'BryantI'm currently unable to do an ion deploy. CodeDeploy says it's due to an OOM error. Is there anything I can do to fix this?#2019-09-0421:06Jacob O'Bryant#2019-09-0421:10Joe Lane@foo Has that deployment worked in the past?#2019-09-0421:11Jacob O'Bryantyes#2019-09-0421:12Jacob O'Bryantit just stopped working today for some reason#2019-09-0421:14Jacob O'Bryant(or did you mean has deploying this revision worked in the past? In that case, no.
However I just tried re-deploying a previous revision and that fails too)#2019-09-0421:15marshallSolo or prod @foo ?#2019-09-0421:16Jacob O'Bryantsolo#2019-09-0421:17marshallNot sure you can redeploy over a failing deploy directly#2019-09-0421:18marshallI think the attemped redeploy will fail and it will continue to try to run the original problem one#2019-09-0421:18marshallUntil eventually aborts and rolls back#2019-09-0421:19marshallYou may be able to cancel/roll back the failing deploy in the code deploy console#2019-09-0421:21Jacob O'BryantIt looks like the rollbacks are failing too. I'm looking around the console, but I don't see anything I can do to rollback manually.#2019-09-0421:22Jacob O'Bryantactually there's a Create Deployment button, maybe that's the ticket#2019-09-0421:22marshallThat would create a new one#2019-09-0421:22marshallDid the rollback fail with an OOM also?#2019-09-0421:24Jacob O'Bryantyes, same error#2019-09-0512:33henrikIn my experience, when you get OOM on Solo, deploys will be pretty non-deterministic, working sometimes and not other times, which means that it’s possible to get into a state where the version being rolled back to also produces an OOM.#2019-09-0512:35henrikI remember getting around it by basically ripping out deps that were particularly unfriendly to memory consumption (Amazonica, at the time).#2019-09-0517:44Jacob O'BryantThanks for the tip, I'll give that a try.#2019-09-0517:18BrianHey y'all I've just connected API Gateway to a lambda to my Clojure code to my Datomic database. In the old database I'm used to using we'd interact with it with routes like /endpoint/{endpoint-id} where the endpoint-id was an argument to the URI string and the C# code on our database end was able to grab that out of the route and then go from there.
I tried to replicate that with my new web API but the only way to get my data across was to add the --payload from the aws invoke command to the request body like so: https://gyazo.com/a1710fd58010abdc1edd801297587c45
Therefore my question is: are the lambda ions only able to have the http request body delivered to them? Would I be able to replicate our existing database and URI routing where we are grabbing arguments out of the routes?
I'm not super well versed in APIs and routing etc so I may be slightly off with some of these concepts#2019-09-0601:54chris_johnson@brian.rogers I think you want to look at having an API Gateway proxy resource and configuring the ion with the API Gateway :integration, in that way your whole request path will be available in the context object. I haven’t gotten to revisit ions and API Gateway in months so this is faint memory backed up by a quick review of the docs page.#2019-09-0603:10Jacob O'BryantRe: my problem above--I've taken out all the deps that were easy to take out, but no dice. I'm going to see if I can somehow replace a large-but-important dep I have and hope that works. However, is it possible on the solo topology to upgrade the compute stack to use a larger ec2 instance instead of the default t3.small? (I'm assuming that would fix the issue). I'd upgrade to production topology, except I just can't afford two i3.large instances at the early stage I'm in right now.#2019-09-0609:02henrikUnfortunately, for Solo, you’re stuck with the default option.
Try doing some profiling on your code to see what is eating the resources.#2019-09-0617:41johnjAn alternative is to move to on-prem and use the client lib.#2019-09-0607:26igrishaevI got stuck a bit with the following problem. When shifting a predicate clause inside not, the query stops working. For example, this query works fine:
[:find ?e ?id
 :where
 [?e :user/pg-id ?id]
 [(< ?id 5)]]
But the second is not:
[:find ?e ?id
 :where
 [?e :user/pg-id ?id]
 (not [(< ?id 5)])]
Of course I can flip the predicate in such a way that it returns the opposite value. But I'm rather interested in a common approach.#2019-09-0607:27igrishaevThe error message I’m getting in console is
processing rule: (q__198 ?e ?id), message: processing clause: ["not" [(< ?id 5)]], message: :db.error/invalid-lookup-ref Invalid list form: [(< ?id 5)]
#2019-09-0614:35benoitBoth queries work for me with the on-prem version.#2019-09-0609:48Adrian SmithWhat's a good way of getting secrets up on to datomic cloud? (details here: https://ask.clojure.org/index.php/8554/how-to-store-secrets-in-datomic-cloud)#2019-09-0616:24BrianThanks @chris_johnson!#2019-09-0618:55adamtaitI’m trying to deploy a Datomic Ion application and it’s failing in datomic.client.api/client (creating the client).
{
"Type": "java.lang.AssertionError",
"Message": "Assert failed: cfg",
"At": [
"datomic.client.impl.local$create_client",
"invokeStatic",
"local.clj",
208
]
}
I assume that cfg implies the configuration map (the one parameter that datomic.client.api/client accepts) and that AssertionError implies it doesn’t exist. I have verified that it exists and is valid. The same code works flawlessly from my local machine at the REPL, which is how I understand that Datomic is expecting it (https://github.com/Datomic/ion-event-example/blob/e08c59d0cac1a100251232a462dd77194d83e48a/src/datomic/ion/event_example.clj#L58) even though it swaps out the configuration for the local Datomic client.
Any help on how to proceed through this non-descriptive error from a black box would be most appreciated!#2019-09-0618:56adamtaitHere’s the full CloudWatch trace:
[
"datomic.client.impl.local$create_client",
"invokeStatic",
"local.clj",
208
],
[
"datomic.client.impl.local$create_client",
"invoke",
"local.clj",
205
],
[
"clojure.lang.Var",
"invoke",
"Var.java",
384
],
[
"datomic.client.api.impl$dynarun",
"invokeStatic",
"impl.clj",
24
],
[
"datomic.client.api.impl$dynarun",
"invoke",
"impl.clj",
21
],
[
"datomic.client.api.impl$dynacall",
"invokeStatic",
"impl.clj",
31
],
[
"datomic.client.api.impl$dynacall",
"invoke",
"impl.clj",
28
],
[
"datomic.client.api$client",
"invokeStatic",
"api.clj",
84
],
[
"datomic.client.api$client",
"invoke",
"api.clj",
46
],
[
"datomic.client.api$client",
"invokeStatic",
"api.clj",
76
],
[
"datomic.client.api$client",
"invoke",
"api.clj",
46
]
#2019-09-0619:05Joe Lane@adamtait https://docs.datomic.com/cloud/troubleshooting.html#assert-failed#2019-09-0619:06Joe LaneI lost a day to this same error message about 2 weeks ago.#2019-09-0619:08adamtaitThanks Joe! I’m surprised that my searching didn’t find that result before. I’ve already lost a day so hopefully no more!#2019-09-0619:10Joe LaneNP, currently the vase-datomic-cloud interceptor does this connection on ns load. If you're using it, let me know and I'll send you a rewritten version of the interceptor to fix that issue.#2019-09-0619:15adamtaitI actually didn’t know about https://github.com/cognitect-labs/vase but I am already using pedestal and datomic so I’ll give it a look.
I was using stuartsierra/component and creating the Datomic client when I started the system on ns load.#2019-09-0619:24Joe LaneI'd say stick with pedestal and datomic for now. We actually backed off from vase on our project for now until it gets some TLC.#2019-09-0619:13eggsyntaxHey y'all 👋 . I'm curious to hear y'all's opinion about when you would choose to use a Datomic enum as opposed to a keyword. One obvious distinction is that an enum is only suitable for a closed set, but are there other factors you consider? What if you have a closed set in terms of what values can be added at any given time, but expect that set to potentially grow in the future? Thanks!#2019-09-0619:44johnjdatomic has enums?#2019-09-0619:46eggsyntaxIf not, I've been badly misinformed 😆
https://docs.datomic.com/on-prem/schema.html#enums#2019-09-0619:51johnjCan see how that title is confusing, but the gist is to use :db/ident to model "enum-like" stuff#2019-09-0619:51johnjwhy do you think that is a closed set?#2019-09-0620:00eggsyntaxMainly because that's the usual convention for enums. And because unless I'm misremembering, you couldn't transact (to use their music DB example)
[<some-entity> :artist/country :country/NONEXISTENT-VALUE]
#2019-09-0620:09johnjIf you read past the title of the link you gave you'll see there is no enum type.#2019-09-0620:09johnjFor your transact "issue", that's not how you model entities in datomic, read this carefully https://docs.datomic.com/cloud/whatis/data-model.html#2019-09-0620:10johnjit applies to on-prem#2019-09-0620:11eggsyntaxThanks for the feedback!#2019-09-0622:02Questperformance on heterogenous vs homogenous tuples
Say I have a tuple of [type spec], where both values are strings. This vector is guaranteed to be length = 2.
I can model this as a homogenous tuple of strings - :db/tupleType :db.type/string
or a heterogenous tuple - :db/tupleTypes [:db.type/string :db.type/string]
I suspect performance+storage will be better on the homogenous tuple. However, the heterogenous tuple more accurately models the data as it forces count = 2. Are these correct assumptions?#2019-09-0820:06nateHello, so what's a typical setup/stack for a clojure webapp that uses Datomic Cloud/Ions to expose an API to the web?#2019-09-0820:18adamtaitAs far as tooling for managing routes & request handlers, the most popular are probably Ring, Compojure (built on Ring) & Pedestal. The ions starter (https://github.com/Datomic/ion-starter) uses Ring directly. Here’s an example with Pedestal (https://github.com/pedestal/pedestal-ions-sample). Ring/Compojure use middleware for common code, and pedestal uses an interceptor stack.#2019-09-0821:24nateCool! the pedestal sample README answers many of my questions, thanks a lot!#2019-09-0821:36Adrian SmithIn my local repl calls to (ion/get-params) return nil, is there a way to provide details that I suspect in live would exist?#2019-09-0821:36Adrian Smith(because I've put them in AWS Systems Manager Parameter Store)#2019-09-0909:29Adrian Smith(ion/get-params) from datomic.ion namespace#2019-09-0914:04marshallyou should be able to fetch params locally. is your local environment configured with AWS credentials that allow GetParametersByPath ?#2019-09-0914:04marshallhttps://docs.datomic.com/cloud/ions/ions-reference.html#get-params#2019-09-0916:18Adrian Smithah that's good to know, I've got some things I can double check tonight#2019-09-0907:40steveb8nQ: I have a Solo/Ion webapp which consistently dies after a small load. The only fix I’ve found is to terminate the EC2 instance and let a new one start. It’s not a memory leak, cloudwatch shows plenty of heap. The error in the logs just before locking up is “java.lang.OutOfMemoryError : unable to create new native thread”. Googling suggests needing access to the thread dump to dig deeper. I cannot reproduce this locally using low or high levels of load. Has anyone seen this?
What techniques are available to reproduce, diagnose, fix this?#2019-09-0907:42steveb8nAlso relevant is that it dies while idle (not being loaded by requests) so this would suggest some thread activity from house-keeping etc although I have no facts to support this#2019-09-0914:02marshallwhat version of datomic cloud#2019-09-0914:14steveb8nCompute stack is 8772#2019-09-0914:15steveb8nStorage is 470 (not sure if this is important)#2019-09-0914:16marshalli would first try upgrading to the latest (8794)#2019-09-0914:16steveb8nok. I’ll try that right away. thanks#2019-09-0914:17steveb8nI presume you mean compute only upgrade?#2019-09-0914:17marshallcorrect#2019-09-0914:30steveb8nok. now on 8794. I’ll load it with requests and will let it sit to see if it happens again. normally takes an hour or so of idle time. I’ll report back here either way#2019-09-0914:31marshall👍 Also check your system logs when/if you see the behavior#2019-09-0914:31marshallif possible, can you paste the full stack trace of the error you saw previously?#2019-09-0914:38steveb8nsure….#2019-09-0914:38steveb8n{
"Msg": "Uncaught Exception: unable to create new native thread",
"Ex": {
"Via": [
{
"Type": "java.lang.OutOfMemoryError",
"Message": "unable to create new native thread",
"At": [
"java.lang.Thread",
"start0",
"Thread.java",
-2
]
}
],
"Trace": [
[
"java.lang.Thread",
"start0",
"Thread.java",
-2
],
[
"java.lang.Thread",
"start",
"Thread.java",
717
],
[
"java.util.concurrent.ThreadPoolExecutor",
"addWorker",
"ThreadPoolExecutor.java",
957
],
[
"java.util.concurrent.ThreadPoolExecutor",
"processWorkerExit",
"ThreadPoolExecutor.java",
1025
],
[
"java.util.concurrent.ThreadPoolExecutor",
"runWorker",
"ThreadPoolExecutor.java",
1167
],
[
"java.util.concurrent.ThreadPoolExecutor$Worker",
"run",
"ThreadPoolExecutor.java",
624
],
[
"java.lang.Thread",
"run",
"Thread.java",
748
]
],
"Cause": "unable to create new native thread"
},
"Type": "Alert",
"Tid": 18,
"Timestamp": 1567951027119
}#2019-09-0914:38steveb8nnot sure what you mean by “system logs”#2019-09-0914:38marshallcloudwatch logs#2019-09-0914:38marshallfor your datomic system#2019-09-0914:39marshallwhere are you seeing that ^ error?#2019-09-0914:39steveb8nah ok. that’s where this stack trace is from#2019-09-0914:39marshallok#2019-09-0914:39steveb8nall other events at that time look normal#2019-09-0914:41steveb8nvery unscientifically, it seems to tolerate clj-gatling load a bit better on this new version. I’ll have to wait now to see if it dies. Good timing as gotta cook dinner (NL time) but will check in later#2019-09-0915:02marshallDoes your ion webapp do any async work?#2019-09-0915:02marshallanything that might be spawning threads?#2019-09-0915:59steveb8nyes, it uses http-kit client in async mode to call an ECS service. async calls via pedestal interceptor/handlers. that said, this problem occurred before I was using async mode#2019-09-0916:00steveb8nalso using jarohen/chime to periodically report metrics i.e. cron like. again, prior to using chime, this instability was present#2019-09-0916:01steveb8nfull stack is lacinia-pedestal / pedestal / resolvers making http calls returning a core.async channel (to allow pedestal to park/async)#2019-09-0916:02steveb8nmost api endpoints are sync/blocking but I suspect the http callouts so I focus on those to reproduce the error.#2019-09-0916:02steveb8nso far, no hang so will keep waiting on it#2019-09-0916:04steveb8nprior to async http-kit, was using blocking http-kit calls. 
I think that is using async underneath so there was probably async machinery being used when this originally manifested#2019-09-0916:05marshallthat would definitely be my suspicion for where to look; that error generally indicates that the process is creating unbounded numbers of threads and the OS is out of resources to allocate#2019-09-0916:06marshalldespite it being called a “memory” error - it is evidently more commonly a thread resource issue#2019-09-0916:22steveb8nI suspect the same. I don’t have much experience in finding “captured” threads but I’ll start by using a profiler on my localhost and see if I can find anything#2019-09-0916:19Adrian SmithI'm trying to log into Datomic forum with email link login but I've not received any emails all afternoon, is this just me?#2019-09-1013:40favilaI notice the client api version of d/datoms seems to ignore the fourth (tx) part of :components. Is this a known issue? Bug or by design?#2019-09-1017:48marshallYes, known.
By design I believe, as if you have EAVT you have the datom, so you don’t need index access#2019-09-1018:46favilaIt differs from peer api d/datom#2019-09-1018:46favilaI would at least expect it to be mentioned that only three components in the vector are inspected#2019-09-1018:48favilaYou make a good point though. The only additional possible information datoms could give you is whether it was an assertion or retraction#2019-09-1018:48marshallright#2019-09-1018:48marshalland if you have lots of datoms there you need to get to, you’re doing it wrong#2019-09-1013:48favilaAlso is there a better way to get all datoms matching pattern on client api other than repeatedly adjusting offset? This feels like O(n^2). If client had seek-datoms or a “starting at” datoms argument, one could use the last seen datoms as the start to the next chunk#2019-09-1013:48favilaAlso is there a better way to get all datoms matching pattern on client api other than repeatedly adjusting offset? This feels like O(n^2). If client had seek-datoms or a “starting at” datoms argument, one could use the last seen datoms as the start to the next chunk#2019-09-1013:50favilaI’m struggling with workloads which are too large for a single query where I would normally use peer d/datoms lazily to produce an intermediate chunk or aggregate#2019-09-1014:39Joe LaneDo you have an example of "datoms matching pattern"?#2019-09-1014:43favilaThe pattern in the :components argument#2019-09-1014:46favilaEg {:index :avet :components [:myattr]} pattern is every datoms whose attr is :myattr#2019-09-1014:47favilaIntermediate sets may be too large for a single query#2019-09-1015:27marshall@U09R86PA4 How big is the total DB and how big are the results you’re looking for?
Also, how frequently are you running this query (or ones like it)?#2019-09-1015:39favilaThey are run infrequently (offline or batch jobs)#2019-09-1015:42favilaUse cases vary but they follow the pattern of being able to aggregate as you go and aggregation is much smaller than input; or preparing subsets of input for the same query rerun many times#2019-09-1015:42favilaExample I ran into today was counting the unique values on a non-indexed attr#2019-09-1015:43favilaOn a peer this is map :v distinct over d/datoms :aevt :myattr#2019-09-1015:44favilaOn a client, the throughput decayed as the offset increased#2019-09-1015:44favilaI gave up eventually#2019-09-1015:55favilaWhen I ran it on a peer, the result took a few minutes but bounded memory and result set size was 60 out of 120 million input datoms (cold instance, no valcache or memcached)#2019-09-1017:33marshallare you running the peer server with the same memory settings as you did for the peer?#2019-09-1017:35favilaPeer server is actually a little bit bigger and has valcache#2019-09-1017:38favilaThese are queries I couldn’t run on even a really large peer. I don’t fault peer server for not being able to handle it naively. I just can’t use my usual d/datoms workaround for controlling the size of intermediate results by being lazy#2019-09-1017:44marshallHrm. I don’t quite understand what the 4th component has to do with it.
Do you have large #s of datoms with the same AVE that only differ in T?#2019-09-1017:46favilaFrom a correctness perspective, I will get results where not all components match#2019-09-1017:47favilaThis is unrelated, is why it’s in a separate message/thread#2019-09-1017:47marshallah.#2019-09-1017:49marshallwhat chunk size are you using for your datoms call?#2019-09-1017:49favilaI discovered it while doing thought experiments with a client api with a seek-datoms; I could use it to construct the start of the next chunk instead of merely seeking the whole result over again by the offset (which seems to be how it is behaving. Is that actually how client’s datoms is implemented?)#2019-09-1017:50marshallgotcha#2019-09-1017:51marshallok. i think i understand#2019-09-1017:51marshallyou should use the async API#2019-09-1017:51marshallhttps://docs.datomic.com/client-api/datomic.client.api.async.html#2019-09-1017:51marshallit provides chunked results#2019-09-1017:51marshallon a channel#2019-09-1017:51marshallhttps://docs.datomic.com/client-api/datomic.client.api.async.html#var-datoms#2019-09-1017:52marshallthat should allow you to lazily iterate your results#2019-09-1017:52marshallwithout repeatedly calling datoms#2019-09-1017:52faviladoes the server’s impl of client d/datoms have the same time complexity as peer (->> (apply d/datoms index components) (drop offset) (take limit)) or is it more efficient than that?#2019-09-1017:52marshalli believe it is more efficient if you’re using chunked async client#2019-09-1017:52favilaIt feels n^2 but maybe I am running into unrelated externalities#2019-09-1017:53marshalli’m not sure with the sync impl#2019-09-1017:53favilaWhy would they be different?#2019-09-1017:53marshallsince there’s no “next” chunk in sync impl#2019-09-1017:53marshallyou’re just getting limit#2019-09-1017:53marshallresults#2019-09-1017:53marshalleach call#2019-09-1017:55favilaAh so there may be some cursor-like state in there#2019-09-1017:56favilaOk I’ll try
async#2019-09-1017:57marshallhang on#2019-09-1017:57marshallthere’s a subtlety here#2019-09-1017:57marshallyou can do it with the sync api#2019-09-1017:57marshalli just talked to Stu#2019-09-1017:57marshallhttps://docs.datomic.com/cloud/client/client-api.html#chunking#2019-09-1017:58marshallyou should use it just like you are, but set :limit -1#2019-09-1017:58marshallthe iterable that is returned is lazy#2019-09-1017:58marshalland you don’t need repeated calls to datoms with offsets#2019-09-1017:58marshallso you can map over it, or call seq on it, or whatever you want to do#2019-09-1017:59favilaAh ok. Why no “chunk” knob since the same considerations apply?#2019-09-1017:59marshallmultiple calls to d/datoms is definitely re-performing the work on every call#2019-09-1017:59marshallfurther down: “Synchronous API functions are designed for convenience. They return a single collection or iterable and do not expose chunks directly. The chunk size argument is nevertheless available and relevant for performance tuning.”#2019-09-1017:59marshallit’s actually there, probably an oversight in the api docs if it’s not listed there#2019-09-1018:01favilaSync api namespace docs do not mention :chunk#2019-09-1018:01favilaOk I was expecting less magic, I should have tried the dumb thing of limit -1?#2019-09-1018:02favilaWell gonna try it now#2019-09-1018:02marshall🙂 sorry for the confusion#2019-09-1018:02marshallyes, limit -1 and use the returned iterable however you like#2019-09-1018:02marshalland you can configure the chunk size for perf tuning if you desire#2019-09-1018:03favilaYeah I thought async’s chunking was just doing offset adjustment for you like a normal rest api would#2019-09-1018:04marshallah. no, it’s maintaining an iterator between the client and server#2019-09-1018:42favilaOk it works! Thanks! Large :chunk makes a huge difference in sync api and is not ignored, so I consider that a doc bug#2019-09-1018:43favilaI guess a pipeline/prefetch option is out of the question?
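The sync-API pattern marshall describes here, a single d/datoms call with :limit -1 whose returned iterable is fetched in chunks behind the scenes, might look like this sketch for favila's distinct-values use case (attribute name, chunk size, and keyword access on client datoms are illustrative assumptions; a client db value is required):

```clojure
(require '[datomic.client.api :as d])

;; Sketch: count distinct values of an attribute by lazily walking ONE
;; datoms call, rather than re-calling d/datoms with a growing :offset
;; (which re-performs the seek on every call).
;; :limit -1 requests all results; the returned iterable is chunked
;; under the hood, so memory stays bounded.
(defn distinct-value-count [db attr]
  (->> (d/datoms db {:index      :aevt
                     :components [attr]
                     :limit      -1
                     :chunk      10000}) ; larger chunks, fewer round trips
       (map :v)
       distinct
       count))
```

As favila notes below, a large :chunk makes a substantial throughput difference with the sync API.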
😊#2019-09-1018:47marshalli think that would be a good candidate for a feature request on the feature request portal#2019-09-1018:47marshalland i will look into fixing the api docs#2019-09-1021:12favilaI can’t find this doc link you shared on the on-prem section of the datomic docs website#2019-09-1021:45marshallhttps://docs.datomic.com/on-prem/clients-and-peers.html#reads#2019-09-1021:45marshallRelevant info there#2019-09-1021:46marshallBut you’re right, no exactly analogous page#2019-09-1014:06unbalancedis there a Datomic certification path anywhere? 😮#2019-09-1015:37colinkahnIs there documentation around using <, >, <=, >= with letters? Like [(< ?title "C")]. I’m seeing that < is exclusive where > is inclusive and wondering if that’s expected.#2019-09-1112:23pithylessDoes anybody have experience with round-tripping EDN data (nested, up to 10 MB each) from Clojure to Datomic (on-prem)? My first guess would be to store it as :db.type/bytes (vs :db.type/string) and probably encode via nippy (but also considering transit+json, transit+msgpack, or fressian). As far as I know, Datomic internally uses fressian, but I've seen reports online that not all EDN data roundtrips correctly -- anyone know more? Anyone have first-hand experience with this use-case? I'm mainly interested in relative performance of read/write, compressed size in the db, and any gotchas to watch out for.#2019-09-1113:36gwsFirst-hand experience with Regex literals not round-tripping#2019-09-1113:53favilaIME to use fressian well you need to know exactly what types you expect to serialize and set your read and write handlers up carefully#2019-09-1113:54favilaout of the box even the clj wrapper will do some surprising roundtrips#2019-09-1113:54favilanippy is definitely an easier out of the box experience#2019-09-1113:55favila(with clojure data)#2019-09-1114:26pithylessThanks for the comments.
nippy has open GH issues related to large memory usage; transit+msgpack has open GH issues related to encoding byte[] as base64 strings. Which makes me wonder if I shouldn't just use transit+json. Maybe someone else will pitch in with thoughts; otherwise I'm left with just running some synthetic benchmarks ;]#2019-09-1114:37favilayou really shouldn’t store any blobs in datomic that are large enough to cause nippy memory issues#2019-09-1114:38favilaIf you’re talking about storing a single 10mb value, that is definitely too big for datomic#2019-09-1113:55favilaI semi-frequently get “Unexpected end of ZLIB input stream” during on-prem client api usage. any clues as to why?#2019-09-1113:56favila#error {
:cause Unexpected end of ZLIB input stream
:data {:datomic.client-spi/request-id 97ac9bd9-b6cf-4d2b-87fc-d7a1289005d7, :cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message Unexpected end of ZLIB input stream, :dbs [{:database-id datomic:, :t 322871158, :next-t 322871160, :history false}]}
:via
[{:type clojure.lang.ExceptionInfo
:message Unexpected end of ZLIB input stream
:data {:datomic.client-spi/request-id 97ac9bd9-b6cf-4d2b-87fc-d7a1289005d7, :cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message Unexpected end of ZLIB input stream, :dbs [{:database-id datomic:, :history false}]}
:at [datomic.client.api.async$ares invokeStatic async.clj 58]}]
:trace
[[datomic.client.api.async$ares invokeStatic async.clj 58]
[datomic.client.api.async$ares invoke async.clj 54]
[datomic.client.api.sync$unchunk invokeStatic sync.clj 47]
[datomic.client.api.sync$unchunk invoke sync.clj 45]
[datomic.client.api.sync$eval12765$fn__12786 invoke sync.clj 101]
[datomic.client.api.impl$fn__2619$G__2614__2626 invoke impl.clj 33]
[datomic.client.api$q invokeStatic api.clj 351]#2019-09-1114:00favila(I got a similar exception when using a laptop peer with a janky valcache setup (process-shared valcache, running inside a file mounted as a block device; it would transiently read a valcache file as 0 bytes, rerunning the query would always fix), but this peer server has a Proper valcache setup in the cloud. Related or red herring?)#2019-09-1117:47johnjelinekis anyone using datomic as a replacement for kafka?#2019-09-1118:52the2bears@johnjelinek what's your use case? Seems to me, at least, that Datomic and Kafka are quite different.#2019-09-1118:53johnjelinekevent streaming#2019-09-1118:53marshalli’d tend to use datomic with kafka#2019-09-1118:54marshallnot sure i’d do ES with datomic alone if it is a high throughput system#2019-09-1118:54marshalli.e. lots and lots of events#2019-09-1118:59johnjelinekI don't have lots of events#2019-09-1118:59johnjelinekI just want to replace SNS/SQS/DynamoDB/RDS/Kinesis with datomic#2019-09-1119:07marshallyou certainly could. you need to keep in mind that your event source will need to have logic for retrying transactions in case of failure#2019-09-1119:08marshallwhich is something that queues and/or things like kafka can often help with#2019-09-1119:14favilaI have used datomic on-prem’s tx-report-queue to provide an event source for other systems (i.e. they take some action reacting to a transaction). But this is definitely cutting some corners. datom updates are very granular, so you need some tx discipline (e.g. 
tx metadata describing the “semantic” meaning of the tx) to know what happened and decide what should be done#2019-09-1119:17favilaYou can use datomic to have the ease-of-use of a normal db system where the source of truth is the graph of data at rest (vs an event-source system where the events are primary and you need to devise a projection), while using the tx-queue for some of the niceties of an event-source model for other projections or for creating commands#2019-09-1119:18favilabut if you have actual commands (i.e., perform this side effect against an external system) you are much better off using a proper command event in a proper queue that provides retries, dead-letter, etc for you. You can still have something in the middle to read the tx queue and create commands on another queue#2019-09-1119:23favilasigns you are doing it wrong: transactions whose purpose is purely to trigger an email or http request; job entities which stores the progress of a job and also drive its execution via db writes; you need to replay portions of the tx queue to get side-effects to retrigger#2019-09-1120:36johnjelinek@U09R86PA4: oh -- so, instead of doing it wrong, I should use kafka?#2019-09-1120:37favilaput it this way: you will likely burn yourself if you try to make datomic’s tx queue drive command execution#2019-09-1120:37favilathose are signs you might be trying to do that#2019-09-1120:37favilain those cases I think you should use a real queue and datomic is not a good replacement#2019-09-1120:39favilabut if you just want some event-sourcing niceties via the tx-queue without the event-sourcing complexity of using a queue, well defined events, defining an indexed projection, etc, you can replace e.g. kafka for your event source in some circumstances#2019-09-1120:40favilathe event vs command distinction was not something I really grasped until I did it wrong a few times (using datomic’s tx-queue it so happens)#2019-09-1120:42johnjelinekhmm ... 
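A minimal sketch of the tx-report-queue pattern favila describes (peer API; :audit/event is a hypothetical tx-metadata attribute your transacting code would assert on the transaction entity, and conn is an existing connection):

```clojure
(require '[datomic.api :as d])

;; Consume the peer's tx-report-queue and recover the "semantic" meaning
;; of each transaction from metadata asserted on the tx entity itself,
;; since raw datom-level changes are too granular to act on directly.
(defn start-tx-listener! [conn handle-event]
  (let [queue (d/tx-report-queue conn)]   ; java.util.concurrent.BlockingQueue
    (future
      (loop []
        (let [{:keys [db-after tx-data]} (.take queue)
              tx-eid (:tx (first tx-data))
              event  (:audit/event (d/entity db-after tx-eid))]
          ;; react only to txs that carry an explicit semantic event
          (when event
            (handle-event event tx-data)))
        (recur)))))
```

Per the thread, use this for projections and notifications, not to drive side-effecting commands that need retries or dead-lettering.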
well, I'm pretty new to all of it, and was hoping to simplify my infrastructure as I explore the paradigm (opposed to synchronous systems and shared state through distributed RESTful services)#2019-09-1120:43johnjelinekany advice/tutorials/resources is greatly appreciated#2019-09-1119:05leongrapenthinCan I allocate stateful resources in Datomic Cloud? E. g. if I want to have a cache that I use in an ION, how would I size it?#2019-09-1119:09marshall@leongrapenthin for something that you don’t want to store in the db?#2019-09-1119:10leongrapenthinyes, for example to cache calculations#2019-09-1119:12leongrapenthinor external API calls#2019-09-1119:12leongrapenthinassume sth. like a zip to geolocation resolver#2019-09-1119:14leongrapenthinwould probably be a good fit to cache in db, i know - but my question is generally how much memory is available#2019-09-1119:27marshalli would tend to avoid that approach if possible. ions run in the same JVM as your datomic database system, so anything you use there is competing with datomic itself#2019-09-1119:27marshallwith the exception that you could probably get away with it in a query group#2019-09-1119:27marshallhowever, i would still tend toward using an AWS service separately for this (i.e. put it in s3 or on EFS or store it in parameter store, etc)#2019-09-1119:41iku000888Am I correct to understand that this is only available in cloud? http://blog.datomic.com/2019/06/return-maps-for-datomic-cloud-clients.html Would be so great if it is available in on-prem 😭#2019-09-1119:47favilaIt is available for on-prem: (d/q '[:find ?e :keys :e :where [(ground 1) ?e]] )
=> [{:e 1}]#2019-09-1119:52marshallabsolutely available in on-prem#2019-09-1119:52marshallin clients and peers https://docs.datomic.com/on-prem/query.html#return-maps#2019-09-1200:23iku000888I get 'Argument :keys in :find is not a variable' with something like this
(d/q '[:find ?c :keys :c :where [?c :customer/id "id"]])
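For reference, the full peer-side call shape with a database value supplied (conn is assumed to be an existing connection, and :keys requires a sufficiently recent peer/client):

```clojure
(require '[datomic.api :as d])

;; Return-map query: :keys names each :find variable, so each result
;; row comes back as a map instead of a vector.
(d/q '[:find ?c
       :keys :c
       :where [?c :customer/id "id"]]
     (d/db conn))
;; results are shaped like [{:c <entity-id>} ...]
```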
#2019-09-1200:24iku000888@U05120CBV @U09R86PA4 Thanks for taking a look 😄
Glad that it is not a cloud only thing!#2019-09-1200:34favilaYour datomic client/peer version is not new enough#2019-09-1200:52iku000888😮 That could be it#2019-09-1200:53iku000888com.datomic/client-pro {:mvn/version "0.8.28"}#2019-09-1200:54iku000888Hah, latest is 0.9.37#2019-09-1207:01Shaitanhi, what these 3 dots mean :find [?e ...] ?#2019-09-1207:07schmeehttps://docs.datomic.com/on-prem/query.html#collection-binding#2019-09-1207:09Shaitanso it returns a collection?#2019-09-1207:09ShaitanI only see collection binding...#2019-09-1207:10schmeethere are four different bindings, all explained here: https://docs.datomic.com/on-prem/query.html#bindings 🙂#2019-09-1214:01Marcus Vieirahi everyone, I’m trying to use datomic for an example project I have and I’m running into some issues — here’s everything I did so far:
1. Registered for an account
2. Downloaded datomic-pro
3. Created a transactor.properties and updated it with the license-key I received on my email
4. Ran bin/transactor transactor.properties
5. Opened another tab, and created a database:
(require '[datomic.api :as d])
(def db-uri "datomic:)
(d/create-database db-uri)
6. Started a peer server on another tab
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d hello,datomic:
7. Added [com.datomic/client-pro "0.8.28"] to my dependencies
8. When running lein ring server I am getting this error:
Caused by: java.lang.ClassNotFoundException: org.eclipse.jetty.util.thread.NonBlockingThread
#2019-09-1215:27souenzzocan you share the stacktrace?
if you do lein repl then (d/connect ".. your db uri ..") you get the same error?#2019-09-1216:57Marcus Vieirawhen running the commands I get:
> (def client (d/client cfg))
Syntax error (ClassNotFoundException) compiling . at (http_client.clj:89:19).
org.eclipse.jetty.http.HttpCompliance
#2019-09-1216:58Marcus Vieira(following the commands from this link https://docs.datomic.com/on-prem/dev-setup.html#client)#2019-09-1217:15souenzzo*e on repl will show the full exception#2019-09-1217:16souenzzobut it should be some dependency conflict with http libs
please dump a lein deps :tree here#2019-09-1217:47Marcus Vieira[clojure-complete "0.2.5" :exclusions [[org.clojure/clojure]]]
[com.datomic/client-pro "0.9.37"]
[com.cognitect/anomalies "0.1.12"]
[com.datomic/client-api "0.8.35"]
[com.datomic/client-impl-shared "0.8.67"]
[com.cognitect/hmac-authn "0.1.195"]
[com.cognitect/transit-clj "0.8.313"]
[com.cognitect/transit-java "0.8.337"]
[javax.xml.bind/jaxb-api "2.3.0"]
[org.msgpack/msgpack "0.6.12"]
[com.googlecode.json-simple/json-simple "1.1.1" :exclusions [[junit]]]
[org.javassist/javassist "3.18.1-GA"]
[com.datomic/client "0.8.81"]
[com.cognitect/http-client "0.1.99"]
[org.eclipse.jetty/jetty-client "9.4.15.v20190215"]
[org.eclipse.jetty/jetty-io "9.4.15.v20190215"]
[org.eclipse.jetty/jetty-http "9.4.15.v20190215"]
[org.eclipse.jetty/jetty-util "9.4.15.v20190215"]
[com.datomic/query-support "0.8.16"]
[org.clojure/core.async "0.3.442"]
[org.clojure/tools.analyzer.jvm "0.7.0"]
[org.clojure/core.memoize "0.5.9"]
[org.clojure/core.cache "0.6.5"]
[org.clojure/data.priority-map "0.0.7"]
[org.clojure/tools.analyzer "0.6.9"]
[org.clojure/tools.reader "1.0.0-beta4"]
[org.ow2.asm/asm-all "4.2"]
[compojure "1.6.1"]
[clout "2.2.1"]
[instaparse "1.4.8" :exclusions [[org.clojure/clojure]]]
[medley "1.0.0"]
[org.clojure/tools.macro "0.1.5"]
[ring/ring-codec "1.1.0"]
[commons-codec "1.10"]
[ring/ring-core "1.6.3"]
[clj-time "0.11.0"]
[joda-time "2.8.2"]
[commons-fileupload "1.3.3"]
[commons-io "2.5"]
[crypto-equality "1.0.0"]
[crypto-random "1.2.0"]
[javax.servlet/servlet-api "2.5" :scope "test"]
[nrepl "0.6.0" :exclusions [[org.clojure/clojure]]]
[org.clojure/clojure "1.10.0"]
[org.clojure/core.specs.alpha "0.2.44"]
[org.clojure/spec.alpha "0.2.176"]
[org.clojure/data.json "0.2.6"]
[ring/ring-defaults "0.3.2"]
[javax.servlet/javax.servlet-api "3.1.0"]
[ring/ring-anti-forgery "1.3.0"]
[hiccup "1.0.5"]
[ring/ring-headers "0.3.0"]
[ring/ring-ssl "0.3.0"]
[ring/ring-mock "0.3.2" :scope "test"]
[cheshire "5.8.0" :scope "test"]
[com.fasterxml.jackson.core/jackson-core "2.9.0"]
[com.fasterxml.jackson.dataformat/jackson-dataformat-cbor "2.9.0" :scope "test"]
[com.fasterxml.jackson.dataformat/jackson-dataformat-smile "2.9.0" :scope "test"]
[tigris "0.1.1" :scope "test"]
#2019-09-1217:47Marcus Vieiraand here are my dependencies:
:dependencies [[org.clojure/clojure "1.10.0"]
[compojure "1.6.1"]
[org.clojure/data.json "0.2.6"]
[ring/ring-defaults "0.3.2"]
[com.datomic/client-pro "0.9.37"]]
:plugins [[lein-ring "0.12.5"]]
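A hedged guess at the ClassNotFoundException above: lein-ring's bundled Jetty adapter targets an older Jetty line than the 9.4.x jetty-client that client-pro pulls in, so mixed Jetty jars land on the classpath and classes like NonBlockingThread (removed in Jetty 9.4) go missing. One way to test that theory is to depend explicitly on a Ring Jetty adapter built against Jetty 9.4 (the version below is illustrative):

```clojure
;; project.clj sketch -- align every Jetty artifact on one version line.
:dependencies [[org.clojure/clojure "1.10.0"]
               [compojure "1.6.1"]
               [org.clojure/data.json "0.2.6"]
               [ring/ring-defaults "0.3.2"]
               ;; ring 1.7.x's adapter is built against Jetty 9.4,
               ;; matching the jetty-client brought in by client-pro
               [ring/ring-jetty-adapter "1.7.1"]
               [com.datomic/client-pro "0.9.37"]]
```

After changing versions, `lein deps :tree` should show a single Jetty version across jetty-client, jetty-io, jetty-http, jetty-util, and jetty-server.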
#2019-09-1214:16Shaitanis there a SQL injection analogy like Datalog injection ?#2019-09-1216:47souenzzo[:find ?e
:where [(clojure.core/eval "(prn :ok)")]]
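souenzzo's snippet shows why the query form itself must be trusted: function clauses resolve and call fully-qualified functions. The usual mitigation is to keep the datalog a static literal and pass anything user-supplied only through :in bindings, which are bound as data and never evaluated. A sketch (attribute name borrowed from earlier in the log):

```clojure
(require '[datomic.api :as d])

;; Never splice untrusted input into a query form. Keep the query a
;; static literal and feed user data through :in bindings instead.
(defn customer-by-id [db untrusted-id]
  (d/q '[:find ?e .
         :in $ ?id
         :where [?e :customer/id ?id]]
       db untrusted-id))
```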
#2019-09-1216:40kelvedenI'm really struggling to get my AWS ECS service to connect to datomic cloud via a VPC endpoint. I appreciate that there are a whole bunch of areas where the connection could fail but does anyone have some war stories/gotchas related to using VPC endpoints to share that might shed some light?
A few bits of info:
* Datomic cloud is deployed via AWS marketplace to the same AWS account on its recommended VPC (i.e. 10.213.0.0/16)
* Everything (Datomic, ECS service et al) is running in the same AWS region.
* Bastion is running - and I can connect locally to Datomic via Bastion successfully.
* My ECS service is running in a separate VPC.
* A VPC endpoint and corresponding endpoint service has been added as described here https://docs.datomic.com/cloud/operation/client-applications.html#create-endpoint.
* The security group for the endpoint has deliberately been set very open whilst I get to the bottom of this (i.e. ingress: allow all TCP traffic from anywhere; egress: allow everything to everywhere)
* I've tried an endpoint without a principal whitelist and with a principal whitelist of * - no difference.
There's nothing special about the client config (I think); essentially this: {:server-type :cloud
:region "<my-region>"
:system "<my-system>"
:endpoint "http://<endpoint-dns>:8182"
:proxy-port 8182}
The error I'm getting is a simple Connection refused. Any thoughts gratefully appreciated!#2019-09-1306:07jumarI have no experience with Datomic but did you try to ssh into the ecs instance and check logs; or possibly try to connect from the shell?#2019-09-1313:05kelvedenYeah I did try all that thanks.#2019-09-1313:08kelvedenI have got it working now though and I'll note down the key things I changed to make it work in a little while.#2019-09-1314:53kelvedenI had a couple of problems: one related to putting my VPC endpoint in the wrong subnet of my VPC and the other was that the IAM role under which the ECS service runs requires read privileges to the Datomic S3 bucket.#2019-09-1408:49kelvedenSomething that's confusing me is the lack of mention of the web console for Datomic Cloud. There's nothing in the CF template that I can find either. Does anyone know if it's supported?#2019-09-1411:37matanIs datomoic still bound to AWS?#2019-09-1418:38andy.fingerhutOthers are really the experts here, and can correct what I say here in case I am off: Datomic On-Prem was the first version of Datomic released, and was never bound to AWS or any other cloud provider. Datomic Cloud has been integrated with AWS since its release, and I have not heard of a version of it released for any other cloud provider besides AWS.#2019-09-1519:45Marcus Vieirahi everyone 👋 how would I go about reseting the data inside datomic after each test block is performed?#2019-09-1611:48chrisblomuse a new database, https://github.com/vvvvalvalval/datomock has some stuff to make this efficient#2019-09-1612:42Marcus VieiraI’ll take a look, thanks a lot!#2019-09-1521:22kelvedenI'm trying to find any sort of documentation on how to run locally running tests against code that (in production) will run against Datomic Cloud. Does anyone know of any?
datomic-free can be spun up easily enough of course but connecting to that and querying it uses a different client library (`datomic.api` from com.datomic/datomic-free) to that used for Datomic Cloud (`datomic.client.api` from com.datomic/client-cloud).#2019-09-1521:23kelvedenIdeally I just want to spin up datomic-free in-memory and then connect to it from the cloud client.#2019-09-1521:35kelvedenThe code for datomic.client.api/client does mention that :local is a valid :server-type. However, there's no mention of it or what config to pass it in the docstring for the function (nor anywhere else I can find).
Also, the code for local server type refers to an artifact that I can't find anywhere: com.datomic/client-impl-local.#2019-09-1522:02kennyWe’ve been using this for a while now and it gets the job done https://github.com/ComputeSoftware/datomic-client-memdb#2019-09-1523:13kelvedenThanks @U083D6HK9, that looks interesting.#2019-09-1616:02franquitoHello! I'm thinking of creating an ElasticSearch index by traversing the EAVT, but I want to be sure I don't include retracted things.#2019-09-1616:06Joe LaneWill you be recreating the ES index as a batch? I've done a lot of work on this problem but with Lucene and it may be easier to add all assertions and retractions, then filter via the es query itself.
Other possibility is filtering by :added while traversing? But this goes back to my original question as I'm not sure what EAVT returns.#2019-09-1616:39favilaonly the history database includes retractions#2019-09-1616:39favila(d/history db)#2019-09-1616:39favilaif you never called d/history on a db it will only ever have assertions#2019-09-1616:43franquitoOk thanks! So if I transact and then retract an entity e, it won't be included in the EAVT index?#2019-09-1617:16favilacorrect#2019-09-1617:16favila(and easily verified by trying it)#2019-09-1617:33franquitoThank you. Yes, that was a lazy question, sorry.#2019-09-1616:51thumbnail👋:skin-tone-2: Hello,
I’m noticing this warning in our datomic projects;
WARNING: requiring-resolve already refers to: #'clojure.core/requiring-resolve in namespace: datomic.common, being replaced by: #'datomic.common/requiring-resolve
Any way to circumvent or suppress?#2019-09-1715:47jaretHi @UHJH8MG6S are you using datomic pro? What version? I believe we resolved this issue after clojure 1.10.1 release in the latest Datomic pro.#2019-09-1715:47jaret0.9.5951#2019-09-1716:13thumbnailCurrently I’m on datomic pro 0.9.5786. will check out latest version#2019-09-1716:20thumbnailThanks! bumping to 0.9.5951 fixed it 👍:skin-tone-2:#2019-09-1620:06wilkerluciohello, we are discussing some performance characteristics on datomic here, we are using on-prem. the current implementation uses datomic queries and entities API, our query deals with 10.000 entities currently, we are wondering if moving from q + entities to q + pull would be faster. our assumption is that it may be faster because the pull may be able to more efficiently get all required datoms inside our instance, but we don't know enough about internals to validate if this is a good assumption. does this refactor approach make sense?#2019-09-1620:12favilaif you know what you need ahead of time pull is likely to be faster, or at least can be made faster (whereas entity will always have a “should I prefetch this? will you need it?” problem)#2019-09-1620:13favilathere’s another advantage that pull gives you Real Maps and can do some key renaming for you, and you know for sure that IO is done#2019-09-1620:13favilaso you can isolate potentially blocking/latency-sensitive code to its own threads#2019-09-1620:14favila(entity has unpredictable latency because there’s always a chance it has to perform blocking io)#2019-09-1620:26wilkerluciothanks, we did some benchmarks and got results that match your description:
(crit/report-result
(crit/quick-bench
(->> (d/q '{:find [[?e ...]]
:where [[?e :artist/name _]]}
db)
(mapv (comp #(select-keys % [:artist/name])
#(d/entity db %)))))))
Evaluation count : 48 in 6 samples of 8 calls.
Execution time mean : 12.594899 ms
Execution time std-deviation : 1.845767 ms
Execution time lower quantile : 10.651151 ms ( 2.5%)
Execution time upper quantile : 14.538328 ms (97.5%)
Overhead used : 1.820422 ns
==============================================================
(crit/with-progress-reporting
(crit/report-result
(crit/quick-bench
(->> (d/q '{:find [[(pull ?e [:artist/name]) ...]]
:where [[?e :artist/name _]]}
db)))))
Evaluation count : 36 in 6 samples of 6 calls.
Execution time mean : 18.813521 ms
Execution time std-deviation : 298.939459 µs
Execution time lower quantile : 18.551782 ms ( 2.5%)
Execution time upper quantile : 19.209279 ms (97.5%)
Overhead used : 1.820422 ns
==============================================================
(crit/with-progress-reporting
(crit/report-result
(crit/quick-bench
(->> (d/q '{:find [?e ?name]
:where [[?e :artist/name ?name]]}
db)))))
Evaluation count : 300 in 6 samples of 50 calls.
Execution time mean : 2.156460 ms
Execution time std-deviation : 143.362504 µs
Execution time lower quantile : 1.990183 ms ( 2.5%)
Execution time upper quantile : 2.312776 ms (97.5%)
Overhead used : 1.820422 ns#2019-09-1620:26wilkerlucioalso, it seems like using the datalog matching is much faster (10x) than both of the other options#2019-09-1620:26wilkerluciothis was run against the demo mbrainz database#2019-09-1620:36wilkerluciojust to be sure, new benchmarks with full tests:#2019-09-1620:36wilkerlucio
(crit/report-result
(crit/bench
(->> (d/q '{:find [[?e ...]]
:where [[?e :artist/name _]]}
db)
(mapv (comp #(select-keys % [:artist/name])
#(d/entity db %))))
:verbose)))
Warming up for JIT optimisations 10000000000 ...
compilation occurred before 1 iterations
compilation occurred before 347 iterations
Estimating execution count ...
Sampling ...
Final GC...
Checking GC...
Finding outliers ...
Bootstrapping ...
Checking outlier significance
x86_64 Mac OS X 10.14.1 12 cpu(s)
Java HotSpot(TM) 64-Bit Server VM 25.181-b13
Evaluation count : 6420 in 60 samples of 107 calls.
Execution time sample mean : 9.898978 ms
Execution time mean : 9.895288 ms
Execution time sample std-deviation : 536.776513 µs
Execution time std-deviation : 544.280358 µs
Execution time lower quantile : 9.142990 ms ( 2.5%)
Execution time upper quantile : 10.901935 ms (97.5%)
Overhead used : 1.820422 ns
==============================================================
(crit/with-progress-reporting
(crit/report-result
(crit/bench
(->> (d/q '{:find [[(pull ?e [:artist/name]) ...]]
:where [[?e :artist/name _]]}
db))
:verbose)))
Warming up for JIT optimisations 10000000000 ...
compilation occurred before 103 iterations
compilation occurred before 307 iterations
Estimating execution count ...
Sampling ...
Final GC...
Checking GC...
Finding outliers ...
Bootstrapping ...
Checking outlier significance
x86_64 Mac OS X 10.14.1 12 cpu(s)
Java HotSpot(TM) 64-Bit Server VM 25.181-b13
Evaluation count : 3300 in 60 samples of 55 calls.
Execution time sample mean : 18.169708 ms
Execution time mean : 18.174275 ms
Execution time sample std-deviation : 516.347064 µs
Execution time std-deviation : 522.849633 µs
Execution time lower quantile : 17.626452 ms ( 2.5%)
Execution time upper quantile : 19.723233 ms (97.5%)
Overhead used : 1.820422 ns
Found 6 outliers in 60 samples (10.0000 %)
low-severe 2 (3.3333 %)
low-mild 4 (6.6667 %)
Variance from outliers : 15.7926 % Variance is moderately inflated by outliers
==============================================================
(crit/with-progress-reporting
 (crit/report-result
  (crit/bench
   (->> (d/q '{:find [?e ?name]
               :where [[?e :artist/name ?name]]}
             db))
   :verbose)))
Warming up for JIT optimisations 10000000000 ...
compilation occurred before 5846 iterations
Estimating execution count ...
Sampling ...
Final GC...
Checking GC...
Finding outliers ...
Bootstrapping ...
Checking outlier significance
x86_64 Mac OS X 10.14.1 12 cpu(s)
Java HotSpot(TM) 64-Bit Server VM 25.181-b13
Evaluation count : 29940 in 60 samples of 499 calls.
Execution time sample mean : 1.945301 ms
Execution time mean : 1.945398 ms
Execution time sample std-deviation : 32.284694 µs
Execution time std-deviation : 32.687682 µs
Execution time lower quantile : 1.908870 ms ( 2.5%)
Execution time upper quantile : 2.033389 ms (97.5%)
Overhead used : 1.820422 ns
Found 3 outliers in 60 samples (5.0000 %)
low-severe 3 (5.0000 %)
Variance from outliers : 6.2567 % Variance is slightly inflated by outliers#2019-09-1620:46favilathese tests seem to show that I am wrong? d/entity appears 2x faster to me#2019-09-1620:46favilathan pull in a query expression#2019-09-1620:47faviladid you try d/pull or d/pull-many after the query?#2019-09-1620:48favilamaybe there's a large fixed cost to compiling the pull expression#2019-09-1620:49favilaand yes, pulling directly from the query itself (no entity or pull) is always going to be much faster, but the shape is often wrong and it can't represent nils#2019-09-1719:19dazldwilker, did you try extracting the pull part from your query, into its own function?#2019-09-1719:19dazldquite interested in your findings#2019-09-1719:29wilkerlucioI haven't run the tests with the pull out yet; I'll have to test that one some other time#2019-09-1714:05Oleh K.I understand that this kind of question has been asked a lot of times, but how do you do sorted pagination of complex objects in datomic cloud properly? By querying only IDs and sort-by fields, then sorting, dropping and taking a limit and, finally, pulling the entity with all nested fields and objects?#2019-09-1714:09favilayeah, that's basically the way to go#2019-09-1714:13Joe LaneDo you want the pages to be against the same basis-t of the database? If so, you could separate the querying and sorting from the limit, dropping, and pulling.
Then you could stick the sorted resulting eids in a cache somewhere and refer to it by some request guid.
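A rough sketch of the query-sort-page-pull approach being discussed (attribute names here are hypothetical; the synchronous client API is assumed):

```clojure
;; Sketch: sorted pagination against a single db value.
;; :product/name is a hypothetical attribute.
(defn page-of-products
  [db {:keys [offset limit]}]
  (let [;; 1. Query only eids plus the sort key.
        rows (d/q '[:find ?e ?name
                    :where [?e :product/name ?name]]
                  db)
        ;; 2. Sort, drop, and take in application code.
        eids (->> rows
                  (sort-by second)
                  (drop offset)
                  (take limit)
                  (map first))]
    ;; 3. Pull the full shape only for the entities on this page.
    (mapv #(d/pull db '[*] %) eids)))
```

Because every step runs against the same db value, all pages computed from it share one basis-t.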
Generally I'd go with @U09R86PA4’s approach unless there is a special need for the caching approach above.#2019-09-1716:28Oleh K.Thanks. No, caching is not a variant#2019-09-1812:53Andrey PopeloHey
Datomic On-Prem currently requires JDK 7 or 8. Is this also true for the peer library? Can the peer library be used with the latest JDK?#2019-09-1813:10favilaI don't know if this is supported but I have used the peer on jdk 11 without problems. (or maybe there were problems getting it to work but I've forgotten them and they are non-issues now)#2019-09-1813:10favilaI've never successfully used a transactor on java 11#2019-09-1813:11favila(I probably encountered minor opposition and decided it was safer not to even try to work through it, but honestly I don't remember clearly)#2019-09-1814:04marshallThe latest release should run on jdk 11#2019-09-1814:04marshallIf you have any issues or problems doing so, please let us know#2019-09-1816:21andrew.sinclairThe peer or the transactor can run on jdk 11, or both?#2019-09-1817:08marshallboth#2019-09-1906:01Andrey PopeloGreat, thank you#2019-09-1814:41jeroenvandijkDid anyone happen to try and use the Github package registry to push the datomic-pro jar to their private repo?#2019-09-1814:42jeroenvandijkI did and I'm failing to pull it back in with lein. I've asked Github for support#2019-09-1821:46timgilbertHaven't tried on github but we've got it published on a private S3 repo via mvn deploy-file without too much trouble#2019-09-1908:03jeroenvandijk@U08QZ7Y5S yeah I did that with other libraries too. This did result in AWS credentials in the project files as something else got too involved. I could probably do better here. Now I was hoping Github would be a nice alternative. Maybe it needs some time#2019-09-1817:37eraadHi! I would appreciate help with the following: I have a composite tuple that includes 5 unique attributes and I want to add a new unique attribute.
What is the correct way to do it?#2019-09-1817:37eraadThe docs say that :tupleAttrs cannot be altered so the option would be to add a new composite tuple. I created one with the extra attribute, but the old one gets in the way.#2019-09-1820:03ivanaHi! Is it possible to get db/id via get-else or use an underscore reverse link in a query?#2019-09-1820:05favila[(get-else ?e :ref-attr ?not-found) ?v] -> ?v is your ":db/id" (if I understand your question correctly?)#2019-09-1820:06favilaThe reverse form of an attr isn't recognized in query clauses; reverse the clause order instead: [?v :attr ?e] instead of [?e :_attr ?v]#2019-09-1820:09ivanaThanks, but reversing the order leaves me unsure how to use get-else. I'll try your first suggestion#2019-09-1820:14favilaget-else only works on cardinality-one attributes; the reverse would be cardinality-many#2019-09-1820:14favilaso it wouldn't work anyway#2019-09-1820:15favilayou could do an explicit d/datoms and check it for emptiness#2019-09-1820:16ivana(d/q '[:find [(count-distinct ?item)]
        :in $
        :where
        [?o :order/pickup-items ?item]
        [?r :recycle/items ?item]]
      db)
=> [541]
#2019-09-1820:17ivana(d/q '[:find [(count-distinct ?item)]
        :in $
        :where
        [?o :order/pickup-items ?item]
        ; [?r :recycle/items ?item]
        ]
      db)
=> [1561]
#2019-09-1820:18ivanaI want all 1561 rows but with ?r when it's present. What should I do?#2019-09-1820:19favilause pull?#2019-09-1820:19ivanaMmmm, maybe I have to read more about it#2019-09-1820:20favilaI'm not 100% sure what you want the output to be#2019-09-1820:21favilasomething like [[?item (?r1 ?r2 ,,,)],,,]?#2019-09-1820:21favilai.e. all the ?r reachable from an item?#2019-09-1820:23ivanaNo. I want all 1561 items as separate rows, but if an item is chosen also in recycle, I want its id, otherwise false by get-else or something similar#2019-09-1820:23favilahow is that different from what I said?#2019-09-1820:24ivanaI do not want any aggregations#2019-09-1820:25favilaif you don't aggregate, you may get [item1 r1] [item1 r2] etc#2019-09-1820:25favilai.e. item repeats#2019-09-1820:25ivanait is not bad, I can process them in clojure#2019-09-1820:25favilaand if an item doesn't join to an r, what would you treat that as? having a single r?#2019-09-1820:26favilawith some sentinel?#2019-09-1820:26ivanasomething default#2019-09-1820:26ivanamaybe false#2019-09-1820:27ivanaget-else works well, but not with db/id, only with values#2019-09-1820:27favilawhat do you ultimately want? {:item item-eid :recycles #{recycle1 recycle2,,,}}?#2019-09-1820:27favilabut if item doesn't have a recycle, what would it be? {:item item-eid :recycles #{:default}}?#2019-09-1820:28favila^^ this is what I don't get#2019-09-1820:28ivana[item1 order1 rec1]
[item1 order2 rec2]
[item2 order2 false]
....#2019-09-1820:29favilayou mean [item2 order3 false] for that last one?#2019-09-1820:29favilawhat you wrote isn't possible#2019-09-1820:29ivanait can be the same order#2019-09-1820:29favilanm I missed that item1->2#2019-09-1820:30favilaif your query is simple, consider two queries and cat them together#2019-09-1820:31favila'[:find ?item ?o ?r
:where
[?o :order/pickup-items ?item]
  [?r :recycle/items ?item]] if they have an item#2019-09-1820:31ivanaunfortunately my query is sufficiently complex; I typed a simple example above only to show the problem#2019-09-1820:31favilaok, then you can use d/datoms directly#2019-09-1820:31favilae.g. a rule like this:#2019-09-1820:32ivanaYes, the code above extracts ONLY if the item has a recycle#2019-09-1820:32ivanaSimply speaking, I want get-else for db/id#2019-09-1820:33favilait's not that simple because of cardinality#2019-09-1820:37ivanaok, thanks again, I'll think about another way#2019-09-1820:49favilanm you don't need datoms, a rule can do it#2019-09-1820:50favila(d/q
 '[:find ?i ?o ?r
   :where
   [?o :order/pickup-items ?i]
   (or-join [?i ?r]
     (and
      (not [_ :recycle/items ?i])
      [(ground false) ?r])
     [?r :recycle/items ?i])]
 '[[o1 :order/pickup-items i1]
   [o2 :order/pickup-items i1]
   [o2 :order/pickup-items i2]
   [r1 :recycle/items i1]
   [r2 :recycle/items i1]])
=> #{[i2 o2 false] [i1 o1 r2] [i1 o2 r2] [i1 o1 r1] [i1 o2 r1]}
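The same "bind ?r or default to false" idea can also be packaged as a rule, per the "a rule can do it" remark — a sketch using the same attribute names and literal data as the example above (the rule name recycle-of is made up):

```clojure
;; Rule with two bodies (OR semantics): bind ?r to a recycle entity,
;; or to false when no recycle references the item at all.
(def rules
  '[[(recycle-of ?i ?r)
     [?r :recycle/items ?i]]
    [(recycle-of ?i ?r)
     (not [_ :recycle/items ?i])
     [(ground false) ?r]]])

(d/q '[:find ?i ?o ?r
       :in $ %
       :where
       [?o :order/pickup-items ?i]
       (recycle-of ?i ?r)]
     '[[o1 :order/pickup-items i1]
       [o2 :order/pickup-items i1]
       [o2 :order/pickup-items i2]
       [r1 :recycle/items i1]
       [r2 :recycle/items i1]]
     rules)
```

This keeps the query body flat; the or-join/ground trick is hidden behind the rule name and can be reused across queries.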
#2019-09-1820:53favilamy other approach was call (d/datoms), perform a bounded count, and either destructure if nonzero or bind to your sentinel if zero#2019-09-1820:53favilaI’m still not convinced this is advisable, but there you go#2019-09-1820:56ivanaThanks, I'l try you suggestion in a some minutes#2019-09-1820:58ivanaFirst impression that it is great and what I need!#2019-09-1820:59ivanaI have not discovered or-join magic with ground yet )#2019-09-1820:07BrianHey! I can do (d/transact db {:tx-data some-map}) to add a map of data to my database. Is there a way to do the opposite? If I have some-map or stuff and I want to retract those from the database?#2019-09-1820:08favilathe map form of a transaction is purely syntactic sugar for a bunch of [:db/add ...] clauses (which is what everything ultimately expands to. There’s no syntactic sugar for :db/retract#2019-09-1820:10favilawhen you say {:db/id e :attr v} in a tx, you are only adding assertions to the entity, you are not making the entity look like your map.#2019-09-1820:12favilathat said there are some transaction functions floating around which can make your entity “look like” the argument, i.e. do whatever adds/retracts are necessary atomically#2019-09-1820:13favilahere’s some I wrote a long time ago, I’m sure there are better ones now: https://gist.github.com/favila/8ce31de4b2cb04cf202687c6a8fa4c94#2019-09-1820:16BrianOkay copy copy. With that, let me give you a little more context. So I have a vector of vectors with 3 values in them. I was very easily able to zipmap keys into those vectors and turn them into a vector of maps, then boom I could add them to the database. Now I have that same vector of vectors and I want to instead retract that information from the database. Should I just put all that information through a function which returns [:db/retract ...] until I have a bunch of those?#2019-09-1820:17BrianLooking at that code now. Would that solve the problem given the context I just shared? 
If so, I'll run with that. Otherwise, it's a bit complicated for me to off-the-bat understand what it's doing. I'm a bit new to Clojure I must admit#2019-09-1820:54favilayes, just generate retractions#2019-09-1820:55favila{:db/id id :attr v} => [:db/add id :attr v]#2019-09-1914:07Laverne SchrockWhen I push my Datomic Ions app to a query group running 482-8794 (using version 0.9.234 of com.datomic/ion-dev), I get a non-empty :dependency-conflicts map.
{:deps #:com.cognitect{s3-creds #:mvn{:version "0.1.23"}},
:doc
"The :push operation overrode these dependencies to match versions already running in Datomic Cloud. To test locally, add these explicit deps to your deps.edn."}
If I add that to my deps.edn for local testing, I get Could not find artifact com.cognitect:s3-creds:jar:0.1.23 in central (). Does someone at Cognitect need to cut a release?#2019-09-1914:13jaret@lvernschrock would you mind sharing your deps?#2019-09-1914:21Laverne SchrockIt pulls in a fair number of dependencies internal to our company. Let me see if I can trim it down to something you could build a classpath with.#2019-09-1914:22jaretare you including both ion-dev and ion?#2019-09-1914:23jaretand what version of client-cloud?#2019-09-1914:23jaretI think I just need to see those and if you have them aliased or in the main deps.#2019-09-1914:29Laverne SchrockIn that case, it looks something like the following:
{:paths ["src" "resources"]
 :deps {com.datomic/client-cloud {:mvn/version "0.8.78"}
        com.datomic/ion {:mvn/version "0.9.35"}
        org.clojure/clojure {:mvn/version "1.10.0"}
        ;; ... a bunch of other deps ...
        }
 :mvn/repos {"datomic-cloud" {:url ""}
             "central" {:url ""}
             "clojars" {:url ""}
             ;; ... internal repo ..
             }
 :aliases {:tasks {:extra-paths ["tasks"]
                   :extra-deps {com.datomic/ion-dev {:mvn/version "0.9.234"}
                                ;; ... a bunch of other deps ...
                                }}
           :deployment-overrides
           {:override-deps {com.amazonaws/aws-java-sdk-s3 {:mvn/version "1.11.479"}
                            com.cognitect/transit-clj {:mvn/version "0.8.285"}
                            commons-codec/commons-codec {:mvn/version "1.10"}
                            org.slf4j/slf4j-api {:mvn/version "1.7.14"}
                            org.clojure/core.async {:mvn/version "0.3.442"}
                            com.cognitect/s3-creds {:mvn/version "0.1.23"}}}}}
Then I'd run
clj -Atasks ...
where ... runs a main method that eventually calls
(datomic.ion.dev/push {:repl-cmd "-A:deployment-overrides"})
#2019-09-1921:17jaret@lvernschrock
>The :ion server-type implements this behavior, connecting as a client when remote, and providing an in-memory implementation of client when running in Datomic Cloud.
Because ions are using an in-memory implementation of client it will not use your main dep of client-cloud. If you moved com.datomic/client-cloud {:mvn/version "0.8.78"} to your alias (and I recommend that you do) you would not see the dependency conflict override. It is worth pointing out here that in order to get an accurate report of the overrides section, you have to match the version of ion-deploy with the version of cloud you are running.#2019-09-1921:19jaretso in your deps it should probably look like:
{:paths ["src" "resources"]
 :deps {com.datomic/ion {:mvn/version "0.9.35"}
        org.clojure/clojure {:mvn/version "1.10.0"}
        ;; ... a bunch of other deps ...
        }
 :mvn/repos {"datomic-cloud" {:url ""}
             "central" {:url ""}
             "clojars" {:url ""}
             ;; ... internal repo ..
             }
 :aliases {:tasks {:extra-paths ["tasks"]
                   :extra-deps {com.datomic/client-cloud {:mvn/version "0.8.78"}
                                com.datomic/ion-dev {:mvn/version "0.9.234"}
                                ;; ... a bunch of other deps ...
                                }}
           :deployment-overrides
           {:override-deps {com.amazonaws/aws-java-sdk-s3 {:mvn/version "1.11.479"}
                            com.cognitect/transit-clj {:mvn/version "0.8.285"}
                            commons-codec/commons-codec {:mvn/version "1.10"}
                            org.slf4j/slf4j-api {:mvn/version "1.7.14"}
                            org.clojure/core.async {:mvn/version "0.3.442"}
                            com.cognitect/s3-creds {:mvn/version "0.1.23"}}}}}
#2019-09-2013:47Laverne SchrockOkay. That makes sense I guess (and it worked as desired when I tested it 👍).
> match the version of ion-deploy with the version of cloud you are running
I'm not sure what this means since the version numbers for com.datomic/ion-dev don't seem to line up with com.datomic/ion, the clojure client , or the query group templates.#2019-09-1915:15BrianI am writing an extremely simple query which is returning a hard-to-understand error
[:find ?port
 :where
 [_ :blacklist/port ?port]]
This query works and returns [[443] [80] ...], however I want it to return [443 80 ...],
and so I have adjusted my query to this one:
[:find [?port ...]
 :where
 [_ :blacklist/port ?port]]
And I get Only find-rel elements are allowed in client find-spec, see http://docs.datomic.com/query.html#grammar
I know I've done this successfully on-prem. I'm now working with cloud. I am including [clojure.data.json :as json]. Any ideas?#2019-09-1915:16Brian[datomic.client.api :as d]#2019-09-1915:16Joe Lanecloud doesn't have [:find [?port ...] in its find-rel grammar#2019-09-1915:17BrianSigh. Any alternatives or do I just need to deal with this?#2019-09-1915:18Joe Laneuse (into [] cat (d/q '[:find ?port :where [_ :blacklist/port ?port]] (d/db (get-conn))))#2019-09-1915:20BrianThank you!!#2019-09-1915:21Joe LaneNP @brian.rogers, lmk if you have any other questions or cloud hurdles. I've only ever used cloud so I've had the complement of your problem many many times 🙂#2019-09-1915:21BrianI used that binding in my :in clause. I didn't expect to be unable to use it in my :find clause#2019-09-1915:21BrianMuch appreciated @lanejo01 =]#2019-09-1915:22Joe LaneAre you using ions or the client api in your application in the cloud vpc?#2019-09-1915:27BrianHmm well I plan on deploying this with aws code deploy so it will be an ion eventually. I am currently testing it against my aws database. I'm not sure how the two (ion vs client api) differ necessarily. At the end of the day what I need is to deploy this function (making it an ion I believe) and that this function will pull some data from my database in the vpc#2019-09-1915:29Joe LaneIn your second sentence is "... my aws database" your datomic cloud database or a different type of existing database (an existing mysql db, for example?)#2019-09-1915:30Brianmy Datomic Cloud database*#2019-09-1915:31Joe LaneI think what you will ultimately want is to leverage Datomic Ions. It's a blast to work with. Have fun!#2019-09-1916:48markbastianI am trying to use a protocol to obtain the implementation of the query function for a given datalog implementation (e.g. Datomic, Datomic Cloud, Datascript). The entire implementation for Datomic Cloud is here:
(ns foo.proto
  (:require [datomic.client.api :as d])
  (:import (datomic.client.impl.shared.protocols Db)))

(defprotocol IDatalogQueryable
  (q [this]))

(extend-protocol IDatalogQueryable
  Db
  (q [this] d/q))
When I run this I get the following error:
Syntax error (ClassNotFoundException) compiling at (src/foo/proto.clj:1:1).
datomic.client.impl.shared.protocols.Db
This does work when I use the in-mem datomic api jar or with Datascript.
I can do the following to make the class load and become visible:
(ns foo.works
  (:require [datomic.client.api :as d]
            [foo.config :refer [config]]))

(let [client (d/client config)
      conn (d/connect client {:db-name "example-db"})]
  (d/db conn))

(import '(datomic.client.impl.shared.protocols Db))

(defprotocol IDatalogQueryable
  (q [this]))

(extend-protocol IDatalogQueryable
  Db
  (q [this] d/q))
It looks like if I initiate a connection the right classes are loaded and then I am able to extend the protocol.
Does anyone have any ideas as to how I might make the first example work? I am assuming there is some sort of dynamic classloading going on that I need to do, but I don't want to have to initiate a test connection just to do that.#2019-09-1917:15Joe LaneI would recommend not using that (let [client... block, you will run into this issue if you ever use that in an ion https://docs.datomic.com/cloud/troubleshooting.html#assert-failed#2019-09-1917:16Joe LaneAre you trying to extend the datalog engine to clojure datatypes with cloud-client like how you currently can with on-prem?
My ultimate goal is to be able to have a set of queries that work across all three of datomic on prem, datomic cloud, and datascript. This isn't a general capability, so the queries I have do work with all three so long as I have the right q function. So, I need the ability to use the right q based on the implementation. Using a simple protocol to dispatch on the db type was what I was hoping for.#2019-09-1917:50markbastianThe real problem is that trying to import datomic.client.impl.shared.protocols.Db before creating a client fails.#2019-09-1917:55Joe LaneCan you dispatch on which library is available on the classpath?#2019-09-1918:06markbastianPotentially. The challenge with that is that I would only be able to have one implementation at a time on my classpath.#2019-09-2019:54BrianHow do I give a :db/id to an entity? I'm using :db/id when trying to transact my data and I get an "unable to resolve entity" error. How can I say "if it doesn't exist, make one with this id"?#2019-09-2019:56favilaUse a tempid (string) to create a new entity. Note that an entity is inherently its id, you don't "give" it one.#2019-09-2019:57favilahave you gone through a tutorial yet?#2019-09-2019:57favilaCloud: https://docs.datomic.com/cloud/tutorial/assertion.html#2019-09-2019:57favilaon prem: https://docs.datomic.com/on-prem/getting-started/transact-data.html#2019-09-2020:01BrianTell me if my instructions were incorrect. I have a blacklist entity and I want to add things to a cardinality-many attribute of it. I was told I would be able to use :db/id to "name" it for sake of argument :blacklist and from here on out when I want to add to that blacklist I would be able to include :db/id :blacklist in my map-form transaction. Is that a flawed assumption so far?#2019-09-2020:03favilaIt's a simplification? or confusion maybe?#2019-09-2020:03favilaWhat's really happening is that all data in datomic is assertions and retractions (datoms) which state one "fact".
They are structured like [entity-id, attribute-id, value, transaction, assertion-or-retraction]#2019-09-2020:04favilathe map form from a pull is a projection of this, where all assertions sharing the same entity-id are joined together, the entity id itself is put on :db/id, and the attributes+values are map entries#2019-09-2020:05favilasimilarly, the map form of a transaction is syntactic sugar that expands out into [:db/add entity-id attribute value] assertions#2019-09-2020:06favilaadded to all this, you don’t have much control over the actual value of the entity id, so it doesn’t make much sense to talk about “naming” an entity#2019-09-2020:07favilaif you want a possibly new entity, you use a tempid: [:db/add "some-string-representing-a-tempid" attribute value] or {:db/id "some-string-representing-a-tempid" attribute value}#2019-09-2020:08BrianSo is the answer to the "I only want a single blacklist and only want to ever add stuff to that one" to, before any transactions, query for the eid of the blacklist and use that (or a temp-id on first go)?#2019-09-2020:09favilayou can use an upserting indexed attribute for this purpose: the index lookup will resolve to an existing entity id or create one if it doesn’t exist yet#2019-09-2020:09favilaif this is really a singleton, you can use :db/ident#2019-09-2020:09favila{:db/id "the-blacklist" :db/ident :my/blacklist-entity …}#2019-09-2020:10favilaor you can make your own attribute#2019-09-2020:10favilahttps://docs.datomic.com/on-prem/identity.html discusses various ways of “addressing” an entity#2019-09-2020:12favila(actually I take that back, I’m not sure :db/ident is upserting; but you can at least ensure it is created as part of your schema bootstrapping)#2019-09-2020:14BrianThank you!#2019-09-2119:57zalkyHi all! I noticed that the latest versions of Datomic have new tuple types. 
Reading through the docs, can anyone clarify whether homogeneous tuple types have the same 2-8 element length limitations as the other two tuple types? It says homogeneous tuples have "variable" length. It's just a little ambiguous. Thanks!#2019-09-2216:19bartukahi ppl, what is the idiomatic way to verify if a transaction was completed successfully?#2019-09-2219:25favilacheck the return value of the transact function call#2019-09-2221:25bartukaDatomic usually raises an exception when something goes wrong?#2019-09-2221:26bartukaI don't know if this is something of my setup in Cider or if it's expected to be this way. For some reason, I expected datomic to return me only a map containing the error data#2019-09-2223:52Laverne Schrock@UBSREKQ5Q Compare https://docs.datomic.com/client-api/datomic.client.api.html and https://docs.datomic.com/client-api/datomic.client.api.async.html#2019-09-2312:28favilaand also https://docs.datomic.com/on-prem/clojure/index.html where it returns a future which doesn't throw until you read it. (You didn't say which api you were using)#2019-09-2312:29favilabut in all cases, check the return value#2019-09-2312:53bartukaThanks guys, the exception was happening at my REPL. I was reading the future but not able to catch it before it happened in the REPL. Everything is fine now. Thanks for the links#2019-09-2312:57bartukaas far as I understand how datomic works, it is accepted to have wrong information on your system in the past versions of the database. [imagining you made a mistake 2 months ago and only corrected it now]. However, as many people might be consuming information at different points in time, they might use wrong information to make decisions, right?#2019-09-2313:04alexmillerthis is true of all databases. Datomic lets you know that you did it.#2019-09-2313:08bartukasorry but I don't understand how this would help people querying past versions of the data, especially non-technical people.
I understand the concept of issuing a new transaction as pointed out by @U050ECB92 but the wrong data is there forever and people might use it.#2019-09-2313:09bartukaI was making a lot of confusion about "time travel" functionalities in datomic. The distinction of "event time" and "recording time" was very helpful to make it more clear to me#2019-09-2313:11bartukabut I still don't know some implications, for example, imagine I want to run a financial report from three months ago. The data is there, cool. But we issued a transaction to correct some balances, but now the team running the report should not use asOf, txInstant or they will produce a wrong report, that is right?#2019-09-2313:12alexmillerwhen you assert new facts, you retract the old ones. it is possible to do queries as of a point in time in the past before they had been retracted, but that's not what you would normally do in this case.#2019-09-2313:13alexmillerI think there are two notions of time here - one is the transaction times which track when you know things#2019-09-2313:13alexmillerand another are attributes that are put into the database and that's what you'd probably report on#2019-09-2313:16alexmillerso you might record a transaction at time A that says sales were $100 on June 1 (via a schema attribute). And then you'd have a transaction at time B that says oops actually they were $120 on June 1. If you then run a quarterly report, you'd do it over the schema attributes, so you'd see the "updated" data#2019-09-2313:16alexmillerbut also you could run the same query asOf A and then compare to see the report before and after corrections#2019-09-2313:17alexmillerif you use a SQL database, that's impossible because the data is updated in place#2019-09-2313:18bartukayes! that was very clear.
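The time A / time B correction scenario described here could be sketched like this (a hypothetical schema with a unique :sales/date attribute; the peer API is assumed):

```clojure
;; Time A: record $100 of sales for June 1 (hypothetical attributes).
@(d/transact conn [{:sales/date #inst "2019-06-01" :sales/amount 100M}])
(def t-before-fix (d/basis-t (d/db conn)))

;; Time B: oops, it was actually $120. Asserting a new value for a
;; cardinality-one attribute implicitly retracts the old one.
;; The lookup ref works because :sales/date is assumed unique here.
@(d/transact conn [{:db/id [:sales/date #inst "2019-06-01"] :sales/amount 120M}])

;; A report over the current db sees the corrected figure ($120)...
(d/q '[:find ?amt . :where [?e :sales/amount ?amt]] (d/db conn))

;; ...while the same query as-of the earlier basis-t shows what was
;; believed before the correction ($100).
(d/q '[:find ?amt . :where [?e :sales/amount ?amt]]
     (d/as-of (d/db conn) t-before-fix))
```

The report itself runs over the domain attributes, so it picks up corrections automatically; as-of is only for asking "what did we believe back then?".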
So, at modeling phase I should be paying attention to what entities might need a "time" attribute other than the transaction time#2019-09-2313:26alexmilleranything that you'd need a date-time attribute for in another database model, you probably still need one#2019-09-2313:27alexmillerwhat you typically don't need are things like "created" and "lastUpdated"#2019-09-2313:27alexmillerwhere a relational database has extra attributes to record transaction attributes - those you get "for free" in Datomic#2019-09-2314:08jaihindhreddy@UBSREKQ5Q If you want something that does this in a more systemic manner, check out the Crux database by Juxt. It has bitemporality built right into it. Not sure how mature it is though...#2019-09-2410:01bartukathank you @U064X3EF3 you made it crystal clear to me now.#2019-09-2410:02bartuka@U883WCP5Z I'll take a look at this database as well, however I think this could be overkill right now for me rsrsrs thanks o/#2019-09-2312:58bartukawould it be better to have a mutable database for business people to interact with and increase the awareness of the developer team about such facts?#2019-09-2312:58bartukahow is the daily operation of a company using datomic as the main database?#2019-09-2312:59ghadi"accountants don't use erasers" -- accountants don't go to previous entries and modify them to fix them, they create reconciliation entries later#2019-09-2313:01ghadisame thing applies with datomic: if some value is incorrect, you can transact to assert new facts. It still doesn't change the fact that it was wrong in the past#2019-09-2321:58gwsHi all, we are trying to nail down some behavior of Datomic on-prem where we see a burst of writes to DynamoDB during what we believe to be a read-only operation.
It occurred to me that the read-only search query contains a very infrequently-invoked fulltext function call, and while we're not familiar with how this is implemented internally it seems possible that, given that the fulltext index is described as eventually consistent, a Lucene indexing job might be run on demand, causing the burst of writes. Can we rule that out as a possibility, or should we investigate further?#2019-09-2322:00marshall@gws datomic writes to storage don't necessarily correspond directly to transactions
Indexing jobs occur when the mem index reaches a threshold and will include significant write bursts#2019-09-2322:01gwsOK - looking at the memory index it is otherwise holding relatively steady around the threshold, we suspected that as well but don't see the correlation there#2019-09-2322:09gwsWe may have been misreading that CloudWatch chart, thanks for the insight, I think you're right :+1:#2019-09-2512:14tatutbest practices page states that you should annotate transactions with the what, who, when information… but it seems to me that that makes it less convenient to pull because the tx is a separate entity. Are there some good tricks for that or simply do 2 queries?#2019-09-2512:14tatutvs. adding modification timestamp and modifying user info as direct attributes of the entity#2019-09-2512:31favilaThe consideration here is same as in time-of-record vs domain-time considerations#2019-09-2512:32favilaImagine your tx is a git commit. Would the data be part of the commit message or part of the commit body itself?#2019-09-2512:33favilaIf the message, that’s Tx metadata; if body, that’s not tx metadata#2019-09-2512:36tatutfor example I have comments, where the when and who are metadata imo… but it would be convenient to pull it at the same time as the comment itself#2019-09-2512:36tatutin that case I know each comment is added in its own tx#2019-09-2513:01favilacomment author and time sound like data not metadata to me. What if you have to backdate a comment? or import them?#2019-09-2513:02tatutthat’s a good point, perhaps it is data#2019-09-2513:03tatutbetter to err on the side of data, it seems… it is more flexible precisely if we need to import#2019-09-2513:03favilathat doesn’t mean you still can’t record e.g. the system and user that transacted the comment#2019-09-2513:04favilabut the use case is more dev-time auditing and debugging#2019-09-2513:04favilacomment author = tx writer may not be a solid assumption for eg#2019-09-2513:05favilae.g. 
some systems have “impersonation” features (usually for support)#2019-09-2513:05tatutI’ll go with both, thanks#2019-09-2513:06rapskalianSo maybe a distinction could be made that is “domain metadata” versus “operational metadata”. Tx entity seems to be for the latter. #2019-09-2513:08tatutwould you model the domain metadata as a single attribute type that all entities have, or separate (like :comment/author and :file/author etc)#2019-09-2513:09tatutit seems a single one would allow answering questions like “give me all entities authored by this user”#2019-09-2513:22favilaa single one dovetails nicely with how spec likes to model namespaced attributes#2019-09-2513:24favilabut: make sure you never would need both of them in the same map (that’s a clue there really is some semantic difference between :comment/author and :file/author) and the range (possible legal values) is the same in all contexts#2019-09-2513:24favilaother downsides: it’s less index friendly, and it’s less intrinsically obvious what attributes are expected together on an entity “type” (spec can help here)#2019-09-2513:45tatutin rdbms you often have the same created_by, created_at etc fields in all main tables… as a newcomer to datomic I don’t have an intuition about what is best here#2019-09-2513:55favilacorrect, but 1) those are often really tx metadata in disguise 2) they’re in different tables (usually), so they are different fields. they’re equivalent to :TABLENAME/FIELDNAME#2019-09-2513:55favila(roughly)#2019-09-2513:56favilaif you would use “joined table polymorphism” (I think it’s called?) in sql, that would be like using one attribute#2019-09-2513:58favilai.e. you have one table with all “common” fields, and other tables use those fields by joining (in either direction)#2019-09-2514:13rapskalianGiven the (abbreviated) schema:
{:db/ident :customer/id
:db/unique :db.unique/identity}
and the entity
{:db/id 1
:customer/id "XYZ"}
What terms are given to the following:
- 1
- "XYZ"
- [:customer/id "XYZ"]
- 1 || [:customer/id "XYZ"] (i.e. things you can pass to pull)
The vocabulary in the wild seems to be somewhat inconsistent. I’ve seen words like:
- eid
- lookup
- ident
- entity
Maybe there are specs available?#2019-09-2514:21favilahttps://docs.datomic.com/on-prem/identity.html#2019-09-2514:21benoitI call them:
1: an entity id or eid
"XYZ": a customer id
[:customer/id "XYZ"]: a lookup ref#2019-09-2514:21favila“entity-identifier” = entity-id (eid) OR ident (keyword value of :db/ident attr) OR lookup-ref#2019-09-2514:22favilalookup ref is the [attr value] lookup#2019-09-2514:22favilathe attr itself can be any entity-identifier for an attr also#2019-09-2514:23favilae.g. :customer/id is eid 123: [123 "XYZ"]#2019-09-2514:24favilad/entid can coerce any entity identifier to an eid#2019-09-2514:24rapskalianThanks @U09R86PA4, this link I found appears to agree: https://docs.datomic.com/cloud/schema/schema-reference.html#orgee3fac1
Which would suggest:
1: eid
"XYZ": customer id
[:customer/id "XYZ"]: lookup-ref
:customer/id: ident
1 || [:customer/id "XYZ"]: identifier (entity identifier)#2019-09-2514:24favilacorrect#2019-09-2514:25rapskalianHowever the datomic API itself seems inconsistent. pull for example uses the arg name eid, even tho it really can take an identifier…#2019-09-2514:25favilaprobably just not careful naming#2019-09-2514:26favilaany “eid” argument in a public api will accept any entity identifier#2019-09-2514:26favilaif it helps, eid = Entity IDentifier#2019-09-2514:26favila(not sure that’s what they mean though)#2019-09-2514:27rapskalianMaybe eident would be more specific#2019-09-2514:28rapskalianDoes there happen to be a public spec available for these? Seems like that would be very useful for libraries/frameworks.#2019-09-2514:29rapskalianThis is perhaps the closest: https://github.com/edn-query-language/eql#2019-09-2514:30favilano afaik. I made one but it’s proprietary. (it’s also harder than it looks to get very narrowly defined types!)#2019-09-2514:30favilae.g distinguish between :t and :tx#2019-09-2514:30favilaor know that a long is a potentially invalid entity id#2019-09-2514:36rapskalianHm yeah…library authors lament 😅
Public, standalone specs would be amazing for the community imo. Without them I’m afraid the semantics will drift wildly from lib-to-lib.#2019-09-2514:36rapskalian(For example that eql spec uses “ident” where datomic uses “lookup-ref”)#2019-09-2514:37favilait’s also hard to compress these names without ambiguity#2019-09-2514:37favilaentity identifier -> eident isn’t bad, but it’s still more than “eid”#2019-09-2514:38favilamaybe “edent”#2019-09-2514:38favilaedent = eid | ident | eref#2019-09-2514:56rapskalianYeah edent is nice#2019-09-2514:57rapskalianI don’t have a datomic connection handy at the moment, but is [:db/id 1] a valid lookup-ref?#2019-09-2515:01favilano, :db/id is not an attribute#2019-09-2517:13λustin f(n)I want to start using Datomic, at first in parallel with my existing SQL database. Is it possible (and safe) to put Datomic on top of your existing SQL database to take advantage of existing backup infrastructure, etc?#2019-09-2517:18favilaYou need to use another schema/tablespace/whatever, but you can use the same server#2019-09-2517:14λustin f(n)After all, doesn't Datomic just use a single backing table in SQL? I would assume that it wouldn't break anything in other tables it didn't care about.#2019-09-2517:24genekimI’m getting lots of errors of “Insufficient memory to complete operation” when I run a case-insensitive string matching query against 110K entities on a Datomic Solo instance. I’m running the code on my laptop, running through the datomic proxy.
Am I doing something obviously wrong in the query? Or do I need to upgrade the compute instance type? (Ugh. Hoping that's not the case!!!)
THANK YOU in advance! Query is as follows: (annotated with Ghostwheel types, which I freaking love):
(>defn get-id-by-screen-name
  "case insensitive search"
  [nm] [string? => any?]
  (let [retval (d/q '[:find ?id
                      :in $ ?lowercasename
                      :where
                      [?e :user/id ?id]
                      [?e :user/screen-name ?screenname]
                      [(.toLowerCase ^String ?screenname) ?lowercasename]
                      [(= ?lowercasename ?screenname)]]
                    (d/db (get-conn))
                    (.toLowerCase nm))]
    (ffirst retval)))
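Editor's note: the clause [(.toLowerCase ^String ?screenname) ?lowercasename] binds to the already-bound :in variable ?lowercasename, so it acts as a comparison against the input rather than a fresh binding, and it also allocates a lowercased copy of every screen name. A corrected sketch (untested; same hypothetical :user/* schema and get-conn helper), filtering with a predicate clause instead:

(>defn get-id-by-screen-name
  "Case-insensitive search (sketch)."
  [nm] [string? => any?]
  (ffirst
   (d/q '[:find ?id
          :in $ ?name
          :where
          [?e :user/screen-name ?screenname]
          ;; predicate clause: filters without allocating new strings
          [(.equalsIgnoreCase ^String ?screenname ?name)]
          [?e :user/id ?id]]
        (d/db (get-conn))
        nm)))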
#2019-09-2517:27favilamaybe moving :user/id clause to the end would help#2019-09-2517:29favilaah, actually: [(.toLowerCase ^String ?screenname) ?lowercasename]#2019-09-2517:30favilathis is binding to your already-defined :in ?lowercasename#2019-09-2517:31genekimTHX for reply!!! ohh… I will try reordering the clauses — as soon as the Datomic instance accepts queries again! 🙂
Is there something I should do differently with ?lowercasename?#2019-09-2517:32favilaI think you want [(.toLowerCase ?screenname) ?lcsn] [(= ?lcsn ?lowercasename)]#2019-09-2517:32Joe Lane@genekim That is creating a TON of garbage in the JVM because you're creating 110K new strings. try using https://docs.oracle.com/javase/7/docs/api/java/lang/String.html#equalsIgnoreCase(java.lang.String)#2019-09-2517:33favilayour query as-is is actually looking for cases where the input ?lowercasename, ?screenname, and to-lower ?screenname are all equal#2019-09-2517:33favilaso, that’s wrong#2019-09-2517:34Joe LaneAlso you could use https://docs.oracle.com/javase/7/docs/api/java/lang/String.html#compareToIgnoreCase(java.lang.String)#2019-09-2517:34favila[(.toLowerCase ?screenname) ?lowercasename] should work without the = clause (but I feel like I’ve gotten burned by “clever” unifications like this before)#2019-09-2517:34favilabut also take @U0CJ19XAM’s advice about using better string methods here#2019-09-2517:35favilathat would allow you to filter instead of compare#2019-09-2517:36genekimAh! AWESOME! I’ll give that a try… I think I need to wait 15m while my Datomic instance recovers. @U0CJ19XAM
In the meantime, I will study that query, @U09R86PA4 — it does return correct answers, but most of the time, it runs out of memory.
I’ll let you know how it goes!!! THX AGAIN!#2019-09-2517:40Joe Lane@genekim You may need to bounce the box. I've run out of mem on those little guys before too and it's annoying 🙂#2019-09-2517:42genekimWow, that’s amazing! It worked!!! Thanks @U0CJ19XAM and @U09R86PA4!!!!!! You made my morning!!!#2019-09-2517:43ghadistoring the names pre-lowercased is a good idea, too#2019-09-2517:44ghadior both ways -- not sure the business req#2019-09-2517:45Joe LaneYeah, if you can afford the space, I completely agree with @U050ECB92. Then the datomic = can ( I believe ) leverage index navigation which is very fast!#2019-09-2519:51genekim@U050ECB92 Oh, that is super smart idea. I can definitely store :screenname-lowercase! Thx!!#2019-09-2517:35timgilbertSay, any chance of resolving this dependency conflict in the datomic peer library?
[com.datomic/datomic-pro "0.9.5951"] -> [com.google.guava/guava "18.0"]
overrides
[com.datomic/datomic-pro "0.9.5951"] -> [org.apache.activemq/artemis-core-client "1.5.6" :exclusions [org.jgroups/jgroups commons-logging]] -> [org.apache.activemq/artemis-commons "1.5.6"] -> [com.google.guava/guava "19.0"]
#2019-09-2517:42ghadi@genekim side note: pass the db as an argument -- don't reach out from the inside of the function#2019-09-2519:54genekimAh… I can see how that might make testing easier, and allow for better determinism… What are other benefits?
And what do you typically name the function that gets the db and wraps the actual query? (Looking for some examples and conventions.)
(I’ll go look in the musicbrainz example, too…) Thx!#2019-09-2601:59ghadiTaking the db as an argument allows you to correctly ask higher-level or larger questions, and have all parts of the process use the same basis for query / calculation#2019-09-2602:00ghadiIt also makes the function referentially transparent#2019-09-2602:01ghadiGive it the same input and you get the same output.#2019-09-2602:04ghadicolloquially people call defns that contain calls to d/q “queries”#2019-09-2517:42ghadi[db name]#2019-09-2519:55genekimSo happy to have solved the datomic memory issue (thank you, all!). I just noticed that the Datoms Cloudwatch graph is totally blank — is there an easy remedy for this? (Like rebooting something? 🙂#2019-09-2519:56genekim(The data is there if I zoom out timescale far enough…)#2019-09-2519:57marshall@genekim I believe Datoms is only reported when an indexing job occurs; are you actively transacting against the system?#2019-09-2520:01marshall@genekim actually, i don’t think that’s true ^; not sure why you’re not seeing any data in there#2019-09-2520:02genekim@marshall Yep, as we speak! 🙂#2019-09-2619:27adamfreycan I modify the :db/index schema attribute for an entity? This page mentions :db/unique but doesn't anything about :db/index https://docs.datomic.com/cloud/schema/schema-change.html#2019-09-2619:44marshallCloud doesnt have :db/index#2019-09-2619:44ghadithere isn't a :db/index in Datomic Cloud, afaik#2019-09-2619:44marshallhttps://docs.datomic.com/on-prem/schema.html#2019-09-2619:44marshallhttps://docs.datomic.com/on-prem/schema.html#altering-schema-attributes#2019-09-2619:44marshallyou can indeed modify it in on-prem, however#2019-09-2619:45adamfreyAhhh, I didn’t realize that. Does everything end up in the AVET? Index then?#2019-09-2619:45marshallin cloud yes#2019-09-2619:45adamfreyCool#2019-09-2711:53bartukais it a good idea to model a time series using a single entity with cardinality many for the attributes? 
I was thinking like a :time and :value attributes with cardinality many#2019-09-2712:02favilaHow will you correlate time with value? How will you have duplicate values?#2019-09-2712:04favilaYou really need at least a datom per same-time observations, but maybe you can combine time+values into a tuple if you are worried about efficiency#2019-09-2715:58kennyNot sure what the scale of your problem is. We attempted to use Datomic Cloud for time-series data and it did not work well at all. Queries, even after being meticulously optimized, would take far too long (10-15s). Our particular problem was storing data for dashboards which we would show to our customers. It was simply too much data for Datomic. Though, I did expect it to fair a little bit better than it did.#2019-09-2719:56rapskalianWhat did you end up going with @U083D6HK9? Just curious.#2019-09-2719:57kennyWe use InfluxDB.#2019-09-2719:59kennyInflux actually simplified our lives a lot. We used to have some Kafka stream processors that would create aggregated points based on the raw data and store that. With Influx, we just throw all our raw data at it and got rid of the stream aggregators. Influx lets you query for any aggregate size at runtime, which is seriously amazing. Queries typically take 0.1s.#2019-09-2823:50favilaAgreed, intense time-series stuff is not a good fit for datomic#2019-09-2822:55xiongtxThe latest version of com.datomic/client-pro is 0.9.37, which doesn’t support features like find specifications:
1. Unhandled clojure.lang.ExceptionInfo
Only find-rel elements are allowed in client find-spec, see
Are the client libs not kept up to date w/ the datomic libs or something?
https://mvnrepository.com/artifact/com.datomic/client-pro/0.9.37#2019-09-2823:47favilaFind destructuring was deliberately removed from client api#2019-09-2823:47favilaOr deliberately not implemented anyway#2019-09-2823:50favilaIt’s one of the differences on this page: https://docs.datomic.com/on-prem/clients-and-peers.html#peer-only#2019-09-2917:53xiongtxOh, interesting--thanks for pointing to the docs!#2019-09-2901:45sjharmsIs there a local / laptop dev story for datomic? Ie I want to scrape data, store it in datomic, and develop code to interact with it. It looks like my Datomic Pro starter key expired in 2016, and Datomic free won't store any data to disk#2019-09-3001:27ibarrickI'm having difficulty getting my use-case to work with datomic and could use some guidance. I'm implementing datomic as an audit log for our system. I've been successful in implementing most of the desired functionality but I am stuck at trying to query to find all transactions that touched an entity of a certain type. The following query should convey the gist of what I want:
[:find ?txn ?timestamp ?user ?description
 :in $
 :where
 [?txn :db/txInstant ?timestamp]
 [?txn :data/user ?user]
 [?txn :data/description ?description]
 [?e _ _ ?txn]
 [?e :model/type :workOrder]]
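Editor's note: as written, [?txn :db/txInstant ?timestamp] enumerates every transaction before anything narrows the search. A reordered sketch (untested) that starts from the most selective clause and also passes a history db as a second source, so transactions that only retracted datoms on :workOrder entities are found too:

(d/q '[:find ?txn ?timestamp ?user ?description
       :in $ $hist
       :where
       [$ ?e :model/type :workOrder]
       [$hist ?e _ _ ?txn]
       [$ ?txn :db/txInstant ?timestamp]
       [$ ?txn :data/user ?user]
       [$ ?txn :data/description ?description]]
     db (d/history db))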
#2019-09-3006:03dmarjenburghOne thing I would certainly try putting the most selective clause first (https://docs.datomic.com/cloud/best.html#most-selective-clauses-first). That would speed up the query. You can also combine the last two clauses into [?e :model/type :workOrder ?txn]#2019-09-3006:04dmarjenburghActually, you don’t need the ?e variable. So [_ :model/type :workOrder ?txn] would do.#2019-09-3012:26ibarrickWould this only give me transactions that asserted that the model/type of some entity was :workOrder? I'm looking for all records of that type that had any property changed by a transaction.#2019-09-3013:26favilaI think you want the history database?#2019-09-3013:26favilaif by “touched” you included “retracted”#2019-09-3013:28favilaif $ is a history database, and :data/user and :data/description are always present and cardinality-one, your query should work but may be slow because of clause ordering#2019-09-3013:30favilaThis query is a bit more defensive because it tolerates nils: (d/q '[:find (pull $db ?tx [:db/id :db/txInstant :data/user :data/description])
       :in $ $db
       :where
       [?e :model/type :workOrder]
       [?e _ _ ?tx]]
(d/history db) db)#2019-09-3013:30favilabut it should be roughly the same as yours#2019-09-3001:29ibarrickThis particular query doesn't work because it has "insufficient bindings" but I've gone through many iterations of this overall idea and I either get the bindings error or I get memory/timeout issues because the query is too taxing. the Log API looks like it would help but it doesn't look like I can use that very effectively with the client API, is this really the case?#2019-09-3013:31favilaYour OOMs are because you are selecting all TX datoms in memory. Put the most selective clause first#2019-09-3018:43BrianI'm using Datmoic Cloud and I just pushed an ion and it's made a lambda that I can see and test in AWS Lambda console. However, when I go to API Gateway to connect an API to that lambda, I go to this page https://gyazo.com/e8d7167b2f7b3779258e0b3f51a7681b and my lambda doesn't come up in the search bar.
This sounds like an AWS issue, however when I create a new Lambda from scratch without using Datomic, API Gateway can see that one just fine.
Is there something I need to configure in the Datomic Cloud Lambda that will make it show up in API Gateway?#2019-09-3018:56Joe Lane@brian.rogers Go look up the full name of the lambda in the lambda console and copy paste it into that field. That drop down sometimes doesn't auto populate the lambda but it's there nonetheless. I believe the lambda name is prefixed with the system name, like my-system-demo.#2019-09-3019:37BrianPfffffffffffft wow I didn't try that you're right the auto complete was broken#2019-09-3019:37BrianThank you @lanejo01!#2019-09-3019:45Joe LaneIt's been broken for over a year, I think it was actually documented broken in the datomic ion tutorial. Glad you got it figured out!#2019-09-3021:02Msr TimHi i am trying to insert some test seed data
{:employee/id 1
 :employee/first-name "Level1"
 :employee/last-name "Approver"
:employee/email "#2019-09-3021:03Msr Timit rightly complains about#2019-09-3021:03Msr TimUnable to resolve entity: [:employee/id 1] in datom [-9223301668109597978
:employee/approver [:employee/id 1]]
#2019-09-3021:03Msr Timis there a way to insert that in one tx, instead of two?#2019-09-3021:04marshallyou need to use an explicit :db/id for any entities you want to reference from “elsewhere” in the same transaction#2019-09-3021:04marshallone sec i’ll get you the docs#2019-09-3021:04Msr Timthank you. I've been looking everywhere but didn't quite figure out the appropriate google keywords.#2019-09-3021:04marshallhttps://docs.datomic.com/cloud/transactions/transaction-processing.html#tempid-resolution#2019-09-3021:05marshall^ in datomic cloud docs. if you are using on-prem it’s the same and i can find you that link if you prefer#2019-09-3021:05marshallbasically you want:
{:db/id "1"
 :employee/id 1
 :employee/first-name "Level1"
 :employee/last-name "Approver"
:employee/email "#2019-09-3021:06Msr Timi am using cloud#2019-09-3021:06marshallwhere the db/id is an arbitrary string - as long as it’s the same string in both places#2019-09-3021:06Msr Timoh perfect so I have to use entity id#2019-09-3021:06Msr Timgotcha#2019-09-3021:06Msr Timthank you!#2019-09-3021:08Msr Timthat worked. Thank you so much for quick response.#2019-09-3021:08marshallnp#2019-09-3023:24bartukathere are some ways to define a composed unique attr? For example, I want the combination of :my/name and :my/surname to compose an unique constrain#2019-10-0100:35favilaAre name surname “foo” “bar baz” and “foo bar” “baz” different?#2019-10-0100:35favilaIf not, you must string concat and compute the attr yourself. Otherwise you can use tuples #2019-10-0102:41bartukathey are different#2019-10-0102:41bartukaI think tuples would be a better option too. thanks @U09R86PA4#2019-10-0113:32weiwhat happened to :db.type/bytes? is there a similar type in Datomic Cloud for storing a map, or other data that doesn't fit into the other valueTypes?#2019-10-0114:10Joe Lane@wei If it's big, use a reference in s3, if not, you can Base64 encode/decode either the pr-str of a datastructure or even the binary representation. Example would be a fressian encoded vector could then be Base64 encoded into a string. The tricky part is knowing if somehow you would overflow the string size limit of ~4000 characters. Hence the first recommendation of using s3.#2019-10-0114:11Joe LaneI'm not speaking authoritatively about what happened to :db.type/bytes, just sharing some ideas i've had around working around their absence.#2019-10-0115:50jarethttps://twitter.com/datomic_team/status/1179060874310496257?s=20#2019-10-0115:50jaretNew release for on-prem and cloud ^#2019-10-0115:51stuarthallowayfor the impatient https://www.youtube.com/watch?v=jVKGYu0_OCA#2019-10-0204:17weicongratulations on shipping! this was my #1 feature request for a while and I'm thrilled that you guys have pulled it off. 
would be fun to hear about the challenges you encountered implementing this.#2019-10-0116:04jeroenvandijkNice 🙂#2019-10-0116:05jeroenvandijkIs it supposed to work with AWS Athena as well? I didn't see a mention in the docs, but that should be similar to Presto#2019-10-0116:10richhickeythere isn’t a path from Athena to ROYB presto, and QuickSight requires LDAP, so neither supported as of yet#2019-10-0116:13jeroenvandijkMakes sense. Thanks for explaining#2019-10-0117:27sjharmsThe new release is exciting, hoping to get some help with a chicken / egg problem. I would like to experiment with Datomic again (originally downloaded starter edition years ago), but it looks like my copy is limited to Datomic as of 2016. It seems like if I want to make software on my workstation and then make a pitch as to why we should buy it (or use AWS hosted etc once proven), I am stuck either getting full buy-in / budget up front, or trying to develop on my systems and never reboot so I can save my data in-between sessions via Datomic Free. Could someone help me understand if this is accurate, or if I am just misunderstanding the workflow?#2019-10-0117:35stuarthallowayDatomic Free is durable and survives reboots...#2019-10-0117:36stuarthallowayOr running Cloud (Solo) is ~$1/day (if you leave it on all the time)#2019-10-0117:38sjharmsAh ok, thank you! I didn't realize Datomic Free wasn't in-memory only#2019-10-0211:46jeroenvandijkI'm a happy user of Datomic Pro, but I wonder if Datomic Free would run nicely in a distributed fashion with https://aws.amazon.com/efs/#2019-10-0213:50souenzzo@U072WS7PE datomic-free still in 0.9.5697
We can't use/learn/develop FOSS tools around ensure, :keys, or any of these new Datomic features (such as the SQL connector)
https://mvnrepository.com/artifact/com.datomic/datomic-free#2019-10-0121:44Msr Timhi, is this documentation still applicable for hosted datomic#2019-10-0121:44Msr Timhttps://docs.datomic.com/on-prem/identity.html#2019-10-0121:44Msr TimYou can request new entity ids by specifying a temporary id (tempid) in transaction data. The Peer.tempid method creates a new tempid, and the Peer.resolveTempid method can be used to interrogate a transaction return value for the actual id assigned.
#2019-10-0121:45Msr Timwhen would someone request new entityids#2019-10-0121:49Msr Timhttps://forum.datomic.com/t/why-no-d-squuid-in-datomic-client-api/446#2019-10-0121:49Msr TimSquuids are no longer required in Datomic (Cloud or On-Prem) now that Datomic has Adaptive Indexing.
#2019-10-0121:50Msr Timofficial docs seems to say#2019-10-0121:50Msr TimIt is often important to have a globally unique identifier for an entity. Where such identifiers do not already exist in the domain, you can use a unique identity attribute with a value type of :db.type/uuid.
#2019-10-0121:50Msr Timi don't have globally unique id for my entities#2019-10-0121:51Msr Timwhat should i do?#2019-10-0121:53nwjsmithif you don’t have a “natural key” for your entities, generate a random UUID#2019-10-0122:00Msr Timthank you @nwjsmith#2019-10-0122:05ghadihttps://docs.datomic.com/cloud/transactions/transaction-processing.html
@meowlicious99 I know you're doing on-prem and not cloud, but that ^ is a useful reference for how transactions work#2019-10-0122:07Msr Timoh I am actually on cloud#2019-10-0122:07Msr Timgoogle fu for on prem docs seems to be high so I always end up on on prem docs.#2019-10-0122:08Msr Timmaybe they should have different color or something 🙂#2019-10-0123:41marshallOn prem docs have a colored bar at the top#2019-10-0123:41marshall;)#2019-10-0208:03dmarjenburghThat’s how I always recognize it ^^#2019-10-0210:03henrikRename one of them to Cimotad#2019-10-0210:05henrikOr Dadomic. You’ll be able to tell by the docs being 90% puns.#2019-10-0215:08Msr Timoh i just noticed that. cool. thank you!#2019-10-0211:31tatutis there any way to permanently delete stuff in cloud? other than deleting whole db and recreating it without the offending datoms… if this has been discussed, I’d appreciate pointers to relevant discussion, thanks.#2019-10-0211:31tatutmostly for the rare(ish?) case of GDPR requests to remove some user’s information#2019-10-0214:04timgilbertThe feature you're looking for is called "excision" in on-prem: https://docs.datomic.com/on-prem/excision.html. No idea what the story is for Cloud though.#2019-10-0214:23tatutyes, I know about excision and that it is not available in cloud#2019-10-0305:59viestithinking that what kind of process is excision in on-prem, is it a internal rewrite of the database?#2019-10-0306:50tatutas :db/txInstant can be set (but must be increasing) I think it would be possible to copy a database to a new one while excising some entities… perhaps not feasible for huge databases#2019-10-0306:50tatutbut a once a month GDPR maintenance break (if needed) where we recreate the whole db#2019-10-0319:27Msr TimMy boss wants to know this too before we use this database at work.#2019-10-0321:29dmarjenburghCurrently not possible in cloud AFAIK. 
Our solution is to store only a unique id for each user entity in datomic (you can add other non personally identifiable information as well) and store the other userdata in dynamodb under that key.#2019-10-0404:59tatutthat’s our current approach just having a uuid in datomic for a user and storing the actual data elsewhere#2019-10-0413:22mloughlinan approach I've heard about (but not used) is "crypto shredding" - encrypt the data in the immutable DB, and keep the key in a mutable DB. Delete the key when GDPR request rolls in. (I AM NOT A LAWYER 😉 )#2019-10-0211:52magnars#2019-10-0211:52magnars(this is Datomic On-Prem)#2019-10-0212:00magnarsNever mind, I am no longer convinced this is a good idea.#2019-10-0214:23curtosisperhaps an obvious question (haven’t worked with reserved instances before), but is there anything special I’d need to do to use reserved instances for Datomic Cloud CF templates?#2019-10-0214:23curtosisdoing some preliminary price workups#2019-10-0217:00Msr Timme too. Not sure if you can share your findings here. thank you.#2019-10-0219:00curtosisIt’s pretty straightforward. My numbers show $1/day is an upper bound for Solo, at least in us-east. You can do significantly better with reserved instance pricing.#2019-10-0219:01curtosisabout double if you want to use the new analytics gateway function… a non-nano jumps the price up.#2019-10-0219:02curtosisBasic Production config is dominated by the primary i3.larges.#2019-10-0219:03curtosisroughly $4-5k/year depending on whether you want analytics and/or query groups.#2019-10-0219:04curtosisit’s all public info — unfortunately the AWS Marketplace pricer is mostly useless.#2019-10-0216:32hadilsHi dumb question: I am trying to upgrade my Datomic Cloud compute stack to 512-8806. I have RTFM. It is not working. 
Anyone have issues, or can help me?
AMI ami-05c81c69e00244cc9 is invalid: The image id '[ami-05c81c69e00244cc9]' does not exist (Service: AmazonAutoScaling; Status Code: 400; Error Code: ValidationError; Request ID: 6a302920-e531-11e9-9e8a-693b91fa55e0)
#2019-10-0216:37marshall@hadilsabbagh18 what region?#2019-10-0216:37hadilsus-west-2 -- Oregon#2019-10-0216:37marshallok give me one sec#2019-10-0216:37hadilsThanks @marshall!#2019-10-0216:38marshallit appears that AWS Marketplace didn’t create that AMI for that region correctly.
I will report as an issue to them immediately.
Sorry for the inconvenience - I’ll follow up when I hear back from them#2019-10-0216:39hadilsThanks again @marshall! I really appreciate it!#2019-10-0216:45richhickey@curtosis there should be nothing special - credits for reservations you’ve made should be automatically applied to instance hours you consume#2019-10-0217:08souenzzoCan't find where to download CLI Tools#2019-10-0217:14souenzzoDocs can provide a simple working example as https://github.com/Datomic/ion-starter ?#2019-10-0217:16marshall@souenzzo https://docs.datomic.com/cloud/releases.html#current
fixed the link in the release table#2019-10-0217:16marshallthe zip is now downloaded from that link#2019-10-0218:41hadils@marshall Is there any update? I am stuck right now...#2019-10-0218:45marshallI haven't heard back from the marketplace team. You should be able to rollback to the prior version compute template#2019-10-0218:46hadilswhat about the storage template?#2019-10-0218:46marshallYou can leave it#2019-10-0218:46hadilsok ty#2019-10-0218:46marshallThe update there wont hurt anything#2019-10-0218:47hadilsThanks marshall. I appreciate your help.#2019-10-0218:47marshallNo problem #2019-10-0220:26benoitTrying out the new analytics support. It works great!
Should I blame metabase for this strange formatting of the cardinality many attribute ":asset-model/curated-content"?#2019-10-0302:39plexusYes, metabase has some heuristics to prettify names, based on a list of english words and their relative frequencies. You can turn it off in the settings somewhere, I've often seen it create weird results.#2019-10-0312:01benoitGreat, thanks!#2019-10-0220:44benoitI'm getting this error when trying to count rows:
clojure.lang.ExceptionInfo: [?start ?e] not bound in expression clause: [(>= ?e ?start)] {:message "[?start ?e] not bound in expression clause: [(>= ?e ?start)]", :errorCode 65536, :errorName "GENERIC_INTERNAL_ERROR", :errorType "INTERNAL_ERROR", :failureInfo {:type "clojure.lang.ExceptionInfo", :message "[?start ?e] not bound in expression clause: [(>= ?e ?start)]", :suppressed [], :stack ["datomic.client.api.async$ares.invokeStatic(async.clj:58)" "datomic.client.api.async$ares.invoke(async.clj:54)" "datomic.client.api.sync$unchunk.invokeStatic(sync.clj:47)" "datomic.client.api.sync$unchunk.invoke(sync.clj:45)" "datomic.client.api.sync$eval11267$fn__11288.invoke(sync.clj:101)" "datomic.client.api.impl$fn__2619$G__2614__2626.invoke(impl.clj:33)" "datomic.client.api$q.invokeStatic(api.clj:351)" "datomic.client.api$q.invoke(api.clj:322)" "datomic.presto$split_count.invokeStatic(presto.clj:99)" "datomic.presto$split_count.invoke(presto.clj:86)" "datomic.presto$create_connector$reify$reify__2395.getRecordSet(presto.clj:247)" "io.prestosql.spi.connector.ConnectorRecordSetProvider.getRecordSet(ConnectorRecordSetProvider.java:27)" "io.prestosql.split.RecordPageSourceProvider.createPageSource(RecordPageSourceProvider.java:43)" "io.prestosql.split.PageSourceManager.createPageSource(PageSourceManager.java:56)" "io.prestosql.operator.TableScanOperator.getOutput(TableScanOperator.java:277)" "io.prestosql.operator.Driver.processInternal(Driver.java:379)" "io.prestosql.operator.Driver.lambda$processFor$8(Driver.java:283)" "io.prestosql.operator.Driver.tryWithLock(Driver.java:675)" "io.prestosql.operator.Driver.processFor(Driver.java:276)" "io.prestosql.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1075)" "io.prestosql.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:163)" "io.prestosql.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:484)" "io.prestosql.$gen.Presto_316____20191002_195730_1.run(Unknown 
Source)" "java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)" "java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)" "java.lang.Thread.run(Thread.java:748)"]}}
#2019-10-0313:45marshallcan you share the sql query you’re issuing and your schema and metaschema?#2019-10-0314:43benoitQuery from the stack trace seems to be:
{:query "SELECT count(*) AS \"count\" FROM \"centriq\".\"asset_model\"", :params nil},#2019-10-0314:43benoitMetaschema is basic:
{:tables
 {:user/id {}
  :asset-model/id {}
  :asset-tag/id {}
  :property/id {}}}
#2019-10-0314:44benoit#2019-10-0315:21marshallwhat is the datomic schema#2019-10-0316:00benoitSent in a direct message. Thanks.#2019-10-0308:17dmarjenburghI’m trying to get items that have :item/id and :item/uploadedAt attributes (among others) and want to get the latest top 100 results. I get all ids and timestamps with a query first, sort the list and take 100. Then I want to do a second query to pull the required attributes for those entity ids. First I did:
(mapv (fn [id] (d/pull db pattern id)) ids)
To my surprise, this was actually very slow. So I replaced it with:
(d/q {:query '[:find (pull ?ids pattern)
               :in $ [?ids ...] pattern]
      :args [db ids pull-pattern]})
which is much faster.
The question is, is this (still) the best way to do this? And why is the first method much slower?#2019-10-0312:44favilaIs this client or peer api? (Looks like client?)#2019-10-0312:46favilaYour mapv is 100 blocking pull requests with work done serially. Your query is one request with work done potentially in parallel#2019-10-0312:47favilaInclude a sorting index with your query input#2019-10-0312:48favila(into [] (map-indexed vector) sorted-eids)#2019-10-0312:50favilaThen in query you can :find (pull ?e [*]) ?i in $ [[?i ?e]]#2019-10-0312:51favilaThen (->> result (sort-by peek) (mapv first)) the result to get back in order#2019-10-0313:19dmarjenburghYes it’s client (cloud). Thanks, so it’s still two sorts regardless.#2019-10-0308:26dmarjenburghOfcourse, I lost the sort-ordering again in the second case…#2019-10-0314:18marshall@hadilsabbagh18 AWS has fixed the permissions on the AMI - you should be able to launch the latest now#2019-10-0314:21hadilsThanks @marshall#2019-10-0314:36hadils@marshall still doesn't work with the same error...#2019-10-0314:43marshall@hadilsabbagh18 us-west-2 right?#2019-10-0314:45hadilsYes. Correct.#2019-10-0314:46marshalli see what’s going on#2019-10-0314:46marshallone sec#2019-10-0314:55tylerIs there a way to install custom dependencies on the datomic ions EC2 instances? We are starting to use AWS xray for tracing but this requires a daemon process to be running for EC2 and I can’t see a straightforward way of installing this without hacking the cloudformation yaml for ions (which is not recommended per the docs).#2019-10-0317:25stuarthallowayIf you can control the daemon from a Java lib then you can add that Java lib to your ion, but atm there is no supported path for deps other than Java libs.#2019-10-0322:10tylerUnfortunately it doesn’t look like there is one. 
Might have to move the logic to lambda if there’s no pathway there.#2019-10-0413:41stuarthallowayWe are looking into installing xray on the nodes.#2019-10-0315:00hadils@marshall it seems to be working now! Thanks, Marshall!#2019-10-0315:00marshall@hadilsabbagh18 yep - should be good now#2019-10-0316:46Msr Tim(d/q {:query '[:find (pull ?e pattern)
               :in $ ?name pattern
               :where [?e :artist/name ?name]]
      :args [db "The Beatles" [:artist/startYear :artist/endYear]]})
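Editor's sketch of favila's ordering recipe from earlier in the thread (pair each eid with its index, feed the pairs to the query, then sort the result by the index). Here db, sorted-eids, and the [*] pull pattern are assumptions, not code from the thread:

```clojure
;; Sketch, untested: query results are unordered sets, so carry an index
;; through the query and sort on it afterwards.
(let [indexed (into [] (map-indexed vector) sorted-eids) ; [[0 e0] [1 e1] ...]
      result  (d/q {:query '[:find (pull ?e [*]) ?i
                             :in $ [[?i ?e]]]
                    :args [db indexed]})]
  ;; each tuple is [pulled-map index]; restore order, then drop the index
  (->> result (sort-by peek) (mapv first)))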
#2019-10-0316:46Msr Timis there a reason I can't pass history db to that query#2019-10-0316:52favilareason is “Can’t pull from history” (in the exception)#2019-10-0316:52favilapulling from a history database doesn’t make sense#2019-10-0316:53favila(Unrelated, maybe pseudocode in your example, but pull patterns must be literal)#2019-10-0316:56Msr Timthank you sir. that makes sense.#2019-10-0316:47Msr TimUnhandled clojure.lang.ExceptionInfo
Can't pull from history
{:datomic.client-spi/request-id "c4080ae6-32f4-414b-8359-6a4c0128b2ef",
:cognitect.anomalies/category :cognitect.anomalies/conflict,
:cognitect.anomalies/message "Can't pull from history",
:dbs
[{:database-id "956cc673-373e-4c39-8504-86f51b4cb11f",
:t 10,
:next-t 11,
:history false}]}
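A hedged sketch of the workaround implied above: datom-level clauses work fine against a history db; only pull does not. So query the history value for entity ids, then pull from an ordinary point-in-time db value (attribute names reuse the mbrainz example; untested):

```clojure
;; Sketch: find entities via the history db, pull details from `db` (not hdb).
(let [hdb  (d/history db)
      rows (d/q {:query '[:find ?e ?added
                          :in $ ?name
                          :where [?e :artist/name ?name _ ?added]]
                 :args [hdb "The Beatles"]})]
  (mapv (fn [[e _added]]
          (d/pull db {:eid e
                      :selector [:artist/startYear :artist/endYear]}))
        rows))
```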
#2019-10-0420:15souenzzo(let [schema [{:db/ident :foo/checked?
               :db/valueType :db.type/boolean
               :db/cardinality :db.cardinality/one}
              {:db/ident :foo/id
               :db/valueType :db.type/string
               :db/unique :db.unique/identity
               :db/cardinality :db.cardinality/one}]
      {:keys [db-after]} (d/with (d/with-db @user/conn) {:tx-data schema})
      {:keys [db-after]} (d/with db-after {:tx-data [{:foo/id "ok"
                                                      :foo/checked? true}
                                                     {:foo/id "not-ok"}]})]
  {:ok (d/pull db-after {:eid [:foo/id "ok"]
                         :selector '[:db/id
                                     (:foo/checked? :default false)]})
   :not-ok (d/pull db-after {:eid [:foo/id "not-ok"]
                             :selector '[:db/id
                                         (:foo/checked? :default false)]})
   :not-ok-42 (d/pull db-after {:eid [:foo/id "not-ok"]
                                :selector '[:db/id
                                            (:foo/checked? :default 42)]})})
=>
{:ok {:db/id 60332402039328370, :foo/checked? true},
 :not-ok {:db/id 65148262968987251, :foo/id "not-ok"},
 :not-ok-42 {:db/id 65148262968987251, :foo/checked? 42}}
There is a bug in datomic/pull default-option
https://docs.datomic.com/cloud/query/query-pull.html#default-option
Should I open a "formal" ticket in or this report is enough?#2019-10-0420:21ghadiwhat was the expectation with :not-ok-42 @souenzzo?#2019-10-0420:22souenzzoIt's just to confirm that the issue is with false value: once it work with the value 42, everything else inside the test scenario is ok. @ghadi#2019-10-0420:22ghadii think i understand#2019-10-0420:27souenzzoFor me it clarifies that the issue is related to :default false and *not* related to :default with boolean atribute or even you are using a old version of datomic that not support default#2019-10-0420:22ghadiit's actually a correct test case#2019-10-0420:23ghadibut illustrative#2019-10-0421:36jaret@souenzzo @ghadi that looks like a bug to me. I’ve made a case in support and I’ll track it down this weekend.#2019-10-0421:36jaretThanks!#2019-10-0422:46johnjIs it normal for the client to take longer to start up than the peer for a very small DB ?#2019-10-0605:03jumarLooking at https://aws.amazon.com/marketplace/pp/Cognitect-Inc-Datomic-Cloud/prodview-otb76awcrb7aa it seems that in the Fullfillment Options section each option is duplicated. I'm wondering why is that 🙂.#2019-10-0712:27jaretWe’re working with AWS to correct this, but with the recent release they duplicated the listing. One option is the most recent release and the other is all previous. We’re working with them to correct this so they roll up.#2019-10-0613:23rapskalianHas anyone been making use of the newer entity predicates and attribute constraints? I’m doing some data modeling and am curious how others have been using these features, and also how they might integrate with spec. #2019-10-0617:31keymoneis this call supposed to work: (d/transact conn {:tx-data [{:db/doc "hello world"}]})? 
docs suggest that second arg should be a list of lists#2019-10-0617:38keymoneok, apparently datomic.api is not datomic.client.api#2019-10-0617:40keymoneis there a document that explains when to use one over the other?#2019-10-0619:44marshallhttps://docs.datomic.com/on-prem/clients-and-peers.html#overview#2019-10-0619:29keymonedid anybody get ClassNotFoundException about org.eclipse.jetty.util.thread.ThreadPoolBudget? just trying to connect to peer server#2019-10-0619:33keymoneugh.. these docs are quite outdated https://docs.datomic.com/on-prem/project-setup.html#2019-10-0623:52matthewdanielI seem to be having this problem but can’t figure out a solution. Any ideas? https://forum.datomic.com/t/issue-retrieving-com-datomic-ion-dependency-from-datomic-cloud-maven-repo/508/5#2019-10-0712:39jaretHi. Have you tried setting your creds by sourcing a file with the following?:
aws_access_key_id=<access-key-id>
aws_secret_access_key=<secret-access-key>
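Editor's note: the snippet above uses the credentials-file key names; if exporting them as environment variables instead, the AWS tooling reads the uppercase AWS_* names. A sketch, with an assumed file name creds.env (keep the placeholder values quoted so the shell doesn't treat the angle brackets as redirections):

```shell
# Sketch only: write the credentials to a file and source it in the shell
# used for push/deploy. "creds.env" is an assumed name; replace the <...>
# placeholders with real values.
cat > creds.env <<'EOF'
export AWS_ACCESS_KEY_ID='<access-key-id>'
export AWS_SECRET_ACCESS_KEY='<secret-access-key>'
EOF
. ./creds.env   # now both variables are set in this shell
```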
#2019-10-0712:39jaretYou can also export them one by one if putting that in a file isn’t your jam.#2019-10-0713:09matthewdanieli have the above creds suggested in my ~/.aws/credentials.#2019-10-0713:11jaretCould you try to explicitly export them in the terminal you’re using to push as a test to confirm that the env vars are in that terminal#2019-10-0713:17matthewdanieli’ve done aws configure --profile myprofile is that what you mean?#2019-10-0713:25matthewdanieli tried export aws_access_key_id=abc123 with no change#2019-10-0814:42jaret@U5136PEE6 The credentials you source need the ability to be able to read S3. Can you confirm your credentials being sourced have s3 permissions?#2019-10-0815:37matthewdanielassuming it is using the credentials i’m expecting, this is its permissions#2019-10-0817:42matthewdanieli’ve tried specifying a server in m2 settings but that doesn’t help either unfortunately#2019-10-0818:14matthewdanielfurthermore i’ve tried creating an s3 bucket with the datomic pom it is having difficulty getting to in my own s3 in the same path and it still is unable read artifact#2019-10-0818:20matthewdanielwell, even making my bucket public doesn’t solve the issue. I’m guessing it is something other than aws permissions. I seem to be on clojure 1.10.0#2019-10-0818:33matthewdanielwell, i got it working by manually creating directories in .m2/repositories and using aws cli to manually download the files. :man-shrugging: if this ever gets close to prod i’ll revisit this#2019-10-0623:55bartukaI have a field with cardinality set to many. Having the ID of the parent entity, Can I filter to get only the child entities that pass some rule? For example:
[;; person schema
 {:db/ident :person/cars
  :db/valueType :db.type/ref
  :db/cardinality :db.cardinality/many}
 {:db/ident :person/name
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one}
 ;; cars schema
 {:db/ident :cars/model
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one}]
Imagine the data inside the database:
[{:db/id "A"
  :person/name "Bart"}
 {:cars/model "Model1"
  :person/_cars "A"}
 {:cars/model "Model1"
  :person/_cars "A"}
 {:cars/model "Model2"
  :person/_cars "A"}]
I have only the name "Bart" and "Model2" at hand. I want to get back
only the fields {:person/name "Bart" :person/cars {:cars/model "Model2"}}#2019-10-0701:30Keith HarperYou could use pull with [:person/name {:person/cars [:cars/model]}]#2019-10-0701:38Keith HarperAh, after starting to write the query, I realised what your problem was and my previous answer doesn't solve it. That said, you could simply write a rectangular query and transform the values after the fact.#2019-10-0702:06bartukaSorry, I am new to datomic. Could you explain more about your last suggestion?#2019-10-0710:26Keith HarperSure, you would want to write a query that finds all entities with a :cars/model attribute with a value of "Model2", then use that to find the parent entity with a
• :person/cars attribute with a value that matches the first entity you found
• :person/name attribute with a value of "Bart"#2019-10-0710:29Keith HarperI imagine there's more data that you would want to retrieve besides {:person/name "Bart" :person/cars {:cars/model "Model2"}} since you stated that you have both "Bart" and "Model2" at hand. Here's an example query:
(d/q '[:find ?name ?model
       :in $ ?name ?model
       :where
       [?car :cars/model ?model]
       [?person :person/cars ?car]
       [?person :person/name ?name]]
     db
     "Bart"
     "Model2")
#2019-10-0710:33Keith HarperYou would take the result of that query, and transform it into the shape that you need it to be in, so if you wanted to get back something like {:person/name "Bart" :person/cars {:cars/model "Model2"}}
(let [query-result [["Bart" "Model2"]]]
  (first (map (fn [[name model]]
                {:person/name name :person/cars {:cars/model model}})
              query-result)))
=> #:person{:name "Bart", :cars #:cars{:model "Model2"}}
#2019-10-0710:35Keith HarperOr using :keys,
(first (map (fn [{:keys [person/name cars/model]}]
              {:person/name name :person/cars {:cars/model model}})
            (d/q '[:find ?name ?model
                   :keys person/name cars/model
                   :in $ ?name ?model
                   :where
                   [?car :cars/model ?model]
                   [?person :person/cars ?car]
                   [?person :person/name ?name]]
                 db
                 "Bart"
                 "Model2")))
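Another option, not from the thread: pull directly in the :find clause so the nested shape comes back without post-query reshaping. A sketch against the same assumed schema:

```clojure
;; Sketch: pulling inside :find returns nested maps directly, e.g.
;; tuples of [{:person/name ...} {:cars/model ...}].
(d/q '[:find (pull ?person [:person/name]) (pull ?car [:cars/model])
       :in $ ?name ?model
       :where
       [?car :cars/model ?model]
       [?person :person/cars ?car]
       [?person :person/name ?name]]
     db
     "Bart"
     "Model2")
```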
#2019-10-0712:22bartukagreat @U424XHTGT thank you for the detailed examples#2019-10-0714:32sooheond/q takes :offset and :limit. Is it understood that the ordering of results for the same query args and db value is stable?#2019-10-0714:43favilathe ordering of the results (regardless of query or inputs) is likely the same because of clojure’s hashing#2019-10-0714:44favilathe result is either a hash-set or a bag derived from a hash-set (when using :with), so the key order is going to be the same but arbitrary#2019-10-0714:45favilaI’m being pedantic here because a query could potentially produce non-deterministic results on repeated runs: order would not be the same. What matters is whether the result sets are equal in value across subsequent runs so that they hash the same so that the key order is the same#2019-10-0714:45BrianWhat is the easiest way to delete a Datomic Cloud Solo topology running in AWS? I see the instance, the S3 buckets, some lambdas. I could delete them all manually but I might miss something#2019-10-0714:46marshallhttps://docs.datomic.com/cloud/operation/deleting.html#2019-10-0714:46marshall@brian.rogers ^#2019-10-0716:14Msr Timhi I created a solo topology . I am trying to create some ions by following the setup here https://docs.datomic.com/cloud/ions/ions-tutorial.html#create-link#2019-10-0716:14Msr TimIn the Target NLB dialog, choose the NLB for your Datomic Cloud system.
The NLB will selectable from a dropdown.
#2019-10-0716:14Msr Timi don't see anything in the dropdown and there are no NLB's in my aws console#2019-10-0716:38marshall@meowlicious99 HTTP Direct requires a production topology system#2019-10-0716:38marshallyou can still use lambda ions#2019-10-0716:38marshallnote: https://docs.datomic.com/cloud/ions/ions-tutorial.html#http-direct#2019-10-0716:38marshallthat’s for http direct#2019-10-0716:39marshallin solo you can do: https://docs.datomic.com/cloud/ions/ions-tutorial.html#webapp
use an ion with a lambda proxy for a web service#2019-10-0716:44Msr Timah ok. thank you. I will try that now.#2019-10-0719:11calebpDoes datomic-access client <system-name> replace the datomic-socks-proxy script?#2019-10-0719:11marshallyes#2019-10-0719:11calebpThanks#2019-10-0719:11marshallnp#2019-10-0719:12marshallhttps://blog.datomic.com/ covers the rename#2019-10-0719:12marshallthe latest article#2019-10-0719:14calebpThe “Datomic Analytics (Preview)” article? I saw the note about renaming the bastion to access gateway, but didn’t see anything about the CLI tools#2019-10-0719:15marshallah. yeah, i guess it’s not that explicit#2019-10-0719:15marshallyes, the cli tools now take the place of the old socks proxy script#2019-10-0719:20calebpI’m walking through getting set up for analytics. So do people or processes that connect to analysis need to have the Datomic Administrator policy? https://docs.datomic.com/cloud/getting-started/configure-access.html#authorize-user#2019-10-0719:35Msr Timwhere can download the cli tools#2019-10-0719:39andy.fingerhutInstall instructions here: https://clojure.org/guides/getting_started#2019-10-0719:40andy.fingerhutSorry, I forgot which channel this was and may be misinterpreting your request.#2019-10-0719:48Msr Timoh I meant the cli tools mentioned in this sentence above by @marshall the cli tools now take the place of the old socks proxy script #2019-10-0719:50calebp@meowlicious99 They’re on the releases page#2019-10-0719:51calebphttps://docs.datomic.com/cloud/releases.html#current#2019-10-0720:31Luke NelsonAre default values a thing in datomic like they are in sql? If so how do I set them?#2019-10-0720:37ghadi@lukenelson1298 They are not, but there is a query expression called get-else#2019-10-0720:37ghadihttps://docs.datomic.com/on-prem/query.html#get-else#2019-10-0720:50souenzzoOnce again I can't work due lack of internet connection 😞
I can't understand why Cognitect does not release a datomic-cloud offline jar, or at least datomic-free with a newer version (that would allow https://github.com/ComputeSoftware/datomic-client-memdb to work again)#2019-10-0820:53rapskalianAgreed. Developing over the internet is extremely tedious and slow. My REPL sessions time out pretty often, and eval’ing even the simplest queries/transactions will hang for unknown reasons…#2019-10-0820:55rapskalianI’ve seen claims like “ions are easy to test at the REPL, because they’re just functions”, but that’s really not true imo. The mismatch between what runs locally and what runs in the cloud costs me countless hours.#2019-10-0720:57Msr TimOne of my coworkers asked me why we can't do this in datomic: (d/transact conn {:tx-data [[:db/add "John" :some-attribute-i-made-up-on-spot "Doe"]]})
Why can't datomic create idents on the spot.#2019-10-0721:31favilaI mean, you can: {:db/ident :my-ident}#2019-10-0721:31favilabut it also needs attribute metadata to be an attribute#2019-10-0721:31favilausually attribute creation creates both#2019-10-0721:32favila{:db/ident :my-attr :db/cardinality …}#2019-10-0721:32favilato both create and reference in the same transaction can be done with a tempid#2019-10-0721:34favilaso in theory [{:db/ident :my-attr :db/id "myattr"} [:db/add entity-id "myattr" value]] might work as a transaction, but now we’re back to cardinality and value type problems#2019-10-0721:49Msr Timmakes sense thank you.#2019-10-0805:29iku000888https://twitter.com/iku000888/status/1181344440335532032#2019-10-0806:44igrishaevDoes anyone know what caused such an error when trying to create a new database in Datomic?
Unhandled clojure.lang.ExceptionInfo
Error communicating with HOST 127.0.0.1 on PORT 4334
Caused by org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException
The transactor has been started with no errors using local Postgres. The PG’s database and user exist.#2019-10-0813:19marshallhttps://docs.datomic.com/on-prem/deployment.html#peer-fails-to-connect#2019-10-0813:19marshallThe most common cause for this is misconfigured host and alt-host in your transactor properties file#2019-10-0819:45matthewdanielokay, so i got past my issue with not being able to download from by changing it to https:. Now it downloads everything but get stuck on com.datomic:java-io:jar:0.1.11. here’s my deps. all other datomic artifacts downloaded#2019-10-0819:49Joe Lane@matthewdaniel What AWS Region are you in? Are you trying to perform this action locally or in an aws codebuild (Or something like it)?#2019-10-0819:51matthewdaniellocally. my deploy is going to be going to us-east-2 but even when i try aws s3 cp . i’m unable to get the file.#2019-10-0819:51matthewdanieland the above worked for all other jar/pom files#2019-10-0819:52Joe LaneAh, dang I was thinking it was a different issue I know the answer to. What does "get stuck on..." mean? Do you have a stacktrace?#2019-10-0819:53matthewdanielpretty sure it is stuck from the file not existing like the rest.#2019-10-0819:53matthewdaniel#2019-10-0820:02alexmillerI don't think there is a com.datomic/java-io 0.1.11#2019-10-0820:03alexmillerI would be interested to know what your original exception was#2019-10-0820:04alexmillerany chance you could roll back and do clj -Sdeps '{:aliases {:v {:verbose true}}}' -A:v:dev -Stree and attach the output as a file here?#2019-10-0820:05matthewdaniel@alexmiller what do you mean by roll back#2019-10-0820:08matthewdanielhere are the deps but i didn’t rollback#2019-10-0820:16alexmillerhow are you getting the tree if you're failing to get the deps?#2019-10-0820:17alexmillerthat tree is showing com.datomic/java-io 0.1.14 - why were you trying to get 0.1.10?#2019-10-0820:33matthewdanieli’m not sure. 
i ran your command to get the tree and i ran clojure -Adev -m datomic.ion.dev '{:op :push :creds-profile "default" :region "us-east-2" :uname "test1"}' and got the other message. i haven’t changed anything#2019-10-0820:44matthewdanielwould a rollback fix things? I’m new to this so not sure why i’d get different dep warnings based on what i typed into the terminal#2019-10-0820:46alexmillerdifferent aliases will affect what classpath you're building#2019-10-0820:47alexmillerI'm trying to get to the point where you are having an error to figure out why you are having it#2019-10-0820:51matthewdanielsorry i’m not being more helpful but i’m not following well. All i’ve done basically is add a deps.edn file and ion-config.edn file and tried to push. it starts downloading a bunch of deps and tips over on the java-io one. would it help to rm my maven repository and push again?#2019-10-0820:53alexmillerhard for me to say#2019-10-0820:53alexmillerwhat you posted above doesn't seem to hit that, so not sure what the problem is#2019-10-0820:54matthewdanielwhen i ran the above it output something like “downloading … ion-dev” and many other files. Those were created in .m2/repository path.#2019-10-0820:57matthewdanielrm -rf ~/.m2/repository caused a widescale download. it did download java-io 1.14 but failed later on java-io 1.11#2019-10-0820:58matthewdanieli really appreciate the effort to help#2019-10-0820:19alexmillerI mean originally you were using s3 url and something didn't work that sent you down this path. what error were you getting at the beginning?#2019-10-0820:33matthewdanielin the very beginning i was getting a problem downloading com.datomic/ion from the deps.#2019-10-0820:33rapskalianRunning into this while trying to deploy via CodeDeploy. UnknownError just shows Access Denied when expanded. 
What would cause this?#2019-10-0820:48rapskalianOh…I somehow deleted the S3 bucket containing the revision picard-facepalm#2019-10-0820:49rapskalianWell, the IAM policy for that bucket, not the bucket itself. Hence the Access Denied.#2019-10-0820:58dmarjenburgh@matthewdaniel Do you have valid AWS credentials in the environment where you retrieve the deps? I think I encountered something like it in the past and this was the issue (even though the s3 maven repo is public, you need some credentials)#2019-10-0821:01matthewdanieli entered my credentials into the .m2/settings.xml and it did not help. I am however able to download all packages if i use the https endpoints for the files. It just seems that i now have a deps issue where it is downloading a version of java-io from datomic that doesn’t exist. Even if i try to aws s3 cp for the file i’m unable to get it.#2019-10-0821:01matthewdanielwhen i do clj -Adev -m datomic.ion.dev '{:op :push :creds-profile "default" :region "us-east-2" :uname "test1"}' it downloads maybe a hundred files and one of them is java-io 1.14 but then much later in the process i get the following error#2019-10-0821:01matthewdaniel{:command-failed
"{:op :push :creds-profile \"default\" :region \"us-east-2\" :uname \"test1\"}",
:causes
({:message
"Failed to read artifact descriptor for com.datomic:java-io:jar:0.1.11",
:class ArtifactDescriptorException}
{:message
"Could not transfer artifact com.datomic:java-io:pom:0.1.11 from/to datomic-cloud (): Forbidden (403)",
:class ArtifactResolutionException}
{:message
"Could not transfer artifact com.datomic:java-io:pom:0.1.11 from/to datomic-cloud (): Forbidden (403)",
:class ArtifactTransferException}
{:message "Forbidden (403)", :class HttpResponseException})}#2019-10-0821:06matthewdaniel@dmarjenburgh @alexmiller well, i cheated and renamed and copied .m2/repository 1.14 into 1.11 and it carried on its downloads pass the break and successfully pushed. 😳:man-shrugging: thank you for your help. since i’m only at the initial stage for starting a side project i’ll just see if i can carry on without doing it the right way for now.#2019-10-0821:09alexmillerok#2019-10-0912:04matthewdanielhas anyone had trouble deploying to cloud on ValidateService? the state machine times out on Deployment Complete? for me. Never get the lambda created.#2019-10-0912:06matthewdaniel#2019-10-0912:12dmarjenburghCases where this can happen include having an error on program startup. For example, when there’s a syntax error or missing dependency. Try running the code with the deps it has in the cloud. (You can’t push code under alias paths I believe )#2019-10-0912:16matthewdanielthanks @U05469DKJ. can you elaborate a little? how do i use the cloud deps to run locally?#2019-10-0912:16matthewdanielwhat do you mean that I cannot push code under alias paths? does that mean when i push i shouldn’t include -Adev?#2019-10-0912:20dmarjenburghYeah, the bundled code doesn’t include :extra-deps or code under :extra-paths in the dev alias. You can use a dev alias to include ion-dev for push and deploy commands, but the deployed code shouldn’t depend on stuff in :extra-deps/:extra-paths. Try running just clj (no alias) and loading the namespace(s) where your ion handlers live and see if it throws an error or not.#2019-10-0912:21dmarjenburghThe datomic-client-cloud dependency does not have to be in your deployed deps, since this is one is provided#2019-10-0912:24matthewdanieli don’t have a problem from the repl loading my namespace file
• (load-file "/src/datomic-card/core.clj")
• (in-ns 'datomic-card.core)
• (hello-world)#2019-10-0912:24matthewdanieldeps file
{:paths ["src" "resources"]
 :deps {com.datomic/ion {:mvn/version "0.9.35"}
        org.clojure/data.json {:mvn/version "0.2.6"}
        org.clojure/clojure {:mvn/version "1.10.0"}}
 :mvn/repos {"datomic-cloud" {:url ""}}
 :aliases
 {:dev {:extra-deps {com.datomic/client-cloud {:mvn/version "0.8.78"}
com.datomic/ion-dev {:mvn/version "0.9.234"}}}}}#2019-10-0912:29matthewdanielah, this always gets me when i come back to clojure. my folder is named like my namespace datomic-card changing to underscore fixes the issue. Thank you!#2019-10-0912:29kelvedenI've been trying (but failed so far) to find some definitive documentation on what facilities are available for backup/restore in Datomic Cloud. Does anyone know of any?#2019-10-0912:32kelvedenOr is it a matter of dealing with the underlying storage in the cloud (i.e. backup the S3 bucket)?#2019-10-0914:33favilahttps://forum.datomic.com/t/cloud-backups-recovery/370/2#2019-10-0914:34favilacloud does not have backup/restore like on-prem.#2019-10-0914:34favilaI suspect this problem will be solved in the same time and way as an on-prem<->cloud migration solution#2019-10-0914:36favilaif they consider s3 to be durable enough not to need backup, then the only other use cases are moving to a different cloud installation (workaround: copy all s3 entries while the cluster is off) or to a different storage altogether (i.e. an on-prem migration) or restoring an older point-in-time#2019-10-0915:49kelvedenThanks @U09R86PA4, that makes sense. It'd certainly be nice to have an explicit restoration process but working with just what's currently available is possible. The service I'm working on currently is just used during normal work hours (in one time zone) which means that there could be the flexibility for switching off overnight to execute a backup.#2019-10-0915:24Msr TimHi is there documentation somewhere that has some guidance of setting up different enviroments for a project. Do I have spin up seperate prod clusters for each enviroment.#2019-10-0915:30dmarjenburghhttps://docs.datomic.com/cloud/ions/ions-reference.html#configure#2019-10-0915:32dmarjenburghYou can’t/shouldn’t have separate ion-config per environment. But each compute-stack/application can have it’s own env map (parameter in the CFT). 
There’s also integration with the aws parameter store#2019-10-0915:39Msr Timthank you. I will take a look.#2019-10-0915:39Msr TimI haven't really looked at ions that much. Only cloud hosted datomic.#2019-10-0916:28m0smithI see the docs refer to a :schema/see-instead for deprecating an attribute. When I try and use it, it gives me an error datomic.impl.Exceptions$IllegalArgumentExceptionInfo: :db.error/not-an-entity Unable to resolve entity: :schema/see-instead
{:entity :schema/see-instead, :db/error :db.error/not-an-entity} any ideas?#2019-10-0916:36favilaThis is a hypothetical attribute you create for schema management, not a builtin#2019-10-0917:01m0smithIs there an example of what it should look like?#2019-10-0917:07favilaare you looking at this? https://docs.datomic.com/on-prem/best-practices.html#annotate-schema#2019-10-0917:08favilaLooks like a cardinality-one ref-type attr#2019-10-0917:10favila{:db/ident :schema/see-instead :db/cardinality :db.cardinality/one :db/valueType :db.type/ref} is how I would do it#2019-10-0917:10favilabut this is just an example, you can do anything you want#2019-10-0917:11favilathe larger point is put metadata on your attributes#2019-10-0917:11favilabonus if they’re machine-readable, then you can do things like e.g. generate deprecation lists#2019-10-0921:23m0smithThanks, that helps a lot#2019-10-0922:45xiongtxI've found some strange behavior in datomic 0.9.5951 query:
This works fine:
(d/q '[:find ?e
       :in $ ?uid
       :where [?e :other/id ?uid]]
     (d/db conn) [:user/id user-id])
This throws IllegalArgumentException: Cannot resolve key: [:user/id ?uid]
(d/q '[:find ?e
       :in $ ?uid
       :where [?e :other/id [:user/id ?uid]]]
     (d/db conn) user-id)
Is this expected behavior or...?#2019-10-1007:13tslockeMy local SOCKS connection is suddenly not connecting. None of the AWS setup changed. Any tips on how to troubleshoot? How would I confirm that the primary stack is running?#2019-10-1009:46tslockeNever mind - figured it out#2019-10-1010:50danierouxWith Datomic analytics, does cardinality-many additional tables, work with column joins?
In the example, in the product_x_tags table and tags was a ref, how can I get the attributes of tags, instead of its eid?#2019-10-1013:08jaret@danie what is your schema and what is your metaschema? I am not sure I entirely follow, are you trying to have another column join from a card many on the tags side?#2019-10-1013:23Msr TimHi, In my app i need to "watch" for changes of particular type and react to it#2019-10-1013:23Msr Timi found this blog post re: tx report queue https://blog.datomic.com/2013/10/the-transaction-report-queue.html#2019-10-1013:23Msr Timbut link 404's .#2019-10-1013:23Msr TimIs that kind of usecase still supported?#2019-10-1013:31alexmillerlink works for me#2019-10-1013:32alexmillerseems like it's getting read in the preview above too#2019-10-1013:32favilaI think he means the javadoc link#2019-10-1013:32alexmillerohh#2019-10-1013:32favilaIt is supported, it’s an on-prem feature only though#2019-10-1013:32favilareal javadoc link: https://docs.datomic.com/on-prem/javadoc/index.html#2019-10-1013:33alexmillerhttps://docs.datomic.com/on-prem/clojure/index.html#datomic.api/tx-report-queue#2019-10-1013:35favilaI’m curious, are you actually writing java against datomic?#2019-10-1013:36Msr Timno its clojure.#2019-10-1013:37Msr Timmaybe I am looking for wrong thing. eg use case: send an email to customer for every order placed.#2019-10-1013:37Msr TimI was hoping I could watch for new order creation tx and send email.#2019-10-1013:37favilayou can, it’s a perfectly fine way to do it#2019-10-1013:38Msr TimI am using cloud offering#2019-10-1013:38favilaoh, yeah nevermind#2019-10-1013:38favilacloud doesn’t offer this#2019-10-1013:38favilayou will need to poll the tx-range repeatedly#2019-10-1013:38favilaunless there’s some ion way to do it?#2019-10-1014:01rapskalianI’ve considered wrapping d/transact in a function that would simply put tx results onto a queue…“roll your own tx queue” so to speak.
Obviously then other parts of the code would need to use this wrapped function. Never tried it, but it seems like it would be an option for cloud users.#2019-10-1014:02Msr Tim@U6GFE9HS7 what does queue mean here results onto a queue#2019-10-1014:02Msr Timlike some kafka type stuff?#2019-10-1014:02rapskalian@UNWLRR74Y either a core.async queue/channel to keep it simple and in-process, or potentially a SQS queue if you need something more durable. Kafka is probably overkill.#2019-10-1014:04Msr Timgotcha thank you#2019-10-1014:05rapskalianIdeally something async. You likely don’t want writers being blocked by the transaction queue.#2019-10-1013:39Msr Timok. sounds like I'd have to have to poll tx manually and keep track of my position somewhere https://docs.datomic.com/cloud/time/log.html#2019-10-1013:52Msr Timdo people write applications without 'tx functions' with datomic#2019-10-1013:53Msr Timmy current thinking is that is not possible because when you transact data you have no way to know if they data moved underneath you since you read it. There is cas but thats only for a single attribute.#2019-10-1013:54Msr Timso by extension one cannot really use datomic cloud without ions#2019-10-1013:54Msr Timam i missing something here?#2019-10-1013:56favilait depends on what you need. datomic transactions are really set operations, so they can compose. it’s only if you really need a specific computed write from a read that you reach for CAS or a transaction function#2019-10-1013:57favilafor example, lets say you have a cardinality-many string attr of document tags#2019-10-1013:58favilawhen the user adds or removes a tag, do you a) read all the tags, remove the tag, and re-assert the entire set b) only add the tag?#2019-10-1013:59favilasometimes you want A, in which case yes you need a tx function because you want the set’s final value to be exactly what you stated. 
(It’s a further question whether you want to fail and recompute if someone else wrote in the meantime, or if you want the last writer to win always)#2019-10-1013:59favilabut sometimes you want B, in which case it doesn’t matter what changes happened to the set in the meantime, the end result of adding your tag will be the same#2019-10-1014:03Msr Timgotcha. I was thinking of super typical ecommerce 101 buy a product flow.
1. check inventory count x
2. create order record
3. set inventory count to x-1
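A hedged sketch of making a flow like this atomic without a custom transaction function, using two :db/cas operations in a single transaction. The entity ids and the :product/inventory and :order/status attributes are assumptions, not from the thread:

```clojure
;; Sketch: both :db/cas ops are in one transaction, so they succeed or
;; fail together. If the inventory count changed since it was read (it is
;; no longer x), the whole transaction throws and can be retried after a
;; fresh read.
(d/transact conn
  {:tx-data [[:db/cas product-eid :product/inventory x (dec x)]
             [:db/cas order-eid :order/status
              :order.status/inactive :order.status/active]]})
```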
#2019-10-1014:04favilaOften you can split things up so the final unsafe atomic operation happens by itself#2019-10-1014:05favilae.g. create order record but mark inactive, then activate+change inventory count using two CAS#2019-10-1014:06Msr Timwhat happens if activate goes through and change inventory fails#2019-10-1014:06favilathey’re in one tx, they succeed or fail together#2019-10-1014:06Msr Timoh two CAS in one tx. got it#2019-10-1014:07favilayou can also use CAS to assert no-change, e.g. [:db/cas ent attr 1 1] to ensure the value is still 1#2019-10-1014:07favilaI wish you could :db/cas to nil as an atomic retraction#2019-10-1014:12Msr Timthank you @favila#2019-10-1014:13favilaall that said, do make and use tx functions! but maybe you only need a smaller set of primitive functions which do precondition checking or db/ensure for postcondition checking; instead of a function for every single business operation#2019-10-1014:14Msr TimMy team doesn't seem too comfortable with adopting both datomic and ions in one go. So I was researching if its even possible to write apps without tx functions.#2019-10-1016:56kennyAnyone have a library for turning a full map into a Datomic transaction, including retractions and additions from the current entity in the DB?#2019-10-1017:18Brian@kenny I recently just rolled my own. A bit of a pain but I couldn't find a clean solution although perhaps I missed one#2019-10-1017:18BrianCan anyone point me in the right direction for how to give my Datomic Cloud s3 access to a bucket I've created? I'm getting this error when trying to fetch an object from my bucket which makes me think it has to do with permissions {:Error {:Code \"AccessDenied\", :CodeAttrs {}, :Message \"Access Denied\", :MessageAttrs {}, :RequestId \"<reqId>\", :RequestIdAttrs {}, :HostId \"<hostId>\", :HostIdAttrs {}}, :ErrorAttrs {}, :cognitect.anomalies/category :cognitect.anomalies/forbidden}#2019-10-1017:19kennyI figured I’d end up needing to write it myself. 
Kinda surprised a lib out there doesn’t have a fn for it already. I also searched and couldn’t find one. #2019-10-1017:23ghadi@kenny @brian.rogers that use-case is inherently racey unless done within a tx function#2019-10-1017:24ghadi@brian.rogers permission for Ions?#2019-10-1017:25ghadihttps://docs.datomic.com/cloud/operation/access-control.html#authorize-ions#2019-10-1017:25ghadihttps://docs.datomic.com/cloud/operation/access-control.html#add-policy-to-nodes#2019-10-1017:26kennyThat’s fine. Seems like a general function for doing that is possible. #2019-10-1017:28ghadiif you make it a generic helper then someone uses it in a situation where the assumption of a race isn't understood, it wouldn't be fun#2019-10-1017:29kennyRead the docs 😉#2019-10-1017:27Brian@ghadi I think this is exactly what I was looking for thank you!#2019-10-1017:27ghadinp#2019-10-1017:36kennyThe idea of a “full-map” transaction seems like it could be extended to support automatic ordering of cardinality many attributes. #2019-10-1017:38ghadiyou can do whatever your heart desires in a tx-function#2019-10-1017:39ghadibut if you don't account for the basis of the tx-data you generate being old, it will be a nasty transaction#2019-10-1017:41kennySure. Just a lot of interesting things you can do with a full-map transaction. Curious if there’s a reason no one has written a library for this sort of thing. Seems super useful, unless I’m missing something. #2019-10-1017:41favilaI think there are better ones out there now @kenny but this was mine: https://gist.github.com/favila/8ce31de4b2cb04cf202687c6a8fa4c94#2019-10-1017:42favilayou are talking about “reset-attributes” (the second one)#2019-10-1017:42favilathere has been discussion about a “make it look like this” tx function off and on here, maybe the archives can help#2019-10-1017:43kennyYeah, going for exactly that.
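The two-CAS pattern favila describes above (create the order inactive, then activate it and adjust inventory atomically) can be sketched as plain tx-data. This is a hedged sketch, not code from the thread: the lookup refs and the :order/active? and :item/stock attributes are hypothetical names.

```clojure
;; Hedged sketch of the two-CAS pattern discussed above. Both
;; :db/cas preconditions are checked in the SAME transaction, so
;; activation and the inventory decrement succeed or fail together.
;; :order/active?, :item/stock, and the lookup refs are hypothetical.
(def activate-order-tx
  [[:db/cas [:order/id "order-1"] :order/active? false true]
   [:db/cas [:item/sku "sku-1"] :item/stock 5 4]])

;; Transacting it with the Cloud client API would look like:
;; (d/transact conn {:tx-data activate-order-tx})
;; If another process changed either value first, the transaction
;; fails atomically and nothing is written.
```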
#2019-10-1017:47ghadiyou can also use cas if your schema is designed for it#2019-10-1017:48kennyWhat does designed for it mean?#2019-10-1017:49ghadie.g. you have a version-like attr on entities and you can say: update all this stuff as long as the cas succeeds#2019-10-1017:54kennyThis is probably app dependent. In some cases, if the data has changed, I want to tell the user something has changed so they can reevaluate their action. #2019-10-1017:55ghadiYup#2019-10-1017:57kennyPerhaps that is what makes a general version of this function difficult — a bit too use case dependent. #2019-10-1017:47ghadimight have to retry though#2019-10-1017:48kennyDepends on the use case probably. #2019-10-1018:04Brian@ghadi can I private message chat you about setting up these ion policies?#2019-10-1018:05ghadiit's best to chat in here if possible -- there are more people#2019-10-1018:06ghadibasically if you update your stack(s) with a reference to the policy you created, then the Datomic compute nodes (whether the primary or query groups) will have augmented capabilities#2019-10-1018:07BrianKk. Well I'm not quite sure where my code is falling down permissions wise. The docs list 4 things that I need to do:
1. use default credentials provider chain. I am making a client this way https://github.com/cognitect-labs/aws-api/blob/master/examples/s3_examples.clj#L12 and my understanding is that without specifying credentials, it is using the default ones. Am I right so far on that?#2019-10-1018:08ghadithat's right, it will use the EC2 Instance Role's credentials#2019-10-1018:08ghadiwhich is what that extra policy will augment#2019-10-1018:11ghadi@brian.rogers DM me your policy if you want -- it's probably missing some sort of S3 permission if it's 403'ing#2019-10-1018:22BrianOH MY GOD IT WORKED#2019-10-1018:22BrianThank you @ghadi for pointing me in the direction of ion permissions#2019-10-1018:22ghadimy pleasure#2019-10-1018:23BrianI didn't even realize that was my problem for the longest of times#2019-10-1023:22kennyDoes Datomic support "union" pull queries as described here? https://edn-query-language.org/eql/1.0.0/specification.html#_unions#2019-10-1023:44favilano, but you can fake it by merging all the possibilities together into the same pull.#2019-10-1100:29steveb8nI do this merging a lot. the core apply-template fn is very useful for this kind of thing#2019-10-1113:11kenny"the core apply-template fn"?#2019-10-1113:47souenzzo@kenny i developed a custom ast->pull that i do (-> [{:my-thing {:thing/user [:user/id] :thing/group [:group/id]}}] eql/query->ast ast->pull) ;; => [{:my-thing [:user/id :group/id]}]#2019-10-1107:39magnarsAre there any guides or tips for migrating from datomic-free to pro (on-prem)?#2019-10-1113:58rapskalianHow would one create a library for datomic ions? What would be the contents of the library’s deps.edn file? Would it be best to consider the ion and client cloud libs as “provided”?#2019-10-1113:59alexmilleryou might want to check out the ion-starter project https://github.com/Datomic/ion-starter#2019-10-1114:02rapskalianThanks, Alex. I’ve studied that example extensively, and my impression was that it was more application focused.
I’m thinking of adding an alias to my library’s deps.edn file called :provided that contains all of the relevant datomic libs. Then users of the lib would need to depend on those themselves.#2019-10-1114:12alexmillerthat seems like a reasonable approach#2019-10-1114:12alexmilleror you could just depend on them directly - the users of your lib could specify different versions and those would be preferred#2019-10-1114:25rapskalianAh okay, that was where my confusion was. I’m never quite sure when, as a lib author, a dependency should be considered “provided” @alexmiller. Probably a question for #tools-deps, but it sounds like the resolution is “last one wins”?#2019-10-1114:26rapskalian(shared to #tools-deps to stay on topic, FYI)#2019-10-1115:19ninjaHi, I'm currently writing a custom monitoring solution in order to enable Prometheus scraping transactor metrics. The monitoring documentation (https://docs.datomic.com/on-prem/monitoring.html) doesn't seem to completely represent the current state of metrics handed to a callback function by the transactor. For specific metrics, information is hidden in the changelog section and never made its way to the monitoring documentation. Other metrics are said to be deprecated (since a long time ago) in favor of replacements, although these replacements were never handed over by the transactor.
So 2 questions:
- is there a plan to update the documentation on this topic in the near future?
- what do the metrics PodUpdateMsec and PodGetMsec describe?#2019-10-1115:25kelvedenIs it possible to pull the transaction data for a datom without binding a ?tx in the :where clause?
For an example close to the specific problem I'm trying to solve: https://docs.datomic.com/cloud/query/query-pull.html#org0595b1b - could the final pull be modified to include (say) db/txInstant with each item of :release/_artists?#2019-10-1117:16hadilsResearch question: is there a way to hook up Datomic Cloud Analytics to an AWS instance of Metabase? I realize I would have to run an EC2 instance, etc. and I am willing to put in the work, but I need to know it's possible...#2019-10-1118:30danierouxWe did this just today: Easiest is to run the analytics ssh tunnel from the EC2 instance, and then let Metabase connect to localhost.
Also, it's so darn sweet to see the data in Metabase.#2019-10-1119:07hadilsThanks!#2019-10-1117:19magnarsI'm trying to convince a client to start paying for Datomic - where can I find guides or tips for migrating from datomic-free to pro (on-prem)?#2019-10-1117:57favilaI’m not sure there’s any migration to do? backup the db, restore it to your new storage. If you’re still using dev storage, you may not even need backup+restore (I’m unsure)#2019-10-1117:58favilaare you aware of the distinction between “starter edition” and “free”?#2019-10-1117:58favilastarter edition is just a free-first-year license-key#2019-10-1206:06magnarsYes, part of the selling point is that they don't have to pay the first year.
I tried pointing datomic-starter at the same URL, but there's no free there, and dev does not see the contents from the free database. A backup+restore sounds entirely doable, but I'd be interested in cleaning up the database a bit at the same time. I think it's called decanting?#2019-10-1213:44favilaYes#2019-10-1214:30favilaDoes the connection work if you s/:free:/:dev:/?#2019-10-1214:31favilaThey’re the same bytes in storage really, only some runtime license/feature checks are different#2019-10-1214:35favilaOh you did try it. Huh I expected that to work #2019-10-1118:26matthewdanielis squuid only in datomic peer or can i generate a squuid in the datomic cloud api?#2019-10-1118:29dmarjenburghIt’s not in the client cloud api. https://forum.datomic.com/t/why-no-d-squuid-in-datomic-client-api/446#2019-10-1118:31matthewdanielthanks!#2019-10-1120:40shaun-mahoodWhere's the best place to report issues with Datomic documentation? I've run into a couple dead links and outdated instructions recently (though I'll have to find them again to be able to report them).#2019-10-1120:41alexmillerhere is usually a good place#2019-10-1120:59ibarrickI'm getting tons of these: clojure.lang.ExceptionInfo: :db.error/transactor-unavailable Transactor not available and I'm not sure how to diagnose. The transactor is up and running and it doesn't look like it's that bogged down. Is there an easy way to diagnose this error or to get more information?#2019-10-1123:57benoitMost of my "transactor unavailable" errors with on-prem were due to a lack of heap on the peers. Hope that helps.#2019-10-1511:06dazldresource starvation on the peers would be my first check too. you could wrap d/transact with some monitoring to measure average completion time.#2019-10-1121:00shaun-mahood@alexmiller Perfect, thanks! I think these were the only ones I ran into, but I'll post here if I find any others.
Broken Link - https://docs.datomic.com/cloud/howto.html#aws-access-keys (found in https://docs.datomic.com/cloud/ions/ions-reference.html#push and https://docs.datomic.com/cloud/ions/ions-reference.html#deploy)
Outdated instructions - the AWS pages used when following https://docs.datomic.com/cloud/operation/upgrading.html#compute-only-upgrade and https://docs.datomic.com/cloud/operation/upgrading.html#compute-only-upgrade have changed - they've removed the "Actions" button and have a standalone "Update" button, and the options for specifying the S3 Template are named differently as well. Not sure if that's universal or if it depends on the account or region.#2019-10-1121:17alexmillerthx, I'll ship 'em to datomic team#2019-10-1209:24bartukawhen performing a db/history query, how can I get only datoms that have the same db/txInstant?#2019-10-1210:28benoitdb/txInstant is an attribute of a transaction so I'm guessing you want to access all datoms asserted by a transaction?#2019-10-1210:40bartukayes, that's right. When performing db/history and filtering for a single datom, I find the entity that I want, but it brings back all other datoms of the history. I would like to get back only the datoms asserted by the transaction of the datom that I am using to filter the query. [confusing? :/]#2019-10-1210:45benoitYou should be able to filter like this:
[?tx :db/txInstant <your-instant>]
[?e ?a ?v ?tx ?op]
#2019-10-1210:46benoitThat should show you only the datoms of the tx you're interested in.#2019-10-1301:35favilaThat is going to be extremely inefficient if e or a are unbound#2019-10-1301:36favilaIf you have a specific tx and you want to know what happened in it, use the log and tx-range instead#2019-10-1312:11benoitYes, but @UBSREKQ5Q said they know what entity they want to so I assumed ?e is already bound to something. But if not, @U09R86PA4 is right, do not do that 🙂#2019-10-1216:59SocksWhat happens if you update the compute cluster before the storage as your told not to here: https://docs.datomic.com/cloud/operation/upgrading.html#storage-and-compute#2019-10-1218:06dustingetzWhat is the state of art of unifying Datomic schema with clojure.spec#2019-10-1300:35steveb8n@dustingetz I’ve rolled my own. You could do this in a txn fn or (in my case) in the client that sends the tx data. it’s pretty straightforward to do. you do need a cross-cutting fn to lift idents, re-shape ordered cardinality-N attrs etc.#2019-10-1300:35steveb8nyou are right that this would be a nice lib if it existed. I hope that we will see something like this as an add-on from Datomic/Cognitect (maybe when spec2 is nailed down) but, until then, it’s roll your own#2019-10-1300:38steveb8nif you use the ideas in this post, it becomes much easier https://cjohansen.no/referentially-transparent-crud/#2019-10-1302:50dustingetzhow about the query side - datomic pull and s2/select#2019-10-1304:10steveb8nsame thing. you need a cross-cutting fn to translate. In my case I composed the pull expr for each entity into the query variety of each entity so that the translate fn can be used for pull or query.#2019-10-1304:25steveb8nthinking about this: you could use the spec describe to drive the translate fns. slightly more complex but then fully generic i.e. 
much less boilerplate#2019-10-1410:12dmarjenburghThe security/compliance dep requires that all S3 buckets are encrypted and all CMK keys have a rotation enabled. By default, the datomic template does not do this.
Can anyone confirm my expectation that making these changes to the CFTs ourselves will not break anything?#2019-10-1413:48ghadiAFAIK the buckets are encrypted @dmarjenburgh with KMS, but not using server-side encryption#2019-10-1416:47tslockeWhy is there both :db/retractEntity and :db.fn/retractEntity?#2019-10-1416:49Joe Lane@tslocke On-prem: :db.fn/retractEntity, Cloud :db/retractEntity#2019-10-1416:51tslocke@lanejo01 yeah I noticed that, but still wondering why.#2019-10-1416:54Joe LaneAhh, sorry, I can't answer that one 🙂#2019-10-1417:01tslockeWith the client API, it seems, :in $ [?a-collection ...] is not allowed. What is the right way to use a collection as a param to a query?#2019-10-1417:07Joe Lanethat is allowed#2019-10-1417:09tslockeAhh my bad it's :find [?coll ...] that's not allowed. Maybe I can rearrange...#2019-10-1417:17favilafind destructuring equivalents: ?x . -> ffirst; [?x] -> first ; [?x ...] -> (mapv peek)#2019-10-1417:17favilaas in (->> query-result (mapv peek)) for e.g.#2019-10-1517:53rapskalian@U09R86PA4 is (mapv peek) faster than (mapv first)?#2019-10-1517:54favilathere probably isn’t much difference#2019-10-1517:55favila(IOW I don’t know but I suspect there’s no difference or a marginal difference)#2019-10-1418:15BrianHey y'all! I'm walking a coworker through Datomic Cloud and he's getting an error on this dependency com.datomic/ion-dev {:mvn/version "0.9.234"} that says "Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.datomic:ion-dev:pom:0.9.234 from/to datomic-cloud (<s3://datomic-releases-1fc2183a/maven/releases>): Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 1A3956EEC6ECF1D5; S3 Extended Request ID: 2BxntHvQDQbz3eWoJql9cAFjdsdX2/g0Xu6b5hO7PzOZCnWSkyBomo83Jdh6DGHFDQYDGWhV/js=)" Any ideas as to what his problem might be? 
This comes up when he runs clj -A:dev#2019-10-1418:21marshallhttps://forum.datomic.com/t/issue-retrieving-com-datomic-ion-dependency-from-datomic-cloud-maven-repo/508/3 @brian.rogers#2019-10-1418:40BrianWe can't seem to get his ~/.m2/settings.xml solution to work#2019-10-1419:14BrianAnyone have any idea? It seems that we can't pull com.datomic/ion-dev {:mvn/version "0.9.234"} out of Datomic's S3 bucket on his side when he runs Clojure -A:dev while trying to work with the ion tutorial repository#2019-10-1419:14Joe LaneHe needs to have aws credentials#2019-10-1419:15Joe Lanehas he configured the aws-cli yet?#2019-10-1419:19BrianYeah he has. He's been able to connect to our existing infrastructure and query our database that is running. That was when we'd removed the ion-dev from the deps.edn. But then of course he couldn't push any ions#2019-10-1419:23Joe Lanein my deps edn I have this section
:mvn/repos {"datomic-cloud" {:url ""}
"sonatype" {:url ""}}
#2019-10-1419:23Joe LaneDo you?#2019-10-1419:32BrianThe code we're working with (that works for me) looks like this https://github.com/Datomic/ion-starter/blob/master/deps.edn#L10#2019-10-1419:32BrianOh wait#2019-10-1419:32BrianYes we also have :mvn/repos {"datomic-cloud" {:url ""}} #2019-10-1419:33Briando not have that sonatype part#2019-10-1419:35alexmillershouldn't need that to use ions#2019-10-1419:35alexmillerthe sonatype one that is#2019-10-1419:35alexmillerthat's just access to maven central snapshots#2019-10-1419:37BrianI can't imagine it has to do with the deps as it's working fine for me on my side. And he has AWS permissions to access all the resources as he was able to query our database. It's just this connecting ion-dev dependency we can't seem to pull down#2019-10-1419:42alexmillerare there AWS env vars set?#2019-10-1419:44alexmillerand what is set in ~/.m2/settings.xml?#2019-10-1419:45alexmillerdon't post anything secret, just trying to get what's configured#2019-10-1419:45BrianI thought so but how can we check for good measure? No such file existed in his /m2/ (I don't have one either) but he did try the structure here: /.aws/credentials and the creds he uses to sign into AWS but nothing came of it#2019-10-1419:45Brianoops hold on let me edit#2019-10-1419:47alexmillerecho $AWS_PROFILE
echo $AWS_ACCESS_KEY_ID
echo $AWS_SECRET_ACCESS_KEY
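If the checks above come back empty, exporting a profile explicitly is one way to give the legacy s3 Maven transporter some credentials for its bucket-region lookup. A hedged sketch; the "default" profile name is an assumption about the local ~/.aws/credentials file.

```shell
# Export a named AWS profile so the s3 Maven transporter has *some*
# credentials available. "default" is an assumed profile name that
# would need to exist in ~/.aws/credentials.
export AWS_PROFILE=default
echo "AWS_PROFILE=$AWS_PROFILE"
# then retry the failing resolution, e.g.: clj -A:dev
```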
#2019-10-1419:48BrianAh darn he's just clocked out 15 minutes ago. If those do or don't work, how should we proceed?#2019-10-1419:48alexmillerthe datomic-cloud s3 repo is not authenticated, so you should NOT have anything set in ~/.m2/settings.xml#2019-10-1419:48BrianNone of those echo statements produce any output for me actually#2019-10-1419:49BrianOkay cool cool. I'll make sure he deletes that if he did end up keeping the file around.#2019-10-1419:49BrianShould those produce output for him?#2019-10-1419:50alexmillerone thing that is confusing is that you must have some aws env vars set, even though they will not be used to access the datomic-cloud repo#2019-10-1419:52alexmillerbecause reasons, the s3 provider will attempt to look up the region of the bucket holding the repo and while the bucket is public, the IAM check on the region lookup will fail if you don't have some (any!) credentials set#2019-10-1419:56BrianSo then given that when we commented out the ion-dev, we were still able to use the bastion to connect to the database and run queries, and also able to pull in com.datomic/client-cloud {:mvn/version "0.8.78"} which I'm assuming is also hosted in S3, I couldn't quite see how the creds were not working#2019-10-1419:57BrianIf we were to totally redo the creds, should we just delete his ec2 keypair, delete the ~/.aws, and try it all over?#2019-10-1420:03alexmillerI suspect things are working off the default creds#2019-10-1420:03alexmillerbut the s3 repo provider is super old and may not even look at that stuff#2019-10-1420:03alexmillerI think it is worth setting those aws env vars to same as whatever you're using in credentials and see if it works#2019-10-1420:05BrianWhere does $AWS_PROFILE come from? 
I see the other two in ~/.aws/credentials/#2019-10-1420:11alexmillerthat specifies which profile in your credentials to use#2019-10-1420:12alexmilleryou can export AWS_PROFILE=default to use the default#2019-10-1420:12alexmillerso that might be a good test#2019-10-1420:15ghadiaws sts get-caller-identity
may be useful#2019-10-1420:17BrianI've written down these suggestions and I'll report back the results tomorrow morning when my coworker is back online. Thanks for all the help! =]#2019-10-1500:35currentoorAny plans for datomic supporting java.time.Instant natively?#2019-10-1500:48bartukahi ppl, I am having a hard time performing an aggregation. I have a value (float) and a date (instant) attribute and want to perform an aggregation: sum of values grouped by date#2019-10-1508:06maxtToday my cloudwatch log is full of messages that say RestartingDaemonException
"Message": "Unable to load index root ref <uuid>", with two different uuids. Do you know what could be the cause of that, and how can I repair it?#2019-10-1509:02tslockeIs it possible to work with temporary files with Ions? e.g. for resizing images before uploading to S3. I'm hoping I can write to (System/getProperty "java.io.tmpdir")#2019-10-1515:06Brian@ghadi regarding yesterday's conversation: the output of the aws sts get-caller-identity returned seemingly expected results of <bunch of numbers> arn:aws:iam::<bunch of numbers>:user/<aws-username> <hash>#2019-10-1515:10Brian@alexmiller regarding yesterday's conversation: my coworker exported those variables as per your suggestion (has no settings.xml file) and it still failed in the same way as before with an "access denied"#2019-10-1515:25rschmuklerI have some questions regarding the implications of datomic's licensing if I want to offer my own software as an enterprise offering (and it depends on datomic). Is this an appropriate place to ask?#2019-10-1517:09marshallSure. Alternatively you can email the Datomic support team#2019-10-1519:14rschmuklerThe long and short of it is basically - if I have an application that depends on datomic and a customer wants to use my application on prem, but they will not be accessing datomic outside of my application, does the $5K/year/server fee apply? i.e. Does each of my (enterprise) customers then effectively run me $5K out of the gate?#2019-10-1613:21marshallThe Datomic On-Prem license does not provide distribution rights, so you can’t ship the Datomic bits as part of your application
If you want to do that, we are happy to negotiate an OEM Enterprise license to cover that use case
If you’d prefer not to go that route, the other option is to have your end customers purchase a license from us for their on-prem installation of Datomic#2019-10-1515:27Msr Timany reason there are 2 options each in subscribe. Which one should i choose?#2019-10-1517:09marshallthis is an issue with the AWS Marketplace. Choose the one with the latest release. Usually it is the first in the list#2019-10-1517:12Msr Timok. Now they are in random order.#2019-10-1517:12Msr Timwhat happens if i choose the wrong one?#2019-10-1517:12Msr Timi can't tell them apart#2019-10-1517:13marshallthe versions in each one will differ#2019-10-1517:14marshallso if you’re looking for the latest (512-8806), you will only find it under one of the choices#2019-10-1517:16Msr Timgotcha. I can see version in next step. 👍#2019-10-1517:16Msr Timthank you#2019-10-1519:28xiongtxIs there a way to retract an entity’s attribute, no ❓ asked?
To “upsert” a :db.cardinality/one attribute we can just transact a new value, but for a :db.cardinality/many attribute doing so appends, necessitating a retraction beforehand. But :db/retract requires an old value, which means we need to do an additional query. Can we avoid that?
https://docs.datomic.com/cloud/tutorial/retract.html#2019-10-1519:36andrew.sinclairThat sounds like you want tx-functions#2019-10-1519:36andrew.sinclairTake a look here: https://clojurians.slack.com/archives/C03RZMDSH/p1570729299065100#2019-10-1519:42andrew.sinclairSpecifically, the :db.fn/reset-attributes says
> The special value nil will retract all values of the attribute.#2019-10-1521:24xiongtxAh OK, I was hoping there’d be something built-in to datomic#2019-10-1621:12xiongtxHere’s an answer from the datomic forums: https://forum.datomic.com/t/replace-cardinality-many-attribute/759/2#2019-10-1604:23George Ciobanuis there a way (without using the excellent tx-function examples shared by @andrew.sinclair) to retract multiple values from a cardinality many ref attribute?
E.g. in (d/transact conn [[:db/retract [:app/id "app_2"] :children [:component/id "Component 1"]]])
I'd like to remove multiple children (components) at once.#2019-10-1613:17marshallEach retraction performed like this (with a list-form :db/retract) translates directly into a single datom, so you would need to create a :db/retract for each child you want to retract#2019-10-1614:35George CiobanuGot it. Thank you so much marshall#2019-10-1604:26George CiobanuAnother question: similar to the example above, is it possible to remove a child's parent? I tried
(d/transact conn [[:db/retract [:component/id "Component 1"] :children [:app/id "app1"]]])
and I get an error: :cause ":db.error/not-an-entity Unable to resolve entity: :_children"#2019-10-1613:15marshallthe underscore syntax is only valid in pull#2019-10-1613:16marshallin this case you would want to retract from the parent entity, so something like:
[:db/retract [:app/id "app_1"] :app/children [:component/id "Component 1"]]#2019-10-1614:35George CiobanuMany thanks @U05120CBV!#2019-10-1606:15ShaitanHow to get all entities with a missing field? For example all customers that do not have :customer/address#2019-10-1607:10cjmurphy[(missing? $ ?customer :customer/address)] should do that I think.#2019-10-1609:26Shaitanthat works thank you 🙂#2019-10-1614:29hadilsIs there a way to rename a Datomic Cloud application? I need to change my dev environment to prod and I put dev in the name.#2019-10-1614:30hadilsCan I redeploy with a different name?#2019-10-1614:33dmarjenburghDo you mean the system name or the codedeploy application name?#2019-10-1614:34hadilsBoth.#2019-10-1614:35dmarjenburghI think you have to set up new stacks if you really want a new system name.#2019-10-1614:36dmarjenburghThe system name = storage stack name and it’s everywhere.#2019-10-1614:36hadilsYes, I thought so...#2019-10-1614:47stijnalso for the codedeploy application name you have to recreate the compute stack, you cannot alter this parameter in cloudformation (I tried it last week)#2019-10-1615:01genekimHello! I’m wondering if anyone can help me with a query that is looking for missing values that is timing out — I have the following function, which looks for user entities that have an id, but are missing the name.
My goal is to assemble those incomplete users, so I can do a scan to fetch their missing information. But the query for getting the count times out.
I’m sure there’s a better way to write the query? THANK YOU!!!
(I’m running this via datomic proxy, if that matters…)
(defn count-uninitialized-users
  "Count users with missing names."
  []
  (ffirst
   (let [conn (get-conn)]
     (d/q '[:find (count ?e)
            :where
            [?e :user/id]
            [(missing? $ ?e :user/name)]]
          (d/db conn)))))
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
processing clause: [?e :user/id ?id], message: java.util.concurrent.TimeoutException: Query canceled: timeout elapsed
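For reference, the client API's q also has an arity-1 form that takes a single arg-map, where a :timeout (in milliseconds) can be supplied for long-running queries. A hedged rewrite of the function above under that assumption; get-conn is the poster's own helper, and the 120000 value is an arbitrary example.

```clojure
;; Hedged sketch: the arity-1 arg-map form of datomic.client.api/q
;; accepts a :timeout in ms, letting this query run past the default.
;; get-conn is the helper from the snippet above.
(require '[datomic.client.api :as d])

(defn count-uninitialized-users-with-timeout []
  (ffirst
   (d/q {:query   '[:find (count ?e)
                    :where
                    [?e :user/id]
                    [(missing? $ ?e :user/name)]]
         :args    [(d/db (get-conn))]
         :timeout 120000})))  ; 2 minutes instead of the default
```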
#2019-10-1615:10Mark AddlemanI'm aware of three different strategies to deal with timeouts:
First, there is a timeout option on the query object (see https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/query)
Second, I have found that using the async api can alleviate the problem (I think async helps when the result set is very large but I'm not certain)
Third, switching to the index api rather than the query api is a last resort.#2019-10-1615:22marshall@genekim You can pass a :timeout to the query if you think it’s an issue of it just being a long-running job https://docs.datomic.com/client-api/datomic.client.api.html#2019-10-1615:23marshallhttps://docs.datomic.com/client-api/datomic.client.api.html#var-q
you’ll need to use the arity-1 version of the q function#2019-10-1615:23marshallI believe the default timeout is 60s#2019-10-1615:36genekimOoh!!! Promising, @U05120CBV @UAMEU7QV7! I’m giving that a shot! Thx! I’ll keep you posted.#2019-10-1615:41genekim@U05120CBV @UAMEU7QV7 Changing timeout worked! Thank you, all!!! Woot!#2019-10-1615:43marshall👍#2019-10-1615:03jeroenvandijk@genekim Did you try swapping the clauses?#2019-10-1615:04genekimYep!!! Alas, still same result.#2019-10-1615:04jeroenvandijkok, that would have been too easy. Not sure then#2019-10-1615:05jeroenvandijkIs missing? something from datomic?#2019-10-1615:06jeroenvandijkah I see it is, never mind#2019-10-1615:05genekim@jeroenvandijk — the approximate count of entities is 115K with names, and 1MM without names…#2019-10-1615:11jeroenvandijkhmm ok i have no idea, sorry#2019-10-1615:46ghadi@genekim make sure to pass db to your functions, so that you can assemble together many functions that operate on the same db basis. This is useful for things like reporting routines, or really anything where you need to take multiple looks at the database and ensure you're seeing the same thing
plus, it has the benefit of centralizing the d/db calls and cleaning up the function bodies#2019-10-1616:05genekimThank you!!! People kept suggesting that to me, but I can’t say I actually understood why until your comment. That’s awesome!!!#2019-10-1616:04genekim@ghadi Ohhhhh…. Got it… I’ll look in the music-brainz code samples to see how it should look. Thank you!#2019-10-1616:35hadilsIs it possible to downgrade from production to solo?#2019-10-1616:59hadilsNvm, I answered my own question...#2019-10-1618:27hadilsActually, I didn't. Is it possible to downgrade from production to solo?#2019-10-1618:31hadilsI am building a new production instance of Datomic Cloud and want to downgrade the dev instance to Solo. My attempt at just upgrading failed. Any ideas?#2019-10-1620:22joshkhharmless typo in the first sentence here: "reslts" 🙂 https://docs.datomic.com/cloud/query/query-data-reference.html#predicate-example#2019-10-1620:51Joe Lane@hadi.pranoto You cannot downgrade from production to solo#2019-10-1622:13kennyHas anyone deployed an application on AWS Lambda that uses Datomic Cloud? I'm curious how Datomic Cloud would handle many short-lived connections.#2019-10-1622:29tylerWe have some lambdas doing that in production. The connection overhead is far overshadowed by the Clojure runtime boot up time and the VPC ENI provisioning time.#2019-10-1622:30tylerAWS is rolling out a fix for the ENI problem but the Clojure startup time is still a barrier for any customer-facing use cases (we just use lambda for background processing right now)#2019-10-1622:30kennyThanks for the info. Couple other questions. Are you AOT'ing your code? About how many lambdas are running in parallel?#2019-10-1622:30tylerDatomic itself seems to have no issues that we’ve seen so far.#2019-10-1622:32tylerYup we are AOTing the code, however, bootstrapping the Clojure runtime on a lambda incurs a ~1.9s hit for cold starts. 
We’ve tested this in Java lambdas that only depend on Clojure and just import the clojure.core ns.#2019-10-1622:33tylerAt most we’ve had ~8 lambdas in parallel hitting datomic.#2019-10-1622:33kennyThat's totally fine for our application. Anything under 10s is probably ok actually. What is your actual startup time with all your production deps?#2019-10-1622:34kennyHmm, we will have a significantly higher level of parallelism. Should be somewhat easy to test, I suppose.#2019-10-1622:35tylerBecause of the VPC issue, it takes about 18s. However, we’ve tested in other availability zones that have the VPC fix and that goes down to ~8s.#2019-10-1622:36kennyIs the VPC issue only a problem when sources outside the VPC need to contact the Lambda?#2019-10-1622:36tylerIt's an issue when you need to run a lambda in a VPC (which you need to for datomic).#2019-10-1622:37tylerThe fix is likely only a few weeks away for us-east-1. Can follow it here https://aws.amazon.com/blogs/compute/announcing-improved-vpc-networking-for-aws-lambda-functions/#2019-10-1622:38kennyGotcha, ok. Are you guys using deps.edn?#2019-10-1622:40tylerAnother option that we’ve considered is just exposing datomic over API gateway using ions w/ http direct to expose a REST-like interface for datomic operations and securing it with an IAM authorizer.#2019-10-1622:40tylerYup we use deps for all our dependency management.#2019-10-1622:41kennyInteresting idea.
What do you use to AOT?#2019-10-1622:42tylerWe have an internal library that we’ve soft open-sourced (use at your own risk): https://github.com/StediInc/lambda . Its pretty basic, just uses clojure.core/compile and juxt.pack#2019-10-1622:45tylerWe use lambda for pretty much everything so it has some extras in there for making it easy to manage multiple lambda entrypoints in a service.#2019-10-1622:45kennyNice, will write it down to take a look at when we're ready to investigate more. Why'd you guys write your own lib instead of using one of the existing AWS Lambda clj libs?#2019-10-1622:49tylerNone of them worked quite the way we wanted. We use quite a bit of middleware internally and wanted an interface more conducive to the pattern established by ring. Additionally, we do quite a bit with cloudformation and we have a library to make connecting lambdas to that easier. We wanted better control of the output artifacts so we could line that up better.#2019-10-1622:50tylerOnly other one I was aware of was https://github.com/uswitch/lambada#2019-10-1622:50kennyYep, that was the one I'm familiar with.#2019-10-1622:52kennyLooking at the org, I see cdk-clj. We use Pulumi at our org for deployment. It sounds like cdk-clj leverages jsii to work with a TypeScript API from the JVM. Is that correct? I'd be very interested in exploring that with Pulumi.#2019-10-1622:54tylerIt does. However, the library has to explicitly be built with a jsii configuration.#2019-10-1622:54tylerSee https://github.com/aws/aws-cdk for an example#2019-10-1622:56tylerPretty sure it was built specifically to make CDK cross-language compatible. I’m not aware of any other projects using jsii.#2019-10-1622:58kennyI'm not familiar with jsii. Does that mean the libraries need to be actually written in a jsii compatible way? Or is there declarative spec files the library needs to have?#2019-10-1623:01tylerI believe so, although this is stretching my knowledge at this point since we just did it to target CDK. 
The source library has to be in typescript and has some restrictions: https://github.com/aws/jsii/blob/master/docs/typescript-restrictions.md.#2019-10-1623:02tylerOur implementation requires the jsii bundle manifests that get produced in the JVM target config. No idea how hard it would be to build an existing project with jsii.#2019-10-1623:02tylerLooks like Pulumi is in typescript so it might work.#2019-10-1623:03kennyNot sure. Pulumi supports a few different languages now but I don't think they use jsii.#2019-10-1623:05kennyNot a high priority for us but definitely super interesting stuff. I'll bring it up with the Pulumi team to see if they have any input. Thanks for all this info - been super useful.#2019-10-1623:10kennyYou guys don't happen to use any native libraries in your Lambdas do you @U0BKC8NCU?#2019-10-1623:11tylerWe don’t, no. I’ve dabbled with them on side projects though.#2019-10-1623:12kennyOk. One of the services we're interested in running on Lambda needs some native libs. Seems like other people have used native libs on the jvm on Lambda.#2019-10-1623:14tylerYou should be able to build the native lib on a docker image like https://hub.docker.com/r/lambci/lambda/ and then package it into a lambda layer https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html#2019-10-1623:16tylerIf you have a pre-built binary thats compatible you can just put that on a lambda layer directly and ignore the docker step.#2019-10-1623:18tylerhttps://hub.docker.com/_/amazonlinux is what you’d want to build on, not the ci image I posted above.#2019-10-1623:20kennyThat seems like a better path than manually extracting the native libs from the classpath. Thanks!
What do you guys use for logging in Lambda? I've used CloudWatch logs in the past and can hardly stand it after using Datadog logs for a while.#2019-10-1623:21kennyi.e. Do you guys ship the CW Lambda logs to somewhere else? Do you have some other log viewer?#2019-10-1623:21tylerWe use cloudwatch logs for lambda logging but we’ve been moving to using traces instead with Amazon Xray and then just linking the logs from Xray.#2019-10-1623:23tylerWe also post all the cloudwatch logs to an elasticsearch instance. Although haven’t really used that since we’ve gotten Xray setup properly.#2019-10-1623:24kennyDo you have some wrapper around the Xray api? We use opentracing right now and have a defn wrapper that can be used to instrument certain functions.#2019-10-1623:28tylerYeah we just have a lightweight wrapper around the Xray java API. Nothing fancy. We are leveraging the annotation and metadata features pretty heavily, not sure how those interplay with opentracing. We’re pretty much all-in on AWS so we have tried to use their supplied libraries to get as much as we can out of their services.#2019-10-1623:35tylerSpecifically this https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java.html. They will auto instrument calls to all AWS services (which we use a lot of) and to anything using the apache http client.#2019-10-1623:36kennyWe started using the cognitect aws-api so we probably will miss out on the auto-instrumentation 😞#2019-10-1623:39tylerWe did too although we are reverting back to the aws sdk. We like the interface for the aws-api lib by cognitect but its missing a lot of goodies. Luckily, the aws sdk v2 exposes a data class interface that allows for the same aws-api payloads to be marshalled into request classes. 
We are working on an internal lib that is basically a 1-1 replacement with the cognitect aws-api library.#2019-10-1700:57kennyOh very cool!#2019-10-1623:40hadilsThanks @lanejo01#2019-10-1712:01timeyyyHow do people handle schema migrations with datomic cloud? I was looking at conformity but it doesn't support the client api.#2019-10-1712:13NemsHi everybody,
We are having an issue with the way Datomic's tx works. Our application lets users submit driver registrations, each with a start and end date for a certain license plate, via XML files. If a new driver registration overlaps with the start and end dates of an old driver registration, we cut off the old dates in our timelines. Our timelines are sorted on the highest tx.
Just to be clear about the overlaps. Here's an example:
driver registration A inserted at 2019-09-09
{:driver-registration/id "1XXX001-2018-01-01"
:driver-registration/license-plate "1XXX001"
:driver-registration/start #inst"2018-01-01"
:driver-registration/end #inst"2019-01-01"
:driver-registration/driver {:person/first-name "John"
:person/last-name "Doe"}}
driver registration B inserted at 2019-09-10
{:driver-registration/id "1XXX001-2018-06-01"
:driver-registration/license-plate "1XXX001"
:driver-registration/start #inst"2018-06-01"
:driver-registration/end #inst"2020-01-01"
:driver-registration/driver {:person/first-name "Mary"
:person/last-name "Jane"}}
timeline:
2018-01 2018-06-01 2020-01-01
|--driver John Doe---|-----driver Mary Jane------|
As you can see, the end date of driver registration A (John Doe) was cut off by the start date of driver registration B (Mary Jane). We retrieve this data with the following query:
(d/q '[:find ?tx (pull ?dr [* {:driver-registration/person [*]}])
:in $ ?license-plate
:where
[?dr :driver-registration/license-plate ?license-plate]
[?dr :driver-registration/id _ ?tx]] db "1XXX001")
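The sort-and-truncate step described around this query can be sketched in plain Clojure. This is a minimal, hypothetical illustration (plain maps standing in for query results, ISO-8601 date strings compared lexicographically), not the application's actual code:

```clojure
;; Hypothetical sketch: registrations sorted by ascending tx; each later
;; registration cuts off the end date of any earlier one it overlaps.
(defn truncate-overlaps [regs]
  (reduce (fn [timeline {:keys [start] :as reg}]
            (conj (mapv (fn [prev]
                          ;; ISO-8601 strings order correctly under compare
                          (if (and (neg? (compare (:start prev) start))
                                   (pos? (compare (:end prev) start)))
                            (assoc prev :end start)
                            prev))
                        timeline)
                  reg))
          []
          (sort-by :tx regs)))

(truncate-overlaps
 [{:tx 1 :driver "John Doe"  :start "2018-01-01" :end "2019-01-01"}
  {:tx 2 :driver "Mary Jane" :start "2018-06-01" :end "2020-01-01"}])
;; John Doe's segment now ends at 2018-06-01, where Mary Jane's begins
```

Re-running the reduce with a different sort key is what changes which registration "wins" an overlap.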
We sort the list by the value of ?tx and cut off the dates where necessary. This works fine for most cases, but now imagine the user has made a mistake and wants the end date of driver registration A as the cut-off date. Like this:
2018-01 2019-01-01 2020-01-01
|----driver John Doe-----|-----driver Mary Jane------|
When the user uploads a new XML with the exact same data as driver registration A, we expect that driver registration A would now have the highest tx. But due to Datomic's redundancy elimination, Datomic will filter out the datoms of the transaction and never update the tx of driver registration A. When the user asks for the driver registration timeline, he will still receive the old one.
Is there a way to solve this issue on the query side? One of the solutions would be to add a field :driver-registration/last-upload with a date value to driver registration but that feels as if I'm rebuilding the db/txInstant system.#2019-10-1712:30favilaWhy are you not sorting by the registration start and end dates? Tx Instant is a time of record and has no connection to your domain’s “business” times. Suppose you, for example, uploaded an old registration?#2019-10-1712:30favilaMaybe helpful: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2019-10-1712:41NemsWell the start and end dates do not show at which time the driver registration was entered in the database. In my example they are nicely in order, but it's possible that a user would insert a driver registration with an overlap at the start of an old driver registration. If we just sorted on start and end date, the newer one would always be overwritten by the old one, which shouldn't happen.
Great article. So going by that article we should add a field stating when a driver registration was last-uploaded?#2019-10-1712:53favilaI guess so? I thought by “newer one” you just mean a start time > next range’s end time. I guess I don’t understand precisely your timeline overlap algorithm #2019-10-1712:54favilaIf time of record is really vital to you here you can retrieve the tx of start and end specifically#2019-10-1712:55favilaYou are retrieving the tx of the id which of course is not going to change much#2019-10-1712:56favilaMaybe the tx you really mean is (max tx-start tx-end)#2019-10-1712:57favilaSorry that was vague. I’ll be more precise in a second#2019-10-1713:03NemsSorry about not being clear. Hopefully this flow helps to understand how our application works. With “newer” I mean the latest transaction date:
Insert #1: driver registration A inserted at 2019-09-09
{:driver-registration/id "1XXX001-2018-01-01"
:driver-registration/license-plate "1XXX001"
:driver-registration/start #inst"2018-01-01"
:driver-registration/end #inst"2019-01-01"
:driver-registration/driver {:person/first-name "John"
:person/last-name "Doe"}}
timeline:
2018-01 2019-01-01
|----driver John Doe-----|
Insert #2: driver registration B inserted at 2019-09-10
{:driver-registration/id "1XXX001-2018-06-01"
:driver-registration/license-plate "1XXX001"
:driver-registration/start #inst"2018-06-01"
:driver-registration/end #inst"2020-01-01"
:driver-registration/driver {:person/first-name "Mary"
:person/last-name "Jane"}}
timeline:
2018-01 2018-06-01 2020-01-01
|--driver John Doe---|-----driver Mary Jane------|
Insert #3: driver registration A inserted at 2019-09-11
{:driver-registration/id "1XXX001-2018-01-01"
:driver-registration/license-plate "1XXX001"
:driver-registration/start #inst"2018-01-01"
:driver-registration/end #inst"2019-01-01"
:driver-registration/driver {:person/first-name "John"
:person/last-name "Doe"}}
Expected:
2018-01 2019-01-01 2020-01-01
|----driver John Doe-----|-----driver Mary Jane------|
Reality:
2018-01 2018-06-01 2020-01-01
|--driver John Doe---|-----driver Mary Jane------|#2019-10-1713:04Nemseven with taking the tx of start and end date, the last insert will always be ignored as it is completely the same as the first insert and datomic will use redundancy elimination#2019-10-1713:04Nemsnow that I'm typing this it's starting to make sense to just add an extra field stating upload-date#2019-10-1713:09favilawhat I meant was something like this:#2019-10-1713:09favila(d/q '[:find ?tx (pull ?dr [* {:driver-registration/person [*]}])
:in $ ?license-plate
:where
[?dr :driver-registration/license-plate ?license-plate]
[?dr :driver-registration/start _ ?tx-s]
[?dr :driver-registration/end _ ?tx-e]
[(max ?tx-s ?tx-e) ?tx]
] db "1XXX001")#2019-10-1713:11favilaagain, this model assumes that your notion of driver registration “age” exactly corresponds to the sum of its tx times, which might be possible but more likely you actually have a separate explicit domain-specific notion of “record effective date” which you just haven’t noticed yet#2019-10-1713:12favilaalso note the mismatch in granularity: tx time is about individual facts not “records”. Datomic doesn’t know anything about records#2019-10-1713:12favilaan entity is not necessarily a record#2019-10-1713:13favilae.g. it may be the union of attributes from multiple records, or it may be a value or “sub-record” (e.g in isComponent case) or it may just be a convenient thing to join on#2019-10-1713:24Nems"but more likely you actually have a separate explicit domain-specific notion of “record effective date” which you just haven’t noticed yet" I think that is the case here. Also, I haven't noticed this statement "also note the mismatch in granularity: tx time is about individual facts not “records”" so your example of the query makes more sense to me now. I think I can figure it out from here. Thanks for taking the time to help me favila!#2019-10-1712:58Shaitanis bigdec safe from floating point errors?#2019-10-1713:48rapskalianDatomic cloud documentation mentions:
> [:db/unique] attribute must have a :db/cardinality of :db.cardinality/one.
https://docs.datomic.com/cloud/schema/schema-reference.html#db-unique
However, in my project, I’ve been using the following schema definition just fine:
{:db/ident :user/email
:db/unique :db.unique/identity
:db/valueType :db.type/string
:db/cardinality :db.cardinality/many}
It appears to work as expected:
(d/transact conn {:tx-data [{:user/email "calvin"}]})
=> {:tx-data [#datom[13194139533323 50 #inst "2019-10-17T13:44:42.951-00:00" 13194139533323 true] #datom[10740029580116059 78 "calvin" 13194139533323 true]]}
(d/transact conn {:tx-data [{:user/email ["jenny"]}]})
=> [#datom[13194139533325 50 #inst "2019-10-17T13:47:13.676-00:00" 13194139533325 true] #datom[13453624277467228 78 "jenny" 13194139533325 true]]
And I can even pull:
(d/pull db [:user/email] [:user/email "calvin"])
#:user{:email ["calvin"]}
#2019-10-1713:50marshall@cjsauer can you do the following:
(d/pull (d/db conn) '[*] :user/email)
#2019-10-1713:50rapskalian#:db{:id 78,
:ident :user/email,
:valueType #:db{:id 23, :ident :db.type/string},
:cardinality #:db{:id 36, :ident :db.cardinality/many},
:unique #:db{:id 38, :ident :db.unique/identity}}
#2019-10-1713:51marshallfascinating#2019-10-1713:51marshallwhat version of datomic? (cloud or onprem)?#2019-10-1713:52rapskalianThis is cloud. Checking on version (what would be the fastest way to see that? CF?)#2019-10-1713:52marshallyep#2019-10-1713:53rapskalianThis is in the output section of my compute stack
DatomicCFTVersion 512
DatomicCloudVersion 8806
#2019-10-1713:53marshallthanks#2019-10-1713:53marshalli’m looking into it#2019-10-1713:53rapskalianI also did a split stack deploy, if that matters#2019-10-1714:11marshall@cjsauer you’re correct that it can be done. Don't.
🙂
The semantics of unique identity are such that having a card-many attr there is pretty dicey
I’ll look into filing this as something that should potentially throw or warn#2019-10-1714:16rapskalian@marshall ha okay, thanks for checking. I’m a bit puzzled tho. Email addresses seem to challenge those semantics. Is there a technical reason that unique-many attributes can’t exist? Regardless, throwing an exception there would be great.#2019-10-1714:17marshalli suppose not a technical reason so much as a semantic one
unique identity says “this specific attr/value pair is unique in the database”#2019-10-1714:17marshallbeing able to assert multiple values for that is…. complicated?#2019-10-1714:24rapskalian>“this specific attr/value pair is unique in the database”
This could still hold for a card-many attribute theoretically. I opened an issue/PR for datascript before posting here on this subject, and @tonsky mentioned:
> Upsert would not work because it’s not clear which value to look at. Or we must look at all provided values and make sure they all resolve to the same entity.
This might be the complication. When upserting multiple values one must ensure that they do indeed resolve to the same entity.
https://github.com/tonsky/datascript/issues/320#2019-10-1714:24marshallwell, you definitely open up to conflicts#2019-10-1714:25marshalli.e. what if you have two separate “entities” in your transaction, each using one of your unique emails, but they both contain a separately conflicting datom for some other attr#2019-10-1714:25marshallsince they both resolve to the same entity based on that unique id, you then have a conflict of the other datom#2019-10-1714:34rapskalianIt would seem appropriate for that transaction to fail in that example. The two entities are resolved to one, and then can be constrained from there.#2019-10-1714:34rapskalianMaybe the detection of that conflict is the hard part tho. I’m ignorant of the details.#2019-10-1714:36rapskalianGiven this limitation, is it possible to model users being uniquely identified by multiple email addresses? This felt like a very natural way to model my domain, where a user can be part of multiple teams/orgs, each with their own email domains. It would be great if those emails did indeed resolve to the same entity.#2019-10-1715:43timcreasyYou could model this with the concept of an “organization user” which can be mapped to a “user”.
Each “organization user” can have their unique identifier (email), a reference to user and the organization they belong to.#2019-10-1716:53rapskalianHm yeah that could work. It would function similar to a traditional join table.#2019-10-1716:54rapskalianCan’t shake the feeling that it falls into the category of accidental complexity…#2019-10-1714:36benoitYou have the same conflict if you use different identity attributes on the same entity.#2019-10-1714:41favilaCorrect, and I’ve been burned by this before. datomic will pick one for entity id purposes in some undefined way#2019-10-1714:41favilaI actually don’t like identity attributes at all anymore 🙂#2019-10-1714:42favilaI’d prefer entity resolution only happened via :db/id, and if you use a db/id value with a lookup ref whose attribute is identity, only then would it upsert for you#2019-10-1714:43favilahowever upsert predates lookup refs, so 20/20 hindsight and all that…#2019-10-1714:45rapskalianWhat is the conflict? I’m not disagreeing, just failing to conceptualize. Is there a minimal example of a transaction that shows this?#2019-10-1714:51benoitAssuming :product/id and :product/asin are two identity attributes, the following tx will throw with :db.error/datoms-conflict:
[{:product/id #uuid "59d1da4a-7de0-4625-ad83-b63ac8346368"
:product/name "A"}
{:product/asin "B00VEVDPXS"
:product/name "B"}]
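For illustration — assuming both identity values in the tx above already resolve to the same existing entity (a hypothetical :db/id 123) — upsert rewrites both maps onto that one entity, and the conflict falls on :product/name rather than on the identity attributes themselves:

```clojure
;; hypothetical expansion after both lookups upsert to entity 123:
[[:db/add 123 :product/name "A"]
 [:db/add 123 :product/name "B"]]
;; two different values for a cardinality-one attribute in one
;; transaction => :db.error/datoms-conflict
```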
#2019-10-1714:54rapskalianThat feels broken 🤔
The ambiguity must lie in the expanded form of this transaction perhaps?#2019-10-1714:55benoitNot sure why it seems broken. It makes sense. I was just pointing out the fact that Datomic detect those conflicts already so I'm not sure why it could not do it for card many identity attributes.#2019-10-1714:56rapskalianMy mental model for this must be off. That transaction looks like two products, each with different forms of identity.#2019-10-1714:59benoitSorry, yes. I should have said that these 2 values refer to the same entity in the database.#2019-10-1714:59rapskalianAhh okay, and so the :product/name datom is the conflict.#2019-10-1715:01benoityes#2019-10-1715:03rapskalian>I was just pointing out the fact that Datomic detect those conflicts already so I’m not sure why it could not do it for card many identity attributes.
This was my thought as well. The conflict check is similar, just over a collection of identity values instead of one.#2019-10-1715:07rapskalianMostly tho, semantically, card-many-unique attributes seem very natural. Using Game of Thrones as an example, royal figures can have many identifying titles: Robert Baratheon, first son of XYZ, slayer of ABC, builder of QRS, etc etc etc.#2019-10-1716:52hadilsCan you build 2 datomic cloud instances in the same region with different names? can you share the key?#2019-10-1716:52hadilsIs this wise?#2019-10-1716:54ghadiwhich key @hadilsabbagh18?#2019-10-1716:54hadilsThe key named datomic.#2019-10-1716:55ghadican you be more specific? some cloudformation parameter?#2019-10-1716:56hadilsI am going to create a separate ssh keypair. I think this is wise...#2019-10-1716:57ghadioh that's the AWS EC2 Key Pair -- yes you can share those#2019-10-1716:57hadilsOk.#2019-10-1716:57hadilsThanks!#2019-10-1716:57ghadiwe put one Datomic system in one region, we attach a couple query groups to it, and it hosts hundreds of databases#2019-10-1716:57ghadibut you can put two isolated systems in the same region, too#2019-10-1716:58ghadithe AWS EC2 Key Pair is only used if you need to ssh to the actual datomic box#2019-10-1716:58ghadithe Bastion/Analytics Gateway uses a different key pair#2019-10-1717:06hadilsI started deploying to us-west-2 according to instructions and I repeatedly get this error:
Embedded stack arn:aws:cloudformation:us-west-2:962825722207:stack/stackz-StorageF7F305E7-13QLTET3W9OAQ/ad7f7950-f0ff-11e9-b33c-02a77ed54d64 was not successfully created: The following resource(s) failed to create: [DatomicCmk, CatalogTable, FileSystem, LogGroup, LogTable].
Does anyone know why it fails?#2019-10-1717:12ghadihttps://docs.datomic.com/cloud/troubleshooting.html#2019-10-1717:12ghadiyou'll need to look at your Event Log in the CF Stack#2019-10-1717:16hadilsThanks @ghadi!#2019-10-1814:30kelvedenI've only been using Datomic Cloud for a few weeks now and I'm really enjoying working with it so far. However I've unearthed enough limitations of Datomic Cloud to the point now where I think we might have to abandon it. I'm hoping someone here might be able to shoot down my observations enough for us to reconsider...
Here are the limitations as I see them:
1. No data excision
- This is a real issue if we get a GDPR right to erasure request.
- Of course, we could just avoid storing any data that could possibly come under such a request but that's not ideal.
2. No backups
- I get the whole argument about the robustness of the AWS storage that backs Datomic Cloud but we still want a disaster recovery plan if something goes wrong.
- (Or if we just want to restore to a point in time.)
3. No transaction context
- Other databases have the concept of a transaction "context" which one can wrap multiple commands in - if one command fails, they all fail.
- E.g. if we want to store some data AND publish it externally - we've no way of ensuring that the storage operation and publishing operation only succeed if BOTH succeed.
Of course, If we were using Datomic on-prem points 1 and 2 would be resolved. And I'm hoping that I'm just missing something obvious with point 3.#2019-10-1814:44jaihindhreddyWhen you say "publish it externally", unless you and the external thing are doing some kind of 2-phase commit thing, I'm not sure you can make both atomic. If the external publishing succeeds but the response times out, your transaction will fail right?#2019-10-1814:50kelvedenYes that's true although not actually an issue in my scenario. I'll be a bit more specific. What I'm thinking of is actually a DB write + kafka produce scenario. So the typical approach would be 1) write to DB; 2) produce to kafka. If (2) fails, roll the whole transaction back. If (2) times out for some reason then retrying the whole transaction is fine as it'll produce a duplicate kafka message (which downstream consumers will recognise as a duplicate and ignore).#2019-10-1814:55kelvedenSo, I guess what I really see as the limitation is that there's no way to rollback a datomic transaction.#2019-10-1816:09Joe LaneWhy do datomic Transaction functions which publish to Kafka not work here?#2019-10-1816:09Joe LaneIf the publishing to Kafka fails then you can throw an exception to abort the datomic commit. #2019-10-1821:09kelvedenThanks. I'd not considered using ions - I've not used them before. It should work though.#2019-10-1907:48vemv'[:find [(pull ?u2 pe) ...]
:in $ pe ?uuid
:where
[?u :user/uuid ?uuid]
[?u :user/primary-email ?email]
[?u2 :user/primary-email ?email]
[?u2 :user/uuid ?uuid2]
(not [(= ?uuid ?uuid2)])]
In this query, :user/primary-email is a :fulltext attribute, which degrades overall performance.
How can I tweak the [?u2 :user/primary-email ?email] clause to use a strict = predicate?#2019-10-1907:55vemv---
A simplified version of the question:
'[:find [(pull ?u pe) ...]
:in $ pe ?email
:where
[?u :user/primary-email ?email]]
How do I skip the :fulltext querying here.
(all other logic from the former query can be done in plain Clojure, out of the result of the simplified query)#2019-10-1916:34favilaThere is no full text querying unless you use the fulltext function#2019-10-1916:35favilaDatalog joins are always exact match only#2019-10-1919:45vemvthat surprises me, since a single-clause query consisting of [?u :user/uuid ?x] takes 1ms, while a single-clause query consisting of [?u :user/email ?x] takes 80ms.
In both cases, ?x is a fixed param I pass to the query.
I imagined the difference is due to :fulltext being a part of my :user/email schema, but maybe there's some other explanation#2019-10-2001:02vemvDebugged, this was indeed something else, related to indices. Solved now.
Thanks for the pointer!#2019-10-2013:06favilaDid you not have a normal :db/index true on the :user/email attribute?#2019-10-2013:22vemvWe believed we had indices in place, but there was some issue with the tooling that emitted these indices. So these attributes stayed in their default of false#2019-10-2114:18BrianI'm using Datomic Cloud and therefore lots of AWS stuff. Does anyone know of where I can ask questions regarding AWS stuff?#2019-10-2114:21ghadiask away @brian.rogers , it’s a big room#2019-10-2114:22ghadithere’s also an #aws channel#2019-10-2114:25BrianI'll ask there. Thanks!#2019-10-2115:05jaret#datomic Cloud 535-8811 now available. https://forum.datomic.com/t/datomic-cloud-535-8812/1214#2019-10-2115:05jaret#datomic 0.9.5981 now available. https://forum.datomic.com/t/datomic-0-9-5981-now-available/1213#2019-10-2115:32faviladocs for index-parallelism say:
>If you have high write volumes, a transactor with plenty of CPUs to spare, and are using a scalable , you can set index-parallelism as high as 8 to speed up indexing jobs:#2019-10-2115:32favilaWhat is the missing word/phrase after “scalable”?#2019-10-2115:43Joe LaneI'm going to guess "backend" but, again, it's a guess.#2019-10-2115:50jaretShould say storage with a link to the storage docs. I am fixing it. Org mode error 🙂#2019-10-2115:50jaretthanks for catching that#2019-10-2118:01souenzzoAny news/hope about datomic-free?#2019-10-2115:44Joe Lane@jaret is index-parallelism pre-enabled on datomic cloud with this release? Is it something cloud users should be thinking of?#2019-10-2115:50jaretThis feature is already pre-enabled in cloud.#2019-10-2115:50stuarthallowayand always has been#2019-10-2115:51Joe LaneHa thanks for answering my current and then immediate next question. See you guys at the Conj!#2019-10-2117:02BrianWe have a Datomic db running in our closet that we want to host in AWS. We already have a stack set up and some db's already running in AWS. We have extracted the database from our closet server and are now looking to restore it in AWS. Can someone point me in the right direction for how we can do that? https://docs.datomic.com/on-prem/backup.html looks promising but that is for on-prem and I'm not sure that's what I need#2019-10-2117:12favilaare you moving from on-prem to cloud datomic? Not merely on-prem in closet to on-prem on aws. There’s no supported migration from on-prem to cloud systems https://docs.datomic.com/on-prem/moving-to-cloud.html#2019-10-2117:21BrianDoes "no supported migration" means there is no easy-button or does it mean that it is not possible?#2019-10-2117:27ghadiyou can easily move your on-prem to AWS running on-prem#2019-10-2117:28ghadiby... 
running on-prem in your AWS EC2 instance#2019-10-2117:28ghadi(on-prem to Datomic Cloud is a different thing as @U09R86PA4 mentions)#2019-10-2117:29favilait’s possible if you do it yourself#2019-10-2117:29favilai.e. some variation of read each tx from the old db, make a new tx for it, transact it into the new cloud db#2019-10-2117:30favilathere are feature and other differences between on-prem and cloud that you will have to account for#2019-10-2117:30favilabut there’s no easy backup-and-restore#2019-10-2123:03Msr TimHi I am setting up a new datomic system in a VPC that needs to be accessed by kubernetes container running in another vpc#2019-10-2123:03Msr Timi followed the instructions here https://docs.datomic.com/cloud/operation/client-applications.html#vpc-peering#2019-10-2123:03Msr Tim2019-10-21 23:02:05,824 [main] ERROR app.core - {:what :uncaught-exception, :exception #error {
:cause :server-type must be :cloud, :peer-server, or :local
:data {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message :server-type must be :cloud, :peer-server, or :local}
:via
[{:type java.lang.RuntimeException
:message could not start [#'spot-app.db.core/conn] due to
:at [mount.core$up$fn__385 invoke core.cljc 80]}
{:type clojure.lang.ExceptionInfo
:message :server-type must be :cloud, :peer-server, or :local
:data {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message :server-type must be :cloud, :peer-server, or :local}
:at [datomic.client.api.impl$incorrect invokeStatic impl.clj 42]}]
:trace
[[datomic.client.api.impl$incorrect invokeStatic impl.clj 42]
[datomic.client.api.impl$incorrect invoke impl.clj 40]
#2019-10-2123:04Msr Timits failing at this line (def conn (d/connect client {:db-name "movies"})) #2019-10-2212:10favilaThe cause of your error is something in client: a missing or bad :server-type.#2019-10-2212:12favilahow do you construct your client? that is where the problem lies#2019-10-2212:32Msr Timhttps://docs.datomic.com/cloud/getting-started/connecting.html#2019-10-2212:32Msr Timlike its described there#2019-10-2212:32Msr Tim(require '[datomic.client.api :as d])
(def cfg {:server-type :ion
:region "<your AWS Region>" ;; e.g. us-east-1
:system "<system-name>"
:creds-profile "<your_aws_profile_if_not_using_the_default>"
:endpoint ".<system-name>.<region>."
:proxy-port 8182})
(def client (d/client cfg))
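For contrast — and this is a sketch with placeholder values, not a verified config — a client running outside the ion environment (such as a k8s pod) would use :server-type :cloud rather than :ion:

```clojure
(require '[datomic.client.api :as d])

;; placeholder system/endpoint values — fill in from your own setup
(def cfg {:server-type :cloud
          :region      "<your AWS Region>"
          :system      "<system-name>"
          :endpoint    "<system endpoint>"
          :proxy-port  8182})

(def client (d/client cfg))
```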
#2019-10-2212:33Msr Timit works perfectly locally on my machine via the bastion#2019-10-2212:33Msr Timbut gives me that error on k8s#2019-10-2212:33Msr Timso it cant be the configuration#2019-10-2212:33favilak8s is not an ion#2019-10-2212:33Msr Timoooh#2019-10-2212:33favilayou are connecting from “outside” the ion cluster#2019-10-2212:34favilaso you need to use a different peer connection type#2019-10-2212:34Msr Timooh#2019-10-2212:34Msr Timgotcha#2019-10-2212:34favilait works locally because there’s a local ion dev environment#2019-10-2212:34Msr Timunderstood#2019-10-2212:35favilayou likely need :cloud#2019-10-2212:36Msr Timi tried it with :cloud but I still get the same error#2019-10-2212:36favilathe same exact error?#2019-10-2212:37Msr Timlet me try one more time now#2019-10-2123:05Msr Timshould i assume that the error message is wrong?#2019-10-2123:06Msr Timsince I am not using async api as suggested here https://docs.datomic.com/cloud/troubleshooting.html#async-ion#2019-10-2200:31rapskalianIs it unwise to use :db/noHistory on a :db.unique/identity attribute that is meant to identify ephemeral, high-churn entities?#2019-10-2203:48ackerleytngI'm passing as-of a Date from clj-time's to-date but I'm getting a casting error, something to do with datomic idb. Has anyone had this issue before?
class java.util.Date cannot be cast to class datomic.db.IDb (java.util.Date
is in module java.base of loader 'bootstrap'; datomic.db.IDb is in unnamed
module of loader 'app')
#2019-10-2212:10benoitAre you passing the date as the first argument instead of the second?#2019-10-2301:57ackerleytngNope, here's my code
(let [time (tc/to-date
             (t/from-time-zone (t/date-time 2019 10 22 16 50 0)
                               (t/time-zone-for-offset +8)))
      db-then (d/as-of (d/db conn) time)]
  (d/q '[:find ?doc
         :where [_ :db/doc ?doc]]
       db-then))
#2019-10-2302:01ackerleytngand db-then is a db...
(let [time (tc/to-date
             (t/from-time-zone (t/date-time 2019 10 22 16 50 0)
                               (t/time-zone-for-offset +8)))
      db-then (d/as-of (d/db conn) time)]
  (type db-then)) => datomic.client.impl.shared.Db
#2019-10-2214:34arnaud_bosThis made me laugh very hard. Thought it'd be of interest to the immutability fans out there 😂
https://github.com/gfredericks/quinedb#2019-10-2214:35arnaud_bosFound via https://twitter.com/andy_pavlo/status/1186636813458432000#2019-10-2214:37arnaud_bosDon't forget the FAQ section.#2019-10-2214:41arnaud_bosJust found out the author is in this slack...#2019-10-2214:53dakraHi. I want to "play around" with datomic cloud but I'm having trouble completing the tutorial. The datomic cloud setup on AWS all seemed to work fine. I have datomic-access running and I can do the curl -X socks call from the tutorial and I get a successful response with s3-auth-path. But when I try (d/create-database client {:db-name "testion"}) I get:
Unable to find keyfile at . Make sure that your endpoint and db-name are correct.
#2019-10-2215:14Msr Timyou need to permissions to access those s3 files#2019-10-2215:15Msr Timdid you see that file in s3 ?#2019-10-2215:17dakraI'm new to AWS. I made an IAM user and gave him AmazonS3FullAccess permissions. Is that enough? How can I test access to those files?#2019-10-2215:38dakraI'm now the root user and still same problem. I'll try and delete and re-create the cloud-formation. maybe this helps#2019-10-2215:20dmarjenburghWe recently upgraded from the Solo to the Production topology. When using a lambda proxy, the request contains a requestContext with authorizer claims parsed from the oauth2 token in the request. When using the VPC Proxy, this information is missing. Is there a way to retrieve it?#2019-10-2217:00Joe Lane@dmarjenburgh We ended up parsing the token (jwt in our case) in a pedestal interceptor to work around this missing piece in http-direct.#2019-10-2217:01dmarjenburghYeah, figured that would be the thing to do. Thanks#2019-10-2217:02Joe LaneIf you need to see the contents of the request I suggest casting the request object and looking at it in cloudwatch (seemed to be the only way to debug it)#2019-10-2217:33dmarjenburghDid you use a library for parsing the jwt, or just base64decode it yourself?#2019-10-2218:45Joe LaneDecode it myself#2019-10-2220:04rapskalianAnyone tried this? Is there a foot-gun lurking here?#2019-10-2220:06rapskalianI’m thinking of also using :db/isComponent for all attributes on these ephemeral entities so that I can retract them in one fell swoop.#2019-10-2220:53favilaWhat does that gain over retractEntity?#2019-10-2222:12rapskalianAh yeah, good point. #2019-10-2220:07rapskalianI have a feeling tho that the sage advice might be to not store this type of data in datomic. I’m attempting to avoid bringing in another storage mechanism.#2019-10-2220:52favilawhy do you think there might be some special problem here?#2019-10-2222:14rapskalianJust looking ahead to see if this is a known bad idea. 
I’m gathering that it’s probably just fine tho. #2019-10-2222:26favilaa minor caveat is that noHistory is not a guarantee of no history ever, just that history will be dropped from indexes. so you may still see some history between the last indexing job and now#2019-10-2222:27favilaalso I don’t know if history disappears from transaction logs#2019-10-2300:43rapskalianI see in the docs that the indexes are stored in S3. Would it be correct to say that :db/noHistory = :db/noS3? Or just that the datom will eventually not exist in S3?
> The effect of :db/noHistory happens in the background
Maybe that’s what this means. The datom is eventually scrubbed from S3 in the background..?#2019-10-2300:47rapskalian> also I don’t know if history disappears from transaction logs
Looking in the docs again, this would mean that it’s still stored somewhere at the end of the day, yeah? DDB in this case. And these datoms would still show up in d/tx-range.#2019-10-2301:56favilaI don’t know the ins-and-outs of cloud#2019-10-2302:00favilafor on-prem, tx log data is written to storage and kept in peer+transactor memory until the next index kicks in. Reads transparently merge the last index time with the in-memory index derived from the tx log; but when the in-memory index is flushed to storage, the history of no-history attributes is not written. I don’t know if the transactor also takes the additional step of rewriting the tx-log to remove attribute history, but it seems unlikely to me. For cloud, I don’t know the precise mechanics of where that in-memory log goes or what precisely happens during indexing#2019-10-2302:01favilaanyway, this is easy to test. If it matters to you it’s probably better than listening to me speculate#2019-10-2302:03favilaactually only on-prem is easy to test. on cloud there’s no d/request-index, so you will have to induce it some other way (probably via lots of writes).#2019-10-2308:06mkvlrit will stay in history logs#2019-10-2516:45rapskalianThanks for the info guys 🙏#2019-10-2221:51Msr Timhow much of an AWS expertise does one need to run and maintian datomic . I finally setup a test production topology. it setup tons and tons of AWS things that i don't really grasp.#2019-10-2222:32Msr TimIs there a future posiblity of a hosted version of datomic#2019-10-2223:01favila…you mean on-prem?#2019-10-2312:21benoitI think he meant a version where all you have to do is get credentials, download the client and you're good to go. This is what I thought cloud was going to be initially. Right now, you still have a lot of moving pieces with all the AWS stuff you have to setup yourself.#2019-10-2313:08Msr Timyeah. 
Exactly.#2019-10-2313:09Msr TimI don't feel confident at the moment that i can maintain AWS setup on my own in a small team#2019-10-2313:10Msr Timmaybe someday if get up to speed with AWS properly#2019-10-2313:15favilaPossibly on-prem has a lower support burden? It is less embedded into aws#2019-10-2313:15favilarun a transactor, run a peer, use dynamo for storage#2019-10-2316:38Msr Timah.. maybe#2019-10-2316:38Msr Timbut i would really prefer not running my own database#2019-10-2313:24zachcpHi datomic users - anyone have any tips on early stage data modelling for Datomic? I’d be interested in blog posts about 1) the early design phase of a project prior to creation or 2) tools to faciliate exploration or schema creation (like the codeq schema https://github.s3.amazonaws.com/downloads/Datomic/codeq/codeq.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAISTNZFOVBIJMK3TQ%2F20191023%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20191023T131443Z&X-Amz-Expires=300&X-Amz-SignedHeaders=host&X-Amz-Signature=51d040c7b4a25ac20cb3f81026f005b3b062f211d6fe22db2d1f17bfc54a3d9f)#2019-10-2313:36alexmillerwe usually use Omnigraffle to make those datomic schema diagrams#2019-10-2313:40alexmillerthe techniques here though are very similar to classical ER diagrams, with the distinction that Datomic is more flexible than your typical "tables" of ERD (attributes can be common across "entities", ref types refer to other entities directly, not via PK/FK, cardinality, components, etc)#2019-10-2313:40alexmillermost of that stuff can just be annotated on the diagram though. from a big picture you're still drawing tables and lines#2019-10-2314:21zachcpThanks @alexmiller. Do you have any suggestions on how to think about early stage data design - e.g. trade-offs around making your data model “flatter” or not. 
Or in your experience as you start modeling the data, a natural degree of partitioning begins to emerge.#2019-10-2314:29alexmillerin general, I find modeling with Datomic usually lets you be pretty close to a logical ERD and there is no reason not to break things out the way you like. you can think more "table"-like, but also do a mixture of graph-like things (and in my experience most enterprise apps are 85% "table"y and 15% "graph"y - Datomic gives you the best of both worlds)#2019-10-2314:30alexmillerit's definitely good to go as far as you can in diagrams before you ever write any code or schemas - changing diagrams is a lot faster :)#2019-10-2314:38zachcp:+1:#2019-10-2320:23schmeejust tried to run Datomic Pro 0.9.5981 locally, using client-pro 0.8.28, and I’m running into this issue: https://forum.datomic.com/t/ssl-handshake-error-when-connecting-to-peer-server-locally/1067#2019-10-2320:23schmeenot running in Docker, and setting :validate-hostnames false in the client config doesn’t do anything it seems#2019-10-2320:24schmeeI’m following this guide: https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html#2019-10-2320:44schmeeupgrading client-pro to 0.9.37 fixed the issue#2019-10-2415:45ssdevHey folks, I'm interested in trying datomic cloud, but want to test it out first. I'm going through the subscribe process now and seeing the estimated monthly costs at $118.00. Is there a way to test this out for free? Is this just an estimate based on receiving high traffic? I'm currently in the aws free tier period and hoping to just go through the ion tutorial without incurring any fees#2019-10-2415:48Joe Lane@UNCFMJ7QE the datomic solo topology should not cost $118.00. It should cost ~$1 per day, or roughly $30 per month.#2019-10-2415:49ssdevOk. I selected solo topology and seeing this, but I'll just assume and hope it's over-estimating#2019-10-2415:50kennySolo should run on a t2.small. 
Not sure why that is saying i3.large.#2019-10-2415:50Joe LaneI do not think you selected solo. The solo topology should use a t3.small#2019-10-2415:50Joe Laneyeah. (they bumped from t2 to t3).#2019-10-2415:52ssdevhere's a full page screen shot -#2019-10-2415:52ssdev#2019-10-2415:53Joe Lane@U1QJACBUM ^^ Might want to check that out#2019-10-2415:54Joe Laneeither way @UNCFMJ7QE , I think later on in the process when you actually select which cloudformation template to use, if you pick the solo cloudformation template that is what should be used.#2019-10-2415:56ssdevoooo k. I'll continue on and cross my fingers and light some incense#2019-10-2415:57jaretyeah, that is probably an error on the AWS page. Let me test.#2019-10-2416:00jaretYep, I just confirmed, it actually launches a SOLO template and uses the t3.small, but the calculator and marketplace listing seem to be wrong.#2019-10-2416:00jaretIronically, the Prod template shows the “solo” calculation#2019-10-2416:00jaret#2019-10-2416:10jaretI’ve logged a request with AWS Marketplace to fix @UNCFMJ7QE, but the estimate you see on the marketplace page is flipped. You can look at production to see solo or look at a previous version of the software to see the correct estimates. Sorry about the confusion.#2019-11-0115:39ssdevHey @U1QJACBUM have you heard back from aws concerning this switch? We were also looking at query group prices today and wondering if the prices quoted are correct or not. Also curious what the difference is between "Query Group 1" / "Query Group 2", "Production 1" / "Production 2"?#2019-10-2418:28pvillegas12Can I have a Solo Topology with two t2.medium instances to allow me to deploy without taking my service down or do I have to use the Production Topology instead (this would allow me to deploy and it would be able to serve traffic in between? 
)#2019-10-2421:07stuarthallowayNot at present, but we understand that use case and have been thinking about it.#2019-10-2420:03cgrandI’m encoutering a weird behavior: it seems like :db/ident go into a cache and they are never removed from the cache:#2019-10-2420:04favilathat is correct#2019-10-2420:04favilad/entid uses this cache#2019-10-2420:04favilait’s so you can rename an ident without altering your code#2019-10-2420:05cgrandd/transact too#2019-10-2420:05favilaeverything flows through d/entid#2019-10-2420:05cgrandI want to get rid of an attribute and be sure it’s not used anew#2019-10-2420:05favilaexcept some query clauses#2019-10-2420:06favilaput that same ident on a non-attribute; now any attribute-like use of it will fail#2019-10-2420:07cgrandok thanks for the workaround#2019-10-2420:07favilayou can retract afterwards#2019-10-2420:08favilathink of the ident cache as a map of key to eid which only assocs assertions and ignores retractions#2019-10-2423:59Luke SchubertI'm curious is there a general ballpark pricing for datomic on prem enterprise/OEM?#2019-10-2509:23Shaitanhow to limit search for a particular day? I have field in the entity with type :db.type/instant.#2019-10-2509:57souenzzo@kalaneje you can (d/q '[:find ?id :in $ ?limit :where [_ :user/id ?id ?tx] [?tx :db/txInstant ?inst] [(> ?inst ?limit)]] db #inst"...")
Will return all user-ids that were transacted after #inst".."#2019-10-2511:15magnarsI'm currently at a client using a very old Datomic version (0.8.4138) - and was wondering how I should go about updating. Could I safely bump the transactor version while staying on the old client API? Or the other way around? Or do I need to time it exactly to upgrade both at the same time?#2019-10-2511:20favilaNormally you can update peer and txor versions in any order except in the cases mentioned here https://docs.datomic.com/on-prem/release-notices.html#2019-10-2511:21favilaHowever that version is so old I recommend a backup, shutdown, upgrade, and restore if you can get away with it#2019-10-2511:21magnarsThanks, that makes sense. 👍#2019-10-2519:23markbastianIf I have a Datomic Cloud system that I am not currently using do I just stop the instances of the system + bastion to prevent being charged for it or do I need to do anything else, like delete the stack?#2019-10-2519:25Joe Lane@markbastian modify the autoscaling group by setting 0 min instances and 0 desired instances on both the bastion and other nodes. That will bring them all down. It won't save all the cost because you're still paying for storage of existing data, but it's as cheap as I think you can get.#2019-10-2519:26jaretand @markbastian if you want to get rid of the storage cost etc and totally remove datomic you can follow this doc#2019-10-2519:26jarethttps://docs.datomic.com/cloud/operation/deleting.html#2019-10-2813:53onetomis it possible to use enums in tuple lookup refs?
eg, this works:
(d/entity db [:rule/algo+expr [17592186045417 "XXX"]])
but if i use an ident ("enum") in place of that eid, then i just get nil:
(d/entity db [:rule/algo+expr [:rule.algo/regex "XXX"]])
where
(d/pull db '[*] :rule.algo/regex)
=> #:db{:id 17592186045417, :ident :rule.algo/regex}
#2019-10-2814:09marshallErm. That seems like it should work. Let me look into it#2019-10-2815:55onetomin my specific use-case, i think i can use a keyword instead of a ref, but it still looks like a bug and i suspect there are legitimate use-cases which might want to do this#2019-10-2816:08onetomhmmm... im still not sure about how the lookup-ref should look like 😕
im getting this error, when I'm trying to transact:
{:txn/id txn-id
 :txn/matches [[:rule/algo+expr [:regex expr]]]}
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/not-an-entity Unable to resolve entity: [:rule/algo+expr [:regex "XXXX"]] in datom [-9223301668109555930 :txn/matches [:rule/algo+expr [:regex "XXXX"]]]
#2019-10-2816:09onetomwhere :txn/matches is
{:db/ident :txn/matches
 :db/valueType :db.type/ref
 :db/cardinality :db.cardinality/many}
#2019-10-2816:47onetomlookup ref works when using the datom style:
(tx [[:db/add "x" :txn/id 1]
     [:db/add "x" :txn/matches [:rule/algo+expr [:regex "XXX"]]]])
but fails with :db.error/not-an-entity Unable to resolve entity: :regex when using the entity-map style
(tx [{:txn/id 1
      :txn/matches [:rule/algo+expr [:regex "XXX"]]}])
#2019-10-2816:48onetom(at least when the tuple attr's 1st element is a keyword, not a ref)#2019-10-2816:49marshallfor the card-many you need an extra [] around it#2019-10-2816:50marshallhm. or maybe not#2019-10-2816:50marshallwhats’ the schema definition of :rule/algo+expr#2019-10-2816:55onetom{:db/ident :rule/algo+expr
 :db/valueType :db.type/tuple
 :db/tupleAttrs [:rule/algo :rule/expr]
 :db/unique :db.unique/identity
 :db/cardinality :db.cardinality/one}
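Since the eid-based lookup ref above works while the ident-based one returns nil, the composite tuple value evidently stores the ref component as an entity id, not as the ident keyword. A possible workaround sketch (peer API; lookup-by-algo+expr is a hypothetical helper, not part of Datomic):

```clojure
;; Sketch: resolve the ident to its entity id first, then build the
;; composite-tuple lookup ref with the eid. Assumes the schema above.
(require '[datomic.api :as d])

(defn lookup-by-algo+expr
  "Hypothetical helper: find a rule entity by algo ident + expr string."
  [db algo-ident expr]
  (d/entity db [:rule/algo+expr [(d/entid db algo-ident) expr]]))

;; e.g. (lookup-by-algo+expr db :rule.algo/regex "XXX")
```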
#2019-10-2816:56onetomand yes, i've tried with and without an extra bracket and it works both ways when using entity-map style and only works without when using datom-style, which is quite logical#2019-10-2816:56marshallright#2019-10-2813:53onetomit looks like :rule.algo/regex is treated just as a scalar (keyword) type#2019-10-2813:54onetommy schema looks like this:
{:db/ident :rule/algo
 :db/valueType :db.type/ref
 :db/cardinality :db.cardinality/one}
{:db/ident :rule.algo/regex}
{:db/ident :rule.algo/substr}
{:db/ident :rule/expr
 :db/valueType :db.type/string
 :db/cardinality :db.cardinality/one}
{:db/ident :rule/algo+expr
 :db/valueType :db.type/tuple
 :db/tupleAttrs [:rule/algo :rule/expr]
 :db/unique :db.unique/identity
 :db/cardinality :db.cardinality/one}
#2019-10-2813:57onetomin the example from the docs (https://docs.datomic.com/on-prem/schema.html#composite-tuples)
there is this txn:
[{:reg/course [:course/id "BIO-101"]
  :reg/semester [:semester/year+season [2018 :fall]]
  :reg/student [:student/email "
where :fall is one of the tupleAttrs, but its type is just :db.type/keyword
{:db/ident :semester/season
 :db/valueType :db.type/keyword
 :db/cardinality :db.cardinality/one}
#2019-10-2814:07onetomthe same doc page further down says:
### External keys
All entities in a database have an internal key, the entity id. You can use :db/unique to define an attribute to represent an external key.
An entity may have any number of external keys.
External keys must be single attributes, *multi-attribute keys are not supported*.
#2019-10-2814:08marshallWell, that's not exactly true anymore bc of tuples#2019-10-2814:09onetomok, then i understood it correctly#2019-10-2814:14onetomsince we are talking about tuples, i've also noticed that datomic-free doesn't support tuple value types.
is it going to be updated, or are tuples a pro-only feature?#2019-10-2814:28akielI have asked Cognitect regarding this issue. The answer was that they don’t plan to add features to the free edition at the moment.#2019-10-2814:34akielYou can use the Starter Edition, which is also free.#2019-10-2814:35onetomsure, it's just a bit more troublesome to download for a team, which is just about to learn datomic aaand clojure at the same time... from me...#2019-10-2814:43akielI know and I also don’t like it. It would be good to write a mail to them, explaining your situation. Doing so may help to change things.#2019-10-2815:07onetomWhat would you propose as an alternative?
I'm not sure how the situation could be improved.
It's an awesome technology, so I understand why Cognitect is keeping it on a short leash...
The client lib is downloadable without hassle at least.#2019-10-2815:11onetomI would be happy with the free version too, btw, but since I've diligently read thru the last 3 years of changelogs and learnt about the tuple support, now I want it badly :)
But I guess I might just step back a bit and use txn functions to implement composite keys, like 3 years ago...#2019-10-2815:16souenzzoWithout a free edition it's harder to have awesome tools like https://github.com/vvvvalvalval/datomock and https://github.com/ComputeSoftware/datomic-client-memdb
Also harder to convince people to use/learn it.
It goes from "way easier to configure than SQL: just add the dependency and use it"
to "oh you will need to create an account, add a custom repo and its credentials. You cannot commit your credentials. Then you will have access to one year of updates"...
😞#2019-10-2815:21onetomWhich process - I guess - acts as a filter or throttle and only seriously interested ppl bother with using Datomic#2019-10-2815:23onetomI agree, it's a pity, but I'm still very grateful that Datomic exists at all :)#2019-10-2815:27onetomI was also pleased to see that tools.deps takes the ~/.m2/settings.xml file into account and it's even explained how to separate your login credentials from the per-project maven repo settings in your deps.edn#2019-10-2815:31onetomAll this info is a little too scattered and requires a lot of background knowledge and I feel bad about it, because I have to explain all these quirks to my colleagues too.
I'm sure they will ask "how am i supposed to discover all this on my own" and they will feel insecure if I have to tell them that they indeed would have a hard time doing this alone...#2019-10-2815:34onetomIm planning to have datascript around too, so they can quickly experiment, but I'm not sure how different is it from Datomic, coz I never used it...#2019-10-2815:38souenzzoDatascript had used datomic-free to check if it implement's some features/behavior close to datomic
unfortunately it can't be done with new datomic features...
non-free datomic is about to kill its small community 😞 (yes, I, previously a peer and now a cloud consumer, am REALLY sad about it)#2019-10-2815:40onetomwhy so sad about the cloud version?#2019-10-2816:07souenzzoDespite working in a prime region of a Brazilian capital, I have many issues with my ISP (as with all available in my region). I already lost many days of work due to no internet connection
at the beginning of my current project, i used datomic-client-memdb to work offline and datomock to create/reuse test scenarios
After the last datomic update, everything was broken. I needed to re-write my test scenarios and I'm unable to work offline
Also, moving from datomic-client-memdb to "client proxy", my deftest goes from 0.1ms to 10s. "run all tests" from 1m to 10m (and it FAIL when my internet goes down)#2019-10-2818:03rapskalian@onetom have you researched datahike? This might be a good middle-ground for your students. https://github.com/replikativ/datahike
I’ve been considering switching to it myself for all the same pains that @U2J4FRT2T is feeling, and the fact that it’s open source. Datomic appears entirely uninterested in fostering a community, and so my long-term bet is on something like datahike.#2019-10-2818:20onetomNo, I have not encountered datahike yet. Thx for putting it on my radar!
I also have issues with my internet connectivity (I live on Lamma island in Hong Kong and only get a 0.5-3Mbit/s usually)...#2019-10-2814:14onetomalso the latest changelog link (https://my.datomic.com/downloads/free/0.9.5703.21/changes) is broken on the https://my.datomic.com/downloads/free page#2019-10-2814:28akielThis issue is also known. The last update to the free edition is about one year old.#2019-10-2814:37onetom@zach.charlop.powers @alexmiller u were talking about data modeling the other day.
what's your take on Hodur?
https://www.youtube.com/watch?v=EDojA_fahvM&t=1120s#2019-10-2814:41onetomand the repo is this i guess:
https://github.com/hodur-org/hodur-engine
plus the visualization UI:
https://github.com/hodur-org/hodur-visualizer-schema#2019-10-2814:47alexmillersorry, don't know anything about it#2019-10-2814:53onetomregardless, thank you for strangeloop!
i've learnt immense amounts from it.#2019-10-2814:52zachcpI haven’t used it but I’ll take a look. thanks @onetom#2019-10-2815:04rapskalianIs there a way to bind a whole datom to a logic variable in a query? Something similar to :as, e.g. :where [[?e ?a ?v :as ?datom]]. I’m looking for an alternative to d/filter given its unavailability in Cloud, and am thinking that I could use rules in order to simulate its effect.#2019-10-2815:18souenzzo[(tuple ?a ?b ?c) ?datom] [(valid? $ ?a ?b ?c)]
Not sure about performance#2019-10-2815:32rapskalianAh tuple, I kept trying to destructure with []. Still running into this tho:
"[?a ?v ?e] not bound in expression clause: [(tuple ?e ?a ?v) ?datom]"
#2019-10-2815:34souenzzotuple is a new feature from datomic. from the last release#2019-10-2815:34souenzzo[(valid? $ ?datom)] *#2019-10-2816:23rapskalianThanks @U2J4FRT2T, I got a few queries working. You can actually bind the tuple components first before aggregating them, which allows for unification to work in both directions, e.g.
:in $ % ?user
:where
[?e ?a ?v ?tx ?op]
[(tuple ?e ?a ?v ?tx ?op) ?datom]
(authorized? ?datom ?user)
Performance is probably less than ideal tho, like you mentioned.#2019-10-2817:13souenzzoyou can use [?a :db/ident] to avoid "full db scan" error#2019-10-2816:56rapskalianRelated to the above, how big of a bribe is required to get d/filter support in Cloud? 😜
Would be such an amazing way to handle authorization in Ion applications. I can imagine filtering the database per user request based on some authorization rules, which would prevent one from needing to enforce those rules ad-hoc all over the system.#2019-10-2818:24onetomif i have card-many attribute, how can i constrain my results based on its cardinality?
(something like the HAVING clause in SQL)?
the one stackoverflow article i found on this topic recommends nested queries
(d/q '[:find [(pull ?e [* {:txn/matches [*]}]) ...]
       :with ?e
       :where
       [?e :txn/matches ?m]
       [(count ?m) ?matches]
       [(< 1 ?matches)]])
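A sketch of the nested-query approach the stackoverflow answer suggests (assuming the peer API and the :txn/matches attribute from earlier; untested):

```clojure
;; Sketch: aggregate in an inner query, then filter on the count in the
;; outer query. [[?e ?n]] is the relation binding form for the inner result.
(d/q '[:find [(pull ?e [* {:txn/matches [*]}]) ...]
       :where
       [(q '[:find ?e (count ?m)
             :where [?e :txn/matches ?m]]
           $) [[?e ?n]]]
       [(< 1 ?n)]]
     db)
```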
#2019-10-2819:43favilaanother option is d/datoms with a bounded-count for this simple case, anything harder needs a subquery because you cannot perform aggregation before the :find stage#2019-10-2900:00pvillegas12I want to upgrade from Solo -> Production, but my datomic database is currently serving a paid product. Is there a way to make this operation reversible in case it does not work as expected?#2019-10-2907:31dmarjenburghYou can probably update the stack with the solo template to revert.#2019-10-2900:00pvillegas12Has anyone encountered a problem where your whole system goes down when a code deploy is initiated by a Autoscaling group action? This event is taking down our system which is then restored if we use another deployment.#2019-10-2905:03xiongtxI'm wondering why the #db/fn reader macro doesn't work with clj code, only EDN.
#db/fn {:lang "clojure"
        :params []
        :code (inc 1)}
when evaluated gives
Can't embed object in code, maybe print-dup not defined:
which IIUC means that it's trying to eval the delay, which I'm not sure why is happening.
The ❓ has been asked previously here, but w/out an answer: https://clojurians-log.clojureverse.org/datomic/2016-01-02/1451699503.001427#2019-10-2905:36hiredmanbecause the compiler generates bytecode and doesn't know how to embed arbitrary objects (like the delay) in bytecode. if there is no special casing of how to embed some object in bytecode, the compiler falls back to calling pr, embedding the string, and then calling read-string when the bytecode is run#2019-10-2905:38hiredmansame thing user=> (defmacro f [] (delay nil))
#'user/f
user=> (fn [] (f))
Syntax error compiling fn* at (REPL:1:1).
Can't embed object in code, maybe print-dup not defined:
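For completeness, the supported route on the peer side is to construct the function value at runtime with d/function, which takes the same map the reader literal would — a sketch:

```clojure
;; Sketch: d/function builds the database-function object at runtime,
;; sidestepping the compiler's inability to embed the delay in bytecode.
(require '[datomic.api :as d])

(def my-fn
  (d/function {:lang   "clojure"
               :params '[]
               :code   '(inc 1)}))

;; the resulting object is invocable: (my-fn)
```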
#2019-10-2909:04sooheonI’m not getting results binding :db.type/float values, like (d/q '[:find ?e :where [?e :some/attr 1.0]] db), only for some values. I.e. I know that valid values are (1.0 2.0 3.5), and only 2.0 returns results in that query. What could I be missing?#2019-10-2909:11sooheonI now see that the following query works:
(d/q '[:find (pull ?e [*])
       :where
       [?e :some/attr ?v]
       [(> ?v 3.4)]
       [(< ?v 3.6)]]
     db)
So it seems like a floating point error issue for exact comparisons. I guess I should be using :db.type/bigdec if I care about writing queries for exact values?#2019-10-2911:55pvillegas12Our Datomic Cloud Solo System is failing completely, the API times out with a 504. How can we go about debugging this in AWS?#2019-10-2912:04dmarjenburghTry to pinpoint where it goes wrong first and whether it’s a Datomic issue or an ApiGateway configuration.
- What happens when you invoke the lambda directly instead of through apigw?
- Can you connect to the database through the bastion?
- Do the datomic CloudWatch logs have anything unusual?#2019-10-2912:16pvillegas12CloudWatch logs don’t show anything, that is the unusual part, they start not reporting anything about datomic#2019-10-2912:16pvillegas12I’m going to try 1-2 to replicate#2019-10-2912:58marshallif your datomic system cloudwatch logs just “stop” you should forcibly restart your compute instance#2019-10-2912:59marshallhttps://docs.datomic.com/cloud/troubleshooting.html#troubleshooting-solo#2019-10-2917:44madstapI'm trying to shovel some data from kafka into datomic cloud. Is there a ready made kafka connect sink for datomic cloud or should I write my own?#2019-10-2918:11BrianI'm wondering what the best way is to validate my data when working with Datomic Cloud. I have a sha-256 hash that I want to check if it is actually a valid sha-256 before inserting it. I have a function written. Should I do that check manually before inserting it or is there a way to have rules on certain attributes?#2019-10-2918:12marshall@brian.rogers https://docs.datomic.com/cloud/schema/schema-reference.html#attribute-predicates#2019-10-2918:12BrianThank you!#2019-10-2918:29Brian@marshall I get a 'hash/sha-256?' is not allowed by datomic/ion-config.edn - {:cognitect.anomalies/category :cognitect.anomalies/forbidden, :cognitect.anomalies/message \"'hash/sha-256?' is not allowed by datomic/ion-config.edn\", :dbs [{:database-id \"<id>\", :t 24, :next-t 25, :history false}]}"}}. Does this indicate that I need to push that function up as a transaction function?#2019-10-2918:30marshall@brian.rogers yes, “Attribute predicates must be on the classpath of a process that is performing a transaction.” - for Cloud that means they need to be ions#2019-10-2918:30BrianSweet thank you =]#2019-10-2918:30marshallnp#2019-10-2919:52jjttjjIs there a way to combine the results of these two queries in a single query? providing a default value for "status" when the join cannot be made? 
I've been messing with get-else but I don't think it's exactly what I need here
;;find placed orders requests that have not been acknowledged with a
;;received status message
(d/q
 '[:find ?oid
   :where
   [?e :iboga.req.place-order/order-id ?oid]
   (not [?status-msg :iboga.recv.order-status/order-id ?oid])]
 (d/db DB))
;;find placed orders requests that have been acknowledged with a
;;received status message and join with status
(d/q
 '[:find ?oid ?status
   :where
   [?e :iboga.req.place-order/order-id ?oid]
   [?status-msg :iboga.recv.order-status/order-id ?oid]
   [?status-msg :iboga.recv.order-status/status ?status]]
 (d/db DB))
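On the get-else idea: get-else fills in a default for a missing attribute on an already-bound entity; it cannot supply a default across a join that fails to match, which is why it doesn't quite fit the queries above. Its shape, sketched against generic attributes:

```clojure
;; Sketch: default the value of :db/doc for entities that lack it.
(d/q '[:find ?e ?doc
       :where
       [?e :db/ident]
       [(get-else $ ?e :db/doc "no doc") ?doc]]
     (d/db DB))
```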
#2019-10-2920:10benoitSomething like this might work:
(d/q
 '[:find ?oid ?status
   :where
   [?e :iboga.req.place-order/order-id ?oid]
   (or-join [?oid ?status]
     (and [?status-msg :iboga.recv.order-status/order-id ?oid]
          [?status-msg :iboga.recv.order-status/status ?status])
     (and (not [?status-msg :iboga.recv.order-status/order-id ?oid])
          [(ground :none) ?status]))]
 (d/db DB))
But I wonder why your status message entity cannot point directly to the order entity. Why do you have to do this "join" on the order id value.#2019-10-2920:13benoitThis (not [?status-msg :iboga.recv.order-status/order-id ?oid]), in particular, might be inefficient.#2019-10-2920:16jjttjjThat works, thanks! So you mean just having the order-id be a :db/unique attribute so all the attributes above point to the same entity, then just doing get-else for the status?#2019-10-2922:35benoitNo, I mean having an attribute :iboga.recv.order-status/order that directly points to the order entity. Why having to use the "order id" value to connect the two entities?#2019-10-2921:37schmeecan you use attribute predicates with database functions, or does it only work with functions on the classpath?#2019-10-3018:17ssdevHey folks, noob question for ya. I'm messing around with web service ions currently and am curious how I can develop locally so I can see what happens when a web request comes in. Is there a way to run these functions locally?#2019-10-3104:44dmarjenburghYou can run a local web server (e.g. Jetty) with your ring handler. Usually the ion handler is just wrapping the ring handler with datomic.ion.lambda.api-gateway/ionize. See https://docs.datomic.com/cloud/ions/ions-reference.html and the ion-starter project.#2019-11-0116:32ssdevcool. Thanks @U05469DKJ#2019-10-3022:18daniel.spanielanyone else found that they cant use cloud connection to datomic ( in the last month or 2 something changed ) and now the connection I make from development to the cloud db hangs after the first try. does 1 hit and then refuses to do more. so odd .. so debilitating#2019-10-3022:25daniel.spanielthe connection hangs with this call <ws://127.0.0.1:9630/ws/worker/main/bd3f3e48-e8b8-438e-840c-61ae23f451cf/33666b32-f7f8-45d7-bbe6-7d54f906fa94/browser>#2019-10-3022:25daniel.spanielvery interesting#2019-10-3022:38souenzzo@dansudol this 9630 port looks like #shadow-cljs stuff. 
#shadow-cljs should be used at dev-time only#2019-10-3022:41daniel.spanielI know you're right. I killed shadowjs but it's still hanging#2019-10-3022:42daniel.spanielnot sure how this ever worked before because we used to develop off a cloud connection running shadowjs too .. bizarre#2019-10-3022:42daniel.spanielwe use mem db now locally so it's been a while#2019-10-3105:08onetomdo i see it correctly that the on-prem datomic doesn't provide nested queries, via a built-in q query function?
the cloud version's documentation mentions this feature at:
https://docs.datomic.com/cloud/query/query-data-reference.html#q
(d/q '[:find ?track ?name ?duration
:where
[(q '[:find (min ?duration)
:where [_ :track/duration ?duration]]
$) [[?duration]]]
[?track :track/duration ?duration]
[?track :track/name ?name]]
db)
#2019-10-3105:13onetomah, nvm, i hadn't realized that i have to quote the inner q's query parameter too.
here is the most minimal example i could come up with (which works on an empty db too):
(d/q '[:find (pull ?e [*])
:where
[(q '[:find ?x . :where [(ground :db/doc) ?x]]) ?x]
[?e :db/ident ?x]]
(d/db conn))
#2019-10-3105:22onetomso this built-in q function is not in the on-prem docs.
it should come after https://docs.datomic.com/on-prem/query.html#missing to be consistent with the cloud docs.
where can i report such documentation issues?#2019-10-3106:08csmYou can use datomic.api/q within a query. It’s not a “built-in” function, but you can use it like you can use any function on your class path.#2019-10-3114:52favilaQuery forms are evaluated as if in an environment with (require '[datomic.api :as d :refer [db q])#2019-10-3114:53favilathat’s why bare “q” works and seems special#2019-10-3114:53favilaIt’s really datomic.api/q#2019-10-3115:15onetomah, i see!
so, in the cloud version's doc it's important to highlight this, since in such a setup, the query is not running in the app's process?#2019-10-3115:24favilayes; you have no control over requires or ns aliases in the cloud whereas you do on on-prem. Although even in cloud I think it will auto-require fully qualified ns vars, so you can add custom functions to the classpath? I know this happens for transactions, not sure for queries#2019-10-3116:43Oleh K.Guys, can I connect to datomic cloud from multiple services via .<system>.<aws_zone>. ? Currently when my one service is connected to datomic another one cannot#2019-10-3116:55onetomwhat is the error message u get?#2019-10-3116:56Oleh K.[org.eclipse.jetty.client.HttpClientTransport:149] - Could not connect to HttpDestination[.<system>.]6e48b9ed,queue=1,pool=DuplexConnectionPool[c=1/64,a=0,i=0]#2019-10-3116:57Oleh K.<system> is a real name#2019-10-3116:58Oleh K.the service is running in the same instance as the main one (in datomic vpc)#2019-10-3117:03Oleh K.it's also a Solo topology, if it makes difference (don't see anything about that in the documenation)#2019-10-3117:06onetomdoesn't sound like a datomic related issue to me.
can you try to access that endpoint directly with netcat from the same machine where that "other service" cannot reach it?
nc entry.<system>. 8182#2019-10-3118:22jherrlinHey. How can I limit the number of nested results using pull? I have 3 entities, each one with a :db.type/ref / :db.cardinality/many attribute. When pulling from Datomic I never get results because the entities have relations to each other and i assume it's trapped in an infinite loop. I am only interested in the first level of relations.#2019-10-3118:48jherrlinFound the solution to my answer here: https://docs.datomic.com/cloud/query/query-pull.html#orga9eca04#2019-10-3119:54jherrlinHmm it didn't solve my problem. Don't really grasp what it did though#2019-11-0123:06cjmurphyYou can have pull syntax that recurs only as much as you need. So you might have my-entity-pull-1 that refers to my-entity-pull-2, that refers to my-entity-pull-3. Here my-entity-pull-3 would only have non-references in it. That's how I've limited the recursion, for 'my-entity' in this case.#2019-10-3120:17bartukahi, I'm having some issues using datomic with core.async. I have the following code:
(let [in (async/chan 200)
out (async/chan 200)]
(async/pipeline 4 out (map compute-metrics) in)
(async/go (doseq [item items] (async/>! in item)))
(async/go-loop []
(println (async/<! out))
(recur)))
And the compute-metrics function basically saves an item into datomic (after performing a simple computation on one field). I am using the client.api.async to save the item. It seems to work just fine if the parallelism parameter is lower than 5 [for 120 items on my input list] but higher than that it gets stuck after computing the first 8 items.#2019-10-3120:20alexmillercan you reproduce if you use pipeline-blocking instead?#2019-10-3120:21bartukaI had the same issue using pipeline-async but haven't tried the blocking version#2019-10-3120:21bartukaI might be able to run it very quickly here, brb#2019-10-3120:23alexmillerI'm certain that the issue is that the go block threads are all blocked#2019-10-3120:23alexmillerso a thread dump would reveal what blocking op they are blocked on#2019-10-3120:24bartukayes, just worked (Y)#2019-10-3120:24alexmillerthere is actually a problem with pipeline that it uses a blocking op inside a go block that I just fixed this week (not yet released) but the fix basically makes it work like pipeline-blocking#2019-10-3120:24bartukacan you help me understand this process a little better?#2019-10-3120:25alexmillerso yeah, this is a bug in core.async that I'll release soon#2019-10-3120:27bartukaah, ok! I was fighting with this problem the whole day, haha. at least I learned a lot about async processes#2019-10-3120:28alexmillerI am also working on a way to detect this sort of thing in core.async (which is how I found the bug in the first place)#2019-10-3120:28bartukaIf I used the datomic sync api I would have succeeded too?#2019-10-3120:30alexmillerno, I don't think that would have helped here. really, if you're using the async api, you should be able to use pipeline-async I think#2019-10-3120:32bartukaI see, but when I take a connection from the channel returned by (d-async/connect) it has different properties than the sync version?
I could not find much info about the distinction of these two libraries to be honest#2019-10-3120:37alexmillersorry, I'm not much of an expert on this particular area#2019-10-3120:38bartukanp, thanks for the help.. saved the day o/#2019-11-0100:30QuestIs it possible to use wildcard matching as part of a tuple value? example query against 2-element homogenous tuple:
[:find ?e
:where [?e :nsm.entity/form [:todo _]]]
I want to match all entities with :todo as the first tuple element regardless of the value of the second element. Currently this query always returns an empty set.#2019-11-0110:14onetomsorry, forgot to mention u.
see my suggestion after your question#2019-11-0122:31QuestI can confirm the [((comp #{:todo} first) ...)] solution as working. Thanks @U086D6TBN!#2019-11-0100:43Quest^Behavior reproduces on latest version datomic-pro-0.9.5981#2019-11-0103:55onetomhow about something like
[:find ?e
:where
[?e :nsm.entity/form ?forms]
[((comp #{:todo} first) ?forms)]]
#2019-11-0104:46Joe LaneI think you want untuple#2019-11-0122:30QuestI can confirm untuple works in the following query:
'[:find ?e
:where
[?e :nsm.entity/form ?tup]
[(untuple ?tup) [?a ?b]]
[(= ?a :todo)]]
Thanks Joe!#2019-11-0108:45NemsHi everyone, after adding a private maven repo to my ions deps.edn I can't run the "clojure -A:dev -m datomic.ion.dev '{:op :push :creds-profile "dev" :region "eu-central-1"}'" command anymore. I always get the following error:
Downloading: com/datomic/java-io/0.1.11/java-io-0.1.11.pom from <s3://datomic-releases-1fc2183a/maven/releases/>
{:command-failed
"{:op :push :creds-profile \"rsdev\" :region \"eu-central-1\"}",
:causes
({:message
"Failed to read artifact descriptor for com.datomic:java-io:jar:0.1.11",
:class ArtifactDescriptorException}
{:message
"Could not transfer artifact com.datomic:java-io:pom:0.1.11 from/to roots (): status code: 401, reason phrase: Unauthorized (401)",
:class ArtifactResolutionException}
{:message
"Could not transfer artifact com.datomic:java-io:pom:0.1.11 from/to roots (): status code: 401, reason phrase: Unauthorized (401)",
:class ArtifactTransferException}
{:message "status code: 401, reason phrase: Unauthorized (401)",
:class HttpResponseException})}
If I remove the private repo and the dependency of that repo it works again.#2019-11-0112:34marshallsee https://forum.datomic.com/t/issue-retrieving-com-datomic-ion-dependency-from-datomic-cloud-maven-repo/508/6 and https://forum.datomic.com/t/iam-permissions-to-access-s3-datomic-releases-1f2183a/861#2019-11-0112:59alexmillerBy "private repo", I assume you mean one with creds in settings.xml? If so, can you successfully download deps using clj/deps.edn from it separately from the ion setup?#2019-11-0208:28NemsHi @U064X3EF3, yes a private repo with settings.xml. If we run clojure -A:dev all the dependencies get downloaded without an issue. It's only when we run clojure -A:dev datomic.ion.dev '{...}' that we run into this problem. I've tried following both links that @U05120CBV posted but they don't seem to solve the issue.#2019-11-0108:46NemsHere's my deps.edn if that helps (with private maven repo)
{:mvn/repos {"datomic-cloud" {:url ""}
"roots" {:url ""}}
:paths ["src" "resources"]
:deps {org.clojure/clojure {:mvn/version "1.10.0"}
org.clojure/data.zip {:mvn/version "0.1.3"}
org.clojure/data.xml {:mvn/version "0.2.0-alpha6"}
org.clojure/core.async {:mvn/version "0.3.442"
:exclusions [org.clojure/core.memoize]}
org.clojure/core.memoize {:mvn/version "0.7.2"}
com.datomic/ion {:mvn/version "0.9.35"}
cheshire {:mvn/version "5.8.1"}
clj-http {:mvn/version "3.10.0"}
com.cognitect.aws/api {:mvn/version "0.8.305"}
com.cognitect.aws/endpoints {:mvn/version "1.1.11.559"}
com.cognitect.aws/sqs {:mvn/version "697.2.391.0"}
com.cognitect.aws/s3 {:mvn/version "718.2.457.0"}
com.cognitect.aws/ssm {:mvn/version "718.2.451.0"}
medley {:mvn/version "1.2.0"}
camel-snake-kebab {:mvn/version "0.4.0"}
byte-transforms {:mvn/version "0.1.4"}
be.roots.mona/client {:mvn/version "1.65.5-168"}
}
:aliases {:dev {:extra-deps {com.datomic/ion-dev {:mvn/version "0.9.234"
:exclusions [org.slf4j/slf4j-nop]}
com.amazonaws/aws-java-sdk-sts {:mvn/version "1.11.210"}}}
:config {:extra-deps {com.cognitect.aws/sts {:mvn/version "697.2.391.0"}}
:extra-paths ["config"]}
:local {:extra-deps {com.datomic/client-cloud {:mvn/version "0.8.78"}
com.cognitect.aws/sts {:mvn/version "697.2.391.0"}
ch.qos.logback/logback-classic {:mvn/version "1.2.3"}
ch.qos.logback/logback-core {:mvn/version "1.2.3"}
org.clojure/test.check {:mvn/version "0.9.0"}
org.clojure/tools.namespace {:mvn/version "0.3.0-alpha4"}}
:extra-paths ["dev" "sessions" "test-resources"]}}}#2019-11-0109:33avfonarevWhat is the best approach when it comes to storing ordered data in Datomic? Say, someone wants to write yet another todo list app, where items can be reordered in a given list.#2019-11-0111:19octahedrion@avfonarev assert each item like {:index i :item item}#2019-11-0113:40refset@avfonarev to add to this suggestion, you may also want to consider using bisection keys rather than numbers e.g. https://github.com/Cirru/bisection-key (I've successfully used this with DataScript before, for modelling ordered lists)#2019-11-0114:34avfonarevThat is what I was leaning to. One can use amortization to reduce the number of writes per item this was.#2019-11-0113:25bartukaI am using datomic analytics through presto and I deleted the database that was connected to it. After I recreate and populate the new database, presto cannot perform any query, always returning Datomic Client Exception error#2019-11-0114:28marshallrestart your presto server#2019-11-0114:29marshallif you’re using cloud you can use the datomic-gateway script to restart the access gateway#2019-11-0114:29marshallif you’re using on-prem, just kill and restart the presto server#2019-11-0119:19bartukaI see, thanks marshal!! I will post your response into datomic dev forum so other people may benefit from this as well#2019-11-0113:25bartukathe problem is certain related, but now sure how to proceed on that#2019-11-0116:12ssdevQuick question: is there any restriction around using Lambdas & HTTP Direct at the same time?#2019-11-0116:45dmarjenburghAn apigateway method integration has either a lambda proxy or a vpc link of course, but datomic supports both at the same time. Note that the apigw lambda event data will not be present in the http direct request#2019-11-0116:37drewverleeAttempting to call datomic.api/connect with my database uri string results in
Execution error (NullPointerException) at datomic.kv-cluster/kv-cluster (kv_cluster.clj:355).
null
if anyone has an idea what that implies it would be a big help. i assume i have a connection issue, configuration of the connection string or a networking issue.#2019-11-0116:41Oleh K.Does datomic cloud have REST API?#2019-11-0118:32chagas.visHello everyone, I am currently studying how I can use Clojure to a bioinformatics project. One of the first problems that I find is the lack of a library to work with data frames, I did some Google search but I did not find any information about some Clojure library that has any implementation like pandas (Python) or R.#2019-11-0121:13zaneIs anyone aware of open source compilers that compile a (subset, certainly) of SQL to Datalog?#2019-11-0211:49Mark AddlemanNot directly. I'm a fan of http://teiid.org/ which allows you to SQL-fy just about anything. I believe https://prestodb.github.io/ has similar capabilities.#2019-11-0222:05Quest@zalky Found your question in the Zulip logs, reposting & answering it here because this limitation just came up for me.
I noticed that the latest versions of Datomic have new tuple types. Reading through the docs, can anyone clarify whether homogeneous tuple types have the same 2-8 element length limitations as the other two tuple types? It says homogenous tuples have "variable" length. It's just a little ambiguous.
Answer: homogenous tuples are subject to the same 2-8 element limitations.
Attempting to set vectors with count less than 2 or greater than 8 will produce the following exception.
java.lang.IllegalArgumentException: :db.error/invalid-tuple-value Invalid tuple value
Tested on datomic-pro-0.9.5981. The best workaround I have now is to pad tuple values with nil in order to reach the minimum length of two -- ex: :tags ["foobar" nil]#2019-11-0414:09zalkyMuch respect for the follow up!#2019-11-0318:20alidlorenzofor aws, the datomic solo 1 monthly estimate is $118 whereas the production 1 is $21, is this correct? I thought solo was more affordable/suitable for getting started - did this change?#2019-11-0318:21marshallThis is an error in marketplace#2019-11-0318:21marshallThe estimates should be reversed#2019-11-0318:21marshallWe are working with aws to correct#2019-11-0322:28Jon WalchWould you mind letting me know when this is corrected? I'd like to stand up a Solo 1 ASAP#2019-11-0323:15marshallYou can use solo as it is#2019-11-0323:15marshallThe price you are charged is correct#2019-11-0323:16marshallIt's just the display on the marketplace site that is wrong#2019-11-0323:16Jon WalchThe CloudFormation template for Solo is also making reference to the instance sizing in production 1#2019-11-0402:33Jon WalchI just went through the process for setting up a Solo 1. If I look at my EC2 instances, my bastion is a t3.nano and then I have two i3.large instances? Is this correct?#2019-11-0412:15marshallThat is not correct#2019-11-0412:16marshallthat is a production topology#2019-11-0412:16marshallWhat version did you launch from Marketplace?#2019-11-0412:17marshall@UNVU1Q6G1 Can you follow the directions here https://docs.datomic.com/cloud/operation/new-system.html
and launch a stack from our Releases page instead of from Marketplace
I will contact AWS today and look into getting the listing corrected#2019-11-0412:20marshall@UNVU1Q6G1 I believe this may be an issue with the “535-8812.1” release. Can you try the one without the .1 (just 535-8812) ?#2019-11-0318:21alidlorenzoso does that mean i should use solo 1 despite the large estimate, since the actual amount will be reversed?#2019-11-0318:22marshallYes#2019-11-0318:22alidlorenzook, thanks for clarifying !#2019-11-0318:32joshkhi'm curious -- what makes the following two queries different enough to return empty vs. non-empty results?
; find entities whose :user/name attribute has a value of "user123"
(d/q '{:find [?e]
:in [$ ?v]
:where [[?e :user/name ?v]]}
db "user123")
=> [[12345678912345]]
; find entities with any attribute (unbound) that has a value of "user123"
(d/q '{:find [?e]
:in [$ ?v]
:where [[?e _ ?v]]}
db "user123")
=> []
#2019-11-0318:37favilaThat is pretty alarming#2019-11-0318:37favilaMaybe :user/name is not indexed?#2019-11-0318:38favilaNonetheless, my expectation is the second query would not be empty; it might be so slow it never terminates, but not empty#2019-11-0318:39joshkhis there a way to find out? perhaps it's just me? i can definitely reproduce it.#2019-11-0318:39favilaTo me this looks like a bug#2019-11-0318:40favilait’s a pathological case, you’d never want a query like [?e _ ?v] as the first clause with ?v bound, but it should still work#2019-11-0318:40favilaexperiment with binding ?a in various ways, see if it gives results#2019-11-0318:41joshkh> Nonetheless, my expectation is the second query would not be empty; it might be so slow it never terminates, but not empty
i did wonder if this would trigger a full-db scan alert, however an empty set (returned instantly) made me scratch my head.#2019-11-0318:41favila[0 :db.install/attribute ?a] [?e ?a ?v] or filter down further#2019-11-0318:41joshkhi did try binding ?a with the same result#2019-11-0318:42favila[?a :db/ident :user/name] [?e ?a ?v]?#2019-11-0318:42favilasame result meaning empty set?#2019-11-0318:43joshkh[?a :db/ident :user/name] [?e ?a ?v] works as expected#2019-11-0318:43joshkhsimply binding ?a (and not using it) returns an empty set#2019-11-0318:43favilaI meant forcing ?a to be bound to every attribute explicitly#2019-11-0318:45favilaIt could be the query planner refuses to even try to match by ?v if it doesn’t know ?a#2019-11-0318:45favilathere’s no index it can use effectively after all#2019-11-0318:45favilaI would still want an error not a silent empty set#2019-11-0318:45joshkhinteresting, this works!
(d/q '{:find [?e ?b]
:in [$ ?v]
:where [
[?a :db/ident ?b]
[?e ?a ?v]]}
(client/db) "user123")
=> [[12345678912345]]
#2019-11-0318:48joshkh(by the way, i would never use queries like these in production. this only stemmed from some hacky experimentations.)#2019-11-0318:50joshkhhowever, my concern is that an empty set can be dangerously misleading#2019-11-0408:31dmarjenburghFYI, I reproduced this with the same results and it’s not what I expected. Adding the [_ :db/ident ?a] or [?a :db/ident] clause works (and is considerably slower).#2019-11-0408:34dmarjenburghAdding a predicate clause [(= ?v ?name)] [?e _ ?name] will warn you of a full db scan#2019-11-0319:19joshkhalso curious -- is it normal to see what look like duplicate references in :db.alter/attribute?
{:db/id 0
:db.alter/attribute [#:db{:id 99, :ident :user/name}
#:db{:id 99, :ident :user/name}]}
#2019-11-0412:32marshallThat is an issue that was resolved in the most recent release#2019-11-0321:33alidlorenzohello i'm requiring datomic in a new boot app, and receiving a Could not locate datomic/client/impl/pro__init.class error. is there a way I can go about resolving this?
here's my require code: (:require [datomic.client.api :as d])
here's my dependency: :dependencies '[[org.clojure/clojure "1.10.0"] [com.datomic/client-cloud "0.8.78"]]#2019-11-0321:40Jon Walchyour require and deps look fine to me#2019-11-0322:09alidlorenzofigured out it's not a dependency error it's a connection error#2019-11-0321:34Jon WalchI'm trying to read an edn file from my code base and then transact it. If I copy it verbatim and paste it in as the tx-data it works fine, however if I try to read it as a resource, slurp, and edn/read-string it, I get the following when I try to transact it
.Exceptions$IllegalArgumentExceptionInfo
:message :db.error/not-a-data-function Unable to resolve data function: #:db{:doc "User first name", :ident :user/first-name, :valueType :db.type/string, :cardinality :db.cardinality/one}
:data #:db{:error :db.error/not-a-data-function}
:at [datomic.error$arg invokeStatic error.clj 57]}]
I think it's because edn/read-string is using the Map namespace syntax (https://clojure.org/reference/reader#_maps), is there a way to force it not to?#2019-11-0322:19Jon WalchI'm just going to declare my txs in code instead of in edn#2019-11-0322:58alexmillerwhether you use that syntax, or not, the map in memory is identical#2019-11-0323:00alexmillerthe error makes it sound like you've got an attribute definition where datomic expects a function, which seems like something else#2019-11-0323:39alidlorenzoif we're using datomic cloud, what setup is recommended for dev and staging?
i'd rather not create two more cloud instances, so is it OK to use datomic free for these scenarios?#2019-11-0402:23Jon WalchYou can but the API is different#2019-11-0402:23Jon WalchI tried fiddling with https://github.com/ComputeSoftware/datomic-client-memdb/ but it didn't work quite right for me#2019-11-0402:44alidlorenzoso is the expected solution to create/pay for separate cloud instances?#2019-11-0402:44alidlorenzoor I guess wrap both cloud/on-premise APIs to make them the same#2019-11-0414:32faviladatomic cloud does not have a local-dev story#2019-11-0414:32favilayou are expected to have something running in the cloud, even for test runners#2019-11-0415:25kenny@UNVU1Q6G1 What did work?
@UPH6EL9DH We use datomic-client-memdb for running unit tests & gen tests on CI. We also have a dev system always running which lets you connect locally to run integration tests. The Datomic client for this dev system is created by specifying a "prefix" which will get added to all DBs created. We just implemented a simple wrapper around datomic.client.api.protocols/Client.#2019-11-0418:36Jon Walch@U083D6HK9 It was working perfectly for transacting. When I was trying to do (d/db conn) where conn is a datomic-client-memdb LocalConnection, it was telling me that the type couldn't be cast. Let me see if I can repro#2019-11-0418:40Jon Walchjava.lang.ClassCastException: class compute.datomic_client_memdb.core.LocalConnection cannot be cast to class datomic.Connection (compute.datomic_client_memdb.core.LocalConnection is in unnamed module of loader clojure.lang.DynamicClassLoader @1e420b95; datomic.Connection is in unnamed module of loader 'app'
#2019-11-0418:55kennyCan you send the full code you’re using to do that @UNVU1Q6G1 ?#2019-11-0419:27Jon WalchSpoke with @U083D6HK9 in a DM, it was user error on my part 😄#2019-11-0519:26alidlorenzo@U083D6HK9 so to be clear you're not running two cloud instances, just one instance but for dev you prefix all databases created?#2019-11-0519:29kenny@UPH6EL9DH yes — one instance with multiple devs using the same instance. Each dev makes their own prefix. #2019-11-0519:33alidlorenzo@U083D6HK9 would you be able to share some of the wrapper code? 🙂 also, even with a prefix, are you not concerned at all about mixing production and dev database?#2019-11-0519:36kennyI can see how coupled to other code our wrapper is when I get back in front of a computer. Might be able to paste some code here.
Oh, I guess we run two systems then. One for production. Dev and QA environments both use the single Datomic dev system. #2019-11-0519:39alidlorenzo@U083D6HK9 that'd be great thanks; and yea, that seems like best solution, but for a side project jumps cost for 30$ monthly to 60$ which can get pretty steep#2019-11-0519:44kennyI think you’d honestly be fine running prod and dev on the same system. Make it so prod uses no prefix. #2019-11-0519:45kennyWe run separate topologies so we get high availability. Dev is just running solo. #2019-11-0520:49kenny@UPH6EL9DH It's essentially this:
(defrecord DatomicClient [client db-prefix return-anomalies?]
datomic-protos/Client
(list-databases [_ arg-map]
(let [dbs (d/list-databases client arg-map)]
(into (list)
(comp
(filter (fn [db-name]
(if db-prefix
(str/starts-with? db-name (db-prefix-str db-prefix))
(not (str/starts-with? db-name "__")))))
(map (fn [db-name]
(str/replace-first db-name (db-prefix-str db-prefix) ""))))
dbs)))
(connect [_ arg-map]
(d/connect client {:db-name (prefix-db-name db-prefix (:db-name arg-map))}))
(create-database [_ arg-map]
(d/create-database client {:db-name (prefix-db-name db-prefix (:db-name arg-map))}))
(delete-database [_ arg-map]
(d/delete-database client {:db-name (prefix-db-name db-prefix (:db-name arg-map))})))
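The db-prefix-str and prefix-db-name helpers referenced in the wrapper above aren't part of the paste; a plausible minimal sketch is below. The "__" separator is an assumption, chosen only to be consistent with the (str/starts-with? db-name "__") filter in list-databases -- any scheme that distinguishes prefixed from unprefixed names would work.

```clojure
(require '[clojure.string :as str])

;; Hypothetical helpers -- not from the original paste.
(defn db-prefix-str
  "Render a dev prefix as the string prepended to db names.
   The \"__\" separator here is an assumption."
  [db-prefix]
  (str "__" db-prefix "__"))

(defn prefix-db-name
  "Prepend the prefix to a db name, or pass it through unprefixed
   (e.g. for production, which uses no prefix)."
  [db-prefix db-name]
  (if db-prefix
    (str (db-prefix-str db-prefix) db-name)
    db-name))
```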
#2019-11-0523:19alidlorenzo@U083D6HK9 great, thanks for sharing :+1:#2019-11-0323:56vnctaingI'm having some issues installing Datomic Pro Starter Edition
I created a file ~/.lein/credentials.clj
{#"my\.datomic\.com" {:username "…."
:password "…."}}
then generated ~/.lein/credentials.clj.gpg
gpg --default-recipient-self -e ~/.lein/credentials.clj > ~/.lein/credentials.clj.gpg
added to my project.clj
:repositories {"" {:url ""
:creds :gpg}}
:dependencies [[com.datomic/client-pro "0.9.5927"]]
but when i run lein deps I get
Could not find artifact com.datomic:client-pro:jar:0.9.5927 in central ()
Could not find artifact com.datomic:client-pro:jar:0.9.5927 in clojars ()
Could not find artifact com.datomic:client-pro:jar:0.9.5927 in ()
This could be due to a typo in :dependencies, file system permissions, or network issues.
If you are behind a proxy, try setting the 'http_proxy' environment variable.
#2019-11-0412:33marshallthe client-pro version is not the same as the datomic version#2019-11-0412:34marshallclient library is in Maven: https://search.maven.org/search?q=a:client-pro%26
latest version is 0.9.37#2019-11-0412:34marshallhttps://search.maven.org/artifact/com.datomic/client-pro/0.9.37/jar#2019-11-0402:25Jon WalchAnyone have this issue with starting up a new Solo 1? I didn't have this problem with Production 1.
fatal error: An error occurred (404) when calling the HeadObject operation: Key "<system-name>/datomic/access/private-keys/bastion" does not exist
Unable to read bastion key, make sure your AWS creds are correct.
I'm logged in and followed the instructions here https://docs.datomic.com/cloud/getting-started/configuring-access.html#2019-11-0412:35marshallwhere do you get this error? when trying to connect your access gateway proxy?#2019-11-0419:28Jon Walchyeah when trying to run the datomic-socks-proxy script#2019-11-0420:11marshalli would guess your AWS creds are not right in that environment#2019-11-0420:11marshalloh sorry#2019-11-0420:12marshallyeah, if the proxy wont start at all it generally is due to AWS credentials#2019-11-0420:12marshallhave you sourced them in that env? and/or set up AWS profiles?#2019-11-0601:14Jon WalchI'm pretty positive that my creds are fine because I can run other aws commands without issue#2019-11-0601:15Jon WalchI also configured the inbound rule on the security policy, and attached the relevant security policies to my user group#2019-11-0602:40Jon Walchno idea what i did differently this time besides name the system something different, but its working now#2019-11-0403:43onetomis it recommended to name card-many attributes as plural?
if not, why not?#2019-11-0403:43onetomis there a place where i can see well designed examples of datomic schemas?#2019-11-0407:46tatutIn datomic cloud, where can I see the cast/dev messages, I'm not seeing any messages in cloudwatch#2019-11-0408:41tatutoh, I see they are not logged https://forum.datomic.com/t/logging-from-an-ion/954#2019-11-0417:48BrianI'm working with Datomic Cloud and have wired up an ion to Lambda to API Gateway which is secured through Cognito to require a user token to access. Next I want to know who is using my ion. Parsing the context I find this: {:clientContext nil :identity {:identityId "" :identityPoolId ""}} (among other things). I expected this information to reflect my user or give some sore of indication as to who was using my ion. Can anyone help me understand how/why this information is not present and how I might get it?#2019-11-0502:29Msr TimHello, I followed ions tutorial described here#2019-11-0502:29Msr Timhttps://docs.datomic.com/cloud/ions/ions-tutorial.html#2019-11-0502:29Msr Tim aws lambda invoke --function-name $(GROUP)-get-items-by-type --payload \"hat\" /dev/stdout #2019-11-0502:30Msr Timthis worked as expected but when i setup api gateway and did a curl i get the following#2019-11-0502:30Msr Timcurl https://{URL}/dev/datomic -d :hat
I3t7OmNvbG9yIDpncmVlbiwgOnR5cGUgOmhhdCwgOnNpemUgOm1lZGl1bSwgOnNrdSAiU0tVLTIzIn0KICB7OmNvbG9yIDpyZWQsIDp0eXBlIDpoYXQsIDpzaXplIDpzbWFsbCwgOnNrdSAiU0tVLTMifQogIHs6Y29sb3IgOmdyZWVuLCA6dHlwZSA6aGF0LCA6c2l6ZSA6eGxhcmdlLCA6c2t1ICJTS1UtMzEifQogIHs6Y29sb3IgOnJlZCwgOnR5cGUgOmhhdCwgOnNpemUgOnhsYXJnZSwgOnNrdSAiU0tVLTE1In0KICB7OmNvbG9yIDpncmVlbiwgOnR5cGUgOmhhdCwgOnNpemUgOmxhcmdlLCA6c2t1ICJTS1UtMjcifQogIHs6Y29sb3IgOnllbGxvdywgOnR5cGUgOmhhdCwgOnNpemUgOmxhcmdlLCA6c2t1ICJTS1UtNTkifQogIHs6Y29sb3IgOnllbGxvdywgOnR5cGUgOmhhdCwgOnNpemUgOm1lZGl1bSwgOnNrdSAiU0tVLTU1In0KICB7OmNvbG9yIDp5ZWxsb3csIDp0eXBlIDpoYXQsIDpzaXplIDp4bGFyZ2UsIDpza3UgIlNLVS02MyJ9CiAgezpjb2xvciA6Ymx1ZSwgOnR5cGUgOmhhdCwgOnNpemUgOm1lZGl1bSwgOnNrdSAiU0tVLTM5In0KICB7OmNvbG9yIDpyZWQsIDp0eXBlIDpoYXQsIDpzaXplIDpsYXJnZSwgOnNrdSAiU0tVLTExIn0KICB7OmNvbG9yIDpncmVlbiwgOnR5cGUgOmhhdCwgOnNpemUgOnNtYWxsLCA6c2t1ICJTS1UtMTkifQogIHs6Y29sb3IgOmJsdWUsIDp0eXBlIDpoYXQsIDpzaXplIDpsYXJnZSwgOnNrdSAiU0tVLTQzIn0KICB7OmNvbG9yIDpyZWQsIDp0eXBlIDpoYXQsIDpzaXplIDptZWRpdW0sIDpza3UgIlNLVS03In0KICB7OmNvbG9yIDp5ZWxsb3csIDp0eXBlIDpoYXQsIDpzaXplIDpzbWFsbCwgOnNrdSAiU0tVLTUxIn0KICB7OmNvbG9yIDpyZWQsIDp0eXBlIDpoYXQsIDpzaXplIDpzbWFsbCwgOnNrdSAiU0tVLTEyMzQ1In0KICB7OmNvbG9yIDpibHVlLCA6dHlwZSA6aGF0LCA6c2l6ZSA6eGxhcmdlLCA6c2t1ICJTS1UtNDcifQogIHs6Y29sb3IgOmJsdWUsIDp0eXBlIDpoYXQsIDpzaXplIDpzbWFsbCwgOnNrdSAiU0tVLTM1In19Cg
#2019-11-0509:49onetom@meowlicious99 if i decode that response it seems legit (aside from the missing == from the end of the data):
$ (pbpaste; echo ==) | base64 -d
#{{:color :green, :type :hat, :size :medium, :sku "SKU-23"}
{:color :red, :type :hat, :size :small, :sku "SKU-3"}
{:color :green, :type :hat, :size :xlarge, :sku "SKU-31"}
{:color :red, :type :hat, :size :xlarge, :sku "SKU-15"}
{:color :green, :type :hat, :size :large, :sku "SKU-27"}
{:color :yellow, :type :hat, :size :large, :sku "SKU-59"}
{:color :yellow, :type :hat, :size :medium, :sku "SKU-55"}
{:color :yellow, :type :hat, :size :xlarge, :sku "SKU-63"}
{:color :blue, :type :hat, :size :medium, :sku "SKU-39"}
{:color :red, :type :hat, :size :large, :sku "SKU-11"}
{:color :green, :type :hat, :size :small, :sku "SKU-19"}
{:color :blue, :type :hat, :size :large, :sku "SKU-43"}
{:color :red, :type :hat, :size :medium, :sku "SKU-7"}
{:color :yellow, :type :hat, :size :small, :sku "SKU-51"}
{:color :red, :type :hat, :size :small, :sku "SKU-12345"}
{:color :blue, :type :hat, :size :xlarge, :sku "SKU-47"}
{:color :blue, :type :hat, :size :small, :sku "SKU-35"}}
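The `(pbpaste; echo ==)` trick above works because base64 input has to be a multiple of 4 characters long; a minimal sketch of the same repair (the `aGF0cw` string here is a made-up example, not data from the thread):

```shell
# Base64 maps each 3-byte group to 4 characters and pads the final
# group with "=". If the trailing padding was stripped in transit,
# appending "==" restores a decodable length
# ("hats" encodes to "aGF0cw==").
(printf 'aGF0cw'; echo ==) | base64 -d
```

Both GNU and BSD `base64` reject input whose length is not a multiple of 4, which is presumably why the bare `pbpaste` output needed the extra `==`.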
#2019-11-0513:49danierouxIn Datomic Cloud, how do we copy a database to a new database? Naively transacting the datoms from the tx-range fails because the entity ids do not match#2019-11-0514:09bartukaI am experiencing an odd behavior when I need to connect to datomic cloud. Often I receive an error :cognitect.anomalies/unavailable "connection refused". However, if I just wait and execute the mount/start command that is managing my connection with datomic, everything works fine.#2019-11-0521:02ssdevHey folks. This is a potentially dumb question but, as I'm going through the ion tutorial, I'm noticing that the web service ion section results in an API Gateway endpoint that ends with /datomic. Is /datomic always necessary at the end of the url? If not, how do I get rid of that?#2019-11-0603:28Jon WalchAnyone run into this? This is what happens when my application (running in EKS) tries to connect to my datomic cloud. Datomic cloud is working fine, tested it with the proxy. I already double checked that the EndpointAddress is correct in my CloudFormation stack
{:type clojure.lang.ExceptionInfo
:message Unable to connect to .<stack_name>.
:data {:cognitect.anomalies/category :cognitect.anomalies/not-found, :cognitect.anomalies/message entry.<stack-name>.: Name does not resolve, :config {:server-type :cloud, :region us-west-2, :system <system-name>, :endpoint .<stack-name>., :endpoint-map {:headers {host entry.<stack-name>.}, :scheme http, :server-name entry.<stack-name>., :server-port 8182}}}
:at [datomic.client.impl.cloud$get_s3_auth_path invokeStatic cloud.clj 178]}]
#2019-11-0603:40Jon Walchgoing to try peering my VPCs#2019-11-0608:23cjmurphyUsing on-prem I'm looking to generate a tempid then use it to find the real-eid from the :tempids map that is returned from transact!. A generated tempid looks like {:part :db.part/user, :idx -1000305}. It would make sense to me if instead it was a negative number of 19 digits length, because the :tempids map keys are all 19 digit negatives. Can someone help with the error in my understanding? Thx.#2019-11-0609:34onetomyou can use strings as tempids even with the on-prem version of datomic.
is there any specific reason for using the d/tempid function?#2019-11-0609:48cjmurphyErr no - I somehow must have just come across it and thought that was the way to generate 'the next tempid'. I can just use gensym I guess. How do people normally generate tempid strings?#2019-11-0609:55onetomi have the feeling that u might not even need to generate if you don't care about what the tempids are.#2019-11-0609:57onetomit's not necessary to explicitly specify a tempid anymore,
UNLESS you want to reference a newly transacted (or modified) entity in another fact/entity within the same txn#2019-11-0609:58onetomalso, tempid strings only have to be uniq within a transaction, so you can simply number them with range#2019-11-0610:01onetommaybe if u can share more specifics about your use-case, then we can help easier.
im working on some tsv import code now and trying to write tests for it.
it looks something like this:
(defn mk-rule [n]
  (let [rule-expr (str "rule-" (if (keyword? n) (name n) n))]
    {:db/ident (keyword rule-expr)
     :rule/algo :regex
     :rule/expr rule-expr}))

(deftest re-import-rules
  (testing "remove rule"
    (tx [(mk-rule :to-be-deleted)
         (mk-rule :unchanged)])
    (is (= #{:rule-to-be-deleted
             :rule-unchanged}
           (q '[:find (set ?r-ident) .
                :where
                [?r :rule/algo+expr]
                [(datomic.api/ident $ ?r) ?r-ident]])))))
#2019-11-0610:04onetomnote how i just made up a convention of identifying my temporary rule entities with a rule- prefix
in another test, i just made up a bunch of rules and simply numbered, like this:
(tx (map mk-rule (range 10)))
then i could create an entity referencing them, like this:
(tx [{:txn/id 1
:txn/matching-rule [:rule-2 :rule-3]}])
im using db/idents here, so i don't have to fuss around with tempid resolution, since im working on an in-memory db, but the naming principle is the same...#2019-11-0610:06onetomalso, if u use nested entity maps in tx-data, then the assignment of the nested entity ids to the containing entity's ref attribute is done automatically by the d/transact logic#2019-11-0610:07cjmurphyThis is a Fulcro application. The idea is that there are fulcro-tempids on client. They get sent to the server. The idea is to generate datomic tempids to go with them as pairs in a map. (key will be Fulcro, val will be datomic tempid). After transact! get two maps. Can use them to get a map of fulcro-tempid -> real-eid. The client can then use that map to do the remapping of the client state.#2019-11-0610:07onetomi've also noticed that u were talking about transact!.
that's an old function, if i understood correctly.
the current https://docs.datomic.com/on-prem/clojure/index.html documentation doesn't even mention it anymore. it just simply uses transact#2019-11-0610:08cjmurphyI'm using an old version of Datomic.#2019-11-0610:08onetomand upgrading is not an option?#2019-11-0610:09cjmurphy0.9.5703#2019-11-0610:09onetombecause writing more code which is not necessary when using newer datomic feels like unnecessary pain#2019-11-0610:10cjmurphyWell the upgrading ability ran out.#2019-11-0610:10cjmurphyOnly lasts for a year.#2019-11-0610:11onetomthat seems like a recent enough version though to support string tempids and transact without a bang#2019-11-0610:11cjmurphyYes I'll start using transact now I know.#2019-11-0610:12cjmurphyi.e. today.#2019-11-0610:12onetom(i also just noticed this change a few days ago, when coming back to datomic after 2-3 years ;)#2019-11-0610:13cjmurphySo I should just generate negative number and str them?#2019-11-0610:13cjmurphyYeah I noticed it but ignored it!#2019-11-0610:14onetomso it sounds like you dont need a fulcro-tempid -> datomic-tempid because the fulcro-tempid can be just a string and u can use that directly in your tx-data#2019-11-0610:15cjmurphyYes that's what I thought too, as long as it is a string, which I can convert it to if its not.#2019-11-0610:16cjmurphy#2019-11-0610:17cjmurphySo I might just use the random-uuid from in there.#2019-11-0610:18onetomisn't something like (->> tx-data (d/transact conn) :tempids vals) enough?#2019-11-0610:18cjmurphyThat gives me the real-eids.#2019-11-0610:18onetomwhat else is associated to the "fulcro-tempids" on the client side?#2019-11-0610:19cjmurphyWell back on the client, in client state, there are client tempids (yes "fulcro-tempids").#2019-11-0610:19onetomaren't those already some uniq strings?#2019-11-0610:19onetombecause it sounds like you can just use those directly as the datomic tempids#2019-11-0610:19cjmurphyFulcro can change them to real ids, but needs the map that can do 
that.#2019-11-0610:20cjmurphyThey are from that function above.#2019-11-0610:20cjmurphySo for each one of them (a TempId) there needs to be a val which is a real-eid.#2019-11-0610:23onetomand what is that TempId?
which namespace is it from for example?
but i guess i can't add more to this topic now.
i have to get back to work too.#2019-11-0610:24cjmurphyThe problem is already solved in my mind, doing as you say, using 'fulcro-tempid' as the tempids to datomic. String conversion not really an issue.#2019-11-0610:24cjmurphyThank you very much.#2019-11-0610:27cjmurphyhttps://github.com/fulcrologic/fulcro/blob/develop/src/main/com/fulcrologic/fulcro/algorithms/tempid.cljc#2019-11-0615:45BrianUsing Datomic Cloud I have an entity with a :tags attribute with a :db.type/keyword with :cardinality/many. The allowed keywords are :a :b :c :d :e and any combination of those keywords is allowed as the value of the :tags attribute.
Now I want to update the value of :tags with a new combination of those keywords. How can I say "remove current values of :tags and add these new values"?#2019-11-0615:46BrianI am flexible on the schema so if a structural change makes sense, I can do that#2019-11-0615:48BrianI could d/pull on the entity to pull back it's tags and then retract them one by one but that seems like the wrong way#2019-11-0615:49ghadiyou want it to be atomic -- if you read then transact you'll have a race @brian.rogers#2019-11-0615:49ghadithere are a few patterns for handling this: install a transaction function is one#2019-11-0615:52ghadianother possibility is to avoid the race is to do a CAS then retry https://docs.datomic.com/cloud/best.html#optimistic-concurrency#2019-11-0615:53ghadiyou'll need to add an attribute that you can CAS upon#2019-11-0615:53ghadi:tags/version 4
then send in [:db/cas entity :tags/version 4 5] alongside your asserts+retracts#2019-11-0615:55BrianThank you @ghadi! That gives me exactly what I needed to think about 😃#2019-11-0618:55Jon Walch@marshall Is this documentation still up to date? https://docs.datomic.com/cloud/operation/client-applications.html#create-endpoint I don't see "LoadBalancerName" in my Datomic CloudFormation Output section. I'm using a Solo topology. Look like one can't connect using this method for a solo topology. Do I have to do VPC peering for solo?#2019-11-0619:37Jon WalchAccessing Datomic from a separate VPC in versions older than 388 only can be achieved with VPC Peering Well I'm on the latest version so this seems out of the question too. What am I supposed to do?#2019-11-0619:52Jon WalchJust tried adding my EKS VPC to the private datomic route 53 hosted zone, no dice on that either#2019-11-0620:12marshallYou could run the SOCKS proxy in your EKS vpc#2019-11-0620:30Jon WalchThanks! I think I'm just going to go with Production#2019-11-0622:32Jon WalchDoes anyone know where VpcEndpointDns is? https://docs.datomic.com/cloud/operation/client-applications.html#2019-11-0711:36bartukahi, what is the appropriate way to perform setup & teardown of datomic databases when using datomic cloud? I am using mount and creating a {:db-name "my-db-test"} before testing facts with midje and when it finishes I have a function call to d/delete-database which seems perfect fine for me. However, very often I got an error in the subsequent tests saying:
#error {
repl_1 | :cause :db.error/db-deleted 463c4ecf-0733-4afd-a41a-16449265372a has been deleted
repl_1 | :data {:datomic.client-spi/context-id cc5b3341-b4c8-40e9-8cb6-2c7b1fec2f4d, :cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message :db.error/db-deleted 463c4ecf-0733-4afd-a41a-16449265372a has been deleted, :dbs [{:database-id 463c4ecf-0733-4afd-a41a-16449265372a, :t 108, :next-t 109, :history false}]}
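The fix that comes up in the replies, using a throwaway database name per test run instead of deleting and recreating the same name, can be sketched as a fixture like this (the fixture shape and names are illustrative, not from the thread):

```clojure
;; Sketch: create a uniquely named database, run the test body against
;; a connection to it, then delete it. Because the name is never
;; reused, the :db.error/db-deleted race above goes away.
(require '[datomic.client.api :as d])

(defn with-ephemeral-db [client f]
  (let [db-name (str "test-" (java.util.UUID/randomUUID))]
    (d/create-database client {:db-name db-name})
    (try
      (f (d/connect client {:db-name db-name}))
      (finally
        (d/delete-database client {:db-name db-name})))))
```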
#2019-11-0711:36bartukaI have a retry logic for that, but it seems not alright and very often the max-retries is reached.#2019-11-0716:25ghadiuse ephemeral names for your database -- don't delete and recreate a db with the same name everytime#2019-11-0717:23bartukayes, I just did that and worked out ok! Thanks!#2019-11-0716:24dmarjenburghWe are hitting the limit of the 4kb bytes per string value in datomic. What are the limitations/consequences of transacting datoms with, say an 8kb string?#2019-11-0716:26dmarjenburghBy hitting the limit, I mean our users really want to store bigger text fields. I’m trying not to have to build something that splits the string it and combines it when querying. We already treat the string as opague (it’s gzipped and base64Encoded before it goes into datomic)#2019-11-0716:33ghadi@dmarjenburgh since you already treat it as opaque, it wouldn't be a big stretch to store it elsewhere#2019-11-0716:34ghadi[:db/add e a (content-hash text)]
#2019-11-0716:34ghadithen store the text somewhere else, keyed by content-hash#2019-11-0716:52dmarjenburghI'm trying to avoid that to keep latency down and the application simpler. I'm wondering why the limit exists.#2019-11-0717:34marshallDatomic is not a BLOB store and does not support storing large opaque objects in datoms
We understand this use case and are considering options, but for now, the suggestion from @U050ECB92 to store them out of band is definitely the best approach#2019-11-0722:02henrikWe’ve reached for DynamoDB for smaller stuff, and S3 for file-sized things, and it’s worked out OK. DDB adds something like 10ms on top, worst case. Not excellent, but good enough for our use case.
Of course, you miss out on the automatic disk/in-memory caching that Datomic otherwise handles, and may end up hitting DDB quite a lot unless you explicitly handle it in some custom manner.#2019-11-0812:15dmarjenburghI understand it’s not blob store and we already use s3 for binary data with a reference in datomic. I’m also not saying there shouldn’t be a limit, I’m just trying to understand why the limit is set at 4kb and what the tradeoffs are for storing text that happens to be a bit larger.
As it stands datomic will actually happily allow larger strings (I’ve tried strings up to 16kb) and the transaction succeeds and can retrieve the values back seemingly without issue. I see 2 obvious cons:
- You can cache fewer datoms in memory
- Queries filtering against the large string value will be slower.
Maybe I’m missing something else. DynamoDB allows transactions up to 10MB so I don’t view that as the limit.
I need to weigh this against the cost of storing strings of, say, 8kb somewhere else from a business/development perspective. Introduced complexity in the application logic, losing transactionality, adding latency and costing development time. I hope you understand where I’m coming from#2019-11-0717:20calebpIf we are already subscribed to Datomic Cloud, do we need to go through the marketplace interface to create a new system? Can we just create the system directly in Cloud Formation using the appropriate templates?#2019-11-0717:23calebpLooks like yes according to the first line here https://docs.datomic.com/cloud/getting-started/start-system.html#2019-11-0717:33marshallYes, you can absolutely get the templates directly from our releases page and launch that way @calebp https://docs.datomic.com/cloud/operation/new-system.html#2019-11-0717:34calebpThanks @marshall. That makes life easier#2019-11-0717:49Luke Schubertdo rules with multiple definitions evaluate in order?#2019-11-0717:49Luke Schubertor phrased differently, do they short circuit like ORs?#2019-11-0718:55hiredmanThey are not ors, I don't know the internals, but it is like a logic program, both branches are taken#2019-11-0718:56Luke Schubertthanks#2019-11-0804:25Jon WalchAnyone seen this one before? I attached a Service Account to my EKS Cluster with S3ReadOnlyPerms, so it should be able to access it. If I'm using a VPC Endpoint to connect to my Datomic VPC, which VPC Endpoint DNS name am I supposed to use?
{:type clojure.lang.ExceptionInfo
:message Forbidden to read keyfile at s3://<redacted>/datomic/access/admin/.keys. Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.
:data {:cognitect.anomalies/category :cognitect.anomalies/forbidden, :cognitect.anomalies/message Forbidden to read keyfile at s3://<redacted>/datomic/access/admin/.keys. Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.
#2019-11-0804:27Jon WalchDoes my application need more than S3ReadOnly to access that keyfile?#2019-11-0804:28Jon WalchUsing the VPC Endpoint
You must use the VPC Endpoint DNS (or Route53 entry if you created one) and port 8182 for the :endpoint parameter in your Datomic client configuration when connecting from your VPC:
(def cfg {:server-type :ion
          :region "<your AWS Region>" ;; e.g. us-east-1
          :system "<system-name>"
          :endpoint "http://<VpcEndpointDns>:8182"})
The endpoint DNS name can be found in the Outputs of the VPC Endpoint CloudFormation Stack under the VpcEndpointDns key.
#2019-11-0804:28Jon WalchVpcEndpointDns no longer exists#2019-11-0806:20onetomis there some concise idiom for replacing eids in :tx-data returned by d/transact or d/with, so we can see attribute idents at least?#2019-11-0806:35onetomsomething like
(->> (d/with (d/db conn) [])
     ((fn [{:keys [db-after tx-data]}]
        (map (fn [datom]
               (map #(or (d/ident db-after %) %)
                    ((juxt :e :a :v :tx :added) datom)))
             tx-data))))
=> ((13194140516868 :db/txInstant #inst"2019-11-08T06:34:31.229-00:00" 13194140516868 true))
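A slightly more compact spelling of the same idea (a sketch using the on-prem peer API from the snippet; only the attribute position is resolved, since that is where idents usually matter):

```clojure
;; Resolve each datom's attribute id to its keyword ident for
;; readability; datoms support keyword lookup of :e :a :v :tx :added.
(defn readable-tx-data [{:keys [db-after tx-data]}]
  (for [{:keys [e a v tx added]} tx-data]
    [e (d/ident db-after a) v tx added]))
```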
#2019-11-0819:14pvillegas12I’m getting
http-endpoint fail failed
"Type": "java.lang.IllegalStateException",
"Message": "AsyncContext completed and/or Request lifecycle recycled",
"At": [
"org.eclipse.jetty.server.AsyncContextState",
"state",
"AsyncContextState.java",
54
]
in my datomic logs#2019-11-0819:30pvillegas12Is entity/index a reserved keyword in datomic for schema?#2019-11-0822:56BrianShould be a simple question if someone can answer it regarding transaction functions.
I'm making this call which uses a transaction function update-tags-tx:
(let [hash (ffirst hashes)
      hash-type (second (first hashes))
      tx ['(update-tags-tx hash hash-type tags)]]
  (d/transact
   conn
   {:tx-data tx}))
But getting Unable to resolve entity: hash-type which to me means that's not being evaluated due to the ' which makes sense. In the docs (https://docs.datomic.com/cloud/transactions/transaction-functions.html#calling) they use raw values. How can I do this with variables?#2019-11-0823:07ghadiQuote the symbol alone instead of the whole list#2019-11-0920:36pvillegas12Getting No implementation of method: :-event of protocol: #'datomic.ion.cast.impl/Cast found for class: nil from my ion in development#2019-11-0920:36pvillegas12Upgraded to 0.9.234 - ion-dev, has anybody else seen this? Doing a regular (cast/event {:msg "MyEvent" ::data {...}})#2019-11-0920:38pvillegas12Doing (require '[datomic.ion.cast :as cast])#2019-11-0920:51pvillegas12Using 0.9.34 ion#2019-11-1010:25erikhttps://www.dcc.fc.up.pt/~ricroc/homepage/publications/leap/2013-WFLP.pdf
> A Datalog Engine for GPUs
> Abstract. We present the design and evaluation of a Datalog engine for execution in Graphics Processing Units (GPUs). The engine evaluates recursive and non-recursive Datalog queries using a bottom-up approach based on typical relational operators. It includes a memory management scheme that automatically swaps data between memory in the host platform (a multicore) and memory in the GPU in order to reduce the number of memory transfers.
> To evaluate the performance of the engine, three Datalog queries were run on the engine and on a single CPU in the multicore host. One query runs up to 200 times faster on the (GPU) engine than on the CPU.
any likelihood this will ever be relevant to Datomic?#2019-11-1015:10cjmurphyIs it a good idea or even possible for entity attributes to have names that are integrated with spec? So something like com.some-company-name.bank-statement/line-item rather than bank-statement/line-item. Is there already documentation/discussion on this?#2019-11-1016:46ghadihttps://docs.datomic.com/cloud/schema/schema-reference.html#attribute-predicates
https://docs.datomic.com/cloud/schema/schema-reference.html#entity-specs
@cjmurphy #2019-11-1016:47ghadi(Yes it is a good idea)#2019-11-1016:56cjmurphyThanks. In that documentation I see :user/name, but never :i.am.a.spec.user/name, or ::user/name. That's what was confusing me.#2019-11-1016:58cjmurphyWhat I was thinking about was not using any special feature of Datomic, just having spec kind of namespaces.#2019-11-1016:59ghadiAny of those kws are fine to register as names of specs. I would choose one name and be consistent#2019-11-1016:59ghadiYeah you can use specs to validate transaction data payloads without using those features above#2019-11-1017:46cjmurphyThanks @U050ECB92, am using the long form of namespaces now, but always with :: in the code, including in pull syntax. Only this long form can be validated by spec - that was my motivation (for others reading this).#2019-11-1017:51cjmurphyAs part of doing this I'm creating namespaces (i.e. files) that serve no purpose other than to be used in :require. Feels like going a bit off the beaten path to be doing this, hence I was looking for some confidence boosting validation 🙂#2019-11-1022:19ssdevIs it possible to export a datomic database in datomic cloud? I see how to do it with on prem version, but can't find how with cloud version#2019-11-1104:57onetomiirc a few days ago someone here said it's on of the drawbacks of the cloud version of datomic that there is no way to export it (and then import it into an on-prem datomic setup)
can't remember though whether his statement was refuted or not.#2019-11-1213:20erikbtw is it not possible to write a straightforward Clojure script to inspect the schema in the cloud DB and generate the import-export code?#2019-11-1409:45onetomno idea. im only familiar with on-prem so far#2019-11-1022:47pvillegas120.9.34 ion is broken for (cast/event ...), had to downgrade to 0.9.28 ion#2019-11-1114:05bartukahi, when should I use the async api?#2019-11-1115:07dangercodernon-blocking applications:
https://blog.codecentric.de/en/2019/04/explain-non-blocking-i-o-like-im-five/ @iagwanderson#2019-11-1118:35bartukathanks, very good reading!#2019-11-1119:13dangercoderyou're welcome 🙂#2019-11-1119:39bartukanot sure, but I am trying to write a service using rabbitmq and core.async. I am handling backpressure and parallelism nicely but when I introduce parts of the code with blocking I/O the execution freezex#2019-11-1119:40bartukaI am trying to make 30 find queries into datomic cloud simultaneously. I get a Client Timeout out of this#2019-11-1119:42bartukathis behavior is expected?#2019-11-1200:31ssdevAnyone know why if I try to use d/entity I get an error No such var: d/entity?#2019-11-1200:33alexmillerthere is no entity in the Client API, maybe you're using that?#2019-11-1200:35ssdevoh. yes I am. so, I would need to just use datomic.api I suppose?#2019-11-1200:35alexmillerkind of depends what you're trying to do#2019-11-1200:36alexmillerare you using on-prem or cloud?#2019-11-1200:37ssdevyeah so, clearly I'm a noob here. I'm trying to get up to speed on datomic cloud. I've managed to create a schema that creates users with first name, last name, user name, email address, and some settings. I'm trying to query for all the settings of a specific user, but what I get back looks like it's the entity id.#2019-11-1200:38ssdevSo I was trying to get that setting based on the entity id. But perhaps I'm way off in trying to do that#2019-11-1200:40ssdevI was running this query and hoping to get back the actual settings for a user -
(d/q '[:find ?settings
       :where [?user :user/username ?username]
       [?user :user/settings ?settings]]
     db "myusername")
instead I get back [[79] [80]]#2019-11-1200:46alexmilleryes, the entity api is only available for peers in Datomic On-prem, so that won't be available. I would recommend looking at the Pull API instead https://docs.datomic.com/cloud/query/query-pull.html#2019-11-1200:47alexmillerthat will let you pull back the data for the selected users in the shape you want#2019-11-1200:55ssdevOk thanks. I'm curious when one should use pull instead of query. Is pull just more commonly used for retrieving multiple nested values?#2019-11-1201:27alexmillerquerying will primarily give you back tuples - if that's good, then use that. if you instead wanted more map-like nested data, then use pull#2019-11-1201:27alexmillerand of course you can use them together! which is kind of shown on that page#2019-11-1201:45ssdevOk cool. Thanks#2019-11-1214:34thumbnailI noticed that arguments of db fns in our peer server are Java types, where i'd have expected clojure datastructures. Is this deliberate?#2019-11-1214:35favilaExample? You mean like ArrayList instead of Vector when returning from an aggregating query?#2019-11-1214:35favilaor do you mean something else?#2019-11-1214:35thumbnailExactly that#2019-11-1214:36favilaI think it’s done for efficiency only#2019-11-1214:36favilaIt’s probably implemented with r/cat#2019-11-1214:36favila(which uses ArrayList underneath)#2019-11-1214:36thumbnailIt is also happening for the arguments that are passed into the db-fn.#2019-11-1214:40favilais this client api?#2019-11-1214:41favilaor transaction fn?#2019-11-1214:41favilawhat’s the context?#2019-11-1214:41favilaI know seqs sometimes get decoded out of fressian as arraylists#2019-11-1214:42favilabut if the query is running using peer api, your inputs are in-process. nothing is getting coerced#2019-11-1214:42favilaso in that case it’s likely you really are passing in what you get#2019-11-1214:50thumbnailIt’s in regards of a transaction function. So i transact a db/fn into the peer (i think?) 
which accepts an argument.
When i use the client api to invoke that function in a transaction, and check the type of the argument it’s the java equivalent. so a java.util.Arrays$ArrayList for example.#2019-11-1214:53favilaIt’s the “in a transaction” part#2019-11-1214:53favilayour input was serialized and sent to the transactor#2019-11-1214:53favilathe function is running on the transactor#2019-11-1214:54favilaso a side effect of the serialization/deserialization was a change in type#2019-11-1214:54thumbnailYes, i figured that was the reason. I’m just curious whether this is considered a bug or is deliberate#2019-11-1214:55thumbnailIt caused some confusion on our staging env because our dev-setup uses https://github.com/ComputeSoftware/datomic-client-memdb, which doesn’t have the same side effect#2019-11-1214:57favilaNot sure how deliberate it is, but the defaults for fressian are very lazy about preserving types exactly#2019-11-1214:57favilapretty much anything sequential will pop out as an arraylist#2019-11-1215:05thumbnailHmmm, will keep it in mind then. Is there any way to get the types so that they’ll work properly with clj or should i encode the data myself in that case?#2019-11-1215:14favilayou don’t have control over this AFAIK#2019-11-1215:14alexmilleryou should be careful with only relying on what's documented in the apis and not necessarily expect any particular concrete types. Java types are used because Datomic apis can be used from other jvm langs (Java, Groovy, etc)#2019-11-1217:03BrianIs this valid data for a Datomic Cloud transaction function to return?
[[:db/add 56246616830509142 :tags :untrusted]
[:db/add 56246616830509142 :tags :verified]
[:db/retract 56246616830509142 :tags :unknown]]
#2019-11-1217:06favilaThe shape is correct, but validity depends on the schema of :tags#2019-11-1217:07Brian{:db/ident :tags
:db/valueType :db.type/keyword
:db/cardinality :db.cardinality/many
:db.attr/preds 'valid-tag?}
#2019-11-1217:08favilaok, so the types are valid; now it depends on whether the valid-tag? predicate returns true#2019-11-1217:13BrianI'm very confident that part is working properly as I've plugged things in to test it.
The problem I'm now having is:
(let [tx [(list* 'update-tags-tx hash hash-type tags)]]
  (d/transact
   conn
   {:tx-data tx}))
is returning count not supported on this type: Keyword. Any ideas?#2019-11-1217:14BrianJust to be thorough:
(def valid-tags #{:trusted :untrusted :unknown :accepted :verified :unauthorized :malicious})

(defn valid-tag? [tags]
  (every? valid-tags [tags]))
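Per ghadi's suggestion above to quote the symbol alone instead of the whole list, the invocation can be built with `list` so that the locals are evaluated (a sketch using the names from the thread):

```clojure
;; Quote only the function symbol; `list` evaluates hash, hash-type
;; and tags. (Inside '(update-tags-tx hash hash-type tags) the whole
;; form is data, so the symbols are never resolved -- hence the
;; "Unable to resolve entity: hash-type" error.)
(let [tx [(list 'update-tags-tx hash hash-type tags)]]
  (d/transact conn {:tx-data tx}))
```

Note that `list*` would splice a collection bound to `tags` into separate trailing arguments, which matters here because the transaction function takes the tag set as a single argument.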
#2019-11-1217:24BrianOne thing I'm noticing is that the tx variable is wrapping the output of update-tags-tx in brackets; however, update-tags-tx is already returning a [[...] [...] [...]], so we're seemingly triply wrapping that, which is odd to me. But if I don't do that, then d/transact yells at me and says it must be a list or a map#2019-11-1217:36BrianThis is my transaction function; when I test it at the repl it works totally fine, but perhaps there is some count going on in here that I'm missing?
(defn update-tags-tx
  "Transaction function which when given a map of hashes and a set of tags, will find the
  entity who has those hashes and will update that entity's tags"
  [db hash hash-type new-tags]
  (let [eid (ffirst
              (d/q
                '[:find ?e
                  :in $ ?hash ?hash-type
                  :where
                  [?e ?hash-type ?hash]]
                db
                hash
                hash-type))
        current-tags (set (:tags (d/pull db '[:tags] eid)))
        tags-to-add (clojure.set/difference new-tags current-tags)
        tags-to-retract (clojure.set/difference current-tags new-tags)
        tx (mapv (fn [addition] [:db/add eid :tags addition]) tags-to-add)
        retractions (mapv (fn [retract] [:db/retract eid :tags retract]) tags-to-retract)
        tx (reduce
             (fn [state retraction]
               (conj state retraction))
             tx
             retractions)]
    tx))
#2019-11-1217:38favilaIsn’t list* not what you want here? #2019-11-1217:38favilaTags is one arg, not & tags#2019-11-1217:39favilaAnyway I don’t see a count in there. Does the ex-info data on the exception give clues as to what stage is failing, tx or ensure?#2019-11-1217:44Briantags referring to the valid-tags? function? I noticed that name was wrong too tag would be what I want.
I chose (list* ...) based on https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L172 because no variation of https://docs.datomic.com/cloud/transactions/transaction-functions.html#calling worked without errors#2019-11-1217:24johnjwhat's the common naming style for attrs? some places I see :user/firstName - others :user/first-name ?#2019-11-1404:04dvingoIs there a recommended practice for the scenario where multiple developers want to iterate (deploy code multiple times a day using :uname for example) on one datomic cloud stack? We're concerned with deploys overwriting each other, resulting in in-development functions being removed by whoever deployed last because their code doesn't have the in-dev code of someone else. One strategy we're considering is to all work on one git branch and always pull and push before deploying, but it would be great if there is a strategy that doesn't involve coordination.#2019-11-1406:17tatutwe are doing development locally with http-kit instead of pushing ions. Each developer has their own database, named "$USER-dev" they can freely play around in#2019-11-1406:17tatutthat doesn't work for tx functions#2019-11-1415:08dvingoyep, we are doing similar with jetty. I'm wondering specifically about deploys. Is the only solution one query group/stack per developer?#2019-11-1506:12tatutnot an expert in that, but the solo topology is so cheap you could easily have one for each developer#2019-11-1415:20grzmCurrently attempting to upgrade to com.datomic/ion-dev "0.9.240" from "0.9.231" and seeing
Execution error (IllegalArgumentException) at datomic.ion.cast.impl/fn$G (impl.clj:14).
No implementation of method: :-event of protocol: #'datomic.ion.cast.impl/Cast found for class: nil
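The workaround that surfaces later in the thread is to initialize a cast redirect before the first cast call when running locally; for example:

```clojure
;; Redirect casts to stdout for local development so the Cast
;; protocol has an implementation before the first cast/event call.
(require '[datomic.ion.cast :as cast])

(cast/initialize-redirect :stdout)
(cast/event {:msg "AppStarted"})
```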
#2019-11-1415:23grzmThis is during a normal cast/event call. The only thing I've changed is updated my deps. Same issue @U6Y72LQ4A reported over the weekend. Anyone else seeing this issue?#2019-11-1415:26grzmThe immediate issue is whether I need to update ion-dev to use the analytics features released in 512-8806.#2019-11-1415:39grzmIt looks like the issue is tied to the com.datomic/ion "0.9.35" release. If I leave ion at 0.9.34, it's fine.#2019-11-1415:47grzmIn my setup: com.datomic/ion "0.9.35" + com.datomic/ion-dev "0.9.234" doesn't work. com.datomic/ion "0.9.34" + com.datomic/ion-dev "0.9.240" does.#2019-11-1418:34pvillegas12Agree on this downgrade solving my problem as well#2019-11-1418:36grzmper @U051V5LLP, looks like calling (datomic.ion.cast/initialize-redirect :stdout) (or likely another redirect option) prior to making any cast is a viable workaround until this gets fixed.#2019-11-1415:39dvingois this happening when running the code locally?#2019-11-1415:39grzmYes. ion-dev is only used locally.#2019-11-1415:47dvingoI found that this worked to get rid of those errors locally: (datomic.ion.cast/initialize-redirect :stdout) - invoke it early in your code before any calls to cast#2019-11-1415:50grzmThat's interesting. I see that works here as well. That looks like a regression. This wasn't a requirement previously.#2019-11-1417:32grzm@U05120CBV What would be your preferred method of tracking this issue? Should I open a support ticket?#2019-11-1417:34marshallyes please#2019-11-1417:45grzmDone! Thanks!#2019-11-1417:39ssdevI'm currently seeing some strange behavior. It seems like my code is not being updated with my deploy. No matter what I deploy the same response keeps getting returned. At one point I deployed a function that returned some json (ex: {showGrid: true}), and even if I go in and hard code what that function should return (now just some text that says "test") it still returns that same json as before. There haven't been any deploy failures. 
Anyone have any ideas?#2019-11-1420:43henrikDoes it pass through a caching layer? CloudFront?#2019-11-1420:58ssdevno. and in fact invoking the function directly in the terminal seems to return the old json as well.#2019-11-1419:35Ian FernandezPeople, I want to store a field with a java.time.ZonedDateTime into Datomic, some recommendations?#2019-11-1419:51favilastore in a tuple?#2019-11-1419:52favila> A ZonedDateTime holds state equivalent to three separate objects, a LocalDateTime, a ZoneId and the resolved ZoneOffset. The offset and local date-time are used to define an instant when necessary. The zone ID is used to obtain the rules for how and when the offset changes. The offset cannot be freely set, as the zone controls which offsets are valid.#2019-11-1419:52favila(from javadoc)#2019-11-1419:54favilahttps://docs.datomic.com/on-prem/schema.html#heterogeneous-tuples#2019-11-1419:55favilamaybe encode the ymd as one long, hms as another, another long for offset, and a string for zoneid#2019-11-1419:56favilaif you want to ensure temporal sort order, perhaps the first element in the tuple can be the instant (java util date, or just a long of ms since the epoc) that the zoned-date-time would convert to#2019-11-1420:02ghadithis guy datomics#2019-11-1419:35Ian Fernandezmake another field with the Zone?#2019-11-1421:34ssdevupdate on the above issue, when we change the namespace of the code we are deploying, we then see the code update, but if we push and deploy the original namespace with different code, we never see updates, we see the old code running and returning that same old json object. Anyone know why this may be?#2019-11-1422:21m0smithHow do I move an Ion from a staging to a production environment when the :app-name has to be defined in the ion-config.edn but needs to change across environments?#2019-11-1423:34steveb8nQ: is there any kind of shutdown hook available when deploying a new Ion version? 
I want to call the component/stop fn in my servers so that resources are properly cleaned up#2019-11-1503:56onetomit seems i can't rename :db/id in pull expressions with the :as option:
(d/pull-many db '[[:db/id :as :eid]
                  [:txn/merchant :as :XXX]]
             txns)
outputs:
[{:db/id 17592186045422}
{:db/id 17592186045423}
{:XXX {:db/id 17592186045418}, :db/id 17592186045424}
{:XXX {:db/id 17592186045418}, :db/id 17592186045425}]
is that intentional?
i can't find it documented in https://docs.datomic.com/on-prem/pull.html#as-option#2019-11-1816:10matthavenerI vaguely remember someone else talking about this a few months ago and a datomic rep said it was a known limitation or something#2019-11-1512:52Luke Schubertis there a sane/performant way to accumulate a concept of a score in a datalog query?#2019-11-1512:55Luke Schubertto get the idea of what I'm going for is given two people
Name | ArbitraryField | Id
Bob | A123 | 1
Steve | B321 | 2
I want to be able to run a query for (Bob, B321) where Name gives x points and ArbitraryField gives y points on a match and both are returned#2019-11-1513:04Luke SchubertI'm also fine with it returning something like
[[1 [:name]] [2 [:arbitrary-field]]]
#2019-11-1515:30benoitDatomic works with set so you will have to associate the score to each result yourself with something of this shape:
(or-join [?name-q ?arbitrary-q ?id ?points]
  (and [?id :name ?name-q]
       [(ground x) ?points])
  (and [?id :arbitrary ?arbitrary-q]
       [(ground y) ?points]))
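Filling in that shape, a full scoring query might look like this — a sketch only: `:name` / `:arbitrary` and the weights 1 and 2 stand in for the real attributes and the `x` / `y` values above — using `sum` to accumulate points per entity:

```clojure
;; Hypothetical scoring query built on the or-join shape above.
;; Each branch that matches grounds a weight into ?points, and the
;; sum aggregate adds them up per ?id.
'[:find ?id (sum ?points)
  :in $ ?name-q ?arbitrary-q
  :where
  (or-join [?id ?name-q ?arbitrary-q ?points]
    (and [?id :name ?name-q]
         [(ground 1) ?points])
    (and [?id :arbitrary ?arbitrary-q]
         [(ground 2) ?points]))]
```

One caveat: query results are sets, so two branches that ground the same weight for the same `?id` would collapse into one row before `sum` sees them; distinct weights per branch (or a `:with` clause) avoid that.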
#2019-11-1515:48Luke Schubertah that's exactly what I'm looking for, thanks#2019-11-1515:51johnjBesides the UUID type, what other methods are there to generate unique natural numbers to store in a Long type without collisions? like the ones datomic generates (`:db/id`)#2019-11-1519:32fjolneThere’s not enough space for a universal uniqueness in 64 bits (that’s why all versions of UUIDs are 128 bit). Datomic likely uses the fact that all the new entity ids are generated sequentially (due to sequential transactor), so it could guarantee uniqueness via counters and/or clocks. #2019-11-1519:47fjolneI’d go with tx function which uses aggr max / index lookup + ensure attr has an AVET index. This should yield an O(1) / O(log n) time complexity. Query first approach would require CAS.#2019-11-1520:02fjolneAnd it would probably be more efficient to go down by negative ids, this way index lookup would require to realize only the head of the lazy index: (dec (first (index-range db :your/attr nil nil)))#2019-11-1520:33fjolneClock value (micro/nanoseconds) should also be ok in tx function due to sequentiality of txs (but not precalculated as tx data, as those are not sequential). IIRC they used it in Crux for :db/ids. 
And it's probably worth mentioning that both approaches generate ids which are easy for a 3rd party to guess (not to say UUIDs are too hard to guess, but still).#2019-11-1617:02johnjinsightful, thanks, taking notes#2019-11-1515:53johnjcould implement auto-increment as a tx function or by doing a query first, but that seems inefficient#2019-11-1519:16dvingoWhy not just use UUID?#2019-11-1523:42hiredmanhttps://docs.datomic.com/on-prem/identity.html#orgdbd68d2#2019-11-1523:43hiredmanthere are squuids, which are more or less uuids, which datomic generates to be sort of sequential which is sort of like http://yellerapp.com/posts/2015-02-09-flake-ids.html#2019-11-1620:22hadilsQ: My release Clojure code (shown below) no longer works on a git commit; I have to supply a uname argument to release to the cloud. Anyone else experiencing this? Is there a fix?
Here's the code:
(defn release
  "Do push and deploy of app. Supports stable and unstable releases. Returns when deploy finishes running."
  [args]
  (try
    (let [push-data (ion-dev/push args)
          deploy-args (merge (select-keys args [:creds-profile :region :uname])
                             (select-keys push-data [:rev])
                             {:group (group)})]
      (let [deploy-data (ion-dev/deploy deploy-args)
            deploy-status-args (merge (select-keys args [:creds-profile :region])
                                      (select-keys deploy-data [:execution-arn]))]
        (loop []
          (let [status-data (ion-dev/deploy-status deploy-status-args)]
            (if (= "RUNNING" (:code-deploy-status status-data))
              (do (Thread/sleep 5000) (recur))
              status-data)))))
    (catch Exception e
      {:deploy-status "ERROR"
       :message (.getMessage e)})))
I am currently on com.datomic/client-cloud "0.8.78" and com.datomic/ion-dev "0.9.240"
Here's the error message:
(release {})
=> {:deploy-status "ERROR", :message "You must either specify a uname or deploy from clean git commit"}
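As the error text says, ion-dev wants either a clean git commit or an explicit `:uname` (a name for an unstable release). A minimal sketch against the `release` fn above, with an illustrative uname value:

```clojure
;; With uncommitted changes in the working tree, name the unstable
;; release explicitly (the uname value here is illustrative):
(release {:uname "hadils-dev"})

;; Or commit first and release as before for a stable release:
;; (release {})
```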
#2019-11-1703:21hadilsNvm I figured out the problem.#2019-11-1809:03onetomis it not possible to use reverse navigation style in tx-data within entity maps?
im getting an invalid lookup ref error:
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message "Invalid list form: [#:db{:id 17592186045418}]",
:db/error :db.error/invalid-lookup-ref}
when trying this:
{:db/ident :new-entity-being-pointed-to-by-a-card-many-attr
:card-many/_attribute [{:db/id 132} {:db/id 345} ...]}
or "Invalid list form: [17592186045418]" when just trying :card-many/_attribute [123 345]
it would be the inverse of a pull expression containing reverse navigation, eg:
;; pattern
[:artist/_country]
;; result
{:artist/_country [{:db/id 17592186045751} {:db/id 17592186045755} ...]}
https://docs.datomic.com/on-prem/pull.html#org31dcc1a#2019-11-1809:43mavbozo@onetom It's possible, but you have to specify the relationship one-by-one. e.g:#2019-11-1809:44mavbozo[{:db/ident :new-entity-being-pointed-to-by-a-card-many-attr
:card-many/_attribute 132}
{:db/ident :new-entity-being-pointed-to-by-a-card-many-attr
:card-many/_attribute 345}
{:db/ident :new-entity-being-pointed-to-by-a-card-many-attr
:card-many/_attribute ...}
{:db/ident :new-entity-being-pointed-to-by-a-card-many-attr
:card-many/_attribute ...}]#2019-11-1810:41onetomhmm... interesting. thx.
i guess it's simpler to just use the forward reference and the :db/add function style#2019-11-1814:11babardoDatomic cloud question: is it possible to run local-only integration tests without access to aws infra?#2019-11-1814:11babardoI tried https://github.com/ComputeSoftware/datomic-client-memdb based on the peer library for an in-memory db.
But it looks like it doesn't support tuples during schema creation.#2019-11-1814:18favilait uses datomic-free by default, which hasn’t been updated in a while (i.e. since before tuples were introduced)#2019-11-1814:18favilatry excluding that and depending on a recent datomic-pro#2019-11-1814:18babardoOk i'll try that#2019-11-1815:35babardoThanks you, it worked with a datomic pro.#2019-11-1815:35babardoBut what about licensing ? (my company already is on a datomic cloud plan)#2019-11-1815:39favilahow did you get datomic-pro without a license? starter license?#2019-11-1815:41favilaanyway, all on-prem licenses are perpetual, so you can keep using this forever. plus you are not actually running a transactor#2019-11-1815:42favilaagreed this is an odd situation#2019-11-1815:43babardook thanks for your help, we'll try to find an answer from our side 🙂#2019-11-1818:59folconI don’t remember datomic string being limited to 256 chars, is this a change? Or am I misremembering?#2019-11-1819:03favilait should be 4096 and only on cloud#2019-11-1819:03favilahttps://docs.datomic.com/cloud/schema/schema-reference.html#org5a18448#2019-11-1819:12folcon@U09R86PA4 just wondering if there was ever any plan to do edn or blob type? 
Or is string supposed to be for that usecase?#2019-11-1819:14favilathey never give roadmaps, so I donno for sure, but this table talks about “LOB” types: https://docs.datomic.com/on-prem/moving-to-cloud.html#other#2019-11-1819:15favilaLikely this means the data goes to s3 and a pointer is stored in datomic#2019-11-1819:16favilathis is a technique you should use with large objects in datomic anyway (strings or binary) even for on-prem#2019-11-1819:16favilaon-prem doesn’t have hard size limits, but it’s still a bad idea#2019-11-1819:18folconYea, that’s the problem.#2019-11-1819:18folconIt worries me a little that this hasn’t been addressed yet…#2019-11-1819:18folconThanks though =)…#2019-11-1819:23folcon@U064X3EF3 Sorry to bug you, but just wondering if there’s any way of knowing if/when LOB types are planned for?#2019-11-1819:37alexmillerI'm not on the Datomic team#2019-11-1819:38alexmillerso I don't know any more than you :)#2019-11-1819:51folconFair enough 😃..#2019-11-1818:59folconCurrently trying to setup an import operation which is a bit fiddly#2019-11-1819:01colinkahnAre there any tools to validate datalog? For instance you can’t use (and ...) as a direct descendant of :where. My use case is to validate something that programmatically generates datalog from some input.#2019-11-1819:12alexmillerhttps://lambdaforge.io/2019/11/08/clj-kondo-datalog-support.html is new, might help#2019-11-1819:15dvingoFor anyone who may run into this in the future....
Our team was seeing datomic cloud deploys work for one developer while failing for another developer when we had the exact same clj files, deps.edn, and ion-config.edn files (copied and pasted from the dev who successfully deployed). The deploy ended up working on a clean git clone into a new directory. We figured out that we had run a "compile" in the local directory of the dev with the failing build and the "classes" directory was being executed instead of the new clj source files. Removing the classes directory solved our problem and we can now deploy.....#2019-11-1905:12onetomhow can i access the result of a built-in transaction function like :db/retractEntity so i can modify it?
i would need to replace some of the retractions with assertions containing new computed values#2019-11-1906:05onetomanswering my question:
(->> :db/retractEntity (d/entity db) d/touch :db/code)
reveals how it works:
=> "(clojure.core/fn [db e] (datomic.builtins/build-retract-args db e))"
and indeed it works:
(datomic.builtins/build-retract-args db :x)
=> [[2 17592186045418 10 :x]]
however, since it's not documented anywhere, i'm a bit hesitant to use it 😕#2019-11-1912:16favilaUse d/invoke instead
https://www.youtube.com/watch?v=JaZ1Tm6ixCY#2019-11-1920:49dvingohmm, not as useful as I thought - it looks like a db created from the datomic.api ns cannot be passed to (d/q) in datomic.client.api..#2019-11-1920:49dvingoGetting: Query args must include a database when doing so#2019-11-1920:57dvingo(shrug) will just do this then:
(defn get-user-settings*
  ([db username]
   (get-user-settings* d/q db username)) ;; <-- this is datomic.client.api
  ([q db username] ;; <-- in tests pass in datomic.api/q
   (->> (q query-all-settings db username)
        xform-user-settings)))
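One way to exercise the injected-`q` arity above in a test, sketched against the on-prem peer library (datomic.api) with a throwaway in-memory database — `with-mem-db` and the username are illustrative, and the schema/seed data transact is elided:

```clojure
(require '[datomic.api :as peer])

(defn with-mem-db
  "Create a throwaway in-memory database, call f with a db value,
  then delete the database."
  [f]
  (let [uri (str "datomic:mem://" (gensym "test"))]
    (peer/create-database uri)
    (try
      (f (peer/db (peer/connect uri)))
      (finally (peer/delete-database uri)))))

;; In a test, pass datomic.api/q where the client api's q normally goes:
(with-mem-db
  (fn [db]
    (get-user-settings* peer/q db "some-username")))
```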
#2019-11-1921:00csmsomeone wrote a lib for testing datomic.client.api with an in-mem DB; I think it’s this: https://github.com/ComputeSoftware/datomic-client-memdb#2019-11-1921:03dvingoooh very cool. I'll take a look, thank you#2019-11-2000:20dvingohas anyone run into an issue where a "local/root" dependency will not be included in the zip file when doing a push? I'm getting a class not found error on deploy for one of the namespaces in a local root and when I unzipped the s3 asset it turned out that the source files are not being pulled into the build. I can compile and execute the code locally, so I'm at a loss for what's going on. There are also other local/root dependencies that are being included in the build.#2019-11-2001:07alexmillerlocal dep deps.edn changes may not force a recompute (at least in clj in general, not sure exactly about push). you might try using -Sforce#2019-11-2001:11dvingothanks! I'm not at my work computer so will let you know how it goes tomorrow#2019-11-2015:07dvingono luck 😞 even tried cloning the repo again to a new directory.#2019-11-2015:10dvingoany reason the datomic ions dev code couldn't be open sourced? This would make debugging problems like this at least tractable instead of poking random buttons of the opaque box.#2019-11-2016:49dvingoFigured out a way around this for now - to compile the app:
mkdir classes
clj -A:dev -e "(compile 'user-settings.core)"
;; add "classes" to deps.edn :paths
I have no idea why ions push is not working but the local/root deps are all in the classes and the deploy is now working. This seems like a bug.#2019-11-2020:06dmarjenburghI think only the files in the default class path are pushed, not paths in aliases. Maybe that could be it?#2019-11-2020:16dvingoThanks for the reply. Good call, unfortunately this is in the main :deps map#2019-11-2020:20dvingoAlso, the compile strategy stopped working and deploy was trying to run some very old version of the app. I'm not sure how this is happening or how I'm the first to run into this..#2019-11-2022:00dvingoOMG... it turned out to be that the local/root project did not have a :paths [] set. Add this :paths ["src"] got the dep to be included in the push....#2019-11-2022:02dvingoFigured it out because other local deps were being included just fine but they had :paths set.#2019-11-2016:34grzm@stuarthalloway As mentioned in person: Two nice-to-haves for the datomic cloud client api:
- some system generated unique value identifying a Datomic database so the application can confirm the database it's connecting to (say, a database with a particular db-name has been deleted and another created with the same name: I'd like to be able to detect that at an application level for things like automated testing)
- a way to map t value to tx eid, say I have a database, which returns a t value, I'd like to also know what tx that corresponds to.#2019-11-2103:01onetomwhen i was looking into how to get the uri of a database object i found this:
(.-db_id ^datomic.peer.Connection conn)
=> "m13n-8ba32b12-7f6e-4d64-bf95-f3e32c95d589"
im wondering if that uuid is actually such a db id u were talking about#2019-11-2020:03grzm@jaret Looking forward to seeing you at the Conj! As an aside, what's the current story with AWS integration testing with CodeDeploy/CodeBuild and cross-region S3 buckets? Is that still busticated? (Not that I consider it a Cognitect thing: I fully place that on AWS silliness.)#2019-11-2020:07jaretStill busticated in the sense you have to copy to a bucket in each region as far as I am aware.
> AWS CodeDeploy performs deployments with AWS resources located in the same region. To deploy an application to multiple regions, define the application in your target regions, copy the application bundle to an Amazon S3 bucket in each region, and then start the deployments using either a serial or parallel rollout across the regions.#2019-11-2020:08jaretI can double check with team and aws rep #2019-11-2020:09grzmThat would be AWSome. Feel free to invoke "our paying customers are (im)patiently waiting for this" on my behalf.#2019-11-2020:08grzmCheers. There are somethings that I really like about immutability: AWS immutability with respect to this issue is not one of them 😉#2019-11-2020:52grzmI've been getting some AWS emails about Nodejs 8.10 begin deprecated/removed in early 2020. I see that Datomic Cloud stuff spun up with the most recent versions of the templates includes Nodejs 8.10 runtimes. Is there going to be a release sometime soon that will include a newer runtime?#2019-11-2020:53marshallits in the pipeline, waiting for AWS to approve/ship it#2019-11-2023:31shaun-mahoodHope everyone on the Datomic team has a great time at the Conj - wish I could be there!
I found a bad link on the docs - https://docs.datomic.com/cloud/troubleshooting.html#troubleshooting-ions has a link to https://docs.datomic.com/ions/ions-reference.html#lambda-ion which doesn't seem to exist.#2019-11-2110:17dmarjenburghI’m trying to do a db/cas to update a db.type/ref attribute of an entity using lookup refs:
[:db/cas [:user/id uid] :user/team [:team/id old-tid] [:team/id new-tid]]
But this fails: Compare failed: [:team/id 123] 52549571713949802#2019-11-2110:17dmarjenburghIt seems like you can’t use lookup-refs for the “old-value” in a cas?#2019-11-2111:06favilaYeah, Cas doesn’t have any smarts about types#2019-11-2111:07favilaThe lookup ref works for the new value only because db/add is resolving it, not because cas is doing anything#2019-11-2113:08souenzzohttps://portal.feedback.eu.pendo.io/app/#/case/26858
My receptive "report" from 3 yrs ago.
It's still an undocumented behavior#2019-11-2120:54colinkahnDoes datomic treat var symbols with dots in them differently? Like ?bar vs ?foo.bar#2019-11-2120:55colinkahnDatomic is hanging when I use the dot version#2019-11-2210:22Per WeijnitzDoes anyone know if Cognitect is still in business? The forum link is dead (https://forum.datomic.com/) and we don't get any response from their support. Our agency needs to migrate to AWS Stockholm, but Datomic Cloud is not available there (the docs say "contact support for other regions", but that seems hard).#2019-11-2210:35schmeeforum is up for me at least#2019-11-2211:31Per WeijnitzAh, it's come back online, good.#2019-11-2213:26Per WeijnitzI have now received a response on my support ticket, so all is good.#2019-11-2213:41alexmillerVery much still in business :)#2019-11-2213:42alexmillerThe forum is a third party service and they seem to be having some issues this week#2019-11-2216:09cjmurphyInstalling an attribute is straightforward, for example you could submit this to the transactor:
{:db/ident :student/first
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
How would I go about retracting the same attribute?#2019-11-2216:23shaun-mahood@cjmurphy I don't think you can - it would cause issues with historical data in a system with data transacted to it.#2019-11-2216:28cjmurphyBut what about if that attribute was never used in any way? So all you did was that statement above, then you wanted to get back to a world where :student/first was no longer present, where you decided it was a mistake to have that attribute?#2019-11-2216:42shaun-mahoodI haven't found a way - it's only happened to me in development, so I have had to get used to forgetting that it exists until I recreate and repopulate my database (which I tend to do pretty often as I'm figuring out what attributes I want).#2019-11-2216:52johnj@cjmurphy I haven't tried, but doesn't [:db/retract 'identifier' :db/ident :student/first] work?#2019-11-2217:03cjmurphyWhat would identifier be there? Using :db/id did not work out:
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:57).
:db.error/not-an-entity Unable to resolve entity: :db/id in datom [:db/id :db/ident :student/first]
#2019-11-2219:37johnjidentifier is the entity id eid of the attribute you created#2019-11-2219:38johnjin this case it would be the natural number datomic assigned to that attribute#2019-11-2219:39johnjschema attributes are just like domain attributes#2019-11-2219:39johnjyou can query them#2019-11-2305:49cjmurphySeems to have worked thanks :) I did a pull query using [:db/id] , where the eid arg usually goes I put the attribute, so would be :student/first here. I got a low number (3 digits) eid as a result. Then I plugged that into: [:db/retract eid :db/ident attribute-name] . Sending that to transact gave back the normal success result.#2019-11-2417:48fjolne@cjmurphy This seems to be an undocumented feature, and it behaves rather weird, because (d/ident db <attr-eid>) still returns the retracted ident, while (d/touch (d/entity db <attr-eid>)) doesn’t contain the ident anymore. The installation of the attribute with this ident creates a new attribute entity though, so this kinda works, but you’re still left in a kind of inconsistent state. A more conventional (and documented) approach would be to alias the attribute entity with some new ident, and then reuse the old one: (d/transact conn [{:db/id :student/first :db/ident :student.deprecated/first}])#2019-11-2417:50fjolneOr just always design schema on the forked version of the database: https://vvvvalvalval.github.io/posts/2016-07-24-datomic-web-app-a-practical-guide.html#forking_database_connections#2019-11-2417:59fjolneAlso, there’s no need for a separate query / pull to make a retraction of the ident, as idents are interchangeable with entity ids in transactions: (d/transact conn [[:db/retract :student/first :db/ident :student/first]])#2019-11-2216:53cjmurphyThanks @shaun-mahood yes. I can certainly see myself having a production system and putting new attributes in entities then deciding they were put in the wrong place. 
If there's no actual data (so no students have been put in the system in the example above), then - well it makes sense to be able to remove attributes so the schema remains clean (and identical to the yuppiechef schema that in my case is what's normally used to create attributes).#2019-11-2404:09favilaI’m using datomic analytics and want a :joins through two “levels” of refs. Is this supported/supportable? example: metaschema {:tables {:foo/id {} :bar/x {}} :joins {:foo/bar-card1-ref "bar" :bar/card1-enum-ref "db__idents"}} I expect/hope-for a foo.bar_card1_ref__card1_enum_ref__ident column, but there is none.#2019-11-2423:38ackerleytngWhen I run bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d hello,datomic:, is a transactor started somewhere in the background?#2019-11-2500:24bhurlowno, transactor must run as separate process (assuming on-prem). When transactor starts it stores it’s addressable location in the backend storage. When peer starts it finds that stored location and can then contact transactor directly#2019-11-2500:25bhurlowactually just seeing the mem flag here, with mem transactor is built in to the library, other protocols dev,`ddb` etc behave as above#2019-11-2512:55ackerleytngi see, thanks @U0FHWANJK!#2019-11-2510:43fjolneCouldn’t sign up for Datomic Forum (seems like it’s getting troubles lately), maybe somebody here knows: is it safe to open transactor port to the public, assuming it’s connected to a password-secured SQL storage?
I understand that peers first connect to the storage, then get transactor coordinates and connect to it, but couldn’t find the authorization mechanism between peer and transactor in the docs.#2019-11-2514:26bhurlowpeers communicate with transactor via an encrypted channel so it’s OK to host transactor on public IP. In fact, this is the only config that worked for us#2019-11-2514:26bhurlowhttps://forum.datomic.com/t/unable-to-connect-to-transactor-from-to-ec2-instances/568/2#2019-11-2522:11fjolne@U0FHWANJK thanks, at some point we had the same error, but setting host to internal network IP (inside VPC) made it to work. That’s good to know that the connection between peer and transactor is secure, but my concern is different: I wonder whether somebody else could actually transact / read something via the open transactor port, which is why I’m interested in the auth protocol (handshake?) between peer and transactor.#2019-11-2522:15bhurlowsomewhat sure all communication into transactor requires the “secret” value which is stored in backed storage#2019-11-2605:58fjolneUgh, it’s actually in the docs: https://docs.datomic.com/on-prem/aws.html
So, yes, connection from peers to transactor is secured via randomly generated credentials, and it’s ok to open transactor to the public.#2019-11-2620:39bhurlowstill felt a bit exposed to me too#2019-11-2510:49fjolneWe’ve currently secured transactor via firewall to allow only connections from the exact peer, but that’s kinda inconvenient for dev (requires ssh tunnelling) and autoscaling (requires to manage all the internal network IPs of our peers).#2019-11-2512:46frankyxhlHi,
I’m new to Datomic. Is there any advice or best practice if I’d like to connect Datomic in ClojureScript/Nodejs?
Thanks.#2019-11-2514:30bhurlowcloud or on-prem? There are no official peer libraries for cljs or node#2019-11-2516:25frankyxhlRight now using on-prem. But will use cloud in production.
Yes. I can’t find cljs library.#2019-11-2518:50grzm@jaret I know I asked you about whether or not PollingCacheUpdateFailed errors had been addressed recently, but I may have been overly distracted when you answered. (To refresh your memory: What we're seeing is part of our Datomic Cloud system stopping (a periodic CloudWatch Event that writes out to a Vertica database) while the rest of the system keeps humming along fine. I've seen PollingCacheUpdateFailed errors in the Cloudwatch logs that correlate with this.)#2019-11-2518:55jaret@grzm looks like… :
"Msg": "PollingCacheUpdateFailed",
"Cache": "CatalogCache",
"Err": {
"CognitectAnomaliesCategory": "CognitectAnomaliesFault",...#2019-11-2518:55jaret?#2019-11-2518:56jaretWhat version of Datomic Cloud are you running on this system?#2019-11-2519:10grzmYup:
"Msg": "PollingCacheUpdateFailed",
"Cache": "cache-group-poller",
"Err": {
"CognitectAnomaliesCategory": "CognitectAnomaliesFault",
"DatomicAnomaliesException": {
"Via": [
{
"Type": "com.amazonaws.SdkClientException",
"Message": "Unable to execute HTTP request: Too many open files",
"At": [
"com.amazonaws.http.AmazonHttpClient$RequestExecutor",
"handleRetryableException",
"AmazonHttpClient.java",
1175
]
},
{
"Type": ".SocketException",
"Message": "Too many open files",
"At": [
".Socket",
"createImpl",
"Socket.java",
460
]
}
This was with 480-8770. We've since upgraded to 535-8812 and haven't seen it since#2019-11-2519:13grzmSeeing that in various caches: index-group-poller, tx-group-poller, cache-group-poller, query-group-poller, autoscaling-group-poller. Looks like they generally happen in pairs or three at a time, mix-and-matching which cache groups are included.#2019-11-2519:14grzmOne that happens on its own is CatalogCache , with
{
"Type": "com.amazonaws.SdkClientException",
"Message": "Unable to execute HTTP request: Connect to [] failed: Read timed out",
"At": [
"com.amazonaws.http.AmazonHttpClient$RequestExecutor",
"handleRetryableException",
"AmazonHttpClient.java",
1175
]
},
#2019-11-2519:24jaretSo one of the causes of that error (pollingCacheUpdateFailed) was addressed and other causes as long as they are transient shouldn’t represent a problem. Re: the CloudWatch Event that writes to the Vertica DB are you seeing any other errors or any other correlations? are you deploying at the same time? is the event special in any way?#2019-11-2519:25jaretI’d be happy to poke at the metrics and logs if you want to give me read-only access.#2019-11-2519:30grzmHaven't seen other errors at the same time, which is why it's kinda been stumping us. No deploys either: it happens after the system's been running for at least a couple of days running fine. Just stops writing. Let me coordinate with the client and get back to you on the log access: that'll likely have to wait until tomorrow.#2019-11-2519:33jaretAnd you have to kick over the application or datomic to get it back up again? @grzm?#2019-11-2519:34grzmWe "redeploy" (same revision) and it all starts working again. (what would it mean to restart only Datomic?)#2019-11-2520:58tylerHas there been any news on the xray daemon for datomic compute nodes?#2019-11-2520:58marshall@tyler it is included on the nodes in the latest release#2019-11-2520:58marshallbut it’s up to you to configure/use it for now#2019-11-2520:58marshallmore docs/info coming in the future#2019-11-2521:00tyler:+1: awesome, we’re happy to configure it just need that daemon running. Thanks.#2019-11-2606:45cjmurphyWhen I create entities I always give them an attribute called :base/type , which is just a keyword, for instance :bank-account . I'd like to find all the entities (preferably that I've created) that don't have this attribute. 
I've asked this on Stack Overflow: https://stackoverflow.com/questions/58866423/find-all-entities-that-are-missing-a-particular-attribute, but no answers...#2019-11-2612:16marshallhttps://docs.datomic.com/cloud/query/query-data-reference.html#missing#2019-11-2612:19benoit@U0D5RN0S1 you would have to somehow reduce the number of entities to check, otherwise you're doing a full db scan. So I would first write the clause to identify all the entities you've created and then, from those, find all the entities without this attribute.#2019-11-2612:20benoitUnfortunately it is not explained in the missing? section of the docs, and all the examples just happen to have clauses before the missing? predicate that make the query work (`[?artist :artist/name ?name]`)#2019-11-2620:01cjmurphyThanks @U963A21SL that brings things together for me. I can work with knowing the name of an attribute in the entity that may not have a :base/type.#2019-11-2620:01cjmurphy[:find [?entities ...]
 :in $
 :where
 [?entities ::rule/splits ?s]
 [(missing? $ ?entities :base/type)]]#2019-11-2612:14leongrapenthinWhere do I find the upgrade instructions from solo to production?#2019-11-2612:18marshallhttps://docs.datomic.com/cloud/operation/upgrading.html#2019-11-2612:18marshallJust choose a production compute stack instead of a solo compute stack.
You should be running a split stack first#2019-11-2613:45joshkhAn attribute which is unique by identity allows us to use its value instead of a :db/id to identify an entity within a transaction. If no entity exists with that value then an entity is created, otherwise facts are stored about the existing entity.
Do composite tuples work the same way? They are also :db.unique/identity, however they seem to operate more like :db.unique/value in that they throw an exception when the tuple value already exists.#2019-11-2613:50joshkhOr in other words, can I take advantage of a tuple to update an existing entity? For example, change this player's colour based on the combination of their first and last name:
{:tx-data [{;; composite tuple attributes:
            :player/first-name "Jean-Luc"
            :player/last-name  "Picard"
            ;; some fact to store about the existing entity
            :player/colour     "blue"}]}
Edit: the code example throws a Unique conflict exception when transacted a second time#2019-11-2614:11marshallYou can indeed use a composite tuple for identity and upsert. You have to include the tuple itself in the transaction
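A hedged sketch of the transaction marshall describes, assuming the composite tuple attribute is named :player/first+last:

```clojure
;; Assumes :player/first+last is a :db.unique/identity composite tuple
;; over :player/first-name and :player/last-name. Including the tuple's
;; value lets the transaction resolve the existing entity (upsert)
;; rather than throw a unique-conflict:
{:tx-data [{:player/first-name "Jean-Luc"
            :player/last-name  "Picard"
            :player/first+last ["Jean-Luc" "Picard"]
            :player/colour     "blue"}]}
```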
So in your example, if the unique attribute is called :player/first+last, you need to include a value for :player/first+last in your next transaction to get upsert#2019-11-2614:11marshall@joshkh ^#2019-11-2614:28joshkhThanks @marshall, you just made my day.#2019-11-2614:32marshallFYI, discussed here also: https://forum.datomic.com/t/db-unique-identity-does-not-work-for-tuple-attributes/1072#2019-11-2614:35joshkhI came across that post a few months ago when I first encountered the same problem, but to be honest I didn't understand the resolution in the comments.#2019-11-2615:32joshkhi'm attempting to transact a tuple ident and am getting the following exception:
(d/transact (client/get-conn)
            {:tx-data [{:db/ident       :user/first+last
                        :db/valueType   :db.type/tuple
                        :db/tupleAttrs  [:user/first :user/last]
                        :db/cardinality :db.cardinality/one
                        :db/unique      :db.unique/identity}]})
Unable to resolve entity: :db/tupleAttrs
any ideas? I had no problem transacting it to another database an hour ago.#2019-11-2615:34favilaThis db was created with an older (pre-tuple) version of datomic?#2019-11-2615:34joshkhyes#2019-11-2615:34favilaYou need to transact these new schema attributes using administer-system#2019-11-2615:34ghadiIf so, you have to run d/administer-system as in the documentation#2019-11-2615:34ghadijinx#2019-11-2615:35favilahttps://docs.datomic.com/on-prem/deployment.html#upgrading-schema#2019-11-2615:37joshkhthis applies to cloud as well?#2019-11-2615:38joshkhsilly question, of course it does as i'm missing the attributes 😉#2019-11-2615:39joshkhgreat, that did the trick. thanks favila and ghadi!#2019-11-2616:16leongrapenthinis it technically possible/viable to downgrade the production primary compute group instance type to something cheaper as long as you don't have users?#2019-11-2616:17marshallthe supported instance types are fixed#2019-11-2616:17marshallyou can, however, reduce your ASG size to 1#2019-11-2616:17marshallif you don’t need HA#2019-11-2616:17leongrapenthini can see that#2019-11-2616:17leongrapenthinreducing asg size#2019-11-2616:17leongrapenthinthanks#2019-11-2616:18leongrapenthinstill, it's a factor-of-ten price difference at least#2019-11-2616:18marshallfrom solo - prod?#2019-11-2616:18leongrapenthinyes#2019-11-2616:19marshallfair enough; we are definitely taking feedback and considering options#2019-11-2616:19marshallalso, you can ‘turn off’ the system over nights/weekends if it’s not a user-facing prod system#2019-11-2616:19marshallsame technique - ASG to 0#2019-11-2616:19leongrapenthinas long as my customer is not live, I will have difficulty explaining this price to him#2019-11-2616:19marshallwe’re also looking into tooling that will make that somewhat easier to do/manage#2019-11-2616:20leongrapenthinsimultaneously, I need the architecture at some point, to develop against prod.
only features like http-direct#2019-11-2616:20leongrapenthinor have staging/test query group separation#2019-11-2616:21leongrapenthini need the platform running for 1-5 users testing on the customer site#2019-11-2616:21leongrapenthinturn off is no option#2019-11-2617:36favilaAre there plans to expose point-in-time queries via datomic analytics?#2019-11-2708:30tatutwhat's the best practice for "deleting" items and later being able to query them and restore them? In SQL you would record a deleted timestamp and use a WHERE deleted IS NULL in all queries... it seems in datomic one should just retract the entity?#2019-11-2708:30tatutbut it seems pulling deleted entities is somewhat cumbersome: you need to get the deletion tx instant and pull from a db that is as-of one millisecond before the deletion#2019-11-2708:31tatutand I guess reinstating the entity would be to reassert all the facts?#2019-11-2709:23cjmurphyA 'deleted' marker like that seems different to a retraction, so why not use the marker? Would be more convenient. Also see two kinds of time here: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2019-11-2710:03tatutThat raises good points#2019-11-2710:03tatutBut the downside is that every query would need to filter out deleted items in the where clause#2019-11-2710:30cjmurphyTrue enough. And you might want it to be an instant in 'event time' rather than a boolean. Also you could have another entity with all the same attributes and then the 'deleted' attribute as well. More work to do at the time of deletion (shuffling it into another entity), but then you wouldn't have to filter out for everyday queries.#2019-11-2714:06ghadiEvery query has to do the same in SQL#2019-11-2805:17tatutThat is true, and I've always disliked it... you can work around it in SQL by using views.#2019-11-2712:16leongrapenthin@tatut suggestion: create a new entity type like {:retraction/before tx, :retraction/eid eid}.
When you retract your entity, invoke a transaction function that creates the "retraction" entity. Under :retraction/before, store the tx (the t of the in-transaction database); you can later use it to restore the db as-of the time of deletion. Under :retraction/eid store the entity ID of the retracted entity.#2019-11-2712:18leongrapenthinthis would be out of the way of your usual queries and gives you the ability to find it again, reference the "retraction" in other contexts etc.#2019-11-2712:18leongrapenthinI'd only do it if I have to, because usually a history or log query does the job if I want to restore something deleted#2019-11-2801:26yannvahalewyn> org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for com.datomic:datomic-pro:jar:0.9.5981
Not sure what to look for to debug this. I followed the steps in my.datomic (~/.m2/settings.xml and :mvn/repos are set correctly). Can someone nudge me in the right direction?#2019-11-2802:13yannvahalewynIf anyone found this by googling the error, verify you have this in settings.xml:
<settings xmlns=""
          xmlns:xsi=""
          xsi:schemaLocation=" ">
  <servers>
    <server>
      <id></id>
      <username>{email}</username>
      <password>{password}</password>
    </server>
  </servers>
</settings>#2019-11-2802:02yannvahalewynThe issue was that I just copied over the settings.xml from my.datomic, but the example is not a complete example but rather just one key. It does link to the maven docs but I didn’t notice it. It’s not super intuitive what is expected for younger devs like me who have no experience with Maven.#2019-11-2802:05yannvahalewynIt took me a full 40 minutes to figure this out, now who has time for that? 😄. Any plans to streamline onboarding a bit? A better and working example would be useful imo, especially pulling in the peer library.#2019-11-2815:56yannvahalewynI noticed other devs of various levels of experience share my feelings about the onboarding. Seems like a shame to me since big improvements can be made with a couple of simple steps. This is your first introduction to otherwise amazing software and may turn potential customers away early.#2019-11-2812:26kardanI was trying to split my Cloudformation (Datomic cloud) stack to do an upgrade for the first time. When deleting the root stack I get an error for the nested compute stack, as DELETE_FAILED LambdaSecurityGroup resource <uuid> has a dependent object (Service: AmazonEC2; Status Code: 400; Error Code: DependencyViolation; Request ID: <uuid>). Anyone have any pointers on what to do / read up on?#2019-11-2816:17jaretIn general, Datomic delete/upgrade will only delete/modify resources it created. If any of the resources it uses have been modified it will not delete that resource. Have you changed the security group or added any resources to the security group?#2019-11-2816:19marshallThis is likely the lambda ENI deletion delay issue#2019-11-2816:20marshall@U051F5T93 after you've waited an hour or so, try deleting again#2019-11-2816:40kardanI’ll try again. Don’t think I’ve created much more than what’s in the guides (but this was a while ago, so I might be wrong on this). Will need to be off to handle kids and stuff for a while so will check in later to see if it succeeds.
Thanks for the pointers.#2019-11-2816:46marshallThere is a recent change in how aws handles lambda enis that affects their deletion. The current solution from aws is "wait an hour and try again"#2019-11-2904:21kardanTried twice (with a nights sleep in between) and failed again. Could it be a problem that I created a web lambda before splitting the stack?#2019-11-2907:30kardanHitting my connected API gateway with a browser it now responds with 500 Internal Server Error#2019-11-2907:30kardan(this is however not anything in production)#2019-11-2918:00marshallThe lambda should be deleted, unless you created something manually out of band#2019-11-2918:01marshallYou can delve into the error in the cloudformation stack and determine what specifically failed to delete#2019-11-2918:02marshallIf it is a lambda ENI, that is caused by a recent change aws made to vpc resident lambdas#2019-11-2918:02marshallYou may need to look in the vpc console or the list of security groups to determine what resources are still present#2019-11-2919:54kardanOk, will dig in deeper#2019-11-2919:54kardanThanks#2019-11-3007:21kardanDeleted the lambda security group manually and then went on to delete everything. Will start over from scratch again. Thanks for the pointers.#2019-11-2820:01bartukaI was experiencing some issues between the async/datomic parts of my project and decided to perform a small experiment. I wrote a simple query that returned 5 entity ids and used the go to emit some parallel queries against my database [I'm on the datomic cloud]. This is the whole code:
(defonce client (d/client config))
(defonce connection (d/connect client {:db-name "secland"}))
(defn query-wallet []
  (-> (d/q '[:find ?e
             :where
             [?e :wallet/name _]]
           (d/db connection))
      count
      println))

(dotimes [_ 9] (async/go (query-wallet)))
If my dotimes is less than 8 it works fine and prints my results. However, with 9+ parallel queries, it hangs and nothing happens. From the terminal, the output of the tunnel is only:
debug1: channel 3: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45492 to 127.0.0.1 port 8182, nchannels 11
debug1: channel 10: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45506 to 127.0.0.1 port 8182, nchannels 10
debug1: channel 6: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45498 to 127.0.0.1 port 8182, nchannels 9
debug1: channel 7: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45500 to 127.0.0.1 port 8182, nchannels 8
debug1: channel 8: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45502 to 127.0.0.1 port 8182, nchannels 7
debug1: channel 2: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45410 to 127.0.0.1 port 8182, nchannels 6
debug1: channel 4: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45494 to 127.0.0.1 port 8182, nchannels 5
debug1: channel 5: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45496 to 127.0.0.1 port 8182, nchannels 4
debug1: channel 9: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45504 to 127.0.0.1 port 8182, nchannels 3
I would like to know more about this issue. Why 7 parallel processes? This query is super simple and fast. Is this a configuration issue?#2019-11-2820:15Joe LaneYou're doing blocking IO inside of a go-block. Never do this. In core async there are 8 threads in the core async threadpool; when you perform blocking IO in a go block you can deadlock that threadpool.
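A pure core.async sketch of this failure mode, with Thread/sleep standing in for the blocking query (8 is the default dispatch-pool size):

```clojure
;; Sketch: blocking calls inside go blocks tie up dispatch-pool threads.
;; With the default 8-thread pool, eight sleeping go blocks starve the
;; ninth until a thread frees up; real blocking IO can deadlock entirely.
(require '[clojure.core.async :as async])

(dotimes [_ 9]
  (async/go
    (Thread/sleep 1000)   ; blocking call: occupies a pool thread
    (println "go done")))

;; Blocking work belongs on async/thread, which runs on a separate,
;; growable thread pool:
(dotimes [_ 9]
  (async/thread
    (Thread/sleep 1000)
    (println "thread done")))
```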
If you are using the client api in a non-ion project, you can use the async part of the api ( https://docs.datomic.com/client-api/datomic.client.api.async.html ) to leverage core async from the datomic client.
The async part of the client api DOES NOT WORK IN AN ION.#2019-11-2820:16Joe LaneYou would have this problem with anything doing blocking IO in go blocks.#2019-11-2822:16bartukaAlright!! thanks for the explanation. I will change the implementation#2019-11-2823:18bartukaI tried to implement this using the datomic async library.
(defonce client-async (d-async/client config))
(def ch-conn (d-async/connect client-async {:db-name "secland"}))
(def out-chan (async/chan 10))
(def times 9)

(dotimes [_ times]
  (async/go
    (->> (d-async/q {:query '[:find ?e
                              :where
                              [?e :wallet/name _]]
                     :args [(d-async/db (async/<! ch-conn))]})
         async/<!
         (async/>! out-chan))))

(dotimes [_ times]
  (println (count (async/<!! out-chan))))
But I still get the same error. When I change to a version using async/thread it works fine, so I am probably still doing blocking IO somewhere. Can you spot the error?#2019-11-2906:05tatutin datomic cloud (prod topology) I'm getting errors in the lambda cloudwatch logs. how fatal are these?
{:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message "Connection reset by peer", :clojio/throwable :.IOException, :clojio/socket-error :receive-header, :clojio/at 1574976823963, :clojio/remote "10.213.37.146", :clojio/queue :queued-handler, :datomic.ion.lambda.handler/retries 0}#2019-11-2908:34onetomhow come the latest (`569-8835` Nov 27th) version of datomic cloud solo only supports N.Virginia region?
$ curl -s | jq .Mappings.RegionMap
{
"us-east-1": {
"Datomic": "ami-066145045fc7a4ad0",
"Bastion": "ami-09e416d6385c15902"
},
"us-east-2": {
"Datomic": "",
"Bastion": ""
},
"us-west-2": {
"Datomic": "",
"Bastion": ""
},
"eu-west-1": {
"Datomic": "",
"Bastion": ""
},
"eu-central-1": {
"Datomic": "",
"Bastion": ""
},
"ap-southeast-1": {
"Datomic": "",
"Bastion": ""
},
"ap-southeast-2": {
"Datomic": "",
"Bastion": ""
},
"ap-northeast-1": {
"Datomic": "",
"Bastion": ""
}
}#2019-11-2918:00marshallAWS marketplace issue. Use the latest version listed in the datomic docs releases page#2019-11-2917:58Oleh K.I wasn't able to create today a Solo stack on a new AWS account with the latest template (of 27 Nov), it just fails. Is it a known problem?#2019-11-2917:58marshall@okilimnik use the latest version listed on the datomic docs releases page#2019-11-2917:59marshallThe newest on AWS has several problems that we are attempting to resolve#2019-11-2917:59Oleh K.@marshall thanks!#2019-11-2921:10bartukahi, I would like to understand a little better the behavior of datomic under load. For example, what is happening on the index-mem-mb metric in this situation?#2019-12-0320:54Linus EricssonDatomic seems to keep the most recent transactions in memory to batch write them efficiently. If I understand correctly, it means that the transactions only store the tx-data for each transaction (and can batch even that), which means Datomic eventually has to re-calculate its indexes. This is probably what can be seen in the last part of the graph, where the JVM obviously does a lot of work GCing, and then, when the index memory is full, recalculates the db and empties its batched-up index memory.#2019-11-2921:11bartukaI noticed that for some time, my application was processing 200msg/s from rabbit and after 15 min of operation it went down to 10msg/s. I am looking at the metrics to figure out the mechanics here#2019-11-3004:56Jacob O'BryantCould org.clojure/tools.reader on ions be bumped up to, say, version 1.3.2? (It's on 1.0.0-beta4 currently). I'm trying to deploy a fulcro app, but I'm getting this error:
$ clj -Sdeps '{:deps
               {org.clojure/tools.reader {:mvn/version "1.0.0-beta4"}
                com.fulcrologic/fulcro {:mvn/version "3.0.10"}}}' \
    -e "(require 'taoensso.timbre)"
Syntax error (IllegalAccessError) compiling at (clojure/tools/reader/edn.clj:1:1).
reader-error does not exist
It's the same error discussed at https://github.com/ptaoussanis/timbre/issues/263.
Alternatively, does anyone know how to fix this without changing the tools.reader version? I've tried messing around to no avail. Unfortunately I don't really understand what's causing the error in the first place. Interestingly, it works if I omit fulcro but still require the same versions of timbre and encore:
$ clj -Sdeps '{:deps
               {org.clojure/tools.reader #:mvn{:version "1.0.0-beta4"}
                com.taoensso/timbre {:mvn/version "4.10.0"}
                com.taoensso/encore {:mvn/version "2.115.0"}}}' \
-e "(require 'taoensso.timbre)"#2019-11-3022:09Jacob O'Bryantupdate: I forgot about AOT-compiled jars. The problem was that fulcro depends on clojurescript, which includes tools.reader. adding :exclusions [org.clojure/clojurescript] to fulcro seems to have fixed it. It also fixed a problem with transit-clj , though for that I also had to fork fulcro and remove a call to cognitect.transit/write-meta.#2019-12-0207:21tatutAny ideas why Ion lambda is throwing exception "Key must be integer" from datomic ion code (not my app code)?#2019-12-0207:21tatut{"Msg":"IonLambdaException","Ex":{"Via":[{"Type":"java.lang.IllegalArgumentException","Message":"Key must be integer","At":["clojure.lang.APersistentVector","assoc","APersistentVector.java",347]}],"Trace":[["clojure.lang.APersistentVector","assoc","APersistentVector.java",347],["clojure.lang.APersistentVector","assoc","APersistentVector.java",18],["clojure.lang.RT","assoc","RT.java",823],["clojure.core$assoc__5401","invokeStatic","core.clj",191],["clojure.core$update","invokeStatic","core.clj",6198],["clojure.core$update","invoke","core.clj",6188],["datomic.ion.lambda.api_gateway$gateway__GT_edn","invokeStatic","api_gateway.clj",93],["datomic.ion.lambda.api_gateway$gateway__GT_edn","invoke","api_gateway.clj",87],["datomic.ion.lambda.api_gateway$edn_handler__GT_gateway_handler$fn__3198","invoke","api_gateway.clj",109],["datomic.ion.lambda.api_gateway$gateway_handler__GT_ion_handler$fn__3202","invoke","api_gateway.clj",114],["clojure.lang.Var","invoke","Var.java",384],["datomic.ion.lambda.dispatcher$fn__2154","invokeStatic","dispatcher.clj",47],["datomic.ion.lambda.dispatcher$fn__2154","invoke","dispatcher.clj",45],["clojure.lang.MultiFn","invoke","MultiFn.java",244],["datomic.ion.lambda.dispatcher$handler_fn$fn__2156","invoke","dispatcher.clj",61],["datomic.clojio$start_server$socket_loop__2356$fn__2360","invoke","clojio.clj",204],["datomic.clojio$start_server$socket_loop__2356","invoke","clojio.clj",203],["datomic.clojio$st
art_server$accept_loop__2363$fn__2364","invoke","clojio.clj",219],["clojure.core$binding_conveyor_fn$fn__5739","invoke","core.clj",2030],["clojure.lang.AFn","call","AFn.java",18],["java.util.concurrent.FutureTask","run","FutureTask.java",266],["java.util.concurrent.ThreadPoolExecutor","runWorker","ThreadPoolExecutor.java",1149],["java.util.concurrent.ThreadPoolExecutor$Worker","run","ThreadPoolExecutor.java",624],["java.lang.Thread","run","Thread.java",748]],"Cause":"Key must be integer"},"Type":"Event","Tid":1075,"Timestamp":1575271105341}#2019-12-0208:49favilaYou are calling update on a vector using a non-integer key#2019-12-0211:39tatutI know what the exception means, it is not coming from my code, but datomic's#2019-12-0211:39tatutbut it seems in this case it was due to trying to call the lambda with aws cli, instead of thru api gw#2019-12-0314:09Brian AbbottIs it possible to run Datomic Cloud on Fargate?#2019-12-0315:00ghadiNo#2019-12-0315:00ghadiYou can connect fargate clients to datomic cloud though, @briancabbott #2019-12-0315:03Brian AbbottSorry, that is what I mean. Is there somewhere that I could find some documentation on how to do that? #2019-12-0315:03Brian AbbottDoes anyone here on this channel have experience doing it?#2019-12-0315:10ghadiYep, there’s a few things to take care of: right subnet && right IAM policy on the fargate task role#2019-12-0408:02Oleh K.when I create fargate service I need to create subnets for it. But in order to connect to datomic a service must be in the datomic subnet (as the documentation says). If I create a service and put in it datomic subnets then the service fails to even start. 
Can you give some insights where to look for the problem?#2019-12-0409:02ghadiConfirm that the datomic subnets and the fargate subnets are mutually routable#2019-12-0409:02ghadiAnd confirm that the “nodes” security group is augmented to include ingress from your new subnets#2019-12-0409:08ghadiAre you doing peering @U5JRN2PU0 or same vpc?#2019-12-0409:10Oleh K.I wasn't able to connect datomic with fargate in the same VPC, so now I'm trying peering#2019-12-0409:10Oleh K.what do you mean by "Confirm that the datomic subnets and the fargate subnets are mutually routable" ?#2019-12-0409:14ghadiCan they send packets to each other?#2019-12-0409:14ghadiPeering works too, but there are different steps (see the documentation)#2019-12-0409:18ghadihttps://docs.datomic.com/cloud/operation/client-applications.html#2019-12-0411:42Oleh K.I've managed to connect within the same VPC from different subnets, thank you!#2019-12-0412:52ghadinice -- what did you have to do @U5JRN2PU0?#2019-12-0412:52ghadijust for the other readers that might be watching this thread#2019-12-0412:55Oleh K.I just allowed ingress traffic in the <system>*-nodes security group from my services' subnets' cidrs#2019-12-0412:57ghadinice.#2019-12-0315:10ghadiOther than that the code is the same in the jvm#2019-12-0315:32bartukahi, I have some questions about datomic analytics: what is the "active workers" count that we see in the presto server gui? I have 3 instances for my query-group but I still get 1 active worker and 0 worker parallelism#2019-12-0316:26marshallThe presto server itself runs on your access gateway instance#2019-12-0316:27marshallit will use parallelism on that instance, but analytics support doesn't currently use multiple presto workers#2019-12-0316:32bartukaahn, cool. I am still trying to properly configure the query-group for my analytics needs.
i) I noticed that on the cloud watch dashboards no action was happening on the query-group [I ran the x-ray table from metabase on a 100M-datom database] and the bastion server had 100% cpu; the single worker node was indeed the bastion server.#2019-12-0316:36marshallyou can choose a larger instance type for the access gateway instance#2019-12-0316:38bartukabut if this is the case, I don't understand the purpose of the query-group itself. I thought those instances were performing the hard work.#2019-12-0316:45marshallsome of the work, yes (the datomic DB work)
but some of the work (the SQL processing) happens on the gateway instance#2019-12-0316:36marshallI believe there are 3 or 4 choices#2019-12-0316:36bartukahowever, I found an error on the sync -q <query-group-name> config and fixed it [it was not recognizing the parameter and was taking the system name instead]. But now, the S3 bucket for analytics/ has two folders, one for my system-name and another with query-group-name. Is it right?#2019-12-0316:39markbastianI have a very minor feature request for the datomic team. When you issue a push request the response contains the deploy command (e.g. clojure -A:ion-dev '{:op :deploy, :group elided, :rev \"0876503319a40bafffc8525f0597b1355b94b587\"}'). The rev entry contains escaped quotes so I always have to paste this into a shell and then cursor over to the slashes and remove them. Any chance we can get a version in a future release that doesn't include the slashes? The status-command output does not require doing the above.#2019-12-0316:43marshall@iagwanderson seems fine - what was the thing you changed in the script?#2019-12-0316:44bartukathe problem was not in the script, I think I was calling it using shinstead of bash.#2019-12-0316:45marshallah ok#2019-12-0316:45bartukabut how do I tell the bastion to use the folder of the query-group-name? Or I shouldn't?#2019-12-0316:46bartukaI'm poking around and I deleted the folder system-name and everything stop working 😃 Presto complains that no catalog was found. It seems it was always looking at the system-namefolder#2019-12-0316:48marshall@iagwanderson you need to set the AnalyticsEndpoint in the primary compute group#2019-12-0316:48marshallwhen you go into the cloudformation for the primary compute group#2019-12-0316:48marshallthere is a parameter for#2019-12-0316:48marshall“Analytics Endpoint
Provide the name of a query group if you’d like analytic queries to go to a different endpoint. Defaults to system name.”#2019-12-0316:48marshallput your query group name in there#2019-12-0316:49marshallthat will cause the access gateway to direct its analytics queries to the query group you specify#2019-12-0316:52bartukaI cannot find this option when I click in update in the cloudformation. It should be done when I first create the primary compute group?#2019-12-0316:53marshallit is available either way
Are you running a split stack? you can’t do it with a master stack system#2019-12-0316:53bartukaI have a master stack system.. 😕#2019-12-0316:53marshallyou’ll want to split the stack#2019-12-0316:57bartukagreat! thanks for the help. I think would be nice to have these infos in the documentation Analytics Support -> Configuration. It seems I only needed to add the -q <query-group-name> . I don't know, I am using this stack for over a month now and probably missed some instructions in the docs too.#2019-12-0316:59marshallYeah, we’ll fix that#2019-12-0316:54marshallhttps://docs.datomic.com/cloud/operation/split-stacks.html#2019-12-0316:54jarethttps://forum.datomic.com/t/datomic-cloud-569-8835-and-cli-0-9-33/1277#2019-12-0317:06bartuka@marshall just to recap the question about the usefulness of the query-group with analytics-support in mind. Makes sense to say that for the analytics support, the access-gateway should be priority when we talk about machine-sizing rather than the query-group instances itself? As I understand, the presto can only make few operations before it start the SQL-processing of the data in memory. Maybe an access-gatewaymore optimized for memory-intensive tasks should be a good call?#2019-12-0321:32tylerAre http-direct requests each executed in their own thread?#2019-12-0322:04eagonCurious about this as well, and about the threading of ions in general#2019-12-0322:23markbastianHey all, I've got a datomic ion I'm trying to deploy and I keep getting a runtime error: "Syntax error (ClassNotFoundException) compiling new at (core.clj:79:38).
com.fasterxml.jackson.core.exc.InputCoercionException". This class was added in 2.10 (https://fasterxml.github.io/jackson-core/javadoc/2.10/com/fasterxml/jackson/core/exc/InputCoercionException.html). In my deps.edn file I specify com.fasterxml.jackson.core/jackson-core {:mvn/version "2.10.1"} in my :deps map. However, when I push with clj -A:ion-dev '{:op :push}' I get a dependency conflict warning for jackson-core listing com.fasterxml.jackson.core/jackson-core #:mvn{:version "2.9.8"} as the version being used. This leads me to believe my specified version isn't taking. Any ideas as to how I specify/force the runtime version of a library in my datomic instance?#2019-12-0322:47markbastianI was able to change my deps.edn file version of the jackson libs to 2.9.8 and it appears to be unbreaking. I'll just have to watch for version issues when pushing. I'm still interested in knowing if there's a way to control the deployed versions of the ion so that it uses the latest jars in the cloud.#2019-12-0409:49jaihindhreddyAhh. Good 'ol jackson.#2019-12-0417:57Jacob O'BryantDatomic's dependencies can't be overridden unfortunately.
https://docs.datomic.com/cloud/ions/ions-reference.html#dependency-conflicts#2019-12-0402:35bartukayet on system planning for analytics support. I split my stack and managed to make the presto server to use a query group with 2 instances i3.xlarge which seems fine. Looking at the cloudwatch during some workload, the query group is not reaching cpu utilization above 40% which is ok by me. However, it still gets 6min to return a query like select date, sum(value) from table group by date with a table of 5MM "rows" (in pg), way too slow (?) 😕 As we can see in the screenshot the largest access-gateway available in cloud formation has only 2 processors and it is constantly on 100% usage during workload. The 4gb of memory looks like enough, but 2 cpu is not too low?#2019-12-0403:04bartukaIn fact, I see this behavior when running more than 1 query at the time. Didnt notice the x-ray launched some queries in the database. But I think the point is still valid. Would be possible to use a larger instance for access-gateway?#2019-12-0407:57Oleh K.I want to connect Datomic Cloud via VPC Peering and there is a note in the end of documentation:
If your application does not run in the provided datomic-$(SystemName)-apps security group, you must configure the datomic-$(SystemName)-entry security group to allow ingress from your application's security group.
But I don't see any *-entry security group in my environment. What security group have I to modify?#2019-12-0407:58Oleh K.it's a production topology#2019-12-0409:39Oleh K.@ghadi My question above was right about documentation)#2019-12-0413:49marshall@okilimnik That section at the bottom that you mentioned is specifically for legacy versions of Datomic Cloud, prior to 397#2019-12-0413:49Oleh K.I see, thanks#2019-12-0420:31John ContiAnyone using Datomic cloud from Heroku?#2019-12-0421:12rgorrepatiHi, Does any know if the datomic transactor is ported to jdk 11#2019-12-0512:32maxtIs com.cognitect/transit-clj still at version {:mvn/version "0.8.285"} in Ions? Any chance of getting it updated? At least to 0.8.313 which is what client-cloud depends on.#2019-12-0513:31mkvlris there a workaround for datomic.extensions/< not being variadic? Is there a better way than adding multiple clauses when trying to exclude date ranges from a result via a query?#2019-12-0513:31Luke SchubertIs there a way to run the datomic transactor on windows? I consistently get an error on startup that the input line is too long which appears to be related to classpath construction?#2019-12-0513:33Luke SchubertI have previously been using WSL which works fine for me, but I'm scripting out running local environments for our testers and they all run windows and I'd rather not have to have them all install WSL#2019-12-0513:35Joe LaneWindows historically has had an issue with java classpath lengths being too long. 
I have no idea if fixing this will allow you to run the transactor on windows, but it may not be a datomic specific issue, rather a windows+java issue.#2019-12-0513:37Luke Schubertah rats that means I'm going to probably have to go about this the harder way.#2019-12-0513:49favilaOne trick for getting around this is to write the classpath into a jar manifest then run the jar with java -jar#2019-12-0513:50favila(the “runner” jar has nothing in it but a manifest with a classpath)#2019-12-0513:57Luke Schubertis there an upper limit on the java version for a transactor?#2019-12-0513:59Luke SchubertBecause another solution as I understand could be to build a classpath file instead of the CP_LIST#2019-12-0514:22alexmillerthere are some speculative generic fixes for this for clj on windows#2019-12-0514:22alexmillernot sure if that applies here#2019-12-0514:22alexmillermaybe you're not using clj so it doesn't matter#2019-12-0514:27Luke Schubertwhat I'm trying to do is run bin/transactor.cmd in a script to start a transactor#2019-12-0514:43favilaI was told by Marshall I think (although I saw nothing official) that java11 is supported#2019-12-0514:45Luke SchubertI think I'm just going to go down the path of windows users are going to have to have wsl.#2019-12-0514:45favilayour “classpath file” can be the jar with manifest.#2019-12-0514:46Luke Schubertactually yeah, you're right, I like that much better.#2019-12-0514:47Luke SchubertThanks for all the help#2019-12-0514:47favilabuild CP_LIST with spaces instead of colons, write to a file, like Class-Path: CP_LIST then jar cfm cplist.jar Manifest.txt#2019-12-0514:48favilaactually you can just distribute the jar by itself, since that classpath isn’t going to change#2019-12-0516:01dazldI’m debugging a colleague’s valcache setup - is there a function to see what the current config that datomic has loaded and understood?#2019-12-0516:02dazldseems like it’s ignoring the JVM options that are being passed to 
it.#2019-12-0516:07dazld@datomic.config/properties-ref guess this, thanks anyway#2019-12-0518:47eagonhttps://aws.amazon.com/blogs/compute/announcing-http-apis-for-amazon-api-gateway/
Could be interesting for Datomic Ions! Wondering if HTTP Direct could be supported too, though lambda works out of the box#2019-12-0523:51hadilsHi, I'm trying to upgrade to the latest storage and compute stack in Datomic Cloud (569-8835). I repeatedly get this error on the storage upgrade: Modifying service token is not allowed. I have created and assigned a role for CloudFormationFullAccess to my user, and also attempted this with my root user. Get the same error. I have been told by Cognitect that this is an IAM problem, but when I had it the last time I made these changes to IAM and fixed the problem. Can anyone give me a pointer as to what to do now? Thanks.#2019-12-0607:33maxtI had the same problem upgrading to 569. I gave up.#2019-12-0615:13jaretHi @U0773UB6D Do you recall if this was your first upgrade on a split stack? If so, we have identified this issue as a bug and are working to address it in a future release. In the interim, you can get around the issue by running the upgrade with “reuse existing storage” set to false and it should succeed. Note, you will still have your existing storage and it will be used; this option just moves the CF down an alternate path.#2019-12-0615:14maxtI don't think this is the first upgrade. I'll try that workaround.#2019-12-0609:19nickikA while ago I watched a presentation by Stu about a Typed Java DSL for accessing Datomic. Does this exist anywhere? I can't find any information on it.#2019-12-0610:43dmarjenburghQuestion about keywords in Datomic. In https://docs.datomic.com/cloud/schema/schema-reference.html#orgaf99dce it says:
> Keywords are interned for efficiency.
What does this mean? I know keyword literals are interned in Clojure. Does Datomic intern keywords it encounters? Say keywords are dynamically generated (by parsing incoming JSON requests from a client for example) and are stored as keywords in Datomic. Does Datomic have an optimized way of storing/querying them?#2019-12-0617:03dmarjenburghAfter experimenting, it seems clojure also interns dynamically created keywords, so does datomic do anything special in addition?#2019-12-0618:58benoitInteresting case I encountered today when renaming an attribute. It seems that in order to rename an attribute without downtime you have to:
1. update your code to specifically pull your old attribute (`[:old/attribute]`), [*] will return the new attribute name as soon as you change the schema in step 2
2. update the schema {:db/id :old/attribute :db/ident :new/attribute}
3. update your code to use the new attribute
Does it make sense? Am I missing something?#2019-12-0619:04favilaCorrect. I think the lesson should be “don’t use star” if you know the specific attribute you want. Star is for repl exploration, not production code#2019-12-0619:05benoitIt seems that way. I wanted to double check here before I accept the lesson 🙂#2019-12-0915:24matthavenerdoes anyone know of an idiomatic or straightforward way of storing a datomic query in datomic? pr-str / read-string seems like the best?#2019-12-0915:29ghadithat can work, or you can store a symbol that refers to the query var in code space#2019-12-0915:29ghadithen you resolve or requiring-resolve the symbol#2019-12-0915:46Joe LaneBonus points if you include the codebase in the datomic schema for that symbol 🙂#2019-12-0921:52kennyAre there datomic specs bundled with datomic cloud by chance?#2019-12-1003:19GobinathHi Channel 👋
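(For reference, the schema change in benoit's step 2 above is a single transaction repointing the ident. A minimal sketch with the same placeholder attribute names:)

```clojure
;; Step 2: rename the attribute by transacting a new :db/ident.
;; :old/attribute stays resolvable afterwards, but pull [*] will
;; report the attribute as :new/attribute.
(d/transact conn {:tx-data [{:db/id    :old/attribute
                             :db/ident :new/attribute}]})
```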
<https://clojurians.slack.com/archives/C03S1KBA2/p1575946578414400>
I'm pondering the pros and cons of using Datomic in production at scale.
Got the below comment regarding the same. Please do share your thoughts.#2019-12-1004:00johnj@thegobinath small data, high reads is the sweet spot#2019-12-1004:02steveb8nIMHO the current biggest con is no export tools. the recommendation seems to be that backups are not required, but that just doesn’t fly in the enterprise world where I provide services. Not sure what to do about this yet#2019-12-1005:16GobinathDisadvantages
It can be slow, as Datalog is just going to be slower than equivalent SQL (assuming an equivalent SQL statement can be written).
If you are writing a LOT, you could maybe need to worry about the single transactor getting overwhelmed. This seems unlikely for most cases, but it's something to think about (you could do a sort of shard, though, and probably save yourself; but this isn't a DB for e.g. storing stock tick data).
It's a bit tricky to get up and running with, and it's expensive, and the licensing and price makes it difficult to use a hosted instance with it: you'll need to be dealing with sysadminning this yourself instead of using something like Postgres on Heroku or Mongo at MongoHQ
Source: <https://stackoverflow.com/questions/21245555/when-should-i-use-datomic>
So, how is the current situation with regards to the disadvantages of Datomic as described in this stackoverflow thread?#2019-12-1008:50henrik@U064X3EF3 Mentioned that Cloud doesn’t use a single transactor. I’m not sure of the details, but I presume that if you create more than one DB in Cloud, there’s no need to sync writes between those DBs as they are isolated from one another.
The sysadmin bit also doesn’t really apply to Cloud (though you’ll have to deal with AWS in some capacity). Getting it up and running is pretty much just clicking through a wizard.
Pricing wise, a solo deployment lands at around $30-$40/month. For production, it depends a lot on usage.#2019-12-1013:39alexmillerSaying that datalog is slow compared to sql is literally nonsense (in the literal sense of “literal”). The rest of that post reads as out of date (pre Cloud). The whole point of cloud is that the environment is largely built for you and makes best use of aws.#2019-12-1013:41alexmiller@henrik right re dbs#2019-12-1011:40marshallThe presumption that datalog is slower than sql is incorrect#2019-12-1012:21val_waeselynck@thegobinath I'd say the number 1 disadvantage of Datomic is the time you have to spend explaining why you're using it... and the fact that it's not open-source of course, which is a deal breaker for some people.
For the rest, I think it's more objective to talk in terms of limitations rather than disadvantages. Let me lay those out:#2019-12-1012:22val_waeselynck1. Datomic is not low-level storage. Don't use it for high-churn data, blobs, etc. Use if for accumulating facts, only that.#2019-12-1012:26val_waeselynck2. Datomic will be challenging if you have a high write throughput or data size (official rule of thumb: 10 billion datoms is the limit). It will be even more challenging if the relationships in the data have poor locality (this is a rare condition: a large graph with long-range relationship is an example. The usual enterprise system will be fine).#2019-12-1012:27val_waeselynck3. Most developers don't know it. I don't think it's hard to learn, especially for juniors, but your developers have to be able and willing to learn.#2019-12-1012:29val_waeselynck4. It's pretty much married to the JVM as a platform. You can call it from other platforms, but will lose many of the advantages.#2019-12-1012:29val_waeselynck5. It's not lean in terms of computational resources: the minimum deployment will have a high footprint.#2019-12-1012:32val_waeselynck6. It has essentially no support for all but 'relational' queries (fulltext etc.), and performs poorly on big aggregation queries.#2019-12-1012:34val_waeselynck7. It's not a bitemporal system, people often have misplaced expectations regarding this, because of the temporal reputation of Datomic.#2019-12-1012:36mpenet8. 
AWS only (not considering on-prem)#2019-12-1012:37val_waeselynckYes, if we're only considering Cloud I could add a few more limitations.#2019-12-1012:41val_waeselynckI still believe Datomic is the best technical option for the most mainstream use case of databases: online information systems with high reads and non-trivial transactional writes, a natural relational / graphical data model, and acting as a source of information for downstream systems.#2019-12-1012:43henrikAdd the Cloud-specific ones, for completeness.#2019-12-1012:44val_waeselynckHow so? On-Prem is an option for the others.#2019-12-1012:44henrikOh, sorry, I didn’t realise the question was about on-prem.#2019-12-1012:45val_waeselynckI don't know that the question was about one specific deployment strategy 🙂#2019-12-1012:48henrikWell, it’s quite a bit more than deployment strategy, right? The “sysadmin” bit in the post above applies more to on-prem than Cloud. And with Cloud, you’re married to CodeDeploy, for better or worse, etc.#2019-12-1012:50val_waeselynckYes I fully agree, I was only refraining from going into these specifics.#2019-12-1103:32johnjand acting as a source of information for downstream systems.
Like some kind of meta database?#2019-12-1117:11val_waeselynckNo, like the «sales» system upstream of the «emailing» and «analytics» systems#2019-12-1012:51val_waeselynck@thegobinath note: the SO post you mention predates Datomic Cloud, so some parts of it are no longer true! Especially the "It's a bit tricky to get up and running with" part, as mentioned by @henrik#2019-12-1013:43GobinathOk. Apart from the challenges with Learning/Deploying, what would it be like if Twitter/Reddit had chosen Datomic (with Clojure of course)?
Reddit uses Postgres+Cassandra
Twitter uses MySQL#2019-12-1013:51alexmillerThat seems like something impossible to answer#2019-12-1013:54GobinathYeah. That's a slightly stupid question :) I'm just considering a similar use case with a similar volume of data transactions#2019-12-1014:11val_waeselynckEveryone reinvents their own database system at that scale#2019-12-1014:14val_waeselynckNeither Reddit nor Twitter started with something having the capacity to deal with their current scale, and that's fine#2019-12-1014:22GobinathMakes sense. So one can be safe by starting out with Datomic and coming up with their own solutions to deal with scaling. Innovation is born out of necessity :)#2019-12-1014:23henrikThe hugely interconnected nature of social graphs, where users can be expected to ad-hoc interact with any other user or any piece of content, seems like a problem hard to target without talking about a lot of infrastructure beyond the database.#2019-12-1014:23alexmilleryou might note that Nubank started with Datomic and is now the largest fintech company in Latin America, still using Datomic#2019-12-1014:24GobinathOne great example (not to do with DBs) is what Facebook did with php#2019-12-1014:24GobinathRecently, how Discord used Rust to speed up Elixir#2019-12-1014:24alexmillerthey have done a lot of excellent engineering to allow them to make the most out of Datomic#2019-12-1014:24henrikNubank’s credit cards do seem like something that would be easier to compartmentalize than a social network. No user should interact with any other user’s data.#2019-12-1015:27souenzzoBut you can add friends and chat with (support) people 🤔#2019-12-1015:28souenzzoThere is also a personal timeline, from a social net; the only missing feature is a feed from your friends#2019-12-1015:57henrikRight! But those things seem like they can be cleanly sliced per customer. If they support families, it becomes a different matter.
Then you might want to make sure that they sit in the same DB I suppose.#2019-12-1014:25mpenetyes, it's heavily sharded if I recall correctly#2019-12-1014:27mpenetthey probably use datomic for other things tho. Every "db" has limitations/tradeoffs#2019-12-1014:34Mark AddlemanOne point related to Datomic Cloud's single transactor per db model: If I recall correctly, as of a year ago, you cannot use Datomic's datalog to join data across dbs but the problem was an implementation detail. I don't know if that has been resolved. If it's been fixed and your transaction boundaries don't cross dbs, then Datomic might scale very well given query groups#2019-12-1016:44grzmI'm trying to test a database function I intend to use as an entity predicate. My thought is to use it in a query: for example, identifying entities that currently violate the predicate. Something like this:
(d/q '[:find (sample 1 ?e)
:where
[?e :some/attr]
[(com.grzm/valid? $ ?e) ?valid]
[(not ?valid)]]
db)
Works in dev. Doesn't work in prod. In prod, I get the following error:
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Unable to find data source: $__in__3 in: ($ $__in__2 $__in__3 $__in__4 $__in__5)
- Dev and prod have the same sha deployed.
- Both have the same version of Datomic Cloud (535-8812).
- Dev is solo, prod is, um, production.
Save me, Obi-Wan Kenobi. You're my only hope.#2019-12-1017:16grzm@marshall @U1QJACBUM anything?#2019-12-1018:22jaretIs this in an Ion?#2019-12-1018:23jaretAm I correct in understanding, the only difference is a solo topology for one system (working) and a production topology for another system (not working)?#2019-12-1018:41grzmThat's the only difference I'm aware of. I'm running that query from the repl (it's an ion in the sense that it's an allowed function), only changing my proxy connection between the two.#2019-12-1018:43marshallare you sure the ns with the valid? function is available in both?#2019-12-1018:43marshalli.e. deployed to both#2019-12-1018:47grzmYes. If I typo the name of the function, I get a is not allowed by datomic/ion-config.edn error instead.#2019-12-1019:24jaretCould you try two things?
1. use an explicit in
2. try passing in a specific entity ID to check validity#2019-12-1019:26jaretfor number 1. it would look like:
(d/q '[:find (sample 1 ?e)
:in $
:where
[?e :some/attr]
[(com.grzm/valid? $ ?e) ?valid]
[(not ?valid)]]
db)#2019-12-1019:27jaretIf all that still fails, I’d like you to log a support case with <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> so we don’t lose this to slack archive.#2019-12-1019:37grzmNumber 1 fails with the same error.#2019-12-1019:39grzmNumber 2 succeeds (passing in an eid, no sample)#2019-12-1019:41grzmSo, the question becomes how do I write the query to return entities that fail the predicate? Would be nice to be able to use sample, as I don't want to necessarily perform an exhaustive search.#2019-12-1019:42marshallAggregations in the find don't change the amount of work performed by the query#2019-12-1019:42marshallThey only shape the result#2019-12-1019:43marshallThings like sample and limit do not "short-circuit" the query#2019-12-1019:44grzm(d/q '[:find ?e
:where
[?e :some/attr]
[(com.grzm/valid? $ ?e) ?valid]
[(not ?valid)]]
db)
Returns
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
[?valid] not bound in expression clause: [(not ?valid)]
#2019-12-1019:47marshallYou may have to use an explicit not join#2019-12-1019:48marshallHm#2019-12-1019:48marshallDoes your predicate just return true or false#2019-12-1019:49marshallIf so, I think you want to put your predicate call inside of the not#2019-12-1019:49marshallNo need for the valid variable#2019-12-1019:59grzmThat's promising. Now I'm just getting timeouts and 503s. This is something I can work with. Thanks!#2019-12-1019:59grzmAny idea why sample works in solo and not in production?#2019-12-1020:00marshallnot immediately; we’ll look into it though#2019-12-1020:02grzmCheers!#2019-12-1020:02grzmWant me to open a ticket?#2019-12-1020:02marshallsure, that’d be helpful#2019-12-1116:04grzmNew wrinkle:
(d/q '[:find ?e
:in $ ?from ?until
:where
[?e :some/time ?t]
[(<= ?from ?t)]
[(< ?t ?until)]
(not-join [?e]
[(com.grzm/valid? $ ?e)])]
db from until)#2019-12-1116:07grzmWhen the range of from/until returns a small set, it completes fine. When it returns a large set (just changing range), it fails with Unable to find data source: $__in__3 in: ($ $__in__2 $__in__3 $__in__4 $__in__5)#2019-12-1116:08marshallCan you file a ticket with that info please#2019-12-1116:18grzmYup. Haven't done the one from yesterday. Same ticket or two?#2019-12-1116:19marshallSame#2019-12-1217:35grzmMore follow-up: there was some data in the production database which was causing one of the subsequent queries within the database function to fail. Given the nature of the error messages, it wasn't obvious to me where in the stack the error was happening.#2019-12-1017:01alidlorenzo@val_waeselynck you mentioned that "Datomic will be challenging if you have a high write throughput or data size." Do you think datomic could work for a note-taking style app? i wanted to take advantage of point-in-time queries, but documents will have high data sizes#2019-12-1017:13johnjDatomic doesn't do well with large strings, so much that in cloud they are restricted to 4096 chars.#2019-12-1017:16johnjAs @val_waeselynck said, datomic is not a bitemporal system, you should not rely on tx time to model time in your domain/business logic#2019-12-1017:16johnjcreate your own time attrs#2019-12-1017:20Joe LaneRemember @UPH6EL9DH, in cloud you have access to literally all of aws and their services. You could put a note at a point in time into s3 backed by cloudfront and store the reference to it in datomic. 
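(Joe Lane's suggestion above, keeping large note bodies in S3 and only a pointer in Datomic, could be sketched like this; the :note/s3-key attribute and the object key are invented for illustration:)

```clojure
;; Hypothetical schema: keep only the S3 object key in Datomic.
(d/transact conn
  {:tx-data [{:db/ident       :note/s3-key
              :db/valueType   :db.type/string
              :db/cardinality :db.cardinality/one
              :db/doc         "S3 object key holding the note body"}]})

;; After uploading the blob to S3 (via the AWS SDK), record the pointer:
(d/transact conn {:tx-data [{:note/s3-key "notes/2019/12/example-note.md"}]})
```

Point-in-time reads then resolve the key as of the db value you query, while the blob itself lives outside Datomic.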
You can use cloudsearch or opendistro for searching as well.#2019-12-1017:42alexmillerI don’t understand how Datomic is not a bitemporal system (if you use it that way).#2019-12-1017:43alexmillerYou have both transaction times and, if desired, attributes for event times, with the ability to query taking both into account#2019-12-1017:57alidlorenzo@U0CJ19XAM thanks for the tip about saving note documents to s3, hadn't considered that#2019-12-1018:03johnjoh yeah you can, the question is if you should use datomic's history features for domain logic, in contrast to just using it for auditing/troubleshooting. https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2019-12-1119:54val_waeselynckhttps://clojurians.slack.com/archives/C03RZMDSH/p1575999720198400?thread_ts=1575997270.193600&cid=C03RZMDSH
Because Datomic provides no support for expressive bitemporal queries, in the same way that MySQL et al provide no support for expressive temporal queries.
Choosing to "use it that way" is not enough. Sure, you can encode bitemporal information in Datomic, but it won't be particularly practical to leverage it.#2019-12-1017:38johnjDatomic does not provide a mechanism to declare composite uniqueness constraints - does this still hold now that there are composite tuples?#2019-12-1018:26jaretDid you see this in the docs? Could you throw me a link? Because you're correct, this is no longer true with the addition of composite tuples.#2019-12-1018:26jaretNVM just saw your link.#2019-12-1017:38Joe LaneIMO, No.#2019-12-1017:38johnjOk, that sentence is still in the docs <https://docs.datomic.com/cloud/schema/schema-reference.html#db-unique-identity>#2019-12-1018:27jaretWill correct. You and @U0CJ19XAM are correct. That is no longer true with the introduction of Composite Tuples.#2019-12-1018:28unbalancedwhoaaa there is composite uniqueness now? Am I hearing this correctly?#2019-12-1018:28unbalancedif-so... huzzah!#2019-12-1018:36jarethttps://docs.datomic.com/cloud/schema/schema-reference.html#composite-tuples#2019-12-1018:36alexmillersince June...#2019-12-1019:17Ike MawiraHello, I am having trouble setting up Datomic as mentioned here, https://clojurians.slack.com/archives/C053AK3F9/p1576002374401100 , I get
ActiveMQNotConnectedException AMQ119007: Cannot connect to server(s). Tried with all available servers. org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:799)
Any reason why I could be getting this error?#2019-12-1019:33Ike MawiraSeems like the issue is a Netty library, when i run
(d/create-database "datomic:")
I get a warning,
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0 (file:/home/ike/Documents/softwares/datomic-pro-0.9.5697/lib/netty-all-4.0.39.Final.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.PlatformDependent0
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
ActiveMQNotConnectedException AMQ119007: Cannot connect to server(s). Tried with all available servers. org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:799)
While in IntelliJ i get this extra info
WARNING: All illegal access operations will be denied in a future release
Dec 10, 2019 10:27:44 PM org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector createConnection
ERROR: AMQ214016: Failed to create netty connection
.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source)
#2019-12-1019:51marshall@mawiraike are you running a transactor on your local machine?#2019-12-1019:52Ike MawiraYes, i got the
System started datomic:<DB-NAME>, storing data in: data
message so i think so.#2019-12-1019:59marshall@mawiraike https://forum.datomic.com/t/important-security-update-0-9-5697/379#2019-12-1019:59marshalli would also recommend that you upgrade to a more recent version. that release is 1.5 years old#2019-12-1020:00marshallif you’ve had this storage system running before, you may be hitting that change ^ with h2#2019-12-1020:04Ike MawiraOkay, thanks @marshall, lemme update and see if it passes.#2019-12-1022:18Jon WalchWhat would the datalog look like for "give me the top ten users with the most cash"
I tried
{:query '[:find ?user-name (max 10 ?cash)
:in $
:where [?user :user/cash ?cash]
[?user :user/name ?user-name]]
:args [db]}
#2019-12-1022:23favilaDatalog doesn’t do sorting or truncating. You would do this in two queries#2019-12-1022:24favilaor one plus a pull#2019-12-1022:18Jon Walchbut this gives me every user#2019-12-1110:02GobinathSo, this is now a open question :)
https://clojurians.slack.com/archives/C03RZMDSH/p1575947979130900
What are the favourite DBs for the Clojure ecosystem and community in general?#2019-12-1112:41Luke SchubertThe other day I was having an issue with running the transactor and console on windows due to java classpath sizes, and I found a super simple solution, so I wanted to drop it here in case it's useful for anyone else
as of java 6 the cp supports wildcards, so you can remove the two for loops in ./bin/classpath.cmd and replace them with SET CP_LIST="bin;resources;lib/*;datomic-transactor*.jar"#2019-12-1117:53Adrián Rubio MorloteHi! I'm kinda noobie on datomic, does anyone know the difference between "Production" and "Production 2" Topologies??#2019-12-1117:54marshall@adrian169 that is an artifact of AWS Marketplace issues
You should use whichever contains the latest release#2019-12-1117:54Adrián Rubio MorloteAlso, I don't really know how to configure instances so they are not i3.large#2019-12-1117:54marshallin the Production topology you can choose i3.large or i3.xlarge#2019-12-1117:54marshallthe Solo topology uses a smaller instance#2019-12-1117:54Adrián Rubio MorloteOhhhhh thanks!#2019-12-1117:55Adrián Rubio MorloteHmmm isn't there a way to use smaller instances#2019-12-1117:55Adrián Rubio Morloteon production?#2019-12-1117:55marshalli3.large is the smallest supported instance type in production topology#2019-12-1117:55marshallsee: https://docs.datomic.com/cloud/whatis/architecture.html#topologies#2019-12-1117:55marshallfor some additional information#2019-12-1117:56marshallalso useful: https://docs.datomic.com/cloud/operation/planning.html#2019-12-1117:56Adrián Rubio MorloteThank you so much!#2019-12-1118:36johnjIs there a way to directly omit the :db/ident key in a pull expression for an "enum"? having only its value returned#2019-12-1118:42johnj{:db/ident :green} => :green#2019-12-1119:56favilaNo, it is not possible. You have to postprocess with e.g. clojure.walk#2019-12-1120:34johnjjust used update for this simple case, definitely going to need clojure.walk on the next one, thanks#2019-12-1119:02John MillerI’m having trouble with upserts on entities with tuple identities containing a ref. The only way I can get upserts to work is to look up the entity id of the ref and use that in the query. Tempids work for the initial insert but then fail with an identity conflict. Lookup refs don’t work at all. And tuple does not appear to work in transact. Here’s a repro script:
(d/transact dt-conn {:tx-data [{:db/ident :example/r
:db/valueType :db.type/keyword
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
{:db/ident :example/id
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
{:db/ident :example/ref
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :example/multi
:db/valueType :db.type/tuple
:db/tupleAttrs [:example/ref :example/id]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}]})
(d/transact dt-conn {:tx-data [[:db/add "one" :example/r :one]]})
(d/transact dt-conn {:tx-data [{:example/ref [:example/r :one]
:example/id "foo"}]}) ; Succeeds once - Fine, need to include the identity tuple
(d/transact dt-conn {:tx-data [{:example/ref [:example/r :one]
:example/id "bar"
:example/multi [[:example/r :one] "bar"]}]}) ; Fails - "Invalid tuple value"
(d/transact dt-conn {:tx-data [[:db/add "ONE" :example/r :one]
{:example/ref "ONE"
:example/id "baz"
:example/multi ["ONE" "baz"]}]}) ; Succeeds once. Then fails - "Unique conflict: :example/multi, value [...] already held by ..."
(d/q '[:find ?e :where [?e :example/r :one]] (d/db dt-conn)) ; Put the resulting id in the next query
(d/transact dt-conn {:tx-data [{:example/ref [:example/r :one]
:example/id "qux"
:example/multi [<insert value here> "qux"]}]}) ; Succeeds upsert
Any suggestion on how to make this work?#2019-12-1121:25dominicmI'm getting " handshake timed out" when connecting to a datomic free transactor using the datomic clojure client api:
Dec 11, 2019 9:18:41 PM org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector createConnection
ERROR: AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source)
I vaguely recall there being some changes to all this, but I don't remember the detail.#2019-12-1121:38dominicmI set encrypt-channel to false#2019-12-1121:50steveb8nQ: I’m running Ions using the “connect on use” pattern when my app startup calls a component/start of my stack. My stack becomes unstable after a few CI deploys and I suspect that the lack of a component/stop call before the new stack is started is leaking resources such as aws clients. What is the recommended way to shutdown stacks during deploys? Is there a hook in one of the step functions used in deploy to address this?#2019-12-1217:28grzmSeeing memory allocation errors on the BeforeInstall step during deploy to a solo Datomic cloud instance.
[stderr]OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000faa00000, 72351744, 0) failed; error='Cannot allocate memory' (errno=12)
#2019-12-1217:29grzmIf I recall, in the past when this happens, just deploying again fixes it. Doesn't seem to be the case this time. No new deps or big code changes. Version 535-8812 (solo)#2019-12-1217:30grzmThe rollback after the failed deploy is also failing.#2019-12-1218:58grzmEnded up turning down the Autoscaling from 1 instance to 0, and back to 1 to bounce the solo instance. Deploy then started working again.#2019-12-1220:47steveb8nI have the same problem. Roughly every 3rd ci build. To fix I just kill the EC2 instance and the alb brings up a new one automatically#2019-12-1220:48steveb8nThe deploy works again. It has to be memory#2019-12-1220:50grzmThat sounds painful. Terminating the instance is probably a faster way than modifying auto-scaling each time (twice!). Thanks for the tip!#2019-12-1223:14steveb8nglad to help. just make sure you don’t terminate the bastion 🙂#2019-12-1222:13steveb8nQ: I’m about (tomorrow) to create a production cloud topology. I’d like to verify my thinking before I do this. Is there someone at Cognitect I can talk to about this?#2019-12-1301:02Brian AbbottRandom… do we have any idea of how many datomic-deployment instances exist in the world? I am contemplating a book proposal to one of the major tech publishers… Justifying potential market size would be helpful. Also, is there anything anyone here would like covered beyond the beaten path for a DB book?#2019-12-1303:00souenzzoAt NuBank, a single bank from Brasil, there are more than 1000 transactor instances#2019-12-1315:26johnjouch#2019-12-1315:34onetomwe should link their latest video, where they talked about this, to give some credibility to this statement.
@U2J4FRT2T are you working for nubank?#2019-12-1316:10souenzzoI will dig up a video presentation with this info.
Not working there, but we use the same stack and I talk a lot with nubankers#2019-12-1302:50onetomDid anyone run into not being able to start the socks proxy for a Datomic Cloud Solo installation?
$ datomic-access client -p dev -r ap-northeast-1 enterprise-sandbox
download: to ../../.ssh/datomic-ap-northeast-1-enterprise-sandbox-bastion
fatal error: An error occurred (404) when calling the HeadObject operation: Key "enterprise-sandbox/datomic/access/private-keys/bastion.hostkey" does not exist
Unable to read gateway hostkey, make sure your AWS creds are correct.
Where is the best place to find answers to these kinds of questions?#2019-12-1302:52onetomMy Solo version is the one before the last (Nov something marketplace version).
datomic-cli is 0.9.33#2019-12-1302:55onetomi also don't quite understand why would there be some private key stored in s3 to my system.
when i was starting up the solo system i was asked for a keypair.
isn't that keypair is the one which is used for both the primary compute group instances and for the bastion host too?
where is this documented?#2019-12-1303:13onetomsince it was only complaining about the hostkey, i've removed that step from the datomic-access script and accepted the hostkey manually,
BUT I had to enable SSH access in the bastion host's security group manually too.
i understand it's a very cautious default, but is it documented anywhere?
i've read a lot of docs and seen tons of videos, but none of those mentioned this requirement.#2019-12-1314:16marshall@onetom https://docs.datomic.com/cloud/getting-started/configure-access.html#authorize-gateway#2019-12-1315:31onetomthis section has eluded me somehow...
no idea why.
the documentation looks great and makes sense now.
thanks a lot for giving direction!#2019-12-1314:16marshall@onetom the latest Datomic CLI requires the latest release of Cloud#2019-12-1315:27onetom@U05120CBV since the nov 26th/27th was not working on nov 29th (when you said it's an "AWS marketplace issue")
and i haven't seen any newer images on the marketplace, i just assumed it's still not working.
i did take a peek at the release notes, but since it did not have any entries newer than nov 29th, it also suggested that i should just use the previous to last version.
for the record, on the 29th of nov (HKT), the problematic cloud formation template url i found somehow from the aws console looked like this:
$ curl -s | jq .Mappings.RegionMap
{
"us-east-1": {"Datomic": "ami-066145045fc7a4ad0", "Bastion": "ami-09e416d6385c15902"},
"us-east-2": {"Datomic": "", "Bastion": ""},
"us-west-2": {"Datomic": "", "Bastion": ""},
"eu-west-1": {"Datomic": "", "Bastion": ""},
"eu-central-1": {"Datomic": "", "Bastion": ""},
"ap-southeast-1": {"Datomic": "", "Bastion": ""},
"ap-southeast-2": {"Datomic": "", "Bastion": ""},
"ap-northeast-1": {"Datomic": "", "Bastion": ""}
}
and it seems it was last modified on nov 15th:
$ curl -Is
HTTP/1.1 200 OK
x-amz-id-2: gvBaTBQB/yS5MVpBf2kzkSHfS9mLZQbvts4BhOTrkrEjq6pD3g+g4ydf0m4knsJ+WBoPwN+FwDE=
x-amz-request-id: AF9C1169F7991A06
Date: Fri, 13 Dec 2019 15:16:05 GMT
x-amz-replication-status: COMPLETED
Last-Modified: Fri, 15 Nov 2019 15:27:33 GMT
ETag: "ae57666cf395e15fe66809522c78c92c"
x-amz-version-id: GIC.pH3ALeoNUyyaU_3.bmehFNcZL2E2
Accept-Ranges: bytes
Content-Type: application/octet-stream
Content-Length: 130983
Server: AmazonS3
now the release page (https://docs.datomic.com/cloud/releases.html#release-history) links to a slightly different url for the "same 569-8835 version", which is dated 11/26/2019, but its last-modified date is dec 03:
$ curl -Is
HTTP/1.1 200 OK
x-amz-id-2: NO6zf0g58YfTPGxMSrZtg+99UNgEqx++RkChsDhEhcSU3GIze3uMZ373Dkg3by/EQWDQSTIYz5A=
x-amz-request-id: C70010966DC504E4
Date: Fri, 13 Dec 2019 15:15:52 GMT
Last-Modified: Tue, 03 Dec 2019 14:25:29 GMT
ETag: "6b67112cbd9bddb2ee4d59de71e5e6a3"
Accept-Ranges: bytes
Content-Type: binary/octet-stream
Content-Length: 107111
Server: AmazonS3
and it contains AMIs for many regions, as expected:
curl -s | jq .Mappings.RegionMap
{
"us-east-1": {"Datomic": "ami-0b853443711d20708", "Bastion": "ami-07dccb5098034c24d"},
"us-east-2": {"Datomic": "ami-0ee324fea6a1a937e", "Bastion": "ami-0d2278c155c1d6754"},
"us-west-2": {"Datomic": "ami-0ccaf9cb58eaa44db", "Bastion": "ami-0f410f80475d0894e"},
"eu-west-1": {"Datomic": "ami-04d09a0a833d508eb", "Bastion": "ami-073e038edb8c675b6"},
"eu-central-1": {"Datomic": "ami-07b3154c0242f0e87", "Bastion": "ami-04e29a583e47d8b80"},
"ap-southeast-1": {"Datomic": "ami-0e4afb22a156fdbac", "Bastion": "ami-018ec097e75c16803"},
"ap-southeast-2": {"Datomic": "ami-023f62ca869bf17a2", "Bastion": "ami-09360d51b0aa43a1b"},
"ap-northeast-1": {"Datomic": "ami-0bc5ca724bf8882b5", "Bastion": "ami-0bcc43206700dc511"}
}
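Since the earlier template in this thread shipped with empty AMI ids, a cheap pre-flight check on a fetched template can catch that before launching a stack. `check_amis` is a made-up helper, and grep is a rough stand-in for a proper jq query:

```shell
# Made-up helper: fail if a fetched CloudFormation template's RegionMap
# contains any empty AMI id (as in the broken template earlier in this thread).
check_amis() {
  if printf '%s' "$1" | grep -q '": ""'; then
    echo "template has empty AMI entries"
    return 1
  fi
  echo "all regions have AMIs"
}

# usage, where <template-url> is whatever the releases page links to:
# check_amis "$(curl -s <template-url>)"
```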
#2019-12-1315:29onetomThat's not a very immutable move! ;D#2019-12-1315:33marshallUnfortunately we don’t have any control over the Marketplace listing or when/how stuff gets dated there#2019-12-1315:34marshallthe Datomic docs releases page will always be the official record of what releases are available and will have links to the separate system CFTs#2019-12-1315:34marshallin general, once you’ve subscribed to the product on the marketplace page, i would recommend you use the docs releases page from then on instead of going back to marketplace#2019-12-1315:36onetomthanks for the help!
that dec 3 last-modified date still doesn't make sense though.#2019-12-1315:37onetomi recorded all the details, in case it helps to make later releases less confusing.
i feel like i had bad luck and just tried things at a time when they were in flux.#2019-12-1315:37marshallprobably depends on the date we received notification from AWS that the release shipped and the date when we finished testing and actually ‘issued’ the release on our docs#2019-12-1315:38marshallthe objective is for any release listed as ‘current’ or ‘latest’ on the datomic docs release page to always work as would be expected#2019-12-1315:38marshallso if there are issues with the templates available directly from Marketplace we won’t post them to our releases page until those issues are resolved#2019-12-1315:39marshall(which, incidentally, is what happened in this last case)#2019-12-1315:42onetomthanks!
our team is getting more and more excited about datomic, despite these hiccups.
it took me a lot of explaining, but my efforts are starting to bear fruit! 🙂#2019-12-1315:43marshallGlad to hear it!#2019-12-1314:16marshallhttps://docs.datomic.com/cloud/releases.html#cli-0-9-33#2019-12-1317:32onetomI'm just going thru a solo stack deletion process by following https://docs.datomic.com/cloud/operation/deleting.html
i think examples like this:
aws --region (Region) application-autoscaling deregister-scalable-target --service-namespace dynamodb --scalable-dimension dynamodb:table:WriteCapacityUnits --resource-id table/datomic-(System)
aws --region (Region) application-autoscaling deregister-scalable-target --service-namespace dynamodb --scalable-dimension dynamodb:table:ReadCapacityUnits --resource-id table/datomic-(System)
could be better written as:
/usr/bin/env REGION="<region>" SYSTEM="<system>" \
bash -xc 'for dimension in Read Write; do aws --region $REGION application-autoscaling deregister-scalable-target --service-namespace dynamodb --scalable-dimension dynamodb:table:${dimension}CapacityUnits --resource-id table/datomic-$SYSTEM; done'
so the parts which need replacement are factored out to the beginning and only need to be replaced once.
while the for loop makes it slightly more complicated, it also highlights the intent better.
it was not easy to spot the few-letter difference between the 2 commands on the website, where the difference was even off screen...
or a compromise:
REGION="<region>"
SYSTEM="<system>"
aws --region $REGION application-autoscaling deregister-scalable-target --service-namespace dynamodb --scalable-dimension dynamodb:table:WriteCapacityUnits --resource-id table/datomic-$SYSTEM
aws --region $REGION application-autoscaling deregister-scalable-target --service-namespace dynamodb --scalable-dimension dynamodb:table:ReadCapacityUnits --resource-id table/datomic-$SYSTEM
although this only works under bash, zsh and the like, while the one using env also works under fish or anything else, really.#2019-12-1318:50joshkhis there a way to retract all values of a :db.cardinality/many attribute, or must i retract each value individually? something like:
[:db/retract eid :recipe/ingredients] ; throws exception
instead of
[[:db/retract eid :recipe/ingredients "milk"]
[:db/retract eid :recipe/ingredients "sugar"]
[:db/retract eid :recipe/ingredients "eggs"]]
#2019-12-1319:50Joe Lane@joshkh You must do the latter#2019-12-1400:25joshkhi feared as much, only because i'm lazy. 🙂 thanks @lanejo01 for confirming.#2019-12-1401:31shaun-mahood@joshkh I believe there’s something in the “Understanding & Using Reified Transactions” presentation at https://docs.datomic.com/on-prem/videos.html - haven’t watched it for a while though, so it may only apply to on-prem (or I might be mixing up videos - but that’s a great one regardless).
#2019-12-1405:15fjolne@joshkh there’s a nice collection of db fns, which includes fns for reseting to-many rels: https://github.com/vvvvalvalval/datofu#2019-12-1405:18fjolneThose use entity API, but are actually quite easy to rewrite with pull API (had to do this even on on-prem, cuz entity API introduced some subtle bugs for our case).#2019-12-1416:22mssis there a query explorer-type interface for datomic cloud deployments?#2019-12-1420:28val_waeselynckI don't think so unfortunately, but note that REBL might give you a lot of the same value prop.#2019-12-1422:32joshkhi wrote a little personal-use webapp to interactively explore my data, and planned to release it after christmas along with a guide on deploying containerised datomic cloud apps on AWS. maybe you can help me test it? 🙂#2019-12-1422:35joshkhit's a tree-like browser and (so far) supports transacting new values with in-place editing#2019-12-1422:35joshkh#2019-12-1416:27Joe Lane@mss Can you elaborate on what you mean by "query explorer"?#2019-12-1416:34msshttps://docs.datomic.com/on-prem/console.html#2019-12-1416:34mssconsole, I guess I should say#2019-12-1421:33onetomGot this error when I was upgrading a Solo system from 535-8812 to the latest (https://s3.amazonaws.com/datomic-cloud-1/cft/569-8835/datomic-solo-compute-569-8835.json)
Export with name <datomic-system-name>-CodeBucketPolicyArn is already exported by stack <datomic-system-name>-Compute-1P5MYAP35641W
i have no idea what it means.
i just deleted a system in the same region earlier and kept the code bucket as documented on the https://docs.datomic.com/cloud/operation/deleting.html page.
i suspect it has something to do with this specific situation, because that stack deletion didn't go completely flawlessly.
it was not able to delete a security group because it couldn't delete the ENIs it was referring to, so i had to manually delete them.
for now i will just tear the stack down and try to pull up the latest one and reuse the existing storage.
at least i will exercise how to do this...#2019-12-1422:06Ike MawiraHello, i would like to ask if the client pro version should match the specific version of datomic downloaded on my machine. I heard so in a tutorial while i was using datomic-free. Now i have downloaded datomic starter version 0.9.5981 but it seems that the latest version of client pro offered by Maven is 0.9.41.
I am getting an error while setting up a repl in intellij and I am not sure if that is the problem.
Dec 15, 2019 12:51:16 AM org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector createConnection
ERROR: AMQ214016: Failed to create netty connection
java.nio.channels.ClosedChannelException
at io.netty.handler.ssl.SslHandler.channelInactive(...)(Unknown Source)
#2019-12-1422:21Ike MawiraIts working, seems like i had imported [datomic.api] instead of [datomic.client.api] for my case.#2019-12-1422:10joshkh@shaun-mahood thanks for the video link, i'll check it out. and thanks to you too, @fjolne.yngling. transaction functions are the way forward.#2019-12-1422:17joshkhout of curiosity, do folks here ever find themselves battling the 120 second sync-libs error when deploying Ions with "large" dependencies -- for example, the AWS-SDK or some Apache java library?#2019-12-1516:37chris_johnsonI have not encountered that problem but it sure sounds to me like a good use case for incorporating AWS Lambda Layers (https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html). I imagine that might run into trouble in that it would likely need to be the Cloud instance’s …instance… of Lambda the Ultimate that actually held the Layer refs, so if you had two Ions that wanted incompatible versions of the AWS SDK for example, that would hurt#2019-12-1516:39chris_johnsonBut I could definitely see a future state where Datomic Cloud exposes the machinery of Layers to help you slim down the dependency payload for a given Ion, it would just need to be thought through well (and for all I know, correctly incorporated with existing use of Layers - I have no visibility into how Lambda the Ultimate or Ions are actually implemented today)#2019-12-1517:01kennyIons don’t deploy to Lambdas. I think they already do an intelligent diff of dependencies, only uploading the ones that changed. It sounds like joshkh may want to increase a timeout of some kind. #2019-12-1517:22chris_johnsonYou’re correct, it was me who needed more time to think things through well. 🙂#2019-12-1518:50joshkhthanks for the input @chris_johnson. one can never know too much about lambdas, and i will explore layers for sure. 
@kenny is right though - I'm not using lambdas at the moment, and i don't know of a way to increase this timeout.#2019-12-1519:42dominicmException: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "PoolCleaner[2018699554:1576421257813]"
Exception in thread "Thread-4 (ActiveMQ-client-global-scheduled-threads-270014218)" java.lang.OutOfMemoryError: Java heap space
Exception in thread "Thread-2 (ActiveMQ-client-global-scheduled-threads-270014218)" java.lang.OutOfMemoryError: Java heap space
I'm seeing this, does it mean that my query will never return? I'm doing some big queries. Will it recover?#2019-12-1519:45onetomim trying to make an ion, but im getting back base64-encoded responses.
is there a way to get non-encoded responses somehow?
(ns hodur-example-app.core
  (:require [datomic.ion.lambda.api-gateway :as api-gateway]))
(defn debug [payload]
{:status 200
:headers {"Content-Type" "text/plain"}
:body "debug"})
(def debug-ion (api-gateway/ionize debug))
datomic/ion-config.edn:
{:allow [hodur-example-app.core/debug-ion]
:lambdas {:debug-ion
{:fn hodur-example-app.core/debug-ion
:integration :api-gateway/proxy}}
:app-name "enterprise-sandbox"}
aws lambda invoke --function-name enterprise-sandbox-compute-debug-ion /dev/stdout:
{"statusCode":200,"headers":{},"body":"ZGVidWc=","isBase64Encoded":true}{
"ExecutedVersion": "$LATEST",
"StatusCode": 200
}
where ZGVidWc= is indeed the expected debug response:
$ echo 'ZGVidWc=' | base64 -d
debug
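While debugging, it can help to unwrap a whole proxy response in one go; here is a throwaway sed-based helper (the name is made up, and it assumes a GNU-style `base64 -d`):

```shell
# Throwaway helper: extract the base64 "body" field from an API Gateway
# proxy response like the one above and decode it.
decode_ion_body() {
  printf '%s' "$1" | sed -n 's/.*"body":"\([^"]*\)".*/\1/p' | base64 -d
}

# decode_ion_body '{"statusCode":200,"headers":{},"body":"ZGVidWc=","isBase64Encoded":true}'
# prints: debug
```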
#2019-12-1519:53onetomand if i expose the lambda via an API gateway, i do get the base64 content still:
$ curl -d ''
ZGVidWc=
#2019-12-1601:39onetomI guess I'm doing this wrong, according to
https://docs.datomic.com/cloud/ions/ions-reference.html#lambda-ion
I must return a string if I'm exposing some fn as lambda.
Maybe the hodur-example-app is a bit obsolete?
https://github.com/hodur-org/hodur-example-app#2019-12-1603:40onetomi've tried the ion-starter project too and that also returns a base64 response:
$ curl -s -d ':shirt'
W1sjOmludns6c2t1ICJTS1UtMjgiLCA6c2l6ZSA6eGxhcmdlLCA6Y29sb3IgOmdyZWVufV0KIFsjOmludns6c2t1ICJTS1UtMzYiLCA6c2l6ZSA6bWVkaXVtLCA6Y29sb3IgOmJsdWV9XQogWyM6aW52ezpza3UgIlNLVS00OCIsIDpzaXplIDpzbWFsbCwgOmNvbG9yIDp5ZWxsb3d9XQogWyM6aW52ezpza3UgIlNLVS00MCIsIDpzaXplIDpsYXJnZSwgOmNvbG9yIDpibHVlfV0KIFsjOmludns6c2t1ICJTS1UtMCIsIDpzaXplIDpzbWFsbCwgOmNvbG9yIDpyZWR9XQogWyM6aW52ezpza3UgIlNLVS01MiIsIDpzaXplIDptZWRpdW0sIDpjb2xvciA6eWVsbG93fV0KIFsjOmludns6c2t1ICJTS1UtMTIiLCA6c2l6ZSA6eGxhcmdlLCA6Y29sb3IgOnJlZH1dCiBbIzppbnZ7OnNrdSAiU0tVLTQ0IiwgOnNpemUgOnhsYXJnZSwgOmNvbG9yIDpibHVlfV0KIFsjOmludns6c2t1ICJTS1UtMTYiLCA6c2l6ZSA6c21hbGwsIDpjb2xvciA6Z3JlZW59XQogWyM6aW52ezpza3UgIlNLVS02MCIsIDpzaXplIDp4bGFyZ2UsIDpjb2xvciA6eWVsbG93fV0KIFsjOmludns6c2t1ICJTS1UtNCIsIDpzaXplIDptZWRpdW0sIDpjb2xvciA6cmVkfV0KIFsjOmludns6c2t1ICJTS1UtMzIiLCA6c2l6ZSA6c21hbGwsIDpjb2xvciA6Ymx1ZX1dCiBbIzppbnZ7OnNrdSAiU0tVLTI0IiwgOnNpemUgOmxhcmdlLCA6Y29sb3IgOmdyZWVufV0KIFsjOmludns6c2t1ICJTS1UtMjAiLCA6c2l6ZSA6bWVkaXVtLCA6Y29sb3IgOmdyZWVufV0KIFsjOmludns6c2t1ICJTS1UtOCIsIDpzaXplIDpsYXJnZSwgOmNvbG9yIDpyZWR9XQogWyM6aW52ezpza3UgIlNLVS01NiIsIDpzaXplIDpsYXJnZSwgOmNvbG9yIDp5ZWxsb3d9XV0K
then i guess the problem might be how i set up the api gw 😕#2019-12-1603:50onetomi think i've found the missing step in the docs:
https://docs.datomic.com/cloud/ions/ions-tutorial.html#org6d06b38
i had to set all (`*/*`) content types to be treated as binary#2019-12-1611:42geodromeI am working through day-of-datomic-cloud. I'm on tutorial/constructor.clj. When I try to execute the following line:
(d/with (d/with-db conn) {:tx-data [{:user/email "
https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/constructor.clj#L37
I get an error:
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).'datomic/ion-config.edn' is not on the classpath
I had no issues with several previous tutorials. I am using Cursive with IntelliJ. The REPL is configured to ‘Run with Deps.’ I restarted the REPL and stuff like that. My deps.edn includes “resources” under the :paths key. And the file datomic/ion-config.edn is there in the resources directory.
I suspect this has something to do with the datomic/ion-config.edn being unavailable in the cloud instance. Am I on the right track? I did not configure anything pertaining to ions on the cloud instance.#2019-12-1612:22daniel.spanielIs there a way to set the blank > < in a variable passed to a query? I am doing a query where I am filtering on values in a field, but the filter might be empty, in which case I want to get any and all values for that field. Was trying to pass in a variable like ' or ` or '' or "" (the blank) if the value was nil, but to no avail.#2019-12-1612:22daniel.spaniel(d/q '[:find (pull ?e pattern)
:in $ pattern ?customer-id
:where
[?e :invoice/customer ?customer-id]]
db '[*] customer-id)
so for example the customer-id might have an id or might be nil ( but I can't use nil and need a blank )#2019-12-1614:11Joe Lane@dansudol The query is a datastructure so you can construct it on the fly using a cond depending on customer-id's presence then conj either [?e :invoice/customer ?customer-id] or [?e :invoice/customer]. I wouldn't use an if because filters always seem to expand the cases they handle.#2019-12-1614:23favilaanother option is a rule which uses a sentinel#2019-12-1614:49daniel.spanielthanks @lanejo01 I did not know I could use cond .. good idea .. not sure what a sentinel is @favila#2019-12-1614:53favilaA value you pick to represent “no filter” which is not in the space of matchable values#2019-12-1615:05favila'[[(invoice-with-customer ?e ?cust)
[(!= ?cust :any)]
[?e :invoice/customer ?cust]]
[(invoice-with-customer ?e ?cust)
[(= ?cust :any)]]
]#2019-12-1615:05favila(for e.g.)#2019-12-1615:05faviladatalog refuses nil values#2019-12-1615:06daniel.spanieli know .. kind of tricky#2019-12-1615:06daniel.spanieli see your query .. i just don't get it .. hard to grok#2019-12-1615:07daniel.spanielis invoice-with-customer an on the fly function you are defining ?#2019-12-1615:08favilaits a rule#2019-12-1615:09favilahttps://docs.datomic.com/on-prem/query.html#rules#2019-12-1615:14daniel.spanielwhoa .. this is fancy .. very interesting too .. trying this out#2019-12-1614:55daniel.spaniel@lanejo01 could you do me a favour and write that query with the cond .. i was hacking around and could not get it#2019-12-1614:56Joe LaneSure, hang on.#2019-12-1615:10Joe Lane(defn customers
[db {:keys [customer-id] :as arg-map}]
(let [the-query (cond-> {:query {:find ['(pull ?e pattern)]
:in ['$ 'pattern]
:where []}
:args [db '[*]]}
customer-id (->
(update-in [:query :where] conj '[?e :invoice/customer ?customer-id])
(update-in [:query :in] conj '?customer-id)
(update-in [:args] conj customer-id))
(nil? customer-id) (-> (update-in [:query :where] conj '[?e :invoice/customer])))]
(d/q the-query)))#2019-12-1615:10Joe LaneI think that should work. You might want to extract the right hand side of the first cond-> clause to be its own fn, but I thought I'd include it all in one place for now.#2019-12-1615:14daniel.spanielahh .. update-in .. ok .. very clever#2019-12-1616:28daniel.spanielturned out to be super whacky .. ( i have way more clauses and variable ) BUT .. that sucker worked .. i am well shocked .. and thanks for the days surprise of whacky code .. very interesting#2019-12-1617:03Joe LaneI use an extended variation of this with about 30 small clauses to construct and expose a domain specific pseudo-sql query language (in json!) to the mobile developer on one of my projects. They love it and have implemented several features without even talking to me about it. It's pretty cool 🙂
I use the cond-> pattern whenever I have possibly nillable values and I'm constructing queries/tx-data. It's one of my top 5 fav language tools.
Glad the above was helpful.#2019-12-1617:13daniel.spanielsuper nifty .. thanks again 🙂#2019-12-1619:03rgorrepatisala2018!#2019-12-1620:46aisamuYou might want to change your password :P#2019-12-1620:58rgorrepatioops!#2019-12-1621:37aisamu(Jokes aside, please be aware that we have at least 2 public logs of these channels 😬)#2019-12-1623:35kennyFor those working with Datomic Cloud, here's a short script that will automatically delete the durable storage for you so you don't have to go through the manual steps listed in the docs. https://github.com/ComputeSoftware/datomic-cloud-tools. We've found this useful to integrate with our infra-as-code tools and just generally making spinning up and down Datomic systems a bit easier.#2019-12-1717:24alidlorenzoif we follow the datomic ion tutorial we're interacting/changing with live database, correct? so we should delete it afterwards in order to start from fresh slate (since there's no way to rollback changes)?#2019-12-1717:26alidlorenzoalso, does starting/stopping a datomic-gateway effect billing? (so i shouldn't forget to stop it after developing?) there's not a lot of info on implications of these commands in docs#2019-12-1717:43shaun-mahood@alidcastano Yes, the ion tutorial transacts data to a live database. If you want to start over, deleting and recreating the database shouldn't cause any problems.
The API gateways are billed based on the number of requests, so you should only see a cost if you are using it a lot (mine is $3.50/million requests, not going to break the bank from development use).#2019-12-1717:45alidlorenzoso it doesn't matter whether we leave it running or not, just the amount of requests we do during development#2019-12-1717:46Joe LaneYou two are talking about different things.#2019-12-1717:46Joe LaneAPIGateway is billed by the request, however, @alidcastano is talking about what used to be known as the bastion server.#2019-12-1717:47shaun-mahoodDid they change the name on me and I didn't notice?#2019-12-1717:47Joe Laneyep#2019-12-1717:47Joe LaneIt does more things now though so I see why they changed it.#2019-12-1717:47Joe Lane@alidcastano The datomic-gateway is an ec2 machine which you will be billed for like normal unless you shut it off.#2019-12-1717:48shaun-mahoodhttps://docs.datomic.com/cloud/operation/bastion.html
is now called the datomic-gateway then?#2019-12-1717:52alidlorenzo@U0CJ19XAM so starting/stopping is required for ion development correct?#2019-12-1717:52Joe LaneThat is my understanding. https://docs.datomic.com/cloud/getting-started/get-connected.html#access-gateway https://docs.datomic.com/cloud/whatis/architecture.html#security#2019-12-1717:52shaun-mahoodOh yeah, it's called "Access Gateway" in https://docs.datomic.com/cloud/operation/operation.html now
Thankfully, if you forget to turn it off it's still not going to cost you much - looks like mine cost $3.74 to run the EC2 instance for an entire month#2019-12-1717:54alidlorenzooh yeah that's not bad at all. my first time interacting with aws I once left a service running that costs me a few hundred dollars so that has scarred me 😆#2019-12-1717:54alidlorenzothanks for looking into it @U054BUGT4#2019-12-1717:55shaun-mahoodOuch!#2019-12-1717:55Joe Lane@alidcastano "developing", no, there is no need to bounce the machine (except maybe with some of the new analytics stuff, i'm not 100% on that...). If you want to deploy an ion for development (like a transaction function) the push/deploy commands from the cli go against the primary node. The access gateway is an evolution of the bastion machine which is used for secure access into the vpc. It now does more than secure access but if you're just getting started you can treat it like a "jump box".#2019-12-1717:59alexmillerI am not an expert, but I think generally you should not need to bounce the datomic gateway for anything with analytics - it picks up changes in the metaschema dynamically afaik (but I could be wrong)#2019-12-1718:06marshallThat’s an old docs page#2019-12-1718:06marshalllatest info is here: https://docs.datomic.com/cloud/operation/howto.html#datomic-gateway#2019-12-1718:06marshalli will fix the links/pages#2019-12-1800:18rapskalian@alidcastano I usually scale down the auto-scaling group when not actively developing in order to save money. This section might be useful to you.
https://docs.datomic.com/cloud/operation/planning.html#turning-down#2019-12-1718:04eoliphantare cross-db joins supported in cloud?#2019-12-1718:04alidlorenzowhat's the recommended testing approach for ions? any repos or articles that demonstrating it?#2019-12-1718:05alidlorenzo(not referring to repl development, but unit tests ideally with some sort of rollback per test)#2019-12-1718:22Joe LaneWhat exactly do you want to unit test? You can find code in the helpers here (https://github.com/cognitect-labs/day-of-datomic-cloud) that you can adapt to creating/destroying databases, but you may not need it.#2019-12-1718:32alidlorenzojust basic tests to make sure resolvers are doing what's expected#2019-12-1718:32alidlorenzoat least that's what im used* to doing with graphql + postgres.
is not how it's done with datomic?#2019-12-1718:46Joe LaneSo this is testing graphql resolvers using (I assume) Lacinia?#2019-12-1719:11alidlorenzothat would be the equivalent setup, yea, but I haven't even gotten that far yet.
I just like to wrap my head around how the dev/test flow works so I can have more confidence in what im doing.#2019-12-1719:11alidlorenzoare the ions themselves not tested too?#2019-12-1719:30Joe LaneWell, an "ion" is an aws lambda function which acts as a proxy for your clojure code, so, you can just invoke your clojure code in a test and not involve invoking actual ions.#2019-12-1719:51alidlorenzoah have to learn the terminology better, thanks.
yes, I mean testing the actual clojure code. what I have in mind is an example setup that shows how I can test the API with an in-memory client database (has the same API ions use) and roll back changes per test
hopefully that makes sense in context of datomic, otherwise I'll have to dig more into the code before commenting further 😅#2019-12-2016:31mauricio.szaboI also want to know, what we're doing right now is creating a database then destroying it after each test, but its awfully slow and "seems wrong"... 😄#2019-12-2016:38Joe LaneEh, it's not wrong, but you might be able to optimize. You can use d/with-db to try several tests after creating a db with some common fixture data.#2019-12-2020:08mauricio.szaboIs there any alternative? Like, to run local tests , the need to connect to a cloud instance, create a database with a unique name that does not conflict with anyone else running tests at that time seems strange...#2019-12-2020:17Joe LaneIt only matters if you work with the database. Otherwise no need to connect.#2019-12-2020:19mauricio.szaboHow do I do an integration test, for example, for a CRUD application without working with the database?#2019-12-2020:20alidlorenzodo only peer database have an in-memory alternative? (so that approach can't be used with ions / client api)#2019-12-1721:14matthavenercan :db/tupleAttrs refer to idents with :db.type/tuple? (ie tuple ident containing tuples)#2019-12-1722:03markbastianAny idea how to get past this issue with stack creation: "Embedded stack arn:aws:cloudformation:us-east-1:...:stack/vcl-Storage.../... was not successfully created: The following resource(s) failed to create: [InternetGateway, Vpc, AvailabilityZones]."? This is a solo instance.#2019-12-1722:39steveb8nQ: I just went live with a prod system today. I think I still have a slow memory leak but it there’s so much headroom that its stable anyway. Is it ok to occasionally kill one of the instances to flush this out? 
I was thinking of doing this each night until I can find the problem#2019-12-1723:07kennyDo you happen to be using the aws-api lib?#2019-12-1800:01steveb8nyes!#2019-12-1800:01steveb8nI have been putting all the calls in try/finally blocks to “stop” the client#2019-12-1800:01steveb8nbut still leaking a bit I think#2019-12-1800:02steveb8ndo you have a story here?#2019-12-1800:03kennyAh. We had that same problem. We still create a client on all function calls but have a
(def default-http-client
(delay (aws/default-http-client)))
statically declared. That is then passed to the client function via :http-client @default-http-client . That solved the memory leak for us.#2019-12-1800:04kennyThe behavior you described sounded very similar. It would take ~1 day before an OOM would occur.#2019-12-1801:52steveb8nDid the system detect the situation and cycle the node when it hit OOM? Or did you have to do this manually?#2019-12-1801:52steveb8nThanks BTW!#2019-12-1801:58kennyWe don’t use Ions so don’t know. #2019-12-1803:39steveb8nok. thanks. I’ll keep a close eye#2019-12-1800:46QuestCould the tuple attribute limitation of "must have at
I've been using it but padding with nil to reach >= 2 length. This creates awkward code like below.
(defn pad-tuple-nils
[v]
(let [length (count v)]
(cond (>= length 2) (vec v)
(= length 1) (conj (vec v) nil)
:else [nil nil])))
And the inverse, making sure any frontend display runs a (remove nil? tuple) so as not to render empty elements.
I do want to note that the convenience of getting vectors back from DB queries is a great addition & I hope this can be upgraded to bring greater parity between plain Clojure vectors <-> Datomic tuples.#2019-12-1815:03camdezHi all…struggling with a query that I feel shouldn’t be too hard to write…perhaps I’m wrong…
(def animal-db
[[1 :name "Bear" 1]
[1 :kingdom :mammal 1]
[2 :name "Dog" 1]
[2 :kingdom :mammal 1]
[3 :name "Snake" 1]
[3 :kingdom :reptile 1]
[4 :name "Frog" 1]
[4 :kingdom :amphibian 1]])
;; Find all animal names belonging to the kingdoms passed in
(d/q '[:find [?name ...]
:in $ [?target-kingdom ...]
:where
[?animal :name ?name]
[?animal :kingdom ?target-kingdom]]
animal-db #{:mammal :reptile})
;; => ["Snake" "Dog" "Bear"]
;; How can I find the names of the animal NOT in the kingdoms passed
;; in?
;; ???
;; => ["Frog"]
(1) How can I write the desired query, excluding values for a collection passed? (2) Is this a case of “dynamic conjuction”, as described in this post? https://stackoverflow.com/questions/43784258/find-entities-whose-ref-to-many-attribute-contains-all-elements-of-input#2019-12-1815:06camdezFWIW, I know it can be done the following way, but I assume it’s inefficient:
(d/q '[:find [?name ...]
:in $ ?target-kingdoms
:where
[?animal :name ?name]
[?animal :kingdom ?kingdom]
(not [(contains? ?target-kingdoms ?kingdom)])]
animal-db #{:mammal :reptile})#2019-12-1815:12favilawhy assume it’s inefficient?#2019-12-1815:14camdezSince predicate expressions can contain arbitrary code, they must not be factored into the query planner…I’m deducing that they just run over the full set of matches, and filter it. In this case that would be every single entity (“animal”), so I’m basically doing a full tablescan.#2019-12-1815:14camdezRather than taking advantage of the indexes.#2019-12-1815:14favilathat’s true either way#2019-12-1815:15favila[?animal :kingdom ?kingdom] if it were the first clause and :kingdom were indexed would benefit from an index#2019-12-1815:16favilaif it’s not, it’s a filter; and using datalog pattern-matching may be slower than checking against a set#2019-12-1815:17favila[?animal :kingdom ?kingdom][?animal :name ?name] is the fastest if ?kindom is known and :kingdom is indexed#2019-12-1815:17camdezTotally agreed that the clause ordering would have been a fundamental issue for performance. But that’s trivially fixable. With that out of the way, is there a way the query can be written?#2019-12-1815:19favila[?a :kingdom ?k](not [(contains? ?kingdoms ?k)]) [?a :name ?n]#2019-12-1815:19favilawill scan :kingdom, but filter quickly#2019-12-1815:20favilaalternatively if you know all kingdoms, you can find the set-difference and convert your negation into a positive match#2019-12-1815:20favilathat would be faster if computing the set-difference is faster. if kingdom is an open set you will need a query to find all kingdoms anyway, so it may not be faster#2019-12-1815:25camdezYeah, the contains? approach is the same as what I supplied earlier, but with an improved clause ordering (I do know this is significant from a performance perspective).
Call it an intellectual exercise if you want, but is it possible to write the query with a collection binding, similar to the way the original query matching desired kingdoms (rather than their complements) was written?#2019-12-1815:26favilausing (not [?a :kingdom ?kingdom])#2019-12-1815:27favila(where ?kingdom is destructured from your list of kingdoms not to include)#2019-12-1815:28favilaThe difference is only that this will evaluate every ?kingdom, but the set containment test will only run once#2019-12-1815:32camdezI believe you’re suggesting the following:
(d/q '[:find [?name ...]
:in $ ?target-kingdoms
:where
(not [?animal :kingdom ?target-kingdoms])
[?animal :name ?name]]
animal-db #{:mammal :reptile})
;; => ["Frog" "Snake" "Dog" "Bear"]
Which returns incorrect results. I suspect because it’s finding all animal where there exists a kingdom not in the target-kingdoms set.#2019-12-1815:39favilano, I’m suggesting :in $ [?target-kingdom …] and (not [?animal :kingdom ?target-kingdom])#2019-12-1815:39favilathat clause should also be second#2019-12-1815:39favila> (where ?kingdom is destructured from your list of kingdoms not to include)#2019-12-1815:40favilaBy this I mean [?kingdom …]#2019-12-1815:40favila(d/q '[:find [?name ...]
:in $ [?target-kingdom ...]
:where
[?animal :name ?name]
(not [?animal :kingdom ?target-kingdom])]
animal-db #{:mammal :reptile})
;; => ["Frog" "Snake" "Dog" "Bear"]#2019-12-1815:41favilaputting it all together#2019-12-1815:42camdezI appreciate you putting it all together. But the results are still not what we’d want (I just ran it). Desired output would be ["Frog"].#2019-12-1815:43camdezI do understand that if I had a closed set (i.e. I know the full set even before going to the database), then that would make things much easier.#2019-12-1815:45favilaoh, yes, you’re right, the semantics of not don’t help here because every possibility is evaluated#2019-12-1815:45favilaso at least one of the kingdoms will not-match, thus the not clause will succeed#2019-12-1815:47favilaor I think? I’m still a bit puzzled by this result honestly#2019-12-1815:51camdezBingo, that’s what I’m thinking. And what I poorly tried to explain above. I think it expands to something like:
(or (not (= kingdom :mammal))
(not (= kingdom :reptile))
,,,)
And it’s always not at least one of those values. So, somehow, those need to unify…#2019-12-1816:50favilaI can’t think of anything which doesn’t force evaluating an item at a time (e.g. first+rest plus recursive rule) or uses sets#2019-12-1816:50favilaI doubt anything is faster than set membership testing#2019-12-1816:58camdezI appreciate the input. This SO post has some suggestions that feel like they might apply, but I haven’t yet managed to adapt them to fit this problem: https://stackoverflow.com/questions/43784258/find-entities-whose-ref-to-many-attribute-contains-all-elements-of-input#2019-12-1818:50dvingoI've been playing around with this, set up an in-mem db:
(def schema
[#:db{:ident :animal/kingdom :valueType :db.type/ref :doc "" :cardinality :db.cardinality/one}
#:db{:ident :animal/name :valueType :db.type/string :doc "" :cardinality :db.cardinality/one}
#:db{:ident :kingdom/name :valueType :db.type/keyword :doc "" :cardinality :db.cardinality/one}])
(def data
[{:kingdom/name :mammal :db/id 1}
{:kingdom/name :reptile :db/id 2}
{:kingdom/name :amphibian :db/id 3}
{:db/id 4 :animal/name "Bear" :animal/kingdom 1}
{:db/id 5 :animal/name "Dog" :animal/kingdom 1}
{:db/id 6 :animal/name "Snake" :animal/kingdom 2}
{:db/id 7 :animal/name "Frog" :animal/kingdom 3}])
(d/transact conn schema)
(d/transact conn data)
This is the only way I could get a query with the correct result:
(d/q '[:find ?kingdom
:where [?kingdom :kingdom/name ?name]
(not [?kingdom :kingdom/name :mammal])
(not [?kingdom :kingdom/name :reptile])
] (d/db conn))
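An editorial aside: the pattern above of one not clause per excluded value can be generated programmatically from a collection, which keeps the exclusion list dynamic. A minimal plain-Clojure sketch; build-exclusion-query is a hypothetical helper, not something from this thread:

```clojure
;; Sketch: generate one (not ...) clause per excluded value.
;; `build-exclusion-query` is hypothetical, for illustration only.
(defn build-exclusion-query [excluded-kingdoms]
  {:find '[?kingdom]
   :where (into '[[?kingdom :kingdom/name ?name]]
                (map (fn [k] (list 'not ['?kingdom :kingdom/name k])))
                excluded-kingdoms)})

;; (build-exclusion-query [:mammal :reptile]) produces a query map whose
;; :where is:
;;   [[?kingdom :kingdom/name ?name]
;;    (not [?kingdom :kingdom/name :mammal])
;;    (not [?kingdom :kingdom/name :reptile])]
```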
the (not [(contains?... version did not work for me.#2019-12-1818:51dvingoThis seems in line with Val's SO post along the line of generating a query.#2019-12-1815:08onetomI think u can do something with (complement #{:mammal :reptile})#2019-12-1819:34stijnis there a way to see which version (rev, i.e. git sha) of an ions application is deployed on a given compute group?#2019-12-1819:37dvingoif you know which CodeDeploy you want, click on the deploy and then there is a section "Revision details"#2019-12-1819:42stijnok, it's for script automation, but I think I can get to it with the aws cli, thanks!#2019-12-2017:45dvingois (datomic.ion/get-env) supposed to return something besides nil when running locally?#2019-12-2017:50timcreasyNot sure what you mean by “running locally”, but it should return the environment map specified in the compute group for that stack:
https://docs.datomic.com/cloud/ions/ions-reference.html#environment-map#2019-12-2017:51dvingoRunning in a repl that is not hosted in AWS#2019-12-2017:51dvingohow does it know what system you're using?#2019-12-2017:53timcreasyAh, it looks for the DATOMIC_ENV_MAP environment variable to be set. For example I have that in my repl config.#2019-12-2017:53timcreasyhttps://docs.datomic.com/cloud/ions/ions-reference.html#get-env#2019-12-2017:54dvingook great thank you!
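For reference, a sketch of setting that variable for a local REPL session, per the get-env docs timcreasy linked; the edn map value here is a made-up example:

```shell
# Locally, datomic.ion/get-env reads the DATOMIC_ENV_MAP environment
# variable as an edn map. This particular map is illustrative only.
export DATOMIC_ENV_MAP='{:env :dev}'
echo "$DATOMIC_ENV_MAP"
```

Start the REPL from the same shell so the variable is inherited by the JVM process.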
So the expectation is that locally you'll be providing the values in the map that should match what is in SSM?#2019-12-2115:01pvillegas12Does the restriction
> Strings are limited to 4096 characters
apply in datomic cloud?#2019-12-2115:06Keith HarperAfaik yes it does#2019-12-2115:07Keith HarperI believe it isn't a hard cap, but you shouldn't intentionally exceed it#2019-12-2116:55favilaIt only applies in cloud. It’s still a bad idea in on-prem but afaikt it never stops you#2019-12-2121:21alidlorenzoi'm trying to push a datomic ion and receiving the following error
Error building classpath. Could not find artifact com.datomic:ion-dev:jar:0.9.247 in central
from searching around, could maybe be a permissions error? but i'm able to connect and use database locally, and i've even granted my IAM user admin privileges.
anyone have experience with this error?#2019-12-2123:09alidlorenzoreconfigured my global deps.edn, but now getting
'datomic/ion-config.edn' is not on the classpath"#2019-12-2123:21alidlorenzo(this is with the datomic ion tutorial repo)#2019-12-2203:02alidlorenzoupgrading clojure seems to have fixed it#2019-12-2305:34onetomim a bit confused about the relation of ~/.m2/repository/com/datomic/client-cloud/0.8.81/client-cloud-0.8.81.jar
to the /Users/onetom/.m2/repository/com/datomic/client-api/0.8.38/client-api-0.8.38.jar library.#2019-12-2305:36onetomthe latter seems to provide the datomic.client.api namespace, but it's not mentioned in the cloud docs:
https://docs.datomic.com/cloud/client/client-api.html#2019-12-2305:39onetomthe on-prem docs only mention the com.datomic/client-pro lib.
both of them seem to provide the datomic.client.api namespace, but if i look into the client-cloud jar, it only contains code for a datomic.client.impl.cloud ns.#2019-12-2305:41onetomthere is also no documentation for the :cloud value for the :server-type key in the client api config data structure, only for the :ion type:
https://docs.datomic.com/cloud/ions/ions-reference.html#server-type-ion#2019-12-2305:52onetomwhich library should i include into my project?
i would like to develop locally offline too, so im starting a peer-server and i would connect to it via the com.datomic/client-pro lib and a {:server-type :peer-server} config,
but if i would want to connect to the cloud version too with the :ion server type, what should i do?#2019-12-2306:58onetomah, i see the same transitive dependency is there in both libraries:
<dependency>
<groupId>com.datomic</groupId>
<artifactId>client</artifactId>
<version>0.8.87</version>
</dependency>
and the datomic.client.api/client function understands all server-types:
(case (:server-type arg-map)
:ion
(client (assoc arg-map :server-type (impl/ion-server-type)))
(:cloud :peer-server)
(impl/dynacall 'com.datomic/client
'datomic.client.api.sync/client
arg-map)
(:peer-client)
(impl/dynacall '(or com.datomic/datomic-pro com.datomic/datomic-free)
'datomic.peer-client/create-client
arg-map)
:local
(impl/dynacall 'com.datomic/client-impl-local
'datomic.client.impl.local/create-client
arg-map)
(throw (impl/incorrect ":server-type must be one of :cloud, :local, :peer-client, or :peer-server")))#2019-12-2307:01onetomor not?
Execution error (FileNotFoundException) at datomic.client.api.impl/dynaload (impl.clj:15).
Could not locate datomic/client/impl/pro__init.class, datomic/client/impl/pro.clj or datomic/client/impl/pro.cljc on classpath.
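An editorial aside: the case dispatch onetom pasted pairs each :server-type with a different implementation artifact that must be on the classpath (the FileNotFoundException above is what you see when that artifact is missing). A sketch of the config maps the two common branches expect; every endpoint, key, and system name below is a placeholder, not a real value:

```clojure
;; Placeholder config maps for datomic.client.api/client.
;; All credentials, endpoints, and system names are illustrative.

;; :peer-server requires com.datomic/client-pro on the classpath:
(def peer-server-cfg
  {:server-type        :peer-server
   :access-key         "myaccesskey"
   :secret             "mysecret"
   :endpoint           "localhost:8998"
   :validate-hostnames false}) ; needed for self-signed localhost certs

;; :cloud requires com.datomic/client-cloud on the classpath:
(def cloud-cfg
  {:server-type :cloud
   :system      "my-system"
   :region      "us-east-1"
   :endpoint    "https://entry.my-system.us-east-1.datomic.net:8182/"})
```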
#2019-12-2307:47onetomalso, the https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html page doesn't mention the need for the :validate-hostnames false option,
while the https://docs.datomic.com/on-prem/peer-server.html#connecting page does.
I was getting a No name matching localhost found error otherwise, or when I changed localhost to 127.0.0.1, it was throwing a No subject alternative names present error.#2019-12-2314:57marshall@onetom The on-prem client and cloud client libraries are different dependencies but have some shared internals (As you determined)
I’ll fix that missing validate-hostnames part in the docs, thanks for finding that
You’ll need to use a different dependency in your project to connect to cloud than to connect to peer-server; I don’t think you can have them both active at the same time (CP and name conflicts); but you could use an alias in your deps and have the different config maps available in your code depending on which you’re connecting to#2019-12-2323:45dvingoI have some queries that all retrieve a common set of properties from a few entities, but each query adds a few different fields.
I'm wondering if it is a good idea to add helpers to write parts of the query, something like this:
(def my-query
{:find
(conj (select-fields-symbols) '?property)
:keys (conj (select-fields-keys) 'property)
:where (conj (select-fields-query '?project)
'[?project :project/d ?d]
'[?project :my-other/property ?property])}
;; =>
'{:find [?a ?b ?c ?d ?property]
:keys [a b c d property]
:where [[?project :project/a ?a]
[?project :project/b ?b]
[?project :project/c ?c]
[?project :project/d ?d]
[?project :my-other/property ?property]]}
Where we always want the fields (a, b, c, d), but depending on the situation also want to add a few additional fields, or join to other entities.
Is there some other way to go about this? Should I just duplicate the fields and not overthink things?
One other strategy I'm considering is to make multiple queries, the first one to get a set of common properties and then another to add on the additional fields I want.#2019-12-2323:57dvingoI think I answered my own question.... Use the pull API..#2019-12-2405:57steveb8n@danvingo a trick I’ve learned is to define pull expressions and then compose them into queries. The core apply-template fn is a big help when doing this. Once you do this and see how it works, you will see how composable pull expressions can be#2019-12-2405:58steveb8nprovides similar benefits as rules do for conditions#2019-12-2504:42codonnellIs it possible to host multiple ion applications in the same (solo) topology? Use case: I have a tiny personal project where I'd like to have staging and production applications with distinct configuration, and I don't anticipate there will be sufficient traffic to put any kind of tax on a single solo topology setup. Paying for another stack would be cost-prohibitive in this case. I'm open to other ways of accomplishing this, as well.#2019-12-2504:53steveb8nit would be tricky. you can create >1 databases with solo so you could keep data separate. the problem is code, I can’t think of a way to have > 1 version of code running in solo unless you do some crazy namespace duplication tricks.#2019-12-2504:53steveb8nbest bet is run 2 solo stacks and shut them down when you aren’t using them to save $. shutdown is simple, just set the ALB to zero instances and then 1 when you want to bring it back up#2019-12-2504:58codonnellGood point; I had not thought this all the way through. I may just not have a staging environment at all, since a bit of downtime won't be the end of the world in this case and rollback is not hard.#2019-12-2518:44codonnellI'd love some help building a query in the right way. I'm trying to add some authorization logic into my datomic queries. 
A slightly simplified example of what I'm trying to do is to allow a user to view their own attributes and also to view the attributes of other users which have granted them access. My initial attempt looks like this:
(d/q '{:find [(pull ?u [::user/name])]
:in [$ ?requester-id ?id]
:where [(or
(and
[?u ::user/id ?id]
[?u ::user/id ?requester-id])
(and
[?u ::user/id ?id]
[?requester ::user/id ?requester-id]
[?grant :view-grant/grantor ?u]
[?grant :view-grant/grantee ?requester]))]}
(get-db) (java.util.UUID/randomUUID) (java.util.UUID/randomUUID))
However, this doesn't work because each clause in an or expression is required to bind the same set of variables. I can get around this by making two separate queries and combining their results, but the code is more complicated. Is there a reasonable way to do this in one query?#2019-12-2518:54timcreasyHave you ever looked at Datomic Rules?
Can use multiple rule heads for logical OR and have different bindings in each head.
Can’t link to section on Rules on mobile, but it can be found here: https://docs.datomic.com/cloud/query/query-data-reference.html#2019-12-2518:56codonnellI will probably end up writing a rule for this later, thanks! For now I will use or-join until I know enough to build the right abstraction.#2019-12-2518:54codonnellFigured it out. 🙂 I needed to use or-join to specify which variables should be unified outside of the or. Corrected query in thread.#2019-12-2518:56codonnellThe corrected query for the benefit of anyone else who has the same question and sees this:
(d/q '{:find [(pull ?u [::user/id])]
:in [$ ?requester-id ?id]
:where [(or-join [?u ?requester-id ?id]
(and
[?u ::user/id ?id]
[?u ::user/id ?requester-id])
(and
[?u ::user/id ?id]
[?requester ::user/id ?requester-id]
[?grant :view-grant/grantor ?u]
[?grant :view-grant/grantee ?requester]))]}
(get-db) (java.util.UUID/randomUUID) (java.util.UUID/randomUUID))#2019-12-2618:04ennAfter retracting an entity (using the retractEntity transaction fn), what should I expect when I call d/entity with the now-retracted entity's ID?#2019-12-2618:29benoitSame thing as if you were passing an entity id that does not exist. I think it returns an "empty entity", not nil.#2019-12-2620:26potetmThe whole concept of “entity” is layered on top of Facts. retractEntity is a shorthand for “retract all Facts with this entity ID in the e slot.”
There is no concept for “remove this entity.” So, like Benoit said, d/entity always returns something. Even if that something happens to have no Facts in the db.#2019-12-2620:30codonnellI have a dependency (fulcro) in my ion code that depends on transit-clj 0.8.313. In particular, it uses the metadata support, which was introduced in transit-clj 0.8.303. Unfortunately, according to the ion push output, the datomic process overrides this transitive dependency with transit-clj 0.8.285 which doesn't include metadata support and tanks my deployment.
Are my only two options trying to downgrade my fulcro dependency to some point where it doesn't use the metadata support and waiting for the Datomic team to upgrade the transit-clj version? For reference, 0.8.285 was released in 2015; metadata support was introduced in March of 2018.#2019-12-2717:56Jacob O'Bryant@U0DUNNKT2 I had the same exact problem a couple weeks ago. I believe those are basically the only two options. Hopefully cognitect will update the deps at some point. In the mean time, I just patched fulcro:
diff --git a/src/main/com/fulcrologic/fulcro/algorithms/transit.cljc b/src/main/com/fulcrologic/fulcro/algorithms/transit.cljc
index bcc73f02..4a924e76 100644
--- a/src/main/com/fulcrologic/fulcro/algorithms/transit.cljc
+++ b/src/main/com/fulcrologic/fulcro/algorithms/transit.cljc
@@ -101,12 +101,11 @@
shadow-cljs, this means placing that in your package.json file (not relying on the jar version)."
([data] (transit-clj->str data {}))
([data opts]
- (let [opts (assoc opts :transform t/write-meta)]
- #?(:cljs (t/write (writer opts) data)
- :clj
- (with-open [out (ByteArrayOutputStream.)]
- (t/write (writer out opts) data)
- (.toString out "UTF-8"))))))
+ #?(:cljs (t/write (writer opts) data)
+ :clj
+ (with-open [out (ByteArrayOutputStream.)]
+ (t/write (writer out opts) data)
+ (.toString out "UTF-8")))))
(defn transit-str->clj
"Use transit to decode a string into a clj data structure. Useful for decoding initial app state#2019-12-2718:02Jacob O'BryantAlso, FYI, I had to add :exclusions [org.clojure/clojurescript] to fulcro because of another ion dependency conflict.#2019-12-2718:34codonnell@U7YNGKDHA Thanks! Are you maintaining a fork of fulcro somewhere with these changes?#2019-12-2718:35Jacob O'Bryantno, for now I've just got the patch applied to a local clone#2019-12-2718:36codonnell:+1:#2019-12-2718:36Jacob O'Bryant(the :exclusions [org.clojure/clojurescript is for your project's deps.edn btw, fulcro {:local/root "../fulcro" :exclusions [org.clojure/clojurescript]} to be specific, not another patch of fulcro)#2019-12-2718:59henrikThis is good to know! Was planning to look into deploying Fulcro on Ions.
Updates to Ion deps seem to happen extremely conservatively, don’t expect a bump anytime soon (if history is anything to go by).#2019-12-2803:28souenzzoYou can create a com.fulcrologic.fulcro.algorithms.transit namespace in your src and maintain your own version of this ns#2019-12-2803:31codonnellA variant of this was suggested in #fulcro: adding a no-op definition of the cognitect.transit/write-meta function to my codebase so that the fulcro namespaces compiles. I prefer this to redefining the fulcro transit namespace, personally.#2019-12-2718:09scottwleonardif I am using ions and query groups in AWS, do I need to stick with the recommended instance types or can I use something like a memory optimized instance? I have some very memory hungry queries and low query volume on this query group#2019-12-2718:28Joe LaneYou should stick to the recommended instances. #2019-12-2720:17scottwleonardHi Joe, nice to talk to you again. Can you tell me why those instances are recommended over the others? Specifically what performance characteristics make one instance type preferable over another?#2019-12-2721:05Joe LaneHey Scott, so I don't work for cognitect so I'm not an "official" source of truth. That being said, I may still be able to provide some helpful information. Which specific instance types are you working with right now? Also, maybe there is a way to reduce the memory consumption of said queries?#2020-12-3021:17eoliphantHi, i’m running into an issue where Cloud’s version of transit-clj is causing some problems with some libs im including Cloud appears to be using 0.8.285 which looks to be pretty old, and is missing write-meta etc#2020-12-3022:56henrikFulcro? 
If so, check #fulcro and scroll up a bit, there’s a recent discussion about it (Dec 27th).#2020-12-3023:17eoliphantyep great will check it out#2020-12-3122:50Jacob O'Bryantmaybe we can all sign a petition to get the dep upgraded 😉#2020-12-3115:48Brian AbbottHey guys, does anyone have experience with pre-creating ids on Datomic, pre-transact?#2020-12-3115:50favilaAs in, using UUID/randomUUID or d/squuid?#2020-12-3116:24Brian AbbottI think Squuid — the idea is, for data hierarchies, solve for child elements in the tree prior to their being created such that, an element that is created, can be given its :db/id on the client, at the time that the data is available and, from that point on, assuming the transact operation fully succeeds, reliably know that on the client, at the time that that entity’s id was assoc’d into a hierarchy, it will forever be that entity’s id.#2020-12-3116:39favilaDo you mean specifically :db/id? You cannot control :db/id and applications should not rely on it being indefinitely stable (e.g. by storing it durably somewhere and using it as a reference)#2020-12-3116:40favilagenerally people create a cardinality-one uuid attr with index-value or index-identity#2020-12-3116:40favilathen use that as the public id for their entities via lookup refs, e.g. [:my-public-id #uuid"…"] wherever they would normally use a raw :db/id#2020-12-3116:42favilaThere’s some subtlety here about when you create these ids and whether you want them to be “upserting” (using indexed-identity). 
I’d have to know more about your problem to help further#2020-12-3119:13dustingetzHyperfiddle uses tempids on the client, and then when the transaction returns we inspect the tx-report which contains a tempids map which maps the tempids to the realized ids, and then you can update your client state accordingly (for example if you have tempids in your url we can rewrite the url to the new id)#2020-12-3119:13dustingetzIf this is not good enough for you, i’m interested in knowing what you’re doing that needs something different#2020-01-0116:13Brian AbbottI found a work around which…. it comes down to the way I wanted to do it originally but, I was inspired by some philisophical ideas I read (extrapolated) from the datomic docs plus how we’re writing some of our code as a result of the Lacinia/Datomic code but, let me get back to you guys - Im going to write a short ‘paper’ really a design req/solution ideas that, provided that our CTO okays it, i’ll send to anyone interested.#2020-01-0116:16Brian AbbottThe philosophy being that, I want to be able to throw into dat. a graph of arbitrary depth/complexity that, requires referential integrity at time of creation (client-side) in order to …. not force non-inuitive UI workarounds. (i.e. a sequential create step where ids are created on one screen to be able to provide to the next screen in the wizard but, it starts getting broken up purely to serve this necessity and not for the best/most-ideal UX)#2020-01-0118:39dustingetzGraphs stitched with tempids do have referential integrity and 1:1 mapping to same graph post transact. Seems like I am missing something you’re saying#2020-01-0217:31jhemannAt the risk of causing confusion with @UFVRVT0L8’s original question, I'll raise what I think is the same challenge: I have client-side logging that adheres to a protobuf schema. One or more protobuf messages will get logged for a given user action, each message having a shared UUID generated by the client. 
This shared UUID enables connecting these messages as all related to the same user action. In the Datomic documentation it gives an example of entities shown as a table:#2020-01-0217:36jhemannSo, since I already have an ID for an "entity" in my data model, can I use that as the "E" column shown in this table above (a Datomic ident?) or is the entity ID only yielded from a transaction?#2020-01-0217:48jhemannI think this is what @UFVRVT0L8 is asking too: If we have an ID created in a client-side process whose purpose is to connect related information to some entity in an implied data model, can we use that ID as the entity ID that automatically is indexed in Datomic?#2020-01-0217:54jhemannI think the answer is Yes, given this section of the documentation, but I am only investigating Datomic and have no experience using it yet. https://docs.datomic.com/cloud/whatis/data-model.html#identity-and-uniqueness#2020-01-0217:55favilaNo, for a few reasons:#2020-01-0217:56favila• Entity ids in a datom are primitive longs. You can’t use another type#2020-01-0217:57favila• entity ids have an internal structure of partition and “t” or index. The index is guaranteed unique because of an internal counter the transactor advances#2020-01-0217:57favila• entity ids are only created via the transactor “minting” an id (really advancing the counter and joining to a desired partition) from a tempid.#2020-01-0217:58favilabottom line, entity ids are not meant to be something applications control. There used to be some partition control but even that ability is deprecated (and absent from datomic cloud)#2020-01-0218:01favilaThat said, you should make your own :db.unique/identity or :db.unique/value attribute to attach your application’s notion of uniqueness e.g. via a uuid, and reference it via [:my-identity-attr value] lookup refs in your transaction data#2020-01-0218:04jhemannThanks @U09R86PA4, the use of :db.unique/identity seems to be the direction to go. 
I'll need to read more to understand how this identity would be indexed and used in graph queries.#2020-01-0213:54joshkhwe had an issue where our latest Ions revision failed to deploy, and then so did all previously working revisions. CodeDeploy reports a ScriptTimedOut error:
[stdout]Received 000
and looking at the EC2 system logs we see:
ip-xx-xxx-xxx-xxx login: /dev/fd/11: line 1: /sbin/plymouthd: No such file or directory
is there a way to further track down what has gone wrong?#2020-01-0213:56joshkh(inspired by https://jacobobryant.com/post/2019/aws-battles-ep-1/ 😉 )#2020-01-0223:11Jacob O'Bryantha... I was just about to say, "hey, I saw that error last week!"#2020-01-0223:20Jacob O'BryantSometimes I've seen errors from bad code I committed in the cloudwatch logs... but if your previously working deploys also fail, idk if cloudwatch would have anything useful
What's the Most recent event from codedeploy?#2020-01-0223:24Jacob O'Bryantalso is this solo or prod topology? I've seen other people in this channel say that sometimes solo deploys just get wonky and you have to terminate the ec2 instance#2020-01-0216:34asierwrong link#2020-01-0216:42marshall@asier thanks, i’ll fix it#2020-01-0315:29Luke SchubertI think I'm misunderstanding something about :keys in a query#2020-01-0315:31Luke Schubert{:find [?id (max ?score) ?timestamp ?source]
:keys [id score timestamp source]
...}
shouldn't that return [{:id some-val :score some-val ...}]?#2020-01-0315:57Luke Schubertas it stands now that query still returns a vec of vecs#2020-01-0316:08favilaMaybe you are using an older version?#2020-01-0316:09Luke Schubert0.9.5951?#2020-01-0316:10favilanope that should have it#2020-01-0316:13favila(datomic.api/q '{:find [?a]
:keys [a]
:where [[(ground [1 2 3]) [?a ...]]]})
=> [{:a 1} {:a 2} {:a 3}]
Works for me#2020-01-0316:14Luke Schubertalright let me try a smaller more isolated query#2020-01-0316:17Luke Schubertahh I see what's wrong#2020-01-0316:17Luke Schubertit would appear mem doesn't support :keys#2020-01-0316:17Luke Schubertand the tests are using mem#2020-01-0316:18favilawoah, I would not have expected this to have any connection to the storage used#2020-01-0316:18favilaMy query uses no dbs#2020-01-0316:19favilaare you sure that’s what’s going on?#2020-01-0316:19Luke Schubertactually I may have jumped to that a little quickly#2020-01-0316:21Luke Schubertah my client lib is 0.9.5697#2020-01-0316:22Luke Schubertsorry about that#2020-01-0316:28Luke Schubertso this is weird#2020-01-0316:28Luke Schubertsame query you ran#2020-01-0316:29Luke Schubert(datomic.api/q '{:find [?a]
:keys [a]
:where [[(ground [1 2 3]) [?a ...]]]})
=> #{[1] [2] [3]}#2020-01-0316:47favilaclient-lib? you mean peer version?#2020-01-0316:47favila0.9.5697 predates the :keys feature by a significant amount#2020-01-0316:48favilaso that is why it isn’t working#2020-01-0317:39Luke Schubertyup, thanks for the help#2020-01-0317:02joshkhhow can one retract an entity representing a tuple? when we try, we get an exception and the error message Invalid list form [ ... ]
(d/transact (client/get-conn)
{:tx-data [[:db/retractEntity 123456789123]]})
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Invalid list form: [9876543219876 5555123123123 111122223333444]
#2020-01-0317:10Joe LaneHave you tried [:db/retract 123456789123 :my.tuple/attr [9876543219876 5555123123123 111122223333444]]#2020-01-0317:11Joe LaneI know I've done what you want before I just cant remember the exact syntax.#2020-01-0317:14joshkhunfortunately that doesn't work. datomic runs the transaction successfully but it's a noop.#2020-01-0317:40Joe LaneCan you show the schema for that attribute?#2020-01-0317:54joshkhsomething like this, where all three attributes are references:
#:db{:ident :edu/university+semester+course,
:valueType :db.type/tuple
:cardinality :db.cardinality/one
:unique :db.unique/identity
:tupleAttrs [:edu/university :edu/semester :edu/course]}#2020-01-0317:59Joe LaneAre you using the latest datomic cloud release / latest client?#2020-01-0318:01Joe LaneFrom the bottom of https://docs.datomic.com/cloud/schema/schema-reference.html#composite-tuples
> Composite attributes are entirely managed by Datomic–you never assert or retract them yourself. Whenever you assert or retract any attribute that is part of a composite, Datomic will automatically populate the composite value.#2020-01-0318:02Joe LaneSo doing the first retract suggestion I gave wouldn't make sense.#2020-01-0318:07joshkhHmm. So in our case, we have an entity with a component reference to these tuples. When we retract the "parent" entity, Datomic automatically attempts to retract the tuple (due to the component reference) and then fails due to the invalid list form error. I wonder if we've found ourselves in an edge case.#2020-01-0318:09Joe LaneSo your schema attribute would be
#:db{:ident :edu/university+semester+course,
:valueType :db.type/tuple
:cardinality :db.cardinality/one
:unique :db.unique/identity
:isComponent true
:tupleAttrs [:edu/university :edu/semester :edu/course]}
Correct?#2020-01-0318:16joshkhnot quite, more like this:
1. a parent catalogue entity with a many/component attribute :catalogue/registrations which references:
2. various registration entities which contain the :edu/university+semester+course tuple attribute (and of course the tuple's individual attribute and values of :edu/university, :edu/semester, and :edu/course)#2020-01-0318:18joshkhso when retracting a catalogue entity, those component entities with the tuple attributes are unsuccessfully retracted#2020-01-0318:19Joe LaneOh interesting. Does [:db/retract the-catalogue-eid :catalogue/registrations 123456789123] succeed?#2020-01-0318:22joshkhlet's find out! one sec.#2020-01-0318:26joshkhyup, that works as expected. all we're doing is removing the relationship between the parent and the child which doesn't really affect the child with the tuple attribute.#2020-01-0318:37Joe LaneSo now try retracting the entity with a tuple manually. This may determine if you have indeed found some edge case in using component entities with composite tuples.#2020-01-0318:46joshkhstill no luck#2020-01-0319:07john-shafferHi. I really want to get into Datomic, but I'm having trouble understanding how to take advantage of it. I tried it initially for a website, but I need full-text search and it's not in the cloud version apparently (and not really recommended for on-premises). I ended up using DynamoDB for key lookup and ElasticSearch for querying, and this works well but I want to take another look at Datomic before the project gets too far along. Should I replace DynamoDB with Datomic (having Datomic sync with ElasticSearch) and hit Datomic for key lookups and non-FTS queries?#2020-01-0319:22eagonInterested in the same question - are there any examples or tips for integrating Datomic with ElasticSearch or CloudSearch?#2020-01-0319:25joshkhsame here. 
in the past i've seen proof-of-concept examples of spooling the transaction log to ElasticSearch, but no concrete examples of (re)building searchable documents based on changes to entities over time.#2020-01-0319:37joshkh@jshaffer2112 from what i've read, Datomic <-> ElasticSearch should be sufficient without DDB in the middle if you don't mind glueing them together. Datomic is pretty performant when looking up ids. however, DDB does have the advantage of configurable triggers to automatically push changes to ElasticSearch when a row entry is modified.#2020-01-0319:41joshkhi suppose you can do the same with transaction functions, so long as you remember to use them when needed 🙂#2020-01-0319:49john-shafferI'll look up transaction functions and try that. Datomic is definitely more work to get started, but I think it will be worth it down the road.#2020-01-0320:00joshkhlet us know what you find. according to the docs, transaction functions must be side-effect free, so i suppose their purity is in the eye of the beholder. 😉 hopefully someone here can chime in if they have some experience using transaction functions to sync data with other resources.#2020-01-0320:08favilakeep in mind tx fns execute even if the tx ultimately fails#2020-01-0320:08joshkhyeah#2020-01-0320:02john-shafferIs it worth looking into CloudSearch instead of ElasticSearch?#2020-01-0320:07chrisblomyou can subscribe to changes on the datomic db, and sync changes to ES that way#2020-01-0320:07chrisblomthat would be a cleaner solution IMO than abusing transaction functions#2020-01-0320:08joshkh@chrisblom is that Datomic Cloud friendly?#2020-01-0320:08chrisblomi'm not sure, i've only worked with on-prem#2020-01-0320:11joshkhthe last time i checked the proposed solution was to periodically read the Cloud transaction log via a scheduled Lambda, store processed transactions in DDB, and push changes to ES. 
but that was a year ago, so maybe things have changed?#2020-01-0320:16Joe LaneAfter a transaction succeeds, why not initiate an indexing process with the tx-result by putting the new datoms on a queue to be indexed? Note, this assumes you don't need to query the Search index in-process with your query, and it also assumes you would be ok indexing the datoms as documents, not entity maps. IMO, indexing datoms is a smoother approach than updating documents.#2020-01-0320:20ghadiwhy not sip the tx log?#2020-01-0320:22Joe Lane@ghadi I've done that approach in the past too and it's fantastic, I just figured you already had the datoms in hand.#2020-01-0320:22joshkhagreed. anecdotally, my issue with datom-level indexing is that i can't easily restrict access to indices based on authorization.#2020-01-0320:25Joe LaneHow would you meet that requirement in any other system with elasticsearch?#2020-01-0320:40joshkhi'm by no means an ES expert, so take what i say with a grain of salt. 🙂 in my project i have clearly defined permissions to entities, and so it's easy to reason about which ES index (and attached permissions) to push entities as whole documents. when sipping datoms off the tx log i have to do a lot of reconciliation of [E V ] to make sure they end up in the right place. i don't think any other system solves the problem better, and for me it boils down to challenging business requirements.#2020-01-0409:53PaulWhat is the idiomatic way to model relationships with varying types in datomic?#2020-01-0409:54PaulOr more generally are there good resources with example how to model more complex problems in datomic?#2020-01-0417:44favilaWhat do you mean by varying types?#2020-01-0419:27fmnoiseIf you mean attribute which may be a string or keyword or long or something else, then string field with EDN content may work, but such approach may also indicate bad architecture decision 🙈
I used that in a real-world project to abstract system configuration:
[{:db/ident :setting/name
  :db/valueType :db.type/keyword
  :db/unique :db.unique/identity
  :db/cardinality :db.cardinality/one}
 {:db/ident :setting/value
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one}]
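Not from the thread, but a sketch of what reads and writes against a schema like this might look like: the value is serialized with pr-str on the way in and clojure.edn/read-string on the way out. The helper names are hypothetical:

```clojure
(require '[clojure.edn :as edn])

;; Hypothetical helpers for the :setting/name + :setting/value schema above:
;; any EDN value (map, vector, keyword, number, ...) fits in the string attr.
(defn setting-tx
  "Tx-data that stores `v` as an EDN string under the setting named `k`."
  [k v]
  [{:setting/name k
    :setting/value (pr-str v)}])

(defn read-setting
  "Turn a pulled setting entity back into Clojure data."
  [{:keys [setting/value]}]
  (edn/read-string value))

;; Round trip (no database needed to show the serialization itself):
(read-setting {:setting/value (pr-str {:retries 3 :hosts ["a" "b"]})})
;; => {:retries 3, :hosts ["a" "b"]}
```

The usual caveat applies: values stored this way are opaque to datalog, so you can find a setting by name but not query inside the value.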
#2020-01-0419:30fmnoiseif you talk about attribute which is reference to another entity then there's nothing to model, as references are not typed#2020-01-0502:25favilaFor polymorphic attributes I like the pattern of having an attribute whose value is the attribute where the value may be found, liked a tagged union#2020-01-0502:26favilaEg {:my/foo :my/fooLong, :my/fooLong 123}#2020-01-0502:27favilaFor “polymorphic” refs you can either do the same or just use one attr and tag the referent instead#2020-01-0502:28favilaEntities are just bags of assertions and have no type, so just “type” them as far as is useful.#2020-01-0519:52SocksSetting up Datomic cloud. What could it mean that i have a private-keys/bastion but not a private-keys/bastion.hostkey
E.g. aws ls on .../private-keys just returns the file "bastion" where the command seems to expect a bastion.hostkey.#2020-01-0523:20jaretYour peer at Datomic access gateway (previously referred to as the bastion) need to have AWS credentials for the system you are trying to access. After you launch your gateway can you test that you are able to connect following these docs? https://docs.datomic.com/cloud/getting-started/get-connected.html#test-access-gateway#2020-01-0601:06SocksThe aws s3 cp command i reference above comes from running "datomic-access client [system-name]", which i understand creates the socks proxy to the Access Gateway. Given that it can't test access to it, i don't even have a port number. Or am i missing something?#2020-01-0520:04SocksAlternatively, i set this up a while ago (4 months ago) but never touched it. What does it look like to just do a complete re-setup? I'm worried it might be hard to clean up lingering artifacts or something. really have no idea.#2020-01-0520:11Socksmaybe i need to upgrade again? Is there a way to link the docs to a particular system version?#2020-01-0523:19jaretYou can identify what version you are using by following these docs. https://docs.datomic.com/cloud/operation/upgrading.html#know-your-version#2020-01-0600:58SocksYep. I know what version i'm using. I'm not sure yet if the docs are versioned.#2020-01-0602:03SocksIndeed the answer was to upgrade#2020-01-0621:19philipHello everyone, I am seeing a very confusing side effect when I upgrade to the latest datomic version. When I upgrade to com.datomic/datomic-pro "0.9.6014" it breaks my cljs test compilation with this error:
clojure.lang.ExceptionInfo: failed compiling file:out/cljs/test.cljs {:file #object[.File 0x6759042b "out/cljs/test.cljs"], :clojure.error/phase :compilation}
...
Caused by: clojure.lang.ExceptionInfo: Reader tag must be a symbol {:type :reader-exception, :line 376, :column 13, :file "/asdf/out/cljs/test.cljs"}
Here is the offending line: https://github.com/clojure/clojurescript/blob/master/src/main/cljs/cljs/test.cljs#L376
For some reason it can't read the ##NaN.
The tests are run via lein-cljsbuild/doo and karma-cljs-test
I have verified just changing between 0.9.6014 and 5981 causes this issue. I also checked the new datomic-pro version doesn't bring in a new version of clojurescript or anything, the only transitive dependency it updates is com.datomic/client-api. I upgraded all dependencies that are possibly involved (clojurescript, karma, etc) and that didn't help.#2020-01-0621:20philipDoes anyone have any idea what could cause clojurescript to forget about NaN?#2020-01-0621:26favilaOlder CLJSs don't understand this; perhaps your dep tree got shook up in such a way that it ended up downgrading cljs?#2020-01-0621:35philiphmm thanks. lein deps :tree shows I'm using org.clojure/clojurescript "1.10.597" but it's possible I don't completely understand how lein doo and cljsbuild work and they're using an older version?#2020-01-0621:41favilalein with-profile test deps :tree should force the test profile on and you can see if the tree is different#2020-01-0621:41favila1.10.597 shouldn't be giving this problem though#2020-01-0621:42favilacould also be cljs is on the classpath multiple times and it's picking up older compiled classes and newer non-classes. try cleaning target and out?#2020-01-0621:57philipAny version of 1.10 should be fine with ##NaN right? with-profile shows the same clojurescript deps, and looking at the classpath printed by lein I don't see any clojurescript jar that's not clojurescript-1.10.597.jar. Although it does fetch clojurescript-1.10.238 when I fetch deps in a new docker container.
From figwheel it looks like#2020-01-0622:05favilayeah any 10 should be fine iirc#2020-01-0622:05favilahm, maybe it's tools.reader#2020-01-0622:12philipsame story there where something is causing it to fetch tools.reader-1.0.0-beta3.jar but only 1.3.2 (the latest) appears on the classpath.#2020-01-0622:16philipthe confusing thing is that the only thing in the dep tree that changes is com.datomic/datomic-pro and com.datomic/client-api. A plain old diff is identical except those two lines.#2020-01-0814:16defaDoes anyone know if datomic on prem supports scylladb as a storage service which should be a "drop-in replacement" for cassandra?#2020-01-0817:52daniel.spanielis this issue fixed on datomic cloud yet ? https://forum.datomic.com/t/d-with-not-working-inside-tx-fn/1243#2020-01-0823:17Jacob O'BryantThere's this: https://forum.datomic.com/t/using-d-with-db-in-a-query-fails-on-cloud/1227/8
(I'm guessing that it's the same bug)#2020-01-0917:29daniel.spanielWe updated datomic ion and they fixed this issue in their latest release so I am all set .. thanks for noticing that same issue though#2020-01-0817:55daniel.spanielI can't tell from the releases page on datomic ion#2020-01-0818:42marshallDatomic 0.9.6021 is now available: https://forum.datomic.com/t/datomic-0-9-6021-now-available/1301#2020-01-0818:58philipdatomic-pro-0.9.6014 and 6021 include clojure.tools.reader classes. 5981 does not. I'm starting to think this is the cause behind the strange behavior in my question above (Monday).#2020-01-0819:01philipCan someone with knowledge of the design comment if this was an intentional change? And what version of tools.reader is included#2020-01-0913:33stuarthallowayThanks for this report! I am investigating now.#2020-01-1021:51matthavenerfind anything?#2020-01-0819:25philipI can confirm if I rm -rf clojure/tools/ from datomic-pro-0.9.6014.jar and re-package it that ##NaN now works.#2020-01-0921:42m0smithI have deployed my application to a datomic cloud query group but the dashboards show everything is still happening on the compute group. We checked the logs and the query group load balancer is being called. Is there something else that needs to be done to take advantage of the query group?#2020-01-1001:23marshallYou need to set the endpoint address in your client config map to use the query group @m0smith#2020-01-1001:23marshallhttps://docs.datomic.com/cloud/operation/query-groups.html#connecting#2020-01-1001:29m0smith@marshall Thanks. I just tried it and no change. One frustrating thing is I cannot find where the Ion deployment is logged to see if I have other problems.#2020-01-1001:29marshallOh. Using ions?#2020-01-1001:30m0smithyes#2020-01-1001:30marshallYou need to deploy your ion to the query group instead of to the primary compute group#2020-01-1001:30m0smithI did that and the CodeDeploy says it was successful.
I can find no other evidence of the deployment though.#2020-01-1001:31marshallHow are you invoking the ion?#2020-01-1001:32m0smithVia API Gateway through http-direct. Is it possible to access http-direct another way?#2020-01-1001:33marshallAnd you configured the api gateway to trigger the query group ion?#2020-01-1001:34m0smithWe are pointing at the query group load balancer. Is there some other way to trigger it?#2020-01-1001:35marshallNo that is right#2020-01-1001:35marshallDoes the ion do a lot of writing?#2020-01-1001:35m0smithWe can see in the logs the API gateway is calling the expected query group load balancer#2020-01-1001:36m0smithIt can do a lot of writing but not usually. We also have some custom aggregator functions#2020-01-1001:36marshallWrites are all forwarded to the primary compute group#2020-01-1001:37marshallWhat makes you think the QG isnt doing work?#2020-01-1001:37m0smithThe dashboard for the QG shows no usage while the compute group dashboard shows all the query and transcts#2020-01-1001:42marshallYour connection type is :ion in the client config map?#2020-01-1001:42m0smithyes#2020-01-1001:43marshallHm. Can you file a support ticket and we can look into it tomorrow#2020-01-1001:43m0smith:server-type 🚉#2020-01-1001:43m0smithsure, thanks#2020-01-1001:43marshallIt may help if we could get cloudwatch access, etc#2020-01-1001:44m0smithDoes the app-name need to match? We have different app-name set in the resource file#2020-01-1001:44marshallDifferent than?#2020-01-1001:44marshallThe app name set on the query group when you launched it needs to match the one you use in the config when you push#2020-01-1001:45m0smithThe :app-name in the ion deployed to the compute group is different than the :app-name deployed to the QG#2020-01-1001:46marshallAs long as you did actually deploy to your qg#2020-01-1001:46m0smithThe ApplicationName in the Parameters to the QG is not specified. 
We should be using the SystemName in that case?#2020-01-1001:47m0smithin CloudFormation#2020-01-1001:50marshallYes#2020-01-1001:50marshallWait#2020-01-1001:51marshallThe app name defaults to the stack name if you don't specify it#2020-01-1001:55m0smithok, I had that wrong#2020-01-1002:00m0smithI will give it a try. I have to take off. I'll open a ticket if that doesn't help.#2020-01-1002:00m0smiththanks again#2020-01-1002:00marshall👍#2020-01-1007:53tatutI have a chicken and egg style problem with datomic ion lambdas... I'm creating my environment s3 buckets with cloudformation before ions are deployed... I can't refer to the not-yet-existing ion lambdas in the bucket notification config.#2020-01-1007:54tatutsame goes with API gw permissions to call the ion lambdas... I want to create my infrastructure before ci does the deployment#2020-01-1007:55tatutI'm wondering if others have had similar issues and how to solve them#2020-01-1609:18holyjakNot here but elsewhere we deployed a dummy service under the same name and later overwrote it with the actual application. Not sure whether a similar approach (a dummy lambda with the target name) could work...#2020-01-1009:13Ivar RefsdalHi. A question about backup. Currently we just backup the backing database using regular MySQL functions/scripts. Is that good enough, or could this lead to trouble in recovery in the event of a disaster? Thanks.#2020-01-1018:14joshkhare ion deployment failures logged, and if so which CloudWatch log group can we inspect? i've checked in /aws/lambda/<stack-name>-CreateCodeDeployDeployment and ...EnsureCodeDeployApplication and don't see anything suspicious. CodeDeploy tells me which specific event failed, but i'm hoping for more details.#2020-01-1118:49m0smithI look in the datomic-<stack-name> log group and filter for Ion#2020-01-1020:20denikMade a natural language to datalog lib in CLJC. Experimental.
Please read readme before commenting on why strings are a terrible idea for DB queries 😁 ️Curious to hear your thoughts and ideas! https://www.reddit.com/r/Clojure/comments/emwgry/nldl_natural_language_clojure_flavored_datalog/#2020-01-1308:42hkjelsNifty! It makes perfect sense on mobile, such as your use-case#2020-01-1022:02alidlorenzoAnyone have experience using Hodur for a datomic graphql API?
from looking at the project, I don't particularly like the way it abstracted out datomic's schema, but the only examples I can find using lacinia and datomic also use Hodur
but haven't been able to integrate Lacinia as simply, and hard to know what steps to follow without a bare bones example
hence why i'm considering just using hodur, but wondering what experiences ppl have with it#2020-01-1022:10shaun-mahoodAre you looking for hodur-datomic experiences specifically when combined with graphql? I've never used it with graphql but have used it with datomic and have some opinions.#2020-01-1022:23alidlorenzosure, what are they?#2020-01-1022:37shaun-mahood• I'm not a fan of the hodur types, I was constantly looking up what types translated to which Datomic valueTypes.
• It made it easier to get basic database schemas started, but felt a lot more constrained than it did without it. I think it would be hard to build something like https://github.s3.amazonaws.com/downloads/Datomic/codeq/codeq.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAISTNZFOVBIJMK3TQ%2F20200110%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200110T222806Z&X-Amz-Expires=300&X-Amz-SignedHeaders=host&X-Amz-Signature=d202d5b1eccf52fcf95b4da8d3c88d014445278d133a01ced5c684889d96db98 using it, as one example.
• Although I loved the graphing capabilities, I couldn't get things to work nicely when I started diverging from the kind of design that would be expected in a table based structure. I tried putting in interfaces and things but it became way more complicated than necessary fast.
• I had issues with the naming conventions and how it translated some names - s3 became s-3, and I had to test how the names I wanted translated and change my desired naming to use it
• It's great for enums and simple schemas, and I love the concept of it and how it works for some types of schemas and relations.
• I currently have about half my schema defined in hodur, and the other half with normal datomic syntax.#2020-01-1022:41alidlorenzothis is definitely helpful, thanks#2020-01-1123:09steveb8nI built my own “Hodur” before it was released based on this blog post https://vvvvalvalval.github.io/posts/2018-07-23-datascript-as-a-lingua-franca-for-domain-modeling.html#2020-01-1123:10steveb8nand generate Datomic schema in this shape because I find it more readable https://gist.github.com/stevebuik/17ed50824f1bb814fab9e556a37cf18a#2020-01-1123:10steveb8nnow 99% of my schema is generated#2020-01-1123:10steveb8nand about 50% of my GQL schema as well#2020-01-1123:19steveb8nI also generate clojure code using a pattern similar to this. https://github.com/stevebuik/clj-code-gen-hodur#2020-01-1123:20steveb8nthat generates 8k lines of code in my application now. this generation idea is really powerful. I’m a big fan of it 🙂#2020-01-1716:37Ian Fernandez@U0J9LVB6G#2020-01-1305:27csmhi! is there a way to rotate the Datomic Cloud access keys (stored in S3)? I did something stupid with one of my access keys, and even though it’s a test solo install, would rather not delete & recreate it#2020-01-1315:31souenzzoHello. I'm trying to run a transactor in ddb setup
EDIT: My ping-port returns 200 OK but the peer can't connect.
There are no errors in transactor stdout/err or logs dir.#2020-01-1315:32souenzzoError communicating with HOST 10.0.0.150 on PORT 4334 {:alt-host
nil, :peer-version 2, :password "<redacted>", :username "xxxxx", :port 4334, :host "10.0.0.150", :version "0.9.6014", :timestamp 1578928111501, :encrypt-channel true}#2020-01-1317:54skuttlemanHello. I'm having trouble writing/using a custom transaction fn. Specifically, it seems to choke only on attributes that are :db.cardinality/many. From the repo that contains the function I can invoke:
(my-namespace/my-db-fn (datomic.api/db conn)
                       {:test-entity/id "my-unique-id"
                        :test-entity/coll ["foo" "bar"]})
And I get an output I expect:
[[:db/add "DBID23569" :test-entity/id "my-unique-id"]
 [:db/add "DBID23569" :test-entity/coll "foo"]
 [:db/add "DBID23569" :test-entity/coll "bar"]]
When I pass the output to datomic.api/transact, it transacts and I can query it and verify that it's there.
However, when I try to invoke the function running on the transactor:
(datomic.api/transact conn [['my-namespace/my-db-fn
                             {:test-entity/id "my-unique-id"
                              :test-entity/coll ["foo" "bar"]}]])
I get this error:
:db.error/wrong-type-for-attribute Value [foo bar] is not a valid :string for attribute :test-entity/coll
I would appreciate any pointers in a helpful direction, because I'm completely lost as to what to do. Thanks!#2020-01-1319:57favilaMaybe a Dumb Question, but are you sure that the transactor has the same version of the function as what is in your repl?#2020-01-1319:58favilaperhaps it has an earlier version that returned [:db/add "DBID23569" :test-entity/coll ["foo" "bar"]] ?#2020-01-1320:09skuttlemanI only ever deployed one version, so that couldn't be it. On the bright side I found the answer (for posterity's sake):
vectors sent to the transactor get deserialized as java.util.ArrayList. I was using clojure.core/coll? to detect collections, but java.util.ArrayLists respond false to clojure.core/coll?. I switched to check for java.util.Collection and it all seems to work now.#2020-01-1320:11favilaout-of-the-box clojure.data.fressian does the same thing#2020-01-1320:28eoliphantHi, are there any more complete examples of the analytics metaschema around? (e.g. for mbrainz)#2020-01-1418:46jaretHi @U380J7PAQ here is my metaschema which I use on the subset of mbrainz-1968-1973.
;;mbrainz subset metaschema
{:tables ;; :membership-attr->opts
 {:abstractRelease/gid {}
  :artist/gid {}
  :country/name {}
  :artist.type/name {}
  :artist.gender/name {}
  :label.type/name {}
  :label/gid {}
  :language/name {}
  :medium.format/name {}
  :release.packaging/name {}
  :medium/format {}
  :release/gid {}
  :script/name {}
  :track/name {}}
 :joins ;; ref-attr -> tablename
 {:abstractRelease/artists "artist"
  :label/country "country"
  :label/type "label_type"
  :release/country "country"
  :release/language "language"
  :release/script "script"
  :release/packaging "release_packaging"
  :release/artists "artist"
  :release/labels "label"
  :artist/country "country"
  :artist/gender "artist_gender"
  :artist/type "artist_type"
  :medium/format "medium_format"
  :medium/tracks "track"
  :track/artists "artist"}}#2020-01-1421:44eoliphantah great thanks man#2020-01-1421:44eoliphantit'd be great if you guys added that to the docs 🙂#2020-01-1418:42jarethttps://forum.datomic.com/t/datomic-0-9-6024-now-available/1310#2020-01-1418:55matthavenerthank you for removing the peer dependencies!#2020-01-1419:53grzmI'm looking at spec'ing Datomic data structures (e.g., data in tx-data and results from queries and pull). For the most part, it's pretty straightforward. One wrinkle is references: the value can be a variety of things, such as a long (the entity id), any of the different lookup-ref forms, or a map of the referenced entity attributes (e.g., in result sets). All will have the same key. In some contexts I'll definitely want a lookup-ref as opposed to a map (e.g., in a transaction where I want to reference an extant entity), so I'm wondering how to handle validation. Any ideas on approaches?#2020-01-1420:02Joe LaneUse spec2/union. Look for prior art on folks already attempting to spec datomic. Identify which scenarios you "definitely want a lookup-ref as opposed to a map" and define a different select / union for those scenarios (luckily, you seem to already have identified them!).
For speccing the tx-data, just follow the grammar on the docs page.#2020-01-1420:02Joe LaneWhen in doubt, phone a friend 🙂#2020-01-1420:04grzmCheers! is there a reference for union? I don't see it immediately at hand on the wiki Looks like it's on the schema/select page.#2020-01-1420:04Joe Lanehttps://github.com/clojure/spec-alpha2/wiki/Schema-and-select#unions#2020-01-1420:04Joe Lanejinx#2020-01-1420:06alexmillerunions are probably going to go away or change name, so don't lean on that too hard#2020-01-1420:13grzmThat being the case, would you have a pointer towards another approach I might take? Where in some contexts, I want tighter validation than others for the same spec/key?#2020-01-1420:19alexmilleris there some reason not to s/or all the options?#2020-01-1420:20alexmillerand then s/and in more constrained predicates when needed?#2020-01-1420:21alexmillerand separately, are these tighter constraints actually helping you enough to be worth the trouble?#2020-01-1420:39grzmSay I'm using Datomic-like data structures in my application, as Datomic is my primary data store and they're nice data structures to do business logic on anyway. I have a new (not yet persisted) entity that contains a reference to another, (should-be-pre-existing) entity. If I want to transact that data, I'll use a look-up ref (which will guarantee failure if that entity doesn't already exist, which is what I want). If I use a map as that reference, it will create that entity (which is not what I want). I can create a spec that accepts a map or a lookup-ref as valid values for the key, but in the tx-data case, I only want the lookup-ref version, not the map version. Am I thinking about this wrong?#2020-01-1420:58favilaStepping back, datomic transactions are a DSL. 
does it make sense to spec it as a DSL, or to spec your particular use of it?#2020-01-1420:59favilaThe latter is going to be a challenge because the meaning of keys is overloaded#2020-01-1420:59favilaAs a DSL, the keys are opaque and meaningless#2020-01-1421:00favilabut if you try to spec the DSL at a more semantic level, you may conflict with the “read” view of those keys (i.e. what you get from d/pull, etc)#2020-01-1421:00alexmillersometimes, it's just not worth getting down to that level of specificity#2020-01-1421:01favilaor if you need to, consider making a new DSL for your transactions, and speccing that instead#2020-01-1421:01favilaand have functions that transform to the transaction dsl#2020-01-1421:22grzm@U09R86PA4 that makes sense. Cheers.#2020-01-1421:47grzmStronger than that. @U09R86PA4 that really clarifying. doubleplussgood#2020-01-1420:06Joe LaneNoted :+1:#2020-01-1420:39djeisIs it possible to use classpath functions with an in-memory database created with the peer library?#2020-01-1421:01djeisMy impression at a high-level from the docs and discussions online is it should be, but it seems no matter what I do I get :db.error/not-a-data-function so clearly either I've misunderstood something or I'm doing something fundamentally wrong.#2020-01-1422:01djeisAh, I think I figured it out... The latest version of datomic free in clojars doesn't support classpath functions. 😓#2020-01-1507:56csmcross-post from #announcements -- https://clojurians.slack.com/archives/C06MAR553/p1579074969108000#2020-01-1513:29alidlorenzocool idea. you should look into string literals for a nicer API
for examples look at how these sql libs do it
https://github.com/porsager/postgres
https://github.com/gajus/slonik#2020-01-1600:17csmThat was a pretty good suggestion. I just pushed 0.1.7 where you can use EDN string literals.#2020-01-1604:19alidlorenzoyea the edn literals look good, nicely done#2020-01-1609:01dmarjenburghI have a schema attribute with db.type/float. When I try to transact the value 3.14, it actually transacts 3.140000104904175. What is going on?#2020-01-1612:02Linus EricssonThe problem can be in some serialization, where the float converts to a bit representation (perhaps in fressian if that is used) and then back.
The answer is not wrong per se (because floats are imprecise) but I understand you find it surprising.
If you really want to use 3.14 as 3.14, use BigDecimal and 3.14M or try if Double is serialized in a more precise way. I’m not sure it will work either but worth a try.
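A REPL check of the narrowing Linus describes (nothing Datomic-specific is needed to reproduce it):

```clojure
;; A Clojure double literal is 64-bit; :db.type/float stores only 32 bits.
;; Narrowing to a float and widening back reproduces the value in question:
(double (float 3.14))
;; => 3.140000104904175

;; 3.14 has no exact binary representation, so the nearest 32-bit float is
;; what comes back. A BigDecimal literal keeps the decimal value exact:
3.14M
;; => 3.14M
```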
#2020-01-1612:03Linus Ericsson(:db.type/bigdec and :db.type/double)#2020-01-1612:03Linus Ericsson(I have no insight in the Datomic code base)#2020-01-1707:47dmarjenburghIt doesn't lose precision with doubles. Unfortunately, changing float to double is not a supported schema alteration.#2020-01-2314:21eggsyntaxThis is a common issue with floating point. Unless you're doing something where exact precision is critical (typically currency or some scientific work) I'd say the standard solution is to round it at the point of display -- as you can see from the example, the difference is so tiny that you could round to anywhere from 2-8 digits and get the right answer.
As @UQY3M3F6D points out, using BigDecimal avoids the problem entirely, at the cost of being a) computationally much more expensive/slow, and b) annoying to work with (although less so in Clojure than in Java).
Here's a well-known & useful article on the topic: https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html#2020-01-1617:00timcreasyI noticed the other day that datomic.client.api/q doesn’t support query on a collection of facts:
(d/q '[:find ?name
       :where [_ :person/name ?name]]
     [[1 :person/name "Tim" 42]
      [2 :person/name "Jasmin" 42]])
Execution error (ExceptionInfo) at datomic.client.api.impl/incorrect (impl.clj:42).
Query args must include a database
I remember this being supported in on-prem: https://docs.datomic.com/on-prem/query.html#database-of-facts
Is there any way/talk to support this?#2020-01-1617:07m0smithWe are using a :db/ident as an enum in Datomic Cloud. When I do a :find (pull ?e [*]) I get the :db/id but not the actual keyword. What is the correct way to implement enums in Datomic in the year 2020?#2020-01-1617:14Joe Lanehttps://docs.datomic.com/cloud/schema/schema-modeling.html
Is the enum a ref like above?#2020-01-1617:17m0smithYes, exactly. The problem is, how to return :country/JP from a query rather than just use it in a where clause.#2020-01-1617:22Joe Lanewhat happens when you do
:find (pull ?e [* {:artist/country [*]}])#2020-01-1617:32m0smithsometimes I get {:db/id 777} and other times I get {:db/id 777 :db/ident :country/JP}#2020-01-1617:33m0smithThat smiley is a : d#2020-01-1617:59m0smithIs there any disadvantage in using a keyword type instead? It simplifies things for sure.#2020-01-1618:33Joe LaneI can't say for sure what the disadvantages would be, however I think modeling things that way sounds great if it meets your application's needs.#2020-01-1618:52m0smiththanks!#2020-01-1618:56fjolne@U050VTWMB what's wrong with querying for :db/ident specifically? using asterisk is considered an anti-pattern for production queries. I agree that keywords are more convenient in some cases, but idented entities have a few pros: a) they're entities thus extensible with additional data b) misspellings are validated on-write by db itself#2020-01-1618:56m0smithWhat would that query look like?#2020-01-1618:58fjolne[{:artist/country [:db/ident]}]#2020-01-1619:03favilaUsing a keyword for enums: 1) enum set is "open" (you don't need to transact ahead of time to create another enum) 2) you get a keyword value in pulls#2020-01-1619:06favilaUsing a ref for enums: 1) enum set is "closed" (you must transact ahead of time) 2) slightly more efficient storage 3) indexing by value is automatic 4) you can annotate enum entities with additional assertions (they are "just" entities after all. E.g. you can document/enforce attribute range and enum set membership with more attributes) 5) pull gives you {:db/ident :keyword-value}#2020-01-1619:06favilaI think the usual answer is to post-process the pull result with e.g. clojure.walk to convert to a keyword#2020-01-1619:35m0smithReplacing the * with :db/ident seemed to fix the problem I was seeing.
I just wish the extra :db/ident wasn't in the response as it makes it just another thing to remember#2020-01-1718:59m0smithWe are seeing a message like Cache put-result-handler error: net.spy.memcached.internal.CheckedOperationTimeoutException: Operation timed out. - failing node: /10.213.1.121:11211 in the AWS CloudWatch logs for Datomic Cloud. Is it something to be concerned about?#2020-01-1721:36Leonid ScottHi, I’m trying to query on a list of id’s like so:
(d/q '[:find ?title
:in $ ?eids ; <- A list of the form (#{:db/id <ID>} ...)
:where
[?title :message/text ?eids]]
db msg-eids)
How do I go about doing this?
Thanks in advance!#2020-01-1721:49cjmurphyJust pass in a list of eids, not a list of maps.#2020-01-1722:03Leonid ScottOkay, my list will look like this:
(<ID> ...)
Do I need to change anything in the in clause?#2020-01-1722:52favilayes you will need to destructure#2020-01-1722:52favila:in $ [?eid ...]#2020-01-1722:53Leonid ScottThanks!#2020-01-1722:53favilaIf you keep the map :in $ [?eid-map ...]#2020-01-1722:53favilathen [(:db/id ?eid-map) ?eid] at the top of your :where#2020-01-1722:54favila(it’s better not to pass in a map though)#2020-01-1723:59zalkyHi all, wondering why the datomic api sometimes returns sets (entity) and sometimes returns vectors (pull) for cardinality many values? AFAIK, there is no intrinsic order to cardinality many values, and i'm wondering why vectors are returned.#2020-01-1804:11favilaPull cannot guarantee results are a set#2020-01-1804:13favila[{:card-many-attr [:not-unique-attr]} for eg. You are going to get duplicate {:not-unique-attr someval} entries#2020-01-1804:15favilaD/entity however can guarantee that because each entry is a unique entity object (entity equality is by db-id+ connection)#2020-01-1805:27zalky@U09R86PA4, great explanation, thanks for your response!#2020-01-1818:03john-shafferIs there a good option for local testing that's compatible with Datomic Cloud, or do you pretty much have to set up a deployment?#2020-01-1823:20kennyWe wrote this to solve that problem and have been using it for a year or so. https://github.com/ComputeSoftware/datomic-client-memdb
It’d be great for the Datomic team to have an official, feature-complete version though. #2020-01-1915:54john-shafferThanks! That looks great.#2020-01-1822:21dazldthe documentation on the datoms API is a little terse - I get that there are ways to slice the returned collection via offset, limit, and to provide values that bookend the returned data, but some examples would be great!#2020-01-1822:23dazldlinks would be really welcome if you know of any#2020-01-1822:30dazldSee namespace doc for timeout, offset/limit, and error handling. - where is the namespace doc..? the doc string doesn’t seem to help much.#2020-01-1822:33Joe Lanehttps://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/doc-examples/raw_index_access.repl#2020-01-1822:57favilaThe “namespace doc” is the docstring on the namespace itself#2020-01-1822:57dazldthat example on index-range doesn’t work for me, probably an old version of the api#2020-01-1822:57dazldputting ordered args instead of the map did though!#2020-01-1822:58dazld@U09R86PA4 ah, ok - there’s no info on offset/limit in there#2020-01-1822:58dazld(->> (d/index-range
(ldb)
:user/email
"d"
"e")
(map :v))
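For reference, the call above matches the on-prem peer signature: d/index-range takes exactly four positional arguments (db, attrid, start, end), with nil for an open end; the map-style {:timeout :offset :limit} options belong to the client API. A sketch against the same attribute (`db` here is an assumed peer database value):

```clojure
;; On-prem peer API sketch: four positional args, nil = open-ended range.
(require '[datomic.api :as d])

(->> (d/index-range db :user/email "d" nil) ; from "d" to the end of the AVET index
     (map :v)
     (take 10))
```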
#2020-01-1822:58dazldthis worked for me#2020-01-1823:10dazldannoyingly the map version throws an arity exception, as well as anything apart from arity 4#2020-01-1823:10dazldthat looks like a bug#2020-01-1823:10dazldhave to put nils in the slots#2020-01-1823:10dazld:I#2020-01-1900:13favilaCall it with a map arg#2020-01-1900:13favilaI didn’t think you could call it like this#2020-01-1909:45dazldI can’t call it any other way.#2020-01-1909:45dazld(d/index-range (ldb) {})
Execution error (ArityException) at user/eval28846 (form-init4749596070973050853.clj:1).
Wrong number of args (2) passed to: datomic.api/index-range
(d/index-range {})
Execution error (ArityException) at user/eval28850 (form-init4749596070973050853.clj:1).
Wrong number of args (1) passed to: datomic.api/index-range
(d/index-range (ldb) {})
Execution error (ArityException) at user/eval28854 (form-init4749596070973050853.clj:1).
Wrong number of args (2) passed to: datomic.api/index-range
(d/index-range (ldb) {} {})
Execution error (ArityException) at user/eval28858 (form-init4749596070973050853.clj:1).
Wrong number of args (3) passed to: datomic.api/index-range
(d/index-range (ldb) :attrid :user/email :start "d" :end "e")
Execution error (ArityException) at user/eval28862 (form-init4749596070973050853.clj:1).
Wrong number of args (7) passed to: datomic.api/index-range
(d/index-range (ldb) :attrid :user/email :start "d" )
Execution error (ArityException) at user/eval28866 (form-init4749596070973050853.clj:1).
Wrong number of args (5) passed to: datomic.api/index-range#2020-01-1909:52dazldsame on
(d/datoms (ldb) :avet :user/email)
#2020-01-1909:53dazld:man-shrugging:#2020-01-1909:53dazldbit of a mystery#2020-01-1909:53dazlddatomic-free `
0.9.5697
`#2020-01-1921:14favilaNo mystery. You are not using datomic cloud but datomic on-prem#2020-01-1921:18favilaIt has a blocking api and is in-process. It has no concept of timeout, offset, limit etc which belong to the peer api#2020-01-1921:19favilaSo you are actually reading the wrong docs#2020-01-1921:21favilaOh and I missed that you were taking about index-range later#2020-01-1921:21favilaIndex-range is not variadic; datoms and seek-datoms are#2020-01-2014:08dazldsorry, yes was playing with both APIs#2020-01-2014:09dazldstill, I think the docs could use some love - I’m interested in on-prem performance, and they look critical to this#2020-01-2014:43favilaGenerally you should lean on queries. I’ve only found datoms useful when the intermediate or final sets from datoms are too large for peer memory#2020-01-2014:43favilaEven then the strategy is usually to use datoms to get chunks to feed as input to the “full” query#2020-01-2014:44favilaI guess one other case is certain kinds of self-joins#2020-01-2014:57dazldI’m sorting a list of 8000 items by counting the number of items each has on a ref attribute#2020-01-2014:57dazldI can’t seem to get it under 800ms#2020-01-2014:57dazldusing a materialized count gets it down to about 140ms, but still…#2020-01-2014:58dazldI’ll probably open a topic on the forum#2020-01-2014:58dazldif the answer is “use materialized counts and tx functions” then that’s also ok.#2020-01-2014:59dazld(total number of ref’d entities is about 200k, so not particularly big lists..)#2020-01-2108:27favilaStart a new thread showing your queries #2020-01-2108:28favilaAlso how much memory do your peers have? More memory -> more cached data -> faster#2020-01-2109:13dazld32gb#2020-01-2109:13dazldi’ll write a thread#2020-01-2012:08Adrian SmithI'm getting access denied for some of the pictures on this blog: https://blog.datomic.com/2012/10/codeq.html#2020-01-2016:23marshallFixed; Thanks#2020-01-2106:19steveb8nQ: I just updated my Mac to Catalina. 
Now I can’t connect via ssh/socks-proxy to the bastion. Has anybody dealt with this before?#2020-01-2107:45steveb8nnvm: apparently ssh doesn’t like wifi. when I use ethernet, I can connect again#2020-01-2114:01plexuswe're converting an on-prem app to cloud, and noticed there's no :db/index. Does that mean there are no AVET indexes available for non-unique attributes?#2020-01-2114:15souenzzo@U07FP7QJ0 AFAIK :db/index is enabled for all attributes in cloud#2020-01-2114:15souenzzoAnd there are some limitations about that
(no bytes, no large strings...)#2020-01-2114:22favilacorrect, everything is V-indexed that can be#2020-01-2115:01plexusoh, great! thanks!#2020-01-2208:27augustlis there a definition of what a "large string" is? And does Datomic do other special things with large strings?#2020-01-2211:52favilahttps://docs.datomic.com/cloud/schema/schema-reference.html#orgaf39f49#2020-01-2211:52favila4096 characters#2020-01-2211:53favilaLonger strings are just rejected#2020-01-2211:53augustlah, the transaction just aborts?#2020-01-2211:53favilaNote this is only datomic cloud. On prem will take any string size (not that it’s a good idea to store large strings in datomic)#2020-01-2211:54augustlah, I see#2020-01-2413:11dmarjenburghLarge strings are not rejected though. I have transacted string with up to 38000 characters (in testing) and was able to retrieve them without a problem. It might not be a good idea, but it’s not clear what the exact tradeoffs are.#2020-01-2413:39favilaThis is cloud you are talking about @U05469DKJ? on-prem doesn’t reject#2020-01-2413:39dmarjenburghI mean datomic cloud, yes#2020-01-2413:39favilathe tradeoff is large segment sizes#2020-01-2209:59Lukas#2020-01-2210:04augustl<DB-NAME>? is meant to be replaced with the actual name of your db 🙂#2020-01-2210:50Lukashey ty for ur reply, i did that (see 3. box) but than i got a SQLException
(d/create-database "datomic:")
=> Execution error (SQLException) at java.sql.DriverManager/getDriver (DriverManager.java:298).
No suitable driver
(d/create-database "datomic:sql://?jdbc:")
Execution error (SQLException) at java.sql.DriverManager/getDriver (DriverManager.java:298).
No suitable driver
#2020-01-2210:52Lukasbut i was able to start the console this way
bin/console -p 8088 datomic "datomic:sql://?jdbc:"
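The fix that emerges in this thread is that datomic-pro does not bundle a JDBC driver, so the driver for your SQL storage has to be a direct project dependency. A Leiningen sketch (the project name and versions are illustrative):

```clojure
;; project.clj sketch: add the storage's JDBC driver alongside datomic-pro.
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.10.1"]
                 [com.datomic/datomic-pro "0.9.6024"]
                 [org.postgresql/postgresql "42.2.6"]])
```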
#2020-01-2210:56augustlyeah ,you need to add a dependency to the driver itself, I guess? Doesn't seem like Datomic provides one#2020-01-2211:11Lukas🤔 could u point out how i would do that? I'm kinda lost here 🙄#2020-01-2211:14augustladd the dependency in project.clj, if you use leiningen. For example: [org.postgresql/postgresql "42.2.6"]#2020-01-2211:14Lukasokay ty#2020-01-2211:14Lukasseems fairly easy 😄#2020-01-2211:14augustl👍#2020-01-2210:45mgrbyteI'd like to check my understanding on datomic security (on-prem, AWS, DDB).
Any machine can read + write to the datomic database if:
• A peer/client has the datomic-pro library
• The URI to your database is known
• An IAM instance role controls whether the application has permission to read/write to DDB.#2020-01-2211:53mgrbyteMy concern is: what's to stop anybody with datomic-pro who knows the URI of a datomic DB writing to it (when it's on-prem/AWS/DDB)?#2020-01-2211:54augustljust like any DB I would assume - firewall it. The URI of most DBs contain username/password etc#2020-01-2211:56mgrbyteWith DDB there is no host and port tho (AFAICT). so you can't for example use EC2 security-groups to control inbound access.#2020-01-2211:56augustlI'm definitely not an expert on DDB and AWS, but it seems odd to me that all DDBs are accessible to the public internet?#2020-01-2212:03mgrbyteyou can use DDB endpoints to restrict access to clients within a VPC.
It's not "open" per se; you need to grant IAM privileges (via roles) to read and write from DDB. And the transactor process is given those when it is set up.
What I'm failing to remember/see is the problem of securing peer access - that is, if the DB URI is known, how to prevent access from any arbitrary datomic-pro client/peer.#2020-01-2212:24marshallIAM handles read and write to ddb#2020-01-2212:24marshallPeers need to be able to read ddb#2020-01-2212:24marshallTransactor need both#2020-01-2212:25marshallYou definitely shouldn't have global read allowed on your ddb table.#2020-01-2212:26marshallDepending on whether your peers are aws instances or not, you should use IAM instance roles and or profiles/environment credentials#2020-01-2212:27marshallhttps://docs.aws.amazon.com/amazondynamodb/latest/developerguide/authentication-and-access-control.html#2020-01-2212:32mgrbyteThanks @U05120CBV - this is true of the transactor processes I've deployed (they are controlled by IAM roles).
What I'm seeing is that a process in an environment with no AWS creds set can still connect to the transactor and transact datoms without needing any AWS environment variables set.
We now have peers that run as ElasticBeanStalk apps (that use IAM roles), but also command line applications that use datomic-pro directly to talk to the database.
It's the latter case (or just using a repl with datomic pro library) that I'm struggling with to see how to secure access.#2020-01-2214:20marshallwhere is that environment#2020-01-2214:20marshallif it’s on an EC2 instance, it likely has an instance role assigned#2020-01-2214:43mgrbyteto set the record straight, my creds were set in my ~/.aws directory and I hadn't realised the datomic-pro peer library uses those. so a red herring.#2020-01-2214:44mgrbytebig thanks to @U05120CBV for setting me straight.#2020-01-2214:23jarethttps://forum.datomic.com/t/datomic-cloud-589-8846/1328#2020-01-2214:24jaretFYI, despite that preview’s text eu-north-1 was not added as AWS lacks the ability currently to support Cloud in that region. It was a late scratch from the release and we’re hoping to add it as soon as AWS is able to support Cloud in that region.#2020-01-2214:28Joe LaneHaha, shoot. I just upgraded to 8835 2 hours ago.#2020-01-2214:36marshallthen you’re all warmed up and ready to do it again#2020-01-2300:31mrmcc3FYI when I click on the production compute template. It actually returns a storage template#2020-01-2300:34marshallWe will look into it#2020-01-2300:51jaret@U050CQFT1 Sorry! I believe I’ve corrected the link.#2020-01-2300:51jaretIt should now pull the production compute template#2020-01-2300:51jaretThank you for reporting.#2020-01-2300:52mrmcc3No worries#2020-01-2301:06mrmcc3The link itself seems to indicate production compute template https://s3.amazonaws.com/datomic-cloud-1/cft/589-8846/datomic-production-compute-589-8846.json
But the json has "Description": "Creates storage resources needed to run Datomic." at least for me#2020-01-2301:08jarethmm let me look again#2020-01-2301:09jaret> "AWSTemplateFormatVersion": "2010-09-09",
> "Description": "Creates compute resources needed to run Datomic.",#2020-01-2301:09jaretI am wondering if the cache needs to be busted on the hosted S3#2020-01-2301:09jaretlet me see if I can do that#2020-01-2301:14mrmcc3Seems to be working now. 👍#2020-01-2301:14jaretok great. Again sorry about that!#2020-01-2303:31Jacob O'BryantThanks for the d/with bugfix -- really appreciate it :)#2020-01-2215:03Luke Schubertdoes transact-async not work with an in mem database?#2020-01-2215:07matthavenerit should work#2020-01-2215:13Luke Schubertyeah it does, turned out to be a bug somewhere else that threw me off#2020-01-2313:53dazldis there a quicker way to count the number of datoms than
(count (map :a (d/datoms (d/since db #inst"2020-01-14") :aevt :some.ref/attr)))
?#2020-01-2313:53dazld(for example)#2020-01-2313:54dazldI see that it’s an iterable, but I couldn’t see a way to figure out if it can have a method called on it to return the size#2020-01-2313:54dazldseq/map seem about the same in terms of time taken#2020-01-2314:00dazldI guess these kind of bounded time questions would be better with the log..?#2020-01-2314:02ghadi@dazld (d/db-stats)#2020-01-2314:03dazldI don’t have that - is it a cloud thing?#2020-01-2314:06ghadiit's on the client API#2020-01-2314:06ghadihttps://docs.datomic.com/client-api/datomic.client.api.html#var-db-stats#2020-01-2314:09dazldsadly, not using that#2020-01-2314:12ghadiWill defer to datomic support but if you're using on-prem, you probably have some data in the logs, or via the metrics callbacks
https://docs.datomic.com/on-prem/monitoring.html (cc/ @jaret)#2020-01-2314:13dazldit’s ok, not a monitoring thing - and yep, can see totals for datoms in cloudwatch.#2020-01-2314:13dazldit’s more of a performance golf thing#2020-01-2315:36grzmNot overly long ago, Cognitect provided an example of splitting a Datomic app into separate app and deployment repos: https://github.com/Datomic/ion-event-example-app https://github.com/Datomic/ion-event-example I'm interested in hearing any experience reports with people doing this. One thing I like about Datomic is the quick "commit and deploy" workflow, and am wondering whether I'd find the "commit in app, push to repo, update deploy repo with new hash, and deploy" process overly cumbersome.#2020-01-2316:29eoliphantquick datalog question. I’ve a case where a user can have 0-N entities each describing a privilege, and I need to determine users who have a given priv, but no others. so more generically something like I want to find entities where values :a 1 where say :b 'X' and there are no other entities where :a 1 and :b <something other than X> I did the following, and it works fine, but it feels a little clunky and was just wondering if there might be a more succinct approach
(d/q '[:find ?person-id
:in $ ?person-id
:where
[?e :egms.highest_assignment/customer_staff_id ?person-id]
[?e :egms.highest_assignment/customer_assignment "PRINCIPAL_INVESTIGATOR"]
(not-join [?e ?person-id]
[?f :egms.highest_assignment/customer_staff_id ?person-id]
[(!= ?e ?f)]
[?f :egms.highest_assignment/customer_assignment ?role]
[(!= ?role "PRINCIPAL_INVESTIGATOR")])]
db 938629M)#2020-01-2320:28grzmDoes something like this work?#2020-01-2414:26luiseugenioHi, is it possible to create a tuple (composite key) that refers to a inverse key (from a parent that has this entity as subcomponent)?
{:db/ident :subcomponent-entity/composite-key
 :db/valueType :db.type/tuple
 :db/tupleAttrs [:parent/_subcomponent-entities :subcomponent-entity/id]
 :db/cardinality :db.cardinality/one
 :db/unique :db.unique/identity
}#2020-01-2417:42Sam DeSotaI'm considering deploying a web app api I've developed locally via Datomic Ions, currently I'm using a boot for deps + a number of tasks. It looks like I need to specify my deps via the clojure cli EDN format in order to deploy via clojure -iAion-dev, is there an easy way to use boot for deps + tasks while deploying to ion?#2020-01-2418:21alexmillerusing deps is always ultimately just running a java command line#2020-01-2418:21alexmillerso there's no reason you can't do the equivalent#2020-01-2418:24alexmillerit would look something like java -cp <ion-dev-deps> clojure.main -m datomic.ion.dev '{:op :push}'#2020-01-2418:26alexmillerwhere <ion-dev-deps> are the transitive deps from com.datomic/ion-dev 0.9.247 (you will also need maven repository "datomic-cloud" {:url "<s3://datomic-releases-1fc2183a/maven/releases>"})#2020-01-2418:26alexmillerI don't know what the best way to make that happen with boot is, but should be doable#2020-01-2418:42maxtI'd like to create a rule that gives me all the passed entity and all entities it refers too, but I'm not able to get the "same" rule to work
'[[(self-or-refers-to ?e1 ?e2)
[(= ?e1 ?e2)]]
[(self-or-refers-to ?e1 ?e2)
(refers-to ?e1 ?e2)]
[(refers-to ?parent ?entity)
[?parent ?ref ?entity]
[?ref :db/valueType :db.type/ref]]
[(refers-to ?parent ?entity)
(refers-to ?parent ?e)
(refers-to ?e ?entity)]]
Using = gives me [?e2] not bound in expression clause: [(= ?e1 ?e2)]
How would I go about that?#2020-01-2419:03favilaI think you want [(identity ?e1) ?e2] .#2020-01-2419:04favilait may be easier to think of it using [] to enforce boundedness#2020-01-2419:05favila(self-or-refers-to [?anchor-e] ?reachable-e)#2020-01-2419:07favilain practice datomic rules can’t be run backwards efficiently, so it’s easier to think of input and output params (although you can bind the output params as a kind of filter)#2020-01-2419:07maxtIn the call or the rule signature? Not yet grokking what you say#2020-01-2419:07favilabrackets in the signature#2020-01-2419:08maxtWhat does the brackets imply? I.e. is it called something that I can read about?#2020-01-2419:08favilathey require that the parameter be bound#2020-01-2419:09favilahttps://docs.datomic.com/on-prem/query.html#rules#2020-01-2419:10favilaIt’s a few paragraphs down,#2020-01-2419:10favila> We can require that variables need binding at invocation time by enclosing the required variables in a vector or list as the first argument to the rule#2020-01-2419:12maxtThank you! That paragraph seems to be missing from the cloud docs, but supposedly it works the same. It's still there in the example.#2020-01-2419:20maxtUsing identity works great, really didn't think of that.
[(self-or-refers-to [?e1] ?e2)
[(identity ?e1) ?e2]]
It works both with and without the brackets. Would the brackets make it more efficient, or is in this case more of a documentation think to signal that we expect it to be bound?#2020-01-2419:38favilawell a few things#2020-01-2419:38favila1. this particular rule can’t even run backwards, so it’s a guard and documentation#2020-01-2419:39favila2. I think it helps to clarify what parts of the fn are considered input vs output#2020-01-2419:56maxtOk! Thank you for taking your time explaining!#2020-01-2419:14marshallhttps://twitter.com/datomic_team/status/1220784375916265474#2020-01-2423:33John ContiI am trying to start my first AWS Solo instance. So beware the n00b. I have looked at the troubleshooting page but do not see why my stack is failing along with autoscaling alerts:#2020-01-2500:42John ContiThe problem was there are timeouts that default to 10 minutes in the template launch forms. Changing those looks like it is helping.#2020-01-2501:44marshallYou need to look at the failure in the nested stack#2020-01-2501:45marshallYou have to choose deleted or failed stacks in the stack console#2020-01-2501:45marshallThen look into the nested stack that failed and find the specific error#2020-01-2423:34John ContiSeems like it creates the storage, tries to scale it and dies, I believe. I have every permission under my account (full admin).#2020-01-2500:09hadilsMy solo upgrade to 589-8864 failed. I have a log stream: 2020/01/24/[$LATEST]7975f45ef9494dc9b39c412959f3de4f but I don't know which log group it belongs to. This log stream contains the error message that caused the problem.#2020-01-2501:46marshallWhat did you see in the CF dashboard on the failed upgrade#2020-01-2500:12hadilsDoes anyone know which log groups I should be looking at? I tried all the usual suspects but no luck#2020-01-2518:15eoliphanthi, @marshall, @jaret you guys around? ran into an issue with a prod size stack upgrade to 589-8846. the nodes come up, but wasn’t seeing any …Compute.. log streams. 
tried bouncing them, same issue. I’ve since poked an SSH hole in the SG, and logged into one of the nodes, and been able to confirm that the datomic.cluster-node java process isn’t running. Not sure where it sticks the logs locally, so can’t really investigate further#2020-01-2518:17jaretDid you upgrade storage first and then compute?#2020-01-2518:17eoliphantyep.. always 🙂#2020-01-2518:23jaret@eoliphant would you be able to log a support case for us to track by e-mailing <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>. I’d like to know:
• What version are you upgrading from?
• Are you still able to connect and use Datomic or is this limited to log streams?
• Are you running with any non-default system settings or parameters (i.e. storage provisioning/scaling, changes to network or instance configurations, alterations to the default CloudFormation templates)?
• Are you using Ions on this system?#2020-01-2518:24eoliphantsure will send it now#2020-01-2518:24jaretThe support case tracking will allow us to share all of this information and I can bring in the rest of the team + ask for more#2020-01-2518:24jaretI am going to get to my laptop and start looking at this#2020-01-2518:37eoliphantok great thanks just sent it#2020-01-2518:37eoliphantand as I mentioned, I can ssh into the nodes now so if you need logs, etc I can get them to you#2020-01-2518:38jaretLet me know when you’ve sent in the case.#2020-01-2518:45jaret@eoliphant I am not seeing the case can you DM me the e-mail you used to log the ticket?#2020-01-2518:57eoliphanthey sorry had stepped away for a min. it was stuck in my outbox#2020-01-2522:16frozar
Hello, I'm pretty new to Datomic and I'm trying to follow the basic steps to get a locally functional toy project using Datomic. As I understand it, to use the datomic-pro lib I need the following dependency in my project.clj:
[com.datomic/datomic-pro "0.9.6024"]
Since I am using the pro library, my call to the d/transact function doesn't work (as-is) anymore. For example, a call to the transact function:
(d/transact conn {:tx-data [{:owner/name owner-name}]})
should rather be:
(d/transact conn [{:owner/name owner-name}])
So the wrapping map {:tx-data ...} is not used anymore.
Am I right? Is this change documented anywhere, please?#2020-01-2522:23eoliphantThe issue you’re having doesn’t have anything to do with pro v free, etc. There are 2 APIs for datomic on-prem, client and peer. Your lein dependency is for the peer lib, but your code example is what the client lib expects.#2020-01-2522:24eoliphantpeer api docs: https://docs.datomic.com/on-prem/clojure/index.html#2020-01-2522:24eoliphantsame thing for client: https://docs.datomic.com/client-api/index.html#2020-01-2604:31frozarThank you for links, they are really helpful 🙂#2020-01-2607:20Sam DeSotaHello, I'm attempting to deploy an :http-direct app via a datomic ion, but for the life of me I can't get past this error when submitting to the endpoint:
No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.api-gateway/ToBbuf found for class: nil
at core_deftype.clj
#2020-01-2607:21Sam DeSotaNote, this exact error is mentioned in troubleshooting relating to not returning a valid response from lambda ions, but I'm returning a valid string.#2020-01-2607:22Sam DeSotaThis is my http-handler
(defn graphql-route [secret query vars]
  (if (is-authorized? secret)
    (let [result (execute orders-schema query vars nil)]
      (json/write-str result))
    (json/write-str {:message "Not authorized"})))

(defn http-handler [req]
  (let [body (-> (:body req) io/reader (json/read :key-fn keyword))]
    {:status 200
     :headers {"Content-Type" "application/json"}
     :body (graphql-route (-> req :headers (get "authorization"))
                          (-> body :query)
                          (-> body :variables))}))

(def app
  (apigw/ionize http-handler))#2020-01-2607:23Sam DeSotaAny hints on where this sort of error could be coming from would be much appreciated.#2020-01-2608:44dharriganHi, in the email I got about having a starter pro license, there is a link to the EULA - which results in a 404#2020-01-2608:44dharriganhttps://www.datomic.com/datomic-pro-edition-eula.html#2020-01-2608:45dharrigan#2020-01-2615:10frozarHello, I'm trying to find my way through datomic and I still have really basic questions.
I have a transactor and a peer server running locally, and I tried the following program:
(ns datomic-tuto.core
(:require
[datomic.client.api :as d]))
(def cfg {:server-type :peer-server
:access-key "myaccesskey"
:secret "mysecret"
:endpoint "localhost:8998"
:validate-hostnames false})
(def client (d/client cfg))
(d/delete-database client {:db-name "pet-owners-db"})
When I try to delete a database, I get the following error in REPL:
1. Unhandled java.lang.AbstractMethodError
datomic.client.impl.shared.Client.delete_database(Ljava/lang/Object;)Ljava/lang/Object;
async.clj: 158 datomic.client.api.async/delete-database
async.clj: 150 datomic.client.api.async/delete-database
sync.clj: 74 datomic.client.api.sync.Client/delete_database
api.clj: 155 datomic.client.api/delete-database
api.clj: 146 datomic.client.api/delete-database
REPL: 25 datomic-tuto.core/eval17171
REPL: 25 datomic-tuto.core/eval17171
Compiler.java: 7176 clojure.lang.Compiler/eval
Compiler.java: 7131 clojure.lang.Compiler/eval
core.clj: 3214 clojure.core/eval
core.clj: 3210 clojure.core/eval
main.clj: 414 clojure.main/repl/read-eval-print/fn
main.clj: 414 clojure.main/repl/read-eval-print
main.clj: 435 clojure.main/repl/fn
main.clj: 435 clojure.main/repl
main.clj: 345 clojure.main/repl
RestFn.java: 1523 clojure.lang.RestFn/invoke
interruptible_eval.clj: 79 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 55 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 142 nrepl.middleware.interruptible-eval/interruptible-eval/fn/fn
AFn.java: 22 clojure.lang.AFn/run
session.clj: 171 nrepl.middleware.session/session-exec/main-loop/fn
session.clj: 170 nrepl.middleware.session/session-exec/main-loop
AFn.java: 22 clojure.lang.AFn/run
Thread.java: 748 java.lang.Thread/run
The delete_database method seems to be abstract and I don't understand how that is possible. I use the datomic-pro lib [com.datomic/datomic-pro "0.9.6024"]
Any help/hint would be really helpful :)#2020-01-2615:21dharriganIs delete-database a function of the client, not the datatabase?#2020-01-2615:22dharriganclient/delete-database?#2020-01-2615:23dharrigan#2020-01-2615:26frozarI was checking that and for me it is a function of the client: https://docs.datomic.com/client-api/datomic.client.api.html#var-delete-database#2020-01-2615:27frozarI think that it's legitimate for a client to be able to delete a database#2020-01-2615:31dharriganyes#2020-01-2615:31dharriganbut in your source you're invoking it against the db#2020-01-2615:31dharrigan(d/delete-database client {:db-name "pet-owners-db"})#2020-01-2615:31dharrigand === ref to the db#2020-01-2615:31dharriganinstead of client/....#2020-01-2615:32frozarMay be I use a non standard requirement [datomic.client.api :as d]#2020-01-2615:32dharrigantry using client 🙂 if that doesn't work, then <shrug> 🙂#2020-01-2615:33maxtclient is what you get back from d/client#2020-01-2615:34maxtSorry, that's indeed what you def:ed. Looks right to me#2020-01-2615:35frozarOk, but I must do something wrong, when I try (client/delete-database {:db-name "pet-owners-db"})
I get the error No such namespace: client#2020-01-2615:36maxtThe namespace looked right, I also have (require [datomic.client.api :as d])#2020-01-2615:37maxtAnd (d/delete-database client {:db-name db-name})#2020-01-2615:37marshallIn datomic OnPrem only peers can create and delete databases#2020-01-2615:37marshallClients cannot#2020-01-2615:37marshallThis is a difference between onprem and cloud#2020-01-2615:38marshallIf you want to create or delete a db with onprem you need to connect a peer to your transactor and do so#2020-01-2615:38marshallClient via peer server cant do that#2020-01-2615:39frozarOoooh, that's a news, ok, thank you @marshall#2020-01-2615:40marshallhttps://docs.datomic.com/on-prem/dev-setup.html#run-dev-transactor#2020-01-2615:40marshallSee from there down#2020-01-2615:41marshallIn particular the peer server part: "The Datomic Peer Server does not create durable-storage databases itself. To use a Peer Server along with a dev storage database you will need to have previously created a database and have a running dev Transactor. "#2020-01-2615:49eoliphantHi I’m experimenting with the analytics support in Cloud, and i’m running into an issue where I get a “failed: Rounding necessary” error from time to time. I thought I’d traced it down to a specific ‘column’, by just adding them one by one to a query. But I then found that, subsequent queries with only that column, didn’t cause the issue#2020-01-2615:50marshallhttps://docs.datomic.com/cloud/analytics/analytics-metaschema.html#scale-option#2020-01-2615:51eoliphantah thx 🙂#2020-01-2615:55eoliphantso in this case, how can i handle cases where there scale might vary? If I have bigdec attrs, for say :something/money. and some are 123 and others 123.45 ?#2020-01-2616:30marshallUse the largest#2020-01-2616:30marshallIt will add 0s to reach that precision#2020-01-2815:46eoliphantsorry, yeah figured that out. thx ! 
ironically this revealed some bugs in the code where monetary vals had something other than a scale of 2 (or 0)#2020-01-2616:49dharriganWhere would be a good place to recommend updates/corrections to the documentation?#2020-01-2616:49dharrigani.e., http://docs.datomic.com#2020-01-2619:23jaret@dharrigan you can drop an email to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> that way it won’t get lost to slack archiving or you can drop a note here. #2020-01-2622:07dharrigansure. I'll gather my notes 🙂#2020-01-2717:32MatthewLispHello everyone #2020-01-2717:32MatthewLispNice to be here #2020-01-2717:32MatthewLispI’m working on a project which uses Datomic Pro#2020-01-2717:33MatthewLispAnd when I try to run a REPL, I have this error
`Could not find artifact com.datomic:datomic-pro:jar:0.9.5561 in central (https://repo1.maven.org/maven2/)
Could not find artifact com.datomic:datomic-pro:jar:0.9.5561 in clojars (https://repo.clojars.org/)
Could not transfer artifact com.datomic:datomic-pro:jar:0.9.5561 from/to http://my.datomic.com (https://my.datomic.com/repo): Not authorized , ReasonPhrase:Unauthorized.
Could not transfer artifact com.datomic:datomic-pro:pom:0.9.5561 from/to http://my.datomic.com (https://my.datomic.com/repo): Not authorized , ReasonPhrase:Unauthorized.
This could be due to a typo in :dependencies, file system permissions, or network issues.
If you are behind a proxy, try setting the 'http_proxy' environment variable.
Could not resolve dependencies`#2020-01-2717:35MatthewLispany clues? It seems it has to do with the Pro version of Datomic, but I’m not sure, still googling around (never used Datomic before)#2020-01-2717:39alexmillerdatomic-pro is not available in public repos, you need to use your private my.datomic repo#2020-01-2717:40alexmilleror mvn install it locally#2020-01-2717:40alexmillerhttps://docs.datomic.com/on-prem/integrating-peer-lib.html#2020-01-2717:41MatthewLispThank you @alexmiller :) #2020-01-2718:45dharriganI watched this youtube presentation on datomic #2020-01-2718:46dharriganin it he mentions that they query runs on the client, with the current value of the database retrieved by the client before the query is executed (if you want, naturally the database as-it-is-now)#2020-01-2718:47dharriganWhat wasn't too clear, I think he said it's held as an optimised tree like structure on the client? If so, I wasn't then too clear what would happen if your database has millions upon millions of entries#2020-01-2718:47dharriganIs there a reference to some documentation that explains if your database is huge, how the client handles running queries clientside?#2020-01-2718:48dharrigan(or indeed, more about how the client runs queries clientside?)#2020-01-2720:01favilaI didn’t watch the video, but “the query runs on the client” is imprecise phrasing. The query runs on peers. There is a separate concept in datomic (a client) which runs queries “over there” on a peer and doesn’t access storage directly--it only asks questions and gets results. 
(Although in datomic cloud ions the peer may be in-process, so the query doesn’t actually travel on a network.)#2020-01-2720:01favilaThis talks a bit about the tree structure: https://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2020-01-2720:02favilaThe tree structure is blocks of datoms (called segments) in storage; the transactor writes these and the peers read these.#2020-01-2720:14dharriganah ha!#2020-01-2720:14dharrigan. Peer then deduces which segments it needs to execute the query, and reaches storage for missing segments. Queries can run across datasets that do not fit in memory (by loading and unloading cached segments during DB scans), but the result of query should fit in memory. Query results are not lazy.
#2020-01-2902:39dustingetzhttp://www.dustingetz.com/:datomic-performance-gaare/#2020-01-2904:09dharriganthanks for the link! 🙂#2020-01-2906:50IgnasHey, I have a datomic query performance question. We are trying to get all entities of a certain type (transactions) that have changed in the last 5 minutes. We are currently using the default "now" db to get all the transactions and then "since" db to shave off the past. The query works but gets exponentially slower as data grows. Maybe there is a better way to write this query?
(def since-5min (d/since db #inst "2020-01-28T15:45"))
(d/q '[:find [?e ...]
:in $ $since
:where [$ ?e :transaction/status]
(not [?e :transaction/type :transaction.type/rejected])
[$since ?e]]
db since-5min)#2020-01-2907:46favilaFive minutes is not very many transactions. Maybe just look at the transaction log directly?#2020-01-2907:51maxtIs there a difference if you put the [$since ?e] part first?#2020-01-2908:05Ignasyes, I won't get the ones that changed in the last 5 minutes but did not change the :transaction/status property#2020-01-2908:17Ignasfor the transactions logs, can we use both :db/txInstant to filter for the last 5 minutes and filter by :transaction/status in the same query?#2020-01-2908:49Ignasin the way that would include also the transactions that have changed but on some other property than :transaction/status#2020-01-2912:48ghadiWhy not include txInstant directly in a query clause?#2020-01-2914:04favilaWhat I was thinking was something like this:#2020-01-2914:04favila(d/q '[:find [?e ...]
:in $ ?log ?from-t ?to-t
:where
[(tx-ids ?log ?from-t ?to-t) [?tx ...]]
[(tx-data ?log ?tx) [[?e ?a ?v _ ?op]]]
[?e :transaction/status]
(not [?e :transaction/type :transaction.type/rejected])]
(d/as-of (d/db conn) #inst "2020-01-28T15:50")
(d/log conn)
#inst "2020-01-28T15:45"
#inst "2020-01-28T15:50")#2020-01-2914:06favilaLook at everything that happened in the last five minutes; if you see any datoms in the tx log whose entity now (currently) has a transaction status and no rejected transaction type, you know that transaction entity changed#2020-01-2914:06favilaits mere presence in the log means it changed#2020-01-2914:07favilathis strategy might not make sense for longer time periods or higher transaction loads because it depends on transaction time being the most selective thing available#2020-01-2914:08favila(but for 5 minutes, it probably is)#2020-01-2914:43Ignasthank you! the query seems to do exactly what I want. I'll now try to benchmark it in different load scenarios thanks2#2020-01-2913:29leongrapenthinIs there a faster alternative to developing against Datomic Cloud over using the Bastion/Tunnel? For my development purposes, I sometimes need to run about 100 queries in a batch, which is super slow. I don't need help with best practices etc. as I've used Datomic for 5 years now. Point is, when executed within Datomic Cloud this takes milliseconds. But locally it can take half a minute, which breaks any kind of dynamic/interactive flow during dev. Right now I'm thinking of using VNC to develop on a remote appliance within AWS.#2020-01-2913:46dharriganWould you consider running wireguard or zerotier on the server? then you can connect directly to the server, skipping ssh 🙂 (the connection is encrypted and secure on either wireguard and/or zerotier)#2020-01-2913:49ghadiwhat makes you think the bastion vs.
wireguard is going to make a difference?#2020-01-2913:50ghadithere's probably inherent latency from @leongrapenthin’s laptop to the VPC#2020-01-2913:52dharriganWell, with bastion you're jumping directly from a machine in the middle to the actual target server, so that's an additional network hop#2020-01-2913:52dharriganwireguard/zerotier is direct peering#2020-01-2913:53ghadithe latency from bastion -> query group is far less than the latency from laptop -> bastion#2020-01-2913:53dharriganperhaps - I don't know - maybe - but if one can remove a network hop, then it's good#2020-01-2913:56ghadiwireguard is great, but it can't make packets get inside the VPC faster#2020-01-2913:57dharriganI agree#2020-01-2914:45maxtCloud running locally would be great, like datomic-memdb but supporting tx functions and tuples. I'm also annoyed by queries that run fast in the cloud but very slow when developing.#2020-01-2914:46augustlwhen will we get apt-get install aws-local? 😄#2020-01-2914:57dharriganyay -S t'interwebs-local#2020-01-2915:57leongrapenthin@ghadi I'm pretty sure the performance distance is mainly due to a Datomic Ion having more "Datomic Peer like" querying performance with segment caching and whatnot, whereas the client via bastion is a flat http client.#2020-01-2915:58Joe LaneI don't think that was the argument he was trying to make.#2020-01-3015:01jcfHello everyone! I'm kicking the tyres on Datomic Cloud, and have run into a problem that I think might be a bug.
I have a working SOCKS proxy and can curl -x "socks5h://... to verify the connection is healthy. Now I'm trying to create a database but I'm getting an error about there not being an AWS profile with the name I've specified in my config:
(def cfg
{:server-type :ion
:region "eu-west-1"
:system "<REDACTED>"
:creds-profile "dev"
:endpoint ".<REDACTED>."
:proxy-port 8182})
With that config I can create a client without issue, and can invoke the call to datomic.client.api/create-database but that's where this exception is thrown:
1. Unhandled clojure.lang.ExceptionInfo
No AWS profile named 'dev'
#:cognitect.anomalies{:category :cognitect.anomalies/fault,
:message "No AWS profile named 'dev'"}
async.clj: 58 datomic.client.api.async/ares
async.clj: 54 datomic.client.api.async/ares
sync.clj: 73 datomic.client.api.sync.Client/create_database
api.clj: 144 datomic.client.api/create-database
api.clj: 135 datomic.client.api/create-database#2020-01-3015:02jcfI know that profile does exist. I can use it via the CLI (e.g. aws --profile dev s3 ls). I'm using STS to access an organisation within my account, and I think this might be where the problem comes from.#2020-01-3015:03jcfCan the client library work across orgs via STS or is this unsupported?#2020-01-3015:05ghadi@jcf you're deploying an ion?#2020-01-3015:05ghadior testing locally on your laptop?#2020-01-3015:05jcf@ghadi not yet. Just trying to create a database from my local machine. I thought the client could create DBs with Datomic Cloud, but maybe not?#2020-01-3015:06ghadiYou can. But server type needs to be :client when local going over proxy#2020-01-3015:07jcf1. Caused by clojure.lang.ExceptionInfo
:server-type must be one of :cloud, :local, :peer-client, or :peer-server
#:cognitect.anomalies{:category :cognitect.anomalies/incorrect,
:message
":server-type must be one of :cloud, :local, :peer-client, or :peer-server"}#2020-01-3015:07jcfPeer client?#2020-01-3015:07ghadi:cloud, sorry#2020-01-3015:07ghadibut wait --#2020-01-3015:07jcfThat's good because using :peer-client gave me this:
1. Caused by java.io.FileNotFoundException
Could not locate datomic/peer_client__init.class, datomic/peer_client.clj or
datomic/peer_client.cljc on classpath. Please check that namespaces with
dashes use underscores in the Clojure file name.
😄#2020-01-3015:08ghadiwhat dependency are you using in your deps/project.clj ?#2020-01-3015:08jcf:deps
{com.cognitect/anomalies {:mvn/version "0.1.12"}
com.datomic/client-cloud {:mvn/version "0.8.81"}
com.datomic/ion {:mvn/version "0.9.35"}
org.clojure/clojure {:mvn/version "1.10.1"}
org.clojure/data.json {:mvn/version "0.2.7"}}#2020-01-3015:08jcfI'm following the tutorial to get connected and get the client running locally.#2020-01-3015:09ghadilink to tutorial?#2020-01-3015:09jcfhttps://docs.datomic.com/cloud/tutorial/client.html#2020-01-3015:09jcfI've gone through a few pages to get to this point.#2020-01-3015:10jcfPrerequestites to then get to https://docs.datomic.com/cloud/getting-started/start-system.html.
Then on to https://docs.datomic.com/cloud/getting-started/get-connected.html.
Now I'm doing the client API cloud tutorial above.#2020-01-3015:12jcf@ghadi I'm pretty sure @daemianmack and Joe ran into this same problem when we were trying to pull together a Datomic Cloud demo on a recent consulting gig. There was some workaround they needed to get profiles working across AWS orgs.#2020-01-3015:14jcfI've got the same setup where I have a root AWS account with IAM users. Those users assume an admin role in sub-orgs via profiles. It works via the AWS CLI but I think maybe the Java SDK doesn't work the same way.#2020-01-3015:16jcfI'm crawling through GitHub issues now. 😅#2020-01-3015:17jcfThis has been open for a couple of years and appears unresolved: https://github.com/aws/aws-sdk-java/issues/803#2020-01-3015:21jcfNo AWS profile named 'dev' is a really odd error. It looks like the profile isn't being found at all as opposed to not having the right permissions etc.#2020-01-3015:28jcfI wonder if there's a way to pass in my own AWS credential provider? https://github.com/cognitect-labs/aws-api/blob/635a0dfaa6f60e8158196874fa1a99e3db69d83a/examples/assume_role_example.clj#L46-L64#2020-01-3015:29daemianmackheya @jcf! how’s things?#2020-01-3015:29jcfHey, @daemianmack! Really good thanks! Hope you're well. 🙂#2020-01-3015:30daemianmacki don’t quite understand why you’d get that profile error at that point in the sequence, instead of earlier, but we eventually settled on aws-mfa to handle that piece of things — https://github.com/broamski/aws-mfa#2020-01-3015:30jcfOh, I haven't gotten to MFA yet. That's gonna be fun! 😄#2020-01-3015:30daemianmackyeee…. essss.#2020-01-3015:31jcfWhatever happened to days of an API key over HTTP?!#2020-01-3015:32jcfCreating a custom credential provider's going to mean some lifecycle management from the looks of it. I have to stop the provider. Hmm. This is surely too fiddly to be the right way of connecting things.#2020-01-3015:36jcfAhh, shucks! Looks like Datomic Client doesn't use aws-api. 
I think I'm going to park this for today. Why I'm getting told I don't have a profile that I know exists I'm not sure but I'll dig in more soon.#2020-01-3015:37jcfThanks for the help, @ghadi @daemianmack! Much appreciated. 🙂#2020-01-3015:42daemianmacklet us know what you find out @jcf!#2020-01-3015:42jcfWill do! 🙂#2020-01-3019:46kennyDoes Datomic not allow you to divide two numbers in a query and get a number that is between 0-1?#2020-01-3019:47kennye.g. [(/ ?connected-mins ?reported-mins) ?percent-connected] ?percent-connected is always 0 for some reason.#2020-01-3019:53ghadicall (/ x y) in a repl and check the output#2020-01-3019:53kennyJust a regular result?
(/ 5352 13380)
=> 2/5
#2020-01-3019:57kennyHuh... https://docs.datomic.com/cloud/query/query-data-reference.html#built-in-functions
> Datomic's / operator is similar to Clojure's / in terms of promotion and contagion with a notable exception: Datomic's / operator does not return a clojure.lang.Ratio to callers. Instead, it returns a quotient as per quot.
I guess I have to store these attributes as doubles to be able to do division on them?#2020-01-3020:00kennyThis appears to work
[(double ?connected-mins) ?connected-mins-dbl]
[(double ?reported-mins) ?reported-mins-dbl]
[(/ ?connected-mins-dbl ?reported-mins-dbl) ?percent-connected]
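The behavior kenny ran into can be reproduced at a plain Clojure REPL. This is a sketch using the numbers from his session above; it is ordinary Clojure arithmetic illustrating the contrast the docs describe, not the Datomic query engine itself:

```clojure
;; Datomic's / returns an integer quotient (per quot), so inside a query
;; 5352 / 13380 evaluates to 0 -- which is why ?percent-connected was always 0:
(quot 5352 13380)
;; => 0

;; Clojure's own / returns a ratio instead:
(/ 5352 13380)
;; => 2/5

;; Coercing either operand to double first, as in the workaround above,
;; yields the intended fraction:
(/ (double 5352) 13380)
;; => 0.4
```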
#2020-01-3020:11ghadiyeah you need to coerce one of the arguments to double @kenny#2020-01-3119:32eraserhdI have a transaction that retracts two values and asserts one for a cardinality-one attribute on a single entity. That is weird, right?#2020-01-3119:35favilaare they two different values retracted? Is it a schema entity?#2020-01-3119:35favilaor, do you have the log?#2020-01-3119:38eraserhdthis is from the log. the two retractions have equal values, and are different from the assertion#2020-01-3119:42favilaah, so if you explicitly retract and assert a new value on the same e and card-1 attr, datomic may not drop the redundant retraction. This is an issue we encountered during decanting (it made decanting an already-decanted database break). I don’t remember if this was fixed#2020-01-3119:43matthavenerwhat is decanting?#2020-01-3119:43eraserhdahhh... ok, and in this case we certainly would be doing that#2020-01-3119:43eraserhdthat was my other question :D#2020-01-3119:44faviladecanting is replaying and transforming the tx log on one db to fill a new db#2020-02-0115:53Mehdi H.Hi all! I am using datomic on-prem for a small project and I find that the datomic.process-monitor is messing up whatever I type in my REPL all too often. I couldn't find in the bin/logback.xml config the right line to comment to get rid of these in a dev environment. Would anyone be able to guide me on this? Either this or a way not to get datomic logs at all in the REPL but somewhere else? Many thanks in advance!#2020-02-0217:29mdhaneyThis file lists all the namespaces to suppress for Datomic. This configuration is for timbre - if you are using a different logging framework, the configuration will be different.
https://github.com/fulcrologic/fulcro-template/blob/master/src/main/config/defaults.edn#2020-02-0409:55Mehdi H.Hey, thanks mdhaney! I see none of these options in the logback file, so I can't really mute them. For instance I'd definitely mute the datomic.process-monitor one, but I can't see it. I tried and commented everything in it : the process-moniitor logs are still visible in my REPL...#2020-02-0410:00Mehdi H.So if anyone knows how to make the logs from datomic.process-monitor disappear from my REPL (it gets in the way of my typing in it a lot and it's really annoying...) with a basic datomic-pro on-prem logback configuration, I'd be more than happy to hear from you! 🙂#2020-02-0420:11Mehdi H.Thanks @U05120CBV! Usually I do clj -A:rebel:dev in a terminal with :
:dev {:extra-paths ["dev"]
:extra-deps {
org.clojure/tools.namespace {:mvn/version "0.3.1"}
org.clojure/java.classpath {:mvn/version "0.3.0"}
}}
:rebel {:extra-deps {com.bhauman/rebel-readline {:mvn/version "0.1.4"}}
:main-opts ["-m" "rebel-readline.main"]}#2020-02-0420:11Mehdi H.But I seem to recall the logs were also "polluting" the REPL with the Emacs CIDER integration for instance.#2020-02-0420:46marshallWhats in your logback.xml?#2020-02-0421:11Mehdi H.I have commented most of the lines and the logs from datomic.process-monitor are still coming to the REPL :
<configuration>
<!-- prevent per-message overhead for jul logging calls, e.g. Hornet -->
<contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
<resetJUL>true</resetJUL>
</contextListener>
<appender name="MAIN" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${DATOMIC_LOG_DIR:-log}/%d{yyyy-MM-dd}.log</fileNamePattern>
<maxHistory>72</maxHistory>
</rollingPolicy>
<prudent>true</prudent> <!-- multi jvm safe, slower -->
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %-10contextName %logger{36} - %msg%n</pattern>
</encoder>
</appender>
<!-- <logger name="datomic.cast2slf4j" level="DEBUG"/> -->
<!-- uncomment to log storage access -->
<!-- <logger name="datomic.kv-cluster" level="DEBUG"/> -->
<!-- uncomment to log transactor heartbeat -->
<!-- <logger name="datomic.lifecycle" level="DEBUG"/> -->
<!-- uncomment to log transactions (transactor side) -->
<!-- <logger name="datomic.transaction" level="DEBUG"/> -->
<!-- uncomment to log transactions (peer side) -->
<!-- <logger name="datomic.peer" level="DEBUG"/> -->
<!-- uncomment to log the transactor log -->
<!-- <logger name="datomic.log" level="DEBUG"/> -->
<!-- uncomment to log peer connection to transactor -->
<!-- <logger name="datomic.connector" level="DEBUG"/> -->
<!-- uncomment to log storage gc -->
<!-- <logger name="datomic.garbage" level="DEBUG"/> -->
<!-- uncomment to log indexing jobs -->
<!-- <logger name="datomic.index" level="DEBUG"/> -->
<!-- these namespaces create a ton of log noise -->
<!-- <logger name="httpclient" level="INFO"/>
<logger name="org.apache.commons.httpclient" level="INFO"/>
<logger name="org.apache.http" level="INFO"/>
<logger name="org.jets3t" level="INFO"/>
<logger name="com.amazonaws" level="INFO"/>
<logger name="com.amazonaws.request" level="WARN"/>
<logger name="sun.rmi" level="INFO"/>
<logger name="net.spy.memcached" level="INFO"/>
<logger name="com.couchbase.client" level="INFO"/>
<logger name="org.apache.zookeeper" level="INFO"/>
<logger name="com.ning.http.client.providers.netty" level="INFO"/>
<logger name="org.eclipse.jetty" level="INFO"/>
<logger name="org.hornetq.core.client.impl" level="INFO"/>
<logger name="org.apache.tomcat.jdbc.pool" level="INFO"/> -->
<!-- <logger name="datomic.cast2slf4j" level="DEBUG"/> -->
<root level="info">
<appender-ref ref="MAIN"/>
</root>
</configuration>#2020-02-0421:29Mehdi H.This above is the logback from bin/ in the datomic folder. There is no other logback.xml file in the folder of the project.#2020-02-0513:23Mehdi H.After checking, even if I just launch clj in the terminal and request a connection to a db uri, the logs from datomic.process-monitor keep on coming. Any thoughts @U05120CBV?#2020-02-0515:11marshalltry commenting out the appender entirely
this line <appender-ref ref=“MAIN”/>#2020-02-0515:12marshallyou may have a global system redirect somewhere on your local machine that’s resulting in terminal appender instead of file#2020-02-0515:13Mehdi H.Ok makes sense! Trying this immediately!#2020-02-0515:21Mehdi H.hmm so I just commented the one line you mentioned, leaving <root level="info" etc, but no, still coming relentlessly to my REPL.#2020-02-0515:23Mehdi H.I am on WSL (ubuntu 18.04) on an otherwise Windows 10 laptop. I just don't see where I would have created the redirect that you mention. Normally I get from the rest of the file that I should get these logs as files with a 3 day rolling history. I am going to look for those files in the install folder.#2020-02-0515:25Mehdi H.yep I do get daily log files in the log folder as well... Am I looking at a conflict of logging policies, would you say that could cause the double logging?#2020-02-0515:39marshalltry taking logback out of your peer app dependencies#2020-02-0515:39marshallwhat all is in your deps.edn?#2020-02-0515:40marshallif you have no logback in your classpath, it will default to logging in your terminal#2020-02-0515:40marshallthis is separate from the transactor logs#2020-02-0515:40marshallwhich is what the logback in the datomic distribution controls#2020-02-0515:40marshalli would put the configs back they way they were in the datomic logback#2020-02-0515:40marshallthen create a logback in your peer application (or remove the logback dependency from your deps.edn)#2020-02-0515:42Mehdi H.This is my deps.edn file of the project:
{:mvn/repos
{"" {:url ""}}
:deps
{org.clojure/clojure {:mvn/version "1.10.1"}
buddy/buddy-auth {:mvn/version "2.2.0" :exclusions [cheshire]}
com.datomic/datomic-pro {:mvn/version "0.9.5951" :exclusions [org.slf4j/slf4j-nop]}
com.stuartsierra/component {:mvn/version "0.4.0"}
org.slf4j/slf4j-simple {:mvn/version "1.7.28"}
org.clojure/tools.logging {:mvn/version "0.5.0"}
com.stuartsierra/component.repl {:mvn/version "0.2.0"}
io.pedestal/pedestal.service {:mvn/version "0.5.7"}
io.pedestal/pedestal.route {:mvn/version "0.5.7"}
io.pedestal/pedestal.jetty {:mvn/version "0.5.7"}
com.draines/postal {:mvn/version "2.0.3"}
clj-template {:mvn/version "1.0.1"}
com.velisco/clj-ftp {:mvn/version "0.3.12"}
buddy/buddy-sign {:mvn/version "3.1.0"}
clj-time {:mvn/version "0.15.2"}
tupelo-datomic {:mvn/version "0.9.2"}}
:paths ["src" "resources"]}#2020-02-0515:42Mehdi H.So basically I was messing with the wrong file, ok, I'll put it back to what it was before#2020-02-0515:44Mehdi H.So you are saying the datomic.process-monitoring logs are different from the transactor logs. Does that mean that whatever dependency I have for a logging framework in my classpath is doing all this?#2020-02-0515:50Mehdi H.I see this sentence on the github page of tools.logging : "You can redirect all java writes of `System.out` and `System.err` to the log system by calling `log-capture!`#2020-02-0515:52Mehdi H.Is it me misusing tools.logging or slf4j as I see a lot of slf4j dependencies in a clj -Stree command...?#2020-02-0515:56marshallyou have org.slf4j/slf4j-simple {:mvn/version “1.7.28”} in your des#2020-02-0515:56marshalldeps#2020-02-0515:57marshallif you have slf4j in your deps and no logback.xml in your classpath you will get default logging behavior from datomic peer library#2020-02-0515:57marshallto your console#2020-02-0515:57marshallyou can either remove that dep or make a logback file with the configs you actually want#2020-02-0515:58Mehdi H.Oh I see! I wanted to use slf4j because I think I felt I had to choose something like that to use tools.logging.#2020-02-0515:59Mehdi H.So I need to build a logback file and put it on my classpath. Makes perfect sense! Many thanks for taking the time to debug this rookie issue @U05120CBV! Much appreciated!#2020-02-0300:06datranHello, I'm trying to figure out how to model relationships in a a graph database, and I've got a puzzle. I want to be able to link one entity to another multiple times:
(d/transact db/conn [{:db/ident :container/name
:db/valueType :db.type/string
:db/unique :db.unique/identity
:db/cardinality :db.cardinality/one}
{:db/ident :container/stuff
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many}
{:db/ident :thing/name
:db/valueType :db.type/string
:db/unique :db.unique/value
:db/cardinality :db.cardinality/one}])
(d/transact db/conn [{:container/name "a"}
{:container/name "b"}])
(d/transact db/conn [{:thing/name "thing1"}
{:thing/name "thing2"}
{:thing/name "thing3"}])
(d/transact db/conn [{:container/name "a"
:container/stuff [[:thing/name "thing3"]]}
{:container/name "b"
:container/stuff [[:thing/name "thing1"]
[:thing/name "thing1"] ; note that I'm repeating "thing1" twice
[:thing/name "thing2"]]}])
(d/q '[:find (count ?thing) .
:where
[?e :container/name "b"]
[?e :container/stuff ?thing]]
@db/conn)
;;=> 2 I would expect this to return 3#2020-02-0300:07datranam I approaching this the wrong way? Should I have a separate :thing/count attribute?#2020-02-0303:01favilaDatomic facts are sets, so there are no duplicate facts. You can’t assert “b contains thing1” multiple times#2020-02-0303:02favilaHaving a count entity is unlikely to help because you’re going to run in to the same problem when you try to decrement#2020-02-0303:03favilaMaybe you could make “stuff” attr a tuple of thing-ref and count >=1#2020-02-0303:04favilaStepping back though, what are you trying to model?#2020-02-0315:55datranhttps://imgur.com/a/9pmUkKr#2020-02-0315:56datranI want to learn fact-modeling and so I'm building an api for a pen and paper RPG rulebook.
I'm trying to model advances. Each class has a set of advances, some of which they can take multiple times. A lot of those advances are shared between classes as well.#2020-02-0315:58datranIn sql terms, this would just be a join table with (advance_id, class_id, count), I think. But I'm not sure how to express that here.#2020-02-0318:34john-shafferCould you use something like this?
:db/valueType :db.type/tuple
:db/tupleTypes [:db.type/ref :db.type/long]
:db/cardinality :db.cardinality/many
:db/doc "Tuples of [Advance-Ref, Count]"#2020-02-0318:43datranI'm currently playing around with datahike which doesn't support tuple types, so I can't try that out right now.#2020-02-0318:45datranCurrently, I've just removed the :db/unique property and I'm just dealing with the duplicates#2020-02-0318:45datranthe denormalization bugs me a bit, though#2020-02-0311:00Ivar RefsdalShould lookup refs work with tuple attributes?
Example code here that does not work as expected:
https://gist.github.com/ivarref/0bb0e1f51919760be7c953ccd0b12279
Or am I missing something?
Thanks.#2020-02-0318:50franquitoApparently pull does not work with ref. According to this gist https://gist.github.com/robert-stuttaford/e329470c1a77712d7c4ab3580fe9aaa3#using-refs-in-tuples.
Although I don't know where that list of valid :db.type/ref for tuples is.#2020-02-0314:30mgrbyteRunning datomic on-prem 0.9.5951 in AWS EB, the connection is created on server init.
I'm seeing the following exception some time after a web service starts.
A google search only returns one hit (a slack thread from Feb 2014!)
Has anyone else encountered this, or may be able to hint at what might be the issue?
08:59:48.688 WARN datomic.common [clojure-agent-send-off-pool-3]
... caused by ...
java.lang.IllegalStateException: Connection pool shut down
#2020-02-0401:47Oleh K.hey, I create datomic cloud bucket backups every day, and now I'm trying to restore the data from some point. If I stop datomic instances and just replace bucket's content, I still see the latest data, not only from the , for example, 3 days ago backup. Or to re-create Cloudformation stacks is the only way?#2020-02-0409:46Joe LaneI'm 90% positive you will not be able to restore a datomic cloud instance from s3 backups alone. If you re-create the stack you will likely LOSE the information because you are not taking dynamodb into account with your s3 backups. Dynamodb contains the tx-log, which I'm pretty sure you don't get with the s3 bucket backups alone.
Since you currently still have the dynamodb tx-log from today (not truncated to 3 days ago, and I'm not suggesting to truncate it), it makes sense that you are getting back information from today from dynamodb.#2020-02-0415:22Oleh K.Thanks, Joe#2020-02-0415:29dmarjenburghEFS also serves as an additional caching layer which might explain why you still see the latest data.#2020-02-0414:15jaretIf you’ve had any failed deploys due to sync-libs failed to complete in 120 seconds
Please check out our latest ion-dev release where we have upped the timeout on sync-libs:
https://forum.datomic.com/t/ion-dev-0-9-251/1346#2020-02-0421:51DaoudaHey Folks, is there a way to tell Datomic to increment a value in the database, based on whatever value it finds there?
Context: I just want to increment an attribute value in datomic, without having to do it at the application level. I want that to happen in datomic to avoid dealing with concurrency and CAS.#2020-02-0421:57shaun-mahoodhttps://docs.datomic.com/cloud/transactions/transaction-functions.html#creating has that function as an example 🙂#2020-02-0421:58ghadinot sure if you are doing on-prem vs cloud, but the details differ slightly#2020-02-0421:58ghadithere's also a built-in cas db fn#2020-02-0421:59Daoudaon-prem#2020-02-0421:59ghadihttps://docs.datomic.com/on-prem/database-functions.html#2020-02-0421:59ghadithat's the same link but for on-prem#2020-02-0513:07DaoudaSorry for not saying thank you yesterday. Thank you very much for the help 🙂#2020-02-0515:03souenzzohttps://github.com/vvvvalvalval/datofu#generating-unique-readable-ids#2020-02-0517:52unbalancedI couldn't find this in the documentation anywhere, I'm curious what the expected behavior of doing a pull on a composite tuple is. Would you expect back a map or a vector?
i.e., based on the documentation, this makes sense to me, but I'm not sure if it matches with reality:
;; pseudo-data example
(d/q '[:find (pull ?e [*]) .
:where
[?e :db/id [:reg/semester+course+student [234561 345621 123456]]]]
@conn) ;;=>
[{:db/id 234561
:semester/year 2018
:semester/season :fall}
{:db/id 345621
:course/id "BIO-101"}
{:db/id 123456
:student/first "John"
:student/last "Doe"
:student/email "#2020-02-0609:22Ivar RefsdalI would expect a map if you use the . notation in :find. Otherwise I would expect a 0 or 1 length set as the :reg/course+semester+student attribute is :db.unique/identity.
(defn db-id [lookup-ref]
(d/q '[:find ?e .
:in $ ?id-attr ?value
:where
[?e ?id-attr ?value]]
(d/db conn)
(first lookup-ref)
(second lookup-ref)))
(d/q '[:find (pull ?e [*]) .
:in $ ?course-semester-student
:where
[?e :reg/course+semester+student ?course-semester-student]]
(d/db conn)
[(db-id [:course/id "BIO-101"])
(db-id [:semester/year+season [2018 :fall]])
(db-id [:student/email "
The only thing I don't like with this is having to write and call the db-id function. Datomic should do this itself I think.
Here is a working gist that demostrates this:
https://gist.github.com/ivarref/dc2a5698cd0b791121bfadfa934bcd74#file-tuples2-clj-L79
Edit: The schema and basic data transactions are copied from https://docs.datomic.com/cloud/schema/schema-reference.html#tuples#2020-02-0620:12unbalancedBrilliant, thank you#2020-02-0620:12unbalancedThat’s exactly what I was looking for#2020-02-0519:34zaneIn the Datomic query grammar is find-rel short for something? find-relation?#2020-02-0519:40favilaYes. A relation is a set of same-typed tuples#2020-02-0519:43zaneAwesome. What does "same-typed" mean in this context?#2020-02-0519:58favilaeach tuple is the same type?#2020-02-0519:59favilaI guess specific to datomic, each tuple slot corresponds to the same binding expression in the find#2020-02-0520:00favilaAll this grammar means is you’ll get results like #{[x y z]…} not [x y z] (tuple) or [x x x] (collection) or x (scalar)#2020-02-0520:01favilaand none of this is supported on datomic cloud#2020-02-0520:13zane> each tuple slot corresponds to the same binding expression in the find
This makes sense to me.#2020-02-0520:13zane> none of this
None of what?#2020-02-0520:18favilait doesn’t support any destructuring in :find#2020-02-0520:19favilaIOW “find-rel” is the only option#2020-02-0520:21zaneGot it! Thanks.#2020-02-0613:12tatutin datomic cloud there is no possibility to do in-memory unit tests without a connection to the cloud instance?#2020-02-0613:12tatutFAQ seems to state that#2020-02-0613:13tatutdo people usually have test suites use an actual instance?#2020-02-0613:13alexmillerYes#2020-02-0613:14tatutok, good to know#2020-02-0613:18favilathere is a library which implements the client api against the peer api designed for testing, it’s name escapes me#2020-02-0613:18favilait’s not perfect and you need an on-prem license#2020-02-0613:19favilabut it’s something#2020-02-0613:20favilahttps://github.com/ComputeSoftware/datomic-client-memdb#2020-02-0614:29jeremyvdwDepending of your us-case, you can have to look to https://github.com/vvvvalvalval/datomock#2020-02-0616:46kenny@U11SJ6Q0K We wrote that library (I don't like the name but people started using it before we could change it haha). We really wanted to embrace the whole "test against the production system" but it's just too much of a pain. CI tests need to run the socks proxy to connect to the Datomic instance. IIRC, we never managed to get that set up in a stable way on CircleCI. It was almost there (again IIRC) but we needed to be able to control the hostname for the socks proxy. The Datomic team said they'd add it but that was 2 years ago or so.
Further, you need a way to have developers create databases with the same name (e.g., "admin") but not trip over each other by transacting to the same database. You'll need to write something to handle that as well. You could have each dev run their own Datomic Cloud system but that gets unwieldy. If you're using Ions, you'll probably need to do that.
Finally, the ability to test in memory is simply quite nice. I may not always have a stable internet connection (e.g., working in a cafe, on a plane, etc) and don't want to be blocked from developing.
I wish the Datomic team would release an official in-memory implementation for testing.
All that being said, we do run integration tests against the Datomic Cloud system.#2020-02-0617:10tatutyes, I really wish there was an in-memory impl for testing purposes as well... the socks proxy situation seems tricky to set up for ci#2020-02-0617:11tatutI don't see the database name as a problem though, your tests can always setup a unique name and we use $USER-dev named databases for all developers#2020-02-0617:12kennyYes, if you place all your data in a single database, the naming shouldn't be an issue.#2020-02-0619:36joshkhafter a long battle, we managed to get the socks-proxy running on CircleCI#2020-02-0619:40kennyNice! You should make a post on the Datomic forum about how you did it. #2020-02-0619:43joshkhthat's a great idea. i'll confirm that the work is shareable and then hopefully follow up with a post.#2020-02-0619:51joshkh@U11SJ6Q0K we've also had some success building and then testing against an in-memory datahike database: https://github.com/replikativ/datahike
you won't be able to test things like Datomic query and transaction functions, but it's a fun idea and faster than querying a database value returned by d/with-db . and of course it isn't "as real" as using Datomic itself.#2020-02-0619:30joshkhAssuming I have the attribute :user/id in my schema, is this a valid datomic/metaschema.edn to generate a table?
{:tables {:user/id {}}}
I'm unable to see my table, although I can connect with my schema and catalog and can see the db__attrs and db__idents tables, and so the Troubleshooting docs suggest it's a problem with the metaschema itself. I've tried syncing and cycling my datomic-access connection and presto connection with no luck.#2020-02-0620:12joshkhCame back to it later and it's working. Cool.#2020-02-0823:28bbloomi’m curious what kinds of patterns folks do around making tx-data, especially with respect to uniqueness and upserts#2020-02-0823:29bbloomit seems like upserts are mostly automatic when you have like {:unique-kw :a, :x 1} and then later do {:unique-kw :a, :x 2}#2020-02-0823:33bbloomi’d love for the same magic when working with tuples. for example, if you were modeling a two-level hierarchy with uniqueness within each branch of the hierarchy: {:top/name :x} and {:bottom/parent [:top/name :x] :bottom/name :y}#2020-02-0823:33bbloomif you do that, but you have unique/identity on :bottom/parent+name tuple, you get a uniqueness constraint violation when trying to upsert a bottom entity#2020-02-0823:36bbloomsomething like Unique conflict: :bottom/parent+name, value: [12345 :y] already held by: …. asserted for: …#2020-02-0823:34bbloomi could create some kind of upsert-by database transaction function, but i’m wondering if folks with more datomic experience have examples, suggestions, ideas, patterns, whatever#2020-02-0823:34bbloomthanks#2020-02-0823:42bbloomooh, hmm, didn’t realize i could actually add derived tuples as tx data, seems like that works just like a non-tuple attribute for upserts…. that’s nice!#2020-02-0823:54bbloomor maybe this doesn’t work? 
i probably shouldn’t slack & code at the same time…#2020-02-0823:42bbloomhaving discovered that - i’m still open to hearing about patterns folks use for building transactions, etc#2020-02-1016:02matthavenerI’ve used functions that transform transactions to resolve tempids or de-dup upserts to merge data (pre tuples and for other types of uniqueness not supported by datomic). I’m interested in what you find, because it’s a problem I run into a lot#2020-02-1102:20bbloomi’m curious if you could say more, b/c that’s basically my current plan: to create a richer language on top of tx-data that basically compiles down to tx-data#2020-02-1102:20bbloombasically, a series of statements that are executed and, as a side effect, return tx data#2020-02-1102:21bbloomand a big ol’ multi-method to add “macros” of sorts to that language#2020-02-0913:09SvenIs it possible to somehow disable delete-database in Datomic Cloud or even better - disable the deletion of specific (e.g. prod) databases? If not then how do people deal with the possibility that any engineer who has access do datomic can trigger delete-database via client api? If such scenario has happened - how can one restore the database as all the data is actually not deleted?#2020-02-0915:59joshkhno big deal, but there's a markup error in the Cloud docs here:
https://docs.datomic.com/cloud/getting-started/get-connected.html#access-gateway
> To connect to the access gateway, run the [[../operation/howto.html#datomic-access[datomic-access]] script...#2020-02-0916:12alexmiller@marshall ^^#2020-02-0918:34bbloomis there a way to resolve tempids within a transaction function?#2020-02-0920:01favilaNo, as they are not resolved yet#2020-02-0921:14bbloomok - thanks#2020-02-1000:57NolanIs there a dedicated location where some form of datomic dependency upgrade roadmap is broadcast? Or has that traditionally happened out of necessity or as a result of a request? I find myself in a situation where I have no real urgency to use the newest jackson@2.10.0, but would benefit from knowing if that was planned, not a priority, for sure never going to happen, etc..#2020-02-1016:29grzmPutting together a script to run some integration testing, I'm calling ion.dev/push from Clojure. That appears to open a s3-transfer-manager-worker thread pool that doesn't shut down cleanly. I end up needing to call System/exit. Is this what datomic.ion.dev/-main does as well? Or is there some other manner I can cleanly shut down that thread pool?#2020-02-1116:57odieHi there, I'm having a bit of trouble with a simple query I'm trying to write using the pull syntax.
Some entities have an attribute that looks like
{:db/ident :some-component
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many
:db/isComponent true}
This particular attribute often holds a list of some 2000+ elements.
So, pull seems to want to return 1000 elements by default. To lift the limit, I tried the following...
(pull ?e [* (:some-component :limit nil)]) ;=> limit not lifted, returned 1000 elements
(pull ?e [* (limit :some-component nil)]) ;=> limit not lifted, returned 1000 elements
(pull ?e ["*" (:some-component :limit nil)]) ;=> limit not lifted, returned 1000 elements
(pull ?e ["*" (limit :some-component nil)]) ;=> returned all 2184 elements
So, of the 4 cases, only the last one returns the correct number of elements. But, by using "*", all the key names seem to have been converted to strings.#2020-02-1116:58odieWhat am I doing wrong here?#2020-02-1117:09favilaSanity check: does your version of datomic support this syntax? it was added in on prem in version 0.9.5656#2020-02-1117:09favilaIf that checks out, does removing the wildcard change behavior?#2020-02-1117:37ghadiI believe there is a bug that @U05120CBV filed around this#2020-02-1117:37marshallThat's correct. We are working on a fix#2020-02-1119:41odie@U09R86PA4 I'm using 0.9.6014.#2020-02-1119:41odieOkay. Got it. It's a bug. Anybody got a link to that issue so I can keep an eye on it?#2020-02-1117:29favilaCan the datomic backup process make use of valcache or memcached?#2020-02-1117:38marshallI dont believe that it does, but i will check#2020-02-1315:44favilaDid you find anything out?#2020-02-1315:50marshallIt does not currently; can you provide more detail on use case? Is this a performance thing or robustness or other?
Do you have specific quantitative requirements (i.e. speed/throughput/etc)?#2020-02-1315:53favilaIt’s performance. We run a continuous backup and we were wondering if this was worth trying to get backup times under 1hr after an index event. By my reasoning it wouldn’t help much though because it’s already pruning the tree.#2020-02-1315:53favilaso it’s unlikely to be reading cached segments#2020-02-1213:11pfeodrippeShould the order matter at the installation of tuple attributes?
This works
[{:db/ident :reg/course
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :reg/semester
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :reg/student
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :reg/semester+course+student
:db/valueType :db.type/tuple
:db/tupleAttrs [:reg/course :reg/semester :reg/student]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}]
While this doesn't
[{:db/ident :reg/semester+course+student
:db/valueType :db.type/tuple
:db/tupleAttrs [:reg/course :reg/semester :reg/student]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
{:db/ident :reg/course
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :reg/semester
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :reg/student
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}]#2020-02-1214:45marshall@pfeodrippe Yes, we are aware of that behavior, we are investigating whether it can be changed to be order insensitive#2020-02-1218:04mynomotoMeanwhile a note in the docs would be helpful.#2020-02-1313:26pfeodrippe@U05120CBV thank you!#2020-02-1214:50TyIs datomic cloud the only way to use datomic?#2020-02-1214:52ghadino, there are two products: Cloud & On-Prem#2020-02-1214:52ghadiif you go to http://datomic.com and click on Products or peruse some docs, you can see differences between the two#2020-02-1214:53ghadithere's also some comparison here https://docs.datomic.com/on-prem/moving-to-cloud.html#2020-02-1214:57TyThanks! Much appreciated#2020-02-1217:58Sam DeSotaFor non-breaking schema updates, is it fine to re-transact my entire apps schema idents every time I start an app instance? Could this cause any issues down the road?#2020-02-1218:00ghadi1) don't do anything except non-breaking schema growth
2) yeah it's unnecessary and can probably cause issues, especially when you have many app instances#2020-02-1218:04Sam DeSotaRight, everything is non-breaking, I just put that qualifier in there for clarification.
Didn't know if datomic only transacted changes anyway, I guess I'll have to write some sort of diffing engine, manually syncing the schema on each change doesn't work for my use case.
Thank you.#2020-02-1218:06ghadidont need a diffing engine, just query#2020-02-1218:07ghadi(set (map first (d/q '[:find ?name :where [?e :db/ident ?name] [?e :db/type]] db)))#2020-02-1218:07ghadithen transact all the stuff that isn't in the set#2020-02-1218:07ghadi(schema is not special, they're just ordinary entities)#2020-02-1218:08Sam DeSotaGot it, yeah I guess I only need to diff on the top level items, no need for a deeper tree diff update. Thanks again!#2020-02-1218:09ghadino problem... what does "top-level" mean?#2020-02-1218:09ghadiyou mean the existence of the attribute itself?#2020-02-1218:10Sam DeSotaI mean for example.. updating an existing entity from non-component entity to component entity, I can just re-transact the entire ident, instead of comparing all the individual db entity properties and updating only those that changed#2020-02-1218:12ghadiyeah, understood. Another approach is to transact collections of schema and mark in the tx metadata something identifying the collection itself#2020-02-1218:12ghadithat way one collection can make schema, then a later collection can update some part of it#2020-02-1218:13ghadithat way instead of ensuring that attr :patient/id exists, you can ensure that 20200212-patient-stuff.edn is in the database#2020-02-1218:14ghadithen you can add 2021-more-patient-stuff.edn later#2020-02-1218:14ghadiwe use an attribute called :migration/file to tag collections of schema in this way#2020-02-1218:19Sam DeSotaGot it, yeah using more traditional migration. I do like the first approach however, since all changes to Datomic are non-breaking anyway, I can use a declarative format agnostic to datomic for my schema, then generate the datomic idents, as well as schemas for other purposes ex: GraphQL from that source. 
It's been super helpful for rapidly building out internal apis without the need to duplicate schema information everywhere.#2020-02-1218:02ghadiyou can always query then transact only what's missing#2020-02-1321:33jarethttps://forum.datomic.com/t/datomic-0-9-6045-now-available/1360#2020-02-1322:37joshkhi'm having trouble requiring com.datomic/client-cloud {:mvn/version "0.8.81"}:
Clojure 1.10.1
(require '[datomic.client.api :as d])
Execution error - invalid arguments to datomic.client.api/loading at (api.clj:16).
:as - failed: #{:exclude} at: [:exclude :op :quoted-spec :spec]
:as - failed: #{:only} at: [:only :op :quoted-spec :spec]
:as - failed: #{:rename} at: [:rename :op :quoted-spec :spec]
(quote :as) - failed: #{:exclude} at: [:exclude :op :spec]
(quote :as) - failed: #{:only} at: [:only :op :spec]
(quote :as) - failed: #{:rename} at: [:rename :op :spec]
the previous version com.datomic/client-cloud {:mvn/version "0.8.78"} seems fine:
Clojure 1.10.1
(require '[datomic.client.api :as d])
=> nil
#2020-02-1322:39ghadi@joshkh can you paste your whole clojure -Stree ?#2020-02-1322:39ghadiCmd-Shift-Enter will open up the snippet paste in slack#2020-02-1322:41ghadiyou can redact whatever proprietary stuff you have#2020-02-1322:43joshkhclj -Adev -Stree#2020-02-1322:48ghadiare you running a repl with clj/ clojure ? I don't see the repl prompt appear before the require#2020-02-1322:48ghadiclojure -Sdeps '{:deps {com.datomic/client-cloud {:mvn/version "0.8.81"}}}'
Clojure 1.10.1
user=> (require '[datomic.client.api :as d])#2020-02-1322:48ghadiyou should try running ^ outside your project#2020-02-1322:49joshkhsorry, used clojure -Adev -Stree in my code paste. i'll try your next suggestion now.#2020-02-1322:50joshkh$ clojure -Sdeps '{:deps {com.datomic/client-cloud {:mvn/version "0.8.81"}}}'
Clojure 1.10.1
user=> (require '[datomic.client.api :as d])
Execution error - invalid arguments to datomic.client.api/loading at (api.clj:16).
:as - failed: #{:exclude} at: [:exclude :op :quoted-spec :spec]
:as - failed: #{:only} at: [:only :op :quoted-spec :spec]
:as - failed: #{:rename} at: [:rename :op :quoted-spec :spec]
(quote :as) - failed: #{:exclude} at: [:exclude :op :spec]
(quote :as) - failed: #{:only} at: [:only :op :spec]
(quote :as) - failed: #{:rename} at: [:rename :op :spec]
user=>
#2020-02-1322:50ghadijust to confirm, can you paste your -Stree again but using ^ outside your project?#2020-02-1322:50joshkhwait, outside of my project it works#2020-02-1322:50ghadiyeah that's what I suspected#2020-02-1322:51joshkhdid i break my toys?#2020-02-1322:51ghadiwithout knowing what -Adev is doing, it's hard to say#2020-02-1322:51joshkh:aliases {:dev {:extra-deps {com.datomic/client-cloud {:mvn/version "0.8.81"}
com.datomic/ion {:mvn/version "0.9.35"}
com.datomic/ion-dev {:mvn/version "0.9.251"}}}}
#2020-02-1322:53ghadiyou have some stray directories / AOT in your main project?#2020-02-1322:54ghadiwhat's in :paths#2020-02-1322:54ghadi(just a lark...)#2020-02-1322:54joshkhjust some benign files in /resources. :paths ["src/clj" "resources"]#2020-02-1322:54ghadiclass files?#2020-02-1322:56joshkhnope!#2020-02-1322:56joshkhi'm stumped 🙂#2020-02-1322:57ghadidid you follow these instructions: https://docs.datomic.com/cloud/operation/howto.html#ion-dev ?#2020-02-1322:57ghadithey changed recently#2020-02-1323:04joshkhyup, and no dice:
(ns genpo.client
(:require [datomic.client.api :as d]))
Clojure 1.10.1
Loading src/clj/genpo/client.clj...
Syntax error (ExceptionInfo) compiling at (src/clj/genpo/client.clj:1:1).
Call to clojure.core/refer-clojure did not conform to spec.
#2020-02-1323:19ghadiif it works outside your project @joshkh I'd try to debug your project classpath#2020-02-1323:19ghadican't repro it over here#2020-02-1323:20joshkhyou got it. thanks @ghadi for your help!#2020-02-1323:20ghadiand your deps looked correct#2020-02-1323:20ghadiion-dev should definitely be in .clojure/deps.edn#2020-02-1323:20joshkhyes - i've moved it there. thanks for the tip.#2020-02-1323:20ghadinp#2020-02-1403:23Sam DeSotaI'm starting to look into full-text search with cloud. I'd like to use a full text search database hosted in the datomic VPC, and sync data via the the log api if that's reasonable. Ideally, I could keep the full-text search in sync with the datomic relatively quickly (< 30s), is there any resources or directions anybody could point me in for working on this?#2020-02-1404:50eagonI’m really interested in the same thing, and been meaning to build out this functionality in the near future. I believe previous discussion mentioned quite a few people doing it with ElasticSearch, and somewhere (I think?) it was officially suggested to use AWS CloudSearch. The basic implementation idea would probably be sipping the transaction log and publishing directly to the search solution, so I’m pretty sure syncing should be much faster than 30s given the reactive ideas built into datomic#2020-02-1417:32joshkhCan you elaborate on the reactive ideas built into datomic? I thought sipping the transaction log would be more akin to polling the log every n seconds in a loop.#2020-02-1418:09Sam DeSotaRight, that's what I'm curious about. I've seen a couple of examples of a utilizing a polling thread with ions to do subscriptions, but if there's some way to get a message queue of all Datomic transactions, that would be ideal#2020-02-1419:22eagonAhhh, yeah I think I mixed up on-prem and cloud, been watching too many old Rich Hickey Datomic videos for my source of datomic truth and not so much the documentation. 😛
I was thinking of tx-report-queue which was a big idea in prem for the Peers, which was understandably not supported in the Client API for cloud. Now that Ions are out though and there must be some kind of internal implementation to keep the transaction log synced across query groups, is there a way to access this API?
There’s this old slack conversation https://clojurians-log.clojureverse.org/datomic/2018-06-27 between @U09FEH8GN and @U072WS7PE where it was mentioned @currentoor we certainly understand the value of tx-report-queue, and will probably do something similar (or better) in Cloud at some point. That said, you should plan and build your app around what exists today. Was wondering if I missed an update since, or what kinds of workarounds “build your app around what exists today” that people have found to work for them?#2020-02-1419:37favilapolling#2020-02-1415:13Joe LaneFor both of you asking about search I'm curious, what is the expected size of the ES Cluster you will be running in the VPC?#2020-02-1418:05Sam DeSotaFor me, just used for a product database of about 100,000 user-generated products (title, description, tags) and an orders / customer database of ~5000 orders a month, just for admin tasks.#2020-02-1418:05Sam DeSotaRunning in the VPC#2020-02-1417:30joshkhI want to upsert two entities with tuples in the same transaction, where the second entity's tuple references the first entity. An initial transaction works as expected:
(d/transact db
{:tx-data [{:db/id "entitya"
:feature/id "SomeId123"
:feature/type "Gene"
:feature/type+id ["Gene" "SomeId123"] ; <-- tuple for upsert
}
{:db/id "entityb"
:attribute/view "Gene.primaryIdentifier"
:attribute/value "MC3R"
:attribute/feature "entitya" ; <-- ref back to entitya temp-id from above
:attribute/feature+view+value ["entitya" "Gene.primaryIdentifier" "MC3R"]}]})
=> success
But transacting the same tx-data again throws a Unique conflict caused by the second entity, even though I'm including the tuple attribute value (albeit a temporary id):
(d/transact (client/get-conn)
{:tx-data ...same as above})
Unique conflict: :attribute/feature+view+value, value: [47257009761812574 "Gene.primaryIdentifier" "MC3R"] already held by: 27971575810621538 asserted for: 31454828647415908
Should I expect a successful upsert here?
• Edit - I'm on the latest version of Datomic Cloud 8846 and client 0.8.81#2020-02-1419:31favilaOn-prem has the same behavior. I too am curious if this is by design because one of our desired use cases for composite tuples was to have upserting composites.#2020-02-1419:33favilaYou can use them as upserting only if you explicitly assert the final value of the composite in the transaction. composites don’t seem to be consulted for upserting-tempid resolution, even if no part of the composite involves a ref#2020-02-1419:35favilae.g. transacting {:a eid-of-x :b eid-of-y} with a defined composite upsert attr defined of :a+b may also produce a datom conflict instead of upserting#2020-02-1419:36favilainstead we have to do {:a eid-of-x :b eid-of-y :a+b [eid-of-x eid-of-y]} always. And we can’t use tempids or lookup refs for eid-of-x or eid-of-y , only raw entity ids#2020-02-1517:30daniel.spanielCurious to know if a regex query in the where clause like this#2020-02-1517:30daniel.spaniel[?e :invoice/number ?number]
[(re-find ?regex ?number)]
#2020-02-1517:30daniel.spanielis supported .. my regex is like #"(?i)blah"#2020-02-1517:31daniel.spanielthis works in the in memory datomic but is barfing in datomic cloud ( ion )#2020-02-1517:37daniel.spanielthe error is
Not supported: class java.util.regex.Pattern
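favila later suggests in this thread to ship the regex as a plain string and rebuild it inside the query; a sketch of that shape (assuming `datomic.client.api` is required as `d`, `db` is bound to a current database value, and the `:invoice/number` attribute is taken from the question above):

```clojure
;; The pattern travels as a plain string (which the client API can
;; serialize) and is reconstructed per query with re-pattern, so no
;; java.util.regex.Pattern instance crosses the wire.
(d/q '[:find ?e ?number
       :in $ ?re-str
       :where
       [?e :invoice/number ?number]
       [(re-pattern ?re-str) ?regex]
       [(re-find ?regex ?number)]]
     db "(?i)blah")
```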
#2020-02-1522:52steveb8n@dansudol try putting the Pattern class in the :allow values of ion-config.edn#2020-02-1523:04daniel.spanielwill do @steveb8n good idea#2020-02-1608:39odieHi all,
One of the queries I’m trying to run seemed “slow”, at around 2+ seconds
on my local machine.
The query is just trying to match on an exact value for an attribute. It looks something like this:
'[:find [(pull ?e [*]) ...]
:in $ ?target-val
:where [?e :some-attr ?target-val]]
It turns out, there are ~10M entities with this particular attribute. I then tried to speed this up by asking for the attribute to be indexed.
The attribute was updated to look like:
{:db/ident :some-attr
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/index true}
However, the query speed did not change.
I then tried looking things up directly through the AVET index like so.
(d/datoms db :avet :some-attr "12345")
This results in an error. Saying the attribute isn’t indexed.
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/attribute-not-indexed attribute: :some-attr is not indexed
I also tried running
(d/request-index (db/get-conn)) ;=> true
and
(deref (d/sync-index conn (d/basis-t (d/db conn)))) ;=> blocks forever
I’m pretty lost on how to go about getting the index built.
Is there something obvious I’m missing here?
I’m running 0.9.6014 btw.#2020-02-1614:05favilaYou are on the right track but indexing isn’t instant#2020-02-1614:06favilatransactor logs and metrics will show if it’s actively indexing#2020-02-1614:07favilaUse a basis t exactly corresponding to the t when you issued request-index (or a little older if you are unsure) to your d/sync-index call#2020-02-1614:07favilaIf it’s still blocking, then you are definitely still indexing#2020-02-1614:08favilaYou just need to wait for it to finish#2020-02-1614:08favilaAnd you can confirm by looking at tx logs#2020-02-1706:37odie@U09R86PA4 I left everything running over night. The index still isn’t available.
Digging a bit in the log, I found that whenever I ran
(d/request-index (db/get-conn))
The following lines would soon show up in the log file:
2020-02-17 14:19:10.773 WARN default o.a.activemq.artemis.core.server - AMQ222165: No Dead Letter Address configured for queue admin.request5e4a305e-a0cd-4a28-a539-6e209b4d53ec in AddressSettings
2020-02-17 14:19:10.778 WARN default o.a.activemq.artemis.core.server - AMQ222166: No Expiry Address configured for queue admin.request5e4a305e-a0cd-4a28-a539-6e209b4d53ec in AddressSettings
2020-02-17 14:19:10.860 WARN default o.a.activemq.artemis.core.server - AMQ222165: No Dead Letter Address configured for queue admin.response5e4a305e-bd13-4fe6-905e-d3a0a0ace172 in AddressSettings
2020-02-17 14:19:10.860 WARN default o.a.activemq.artemis.core.server - AMQ222166: No Expiry Address configured for queue admin.response5e4a305e-bd13-4fe6-905e-d3a0a0ace172 in AddressSettings
2020-02-17 14:19:10.909 INFO default datomic.update - {:event :transactor/admin-command, :cmd :request-index, :arg "stock-insight-d1b785f9-4994-485c-960f-45cc94ebced8", :result {:queued "stock-insight-d1b785f9-4994-485c-960f-45cc94ebced8"}, :pid 97181, :tid 97}
2020-02-17 14:19:11.328 INFO default datomic.update - {:index/requested-up-to-t 25104804, :pid 97181, :tid 54}#2020-02-1706:37odieI’m guessing the last line means all the index has been brought up to t:25104804.#2020-02-1706:39odieIndexing on said field was enabled at t:25104802. So I guess that means the transactor thinks all index as been brought up to date?#2020-02-1706:42odieHowever, trying to use the AVET index fails in the same way.#2020-02-1706:43odieWould those activemq warnings be some indication that something isn’t working the way it is supposed to?#2020-02-1706:46odieAlso, I noticed that I was starting the transactor with java 11. I’ve since switched to running it with java 8. Could that have messed something up?#2020-02-1712:23favila The last line means the index has been requested up to that T, not that it is finished#2020-02-1712:24favilaTry sync-index with that t. If it blocks, you are not finished#2020-02-1712:25favilaCognitect says they support java 11 :man-shrugging: #2020-02-1712:25favilaI think the activemq warnings are red herrings#2020-02-1611:47daniel.spaniel@steveb8n, that did not work .. seems like
java.util.regex.Pattern
#2020-02-1611:48daniel.spanielis not allowed in the :allow section of the ion-config.edn file#2020-02-1614:04favilaCould this be a serialization issue? Try using a string for the re and constructing a pattern early in the query with re-pattern#2020-02-1621:30steveb8nIn mine I have just the fn from the namespaces I need e.g. clojure.string/starts-with?#2020-02-1621:34steveb8nsince Pattern in java interop, you might create your own fn with Pattern inside and “allow” that instead#2020-02-1623:19DaoudaHey folks,
Let's say I have an entity with an attr called :entity/hash and more than one entity may share the same hash value.
Now I want to perform a query where I will pass a list of hashes hash1 hash2 hash3 and the query will return a tuple of hash count-of-entities-with-the-same-hash like this: [hash1 3 hash2 5 hash3 80] or {hash1 3 hash2 5 hash3 80}
I don't want to perform that at application level. I want the query to give me back that result. Is it possible and how can I achieve that?
Snippet code will be very welcome 😄#2020-02-1706:24pithylessUnless I misunderstood the question, this sounds like an aggregate count query: https://docs.datomic.com/cloud/query/query-data-reference.html#built-in-aggregates
[:find ?hash (count ?hash)
:with ?entity
:in $ [?hash ...]
:where [?entity :entity/hash ?hash]]
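A runnable sketch of the aggregate query above (not from the thread; it uses the on-prem peer library with an in-memory database for illustration, while Cloud users would call `d/q` from `datomic.client.api`):

```clojure
;; Assumes com.datomic/datomic-free or -pro is on the classpath.
(require '[datomic.api :as d])

(def uri "datomic:mem://hash-demo")
(d/create-database uri)
(def conn (d/connect uri))

@(d/transact conn [{:db/ident       :entity/hash
                    :db/valueType   :db.type/string
                    :db/cardinality :db.cardinality/one}])

@(d/transact conn [{:entity/hash "hash1"} {:entity/hash "hash1"}
                   {:entity/hash "hash1"} {:entity/hash "hash2"}])

(d/q '[:find ?hash (count ?hash)
       :with ?entity
       :in $ [?hash ...]
       :where [?entity :entity/hash ?hash]]
     (d/db conn) ["hash1" "hash2"])
;; => #{["hash1" 3] ["hash2" 1]}
```

Note the explicit `$` in `:in`: once a query has an `:in` clause, the database argument must be declared there as well, and `:with ?entity` keeps the count per underlying entity rather than collapsing duplicate hashes.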
#2020-02-1723:07Daoudaactually you got it right, thank you very much 😄#2020-02-1707:21eagonAre there any other UI options for Datomic Cloud other than REBL? I’d love to introduce non-clojure team members to Datomic to get them excited and have them try out queries, and the old Datomic Console for on-prem looked perfect, but sadly doesn’t seem to be available for cloud(?). Have people built any solutions for this need?#2020-02-1712:10joshkhis there a way to configure a username and password for the Presto server built into the Datomic Cloud access gateway?#2020-02-1715:05souenzzo[on-prem] Are there guidelines about datomic and core.async?
I need to run queries inside a go-block#2020-02-1715:18favilaqueries are blocking work, which you shouldn’t do in a go-block generally#2020-02-1715:19favilaunless you know they will complete very quickly?#2020-02-1715:38souenzzoI already locked all my threads due quering inside go-block 😕#2020-02-1715:38souenzzoI know, I can create a thread pool and bla bla bla
But it can't be delivered as a library, like datomic.client.api.async#2020-02-1715:56souenzzo@U04VDQDDY I remember that sometime ago you asked about connect datomic-client-pro in a datomic:mem conn
Did you end up with some solution?#2020-02-1716:02mfikes@U2J4FRT2T that must have been someone else with that issue#2020-02-1716:04maxtUsing clojure spec for datomic entity spec seems like a good idea. Is there any reason why it might not? I can't find it being mentioned anywhere.
Doing so would make it easy to use the same verification code client side and server side. The possible downside I can think of is that it might be a bit of overhead.
Something along the lines of:
(s/def :user/phone-number (s/and string? #(re-matches #"\+[0-9 +-]+" %)))
(s/def :user/uuid uuid?)
(s/def :example/user (s/keys :req [:user/uuid] :opt [:user/phone-number]))
(in-ns 'example)
(defn user? [db eid]
(let [user (d/pull db '[*] eid)]
(if (s/valid? :example/user user)
true
(s/explain-str :example/user user))))
;; datomic schema
{:db/ident :example/user
:db.entity/preds example/user?}
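One detail the snippet above doesn't show: entity preds only run when the transaction itself asserts `:db/ensure` on the entity. A sketch (assuming `conn` is an open connection with this schema transacted and the `example` namespace loaded on the transacting process):

```clojure
;; Entity specs are opt-in per transaction: without :db/ensure,
;; example/user? is never invoked for this entity.
@(d/transact conn
   [{:user/uuid         (java.util.UUID/randomUUID)
     :user/phone-number "+46 70 123 45 67"
     :db/ensure         :example/user}])
;; If example/user? returns anything but true, the transaction aborts
;; and the returned value (here the s/explain-str output) is reported,
;; which is why returning the explain string is a useful idiom.
```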
#2020-02-1718:21Luke Schubertis there anyway to mass transact data?#2020-02-1718:32favilaWhat do you mean?#2020-02-1718:32favilaDo you mean this? https://docs.datomic.com/on-prem/best-practices.html#pipeline-transactions#2020-02-1718:34Luke SchubertI'm already pipelining, I was just wondering if there was some functionality to run a large batch of transactions#2020-02-1718:37Luke SchubertI wasn't able to find anything, but I figured I would ask here in the event that I was just failing to search properly#2020-02-1718:52favilaI’m still not sure what you want more than running transactions repeatedly#2020-02-1718:54favilathere are some tricks to improve “import” transaction cost or performance if that’s what you’re looking for?#2020-02-1718:54favilalike performing the import locally on large instances with SSDs#2020-02-1718:55favilaincreasing the index threshold#2020-02-1718:55favilanot indexing any attrs until the end of the import#2020-02-1721:37joshkhso this is interesting! when using the standalone pull syntax with [:db/id] as the selector, the results include reverse reference attributes with a '... symbol value. is this expected?
(d/pull db [:db/id] 1234567891234)
=>
{:attribute/a "value-a"
:attribute/b "value-b"
:attribute/_c ...
:attribute/_d ...}#2020-02-1721:47favilaI can’t reproduce this#2020-02-1721:47favilaI definitely don’t consider this expected#2020-02-1721:48favilacan you provide some more context?#2020-02-1721:50alexmillerare you sure that's not just repl printing?#2020-02-1722:06joshkhyup:
((juxt identity type) (:attribute/_c (d/pull (client/db) [:db/id] 1234567891234)))
=> [... clojure.lang.Symbol]
a coworker found this today, and i reproduced it on my machine (different IDEs and REPLs)#2020-02-1722:25joshkh> can you provide some more context?
DatomicCloudVersion 8846
com.datomic/client-cloud {:mvn/version "0.8.78"}
and nothing peculiar about our schema. the reverse reference attributes are legit, and for what it's worth non-component. pulling * on the same entity returns the same data but without reverse references (as expected).#2020-02-1722:40joshkhhas anyone here successfully integrated Datomic Cloud Analytics with a third party BI platform? we can successfully validate our Presto connection, but after selecting a Schema (db name) we get the error Query failed: Expected string for :db-name
someone else has the same problem and posted on the forums a few months ago without a resolution
https://forum.datomic.com/t/error-on-integration-between-datomic-analytics-and-power-bi/1266/2#2020-02-1723:10DaoudaHey folks, how does retract impact datomic performance?
Does it make database reads and writes faster?
What about excision, does it have the same impact or a different one?#2020-02-1820:55BrianDatomic Cloud instance suddenly giving
{
  "errorMessage": "Connection refused",
  "errorType": "datomic.ion.lambda.handler.exceptions.Unavailable",
  "stackTrace": [
    "datomic.ion.lambda.handler$throw_anomaly.invokeStatic(handler.clj:24)",
    "datomic.ion.lambda.handler$throw_anomaly.invoke(handler.clj:20)",
    "datomic.ion.lambda.handler.Handler.on_anomaly(handler.clj:171)",
    "datomic.ion.lambda.handler.Handler.handle_request(handler.clj:196)",
    "datomic.ion.lambda.handler$fn__3841$G__3766__3846.invoke(handler.clj:67)",
    "datomic.ion.lambda.handler$fn__3841$G__3765__3852.invoke(handler.clj:67)",
    "clojure.lang.Var.invoke(Var.java:399)",
    "datomic.ion.lambda.handler.Thunk.handleRequest(Thunk.java:35)"
  ]
}
I've tried from lambda as well as bastion. The EC2 instance is up and running. It's been a while since I've touched this. If I redeploy master with no changes, will I lose the handles I have connecting api gateway and my lambda ions? I just need to get this service back up and running#2020-02-1821:11jaret@brian.rogers can you log a case by emailing support? I would like to gather more information on this failing service before offering concrete next steps and it would be best to share that information over a case.
Useful starting info:
- CFT version
- solo or prod?
- other services are working?
-did you deploy before the error? Anything change before you saw this error?#2020-02-1821:17Brian@jaret I just redeployed master and it's fixed itself. Would it be useful for me to still submit a case to cognitect?#2020-02-1821:18BrianIf it helps: cft version I don't know (what is cft?), solo topology, we only are running that one datomic service so I couldn't test anything else, lat deployment was in the summer and it's been running ever since until a day or two ago#2020-02-1821:19jaretYes. I am very interested in tracking this down and would like to provide you potential steps to gather a thread dump should this issue occur again.#2020-02-1821:20jaretCFT = cloud formation template version, found in the outputs of your compute stack#2020-02-1821:22jaret@brian.rogers obviously no urgency on the case, but if you get a chance please do log one. If we have a bug here I’d like to address it.#2020-02-1821:24BrianSure thing!#2020-02-1822:53lilactownhas anyone used datascript as a client-side cache for datomic?#2020-02-1822:54Joe LaneI think you'd be surprised how different they are.#2020-02-1822:54Joe Lane(Meaning, I was)#2020-02-1822:59lilactownin that semantically they are too different for datascript to act as a cache that way?#2020-02-1823:00lilactownthe reason I’m asking is because I’ve been thinking about building a datalog API in front of our microservices (a la GraphQL), and started thinking that it might make sense to use datascript as a client-side cache to reduce the queries that have to actually hit the backend.
my thinking was that a query could respond with not only the result, but the datums that were resolved in the processing of the query. that way the client side could transact those into a client-side cache and future queries could potentially query the local db instead of sending a request to the server.
however, populating the cache for a query could end up accidentally being quite a lot of datums that need to be sent over the wire, even if the query result is small. so I was wondering if this same idea had been solved by some datomic <-> datascript integration.#2020-02-1902:04aisamuThis sounds a lot like Meteor's minimongo! (along with all the hard problems that came with it)#2020-02-1903:04lilactownI'm starting to think that datalog might be too general for this#2020-02-1913:43favilathere’s a reason graphql resembles pull expressions more than sql or datalog#2020-02-1913:45favilaIME the problems were 1) determining dependencies (i.e., do I have the thing I need to query or not) efficiently 2) expressing those things at the right granularity or even overlapping granularity 3) updating those things efficiently#2020-02-1913:46favilaa datomic peer can be really sloppy with this by just having lots of ram and a fast network and very large granularity (i.e. “giant seqs of sorted datoms”#2020-02-1913:47favilaon a remote, semi-untrusted client, you can’t do that#2020-02-1913:47favilayou need to send a lot less, and you need to make sure they can’t see “nearby” data which may not be theirs#2020-02-1914:57lilactownyeah that makes sense#2020-02-1915:00lilactownI guess the biggest downside of datalog for this use case is that it’s much harder to build an index on top of a set of queries you’re sending.#2020-02-1823:27Joe LaneSo, at that point you're putting your database on the internet.#2020-02-1823:30Joe Lane(If i'm understanding you correctly)#2020-02-1823:36lilactownI’m not sure what you mean by that.#2020-02-1823:52Joe LaneWhat is the first problem (the a la GraphQL) you're trying to solve? Why are you interested in building the datalog api? What brought you to the conclusion of "use datascript as a client-side cache"? 
What are you interested in caching?#2020-02-1900:13Joe LaneFWIW I think the most complete library that is close to what you're asking for is https://github.com/denistakeda/re-posh#2020-02-1900:14Joe LaneAnd all the libraries it depends on.#2020-02-1900:14lilactownwe have a lot of microservices that are currently exposed at various endpoints. I would like to be be able to query our system, via Datalog, to get a response that contains data from multiple endpoints.
E.g. if there’s a books and authors microservice, I’d be able to write a query on the client-side:
'[:find ?title ?author
  :where
  [?e :book/title ?title]
  [?e :book/author ?author-id]
  [?author-id :author/name ?author]]
and the service would query across the books and authors microservices to resolve the facts I want#2020-02-1900:15Joe LaneAhh, pathom is probably the closest thing to that 🙂
> has anyone used datascript as a client-side cache for datomic?
Nothing wrong with that, per se, it just sounds like datalog to query across n-services is a very different (more general) problem than client-side datomic cache.#2020-02-1900:26lilactownyes, the next step of the idea is that I would like to handle caching on the client of these queries so that it doesn’t have to send the request for datums that have already been requested from the microservices.#2020-02-1900:27lilactownand my thinking was: what if my datalog-query-service responded with all of the datums that were requested in order to resolve the query, and then the client transacted those to a local datascript db?#2020-02-1901:07lilactownDoes my question make more sense, now?#2020-02-1901:08lilactownI am looking for experience reports of using datascript to cache queries for a service that already uses datalog (datomic)#2020-02-1906:22pithylessThe closest thing I'm aware of is https://github.com/replikativ/datahike and the replikativ stack as an alternative trying to build a JS/JVM distributed data stack. But, I would also argue that your books and authors example sounds like you want to build a js-based datomic peer (that will fetch and cache datoms and do joins locally):
'[:find ?title ?author
:in $book-db $author-db
:where
[$book-db ?e :book/title ?title]
[$book-db ?e :book/author ?author-id]
[$author-db ?a :author/id ?author-id]
[$author-db ?a :author/name ?author]]
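That local-peer idea can be roughly sketched with DataScript. This is only an illustration of the caching approach being discussed, not an existing integration; the schema and data below are invented:
```clojure
(require '[datascript.core :as d])

;; Client-side cache: suppose the server answers a query and also ships back
;; the datoms it touched, which the client transacts into a local DataScript db.
(def conn (d/create-conn {:book/author {:db/valueType :db.type/ref}}))

;; Datoms "received from the server" (entity maps with tempids, for brevity):
(d/transact! conn [{:db/id -1 :author/name "Ursula K. Le Guin"}
                   {:db/id -2 :book/title "The Dispossessed" :book/author -1}])

;; A repeat of the same query can now be answered from the local cache:
(d/q '[:find ?title ?author
       :where
       [?b :book/title ?title]
       [?b :book/author ?a]
       [?a :author/name ?author]]
     @conn)
;; => #{["The Dispossessed" "Ursula K. Le Guin"]}
```
The hard parts favila lists below (knowing whether the cache can answer a query, granularity, invalidation) are exactly what this sketch leaves out.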
#2020-02-1914:56lilactowna client with caching is fairly similar to a datomic peer, I suppose!#2020-02-1913:08asierHi, we have a memory issue (system crashes) because we get many records (over a million) and then sort by time.#2020-02-1913:09asierThis is the schema:#2020-02-1913:09asier{:db/id #db/id[:db.part/db]
 :db/ident :lock/activities
 :db/valueType :db.type/ref
 :db/cardinality :db.cardinality/many
 :db/isComponent true
 :db.install/_attribute :db.part/db}
{:db/id #db/id[:db.part/db]
 :db/ident :activity/author
 :db/valueType :db.type/ref
 :db/cardinality :db.cardinality/one
 :db.install/_attribute :db.part/db}
{:db/id #db/id[:db.part/db]
 :db/ident :activity/at
 :db/valueType :db.type/instant
 :db/cardinality :db.cardinality/one
 :db/index true
 :db.install/_attribute :db.part/db}#2020-02-1913:09asierand the code that crashes is this:#2020-02-1913:09asier(sort-by :at
         (mapv #(get-activity-data %) (:lock/activities lock)))
#2020-02-1913:10asierIs there a simpler way to get activities of a lock sorted?#2020-02-1913:12Joe Lane@asier does it crash if you just call (mapv #(get-activity-data %) (:lock/activities lock)) without sorting?#2020-02-1913:13asieryes#2020-02-1913:14Joe LaneSo, the issue isn't sorting then, maybe the issue is eagerly realizing a million+ entities in memory at once?#2020-02-1913:14favilaDo you need all results here? It seems like you simply can’t fit all activities in memory; what are you willing to give up?#2020-02-1913:15asierwe just need the newest 100 activities, but we don't know how to do it.#2020-02-1913:16Joe LaneCan you use the :avet index?#2020-02-1913:16favilaWould a time bound instead of a count bound be acceptable?#2020-02-1913:18favilaIf not, I think you need to rearrange your schema a bit so you can make a composite attr sorted how you want#2020-02-1913:20asierThanks both - I'll investigate further#2020-02-1913:20favila(Or, could you throw more ram at it)#2020-02-1913:21asierthat's an option, indeed#2020-02-1914:25asierWith this code we don't need to increase the memory:#2020-02-1914:26asier(sort #(compare (:activity/at %2)
                (:activity/at %1))
      (:lock/activities (or (d/entity db [:lock/id lock-id])
                            (d/entity db [:lock/serial-number lock-id]))))#2020-02-1914:32favilahow is this different from your get-activity-data? (Which I now realize you never showed us)#2020-02-1914:43asier(defn get-activity-data
  "Gets the attributes from entity"
  [activity]
  (merge
   {:at (to-long (:activity/at activity))
    :name (-> activity
              :activity/kind
              :activity-kind/name)
    :desc (-> activity
              :activity/kind
              :activity-kind/desc)
    :status (-> activity
                :activity/kind
                :activity-kind/status)
    :image (->> activity
                :activity/kind
                :activity-kind/image
                (str "assets/"))}
   (if (:activity/author activity)
     {:author (some-> activity :activity/author :user/username)}
     {:author nil})))#2020-02-1916:08favilaAh, I see, you were building full result sets, and you couldn’t fit that in memory. but you can fit the things you sort by#2020-02-1916:08favilaconsider using a query instead of the entity API?#2020-02-1916:11favilae.g.
(->> (d/q '[:find ?activity ?at
            :in $ ?lock
            :where
            [?lock :lock/activities ?activity]
            [?activity :activity/at ?at]]
          db lock-eid)
     (sort-by peek #(compare %2 %1)) ; descending, so the newest come first
     (into []
           (comp
            (map first)
(take 100))))#2020-02-1916:13favilastill realizes everything you sort on, but should be lighter than using entity sets#2020-02-2013:03asierI take note - thanks!#2020-02-1914:46asierold code - from 2015 or so#2020-02-1916:00BrianHello! I'm trying to upgrade my system for the first time. I'm running a solo topology in Datomic Cloud. I've selected my root stack and used the formation template https://s3.amazonaws.com/datomic-cloud-1/cft/589-8846/datomic-storage-589-8846.json which I found from https://docs.datomic.com/cloud/releases.html however the stack update has failed with the status reason of Export with name XXX-MountTargetSecurityGroup is already exported by stack XXX-StorageXXX-XXX . I have never updated my system since I created it in August. Any guidance would be appreciated!#2020-02-1916:07Joe LaneHave you looked at https://docs.datomic.com/cloud/operation/upgrading.html and https://docs.datomic.com/cloud/operation/split-stacks.html ?#2020-02-1916:12BrianI was using the upgrading.html page but I was not using a split stack system. Let me try splitting the stacks and see if the problem persists after that. Thanks @lanejo01#2020-02-1916:58BrianWhere should I go from here? My system is now down with the ec2 instance terminated so some part of that delete worked#2020-02-1917:00Joe LaneI'd open a ticket at this point, sorry I can't be more helpful ATM.#2020-02-1917:00BrianNo problem. Thanks for the help!#2020-02-1917:55BrianI resolved the above issue by navigating to the ENIs page and deleting the ENIs manually#2020-02-1918:14Joe Lane@brian.rogers Did you then split the stack and upgrade?#2020-02-1918:14BrianCurrently in the process of doing so!#2020-02-1918:14BrianHave not yet finished#2020-02-1918:14Joe LaneGreat to hear :+1:#2020-02-1921:36marshallANN:
Datomic CLI Tools 0.10.81 now available:
https://forum.datomic.com/t/datomic-cli-0-10-81-now-available/1363
Check out the video overview: https://docs.datomic.com/cloud/livetutorial/clitools.html#2020-02-2006:42tatutdoes it need some configuration? I'm getting Error building classpath. Could not find artifact com.datomic:tools.ops:jar:0.10.81 in central () when trying to run datomic command#2020-02-2008:14maxtI get the same#2020-02-2008:18maxtIf I add the datomic cloude s3 repo to deps I can get it to work
:mvn/repos {"datomic-cloud" {:url ""}}#2020-02-2015:20marshallIt should have been on Maven central; we are looking again - it should show up there soon if we need to re-release#2020-02-2016:59timcreasyLooks like it’s up now :+1: I had been hitting this same issue.#2020-02-2008:10maxt@marshall Thank you! I especially like the log list command. Would those commands also be available to call from a repl? That would be my prefered way of working. Then I don't need to have another window open, I can save some startup time, and I don't have to parse text to process the output further.#2020-02-2015:19maxtTurns out using it from the REPL works great
(require 'datomic.tools.ops)
(datomic.tools.ops.cloud/list-systems {})
(datomic.tools.ops.system/list-instances {:system "example"})
(datomic.tools.ops.log/events {:group "example"
                               :minutes-back 10
                               :tod (java.util.Date.)})
#2020-02-2011:28pezSomeone else tried to deploy on-prem in AWS eu-north-1? I get an error Not a supported dynamodb region: eu-north-1 - (You'll never learn).#2020-02-2012:43marshallThe included launch scripts dont currently support that region. You can likely provision and launch manually there. I will also look into adding support for the region in an upcoming release#2020-02-2012:50pezIn our case we get the error message when the peer is trying to connect, so this seems to go deeper than the launch script.#2020-02-2014:18marshallah; i think i know what the issue is; one minute#2020-02-2014:25marshallI believe I have a workaround that will work for you.
In your transactor properties file, change the protocol to ddb-local:
protocol=ddb-local
Then comment out the aws-dynamodb-region line:
#aws-dynamodb-region=
Finally, set the aws-dynamodb-override-endpoint to the address of the DDB endpoint:
aws-dynamodb-override-endpoint=
The use of ddb-local as the protocol will allow the system to honor the override configuration.
Similarly, you will need to use the ddb-local URI for your peer:
#2020-02-2014:31pezThanks a lot! Right now we're moving things back to eu-central-1, but we might try this again later this week. Will let you know if we do and how we fare, if so.#2020-02-2014:32marshallI will also add a feature request for region support for that region#2020-02-2014:33pezThat would certainly help us. We will probably stick with on-prem on AWS for a while, and we serve only Sweden.#2020-02-2014:48marshallhttps://feedback.eu.pendo.io/app/#/case/117834#2020-02-2015:20marshallWe don’t currently support running the tools from a REPL; caveat emptor so to speak#2020-02-2016:21uwoWhen overriding the default table name in a metaschema, is that name then used to match with Datomic attributes and determine the columns for the associated table?
(https://docs.datomic.com/cloud/analytics/analytics-metaschema.html#name-option)#2020-02-2022:09uwojust to follow up, the answer looks to be no. As in the name-opt isn't used to match attributes whose munged namespace would match. Need to use :include to capture them. (let me know if I'm missing something!)#2020-02-2016:42mdhaneyI’m trying to automate my Ion deployments with a Github workflow, but I can’t figure out how to handle polling for the deployment status. I was wondering if anyone else has done this, or even with a different CI tool how you handled the polling.#2020-02-2017:22maxtI'm doing it on circle CI. This is my deploy function
;; Inspired by
(defn ions-release
  "Do push and deploy of app. Supports stable and unstable releases. Returns when deploy finishes running."
  [{:keys [group] :as args}]
  (try
    (let [push (requiring-resolve 'datomic.ion.dev/push)
          deploy (requiring-resolve 'datomic.ion.dev/deploy)
          deploy-status (requiring-resolve 'datomic.ion.dev/deploy-status)]
      (println "Pushing" args)
      (let [{:keys [dependency-conflicts deploy-groups] :as push-data} (push args)]
        (assert (contains? (set deploy-groups) group) (str "Group " group " must be one of " deploy-groups))
        (let [delay-between-retries 1000
              deploy-args (merge (select-keys args [:creds-profile :region :uname :group])
                                 (select-keys push-data [:rev]))
              _ (println "Deploying" deploy-args)
              deploy-data (deploy deploy-args)
              deploy-status-args (merge (select-keys args [:creds-profile :region])
                                        (select-keys deploy-data [:execution-arn]))]
          (when dependency-conflicts
            (clojure.pprint/pprint dependency-conflicts))
          (println "Waiting for deploy" deploy-status-args)
          (loop []
            (let [status-data (deploy-status deploy-status-args)]
              (if (= "RUNNING" (:code-deploy-status status-data))
                (do
                  (print ".")
                  (flush)
                  (Thread/sleep delay-between-retries)
                  (recur))
                (do (println)
                    status-data)))))))
    (catch Exception e
      {:deploy-status "ERROR"
       :message (.getMessage e)})))
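For reference, a hypothetical CI entry point for the function above. The region, group, and uname values are invented, and it assumes the deploy status tool reports "SUCCEEDED" as its successful terminal state:
```clojure
;; Sketch of a CI step: run the release and fail the build unless the
;; deploy reached a successful terminal state. All argument values are
;; placeholders for your own system.
(let [{:keys [deploy-status code-deploy-status] :as result}
      (ions-release {:region "us-east-1"
                     :group  "my-compute-group"
                     :uname  "ci-build-1234"})]
  (if (or (= "ERROR" deploy-status)
          (not= "SUCCEEDED" code-deploy-status))
    (do (println "Deploy failed:" result)
        (System/exit 1))
    (println "Deploy succeeded.")))
```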
#2020-02-2016:58NicoI have an attribute that has a cardinality of many, how do I test in a query that all entries in it aren't equal to a certain thing?#2020-02-2016:59NicoI can do [?e :tags ?t] [(not= :tags [whatever])] but that returns items that have the tag I don't want, because they also have other tags#2020-02-2017:08NicoI just realised not clauses were a thing, but that still doesn't completely solve the problem#2020-02-2017:09favilaIf you are using on-prem, call a function that checks; this is very tedious in pure datalog#2020-02-2017:11favilaa pure datalog solution will be a variation of this: https://stackoverflow.com/questions/43784258/find-entities-whose-ref-to-many-attribute-contains-all-elements-of-input/43808266#43808266#2020-02-2017:12Nicoah ok, thanks#2020-02-2017:16favilaAn on-prem function implementation looks something like this:
(defn not-any-matching-eav? [db e a test-v]
(zero? (->> (datomic.api/datoms db :eavt e a test-v)
(bounded-count 1))))#2020-02-2019:11shaunxcodeis there any way with the pull syntax to indicate you only want the single value (v.s. a collection containing one value). With query we can do [:find ?x . :in ....] is there something similar (only thing I could find in docs is ability to indicate limit). e.g. what about the case where you know there is only one term e.g. [:person/id [{:person/_child [:person/id]} :as :person/parent]] and I do not want :person/parent to be "boxed"? Like I would like the result to be [{:person/id :x :person/parent {:person/id :y}}] not [{:person/id :x :person/parent [{:person/id :y}]}]#2020-02-2023:17csmI’m thinking of using a tuple of two instants to represent a validity period, and a query would look something like [:find ?cap :in $ :where [?cap :capability/period ?period] [(ground (java.util.Date.)) ?now] [(first ?period) ?starts] [(second ?period) ?ends] [(< ?starts ?now)] [(< ?now ?ends)]]. Is that an appropriate use of tuples? Is first and second the right way to pull those values out?#2020-02-2100:21shaunxcodeyou might consider [... :where ... [(untuple ?period) [?starts ?ends]] ...]#2020-02-2101:10csmthat’s exactly what I wanted… the docs for untuple confused me#2020-02-2110:28tatutI'm trying to run tests in github actions (just clojure -A:test command) and it seems it doesn't find datomic jars (I have the s3 releases bucket as a repo)#2020-02-2110:30tatutspecifically the com.datomic/ion {:mvn/version "0.9.35"} can't be found#2020-02-2113:45maxtMy first guess would be missing aws credentials. You need to be authed to fetch from that repo.#2020-02-2113:59tatutI don't think our codebuild in aws is authed, it's just in the n. virginia region#2020-02-2117:10m0smithIn Datomic cloud, is it possible to do a dry run of a retractEntity. 
That is, have it return what it would retract but not actually do the retraction?#2020-02-2117:28Joe Lane@m0smith use https://docs.datomic.com/client-api/datomic.client.api.html#var-with and with-db#2020-02-2117:33m0smithThanks @lanejo01! That did the trick#2020-02-2118:15jarethttps://forum.datomic.com/t/datomic-cloud-616-8879/1364#2020-02-2123:03dvingodatomic cloud question: We are attempting to set up a new cloudformation stack of an existing system (existing ions code, datomic schema, and data)
The stack is set up, the schema is transacted, and we're attempting to confirm that data is loaded. When executing a query via HTTP through API Gateway, an exception is thrown. It appears to happen on the invocation of datomic.client.api/connect.
The relevant stack trace lines are:
[
"datomic.anomalies$throw_if_anom",
"invoke",
"anomalies.clj",
113
],
[
"datomic.client.impl.local.Client",
"connect",
"local.clj",
192
],
[
"datomic.client.api$connect",
"invokeStatic",
"api.clj",
133
],
and
"Cause": "Supplied AttributeValue is empty, must contain exactly one of the supported datatypes (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: JF41UQTURAKQ0ITO8LFP94LBEVVV4KQNSO5AEMVJF66Q9)"
}
}
},
"At": [
"datomic.anomalies$throw_if_anom",
"invokeStatic",
"anomalies.clj",
119
]
The config map supplied to connect is read from Amazon SSM parameters:
(defn cloud-db-config []
  {:server-type :ion
   :region (ssm/get-ssm-param "region")
   :system (ssm/get-ssm-param "system")
   :endpoint (ssm/get-db-endpoint)
   :timeout (ssm/get-ssm-param "timeout")
   :proxy-port 8182})
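For context, a map of that shape is typically consumed like this (a sketch; the db name and var names are invented, not from the post):
```clojure
;; Sketch: build a client from the :server-type :ion config map and connect.
;; "my-db" is a placeholder database name.
(require '[datomic.client.api :as d])

(def client (d/client (cloud-db-config)))
(def conn   (d/connect client {:db-name "my-db"}))
(def db     (d/db conn))
```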
I believe these values are all present in SSM (I have to debug through an ops team member due to restricted environment access... so taking their word for it)
We confirmed that we can query the database from a REPL connection by passing a manually constructed db-config map.
I'm wondering if anyone has seen something like this before or if there is something obvious that I'm overlooking.#2020-02-2200:05dvingosooooo. redeploying the ion app seems to have magically fixed things#2020-02-2217:32cobyHey folks, Datomic n00b here. I'm working inside a simple demo app generated with lein new luminus luminus-example +datomic. (It generated the db-core ns called below, where my conn lives.) I can run queries just fine but can't transact. When I call this code:
(defn create-post! [{:keys [type title slug content]}]
  (let [id (java.util.UUID/randomUUID)
        type (or type :page)
        title (or title (str "Page " id))
        slug (or slug (title->slug title))
        content (or content [])]
    (d/transact db-core/conn {:tx-data [{:post/id id
                                         :post/type type
                                         :post/slug slug
                                         :post/title title
                                         :post/content content}
                                        {:db/add "datomic.tx"
                                         :db/doc "create post"}]})))
(create-post! {})
I get:
Execution error (ClassCastException) at datomic.api/transact (api.clj:96).
class clojure.lang.PersistentArrayMap cannot be cast to class java.util.List (clojure.lang.PersistentArrayMap is in unnamed module of loader 'app'; java.util.List is in module java.base of loader 'bootstrap')
Am I doing something obviously wrong? Which map is it trying to cast to a list?#2020-02-2217:53markaddlemanAre you using Datomic Cloud or on-prem? I believe the APIs for those two are a bit different.
Specifically, the second argument to transact is a map in Cloud but I do not think it is in on-prem#2020-02-2219:23cobyI'm using on-prem, following this tutorial which is passing a map:
https://docs.datomic.com/on-prem/tutorial.html#2020-02-2223:05ghadiYou should write a function that outputs the tx-data so that you can inspect it before transacting#2020-02-2223:05ghadi try to pull apart things that transact from things that generate tx data#2020-02-2305:25cobyI actually did try that and got what I expected.
(create-post-data {})
;; => [#:post{:id #uuid "b05f2baa-ec01-485c-91cc-abf2f8fe5256",
;; :type :page,
;; :slug "page-b05f2baa-ec01-485c-91cc-abf2f8fe5256",
;; :title "Page b05f2baa-ec01-485c-91cc-abf2f8fe5256",
;; :content []}
;; #:db{:add "datomic.tx", :doc "create post"}]
(create-post-data {:type :page :title "New Page!"})
;; => [#:post{:id #uuid "5665a149-90cf-42ff-bef1-a10d46881a3e",
;; :type :page,
;; :slug "new-page!",
;; :title "New Page!",
;; :content []}
;; #:db{:add "datomic.tx", :doc "create post"}]
(create-post-data {:type :page :title "New Page!" :slug "new-page" :content ["some" "content"]})
;; => [#:post{:id #uuid "82a510c7-8160-4e22-96fd-74d15239ef8b",
;; :type :page,
;; :slug "new-page",
;; :title "New Page!",
;; :content ["some" "content"]}
;; #:db{:add "datomic.tx", :doc "create post"}]#2020-02-2305:43cobyHmm, this seems to be a deeper issue that's not related to create-post! at all. I'm getting it for any call to transact.
(d/transact db-core/conn
            {:tx-data
             [[:db/add
               [:post/id #uuid "b2e32ece-c6b6-4936-ba39-d24f717dcd4d"]
               :post/slug "updated-slug"]]})
;; => Execution error (ClassCastException) at datomic.api/transact (api.clj:96).
;; class clojure.lang.PersistentArrayMap cannot be cast to class java.util.List (clojure.lang.PersistentArrayMap is in unnamed module of loader 'app'; java.util.List is in module java.base of loader 'bootstrap')#2020-02-2312:09andrewzhurovdoes this hit the spot?
https://stackoverflow.com/a/53360679#2020-02-2318:14cobyyep, that sounds like exactly my issue. So here's my understanding:
• running bin/run -m datomic.peer-server ... and connecting per the tutorial uses the Client API (map version)
• My Luminus app is using the "in-process peer library" as described here: https://docs.datomic.com/on-prem/clients-and-peers.html
• Seems like the recommendation is to go with the Client API...which is still in alpha??#2020-02-2219:17currentoorHas anyone used datascript as a write-through-cache for datomic to achieve offline mode?
I’m building a POS, clients are react native iOS. Datascript (persisted in client-side blob store) as a write-through cache seems compelling because then I can write most of my queries in CLJC and re-use them on client and server. And in offline mode I can restrict the POS to only allow accretion of new data, so I don’t have to deal with DB conflicts.#2020-02-2316:20Joe Lane@currentoor From a few days ago, https://clojurians.slack.com/archives/C03RZMDSH/p1582066433438000#2020-02-2316:25currentoorAh thanks, exactly the advice I was hoping for#2020-02-2320:23eagonTrying to puzzle out not-join and when it is necessary. The docs example is:
[:find (count ?artist)
 :where [?artist :artist/name]
 (not-join [?artist]
   [?release :release/artists ?artist]
   [?release :release/year 1970])]
The docs mention this means that ?artist is the only variable unified and the other inside the not-join, namely ?release, is not. Why would a simple (not) clause here not work, if there doesn’t seem to be a ?release to be unified outside the clause anyway? Or is this just some kind of performance thing?#2020-02-2320:25joshkhif i'm reading the docs correctly, one needs to split their solo stack in order to upgrade datomic. are there any disadvantages to doing this (e.g. pricing)? and if not, then why does the solo topology start with a combined stack?#2020-02-2320:44eagon@U0GC1C09L The master combined stack is necessary for the AWS Marketplace integration, but makes operational tasks harder for Datomic, as architecturally the persistent storage stack and the compute nodes are separate. For example, you could take down the compute nodes completely and your underlying storage would be unaffected, allowing you to upgrade either separately or do whatever you want. For solo the system is so small (and has no HA guarantees) so it’s fine to lump everything together, but for production you’ll need to split the stacks (or more correctly “untangle” them because they weren’t meant to be together anyway). There’s no pricing disadvantage to split stacks, the same resources are just described in two cloudformation templates.#2020-02-2320:49joshkh> The master combined stack is necessary for the AWS Marketplace integration
cool, that answers my question. i'm coming from the production->solo perspective for hobby purposes. thanks @UNRDXKBNY#2020-02-2404:33mvIs there a good tutorial or guide I can read for advice on which libraries to choose to make a datomic backed web api?#2020-02-2407:42dmarjenburghAfter ionizing your request-handler, it’s not really different from building a web api with ring/pedestal.#2020-02-2407:44dmarjenburghThis was assuming datomic-cloud…#2020-02-2408:44mkvlrI recently came across https://blog.acolyer.org/2019/06/17/towards-multiverse-databases/ and found it quite interesting. The central idea behind multiverse databases is to push the data access and privacy rules into the database itself. The database takes on responsibility for authorization and transformation, and the application retains responsibility only for authentication and correct delegation of the authenticated principal on a database call. Such a design rules out an entire class of application errors, protecting private data from accidentally leaking. Is anybody aware of a similar thing being tried for datomic?#2020-02-2408:48mkvlr@U4YGF4NGM and @U09R86PA4: I think it should also solve the security issues when trying to cache parts of datomic locally?#2020-02-2420:39cobyWow, fascinating! I'd guess that probably no one's working on this (would love to be wrong!), but I imagine given the degree of immutability and lazy eval built into Datomic already, it'd have a leg up on any other database software trying to add this in. I could imagine the API being as simple as:
(def db (d/with-context {:user-id uid}))#2020-02-2413:20Arek FlinikHas anybody tried running Datomic on top of MSSQL? (please don’t blame me, talking with a potential enterprise customer that insists on doing that because “based on their experiences PostgreSQL is a terrible choice” 🙄)#2020-02-2416:48favilaI’ve run it on top of mysql. You’ll be fine#2020-02-2416:49faviladatomic uses sql as a key-value store for binary blobs#2020-02-2416:49favilaoptimize your tables and schema for that workload#2020-02-2414:48marshall@aflinik Many of our customers run on SQL Server#2020-02-2420:36Arek FlinikThanks! Would be able to share some insights about potential pitfalls, differences in performance characteristics, or any other learnings?#2020-02-2515:03marshallGenerally speaking most SQL stores are fairly reliable
We have many customers using postgres, SQL Server, and Oracle
Like most Datomic storage options, the most common issues are usually general misconfiguration of storage itself. If you’re comfortable running the storage and/or have a good DBA who knows it well, they all perform fairly similarly#2020-02-2417:59arohnerAre in-memory databases still supported for testing purposes? I’m having trouble finding docs on how to set that up#2020-02-2418:32favilaIf you’re talking about on-prem, yes definitely. If you’re talking about cloud AFAIK it has never supported in-memory? (why say “still supported”?)#2020-02-2418:11arohnerIn production I plan to use datomic client against an on-prem transactor. What is the best way to write tests against that?#2020-02-2418:12arohnerTo get the in-memory database it looks like the app needs to include the full datomic-pro dependency#2020-02-2419:10favilaCognitect’s official recommendation is “use random database names and an aws connection for testing”#2020-02-2419:10favilaThis thing also exists, but requires datomic-pro as you noticed: https://github.com/ComputeSoftware/datomic-client-memdb#2020-02-2514:49joshkhmy Ions deployments to a specific query group started failing today due to an error reported by the BeforeInstall script: There is insufficient memory for the Java Runtime Environment to continue. has anyone else experienced this?#2020-02-2514:50joshkhthe project deployed to the query group has no problem running queries. it's the deployment itself that fails.#2020-02-2515:02marshall@joshkh what size instance is the query group?#2020-02-2515:13joshkht3.medium#2020-02-2515:14marshallhttps://docs.datomic.com/cloud/ions/ions-reference.html#jvm-settings
the t3.medium instances only have 2582m heap#2020-02-2515:15joshkhi did start playing with Datomic Analytics yesterday, although i'm using my main compute group for that. still, could that affect an unrelated query group?#2020-02-2515:16marshallshouldn't
the analytics server itself runs on the gateway instance and sends queries to whatever group you’ve configured (or defaults to the primary compute)#2020-02-2515:17joshkhright, that's what i thought. okay, we can look into increasing our instance size. thanks @marshall.#2020-02-2515:18joshkhjust curious though - wouldn't the heap have more of an effect on a running project? this happens when i initiate a deployment, which fails almost immediately.
LifecycleEvent - BeforeInstall
Script - scripts/install-clojure
[stdout]Clojure 1.10.0.414 already installed
Script - sync-libs
[stderr]OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000ee000000, 32505856, 0) failed; error='Cannot allocate memory' (errno=12)
[stdout]#
[stdout]# There is insufficient memory for the Java Runtime Environment to continue.
[stdout]# Native memory allocation (mmap) failed to map 32505856 bytes for committing reserved memory.
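An editorial aside, since the log above ends at errno=12: before cycling an instance it can be worth confirming the memory pressure directly (for example over SSM or SSH). A minimal sketch using standard Linux tools; nothing here is Datomic-specific:

```shell
# Overall memory; a very low 'available' figure means new allocations
# (like the JVM's 32MB mmap in the log above) really will fail.
free -m

# The largest resident processes, to see what is holding the memory.
ps -eo rss,comm --sort=-rss | head -n 5
```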
#2020-02-2515:18marshallhave you tried cycling the instance?#2020-02-2515:19marshallthat looks like a wedged instance#2020-02-2515:19marshallif it can’t allocate 32M#2020-02-2515:22joshkhwe tried autoscaling a second instance which came up just fine. then we tried to redeploy our code to fix the wedged instance, but the deployment failed due to a 120s sync libs error#2020-02-2515:23marshallwhat version are you running?#2020-02-2515:23joshkhoops, incoming edit 😉 above#2020-02-2515:24marshallyou should update your ion-dev version https://docs.datomic.com/cloud/releases.html#ion-dev-251#2020-02-2515:24joshkhyes, i was very excited to see that!#2020-02-2515:24marshallthen cycle your instance(s)#2020-02-2515:25joshkhwill do, thanks Marshall#2020-02-2516:24kennyI am getting this exception ~1/day. Any idea why this would occur?
clojure.lang.ExceptionInfo: Datomic Client Exception
{:cognitect.anomalies/category :cognitect.anomalies/fault, :http-result {:status 500, :headers {"content-length" "32", "server" "Jetty(9.4.24.v20191120)", "date" "Sun, 23 Feb 2020 17:08:37 GMT", "content-type" "application/edn"}, :body nil}}
at datomic.client.api.async$ares.invokeStatic (async.clj:58)
datomic.client.api.async$ares.invoke (async.clj:54)
datomic.client.api.sync.Client.list_databases (sync.clj:71)
datomic.client.api$list_databases.invokeStatic (api.clj:112)
datomic.client.api$list_databases.invoke (api.clj:106)
compute.db.core.DatomicClient.list_databases (core.cljc:71)
datomic.client.api$list_databases.invokeStatic (api.clj:112)
datomic.client.api$list_databases.invoke (api.clj:106)#2020-02-2516:25ghadiis that the full stacktrace? what was the user code that caused it?#2020-02-2516:26kennyNo but it's the only relevant part. It's caused by datomic.client.api$list_databases#2020-02-2516:27kennyThis is line 71 in compute.db.core:
(let [dbs (d/list-databases client arg-map)]
#2020-02-2516:29ghadinot sure, but you should try to correlate it with logs in cloudwatch#2020-02-2516:30ghadihttps://docs.datomic.com/cloud/operation/cli-tools.html#log#2020-02-2516:30ghadiBTW ^ new Datomic CLI tools#2020-02-2516:35kennyIt looks nice but will require a couple things to happen before we can update.#2020-02-2516:39ghadiyou appear to be on the latest 616 compute#2020-02-2516:40ghadiyou can use the datomic cli tools fine with that#2020-02-2516:36kennyDon't use CW logs often. It felt like a battle to get to the logs I wanted 😵 Should I upload them here? There's 2 relevant lines.#2020-02-2516:37kenny#2020-02-2516:37kenny#2020-02-2516:37kennyDidn't look like any sensitive info so added them to the thread there ^#2020-02-2516:38ghadithanks -- that is probably useful to @marshall. Need your datomic compute stack version # too#2020-02-2516:40kennyDatomicCFTVersion: 616
DatomicCloudVersion: 8879#2020-02-2516:40ghadithanks#2020-02-2516:38ghadiseems like a server side bug from the stacktrace#2020-02-2516:39kennyIt's weird how it happens so infrequently.#2020-02-2518:08jaretKenny, I am logging a case to capture this so we can look at it. I think we have everything we need, but wanted to let you know in case you see an e-mail come your way from support.#2020-02-2518:13kennyGreat, thanks. #2020-02-2516:47uwoI'm setting Xmx and Xms when running the (on-prem) peer-server. I just noticed that it appears to start with its own settings for those flags.
CGroup: /system.slice/datomic.service
├─28220 /bin/bash /var/lib/datomic/runtime/bin/run -Xmx4g -Xms4g ...
└─28235 java -server -Xmx1g -Xms1g -Xmx4g -Xms4g -cp ...
Should I be setting those values through configuration elsewhere?#2020-02-2517:02uwoAh, I see where they're hard coded into the bin/run script. Perhaps I don't need to treat the peer server like other app peers? (Repeated flag precedence may differ across versions/distros.)#2020-02-2521:52hadilsI am developing an application that requires storage of millions of transactions and I am concerned about the limitations of Datomic Cloud. Should I use a separate store (e.g. DynamoDB) for the transactions or are there ways to scale Datomic Cloud? Welcome any feedback anyone might have...#2020-02-2522:14Joe Lane@hadilsabbagh18 Which limitations are you concerned about? What do you mean "Ways to scale Datomic Cloud?" ?#2020-02-2522:26hadilsI am concerned about storage for now. #2020-02-2522:27hadils@lanejo01 #2020-02-2522:27ghadi@hadilsabbagh18 millions of transactions is fine with Datomic Cloud. (Disclaimer: I work for Cognitect, but not on Datomic) If you really want to do it right, you'll want to estimate the cardinality of your entities, relationships, and estimate the frequency of change... all in all you need to provide more specifics#2020-02-2522:28ghadiWith the disclaimer that there are no application specifics, DynamoDB has very very rudimentary query power#2020-02-2522:38hadils@ghadi -- I will make an estimate. Who should I be talking to about this?#2020-02-2522:39ghadi@marshall is a good person to talk to#2020-02-2522:39hadils@ghadi Thanks a lot. I will be more specific when talking to @marshall.#2020-02-2522:39ghadino problem#2020-02-2523:42steveb8nQ: has anyone used aws x-ray inside Ions? I want to do this (i.e. add sub-segments) but I’m wary of memory leaks etc when using the aws client in Ions. Any war stories or success stories?#2020-02-2523:49Sam DeSotaIs it possible to pass a temp id to a tx-fn and resolve the referenced entity in the tx-fn? Example:
(defn my-tx-inc [db ent attr]
(let [[value] (d/pull db {:eid ent :selector [:db/id attr]})]
[:db/add (:db/id value) attr (inc (attr value))]))
{:tx-data [{:db/id "tempid" :product/sku 1} '(my-ns/my-tx-inc "tempid" :product/likes)]}#2020-02-2523:52favilaNo. You must treat tempids as opaque. What they resolve to is unknowable until all datoms are expanded#2020-02-2523:53Sam DeSotaGot it. Thank you.#2020-02-2523:53favilaFor example some other tx fn may return something that asserts the tempid has a particular upserting attribute. That changes how it would resolve #2020-02-2523:50Sam DeSotaAs is, this doesn't seem to work#2020-02-2523:50ghadi@steveb8n I’ve done them#2020-02-2523:51ghadiYou need the xray sdk for Java but not the aws-sdk for java auto instrumenter#2020-02-2523:52steveb8ngreat. not the Cognitect aws client? Just aws java interop?#2020-02-2523:52ghadiInterop#2020-02-2523:53ghadiKeep in mind the amazon xray sdk for java is a completely separate sdk than the aws java sdk (not a subset)#2020-02-2523:53steveb8nok. I’ll give that a try. is there a sample snippet out there somewhere? I don’t need one but it seems like a good thing for docs#2020-02-2523:54ghadiNo, sorry, but the aws docs were accurate#2020-02-2523:54ghadiAnd helpful#2020-02-2523:54steveb8nok, good info. thanks#2020-02-2600:19csmone can (and should) split stacks in a solo cloud install, correct? And recreate the storage stack and solo compute stack?#2020-02-2612:11vemvHi. I use datomic on-prem (via the Peer API) with a number of Datomic installations (think production, staging etc)
In most of them everything is OK, but for one of them, query latency is consistently high; 1500ms, for the simplest, cheapest possible query.
Indices are in place. We also tried to rule out basic stuff (configuration/environment differences, etc)
What are some possible causes, or things we can do to troubleshoot this?#2020-02-2612:12vemvIn case it helps:
how come the latency is high every time?
Given the Datomic architecture (AIUI), queries should be in-memory, with updates being pushed over the wire.
So after a slow first query, subsequent queries should be fast... but nope.#2020-02-2615:21marshall@vemv what is different about the one that is high latency?#2020-02-2615:22vemvWe haven't noticed any significant difference - DB size, indices, infrastructure details#2020-02-2703:57mavbozo@vemv how many peers you have? is it just 1 peer - 1 transactor - 1 storage ?#2020-02-2703:58mavbozomaybe in staging you have only 1 peer but in production you have multiple peers and 1 of the peers has high latency?#2020-02-2707:37vemvWe have:
1 Peer Lib per webapp instance
2 transactors
1 postgresql DB#2020-02-2707:38vemv> maybe in staging [...]
All our environments are alike in terms of size, topology etc#2020-02-2616:51dvingoI noticed in on-prem there is an optional :db/index attribute (https://docs.datomic.com/on-prem/schema.html#operational-schema-attributes) but I don't see one in the cloud docs. Are all attributes indexed in cloud?#2020-02-2617:00dvingoI don't see it explicitly stated in the docs, just alluded to here:
"Datomic datalog queries automatically use multiple indexes to support a variety of access patterns" (https://docs.datomic.com/cloud/whatis/data-model.html)
The differences page doesn't mention it other than the full-text difference:
https://docs.datomic.com/on-prem/moving-to-cloud.html#text-search#2020-02-2617:01ghadiall attributes are indexed in cloud#2020-02-2621:07eagonAre lookup refs not valid values for transactions for ref attributes in cloud? As per https://blog.datomic.com/2014/02/datomic-lookup-refs.html Similarly, they can be used in transactions to build ref relationships. but I get “entity not resolved” with the full tuple as an error in cloud#2020-02-2621:14ghadiwhat did you try and what did it error?#2020-02-2622:25eagon@ghadi
(d/with with-db {:tx-data [{:space/uuid #uuid "some-space-uuid"
:space/devices [:device/id "some-device-id"]}]})
:space/devices is cardinality many, type ref. I realized the issue with this version is the cardinality many means that the vector is interpreted as a vector of refs, so i’m actually pointing to the ident :device/id and an unknown tempid.
I then tried another version:
(d/with with-db {:tx-data [{:space/uuid #uuid "some-space-uuid"
:space/devices [[:device/id "some-device-id"]]}]})
And this time after querying the database and looking at the committed transactions it seemed like nothing changed. Solved this issue by splitting the reference into temp ids as mentioned (coincidentally) in the latest conversation here, but was just wondering if it’s possible to use lookup refs as the value in ref type attributes.#2020-02-2622:26ghadiwhat error did you get?#2020-02-2622:27ghadiand what is the schema definition of :space/uuid and :space/devices?#2020-02-2622:28eagon:space/uuid is identity, type uuid
:space/devices is type ref, cardinality many
No error on the second one, just nothing updated (looking at the after-db, the intended space->device datom was not found)#2020-02-2622:29eagonFirst one errored
tempid 'd5f2962bd37b24c1c7cb076b9053ae77' used only as value in transaction
which made sense per above reasoning, I then added another pair of square brackets to intend it as a lookup ref#2020-02-2622:32eagon#:db{:ident :space/devices,
:valueType :db.type/ref,
:cardinality :db.cardinality/many}
#:db{:ident :space/uuid,
:valueType :db.type/uuid,
:cardinality :db.cardinality/one,
:unique :db.unique/identity}#2020-02-2622:35ghadithat second try seems like it should have worked....#2020-02-2622:35ghadihang on to the transact return value if you try it again#2020-02-2622:56eagonTurns out another false alarm, thanks for your time! Am I correct to assume that datomic will automatically infer the meaning of a vector based on both schema ident cardinality and type? Got a little confused from how overloaded the square brackets were#2020-02-2621:27marshall@eagonmeng what version of cloud?#2020-02-2622:10eagon@U05120CBV Hmm, was there a rollback for the released version of cloud? I’m on DatomicCFTVersion 616, which I believe was just released recently on 2/21 (https://webcache.googleusercontent.com/search?q=cache:cWl9SuBuZeMJ:https://docs.datomic.com/cloud/releases.html+&cd=1&hl=en&ct=clnk&gl=us) but it’s mysteriously disappeared off the main page (https://docs.datomic.com/cloud/releases.html)#2020-02-2622:15marshallNo thats a doc issue. Will fix#2020-02-2622:46marshallfixed#2020-02-2621:28marshallnote: https://docs.datomic.com/cloud/releases.html#569-8835
• (Fix: resolve tempids for reference attributes inside tuples.)#2020-02-2621:29marshallhowever, you should move to the latest; it includes a fix for a different regression in that version#2020-02-2622:04hadilsI need to write to several different "schemas" in a single Datomic transaction, e.g., phone numbers, emails accounts, person information and address information. All of these write create eids which are then stored in a customer. I need to do this atomically, but I cannot figure out how to recover eids from within a transaction function. Any ideas?#2020-02-2622:06shaun-mahood@hadilsabbagh18 Do you need a transaction function, or can you use tempids (https://docs.datomic.com/cloud/transactions/transaction-processing.html#tempid-resolution)?#2020-02-2622:07ghaditransaction functions don't resolve tempids#2020-02-2622:08ghaditransaction functions (the ones that run inside datomic) return transaction data, which is later committed#2020-02-2622:09hadils@ghaid @shaun-mahood Understood. How do I create such a transaction atomically?#2020-02-2622:09ghadiif you need to transact something compound in the same Datomic transaction, you just give it to transact:
(d/transact conn {:tx-data [thing1 ... thing2.... thing3]})#2020-02-2622:09ghadithe things can be arbitrary#2020-02-2622:10ghaditransaction functions are something different and should be used to achieve specific purposes (they're like macros that run while the tx is being committed)#2020-02-2622:11hadilsThanks @ghadi. I have a one->many relationship between phone numbers and persons. How would I do that in a single transaction?#2020-02-2622:12ghadiI don't mean to brush off the question, but you should really go through the tutorials#2020-02-2622:12ghadiall of this is covered#2020-02-2622:12ghadiand I'll do a bad job explaining 😃#2020-02-2622:13ghadiyou're using Cloud?#2020-02-2622:13shaun-mahoodI've got an example I can paste in here - will redact it quickly and throw it in a thread (the docs are great, but I sometimes need a few extra examples to figure things out the first time)#2020-02-2622:15shaun-mahood(d/transact conn {:tx-data
[[:db/add "request" :request/date request-date]
[:db/add "client" :client/name client-name]
[:db/add "job" :job/name job-name]
[:db/add "job" :job/client "client"]
[:db/add "job" :job/request "request"]]})#2020-02-2622:16shaun-mahood@hadilsabbagh18#2020-02-2622:21hadilsThanks!#2020-02-2622:22hadilsThanks!#2020-02-2622:13hadilsThanks @ghadi. I have gone through all the tutorials and the videos.#2020-02-2622:14hadils@shaun-mahood you can email it to me at [email protected]#2020-02-2622:14hadilsif you wish.#2020-02-2622:14ghadihttps://docs.datomic.com/cloud/transactions/transaction-processing.html#tempid-resolution#2020-02-2622:15ghadilinking tempids together is how you assert relationships between things#2020-02-2622:16hadils@ghadi. The problem is that i need to commit data atomically, not how to retrieve tempids.#2020-02-2622:16ghadi[{:db/id "customer1"
:customer/name "hadils"}
{:db/id "customer2"
:customer/name "ghadi"
:friends/with "customer1"}]#2020-02-2622:17ghadithings sent in the same tx-data are committed together#2020-02-2622:17ghadiin that example, two new entities are created, and the second entity points to the first entity#2020-02-2622:17ghaditwo tempids "customer1" "customer2" will be resolved into two entity-ids when committed#2020-02-2622:18ghadiyou can send in arbitrary graphs of entities in the same tx#2020-02-2622:18hadils@ghadi. I am with you. So I need to not use tempids then, I need some other reference for a one-many relationship.#2020-02-2622:18ghadi(those are tempids above)#2020-02-2622:19ghadiif you have a unique ID you can send that in instead of the tempid -- but the end result is the same: datomic needs to know what entities you're asserting things about#2020-02-2622:19hadils@ghadi. yes i see that now. is :friends/with a ref value type?#2020-02-2622:19ghadiyes#2020-02-2622:20hadilsthanks. that answers my question.#2020-02-2622:20steveb8nthis got me as well. in cloud, tempids are just strings in the :db/id#2020-02-2622:21ghadihttps://docs.datomic.com/cloud/transactions/transaction-data-reference.html#identifier#2020-02-2622:22ghadithe tx data grammar at the top of that page is useful, along with the examples#2020-02-2622:22shaun-mahoodI think my biggest stumbling blocks in learning Datomic have all been related to how simple it is at the core#2020-02-2622:23hadilsWhat about one to many relationships?#2020-02-2622:25ghadi"cardinality many" attributes can be asserted in a similar way#2020-02-2622:26hadilsOk#2020-02-2622:26ghadi:friends/with [joe hadils...]#2020-02-2622:26ghadiin the transaction data#2020-02-2622:26hadilsThanks a lot @ghadi !#2020-02-2701:50Jon WalchHas anyone seen this?
clojure.lang.ExceptionInfo: entity, attribute, and new-value must be specified
at datomic.client.api.async$ares.invokeStatic(async.clj:58)
at datomic.client.api.async$ares.invoke(async.clj:54)
at datomic.client.api.sync$eval2142$fn__2147.invoke(sync.clj:84)
at datomic.client.api.protocols$fn__14323$G__14277__14330.invoke(protocols.clj:72)
at datomic.client.api$transact.invokeStatic(api.clj:181)
at datomic.client.api$transact.invoke(api.clj:164)
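An editorial aside on the exception above, in case it helps future readers: the built-in `:db/cas` (compare-and-swap) transaction function takes exactly four elements. A hedged sketch of the documented shape (entity id and attribute names are illustrative, not from this thread):

```clojure
;; [:db/cas <entity-id> <attribute> <expected-old-value> <new-value>]
[:db/cas 277076930200558 :account/balance 100N 110N]

;; When the attribute has no current value yet, the expected old value
;; is nil rather than being omitted:
[:db/cas 277076930200558 :account/balance nil 100N]
```

As the conversation goes on to show, the failing form here was a `:db/cas` on a boolean, so the shape alone was not the issue.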
#2020-02-2701:51Jon WalchGoogle is turning up nothing#2020-02-2701:57ghadiWhat transaction data did you send in @jonwalch ?#2020-02-2702:06Jon Walch@ghadi
[{:db/id 1,
:foo/apple 0N,
:foo/processed-time #inst "2020-02-27T02:00:35.561-00:00"}
{:db/id 2,
:foo/apple 0N,
:foo/processed-time #inst "2020-02-27T02:00:35.562-00:00"}
[:db/cas 3 :user/speaker 0N 100N]
[:db/cas 4 :user/speaker 85N 100N]
[:db/cas 5 :bar/running? true false]
{:db/id 5,
:bar/end-time #inst "2020-02-27T02:00:35.569-00:00",
:bar/result? false}]#2020-02-2702:07ghadiDon’t send in your own :db/id, use tempids#2020-02-2702:07ghadiWhich are strings#2020-02-2702:08ghadiWhat version of the compute stack are you running? CAS with a boolean might be problematic- I forget.#2020-02-2702:09ghadiI should clarify: only send in integer :db/ids that Datomic handed you @jonwalch #2020-02-2702:10Jon Walchthats what those are :+1:#2020-02-2702:10ghadi1 is an integer above#2020-02-2702:10Jon Walchtrying to find my datomic cloud version#2020-02-2702:10ghadiDid you print or prn?#2020-02-2702:10Jon Walchyeah I fuzzed the ids#2020-02-2702:11Jon WalchI should've clarified, my bad#2020-02-2702:11ghadiI’m debugging this from my phone, so try to help me out 🙃#2020-02-2702:12ghadiOk if your ids are legit, then take out the boolean CAS and see if you get an error. If you do, I’ll file a report (need your compute stack versions)#2020-02-2702:13Jon WalchDatomicCloudVersion 8812#2020-02-2702:14Jon WalchComputeCFTVersion 535#2020-02-2702:14Jon WalchIs that what you're looking for?#2020-02-2702:14ghadiThanks#2020-02-2702:14Jon WalchChanging the code, will report back in ~10#2020-02-2702:20Jon Walch@ghadi removed the cas, working flawlessly#2020-02-2702:20Jon Walchthank you so much!#2020-02-2702:23ghadiYou might want to upgrade your stack to latest. I feel like I’ve filed this bug before with the crew but I’ll double check#2020-02-2713:01tatutIn datomic cloud 8812 we had a problem with a big transaction creating entities with multiple levels of nested maps... some children ended up on the wrong parent... the same code on newer version worked correctly. I tried looking at release notes but didn't see bug fixes related to that#2020-02-2715:26hadilsGood morning @ghadi @shaun-mahood! I successfully got an complex atomic transaction working this morning! Thanks again for your help!#2020-02-2715:27ghadiwoot! 
thanks for following up.#2020-02-2715:27ghadihopefully the first of many!#2020-02-2715:48souenzzohttps://docs.datomic.com/on-prem/pull.html#as-example
This example is wrong
Should be
[(:artist/name :as "Band Name")]
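Editorial note: the pull grammar in the on-prem docs expresses attribute options as a vector of the attribute plus its options, so the `:as` example may alternatively be written as below (database value, entity, and result shape are illustrative):

```clojure
;; :as renames the attribute's key in the pulled map:
(d/pull db '[[:artist/name :as "Band Name"]] led-zeppelin)
;; result shaped like {"Band Name" "Led Zeppelin"} (illustrative)
```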
#2020-03-0415:12souenzzobump
it is frustrating for newcomers to fail on running doc examples
I feel uncomfortable recommending something when even the thin documentation doesn't work#2020-02-2716:45joshkhapologies for the cross post, but i was wondering if anyone has an answer to this forum post regarding a single transaction of two facts about the same entity resolved by a composite tuple? thanks! https://forum.datomic.com/t/conflict-when-transacting-non-unique-values-for-entities-resolved-by-composite-tuples/1367#2020-02-2810:57jthomsonThat seems like the expected behaviour to me. You're asserting two values for a single-cardinality attribute of one entity, within one transaction. There isn't a concept of "newer" within one transaction, so it's a conflict.#2020-02-2810:58jthomsonIf you split this into two transactions, then of course there would be no problem as you assert abc123 and then abc789#2020-02-2900:21Jon WalchI'm not sure these docs are up to date https://docs.datomic.com/cloud/operation/upgrading.html#compute-only-upgrade#2020-02-2900:22Jon WalchSelect "Specify an Amazon S3 template URL:" and enter the CloudFormation template URL for the version that you wish to upgrade to (see Release page for all versions) then click "Next".#2020-02-2900:22Jon WalchI see
Prerequisite - Prepare template
Prepare template
Every stack is based on a template. A template is a JSON or YAML file that contains configuration information about the AWS resources you want to include in the stack.
Use current template
Replace current template
Edit template in designer
#2020-02-2900:22Jon Walchoh i guess one step is skipped#2020-02-2900:43Jon WalchIs it ever necessary to upgrade the root stack?#2020-02-2909:38joshkhi'm accumulating some general questions about Datomic Analytics. where's the best place to post them? perhaps an Analytics category on http://forum.datomic.com would be useful?#2020-03-0118:42hadilsHi. I have an Lambda ion that needs to invoke another one. I am getting the error message:
User: arn:aws:sts::<ACCOUNT-NUMBER>:assumed-role/stackz-dev2-compute-us-west-2/i-0ff451783095066e5 is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:us-west-2:<ACCOUNT-NUMBER>:function:stackz-dev2-compute-bh-dummy
I have attached lambda:* permission to stackz-dev2-compute-us-west-2 but this does not help. Does anyone have any experience they can share?#2020-03-0118:54hadilsNvm, I figured it out...#2020-03-0201:20introomI am using dynamodb, and am considering switching to datomic.#2020-03-0201:21introomI wonder, if datomic will hurt the write and read performance. It seems datomic relies on some ec2 machine sitting in between the client and dynamodb. won’t that be some bottleneck?#2020-03-0201:22introomother than the versioned history of the entire db and the query flexibility, I wonder what benefits datomic brings to me other than directly using dynamodb.#2020-03-0208:53eagon@i I switched from dynamodb to datomic, and it’s been great. While the underlying storage is indeed dynamo, you actually gain performance, not lose it, because of how queries are cached and how your application logic runs in Ions with direct access to memory. The query flexibility is great, and with Ions you gain a VPC and best practices for architecting an application, as well as easy integration into the rest of AWS with lambda triggers.#2020-03-0208:55introomcost-wise, does datomic incur more read/write requests?#2020-03-0208:56introomI am also concerned that, datomic seems to suffer single-point bottleneck on the ec2 instances. While with vanilla dynamodb, the bottleneck is only on the dynamodb site.#2020-03-0213:48joshkhshould it be possible to have two entities in Datomic with the same :db/ident but different :db/ids?
[{:db/id 111
:db/ident :color/green}
{:db/id 999
:db/ident :color/green}]#2020-03-0213:49ghadino#2020-03-0213:50ghadido you see that in your database?#2020-03-0213:53joshkhi do, and it's causing problems. i have entities referencing what should be the same enumerated value, but are in fact different entities.#2020-03-0213:55favilacould the history DB be involved?#2020-03-0213:55favilahow precisely did you produce those results you posted above?#2020-03-0213:58joshkhnope, history isn't at play. i don't know how or when it happened. we just stumbled upon it today while chasing down a uniqueness conflict in an API.#2020-03-0213:59favilawhat is the result of (d/q '[:find ?e :where [?e :db/ident :color/green]] a-plain-current-db)?#2020-03-0214:01joshkhgood question! the weird thing is that when we query for the ident, we only get one result.#2020-03-0214:01joshkhhowever...
(let [db (client/db)]
[(d/pull db [:*] 55555555555555555)
(d/pull db [:*] 10101010101010101)])
=> [#:db{:id 55555555555555555, :ident :color/green}
#:db{:id 10101010101010101, :ident :color/green}]#2020-03-0214:02favilaare those real entity ids?#2020-03-0214:02favilais this real code?#2020-03-0214:02favilaplease real code only#2020-03-0214:03joshkhthat's real code with the db/ids and db/idents replaced with other values#2020-03-0214:03favilaok, then you should file a support ticket#2020-03-0214:03favila(why the replacement? there’s nothing sensitive about entity ids)#2020-03-0214:03ghadiand buy a lottery ticket at the same time#2020-03-0214:04favilato prepare your ticket, get the full history of datoms for each entity#2020-03-0214:05favilathis is basically “impossible” so you’ve either stumbled on a really amazing bug, or you’re missing something#2020-03-0214:05joshkhregarding sharing db/ids, i'd rather be safe than sorry. i can't think of anything i'd do with one, but you never know. 🙂#2020-03-0214:07favilause real code in your ticket at least. entity id details may matter#2020-03-0214:08joshkhyes of course, always do in support tickets. just not in public channels with work-related db/ids. thanks for your input favila, i'll be sure to include any historical datoms.#2020-03-0216:55souenzzo(let [k1 (keyword "color" "green")
k2 (keyword "color" "green ")]
{:k1 k1
:k2 k2
:pr-str-k1 (pr-str k1)
:pr-str-k2 (pr-str k2)
:equal? (= k1 k2)})
=> {:k1 :color/green,
:k2 :color/green,
:pr-str-k1 ":color/green",
:pr-str-k2 ":color/green ",
:equal? false}#2020-03-0216:56souenzzo"dynamic" keywording is evil 😈#2020-03-0218:41joshkhoh for sure, i've been bitten by that in the past. it was the next thing i checked 🙂
(let [db (client/db)
entity-1 (d/pull db [:*] 12345)
entity-2 (d/pull db [:*] 67890)
ns-name (juxt namespace name)]
[(-> entity-1 :db/ident ns-name)
(-> entity-2 :db/ident ns-name)
(= (:db/ident entity-1) (:db/ident entity-2))])
=> [["color" "green"] ["color" "green"] true]
#2020-03-0218:48joshkhand to add to the mystery, i can't query for either entity.
(d/q '{:find [(pull ?e [*])]
:in [$]
:where [[?e :db/ident :color/green]]}
(client/db))
=> []
anywho, ticket opened 🙂#2020-03-0214:01favilaI want to tease apart two problems 1) these enum attrs are not pointing at the entities I expect 2) there’s actually more than one entity with the same :db/ident value.#2020-03-0214:01favilathere are many ways to cause 1 which won’t cause 2#2020-03-0214:01favilaso let’s rule out 2#2020-03-0214:51daemianmackalso a quick check on the content of the db/ident seems in order — does A’s :color/green value really equal B’s :color/green value?#2020-03-0220:29John ContiAnyone know how to report Datomic documentation bugs? I just found that https://docs.datomic.com/cloud/getting-started/get-connected.html has an error datomic client access system is (I think) supposed to be datomic-access -r <AWS Region> client <Datomic System Name>#2020-03-0222:25joshkhThat looks correct to me, although you should be able to include the region if needed. Are you on the very latest CLI version? They released an update not too long ago. #2020-03-0315:32vlaaadHi! is there a way in a transaction to reference txInstant in the same way we can reference tx with "datomic.tx" ?#2020-03-0315:33souenzzo@vlaaad last time that i searched about it i ended up with [:db/add "datomic.tx" :dummy-attribute ""] 😞#2020-03-0315:37favilaIn the response :tx-data , look for an assertion of :db/txInstant where the e and the tx of the datom are the same#2020-03-0315:38vlaaadI want to save tx instant on an entity inside transaction#2020-03-0315:38favilaI know, but this is for Enzzo#2020-03-0315:38favilaWhat you want you can’t do#2020-03-0315:38vlaaadah#2020-03-0316:35souenzzoI got it wrong . sorry.#2020-03-0315:37vlaaadhuh? I want to save tx instant on an entity#2020-03-0315:39favilathe tx instant is not available to transaction functions.
I’m not sure when the implied tx instant datom assertion is added#2020-03-0315:39favilayou can add it explicitly if you know the txinstant is older than the last tx instant#2020-03-0315:40favilaconsider also referencing the tx instead of copying the instant#2020-03-0315:40vlaaadsomething like [:db/add "my-entity" :my-entity/created-at "datomic.txInstant"] . I just want to have a precise instant because later I might use it in as-of queries#2020-03-0315:40favilawhy do you want the date to match instead of transacting your own instant?#2020-03-0315:40vlaaadWould prefer to be able to do it in a single transaction, so everything is consistent#2020-03-0315:41favilaI’m talking about a single transaction#2020-03-0315:42vlaaadbecause this “my-entity” is a public release version, sort of like a git tag that is then used by consumers to see data at that time#2020-03-0315:42favila{:db/id "my-entity" :my-entity/creating-tx "datomic.tx"} is one option#2020-03-0315:43favilabut you may be mixing domain time vs time of record#2020-03-0315:43alexmillerI don't think you can or should do this? the datom already is in a transaction that will have the txInstant when it's transacted#2020-03-0315:43favila^^^, although ergonomically it’s not accessible as data, only metadata (i.e.
can’t get at it with d/pull)#2020-03-0315:44vlaaadyes, and this is a possibility I’m thinking about as well, but using tx id instead of date as a version will make me expose implementation details#2020-03-0315:44favilayour application would not expose tx id, it would follow the ref and expose the txInstant#2020-03-0315:44vlaaadhmmm#2020-03-0315:45favilahttps://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2020-03-0315:45favilaBasically you want to use the TX metadata as your “domain time” for these entities#2020-03-0315:45favilathat may be warranted, but keep in mind as-of and other history features are not designed to manage domain time#2020-03-0315:46favilaso even the use case of “I have a created-at instant on an entity, I now want to use that to see what the db looked like at that moment” is suspect because it is blending those times#2020-03-0315:47favilathis may be fine if you want this domain time to have the same guarantees as your time-of-record, but in that case you should reference the TX directly (or even better use the TX on the datom directly)#2020-03-0315:48vlaaadthanks @favila you sent me in the right direction :+1:#2020-03-0315:49favilaglad to help. This is a subtlety of datomic’s history features it took me a while to internalize#2020-03-0315:49favilalike everyone else I was eager to use it to manage domain time too#2020-03-0315:50favilainterestingly Crux adds domain time as a first class concept on top of what I’m calling “record” time: https://opencrux.com/#2020-03-0315:51favilaIt makes other different tradeoffs vs datomic, but if time-traveling your domain is really important it’s an option to consider#2020-03-0316:19Daniel MasonHi there, I think I may have run into a bug on datomic-free (using version 0.9.5697)?
I've made a small example of it https://github.com/danmason/datomic-close-query but essentially I was using datomic.api/query with a :timeout (in the example I set the timeout to 1ms, and it behaves in the same way) and it appeared to prevent my application from closing properly? Removing the :timeout from the query-map allowed it to exit fine.#2020-03-0316:26alexmillerdid it close if you wait 1 minute?#2020-03-0316:26alexmillerif so, maybe (shutdown-agents) ?#2020-03-0316:27Daniel MasonI was originally running it on something a bit longer and did include (shutdown-agents) (forgot to add that to my little example, but might be worth a try!) but it did continue running longer than a minute. I'll give that a go on this too, however, and get back to you.#2020-03-0316:31alexmillerwas just a shot in the dark :)#2020-03-0316:32Daniel MasonMhm 🙂 Including (shutdown-agents) , it does continue to run regardless.#2020-03-0317:10magraHi, when an entity got entered twice, what is an idiomatic way to merge these two entities? I have entities A and B and want to change all refs that point to B to point to A and then delete (retract-entity) B. I am not a native speaker and fail to find the right keywords to google for this. Does anyone know a manual entry or blog post that describes that?#2020-03-0320:43stijnwhen I'm executing clj -A:ion-dev "{:op :push}", I'm seeing the following error
{:command-failed "{:op :push :region us-east-1}",
:causes
({:message "Unable to transform path",
:class ExceptionInfo,
:data
{:home "/github/home",
:prefix "/github/home/.gitlibs/libs",
:resolved-coord
{:git/url "
What does this error mean?#2020-03-0415:35alexmillersome Datomic folk are out atm, response might be delayed, but I'll copy into our internal support room#2020-03-0418:54Lucas BarbosaIs there a way to perform a "left anti join" on datomic? I want to find all the entities with a certain attribute whose values is not in a list that I pass in as an argument
For instance, Imagine I have the :order/type attribute, and I want to find all the orders whose :order/type is different than let's say :created and delivered
The argument would be [:created :delivered] , and it could change#2020-03-0419:08favilaif the attribute value you are testing is cardinality-one, the easiest thing IMO is to provide the filter as a set and (not [(contains? ?filter ?v)])#2020-03-0419:09favilaotherwise, you want a negated variant of… this https://stackoverflow.com/questions/43784258/find-entities-whose-ref-to-many-attribute-contains-all-elements-of-input/43808266#43808266#2020-03-0420:07Lucas Barbosathanks#2020-03-0500:00lilactownthis is sort of related to datomic, so I thought I might find people who knew here:
is it possible to statically analyze a query and always know all of the attributes that a query depends on?#2020-03-0505:44favilaNo, as attrs to match can be input, dynamically built, or you can call an arbitrary function#2020-03-0515:47lilactown> dynamically built
can you show me what you mean by that? the other two make sense#2020-03-0515:52favila[?e :use-attr ?attr] [?e ?attr ?v]#2020-03-0515:52lilactowngot it. thank you!#2020-03-0515:52favila[(keyword ?foo ?bar) ?attr] [?e ?attr ?v]#2020-03-0515:53favila[(rand-int 1 1000) ?attr] [?e ?attr ?v] 😏#2020-03-0515:53lilactownhahaha#2020-03-0509:54dmarjenburgh@lvbarbosa @favila I’m struggling with the same problem, but I want to match on multiple attributes and the matches to exclude is stored in datomic as well (not as separate input).
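For the cardinality-one case above, favila's set-plus-`contains?` suggestion might look like this sketch, using the `:order/type` attribute from the original question:

```clojure
;; Sketch of the set-based "left anti join": pass the excluded values in
;; as a set input and negate a contains? check against it.
(d/q '[:find ?e
       :in $ ?excluded
       :where
       [?e :order/type ?type]
       (not [(contains? ?excluded ?type)])]
     db #{:created :delivered})
```

The excluded set is an ordinary query input, so it can change per call without rewriting the query.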
E.g. a list of items:
[
{:item/department "D1" :item/type "A"}
{:item/department "D1" :item/type "B"}
{:item/department "D1" :item/type "C"}
{:item/department "D2" :item/type "B"}
{:item/department "D3" :item/type "A"}
{:item/department "D3" :item/type "C"}
]
And a list of tuples with [dep type]s to hide:
{:item/hidden [["D1" "A"] ["D3" "C"]]}
I got it working with the following query:
(d/q {:query '[:find (pull ?item [...])
:where
[?item :item/department ?dep]
[?item :item/type ?type]
[(tuple ?dep ?type) ?dep+type]
[(q '[:find (set ?hidden)
:where [_ :item/hidden ?hidden]] $) [[?results]]]
(not [(contains? ?hidden ?dep+type)])]
:args [db]})
But I’m not sure about using the set function in the find clause of the subquery (it’s not documented). And I’m not sure if there is an easier/more performant way to do it#2020-03-0514:21favilause distinct instead of set (honestly it might just be an alias for set) https://docs.datomic.com/cloud/query/query-data-reference.html#built-in-aggregates#2020-03-0514:22favilaYour query works? I don’t see how the ?hidden in your last clause is bound#2020-03-0514:23favilaI would move building the hidden set up higher. you can also issue two queries#2020-03-0514:23favilasupplying the output of one as the input to the next#2020-03-0514:24favilathis isn’t great because it can’t make use of indexes#2020-03-0515:24dmarjenburghI adjusted my actual case and typed it over, so it has an error. The ?hidden should be ?results.#2020-03-0515:24dmarjenburghDoes distinct always yield a set? I assumed it would be a seq, like clojure.core/distinct.#2020-03-0515:25dmarjenburghThanks for the feedback, I’ll try different approaches to see if there is a performance difference#2020-03-0515:27favilaconsider an initial filter using the most selective part of the tuple#2020-03-0515:27favilaso that you can make use of any value indexes on item-department or item-type#2020-03-0515:28favilaalternatively, make a composite index and match against that instead#2020-03-0515:28favila(probably a better option anyway)#2020-03-0516:05hawkeyhi, does someone know how to get statistics of Datomic database (total size, size by entity, index size …)?#2020-03-0516:15Joe Lane@hawkey Using the Client API: https://docs.datomic.com/client-api/datomic.client.api.html#var-db-stats#2020-03-0521:22joshkhi have a need to rename two db idents, and then repurpose their old idents as new attributes (which i know isn't recommended). i started by aliasing the old entities with their new idents, and then transacted two new attributes with the old idents as i would like any new attribute definition. 
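The two-query variant favila mentions above (supplying the output of one query as the input to the next) might look like this sketch, reusing the `:item/*` attribute names from the example:

```clojure
;; Sketch: first collect the hidden [dep type] tuples, then pass them to
;; the main query as a set input for the negated contains? check.
(let [hidden (into #{}
                   (map first)
                   (d/q '[:find ?dep+type
                          :where [_ :item/hidden ?dep+type]]
                        db))]
  (d/q '[:find (pull ?item [*])
         :in $ ?hidden
         :where
         [?item :item/department ?dep]
         [?item :item/type ?type]
         [(tuple ?dep ?type) ?dep+type]
         (not [(contains? ?hidden ?dep+type)])]
       db hidden))
```

This avoids the undocumented aggregate in a subquery, at the cost of two round trips through the query engine.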
one ident was repurposed successfully -- a value type of long. but the other, which was/is a reference, throws an exception: Ident :player/details cannot be used for entity 666, already used for :new-player/details . Is it possible to repurpose an ident which was previously claimed by a reference attribute?#2020-03-0521:48marshallthat sounds like a uniqueness violation#2020-03-0521:49marshalldoes the schema of the one that didnt “repurpose” include a uniqueness attribute?#2020-03-0607:55joshkhthe failed repurposed ident was originally claimed by a ref attribute which was included in a unique composite tuple. i removed the unique constraint on that tuple, but i still get the same exception.#2020-03-0607:55joshkhlooking at the alias, and the tuple attr which refers to that alias, neither have a unique constraint#2020-03-0608:21joshkhso the whole thing looks more like this:
1. :player/details, claimed by a ref attribute, aliased to :new-player/details
2. :player/details+region tuple attribute aliased to :new-player/details+region
3. unique by identity constraint removed from :new-player/details+region tuple attribute
4. transact {:db/ident :player/details :db/valueType :db.type/ref :db/cardinality :db.cardinality/one}
Ident :player/details cannot be used for entity 666, already used for :new-player/details
#2020-03-0609:54grounded_sageWhat would best practice be for say.
Getting a CSV dump from a client each night, which contains all historical data (that shouldn’t change) and active data that is changing, and committing just the changes to Datomic.
Where the data they provide has IDs which associate across other CSVs.
Obviously you could simply query for the set of ID’s in the database and then only transact the new id’s for the data that shouldn’t change. But some data is subject to change at varying frequencies. Whereas others shouldn’t change but the data may be messed up on their end and you also want to somehow capture that.
Would you hash the data associated with the provided id per namespace/csv and transact that? #2020-03-0609:55grounded_sageFraming my question is a bit difficult so I hope people can follow me#2020-03-0611:03grounded_sageSo I guess the model would be this set of data doesn’t change. If it changes transact it but only ever give me the first instance of it (then notify me it changed - probably auditing logic on my side). As for if they mess up their id space.. I guess that problem will surface downstream and we can retract the transaction?#2020-03-0611:55grounded_sageMaybe something like.
:ticket-purchase/active-data Boolean or :ticket-purchase/historic-data Boolean.
Then when the data has changed, say when historic is set to true, the data is logged with a false. So you can query the history of a ticket id and see all changes even if it is wrong, but query the right one using the true. 
(seq (d/tx-range (get-conn) {:start #inst "2020-03-06T15:14:52.000-00:00"}))
=> ({:t 38
:tx-data [#datom[13194139533350 50 #inst "2020-03-05T14:24:23.642-00:00" 13194139533350 true] ...]}
{:t 39
:tx-data [#datom[13194139533351 50 #inst "2020-03-06T15:14:52.119-00:00" 13194139533351 true] ...]})
(entity 50 is :db/txInstant , so a request for txs since today includes tx from yesterday)#2020-03-0909:29vlaaadAh, I understood my mistake: start point 2020-03-06T15:14:52.000 is before the second returned tx 2020-03-06T15:14:52.119 (notice the millis), so it returns previous transaction which makes sense :+1:#2020-03-0616:23donyormSo I'm trying to set up my project to work with ions in datomic-cloud, but I've been having issues with it crashing. I tried to reproduce locally, using the dependencies an ion push operation prints, but I'm getting the following error
Error building classpath. Could not find artifact com.cognitect:s3-creds:jar:0.1.23 in central ()
Any idea why that library can't be found?#2020-03-0617:41joshkhdo you have the following in your deps.edn?
:mvn/repos {"datomic-cloud" {:url ""}
#2020-03-0617:42donyormYes I do#2020-03-0617:45joshkhdo you get the error when you deploy your code? or does it happen when you run your project?#2020-03-0620:51donyormIt happens when I run the project, but only when I include the list of dependencies given when you push. I don't have my terminal up now, but it's something like "overridden dependencies"#2020-03-0620:52donyormThis is locally on my machine to clarify, but it makes it hard to reproduce the server environment.#2020-03-0914:25mkvlrdoes the datomic.api still not come with implementations for datafy + nav?#2020-03-0914:25mkvlr(extend-protocol p/Datafiable
datomic.query.EntityMap
(datafy [o] (into {} o)))
(extend-protocol p/Navigable
datomic.query.EntityMap
(nav [coll k v]
(clojure.datafy/datafy v)))#2020-03-0914:27mkvlr☝️ this is our (very basic) implementation to allow navigation of refs.#2020-03-1000:15kennyShould you type hint in Datomic queries? For example:
(d/q
'[:find ?e ?mins
:where
[?e :my-date ?transition-date]
[(.toInstant ?transition-date) ?i]
[(.atZone ^java.time.Instant ?i (java.time.ZoneId/of "UTC")) ?zoned-dt]
[(.toLocalDateTime ?zoned-dt) ?local-dt]
[(.getMinute ?local-dt) ?mins]]
(d/db conn))#2020-03-1008:18jcfI might consider using a function of my own inside the query at this point. Something like (com.example.time/minutes ?transition-date).#2020-03-1011:59favilaYou should type hint according to the same rules as Clojure #2020-03-1012:00favila(So generally, yes, unless it’s throwaway code or type inference figured out the type already)#2020-03-1021:01mruzekwHi everyone, does using API Gateway HTTP Direct with Ions avoid any cold start up latency given it doesn’t involve a Lambda at all? Are there other latency trade-offs to consider?
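jcf's suggestion of a named helper keeps the interop (and its type hints) out of the query body; a sketch, with a hypothetical function name assumed to live in the `user` namespace:

```clojure
;; Hypothetical helper: minute-of-hour of a java.util.Date in UTC.
;; The type hint avoids reflection, per the usual Clojure rules.
(defn minutes-utc [^java.util.Date d]
  (-> (.toInstant d)
      (.atZone (java.time.ZoneId/of "UTC"))
      (.toLocalDateTime)
      (.getMinute)))

;; Functions called in a query must be namespace-qualified:
(d/q '[:find ?e ?mins
       :where
       [?e :my-date ?transition-date]
       [(user/minutes-utc ?transition-date) ?mins]]
     (d/db conn))
```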
https://blog.datomic.com/2019/05/http-direct-for-datomic-cloud.html#2020-03-1021:26Joe LaneCorrect, no cold start.#2020-03-1021:45Matheus Moreirahello! probably a datomic newbie situation here: i am not sure how to use (or if it is possible to use) a tuple attribute that models a natural composite key as the value of an attribute that is a ref. my situation is this:
template-schema
[...
{:db/ident :template/id
:db/cardinality :db.cardinality/one
:db/valueType :db.type/uuid
:db/unique :db.unique/identity}
{:db/ident :template/name
:db/cardinality :db.cardinality/one
:db/valueType :db.type/string}
{:db/ident :template/provider
:db/cardinality :db.cardinality/one
:db/valueType :db.type/ref}
{:db/ident :template/provider+name
:db/cardinality :db.cardinality/one
:db/valueType :db.type/tuple
:db/tupleAttrs [:template/provider :template/name]
:db/unique :db.unique/identity}
...]
infocard-schema
[...
{:db/ident :infocard/id
:db/cardinality :db.cardinality/one
:db/valueType :db.type/uuid
:db/unique :db.unique/identity}
{:db/ident :infocard/user-id
:db/cardinality :db.cardinality/one
:db/valueType :db.type/uuid}
{:db/ident :infocard/template
:db/cardinality :db.cardinality/one
:db/valueType :db.type/ref}
...]
i thought it would be possible to transact an infocard referring to an existing template like this:
(d/transact conn {:tx-data [#:infocard{...
:template [:template/provider+name [:provider/engage "test-template"]]
...}]})
but when i do this the result is unable to resolve entity. what am i doing wrong?#2020-03-1021:50favilaComposite attrs are not meant to be transacted directly, but computed automatically (otherwise they can get out of sync with their values)#2020-03-1021:51favilathat said, I haven’t personally been able to use an identity composite ref attribute in any way other than explicitly asserting tempids for the composite value and all its components. It appears that tempid resolution to existing entities occurs before composites are computed (at least the last time I checked). In practice this means identity composites are not useful for upserting, only guaranteeing uniqueness (carefully)#2020-03-1021:51favilaI’d welcome some clarification on whether this is considered by-design and what one should do if an identity composite key is desired#2020-03-1021:52favila(note what I’m doing contradicts the admonition “never transact composites directly”)#2020-03-1021:57Matheus Moreirain my case i don’t assert the composite directly, the template already exists in the database and the composite was calculated by datomic itself.#2020-03-1021:57Matheus Moreiraafter the template is there, i try to create an infocard referring to it but using the composite key as a unique id instead of the surrogate uuid because the composite key is what clients know.#2020-03-1021:58Matheus Moreira(so my api receives the pair provider/name instead of the surrogate key when creating an infocard.)#2020-03-1022:04favilaYour transaction data as you are presenting it is asserting the composite, though#2020-03-1022:05favilaand by making it an identity attribute that strongly suggests you mean for this to be upserting#2020-03-1022:05Matheus Moreiraso this is the first flaw. 
🙂 what i’d like to do is to refer to the template by its composite key.#2020-03-1022:06favilaif you don’t want to assert it, use it as a lookup ref (which I think should work now but earlier versions were buggy about it): {:db/id [:template/provider+name [:provider/engage "test-template"]] …}#2020-03-1022:06Matheus Moreiraah, let me try that.#2020-03-1022:06favilaand probably weaken :db.unique/identity to :db.unique/value#2020-03-1022:08favilaNote using a lookup ref will fail if the entity doesn’t already exist, but your earlier transaction could succeed (if the lookups were resolved, that is)#2020-03-1022:17Matheus Moreirawhat about queries? i should be able to fetch templates using :where [?e :template/provider+name [:provider/engage "test-template"]] , right?#2020-03-1022:18Matheus Moreiralooking at the transaction grammar (https://docs.datomic.com/cloud/transactions/transaction-data-reference.html#orge639385) it seems that [attr value] should be a valid entity identifier, that is why i used that form trying to transact the infocard.#2020-03-1022:20favilaI think that syntax works in queries, but maybe you have to use tuple to assemble the value#2020-03-1022:20favilamy point about transact is that you are actually attempting to transact that attribute, not merely looking it up#2020-03-1022:21favilatransaction maps are merely sugar for [:db/add the-key the-value]#2020-03-1103:13onetomIf I want to cache some datomic query results, what's the best cache key I could derive from a database?
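favila's "use tuple to assemble the value" point might look like this sketch against the template schema above. Since `:template/provider` is a ref, the stored composite holds an entity id, so the ident is resolved first (this is an illustrative sketch, not verified against every Datomic version):

```clojure
;; Sketch: query by the composite tuple value of :template/provider+name.
(d/q '[:find ?e .
       :in $ ?provider-ident ?name
       :where
       [?p :db/ident ?provider-ident]          ; resolve ident to entity id
       [(tuple ?p ?name) ?provider+name]       ; assemble the composite value
       [?e :template/provider+name ?provider+name]]
     db :provider/engage "test-template")
```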
My guess would be (defn cache-key [db] ((juxt :id :basisT) db), but then there are :filt :history :raw attributes too which feel relevant, I'm just not sure what is their exact meaning
My expectation would be that d/q and d/pull* and d/datoms calls would return the same results for databases if their cache-key are the same.#2020-03-1110:41favilaI think you want at least `
[(:id db) (or (d/as-of-t db) (d/basis-t db)) (d/since-t db) (d/is-history db)]
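That key could back a small query cache; a sketch (the atom-based cache and `cached-q` are illustrative, and filtered dbs are treated as uncacheable since a filter predicate has no stable key):

```clojure
;; Sketch: cache d/q results keyed on a db "fingerprint".
(defn db-cache-key [db]
  (when-not (d/is-filtered db)
    [(:id db)
     (or (d/as-of-t db) (d/basis-t db))
     (d/since-t db)
     (d/is-history db)]))

(def ^:private cache (atom {}))

(defn cached-q [query db & args]
  (if-let [k (db-cache-key db)]
    (let [ck [query k args]]
      (or (get @cache ck)
          (let [res (apply d/q query db args)]
            (swap! cache assoc ck res)
            res)))
    ;; filtered db: fall through to an uncached query
    (apply d/q query db args)))
```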
#2020-03-1110:42favilaand if (d/is-filtered db) you can’t cache it at all#2020-03-1116:25onetomthanks!#2020-03-1106:17robert-stuttaford@marshall @jaret just confirming that :db/tupleAttrs requires the target schema to be transacted already, and that if you try to transact the targets with the tupleAttrs at the same time, you get a :db.error/invalid-tuple-attrs anomaly?#2020-03-1112:55jaret@robert-stuttaford that is correct. You need to transact the schema first.#2020-03-1113:17robert-stuttafordthanks!#2020-03-1117:57bamarcoHi, I'm attempting to follow the ion-starter tutorial. When I get to the push section I run into this problem:#2020-03-1117:58bamarco$ clojure -A:ion-dev '{:op :push}'
Downloading: com/datomic/ion-dev/0.9.251/ion-dev-0.9.251.pom from datomic-cloud
Downloading: com/datomic/ion-dev/0.9.251/ion-dev-0.9.251.jar from datomic-cloud
Error building classpath. Could not find artifact com.datomic:ion-dev:jar:0.9.251 in central ()#2020-03-1117:58bamarcoIt seems to be finding the jar in datomic-cloud, but then fails for some reason#2020-03-1117:59alexmillerAre your aws credentials set?#2020-03-1118:02bamarcoI'm pretty sure they are. i created a datomic database#2020-03-1118:09alexmilleraws sts get-caller-identity#2020-03-1118:09alexmillerCan tell you#2020-03-1118:10bamarcoyes, they are set#2020-03-1118:11alexmillerYou’re not behind a proxy or anything?#2020-03-1118:13alexmillerWell let me ask this, what version of clojure tool are you running? clj -Sdescribe#2020-03-1118:13bamarconot behind a proxy#2020-03-1118:13bamarco{:version "1.10.1.536"
:config-files ["/usr/local/Cellar/clojure/1.10.1.536/deps.edn" "/Users/bamarco/.clojure/deps.edn" ]
:config-user "/Users/bamarco/.clojure/deps.edn"
:config-project "deps.edn"
:install-dir "/usr/local/Cellar/clojure/1.10.1.536"
:config-dir "/Users/bamarco/.clojure"
:cache-dir "/Users/bamarco/.clojure/.cpcache"
:force false
:repro false
:resolve-aliases ""
:classpath-aliases ""
:jvm-aliases ""
:main-aliases ""
:all-aliases ""}#2020-03-1118:16alexmillerjust ruling stuff out#2020-03-1118:16bamarcono prob, I appreciate the help#2020-03-1118:16alexmillerdo you see anything in ~/.m2/repository/com/datomic/ion-dev/0.9.251 ?#2020-03-1118:17alexmillerthe output you have looks like it downloaded the pom but not the jar, but that's extra weird#2020-03-1118:18bamarcoit is an empty folder#2020-03-1118:19alexmilleranything in ~/.m2/repository/com/datomic/ion-dev ?#2020-03-1118:19alexmillerlike ea metadata xml file?#2020-03-1118:19bamarconope just the empty folder#2020-03-1118:20alexmillerI guess those are on the repo, and you see _remote.repositories files in your local repo#2020-03-1118:22bamarcoI'm not sure what you mean. where do I check for _remote.repositories?#2020-03-1118:25alexmillernvm#2020-03-1118:25alexmillerdoes this work aws s3 cp . ?#2020-03-1118:27bamarcofatal error: An error occurred (403) when calling the HeadObject operation: Forbidden#2020-03-1118:28alexmillercan you try export AWS_REGION=us-east-1 and try again?#2020-03-1118:28bamarcosame error#2020-03-1118:30alexmillerthat call is effectively what the clj s3 maven transporter is doing to download the file so it is definitely identity related somehow#2020-03-1118:30alexmillerthat bucket should be public read though to head/get object#2020-03-1118:42alexmillermore specific but I'd guess this fails too:#2020-03-1118:42alexmilleraws s3api head-object --bucket datomic-releases-1fc2183a --key maven/releases/com/datomic/ion-dev/0.9.251/ion-dev-0.9.251.jar#2020-03-1118:43bamarcoyup same error#2020-03-1118:44alexmillerI can repro if my aws credentials are set to bad values#2020-03-1118:45alexmillernot sure how you could get bad values but the sts returns you an identity though#2020-03-1118:46alexmillercan you try explicitly setting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to your iam creds?#2020-03-1118:52bamarcotried explicitly setting and it failed#2020-03-1118:52bamarcocurl -x :[port] .[system].[region].
does work#2020-03-1118:55alexmillercan you try
aws --debug s3api head-object --bucket datomic-releases-1fc2183a --key maven/releases/com/datomic/ion-dev/0.9.251/ion-dev-0.9.251.jar#2020-03-1118:55alexmillerin particular anything useful in the response body part#2020-03-1118:58alexmillerI'm guessing not but you never know#2020-03-1119:00alexmillerI don't understand why a valid iam user would not be able to access a public object in a bucket. I've pretty much only seen this with region issues but we tried that above.#2020-03-1119:00bamarcoDEBUG - Making request for OperationModel(name=HeadObject) with params: {'body': '', 'url': u'https://s3.amazonaws.com/datomic-releases-1fc2183a/maven/releases/com/datomic/ion-dev/0.9.251/ion-dev-0.9.251.jar', 'headers': {'User-Agent': 'aws-cli/1.18.12 Python/2.7.16 Darwin/19.3.0 botocore/1.15.12'}, 'context': {'auth_type': None, 'client_region': 'us-east-1', 'signing': {'bucket': u'datomic-releases-1fc2183a'}, 'has_streaming_input': False, 'client_config': <botocore.config.Config object at 0x10d0816d0>}, 'query_string': {}, 'url_path': u'/datomic-releases-1fc2183a/maven/releases/com/datomic/ion-dev/0.9.251/ion-dev-0.9.251.jar', 'method': u'HEAD'}#2020-03-1119:00bamarco2020-03-11 14:56:05,098 - MainThread - awscli.clidriver - DEBUG - Exception caught in main()
Traceback (most recent call last):
File "/usr/local/aws/lib/python2.7/site-packages/awscli/clidriver.py", line 217, in main
return command_table[parsed_args.command](remaining, parsed_args)
File "/usr/local/aws/lib/python2.7/site-packages/awscli/clidriver.py", line 358, in call__
return command_table[parsed_args.operation](remaining, parsed_globals)
File "/usr/local/aws/lib/python2.7/site-packages/awscli/clidriver.py", line 530, in call__
call_parameters, parsed_globals)
File "/usr/local/aws/lib/python2.7/site-packages/awscli/clidriver.py", line 650, in invoke
client, operation_name, parameters, parsed_globals)
File "/usr/local/aws/lib/python2.7/site-packages/awscli/clidriver.py", line 662, in makeclient_call
**parameters)
File "/usr/local/aws/lib/python2.7/site-packages/botocore/client.py", line 316, in apicall
return self.makeapi_call(operation_name, kwargs)
File "/usr/local/aws/lib/python2.7/site-packages/botocore/client.py", line 626, in makeapi_call
raise error_class(parsed_response, operation_name)
ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
2020-03-11 14:56:05,105 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255
An error occurred (403) when calling the HeadObject operation: Forbidden#2020-03-1119:05alexmillerseems like I'm using a much newer version of aws+python than you, but request is basically the same. doesn't shed much light though, as it's really an IAM issue.#2020-03-1119:13alexmillerI'm not sure what else to try. maybe make different iam creds and see if those work.#2020-03-1119:14alexmillerunless the datomic team folks have an idea of something#2020-03-1119:15ghadiwhen debugging IAM or 403 issues, it is really important to know whether you are using
AWS_PROFILE
AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY
and where your region info comes from: explicit AWS_REGION or something in the profile file#2020-03-1119:15ghadiif you are calling this from EC2, there are other possibilities, but it looks like you're on a mac#2020-03-1119:15bamarcoyup on a mac#2020-03-1119:16ghadiand if using AWS_PROFILE, whether your profile is an ordinary profile with credentials, or you're using something like aws-vault or an assume role profile#2020-03-1119:16ghadiaws sts get-caller-identity will tell you who you are#2020-03-1119:16alexmillerthe region is right in the debug trace above#2020-03-1119:17alexmillerand this bucket is public read#2020-03-1119:17ghadibut where your credentials came from is a separate question#2020-03-1119:17alexmillerand we tried both profile and keys#2020-03-1119:17ghadiwhat does aws sts get-caller-identity return?#2020-03-1119:17ghadi@mail524#2020-03-1119:23bamarco{
"Account": "<my-account-#>",
"UserId": "<my-user-id>",
"Arn": "arn:aws:iam::<a-number>:user/datomic-admin"
}#2020-03-1119:24ghadiis datomic-admin the user you expect to be?#2020-03-1119:24bamarcoyes#2020-03-1119:24ghadiwhat policies does that user have?#2020-03-1119:26ghadiarn:aws:iam::${ACCOUNT#}:policy/datomic-admin-#{DATOMIC_SYSTEM}-{AWS_REGION}
is it just ^ ?#2020-03-1119:27ghadibecause if so, that is not sufficient, even for a public-read bucket @alexmiller#2020-03-1119:28ghadiaws s3api head-object --bucket datomic-releases-1fc2183a --key maven/releases/com/datomic/ion-dev/0.9.251/ion-dev-0.9.251.jar
An error occurred (403) when calling the HeadObject operation: Forbidden
#2020-03-1119:28ghadi^^ me creating a user + access_key, adding the datomic-admin policy to that user, then trying to download the jar ^^#2020-03-1119:30alexmillerwell that seems bad if so#2020-03-1119:30bamarcoI think this is the problem. This is my first time using aws.#2020-03-1119:31ghadino worries, this stuff is complex.#2020-03-1119:31ghadiwere you following a particular guide to set up datomic access?#2020-03-1119:31ghadiif so, I'd love the link so that we can improve it#2020-03-1119:31bamarcolet me find it#2020-03-1119:32ghadiMost developer accounts in aws have a pretty permissive s3:GetObject policy that would allow getting from the datomic-releases-1fc2183a bucket#2020-03-1119:32bamarcohttps://docs.datomic.com/cloud/getting-started/configure-access.html#authorize-user#2020-03-1119:32ghadithank you#2020-03-1119:33ghadiyes, this allows access to datomic (as you see), but it does not have permissions to download the datomic jars from Maven.#2020-03-1119:35alexmilleryou were following https://docs.datomic.com/cloud/ions/ions-tutorial.html right ?#2020-03-1119:35bamarcoyes#2020-03-1119:37bamarcoit might be good to put something in https://docs.datomic.com/cloud/troubleshooting.html#troubleshooting-ion-push#2020-03-1119:39ghadihttps://docs.datomic.com/cloud/ions/ions-tutorial.html#org29202d3
make sure that user is an AWS Administrator @mail524#2020-03-1119:39ghadiobviously production user does not need to be an administrator, but for the purposes of this tutorial, yes#2020-03-1119:43bamarcoso I should make a dev user with admin permissions?#2020-03-1119:59ghadino you can keep this user and augment it with the AdministratorAccess policy https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_administrator#2020-03-1119:59ghadi@mail524#2020-03-1120:05bamarcoThanks @ghadi and @alexmiller this solved it.#2020-03-1120:06ghadino problem -- we will update the docs, thanks for persisting#2020-03-1120:31BrianHow do I crack open the #object[datomic.promise ...] return by a (d/transact ...) call? My goal is to check for failure/success. I see the contents of the object in my console but I'm not able to programmatically access it which I'd like to#2020-03-1120:32johnjdid you de-reference it?#2020-03-1120:33BrianI tried (deref <obj>) but that errors out with a syntax error. Although it then seems to also print out the correct error#2020-03-1120:34BrianIt results in:
Syntax error (Exceptions$IllegalArgumentExceptionInfo) compiling at ... ; syntax error
:db.error/not-an-entity Unable to resolve entity ... ; error I want to grab from obj
#2020-03-1120:34marshall@brian.rogers synchronous transact can throw#2020-03-1120:34marshallyou’d need to put the deref’d call in a try if you want to grab the exception#2020-03-1120:37BrianAh thank you @marshall#2020-03-1205:22onetomIs there a Clojure client library which provides the same interface as the Datomic Client API, but for accessing Datomic REST APIs?
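Put together, marshall's advice might look like this sketch (`tx-data` stands in for whatever transaction data is being submitted):

```clojure
;; Sketch: deref the transact future inside try so a failed transaction
;; surfaces here as an exception, then inspect it.
(try
  {:success true :result @(d/transact conn tx-data)}
  (catch Exception e
    {:success false :error (.getMessage e)}))
```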
My use-case is that I would like to collaborate with someone who would be using a Datomic REST API from Python,
while I'm using the same database from Clojure code.#2020-03-1205:34onetomI guess my main motivation is the ability to share some in-memory Datomic DB, which is possible via the REST server, but I have no convenient Clojure interface to it.
OR I can run a peer-server with an in-memory database, which I can connect to conveniently from Clojure via the Datomic Client API,
BUT there are no Datomic Client API libraries for other languages, like Python.
3rd option would be to just run a transactor with protocol=mem in its config.properties file, but that throws this error:
java.lang.IllegalArgumentException: :db.error/invalid-storage-protocol Unsupported storage protocol [protocol=mem] in transactor properties /dev/fd/63
#2020-03-1205:36onetomThe reason for wanting to share an in-memory Datomic DB is to have a really tight feedback loop within our office,
where we have 1 machine with 80GB RAM, while other machines have only 16GB#2020-03-1207:43fmnoisehi everyone, lately I've been periodically seeing the following error in logs
org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException: AMQ119010: Connection is destroyed
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:335)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:315)
at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQClientProtocolManager.createSessionContext(ActiveMQClientProtocolManager.java:288)
at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQClientProtocolManager.createSessionContext(ActiveMQClientProtocolManager.java:237)
at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createSessionChannel(ClientSessionFactoryImpl.java:1284)
at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createSessionInternal(ClientSessionFactoryImpl.java:670)
at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createSession(ClientSessionFactoryImpl.java:295)
at datomic.artemis_client.SessionFactoryBundle.start_session_STAR_(artemis_client.clj:81)
at datomic.artemis_client$start_session.invokeStatic(artemis_client.clj:52)
at datomic.artemis_client$start_session.doInvoke(artemis_client.clj:49)
at clojure.lang.RestFn.invoke(RestFn.java:464)
at datomic.connector.TransactorHornetConnector$fn__10655.invoke(connector.clj:228)
at datomic.connector.TransactorHornetConnector.admin_request_STAR_(connector.clj:226)
at datomic.peer.Connection$fn__10914.invoke(peer.clj:239)
at datomic.peer.Connection.create_connection_state(peer.clj:225)
at datomic.peer$create_connection$reconnect_fn__10989.invoke(peer.clj:489)
at clojure.core$partial$fn__5839.invoke(core.clj:2623)
at datomic.common$retry_fn$fn__491.invoke(common.clj:533)
at datomic.common$retry_fn.invokeStatic(common.clj:533)
at datomic.common$retry_fn.doInvoke(common.clj:516)
at clojure.lang.RestFn.invoke(RestFn.java:713)
at datomic.peer$create_connection$fn__10991.invoke(peer.clj:493)
at datomic.reconnector2.Reconnector$fn__10256.invoke(reconnector2.clj:57)
at clojure.core$binding_conveyor_fn$fn__5754.invoke(core.clj:2030)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
#2020-03-1207:43fmnoisedatomic on-prem 0.9.6045 running in docker container in k8s#2020-03-1208:43onetomI was also on the verge of trying to use a Docker containerized on-prem Datomic.
If I understand correctly, that's not an officially recommended way to run a Datomic system.
Can you share some setup instructions for it please?#2020-03-1207:44fmnoisesometimes it results in restarting datomic with the {:message "Terminating process - Heartbeat failed", :pid 13, :tid 223}#2020-03-1209:20fmnoiseoh, just found out that peer version is lower than transactor, 5951 vs 6045, probably that's the issue 😞#2020-03-1209:21fmnoisewill report back if the issue happens after the update#2020-03-1212:56grounded_sageHow do you do the equivalent of a SQL left-join?#2020-03-1213:09favilaone possibility is get-else with a sentinel value (not nil)#2020-03-1213:09favilaif the join is just for selecting, consider using pull instead#2020-03-1213:10favilain which case the “nil” will be just a missing map entry#2020-03-1213:35grounded_sage(d/q '[:find (pull ?e [*]) (pull ?e1 [*]) (pull ?e2 [:bank/name])
:where
[?e :customer/id ?id]
[?e1 :address/id ?id]
[(get-else $ ?e2 :bank/id "No Value") ?id]
[(get-else $ ?e2 :bank/name "No Value") ?name]]
@conn)#2020-03-1213:35grounded_sageThat is what I am trying to do @U09R86PA4#2020-03-1213:37favilaYou mean you want the map result of pull to say “No Value”?#2020-03-1213:38grounded_sageThat’s just the default value?#2020-03-1213:38favilawhat you put here doesn’t make sense#2020-03-1213:38favilaI don’t know what your desired output is#2020-03-1213:39grounded_sageBasically :customer/id and :address/id returns 300,000 results. When I join on bank it returns 10,000#2020-03-1213:39favilawhat is ?e2 unifying against?#2020-03-1213:39grounded_sageI effectively want to have the most possible results with the others kind of enriched with extra data when it is found#2020-03-1213:40favilacan [?e :customer/id] be missing?#2020-03-1213:40grounded_sageNo#2020-03-1213:41favilaCan [?e :customer/id "No Value"] happen?#2020-03-1213:41grounded_sageno#2020-03-1213:41favilaor [?e1 :address/id "No Value"]?#2020-03-1213:42grounded_sageLike I want it to start with :customer and effectively merge each result onto it with default values if they are not present.#2020-03-1213:42grounded_sageLike if I joined a table on a column#2020-03-1213:42favilathe only thing unifying these clauses is ?id so I don’t see what the join is#2020-03-1213:43favilais the value of :bank/id and :customer/id supposed to unify?#2020-03-1213:44grounded_sageIt only returns me results that have a :bank/id so I lose all the previous finds.#2020-03-1213:44favilais there any ref attribute you can use to walk between a customer, address, and bank?#2020-03-1213:44grounded_sageYes I am unifying those and then wanting to add some extra data if it is there#2020-03-1213:45grounded_sageI don’t quite understand refs yet so no.#2020-03-1213:46favilaok, so you’re joining by some concrete value “id” (I guess a string?) which happens to be common to :customer/id :bank/id, and :address/id?#2020-03-1213:46favilayou can do this but it’s not the natural way of modeling relationships in datomic#2020-03-1213:46grounded_sageYes. 
But there is very little bank data.#2020-03-1213:47grounded_sageWhat is the natural way of doing things in datomic? Using the refs etc?#2020-03-1213:47favilausing refs#2020-03-1213:47favila(I would normally assume a :customer/id was scoped to customers)#2020-03-1213:48favilaso :customer/address -> some entity with address attributes on it#2020-03-1213:48favila:customer/bank -> some attribute with bank entities on it#2020-03-1213:48favilaat this point you have a single pull expression, because it can see all the joins#2020-03-1213:49favilayou need two steps:#2020-03-1213:49grounded_sageThis is kind of a dirty data CSV import. So I haven’t got much luxury in the naming of things. Was just assessing if datalog will help me with pre-processing. But it’s harder than I thought#2020-03-1213:50favila[?c :customer/id ?id][?a :address/id ?id][?b :bank/id ?id] for those with banks#2020-03-1213:51favila[?c :customer/id ?id][?a :address/id ?id](not [_ :bank/id ?id]) for those without#2020-03-1213:51favilaget-else isn’t good here because it can’t make use of a value index for :bank/id#2020-03-1213:52grounded_sageSo you would split it into 2 queries?#2020-03-1213:52favilaI would#2020-03-1213:53favilaYou could unify to one query using or and a sentinel for the missing bank case, but you can’t pull off that sentinel#2020-03-1213:53grounded_sageThanks. I was trying to avoid that but I guess it requires stronger schema for that.#2020-03-1213:54favilaup to you if it’s worth augmenting your data#2020-03-1213:55favilait’s easy to find the common values via query and build the tx you need to make the refs#2020-03-1213:56favila[?c :customer/id ?id][?a :address/id ?id] -> [:db/add ?c :customer/address ?a] for example#2020-03-1213:59grounded_sageThe problem I have is that this is a CSV ETL job. Daily drops with millions of rows and 10's of columns. 
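favila's clause sketches above, expanded into full queries (a sketch using the attribute names from this conversation; `db` stands for a database value from `(d/db conn)`):

```clojure
;; "Inner" half: ids that also match a bank
(d/q '[:find (pull ?c [*]) (pull ?a [*]) (pull ?b [:bank/name])
       :where
       [?c :customer/id ?id]
       [?a :address/id ?id]
       [?b :bank/id ?id]]
     db)

;; "Left" remainder: ids with no matching bank
(d/q '[:find (pull ?c [*]) (pull ?a [*])
       :where
       [?c :customer/id ?id]
       [?a :address/id ?id]
       (not [_ :bank/id ?id])]
     db)
```

Concatenating the two result sets gives the left-join effect favila describes, with the second set supplying the rows a SQL left join would pad with NULL.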
So doing these checks doesn’t seem that efficient.#2020-03-1213:59grounded_sageThough I could be wrong….#2020-03-1214:00grounded_sageThe CSV’s is pretty much always the same… with just a bit of new data.#2020-03-1214:02favilaI hate to say it, but sql might be a better fit, depending on what you do#2020-03-1214:02grounded_sageI’m starting to believe it.#2020-03-1214:03favilaif you aren’t becoming the source of truth for what you ingest, and that stuff is stable-shaped already and you just want to make some joins, sql may be better#2020-03-1214:03faviladatomic shines with graph-shaped data you’re growing with (i.e. history of changes is important) and a primary datastore you add to incrementally with a live application#2020-03-1214:04favilait doesn’t do giant bulk imports well, and it can join by value but you miss out on a lot of graph-shaped niceties#2020-03-1214:05favilameanwhile some sql engines can query csv directly, import csv with magical fast bulk importers, and are already used to joining by value#2020-03-1214:06favilae.g. have you considered redshift or athena (for cloud things at huge scale)? I think they both work by sticking table-shaped files (e.g. csv) into s3 and then “just working”#2020-03-1214:06grounded_sageIt’s a shame. Because I was using Meander to transform the CSV’s before entering them into the DB. Then I was using the DB for the joins and pulling it all out. The semantics are pretty much the same because they both use logic programming.#2020-03-1214:06grounded_sageWe haven’t got huge scale#2020-03-1214:07favilacan you use meander to make the refs ahead of time?#2020-03-1214:08grounded_sageYou mean keeping them all to the same ns of the keyword?#2020-03-1214:08favilaI mean, is there something from the unit of work you can use to mint a unique upserting attribute#2020-03-1214:09grounded_sageI don’t think so. It’s all good 🙂#2020-03-1214:09favilae.g. 
{:customer/bank [:bank/unique-id customer-id]}#2020-03-1214:10favilawell it wouldn’t be like that exactly#2020-03-1214:11favila[{:db/id "bank-123" :bank/unique-id "value-derived-from-customer"} {:db/id "customer-123" :customer/bank "bank-123" ,,, :other/customer "stuff" ,,,}]#2020-03-1214:11favilathat said, I think whether you use datomic or not should be driven entirely by what you plan to do after you ingest these CSVs#2020-03-1214:14favilaiirc there’s something that can do pulls against postgres#2020-03-1214:16favilahttps://walkable.gitlab.io/ ?#2020-03-1219:16grounded_sage@U1C36HC6N the convo. #2020-03-1219:17grounded_sage@U09R86PA4 yea I find it interesting though I also think I need to know SQL and tables a bit more before working with such abstractions. #2020-03-1216:00vHi, I am trying to run the datomic transactor. I downloaded Datomic and am following the steps from
https://docs.datomic.com/on-prem/dev-setup.html. When I try to run the local transactor, I get this error: java.lang.Exception: 'protocol' property not set
Any ideas? Here is the full error log:
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Terminating process - Error starting transactor
java.lang.Exception: 'protocol' property not set
at datomic.transactor$ensure_args.invokeStatic(transactor.clj:116)
at datomic.transactor$ensure_args.invoke(transactor.clj:105)
at datomic.transactor$run$fn__22768.invoke(transactor.clj:387)
at clojure.core$binding_conveyor_fn$fn__5754.invoke(core.clj:2030)
at clojure.lang.AFn.call(AFn.java:18)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:835)
#2020-03-1216:00vI downloaded Datomic, added the licence key in dev-transactor-template.properties and ran this script
bin/transactor datomic_properties/dev-transactor-template.properties
#2020-03-1216:04favilais protocol=dev in that file?#2020-03-1216:04vin the dev-transactor-template.properties file @U09R86PA4?#2020-03-1216:04favilayes#2020-03-1216:21v@U09R86PA4 So this is what I have so far
protocol=dev
host=0.0.0.0
port=4334
license-key=<MY_KEY>
memory-index-threshold=32m
memory-index-max=256m
object-cache-max=128m
#2020-03-1216:21vNow I am getting Terminating process - License not valid for this release of Datomic#2020-03-1216:23favilaso you miscopied the key, or the key isn’t valid for the version you’re using#2020-03-1216:24favilayou can check this at your http://my.datomic.com site#2020-03-1216:25favilaprobably also in the email that sent you the license key#2020-03-1216:25vOkay rookie mistake, my key was expired :face_palm:#2020-03-1216:27favilalicenses are perpetual, so you can use an older version (released before it expired)#2020-03-1216:27favilayou just won’t get updates#2020-03-1216:32v@U09R86PA4 thank you so much. Local Transactor is running now#2020-03-1222:18jacksonWhat makes the dev database not production worthy? Are there significant performance advantages to using postgres or another sql server? Mostly concerned about on-prem solutions at the moment.#2020-03-1301:10favilaDev databases are embedded h2 databases, served by the same process as the transactor#2020-03-1306:02onetomAs a consequence, the data you handle with Datomic lives in files on the disks of the transactor machine.
From an operations perspective it's not a great architecture to couple these 2 concerns.
Your transactor process doesn't need any persisted state, you could run it (or them) on ephemeral machines.#2020-03-1309:13mavbozofor one of my company's internal apps, we use datomic dev with h2 database on a dedicated server. works fine til now.#2020-03-1313:47jacksonGood points. I had some unexpected time to play last night and already had postgres installed. For what I'm doing, my time to transact a fair amount of data went from 360s with dev to 40s with postgres. Hardly scientific testing, but indicative still.#2020-03-1301:01marcelo.pivaHi, any opinions on using fully namespaced spec'd keywords as datomic attributes? Like org.domain.entity/attribute?#2020-03-1306:02onetomas opposed to what alternative?#2020-03-1306:10marcelo.pivajust :entity/attribute.#2020-03-1306:13marcelo.pivaOne issue that I’ve found is trying to spec ref attributes, where I can have the actual entity or a lookup ref when transacting. It seems a bit weird to have the domain this close to a db spec.#2020-03-1306:57onetomcan u give a specific example of that spec situation?
this sounds like an interesting problem.
i was also wondering about what is a good naming scheme for specs.
for now i just went with the problem-domain-entity-name/attr style, and my namespaces dealing with those attributes are <company-short-name>.<project-name>.data.<problem-domain-entity-name> or maybe don't even have the .data part if the project is simple enough.#2020-03-1307:01onetomeg:
company internet domain: https://gini.co
project name: rule-crib
domain entity terminology: financial transaction
domain entity name: txn (as opposed to tx, which we kept to refer to Datomic transactions)
NS: gini.rule-crib.txn
Datomic attribute: :txn/descr
I've also tried to just have gini.txn, but it's easy to get lost between projects within a monorepo, where different projects might deal with different aspects of the same domain entity...#2020-03-1310:35favilaImo it’s not appropriate to spec txdata for d/transact. The spec for transaction data is the spec for the transaction dsl, not for its keys#2020-03-1310:35favilaSo a transaction map is a s/map-of, not an s/keys#2020-03-1310:37favilathe spec for keywords should be what you would get in pulls#2020-03-1314:04marcelo.pivaCool, I’m sharing this opinion now.
What do you think about the namespaces? There is a lot of boilerplate when converting between them; having the same attributes internally and in datomic solves some of the issue.
Wire <-> Internal <-> Datomic
:attribute <-> org.domain.entity/attribute <-> :entity/attribute#2020-03-1314:12favilaI prefer sharing keywords, but keyword length fatigue is real#2020-03-1314:12favilaI’d drop “org” though, unless you already have a strong code convention around that#2020-03-1314:14favilaIf you can somehow maneuver your data keyword namespaces to be actual namespaces (maybe put specs there? idk) your life will be much better, or at least not have as much keyword typing in it#2020-03-1315:33marcelo.pivaI’m doing that, the actual length is not an issue (only in repl things get polluted sometimes). Our convention is to wire unnamespaced between services, but it’s not hard to namespace all things, the issue is more between internal and datomic schemas.#2020-03-1315:33marcelo.pivaAbout the org, we thought on using it because it would be clearer when it’s an external data from a provider, for example.#2020-03-1318:24favila:org.domain.otherorg.entity/attribute maybe?#2020-03-1306:09onetomIs there a way to get the database URI back from a datomic.Connection (or datomic.peer.Connection)?#2020-03-1306:13onetomIf not, is there a reason for it?
Eg. not to keep connection secrets for a DynamoDB connection around for long (when someone is using the not-recommended non-role based uri)?
It would be convenient to obtain a connection URI back from a conn, when creating a Datomic component using some state management library, like component / mount / juxt/clip.
It could simplify the stop operation, which can automatically clean up in-memory test databases with random names for example...#2020-03-1309:53rossputinmorning folks - just testing my thinking... if I accidentally deferred giving my datomic compute instance an application name during template setup etc (working through the ion tutorial) - would I be right in thinking there should be a way to give a value to that parameter in AWS console ? Thanks.#2020-03-1313:57marshallIt will automatically get the application name of the compute group, but you can also change it by doing an “update stack” in the cloudformation console#2020-03-1316:04rossputinthanks!#2020-03-1317:36Alexthis may be a dumb question but if you're querying with a vector of values is there a way to count how many values in a muli-valued attribute you matched on?#2020-03-1318:25favilayou can in the result with distinct or count-distinct depending on how you aggregate#2020-03-1318:25favilayou can’t within the query easily without a function call or subquery#2020-03-1318:26favila[:find ?e (count-distinct ?v) :in $ [?v …] :where [?e :attr ?v]]#2020-03-1323:42hadilsI want to use Java 11 lambda ions in Datomic Cloud. My IDE and libraries are set up for Java 11 on my local machine, but it still deploys Java 8. Any suggestions?#2020-03-1323:43ghadi"Lambda ions" don't actually run in AWS Lambdas#2020-03-1323:43hadilsAh#2020-03-1323:44ghadithe lambdas are cookie-cutter forwarding proxies that talk to the compute group, which actually runs your code#2020-03-1323:44ghadithat way your code runs with the full database essentially "local"#2020-03-1323:45ghadionce a machine is hot, it scorches#2020-03-1323:45ghadiNot sure what the timeline is on Java 11 support. 
I'm sure it's on the roadmap but not sure where#2020-03-1323:46hadilsThanks @ghadi#2020-03-1323:46ghadino problem#2020-03-1323:46ghadinot sure if there is a diagram of how this all works somewhere#2020-03-1323:47ghadi@hadilsabbagh18 https://docs.datomic.com/cloud/ions/ions.html#2020-03-1323:47ghadidiagram near the top#2020-03-1323:49ghadiincidentally I have been working on a AWS Lambda custom runtime for clojure -- as a separate effort#2020-03-1323:49ghadican run Java 14 in the Lambda, where your function handler is an ordinary var that gets called with ordinary maps as arguments, and returns a map#2020-03-1323:50ghadican set clojure.core/merge as a valid lambda handler#2020-03-1323:50hadilsNice!#2020-03-1323:50ghadiit's like 100LOC + 25MB for the JVM#2020-03-1323:50ghadino silly macros to do the silly java interop#2020-03-1323:52ghadi@hadilsabbagh18 is there something from Java 11 that you really really want to use?#2020-03-1323:53ghadiPersonally I like the https://java.net.http client being available. Cuts down on a lot of deps.#2020-03-1323:53hadilsNo, I was hoping to get a little more speed out of the JVM. Want to use the latest and greatest before going to Production.#2020-03-1323:54ghadigotcha#2020-03-1410:06fmnoiseproblem still occurs after update https://clojurians.slack.com/archives/C03RZMDSH/p1583998982381900#2020-03-1410:07fmnoisetransactor log says
2020-03-14 04:24:02.043 WARN default o.a.activemq.artemis.core.client - AMQ212037: Connection failure has been detected: AMQ119014: Did not receive data from /10.133.74.15:55932 within the 10,000ms connection TTL. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
#2020-03-1601:28Jon WalchAnyone seen this on datomic cloud?
clojure.lang.ExceptionInfo: Ops limit reached: 64
at datomic.client.api.async$ares.invokeStatic(async.clj:58)
at datomic.client.api.async$ares.invoke(async.clj:54)
at datomic.client.api.sync$eval2156$fn__2157.invoke(sync.clj:90)
at datomic.client.api.protocols$fn__14405$G__14359__14418.invoke(protocols.clj:126)
at datomic.client.api$pull.invokeStatic(api.clj:285)
at datomic.client.api$pull.invoke(api.clj:271)
at datomic.client.api.sync$eval2156$fn__2157.invoke(sync.clj:91)
at datomic.client.api.protocols$fn__14405$G__14359__14418.invoke(protocols.clj:126)
at datomic.client.api$pull.invokeStatic(api.clj:287)
at datomic.client.api$pull.invoke(api.clj:271)
#2020-03-1603:12aldycoHi everyone,
I'm having trouble transacting into datomic, but querying is fine. I get a timeout when I transact, and get this error:
AMQ212054: Destination address=-596311ff-6dde-4e20-8f54-57f93ad53fdb.tx-submit is blocked. If the system is configured to block make sure you consume messages on this configuration.
org.apache.activemq.artemis.core.client.impl.ClientProducerCreditsImpl in acquireCredits at line 95
org.apache.activemq.artemis.core.client.impl.ClientProducerImpl in sendRegularMessage at line 285
org.apache.activemq.artemis.core.client.impl.ClientProducerImpl in doSend at line 263
org.apache.activemq.artemis.core.client.impl.ClientProducerImpl in send at line 119
datomic.connector.TransactorHornetConnector$fn__9976$fn__9979$fn__9980$fn__9983 in invoke at line 293
datomic.connector.TransactorHornetConnector$fn__9976$fn__9979$fn__9980 in invoke at line 293
datomic.connector.TransactorHornetConnector$fn__9976$fn__9979 in invoke at line 285
datomic.connector.TransactorHornetConnector$fn__9976 in invoke at line 283
Any idea/pointer where to check?
Thank you in advance.#2020-03-1605:23hdenHas anyone successfully logged into the datomic forum lately?
https://forum.datomic.com
> Your account hasn’t been approved yet. You will be notified by email when you are ready to log in.#2020-03-1613:33Joe Lane@hden I just logged in, but I already had an account. Do you already have an account?#2020-03-1613:35hdenI created the account over the weekend.#2020-03-1613:37hden@U0CJ19XAM Do you know how long those approvals usually take?#2020-03-1613:38Joe LaneNo, unfortunately. I presume it's a human action so it's subject to their availability. Hopefully soon things like this can be sped up.#2020-03-1613:38Joe LaneIs there something you want to post in particular?#2020-03-1614:02hdenI’ve just received the approval email. Thanks for your time!#2020-03-1615:59vQuick question: does the datomic client support :db.attr/preds in the schema? I am trying to add a validation function to one of the attributes. When I try to transact the data, I get this error:
Could not locate datomicexample/bbank/pred__init.class, datomicexample/bbank/pred.clj or datomicexample/bbank/pred.cljc on classpath.
#2020-03-1616:00vHere is the pred name space
(ns datomicexample.bbank.pred)
;;
(defn chequing?
[val]
(and (number? val) (< 0 val)))
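Independent of the classpath issue discussed below, the predicate itself can be sanity-checked at the REPL with plain Clojure (no Datomic needed):

```clojure
(require '[datomicexample.bbank.pred :as pred])

(pred/chequing? 100) ;; => true  (a positive number)
(pred/chequing? 0)   ;; => false (not greater than zero)
(pred/chequing? "x") ;; => false (not a number)
```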
#2020-03-1616:00vHere is how it is being used
(def account-schema
[{:db/ident :account/chequing
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one
:db.attr/preds 'datomicexample.bbank.pred/chequing?
:db/doc "The chequing amount in the Account"}
{:db/ident :account/savings
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one
:db/doc "The Users saving amount in the Account"}])#2020-03-1616:21favilapredicate functions must be on the classpath of the transactor#2020-03-1616:29vI am following this tutorial. https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html. How do I know which name space the transactor lives in#2020-03-1617:05v@U09R86PA4 should it be in the same file we the .properties file where we put licence key and other configurations#2020-03-1617:06favilahttps://docs.datomic.com/on-prem/database-functions.html#classpath-functions#2020-03-1617:07favilaexport DATOMIC_EXT_CLASSPATH=mylibs/mylib.jar#2020-03-1617:07favila(I don’t think it has to be a jar--a clj source directory should work too)#2020-03-1617:33vThank you for your help#2020-03-1620:49joshkhGiven the following model:
{:list/order [{:order/from 'a
:order/to 'b}
{:order/from 'b
:order/to 'c}
{:order/from 'c
:order/to 'd}]}
How might I efficiently query for an :order/from value which is not referenced by any :order/to attribute, and in this example represents the beginning of a list? Unification is getting the best of me.#2020-03-1621:06favila(d/q '[:find ?from
:where
[?l :list/order ?o]
[?o :order/from ?from]
(not-join [?l ?from]
[?l :list/order ?o2]
[?o2 :order/to ?from])]
[['l :list/order 'o1]
['l :list/order 'o2]
['l :list/order 'o3]
['o1 :order/from 'a]
['o1 :order/to 'b]
['o2 :order/from 'b]
['o2 :order/to 'c]
['o3 :order/from 'c]
['o3 :order/to 'd]]
)
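As a sanity check of the query above: in the sample tuples the :order/to values are b, c, and d, so a is the only :order/from value never referenced, and the expected result is:

```clojure
#{[a]} ; the symbol a — the beginning of the list
```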
#2020-03-1621:07favilaI can’t speak to its efficiency, but what was probably tripping you up was that you have to unify that ?o2 separately#2020-03-1708:03joshkhyou're exactly right, my or-join was missing a separate binding. thanks favila!#2020-03-1712:47pmooserIt's a shame that when they added support for strings as temp-ids, they didn't make strings work as temp-ids for the entity function, so it's easy to have tx-fns that use entity explode in your face. I'm not sure I think they would consider this a bug, but it's the kind of sharp edge that feels all-too-common in datomic.#2020-03-1713:41matthaveneryou mean like (d/entity some-db "a-tempid-string") ?#2020-03-1713:42matthavenerwe usually just transact, and then do (d/entity after-db (get tempids some-tempid))#2020-03-1713:51pmooser@matthavener Yes, that's what I mean. But I have some transaction functions that internally use entity , which can't handle those string tempids, so the transaction will abort. I don't see how transacting is an option, since the problem is that the tx-fns won't run.#2020-03-1713:52pmooserThat's why as far as I can tell the solution is just not to us those string ids if you have a situation like this.#2020-03-1713:52favila@pmooser that’s really a type error: those tx fns cannot accept a tempid as an argument#2020-03-1713:52favilawhatever work they are doing requires knowledge they cannot have--silently doing nothing seems wrong#2020-03-1713:53favilaexplosion seems better#2020-03-1713:53pmooser@favila I mean it depends on what you think it means for datomic to support strings as tempids - I would say the error is incompletely supporting it. 
It's not like the docs clearly state you can use them only in certain places as strings.#2020-03-1713:53favilayou can use tempids anywhere they are not resolved#2020-03-1713:54pmooser(entity db (tempid :db.part/user)) works, even for an unresolved tid#2020-03-1713:54pmooser(entity db "anything") doesn't work#2020-03-1713:54pmooserThat means strings aren't tempids and don't behave like them.#2020-03-1713:54favilayeah, I consider the former a bug, I remember complaining about that back in the day#2020-03-1713:55pmooserI mean it's hard to say what is intended in this case, since (as often happens with datomic) it's hard to know which of these edge cases are intentional and which aren't, and the documentation does little if anything to clarify.#2020-03-1713:55favilaI think this is a clojure culture problem in general#2020-03-1713:56pmooserIn any case, in terms of concrete things, in my situation it just means we either have to walk some data structures and replace strings with temp-ids, or just avoid the strings where possible. In this case it's a little unfortunate, since strings as tempids are a convenient affordance for things like client-generated data.#2020-03-1713:56favilaedge cases are often not called out or checked and you only know they are edges by experience and intuition#2020-03-1713:56faviladon’t you have to walk anyway though?#2020-03-1713:57pmooserI think it's also just that despite clojure being a great language, they don't have the impression that this is a significant sort of quality problem. There are some truly shocking bugs in things like core.async - and they're known, and they'll probably never be fixed.#2020-03-1714:03alexmillerWhat?#2020-03-1713:57favila[{:db/id tempid :upserting-attr bar} [:txfn tempid]] is wrong whether tempid is a string or a record.#2020-03-1713:57pmooser@favila Not in general, no. 
I mean, I don't walk these structures I'm transacting unless I'm doing some modification of the data anyway.#2020-03-1713:58pmooserI'm not sure what you are implying - that I can't use a tempid of any kind as an argument to a tx-fn?#2020-03-1713:58pmooserThat's demonstrably false, I'm fairly sure.#2020-03-1713:59pmooser(I probably just don't understand the particularity you're trying to illustrate with upsert)#2020-03-1713:59favilaI mean if :txfn needs to read tempid, these two transactions are “equivalent” if [:upserting-attr :bar] exists:#2020-03-1713:59favila[{:db/id tempid :upserting-attr :bar} [:txfn tempid]]#2020-03-1713:59favila[[:txfn [:upserting-attr :bar]]#2020-03-1714:00favilabut one of them, :txfn can deref, the other cant#2020-03-1714:00favilaeven though the state of the db is the same and arguably the tx should do the same thing#2020-03-1714:00favilathat’s why I argue, if a txfn is expected to read an entity provided as input, it should hard-fail if that input is not resolveable (i.e. 
a tempid)#2020-03-1714:01ghadiI agree with the last point ^#2020-03-1714:01pmooserI'm still trying to chew on this and understand it.#2020-03-1714:02favilatransaction functions are transaction “macros”#2020-03-1714:02favilathey can read the environment (&env, the db), but they cannot read “in progress” transactions#2020-03-1714:03favilawhat tempid resolves to is not known until the very end, and then all fully-expanded adds and retracts are applied simultaneously, set-wise#2020-03-1714:03pmooserI think it's a little more subtle than that.#2020-03-1714:03pmooserThey can't "read" the current tx, because it essentially has not happened yet.#2020-03-1714:03favilayes, that’s fine#2020-03-1714:04favilamy point is that there’s a syntatic transformation occurring, and if that transformation needs to read the environment to perform the transformation, it can only do so with what is available to it syntatically#2020-03-1714:04pmooser@favila It's going to take me a couple minutes to respond, because I want to go experiment with what you are saying a little bit.#2020-03-1714:05favilaand tempids are not resolveable syntatically. 
some other opaque tx fn could emit an assertion which changes its resolution#2020-03-1714:05favilaonly when the full set of primitive asserts/retracts is known and the syntax is fully expanded can tempids be resolved#2020-03-1714:14pmooserOk, so I sort of understand what you mean, but I don't think it's even quite correct.#2020-03-1714:15pmooserYou're right that the tempid passed to your tx-fn won't be magically converted to the db/id of the upsert,#2020-03-1714:15pmooserbut if the tx-fn uses the tempid to assert some things, datomic will correctly eventually resolve that tid to the upserted db/id.#2020-03-1714:15pmooserSo I don't completely understand the point you were trying to make, or what it has to do with my original point, which is that half-implemented features (ie, temp-ids as strings) are unfortunate sources of complexity and sharp edges.#2020-03-1714:16ghadiyou're thinking about it wrong and presuming it's half implemented#2020-03-1714:16pmooserI suppose that's a convenient point of view, from the standpoint of the implementer.#2020-03-1714:17ghadiI'm not the implementor, I'm a user#2020-03-1714:17pmooserI don't mean you, I'm just saying, it suits the creator, not the users.#2020-03-1714:17ghadia tempid isn't resolved to a real entity id before the transaction commits#2020-03-1714:17pmooserOk, I'm probably just being really dense, but so? What does it have to do with the changelog that says you can use strings as tempids, when they are not substitutable?#2020-03-1714:19pmooserLike it's hard for me to understand the idea of that the implementation of entity isn't wrong in the sense that either it should work for all representations of temp-ids, or for none of them. 
I'm not quite sure how you think some other solution serves the users.#2020-03-1714:19ghadithey can appear in transaction data, but they cannot appear as an argument to d/entity#2020-03-1714:19pmooserThe minimum acceptable version would be for it to be documented.#2020-03-1714:19pmooserWell, what you just said isn't true, as it happens, depending on what you mean by temp-id, which is my point.#2020-03-1714:21favilaI concede absolutely, that a string tempid and a record tempid should fail in the same way; however, whichever way they fail, you still need to do the same checks#2020-03-1714:21pmooserMy code isn't wrong and isn't failing to check#2020-03-1714:21favilabecause at the end of the day, you still need to throw to abort the tx#2020-03-1714:22ghadizoom out and re-examine what you're trying to achieve. calling d/entity on a tempid is a detail, what is the overarching goal?#2020-03-1714:22ghadiI haven't read the full scrollback#2020-03-1714:22ghadifeel free to link something if you've already said it#2020-03-1714:22pmooserIt's just a discussion of wishing the API behaved consistently#2020-03-1714:23pmooserMy code works, I worked around the fact that entity freaks out if you call it using a string on a tempid#2020-03-1714:23pmooserMaybe I mistakenly gave you guys the impression that I'm trying to fix a bug - the bug is fixed. It's just yet another case of having to understand datomic from its behavior than from any kind of real specification.#2020-03-1714:23ghadiwell I can't help with wishes 🙂 what semantics would it even have to call d/entity on a tempid? like what would it return?#2020-03-1714:24pmooserExactly what it does if you call it on a tempid. In fact, entity confuses me in that it will return something for any integer value you pass it, even if the entity doesn't exist. 
I can't explain that, just as I can't explain why it only works for certain representations of tempids but not others.#2020-03-1714:25favilawhat does it mean for an entity to exist?#2020-03-1714:25pmooserWhat I would have it do, if I could control it, is: whatever entity does, do it for all representations of tempids, and not behave differently depending on representation of tempid.#2020-03-1714:25pmooser@favila Is that a trick question?#2020-03-1714:26favilano#2020-03-1714:27faviladatomic stores datoms, not entities. d/entity provides a projection of datoms as a map. So, what does it mean for an entity to exist?#2020-03-1714:29pmooserIf we accept your definition, presumably it must mean an entity with id E exists if at least one datom has ever been asserted with E in the first position of the EAVT tuple.#2020-03-1714:29favilaok, entity-maps are lazy. your definition requires at least some eagerness#2020-03-1714:30pmooserWhat is your (presumably more correct) definition?#2020-03-1714:31favilaI don’t think it’s a meaningful question. an entity is a key upon which to join datoms#2020-03-1714:31favilait’s not like an id in a row in a relational store#2020-03-1714:31favilathe row exists, so the thing exists#2020-03-1714:32pmooserThat is essentially what I said.#2020-03-1714:32pmooserA row existing is isomorphic to some assertion having been made.#2020-03-1714:32pmooserBut you rejected it, for some reason.#2020-03-1714:32favilait matters for what d/entity returns. 
(d/entity 9999) => {:db/id 9999} (assuming 9999 has no assertions)#2020-03-1714:32pmooserOk#2020-03-1714:32favilaor (d/entity 9999) => nil#2020-03-1714:32favilawhich is right?#2020-03-1714:32pmooserI must just be communicating very poorly#2020-03-1714:33pmooserLet me ask you a direct question which will help clarify.#2020-03-1714:33pmooserWhy should entity behave differently for different representations of the same concept?#2020-03-1714:33pmooser(I have no idea the relevance of anything else you're talking about to THIS, which is my fundamental point/question. As far as I can tell, all of these existential questions about entities have absolutely no bearing on this, which is fundamentally a behavioral question.)#2020-03-1714:34favilato me, the bug is (d/entity db (d/tempid :user/foo)) should be nil#2020-03-1714:34favilait’s an implementation detail that it’s not, namely that tempid records encode a negative number, which represents a tempid#2020-03-1714:35favilathe argument to d/entity got passed to d/entid (or moral equivalent) at some point, and that’s why you have what you have#2020-03-1714:35pmooserI would accept that as well, because to me, the confusion is from the inconsistency.#2020-03-1714:35favilabut they probably did a long? 
check on the result, instead of positive-long (or 0)#2020-03-1714:36pmooserThe utility of having entity work for a tempid is that you can still use entity to check if there are existing values for an attribute that you intend to set (and there's actually no worry about them being asserted elsewhere in the tx, since the existence of that would create multiple assertions, which would abort the transaction anyway).#2020-03-1714:36favilaso I would sure like it if d/entity returned nil to indicate any unresolvable state#2020-03-1714:36pmooserNow, if I haven't communicated that clearly, I don't think that I can really explain it any better.#2020-03-1714:37pmooserIf entity returned nil for tempids, it would be fine, I'd just make sure that I would query for the existing values, and you'd have EXACTLY the same problem, as the query couldn't see anything else you asserted in the same TX but hasn't been committed yet.#2020-03-1714:37favilacorrect#2020-03-1714:37pmooserBehaviorally, there IS utility in this definition.#2020-03-1714:37pmooserBut you have to be cautious with using it this way.#2020-03-1714:37favilawhich is why I was arguing, in both cases, the only appropriate thing is to check and abort#2020-03-1714:37pmooserRight, and that is where we disagree.#2020-03-1714:37pmooserI don't need to abort - the transaction in my particular case is well-formed and meaningful - even with the result of entity being passed a temp-id.#2020-03-1714:37favilathe check changes and is much more convenient if d/entity is consistent (really d/entid)#2020-03-1714:39favilaso you are using this tx fn in such a way that you have some guarantee from the application that it will never try to change something the tx fn will read in a way that would affect its functioning, nor ever compose this tx fn in the same tx with anything else that might do the same?#2020-03-1714:39pmooserI agree completely on all counts that entity should behave consistently - and ideally, the behavior would be documented
or specified somewhere.#2020-03-1714:39pmooserLet me answer that question in 2 parts:#2020-03-1714:41pmooser1. In this particular case, the tx-fn makes assertions of a particular attribute. Something else making an assertion of the same attribute for the same entity somewhere else in the tx would be an error anyway, since you're not allowed to make ambiguous transactions. So no, that's not a problem.
2. Do you think that all functions we write, and especially transaction functions, are arbitrarily and infinitely composable? I assure you this is not a general property of transaction functions.#2020-03-1714:42pmooserThe fundamental problem here, as far as the argument you're making, isn't even about entity behavior, because as I said above, even if this didn't work with entity at all, you'd just have to query for existing attribute values, and that would have the same potential issue you're worried about.#2020-03-1714:42pmooserI think the problem that concerns you, then, is whether we can really have tx-fns that work on both existing and new entities, correctly in both cases.#2020-03-1714:47favilafor 1) I don’t know exactly what you are doing so perhaps you are avoiding this case, but the upserting case from earlier is what I was thinking of. 2) absolutely not, which is why the caution about throwing if someone accidentally supplies a tempid to a tx-fn that is expected to read the value of the tempid. 
Correct to your last two paragraphs.#2020-03-1714:47pmooserWhat I will say is that the tx-fn in question that I wrote is pretty specialized, and I acknowledge the wisdom in what you're saying in general - at the very least we have to be careful, and in many circumstances, having entity return non-nil when something isn't really there could create problems.#2020-03-1714:50favilanot knowing your problem specifically, I would prefer using a sentinel value to a tx fn (say, nil) to indicate “I am minting a new entity id, it has no assertions before this point” rather than a tempid to communicate the same.#2020-03-1714:51pmooserThat's an interesting idea.#2020-03-1714:51favilasince a tempid doesn’t make that guarantee#2020-03-1714:52pmooserI like the clarity of your idea - in my particular case, it would just require slightly different code.#2020-03-1714:52favilaalthough of course, you may still need the tempid in order to hang new assertions off of it#2020-03-1714:52favilaI guess my point is really that the intent should be communicated out of band somehow#2020-03-1714:53favilaI’m also starting to like using :db/ensure to check invariants afterwards#2020-03-1714:53favilait can catch many cases of “accidental composition” of transactions#2020-03-1714:53favilaand races#2020-03-1714:53pmooserYeah, if I had upserting attributes in the entity, what I'm doing wouldn't work at all.#2020-03-1714:54pmooserSince I do not have that case (currently), that means the presence of a tempid does actually indicate a new entity.#2020-03-1714:54pmooserIf that changed in the future, it could break things.#2020-03-1721:55mruzekwHow do I get all attributes on a matched entity? Say I have a database with movies. And I want all movies before a certain year. How would I do that?
(d/q '[:find ?movies
:where [?movies :movie/release-year ?myear]
[(< ?myear 2013)]] db)
Right now this only returns the entity id. How would I change this to include all attributes on that entity?#2020-03-1721:57favilaYou probably want pull#2020-03-1721:57favilahttps://docs.datomic.com/on-prem/pull.html#2020-03-1721:57favila:find (pull ?movies [*])#2020-03-1721:58favilafor your sanity, though, production code should generally be explicit about attributes#2020-03-1721:59mruzekwSo typically you’d only bind certain attributes?#2020-03-1722:00favilayes, whatever you needed#2020-03-1722:00favilaor you would separate query from pull explicitly (using pull-many)#2020-03-1722:01mruzekwHmm I honestly haven’t looked at pull yet, very new to Datomic#2020-03-1722:01mruzekwCould I bind multiple attributes from a query into a map rather than a list?#2020-03-1722:02mruzekwI know I can do (d/q '[:find ?title ?year#2020-03-1722:02favila(pull db [:foo] entity ) => {:foo "bar"}#2020-03-1722:02favilait’s already a map#2020-03-1722:02mruzekwOkay, I will look into pull then. It seems to be what I’m looking for#2020-03-1722:03mruzekwThank you 🙏#2020-03-1722:03favilathere’s also :keys in query, if you just want a natural name for each member of the result tuples#2020-03-1722:03mruzekwCool, I’ll look that up as well#2020-03-1722:04favila:find ?a ?b ?c :keys a b c => [{:a 1 :b 2 :c 3} ,,,]#2020-03-1722:15mruzekwuser=> (d/q '[:find (pull ?movies [:movie/title :movie/directors]) :where [?movies :movie/release-year ?myear] [(> ?myear 2013)]] db)
[[#:movie{:title "Paddington", :directors [#:db{:id 17842874695549022}]}]]
How would I go further and retrieve the directors’ as well?#2020-03-1722:18mruzekwNever mind#2020-03-1722:18mruzekwuser=> (d/q '[:find (pull ?movies [:movie/title {:movie/directors [:director/name]}]) :where [?movies :movie/release-year ?myear] [(> ?myear 2013)]] db)
[[#:movie{:title "Paddington", :directors [#:director{:name "Paul King"}]}]]#2020-03-1722:18mruzekwThat is super sweet!#2020-03-1722:18mruzekwIt’s like GraphQL built in#2020-03-1721:56mruzekwOther attributes include :movie/title :movie/directors :movie/actors#2020-03-1805:11tekacscould anyone speak to whether the peer vs. client, on-prem vs. cloud discussion has a settled answer for new projects?#2020-03-1805:11tekacsI'd love to use peer + on-prem, but I have a lingering worry about its development stalling out or other similar complications over time#2020-03-1805:16onetomThe ability to excise and being able to depoly on non-aws infrastructures such distinctive features of datomic on-prem are important requirements for medical and banking industries still, so i would be really surprised if the on-prem development would stall.#2020-03-1805:17onetomon the question of client vs peer, im unsure... i really love the entity api and the fact that i can just run integration-level test using in-memory dbs, which allow direct access to my database functions.#2020-03-1805:18tekacsmm -- I'm curious to hear if there's an answer on the topic that's settled -- I'll wait to hear, or might reach out to ask#2020-03-1805:19onetomwith the help of https://github.com/ComputeSoftware/datomic-client-memdb you can still create a specific database state using the peer api, then run your app functions which use the client api against that prepared state#2020-03-1816:10Jon WalchI use this library daily and the maintainer (@U083D6HK9) is awesome#2020-03-1805:20onetombtw, is there any official or non-official Python lib for the Datomic Client API or at least some specs?
All I gathered from the docs that the client api is a http api using transit encoding as seen it being mentioned on this diagram:
https://docs.datomic.com/on-prem/architecture.html#storage-services
If there isn't, can we write one legally?#2020-03-1816:36geodromeHello,#2020-03-1816:37geodromeCan a datomic peer server and client app process reside on the same machine to eliminate the network hop?#2020-03-1816:37marshallyes they can; keep in mind that if you do so you’ll be sharing system resources#2020-03-1816:38marshalland system problems will take them both down, as opposed to if you have one peer server serving multiple remote clients#2020-03-1816:38geodromeok, thank you#2020-03-1816:44geodromeIs the typical architectural pattern to have distinct peer servers serving groups of clients with same/similar working sets?#2020-03-1816:49marshallyou can definitely get some cache similarity advantage that way#2020-03-1816:50marshalli’d say ‘typical’ depends a lot on your use patterns#2020-03-1816:50marshalli.e. how much query work you need to do, how many clients you have, what your deployment environment makes feasible#2020-03-1816:52marshallat a high level, a beefy peer server with a big heap (and maybe valcache) can often serve numerous clients (again, depending on their workloads/etc)#2020-03-1816:53marshallbut you could also stand up numerous peer servers, each serving different workloads#2020-03-1816:57geodromeok yeah, makes sense, thanks#2020-03-1820:06Jon WalchIs it possible / what's the proper syntax to do something like this for Datomic Cloud?
(let [attrs '[:foo/bar]]
  (d/q {:query '[:find (pull ?e ?attrs)
                 :in $ ?attrs
                 :where [?e :foo/baz? true]]
        :args [db attrs]}))#2020-03-1820:08ghadiI would look to the d/datoms or d/index-range API @jonwalch#2020-03-1820:08ghadiof course you could do it by generating the query datastructure with ordinary clojure#2020-03-1820:10Jon WalchThis worked
(d/q {:query '[:find (pull ?prop attrs)
               :in $ attrs
               :where [?prop :foo/baz? true]]
      :args [db attrs]})#2020-03-1820:10Jon Walchhttps://forum.datomic.com/t/dynamic-pull-selector/478/2#2020-03-1820:11ghadioh, I misread you#2020-03-1820:11ghadi(you're not looking for a where clause attribute binding to be dynamic)#2020-03-1820:11ghadibut rather the pull#2020-03-1820:12marshallhttps://docs.datomic.com/cloud/query/query-data-reference.html#org8b379b4#2020-03-1820:12marshallDocs for what you’re looking for ^#2020-03-1916:11vHi, is there a simpler way to solve this.
(d/q '[:find (pull ?fid [*])
       :in $ ?email ?follow-id ?follower-email
       :where
       [?uid :user/email ?email]
       [?uid :user/following ?follow-id]
       [?fid :user/email ?follower-email]]
     (d/db conn)
"#2020-03-1917:19favilaSimpler in what way?#2020-03-1917:19favilaSolve what?#2020-03-1917:34vSo I have a user-id and a follower-id, I want to fetch all information of the follower if the user is following#2020-03-1917:35vI managed to simplify down to this function
(defn get-follower
  "Fetch all the User followers information"
  [db user-email follower-email]
  (->> (d/q '[:find (pull ?follow-id [*])
              :in $ ?email ?follow-id
              :where
              [?uid :user/email ?email]
              [?uid :user/following ?follow-id]]
            db
            user-email
            [:user/email follower-email])
       ffirst))#2020-03-1917:36vPlease disregard this post 🙂#2020-03-1917:36ziltiIs there some place that helps me decide on which storage engine I should use for Datomic?#2020-03-1917:37vYes. Here is a comprehensive documentation https://docs.datomic.com/on-prem/storage.html#2020-03-1917:40ziltiThis tells me the how, not really the why I should decide for a particular one. I guess PostgreSQL is the "standard"? And, will Cassandra give me more performance, or PostgreSQL?#2020-03-1917:42faviladatomic uses storage as a generic key-value store, with only a few keys needing atomic updates#2020-03-1917:43favilathe particulars of the storage systems installs themselves are going to dominate more than which technology you use#2020-03-1917:43favila(and if you are on aws, dynamo is likely the “standard” choice)#2020-03-1917:51ziltiIt would be on premise for us, we're a startup and it is the best for us to keep the data on our own servers. Okay, so there isn't any particular difference as far as performance goes, good to know. Thanks!#2020-03-1917:52zilti(that said, we'll have to look how we proceed after a year with this new product of ours. Over $400/month for on-premise is a hefty price tag for a startup)#2020-03-1918:05favilaIMO you should prefer whatever you are most familiar with#2020-03-1918:06favilayou will already know how to resize, scale, and tune it as needed#2020-03-1917:58v@favila Is there a way to pass a pattern as an argument, to determine what items you want to fetch
(defn fetch-articles-by-tags
  [conn tags pattern]
  (d/q '[:find (pull ?aid ?pattern)
         :in $ ?tags ?pattern
         :where
         [?aid :article/tagList ?tags]]
       (d/db conn)
       tags
       pattern))#2020-03-1917:58vthis code for example does not work#2020-03-1917:58favila(defn fetch-articles-by-tags
  [conn tags pattern]
  (d/q '[:find (pull ?aid pattern)
         :in $ ?tags pattern
         :where
         [?aid :article/tagList ?tags]]
       (d/db conn)
       tags
       pattern))#2020-03-1917:59vWhy didn't mine work, does adding ? make a huge difference?#2020-03-1918:00faviladocumented here https://docs.datomic.com/on-prem/query.html#pull-expressions#2020-03-1918:01favilanormally aggregation functions in find can receive only one bound value#2020-03-1918:01vYou are the best 😄#2020-03-1918:01favilathis is a special-case. by omitting ? you are signaling that this is not a binding; then it inlines that into the pull#2020-03-1918:01favilathis wasn’t even supported in the beginning--this feature got added at some point without my noticing#2020-03-1918:02vRight, definitely a gotcha for someone learning this technology#2020-03-2017:41v@favila Quick question. How do I start the datomic UI console. This is the URI from my transactor
datomic:
#2020-03-2017:42vI first tried starting the peer server by running
bin/run -m datomic.peer-server -h localhost -p 9001 -a myaccesskey,mysecret -d helloo,datomic:
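[For reference, the peer-server launch above follows the shape from the on-prem client docs; the access key/secret and the in-memory storage URI below are the docs' placeholder dev values, not anything from this thread:]

```shell
# Sketch of the on-prem peer server launch (dev/in-memory storage shown;
# substitute your own storage URI, access key, and secret):
#   -h host   -p port   -a accesskey,secret   -d dbname,db-uri
bin/run -m datomic.peer-server \
  -h localhost -p 9001 \
  -a myaccesskey,mysecret \
  -d helloo,datomic:mem://helloo
```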
#2020-03-2017:44vThen next I ran
bin/console -p 8084 helloo datomic:
to start the console, the UI opens. but when I choose DB, I get this error
java.lang.RuntimeException: Could not find hello//hello in catalog
#2020-03-2017:56favilathe console and peer server are unrelated#2020-03-2017:57favilahttps://docs.datomic.com/on-prem/console.html#starting-the-console#2020-03-2017:58favilatry `
bin/console -p 8084 localhost-postgres datomic:sql://?jdbc:
#2020-03-2017:59favilanote, the alias here is not a db alias, but a “storage” alias (i.e. all the dbs managed by a transactor)#2020-03-2017:59favilaand you don’t specify the db name in the uri#2020-03-2017:59favila(because console can access all dbs)#2020-03-2018:06vThank you so much 🙂#2020-03-2017:59drewverleeIs there an easy way to construct the where clause for a query? Like conditionaly i want this where clause or this one.#2020-03-2017:59favilacond-> and a steady hand#2020-03-2018:00favilais it not an appropriate use for a or or rule?#2020-03-2018:06drewverleegotcha. I was trying to remember how programmatic queries worked.
https://docs.datomic.com/on-prem/query.html#building-queries-programmatically#2020-03-2018:07drewverleeThe other options you mentioned might work to for my case.#2020-03-2018:13favilain general you want static queries https://docs.datomic.com/on-prem/best-practices.html#parameterize-queries#2020-03-2018:14favilabut not always. ad-hoc filters is one case where it’s just easier and more predictable performance to add/remove clauses#2020-03-2018:14favilabut keep clause order deterministic, so you can still leverage caching#2020-03-2113:33dustingetz@U0DJ4T5U1 you want backtick https://github.com/brandonbloom/backtick
(let [needle "foo"]
  (template
    [[(clojure.string/includes? ?name ~needle)]]))
=> [[(clojure.string/includes? ?name "foo")]]
Yes be sure that you aren’t blowing out number of static queries, parameterize them appropriately if possible, injecting a string constant in this way would be a really bad idea, but injecting a clause based on a predicate would be fine as it emits only two possible queries#2020-03-2314:19drewverleeWhat i feel i need to express is the idea of conditionally picking the part of the where clauses.
(d/q '[:find ?x
       :where [?a :fname "drew"]
       (if some-conditional
         [?a :lname "verlee"]
         [?a :age 15])])
I think i need to insert the entire clause(s) as opposed to just an entity, attribute or value.#2020-03-2314:25favilais this because you have a multiparameter search interface, and some fields can be unused?#2020-03-2314:27favilaJust wondering what your use case is. What you wrote here is possible but not the best--parameterization is better; conditionally emitting clauses can sometimes be ok but still using parameterization#2020-03-2117:07David PhamIs there a rule of thumb for knowing how many Datomic systems are required given the size of the data set?#2020-03-2117:27adamfeldmanThere are a few notes regarding scaling here: https://www.datomic.com/cloud-faq.html#_how_big_can_a_database_be#2020-03-2117:41adamfeldmanAlso https://docs.datomic.com/cloud/faq.html#size-limit#2020-03-2209:29David PhamThanks a lot :)#2020-03-2120:42joshkhwhat's the best way to stop a runaway query in cloud? is it to cycle the node, or is there a more graceful solution?#2020-03-2206:54steveb8nQ: I have a stock prod topology and I have started load testing readonly loads. when I max it out, the cpu tops out at approx 50%. Am I right in understanding this is because only one of the two server nodes is handling queries?#2020-03-2423:29steveb8n@U05120CBV could you comment on this?#2020-03-2423:29marshallHow are you "maxing it out"#2020-03-2423:30marshallIs the read load all coming from a single source?#2020-03-2423:33steveb8nI have a cljs lambda which calls two Ion fns. Both are read only. I use Gatling to create increasing levels of load. It reaches an asymtote pretty quickly and the CPU chart in the dashboard never goes above 50% when it hits the asymtote#2020-03-2423:34steveb8n(I can’t spell asymptote)#2020-03-2423:35steveb8nso I guess it is a single source but I’m not sure how lambdas scale out. 
are you suggesting that, if > 1 lambda is making the call, it will distribute across both nodes?#2020-03-2423:35marshallI suspect it has to do with traffic routing on a per database basis for the primary compute group, but i dont know for a fact. I would suggest trying it against a query group#2020-03-2423:36marshallBut also, the thing that matters is your read throughput, not the distribution of cpu usage#2020-03-2423:39steveb8ndigging in. my cljs lambda invokes the Ion lambda via the npm aws client. not sure if that is relevant#2020-03-2423:40steveb8nalso my Ions work with N datomic databases, not just a single one. I’ve segregated database based on usage requirements#2020-03-2423:41steveb8nok. I’ll try to grok “read throughput” a bit more on the next load test#2020-03-2423:42steveb8nin my startup status (i.e. low budget) I’m trying to minimise costs. adding a query group implies extra cost right?#2020-03-2423:42marshallI dont know of the top of my head how the dashboard reports cpu usage. You can always look at individual ec2 instance cpu metrics if you want to examine things further#2020-03-2423:44steveb8nyeah. I thought about watching ec2 node cpu as well. I’ll do that also#2020-03-2423:44steveb8nthanks for the ideas. I’ll refine more on the next test#2020-03-2501:39steveb8nhere’s a thought. would the routing system be different if I started using http-direct for this Ion invocation? It’s something I’ve been meaning to do anyway#2020-03-2513:35marshallshouldn’t change
HTTP direct goes to the LB the same as a lambda#2020-03-3105:17steveb8n@U05120CBV ok. I have since verified that under load, only one of the two nodes are being used for reads. with the OOTB topology, should the load be spread across both? if so, what’s the next step to diagnosing this? Is it possible that my cljs/aws api Ion invocation is somehow affecting the load balancer?#2020-03-3105:19steveb8n(interestingly the architecture diagrams don’t show an NLB behind the lambdas or direct. I trust your statement despite that 🙂#2020-03-3105:20steveb8nhappy to log a ticket to pursue this further if it makes sense?#2020-03-2209:37David PhamCan we translate number of datomic system with the number of transactors?#2020-03-2302:19vnctaingHi, I’ve encountered some issue while I was playing around with datomic, I’ve described my problem on stackoverflow https://stackoverflow.com/questions/60807032/fixing-a-datomic-anomalie#2020-03-2302:29potetm@vnctaing you want all of your key-value pairs in the same map#2020-03-2302:30potetmyou have them in 3 maps#2020-03-2302:30potetme.g.
[{:db/ident :trip/name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}]#2020-03-2302:46vnctaingAh right :man-facepalming::skin-tone-2: ! good catch !#2020-03-2302:46vnctaingThanks !#2020-03-2316:21Braden Shepherdsona general datomic question: does (datoms ...) return only the existing, unretracted values? or does it return the full history?#2020-03-2316:29favilawhat datoms returns depends on the database#2020-03-2316:30faviladatabases all have a basis-t, and can also have any/none of:
• as-of
• since
• history
• filter#2020-03-2316:30favilawhen all of those are nil, you get only assertions valid at basis-t#2020-03-2316:30favilaas-of removes any datom not valid at that time#2020-03-2316:30favilasince removes any datom after that time#2020-03-2316:31favilahistory adds in retractions and datoms no longer valid in the as-of to since range#2020-03-2316:31favila(or the entire history if both are unset)#2020-03-2316:32favilafilter is an arbitrary predicate that runs after all of the above and can suppress datoms from showing up.#2020-03-2316:32vIf you want full history then you have to pass historical database i.e
(def hist-db (d/history (d/db conn)))
;;
(d/datoms hist-db ...)
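[The database views favila describes can be sketched together, assuming an on-prem peer connection `conn`; the date is illustrative only:]

```clojure
;; Sketch of the db views described above, assuming a peer connection `conn`.
(require '[datomic.api :as d])

(let [db (d/db conn)]
  ;; plain db: only assertions valid at the db's basis-t
  (d/datoms db :eavt)
  ;; as-of: the view as it was at an earlier point in time
  (d/datoms (d/as-of db #inst "2020-01-01") :eavt)
  ;; history: adds retractions and superseded assertions
  (d/datoms (d/history db) :eavt))
```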
#2020-03-2316:35vIf you pass the current value of the database, then it will return all the datoms that are valid at the current time.#2020-03-2317:51robert-stuttaford1. composite unique tuple on two other attrs. one attr is the partnership, the other is a unique identifier for that partnership. [partner partner-id]
2. haven't re-transacted those attrs to cause tuple attrs to appear yet
3. want to retract a bunch of redundant values on one of the target values, to avoid having to re-assert for the tuple.
4. transaction at 3 fails because now it's trying to set the same value [partner nil] for the tuple, which fails the uniqueness check.
Can I retract tuple attrs directly? I know I can't retract them now because they don't exist yet, so it seems that I have to first re-assert these values so that tuple datoms are made, and then retract both the identifier and the tuple datoms to clear away the old values.
Happy to rewrite any of this if it doesn't make sense 🙂#2020-03-2317:52robert-stuttafordMy other option is to do a cleanup before asserting the tuple schema, which is still something I can do -- just looking to learn how to deal with this when that is no longer an option 🙂#2020-03-2317:52marshall@robert-stuttaford I think the cleanup beforehand is your best option right now#2020-03-2317:53marshallWe are aware of the complications with automatic composite tuples; don’t have a specific workaround once the unique composite is declared#2020-03-2317:54robert-stuttafordfor interest's sake, let's say that door was closed. am i hosed, or is there a way, even if longwinded?#2020-03-2317:54marshallyou’d have to rename and ‘move’ the attribute#2020-03-2317:54marshalli believe#2020-03-2317:55robert-stuttafordok. is there a way to directly assert updates to tuple attrs? or is it only ever via altering their targets?#2020-03-2317:56marshallno, Datomic does the assertions, you trigger it by ‘touching’ the entity#2020-03-2317:56robert-stuttafordthanks for answering so quickly! it was rad to hang out in Berlin, which feels like it was months and months ago now, haha#2020-03-2317:57marshallyes, it was awesome#2020-03-2317:56marshallactually, you could “turn off” uniqueness#2020-03-2317:56marshallthen fix the problems#2020-03-2317:56marshallthen re-enable uniqueness#2020-03-2318:53robert-stuttafordthat makes sense, thanks!#2020-03-2319:51Jon WalchWhat I want from this query is "Give me the :db/id of the datom that has the largest :foo/end-time"? Is this the correct way to do it? I call ffirst on the output and get what I'm looking for, I'm just not sure of max's behavior with :db/id.
(d/q {:query '[:find (max ?target) (max ?end-time)
               :where [?target :foo/bar? false]
                      [?target :foo/end-time ?end-time]]
      :args [db]})#2020-03-2319:53favilaYou can’t use aggregation for this. You want to fetch the full result, find the highest end-time, and select the target#2020-03-2319:54favila:find ?target ?end-time#2020-03-2319:55favilathen
(apply max-key
       (fn [[_target ^Date end-time]] (.getTime end-time))
       results-of-query)
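[favila's approach can be sketched in plain Clojure; `results` below is a hypothetical stand-in for the `[?target ?end-time]` tuples a real query would return:]

```clojure
;; Pick the tuple with the largest end-time, then take its target.
;; `results` stands in for a real query result set.
(def results
  #{[17592186045418 #inst "2020-03-01"]
    [17592186045419 #inst "2020-03-20"]
    [17592186045420 #inst "2020-02-11"]})

(first
 (apply max-key
        (fn [[_target ^java.util.Date end-time]] (.getTime end-time))
        results))
;; => 17592186045419
```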
#2020-03-2319:55Jon WalchSo if I wanted to bound the total amount of records that this query could return, I'd need to do that some other way?#2020-03-2319:55ghadiif the db you're passing is not a history db but a regular db, ordinary queries return the latest assertion for something#2020-03-2319:56Jon WalchIt is a regular db#2020-03-2319:57ghadiso only history dbs contain all of the past datoms, regular db is "now" (at a particular 't')#2020-03-2319:57ghadiit would be helpful to understand your goals before talking through mechanisms#2020-03-2319:58Jon WalchI have n (and growing) foo s in this example. Right now the query would return all of the records where ?target :foo/bar? false] which is "most" of them#2020-03-2319:59Jon WalchI'd like to avoid returning nearly all foo s from the db in this query#2020-03-2320:02ghadi[?target :foo/end-time ?end-time]
[(> ?end-time threshold)]
and pass in e.g. yesterday as the threshold?#2020-03-2320:02Jon WalchCool, was wondering if there was a better way, but this will do fine. Thanks!#2020-03-2320:50marshallyou could use a subquery#2020-03-2320:51marshalldepending on how ‘unique’ your max value is#2020-03-2320:51marshallhttps://stackoverflow.com/questions/23215114/datomic-aggregates-usage/30771949#30771949#2020-03-2320:51marshallfind the max (or min) value in the inner query, use it find the db/id (or whatever else) in the outer query#2020-03-2321:55Jon WalchOh nice! I'll give this a shot too.#2020-03-2403:33Nolanif an ion fetches an ssm parameter as in the event-example: https://github.com/Datomic/ion-event-example/blob/master/src/datomic/ion/event_example.clj#L40-L48
(def get-params
,,,
(memoize #(,,, (ion/get-params ,,,))))
when can you expect that to get recomputed? e.g. after next deployment, after next upgrade, never, etc.?#2020-03-2405:57eagon@nolan If you've memoized that function like that it'll update each time the process is cycled so an ion deploy will work to refresh. ion/get-params is just a wrapper around https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParametersByPath.html if you want more details
Although it might be a better idea just to write your own version. Be careful about exceeding 10 parameters per deploy group, your app might explode if you don't explicitly work around the 10 results limit in this API call, as last I remember ion/get-params does not implement the logic to keep calling next-token until all params are fetched.#2020-03-2406:00Nolanah! really appreciate the input @eagonmeng. affirms my suspicions (and hopes, desires, etc. 🙏), and makes a lot of sense. also appreciate the additional info re: parameter limit, wont be relevant here, but super good to know.#2020-03-2406:33Nolanas a slightly tangential follow-up, does memoizing the call to ion/get-env make any material difference? or is that an in-process access anyway (only really worried about production, if that matters)?
(little shameless self promotion 🙂 )#2020-03-2415:56BrianQuery question: I have an entity called Interaction with an attribute called :devices which is of cardinality many. Those devices are eids. Given a device eid, I'd like to check to see if that device eid appears in an Interaction's :devices and then return all the other device eids. Here's what I have so far:
'[:find ?devices
:in $ ?dev-eid
:where
[?interaction :devices ?dev-eid]
[?interaction :devices ?devices]
]
However the problem is that my original ?dev-eid is also inside ?devices . I could filter it out after the query, but I feel like it would be better practice to include that filtering in the query (correct me if I'm wrong on that). Additional info: there are only ever 2 eids in any :devices . How can I remove ?dev-eid from ?devices inside my query? Something like "grab all ?devices which are not equal to ?dev-eid ".#2020-03-2416:10favilaAdd
[(!= ?devices ?dev-eid)]
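Putting favila's predicate together with the original query, a hedged sketch of the combined form (attribute names as in the question above; requires a live Datomic db to actually run):

```clojure
;; The extra predicate clause keeps the input device eid out of the
;; result set, so no post-query filtering is needed.
'[:find ?devices
  :in $ ?dev-eid
  :where
  [?interaction :devices ?dev-eid]
  [?interaction :devices ?devices]
  [(!= ?devices ?dev-eid)]]
```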
#2020-03-2416:14BrianPerfect thank you @U09R86PA4! One more (I think) simple thing. I end up only getting a single ?device when I do this. However if I return ?devices ?interaction I end up getting the 4 that I expect (but they are each paired with the ?interaction eid I don't want). It seems like the query just grabs the first one and returns it. How can I have it grab them all?#2020-03-2416:15favilaare the four you expect the same?#2020-03-2416:15favilaquery normally returns sets, so if the same device appears 4 times it won't matter, you will get one device#2020-03-2416:18favilayou can either include :find ?interaction to get the devices per interaction, or use :with ?interaction to include it for the set but then have it removed before returning. queries with :with do not return sets#2020-03-2416:19BrianYou were right! The same device appeared multiple times. The data model was slightly different than I expected. I'm getting exactly what I want now =]#2020-03-2422:49Ben HammondHi.
I'm running datomic-pro-0.9.5697 local transactor and then datomic.peer-server and then datomic.client.api/connect to make the actual connection.
It's been working fine for ages; but I've just started to see
Reflection warning, cognitect/hmac_authn.clj:80:12 - call to static method encodeHex on org.apache.commons.codec.binary.Hex can't be resolved (argument types: unknown, java.lang.Boolean).
Reflection warning, cognitect/hmac_authn.clj:80:3 - call to java.lang.String ctor can't be resolved.
warnings and now
Caused by: clojure.lang.ExceptionInfo: No name matching localhost found
{:cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message "No name matching localhost found", :cognitect.http-client/throwable #error {
:cause "No name matching localhost found"
:via
[{:type javax.net.ssl.SSLHandshakeException
:message "No name matching localhost found"
:at [sun.security.ssl.Alert createSSLException "Alert.java" 128]}
{:type java.security.cert.CertificateException
:message "No name matching localhost found"
:at [sun.security.util.HostnameChecker matchDNS "HostnameChecker.java" 225]}]
:trace
[[sun.security.util.HostnameChecker matchDNS "HostnameChecker.java" 225]
[sun.security.util.HostnameChecker match "HostnameChecker.java" 98]
[sun.security.ssl.X509TrustManagerImpl checkIdentity "X509TrustManagerImpl.java" 459]
...
at datomic.client.api.async$ares.invokeStatic (async.clj:56)
datomic.client.api.async$ares.invoke (async.clj:52)
datomic.client.api.sync.Client.connect (sync.clj:71)
datomic.client.api$connect.invokeStatic (api.clj:118)
datomic.client.api$connect.invoke (api.clj:105)
errors
I'm not aware of changing anything; the SSLHandshakeException makes me wonder if some certificate has expired.
I don't see any errors reported on the transactor log or in the peer server console#2020-03-2422:54Ben Hammondoh I just found https://forum.datomic.com/t/ssl-handshake-error-when-connecting-to-peer-server-locally/1067/7#2020-03-2423:02Ben HammondHmmm, naively adding
:validate-hostnames false
didn't seem to help#2020-03-2423:02Ben Hammondhaven't tried updating datomic binaries though#2020-03-2423:06Ben Hammondjust for reference, I am trying this
(datomic.client.api/connect
(datomic.client.api/client {:server-type :peer-server,
:access-key "myaccesskey",
:secret "mysecret",
:endpoint "localhost:8998",
:validate-hostnames false})
{:db-name "xiangqi"
:validate-hostnames false}
)
and I get
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:56).
No name matching localhost found
#2020-03-2423:24Ben Hammondupdating from
com.datomic/client-pro {:mvn/version "0.8.28"}
to
com.datomic/client-pro {:mvn/version "0.9.43"}
seems to have done the trick#2020-03-2423:27Ben Hammondtransactor/peer server did not need upgrading; just the client#2020-03-2501:56derpocioushey all! Anyone have a simple example of subscribing to changes of a datomic db? 🙏#2020-03-2501:58derpociousAlso, is there any common lein or boot templates to quickly scaffold out a CRUD backend for datomic (ideally a CRUDS starter template with subscriptions built-in as well!) thanks!#2020-03-2513:37adamfeldmanAs far as backends go, the closest I’m aware of is the (alpha) Datomic plugin for Pathom Connect (EDIT: see following message for other good options!) https://github.com/wilkerlucio/pathom-datomic, https://github.com/wilkerlucio/pathom.
If you’re looking for a frontend or full-stack solution, there’s Fulcro RAD (also in alpha and under rapid development) https://github.com/fulcrologic/fulcro-rad-demo, https://github.com/fulcrologic/fulcro-rad-datomic, https://github.com/fulcrologic/fulcro-rad. Note that Fulcro also uses Pathom internally, but RAD uses its own Datomic adapter.
There exist nice ways of adding websockets to Fulcro (enabled via Sente), but I don't believe it's currently pre-integrated with Fulcro RAD etc https://github.com/fulcrologic/fulcro-websockets#2020-03-2513:44adamfeldmanIf you already have the frontend settled, another option may be to stand up a GraphQL server. Lots of ways to do that.
Over Postgres (or Yugabyte…) with https://hasura.io (supports subscriptions out of the box). To bridge Clojure(script) and GraphQL, you might use https://wilkerlucio.github.io/pathom/#GraphQL.
There’s also this new, pure-Clojure GraphQL API creator for MySQL/Postgres https://github.com/graphqlize/graphqlize
Hodur has an “experimental” adapter for declaratively creating a GraphQL API over Datomic (and lots more cool stuff) https://github.com/hodur-org/hodur-engine#2020-03-2517:01derpociousThanks @UCHV4JZ7A! Interesting, I've never heard of Pathom or "EQL"#2020-03-2517:02derpociousI thought you could already ask for pieces of data with the standard Datomic api. If that is the case then isn't graphQL kind of unnecessary? 🤔#2020-03-2517:14adamfeldmanI think one way to explain it is that EQL+Pathom or GraphQL both serve to decouple your frontend from the schema within your database. Datomic (as with all? SQL databases) is not intended to be exposed directly to a frontend client.
This talk from the creator of Pathom may be enlightening: https://www.youtube.com/watch?v=IS3i3DTUnAI. There are a few other good Pathom (and Fulcro) talks out there from the same person#2020-03-2517:17adamfeldmanIt’s possible you and I are dreaming about the same kind of thing — easily pull DB data from a SPA-style web frontend, with minimal setup. The closest I’ve found outside Clojure-land is https://github.com/marmelab/react-admin + https://github.com/hasura/ra-data-hasura.
As it matures, I expect Fulcro RAD will enable a similar experience to that, but built on better and more flexible primitives that you can’t ever outgrow#2020-03-2517:22adamfeldman(“Better” to me means a few things. React-admin is made out of the typical mish-mosh of tools from the react ecosystem, which I find to be highly “inconsistent”, with the added bonus of high churn and the resulting breakage. Fulcro RAD is still just Fulcro, which I find to be the closest thing to an internally-consistent framework in the Clojure(script) ecosystem — an ecosystem which itself prizes stability)#2020-03-2504:08stuartrexkingThis is a question about datalog. I posted this on #datalog but there isn't my activity there. I appreciate any help as I'd like to understand this.#2020-03-2504:14stuartrexkingIs it a join? I think it's a join.#2020-03-2507:24eagon@stuartrexking You probably shouldn't think of the function call so much as assignment as just binding. So yeah, it's just joining on different people with the same birth date. Another way to think about it is imagine if there was a :person/born-month and a :person/born-date attribute, you could replace the function call clauses with something like [?p1 :person/born-month ?m] [?p2 :person/born-month ?m]#2020-03-2507:25stuartrexkingAh. That makes sense. Thank you. #2020-03-2600:54SvenI’ve noticed that when I save a float e.g 200.9 then it is saved in the database (or at least displayed when I query the value) as 200.899993896484. Is that a feature and if so then why?#2020-03-2601:34favilaThat is just how floats work. They cannot represent every decimal fraction precisely. Use bigdecimal type if you need exact decimal precision#2020-03-2601:41Svenok, thanks. I guess for things like latitude/longitude, financial values etc. it would be wiser to use bigdecimal. 
For my simple use cases float has worked fine but the issue I have now is that each time I update an entity which has a float attribute then a change is triggered even if the value does not actually change (`200.9` provided vs 200.89999 in db). I would not mind it but I provide users with an activity log for each entity and now this float value change appears in every transaction. This was unexpected 😕#2020-03-2602:57Braden Shepherdsonit's worth mentioning that financial data often works with (64-bit integer) millicents or microcents, to avoid exactly this kind of problem.#2020-03-2602:58Braden Shepherdsonit means rounding, but no one cares if they win or lose a rounding to the thousandth of a penny.#2020-03-2606:26mavbozohttps://floating-point-gui.de/basic/#2020-03-2606:28mavbozoi've faced problems with purchase amount in mysql because it was stored as float#2020-03-2611:15teodorluFor lat/long, I think floats are fine (assuming you aren't doing advanced GPS stuff). Just beware how much precision you actually want when displaying the numbers!#2020-03-2616:55vYou have two options
1. use double, which as the name implies has 2x the precision of float.
2. Recommended: save all your numbers in long format, i.e. 2009, and when you need to convert it you can always divide and get the value you desire. It also preserves the original value, i.e. no floating-precision errors#2020-03-2611:26motformHi, I'm new to datomic and have run into a problem where it feels like there would be some kind of (at least semi-) best practice. I have a set of e of type foo in my database, and I want to filter with user input. foo has a few a that the user can include in their filter, including cases where you can filter by multiple enumerations of a specific a, but I'm struggling to find a good way to do a conditional query. If I make a query that expects all possible as as :in, then (I think) it only matches when all the inputs are valid. I also sort of feel like this is a place where I should use the pull api, but I have not found a way to run a pull over all e of foo.#2020-03-2611:28motformHope the explanation makes sense! I'm running datomic free on-prem, so I've not looked at the client api.#2020-03-2614:01Joe LaneWhich number do you want:
1. "All e of type foo which have a1 OR a2 ?"
2. "All e of type foo which have a1 AND ?"#2020-03-2614:03motform2-ish. “All e of type foo which might have a1 AND a2 and a..n depending on user input”#2020-03-2614:06val_waeselynck@UUPC4CHEZ your requirement is still ambiguous to me (what does "might have" mean?) but this may help: https://stackoverflow.com/questions/43784258/find-entities-whose-ref-to-many-attribute-contains-all-elements-of-input#2020-03-2614:08motformThat looks interesting, will check it out! In this concrete case, I have the e :cocktail that have a like :ingredient :title :author, the end-user should be albe to build a search query that can, but does not have to, include filtering by these#2020-03-2614:09Joe LaneHow many a exist ( or will exist? )#2020-03-2614:10motforma known amount, all :cocktail have the same data in this set#2020-03-2614:11Joe LaneAre you referring to a as an attribute for the entity?#2020-03-2614:12motformyes, is that not correct? all cocktail enties have the same complete set of attributes#2020-03-2614:14Joe Lane{:type :cocktails
:cocktail "Martini"
:ingredients ["Vodka", "vermouth"]
:author "Unknown"}#2020-03-2614:15Joe LaneAnd you want the ability to search for drinks which take Vodka#2020-03-2614:16motformYes! and that I can do, but the user should be able to search for cocktails that contain vodka, cream and have the word “russian” in fulltext#2020-03-2614:16motform(d/q '[:find [(pull ?e [:cocktail/id :cocktail/title :cocktail/recipe :cocktail/preparation :cocktail/ingredients]) ...]
:in $ [?ingredients ...] [?search ...]
:where
[?e :cocktail/ingredients ?ingredients]
[(fulltext $ :cocktail/fulltext ?search) [[?e ?n]]]]
(d/db conn) ingredients fulltext)#2020-03-2614:17Joe LaneStart by building the query up as data using the cond-> macro. I cant help any more right now but I can later tonight.#2020-03-2614:18motformis how far I got. its a concrete version of only qing two a, but it fails if any of the two vecs are empty, as it has nothing to match on#2020-03-2614:18motformah, cond->! I don’t think I’ve used that one before. thank you so much for all your help, will look into that!#2020-03-2614:22val_waeselynckWhen generating queries, it's usually more convenient to do it in map form:
{:find [...] :in [...] :where [...]}#2020-03-2614:24motformoh, can you pass maps to q? that makes life a lot easier lol#2020-03-2616:41alidlorenzoFor Datomic Cloud is it recommended to have one system per env? (dev, staging, prod)
I ask bc the ion-config.dev requires an app-name so not sure how to dynamically configure that based on environment.
I found a related question in forum (https://forum.datomic.com/t/parameterizing-ion-configuration/479) but no conclusive answer from anyone on it, so I’m wondering how people are handling different environments.#2020-03-2617:49marshallhttps://docs.datomic.com/cloud/operation/planning.html#2020-03-2617:50marshallyou can either have one or multiple systems
you can configure your environments with ion environment maps and parameters https://docs.datomic.com/cloud/ions/ions-reference.html#environment-map#2020-03-2618:47alidlorenzosay I want multiple systems - so I’d keep the app/system name the same (but give them different environment maps?
i didn’t think aws would allow multiple systems with same name but i’ll go ahead and try it#2020-03-2618:48alidlorenzooh maybe the system name has to be different but the app-name can be the same (i guess that’s why that option exist)#2020-03-2618:48marshallcorrect ^#2020-03-2618:49alidlorenzothat clarifies a lot, thanks#2020-03-2619:23joshkhcan i call d/with on the results of calling d/with ?#2020-03-2619:37val_waeselynckYes, and a lot of power follows from that :)#2020-03-2619:43joshkhright. i'm sure i've done this before, but i'm drawing a blank. d/with requires a d/with-db conn, but the result of d/with doesn't return a connection. does it?#2020-03-2619:51joshkhoh of course. :db-after is already with-db'ed.#2020-03-2621:21johnjDoes the peer uses connection pooling when using a SQL database?#2020-03-2717:23johnjis there such thing as too many entities references in a cardinality many attribute?#2020-03-2717:49motformI have another newbie question about the map form for queries, regarding quoting as it feels like I have misunderstood something.
(d/query '{:query {:find [?e]
:in [$ ?title]
:where [[?e :title ?title]]}
:args [(d/db conn) "negroni"]})
gives me the error nth not supported on this type: Symbol. When I quote the nested query map, it works (not surprisingly, as we don’t wanna eval all the datalog symbols that it complains about if i leave the whole thing unquoted). This works in the repl, but I don’t see how I would write code that does this to the query map without reaching for a bunch of map manipulation, the thought of which gives me that feeling that I’m doing something wrong.#2020-03-2717:50motformWhat confuses me is that in the docs, nothing is quoted. when querying with the map form.
https://docs.datomic.com/cloud/query/query-executing.html
It also shows the nested query map form for a /q and not /query invocation, which also confuses me, as i thought /q wanted a flat map and args as & args , supplied directly to the function#2020-03-2718:02skuttlemanI don't think you want to quote the args, right?
(d/query {:query '{:find [?e]
:in [$ ?title]
:where [[?e :title ?title]]}
:args [(d/db conn) "negroni"]})#2020-03-2718:05motformNo, I guess not, but how do I only quote the query-map? This is the interface to the database, where parse-strainer takes a map of user input and builds a query-map through a cond-> pipeline
(defn strain [strainer]
(let [query (parse-strainer strainer)]
(d/query query)))#2020-03-2718:07favilaThe purpose of quoting is so symbols like ?e don’t get expanded to current.ns/?e and lists like (some-rule) in the query don’t get evaluated.#2020-03-2718:08favilayou can accomplish the same clause by clause, or using forms like (list 'some-rule '?foo ?bar) or even ('some-rule '?foo ~'?bar)#2020-03-2718:08favilaThis is normal Clojure quoting, it’s not specific to datomic#2020-03-2718:09favilathe query just wants to see literal symbols ?e etc#2020-03-2718:43motformyeah ok, that makes sense of course, I guess I don’t quote that much in Clojure otherwise.#2020-03-2718:43motformBut what I still don’t get is how they want you to use the api. If I have a fn that spits out the following map so that it might then be used to call d/query with, how do I quote only the :query submap?
{:query {:find [?e]
:in [$ ?title]
:where [[?e :title ?title]]}
:args [(d/db conn) "negroni"]}
(update m :query quote) evals the symbols#2020-03-2718:51favilaif you already have the query map, why are you quoting it again?#2020-03-2718:52motformim not! which is where I get confused, haha#2020-03-2718:52motform(defn strain [strainer]
(let [query (parse-strainer strainer)]
(d/query query)))#2020-03-2718:52motformis my end-point#2020-03-2718:52favilaif that is actually what your function returns, then you are done#2020-03-2718:52favilajust hand that map to d/q#2020-03-2718:53motformmy fn
(defn- parse-strainer [{:keys [:ingredients :search :type]}]
(cond-> base-query
ingredients (simple-query :ingredients '?ingredients ingredients)
type (simple-query :type '?type type)
search (fn-query :fulltext '?fulltext 'fulltext '$ search)))
(parse-strainer {:ingredients ["vodka" "cream"] :search ["russian"]})
{:query
{:find [(pull ?e [:id :title])],
:in [$ [?ingredients ...] [?fulltext ...]],
:where
[[?e :ingredients ?ingredients]
[(fulltext $ :fulltext ?fulltext) [[?e ?n]]]]},
:args [(d/db @*conn) ["vodka" "cream"] ["russian"]]}#2020-03-2718:55motformbut when i do (strain {:ingredients ["vodka" "cream"] :search ["russian"]}) i get
Execution error (UnsupportedOperationException) at datomic.datalog/extrel-coll$fn (datalog.clj:300).
nth not supported on this type: Symbol#2020-03-2718:55favilawhere do your args come from?#2020-03-2718:56favilayour (d/db @*conn) is a literal list with d/db and conn elements#2020-03-2718:57favilaI think it’s trying to use it as a vector-type datasource, but d/db doesn’t support nth#2020-03-2718:57motformoh shoot, so i should somewhere do like (def db (d/db conn) and have :arg [db [“foo”] [“bar”]?#2020-03-2718:57favila“arg” is not syntax, it is real objects#2020-03-2718:58favila(d/q query arg1 arg2) and (d/q {:query query :args [arg1 arg2]}) are equivalent#2020-03-2719:00favilacan you show the code that builds args?#2020-03-2719:02motformah, of course! I did not think of that at all#2020-03-2719:02motform“its just data”#2020-03-2719:03motform(defn strain [strainer]
(let [{:keys [query args]} (parse-strainer strainer)]
(apply d/q query (d/db @*conn) args)))
now works! before I had a base map for my query that contained :args [(d/db @*conn)] that I then conjed the other args onto#2020-03-2719:05favilayou can still do that, just don’t quote the args in your base map#2020-03-2719:05favilathe problem is that the db was literally a list, instead of the db object, so some quoting was going on that shouldn’t have.#2020-03-2719:06motformi get that now, thank you so much for your help!#2020-03-2719:07motformi guess i just assumed that it would be invoked somewhere along the line, don’t think i’ve encountered this before#2020-03-2719:07motformdatomic is the only place outside of macros where i’ve actually come across quoting#2020-03-2719:08motformis there a preferred way to pass the db argument around? in my cases, I’ve used (d/db @*conn) where conn is an atom holding a (d/connect uri), would it be “better” if it was just a var with a (d/db conn)?#2020-03-2719:09favilahttps://docs.datomic.com/on-prem/best-practices.html#consistent-db-value-for-unit-of-work#2020-03-2719:09favilalike any code, avoid mutables#2020-03-2719:09favilaconn is a mutable#2020-03-2719:09faviladb is not#2020-03-2719:10favilaalso, it allows some privilege scoping: anyone can transact with a conn, but not with a db#2020-03-2719:16motformthat makes sense, thanks!#2020-03-2719:16motformI guess i should just RT rest of the FM, that best practice page was really good!#2020-03-2719:17motformdo you have any good open source reference projects that use datomic that one could look at?#2020-03-2719:32favilaI know lots of libraries, but I can’t think of any apps offhand#2020-03-2809:44motformno probs, I’m super thankful for all the help already! : )#2020-03-2809:56motformCan I have one last quoting question? In my query, I want to pull and bind-coll with …, which i can add to my q no problems,
{:query
{:find
[(pull ?e [:id :title :recipe :preparation :ingredients]) ...],
:in [$ [?ingredients ...] [?fulltext ...]],
:where
[[?e :ingredients ?ingredients]
[(fulltext $ :fulltext ?fulltext) [[?e ?n]]]]},
:args [#{"gin" "rum"} #{"russian"}]}
However, when I run this, it tells me that Argument ... in :find is not a variable, despite that fact that it can handle the … in the :in clause#2020-03-2810:00motformDoes it have something to do with the fact the the other invocations of … are nested? should not, right?#2020-03-2811:53favilaThis is a peer vs client api difference. Only the former supports destructuring in :find#2020-03-2813:09motformHm, then that’s strange, I thought that Datomic free only used the peer api? It works in my hand written queries, which is what made me confused#2020-03-2816:01favilaYou are using free not cloud?#2020-03-2816:01favilaD/q with map arg is a client api thing, so I assumed you were using cloud#2020-03-2818:54motformHaha, no, I’m a free leecher for the moment. d/q takes args in map or vector form, d/query take maps with :query and :args keys, as Ive understood it. I have no clue about how the client api, have not researched that yet#2020-03-2822:49favilaNvm, this is the map form of query, so you need an extra vector#2020-03-2822:51favila[:find a b c] => {:find [a b c]}. So [:find [a ...]] => {:find [[a ...]]}#2020-03-2718:24johnjis transaction functions a common way to achieve referential integrity in datomic?#2020-03-2718:44arohnerTried to use insecure HTTP repository without TLS:
project:
com/datomic/datomic-lucene-core/3.3.0/datomic-lucene-core-3.3.0.jar
This is almost certainly a mistake; for details see
#2020-03-2718:50arohnerI see that the datomic-pro-0.9-6024.pom contains
<repository>
<id>project</id>
<url></url>
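For context, a hedged sketch of the kind of dependency setup involved here (repository name, bucket, and version are placeholders, not the actual config): every transitive Datomic artifact, datomic-lucene-core included, has to be resolvable from one of the declared repositories.

```clojure
;; Hypothetical deps.edn fragment. The resolver checks each repo in turn;
;; if an artifact is missing from all of them, the error it reports can
;; name whichever repo happened to be tried last, not the one you expected
;; to serve the artifact.
{:mvn/repos {"private-releases" {:url "s3://example-bucket/releases/"}}
 :deps      {com.datomic/datomic-pro {:mvn/version "0.9.6024"}}}
```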
#2020-03-2718:54alexmillerI think they were fixing up some stuff like this recently iirc, but I'm not on the team#2020-03-2718:54alexmillernot sure if anyone is watching here atm#2020-03-2718:55arohnerI think I figured it out#2020-03-2718:55arohnerwe’re hosting datomic-pro.jar in a private S3 repo. datomic-lucene-core also needs to be there. It wasn’t found in central, so it tried all other repos, and then complained about the http://#2020-03-2718:56alexmillerah, yes that is a common error reporting gotcha#2020-03-2718:56alexmillerif it looks in N places and doesn't find it, it just reports the first or last place it looked as it doesn't know where it was expected to find it#2020-03-2718:57alexmillerand often that is different than your expectation#2020-03-2722:44eagonFor Datomic Ions, are there hooks for system (component, integrant, etc.) start/stop? Wondering about best practices here#2020-03-2815:48rapskalianIn the past I’ve placed start calls at the beginning of each HTTP request, right before handing the request to the app handler. Calls to start are idempotent and are essentially no-ops if things are already started. #2020-03-2815:50rapskalian@UNRDXKBNY so it’s basically side-effecting middleware. #2020-03-2815:51rapskalianThis gives you a well-defined place to assoc the started system onto the request map as well. #2020-03-2819:33eagon@U6GFE9HS7 Yeah, I've basically done the same for system start, though was a little worried about runtime performance and how much the no-op check would cost.
But are there any good solutions for stop? Wondering about how to manage integrations like AWS API gateway and websockets, and having notifications on process cycling or auto-scaling#2020-03-2819:45rapskalianHm, I haven’t needed stop hooks myself. I wonder if you could tie into the Simple Workflow (SWF) hooks that Datomic sets up for coordinating deploys. #2020-03-2820:54eagonOoh, didn't know about that part of Datomic (thanks for the pointer!), though it makes sense that something like SWF was behind coordinating deploys. I wonder if there's some buried api for this that wasn't too much trouble/not supported to hook into#2020-03-2818:30yuHi everyone! So I'm starting out with datomic and I have already successfully connected datomic-console to a local datomic-transactor instance, but for a remote one (that uses digitalocean droplet for the transactor & heroku-postgres for storage) I'm having a hard time, I have tried the following commands:
bin/console -p 9000 sql datomic:sql://?jdbc:postgresql://<heroku-postgres-host>:5432/<heroku-postgres-db>?user=<postgres-user>&password=<postgres-password>&ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory
bin/console -p 9000 sql datomic:sql://?jdbc:postgresql://<heroku-postgres-host>:5432/<heroku-postgres-db>?user=<postgres-user>&password=<postgres-password>
For both when I open datomic-console, it shows the following error msg:
FATAL: no pg_hba.conf entry for host <my-ip-address>, user <heroku-postgres-user>, database <heroku-postgres-password>, SSL off trying to connect to datomic:sql://?jdbc:postgresql://<heroku-postgres-host>:5432/<heroku-postgres-db>?user=<postgres-user-name>, make sure transactor is running
Using the URI in the first command I was able to create a database from the repl, so I can confirm that the transactor is working.
So, does anyone know what I'm still missing here? Would appreciate any help, thanks!#2020-03-2819:43yuFound the solution, just needed to wrap the uri with double quotes "<datomic-uri>"#2020-03-3015:57robert-stuttafordwe're indexing every 30 minutes right now 😄 should we do anything, e.g. increase any live index thresholds?#2020-03-3016:54Joe LaneOn Prem or cloud?#2020-03-3017:25robert-stuttafordon prem#2020-03-3017:25robert-stuttafordit's handling just fine, more curious than anything#2020-03-3019:34lwhortona while back i remember watching a datomic video where someone (stu?) was showing the power of datalog. they were demonstrating how you could debug a slow running query in datomic without knowing anything about a particular domain by simply moving around the order of the :where clauses. does anyone else remember this? maybe it was inside a day-of-datomic series?#2020-03-3019:38lwhortonaha! i have found the thing, https://github.com/Datomic/day-of-datomic/blob/master/tutorial/decomposing_a_query.clj
but there’s definitely a video out there to go along with this …#2020-03-3019:45robert-stuttafordcheck the http://datomic.com videos page @U0W0JDY4C 😃#2020-03-3121:05Braden Shepherdsondoes datomic limit the maximum size of :db.type/string values? what about bytes?#2020-03-3121:17favilaon-prem: no, unless it’s a string in a tuple, then it’s 256 chars; cloud: strings are 4096 chars outside tuples and 256 chars inside, and it doesn’t have bytes.#2020-03-3121:17favilathese are documented on the on-prem and cloud schema pages#2020-03-3121:20Braden Shepherdsonthanks!#2020-03-3121:06Braden ShepherdsonI realize it becomes inefficient, and it's not really recommended to store a blog post in a string value, but I'm wondering if there's a formal limit.#2020-04-0409:12John LeidegrenI could be wrong about this but the idea is to store structure, so you would store maybe each paragraph as a string and not actually the whole body?
This does create another problem where order is important. At which point, and depending on what you want to do, you might consider actually storing a linked data structure.
Where paragraphs have next pointers. You use pull with recursive patterns to bring in the model. Though, it's a bit odd maybe because you get a hierarchical model back where you might have expected a list of paragraphs.
I'm just thinking out loud here, but I think this is kinda what Datomic wants you to do... or at least I get that feeling from now and then that you should fully embrace the graph like nature of Datomic.#2020-04-0106:56teodorluI've always interpreted Rich's The Language of the System as speaking about Clojure. Clojure is meant to be part of some whole. Clojure is not meant to become its own island that can only send messages to other islands. Rather, Clojure can fit nicely as part of a river, and work effectively even if it relies on some upstream source of truth.
Does The Language of the System also describe the use of Datomic?#2020-04-0112:24alexmillerIt’s intentionally not about either #2020-04-0112:25alexmillerAnd also about both of course#2020-04-0112:48teodorluThat seems right to me.
I think my brain would hurt less with examples / stories where Datomic was used to solve observed problems. I've seen most talks I've found about Datomic, though.
Context: In my daily work with Java, I suspect that we're suffering from a "situatedness", where teams become islands with their own vaguely different entity definitions. I'm not precisely sure what the solution to that should be. But I suspect that understanding Datomic better could help me improve our state. So far, thinking about RDF and specs with global names has been helpful.#2020-04-0113:04alexmillerthose sound like good ideas to me#2020-04-0116:24johnjWhat has proved to be better practice, to try to keep all attributes of a entity use the same namespace or to mix various namespaces in an entity? the latter looks messy to me#2020-04-0116:24johnjWhat has proved to be better practice, to try to keep all attributes of a entity use the same namespace or to mix various namespaces in an entity? the latter looks messy to me#2020-04-0116:27favilathe meaning of the attrs themselves should guide that imo#2020-04-0116:27favilawith the usual caveat against premature abstraction: there may be a difference of meaning you haven’t discovered yet#2020-04-0116:31johnjto clarify, you are saying namespaces should not scope an "entity type" correct?#2020-04-0116:32johnjhttps://github.com/Datomic/mbrainz-sample/blob/master/relationships.png#2020-04-0116:32johnjthat example uses single ns for entities#2020-04-0116:36favilaor maybe it shows co-occurence clusters of attributes on entities? troll#2020-04-0116:37favilawhat I mean is, for example, if you have an attribute with the same meaning no matter what entity “type” it appears on, don’t split it into multiple attributes just so entities never have attributes from other namespaces#2020-04-0116:38faviladata-modeling-wise, focus on attributes and their meanings, build entity “types” later, if that’s even necessary#2020-04-0116:41favilaIn sql ERDs, you might engage in “polymorphic joined-table inheritance” tricks to share columns across tables, or simply copy the same column name into multiple tables. 
This doesn't make sense in datomic since you can assert any attr on anything.#2020-04-0116:45johnjfair, I guess I'm getting too hung up on making entities look "elegant" instead of applying common sense but I get your point: namespaces scope a set of attributes, not "entity types"#2020-04-0116:47favilawhat you really want (in datomic) are entity specs rather than types: https://docs.datomic.com/on-prem/schema.html#entity-specs#2020-04-0116:48favilai.e., what are the things I could use an entity for and what constraints do I expect to hold#2020-04-0116:49favilabut an entity may fulfill many specs at once, or some only some of the time#2020-04-0116:49favilaso again, it's contextual, not baked into the entity itself as a type#2020-04-0116:52favilaspecs give you: required attrs (:db.entity/attrs), and enforcing cross-fact constraints (:db.entity/preds)#2020-04-0116:53favilathere are also attribute predicates which constrain attribute values further than their type (:db.attr/preds), but they are on the attribute not entities, so again they're expected to be universal#2020-04-0116:54favilathis brings an annoying modeling limitation where you may want an attr with a universal meaning, but contextually want it to have a narrower possible range of values. this is common when dealing with data modeled by XML schema or OOP-ish type systems, where some "refinement" mechanism is used commonly#2020-04-0116:55favilain datomic, you have to decide whether to leave some attr constraints a bit loose and untyped, or split each refinement into a different attr and lose some universality#2020-04-0116:58teodorlu> fair, I guess I'm getting too hung up on making entities look "elegant" instead of applying common sense but I get your point: namespaces scope a set of attributes, not "entity types"
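The entity-spec mechanism linked above can be sketched as ordinary schema data. All the names below (:score/guard, :score/low, :score/high, myapp.preds/scores-ordered?) are hypothetical, following the shape of the example in the Datomic docs:

```clojure
;; Hypothetical entity spec; names are illustrative, not from this discussion.
[{:db/ident        :score/guard
  :db.entity/attrs [:score/low :score/high]        ; required attrs
  :db.entity/preds 'myapp.preds/scores-ordered?}]  ; cross-fact predicate

;; Specs are opt-in per transaction via :db/ensure, which matches the
;; "contextual, not baked in" point above:
;; [{:score/low 3 :score/high 10 :db/ensure :score/guard}]
```

The same entity can be ensured against different specs in different transactions, or against none at all.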
How about making your system for namespacing relations elegant instead?#2020-04-0117:02johnj@U09R86PA4 was reading the attr-preds and entity spec stuff and see what you mean, helpful thanks.#2020-04-0117:04johnjI know they have different purposes, but couldn't entity predicates be used as attribute predicates too? and you get to choose at transaction time when to apply them#2020-04-0117:04johnj@U3X7174KS don't know what you mean, can you elaborate?#2020-04-0117:10teodorluWhen I first had a look at Datomic, I had the urge to get "nice tables". That eventually got me into the same troubles that I would have if I were to use plain SQL tables: I wasn't able to make a "rectangular" structure that fit; what would I do about missing data?
Datomic doesn't require "rectangular" data. Missing data is okay.
When missing data starts becoming okay, you start to think in terms of relations (predicates) first. And those predicates tend to (in my experience) be simpler to describe accurately than the entities.
Picture from the Wikipedia page on RDF, which also "thinks in terms of relations (predicates) first"[1]. With SQL, you have to design your entity (subject). With Datomic, you can focus on your relations (predicates) instead.
https://en.wikipedia.org/wiki/Resource_Description_Framework#2020-04-0117:12favila> but couldn't entity predicates be used as attribute predicates too?
Yes, you can check anything you want in an entity predicate#2020-04-0117:20teodorluI find this topic to be abstract, hard to understand, and hard to explain. So I went looking for a knowledge graph it's possible to explore to illustrate. Didn't find a good one.
What I did find was Tim Berners-Lee arguing for the use of linked data on the web[1]. He implies RDF[2], but Datomic can be used similarly. Sorry for throwing new stuff at you.
In the article, FOAF is an example of a "system for namespacing relations".
[1]: https://www.w3.org/DesignIssues/LinkedData.html
[2]: https://www.w3.org/TR/rdf-sparql-query/#basicpatterns#2020-04-0117:39johnj@U3X7174KS about your wikipedia comment, in datomic, you still have to think about how to group those attributes as entities though#2020-04-0122:33steveb8nIn my schema, I model using rectangular entities so they fit with integrations to relational dbs. attributes for each entity share a namespace (as you suggest) but there is also a single “all” namespace for attributes that are shared by all entities e.g. :all/id which is a uuid, :all/type for dispatch etc.#2020-04-0123:04dfornikaIs anyone aware of successful examples of integrating OWL ontology terms into a Datomic database? Or thought about how it could be done (or why it shouldn't be done)? There seems to be so much conceptual overlap between Datomic and 'semantic web' technologies (RDF, OWL, JSON-LD) but little technical interoperability.#2020-04-0200:27rutledgepaulvThere's this https://github.com/cognitect-labs/onto. Also @U066U8JQJ has done some investigation into this area. https://www.amazon.com/Semantic-Web-Working-Ontologist-Effective/dp/0123859654 is a great book#2020-04-0200:53dfornikaOh thanks @U5RCSJ6BB. I've seen https://github.com/arachne-framework/aristotle and https://github.com/quoll/kiara and I'm sure I've stumbled on onto before but haven't looked at it recently. I was just looking at that 'Working Ontologist' book on amazon earlier today.#2020-04-0201:15rutledgepaulvnp! there's also a #rdf channel#2020-04-0203:45wilkerluciothat book had a great effect on my modeling, I’m super glad @U5RCSJ6BB pointed me this book, great read!#2020-04-0217:52dfornika@U066U8JQJ Have you been able to integrate terms from OWL ontologies somehow as attributes in datomic schemas? 
Or could you point me to any other resources for clojure/rdf interoperability?#2020-04-0217:58wilkerlucio@U1MBP9HV2 I did play with Jena, got to write some wrappers to work on Jena in a similar fashion to datomic, but just as an experiment, I agree they share a good portion of principles (not by accident, datomic is based on RDF ideas), but I didn't try to integrate it into the rest of the system, if you wanna look at that jena wrapper it's here https://github.com/wilkerlucio/jena-clj/blob/master/src/main/com/wsscode/jena_clj/core.clj (disclaimer: I don't consider it near production ready, just a bunch of random experiments)#2020-04-0218:13dfornikaThanks! At this point I'm trying to just put together a small proof-of-concept so I don't need anything production-ready.#2020-04-0209:11motformI have a quick question about how fulltext search works. I have an es with multiple string fields, which I have concatenated together and added to the db under :e/fulltext, all of which works. However, I'm a bit lost on how queries with fulltext work. Let's say I have tokens s1 and s2. I assumed that two calls to fulltext would result in an "and" search, which it seems to be doing. However, when I call fulltext just once with the string "s1 s2" I get a different result, returning a much larger number of es. I'm guessing it's the first behaviour that I want, I got a bit confused by the noticeably large discrepancy in return values (in one example, separate calls returned 12 items, while a single concatenated call returned 300+).#2020-04-0212:23favilaThe string given to fulltext is a lucene query string: https://lucene.apache.org/core/2_9_4/queryparsersyntax.html#2020-04-0212:23favilathe default operator for terms is OR#2020-04-0212:23favila"s1 s2" = "s1 OR s2"#2020-04-0215:34motformThank you so much as always, that makes sense. I was surprised that lucene was not mentioned in the docs. 
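The AND-vs-OR behavior described above can be sketched as follows, assuming an on-prem peer, a db value in hand, and an attribute :e/fulltext installed with :db/fulltext true:

```clojure
;; assumes (require '[datomic.api :as d]) and a peer db value `db`

;; Two fulltext calls that unify on ?e behave like AND:
(d/q '[:find ?e
       :in $ ?t1 ?t2
       :where
       [(fulltext $ :e/fulltext ?t1) [[?e]]]
       [(fulltext $ :e/fulltext ?t2) [[?e]]]]
     db "s1" "s2")

;; One call with both tokens is a single Lucene query string, and the
;; default Lucene operator is OR, hence the much larger result set:
(d/q '[:find ?e
       :where [(fulltext $ :e/fulltext "s1 s2") [[?e]]]]
     db)
```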
I mean, it's kind of an implementation detail, but also kind of not really, as the DSL still works#2020-04-0215:05bmaddyIs there some way to get the datomic version in the repl? I'm thinking something like
d/*datomic-version*
#2020-04-0215:06bmaddyFor context, what I'm actually trying to figure out is why this doesn't work:
(d/q '[:find (pull ?e [[:db/doc :as "doc"]])
:where
[?e :db/ident :db.type/boolean]]
(d/db conn))
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:57).
:db.error/invalid-attr-spec Cannot interpret as an attribute spec: [:db/doc :as "doc"] of class: class clojure.lang.PersistentVector
and I'm wondering if I have an old datomic version or something.#2020-04-0215:16bmaddyNevermind, I was able to figure out the version another way. My datomic is too old, so that's probably the problem.#2020-04-0315:55kennyHi. We are looking into the best practices for loading lots of data into Datomic Cloud. I have seen the documentation on pipelining transactions for higher throughput https://docs.datomic.com/cloud/best.html#pipeline-transactions. From that section,
> Data imports will run significantly faster if you pipeline transactions using the async API, and maintain several transactions in-flight at the same time.
The example that follows does not use the Datomic async API. Why is that? Should it use the async API to achieve higher throughput?
Are there any additional best practices or things to look out for when loading thousands of entities into Datomic Cloud?#2020-04-0316:15ghadione key for batch imports is to always put retries in your transactions#2020-04-0316:17kennyI assume the typical exponential backoff + jitter on a retriable anomaly?#2020-04-0316:17ghadiyea that works#2020-04-0316:18kennyGive up after 3 retries or more?#2020-04-0316:19ghadican't say without knowing your loads#2020-04-0316:20kennyWhat is the function used to calculate number of retries given a particular datoms/transaction + transactions/second?#2020-04-0316:16ghadi@kenny#2020-04-0316:16ghadiThe example of transaction pipelining in the docs does not include backing off on retriable anomalies#2020-04-0316:18ghadiI always do some back of the napkin estimation of # of transactions and number of datoms per transaction#2020-04-0316:19kennyIs there a recommended number of datoms/transaction?#2020-04-0316:20ghadirough order of 1000-10000 datoms#2020-04-0316:20ghadithis is me talking, not the datomic team#2020-04-0316:21kennyWhy a whole order of magnitude of difference?#2020-04-0409:17John Leidegren@kenny I'm going to guess compression. Some datoms compress better than others, so if your payload is very compressible, you'd get away with putting more datoms in each log segment.
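The retry-with-backoff advice in this exchange might look like the sketch below. The anomaly categories come from cognitect.anomalies; the retry limit and sleep times are illustrative, and `transact!` stands in for a real call such as `#(d/transact conn {:tx-data %})` with the Cloud client:

```clojure
;; Sketch: retry retriable anomalies with exponential backoff + jitter.
(defn retriable-anomaly? [ex]
  (contains? #{:cognitect.anomalies/busy
               :cognitect.anomalies/unavailable
               :cognitect.anomalies/interrupted}
             (:cognitect.anomalies/category (ex-data ex))))

(defn transact-with-retry [transact! tx-data]
  (loop [attempt 1]
    (let [r (try
              {:ok (transact! tx-data)}
              (catch Exception ex
                (if (and (retriable-anomaly? ex) (< attempt 4)) ; give up after 3 retries
                  ::retry
                  (throw ex))))]
      (if (= ::retry r)
        (do
          ;; exponential backoff plus jitter
          (Thread/sleep (long (+ (* 100 (Math/pow 2 attempt)) (rand-int 100))))
          (recur (inc attempt)))
        (:ok r)))))
```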
If you have really huge log segments you may run into limitations in storage. For example, the DynamoDB backend has a limit of 400 KiB per value. A log segment larger than that (i.e. transaction) cannot be committed into storage.#2020-04-0316:18ghadigood practice to label the transactions themselves with some metadata#2020-04-0316:21ghadi[:db/add "datomic.tx" :db/doc "kenny did this, part 1/15"]#2020-04-0316:22kennyHmm yeah. In this case I won't know the denominator of your part fraction there. I can still label them though.#2020-04-0316:21ghadior have stronger idempotence markers in the DB metadata#2020-04-0316:23kennyAlso still curious as to if I should be using the Datomic Cloud async api for maximal throughput.#2020-04-0322:21BrianHello! I'm using pull in my Datomic query and want to blend it with an :as statement to change the name of something.
My data looks like this:
{:category {:db/ident abcd}}
I can use pull to grab it with [{:category [:db/ident]}] which returns a structure like the above. I can rename :db/ident by pulling like this [{:category [[:db/ident :as :hello]]}]. This returns a {:category {:hello abcd}}
however what I would love to be able to do would be have it return {:hello abcd} essentially renaming that whole path. I'm doing this within a larger query using pull. I tried pulling this specific part out of the query like this with the :keys option:
(d/q '[:find (pull ?e [:1 :2 :3]) ?ident
:keys :nums :hello
...
[?cat :db/ident ?ident]
...)
but this ends up improperly nesting my return values because I don't want the :nums part and want the hello part to be in the same map as [:1 :2 :3].
Combining all of the above I tried something like
(d/q '[:find (pull ?e [:1 :2 :3
[{:category [:db/ident]} :as :hello]])
...)
However this didn't work and I suspect this isn't possible because pull doesn't know that I'm guaranteed to have a single value at the very end and not a vector somewhere in there. Am I right that it's impossible to tell pull to drill down to that last value and return only that last value under a new specific key? Is it possible to do what I desire some other way?#2020-04-0400:25favilaPull can rename keys or default/limit values, but it cannot transform map shapes. You have to post process#2020-04-0408:57John LeidegrenI'd like some input on general EAV data design. I've started using tuples to create unique identities so that I can enforce a uniqueness constraint over a set of refs within an entity. It looks something like this (tx map):
{:db/id 17592186045416
:list-of-things [{:some-unique-identity [17592186045416 17592186045418]
:some-ref 17592186045418
:meta "foo"}
{:some-unique-identity [17592186045416 17592186045420]
:some-ref 17592186045420
:meta "bar"}]}
So, what I'm trying to do is to prevent there from being two or more :some-ref for the same thing in this set of :list-of-things. Is this nuts or does this make sense? I'm worried I just invented some convoluted way of doing something which should be modelled differently?
I find tuple identities to be incredibly useful because I get this upsert behavior each time but I don't see how I can avoid the potential data race that would otherwise occur. Any suggestions here would be much appreciated.#2020-04-0411:48John LeidegrenI think I figured this out.
---
These unique identity tuples are needed because I created entities to group attributes but that's already provided by namespaces. I could just as well let the namespaces encode the topology and let the grouping of things be based on that.
---
These "topological" identities wouldn't be needed if I went for bigger, fatter entities, over excessive container like entities. These intermediate entities that encode a tree like structure are just causing me pain. And I will do away with them.#2020-04-0411:20David PhamDoes Datomic support Java11?#2020-04-0418:05jeff.terrellIs it possible to ssh in to the non-bastion instance in a Solo topology for Datomic Cloud?#2020-04-0418:05jeff.terrellI'm seeing some weird behavior in which a freshly restarted instance seems to get hung whenever I try to deploy (which fails).#2020-04-0418:06jeff.terrellOddly, the CPU utilization, as visible on the CloudWatch dashboard, jump to 45% and stays there after I try a deploy, whereas before it's near 0%.#2020-04-0418:07jeff.terrellOnce I start the deploy, neither an existing socks proxy nor a new one allows connections through to Datomic, whereas before the deploy it works fine.#2020-04-0418:08jeff.terrellI can datomic solo reset to get back to a working state…but if I try to deploy, I get back into the hung state.#2020-04-0418:08jeff.terrellI'd like to ssh in to see what process is using so much CPU.#2020-04-0418:10jeff.terrellI'm fairly perplexed by all of this. It's on a fresh AWS account and Datomic Cloud instance, and I've had success with Ions and the Solo topology before…#2020-04-0418:12jeff.terrellOne more clue: the CodeDeploy deployment fails on the final ValidateService event. The script output has about 100 lines of [stdout]Received 000. 
I think this means that the Datomic health check request is failing to get a response at all, let alone a 200.#2020-04-0418:24ghadi@jeff.terrell sometimes I jump into the Datomic nodes to do a jstack dump.#2020-04-0418:25ghadiBy default, the bastion cannot SSH to the nodes because of a security group#2020-04-0418:25ghadithere is a security group that is called datomic-something-nodes#2020-04-0418:25ghadiyou need to modify that SG to allow the bastion instance in on port 22#2020-04-0418:25jeff.terrellAh, security groups, right! I assumed that the compute node would already be configured to accept traffic from the bastion. But yeah, maybe not on port 22, right. Thanks!#2020-04-0418:26ghadiProtip: you can add an entry to that security group, referring to the bastions security group symbolically#2020-04-0418:27ghadiinstead of hardcoding an IP address or CIDR block#2020-04-0418:27ghadithen, you need the ssh keypair for the nodes, which you had when you created the stack#2020-04-0418:28ghadiso what I do is I add the bastion's key from ~/.ssh/datomic-whatever to my ssh agent:#2020-04-0418:28ghadissh-add ~/.ssh/datomic-whatever#2020-04-0418:28ghadithen add the node keypair:
ssh-add ~/wherever/nodekey#2020-04-0418:29ghadithen I ssh to the bastion with -A, which forwards the local ssh-agent#2020-04-0418:29ghadithen from there you can ssh to the node in question#2020-04-0418:29ghadisudo su datomic to become the datomic user#2020-04-0418:29jeff.terrellAh, fantastic tips, thanks! I would have been stumbling around trying to scp the appropriate private keys over to the bastion.#2020-04-0418:29ghadiso that you can run jstack on the pid, or poke around#2020-04-0418:30ghadiMy pleasure. Whatever you do end up finding, see if there is some signal of it in CloudWatch Logs or CodeDeploy or wherever#2020-04-0418:31ghadiif there's not, maybe worth a support ticket?#2020-04-0418:32jeff.terrellOK. I haven't seen any clue in those places yet. I'll be sure to follow up as needed to be sure others don't run into this.#2020-04-0418:52jeff.terrellWhen I got into the system, I learned that the CPU utilization was because of bin/run-with-restart being called to start some Datomic-related process over and over, which was failing every time. When I ran the command manually, it tells me:#2020-04-0418:52jeff.terrell> com.amazonaws.services.logs.model.ResourceNotFoundException: The specified log group does not exist. (Service: AWSLogs; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: c58f56ce-3b7f-4be1-bbd3-463a950018c7)#2020-04-0418:52jeff.terrell…followed by a stack trace, and an anomaly map:#2020-04-0418:53jeff.terrell:datomic.cloud.cluster-node/-main #error {
:cause "Unable to ensure system keyfiles"
:data {:result [#:cognitect.anomalies{:category :cognitect.anomalies/incorrect} #:cognitect.anomalies{:category :cognitect.anomalies/incorrect}], :arg {:system "pp-app", :prefix "s3://"}}
:via
[{:type clojure.lang.ExceptionInfo
:message "Unable to ensure system keyfiles"
:data {:result [#:cognitect.anomalies{:category :cognitect.anomalies/incorrect} #:cognitect.anomalies{:category :cognitect.anomalies/incorrect}], :arg {:system "pp-app", :prefix "s3://"}}
:at [datomic.cloud.cluster_node$ensure_system_keyfiles_BANG_ invokeStatic "cluster_node.clj" 336]}]
:trace ,,,}#2020-04-0418:54jeff.terrellI'm thinking this is not because of something I did wrong (though I'm happy to be corrected on that point). Opening a support ticket…#2020-04-0504:59Braden ShepherdsonIs the embedded storage in Datomic Free sufficient for a small website I'm building for a hobby group? The old one it's replacing is a single VPS with PHP scripts and MySQL, but there are perhaps 130,000 rows in the database, maybe a million datoms. The transaction rate is modest though, maybe a couple hundred an hour at most. There's just no context for what it's capable of.#2020-04-0506:29John LeidegrenSo, it's based on the H2 Database Engine and noting in Datomic is inherently slow.
The way it uses the database is that it pulls segments of data from the database, so you don't have a datom to row mapping or anything like that. You basically use the database as a key value store.
130,000 rows doesn't sound like a lot and a couple of hundred transactions an hour also seem small.
I haven't done this myself but I don't think you will have any problems.#2020-04-0512:11Braden ShepherdsonOkay, I'll give it a try. I know it's a small database, but I was worried because it's also the "dev" version. I could imagine a very naive dev version that just writes chunks of EDN to files and can only handle a few thousand datoms, or has to fully read into memory, or similar.#2020-04-0512:12Braden ShepherdsonBut it sounds like it's the other way around, and the dev version uses the capable though modest free version's embedded storage.#2020-04-0514:47John LeidegrenYes, I know for a fact that it does! The storage layer is just insert or update of KV pairs that contain chunks of datoms for either log or index data.#2020-04-0514:49John LeidegrenDepending on the I/O subsystem of the box that is running the thing, it should be plenty capable. I have no direct experience with H2 but it probably performs well enough and it's not asked to do anything other than shuffling bytes back and forth. So, it's a quite ideal situation.#2020-04-0517:30favilaThe real perf limitation is that the dev transactor process is also the storage process (it opens another port to serve peers' sql queries)#2020-04-0517:31favilaI don't know how well optimized that server code is#2020-04-0517:32favilaBut I have used shared dev for light duty just fine; also as the target of large bulk import jobs just fine#2020-04-0521:18hadilsDatomic Cloud Question: Is there a "hook" for starting up processes when the Ions are started, e.g., after a deploy? I want to use Quartzite and don't know the best way to initialize it...Thanks in advance.#2020-04-0600:38denikIs there a way to have optional inputs in datomic?? In my case, I'd like the where clauses that include undefined inputs to be ignored:
(d/q
'[:find ?e
:in $ ?tag ?more ?other
:where
[?e :ent/foo ?foo]
[?e :ent/tags ?tag]
[?e :ent/more ?more]
[?e :ent/other ?other]]
(d/db conn)
:tag
; skip :more
:other
)
Here the value for ?more should not be passed, but because the value for ?other follows, ?more is interpreted as ?other. Passing nil has not worked for me. It seems that it is then used as the value to match in the query engine. Do I need to write a different query for each optional input?#2020-04-0600:55favilaGenerally yes: you can also use a sentinel value to indicate "don't use" and an or or rule#2020-04-0601:07denikthanks @favila I'm running into some issues doing so. For example, or demands a join variable. Could you point me at an example or some pseudocode?#2020-04-0601:09denikThis also breaks down when using inputs as arguments to functions, i.e. > . Now my sentinel value has to be a number, which would backfire, or otherwise the query engine will throw an exception.#2020-04-0601:11favilaI maybe misunderstand your use case. Your example query doesn't make sense to me: how do you have a query which has no ?more clauses nor is called with that arg as input but still has it in the :in? How did you get in this situation? What I thought you were talking about is a scenario like this:#2020-04-0601:12denik> I'd like the where clauses that include undefined inputs to be ignored#2020-04-0601:13denikBut I'm also fine wrapping those values in or-like clauses. However, that hasn't been working well due to numerous edge cases#2020-04-0601:18favila(q [:find ?e
:in $ ?id ?opt-filter
:where
[?e :id ?id]
(or-join [?e ?opt-filter]
(And
[(ground ::ignore) ?opt-filter]
[?e])
(And
(Not [(ground ::ignore) ?opt-filter])
[?e :attr ?opt-filter]))]
Db 123 ::ignore)
#2020-04-0601:18favilaPls excuse everything, I’m on a phone#2020-04-0601:20denikJust retyped this for clarity
(d/q '[:find ?e
:in $ ?id ?opt-filter
:where
[?e :id ?id]
(or-join [?e ?opt-filter]
(and
[(ground ::ignore) ?opt-filter]
[?e])
(and
(not [(ground ::ignore) ?opt-filter])
[?e :attr ?opt-filter]))]
(d/db conn) 123 ::ignore)#2020-04-0601:20favilaYour example query doesn’t have clauses using undefined input, it has a slot for an input that you don’t fill
#2020-04-0601:20favilaThat's what confuses me#2020-04-0601:20denikright, because hash-map-like destructuring is not supported in :in#2020-04-0601:21favila?#2020-04-0601:22favilaWhere did ?more come from? Was this query built with code?#2020-04-0601:23denikNo, rather the query is supposed to stay as it is but inputs should be nullable / or possible to be disabled#2020-04-0601:23favilaBut no where clause uses ?more#2020-04-0601:23favilaIt is already not used, so why is it input?#2020-04-0601:23denikoh my bad, there was supposed to be a clause for ?more#2020-04-0601:24denikI had simplified from application code#2020-04-0601:24favilaAh ok so it is the case I'm thinking of#2020-04-0601:25favilaThere's some user-supplied optional filtering field, and you want the query to handle ignoring it#2020-04-0601:25denikyes, you can think of it as a cond->-like where#2020-04-0601:25favilaOptions are sentinel value and a pair of rules which explicitly exclude each other by sentinel match or mismatch#2020-04-0601:26favilaOr build the query dynamically using e.g. cond->#2020-04-0601:27favilaFor the latter choice using the map form is more convenient#2020-04-0601:28favilaHuh that wasn't meant for the channel#2020-04-0601:28denikYes, I think I'll have to do that.
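The cond-> approach might look like this minimal sketch; the attribute names (:ent/tags, :ent/more) are illustrative. The inputs stay un-inlined, so the resulting query map varies only by which static clauses are included:

```clojure
;; Build the query map dynamically: clauses for absent optional inputs
;; are simply never added. ?tag is treated as required, ?more as optional.
(defn build-query [{:keys [tag more]}]
  {:find '[?e]
   :in (cond-> '[$ ?tag]
         more (conj '?more))
   :where (cond-> '[[?e :ent/tags ?tag]]
            more (conj '[?e :ent/more ?more]))})

;; (build-query {:tag :x})
;; => {:find [?e], :in [$ ?tag], :where [[?e :ent/tags ?tag]]}
```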
Unfortunately that either means lots of quoting or in the macro-case hairy s-expr parsing to handle collection inputs like [?tags ...]#2020-04-0601:28favilaDon't inline the inputs#2020-04-0601:29favilaJust optionally exclude or include static clauses#2020-04-0601:29denikBecause query cache?#2020-04-0601:29favilaYes also less tricky#2020-04-0601:29denikok thanks a lot for thinking this through with me @favila!#2020-04-0601:31favila(cond-> [] :always (conj '[?e :foo ?foo]) tags (conj '[?e :tag ?tag]))#2020-04-0601:32favilaEg for the :where#2020-04-0601:32favilaThen the :in is like#2020-04-0601:33favila(cond-> '[$] tags (conj '[?tag ...]))#2020-04-0601:33favilaWith proper formatting it's easy to read#2020-04-0601:34favilaAnd using the map form, each major clause can be a vector instead of having to stick together a vector positionally #2020-04-0601:36favila{:find [?e] :in (cond-> [$] ,,,) :where (cond-> [] ,,,)}#2020-04-0601:58denikit works but it's quite ugly
(defn query-words [{:keys [tags since]}]
(let [tags (ensure-set tags)
[dur-num dur-unit] since]
(db/q {:find '[?we]
:in '[$ ?dur-num ?dur-unit [?tags ...]]
:where (cond-> ['[?we :word/ticks ?ts]
'[?we :word ?w]]
since (into '[[?ts :tick/when-ms ?tw]
[(tick.util.time/since? ?dur-num ?dur-unit ?tw)]])
tags (into '[[?ts :tick/source ?src]
[?src :source/tags ?tags]]))}
dur-num
dur-unit
tags)))#2020-04-0615:52arohnerI’m considering a workload that uses kafka, where a topic is already partitioned N ways. Is creating N databases on the same transactor to increase scalability a good approach?#2020-04-0615:55ghadino 🙂#2020-04-0615:56ghadican you give more details about the workload?#2020-04-0615:58arohnerwe don’t really have enough details yet, but it’s a high-volume financial service. We’re using kafka and partitioning, so I’d like to have a story for how to scale up datomic if necessary#2020-04-0615:59arohnerwhy is multiple databases not a good idea? I get that a single transactor machine is a limitation, but my assumption is that hardware these days can scale out across cores decently well#2020-04-0615:59ghadii wouldn't be comfortable weighing mechanisms without a problem statement#2020-04-0616:00ghadithe missing part is: good idea for what#2020-04-0616:00ghadiall this talk of scaling is abstract#2020-04-0616:01ghadi(Disclaimer: I don't work on the Datomic team, but I use it daily and am here to hold up a mirror)#2020-04-0616:02arohnerok, what are the recommended ways of scaling up datomic deployment? We have lots of kafka topics, lots of kafka partitions. Lots of services that read from a topic, do some processing, and potentially write to the DB. Are there any other options aside from “deploy more transactors” and “scale up the transactor hardware”?#2020-04-0616:03arohnermultiple databases seems appealing as a middle ground because it seems to reduce ops load somewhat, and my data is already sharded, and it seems like it should work to remove one bottleneck in the system#2020-04-0616:04johnjFWIW, I recall someone from datomic saying here datomic is not desgined to run multiple DBs performantly#2020-04-0616:04arohneryes, this is abstract and fuzzy because I don’t have a production system yet. 
I’m the CTO, so I have to be able to tell the CEO “yes, in 3 years when we hit our growth numbers, we won’t hit a wall”#2020-04-0616:04favilaA limitation of on-prem multi-db-per-transactor loads is that transactor indexing work isn’t scheduled or scaled evenly across dbs#2020-04-0616:05johnjhttps://forum.datomic.com/t/multi-tenancy-databases/238/3#2020-04-0616:06ghadineed example transactions and example queries#2020-04-0616:06ghadiintegrity/key constraints#2020-04-0616:07ghadiCloud vs. On-Prem?#2020-04-0616:07ghaditransactions/sec
datoms/tx#2020-04-0616:08ghadi@arohner wdym by "data is already sharded"?#2020-04-0616:10ghadito be clear, I didn't mean that N databases is a bad idea#2020-04-0616:10ghadionly err_insufficent_context#2020-04-0616:45arohner“data is already sharded” == we’re running on kafka. Kafka topics are partitioned N ways, typically 10-100. So in a stream of messages, one partition doesn’t see all messages, it sees 1/Nth of total load#2020-04-0617:01ghadiwhat about downstream of that?#2020-04-0617:18arohnerI’m not sure what you mean#2020-04-0617:30favilakafka topic partitioning doesn’t imply anything about the locality of that data as you intend to use it later#2020-04-0617:32arohnerEach topic is partitioned such that it doesn’t need access to other partitions. If it did, that would get in the way of scaling#2020-04-0617:33ghadiunderstood - but like @favila said, the kafka topic partitioning is on dead unindexed data#2020-04-0617:33ghadithe query patterns will determine what data needs to be colocated in the same Datomic DB#2020-04-0617:34ghadiI'm using Datomic Cloud on one project right now, with 200 DBs in the same cluster#2020-04-0619:40johnjthose are 200DBs running on a single node?#2020-04-0619:55ghadisingle Datomic Cloud system#2020-04-0617:35ghadiwe don't anticipate needing cross-database querying#2020-04-0617:36ghadiI'm not sure how the financial data you're storing needs to be aggregated#2020-04-0617:38Braden Shepherdsonso I have a domain ID that's auto-incrementing. I want to make a concurrency-safe way to add a new entity. I suppose there's no way to make a query inside the transactor? That would be ideal, if I could find the max and increment it.
failing that, it feels like the best plan is to query the max ahead of time, increment it, and then transact
[[:db/cas "new-thing" :my/domain-id nil new-domain-id]
{:db/id "new-thing" etc etc}]
does that work? is there a better way?#2020-04-0707:57fmnoiseyou can make a tx-function for that#2020-04-0707:57fmnoise{:db/ident :generate-id
:db/doc "Generates a unique sequential id for given attribute and temp id"
:db/fn #db/fn{:lang "clojure"
:params [db attribute entid]
:code
(let [attr (d/attribute db attribute)]
(when (and (not (string? entid))
(get (d/entity db entid) attribute))
(throw (IllegalArgumentException.
(str "Entity already has id " (get (d/entity db entid) attribute)))))
(if (and (= (:value-type attr) :db.type/long)
(= (:unique attr) :db.unique/identity))
(let [id (->> (map :v (d/datoms (d/history db) :avet attribute))
(reduce max 0)
inc)]
[[:db/add entid attribute id]])
(throw (ex-info (str "Invalid attribute " attribute)
{:attr attr}))))}}#2020-04-0707:57fmnoiseworking example ☝️:skin-tone-2:#2020-04-0707:58fmnoisethen you call it like this#2020-04-0707:59fmnoise(d/transact
conn
[[:generate-id :order/id tempid]
{:db/id tempid
:order/customer customer-eid
:order/items ...}])#2020-04-0617:39Braden Shepherdson(and then wrap that whole process in a retry loop)#2020-04-0617:39ghadiprobably need a retry loop with CAS, but your increment + your other transaction data need to be in the same tx#2020-04-0617:40Braden Shepherdsonoh, sorry. by "find the max and increment it" I meant in memory#2020-04-0617:40ghadimax?#2020-04-0617:42ghadibut, yes, you need to pull it and inc it, conj that with your regular transaction data, and if that fails try it again#2020-04-0617:43ghadiyou can search for :cognitect.anomalies/conflict in the ex-data, I believe @braden.shepherdson#2020-04-0618:27Braden Shepherdsoncas doesn't seem to like tempids? :db.error/not-a-keyword Cannot interpret as a keyword: new thing, no leading :#2020-04-0618:28favilacorrect. CAS cannot work on tempids. CAS is a transaction function. Tempid resolution cannot occur until all transaction functions have run. Therefore CAS cannot work on a tempid#2020-04-0618:29favilaMore specifically, a transaction function that reads an entity, since it is unknown until the end of the tx what entity that tempid resolves to.#2020-04-0618:29favila(A tx fn can still emit tempids or do anything with them that doesn’t require resolving them)#2020-04-0618:30Braden ShepherdsonI understand, thanks for the insight. 
is there another way to solve my need for an incrementing domain ID?#2020-04-0618:31favilayou need a tx fn#2020-04-0618:32favilaor cas outside the transaction, like @U050ECB92 mentioned#2020-04-0618:33favilaThe most robust pattern I have implemented is: have a counter entity with a unique id attr, a no-history nonce attr and an attr to hold the current value.#2020-04-0618:34favilathen have a tx fn with a signature like [db data-entity target-attr counter-entity]#2020-04-0618:35favilait emits [:db/add counter-entity noce-attr random-nonce] [:db/add data-entity target-attr counter-value+1] [:db/add counter-entity counter-attr counter-value+1]#2020-04-0618:36favilathis atomically increments and assigns the counter, and also protects against two things trying to increment the counter in the same tx#2020-04-0618:36favilathere are other ways to do this also#2020-04-0618:37favilayou can also run this before issuing the tx, and have a cas on the counter entity to the new max counter id#2020-04-0618:37favilabut be careful about composing multiple counter issuances together#2020-04-0618:38favilarepeating a cas twice isn’t going to cause a conflict, and will end up “issuing” the same number twice#2020-04-0618:44Braden ShepherdsonI follow.
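For reference, a minimal sketch of the nonce-counter transaction function favila outlines above. The attribute and function names are illustrative (not from the thread), and it assumes a classpath transaction function that receives the db value as its first argument:

```clojure
;; Sketch only: :counter/value and :counter/nonce are hypothetical attrs.
;; :counter/nonce should be installed with :db/noHistory true.
(defn assign-counter-id
  [db data-entity target-attr counter-entity]
  (let [current (or (:counter/value (d/pull db [:counter/value] counter-entity)) 0)
        next-id (inc current)]
    [;; fresh random nonce: two increments of the same counter inside one
     ;; tx now assert conflicting values on a cardinality-one attr, so the
     ;; tx aborts instead of silently issuing the same number twice
     [:db/add counter-entity :counter/nonce (str (java.util.UUID/randomUUID))]
     [:db/add counter-entity :counter/value next-id]
     [:db/add data-entity target-attr next-id]]))
```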
the operation to add this value is rare enough, and the "table" small enough, that I'm prepared to query for the current max domain ID inside a transactor function.#2020-04-0618:59Braden Shepherdsongot that approach working nicely#2020-04-0618:59Braden Shepherdsonthanks#2020-04-0816:16denikIs there a way to pass collections to :in and achieve logical and matching instead or as in collection bindings? https://docs.datomic.com/on-prem/query.html#collection-binding#2020-04-0816:19favilawhat do you mean by “and matching”?#2020-04-0816:21denik[:in $ [?match-all-these ...]
:where
[?foo :foo/attrs ?match-all-these]]
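A sketch of an aggregate-then-filter answer to this "match all" question, using plain Clojure on the query result (`match-all` is a hypothetical helper; `:foo/attrs` is the cardinality-many attribute from the query above):

```clojure
(require '[clojure.set :as set])

;; Returns the entities whose :foo/attrs values include every element
;; of `wanted`. (distinct ?v) aggregates each entity's values into a
;; set, which is then tested for superset-ness.
(defn match-all [db wanted]
  (->> (d/q '[:find ?foo (distinct ?v)
              :where [?foo :foo/attrs ?v]]
            db)
       (filter (fn [[_foo vs]] (set/subset? (set wanted) vs)))
       (mapv first)))
```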
#2020-04-0816:24denikso that of the collection passed in, the query engine only returns the entities who's attribute contain all of the collection's values#2020-04-0816:24denikthat is, for cardinality/many#2020-04-0816:24favilaNo. there’s only one item bound at a time, so it only unifies on one at a time. to bind “all the attr values matching [?foo :foo/attrs]`” to a single thing requires some kind of aggregation.#2020-04-0816:25favilathere are a few approaches to this. The easiest IMO is just to use some clojure code#2020-04-0816:25favilaOther approaches here: https://stackoverflow.com/questions/53875748/filter-by-cardinality-many-field-that-contains-all-the-values#2020-04-0816:26denikthanks @U09R86PA4 for using the internet which I failed to do 😄#2020-04-0816:26favilasorry, wrong one#2020-04-0816:26favilahttps://stackoverflow.com/questions/43784258/find-entities-whose-ref-to-many-attribute-contains-all-elements-of-input#2020-04-0914:16dpsuttonI remember there being a feature released for datomic that made outside reporting much easier? Can anyone point me towards this? Musing about datomic at work and wanted this to be informed#2020-04-0914:18alexmillerpresumably sql access?#2020-04-0914:19alexmillerhttps://docs.datomic.com/cloud/analytics/analytics-concepts.html#2020-04-0914:19alexmillerhttps://docs.datomic.com/cloud/analytics/analytics-tools.html#2020-04-0914:19dpsuttonyes thank you!#2020-04-1013:53cgrandIs this a known pull bug?
; schema
#:db {:ident :test/kws :valueType :db.type/keyword :cardinality :db.cardinality/many}
#:db {:ident :test/bools :valueType :db.type/boolean :cardinality :db.cardinality/many}
; then I transact this entity
{:db/ident :my/test1 :test/bools [true false] :test/kws [:yes :no]}
; then I pull it (either in a query or with d/pull)
=> (d/pull (d/db conn) '[*] :my/test1)
{:db/id 17592188897678, :db/ident :my/test1, :test/bools [true], :test/kws [:no :yes]} ; <- false filtered out
; funnier
{:db/ident :my/test2 :test/bools [false]}
=> (d/pull (d/db conn) '[*] :my/test2)
{:db/id 17592188897682, :db/ident :my/test2, :test/bools nil} ; <- a nil#2020-04-1017:31favilaThere was something in the changelog about falsiness. Are you on latest datomic?#2020-04-1017:32cgrandI’m on 0.9.6045#2020-04-1018:05johnjI have hit two serious bugs(that have already been fixed) while playing with datomic. My conclusion is that is still too immature to be trusted with production critical data. Don't know how some companies get by with it.#2020-04-1015:02donyormSo when I do a datomic push it gives me a list of overriden dependencies. However, if I copy and try to use those dependencies in my local project, I get the following error: Could not find artifact com.cognitect:s3-creds:jar:0.1.23 in central. Any idea what that's about?#2020-04-1019:06marshallthat dep will be fixed in the next release of ion-dev
The internal ion deps overrides shouldn’t present a problem#2020-04-1103:06drewverleeI'm using datomic cloud and i get the exception "http://grow.de
I don't understand what that means? why isn't it allowed? i assume there is something wrong with the function. Backing up, the goal is to enforce that every floor has a building. This is just a learning exercise, so i'm open to any interpretation of how it would be best to solve this.
code:
(defn floor-has-one-building-v4?
  [db eid]
  (let [building (d/pull db '[{:building/_floor [:building/name]}] eid)]
    (= (count building) 1)))
(def predicate
  {:db/ident :floor/guard4
   :db.entity/attrs [:floor/name]
   :db.entity/preds 'grow.dev/floor-has-one-building-v4?})
(comment
  (d/transact conn {:tx-data [predicate]}))
(d/transact conn {:tx-data [{:building/name "Big Building"
                             :building/floor [{:floor/name "bottom"
                                               :db/ensure :floor/guard4}]}]})#2020-04-1315:40Joe LaneI second what ghadi said. I just noticed that your "exception" message says "... by datomic/on/config". If that isn't a slack typo, it needs to be datomic/ion-config.edn (per https://docs.datomic.com/cloud/ions/ions-reference.html#ion-config)
The V-leading index VAET supports efficient queries for references between entities, analogous to a graph database.
The combination of EAVT and VAET supports arbitrary nested navigation among entities. This is like a document database but vastly more flexible. You can navigate in any direction at any time, instead of being limited to the containment hierarchy you selected when storing the document.
The above tells me that datomic is a good fit where the data is being modelled as a graph / document.
Are there any examples of graph data / document data being modelled in datomic ? Most examples I see are of relational / columnar types.#2020-04-1114:30potetm@murtaza52 There’s a lot of overlap there.
For a document store, you just need an entity identifier and some attributes on that entity.
For graphical data, you just need to reference other entities (via :db.type/ref).#2020-04-1212:53murtaza52thanks @U07S8JGF7#2020-04-1209:15murtaza52datomic allows different db's, any rule of thumbs of how data should be partitioned into different dbs ? My assumption is that the peers will have the indexes / cache of only the dbs to which it is connected, so that could be a determining factor.#2020-04-1218:18cobyI guess I'd start with a baseline rule of: don't partition your data into multiple dbs unless it's absolutely necessary. If it is necessary, your problem domain should make it clear what you need to do, e.g. regulatory restrictions on how/where you are allowed to store data. If the need doesn't arise from the domain, I'd venture that you are working around a deeper problem and you should attack that first.#2020-04-1219:48murtaza52thanks @UH85MNSKE#2020-04-1219:06daniel.spanielis there a way to d/pull many ? in the cloud datomic api? I have list of db/id's and want to pull all entities.#2020-04-1219:10daniel.spanielI can't do this kind of thing
(d/q '[:find (pull ?e pattern)
       :in $ pattern [?product-ids ...]
       :where
       [?e :db/id ?product-ids]]
     (d/db conn) '[*] db-ids)
#2020-04-1219:59eagonWhat is ?product-ids ? Are these the natural entity ids of datomic itself?
Either way, you can do exactly what you want, no need to match on :db/id.
(d/q '[:find (pull ?product-ids [*])
       :in $ [?product-ids ...]]
     (d/db conn) db-ids)
More commonly, you might have some actual product ids that you use:
(d/q '[:find (pull ?product [*])
       :in $ [?product-id ...]
       :where [?product :product/id ?product-id]]
     (d/db conn) product-ids)
Assuming :product/id is your attribute for product id#2020-04-1220:03eagon@UEC6M0NNB Key to remember is that you're not forced to use :where, if you have the db/ids already just literally match on them. You can pull many this way.#2020-04-1220:07daniel.spanielholy crapoley @UNRDXKBNY => now that is some good teaching. Works perfectly. Never knew that I could drop the where. So simple. Thanks for taking the moments to explain this#2020-04-1219:11daniel.spanielsince this throws error
Unable to resolve entity: :db/id
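eagon's no-`:where` binding above also generalizes into a pull-many substitute for the Cloud client; a sketch (not an official API, and note that query results are a set, so the order of `eids` is not preserved):

```clojure
;; Sketch: pull each of `eids` with `pattern` in a single query round-trip.
(defn pull-many [db pattern eids]
  (map first
       (d/q '[:find (pull ?e pattern)
              :in $ pattern [?e ...]]
            db pattern eids)))
```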
#2020-04-1219:28fmnoisewhat about using pull-many without query? If you have only db/id matching condition, query is not not the right way, but pull is#2020-04-1219:42daniel.spanielpull-many is not available in cloud api ( unfortuneately .. good question#2020-04-1220:24favilaPull-many can be implemented in cloud via a simple wrapper around a query#2020-04-1220:30daniel.spanieli think i know what you mean .. very good point actually.#2020-04-1305:27fmnoisewhy not (map #(pull ...)) btw?#2020-04-1310:50favilaSpeed. On client each pull is an http request round-trip. On peer work can be shared between calls or could be done in parallel#2020-04-1219:56murtaza52Datomic provides attribute preds and entity preds, usually such validation will happen in the application layer. From a design perspective in what scenarios should the above be used to validate data in the DB layer too ?#2020-04-1220:52favilaAttr preds are mostly for safety. Entity preds are protection from data races#2020-04-1220:54favilaIe there may be an invariant you can’t guarantee without reading the state of the db right before committing the tx. Entity preds are the only thing that can do this#2020-04-1302:51Jacob O'Bryantand transaction functions#2020-04-1304:14murtaza52thanks @U09R86PA4#2020-04-1311:37favilaTransaction functions can only read the “before transaction” value of the db, so its possible to run two tx fns In the same tx which violate an invariant when run together (eg two counter incrementers). entity preds read the “after transaction” value but can abort the transaction if a constraint is violated#2020-04-1315:41Joe Lane@U09R86PA4 Now THAT is interesting. I had never considered that, thanks!#2020-04-1320:08johnjaka implement your own system for managing consistency#2020-04-1222:43otwieraczHey! I’ve hit something unexpected in Datomic. After storing vector in database it’s being pulled out with elements ordered differently. #2020-04-1222:44otwieraczShould I look for some weird bug in my code? 
Or it’s desired behavior? (That would be terrible!)#2020-04-1302:53Jacob O'BryantThat's expected behavior. Multi-cardinality values in Datomic are unordered. To preserve ordering you'll have to model it explicitly.#2020-04-1310:51favilaDatomic and datalog operate on sets not lists#2020-04-1415:29onetomThat's just a notational convenience that you can specify multiple values for an attribute as a vector. You can also use a set literal (`#{val1 val2}`); that conveys the intent more precisely, but not every language has native support for representing sets concisely.#2020-04-1406:58Janne SauvalaI got started with Datomic Cloud (solo) during the weekend. Is there a recommended way how to setup a development environment? I saw a few others asking about this on the forums and their solution is to use datomic-client-memdb (https://github.com/ComputeSoftware/datomic-client-memdb) or separate dev-cloud environment.#2020-04-1407:02eagon@UJZ6S8YR2 Also checkout https://github.com/markbastian/replion ! Will get you a lot closer to the Cloud/Ions based environment#2020-04-1410:06Janne SauvalaThanks @UNRDXKBNY 🙂#2020-04-1410:32Janne SauvalaI guess I could use d/with to some degree as showed here http://jamespw.com/clojure/datomic/testing/2016/09/17/testing-datomic-queries-and-pulls-using-with.html#2020-04-1413:51Joe Lane@UJZ6S8YR2 I made a development solo system which I used for over 2 years successfully without a local offline system. I wouldn't complicate things with the datomic-client-memdb or any other replacement for just having an internet connection until you run into an actual pain point.
If cost is the concern you can use the datomic CLI Tools to turn on your bastion+solo node when you begin doing development for the day and shut it off at the end of the day (or set an Autoscaling group to do it for you on a schedule).#2020-04-1413:52Joe LaneThe last thing you want to do when just getting started is have divergent library behavior between local development and production.#2020-04-1415:04kennyFWIW, I wrote that library and agree with @U0CJ19XAM. It's very unfortunate there is no local Datomic Cloud solution. If you can get away with using a solo deployment for your needs, you most certainly should. You will need to build some tooling around how to use a single solo deployment with multiple developers but that isn't too difficult (we prefix DB names).
We have the ability to easily switch to that library for those that have a poor/no internet connection. We run integration tests against Datomic Cloud.#2020-04-1416:14Janne SauvalaThanks @U0CJ19XAM and @U083D6HK9! I didn’t know you could turn on/off the bastion and node like that with CLI Tools, that sounds quite handy approach. I was also concerned about the integration with CI-pipeline but it should be okay just to use the Cloud like you are doing, Kenny.#2020-04-1416:14kennyIt took some work to get it to work on CI 😬#2020-04-1416:14Joe LaneWhen you set up CI, do it in the us-east-1 region.#2020-04-1416:15Joe LaneOtherwise you will have headaches...#2020-04-1416:15kenny> When you set up CI, do it in the us-east-1 region.
What does this mean?#2020-04-1416:15kennyDatomic Cloud region or CI region? We use CircleCI and I don't think they have a region selector.#2020-04-1416:16Joe LaneSorry, I assumed codebuild+codepipeline. If you use those, set them up in the us-east-1 region.#2020-04-1416:17Joe LaneIts the classic "cross-region s3 bucket access from within a vpc" issue :)#2020-04-1418:19johnjyou can also avoid creating the bastion and connect directly to the node, set a firewall rule to allow and protect access#2020-04-1418:23kennyI’d suggest against that due to the security implications. #2020-04-1418:26johnjas?#2020-04-1418:32kennyYou have created a hole in your VPC to the outside world 🙂 unless you have 100% certainty (and a security policy that allows such a thing) that your hole will always hold true (e.g., reserved static IP), you have an issue. #2020-04-1418:35johnjyep, for solo, restricting access to an IP in the security group#2020-04-1418:36johnjhow will that won't always hold true?#2020-04-1418:37kennyYou just be 100% sure that the IP is owned by you. Typically an IP provided to you by an ISP is not static. Most hosted CI solutions (e.g., CircleCI) do not make any guarantees about the IP addresses of their worker nodes. #2020-04-1418:39kennyAn example of an IP owned by you would be an EIP. #2020-04-1418:42johnjYeah, I understand, but for a solo dev on solo playing on the weekends it might be enough#2020-04-1418:48johnjanyway, I just saw that now there are some CLI tools to simplify access#2020-04-1418:49johnjso there's less reason to avoid bastion#2020-04-1419:23Joe LaneNever do that.#2020-04-1415:01naomarikHello! I’ve made a mistake and did a restore instead of backup on my database. App is still running though and all cached data is there. 
Is there a way to dump those datoms into a file I can restore with via the repl on the running app?#2020-04-1415:25onetomu would need to have a valid DB reference to that state of the DB which contained the datoms still before the restore, then u can do something like (seq (d/datoms (d/history db-value-before-restore) :eavt)) , but not sure if it would work#2020-04-1415:33naomarikThen I could import them with which command?#2020-04-1415:33naomariknearly done writing an import script anyway — but want to try this first#2020-04-1415:34onetomyou have to program that yourself#2020-04-1415:36naomarikgist online of something like this about 200 lines.. looks sketchy#2020-04-1415:36onetomu just get back a sequence of Datom objects, which you can either thread through (map (partial into {})) or (map (partial into [])) , then if u want to transact them back to the DB, you would need to massage them into [:db/add ....] or [:db/retract ...] form#2020-04-1415:39onetomi haven't seen any off-the-shelf solution for this kind of situation, but technically you want those datoms transacted back into your DB what you see thru the history DB.
you might even want to use d/since to just get the tail of the history db after the time of restoration.#2020-04-1415:41onetomit's a bit unrelated, but i often work with the aggregate differences of a database between 2 points in time.
for that i use this function:
(defn db-diff [db-before db-after]
  (-> db-after
      (d/since (d/basis-t db-before))
      (d/history)))
this way i don't have to care what kind of transactions have led to the db-after state; i just see all the added and retracted datoms.#2020-04-1416:05onetomLet's say i have a database like this:
[{:thing/name "thing0"}
 {:thing/name "thing1" :thing/container [:box/id "box1"]}
 {:thing/name "thing2" :thing/container [:box/id "box1"]}
 {:box/id "box1"}
 {:box/id "box2"}]
where :thing/container is a ref card/one and :box/id is uniq/identity
in plain english:
1. thing0 is not in any box
2. thing1 and thing2 are both in box1
3. box2 is empty
How can I express in datalog that im looking for empty boxes?
I would expect [:box/id "box2"] as the result.#2020-04-1416:15mgrbyte#2020-04-1416:45onetomthx for trying!#2020-04-1416:07onetomI can of course just pull all box entities and all container values and do a set difference,
but that feels way too low-level, eg:
(set/difference (->> (d/datoms db :avet :box/id) (map :e) set)
                (->> (d/datoms db :avet :thing/container) (map :v) set))#2020-04-1416:09onetomI know about the missing? predicate, but i don't see how that could help in this case#2020-04-1416:41onetomHere is a full example#2020-04-1416:49onetomthis actually seems to work:
(-> '[:find [?box ...] :where
      [?box :box/id]
      (not [_ :thing/container ?box])]
    (d/q db))#2020-04-1416:50onetommy problem earlier was that i hadn't ignored the thing id:
(-> '[:find [?box ...] :where
      [?box :box/id]
      (not [?thing :thing/container ?box])]
    (d/q db))
so i was getting this error:
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/insufficient-binding [?thing] not bound in not clause: (not-join [?box ?thing] [?thing :thing/container ?box])
#2020-04-1416:52onetombut then i tried this too:
(-> '[:find [?box ...] :where
      [?box :box/id]
      (not-join [?box]
        [?thing :thing/container ?box])]
    (d/q db))
and it didn't work at that time, but seems to be correct now.
maybe my actual use-case is actually different from this example; i will investigate it tomorrow.#2020-04-1417:15Ben Hammonddata storage question; I represent state as a 218-bit (ish) BigInt, like
34673886561508028522262350969951764751176091809504658889186481737N
for the purposes of HTTP transmission I am encoding this into a Base62 type string; thus
6qzEQWjdQUt25HqXqMiUSY8Yu63ru31LbeJoc
I need to store this value in datomic; I had assumed that it would be more efficient to store as :db.type/bigint (rather than as a string)
but I have no evidence for this, other than intuition; can anyone confirm or deny?
I mostly want to use this value as a lookup identity.
would there be much performance difference between storing as a bigint, or storing as base62 encoded string?#2020-04-1418:15Ben Hammondperhaps I should look at the fressian encodings of each and see which comes out smaller#2020-04-1419:22Joe Lane@ben.hammond are you concerned about encoding size or execution time converting to and from values? How often do you need to look up these values and do a base62 conversion? Is that efficient? Do you ever need to query over these values numerically in datalog? If so, you would probably want them represented as numbers so you can use primitive range predicates. etc, etc,. If you want leverage over these numbers in the database I'd say store them as numbers.#2020-04-1419:23Ben Hammondefficiency really#2020-04-1419:23Ben Hammondthe numbers are opaque so theres no arithmetical logic#2020-04-1420:11Joe LaneEfficiency of what though?#2020-04-1419:45Ben HammondI guess I was thinking, that in terms of entropy, the raw number must be more efficient than a UTF-8 string where you are only using 62 chars of a possible 254 single-byte chars#2020-04-1419:45Ben Hammondbut I dont' suppose it is particularly significant either way#2020-04-1419:56Sam DeSotaHi, what would be the best way to generate a a random datomic cloud id before transacting an entity? I attempted to generate a random Long but it seems to be failing intermittently. Squuid does not appear to exist in cloud client.#2020-04-1420:14Joe LaneYou are not required to generate a tempid unless you want to in datomic cloud. https://docs.datomic.com/cloud/transactions/transaction-processing.html#tempid-resolution
https://docs.datomic.com/cloud/transactions/transaction-data-reference.html#2020-04-1420:28Sam DeSotaActually I'd like to generate an actual entity id, not a tempid. I'd like to save a file to S3 including the entity id before I commit the entity to datomic, so the file is available for other systems.#2020-04-1420:29Sam DeSotaI could use a generate my own uuid instead, and use :db/unique, but was wondering if there as a way to do this with :db/id#2020-04-1420:43Joe LaneMake a unique attribute with a UUID#2020-04-1421:27donyormI'm getting this error when trying to call this lambda:
(def ion-handler-lambda-proxy
  (apigw/ionize
   (fn [request]
     {:status 200
      :headers {"Content-Type" "application/edn"}
      :body "ok"})))#2020-04-1421:28donyormBut I'm getting this error, and I have no idea why: No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.api-gateway/ToBbuf found for class: nil . The troubleshooting docs say to ensure your lambda function is returning a string, but that produces the same error.#2020-04-1421:50donyormI seem to be getting this error even in the ion-starter tutorial#2020-04-1509:56Ignashello. I am running the following query to get changes that happened over the last few minutes:
(d/q '[:find [?e ...]
       :in $ ?log ?from-t ?to-t
       :where
       [(tx-ids ?log ?from-t ?to-t) [?tx ...]]
       [(tx-data ?log ?tx) [[?e]]]
       [?e :entity/status]]
     (d/as-of db timestamp-to)
     log
     timestamp-from
     timestamp-to)
Recently I've noticed that if I rerun the same query after some time, I sometimes get more items returned.
Is it possible that on the original run I am getting stale view from the log? As I am reusing the same instance of log for the whole batch.. or for some other reason#2020-04-1512:42benoitHi Ignas, that's surprising. Is it possible that your timestamp-to is in the future the first time it runs? And the second run picks up more transactions?#2020-04-1512:55IgnasShouldn't be. timestamp-to is capped at (new Date) on the original run#2020-04-1513:02benoitThe peer and the transactor might not be in sync. I would look at the extra items you get in your second query and see when they were transacted. Otherwise I don't see what could have happened, sorry 🙂#2020-04-1514:50favila(D/as-of db timestamp-to)#2020-04-1514:51favilaThat is not a guarantee that the db contains tx info up to and including timestamp. Due to physics, the information may not have arrived yet#2020-04-1514:52favilaTo get a guarantee, use d/sync#2020-04-1514:53favilaOr, read dbs off the tx-report-queue#2020-04-1514:55favilaOr (d/db conn) and examine that db’s basis-t#2020-04-1514:56favilaIOW you should either derive “now” from the datomic peer’s clock (whatever the latest t is that it knows about) or use d/sync to make sure the peer caught up to a wall-clock time you want to sync to#2020-04-1514:57favilaDon’t just use (new Date) as your clock blindly#2020-04-1515:30benoitOh! It's the as-of that might not see everything. Thanks @U09R86PA4.
Was that correct that he could also have missed txs if the peer clock was ahead of the txtor?#2020-04-1515:31favilaIt’s the same issue#2020-04-1515:33favilaDbs have an inherent basis-t, which is the newest t they know about. As-of is a filter on top of that, but doesn’t alter the basis#2020-04-1515:33benoitYeah the new Date seemed like a red flag to me but I missed the database part of it.#2020-04-1515:33favilaIf your as-of is > basis it’s like not having an as-of#2020-04-1515:34benoitmakes sense#2020-04-1515:54Ignasare there any benefits of doing d/sync without t over just doing (d/db conn)?#2020-04-1515:59Ignasas I don't have a t to sync to, would just using (d/db conn) instead of the whole as-of work here:
(d/q '[:find [?e ...]
       :in $ ?log ?from-t ?to-t
       :where
       [(tx-ids ?log ?from-t ?to-t) [?tx ...]]
       [(tx-data ?log ?tx) [[?e]]]
       [?e :entity/status]]
     (d/db conn)
     log
     timestamp-from
     timestamp-to)
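favila's advice above can be sketched as follows (Peer API; assumes `tx-ids` accepts a mix of Dates and t values, which the Log API permits):

```clojure
;; Sketch: deref of (d/sync conn) blocks until this peer has caught up
;; with the transactor, so `db` contains every tx completed before this
;; call; its basis-t is then a trustworthy "now" for the log window.
(let [db    @(d/sync conn)
      now-t (d/basis-t db)]
  (d/q '[:find [?e ...]
         :in $ ?log ?from-t ?to-t
         :where
         [(tx-ids ?log ?from-t ?to-t) [?tx ...]]
         [(tx-data ?log ?tx) [[?e]]]
         [?e :entity/status]]
       db (d/log conn) timestamp-from now-t))
```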
#2020-04-1516:00IgnasI am ok with getting the latest value of the entity here, even if as-of is in the past#2020-04-1517:28favilaIf you are coordinating between peers or with some other wallclock system you need sync. If you aren’t coordinating and just want the latest thing that peer can see, use the basis t of any db value as the “latest”#2020-04-1517:29favilaI am not sure which you are doing though. The use of timestamp arguments is suspicious to me and suggests maybe you are coordinating with a clock somewhere#2020-04-1517:41IgnasI have a service that exports the changes to another database. It uses a sliding window with overlap so that it can catch-up if there is a delay, or we might want to replay from a certain point in time. This is where the wallclock comes in.. we use it to log what periods were exported and to determine what to export next. so ideally i just want to get everything that happened in the last few minutes (ant the window gets bigger if its catching up)#2020-04-1517:44IgnasSo it seems that it might have been a design flaw to choose timestamps as a way to track the export windows#2020-04-1517:50Ignason the other hand it gives us more visibility on what data is already exported, being able to easily say that for example data is at the moment served with 5 hour delay.#2020-04-1515:43Cas ShunIs there something like Datomic Console for Datomic Cloud, or a way to use it with Cloud?#2020-04-1516:02Joe Lane@auroraminor Use REBL#2020-04-1608:38Oleh K.Hey, does anybody know what to do if datomic cloud returns "Busy indexing" constantly on write attempt (solo env) ?#2020-04-1612:20marshallwhat version of datomic#2020-04-1608:38Oleh K.and what could cause that#2020-04-1610:55UriHi all, thinking about crux/datomic as a solution for my company - is it possible to invalidate a bunch of datoms at once? 
(Say loading a batch from an external source daily)#2020-04-1613:39favilaWhat do you mean by “invalidate”?
[{age: 40, name: "john", children: ["joe"]}, ...]
get some id for this batch (e.g. "1")
then tomorrow i want to add a new batch with this info:
[{age: 40, name: "john", children: []}, ...]
and invalidate batch "1"
and for the query "who are john's children" get an empty response#2020-04-1614:37favila[:db/retract john :children "joe"] ? How is this represented in datomic?#2020-04-1614:39favilaI think I need more information on how you are planning to encode and use this information. datomic’s unit of information (the datom) is quite granular, so it doesn’t make sense to “invalidate” them at a document level#2020-04-1614:39favilamaybe you really want a document store, e.g. like crux#2020-04-1614:40favilawhat is important for you to know about the connection between a datom and a batch?#2020-04-1616:27Uriyou can treat "joe" as its own entity and children -> person/child relationship#2020-04-1616:27Uriagain, the scenario is of loading a bunch of facts from an external source#2020-04-1616:27Urithese "facts" get invalidated every day#2020-04-1616:27Uriand new facts replace them#2020-04-1616:28UriI assume this is modeled as retraction in datomic#2020-04-1616:28Uribut one would have to retract all facts associated with a specific batch#2020-04-1616:30Urias for document store or facts store - this is all the same to me, I'm eventually modeling a big graph, which both solutions do afaiu.
I do however want to be able to take a subgraph and remove it.
what I had in mind is that all datoms associated with a batch will have this fact written somewhere, and then use this association to find each of them and retract them, but this sounds like a heavy load on the db (?)#2020-04-1616:31favilait’s just odd. it makes more sense (to me) to only assert/retract the delta between batches; or alternatively to reify each batch’s data separately so they are present simultaneously alongside each other#2020-04-1616:32favilaeither makes more sense to me than retracting everything from a batch and reasserting a new batch#2020-04-1616:32favilabut it all depends on your goals#2020-04-1616:33Uriso let's say i do #2#2020-04-1616:33Uriso i want to run datalog queries but only on batch #7#2020-04-1616:33Urii can do that easily? this kind of higher order relationship query#2020-04-1616:33favilaeach entity would have to be distinguished by batch#2020-04-1616:33Urientity or datom?#2020-04-1616:34favilaboth are possible#2020-04-1616:34Urii want to query "who are joe's children according to batch #7"#2020-04-1616:34favilaI highlight entity because you can’t make use of unique ids that don’t have batch info in them somehow#2020-04-1616:35favila“who are joe’s children according to batch #7” => “who are batch#7-joe’s children”#2020-04-1616:36Uriah so recreate all the entities#2020-04-1616:36favilaeach would be a different joe#2020-04-1616:36Urigot it#2020-04-1616:36favilayeah, that’s the “each present simultaneously” scenario#2020-04-1616:36Urisounds like that would be very confusing ("batch-7-teacher")#2020-04-1616:37Uriif i had an entity for profession#2020-04-1616:37favilathe “compute the delta” scenario is to make transaction entities themselves the batch marker#2020-04-1616:37favilaadd a batch marker to a tx after you finish a batch#2020-04-1616:38favilathen find that tx, and query (d/as-of that-tx) to say “what did that batch look like”#2020-04-1616:39Uriyeah that could work 🤔 but i'm wondering how easy the diff computation 
is#2020-04-1616:39favilahowever there’s a lot of assumptions here: batches only replace each other; batches have an order that correspond to tx order; you never backfill batches; you compute the delta of adds/retracts correctly when you add a new batch#2020-04-1616:39favilaThis is a variant of the problem of time-of-record vs domain time#2020-04-1616:39favilais a batch a domain-time concept, or merely an audit concept#2020-04-1616:40favilahttps://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2020-04-1616:41Uriso imagine i have my DB on datomic, and there's an external PLOP style database I'm scraping every day
is that a domain-time or audit?#2020-04-1616:41favilawhat is a “PLOP” style database?#2020-04-1616:41Urimutable db#2020-04-1616:41Urisuch as mongodb#2020-04-1616:42favilait depends on your use case for that data#2020-04-1616:42Urimy use case: I have a function that gets a batch # and a query, and it needs to return the answer as if that batch were the only one loaded into the datomic db#2020-04-1616:43Uriit's not very different from "time travel"#2020-04-1616:43Urijust on a subset of the database#2020-04-1616:43favilacan you ever backfill a batch?#2020-04-1616:43Urino - batches invalidate previous ones#2020-04-1616:43Uri(iiuc what you're asking)#2020-04-1616:44favilaso, you will never ever be in a situation where somehow a day’s batch got skipped by accident, and now you need to put that day’s data in so someone can query as if it were there?#2020-04-1616:45Uriright#2020-04-1616:45Urinever#2020-04-1616:45Uri(i think your link above is relevant, skimming through it)#2020-04-1616:45favilahonestly if all you want to do is snapshot a mongodb crux might be a better fit#2020-04-1616:47Urinot familiar with this
but i like the automatic join feature - i really do have a graph#2020-04-1616:47favila> thinking about crux/datomic as a solution for my company#2020-04-1616:47favilayou’re not familiar with crux?#2020-04-1616:47Urijuxt-crux#2020-04-1616:48Urihttps://github.com/juxt/crux#2020-04-1616:48favilayes I’m aware of it#2020-04-1616:48Uriahh#2020-04-1616:48Urinow i understand 🙂#2020-04-1616:48favilawhat do you mean by “not familiar with this”?#2020-04-1616:48Urimissing a comma there#2020-04-1616:48Urii thought you meant mongodb-crux#2020-04-1616:48favilaoh#2020-04-1616:49Urinm i understand now#2020-04-1616:49Uriyeah that's what seems to me too#2020-04-1616:49Urijust wanted to make sure because i'm completely a newbie to graph dbs#2020-04-1616:49favilayeah, so, if all you are doing is dumping docs from mongo into another db that has better querying, crux may be a better fit#2020-04-1616:49Uricool#2020-04-1616:49favilait’s less work because you don’t have to translate the docs into a graph#2020-04-1616:50favilaand you don’t have to have a plan for inconsistent data#2020-04-1616:50favilaand you can use crux’s “valid time” to model your “batch number” concept#2020-04-1616:50Uriso what they said in the crux channel is that i can actually remove or invalidate a transaction#2020-04-1616:50Uriwhich is arbitrarily big json essentially iiuc#2020-04-1616:51Uri(my graph)#2020-04-1616:51favilacrux has bi-temporality vs datomic, but it gives up being a referentially-consistent graph and has a larger unit of truth (the document)#2020-04-1616:52Urihmm interesting, what does "referentially-consistent" mean?#2020-04-1616:52favilacrux doesn’t have references, in your json example, you need to manually know how to make “children” values join to something else#2020-04-1616:53faviladatomic has a ref type, the thing on the other end is an entity#2020-04-1616:53faviladatomic can also add/retract individual datoms: crux can only add a new document#2020-04-1616:54favilaIMO datomic is better as your 
“source of truth” primary db, and crux is better for dealing with “other people’s” messy data which you may not understand or have a full schema for#2020-04-1616:54Urii will ask them about it - sounds important
i mean, i do want to be able to identify entities across my json objects
so i can say in datalog "get me the age and name of things that have id=123"#2020-04-1617:01favilayou need to assign an id to a document when you create the object#2020-04-1617:02favilaif you have refs to something other than documents, you have to figure something out yourself#2020-04-1617:02favilacrux will ingest any EDN and decompose it into triples for query purposes, so you can still do arbitrary joins#2020-04-1617:02favilabut it doesn’t know the meaning of those attributes so it can’t enforce anything#2020-04-1617:02favilain fact, it doesn’t even enforce types#2020-04-1617:03Uriso everything is values, and only the document (i.e. what i load in) has a reference#2020-04-1617:03favilacorrect#2020-04-1617:03Urigot it - wow that's good to know#2020-04-1617:03favilahttps://opencrux.com/docs#transactions-valid-ids#2020-04-1617:03favilacrux has four transaction operations#2020-04-1617:04favila:crux.db/id is magic#2020-04-1617:04favilait’s required by every document#2020-04-1617:04favilaand there’s some limit to the kinds of values it can have#2020-04-1617:05favilahonestly this property, though scary for a primary data store, is absolutely freeing for data ingestion#2020-04-1617:05favilaI don’t need to write a complex ETL pipeline before I can use other people’s document-shaped data (and most of it is document-shaped)#2020-04-1617:06favilaI can figure out the joins later; I can retain broken data, etc#2020-04-1617:06favilabut I can always faithfully retain what they said, and transform/normalize/clean-up before moving into a primary datastore that isn’t so sloppy#2020-04-1617:07Uriin some sense this is something i was missing in datomic - the "who"#2020-04-1617:07Uriwho knows what#2020-04-1617:07Urikind of a theory of mind layer over the db#2020-04-1617:07favilayou can kind of do this by using transaction metadata, but you are subject to the limitations on transactions#2020-04-1617:08faviladatomic is built with a closed-world assumption--it is the source of
truth#2020-04-1617:08favilaother systems like rdf (which datomic is heavily inspired by) have open world assumptions and need complicated reification schemes to use datoms themselves as the subject or object of a predicate#2020-04-1617:09favilacrux takes a different approach by just letting you join on anything you want and working hard to make it fast#2020-04-1617:10favilaI think it’s best suited to cases where the provenance of the data you put into it is not yourself#2020-04-1617:10Uriideally i would just want to treat transactions as entities themselves and associating them (e.g. with a batch #)#2020-04-1617:11Uribecause the crux way is also limiting in some sense#2020-04-1617:12Uriand do datalog queries on a subset of transactions#2020-04-1617:12favilasure, but think through what the loading code would look like for crux vs datomic here#2020-04-1617:12Urifor my current problem - i agree it sounds like i have to compromise#2020-04-1617:13favilaalso, you can’t have a single tx for a batch in datomic--that tx is too big#2020-04-1617:13favilayou should aim for ~1000 datoms per tx#2020-04-1617:13favilayou can go over, it’s fine, but you shouldn’t have tens of thousands of datoms in a tx#2020-04-1617:14Uriah so i meant - treat datoms as entities and do queries on a subset of datoms*#2020-04-1617:14Urilike time travel lets you do it over the time axis#2020-04-1617:14favilaoh, so the “batch-7-joe” solution?#2020-04-1617:14Urii think computationally this would be intractable to do generally#2020-04-1617:15favilayou can do this with tuple refs, if each entity has a batch attribute and whatever their native id attribute is#2020-04-1617:15favilabut you have the same problem of needing to ingest the data in a topological-ish order so your refs work#2020-04-1617:17Urii'm thinking of something maybe simpler - imagine that each datom (not tx) had its own id - i think it's the instant today (?)
then i could say datoms 1, 2 and 7 belong to batch #8, and i would like a higher-order datalog query that first chooses a subset of datoms, then runs the internal query#2020-04-1617:17Urii mean - again computationally i don't see how you could do that generally, but if you had infinite cpu#2020-04-1617:21favilayou can do it with indexing#2020-04-1617:22favila{:entity/batch 7 :entity/id "foo" :entity/batch-id [7 "foo"]} where :entity/batch-id is a tuple attr#2020-04-1617:23favilayou only have to start your query from there; the refs outward should all be references to batch-7 entities anyway#2020-04-1617:24favilathis is in the “all batches available simultaneously” approach#2020-04-1617:25favilain the “transact deltas” approach, you can put the batch onto the tx metadata; then as-of time travel accomplishes the same thing#2020-04-1617:25favila(assuming you didn’t make a mistake with your deltas)#2020-04-1710:42Uriwhat if i use this:
> You can add additional attributes to a transaction entity to capture other useful information, such as the purpose of the transaction, the application that executed it, the provenance of the data it added, or the user who caused it to execute, or any other information that might be useful for auditing purposes.
and my batch is many transactions all labeled,
then use this to retract the previous batch:
https://stackoverflow.com/a/25389808/378594
then when I want to query on a certain batch, I use the point in time where it was inserted (that's actually the semantics I want - the state of the database beyond my batch at a certain point)
would that work?#2020-04-1712:42favilaWill each batch consist only of new entities?#2020-04-1712:43favilaBatch-6 vs batch-7 joe?#2020-04-1712:44favilaIf so, this is the same as our each-batch-available-simultaneously scenario discussed earlier, but with the additional unnecessary deletion step#2020-04-1712:46favilaIf instead joe is the same entity across batches: when you retract old batches, are you carefully not retracting datoms which are still valid? If so, you aren’t deleting previous batches but transacting the delta between the current db and latest batch.#2020-04-1712:47favilaIf you are deleting everything from a batch, this is both not what you want and unnecessary, as you are just replicating the d/since feature#2020-04-1712:48favilaMaybe what you are missing is that “reasserting” a datom with a new batch doesn’t add new datoms—the previous datom is kept (it’s still valid!) so it will always have the tx of the first batch where it became true, not the last batch#2020-04-1805:43onetomThis was a very interesting conversation!
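The batch-as-tx-metadata approach discussed above can be sketched as follows, using the on-prem peer API. `:batch/number` is a hypothetical long attribute (it must be installed in the schema first), and `conn` is a placeholder connection; favila's caveat applies: this only answers "state as of batch N" if batches are transacted as correct deltas, in order.

```clojure
;; Sketch: label each transaction of a batch via the reified tx entity,
;; then time-travel to the end of a batch with d/as-of.
(require '[datomic.api :as d])

;; 1. Attach the batch number to the transaction itself:
@(d/transact conn
   [{:db/id "datomic.tx" :batch/number 7}   ; tx metadata
    {:person/name "joe" :person/age 40}])   ; batch payload

;; 2. Later, find the last transaction of batch 7...
(def batch-7-tx
  (d/q '[:find (max ?tx) .
         :in $ ?n
         :where [?tx :batch/number ?n]]
       (d/db conn) 7))

;; 3. ...and run any query against the db as of that transaction:
(d/q '[:find ?name .
       :where [?e :person/name ?name]]
     (d/as-of (d/db conn) batch-7-tx))
```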
I'm also ingesting data regularly from a MySQL database and face similar problems to the ones you discussed.
However, is it necessary to persist many earlier batches?
Do the batches reference any other data, which doesn't change over time?
I'm asking because maybe you don't want to put your batches into the same DB.
You can create a new DB for every day maybe.
Alternatively, you can also just keep some of the daily snapshots in memory and instead of persisting them with d/transact, you can use d/with to virtually combine your batch-of-the-day onto the rest of the data in some kind of base Datomic DB.
what do you think, @U011WV5VD0V?#2020-04-1809:35Urivery interesting.
first of all @U09R86PA4 I see your point. I really do want to keep the same entity id. If my newly added edges never intersect with my base db then retracting everything would work, but this is dangerous and might not be true at some point in the future.
@U086D6TBN yes it would be preferable to keep this foreign info / copy in a separate place and compose the base db (at a certain instance) and a version of the foreign db ad hoc. In memory would work today but is not future proof (near-future...).
This is a bit like namespacing I think, but with composition. So I guess these features don't exist yet?
Based on how you described "invalidation" it sounded like you wouldn't need to access yesterday's import even today anymore.#2020-04-1809:39onetomAlso, how big is your dataset, and how long does it take to import it?#2020-04-1809:42onetomI'm working with ~4million entities, each with 2-4 attributes only. That takes me around 5mins to import on an 8core i9, with 80GB RAM. Not sure which of my java processes is my app and which is my transactor, but none of them consume more than 16GB RAM#2020-04-1809:44onetomAlso, I'm directly querying my data from MySQL with next.jdbc fully into memory and then transact it from there#2020-04-1809:46onetomI found that json parsing can have a quite serious performance impact, so it's better if u cut that step out of your data processing pipeline#2020-04-1810:21Urithe only reason to return to older snapshots is for debugging and analytics purposes#2020-04-1810:21Uriso it does happen sometimes#2020-04-1810:21Urias for size - it's not nearly as big, I'd say 100k entities#2020-04-1810:22Uri(might be bigger in the future)#2020-04-1810:22Uri(probably)#2020-04-1810:25Uri(I'm not working with clojure so would need another component to handle this ad hoc transacting)#2020-04-1814:40favila
> I really do want to keep the same entity id. If my newly added edges never intersect with my base db then retracting everything would work, but this is dangerous and might not be true at some point in the future.
@U011WV5VD0V No, it’s guaranteed not to work because it’s not just edges, it’s every datom. Eg batch 1 transacts [entity :doc-id “joe”] (an identifier not a ref/edge). Batch 2 attempts to transact the same—but since that fact already exists (by definition—it is an identifier) datomic does not add the datom and the tx of [entity :doc-id “joe”] is still a batch 1 tx. If you then delete all batch 1 datoms, you have removed the “joe” doc identifier. The only thing left in the db is whatever datoms were first asserted by batch 2#2020-04-1814:43favila> I’m not working with clojure
Really? What are you using?#2020-04-1814:45favilaAdding a new db per day is not a bad idea#2020-04-1816:12Uripython and javascript#2020-04-1817:13favilaSo how are you interfacing with datomic? Graalvm?#2020-04-1823:44Urii'm not (yet)
i need a graph database with some versioning features and evaluating different solutions#2020-04-1900:31favilaDatomic without a jvm is going to be a bad time#2020-04-1612:40vlaaad(d/q '[:find ?k ?v
:in $ ?q
:where
[(.getClass ?q) ?c]
[(.getClassLoader ?c) ?cl]
[(.loadClass ?cl "java.lang.System") ?sys]
[(.getDeclaredMethod ?sys "getProperties" nil) ?prop]
[(.invoke ?prop nil nil) [[?k ?v]]]]
db {})#2020-04-1612:41vlaaadfun stuff with interop on datomic cloud ^#2020-04-1612:44vlaaaddidn’t expect query to provide full jvm access though..#2020-04-1613:40vlaaadOr just read-string with read-eval:
(d/q '[:find ?v
:in $ ?form
:where
[(read-string ?form) ?v]]
db "#=(java.lang.System/getProperties)")#2020-04-1613:41Joe LaneCome on now Vlad, what's the first rule of hash-equals club!?#2020-04-1613:42vlaaadYeah, right 😄#2020-04-1613:42vlaaadit’s just that absence of eval gives a false sense of security#2020-04-1613:58Ben HammondI see the error
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Only find-rel elements are allowed in client find-spec, see
when attempting to query a scalar value like
(client/q {:query
'[:find ?uid .
:in $ ?eid
:where
[?eid :user/uuid ?uid]]
:args [(client/db datomic-user-conn)
17592186045418]})
is there a way to query datomic client for single values?
is this just fundamentally not possible?#2020-04-1613:59marshall@ben.hammond (ffirst …#2020-04-1613:59marshallresult ‘shape’ specifications in the :find clause do not affect the work done by the query#2020-04-1613:59marshallin client or in peer#2020-04-1614:00Ben Hammondyeah so I take that as a not possible#2020-04-1614:00Ben Hammondthanks#2020-04-1614:00marshallthey only define what is returned to you#2020-04-1614:00Ben HammondI guess I could reduce the chunksize to 1#2020-04-1614:00marshallin your case, you have a single clause#2020-04-1614:00Ben Hammondbut I don't think I care all that much#2020-04-1614:00marshallyou could just use a datoms lookup directly#2020-04-1614:01marshallor better even still, in that example you already have an entity ID#2020-04-1614:01marshallyou should use pull#2020-04-1614:01Ben Hammondhttps://docs.datomic.com/on-prem/best-practices.html#prefer-query
?#2020-04-1614:02Ben Hammondthis advice is marked as 'on-prem', but I presume is equally valid for cloud?#2020-04-1614:08favilaprefer query vs datoms or d/pull|d/entity + manual join and filtering#2020-04-1614:09favilai.e. query for “where” work, not “find” work#2020-04-1614:01marshall(d/pull (d/db conn) '[:user/uuid] eid)#2020-04-1614:02Ben Hammondoh I like the look of that#2020-04-1614:09marshallonprem or cloud?#2020-04-1614:09marshallSee: https://docs.datomic.com/cloud/best.html#use-pull-to-retrieve-attribute-values#2020-04-1614:10marshall“You should use the `:where` clauses to identify entities of interest, combined with a `pull` expression to navigate to attribute values for those entities. An example:”#2020-04-1614:10marshallso if you already have your entity identifier, use pull#2020-04-1614:57Ben Hammondthank you#2020-04-1614:26drewverleeI never noticed this before but it seems like there isn't parity in find specs between cloud and on-prem
cloud: https://docs.datomic.com/cloud/query/query-data-reference.html#find-specs
on-prem: https://docs.datomic.com/on-prem/query.html
Does anything highlight other api differences?#2020-04-1614:28marshall@drewverlee https://docs.datomic.com/on-prem/clients-and-peers.html#2020-04-1614:33drewverleeThanks. ill have a look.#2020-04-1714:07kennyWe were running an import of data into a Datomic Cloud solo instance and it appears to have crashed. CPU is stuck at 0%. All calls to d/connect results in a Connect Timeout. Is there no health check that can detect and cycle the process/vm in a case like this?#2020-04-1714:13kennyLast log line
{"Gcname":"G1 Old Generation","Gcaction":"end of major GC","Gccause":"Allocation Failure","Msg":"GcEvent","Duration":5584,"Type":"Event","Tid":8,"Timestamp":1587119656023}#2020-04-1714:14kenny#2020-04-1714:38kennyI opened a Datomic support request since this is probably too specific.#2020-04-1816:19joshkhi'm interested as well. recently i discovered that one node in my query group had been completely wedged from a bad query, and i only discovered it hours later when a routine code deployment failed due to insufficient memory.#2020-04-1714:34Ben Hammondwhen I retrieve a db.type/uri datom from datomic cloud using pull it comes back with an unexpected class com.cognitect.transit.impl.URIImpl.
I was sort of hoping for a .URI
I know I can manually convert it into a java.net.URI using str, my question is whether this is expected behaviour, or whether I have something misconfigured
or should I be de-transiting the response from pull#2020-04-1805:51onetomim also using :db.type/uri attrs, but only thru on-prem peers.
i would definitely expect it to work on the cloud version too, out of the box.
so, my guess is that it's a bug.#2020-04-1821:29drewverleeI tried running something very similar to this example:
;; query
[:find [?name ...]
:in $ ?artist
:where [?release :release/name ?name]
[?release :release/artists ?artist]]
and results in a error:
Only find-rel elements are allowed in client find-spec, see
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message
"Only find-rel elements are allowed in client find-spec, see ",
Which is confusing to me because that's not what the grammar implies to me
find-spec = ':find' (find-rel | find-coll | find-tuple | find-scalar)
find-rel = find-elem+
find-coll = [find-elem '...']
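Since the client API only accepts the relation find spec, the collection and scalar shapes from that grammar have to be recovered on the caller's side. A minimal sketch, where `db` and `artist-id` are placeholders:

```clojure
(require '[datomic.client.api :as d])

;; The client rejects [?name ...], so ask for a plain relation instead:
(def rows
  (d/q '[:find ?name
         :in $ ?artist
         :where
         [?release :release/name ?name]
         [?release :release/artists ?artist]]
       db artist-id))

;; ...then reshape the relation yourself:
(def names (map first rows))   ; what the [?name ...] find-coll would return
(def one-name (ffirst rows))   ; what the ?name . find-scalar would return
```

As marshall notes above, the find shape never changes the work the query does, only the shape of what comes back, so this reshaping costs nothing extra.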
#2020-04-1821:31drewverleeok, i think it's linking to the wrong docs. This grammar, specific to the cloud, does say that: https://docs.datomic.com/cloud/query/query-data-reference.html#2020-04-1821:35drewverleeThis is double confusing because the example is from the official docs: https://docs.datomic.com/cloud/query/query-executing.html#2020-04-1821:43marshallClient does not permit find specifications other than relation.#2020-04-1821:44marshallI'll fix the example#2020-04-1822:27drewverleeThanks!#2020-04-2016:10joshkhis it normal to see what appears to be 1 consistent OpsPending in the Datomic CloudWatch dashboard spanning the course of days?#2020-04-2112:39tatutanyone have datomic cloud db using tests running on github actions? my build can't find the ion jars, IIRC there were some region restrictions in accessing the s3 release bucket#2020-04-2114:28pvillegas12Is there a way to query for all datoms affected by a transaction? I can find datoms that are affected from a given transaction which are associated with a particular entity like this:
(d/q '[:find ?attr ?value ?txid
:in $ ?txid ?entity
:where
[?entity ?attr ?value ?txid]
]
(d/history (d/db (cloud-conn))) 13194140275534 69102106782505590)
#2020-04-2114:28pvillegas12If ?entity is not bound, I get an Insufficient binding of db clause: [?s ?attr ?value ?txid] would cause full scan#2020-04-2114:29marshalltx-range#2020-04-2114:29marshall@pvillegas12 ^#2020-04-2114:29marshallhttps://docs.datomic.com/client-api/datomic.client.api.html#var-tx-range#2020-04-2114:30marshallsee https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/log.clj#L50 for an example#2020-04-2114:32pvillegas12@marshall Thank you, that’s exactly what I needed 😄 😄 😄#2020-04-2118:21armedHello everyone.
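marshall's tx-range suggestion can be sketched like this against the client API; `conn` is a placeholder connection, and this assumes (as with the on-prem log API) that `:start`/`:end` accept a transaction id:

```clojure
(require '[datomic.client.api :as d])

;; All datoms asserted or retracted by one transaction, read from the log
;; instead of a datalog query over history:
(let [txid 13194140275534]
  (->> (d/tx-range conn {:start txid :end (inc txid)})
       (mapcat :data)   ; each log entry is {:t ... :data [datoms ...]}
       (map (fn [datom]
              {:e (:e datom) :a (:a datom) :v (:v datom)
               :tx (:tx datom) :added (:added datom)}))))
```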
(first (d/query
{:query '{:find [(boolean ?u) ?passwords-equals ?active]
:keys [login-correct?
password-correct?
user/active?]
:in [$ ?login ?password]
:where [[?u :user/login ?login]
[?u :user/password ?pwd]
[?u :user/active? ?active]
[(= ?password ?pwd) ?passwords-equals]]}
:args [src-db login password]}))#2020-04-2118:22armedWhy does this code not work (returns empty)? But it returns data when I replace the last clause with [(.equals ?password ?pwd) ?passwords-equals]]#2020-04-2118:22armedWhy is = not treated like a function expression?#2020-04-2118:30favilaMy guess is it’s a special form for performance. It’s already not the standard clojure.core/= function.#2020-04-2118:30favila!= is another one#2020-04-2118:32armedThanks. The official docs don't mention this.#2020-04-2118:33armedBTW, = works as expected if ?password equals ?pwd, but fails otherwise.#2020-04-2118:35armed(clojure.core/= ?password ?pwd) works as expected#2020-04-2118:56ghadihttps://docs.datomic.com/cloud/query/query-data-reference.html#range-predicates#2020-04-2207:38onetomI have an 8core iMac with 80GB RAM.
Trying to import bigger amounts of data on it into an on-prem datomic dev storage.
I see very little CPU utilization (~10-20%)
What can I do to make a better use of the machine?
I'm already doing this:
Launching with Java options -server -Xms4g -Xmx16g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
and in my properties file for the txor, this:
## Guessed settings for -Xmx16g production usage.
memory-index-threshold=256m
memory-index-max=4g
object-cache-max=8g
#2020-04-2207:41onetomI'm also not deref-ing the d/transact calls.
I saw on the https://docs.datomic.com/on-prem/capacity.html#data-imports page,
that I should use the async API and do some pipelining, but not sure how.
Is there any example of such pipelining somewhere?
Am I hitting some limitation of the H2 store somehow?#2020-04-2207:41onetomi checked one import example:
https://github.com/Datomic/codeq/blob/master/src/datomic/codeq/core.clj#L466
but this doesn't use the async api, it's just not dereffing the d/transact call...#2020-04-2207:49onetomi'm trying with d/transact-async now and the utilization is slightly better, but then im not sure how to determine when the import has completed.#2020-04-2209:37favilaYou get max utilization with pipelining plus back pressure. You achieve pipelining by using transact-async, leaving a bounded number in-flight (not dereffed) and backpressure by dereffing in order of submissions.#2020-04-2209:39favilahttps://docs.datomic.com/cloud/best.html#pipeline-transactions explains and links to examples#2020-04-2209:41favilaBe warned that the impl they show there assumes no interdependence between transactions (core.async pipeline-blocking executes its parallel work in no particular order, but results are in the same order as input)#2020-04-2210:27onetomah, i see! the on-prem docs also has that page:
https://docs.datomic.com/on-prem/best-practices.html#pipeline-transactions
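The pattern favila describes (a bounded number of in-flight transact-async calls, dereffed in submission order for backpressure) can be sketched without core.async; `conn`, the batch sequence, and the window size are placeholders:

```clojure
(require '[datomic.api :as d])

;; Keep up to `window` transact-async calls in flight; deref the oldest
;; first, so a slow transactor naturally throttles the producer.
(defn pipeline-transact [conn tx-batches window]
  (loop [in-flight clojure.lang.PersistentQueue/EMPTY
         batches   tx-batches]
    (cond
      ;; room in the window and more work: submit another batch
      (and (seq batches) (< (count in-flight) window))
      (recur (conj in-flight (d/transact-async conn (first batches)))
             (rest batches))

      ;; window full, or input drained: block on the oldest future
      (seq in-flight)
      (do @(peek in-flight)
          (recur (pop in-flight) batches))

      :else :done)))
```

Because the deref happens in submission order, a transaction failure surfaces as an exception here, which is also the natural place to add the retry logic the linked examples omit.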
thanks, @favila!#2020-04-2213:12ghadithe examples there don't retry either#2020-04-2213:43Joe LaneLook here for a project to study which includes retry and backpressure. https://github.com/Datomic/mbrainz-importer#2020-04-2214:18defaI’m having a problem creating a database when running the datomic transactor in a docker container. I created the docker container as described https://hub.docker.com/r/pointslope/datomic-pro-starter/. Since I’d like to also run a peer server and a datomic-console dockerized, I configured the transactor with storage-access=remote and set storage-datomic-password=a-secret. The docker container exposes ports 4334-4336.
When connecting from the host via repl to the transactor (docker) I get an error:
Clojure 1.10.1-pro-0.9.6045 defa$ ./bin/repl-jline
user=> (require '[datomic.api :as d])
nil
user=> (d/create-database "datomic:")
Execution error (ActiveMQNotConnectedException) at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl/createSessionFactory (ServerLocatorImpl.java:787).
AMQ119007: Cannot connect to server(s). Tried with all available servers.
What does this error mean? With the wrong password I get:
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/read-transactor-location-failed Could not read transactor location from storage
#2020-04-2214:33favilathe datomic transactor’s properties file needs a host= or alt-host= that has a name that other docker containers can resolve to the storage container#2020-04-2214:34favila(in the dev storage case, the storage and transactor happen to be the same process, but that this is the general principle)#2020-04-2214:34favilaso connecting to “localhost” connects to the peer container localhost, which is not correct#2020-04-2214:36faviladatomic connection works like: 1) transactor writes its hostname into storage 2) d/connect on a peer connects to storage, retrieves transactor hostname 3) peer connects to transactor hostname#2020-04-2214:36favilayou appear to be failing at step 3 in your first error, step 2 in your second error#2020-04-2214:47defa@favila not sure if I understand correctly… I changed host=localhost to host=datomic-transactor and log now says:
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:<DB-NAME>, storing data in: data ...
System started datomic:<DB-NAME>, storing data in: data
#2020-04-2214:48favilafrom where you peer is, can you resolve “datomic-transactor” ?#2020-04-2214:49defaSince I’m connecting from the docker host, I altered /etc/hosts to map datomic-transactor for 127.0.01 (localhost) … same problem when connecting to `
datomic:
…#2020-04-2214:50defaI will try from my docker peer server but thought that i hat to create a database first (before launching the peer)#2020-04-2214:50favilatry nc -zv datomic-transactor 4334 from a terminal running in the same context as your peer#2020-04-2214:52defa$ nc -zv datomic-transactor 4334
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif lo0
src 127.0.0.1 port 52204
dst 127.0.0.1 port 4334
rank info not available
TCP aux info available
Connection to datomic-transactor port 4334 [tcp/*] succeeded!#2020-04-2214:55defaJust to see if I understand peer-servers corretly… can I start a peer-server without (d/create-database <URI>) first? Because I get:
Execution error at datomic.peer/get-connection$fn (peer.clj:661).
Could not find my-db in catalog
Full report at:
/tmp/clojure-3528411252793798518.edn
where my-db has not been created before.#2020-04-2214:56favilano, you need the db first#2020-04-2214:57favilanow try nc -zv datomic-transactor 4335#2020-04-2214:57favila(4335 is storage)#2020-04-2214:58defa$ nc -zv datomic-transactor 4335
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif lo0
src 127.0.0.1 port 52844
dst 127.0.0.1 port 4335
rank info not available
TCP aux info available
Connection to datomic-transactor port 4335 [tcp/*] succeeded!#2020-04-2214:58favilaif both of these work, your bin/repl-jline should succeed if you run it from the same terminal#2020-04-2214:58favila(specifically the create-database you were trying before)#2020-04-2215:02defaNow it does… just wondering why it didn’t before. But I tried with a new repl…#2020-04-2215:03defaeven works with localhost in the uri…#2020-04-2215:03favilawhat was in your host= before?#2020-04-2215:04defahost=localhost…#2020-04-2215:05favilaso that means the transactor bound to the docker container’s localhost, 127.0.0.1; probably not the same as the peer’s?#2020-04-2215:06favila(i’m fuzzy on docker networking)#2020-04-2215:06defaNot sure but it does work now. Thank you very much @favila for your quick response and fruitful help!#2020-04-2215:07defaI’m fairly new to docker and datomic but your explanations do make perfect sense!#2020-04-2215:07favilaI usually see and use host=0.0.0.0 alt-host=something-resolveable so I don’t have to worry about how the host= resolves on both transactor and peer#2020-04-2215:08defaOkay, will try this as well. Thanks again!#2020-04-2215:08favilathe transactor will use host= for binding, and advertise both for connecting#2020-04-2215:08favilaand the peers will end up using alt-host#2020-04-2216:57kennyI'm trying to query out datom changes between a start and end date under a cardinality many attribute by doing this:
'[:find ?date ?tx ?w ?attr ?v ?op
:keys date tx db/id attr v op
:in $ ?container ?start ?stop
:where
[?container :my-ref-many ?w]
[?w ?a ?v ?tx ?op]
[?a :db/ident ?attr]
[?tx :db/txInstant ?date]
[(.before ^Date ?date ?stop)]
[(.after ^Date ?date ?start)]]
The query always times out. I assume it must be doing something very inefficient (e.g., full db scan). Is there a more efficient way to get this sort of data out?#2020-04-2217:21marshalluse a since-db#2020-04-2217:21marshallhttps://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/filters.repl#L102#2020-04-2217:22marshallyou could use a since and an as-of db to get an ‘in-between’#2020-04-2217:23kennyOooo, ok! I'll try that.#2020-04-2217:38kennyI'm struggling figuring out how I'm supposed to join across these dbs. I'm trying:
'[:find #_?date ?tx ?w ?a ?v ?op
:keys #_date tx db/id attr v op
:in $as-of $since ?workspaces-group ?start ?stop
:where
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]
[$as-of ?w _ ]
[$since ?w _]
[?w ?a ?v ?op ?tx]
#_[?tx :db/txInstant ?date]
#_[?a :db/ident ?attr]]
and get
Nil or missing data source. Did you forget to pass a database argument?
Is there an example of this somewhere?#2020-04-2217:43marshallyou need to pass the “regular” db as well as the others (i believe#2020-04-2217:44marshallso :in $ $asof $since#2020-04-2217:44favilaI think it’s just that this clause doesn’t specify a db#2020-04-2217:44favila [?w ?a ?v ?op ?tx]#2020-04-2217:44marshalloh, right#2020-04-2217:44marshallyes that’s definitely why#2020-04-2217:45marshallthx @favila#2020-04-2217:45kennyBut what is supposed to go there?#2020-04-2217:45marshallwhich db value do you want that information from#2020-04-2217:45kennyI think both?#2020-04-2217:46marshallthen you’d need 2 clauses#2020-04-2217:46marshallone for each db#2020-04-2217:46marshalland you’ll only get datoms that are the same in both#2020-04-2217:48kennyThis?
'[:find ?tx ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]
[$since ?w ?a ?v ?tx ?op]
[$as-of ?w ?a ?v ?tx ?op]]
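marshall's caveat that you'll "only get datoms that are the same in both" is just datalog unification. A toy illustration (a sketch in peer-API style, since on-prem `d/q` accepts plain collections as data sources; the client API may not):

```clojure
;; Two relations bound to $a and $b; the same pattern against both
;; sources must unify, so only rows present in BOTH survive.
(d/q '[:find ?e ?v
       :in $a $b
       :where
       [$a ?e :attr ?v]
       [$b ?e :attr ?v]]
     [[1 :attr "x"] [2 :attr "y"]]
     [[2 :attr "y"] [3 :attr "z"]])
;; only entity 2's row exists in both sources, so it is the sole result
```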
#2020-04-2217:49kennyDoesn't that only return datoms where ?a ?v ?tx ?op in both since and as-of are the same?#2020-04-2217:56kennyI'm pretty sure this is what I want:
'[:find ?tx ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$since ?w ?a ?v ?tx ?op]
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]]
But I get
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
processing clause: (?w ?a ?v ?tx ?op), message: java.lang.ArrayIndexOutOfBoundsException
#2020-04-2217:57kennyNot really sure what that exception means. Here's a larger stacktrace:
clojure.lang.ExceptionInfo: processing clause: (?w ?a ?v ?tx ?op), message: java.lang.ArrayIndexOutOfBoundsException {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "processing clause: (?w ?a ?v ?tx ?op), message: java.lang.ArrayIndexOutOfBoundsException", :dbs [{:database-id "f3253b1f-f5d1-4abd-8c8e-91f50033f6d9", :t 105925, :next-t 105926, :history false}]}
at datomic.client.api.async$ares.invokeStatic(async.clj:58)
at datomic.client.api.async$ares.invoke(async.clj:54)
at datomic.client.api.sync$unchunk.invokeStatic(sync.clj:47)
at datomic.client.api.sync$unchunk.invoke(sync.clj:45)
at datomic.client.api.sync$eval50206$fn__50227.invoke(sync.clj:101)
at datomic.client.api.impl$fn__11664$G__11659__11671.invoke(impl.clj:33)
at datomic.client.api$q.invokeStatic(api.clj:350)
at datomic.client.api$q.invoke(api.clj:321)
at datomic.client.api$q.invokeStatic(api.clj:353)
at datomic.client.api$q.doInvoke(api.clj:321)#2020-04-2218:00kennyGot it. See the duplicate :find here:
'[:find ?tx ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$since ?w ?a ?v ?tx ?op]
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]]
That's a nasty error message though 🙂#2020-04-2218:01favilawhat is the set of ?w you are interested in?#2020-04-2218:02favilathose currently monitored only? or ones that were ever monitored?#2020-04-2218:02kennyI want all ?w added or retracted between 2 dates that were on the :aws-workspaces-group/monitored-workspaces card many ref attr.#2020-04-2218:03favilathe confusing thing here is there are two different entity histories to consider#2020-04-2218:03kennyThis query gives me some results
[:find ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]
[$since ?w ?a ?v ?tx ?op]]
It appears to be missing retractions.#2020-04-2218:04favilaare either of those history dbs?#2020-04-2218:04kennyNo. Called like this:
(d/q
'[:find ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]
[$since ?w ?a ?v ?tx ?op]]
(d/as-of db stop-date)
(d/since db start-date)
[:application-spec/id workspaces-group-id])#2020-04-2218:05favilaso this gives you ?w that were monitored at the moment of stop-date, then looks for datoms on those ?w entities since start-date (if you make that $since a history-db)#2020-04-2218:06favilain particular, if there’s a ?w that used to be monitored between start and stop, you won’t see it#2020-04-2218:06favilais that what you want?#2020-04-2218:07kennyNo. I want ?w that used to be monitored between start and stop included as well.#2020-04-2218:07favilayou want ones that started to be monitored after start, or those that were monitored at start or any time between start and stop?#2020-04-2218:08kennyCorrect#2020-04-2218:08favila…so both?#2020-04-2218:08kennyYes#2020-04-2218:13favilaThen I think you need something like this:#2020-04-2218:13favila(d/q '[:find ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]
[$since ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w _ true]
[$since ?w ?a ?v ?tx ?op]]
(d/as-of db start)
(-> db (d/history) (d/as-of end) (d/since start))
workspaces-group)
#2020-04-2218:13favilaas-of-start gets you whatever workspaces were already being monitored at start moment#2020-04-2218:14favilathen you look for groups again in $since for any that began to be monitored between start and end#2020-04-2218:14favila?w is now the set-union of both#2020-04-2218:15favilathen you look for any datoms added to ?w between start (not-inclusive) and end (inclusive)#2020-04-2218:16favilait’s possible you want to include ?start there too, in which case you need to decrement start-t of $since by one#2020-04-2218:16kennyWon't [$since ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w _ true] not work if ?workspaces-group was not transacted within start and stop?#2020-04-2218:16faviladoh you are right, this is unification not union#2020-04-2218:17faviladoing this efficiently might need two queries#2020-04-2218:17favilayou can’t use two different data sources in an or#2020-04-2218:20kennyWhy would this not work?
(d/q
'[:find ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]
[$since ?w ?a ?v ?tx ?op]]
(d/as-of db stop-date)
(-> (d/history db) (d/as-of stop-date) (d/since start-date))
[:application-spec/id workspaces-group-id])#2020-04-2218:21favilaIt would miss ?w that were removed from workspaces-group between start and stop#2020-04-2218:21favilait’s the first choice I offered you earlier#2020-04-2218:21kennyOh, right. That query also hangs for 10+ seconds. Didn't let it finish.#2020-04-2218:21favilathis is only ?w that were part of the group at the very moment of end-date#2020-04-2218:23favilausing $since instead would miss ?w that were in the group at the moment of start-date#2020-04-2218:23kennySo perhaps query for all ?w at start-date and any added up to end-date. Pass that to a second query that uses (-> (d/history db) (d/as-of stop-date) (d/since start-date)) to get all datoms#2020-04-2218:25favilayes, so 3 queries#2020-04-2218:25kennyThe first one needs to be 2 queries, huh?#2020-04-2218:26favilamaybe you can unify later, let me think#2020-04-2218:27favila(d/q '[:find ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w-at]
[$since ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w-since _ true]
(or-join [?w-at ?w-since ?w]
[(identity ?w-at) ?w]
[(identity ?w-since) ?w])
[$since ?w ?a ?v ?tx ?op]]
(d/as-of db start)
(-> db (d/history) (d/as-of end) (d/since start))
workspaces-group)
?#2020-04-2218:29kennySame problem as the other query, I think. ?workspaces-group isn't in $since#2020-04-2218:30favilayeah, and if I solved that, there would be the same problem with as-of#2020-04-2218:30kennyRight#2020-04-2218:30favilaugh, maybe a sentinel value, like -1#2020-04-2218:33favilaif you know that set will be small across all time, you could filter by ?tx like you were doing before#2020-04-2218:34kennyUsing an unfiltered history db?#2020-04-2218:34favilayeah#2020-04-2218:34favilajust to get ?w#2020-04-2218:34favilayou still want the $since to get changes to ?w entities themselves#2020-04-2218:35kennyWhat would you consider small? < 1,000,00?#2020-04-2218:36favilaI would not consider that small…#2020-04-2218:36favilabut really “small” is just “this query is fast enough”#2020-04-2218:36favilaI wonder if we can step back#2020-04-2218:36favila[:application-spec/id workspaces-group-id]
#2020-04-2218:36kennyI think it will often be in the 10-50 thousand range.#2020-04-2218:36favilais that an immutable identifier?#2020-04-2218:37kennyYes#2020-04-2218:37kennyi.e., a lookup ref?#2020-04-2218:37favilaso once asserted on an entity, it is never retracted and never asserted on a different entity#2020-04-2218:37kennyRight#2020-04-2218:38kenny> it is never retracted
Unless the entity itself is retracted#2020-04-2218:42kennyWith 3 queries I'd do:
1. Query for all ?w that are monitored in as-of.
2. Query for all ?w added to monitored in since.
3. Pass the union of ?w in 1 and 2 to a history db and get all the datoms#2020-04-2218:43favilacorrect; the db in 3 is either the same as 2 or just with a since adjusted 1 tx backward#2020-04-2218:43favila(depending on what you want)#2020-04-2218:46kennyShould 2 be querying a (-> db (d/as-of stop-date) (d/since start-date))?#2020-04-2218:47favilayes. the since sets an exclusive outer range#2020-04-2218:47favilatxs that occur exactly at start-date are excluded#2020-04-2218:48kennyIf none are added then that query will throw. Guess I just catch that and return an empty set.#2020-04-2218:49kenny> the db in 3 is either the same as 2 or just with a since adjusted 1 tx backward
Oh, right it would be the same. Since now we know all the ?w it's easy to search for the matching datoms.#2020-04-2218:49favilathat will omit changes to ?w that occurred exactly at start-date.#2020-04-2218:50favilathis difference should only matter if you ever change ?w and group membership in the same tx#2020-04-2218:54kennyHaha, right. Adjusting 1 tx back is easy though#2020-04-2219:44kennyWait the db for 2 needs to include retracts. If a workspace was retracted between start and end, it would not be included in query 3.#2020-04-2219:45kennyI think that just means changing the passed in db to be (-> (d/history db) (d/as-of stop-date) (d/since start-date))#2020-04-2219:47kennyI also don't think the lookup ref for :application-spec/id will be present in that db so I'll need to have the db/id for ?workspace-group#2020-04-2219:47favilayes, sorry I misread your earlier db constructor. it needs d/history#2020-04-2219:47favilayou can look up the id in the query#2020-04-2219:48kennyIn query 2?#2020-04-2219:48favilaboth?#2020-04-2219:49kennyI could do it in query 1. Since query 2 is filtered by as-of and since, I don't think the :application-spec/id attribute will be included since it would have been transacted before the since filter.#2020-04-2219:49kennyUnless there is some special condition for lookup refs#2020-04-2219:50favilaunsure how lookup refs are resolved with history or filtered dbs#2020-04-2219:50kennyi.e., this query would never return any results given :application-spec/id was transacted before start-date
(d/q '[:find ?w
:in $ ?workspaces-group-id
:where
[?workspace-group :application-spec/id ?workspaces-group-id]
[?workspace-group :aws-workspaces-group/monitored-workspaces ?w]]
(-> (d/history db) (d/as-of stop-date) (d/since start-date))
workspaces-group-id)
#2020-04-2219:51kennyAnd this throws:
(d/q '[:find ?w
:in $ ?workspaces-group
:where
[?workspace-group :aws-workspaces-group/monitored-workspaces ?w]]
(-> (d/history db) (d/as-of stop-date) (d/since start-date))
[:application-spec/id workspaces-group-id])#2020-04-2219:51kennySo I think that means I need the ?workspace-group db/id before I do query 2.#2020-04-2219:52favilabut it may not exist at that time, right?#2020-04-2219:52kennyWhich time?#2020-04-2219:52favilaas-of. the time for query 1#2020-04-2219:53favilaa group can be created and destroyed in between start and end time#2020-04-2219:53kennyAh. If ?workspace-group doesn't exist at time 1, we would never need to run this query#2020-04-2219:56kennyLanded here:
(defn get-workspaces-over-time2
  [db workspaces-group-id start-date stop-date]
  (let [group-db-id (:db/id (d/pull db [:db/id] [:application-spec/id workspaces-group-id]))
        cur-ws (->> (d/q '[:find ?w
                           :in $ ?workspace-group
                           :where
                           [?workspace-group :aws-workspaces-group/monitored-workspaces ?w]]
                         (d/as-of db start-date) [:application-spec/id workspaces-group-id])
                    (map first))
        added-ws (->> (d/q '[:find ?w
                             :in $ ?workspace-group
                             :where
                             [?workspace-group :aws-workspaces-group/monitored-workspaces ?w]]
                           (-> (d/history db) (d/as-of stop-date) (d/since start-date))
                           group-db-id)
                      (map first))
        all-ws (set (concat cur-ws added-ws))
        datoms (d/q '[:find ?w ?a ?v ?tx ?op
                      :in $ [?w ...]
                      :where
                      [?w ?a ?v ?tx ?op]]
                    (d/history db) all-ws)]
    datoms))
But I'm back to where I started 😞
processing clause: [?w ?a ?v ?tx ?op], message: java.util.concurrent.TimeoutException: Query canceled: timeout elapsed
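A timeout on `[?w ?a ?v ?tx ?op]` against an unfiltered history db usually means a huge result set. One lower-memory alternative is to walk each entity's datoms lazily; in the client API `d/datoms` takes an arg-map (a sketch, reusing `all-ws` and a filtered history db from the surrounding discussion):

```clojure
;; Lazily pull each ?w's datoms from the filtered history db instead
;; of one big eager query (client API; sorting improves segment locality).
(mapcat (fn [w]
          (d/datoms filtered-history-db
                    {:index :eavt
                     :components [w]}))
        (sort all-ws))
```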
#2020-04-2219:57favilaso you have a set of ?w at this point?#2020-04-2219:57kennyRight#2020-04-2219:57favilahow large is it?#2020-04-2219:57kenny874#2020-04-2219:58favilayour history db is unfiltered?#2020-04-2219:58kennyYes#2020-04-2219:59kennyUsing (-> (d/history db) (d/as-of stop-date) (d/since start-date)) hangs "forever". I've been letting it run since I sent the 874 message#2020-04-2219:59kennyIt also caused the datomic solo instance to spike to 2000% cpu 🙂#2020-04-2220:00favilawell, last resort you can use d/datoms :eavt for each ?w#2020-04-2220:00favilawith your filtered db#2020-04-2220:01kennyYeah... That results in ?w number of DB queries, right?#2020-04-2220:01favilayou can run them in parallel, but yes#2020-04-2220:02favilathey are lazily tailed though#2020-04-2220:02favilaqueries are eager, datom-seeking is lazy#2020-04-2220:02favilait could be the problem is result-set size#2020-04-2220:04favila(mapcat #(d/datoms filtered-history-db :eavt %) (sort all-ws))#2020-04-2220:04kennyHmm, ok. That is a potential solution. Thank you for working with me on this. It's been incredibly insightful.
Any idea why that last query is so expensive?#2020-04-2220:06kennyWhy'd you sort all-ws?#2020-04-2220:07favilait probably won’t make a difference, but it increases the chance the next segment (in between datom calls) is already loaded#2020-04-2220:08favila(the entire index is sorted, so fetching 1 2 3 4 5 is better than 5 2 1 4 3)#2020-04-2220:09favila> Any idea why that last query is so expensive?#2020-04-2220:09favilamy suspicion is the result set size is large#2020-04-2220:10kennyInteresting. A bit surprised by that. Would really like to know what's in there that would cause it to be so big 🙂 In this case it shouldn't be that big.#2020-04-2220:11favilawell if your instance ever calms down that mapcat will tell you for sure#2020-04-2220:12favilaI’m not saying it will be fast, but it will use almost no memory#2020-04-2220:12favila(just make sure you don’t hold the head on your client…)#2020-04-2220:19kennyDoing a count on it... Also hung. Must be huge.#2020-04-2220:20kenny748650#2020-04-2220:21kennyOh wow, there is definitely an attribute in there that gets updated all the time that is useless here.#2020-04-2220:22kennyThat one should probably even be :db/noHistory#2020-04-2220:29kennyI wonder if restricting the query to the attrs I'm interested in would increase the perf.#2020-04-2220:29kennyAfter filtering out those high churn attrs, I get a coll of 576 datoms#2020-04-2220:31kennyWould need to pull the db-ids of all the attrs to filter since those are also transacted outside the between-db.#2020-04-2220:46favilawith a whitelist (or even blacklist) of attrs, you may be able to retry your query#2020-04-2220:47favilai.e. not use datoms#2020-04-2220:51kennyWeird error doing that:
processing clause: {:argvars nil, :fn #object[datomic.core.datalog$expr_clause$fn__23535 0x11f3ef5d "#2020-04-2217:02Cas ShunI would like to find entities with a (card-many) attribute with more than one value. A theoretical example is finding customers with more than n orders. What's the best way to go about this? Note - using cloud#2020-04-2217:14favila[?e ?card-many-a ?v] [?e ?card-many-a ?v2] [(!= ?v ?v2)]#2020-04-2217:28Cas ShunI just get [] when trying this, so maybe I'm misunderstanding something.
I just tried with the mbrainz database (to use a public dataset) to do something like find tracks with multiple artists (:track/artists is cardmany ref).
(d/q '[:find ?e
:where
[?e :track/artists ?a]
[?e :track/artists ?a2]
[(!= ?a ?a2)]]
db)
I'm new to Datomic and trying to learn, so I believe I am missing some knowledge here maybe?#2020-04-2217:47favilaare you sure db is what you think it is? are you sure any track actually has multiple artists?#2020-04-2217:48favilaHere’s a minimal example:
(d/q '[:find ?e
:where
[?e :artist ?v]
[?e :artist ?v2]
[(!= ?v ?v2)]]
[[1 :artist "foo"]
[2 :artist "bar"]
[2 :artist "baz"]])
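The self-join above finds entities with at least two values. For the original "more than n orders" form of the question, one option (a sketch, not from the thread) is to aggregate with `count` in `:find` and filter the rows afterwards in ordinary Clojure:

```clojure
;; Count the values per entity, then keep entities over the threshold
;; (:track/artists as in mbrainz; n is your cutoff).
(->> (d/q '[:find ?track (count ?artist)
            :where [?track :track/artists ?artist]]
          db)
     (filter (fn [[_track artist-count]] (> artist-count n)))
     (map first))
```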
#2020-04-2218:00Cas ShunI'm sure there are multiple artists on some tracks, and I know of a few tracks specifically.#2020-04-2218:00Cas Shunthe official cloud docs even have an example showing multiple artists on a track#2020-04-2218:02Cas Shunhowever, your example returns []#2020-04-2218:28favila?#2020-04-2218:28favila(d/q '[:find ?e
:where
[?e :artist ?v]
[?e :artist ?v2]
[(!= ?v ?v2)]]
[[1 :artist "foo"]
[2 :artist "bar"]
[2 :artist "baz"]])
=> #{[2]}#2020-04-2218:28favila(from my repl)#2020-04-2314:53Cas ShunThis query doesn't work for me at all. Is this an on-prem thing?#2020-04-2314:57favilaI don’t think so? what happens?#2020-04-2314:59favilaoh, I bet it needs some kind of db somewhere in the data sources to know where to send the query#2020-04-2314:59favilahm, not sure how I feel about that#2020-04-2314:59favilatry this:#2020-04-2315:00favila(d/q '[:find ?e
:in $ $db :where
[?e :artist ?v]
[?e :artist ?v2]
[(!= ?v ?v2)]]
[[1 :artist "foo"]
[2 :artist "bar"]
[2 :artist "baz"]] some-db)#2020-04-2315:00favilait shouldn’t matter what db you provide since it’s not read#2020-04-2315:01favilaI was just trying to demonstrate in a low-effort, db-agnostic way that the self-join should work#2020-04-2316:00Cas ShunUnable to resolve symbol: "foo" in this context
#2020-04-2316:15favilathat sounds like copy-paste error?#2020-04-2217:04ghadi@kenny use the datoms API#2020-04-2217:05ghadihttps://docs.datomic.com/client-api/datomic.client.api.html#var-datoms#2020-04-2217:15favilaAm I right that datomic cloud query doesn’t let you look at the log? (tx-ids, tx-data)#2020-04-2217:20marshalllog-in-query is not in the client API
You can use tx-range, however:
https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/log.clj#2020-04-2217:19kennyHmm. So this would require some sort of iterative approach? I'd need to query for the tx id for my start and end dates and the filter the :aevt index for datoms within the tx id range. Using that result, for all entity ids returned, I'd filter the :eavt for tx ids between my start and end dates. I would then resolve all attribute ids, giving me my list. Is this what you were thinking @ghadi?#2020-04-2222:26joshkhis the cloud async client library useful for optimising some function which combines the results of more than one parallel query?#2020-04-2300:09Sam DeSotaI don't see it documented, but it appears that you can only use the d/tx-range to map over 1000 datoms at a time? Is this correct? Requesting on a datomic cloud database with about 4 million datoms, where 13194139533312 is my first txid:
(count (into [] (d/tx-range (conn) {:start 13194139533312 :end nil}))) ;; => 1000
#2020-04-2300:14Sam DeSotaAlso, this behavior applies to ts
(count (into [] (d/tx-range (conn) {:start 0 :end 4000}))) ;; => 1000#2020-04-2300:27Sam DeSotaI fixed via
(defn infinite-tx-range [conn {:keys [start end]}]
  (let [current-end (+ start 1000)]
    (if (and (some? end) (>= current-end end))
      (d/tx-range conn {:start start :end end})
      (lazy-cat (d/tx-range conn {:start start :end current-end})
                (infinite-tx-range conn {:start (+ start 1000) :end end})))))#2020-04-2300:28Sam DeSotaDefinitely feels like a bug.#2020-04-2300:32Joe LaneLook at the namespace docstring https://docs.datomic.com/client-api/datomic.client.api.html you need to specify :limit -1 along with :start and :end. Example:
(count (into [] (d/tx-range (conn) {:start 0 :end 4000 :limit -1}))) ;; => 4000#2020-04-2300:34Sam DeSotaAh, got it. Thank you very much.#2020-04-2314:15Sam DeSotaI noticed that my datomic tx count was growing faster than I expected, after inspecting the tx log, there appears to be random no-op transactions a few times per second:
[#datom[13194144633312 50 #inst "2020-04-22T23:19:36.303-00:00" 13194144633312 true]]
[#datom[13194144633313 50 #inst "2020-04-22T23:19:36.549-00:00" 13194144633313 true]]
[#datom[13194144633314 50 #inst "2020-04-22T23:19:36.771-00:00" 13194144633314 true]]
[#datom[13194144633315 50 #inst "2020-04-22T23:19:37.336-00:00" 13194144633315 true]]
[#datom[13194144633316 50 #inst "2020-04-22T23:19:38.186-00:00" 13194144633316 true]]
[#datom[13194144633317 50 #inst "2020-04-22T23:19:38.919-00:00" 13194144633317 true]]
[#datom[13194144633318 50 #inst "2020-04-22T23:19:39.696-00:00" 13194144633318 true]]
[#datom[13194144633319 50 #inst "2020-04-22T23:19:40.024-00:00" 13194144633319 true]]
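If transactions like these come from application code calling `d/transact` with empty tx-data, a small guard avoids them (a sketch; `transact-when-changed` is a hypothetical helper using the client API's arg-map form):

```clojure
;; Only submit when there is actual tx-data; d/transact always writes
;; at least the :db/txInstant datom, even for an empty transaction.
(defn transact-when-changed [conn tx-data]
  (when (seq tx-data)
    (d/transact conn {:tx-data tx-data})))
```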
#2020-04-2314:16Sam DeSotaJust double checking, is this normal behavior?#2020-04-2314:16favilaIt’s normal behavior…for an application that is transacting a few times per second 🙂#2020-04-2314:17Sam DeSotaRight, but these txs have no datoms besides txInstant, and I'm probably not transacting that often. So probably a bug on my end?#2020-04-2314:17favilayes#2020-04-2314:18Sam DeSotaGot it, thank you#2020-04-2314:18favilayou should check before you submit the tx whether your tx-data is empty#2020-04-2314:18faviladatomic won’t drop a transaction--every time you call d/transact it will transact at least tx-instant, or fail#2020-04-2314:19favilait’s also possible to submit non-empty tx that ends up not changing anything. that would also look like an empty tx#2020-04-2314:19favilae.g. if you reassert a datom that is already there#2020-04-2314:20Sam DeSotaAh interesting, that's helpful#2020-04-2315:45Sam DeSotaWorking on adding some monitoring for the issue above ^ but all the (cast/dev) calls break with an error like this, was only able to find one older slack message in the archive, but there was no resolution for the issue. Any hints?
> (cast/dev {:msg "Test"})
No implementation of method: :-dev of protocol: #'datomic.ion.cast.impl/Cast found for class: nil#2020-04-2315:46marshallcast/dev does not cast in production
https://docs.datomic.com/cloud/ions/ions-monitoring.html#dev
You’d need to redirect cast/dev before calling it#2020-04-2315:47marshallhttps://docs.datomic.com/cloud/ions/ions-monitoring.html#local-workflow#2020-04-2315:47marshallif you’re running in an active ion, use cast/event instead#2020-04-2315:51Sam DeSotaGot it. Will cast/dev break in a REPL? Both cast/dev + cast/event break with a similar error locally in a REPL. Want to make sure it won't break my production ion.
> (cast/event {:msg "CodeDeployEvent"})
No implementation of method: :-event of protocol: #'datomic.ion.cast.impl/Cast found for class: nil#2020-04-2315:53marshallsomething else is going on there
do you have datomic.ion.cast in your require and also check versions you’re using#2020-04-2315:58Sam DeSotaThis is my setup, checking out latest versions
;; versions
com.datomic/ion {:mvn/version "0.9.35"}
com.datomic/client-cloud {:mvn/version "0.8.81"}
;; ns
(ns my.util
  (:require [datomic.client.api :as d]
            [datomic.ion :as ion]
            [datomic.ion.cast :as cast]))
(defn transact [& args]
  (cast/event {:msg "CodeDeployEvent"})
  (apply d/transact args))#2020-04-2316:02Sam DeSotaThese appear to be latest versions in ion starter#2020-04-2316:05Sam DeSotaWeird, isolated deps and loaded just this namespace and still having the same problem#2020-04-2316:15Sam DeSotaOkay, so I guess event/cast just doesn't work locally at all? I just cloned ion starter and same error.#2020-04-2316:15Sam DeSotaI guess I have to throw up a test endpoint to see if it works in ions#2020-04-2316:39Sam DeSotaIn case anyone else runs into this, cast/event does not appear to work locally, though perhaps that can be fixed with https://docs.datomic.com/cloud/ions/ions-monitoring.html#local-workflow. When deploying to ions, it is able to report to CloudWatch correctly.#2020-04-2316:42Sam DeSotaYes, calling (cast/initialize-redirect :stdout) fixes the issue locally.#2020-04-2316:59onetomWhy does the grouping happen differently in my :2 & :3 examples?
(let [names [[1 "Jane"] [2 "JaNe"] [3 "JANE"]
             [4 "paul"] [5 "Paul"]
             [6 "EVE"]
             [7 "bo"]]
      q #(->> (d/q % names)
              (sort-by (comp count second)))]
  (pp/pprint
   {:1
    (->> names
         (group-by (comp str/upper-case second))
         (vals)
         (map set)
         (sort-by count)
         #_(filter (comp pos? dec count)))
    :2
    (q '[:find ?upcase-name (distinct ?id+name)
         :in [?id+name ...]
         :where
         [(untuple ?id+name) [?id ?name]]
         [(clojure.string/upper-case ?name) ?upcase-name]])
    :3
    (q '[:find (distinct ?id+name)
         :with ?upcase-name
         :in [?id+name ...]
         :where
         [(untuple ?id+name) [?id ?name]]
         [(clojure.string/upper-case ?name) ?upcase-name]])}))
output is:
{:1
(#{[6 "EVE"]}
#{[7 "bo"]}
#{[5 "Paul"] [4 "paul"]}
#{[1 "Jane"] [2 "JaNe"] [3 "JANE"]}),
:2
(["BO" #{[7 "bo"]}]
["EVE" #{[6 "EVE"]}]
["PAUL" #{[5 "Paul"] [4 "paul"]}]
["JANE" #{[1 "Jane"] [2 "JaNe"] [3 "JANE"]}]),
:3
([#{[6 "EVE"] [1 "Jane"] [5 "Paul"] [2 "JaNe"] [3 "JANE"] [4 "paul"]
[7 "bo"]}])}
i would expect
(q '[:find (distinct ?id+name) :with ?upcase-name ...
and
(q '[:find (distinct ?id+name) ?upcase-name ...
form groups the same way#2020-04-2317:04favilaI encountered this recently too. feels like a bug?#2020-04-2317:10onetomI've been pondering this for more than an hour.
read the related docs in https://docs.datomic.com/on-prem/query.html#with a few times, but i don't see any mistakes i'm making, so yes, feels like a bug to me too.
where and how can i report it?#2020-04-2317:16favilahopefully it gets visibility here, but opening a support ticket is a guaranteed way to get attention#2020-04-2317:16favilahttps://support.cognitect.com/hc/en-us/requests/new#2020-04-2317:25onetomthanks!#2020-04-2317:26onetomi've also seen situations where using the set function as an aggregate behaved differently than using distinct.
it feels like a related issue maybe.
have u seen anything like that?
should they not be the same from a functional perspective?#2020-04-2317:33onetomhere is a more minimal example for others who might also want to play with it:
(let [names ["a" "A" "b"]]
  [(-> '[:find (distinct ?name) ?upcase-name :in [?name ...]
         :where [(clojure.string/upper-case ?name) ?upcase-name]]
       (d/q names))
   (-> '[:find (distinct ?name) :with ?upcase-name :in [?name ...]
         :where [(clojure.string/upper-case ?name) ?upcase-name]]
       (d/q names))])
=> [[[#{"a" "A"} "A"] [#{"b"} "B"]] [[#{"a" "b" "A"}]]]#2020-04-2317:41onetomSubmitted the issue as https://support.cognitect.com/hc/en-us/requests/2668#2020-04-2322:02donyormSo I'm trying to automate deployments with Amazon Codebuild, and having a working deploy script (it runs fine on my local machine). However, when it runs on the codebuild server I get the following error: Error building classpath. Could not find artifact com.datomic:ion:jar:0.9.35 in central () . I can download the exact zip used by codebuild and run the script fine on my local machine. Why would clojure-cli not know to look for the ion jar in datomic's repo?#2020-04-2322:06alexmillerIt probably is - the error just reports the last place it looked#2020-04-2322:07alexmillerI think this is actually maybe a known issue with code build though#2020-04-2322:07alexmillerWhere code build can’t see stuff in a different region or different vpn or something#2020-04-2322:08donyormHuh any chance you know a workaround? I suppose this isn't strictly necessary, but it would be nice#2020-04-2322:09alexmillerThey’ve run into this on the Datomic team iirc#2020-04-2322:09alexmillerI’m not remembering the details#2020-04-2322:10alexmillerDon’t think they’re available rn#2020-04-2322:10donyormI think I found the issue (https://stackoverflow.com/questions/48984763/aws-codebuild-cant-access-maven-repository-on-github), thanks for the hint that it was codebuild's fault#2020-04-2322:19marshallhttps://forum.datomic.com/t/ions-push-deployments-automation-issues/715/5#2020-04-2322:46donyorm@U05120CBV unfortunately, I'm running this codebuild in us-east-1, so I guess it's a different issue?#2020-04-2405:01tatutI just had this issue and ended up packaging my own ~/.m2 repo (with just com/datomic included) in a private s3 bucket, downloading and extracting that in the codebuild#2020-04-2405:02tatutit is really unfortunate workaround but I couldn't get access to the datomic releases, even in the same 
region#2020-04-2408:11stijnAre your permissions for Codebuild set up correctly? You need either Administrator access or add this to an IAM policy that is attached to the codebuild instance profile:
{
  "Sid": "DatomicReleasesAccess",
  "Effect": "Allow",
  "Action": "*",
  "Resource": [
    "arn:aws:s3:::datomic-releases-1fc2183a/*",
    "arn:aws:s3:::datomic-releases-1fc2183a"
  ]
}
#2020-04-2415:21donyorm@U0539NJF7 That's probably it. I'll look into it.#2020-04-2415:54donyormYep that seemed to do it. Thanks, stijn#2020-04-2405:03tatutI'm trying to access datomic via codebuild for db tests, but I can't create an endpoint for the VPC https://docs.datomic.com/cloud/operation/client-applications.html#create-endpoint (is LoadBalancerName not available in solo topology?)#2020-04-2412:42vlaaadLet's suppose I have an entity and a bunch of txs that touch that entity. What would be an efficient query to pull a bunch of data from this entity at these timepoints to see how it looked throughout its life? Or is (map #(d/pull (d/as-of db %) '[*] e) txs) the only way?#2020-04-2412:49vlaaadI'm using cloud by the way#2020-04-2412:49vlaaadso I would guess every d/pull is a separate request?#2020-04-2413:14marshall@vlaaad you could query for everything about the entity from a history db#2020-04-2413:16vlaaadbut that’s another thing, I’m not interested in changes, I want full state at point in time#2020-04-2413:17marshallfull state at a point in time definitely sounds like as-of#2020-04-2413:18vlaaadYeah, and I wonder if there is a way to query for state of an entity at different time-points#2020-04-2413:19vlaaadlike [:find (pull $ ?e [*]) :in [[$ ?e] ...]]#2020-04-2413:19vlaaadso it is a single request to server instead of multiple requests#2020-04-2413:19vlaaadif it is multiple requests?
hard to tell without source code…#2020-04-2413:24marshallcloud or on-prem?#2020-04-2413:26vlaaadcloud#2020-04-2413:27vlaaadI tried with more entities, and using multiple d/pull + d/as-of IS an N+1 problem: it gets slower and slower, so I guess it performs multiple requests#2020-04-2413:15marshallhttps://github.com/cognitect-labs/day-of-datomic-cloud/blob/751618ff7526c956bd7d5558a2698eda369cee4f/tutorial/filters.repl#L108#2020-04-2413:16marshalldepending on what you’re looking for, you can also use tx-range: https://github.com/cognitect-labs/day-of-datomic-cloud/blob/751618ff7526c956bd7d5558a2698eda369cee4f/tutorial/log.clj#L50#2020-04-2416:42donyormIs there a reference anywhere to what permissions a user/role needs in order to push and deploy for ions?#2020-04-2417:34Joe Lane@U1C03090C https://docs.datomic.com/cloud/operation/access-control.html#org1c35561#2020-04-2417:36donyormThat seems to be more related to accessing the database itself, rather than just pushing ions. I don't need to give this role access to the database, it just needs to deploy ions. Does that still require being a datomic administrator?#2020-04-2416:51hadilsCloud question: when a new EC2 instance is started, is a new transactor created? Or is there one transactor for the whole system, regardless of the number of EC2 instances? I am curious if I need to track machine ids and so forth to figure out who is writing to parts of the database. I am probably overthinking this.#2020-04-2416:53marshall@hadilsabbagh18 you’re definitely overthinking it 🙂
there is no single transactor in Cloud
All nodes of the primary compute group can perform writes#2020-04-2416:53marshalltraffic is all routed through the load balancer#2020-04-2416:53marshalland uses consistent hashing and sticky sessions#2020-04-2416:54marshallto route requests from the same client and/or about specific DBs to particular nodes#2020-04-2416:54marshallbut that is strictly a performance optimization#2020-04-2416:54marshallany node in the group is capable of handling writes to any db#2020-04-2416:57hadilsThanks @marshall. When a new EC2 instance is started, doesn't my code start on it as well? Isn't there a potential for two servers to work on the same datoms in the database? My code is multi-threaded so there are processes that may replicate work on different EC2 isntances if they are running. If that is the case, then I need to track who is doing what, right?#2020-04-2416:58marshallno. individual transactions are still serialized via coordination with storage#2020-04-2416:58marshallyou may need to consider that separate nodes may try to do things “at the same time”#2020-04-2416:58marshallbut that is no different than multithreaded db access in any system#2020-04-2416:59hadilsAh. Thanks! I can handle this case without machine ids, etc. Thanks a lot @marshall!#2020-04-2416:59marshalli.e. use compare-and-set, optimistic concurrency, etc#2020-04-2417:02hadils@marshall another question. I know that the lambda functions are actually proxies. Do they scale out and spawn separate processes within the EC2 pool when load becomes high?#2020-04-2516:28marciolHi all. I'm thinking in increase our usage of Datomic, but I have some doubts about patterns of usage in a distributed microservices setting.
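[editor's note] The compare-and-set marshall mentions maps to the built-in :db/cas transaction function; a minimal sketch (`conn` and `account-eid` are hypothetical):

```clojure
;; Aborts the transaction unless :account/balance is currently 100,
;; so two nodes racing on the same datom cannot both win.
(d/transact conn {:tx-data [[:db/cas account-eid :account/balance 100 90]]})
```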
It's common to see in the wild Datomic used as the source of truth and the final place where all our data should live.
There is a set of good practices related to the persistence layer in the microservices approach, and one of them is to use a database per bounded context to avoid coupling, but it seems that doesn't apply when using Datomic, given that Datomic allows distributed peers.
Can anyone shed more light on this subject. Blog posts and articles are very welcome.#2020-04-2523:14marciolI found this great article from @U7JK67T3N
https://theconsultingcto.com/posts/datomic-with-terraform/#2020-04-2720:52bhurlowFWIW I know that nubank deploys a datomic instance per microservice#2020-04-2800:56eraadI believe Datomic Cloud is optimized to work with one database. There is no need to shard or divide your application into multiple databases.
Per my understanding, microservices architectures with physically separated databases are needed because of technological constraints related to scalability.
With Datomic, you should not worry about that because it is already optimized for all kinds of data access patterns. Check these for further technical recommendations about those patterns: https://docs.datomic.com/cloud/best.html
Regarding domain bounded contexts, I believe these should be enforced at the code level. If you have different traffic patterns for your applications, you can use query groups for example.
This style of architecture is a bit different from the “common knowledge” out there that couples domain modeling of bounded contexts with technology/scalability constraints of specific database technologies.
Anyways, I recommend you stick to one database and enforce your bounded contexts at the code level. If you need more, check out these planning strategies:
https://docs.datomic.com/cloud/operation/planning.html#2020-04-2814:37marciol@U0FHWANJK I have talked to a person that works there and he said that they deploy one datomic per bounded context#2020-04-2814:38bhurlow@U28A9C90Q yea I recall the same. I believe they have a "template" for starting a microservice which installs a datomic on-prem instance and s3 bucket per service#2020-04-2814:42marciolHere at PayGo, a company whose main responsibility is dealing with payments in the ecosystem of C6 Bank (a Nubank competitor of sorts), we are using Clojure and Datomic in some services, and we are trying to build something like a template as well; it is on my list right now.#2020-04-2814:47marciolThank you @U061BSX36, it is more and more apparent that with Datomic other kinds of patterns are needed.
In order not to lose the advantage of having the database in our application process, as we have today using Datomic on-prem where each application is a peer, we need to plan how to make the application run using the Datomic Ions strategy.#2020-04-2814:48marciolWe are still thinking about the pros and cons of this approach.#2020-04-2814:49eraadNice, good way of thinking about it.#2020-04-2814:50marciolRegarding the usage patterns, I wonder if it is possible for an application that depends on multiple databases to make a query joining other databases, as described by @U09K620SG in this article:
http://www.dustingetz.com/:datomic-myth-of-slow-writes
> The problem with place-oriented stores is that sharding writes forces you to shard your reads to those writes.#2020-04-2814:52marciolNubank uses Pathom so they do almost the same, but relying on each service to get a specific part of the data from its database, aggregating all this data after that.#2020-04-2814:54dustingetzDoes Ions have multi-db queries? I thought Cognitect quietly turned that off shortly after Datomic Cloud release, not sure if they ever turned it back on with Ions#2020-04-2814:55marciolYes, but with datomic on-prem we can use multi-db queries. I’m just wondering about how the application will behave regarding memory usage, latency, etc etc#2020-04-2814:56marciolOr just use Pathom to obtain the same result#2020-04-2814:56dustingetzFor on-prem, the databases will compete for object cache in the peer#2020-04-2814:58marciolYes, it is what I thought; this can happen even with one database, depending on the amount of data and usage patterns, as we can read in this awesome post from Nubank:
https://medium.com/building-nubank/the-evergreen-cache-d0e6d9df2e4b#2020-04-2815:07marciolSo I’ll avoid future problems by giving up what would be a fantastic feature 😅#2020-04-2815:07marciolunless someone changes my mind 😄#2020-04-2815:35marciolBased on what @U061BSX36 said, I think that the smart move is to avoid multiple databases, and only break things up if:
1. You hit the write throughput limit of one transactor,
2. The amount of data is so huge that you start to experience issues related to object cache space.
Can you confirm this usage pattern?
cc: @U061BSX36 @U09K620SG @U072WS7PE @U05120CBV @val_waeselynck#2020-04-2815:45marshallon-prem or cloud?#2020-04-2815:49marciolon-prem at first @U05120CBV but we are evaluating cloud as well#2020-04-2815:51marciolbut one additional question @U05120CBV:
is it possible to avoid sharding in Datomic Cloud? What is the strategy when data grows really big?
The 10B number is a guideline around when you need to consider options for handling volume, shards, etc
if you’re unlikely to hit 10B datoms in 3 to 5 years, then i wouldn’t worry about it#2020-04-2816:15marciolSeems the case @U05120CBV, but I have another question regarding the architectural aspect: Is it possible to use a single primary logical DB to handle in a unified way all my data, even within a distributed services setting?
Sometimes, according to “common knowledge”, as pointed out by @U061BSX36, multiple databases is the way to go, but it seems to me that this can be different when using Datomic. It would be really awesome to concentrate all your data in one place.#2020-04-2816:16marshallit depends a lot on your particular system needs, architecture, etc#2020-04-2816:16marshallthere is no right or wrong answer#2020-04-2816:19marshallthere are definitely advantages to a central single db#2020-04-2816:20marshalllikewise, there are lifecycle advantages to individual services having their own dbs#2020-04-2816:20marshalli would assess the tradeoffs to the different options and determine which fits your particular system needs best#2020-04-2816:22marciolI need to isolate my individual bias towards monolithic applications or “modular monoliths” as some name it, in order to do the best assessment#2020-04-2816:23marciolBut it is really fantastic that Datomic offers a larger range of options#2020-04-2818:08marciolBtw, very good article @U0C4ECS1K#2020-04-2518:33alidlorenzoHey y’all I’m working on two libs as I build my Datomic API
one to manage AWS infrastructure: https://github.com/rejure/infra.aws
another to manage schema accretions: https://github.com/rejure/dation
both are intended for Datomic Cloud, try to make it easier to create configurations using EDN, and over time will (hopefully) provide more utilities for managing aws infrastructure and database attributes/migrations, respectively
feedback is welcome 🙂 feel free to open issue or discuss in #rejure channel I just created#2020-04-2518:34alidlorenzoon a related note, I have a question about Datomic accretions, unsure of what approach to take in above lib
from my understanding, Datomic transactions are idempotent so you could reinstall attributes every time on startup but sometimes you also need to migrate data, so it helps to have some control over the process
currently I’m keeping a version number for schema that can be manually changed whenever schema/migration change is desired.
another approach I just read in one of @val_waeselynck’s post about Datomic is to reinstall schema on startup but track migrations* that have been run (so that unlike the schema, they’re not rerun).
i prefer this latter approach over version numbers, but i’m curious, the `ensure-schemas` example in day-of-datomic-cloud repo checks if a given schema attribute exists before reinstalling - is there a reason this approach was taken instead? are there considerations I’m not taking into account?#2020-04-2521:35val_waeselynckNote that Datomic transactions are not idempotent in general (e.g [[:db/add "my-tempid" :person/age 42]] will always create a new entity, for lack of an identity attribute to induce upsert behaviour).#2020-04-2521:37val_waeselynckI only meant that schema installation transaction tend to be idempotent (e.g, creating a new attribute). So if you're a bit careful, you can usually just re-run your schema installation transaction, but it does require vigilance.#2020-04-2521:40val_waeselynckI don't know if that's what you read, but you might take inspiration from this: https://github.com/vvvvalvalval/datofu#managing-data-schema-evolutions
(won't work for Datomic Cloud, but shouldn't be too hard to port)#2020-04-2522:46alidlorenzothanks for clarifying that. I was reading the “Using Datomic in your App” article, implementation in linked repo seems to be similar, will take a look.
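[editor's note] The "reinstall schema, but track migrations" idea discussed here might look like the following sketch; `:migration/id` is a hypothetical unique-identity attribute, and the code is untested:

```clojure
;; Schema-install transactions are safe to re-run; data migrations are
;; recorded under :migration/id so each one runs at most once.
(require '[datomic.client.api :as d])

(defn migration-ran? [db mig-id]
  (seq (d/q '[:find ?e :in $ ?id :where [?e :migration/id ?id]]
            db mig-id)))

(defn migrate! [conn {:keys [id tx-data]}]
  (when-not (migration-ran? (d/db conn) id)
    (d/transact conn {:tx-data (conj tx-data {:migration/id id})})))
```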
as is, datofu only works with on prem, right?#2020-04-2611:59val_waeselynckYes#2020-04-2714:25kvltIs there a correlation between datomic peer memory and datomic transactor memory?#2020-04-2718:27stuarthallowayHi @petr! What do you mean by correlation?#2020-04-2718:30stuarthallowayThe peers must follow the in-memory transaction stream for all databases they are connected to, which is up to the memory-index-max setting (on the tranactor(s)!)#2020-04-2718:31stuarthallowayBut processes can make independent choices about total JVM, object cache, etc. so long as they work within that rule. This is partially described at https://docs.datomic.com/on-prem/capacity.html#peer-memory.#2020-04-2817:12joshkhi know this question is 10% Ions and 90% AWS, but maybe one of you experts can help me out. i'm trying to configure a CloudWatch metric (later to be used as an Alarm), which looks for my Ion's cast/alerts and cast/events. after setting up the metric, i have no results in my graph, even though i can find filtered matches in my log streams when i use the same filter pattern.
1. in CloudWatch, i find my datomic-<system-name> log group
2. i select the radio option and click the Create Metric button
3. i enter the Filter Pattern {$.Msg = "my-specific-cast-event"} (which works as a normal filter pattern when searching log streams)
4. i choose a Metric Namespace (i don't think it matters which one)
5. i click Create Filter, and the final result is an empty graph
NOTE AWS will display all metrics in lowercase with the first character capitalized. As an example, the aforementioned :CodeDeployEvent will display as Codedeployevent in both the metrics and the logs. Additionally, CloudWatch Metrics do not have namespaces, and any namespace provided in the metric name will be ignored.
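[editor's note] A cast/metric call (which, unlike cast/event, does surface in CloudWatch Metrics) looks roughly like this; the metric name is hypothetical:

```clojure
(require '[datomic.ion.cast :as cast])

;; Per the casing note above, this will display as Myalertcount in CloudWatch.
(cast/metric {:name :MyAlertCount :value 1 :units :count})
```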
#2020-04-2817:45Joe LanePer https://docs.datomic.com/cloud/ions/ions-monitoring.html#metrics#2020-04-2817:48Joe LaneAre you trying to cast a metric from your local environment? I'm not sure that cast/metric works locally, it may have to be deployed. It's been a while since I've worked with these.#2020-04-2817:51joshkhhey no worries, and thanks for the input. the cast is coming from a deployed environment, and i can see them in my cloudwatch logs. i just can't seem to wrangle them in the CloudWatch Metrics (which i think are different from what Datomic calls Metrics)#2020-04-2817:53joshkhwait.. wait. i think you're totally on to something. thanks Joe!#2020-04-2817:53joshkh(i'm using cast/event, not cast/metric)#2020-04-2819:01Joe LaneYeah, you gotta use metric 🙂#2020-04-2817:37Willwhen I submit a retract transaction with this form:
[:db/retract entity-id attribute]
I get the following error:
Error printing return value (IndexOutOfBoundsException) at clojure.lang.PersistentVector/arrayFor (PersistentVector.java:158). null
my code specifically looks like:
(d/transact conn [[:db/retract 17592186123123 :entity/attribute]])
and the relevant schema looks like:
{:db/ident :entity/attribute
:db/valueType :db.type/tuple
:db/tupleTypes [:db.type/string :db.type/string]
:db/cardinality :db.cardinality/many}
I don't think the fact that the attribute is a tuple or cardinality many is relevant, I've tried retracting string valued attributes with cardinality one in the same way and gotten the same error.
Looking at the documentation here:
https://docs.datomic.com/on-prem/transactions.html#list-forms
it seems like I should not have to specify a value for a retraction and if the value is not specified it will retract all the attributes that match the supplied entity id and attribute.
Anyone have any thoughts?#2020-04-2817:41joshkhi think that functionality was introduced only in the very latest release of Datomic (13 Feb 2020). are you on the latest version?
https://docs.datomic.com/on-prem/changes.html#0.9.6045
https://docs.datomic.com/cloud/releases.html#616-8879 (cloud)#2020-04-2817:42Willah that'll be it, we're on 0.9.5981#2020-04-2817:43Will@joshkh thanks for the fast response!#2020-04-2817:44joshkhno problem! it's definitely a feature i thought would have existed for ages. but alas, we have it now 🙂#2020-04-2907:08robert-stuttaford@marshall @jaret hey guys 👋 we want to switch from Java 8 to Java 11, but when i start a pro-0.9.6024 transactor, I get this warning:
WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass$3$1 (file:/Users/robert/Datomic/datomic-pro-0.9.6024/lib/groovy-all-1.8.9.jar) to method java.lang.Object.finalize()
i did try to find supported java versions on the datomic doc site but couldn't find anything. got any advice for me, please? thanks!#2020-04-2911:59favilaI’ve been told by Cognitect (never seen in docs though) that java 11 should work.#2020-04-2911:59favilathis error re groovy: it’s actually an unused dep and was removed in the next version 0.9.6045#2020-04-2911:59favila(you can see that in the changelog)#2020-04-2912:00favilayou can probably delete that jar file from libs if you don’t want to upgrade#2020-04-2912:47marshallAccurate ^#2020-04-2913:50robert-stuttafordoh ok beeauuutiful! thanks @U09R86PA4 and thanks for confirming @marshall!#2020-04-2915:25kennyIs there any documentation on what passing :repl-cmd to datomic.ion.dev/push does exactly? I would've thought it would allow me to run the push with additional aliases but it doesn't appear to have any effect.#2020-04-2915:27kennyMore to that point, how does push know which paths to include on the final classpath that will be uploaded to S3? Does it simply not include any aliases? If so, is there a way to have it include some aliases?#2020-04-2917:06kennyI am getting an alert with Datomic Cloud after deploying my Ion:
":datomic.cloud.cluster-node/-main failed: 'datomic/ion-config.edn' is not on the classpath"
No errors reported when pushing and deploying from the REPL. I can also slurp my ion-config.edn from the REPL:
clj -A:dev:ion-deploy
Clojure 1.10.1
user=> (require '[clojure.java.io :as io])
nil
user=> (slurp (io/resource "datomic/ion-config.edn"))
"{:allow [],\n :lambdas\n {:query-pricing-api\n {:fn cs.ions.pricing-api/lambda-handler,\n :description \"Query the pricing-api.\"}},\n :app-name \"datomic-import-test\"}\n"
I'm missing something between what is on the classpath locally and what it ends up deploying.#2020-04-2917:15kennyDoes ion push somehow take into account .gitignore?#2020-04-2917:21kennyAh, I think Ion push simply ignores all aliases.#2020-04-2919:19kennyWe're using a monorepo style project where lots of dependencies are all :local/root. Datomic Ions appear to require no :local/root deps even if the git repo is clean and all :local/root deps are within the same git repo. Is this a necessary constraint?#2020-04-2919:41alexmillerLocal root deps are not in git repos from dep’s perspective#2020-04-2919:41alexmillerThey are local unmanaged resources#2020-04-2919:45alexmillerIt seems unlikely that Datomic team would infer this semantic over the top. I think the real place to work this problem is in tools.deps but it really requires a top down intent to address this monorepo use case and I don’t think that’s something likely to happen soon#2020-04-2919:47alexmillerAs far as workarounds, I’m not sure all of the options available for ions#2020-04-2920:59Joe Lane@kenny Create a "runner" project which depends on specific git revision but then allows the deps to be overridden when you have a :local alias.
Example of this "runner" approach with ions is https://github.com/Datomic/ion-event-example-app which just composes https://github.com/Datomic/ion-event-example. You deploy the former. If you expanded on this style with many smaller ion modules/projects you can compose different ion libraries in any way you want.
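[editor's note] A "runner" deps.edn in this style might look like the sketch below; the coordinates, sha, and paths are hypothetical:

```clojure
;; runner/deps.edn — pin each service by git sha; the :local alias swaps in
;; a sibling checkout for development via :override-deps.
{:deps {com.example/health-tracker
        {:git/url "https://github.com/example/health-tracker"
         :sha "0000000000000000000000000000000000000000"}}
 :aliases
 {:local {:override-deps
          {com.example/health-tracker {:local/root "../health-tracker"}}}}}
```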
I'm working on a reference application that demonstrates this by having various "services" (different apps like a health-tracker, a recipe app, a todo application, etc.) all deployed by the same "runner" which references each of these projects at a specific git sha and development is very smooth because in cursive I can create a multi-module project which allows me to edit my :local/root siblings at the same time but keep them in different git repos.#2020-04-2921:22kennyYeah, I suppose I could do that. Would involve creating a deps.edn that contains all of my sub-projects. Easy to do programmatically.#2020-04-2921:00Joe LaneI'll try to get around to sharing my example this weekend.#2020-04-2923:53kennyAny idea what I need to do to get cast/event to work locally?
(cast/event {:msg "Foo"})
Execution error (IllegalArgumentException) at datomic.ion.cast.impl/fn$G (impl.clj:14).
No implementation of method: :-event of protocol: #'datomic.ion.cast.impl/Cast found for class: nil#2020-04-3000:03Joe Lane@kenny https://docs.datomic.com/cloud/ions/ions-monitoring.html#local-workflow#2020-04-3000:04Joe LaneGotta call (cast/initialize-redirect :stdout) , or :stderr , or "somefile.log", or :tap.#2020-04-3000:16kennyOh yeah. Strange error message for that.#2020-04-3011:23joshkhhas anyone experienced the following exception from a deployed HTTP Direct ion project?
Uncaught Exception: java.io.IOException: Too many open files
we see the exception shortly after the EC2 instance comes up, and once it happens the web server stops responding for good. the project does work with temporary files but very rarely and only on demand, so it's strange to see the exception shortly after the app starts.#2020-04-3012:00favila“Open files” can also mean file descriptors, meaning sockets. Do you make lots of tcp or http connections maybe?#2020-04-3012:12joshkhi think i found the problem. it looks like the error was coming from a function which created a new cognitect-labs/aws-api client in a let every time the function was called (which isn't a good practice, and now the client is def'ed). perhaps the client opens files, maybe for end point resolution or something?#2020-04-3012:40favilaI think it probably just opened a new http connection each time#2020-04-3013:30ghadi@U0GC1C09L can you list the version of the client and whether you pass anything to the constructor besides :api#2020-04-3013:35joshkhsure thing.
client version:
{com.cognitect.aws/api {:mvn/version "0.8.305"}}
constructor:
(aws/client {:api :kms})#2020-04-3011:25joshkhthe stack trace points to org.eclipse.jetty.util.component.ContainerLifeCycle & cognitect.http_client#2020-04-3012:42MarcusWhen using the client-pro api there is a function create-database. This requires the peer server to be running. But the peer server requires a database name (in the -d parameter) to run. How can I create a database with the client-pro api?#2020-04-3012:42MarcusDo I need to use the full peer api?#2020-04-3012:49favilayes, you need a peer to create the database; then you can run peer server#2020-04-3012:49MarcusOk. But what then is the use of create-database? to create subsequent databases?#2020-04-3012:50favilait’s for cloud (and other non-peer-server scenarios)#2020-04-3012:50favilanote the docs for create-database:https://docs.datomic.com/client-api/datomic.client.api.html#var-create-database#2020-04-3012:51Marcusah 🙂#2020-04-3012:51Marcusthanks 🙂#2020-04-3015:13tvaughanSorry if I missed this elsewhere, but is it permissable to provide a public docker image of Datomic on-prem (without a license key)?#2020-04-3016:37marshallno. you can’t distribute the bits of Datomic on-prem#2020-05-0113:39robert-stuttafordwhat could cause this to happen, @marshall @jaret? no ddb throttling at all, heartbeat totally stable, but all services started getting transaction timeouts, and as you can see on the graph, live index threshold stuck at full. first time we've ever seen this!#2020-05-0113:45jaret@robert-stuttaford would you be able to start a case and share logs, version, config settings? 
We’d be interested in looking at this in more detail.#2020-05-0113:46jaret<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>#2020-05-0113:47robert-stuttafordabsolutely#2020-05-0113:58robert-stuttafordmy colleague Geoff will mail soon!#2020-05-0113:58robert-stuttafordthanks @jaret#2020-05-0114:30potetmI’m curious what the answer is when ya’ll find it.#2020-05-0115:49BrianIs it possible to restore a database from a file on disk into an in-memory database? I'd like to write some tests for my code but don't want to point them towards the database on my system since that isn't portable. I know how to restore a database from the command line, what I am looking for is some Clojure code which demonstrates how to take an on-disk backup and restore it into an in-memory Datomic database for use in my tests. Thanks!#2020-05-0116:07kennyIs datomic.query.support a public api?#2020-05-0116:51ghadino#2020-05-0118:56micahAm pondering the use of :db/fulltext. It’d mean I have to copy data over to new attribute. Is fulltext query really much faster than [(.contains :attr/name ?q)]?#2020-05-0118:56micahAm pondering the use of :db/fulltext. It’d mean I have to copy data over to new attribute. Is fulltext query really much faster than [(.contains :attr/name ?q)]?#2020-05-0119:39favilathat query will always require a full scan of the index#2020-05-0119:42favilafulltext does not. it’s a lucene index with default tokenization and stop words, and you can use lucene queries#2020-05-0119:42favilanot sure a “contains” would work reliably because it’s designed for natural language queries and indexing#2020-05-0119:56micah@U09R86PA4 Thanks for the reply. So in short, yeah.. it’ll be faster?#2020-05-0119:56micah.contains kinda worked… here’s how the query ended up#2020-05-0119:57favilait’ll be faster, but depending on what you want, may not be correct#2020-05-0119:57micahwell that didn’t copy well#2020-05-0119:57micah'[:find ?e
:in $ ?q
:where
[?e :user/email ?email]
[?e :user/name ?name]
[(.toLowerCase ^String ?name) ?name2]
[(str ?email " " ?name2) ?text]
[(re-find ?q ?text)]]
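[editor's note] For comparison, a fulltext version of this kind of search (assuming :user/name were defined with :db/fulltext true; on-prem only) hits the Lucene index instead of scanning every value:

```clojure
;; The fulltext expression binds matching entity/value pairs directly.
(d/q '[:find ?e ?name
       :in $ ?search
       :where [(fulltext $ :user/name ?search) [[?e ?name]]]]
     db "smith")
```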
#2020-05-0119:58micahI see what you mean by correct.#2020-05-0119:58favilayeah that’s an unavoidable full scan of two indexes at once#2020-05-0119:58favilawhat is the re pattern expected to be?#2020-05-0119:58micahfull email, email segment, name segment#2020-05-0119:59favilawhat do you mean by email segment?#2020-05-0120:01micah<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> -> full email#2020-05-0120:01micahMight want to search for “http://cleancoders.com”#2020-05-0120:01favilaso domain name only?#2020-05-0120:01micahnot only… just an example.#2020-05-0120:01favilaI’m just wondering if you need the full power of RE or if you just want to match different segments#2020-05-0120:02favilaif you want exact matches of different segments, you would be better off creating derived attrs with just those parts canonicalized for indexing purposes, then do exact matches against them#2020-05-0120:02favilathat probably wouldn’t work for name#2020-05-0120:05micahHeya… just been browsing http://clubhouse.io and noticed your pm tool uses terms stories and epics, as do I… old school agile XP#2020-05-0120:05micahDo you guys offer trial accounts?#2020-05-0120:05micahBeen using Trello but it leaves much to be desired.#2020-05-0120:20favilaall accounts up to 10 users are free#2020-05-0120:20favilawe’re somewhere on the spectrum between trello and Jira in terms of power and ease of use#2020-05-0120:23micahSorry… I signed up.. was silly question. Yeah looks cool.#2020-05-0120:23micahThanks for help#2020-05-0308:20Ben HammondMorning
I'm trying to introduce an 'or' clause into a (cloud) datomic query, but I'm seeing a
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Nil or missing data source. Did you forget to pass a database argument?
error.
So this query works as expected
(datomic.client.api/q
{:query '[:find (pull ?e [*])
:in $u [?uid ...]
:where [$u ?e :game/red-player ?uid]]
:args [(dca/db datomic-game-conn)
[#uuid "c5658831-892e-4c67-b87e-d39a5a6a5660"]]} )
but this gives me an error when the 'or' clause is introduced in the where
(datomic.client.api/q
{:query '[:find (pull ?e [*])
:in $u [?uid ...]
:where (or [$u ?e :game/red-player ?uid])]
:args [(dca/db datomic-game-conn)
[#uuid "c5658831-892e-4c67-b87e-d39a5a6a5660"]]} )
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Nil or missing data source. Did you forget to pass a database argument?
what am I doing wrongly?#2020-05-0308:49Ben Hammondoh I think it works if I replace the named database with `$`#2020-05-0309:00Ben Hammondperhaps it's the datomic-queries-across-multiple-dbs angle that I'm having problems with#2020-05-0309:21Ben Hammondare multi-database queries possible when using datomic cloud?
is it just or clauses that are a problem?#2020-05-0319:51favilaRules (of which or is a kind of anonymous rule) can only operate on one db#2020-05-0319:51favilaThey have a different syntax for rebounding the db: put it before the rule invocation#2020-05-0319:53favila(datomic.client.api/q
{:query '[:find (pull ?e [*])
:in $u [?uid ...]
:where ($u or [?e :game/red-player ?uid])]
:args [(dca/db datomic-game-conn)
[#uuid "c5658831-892e-4c67-b87e-d39a5a6a5660"]])#2020-05-0319:54favilaSo every clause must use $u and you can not override#2020-05-0319:54favila(No cross-datasource joins allowed in rules)#2020-05-0320:24Ben Hammondooh thanks#2020-05-0320:24Ben HammondI never would have guessed that syntax;
is it in the docs anywhere?#2020-05-0320:29Ben Hammondah but your point is that there is no benefit in specifying the datasource as $u because There Can Be Only One
so I may as well leave it implicit and get on with my life#2020-05-0322:24favilaThat wasn’t really my point. It was that rules have this data source limitation so they have a special syntax#2020-05-0322:24favilaYes this is documented#2020-05-0322:27favilaIt’s called src-var in the grammar#2020-05-0408:08Ben HammondThankyou#2020-05-0320:42Sam DeSotaHi, just ran into this issue: https://forum.datomic.com/t/ion-get-params-throws-exception-when-returning-more-than-10-values/1303#2020-05-0320:43Sam DeSotaThe ion docs recommend using ion/get-params, but get-params breaks after 10 params#2020-05-0320:44Sam DeSotaCan't this break production deployments if all you do is add too many params to param store? This seems like it needs to be fixed.#2020-05-0320:46Sam DeSotaSince we have no control over when an ion restarts, it could happen without a deploy and rollbacks could fail.#2020-05-0321:12Sam DeSotaI've fixed in my own deployment by paging and partitioning#2020-05-0402:55mruzekwDoes Datomic Ions at all handle deployment of a static frontend SPA?#2020-05-0411:50joshkhhow do you mean? do you want Ions to act as a web server and serve the static frontend from somewhere like S3?#2020-05-0413:24Joe Lane@U1RCQD0UE Put that static frontend spa in an S3 bucket which is cached by cloudfront.
Then set up a codebuild/codepipeline job any time it changes and be sure to do a cloudfront cache invalidation on every successful build.#2020-05-0517:32mruzekwThanks Josh and Joe. I was thinking the S3 + Cloudfront route.#2020-05-0517:32mruzekwI was wondering if I had to add something extra like Serverless Framework to deploy an SPA frontend#2020-05-0517:32mruzekwSounds like I do#2020-05-0518:35Joe LaneNo need for a framework, you can compile your assets and s3 cp ./my-project #2020-05-0411:47joshkhis there a way to query for the inclusion of a value anywhere in a tuple attribute? something like:
(d/q '{:find [?social-connection]
:in [$ ?person]
:where [
[?social-connection :friendship/from+to ?person]
]}
db "Bob")
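[editor's note] One way to match either slot of the composite value is to bind the whole tuple and destructure it; a sketch (note this scans every :friendship/from+to value):

```clojure
;; Bind the tuple, then unpack it so ?person can match either position.
(d/q '[:find ?social-connection
       :in $ ?person
       :where [?social-connection :friendship/from+to ?people]
              [(untuple ?people) [?person ...]]]
     db "Bob")
```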
#2020-05-0411:54joshkhi guess i could wrote a more sophisticated query to identify the attributes in the tuple attribute and check each one#2020-05-0412:54favilaDestructure person#2020-05-0412:54favilaNote this is going to scan every friendship/from+to value#2020-05-0413:19joshkhthanks, favila. what do you mean by destructure person?#2020-05-0413:24favila[?social-connection :friendship/from+to ?people][(untuple ?people) [?person ...]]#2020-05-0413:26Joe LaneInteresting, I never knew [?person ...] syntax worked on tuples...#2020-05-0413:32favila(psst, tuple is vector and untuple is identity)#2020-05-0413:33joshkhyeah, that's brilliant @U09R86PA4. thanks for pointing that out -- it's exactly what i needed.#2020-05-0413:34favilanote the caveat, this is a full scan of that attr’s values#2020-05-0413:34favilaif you have a non-tuple card-many attr you can match on exactly, that is better#2020-05-0413:34favila(or many such attrs)#2020-05-0413:38joshkhdefinitely. as my business entities and tuple constraints grow, i find now and again that when an entity is retracted, it produces a tuple somewhere else with a nil value (metaphorically). then, retracting another entity that shared common values in the remainder of the tuple fails due to conflicting datoms.#2020-05-0413:41joshkhtwo existing tuple entities:
[friend-a friend-c]
[friend-b friend-c]
retract friend-a:
[nil friend-c]
[friend-b friend-c]
retract friend-b and fail:
[nil friend-c]
[nil friend-c]#2020-05-0413:44joshkhis working around the issue something you've experienced? i've toyed with inbound component references (but this feels dirty and difficult to manage), or transactor functions to look for tuples, or an API to handle retracting known related tuples.#2020-05-0413:47favilayes, unique composite attrs with refs definitely have this problem#2020-05-0413:47favilawe’re just more careful about retractions#2020-05-0413:49favilaif the index exists just to ensure uniqueness, consider enforcing uniqueness on write with a :db/ensure function#2020-05-0413:50favilabut a “cascading delete” flag on a tuple attr would sure be nice#2020-05-0413:50joshkhabsolutely, i always think of it as a "reverse component reference"#2020-05-0413:52favilaI don’t think this can be done with a tx fn because you need to know the final value of the tuple after all primitive datom operations are applied#2020-05-0413:52joshkhalso we're in the same place: use caution. but that comes with a lot of insider knowledge for new engineers to pick up before they start working with the data, because they have to know how all existing tuples connect the data model.#2020-05-0413:52joshkhah. tx fns were next on my list to explore. you saved me some time 😉#2020-05-0414:07BrianIs there any way to load a database backup from disk into an in-memory datomic database? If not, what is the standard for datomic tests that reach a database?#2020-05-0414:14favilagenerally test fixtures will create an in-memory database and get it into the state being tested#2020-05-0414:15favilaif you want to start from a database you have already, you can use it as a dev db; but make sure you don’t write to it#2020-05-0414:15faviladatomock can help#2020-05-0414:17BrianOkay thank you!#2020-05-0417:29Sam DeSotaCreated a little script for deploying ions without the ceremony in case any one one here could use it: https://gist.github.com/mrapogee/251a6b279f1224b90698676f842aaa74#2020-05-0500:22john-shafferNice. 
There is a case macro that can simplify your cond.#2020-05-0504:47murtaza52a noob question - datomic conn is a value of the db in time. So once I finish running a transaction, and if I want to run a query, do I need to use the new value of the db returned by the transaction ?
basically if I have an app which is running queries and doing transactions. can I just have a global conn instantiated once at startup, or do I need to refresh the connection after every transaction ?#2020-05-0504:54favila(d/connection ...) => connection object#2020-05-0504:54favila(d/db connection-object) => database-value#2020-05-0504:56favilaWhen you transact (successfully) against a connection object, a new db value is produced, so yes you need to retrieve it if you want to read the results of the transaction. (The map returned from transact has a db-before and db-after already, too)#2020-05-0504:59favilaBut the connection object is unchanging—its a resource handle not a value (in fact it is already cached for you)#2020-05-0505:02murtaza52so after a transaction if I want to read the results of the transaction, can I just do (d/db connection-object)instead of saving the db value returned by the transaction ?#2020-05-0510:43Lyn HeadleyI'm pretty sure the answer is yes but I can't find it in the docs.#2020-05-0511:52murtaza52thanks yup that is how it works#2020-05-0511:52favilaIf you are using the same connection object (same process or client) yes, but you are guaranteed a db at or after that tx, not the immediate next db#2020-05-0511:54favilaIf you are on a different process, there may be propagation delays. 
In that case communicate the t to the other process then use d/sync to get a db guaranteed to be at or after the tx in question#2020-05-0511:57murtaza52thanks for the pointer#2020-05-0511:53murtaza52I am using datomic client api, is there a helper function to generate a squuid ?#2020-05-0514:40alidlorenzo@murtaza52 i don’t think client api has it but the Clojure cookbook has an example you can use: https://github.com/clojure-cookbook/clojure-cookbook/blob/1b3754a7f4aab51cc9b254ea102870e7ce478aa0/01_primitive-data/1-24_uuids.asciidoc#2020-05-0517:05naomarikHow do I use the collection binding form when performing a query as a map using d/query https://docs.datomic.com/on-prem/query.html#bindings?
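The cookbook approach linked above amounts to overwriting the high 32 bits of a random UUID with the current epoch seconds; a minimal sketch (plain Clojure, no Datomic dependency):

```clojure
;; Sketch of a semi-sequential UUID ("squuid"): the top 32 bits encode
;; seconds since the epoch, the remaining bits stay random.
(defn squuid []
  (let [uuid (java.util.UUID/randomUUID)
        secs (quot (System/currentTimeMillis) 1000)
        msb  (bit-or (bit-shift-left secs 32)
                     (bit-and 0xffffffff (.getMostSignificantBits uuid)))]
    (java.util.UUID. msb (.getLeastSignificantBits uuid))))
```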
Just getting this error: Argument ... in :find is not a variable.#2020-05-0517:08favilacode? also, are you using the client api? (client api doesn’t support find destructuring)#2020-05-0517:11naomarik(let [where ['[?dealer-id :dealer/listings ?listing-id]
'[?listing-id :listing/status :active]]
q {:find '[?listing-id '...]
:in ['$ '?dealer-id]
:where where}]
(d/query
{:query q
:args [(db)
17592186057265]}))#2020-05-0517:11naomarikUsing datomic.api#2020-05-0517:12naomarikQuery works fine without the ... and using ... in any vector-form queries.#2020-05-0517:14naomarikAh nevermind, got it...#2020-05-0517:14naomarik(let [where ['[?dealer-id :dealer/listings ?listing-id]
'[?listing-id :listing/status :active]]
q {:find ['[?listing-id ...]]
:in ['$ '?dealer-id]
:where where}]
(d/query
{:query q
:args [(db)
17592186057265]}))#2020-05-0517:51favilayeah need extra []#2020-05-0517:51favilato make it a map#2020-05-0519:01kennyIs there a way to tell if an environment is running in Datomic (i.e., as an Ion)?#2020-05-0519:13alidlorenzowould using datomic.ion/get-env or datomic.ion/get-app-info work?
what I do specifically rn (for other reasons) is check whether my own (System/getenv "LOCAL_ENV") property has been set, and if it hasn’t I assume it’s running in an Ion. not sure if there’s a better way#2020-05-0519:14kennyThat would tell me if the ion library is on the classpath, not if it's running in the Datomic environment. Current thinking is to check for the env var DATOMIC_ENV_MAP#2020-05-0519:20alidlorenzoi think DATOMIC_ENV_MAP is the env var that datomic.ion/get-env retrieves, hence my comment 🙂#2020-05-0519:21kennyRight but the difference is substantial. I may have Datomic ion lib on the classpath and not have the env var set.#2020-05-0519:25alidlorenzoi’m not sure I follow (from my understanding DATOMIC_ENV_MAP is set by Datomic system, datomic.ion/get-env just checks whether the env var is set or not which seems to be what you proposed to do anyway). in either case, was just suggesting a possible solution, maybe someone else can comment better#2020-05-0519:27kennyOh I see what you mean. I can't call out to the Ion lib since it might not be on the classpath. I'm trying to extend our logging library to know to log using Ion's log functions when in the Ion environment.#2020-05-0519:39murtaza52@alidcastano thanks#2020-05-0807:54alekszelarkHi! What if one wanted to use other randomly generated unique IDs (e.g. nano-id) instead of UUIDs, would Datomic look them up as fast as UUIDs?#2020-05-0811:12favilaSpeed probably isn’t going to differ that much. Uuids have a space-efficient encoding in fressian and transit and are compactly represented in java. An alternative id scheme will have to be represented as a string#2020-05-0812:47alekszelarkThanks for pointing to representation and transferring data, just didn’t think about it.
However, the main question about lookup speed is still open.#2020-05-0812:54favilaagain, it’s unlikely to make a difference, except insofar as things with smaller representations in memory tend to be faster#2020-05-0812:55favilacomparing two longs vs comparing a non-interned string for instance#2020-05-0812:56favilaor just being able to fit more of them into a cpu cache#2020-05-0812:56favilaif speed is absolutely critical you should benchmark, but my hunch is it doesn’t matter#2020-05-0812:56favilapeople routinely use string identifiers in datomic#2020-05-0812:57alekszelarkThanks a lot.#2020-05-0812:57alekszelarkThe question comes from https://github.com/zelark/nano-id/issues/12#2020-05-0822:29pvillegas12How can I convert a datom like#2020-05-0822:29pvillegas12#datom[49011830319782294
186
#uuid "c3fc44ab-44a2-4e66-b095-bfe0dc200806"
13194140498706
false]
into the vector of data it contains?#2020-05-0900:11favilaI’m not sure what you mean? The datom is the data. Do you just want something with same fields but type “vector”?#2020-05-0900:12favilaDatoms support nth and get access (using keys :e :a :v :tx :added) so it should be rare you need to convert them#2020-05-1009:08murtaza52I am using datomic cloud and running a proxy client locally to connect to it - bin/datomic client access my-storage
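Since datoms support both keyword and positional access, converting one to a plain vector is a one-liner; a sketch (`datom` and `datoms` are placeholders for values obtained from the API):

```clojure
;; Sketch: a datom already answers (:e d), (:a d), (nth d 2), etc.
;; If a plain vector of the fields is really needed:
((juxt :e :a :v :tx :added) datom)
;; and for a seq of datoms:
(mapv (juxt :e :a :v :tx :added) datoms)
```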
Intermittently the process stops accepting connections and my clojure code starts timing out in my repl.
Has anyone else experienced this. It just starts working after some time. Any number of process restarts does not help. This is the error I get in the repl -
{:cognitect.anomalies/category :cognitect.anomalies/unavailable,
:cognitect.anomalies/message "Total timeout 60000 ms elapsed",
:config
{:server-type :cloud,
:region "us-east-2",
:system "jra-storage",
:endpoint "",
:proxy-port 8182,
:creds-profile "my",
:endpoint-map
{:headers {"host" ""},
:scheme "http",
:server-name "",
:server-port 8182}}}
This is the output I see on my console after starting the datomic process -
debug1: Local connections to LOCALHOST:8182 forwarded to remote address socks:0
debug1: Local forwarding listening on ::1 port 8182.
debug1: channel 0: new [port listener]
debug1: Local forwarding listening on 127.0.0.1 port 8182.
debug1: channel 1: new [port listener]
debug1: Requesting
Any help will be appreciated.#2020-05-1113:44pvillegas12I’m getting
in thread "async-dispatch-3" java.lang.NoSuchMethodError: com.cognitect.transit.TransitFactory.writer(Lcom/cognitect/transit/TransitFactory$Format;Ljava/io/OutputStream;Ljava/util/Map;Lcom/cognitect/transit/WriteHandler;Ljava/util/function/Function;)Lcom/cognitect/transit/Writer;
at cognitect.transit$writer.invokeStatic(transit.clj:157)
at cognitect.transit$writer.invoke(transit.clj:139)
at $marshal.invokeStatic
#2020-05-1113:48pvillegas12When I connect to the bastion#2020-05-1113:53pvillegas12(d/pull (d/db (cloud-conn)) '[* {:company/creator [*]}] [:company/id company-id])
Exception in thread "async-dispatch-2" java.lang.NoSuchMethodError: com.cognitect.transit.TransitFactory.writer(Lcom/cognitect/transit/TransitFactory$Format;Ljava/io/OutputStream;Ljava/util/Map;Lcom/cognitect/transit/WriteHandler;Ljava/util/function/Function;)Lcom/cognitect/transit/Writer;
at cognitect.transit$writer.invokeStatic(transit.clj:157)
at cognitect.transit$writer.invoke(transit.clj:139)
at $marshal.invokeStatic(io.clj:48)
at $marshal.invoke(io.clj:38)
at $client_req__GT_http_req.invokeStatic(io.clj:76)
at $client_req__GT_http_req.invoke(io.clj:73)
at datomic.client.impl.shared.Client._async_op(shared.clj:380)
at datomic.client.impl.shared.Client$fn__112135$state_machine__8973__auto____112150$fn__112152.invoke(shared.clj:404)
at datomic.client.impl.shared.Client$fn__112135$state_machine__8973__auto____112150.invoke(shared.clj:403)
at clojure.core.async.impl.ioc_macros$run_state_machine.invokeStatic(ioc_macros.clj:973)
at clojure.core.async.impl.ioc_macros$run_state_machine.invoke(ioc_macros.clj:972)
at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invokeStatic(ioc_macros.clj:977)
at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invoke(ioc_macros.clj:975)
at datomic.client.impl.shared.Client$fn__112135.invoke(shared.clj:403)
at clojure.lang.AFn.run(AFn.java:22)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)#2020-05-1113:54pvillegas12Regular pulls are also showing this exception#2020-05-1113:55ghadiNSMError is usually a tools/compilation problem#2020-05-1113:55ghadicheck for things accidentally compiled in $PROJECT/target or wherever your classfiles go#2020-05-1116:32pvillegas12@U050ECB92 not very fluent with these target stuff, will removing the target directory get rid of this problem?#2020-05-1118:08Yuriy ZaytsevHi there! Can you share your best practice for testing with datomic/ions?#2020-05-1122:27drewverleeFor a call do the datomic.api/datoms https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/datoms
1. if you leave out any of the parts of a component "entity attribute value tx added", do they have defaults? I assume transaction is the latest, added is true.
2. If you provide two components, e.g. for avet something like (datoms db avet <a> <v> <e> <t> <added> <a1> <v1> ...), what does that actually do? The database isn't keeping 2nd order indexes for everything is it?#2020-05-1200:38favila1. This is a pattern match. Omitted parts are “match any”#2020-05-1200:41favila2. Behavior is undefined, I suspect the extra arguments are just ignored#2020-05-1214:06drewverleeah. I'm not sure what function signature would be better, but it feels ambiguous.#2020-05-1202:48hadilsDatomic Cloud Question: Does anybody have a way to use Component or Mount to set up database configuration and load parameters? Putting it into the lambda initialization setup does not work. I don't know what to do...#2020-05-1212:53nateHey, a Datomic noob question, should one call d/connect for every transaction or can one call it once and store a ref to :datomic.client/conn for the whole life cycle of an application, and call d/db per transaction instead ?#2020-05-1212:55nate(Kind of joins @hadilsabbagh18 ‘s question in a way)#2020-05-1212:56marshallDatomic connections do not adhere to an acquire/use/release
pattern. They are thread-safe and long lived. Connections are
cached such that calling datomic.api/connect multiple times with
the same database value will return the same connection object.
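Putting the discussion above together, a typical shape is one long-lived connection plus a fresh db value per read; a sketch using the peer API (`db-uri`, `tx-data`, and `some-query` are placeholders):

```clojure
;; Sketch: acquire the (cached, long-lived) connection once...
(def conn (d/connect db-uri))

;; ...then take a fresh db value whenever you read. The transact result
;; already carries db-before and db-after:
(let [{:keys [db-after]} @(d/transact conn tx-data)]
  ;; db-after is guaranteed to include this transaction;
  ;; (d/db conn) from the same process also works.
  (d/q some-query db-after))
```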
#2020-05-1212:56marshallhttps://docs.datomic.com/on-prem/clojure/index.html#datomic.api/connect#2020-05-1212:56marshallthat happens to be the api reference for the peer API, but the same is true for client#2020-05-1212:57marshallhttps://docs.datomic.com/cloud/client/client-api.html#connection#2020-05-1212:57nateOh cool that answers my question, thanks @marshall !#2020-05-1213:14hadils@marshall that may be true for the connection, but I have other components in my application., such as parameter loading and channel setup. I still need to use mount#2020-05-1213:16marshall@hadilsabbagh18 yeah, that answer was directed at @nate
Managing lifecycle in Ions is a bit different; You need to handle those sort of things in a way that isn’t triggered by ns loading#2020-05-1213:17marshallfor example, the Ion tutorial manages db lifecycle outside the push/deploy cycle: https://docs.datomic.com/cloud/ions/ions-tutorial.html#orgf40df0d#2020-05-1213:18marshallhttps://docs.datomic.com/cloud/ions/ions-reference.html#parameters Params I would use Ion parameters: https://docs.datomic.com/cloud/ions/ions-reference.html#parameters#2020-05-1213:22hadils@marshall has the parmeters been fixed to handle more than 10 parameters? I rewrote it to handle more than 10 paramaters.#2020-05-1213:26marshallhttps://forum.datomic.com/t/ion-get-params-throws-exception-when-returning-more-than-10-values/1303/2?u=marshall
get-params is a convenience wrapper#2020-05-1213:26marshallbut using AWS parameters directly is the right approach if you have more than 10#2020-05-1213:26marshallthat should be available in the cognitect aws-sdk also#2020-05-1213:34hadilsThanks @marshall. So I use my parameters in get-client; this is where the problem is...#2020-05-1214:19hadils@marshall Thanks for your help. I got rid of mount and it's working now, after refactoring the code.#2020-05-1215:48marshall👍#2020-05-1216:23joshkhis there a reason to be concerned about a PollingCacheUpdateFailed error in cloud logs? i've checked the troubleshooting documentation but couldn't find an answer.#2020-05-1319:03vnctaingI have an issue trying to do a simple query with on-prem, my get-all fn throws this anomaly.
Query args must include a database
(ns oceans-eleven.db.core
(:require
[datomic.client.api :as d]
[io.rkn.conformity :as c]
[mount.core :refer [defstate]]
[clojure.pprint :as p]
[ring.util.request]
[oceans-eleven.config :refer [env]]))
(def cfg {:server-type :peer-server
:access-key "myaccesskey"
:secret "mysecret"
:endpoint "localhost:8998"
:validate-hostnames false})
(def client (d/client cfg))
(def conn (d/connect client {:db-name "pensine"}))
(def o11-schema [{:db/ident :trip/name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}])
(def all-trips '[:find ?e
:where [_ :trip/name ?e]])
(defn get-all [e]
(d/q {:query all-trips :args [ conn ]}))
;; I also tried this
;; (defn get-all [e]
;; (d/q all-trips conn))#2020-05-1319:04vnctaingI’m using
[com.datomic/datomic-pro "0.9.5981"]
[com.datomic/client-pro "0.9.41"]#2020-05-1319:10ghadi@vnctaing you're passing the connection to the query, not the db#2020-05-1319:11ghadiyou can get a database from a connection#2020-05-1319:11ghadi(d/db conn)#2020-05-1319:11ghadithat gives you a db-as-a-value#2020-05-1319:11ghadihttps://docs.datomic.com/cloud/tutorial/client.html#2020-05-1319:12ghadialways pass a database explicitly as an argument to functions that call d/q#2020-05-1319:12ghadinever discover a database from the "inside out"#2020-05-1319:13ghadi(defn ask-the-database-something
[db ...]
(d/q {:query .... :args [db ...]}))#2020-05-1410:08robert-stuttaford@marshall or @jaret would it be possible to have a copy of the AMI packing script you use for the official AMI? we're very happy with it, but we'd really like to add a Datadog Agent to it. we can technically hack a way in and look around and reverse engineer it, but we'll never be totally sure we got all the configuration you set up. i figure it's simpler just to ask you for it 🙂 thoughts?#2020-05-1413:51marshallhey @U0509NKGK
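Applied to the get-all from the pasted code above, passing a db value instead of the connection might look like this (a sketch reusing the all-trips and conn defined there):

```clojure
;; Sketch: d/q needs a database value, obtained from the connection.
(defn get-all []
  (d/q {:query all-trips
        :args [(d/db conn)]}))
```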
I’ll put together something to share re: how that is built / what matters in there
give me a couple days#2020-05-1414:12robert-stuttafordah thank you sir, you are a scholar and a gentleman 🙂#2020-05-1413:23joshkhshould i be worried about DDB related PollingCacheUpdateFailed alerts in Datomic Cloud? and if so, what's the appropriate action to take?#2020-05-1418:31unbalancedHi there 🙂 Is there any kind of a Datomic marketing kit or Datomic sales training?#2020-05-1418:31unbalancedpamphlets I can just leave lying around the building lol#2020-05-1418:31unbalancedanything#2020-05-1512:49lifecoderHi! Is it acceptable practice to call create-database/delete-database ~10 times per day with randomized db-name for use in functional tests in Datomic Cloud? Test cluster is separate from production one. I am just worried if this won’t lead to some kind of resource leaks over time…#2020-05-1513:01marshallas long as you’re on a recent release that should be fine @lifecoder, particularly in a separate system from prod
The only thing you may need to do if you encounter any issue is restart the compute node(s) on the system (it is unlikely you will; there were a couple cases where it was possible on earlier versions)#2020-05-1513:02lifecoder@marshall Thanks!#2020-05-1513:58jaretSorry for the incoming spam, we’ve got a lot to announce so here it goes:#2020-05-1513:58jarethttps://forum.datomic.com/t/ion-dev-0-9-265-and-ion-0-9-43/1446#2020-05-1513:58jarethttps://forum.datomic.com/t/new-client-release-for-pro-0-9-57-and-cloud-0-8-96/1447#2020-05-1514:50kennyIs the built-ins list for https://docs.datomic.com/cloud/query/query-pull.html#xform-option exhaustive? No support for int/long/double?
Xforms seem like they'd make ordering card many attributes nicer.
Support for clojure.edn/read-string is interesting. Implies people often store edn in a Datomic string attribute. Do people do this often?#2020-05-1514:54kennyIs qseq similar to an eduction -- passing over the data twice will recompute the result?#2020-05-1519:18stuarthallowaypassing over the data twice will rerun the query#2020-05-1513:58jarethttps://forum.datomic.com/t/datomic-cli-0-10-82-now-available/1448#2020-05-1513:59jarethttps://forum.datomic.com/t/datomic-1-0-6165-now-available/1449#2020-05-1514:10favilaThe on-prem docs for the new features looks…off, like it was meant for cloud#2020-05-1514:16favilaI also don’t see these documented on the client api docs. https://docs.datomic.com/on-prem/clojure/index.html#2020-05-1514:17favilaor :xform on the pull grammar#2020-05-1514:18favilasorry for all the nits. I have a use case in my mind and I don’t have time to test with code, so I was trying to infer from docs. I want to use :xform to get back d/entity behavior for idents#2020-05-1514:18favilaI’m not sure if it’s possible#2020-05-1514:33marshallcan you hard refresh?#2020-05-1514:33marshallthere was some caching strangeness#2020-05-1514:34marshallthe Peer API docs may not yet be updated; i’ll look at that asap#2020-05-1514:36marshall@U09R86PA4 https://docs.datomic.com/on-prem/pull.html#xform
yes, you’re right it’s missing in the grammar. will fix#2020-05-1514:41favilahard refresh doesn’t seem to fix. I also notice “q” on the on-prem “query” page is also written as if it’s for the client api#2020-05-1514:41favila(maybe it was always like that)#2020-05-1514:42marshallnot sure what you mean by “looks off like it was meant for cloud”#2020-05-1514:42marshallyeah, the docs are generally written api agnostic or client=preferred#2020-05-1514:43favilaok, so one by one:#2020-05-1514:43favilahttps://docs.datomic.com/on-prem/query.html#qseq says “`datomic.client.ap/qseq` utilizes the same https://docs.datomic.com/on-prem/query.html#grammar.” Both are client api. Maybe by design I’m thinking now#2020-05-1514:45favilahttps://docs.datomic.com/on-prem/pull.html#xform talks about resources/datomic/extensions.edn and whitelisting functions--I’m not aware of that being an on-prem thing?#2020-05-1514:46marshallit is now 🙂#2020-05-1514:46favilahttps://docs.datomic.com/on-prem/index-pull.html looks like a client api and mentions client options--maybe it is actually the same as peer#2020-05-1514:48favilare function whitelisting: wow really? that sounds very tedious in an on-prem setting#2020-05-1514:51favilais :reverse for index-pull more efficient in on-prem than (reverse (d/datoms …)) ?#2020-05-1514:51marshallyep#2020-05-1514:51marshallsignificantly#2020-05-1514:51favila… can it be backported to d/datoms etc?#2020-05-1514:51marshallpotentially; not yet, but i believe it could be in the future#2020-05-1514:53favilacould the tx log also be walked in reverse?#2020-05-1514:53favila(efficiently)#2020-05-1514:55marshallalso, possibly; I’m not sure, but i’ll pass on the question#2020-05-1514:55marshall@U09R86PA4 Peer API docs updated#2020-05-1514:55marshallhttps://docs.datomic.com/on-prem/clojure/index.html#2020-05-1514:58favilaI’m more excited about walking an index in reverse than index-pull’s other features. 
I’m not sure index-pull makes sense in on-prem otherwise#2020-05-1514:58favilaI see why it would be really important for clients though#2020-05-1513:59jarethttps://forum.datomic.com/t/datomic-cloud-668-8927/1445#2020-05-1514:59kenny> Enhancement: Improve internal record keeping of active databases that could lead to spurious error messages.
Does this fix https://support.cognitect.com/hc/en-us/requests/2598?#2020-05-1517:04jaretno, I don’t think that will resolve that issue. We are still working on reproducing.#2020-05-1513:59jaretNew Cloud, on-prem, client, CLI, ion/ion-dev release ^#2020-05-1517:46souenzzoNaming question:
Why is :xform named this way?
clojure.core/sequence for example has a xform coll signature, where xform means functions like clojure.core/cat, not clojure.core/str
I mean, I know that datomic and clojure are "independent" products, but I also know that the ideas behind both are the same
What does xform mean?#2020-05-1517:59kennyI was also curious about this. I usually think of "xform" as a transducer. Datomic's xform is not a transducer, afaict.#2020-05-1519:47mafcocincorandom newb question: Can Datomic use Foundation DB as its durable storage?#2020-05-1614:19stuarthallowaydoes Foundation DB have a JDBC driver?#2020-05-1615:19mafcocincoUnfortunately, it does not. There is an fdb-client that is required to be installed on every box. However, we would consider writing one for FDB if that is the only criterion for plugging it into Datomic.#2020-05-1615:21mafcocincoWe (my company is http://novolabs.com) are going to be adopting Datomic for our operational DB. We use FDB for our historical data and would love to have Datomic use FDB for its durable storage as we would only need one storage solution which would greatly improve our SRE.#2020-05-1617:21stuarthallowayObviously we don't test against the (not yet written) driver 🙂 but anything that correctly implements the JDBC spec (particularly transaction isolation) should work fine as a Datomic storage service. On the standard license, we would be able to support you with Datomic but not with any FDB-specific issues.#2020-05-1802:19mafcocincoUnderstood. Thanks for the help and I’ll let you know if/when we have a JDBC driver for FDB that is ready for general consumption.#2020-05-1600:23steveb8nQ: with this new release, what is the idiomatic way to do sorted query with pagination? It seems like index-pull is the answer but I wonder if there’s a way to do pagination with qseq? That would be more flexible (in terms of where conditions) than using an index. Maybe a blog post would be a good way to answer this?#2020-05-1601:02alexmillerI think Datomic forums is probably the best place to get a more in-depth answer to this https://forum.datomic.com#2020-05-1722:25steveb8nposted: https://forum.datomic.com/t/idiomatic-pagination-using-latest-features/1454#2020-05-1601:33steveb8nok.
will post there once my user is approved#2020-05-1601:33steveb8nthx#2020-05-1812:01hanDerPedernewb question about datamodeling. if we have a fact like john likes jane it would make sense for likes to be an attribute with valueType ref. but what if we want to quantify how strong this affection is on a scale from 1 to 10? should likes be it's own entity now?#2020-05-1914:31tvaughanA tuple could work here too#2020-05-2016:01unbalancedI would probably have a concept of a "relationship". Or, if it's only one way (yikes!) then perhaps make "interest" and attribute of John. And yes, I think a ref makes sense. You might also make "interests" have multiple cardinality. And then the "interest" would have the "object of interest" and perhaps "strength of interest"#2020-05-1814:54calebp@peder.refsnes That sounds reasonable. There’s nothing stopping you from asserting another attribute for the strength on john, but if likes is cardinality many, you end up having to parse attribute names to make associations between likes and strengths. That’s usually how I divide these cases, if the cardinality is one, it’s easier not to have to look up another entity, but if the cardinality is many I usually create new entities.#2020-05-1815:09calebpI have just upgraded my (cloud) storage and compute stacks. I have another cloudformation stack that creates the resources for exposing my Ions through API Gateway (with Lambda proxy). Once I upgraded, the API Gateway endpoints stopped working. The immediate problem was API Gateway didn’t have permission to call the Lambdas. My working theory is that the Lambdas were recreated as part of the upgrade. They have the same names and the same ARNs, but AWS seems to see them as different resources. I can remove those resources from my template, update the stack, add them back, update again and everything works, but that adds a little down time. 
Wondering if anybody has figured something else out for this.#2020-05-1915:59babardoHello, I try to use the new index-pull but it always throws a Datomic Client Exception. (We are using datomic-cloud). Does someone have any idea on the cause?#2020-05-1916:00babardoHere is my deps.edn, I also updated our query group to 668-8927.#2020-05-1916:01babardoOn the other hand, datomic.client.api/index-range, which is also using the :avet index, works fine#2020-05-1916:14babardocommand and stack trace 👇#2020-05-1923:34tvaughanJust a guess, but I suspect the access key and/or secret key is incorrect #2020-05-2008:33babardoYes it was my first guess but all other `datomic.client.api` functions like /q , /index-range return the wanted data.#2020-05-1916:04ghadi@babar.ntm I don't speak for the Datomic team, but you'll need to supply at least how you called it (which arguments), and the full exception trace to receive useful help#2020-05-1916:21ghadithanks!#2020-05-1916:38souenzzohttps://docs.datomic.com/on-prem/get-datomic.html
Any plans to recommend/officially support Java 11+?#2020-05-2020:40kennyI have a query that looks something like this:
(d/q '[:find (pull ?m pattern) ?region
:in $ pattern ?region [?aws-class ...]
:where
[?m ::machine-type/aws-region ?region]]
(d/db conn)
[::machine-type/id]
"us-west-2"
[])
=> []
That query returns an empty vector []. If I alter that query to include anything in the vector passed as the final argument to the query, I get back 271 results. Is this expected behavior?
(d/q '[:find (pull ?m pattern) ?region
:in $ pattern ?region [?aws-class ...]
:where
[?m ::machine-type/aws-region ?region]]
(d/db conn)
[::machine-type/id]
"us-west-2"
["something totally random"])
=> vector-of-271-tuples#2020-05-2119:12rkiouakCan anyone point me to a linke describing current state of datomic cloud BLOB/bytes datatype support?#2020-05-2119:22marshallhttps://www.datomic.com/cloud-faq.html#_what_sort_of_applications_is_datomic_not_a_good_fit_for#2020-05-2119:23marshallyou should not store blobs in datomic#2020-05-2119:23rkiouakthanks, I did see that. I would not characterize my storage need here as large unstructured binary data -- rather its encrypted string values#2020-05-2119:23marshallhow big?#2020-05-2119:24marshallyou could use strings, but they’re limited to 4096 characters#2020-05-2119:25rkiouakI would strongly prefer not to incur the base64 encode+decode cost#2020-05-2119:28rkiouakis the plan to sunset the on prem db.type/bytes at some point?#2020-05-2119:30marshalli doubt it will go away in On Prem. That’s not very “Spec-ulation” compatible 😉#2020-05-2119:31rkiouakappreciate the info & help#2020-05-2119:31rkiouakroadmap information w.r.t. small binary data in datomic cloud would be super interesting to me#2020-05-2119:31marshalli would also expect that Cloud will eventually support additional options for this use case#2020-05-2119:31marshallbut i dont know what it’s going to look like or when it might be#2020-05-2119:32rkiouakok, thanks#2020-05-2119:41rkiouakI know you said you aren't sure what a bytes type or similar would look like, but just in case the referenceprompts anything, any further info the on the LOB entry here? https://docs.datomic.com/on-prem/moving-to-cloud.html#other#2020-05-2119:49marshallthat would be what i’m referring to; dont’ know what it will look like or when it might appear#2020-05-2207:20amarjeetHi, I am wondering if Datomic addresses read-conflict or write-skew scenarios at the commit time? I discovered that Clojure has ensure that can be used along with ref to achieve similar results.
To elaborate the above point with an example: Let's say that `process A` reads `a` from the DB and writes to `b` in the DB. While `process A` was in progress, `process B` mutated (and committed) `a` with a different value. So, when `process A` tries to commit `b`, it should be retried in order to address conflicts in the read-value of `a`.#2020-05-2212:21favilaThere are three mechanisms: transaction functions, schema invariants, and attribute predicates/ensure. Transaction functions cannot skew because they effectively have a global lock on the database while they run; and they can abort based on the before-transaction db value by throwing. However multiple txfns cannot see the assertions/retractions of other datoms and txfns in the same tx, nor can they see the after-transaction value. (:db/cas is a transaction function.) schema invariants (cardinality, uniqueness, and identity) detect datom conflicts in the after-transaction value and abort, or upsert to an entity possibly created in another transaction. attribute predicates and ensure are a more powerful generalization of this--they can abort on any condition visible in the after-transaction value.#2020-05-2212:23favilaNote all these mechanisms merely provide different ways to read or abort.
Retries are up to you#2020-05-2212:36amarjeetSo, it means that the situation i described above (read-conflict for ‘a’) will never arise because transactions are being literally serialized (global db lock)?#2020-05-2212:42favilait can happen, but not in the traditional way they would happen in SQL#2020-05-2212:43faviladatomic transactions are sets of atomic assertions and retractions, applied serially by a single writer (the transactor)#2020-05-2212:44favilahow those sets are made, and whether those sets can be applied in any order, is still a concern#2020-05-2212:46favilatransactions from a given peer will be applied in the same order as submitted relative to one another, but may be interleaved with transactions from other peers#2020-05-2212:49favilabut each of those transaction sets were built (presumably) by reading a database value and conditionally producing a new one. so that’s read skew-ish.#2020-05-2212:49favilaYou can avoid the possibly stale read by preparing the set in a transaction function (i.e. with a read lock), but it’s still just producing a set of assertions and retractions, not actually immediately writing (meaning: other transaction functions in the same db won’t see “writes” by other assertions/retractions in the same tx)#2020-05-2213:12amarjeetHmm, okay, i'll read further in the docs, thanks :)#2020-05-2419:50kmyokoyamaHello, I'm trying to use lookup refs in a transaction, but I'm having trouble. I have the following tx-data in which I try to add a new fact: the user identified by follower-id now follows the user identified by followed-id (it is a Twitter-clone app):
{:db/id [:user/id follower-uuid]
:user/follow [:user/id followed-uuid]}
The schema is already in place and :user/id is unique. I'm receiving this error message from Datomic:
#object[datomic.promise$settable_future$reify__4751
0x3be40c44
{:status :failed,
:val #error{:cause ":db.error/not-an-entity Unable to resolve entity: 956eb63e-6f30-4e94-a378-eee8ae0156cd in datom [[:user/id #uuid \"37b70f39-70cd-473d-9959-f81502621302\"] :user/follow #uuid \"956eb63e-6f30-4e94-a378-eee8ae0156cd\"]",
:data {:entity #uuid"956eb63e-6f30-4e94-a378-eee8ae0156cd",
:datom [[:user/id #uuid"37b70f39-70cd-473d-9959-f81502621302"]
:user/follow
#uuid"956eb63e-6f30-4e94-a378-eee8ae0156cd"],
:db/error :db.error/not-an-entity},
[more omitted]
The schema for :user/id and :user/follow are the following:
{:db/ident :user/id
:db/valueType :db.type/uuid
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity
:db/index true}
{:db/ident :user/follow
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many}
Does anyone have a clue? Thank you#2020-05-2420:49Joe LaneHi @UNZQ84WJV , do both of the users exist in the database?#2020-05-2420:50kmyokoyamaYes, they do. I can correctly retrieve them.#2020-05-2420:53kmyokoyamaAnd if I replace [:user/id followed-uuid] with the entity id of the followed user, it works as expected.#2020-05-2420:53Joe LaneAnd what if your tx data was instead
{:user/id follower-uuid
:user/follow [:user/id followed-uuid]}#2020-05-2420:54kmyokoyamaI haven't tried that. I'll try it and comment here#2020-05-2420:56kmyokoyamaThe rationale would be: since :user/id is unique, your suggestion would upsert the :user/follow attribute?#2020-05-2420:57Joe LaneYep, and it will also upsert the follower user as well, without the need for db/id (unless you need a tempid for something, but it doesn't look like this example does.)#2020-05-2420:58Joe LaneWhich datomic is this? Datomic cloud, free, pro/on-prem?#2020-05-2421:00Simon O.Are you sure u're using the uuid fully and correctly? Could you post the full transaction with the value of the follower-uuid ?#2020-05-2421:06kmyokoyamaI'm using the datomic-free API, but this example is running with an in-memory database ("datomic:<mem://mydb|mem://mydb>")#2020-05-2421:11Joe LaneOk, did the above work?#2020-05-2421:35kmyokoyamaSorry, I was away from my laptop, @U0CJ19XAM. It didn't work. It returns the same error as before.#2020-05-2421:37kmyokoyama@U2TLBUVRS, yes, it looks ok:
(d/transact conn [#:user{:id #uuid "956eb63e-6f30-4e94-a378-eee8ae0156cd", :follow [:user/id #uuid "37b70f39-70cd-473d-9959-f81502621302"]}])#2020-05-2421:41kmyokoyamaI have other data that look similar and work. For instance:
#:like{:id uuid
:created-at created-at
:user [:user/id user-uuid]
:source-tweet [:tweet/id source-tweet-uuid]}#2020-05-2421:51Simon O.For the follow, I think it should be ... :follow [[:user/id ...]] 'cos ref type-> many card.., or use a set #{[:user/id ...]}#2020-05-2422:01Simon O.#2020-05-2422:05kmyokoyamaWow, that worked! Will Datomic create a new datom of the form [follower-entity-id :user/follow new-followed-entity-id] for each successive transaction? For instance, when first user (entity id 1) follows second user (entity id 2) it creates datom [1 :user/follow 2 tx-1 true] and then when first user also follows third user (entity id 3), then the following occurs: [1 :user/follow 2 tx-2 false] [1 :user/follow 3 tx-2 true] ? Obviously this wouldn't be the correct result.#2020-05-2422:07kmyokoyamaOk, your screenshot answers that. Thank you, @U2TLBUVRS and @U0CJ19XAM!#2020-05-2500:54naomarikDoes the starter license also include ability to run in multiple environments (dev/staging/prod) with the same license?#2020-05-2502:01naomarikJust tried, looks I can.#2020-05-2513:35fmnoisehi everyone, is there any way to add new schema-level attributes (like db/doc) which would be shown in datomic console?#2020-05-2514:49naomarikDatomic console still works? On the latest datomic (1.0.6165) I get this error: ERROR: This version of Console requires Datomic 0.8.4096.0 to run#2020-05-2709:57eany experiences with migrating or mirroring data from one datomic cloud instance to another? is there some straightforward technique to copy/replay all transactions to a new instance, how would they be queried from the source db?#2020-05-2713:35Joe Lane@e https://docs.datomic.com/client-api/datomic.client.api.html#var-tx-range
(d/tx-range conn {:start nil :end nil :limit -1}) will get you started.#2020-05-2713:36Joe LaneIt's not a trivial transformation though.#2020-05-2714:35stuarthallowayWe are considering bumping the Clojure requirement for the Peer API. Please let us know your thoughts! https://forum.datomic.com/t/peer-api-clojure-version-poll/1469#2020-05-2716:26kennyIs it okay to publish a public docker image with the datomic-access script in it?#2020-05-2718:21currentoorAny plans on adding native support to Datomic for the java.time classes? Or just java.time.Instant?
Right now I’m storing everything (local dates, points in time, etc) as java.util.Date and converting to java.time.Instant to perform operations. Being able to read things out as java.time.Instant would mean a lot less conversion back and forth.#2020-05-2807:24tatutIf I migrate data serialized from another db instance to a new one, is it safe to transact them with the original :db/id numbers? or do I need to make them strings and do some mappings… (in datomic cloud)#2020-05-2808:18fmnoisenope, it's not safe, you should have some app level identities#2020-05-2808:28tatutok, thanks#2020-05-2812:13unbalancedFINALLY got company to green light datomic and now that I'm using the real thing (vs free/datascript), I don't really have a good mental model for why I would choose the client API (`datomic.client.api`) vs datomic.api. Is this a philosophical thing or are they different tools for different mediums? Or are they just different tools in different bags that could be used for similar tasks?#2020-05-2812:26arohnerdatomic.api is only available for peers (on-prem), not cloud#2020-05-2812:29arohnerhttps://docs.datomic.com/on-prem/clients-and-peers.html#2020-05-2812:33unbalancedahhh perfect#2020-05-2813:12unbalancedThank you! 🙇#2020-05-2813:10arohnerHow do queries with composite tuples work? Can you ‘destructure’ and query via the tuple, or is going through the source attributes the only way?#2020-05-2813:18marshall@arohner https://docs.datomic.com/cloud/query/query-data-reference.html#untuple
yes 🙂#2020-05-2813:18marshallyou can do it either way#2020-05-2813:18arohnerthanks#2020-05-2814:29arohnerdb.error/entity-attr Entity -9223301668109487396 missing attributes
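A few messages up, marshall notes that composite tuples can be queried "either way". A minimal sketch of both forms, borrowing the :semester/year+season composite attribute from the Datomic docs example (attribute names are illustrative, db is assumed to be a database value):

```clojure
;; Through the composite tuple itself, destructured with the untuple built-in:
(d/q '[:find ?year ?season
       :where
       [?e :semester/year+season ?tup]
       [(untuple ?tup) [?year ?season]]]
     db)

;; Or through the source attributes directly:
(d/q '[:find ?year ?season
       :where
       [?e :semester/year ?year]
       [?e :semester/season ?season]]
     db)
```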
#2020-05-2814:37arohnerIs that an error in the way I defined the spec, or a bug in datomic?#2020-05-2814:38ghadiplease provide your inputs#2020-05-2814:38ghadiboth the d/transact args and the specs#2020-05-2908:48arohner{:db/ident ::money/currency
:db/valueType :db.type/keyword
:db/cardinality :db.cardinality/one}
{:db/ident ::money/value
:db/valueType :db.type/bigdec
:db/cardinality :db.cardinality/one}
{:db/ident ::money/money
:db/valueType :db.type/tuple
:db/tupleAttrs [::money/currency ::money/value]
:db/cardinality :db.cardinality/one}
{:db/ident ::accounts/tx-item
:db.entity/attrs [::accounts/account-id ::money/money]}
(d/transact conn {:tx-data [#:griffin.proc.accounts{:tx-id #uuid "ddcc9e32-c65e-5024-b82a-be3a3324d496", :tx-items #{{:db/ensure :griffin.proc.accounts/tx-item, :griffin.proc.accounts/account-id #uuid "35db4a29-0bcc-5614-b27e-920c5f31a3a4", :griffin.money/money [:GBP 0.00M]} {:db/ensure :griffin.proc.accounts/tx-item, :griffin.proc.accounts/account-id #uuid "c0d87137-2c39-520c-ad6d-aa529843c38f", :griffin.money/money [:GBP 0.00M]}}}]})#2020-05-2911:55favilaAttributes with TupleAttrs are not meant to be written directly: they will be computed. This tx writes money/money, it should instead write currency and value#2020-05-2911:56favilaI don’t think composite attr updates flow back into the non-composite attrs #2020-05-2911:56favilaOnly the other way around#2020-05-2912:51arohnerThe official docs seem to do that:
[{:reg/course [:course/id "BIO-101"]
:reg/semester [:semester/year+season [2018 :fall]]
:reg/student [:student/email "#2020-05-2912:51arohnerIsn’t that year+season a write to a composite tuple by passing in a vector?#2020-05-2912:52favilano that is a lookup ref#2020-05-2912:53favila:reg/semester [:semester/year+season [2018 :fall]] will desugar to [:db/add "entity-temp-id" :reg/semester [:semester/year+season [2018 :fall]] which will resolve to an entity id with :semester/year+season equal to [2018 :fall]#2020-05-2912:53favila(or fail if no such entity)#2020-05-2912:54favilaI think your cryptic, horrible error message is :db/ensure complaining that the source tuples are not written#2020-05-2912:55favilai.e. that ::money/currency ::money/value were not asserted#2020-05-2912:55arohnerit works when I assert money/currency and money/value, thanks#2020-05-2912:56favilaNote this in the docs:#2020-05-2912:56favila> Composite attributes are entirely managed by Datomic–you never assert or retract them yourself. Whenever you assert or retract any attribute that is part of a composite, Datomic will automatically populate the composite value.#2020-05-2912:58favilaso in your example you were looking at in the docs, the earlier transaction `
{:semester/year 2018
:semester/season :fall}
is what wrote [:semester/year+season [2018 :fall]]#2020-05-2911:25arohnerhrm, it seems like my tuple write was failing, and I don’t understand why.#2020-05-2912:37dmarjenburghI'm trying to do an index-pull but running into a Datomic Client Exception:
clojure.lang.ExceptionInfo: Datomic Client Exception {:cognitect.anomalies/category :cognitect.anomalies/forbidden, :http-result {:status 403, :headers {"server" "Jetty(9.4.24.v20191120)", "content-length" "19", "date" "Fri, 29 May 2020 12:34:05 GMT", "content-type" "application/transit+msgpack"}, :body nil}}
at datomic.client.api.async$ares.invokeStatic(async.clj:58)
at datomic.client.api.async$ares.invoke(async.clj:54)
at datomic.client.api.sync$channel__GT_seq.invokeStatic(sync.clj:72)
at datomic.client.api.sync$channel__GT_seq.invoke(sync.clj:69)
at datomic.client.api.sync$eval20791$fn__20808.invoke(sync.clj:113)
at datomic.client.api.protocols$fn__11940$G__11875__11947.invoke(protocols.clj:126)
at datomic.client.api$index_pull.invokeStatic(api.clj:293)
at datomic.client.api$index_pull.invoke(api.clj:272)
I can query the db normally otherwise.#2020-05-2912:49favilaare you sure the target server supports it?#2020-05-2913:14dmarjenburghHaha, I was under the impression the upgrade was already deployed, but it was still in the pipeline :face_palm::skin-tone-3: . Works now.#2020-05-2912:50arohnerThe fn is either a fully qualified function allowed under the :xforms key in resources/datomic/extensions.edn, or one of the following built-ins:
I can’t find anything else in the docs that reference extensions.edn. Where can I learn more about that?#2020-05-2913:07faviladoubling down on this question, it’s also not clear to me whether the extension function needs to exist on the client’s classpath or the client-server’s classpath#2020-05-2913:08favilaor why this is necessary at all for the on-prem api#2020-05-2915:51marshallThe extensions.edn file needs to be available in the classpath at that relative path (`resources/datomic/extensions.edn`)#2020-05-2915:51marshallit needs to be there in the system that will be doing the work#2020-05-2915:51marshallso if you’re using peer, in the peer process#2020-05-2915:52marshallfor client, it needs to be in the cp of the peer-server process#2020-05-2915:52marshallif you’re using it inside a transaction function, it would need to be in the transactor cp#2020-05-2916:12favilaAnd it looks like {:xforms #{var/name ,,,}} ?#2020-05-2916:14marshalli believe the value is a vector (or list) of symbols#2020-05-2916:14marshallset may work too#2020-05-2916:16marshallhttps://docs.datomic.com/cloud/ions/ions-reference.html#ion-config#2020-05-2916:16marshallbased on cloud, I would say a vector of fully qualified symbols#2020-05-2916:16marshallI’ll look at adding that detail in onprem docs#2020-05-2915:45jaretHowdy! We just released a fix for Datomic On-Prem Console. The latest release had a bug that caused console to fail to start. https://forum.datomic.com/t/datomic-console-0-1-225-now-available/1472#2020-05-2915:45arohnerIs it possible to use a lookup ref in the same transact that creates the unique identity? 
It seems like the answer is no#2020-05-2915:49marshallNo, but you can use a tempid for that#2020-05-2915:57arohnerBut then I need to know whether the unique identity already exists or not#2020-05-2915:58marshalli think i’d need more detail
If you have one entity being asserted that has a unique ID and another that references it via tempid, Datomic’s entity resolution should handle that correctly whether or not the entity with the unique ID already exists or not. If it does, it will become an upsert, if it doesn’t it will be created#2020-05-2916:17favilaI’m guessing from our earlier conversation that Allen wants to use this with a unique-identity composite attr. I think this doesn’t work unless you assert the composite. e.g. {:db/id "tempid" :attr-a 123 :attr-b 456} where the upsert attr is :attr-a+b#2020-05-2916:18marshallyes, agreed if you’re upserting you need to include the :attr-a+b in the transaction#2020-05-2920:07arohnerAFAICT, it doesn’t work with a scalar unique attribute either#2020-05-2920:10marshallCan you provide your txn data and results you see not working?#2020-05-2920:13arohnerThe code is kind of lengthy and it’s late here (London).
I’m trying to build a ledger. When inserting transaction items:
{:db/ensure ::accounts/tx-item
::accounts/account [::accounts/account-id (::accounts/account-id i)]
::money/currency (-> i ::accounts/tx-amount :currency keyword)
::money/value (-> i ::accounts/tx-amount :value)}#2020-05-2920:14arohnerI’m trying to insert :accounts/account, in the same transaction as the tx-items. Inserting tx-items fails with
:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Unable to resolve entity: [:griffin.proc.accounts/account-id #uuid \"17af261f-9ad5-58e6-938f-b3b7a0ffee22\"] in datom [-9223301668109421343 :griffin.proc.accounts/account [:griffin.proc.accounts/account-id #uuid \"17af261f-9ad5-58e6-938f-b3b7a0ffee22\"]]"
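The anomaly above follows from how lookup refs resolve: against the database value as of the start of the transaction, so they cannot find an entity first created in that same transaction. A sketch of the failing shape next to the string-tempid workaround marshall describes below (account-uuid stands in for a real value):

```clojure
;; Fails: the lookup ref can't see an account created in this same tx.
[{:griffin.proc.accounts/account-id account-uuid}
 {:griffin.proc.accounts/account [:griffin.proc.accounts/account-id account-uuid]
  :griffin.money/currency :GBP
  :griffin.money/value 0.00M}]

;; Works: a string tempid names the entity within the transaction;
;; if the unique id already exists, Datomic upserts instead of creating.
[{:db/id "account"
  :griffin.proc.accounts/account-id account-uuid}
 {:griffin.proc.accounts/account "account"
  :griffin.money/currency :GBP
  :griffin.money/value 0.00M}]
```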
#2020-05-2920:16marshallYou can't use the lookup ref#2020-05-2920:16marshallYou need to use the tempid#2020-05-2920:16marshallIf you're creating the entity in the same transaction#2020-05-2920:17marshallCreate the account with a db/id "foo"#2020-05-2920:17marshallAnd "foo" in place of your lookup ref#2020-05-2920:19arohnerRight. That’s not convenient because it requires me knowing whether the entity already exists or not, which requires an extra query#2020-05-2920:19marshallNot if you have the account entity in the same txn#2020-05-2920:22marshall[{:account/id "someuniquevalue"
:db/id "foo"}
{:transaction/value 20
:transaction/account "foo"}]#2020-05-2920:22marshallif account/id “someuniquevalue” exists, it will upsert#2020-05-2920:22marshallif not it will create#2020-05-2920:22marshalleither way, the txn with value 20 will have a ref attr pointing to that account#2020-05-2920:23arohnerIt’s been several years since I used datomic in anger. At the time, the advice was don’t assert facts unnecessarily. Won’t that create new datoms every time, even if the account already exists?#2020-05-2920:24marshallno#2020-05-2920:24marshalldatomic does redundancy elimination#2020-05-2920:24marshallif the acct entity exists it will upsert#2020-05-2920:24marshallif it doesnt it will be created#2020-05-2920:25marshallany attr/val pairs that already exist for that entity will be eliminated if the value is identical#2020-05-2920:25marshallif the value is different it will retract the old value and assert the new value#2020-05-2920:25marshallif the attr is not present at all on that entity it will assert the attr/value for that entity#2020-05-2920:27marshallnot sure where “dont assert facts unnecessarily” would come from
certainly doing the work of redundancy elimination has some cost, but i would not expect it to be prohibitive, especially in this case, as you have to “find” the account entity either way, whether it’s via the entity being asserted or with the lookupref#2020-05-2920:30marshalla completely redundant txn would create a tx/Instant datom#2020-05-2920:30marshallso if everrything you assert is duplicate you’d be accumulating an “unnecessary” couple of datoms#2020-05-2920:31marshallwhich again, not a big deal as long as you arent doing it in huge numbers#2020-05-2920:31marshalli.e. here and there totally nbd
every single minute, 10 times a minute all the time… maybe not so great#2020-05-2920:37arohnerThat’s good to know#2020-05-2920:37arohnerThe rest of the transaction definitely has to happen and will have novelty, so it sounds like nbd#2020-05-2920:59marshall👍#2020-05-2921:28Kaue SchltzHi there, We're currently using datomic cloud and I've been stuck with the following:
We have several source applications providing financial data, each one with its own payload, so we decided to have a 'normalizer' service
to convert each format to a common payload, so we can build our products in an agnostic manner.
To illustrate this:
;; {:source.a/name "John Doe"
;;  :source.a/amount 44.50}
;; would become something like ->
{:common/name "John Doe"
 :common/amount 4450
 :common/source {:source.a/name "John Doe"
                 :source.a/amount 44.50}}
We chose to keep the original format in the final structure to maintain some backtracking and ease integration with legacy systems.
Now, say there is a buggy implementation in this conversion function rounding floats or any other error, and we end up with incorrect values
in the common payload, but we still have the original data. Does datomic have any support for me to bulk 'alter' that data?
The first solution that came to my mind, was to query all the incorrect data, extract the source info, pass it through the correct function, then transact it back
to datomic. But I wonder, does Datomic have any feature to better support that, something closer to a "compare and swap-like" feature?
Thanks to you all, patient readers 😄#2020-05-2921:47marshallDatomic has compare and set: https://docs.datomic.com/cloud/transactions/transaction-functions.html#db-cas#2020-05-2921:47marshallYou'd need to handle the bulk nature yourself.
Also, if your entities are cardinality one, you could just reassert all the values#2020-05-2921:48marshallOnes that were the same would be unchanged (redundancy elimination)#2020-05-2921:48marshallOnes that differ would be "upserted"#2020-05-2921:49marshallIf the attributes are cardinality many you'd need to retract them explicitly#2020-05-2921:49marshall@schultzkaue ^#2020-05-2921:50Kaue Schltzthanks a lot#2020-05-3019:49drewverleeWouldn't it be safer to just create a new set of data? e.g common.v2/name?#2020-05-3020:22drewverleeI take it the trade off in having unique identities is that they require more book keeping by the database?#2020-05-3120:14drewverleeI expected the following query to return [[2] [2]] instead of [[4]] because the with clause should have grouped by list first:
(d/q '[:find (count ?todo)
       :with ?l
       :where
       [?l :list/todo ?todo]]
     db)#2020-06-0100:14drewverleei don't know who to give this feedback to but https://docs.datomic.com/cloud/operation/upgrading.html#know-your-version
really needs to switch the order of instructions around so that "storage and compute upgrade" is first.#2020-06-0119:52drewverleeWhat's an ideal way to do schema discover on a large database?#2020-06-0121:03alidlorenzothe ion starter has an example of querying the schema of a database. not sure if that’s what you’re asking about, but I’ll post link in case it is: https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L49-L56#2020-06-0120:49ghadi"discover"?#2020-06-0213:44drewverleeThe simplest form of discover is to just get all the db idents and do a string filter on them. I'm not sure it gets better than that. The get-schema function alid linked seems to have some hints in it.#2020-06-0213:50ghadiin Datomic, attribute definitions are themselves entities#2020-06-0314:00drewverleemakes sense thanks.#2020-06-0213:51ghadiyou can query them normally:
[:find ?attribute
 :where [_ :db/ident ?attribute]]
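Since attribute definitions are plain entities, the ident query above extends naturally to a pull. A sketch (pull pattern illustrative, db assumed to be a database value) that restricts on :db/valueType so only actual attributes match, rather than every ident:

```clojure
;; Pull full attribute definitions, not just the idents.
(d/q '[:find (pull ?a [:db/ident
                       {:db/valueType [:db/ident]}
                       {:db/cardinality [:db/ident]}
                       :db/doc])
       :where [?a :db/valueType]]
     db)
```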
#2020-06-0213:51ghadiand you can also augment them with your own information#2020-06-0213:51ghadithat datomic doesn't know about#2020-06-0213:53ghadiso you can add attributes like :drewverlee.attribute/relates-to#2020-06-0213:53ghadito relate one attribute to another#2020-06-0213:54ghadior to assist making schema html documentation#2020-06-0312:23robert-stuttafordhowdy @marshall, just following up on that question of AMI configuration 🙂
also, i have a new question, about DynamoDB and garbage collection. we GC once a week, for a-month-ago-or-older as the manual suggests.
our ddb table is 220gb+ big. a fresh Datomic restore is ~35gb. should we just do a table hopscotch every so often? what is all that extra data, if not the stuff the Datomic GC process would catch?#2020-06-0316:11marshall@robert-stuttaford I’ll try to get something for the AMI config today or tomorrow, sorry for the delay
The additional storage is likely unrecoverable garbage, which can be generated by failed indexing jobs and/or transactor failovers during indexing. Yes, the easiest way to deal with it is to restore your DB into a fresh table#2020-06-0316:33robert-stuttafordthank you @marshall - that's helpful!#2020-06-0413:51joshkhwhile using a library that implements serialization, i found that query results (which appear to be PersistentVector) throw an .NotSerializableException: datomic.client.impl.shared.Db exception, where as a usual Clojure PersistentVector does not. calling vec on the results of a query solves the problem, but i'm wondering if there's a better way to solve this#2020-06-0414:04favilaI think that’s the right way. The result object, especially from the client sync apis, are a bit magical#2020-06-0414:04favilamany of them are lazily realized#2020-06-0414:05favilaif you use :keys in a query, that is a datatable-like object#2020-06-0414:05favilaetc#2020-06-0414:06favilaeven in the peer api this is true. queries may return ArrayList instead of vector for efficiency. d/datoms and friends return a reified thing that implements seq and iterable#2020-06-0414:07favilaso in general the apis only guarantee the interfaces and behavior of return values, not type#2020-06-0415:20marshall@robert-stuttaford ^^#2020-06-0514:43YasHello Guys, does anyone able to restore datomic database into postgres?#2020-06-0516:19faviladatomic on-prem backups are storage-system agnostic. You can restore a backup from any kind of storage to any other kind#2020-06-0619:01David PhamI am trying to understand the pricing of Datomic on prem. it costs 5k$/year/system. How do you define the number of systems? Is it the number of writer?#2020-06-0721:48jdhollisI’ll defer to @marshall, but I suspect a system is the combination of compute + storage. For Datomic Cloud, this easily maps to the CloudFormation stacks involved. I’m not sure how that plays out for on-prem.
You will only have one transactor (i.e., “writer”) at any time for each system (though you can have more than one running for fail-over).#2020-06-0802:58alexmillerYes, for on-prem a system will have one active transactor (may also have an HA transactor)#2020-06-0805:10David PhamThanks!#2020-06-0811:11joshkhi can't believe i'm asking this, but are there any future plans for a nodejs compatible datomic cloud client? i'm just thinking about speedy lambdas that can't really afford the cold startup time of the JVM. i guess there's always graalvm, but still, the ease of whipping up and deploying a cljs lambda is attractive.#2020-06-0815:58jdhollisYou could also just use ions with HTTP Direct.#2020-06-0815:58jdhollis(I’m assuming you’re wiring these up to an API Gateway.)#2020-06-0815:59jdhollisThey stay spun up.#2020-06-0816:00jdhollisYou have to handle routing within the ion, but it has a significant (positive) response time impact.#2020-06-0816:46joshkhsounds promising but i don't quite follow. 
HTTP direct lets me route api gateway traffic directly to datomic, but i don't see how that lets me query datomic from a cljs (nodejs) lambda which is my goal 🙂#2020-06-0816:49joshkhit looks like on-prem has a REST api, maybe cloud has something similar?#2020-06-0817:19jdhollisWhat’s your Lambda hooked up to?#2020-06-0817:19jdhollisTypically, I only worry about cold starts if it’s user-facing.#2020-06-0817:20jdhollis(Though I suppose there’s a Rube Goldberg version that hits a private API Gateway endpoint proxying directly to an ion.)#2020-06-0820:30joshkh^ yeah, i entertained the idea 😉 i don't have a specific use case, but let's say something like an API Gateway Authorizer, or an authentication lambda hooked in to Cognito, both of which are customer facing#2020-06-0821:15csmNot too long ago I went and made a nodejs cloud/peer server project: https://github.com/csm/datomic-client-js not official, but it does work with cloud and peer server#2020-06-0821:25jdhollisNeat.#2020-06-0821:27jdhollisAlas, not a lot of good options there if the Lambda is low traffic.#2020-06-0821:27jdhollisEven the Lambdas created to proxy to ions use the JVM if I’m not mistaken.#2020-06-0821:28jdhollisThe API Gateway version might be the best option, latency-wise 😛#2020-06-1012:12joshkh@UFQT3VCF8 this is fantastic, thank you for sharing! i'm curious - why did you write the library in JS and not CLJS?#2020-06-0813:11arohnerIs there a way to assert a query uses an index?#2020-06-0821:07colinkahnI’m trying to understand the terminology in datomic for “peer”. Is the Peer api and peer server similar in some way or does peer just have two meanings?#2020-06-0822:03favilaThe peer api is the api used to become a peer. 
A peer server is a peer that provides the server half of the client api#2020-06-0822:04favila“peer” means roughly “member of the datomic cluster”. They have direct connections to transactor and storage#2020-06-0822:06favilahttps://docs.datomic.com/on-prem/architecture.html#peers#2020-06-0915:39colinkahn@U09R86PA4 thanks, I think it makes sense now. Peer and peer server is the full Datomic api with caching etc, where Client is just an interface that connects to the peer server.#2020-06-0916:57favilacorrect. Although a bit of nuance: the client api is designed to be possible to use from a non-peer process, but in certain circumstances for performance it can actually run in a peer and use that peer’s resources directly#2020-06-0916:57favilaI think this is what ions do#2020-06-0916:58favilathey use the client api, but ion processes are also peers so the client api is implemented to call directly into the peer api without crossing a process boundary#2020-06-0919:35colinkahnInteresting, but this is some custom thing that is happening? I was curious if you could use the connection from the peer api with a client, but the apis didn’t seem compatible, with Client requiring an endpoint. But there was a :server-type :local which I couldn’t find docs on that made me wonder#2020-06-0920:21favilathere are three implementations I know about: client (use a transit connection to a server, shared by peer-server and cloud, used by ion if running outside the cloud), peer-client, and local (used by ion if the ion is running in the cloud)#2020-06-0920:21favilapeer-client looks like it would be for on-prem peers; local is for cloud “peers”#2020-06-0920:22favilabut I’ve never seen the implementations for either one, and I don’t think they’re directly supported#2020-06-1009:50katoxHi, we might need to transfer an existing datomic cloud system to a new aws account. 
What is the best way to handle that?#2020-06-1022:47kennyWhen deploying a Datomic HTTP direct endpoint, the final step in creating an API gateway says:
> Enter your `http://$(NLB URI):port/{proxy}` as the "Endpoint URL". This NLB URI can be found in the Outputs tab of your compute or query group https://console.aws.amazon.com/cloudformation/home#/stacks under the "LoadBalancerHttpDirectEndpoint" key
The value in my CF Outputs tab is formatted like this "http://entry.my-datomic-system-name.us-west-2.datomic.net:8184". If I were to follow the docs exactly, I would end up with an Endpoint URL that looks like this: http://http://entry.my-datomic-system-name.us-west-2.datomic.net:8184/{proxy}. I'm assuming that is not what the docs wanted, correct? #2020-06-1022:55Joe Lane@kenny You can watch the video tutorial on http direct for more clear instructions. Definitely don't do http://http://...#2020-06-1022:56kennyI didn't 🙂 Following the docs verbatim would lead to that URL. Surprised it wasn't caught. Not really a huge fan of video docs...#2020-06-1022:55Joe Lanehttps://docs.datomic.com/cloud/livetutorial/http-direct.html#2020-06-1022:59kennyAny idea why all calls to a Datomic http direct endpoint result in a 500 with this response?
{
"message": "Internal server error"
}
The API gateway logs end with a very unhelpful error.
Execution failed due to configuration error: There was an internal error while executing your request
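[Aside for context: the `:http-direct` entry that marshall asks about later in this thread points `:handler-fn` at a plain request-to-response function. A hypothetical Ring-style sketch; the name and body are assumed, not taken from kenny's actual `cs.ions.pricing-api/web-handler`:]

```clojure
;; Hypothetical sketch of an ion :http-direct handler: a function from a
;; Ring-like request map to a response map. Names and body are assumed.
(defn web-handler [{:keys [uri]}]
  {:status  200
   :headers {"Content-Type" "text/plain"}
   :body    (str "hello from " uri)})
```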
#2020-06-1022:59kennyI don't think the request is even hitting Datomic.#2020-06-1023:02Joe LaneDid you watch the Video Tutorial?#2020-06-1023:03kennyIs there really info embedded in a video tutorial that is not in the textual docs?#2020-06-1023:03Joe LaneThe error message is telling you that you misconfigured it.#2020-06-1023:04Joe LaneI've seen that error before when I was working on creating an http-direct deployment and, in fact, I did misconfigure it. You're totally right that it isn't hitting datomic.#2020-06-1023:09kennyWatched the video. I have followed the steps exactly & still get the 500 🤔#2020-06-1023:17kennyFull example logs:
Execution log for request 6c5c6e23-0d88-4ac3-a160-374fc3842a83
Wed Jun 10 23:11:17 UTC 2020 : Starting execution for request: 6c5c6e23-0d88-4ac3-a160-374fc3842a83
Wed Jun 10 23:11:17 UTC 2020 : HTTP Method: POST, Resource Path: /datomic
Wed Jun 10 23:11:17 UTC 2020 : Method request path: {proxy=datomic}
Wed Jun 10 23:11:17 UTC 2020 : Method request query string: {}
Wed Jun 10 23:11:17 UTC 2020 : Method request headers: {}
Wed Jun 10 23:11:17 UTC 2020 : Method request body before transformations:
Wed Jun 10 23:11:17 UTC 2020 : Endpoint request URI:
Wed Jun 10 23:11:17 UTC 2020 : Endpoint request headers: {x-amzn-apigateway-api-id=eq2azct4a2, User-Agent=AmazonAPIGateway_eq2azct4a2, Host=}
Wed Jun 10 23:11:17 UTC 2020 : Endpoint request body after transformations:
Wed Jun 10 23:11:17 UTC 2020 : Sending request to
Wed Jun 10 23:11:17 UTC 2020 : Execution failed due to configuration error: There was an internal error while executing your request
Wed Jun 10 23:11:17 UTC 2020 : Method completed with status: 500
Everything appears correct. I wish aws had a bit more info as to what "configuration" could be causing this error.#2020-06-1023:21kennyThis is a uname deployment. I assume that can't matter though.#2020-06-1023:59kennyI can either https://docs.datomic.com/cloud/ions/ions-tutorial.html#orgef4cfed OR https://docs.datomic.com/cloud/ions/ions-tutorial.html#http-direct, right? I don't need to do the former to do the latter?#2020-06-1100:01marshallCorrect. Did you set up a vpc gateway?#2020-06-1100:01marshallVpc link#2020-06-1100:01kennyYes#2020-06-1100:01kennyAnd it is available.#2020-06-1100:01marshallWhat address did you usr#2020-06-1100:01marshallUse#2020-06-1100:01kennyWhere do I provide an address?#2020-06-1100:02kennyThe Endpoint URL is http://entry.datomic-prod-v2.us-west-2.datomic.net:8184/{proxy}#2020-06-1100:02marshallOr rather did you choose the correct nlb#2020-06-1100:02marshallAnd is this latest version of datomic#2020-06-1100:02kennyThere is only 1 production topology deployed in this account & Datomic is the only service using NLB.#2020-06-1100:04kennyOoo, it is not the latest. It is a version that should have http direct support. It's on 616 8879.#2020-06-1100:05kennyWill try updating to the latest version to see if that helps.#2020-06-1100:07marshallyeah, that should have it#2020-06-1100:07marshallfor sure#2020-06-1100:07marshallwhat’s in your ion-config ? @kenny
you need to have a valid :http-direct key in there for Datomic to start the http direct listener#2020-06-1100:08kenny{:allow [],
 :lambdas
 {:query-pricing-api
  {:fn cs.ions.pricing-api/lambda-handler,
   :description "Query the pricing-api.",
   :concurrency-limit 100}},
 :http-direct {:handler-fn cs.ions.pricing-api/web-handler},
:app-name "datomic-prod-v2"}#2020-06-1100:11marshallcan you look for "IonHttpDirectStarted" in your Cloudwatch logstream#2020-06-1100:11marshallfor the production system#2020-06-1100:12kennySure. I think it's not there. Not super familiar with CloudWatch Logs.#2020-06-1100:12marshallinclude the double quotes#2020-06-1100:13kennySame.#2020-06-1100:13marshallare you sure your system is actually starting up?#2020-06-1100:13kennyThis application has been deployed, if that's your next question 🙂#2020-06-1100:13kennyThe latest deployment did not fail.#2020-06-1100:13marshalli.e. can you connect via the bastion#2020-06-1100:13marshallwith a repl or whatever#2020-06-1100:15kennyYep, I can get a client back.#2020-06-1100:15kenny& call list-databases.#2020-06-1100:16kennyUpdated to 668 8927 and still getting the same 500.#2020-06-1100:21kennyNew deploy was successful & same error.#2020-06-1205:16David PhamDoes Datomic Free hold all the database in memory or on disks?#2020-06-1215:00micahHas anyone done a large number of excisions and hosed their transactor? I initiated the excision of 1.6M entities. It looks like the transactor acknowledged about 800k of the transactions before I got fed up with waiting and restarted it. Now it just can't seem to recover. The entities remain un-excised and it can't seem to complete an indexing job, I think.#2020-06-1217:12JAtkinsAnyone had issues with datomic port forwarding? My team has members in the us, uganda, and india. The datomic instance is in us-east-2 (ohio), and the team members in india and uganda are frequently getting timeouts...#2020-06-1913:13tvaughanI'm in South America and I've seen my traffic throttled by some sites apparently because filtering by geolocation is such an effective way to mitigate against malicious requests.
/s#2020-06-1301:19Jon WalchI see the perms listed here for admins: https://docs.datomic.com/cloud/operation/access-control.html#org98dd40a#2020-06-1301:19Jon WalchAre there perms listed anywhere for just the client application? I tried:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3::REDACTED/*"
      ]
    }
  ]
}#2020-06-1301:21Jon WalchAnd I'm getting:
{:what :uncaught-exception, :exception #error {
:cause Forbidden to read keyfile at . Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.
:data {:cognitect.anomalies/category :cognitect.anomalies/forbidden, :cognitect.anomalies/message Forbidden to read keyfile at . Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.}#2020-06-1301:22Jon WalchIf I try to pull the same creds from a pod running in my EKS cluster using the awscli, it works.#2020-06-1301:26Jon Walchhttps://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html anyone know which version of the AWS SDK cognitect is using?#2020-06-1301:28Jon WalchLooks like Update to version 1.11.479 of the AWS SDK for Java. which is below the min version to support#2020-06-1302:29alexmillerThe aws api is not using an sdk at all, it talks through the rest api#2020-06-1413:56craftybonesHello. Just about beginning to play with datomic. I have a specific need and I think datomic fits the bill, I was wondering if anyone could give me pointers. I need to build a dossier system of sorts where there are notes and other details maintained for several candidates. As this detail changes through time, it would help for us to have historic information visible about each candidate#2020-06-1413:57craftybonesIn Mongo, this would be a series of records, timestamped, but all really containing mostly the same information#2020-06-1413:59craftybonesSo given that I want to maintain a history and given that the schema can be flexible, Datomic sounds right, am I correct in assuming this?#2020-06-1414:01craftybonesso let us say, I make a series of assertions on :notes , then later on, I'd like to look at :notes, not just as what the latest is, but all the :notes accrued over time#2020-06-1414:01craftybonesThis should be (trivially?) possible right?#2020-06-1414:19marshallhttps://stackoverflow.com/questions/48898046/datomic-query-over-history#2020-06-1414:20marshallhttps://augustl.com/blog/2013/querying_datomic_for_history_of_entity/#2020-06-1414:20marshallhttps://docs.datomic.com/cloud/tutorial/history.html#2020-06-1414:41craftybonesThanks#2020-06-1414:59val_waeselynck@U8VE0UBBR as a non-official source on Datomic, I advise against using Datomic's historical features for giving users access to revisions: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html
Datomic is not a bitemporal database. A priori, I recommend making one entity per note revision, as you would do with a regular database.#2020-06-1415:04craftybonesSo you suggest an additional attribute that records the version as well, as opposed to relying just on timestamps#2020-06-1415:04craftybonesI see what you are saying here, history changes are fine as long as the shape remains the same#2020-06-1415:04craftybonesthe second the shape changes#2020-06-1415:05craftybonesthat becomes more complex#2020-06-1415:05craftybonesThanks @U06GS6P1N#2020-06-1415:05craftybonesHowever, even with what you are saying, its easier to use datomic here isn’t it, given the use case?#2020-06-1415:09val_waeselynckDatomic can be easier to use, for general reasons not related to modeling revisions, such as flexible schema, expressive reads and writes, ease of data sync, etc.#2020-06-1415:10craftybonesAlright. In this case, I have a very specific need of having to look at history#2020-06-1415:11val_waeselynckYou may be fine just storing one entity per revision or per change, if your queries aren't highly sophisticated.#2020-06-1415:12val_waeselynckOtherwise might want to look at bitemporal dbs like Crux, but there are many other aspects to consider than historical query features.#2020-06-1415:17craftybonesAs of now, I just want a history of notes per person let us say#2020-06-1415:28craftybonesso let us say some attribute was added only at tx 200, what is the cost to me as a developer if I query for that attribute in an earlier transaction?#2020-06-1415:35val_waeselynckThe main question for assessing Datomic against such use cases is how complicated your historical queries are#2020-06-1415:36craftybonesPretty much no branching, straight ahead, give me everything you’ve got on person x, at most limited by a specific duration#2020-06-1415:37craftybonesAssuming incredibly low performance needs, never more than a handful of users at any point#2020-06-1415:38craftybonesFrom what I am 
reading of the schema change, I could potentially backfill necessary data, which might not even be necessary for certain types of attributes#2020-06-1512:46favilaWell you can’t backfill such that it looks as if it was transacted in the past. That is the limitation of relying on datomic history for revisions#2020-06-1512:47favilaDatomic history is more like (immutable, not branching) git history than like time-series records#2020-06-1421:53drewverleeHow does using a predicate directly in the query (https://docs.datomic.com/cloud/query/query-data-reference.html#predicates) compare to querying the data then performing the predicate? I assume the predicate runs somehow before the join?#2020-06-1422:21drewverleethe answer is in the docs:
> The predicates =, !=, <=, <, >, and >= are special, in that they take direct advantage of Datomic's AVET index. This makes them much more efficient than equivalent formulations using ordinary predicates. For example, the "artists whose name starts with 'Q'" query shown above is much more efficient than an equivalent version using starts-with?#2020-06-1422:23drewverleeerrr. wait < works on strings to compare the first two letters?
;; fast -- uses AVET index
[(<= "Q" ?name)]
[(< ?name "R")]
;; slower -- must consider every value of ?name
[(clojure.string/starts-with? ?name "Q")]
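[Aside: the reason the range trick in the docs works is that for strings these comparison predicates are effectively lexicographic, the same ordering `clojure.core/compare` gives for the ASCII examples here, so "Q" <= name < "R" captures exactly the names starting with "Q". A quick sketch:]

```clojure
;; Strings compare lexicographically, so every name starting with "Q"
;; sorts at or after "Q" and strictly before "R".
(assert (neg? (compare "Q" "Queen")))  ; "Q" sorts before "Queen"
(assert (neg? (compare "Queen" "R")))  ; "Queen" sorts before "R"
(assert (pos? (compare "R" "Quxx")))   ; anything at or past "R" falls outside
```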
That seems really odd#2020-06-1422:30drewverleewhat does it mean "they take advantage of datomics AVET index" does that mean the comparison is done using information in the index as well? like when we say index im thinking
"alice"
"bob"
"zack"
so using the index in the context of (< ?name "d") would mean that zack is returned and the operation doing this never actually had to look at the string zack, because it was stored in a location that was marked like "d-z" or something.#2020-06-1512:37favilaIt means it can figure out the equivalent d/index-range call#2020-06-1512:38favila(It’s not literally d/index-range but the semantics are the same)#2020-06-1512:41favilaIf the query planner can see the attribute you are using, know it has an avet index, and see the comparisons and their values it can figure out a subset of the values in the index to seek instead of seeking the whole thing#2020-06-1512:58drewverleefor 'on-prem' i understand you have to add an avet index for an attribute by doing a transaction. i did a search through my cloud db and i don't see a db/index attribute.
Do i need to add avet indexes for the queries that use index e.g. d/index-range to work? and if so, how?#2020-06-1513:44favilacloud adds value indexes for everything already#2020-06-1513:45favilahttps://docs.datomic.com/cloud/query/raw-index-access.html#indexes#2020-06-1513:51drewverleeawesome, thanks!#2020-06-1521:29JAtkinsAny idea why I would not be able to resolve dependencies on com.datomic/ion? I have the maven repo added, and my default aws user credentials are tied to a datomic admin policy.#2020-06-1523:23unbalancedI'm positive I've had this issue before and solved it but I can't remember what the issue was. Running a peer on a docker container and running into some issues:
[main] INFO search.config - Dockerization detected:true
[main] INFO search.config - Using host: 172.17.0.1
[main] INFO search.config - datomic:
[main] INFO datomic.domain - {:event :cache/create, :cache-bytes 2086666240, :pid 660, :tid 1}
[main] INFO datomic.process-monitor - {:event :metrics/initializing, :metricsCallback clojure.core/identity, :phase :begin, :pid 660, :tid 1}
[main] INFO datomic.process-monitor - {:event :metrics/initializing, :metricsCallback clojure.core/identity, :msec 0.865, :phase :end, :pid 660, :tid 1}
[main] INFO datomic.process-monitor - {:metrics/started clojure.core/identity, :pid 660, :tid 1}
[clojure-agent-send-off-pool-0] INFO datomic.process-monitor - {:AvailableMB 3880.0, :ObjectCacheCount 0, :event :metrics, :pid 660, :tid 13}
[clojure-agent-send-off-pool-0] INFO datomic.kv-cluster - {:event :kv-cluster/get-pod, :pod-key "pod-catalog", :phase :begin, :pid 660, :tid 13}
[clojure-agent-send-off-pool-0] INFO datomic.kv-cluster - {:event :kv-cluster/get-pod, :pod-key "pod-catalog", :msec 30.6, :phase :end, :pid 660, :tid 13}
[main] INFO datomic.peer - {:event :peer/connect-transactor, :host "localhost", :alt-host "172.17.0.1", :port 4334, :version "1.0.6165", :pid 660, :tid 1}
Execution error (ActiveMQNotConnectedException) at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl/createSessionFactory (ServerLocatorImpl.java:787).
AMQ119007: Cannot connect to server(s). Tried with all available servers.
The AMQ thing is ... I can't figure out what the best approach to tackle that is#2020-06-1523:27unbalancedah ok#2020-06-1523:28unbalancedhttps://docs.datomic.com/on-prem/deployment.html#peers-fail-connect-txor#2020-06-1523:28unbalancedI thought I remembered this before#2020-06-1606:05craftybonesHello#2020-06-1606:05craftybonesWhat am I missing here?
user=> (d/q '[:find ?genre
#_=> :where [_ :movie/genre ?genre]] db)
[["Drama, Action"] ["Drama"] ["Sci Fi"]]
user=> (d/q '[:find ?e ?a
#_=> :where [(fulltext $ :movie/genre "Drama") [[?e ?a _ _]]]] db)
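[Aside, anticipating favila's diagnosis below: the on-prem `fulltext` datalog function only finds values of attributes installed with `:db/fulltext true`. A minimal schema sketch, with the attribute name taken from the query above:]

```clojure
;; Sketch: install :movie/genre with a fulltext index so that
;; (fulltext $ :movie/genre "Drama") has something to search.
(def movie-genre-schema
  [{:db/ident       :movie/genre
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/fulltext    true}])
```

This has to be set when the attribute is first transacted; the on-prem docs note `:db/fulltext` cannot be altered on an existing attribute.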
#2020-06-1606:06craftybonesBased on what the manual says, this ought to work.#2020-06-1606:06craftybonesI’ve even tried a parameterised variety and didn’t get it to work. I am sure I am doing something stupid, just don’t know what it is#2020-06-1611:31faviladoes :movie/genre have a fulltext index?#2020-06-1611:32favilaThis is the query syntax: https://lucene.apache.org/core/2_9_4/queryparsersyntax.html#2020-06-1611:32favila(`fulltext` passes it straight down to Lucene)#2020-06-1612:05craftybones😄 That was it. Thanks#2020-06-1608:22craftybonesAnybody?#2020-06-1609:11raspasov@srijayanth Try “Drama*” maybe?#2020-06-1609:11raspasovDrama*#2020-06-1610:08dmarjenburghWe have a lambda ion handler that gets invoked daily with a CloudWatch event. It never gave problems, but since this last night it throws this exception:
No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.dispatcher/ToBbuf found for class: clojure.lang.PersistentArrayMap: datomic.ion.lambda.handler.exceptions.Incorrect
clojure.lang.ExceptionInfo: No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.dispatcher/ToBbuf found for class: clojure.lang.PersistentArrayMap {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.dispatcher/ToBbuf found for class: clojure.lang.PersistentArrayMap"}
at datomic.ion.lambda.handler$throw_anomaly.invokeStatic(handler.clj:24)
at datomic.ion.lambda.handler$throw_anomaly.invoke(handler.clj:20)
at datomic.ion.lambda.handler.Handler.on_anomaly(handler.clj:171)
at datomic.ion.lambda.handler.Handler.handle_request(handler.clj:196)
at datomic.ion.lambda.handler$fn__3841$G__3766__3846.invoke(handler.clj:67)
at datomic.ion.lambda.handler$fn__3841$G__3765__3852.invoke(handler.clj:67)
at clojure.lang.Var.invoke(Var.java:399)
at datomic.ion.lambda.handler.Thunk.handleRequest(Thunk.java:35)#2020-06-1610:11dmarjenburghNvm, I found that it's the return value from the server handler#2020-06-1610:21craftybones@raspasov - that didn’t work! 😞#2020-06-1611:48unbalanceddoes anyone have any experience running a dockerized peer server or peer application? There seems to be some networking requirement that I'm not fully understanding#2020-06-1611:49favilaports 4334 and 4335 must be open#2020-06-1611:50unbalancedoutbound or inbound?#2020-06-1611:50favilathe host= or alt-host= in the transactor properties file must name the transactor and be resolveable by peers#2020-06-1611:50favila(actually 4335 is only for dev storage)#2020-06-1611:51faviladatomic peer connections work like this:#2020-06-1611:51favilatransactor writes its own hostname to storage and sets up an artemismq cluster#2020-06-1611:51unbalancedgotcha. So it needs to be able to hit 172.17.0.1:4334, in my case?#2020-06-1611:51favilathen peers connect to storage, lookup the transactor name, and connect to the transactor on 4334#2020-06-1611:51unbalancedah ok, keep going#2020-06-1611:52favilaso they need whatever the txor writes to storage to be resolveable to the transctor in whatever network they are in#2020-06-1611:52favila(4334 is for artemis)#2020-06-1611:53unbalancedinteresting ... digesting#2020-06-1611:54unbalancedand it's a one way connection?#2020-06-1611:54favilalooks like from your logs that the peer can find storage, but either 172.17.0.1 doesn’t resolve to the txor, or it’s not allowed to connect to it, or the destination port isn’t open#2020-06-1611:54unbalancedthere's no inbound from AMQ?#2020-06-1611:54favilathere’s inbound data, but the txor doesn’t actively connect to peers#2020-06-1611:55unbalancedgotcha. hmm okay thank you. 
This gives me what I need to work on the puzzle#2020-06-1611:55favilabtw why is “localhost” an option?#2020-06-1611:56favilais that for connecting from outside docker?#2020-06-1612:02unbalancedsorry back#2020-06-1612:03unbalancedI developed it locally and now I'm attempting to dockerize it#2020-06-1612:03unbalancedI was considering converting the app code to client process but I'd be back in the same boat with the peer server needing to be dockerized#2020-06-1612:11unbalancedinteresting -- found this and I'm not seeing any extra exposed ports: https://github.com/frericksm/docker-datomic-peer-server#2020-06-1612:13favilait exposes 9001#2020-06-1612:13favilathis is just the peer server#2020-06-1612:13favilano transactor#2020-06-1612:15favilayou confirmed that 4334 is exposed on the transactor?#2020-06-1612:27unbalancedI'm trying to dockerize my peer application code, not the transactor#2020-06-1612:27unbalancedso using this peer server as inspiration -- sorry for the confusion#2020-06-1612:27unbalancedtransactor is on my host machine right now#2020-06-1612:27unbalancedpeer application code is on the docker#2020-06-1612:28unbalancedOH IT DOES EXPOSE 9001 !!! 
great catch#2020-06-1612:29unbalancedAlso I notice it is going with "0.0.0.0" instead of the docker bridge network, interesting#2020-06-1612:29unbalancedgives me some stuff to play around with, great eye.#2020-06-1612:43favila9001 is the client api port#2020-06-1612:44favilathat is the service the peer-server is exposing#2020-06-1612:44favilabut your problem is your peer can’t find or can’t talk to your transactor#2020-06-1612:44favilaswitching to client-server won’t fix that#2020-06-1613:09unbalancedwell worst case scenario I can rewrite the code with client API and try to talk to the peer server instead#2020-06-1613:12favilabut, the peer-server is a peer#2020-06-1613:12favilait is a peer, which implements the server half of the client api#2020-06-1613:13favilahave you tried this peer-server docker image and gotten it to connect to your transactor?#2020-06-1613:14favilawait a sec, I think I know your problem#2020-06-1613:14favilayour transactor is binding to localhost#2020-06-1613:15favilait needs to bind to something the docker network layer can route to#2020-06-1613:15favilatry host=0.0.0.0 as a first step#2020-06-1613:16favilait may not let you do that; if not, use the container’s IP#2020-06-1613:42unbalancedgood thinking#2020-06-1613:42unbalancedI'll give that a shot#2020-06-1613:46unbalancedinteresting, new error anyway#2020-06-1613:46unbalanced#2020-06-1613:46unbalancedsorry about formatting 😕#2020-06-1613:47unbalancednow it's clearly psql that's pissed#2020-06-1614:02unbalancedyeah this is interesting, postgres wants a DIFFERENT host identifier than the transactor does. Transactor is connecting on 0.0.0.0, but postgres wants the docker bridge:#2020-06-1614:02unbalanced#2020-06-1614:14favilatransactor’s host and storage host are different concepts#2020-06-1614:15favilaI think you are misunderstanding something. 
this error shows you got even less far along#2020-06-1614:15favilayou didn’t even manage to connnect to postgres this time#2020-06-1614:17favilayou are setting up three things: 1) postgres, exposes 5432, needs routable hostname 2) transactor, exposes 4334, needs routable hostname, needs to connect to postgres. 3) peer; needs to connect to postgres, needs to connect to transactor#2020-06-1614:19favilatransactor.properties host= is what the transactor binds to for port 4334 (for peers to connect to it). Both host= and alt-host= are written to postgres for peers to discover#2020-06-1614:19favilaalt-host= is an alternative host/ip in case there’s some networking topology where what the transactor binds to isn’t the same thing other peers should connect to#2020-06-1913:20tvaughanIf you're running on a single machine, all you need to do is 1) name the running containers, 2) use this name as the host name when connecting to a running container, and 3) run all containers on the same bridge network. Ports don't need to be exposed explicitly if they're only accessed by other containers on the same bridge network. All services should bind to 0.0.0.0 , not localhost. For example, we start the peer server like $DATOMIC_RELEASE/bin/run -m datomic.peer-server -h 0.0.0.0 -p 8998 -a "$DATOMIC_ACCESS_KEY_ID","$DATOMIC_SECRET_ACCESS_KEY" -d $DATOMIC_DATABASE_NAME,datomic:mem://$DATOMIC_DATABASE_NAME#2020-06-2417:44unbalancedAwesome, thank you! I'll be sure to use this when we transition to client API#2020-06-1612:34unbalancedWhat is the significance of port 9001 with regards to the peer server? Would this also apply to peer application code?#2020-06-1612:45favila9001 is the port that client api clients connect to.#2020-06-1612:47unbalancedinteresting. So this should not apply to peer application code? 🤔#2020-06-1612:47favilano. BTW that port is configurable. 
9001 is just the one that docker image you were looking at happened to use#2020-06-1612:47unbalancedaha#2020-06-1612:47favilalook at line 25#2020-06-1612:48favilahttps://github.com/frericksm/docker-datomic-peer-server/blob/master/Dockerfile#L25#2020-06-1612:48unbalancedahhh#2020-06-1612:48unbalancedhrrrm#2020-06-1614:18unbalancedokay creeping closer.
.properties file modifications:
alt-host=172.17.0.1
host=0.0.0.0
protocol=sql
#host=localhost
port=4334
peer connection string from docker:
datomic:
[main] ERROR org.apache.activemq.artemis.core.client - AMQ214016: Failed to create netty connection
java.net.UnknownHostException: 172.17.0.1
at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
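[Aside: favila later suggests `nc -z host port` to test reachability; the same probe from a Clojure REPL inside the container might look like this sketch, with all names assumed:]

```clojure
;; Sketch: TCP reachability probe, roughly equivalent to `nc -z host port`.
;; Returns true if a connection opens within timeout-ms, false otherwise
;; (connection refused, unroutable address, or unknown host).
(defn reachable? [host port timeout-ms]
  (try
    (with-open [s (java.net.Socket.)]
      (.connect s (java.net.InetSocketAddress. host (int port)) timeout-ms)
      true)
    (catch java.io.IOException _ false)))

;; e.g. (reachable? "172.17.0.1" 4334 2000) from inside the peer container
```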
#2020-06-1614:22favilajust to verify--that is indeed the ip address of the transactor?#2020-06-1614:23favilaI notice it’s the same as postgres.#2020-06-1614:33unbalancedhmm good question. well this is what the transactor says:#2020-06-1614:33unbalanced$ ./run-transactor.sh
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver ...
System started datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver
#2020-06-1614:34unbalanceddatomic:sql://<DB-NAME>?jdbc:
#2020-06-1614:35unbalanced(postgres and the transactor are on the same host right now)#2020-06-1614:35unbalanced(but in the future they could be on different hosts)#2020-06-1614:37favilaare they inside or outside your docker network?#2020-06-1614:38unbalancedhmm the transactor is outside, the postgres is also running in a docker probably on the default network#2020-06-1614:38favilaso, your peer can connect to postgres, but not to the transactor#2020-06-1614:39favilaI don’t understand this part though .UnknownHostException: 172.17.0.1#2020-06-1614:40favilaif the peer could connect to postgres on 172.17.0.1 to get the transactor IP, how is that an unknown host?#2020-06-1614:41favilaare you absolutely sure this is what the peer used? (d/connect "datomic:<sql://search?jdbc:postgresql://172.17.0.1:5432/datomic?user=datomic&password=datomic>")` ?#2020-06-1614:50unbalancedgood question#2020-06-1614:50unbalancedI'll hardcode it just to be sure#2020-06-1614:51unbalanced(ns search.config
  (:require [clojure.tools.logging :as log]))
;; todo -- parameterize
(def dockerized? (System/getenv "DOCKERIZED"))
(log/info (str "Dockerization detected:" dockerized?))
(def db-host (if dockerized?
               "172.17.0.1"
               "0.0.0.0"))
(log/info (str "Using host: " db-host))
(def db-uri (str "datomic:" db-host ":5432/datomic?user=datomic&password=datomic"))
(log/info (str db-uri))
(def db-version "0.6")
is technically what it's doing#2020-06-1614:52unbalancedah, no wait, this is the same error facepalm#2020-06-1614:53unbalanced#2020-06-1614:59favilaagain, this indicates that the peer could talk to postgres but not the transactor#2020-06-1614:59favilaabsolutely baffled how the same IP could be fine and also unknown host#2020-06-1615:00favilait just used that IP to talk to postgres, and got the host and alt-host info from there#2020-06-1615:32unbalancedyeah, it's nuts. It's def a networking thing, when I use the --network=host option on the Docker everything works fine#2020-06-1615:33unbalancedah I see so the observation here indicates that the peer process looked up the location of the transactor in storage (the host/alt-host settings) and then attempted to use those to connect to the transactor?#2020-06-1615:34unbalancedis there a way to "ping" the transactor?#2020-06-1615:34unbalancedbecause then I could test what the host is supposed to be from the Docker perspective#2020-06-1615:41unbalancedah okay, I have a new hypothesis.
docker_gwbridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:b5:8e:77:df txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2302 bytes 377328 (377.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
that's the docker bridge (as seen from host)#2020-06-1615:41unbalancedpostgres is dockerized, the peer service is dockerized ... the transactor is not#2020-06-1615:42unbalancedin my mind it makes sense then what you said about the baffling behavior of the transactor failing when it just talked to postgres to get the information#2020-06-1615:42unbalancedso perhaps if I dockerize the transactor it will alleviate this issue#2020-06-1617:15favilathat’s not my understanding of “unknownhosterror”#2020-06-1617:15favilabut maybe it is just a communication thing#2020-06-1617:15unbalancedI'm writing up the current state of affaris#2020-06-1617:18favilare “ping” if you just want to test reachability from various machines use nc -z hostname port#2020-06-1617:18favilait will print if it could establish a tcp connection then terminate#2020-06-1617:18unbalancedEither there is a missing piece or I just need to learn more about networking/docker.
Current setup:
• transactor on docker0
• postgres on docker1
• peer application on docker2
observations:
transactor can connect to psql
peer application can connect to psql
current error message from peer application:
[main] INFO search.config - Dockerization detected:true
[main] INFO search.config - Using host: 172.17.0.1
[main] INFO search.config - datomic:
[main] INFO datomic.domain - {:event :cache/create, :cache-bytes 2086666240, :pid 6575, :tid 1}
[main] INFO datomic.process-monitor - {:event :metrics/initializing, :metricsCallback clojure.core/identity, :phase :begin, :pid 6575, :tid 1}
[main] INFO datomic.process-monitor - {:event :metrics/initializing, :metricsCallback clojure.core/identity, :msec 0.47, :phase :end, :pid 6575, :tid 1}
[main] INFO datomic.process-monitor - {:metrics/started clojure.core/identity, :pid 6575, :tid 1}
[clojure-agent-send-off-pool-0] INFO datomic.process-monitor - {:AvailableMB 3880.0, :ObjectCacheCount 0, :event :metrics, :pid 6575, :tid 13}
[clojure-agent-send-off-pool-0] INFO datomic.kv-cluster - {:event :kv-cluster/get-pod, :pod-key "pod-catalog", :phase :begin, :pid 6575, :tid 13}
[clojure-agent-send-off-pool-0] INFO datomic.kv-cluster - {:event :kv-cluster/get-pod, :pod-key "pod-catalog", :msec 8.89, :phase :end, :pid 6575, :tid 13}
[main] INFO datomic.peer - {:event :peer/connect-transactor, :host "172.17.0.1", :alt-host nil, :port 4334, :version "1.0.6165", :pid 6575, :tid 1}
Execution error (ActiveMQNotConnectedException) at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl/createSessionFactory (ServerLocatorImpl.java:787).
AMQ119007: Cannot connect to server(s). Tried with all available servers.
transactor config:
host=172.17.0.1
#host=0.0.0.0
protocol=sql
#host=localhost
port=4334
sql-url=jdbc:
sql-user=datomic
sql-password=datomic
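favila's nc -z reachability check earlier in this thread can also be run from a Clojure REPL. A minimal sketch using java.net.Socket (the host and port below are just the values from this thread; reachable? is a hypothetical helper):

```clojure
(import '(java.net Socket InetSocketAddress))

;; Returns true if a TCP connection to host:port can be established
;; within timeout-ms, false otherwise -- the same check as `nc -z host port`.
(defn reachable? [host port timeout-ms]
  (try
    (with-open [s (Socket.)]
      (.connect s (InetSocketAddress. ^String host (int port)) (int timeout-ms))
      true)
    (catch java.io.IOException _ false)))

;; e.g. can this machine reach the transactor port?
(reachable? "172.17.0.1" 4334 1000)
```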
#2020-06-1617:19unbalancedAs I write that up I think the host might be wrong. host is supposed to be host of transactor, not sql, yeah?#2020-06-1617:22unbalancedthat was it 🙂 @favila you are THE MAN!! thank you!!!#2020-06-1617:22unbalancedI should probably do a blog thing about this for the benefit of others cause this was a little tricky#2020-06-1617:22unbalancedI should probably do anything productive with my time at all 😓#2020-06-1614:19unbalancedso that's a new error, the UnknownHostException#2020-06-1618:05unbalanced@favila if there was an equivalent of upvotes or reddit gold for this slack channel I'd be throwing them at you, thank you#2020-06-1622:38jacksonIs qseq not available in datomic.api as suggested here?
https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/qseq#2020-06-1622:42jacksonSpecifically the peer api for 0.9.6045. And I have the same issue with index-pull.#2020-06-1623:14favilaThis fn is brand new and only available on the latest version (yours is not) https://docs.datomic.com/on-prem/changes.html#2020-06-1700:36jacksonNot sure how I missed that release, thanks!#2020-06-1622:44zhuxun2Earlier I asked a question about EQL in #fulcro but I figured it might concern datomic users as well, so I felt compelled to drop it here too. Apologies for the long post. 🙏
--- original post ---
I have a question about the design of EQL, and I'm not sure this is the right place to discuss it. However, I feel EQL is something fulcro users deal with heavily, so I think it might be valuable for me to hear your opinions:
There are a couple of quirks that made me wonder why EQL was designed the way it is. Let's take a look at a typical EQL query -- the kind that probably shows up in your app hundreds of times:
[:user/id
 :user/name
 :user/email
 {:user/projects [:project/id
                  :project/name
                  :project/start-date
                  :project/end-date
                  :project/completed?]}
 {:user/contacts [:user/id]}]
EQL claims to have "the same" shape as the returned data. That's awesome! However, why don't we go a step further? Consider the return value of the above query:
{:user/id #uuid "..."
:user/name "Fred Mertz"
:user/email "
Why wasn't EQL designed to completely mimic that structure:
{:user/id _
 :user/name _
 :user/email _
 :user/projects [{:project/id _
                  :project/name _
                  :project/start-date _
                  :project/end-date _
                  :project/completed? _}]
 :user/contacts [{:user/id _}]}
Or, if the explicitness of plurality is not desired here:
{:user/id _
 :user/name _
 :user/email _
 :user/projects {:project/id _
                 :project/name _
                 :project/start-date _
                 :project/end-date _
                 :project/completed? _}
 :user/contacts {:user/id _}}
The immediate benefit is that now I can use the map namespace syntax to make it much more succinct and DRY (and easy on the eye):
#:user{:id _
       :name _
       :email _
       :projects #:project{:id _
                           :name _
                           :start-date _
                           :end-date _
                           :completed? _}
       :contacts #:user{:id _}}#2020-06-1622:44zhuxun2IMHO many important semantics are much better aligned this way. For example, in a return value, the order of the keys in a level of map should not matter, and there should not be duplicate keys. However, EQL uses an ordered collection (a vector) to denote the keys, whose semantics imply an order but do not ensure uniqueness. Also, it feels like in EQL maps are used in place of pairs. I understand that Clojure doesn't have a built-in
literal for pairs so it makes sense to use maps, but maps seem to be a poor fit for this role -- here they are only allowed to have one key-value pair, and they push the key into the next level when it should really belong to the outer level. I feel that the ad-hoc-ish design not only misses mathematical simplicity but also makes everything unnecessarily complex. If I were to write a function to retrieve all root-level keys given an EQL query (which should have been trivial), the implementation would be a few lines longer than necessary since I need to consider those ref-type attributes. If I am using Emacs to manually write a test example given an EQL query, I am doing lots of unnecessary work changing brackets into braces and splicing sexps.
That being said, my exposure to Clojure and Datomic/Pathom/Fulcro is limited, and I truly want to hear if there are reasons why EQL was designed the way it is rather than my intuitive version. I apologize my above arguments spiralled into a small rant.#2020-06-1623:20favilaNot a complete answer but some historical context: eql is based on pull expressions, which existed prior to the namespaces map features you mention#2020-06-1623:20favilaSo to some degree this is historical accident.#2020-06-1623:21favilaAlso the symmetry of your proposal breaks down once you consider parameters#2020-06-1623:22favilaKey renaming, limits, defaults, etc. some of that you can smuggle into the value slot, but you need a plan for nested maps#2020-06-1700:56souenzzoI don't think that the pull notation is important, and I don't think that one is better than the other. We can have many notations/representations and talk about the same AST.#2020-06-1705:36zhuxun2@favila I can support params the same way the current EQL does -- on the key slot, no?
#:user{(:id {:with "params"}) _
       :name _
       :email _
       (:projects {:with "params"}) #:project{:id _
                                              :name _
                                              :start-date _
                                              :end-date _
                                              :completed? _}
       :contacts #:user{:id _}}
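For comparison, standard EQL expresses the same parameterized query by wrapping a key and its params in a list, inside the usual vector/map notation; a sketch of the equivalent query (attribute names follow the example above):

```
[(:user/id {:with "params"})
 :user/name
 :user/email
 {(:user/projects {:with "params"})
  [:project/id
   :project/name
   :project/start-date
   :project/end-date
   :project/completed?]}
 {:user/contacts [:user/id]}]
```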
#2020-06-1705:38zhuxun2Sure I lifted the keys this way, but when I start doing params, the query is already so much deviated from regular static data that it becomes a DSL, so I don't care the structural simplicity as much#2020-06-1709:36souenzzohttps://gist.github.com/souenzzo/c1fcc19c0ed1ac08f3902fb6ed80eb7a#2020-06-1709:36souenzzoalso, "eql query notation" isn't the same of "datomic selector notation"
https://github.com/souenzzo/eql-datomic/#2020-06-1709:50souenzzoin other words: it's OK to have many "query languages". each one has its own benefits/facilities. all of them can talk about the same AST#2020-06-1715:31Ramon RiosHello everyone#2020-06-1715:34Ramon Rios[email protected]
I'm following the datomic tutorial and getting this error. Is that because I'm using the free instead of the pro version?#2020-06-1717:12marshallyes, peer-server is not included in datomic-free#2020-06-1807:43Ramon RiosShoot, how should i start with free version instead?#2020-06-1811:30dazlddo you need persistence, or just want to play around?#2020-06-1811:31dazldusing an in-memory db is the easiest way to start playing with it, i think#2020-06-1812:31souenzzo(d/connect "datomic:") 🙂#2020-06-1812:33Ramon RiosI need to play around. Will use in a project and i want to have a hands on experience#2020-06-1812:33Ramon RiosThank you all : )#2020-06-1813:19joshkhfollowing up on a question i asked the other day, i am trying to pass the result of a query through a serialization library, and i am having trouble making sense of an error. upon def'ing the result i can see that it is a clojure.lang.PersistentVector
(def result (d/q ... db))
=> #'my-ns/result
(type result)
=> clojure.lang.PersistentVector
and a postwalk through the data structure shows only core java/clojure classes
(clojure.lang.PersistentVector
clojure.lang.PersistentHashMap
clojure.lang.MapEntry
clojure.lang.Keyword
clojure.lang.PersistentArrayMap
java.lang.String
java.lang.Boolean)
however, when i pass result to the serialization library, i get a NotSerializableException for datomic.client.impl.shared.Db.
(sp/set bc "testkey" 120 result)
Execution error (NotSerializableException)
at java.io.ObjectOutputStream/writeObject0 (ObjectOutputStream.java:1185).
datomic.client.impl.shared.Db
how is the datomic.client.impl.shared.Db class related to the result of the query?#2020-06-1813:23favilaMaybe metadata? Maybe your walking isn’t looking at every object?#2020-06-1813:24favilawhat is your query and result? have you tried bisecting the result?#2020-06-1813:26joshkheverything works as expected if i copy and paste the contents of result back in to the repl, so there's definitely something going on with the object itself#2020-06-1813:26favilathat sounds like metadata. does your serializer serialize metadata?#2020-06-1813:26joshkhit does indeed. and the query result vector does have a nav protocol:
(meta result)
=>
#:clojure.core.protocols{nav #object[clojure.core$partial$fn__5839 0x613ebeb5 "
#2020-06-1813:27favilaTIL#2020-06-1813:28joshkhalso, i wasn't able to remove the metadata from result 🤔#2020-06-1813:28favilahow so?#2020-06-1813:29joshkh(meta (vary-meta result dissoc clojure.core.protocols/nav))
=>
#:clojure.core.protocols{nav #object[clojure.core$partial$fn__5839 0x613ebeb5 "
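One way to strip such metadata recursively before serializing -- a sketch in plain Clojure; strip-meta is a hypothetical helper, not part of any Datomic API:

```clojure
(require '[clojure.walk :as walk])

;; Remove metadata from every object in a nested structure that can
;; carry it (vectors, maps, lists, symbols, ...). Numbers, strings,
;; keywords and map entries are not IObj, so they pass through unchanged.
(defn strip-meta [form]
  (walk/postwalk
    (fn [x]
      (if (instance? clojure.lang.IObj x)
        (with-meta x nil)
        x))
    form))

(meta (strip-meta ^{:some :meta} [1 {:a 2}]))
;; => nil
```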
#2020-06-1813:29favilait’s a keyword not a symbol#2020-06-1813:29favilawhy not (with-meta result nil)?#2020-06-1813:30joshkhyes, why not is a good question. thanks for the tip. 😉#2020-06-1813:31favilathere could still be metadata on nested objects. I didn’t know the client lib made results navigable and I don’t know how it works#2020-06-1813:32favilathis seems like something better solved in your serializer if possible#2020-06-1813:32favilacan it be customized or operate in meta/non-meta preserving modes?#2020-06-1813:34joshkhfavila, once again, thanks for your help. i shrugged off the metadata earlier when i saw only the nav protocol. but sure enough stripping it away solved the problem#2020-06-1813:34joshkh(top level metadata, that is)#2020-06-1813:34joshkhi actually need support for my own metadata so this works for me#2020-06-1814:23favila“only the nav protocol” these are always live objects (functions) so I don’t expect them to be serializable ever#2020-06-1815:00joshkhagreed, and i did find that while stripping the top level metadata worked in my one example, other query results had nested metadata that could not be serialised (as you suspected). for now it is a hobby project, so a simple postwalk to remove all metadata works with the least amount of effort, but i will explore the serialization library for a more solid solution.#2020-06-1814:30ivanahello, can anyone explain me what I have to do to make this query work?
{:find '[[?e ...]]
 :in '[$ ?date-from ?date-to]
 :args [db date-from date-to]
 :where '[[?e :logistic/driver ?d]
          [?e :logistic/delivery-date ?date]
          (not [?d :driver/external? true])
          [(get-else $ ?e :logistic/completed #inst "2000") ?completed-date]
          (or
           (and [?e :logistic/state :logistic.state/incomplete]
                [(<= ?completed-date ?date-to)])
           [?e :logistic/state :logistic.state/active]
           (and (or [?e :logistic/state :logistic.state/completed]
                    [?e :logistic/state :logistic.state/failed])
                [(<= ?date-from ?completed-date)]
                [(<= ?completed-date ?date-to)]))]}#2020-06-1814:31ivanathe error is on the or clause: Assert failed: All clauses in 'or' must use same set of vars#2020-06-1815:05ivanaMoved all the and/or clauses to an external function, and it works. I have no idea about their magic inside datomic#2020-06-1815:11favilathis has to do with whether a var should be unified with the outside of the rule or not#2020-06-1815:11favilayou can control this by using or-join and and-join instead and being explicit#2020-06-1815:13favilainvisibly, or is creating a rule, and each rule must unify to the same set of vars outside the rule#2020-06-1815:13favilaif you don’t specify the vars, it looks inside the rule to determine it#2020-06-1815:13favilayou’ll notice each clause of your or uses a different, non-overlapping set of vars#2020-06-1815:19ivanaThanks. But it seems too complicated for me; it looks like it's much simpler to use an external function with predictable behavior...#2020-06-1816:34drewverleeIs there a way to get datomic change log updates via some notification?#2020-06-1816:35marshall@drewverlee you can subscribe to the Announcements topic on the datomic forum#2020-06-1816:35marshallhttp://forum.datomic.com#2020-06-1816:38drewverleegreat thanks!#2020-06-1913:05arohnerWhat is the idiomatic way to express
(d/q '[:find [?name ...] :in $ :where ...])
? I’m getting Only find-rel elements are allowed in client find-spec, see #2020-06-1913:12marshall@arohner you need to use the find-rel spec only: :find ?name :in ...#2020-06-1913:12marshallmanipulating the collection after it is returned can then be handled in your client app code#2020-06-1913:13arohnerRight. I’m asking if there’s a datomic client alternative to [?name …]#2020-06-1913:13arohnerOk, sounds like there’s no alternative#2020-06-1913:14souenzzo(map first (d/q '[]))
Or
'[:find ?name :keys :name ....] @arohner#2020-06-1913:21arohnerThanks#2020-06-2014:05erikwould it be stupid to think of Datomic not of as a DB but a durable message queue system, with the added benefit of providing an event sourced DB implemened using covering indexes?#2020-06-2016:10Linus EricssonI think thats a fair description. However, you will want to use separate datomic databases for events vs business data at some point, but yes. Please make them consider it. Saves a lot of hassle.#2020-06-2014:56erikI'm asking because I'm trying convince my team mates to use Datomic instead of Kafka/NATS#2020-06-2015:40Joe Lanehttps://vvvvalvalval.github.io/posts/2018-11-12-datomic-event-sourcing-without-the-hassle.html @eallik this may be useful#2020-06-2015:40Joe LaneAlso, what is NATS?#2020-06-2018:40drewverleegiven i added a pure function to my :allow list in the draomic/ion-config.edn and the ion push and deploy were successful i would expect to be able to use that funtion as an :xform, however i get an error when i try that says its not allowed in the ion-config. How can i double check the allowed functions?#2020-06-2019:35alidlorenzoWhat’s the value-add of entity predicates (and tx functions more generally), compared to defining constraints in the function that’s initiating the transaction?
ex of entity predicate: (d/transact conn {:tx-data [(merge {:db/ensure :user/validate} user-data)]})
ex of regular function: (if user-valid (d/transact conn {:tx-data [user-data]}))
I was hoping entity predicates would ensure certain data never entered database unless it was valid. But if they must be explicitly called, what else do they add compared to regular function validations?#2020-06-2020:46favilasafety from read and write skew#2020-06-2020:47favilatransaction functions read the before-transaction value of the db and can abort if they see a constraint violation#2020-06-2020:48favilaentity and attribute predicates see the after-transaction value of the db (after all datom expansion, right before final commit) and can abort.#2020-06-2020:49favilachecking before issuing the transaction is seeing a value of the db which may not be the most recent value by the time the transaction command reaches the transaction writer#2020-06-2020:49favilaso these are three different moments in a transaction lifecycle#2020-06-2020:49favilathe absolute safest thing is entity and attribute predicates#2020-06-2020:49alidlorenzooh ok, it does make a difference then. this lifecycle wasn’t as clear in docs, so thanks for explaining#2020-06-2020:50alidlorenzo*cloud docs at least#2020-06-2201:16alidlorenzo@U09R86PA4 if entity predicates see the after-transaction value, then how can you use them to validate new entities?
i.e. if I want to validate that a new user's username/email do not exist, I need to do that before the transaction, otherwise an existing user's data could be upserted#2020-06-2201:18alidlorenzoi guess transaction functions can be used for new entity data, and user predicates to check transactions on existing entities; though it does feel odd that these two use-cases would be segmented like that, am I missing something?
for example, i can run this transaction twice, first time it creates user, second time it upserts it
(d/transact conn
  {:tx-data [{:user/username "admin"
              :user/email "
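Whether that second transact upserts or aborts depends on how the unique attribute is declared, as discussed below; an illustrative schema sketch (not the thread's actual schema):

```
;; :db.unique/identity makes :user/username upserting -- transacting the
;; same value again updates the existing entity. With :db.unique/value,
;; a second assertion of the same value on a new entity aborts instead.
[{:db/ident       :user/username
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]
```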
#2020-06-2201:39favilaIt will only upsert if one of those attrs is marked as upserting#2020-06-2201:39favilaDb.index/identity#2020-06-2201:40alidlorenzoah ok, that must be the bit i’m missing. I’ll look more into that in docs; thanks!#2020-06-2201:40favilaOtherwise it will create a new entity or abort if marked db.index/unique#2020-06-2201:46alidlorenzo^^ yea, I’ve been doing :db.unique/identity` instead of :db.unique/value - the former causes upserts 😅#2020-06-2201:59favilaReally it’s safer to always name the entity you are intending to manipulate. Use db/id with a lookup ref for update, tempid for create#2020-06-2020:24erik@lanejo01 yes, aware of that. but event sourcing is not quite the same as what Kafka does... Kafka is event streaming#2020-06-2021:25alidlorenzofor those that reinstall schema every time on startup, how do you handle cardinality many attributes?
specifically, changing attribute specs
e.g. if I change my entity’s :db.attr/preds from 'db.attr-preds/foo1 to db.attr-preds/foo2
my initial assumption was that reinstalling the schema would replace its predicate attribute, but because predicates are cardinality many the new one is added on.
is this a case when reinstalling schema every time stops working, or are there workarounds?#2020-06-2206:32steveb8nQ: I’d like to hear peoples tip/tricks for tuning queries from prod metrics. I know I’m going to need this so I’ll start with what I’m planning and what I wish existing….#2020-06-2206:33steveb8nto start I’m gonna keep a metric for every distinct query run, time to execute, number of entities returned etc. this should give me a good signal for poorly tuned queries#2020-06-2206:34steveb8nSince bad queries are often due to poorly ordered where clauses, I wonder if there is a way to include total number of entities scanned? comparing this to number returned would be a really strong signal#2020-06-2206:35steveb8nany other tricks?#2020-06-2208:16steveb8nI’ve also been pondering an auto-tune capability. if you took every query and ran it with one :where clause at a time and sorted by the result counts, that should give the best where ordering for prod data distribution. only problem is these queries would consume massive CPU so would need a separate query group.#2020-06-2211:19Joe Lane@U0510KXTU
1. Make a wrapper namespace for the query namespace and add the instrumentation (timing, cardinality, etc) there. I've seen projects which ship up to honeycomb using a homegrown lib, but the concept is generic. I HIGHLY RECOMMEND creating some notion of trace-id chaining, either via a hierarchy or some other means (e.g. request creates trace-id 123, which is the parent of trace-id 234 created by a query and then a sibling trace-id 345 is made to do some side-effect, all with their own instrumentation numbers that can also be rolled up). It's extremely valuable to see the whole lifecycle of a request, including all the queries and external calls it performs (datomic or otherwise)
2. I think I remember you being on cloud, so another thing to think about is client-cloud vs ions. They each have different tradeoffs but with ions you get the locality advantage.
3. I don't know of any way to include the number of entities scanned other than re-running the query a few times building up the clauses and magically knowing how to replace the :find clause args with a call to count . That being said, if your queries are fairly static (vs dynamically computed from a client request) you could probably build a tool to accomplish this. (d/db-stats db) is your friend here. Also, there is this tool which may be sufficient, or, at least a great starting point for your "auto-tuner".
4. Try to avoid using massive pull-patterns in your :find clauses. Pull's do joins like :where clauses do, but can have subtle and confusing performance semantics, especially when the data cardinalities change out from under you (like in high traffic production environments).
5. Look at some of the new query tools in the latest release such as qseq and :xform in pull.
Those are the first 5 off the top of my head, LMK if you want to go deeper on any of them.#2020-06-2222:44steveb8nThanks @U0CJ19XAM this is good stuff. I already have a middleware layer in front of all db calls. it’s a successor to https://github.com/stevebuik/ns-clone#2020-06-2222:45steveb8nI also use x-ray so I have the some of the tools you mention in place. reading that article has given me some ideas though.#2020-06-2222:46steveb8nultimately, it’s exceptional queries I want to see, not all of them. so my “signal” vs noise is what I’m currently focused on.#2020-06-2222:46steveb8nI didn’t know about the large pull behaviour. I am doing this so I’ll dig deeper there. Thanks.#2020-06-2213:20Joe LaneRelevant https://www.honeycomb.io/blog/so-you-want-to-build-an-observability-tool/#2020-06-2216:57unbalanceddoes anyone know if mariadb is a supported persistence solution?#2020-06-2217:02ghadi@goomba https://docs.datomic.com/on-prem/storage.html#sql-database#2020-06-2217:02ghadiyes#2020-06-2217:02ghadi> If you want to use a different SQL server, you'll need to mimic the table and schema from one of the included databases. #2020-06-2217:02ghadiThe mysql one should work for maria, I think#2020-06-2217:04unbalancedmuch appreciated 🙇#2020-06-2217:06unbalancedand I suppose that the sql-url and sql-driver-class attributes in the config will inform the transactor of the correct jar to load, and that I should do the same for the peer?#2020-06-2217:12ghadinot sure, but I think if you have the correct url & the jar on the classpath, it will auto discover the correct class#2020-06-2305:49jwkoelewijnHi, we are running an on-prem Datomic installation, with two memcached servers. My question is, are these 2 memcached servers redundant? 
In other words, could I take one offline, upgrade the underlying machine and bring it back online without any hickups?#2020-06-2311:50favilaDonno about hiccups, but the memcached servers are not replicas: each has half the segments#2020-06-2311:52favilaBut this is really two different questions#2020-06-2311:52favilaWhat you really want to know is how will peers behave when a memcached becomes unreachable#2020-06-2311:55favilaI know everything will still work but I’m unsure if there will be extra blocking timeouts added to peer work or not.#2020-06-2312:56marshallthere will not be blocking timeouts#2020-06-2312:56marshallthe memcache response timeout is very short#2020-06-2312:57marshallif it doesn’t return within that very short window the peer will go to storage instead#2020-06-2407:15jwkoelewijnThanks a lot for the explanations! helped in my understanding!#2020-06-2319:01erikwhat is the client API equivalent of subscriptions/HTTP SSE?#2020-06-2415:17unbalancedok this is a really odd question but ... apparently the storage service they want me to use has a READ port and a WRITE port. I've never seen this before. I'm curious if anyone has any thoughts on how this might be accomplished#2020-06-2415:22favilaBy “WRITE” do you mean “READ+WRITE”?#2020-06-2415:23favilapeers only need read; transactor needs read+write#2020-06-2415:23favilawhat kind of storage is this?#2020-06-2415:27unbalancedcorrect on the READ+WRITE#2020-06-2415:27unbalancedmariadb on something called a galera cluster#2020-06-2415:27unbalanceddoes that just mean I pass the read string to the peers?#2020-06-2415:29unbalancedso if I understand this correctly, I pass the "read uri" to the peer. Peer looks up in storage location of transactor. It hands off transactions to transactor and does queries from read? Is that the theory?#2020-06-2415:32favilacorrect. peers do not write#2020-06-2415:33favilaonly the transactor writes#2020-06-2415:33unbalancedgot it. 
So it won't cause a conflict that I pass in a different string than the transactor suggests on startup?#2020-06-2415:33unbalancedjust making sure it doesn't do some kind of validation#2020-06-2415:33favilano#2020-06-2415:33unbalancedlovely#2020-06-2415:33favilayou should understand why there are different ports though#2020-06-2415:34favilais this purely an access-control thing, or do they have different consistency guarantees between them? is the other port a read replica?#2020-06-2415:54unbalancedconsistency guarantees#2020-06-2415:54unbalancedreplica, yes. Something about performance#2020-06-2415:54unbalancedpolite thing to do would be to respect the setup, they said, so wanted to see if I could accomodate#2020-06-2416:27favilawhat I mean is, could the peer and transactor ever read different things using different ports?#2020-06-2416:28favilatransactor does a commit, and then informs peers. The peers need to be able to read what the transactor wrote#2020-06-2417:03unbalancedI see. Needs to be strongly consistent?#2020-06-2417:05unbalancedI'm not sure -- checking#2020-06-2417:29unbalanced"about 1 second" propagation time#2020-06-2417:59unbalancedOkay based on what I'm seeing here: https://docs.datomic.com/on-prem/architecture.html#2020-06-2418:00unbalancedIt looks like updates are sent directly from the transactor to the peer#2020-06-2418:00unbalancedso as long as it's sending the actual data and not, say, a lookup value then we should be fine#2020-06-2418:32favilaI am not a datomic dev, but it seems like at least some things must be lookups sometimes#2020-06-2418:32favilae.g., when a new index is finished#2020-06-2418:52unbalancedgotcha#2020-06-2415:18unbalancedI'm thinking if I had a peer service that only did upserts and another that only did reads, I could pass a write uri-string to the upsert service and a read uri-string to the query service, while having the transactor sit on the "write node".#2020-06-2415:18unbalancedDoes that pass a sniff 
test...?#2020-06-2415:28rolandHello, I saw that there is :reverse option in the index-pull function. Is there also an option to run through the index in reverse order using the datoms function ?#2020-06-2517:50tatutI see datomic cloud accepts frequencies as an aggregate in :find but I don’t see it documented, is that supported or some undefined behaviour? EDIT: doesn’t seem to return what I’d expect… weird that it’s accepted#2020-06-2518:11favilaI think you are accidentally using it as a custom aggregate: https://docs.datomic.com/on-prem/query.html#custom-aggregates#2020-06-2518:12favilaI think no-namespace symbols are interpreted as in clojure.core, so it happens to work#2020-06-2518:13tatutI haven’t seen anything in cloud docs about custom aggregates#2020-06-2518:19favilahttps://docs.datomic.com/cloud/query/query-data-reference.html#deploying#2020-06-2518:19favilabecause it’s cloud (query is running on a different process than yours) there’s usually stuff you need to do to expose your fn to the query code#2020-06-2518:20favilabut again, I think because it’s clojure core it just accidentally works#2020-06-2518:20favilaI can also use datomic.api functions in a client query when using peer-server#2020-06-2518:21tatutbut it didn’t work… it’s not returning what frequencies should#2020-06-2518:21favilawhat is it returning?#2020-06-2518:24tatutit didn’t get all the values I was expecting it to get… I’ll try it out again later if it should work#2020-06-2518:25tatutPerhaps I’m not understanding the cloud docs as they don’t mention the custom aggregates#2020-06-2518:27favilaThis is what I get:#2020-06-2518:28favila(dc/q '[:find (frequencies ?b)
        :with ?a
        :in $ [[?a ?b]]]
      db [[:a :A] [:a :Z] [:b :A] [:b :F]])
=> [[{:F 1, :A 2, :Z 1}]]#2020-06-2518:28favilaseems right?#2020-06-2518:31tatutthat looks right#2020-06-2518:32tatutmy frequencies is only getting one item so the result is always a mapping of {the-one-value 1}#2020-06-2518:34tatutbut thanks for the help, I’ll continue investigating later#2020-06-2518:45favilaif you don’t use :with, that is expected#2020-06-2518:45favilathe result is a set, thus every item occurs once#2020-06-2519:09jeff tanghi! is it possible to retract a reverse-loookup attribute-value? e.g. [:db/retract 100 :_children 200]#2020-06-2519:38favila[:db/retract 200 :children 100]#2020-06-2519:38favilait’s not possible with an underscore attribute#2020-06-2519:38favilayou have to reverse the terms#2020-06-2519:54jeff tangthank you @U09R86PA4#2020-06-2519:38JAtkinsIs it possible to respond to http ion requests with multiple "Set-Cookie" headers?
My response using just ring wrap cookies looks like this:
"Headers": {
"Content-Type": "application/transit+json; charset=utf-8",
"Set-Cookie": [
"jwt-token=eyJraWQiOiJOM0pRej--retracted--sdfw;Path=/;HttpOnly;SameSite=Strict"
]
}
This works in ring since for seqs the header is translated to this:
Content-Type: application/transit+json; charset=utf-8
Set-Cookie: jwt-token=eyJraWQiOiJOM0pRej--retracted--sdfw;Path=/;HttpOnly;SameSite=Strict
However the ion spec only allows maps of string->string to be returned, and there is no way to set multiple cookies with only one line in the header.#2020-06-2520:20souenzzohttps://github.com/pedestal/pedestal.ions/issues/3#2020-06-2520:28JAtkinsGenius - thanks!#2020-06-2807:32adamtaitThis fix works great on Ions with Solo deploy or via Lambda, but it’s failing for me with http-direct.
{
:status 200
:headers {
"content-type":"application/json",
"Set-cookie":"b=cookie",
"set-cookie":"a=cookie"}
:body "{\"data\": \"stuff\"}"
}
This response from the http-direct handler results in the HTTP response:
< HTTP/2 200
< content-type: application/json
< content-length: 264
< date: Sun, 28 Jun 2020 07:30:54 GMT
< x-amzn-requestid: c3bc8a56-c0bf-41d7-b6b3-24292a2b6509
< x-amzn-remapped-content-length: 264
< set-cookie: b=cookie
< x-amz-apigw-id: O1APLEtPIAMFkGw=
< x-amzn-remapped-server: Jetty(9.4.24.v20191120)
< x-amzn-remapped-date: Sun, 28 Jun 2020 07:30:53 GMT
< x-cache: Miss from cloudfront
… only a single ‘set-cookie’ header when received by the client.#2020-06-2807:39adamtaitI have also tried different variations of multiValueHeaders (which is supported by API Gateway) but the Ions HTTP direct wrapper seems to ignore those.
Would love to hear if anyone else has seen this issue or worked around it (or if it really is a bug)!#2020-06-2811:31souenzzoI do not use or recommend this case-sensitive solution. I just join the cookies with ;#2020-06-2901:43adamtait@U2J4FRT2T are you suggesting this?
:headers { "set-cookie": "a=cookie; b=cookie" }
I wasn’t able to find any documentation on combining multiple cookies in the same header but I tried it anyways and found that browsers ignore the 2nd cookie (`b=cookie` in this example).#2020-06-2901:43JAtkinsThat’s part of the browser spec. I tried that at first. A new line is required for every cookie. Maybe a \n is needed?#2020-06-2920:45adamtaitThanks for the idea! I wasn’t able to get \n to work.
I posted the header inconsistency (between :lambdas and :http-direct) to the datomic forum. Hopefully someone from the Datomic team will comment.
https://forum.datomic.com/t/inconsistency-between-lambdas-http-direct-multiple-headers/1506#2020-06-2520:41Kaue Schltz@pedro.silva#2020-06-2520:46Pedro SilvaHello,
I am executing the split stack process to be able to upgrade our Datomic version as described in:
https://docs.datomic.com/cloud/operation/split-stacks.html#delete-master
After starting the delete process in CloudFormation we get an error, as you can see in the image.
Can someone help us solve this problem so we can continue the process?
Thank you.#2020-06-2600:28souenzzoOne year ago I deployed a datomic-ions stack
After failing 3 times in a row I decided not to try updates anymore.
I’m really sad to see that they still fail at updates#2020-06-2607:23David Pham
Hello everyone :) in datomic, in the schema, how can you write that a combination of two keys is unique, like id and timestamp? With a tuple?#2020-06-2612:21marshallYep, a tuple: https://blog.datomic.com/2019/06/tuples-and-database-predicates.html#2020-06-2607:24David PhamDoes anyone have some suggestion how to implement an entity containing several timeseries? Or how to model time series?#2020-06-2612:24marshallThe modeling decision here is somewhat up to you, but one option is that each entry in the time series is an individual entity with an ordinal (or time) attr and your "parent" entity has a cardinality many reference to the set of them.#2020-06-2612:24marshallIf the set of time series entries is always small (<= eight) you could use a tuple#2020-06-2612:26marshallIf you never want to introspect the individual entries but will only consume the whole timeseries all together you could also store that data elsewhere in a LOB (like s3) and just store a reference to it in datomic#2020-06-2620:11David PhamThanks a lot!#2020-06-2607:25David PhamI am sorry if it sounds trivial, but I am starting with data script.#2020-06-2719:14nicolaHello, datomic users. Is it common to create datomic schema on fly, when loading unknown data? Any recommendations?#2020-06-2807:35adamtait#2020-06-2818:43zhuxun2Is it true that in Datomic there's not a concept of "not null" as there is in SQL and we just have to assume that every attribute can be missing?#2020-06-2818:48zhuxun2Hmmm.. I guess required attributes is the counterpart I am looking for
https://docs.datomic.com/on-prem/schema.html#required-attributes#2020-06-2904:38raspasov@zhuxun2 there’s also missing? https://docs.datomic.com/on-prem/query.html#missing#2020-06-2916:17joshkhare these two constraints equivalent when finding entities that are missing an attribute?
{:where [[(missing? $ ?n :item/sale?)]]}
{:where [(not [?n :item/sale?])]}
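Both forms above can be compared on a scratch db (a sketch using the on-prem peer API with an in-memory connection; :item/name is a hypothetical attribute added for illustration):

```clojure
(require '[datomic.api :as d])

(def uri "datomic:mem://scratch")
(d/create-database uri)
(def conn (d/connect uri))
@(d/transact conn [{:db/ident :item/sale?
                    :db/valueType :db.type/boolean
                    :db/cardinality :db.cardinality/one}
                   {:db/ident :item/name
                    :db/valueType :db.type/string
                    :db/cardinality :db.cardinality/one}])
@(d/transact conn [{:item/name "on-sale" :item/sale? true}
                   {:item/name "no-flag"}])

;; Both clauses find the entity with no :item/sale? assertion:
(d/q '[:find ?name :where
       [?n :item/name ?name]
       [(missing? $ ?n :item/sale?)]]
     (d/db conn))
(d/q '[:find ?name :where
       [?n :item/name ?name]
       (not [?n :item/sale?])]
     (d/db conn))
;; both => #{["no-flag"]}
```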
#2020-06-2916:20favilaYeah, pretty much. missing? is a function call; not should be visible to the query planner. I don’t know if the query plan is different in any important way.#2020-06-2916:20favilahistorical note, not came later#2020-06-2916:22favilamissing? probably doesn’t work on datasources which are not databases, but I don’t know that for sure#2020-06-2916:27joshkhcool, and as always thanks#2020-06-3015:36JAtkinsDo datomic ions have api documentation?#2020-06-3016:07kennyNo 😞 The best you'll find is https://docs.datomic.com/cloud/ions/ions-reference.html#2020-06-3016:16JAtkinsI found my answer there, but that is a pain...#2020-06-3016:41Joe LaneWhat kind of "api documentation" did you have in mind?#2020-06-3016:43JAtkinsSomething like a doc string on every function so I don't have to hunt around when reading my code...#2020-06-3016:44kennyAlso the equivalent of this https://docs.datomic.com/client-api/datomic.client.api.html#2020-06-3016:44Joe LaneCan you give me an example? Do you mean on your ions or like what kenny just posted above?#2020-06-3016:47JAtkinsWhat kenny posted. For e.g. the (get-env) function is totally blank for docstrings. It would be nice to at least have a link to the reference, better yet a permalink to the configuration section, even better than that a synopsis + a link#2020-06-3016:48Joe LaneThanks for clarifying.#2020-06-3016:50JAtkinsNP. I've just found myself very often in the last week trying to decode my ion setup. It's mostly fine when I'm in the middle of everything, since the docs are up and I remember where to look.
But on reviewing the code it becomes much slower.#2020-06-3015:56Richardhi - trying to get existing Datomic setup moved from Docker Swarm to Kubernetes#2020-06-3015:57Richardwondering if there are any articles or blog posts on setting up networking so that the peers can connect to the transactor (all in same Kubernetes cluster)#2020-06-3015:59RichardI found this Kubernetes YAML which suggests you can set the port numbers for transactor: https://clojurians-log.clojureverse.org/datomic/2017-03-19/1489953464.521402#2020-07-0118:09genekimHello! I’m wondering if I can get some help with my Datomic Cloud instance that seems to have gone south — in fact, I’m on a call with @plexus trying to puzzle this out.
1. I’m getting “channel 2: open failed: connect failed: Connection refused” errors on the proxy, when a Datomic Client tries to access the Datomic Cloud instance.
2. In AWS CloudWatch, I see the following alarm, which occurred very close to when we started seeing Datomic connection errors occurring.
Can anyone propose any recommendations? @plexus, any other data worth sharing? (Sorry, gotta pop off for 30m. Thank you, all!)#2020-07-0118:11genekimError I get from a REPL connection:
Execution error (ExceptionInfo) at datomic.client.impl.cloud/get-s3-auth-path (cloud.clj:178).
Unable to connect to localhost:8182
#2020-07-0118:20plexusreading some more AWS docs it seems we have exceeded the allocated write throughput, which is supposed to only cause throttling, but instead the datomic instance has gone under or become unreachable...#2020-07-0118:23ghadicheck your cloudwatch datomic dashboard#2020-07-0118:23ghadishould have a clear smoking gun#2020-07-0118:24ghadiif you have any Alerts (not just "Events") in that dashboard, look at those too by navigating to cloudwatch logs#2020-07-0118:24genekimThank you @U050ECB92 — is this the dashboard? (Sorry, on a call…. 🙂#2020-07-0118:25ghadiyes, weird that it's mostly empty#2020-07-0118:25ghadiwhat about the bottom half of that dash?#2020-07-0118:28genekimWas empty — full screenshot here:#2020-07-0118:29genekim(Empty dashboard was the reason I was asking Datomic team at Conj 2019 about getting help upgrading last year, which I never got around to.)#2020-07-0118:31marshallthe alarm you posted is irrelevant - that is used by DDB for autoscaling capacity#2020-07-0118:31marshallyou should restart your solo compute instance#2020-07-0118:32marshallyou can just bounce it from the EC2 console#2020-07-0118:32marshall@U6VPZS1EK#2020-07-0118:32marshallhttps://docs.datomic.com/cloud/troubleshooting.html#troubleshooting-solo#2020-07-0118:33marshall#2020-07-0118:33marshall^ solo dashboard should look like that#2020-07-0118:34marshallyour instance and/or JVM got wedged and b/c solo is not an HA system there is nothing to fail-over to#2020-07-0118:34marshallquickest fix is to terminate the instance and let ASG create a new one#2020-07-0118:34genekimRoger that! Will try in 30m as soon as I get off this call! Thx!#2020-07-0118:34marshall👍#2020-07-0119:04genekimPosting this datomic log event, before I destroy the solo instance:
2020-06-25T22:54:28.953-07:00
{
"Msg": "RestartingDaemonException",
"Ex": {
"Via": [
{
"Type": "clojure.lang.ExceptionInfo",
"Message": "Unable to load index root ref bd9b3c36-2912-437d-8fc7-6953ab60a1b2",
"Data": {
"Ret": {},
"DbId": "bd9b3c36-2912-437d-8fc7-6953ab60a1b2"
},
"At": [
"datomic.index$require_ref_map",
"invokeStatic",
"index.clj",
843
]
}
],
"Trace": [
[
"datomic.index$require_ref_map",
"invokeStatic",
"index.clj",
843
],
[
"datomic.index$require_ref_map",
"invoke",
"index.clj",
836
],
[
"datomic.index$require_root_id",
"invokeStatic",
"index.clj",
849
],
[
"datomic.index$require_root_id",
"invoke",
"index.clj",
846
],
[
"datomic.adopter$start_adopter_thread$fn__21647",
"invoke",
"adopter.clj",
67
],
[
"datomic.async$restarting_daemon$fn__10442$fn__10443",
"invoke",
"async.clj",
162
],
[
"datomic.async$restarting_daemon$fn__10442",
"invoke",
"async.clj",
161
],
[
"clojure.core$binding_conveyor_fn$fn__5739",
"invoke",
"core.clj",
2030
],
[
"datomic.async$daemon$fn__10439",
"invoke",
"async.clj",
146
],
[
"clojure.lang.AFn",
"run",
"AFn.java",
22
],
[
"java.lang.Thread",
"run",
"Thread.java",
748
]
],
"Cause": "Unable to load index root ref bd9b3c36-2912-437d-8fc7-6953ab60a1b2",
"Data": {
"Ret": {},
"DbId": "bd9b3c36-2912-437d-8fc7-6953ab60a1b2"
}
},
"Type": "Alert",
"Tid": 6306,
"Timestamp": 1593150867958#2020-07-0119:07genekim#2020-07-0119:23marshallThanks, although that shouldn’t cause a significant issue#2020-07-0119:29genekimOkay, terminated the datomic instance, which didn’t work… terminated the bastion-host instance, which didn’t work…
terminated the datomic proxy script, and restarted… forced some sort of reauthentication, which did work!
Thank you, all!#2020-07-0119:31genekim🙏🙏🙏
🎉🎉🎉#2020-07-0119:51marshallyou’d definitely need to restart the proxy script after restarting the bastion instance#2020-07-0119:52marshallIIRC it regenerates creds/keys after coming back from termination#2020-07-0119:30genekimThank you for the help, all! Described resolution of story at end of thread ^^^.#2020-07-0120:00zhuxun2Is it possible to implement a correct task queue in Datomic? Most importantly, ensure that multiple task retrievers won't get the same task from the top of the queue. (In PostgreSQL for example I needed to use LOCK FOR UPDATE)#2020-07-0120:09Joe LaneIt's certainly possible to make a queue out of datomic, but why not just use an actual queue?
I also don't necessarily think it's a good idea to use datomic as a queue, depending on the throughput, failure semantics, and data retention you need.#2020-07-0120:35zhuxun2@U0CJ19XAM Good point. I am looking into https://github.com/Factual/durable-queue as well.#2020-07-0120:35Joe LaneWhy not sqs?#2020-07-0120:46zhuxun2Actually, I just realized a queue might not satisfy what I need. There isn't a static queue. Tasks have priorities and they might be changed dynamically. Every task retriever grabs the top-priority job from the database at the moment it accesses the database. Is there a established solution or pattern for something like that?#2020-07-0120:49Joe LaneDepends on the domain, if this is something for humans (like a Jira / Trello clone) then this is easy. If this is for machines, it depends on your throughput, scale, and failure modes.#2020-07-0120:50Joe LaneThat being said, you may be interested in https://github.com/clojure/data.priority-map#2020-07-0120:51Joe Laneand / or https://github.com/clojure/data.avl/#2020-07-0121:01zhuxun2The job retrievers are machines. I don't think an in-memory solution would work well for my particular case, plus, the tasks and their attributes (from which to compute the priority) are already stored in a datomic database so I that's why I was wondering if there's some sort of locking mechanism between querying and updating...#2020-07-0121:02zhuxun2The performance of the priority sorting isn't that much of a problem, at the moment an index on the priority attribute should work well enough#2020-07-0121:05zhuxun2In other words, is there a way to say "change the first item satisfying my query to have attribute [:task/taken true]" -- all within an atomic transaction#2020-07-0121:08Joe LaneYes, via a transaction function, but I don't think it's going to work out well in the end. What happens once a task is taken but then the task retriever dies? What are your retry policies? 
How do you distinguish between a slow job and a failed job?#2020-07-0121:11Joe LaneDo you have different levels of prioritization like low, medium and high, or is everything prioritized globally? If you can do the former, I think SQS with a queue per level is likely a better approach.#2020-07-0121:11Joe LaneBecause it handles all these things for you#2020-07-0121:14zhuxun2Thanks. That makes sense. What if I'm not using a standard cloud service? Can Kafka serve a similar purpose?#2020-07-0121:17Joe LaneI'd look at rabbitMQ, kafka is a durable log.#2020-07-0121:18Joe Lane(It could do this as well, but may be more difficult to operate. Again, I know nothing of your problem domain, scale, other constraints, etc. so it's hard to make a good recommendation)#2020-07-0121:22zhuxun2Thanks! I will take a look at rabbitMQ#2020-07-0122:46unbalancedI don't suppose there is any way to force a peer to use an alternative address than the one provided by the transactor (retrieved from storage), is there?#2020-07-0201:20favilaThe transactor properties file can have host and alt-host. Are two names not enough?#2020-07-0201:22favilaI’m not sure about ports. I dimly recall that you can specify port in the connection string, but that might only be for dev storage#2020-07-0122:47unbalancedor alternative port, at least?#2020-07-0122:48unbalancedi.e., transactor running on transactorUrl:4334 but is being reverse proxied at with appropriate firewall rules VPN etc etc#2020-07-0123:08unbalancedokay looks like we're able to change the port on the LB. still curious if this is possible, tho#2020-07-0212:50Kaue SchltzHi, there.
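The transaction-function approach mentioned above could be sketched like this on-prem (assumptions: :task/taken is a hypothetical boolean attribute, and the caller has already picked a candidate task id; this shows only the atomic claim, not retries or priority selection):

```clojure
(require '[datomic.api :as d])

;; A database function that claims a task only if it is still unclaimed.
;; Transaction functions run serially on the transactor, so two retrievers
;; can never both succeed in claiming the same task.
(def claim-task
  (d/function
   '{:lang :clojure
     :params [db task-eid]
     :code (if (:task/taken (datomic.api/entity db task-eid))
             (throw (ex-info "task already claimed" {:task task-eid}))
             [[:db/add task-eid :task/taken true]])}))

;; Install and use (sketch):
;; @(d/transact conn [{:db/ident :task/claim :db/fn claim-task}])
;; @(d/transact conn [[:task/claim task-eid]])  ; throws if already taken
```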
I have a ~3Billion datom database in datomic cloud and the need to add an AVET index seems more than reasonable#2020-07-0212:56faviladatomic cloud already value-indexes everything#2020-07-0212:56Kaue SchltzJust realized it#2020-07-0212:56Kaue SchltzThanks#2020-07-0212:50Kaue SchltzI was wondering how datomic will handle the creation of this new index#2020-07-0212:52Kaue SchltzWill it index the existing datoms? If so, would it be harmful from an operations standpoint?#2020-07-0212:59marciolHi, We are using Datomic Cloud, executing queries against a Database with approximately 3 Billion Datoms, but a trivial query is taking a long time to return, or it isn’t returning at all, raising a timeout exception.
With the query below we are trying to return all transactions from a merchant in a range of time, super trivial:
(d/q {:query '[:find (pull ?entity [*]
:in $
:where
[?entity :merchant-id "beb9c7db-a7eb-4e56-8c4e-4db195566562"]
[?entity :transaction-time ?transaction-time]
[?transaction-time :utc-time ?transaction-time-utc]
[(>= ?transaction-time-utc #inst"2020-06-01T03:00:00.000-00:00")]
[(<= ?transaction-time-utc #inst"2020-06-30T03:00:00.000-00:00")])]
:args [(d/db (:conn client))]
:timeout 50000})
We are running in query groups on i3.xlarge (with 30.5 GB RAM), and wondering whether we need to increase the size of these machines.
Can someone with more experience throw light on this?#2020-07-0213:14Ian Fernandezd/query with pull inside tends to be redundant sometimes, I think that d/pull will get better performance =)#2020-07-0213:20marshallusing pull in query should have the same performance characteristics as a query followed by a pull, except that in the case of using client/cloud using pull in query will save a round trip/wire cost#2020-07-0213:21Ian Fernandezit's datomic cloud w/o ions#2020-07-0213:22Ian FernandezI think d/pull will help in this case#2020-07-0213:22marshallyou should test the time it takes to query just for the entity IDs and how long it takes to pull the attributes of interest#2020-07-0213:22marshallto determine if the pull or the query is taking the majority of the time#2020-07-0213:42Ian Fernandezit can be a problem to use this w/o Ions with too many entities on cloud?#2020-07-0213:46Guilherme PupolinHi @U05120CBV, these are the queries and execution times
First: PULL + 30 days interval = 110578.733438 msecs (14,054 results)
Second: Entity + 30 days interval = 22990.008083 msecs (14,054 results)
(time (def pull-entities (d/q {:query '[:find (pull ?entity [*])
:in $ ?merchant-id ?transaction-time-start ?transaction-time-end
:where
[?entity :merchant-id ?merchant-id]
[?entity :transaction-time ?transaction-time]
[?transaction-time :utc-time ?transaction-time-utc]
[(>= ?transaction-time-utc ?transaction-time-start)]
[(<= ?transaction-time-utc ?transaction-time-end)]]
:args [(d/db (:conn client))
"beb9c7db-a7eb-4e56-8c4e-4db195566562"
#inst"2020-06-01T03:00:00.000-00:00"
#inst"2020-06-30T03:00:00.000-00:00"]
:timeout 50000})))
"Elapsed time: 110578.733438 msecs"
=> #'pgo.commons.datomic-test/pull-entities
(count pull-entities)
=> 14054
(time (def pull-entities (d/q {:query '[:find ?entity
:in $ ?merchant-id ?transaction-time-start ?transaction-time-end
:where
[?entity :merchant-id ?merchant-id]
[?entity :transaction-time ?transaction-time]
[?transaction-time :utc-time ?transaction-time-utc]
[(>= ?transaction-time-utc ?transaction-time-start)]
[(<= ?transaction-time-utc ?transaction-time-end)]]
:args [(d/db (:conn client))
"beb9c7db-a7eb-4e56-8c4e-4db195566562"
#inst"2020-06-01T03:00:00.000-00:00"
#inst"2020-06-30T03:00:00.000-00:00"]
:timeout 50000})))
"Elapsed time: 22990.008083 msecs"
=> #'pgo.commons.datomic-test/pull-entities
(count pull-entities)
=> 14054 #2020-07-0213:54Guilherme PupolinAnd one more case, without filter time:
Third: Entity + without interval = 3768.019134 msecs (17,670 results)
(time (def pull-entities (d/q {:query '[:find ?entity
:in $ ?merchant-id
:where
[?entity :merchant-id ?merchant-id]]
:args [(d/db (:conn client))
"beb9c7db-a7eb-4e56-8c4e-4db19556656"]
:timeout 50000})))
"Elapsed time: 3768.019134 msecs"
=> #'pgo.commons.datomic-test/pull-entities
(count pull-entities)
=> 17670#2020-07-0214:36Joe LaneThis is the time to get the data back to your development computers, right?#2020-07-0214:36Guilherme PupolinRight! @U0CJ19XAM #2020-07-0214:38Joe LaneWhere are you located, which AWS_REGION are your non-ion machines located, and which AWS_REGION is your datomic cloud cluster deployed in?#2020-07-0214:44Guilherme PupolinFor these examples, I connected using datomic-cli from São Paulo in the cluster at us-east-1.
In the production environment, it has a VPC Endpoint connecting our applications to Datomic in us-east-1.#2020-07-0214:51Joe LaneSo I understand, in prod, your datomic cluster is in us-east-1, and your applications connect to it from which AWS_REGION? Where are the machines themselves?#2020-07-0214:52Guilherme PupolinIn prod, both in us-east-1#2020-07-0214:57Joe LaneHave you gone through the "Decomposing the query" Example marshall posted?#2020-07-0215:35Guilherme PupolinYes, I have. But I couldn’t improve any more than that (I got a better result just passing the merchant-id, I do not know a way to search better on this date ref).
[?entity :merchant-id ?merchant-id]
[?entity :transaction-time ?transaction-time]
[?transaction-time :utc-time ?transaction-time-utc]
[(>= ?transaction-time-utc ?transaction-time-start)]
[(<= ?transaction-time-utc ?transaction-time-end)]
#2020-07-0216:09marciol@U05120CBV we noticed that what is hurting the query performance are all clauses related to time.#2020-07-0216:14Joe Lane@marciol Can you typehint the query clauses with ^java.util.Date#2020-07-0216:14marciolHmm, good idea @U0CJ19XAM
cc: @U016FDZFA2X#2020-07-0216:22Guilherme Pupolin@U0CJ19XAM in this way?
(d/q {:query '[:find ?entity
:in $ ?merchant-id ^java.util.Date ?transaction-time-start ^java.util.Date ?transaction-time-end
:where
[?entity :merchant-id ?merchant-id]
[?entity :transaction-time ?transaction-time]
[?transaction-time :utc-time ?transaction-time-utc]
[(>= ?transaction-time-utc ?transaction-time-start)]
[(<= ?transaction-time-utc ?transaction-time-end)]]
:args [(d/db (:conn client))
"beb9c7db-a7eb-4e56-8c4e-4db195566562"
#inst"2020-06-01T03:00:00.000-00:00"
#inst"2020-06-30T03:00:00.000-00:00"]
:timeout 50000})))#2020-07-0216:23Joe Lanehttps://docs.datomic.com/cloud/query/query-data-reference.html#calling-java-methods#2020-07-0216:24Joe LaneAlthough, it may not make a difference because you are using the custom comparators <= and >=.#2020-07-0216:27Joe LaneWhy did y'all decrease the timeout from 60 seconds to 50 seconds?#2020-07-0216:49marshall@U016FDZFA2X what are the schema definitions for all the attributes in the query#2020-07-0217:15Kaue Schltz#2020-07-0217:15Kaue SchltzThis is the one#2020-07-0217:15Kaue SchltzFrom what we know so far, the issue lies in the time nesting#2020-07-0217:15Kaue Schltz@U05120CBV#2020-07-0213:00Kaue SchltzIt takes around 50s to retrieve 14k results#2020-07-0213:02Kaue Schltz• We sliced the db using d/since without much improvement#2020-07-0213:04marciolIt’s a lot of time to return the result of such trivial query, must be something we can do to decrease this time.#2020-07-0213:05favilacould it also be the pull * and not the query itself?#2020-07-0213:07favilayour find looks odd (missing close paren). is that the whole thing?#2020-07-0213:19marciol@U016FDZFA2X#2020-07-0213:21marshall@marciol Have you separately tested the time it takes to query for the entity IDs and to pull the attributes from them#2020-07-0213:23marciol@U05120CBV we are going to do all this to get a more fine grained overview of what is happening.#2020-07-0213:23marshallalso review the decomposing a query example#2020-07-0213:04marshall@marciol @schultzkaue https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/decomposing_a_query.clj
have you worked through the decomposing-a-query example?#2020-07-0216:34zhuxun2What is the idiomatic way to answer the question "when was this attribute last changed"?#2020-07-0216:37ghadibind the ?tx (the fourth component) in a clause, then join to the ?tx's :db/txInstant#2020-07-0216:37ghadi[?e :myAttr _ ?tx]
[?tx :db/txInstant ?time]
#2020-07-0216:53zhuxun2Is there a equivalent of "sort by" in datomic?#2020-07-0216:54zhuxun2Seems not: https://stackoverflow.com/a/30205147#2020-07-0217:03zhuxun2Then how do I query for something like top 10 entity with respect to an attribute? Do I have to query for all of them and then do a client-side sorting?#2020-07-0217:07favilaTake a look at d/seek-datoms and d/index-range on the peer api, and index-seek and index-pull on the peer#2020-07-0217:08favilayou could also try abusing nested queries a bit. You can call normal clojure code inside a query, so you could have an inner query that gets all results, and an outer query that sorts and limits them#2020-07-0217:31zhuxun2I guess by nested queries you mean something like this, right?
https://docs.datomic.com/cloud/query/query-data-reference.html#q#2020-07-0217:32favilayes#2020-07-0217:04zhuxun2There must be a better way ...#2020-07-0221:40zhuxun2Can I have an attribute storing an unspecified EDN?#2020-07-0221:40zhuxun2Or an unspecified nested data structure consisting of maps, vectors, and any of the supported basic types as leaves (i.e., a JSON-like structure)#2020-07-0223:21souenzzo@zhuxun2 you can use pr-str and store as string#2020-07-0223:46marshall@zhuxun2 Datomic is not intended for storing LOBs. You should avoid putting large objects directly in Datomic. Either split them into individual facts (datoms) or store the LOBs somewhere else (ddb, s3, etc) and store a reference to them in Datomic#2020-07-0309:54jeroenvandijk@U05120CBV Do you have a rule of thumb for when a string becomes too big and should be considered a LOB?#2020-07-0318:27ilshad@U0FT7SRLP 4Kb is the limit for strings#2020-07-0308:54Adrian SmithIs there a learning website with a collection of SQL queries and their Datomic equivalents?#2020-07-0315:59dvingonot sure about a comparison (of sql and datalog), but this tutorial is quite good
http://www.learndatalogtoday.org/#2020-07-0316:51ertugrulcetinhey guys, I'm considering using Datomic Cloud, it seems that datomic.client.api does not have all functions that on-prem datomic.api does. Like datomic.api/entity, would it be a problem? Is there any alternative if I need to use this function?#2020-07-0319:09Kaue SchltzHi there @U0UL1KDLN we're using datomic cloud, and most of the time we use the pull api when we already have the entity id#2020-07-0319:19ertugrulcetin@UNAPH1QMN thank you for the info#2020-07-0609:54Linus EricssonDatomic Cloud and datomic on prem work quite differently. The Entity API expects the application to be able to cache data locally (like a Datomic Peer does), otherwise it has to do a lot of roundtrips to get an entity API working, which would defeat the purpose of an entity view since it would be very slow and ALWAYS hits the n+1 problem - you have to do more roundtrips to the server to get more data. The entity API is meant to be a very quick way to navigate around the database.
You can get a similar (but not complete!) way of navigating parts of the database using pull expressions, as Kaue describes above.#2020-07-0610:43ertugrulcetin@UQY3M3F6D thank you so much!#2020-07-0319:05Kaue SchltzI was wondering, if I were to shard my datomic cloud or to split my data in any way, would it make any sense to just create different dbs?
something like
(d/create-database client {:db-name "smurfs-1"})
(d/create-database client {:db-name "smurfs-2"})
#2020-07-0319:05Kaue Schltzthen query one or the other#2020-07-0319:06Kaue SchltzI'd rather have to query 500M datoms over one db or the other#2020-07-0319:06Kaue Schltzthan to query 1B in a single db#2020-07-0319:07Kaue Schltzdoes that make any sense from an architectural standpoint?#2020-07-0322:46Jon WalchI'm using Datomic Cloud
My data model is similar to
{:user/foo "foo"
:user/other-one "hi"
:user/bar [{:bar/bazed? true} {:bar/bazed? false}]}
In one query, I want to pull :user/foo :user/other-one and everything in :user/bar where :bar/bazed? is false.
The issue that I'm running into is that I want :user/foo and :user/other-one no matter what, but if there are no :bar/bazed? that are false, the whole query returns an empty vector because of implicit joins.
I'm currently doing what I need with two queries, but this path is extremely hot so I'd like to reduce it to one. I also don't want to pull all of :user/bar because it could be extremely large, whereas the number of items in :user/bar with :bar/bazed? equal to false will be quite small.#2020-07-0415:03favilaI think you are asking for a list of maps of users where each user only contains the bar entities where bar/bazed? is false?#2020-07-0415:03favila(does it specifically have to be false, or is unasserted the same?)#2020-07-0415:04favilaYou can do it with a nested query and joining yourself#2020-07-0415:07favilaor issue two queries in parallel and join yourself#2020-07-0415:09favilaI recommend adding the false bars to the user map under a different name so the keyword has a globally-unique meaning.#2020-07-0519:52Jon Walch@U09R86PA4 I'm looking for a specific user. I have a unique attribute to look them up with. I want the user no matter what, but I also want everything in user/bar where bar/bazed? is false. If no bar/bazed? is false, I want user/bar to be returned as an empty vector#2020-07-0520:25Jon WalchI think I may go the async query route instead of trying to do it all in one blocking query#2020-07-0609:59Linus EricssonI think you should consider changing the boolean to an additional reference between the user and the data map object instead. This way the structure of the database helps you retrieve the correct data. It makes the change of the boolean data a bit more complicated, but it sounds like it would be worth it in this case.
So for instance:
user -> :mail/inbox #{all mail in the inbox}
user -> :mail/unread-in-inbox #{the unread mails from the inbox}
obviously one has to update both the inbox and unread-in-inbox when removing an unread email but it can still be a simpler solution for you.
you can also just have two different attributes
:mail/inbox and :mail/unread and query for where both links exist. The :mail/unread could then be sort of isomorphic with :bar/bazed? in your example above.#2020-07-0617:55Jon Walch@UQY3M3F6D Thanks for weighing in! That's a good suggestion!#2020-07-0411:55ashnurHi, continuing from here: https://clojurians.slack.com/archives/C053AK3F9/p1593859141412800
I cloned the ion-starter repo and I am reading https://docs.datomic.com/cloud/ions/ions-tutorial.html#orge88f23e
I have to make sure I have installed the ion-dev tools. So I changed the config files according to that documentation. Does that constitute making sure? Is there a programmatic test I could run to check that changing the files actually installed things?#2020-07-0413:11alexmillerI know you’ve tried some stuff - can you roll back as close as you can to the tutorial and then share your error message?#2020-07-0413:42souenzzo@jonwalch
[:find (pull ?e [:user/foo :user/other-one {:user/bar [*]}]) :where [?e :user/bar ?bar] [?bar :bar/bazed?]] ?#2020-07-0519:57Jon WalchThe issue here is that I also need to filter on :user/foo to make sure I'm getting the correct user.#2020-07-0520:07Jon WalchYeah just tried a version of this. I get no results for the entire query if nothing in :user/bar has :bar/bazed? set to false.#2020-07-0413:57ashnurAfaik, I am on par with the tutorial, I did the edits described there. But actually I started from aws marketplace first because I know less about aws than about datomic. (much much much less).#2020-07-0413:57ashnurHowever there isn't an error message unless I try to do something more, but I am not sure what I should try. I know that some things work and one thing doesn't.#2020-07-0413:59ashnurLet me create some gists so you can see both.#2020-07-0414:08ashnurOne thing that makes it difficult to roll back is that I never knew I had a m2/settings file and similar things : )#2020-07-0414:09ashnurRight now I have to admit that nothing works, not even what worked in the morning, I get Error building classpath. Could not find artifact com.datomic:ion:jar:0.9.43 in central () to every command I try#2020-07-0414:56souenzzo@ashnur check your ~/.m2/settings.xml#2020-07-0417:14ashnurObviously, it would be nicer if I knew what to check on it, like does it have exactly 137 bytes in it or what 🙂
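For Jon Walch's requirement above, favila's "issue two queries and join yourself, under a different key" advice could look roughly like this (a sketch assuming the client API; :user/id is a hypothetical unique attribute and :user/unbazed-bars a hypothetical result key):

```clojure
(require '[datomic.client.api :as d])

;; Always return the user's own attrs, plus only the bars whose
;; :bar/bazed? is false, under a distinct key so its meaning is unambiguous.
(defn user-with-unbazed-bars [db user-id]
  (let [user (d/pull db [:user/foo :user/other-one] [:user/id user-id])
        bars (d/q '[:find (pull ?bar [*])
                    :in $ ?u
                    :where
                    [?u :user/bar ?bar]
                    [?bar :bar/bazed? false]]
                  db [:user/id user-id])]
    ;; bars is empty when no bar has :bar/bazed? false, so the user
    ;; is returned either way.
    (assoc user :user/unbazed-bars (mapv first bars))))
```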
https://gist.github.com/ashnur/62a62afa1c538c249110cfc0202b524a#2020-07-0415:02ashnurI did#2020-07-0415:21pvillegas12I am trying to do a recursive query
[:person/firstName :person/lastName {:person/friends ...}]
However, I would like to impose a condition on the recursion. For example, if the friend has the name “bob”. It looks like the recursion is on the read side of the pull, but I was wondering if there is some way to do recursion and have a condition on the friends (recursion attribute)?#2020-07-0418:15Joe Lane@ashnur Are you still running into issues?#2020-07-0418:17ashnurThey haven't been resolved yet#2020-07-0418:23Joe LaneIn my ~/.clojure/deps.edn I have these entries for my maven repos and an ion-dev alias
:aliases {:ion-dev {:extra-deps {com.datomic/ion-dev {:mvn/version "0.9.265"}}
:main-opts ["-m" "datomic.ion.dev"]}}
:mvn/repos {"datomic-cloud" {:url ""}
"central" {:url ""}
"clojars" {:url ""}}#2020-07-0418:23Joe LaneOf particular importance is the "datomic-cloud" entry under :mvn/repos#2020-07-0418:23ashnurI do have that too#2020-07-0418:24Joe LaneWhat is your OS?#2020-07-0418:24ashnurlinux#2020-07-0418:25ashnurI even tried this, where the repo config is in the alias https://github.com/Datomic/ion-starter/blob/master/examples/.clojure/deps.edn#2020-07-0418:25Joe LaneCan you show me the project's deps.edn?#2020-07-0418:28ashnurhttps://gist.github.com/ashnur/fc2e517bbe6ee3fe1d9ed2cf8c14e1e8#2020-07-0418:30Joe LaneAlso, how are you "running" your project locally?
What is the exact operation you're using to start a repl?
Why do you have ?region-eu-west-1 at the end of the datomic-releases entry?#2020-07-0418:30ashnurI am not sure what "running locally" would mean in this context#2020-07-0418:31ashnurI don't usually start any repls#2020-07-0418:31ashnurand the docs says https://clojure.org/reference/deps_and_cli#_procurers#2020-07-0418:32ashnurwhat you call 'project' here is literally a core.clj with a single hello world function, I can run it many ways#2020-07-0418:33Joe LaneI don't think you're interpreting the procurers section right.#2020-07-0418:33ashnurlast time it worked I ran clj -m nrepl.cmdline --middleware "[cider.nrepl/cider-middleware]" --interactive but obviously that also doesn't do anything right now just says the same error as above#2020-07-0418:33ashnurI haven't interpreted it at all, it was linked in a forum post#2020-07-0418:34Joe LaneWhich forum post are you referring to?#2020-07-0418:35ashnurhttps://forum.datomic.com/t/issue-retrieving-com-datomic-ion-dependency-from-datomic-cloud-maven-repo/508#2020-07-0418:35ashnurIf I am doing something stupid, just tell me 🙂#2020-07-0418:36Joe LaneWhat is the output of aws s3 cp .#2020-07-0418:40Joe LaneAlso, nothing in the forum post you sent makes me think you need to append ?region-eu-west-1#2020-07-0418:41Joe LaneAfter you paste the output of the aws s3 cp ... command, can you replace the :mvn/repos entry in your deps.edn with all three entries in the message I pasted above? https://clojurians.slack.com/archives/C03RZMDSH/p1593886989420200#2020-07-0418:42Joe LaneThen run simply clj in the project root.#2020-07-0418:42Joe LaneNo nrepl or anything.#2020-07-0418:49Joe Lane@ me when you do the above#2020-07-0419:38ashnurfatal error: An error occurred (403) when calling the HeadObject operation: Forbidden#2020-07-0420:05Joe Lane@ashnur I didn't see this until just now.
What is the output of
aws sts get-caller-identity
#2020-07-0420:08ashnur{
"UserId": "AIDAT24PJRSJ6WKCBUGPZ",
"Account": "263904136339",
"Arn": "arn:aws:iam::263904136339:user/same-page-dev"
}
#2020-07-0420:06ashnurno worries, I think I am having some s3 access errors#2020-07-0420:08alexmillerdo you have aws env vars set?#2020-07-0420:09ashnuryes, but wait a second, I just have a terrible suspicion, let me check something#2020-07-0420:39ashnurOk, I had to check that there are no typos, I wish I'd found one. I made some edits for consistency, but the user is added to the group, the policy is attached to the group and AWS shows that the user is active and it's used. I am not sure why the s3 thing says forbidden, I will try to debug that because even if it's unrelated, it should be working anyway, but maybe it helps.#2020-07-0422:41alexmillerclj is trying to download the jar from the cloud s3 maven bucket. what region is your user in? I assume you're not running this from inside aws or anything like that.#2020-07-0506:41ashnurI am running this on my laptop, which I assume is not inside aws, but I am unfamiliar with the terminology, tell me if I misunderstood, please.#2020-07-0421:27daniel.spanielis there a way to query for an entity and its children (recursively) but also ( while recursing ) exclude certain children? I have query like this
(ffirst
(d/q '[:find (pull ?e pattern)
:in $ pattern ?tree-id ?company-id
:where
[?e :accounting-tree/id ?tree-id]
[?e :accounting-account/children ?c]
(or-join [?c ?company-id]
[(missing? $ ?c :entity/company)]
[?c :entity/company ?company-id])
]
db '[* {:accounting-account/children ...}]
tree-id company-id))
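The :where clauses above select which roots ?e unifies with, but the recursive pull pattern '[* {:accounting-account/children ...}] walks :accounting-account/children on the read side, unconditionally, so the or-join never touches the children. A sketch of one workaround, pruning the pulled tree in application code (helper name hypothetical, untested):

```clojure
;; Sketch: walk the pulled tree and drop children whose :entity/company
;; is present but differs from company-id, mirroring the or-join above.
(defn prune-children [tree company-id]
  (letfn [(allowed? [child]
            (let [c (get-in child [:entity/company :db/id])]
              (or (nil? c) (= c company-id))))]
    (update tree :accounting-account/children
            (fn [children]
              (into []
                    (comp (filter allowed?)
                          (map #(prune-children % company-id)))
                    (or children []))))))
```

An alternative under the same assumption is to drop pull recursion and collect matching child ids with a recursive datalog rule, then pull each id separately.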
#2020-07-0421:27daniel.spanielbut it does not exclude the children without that company-id . I have tried many variations but no luck#2020-07-0422:37bamarcoI am trying to allow one of my ions to access dynamodb. I am following https://docs.datomic.com/cloud/operation/access-control.html#authorize-ions
When I get to the step:
> Adding an IAM Policy to Datomic Nodes
>
> The Datomic Compute CF template lets you specify a custom policy via the template parameter named `NodePolicyArn`. In the console UI this parameter appears under:
> Optional Configuration | Existing IAM managed policy for instances
> You can set or update your custom node policy at any time by performing a https://docs.datomic.com/cloud/operation/howto.html#update-parameter, setting the `NodePolicyArn` to the ARN of your policy.
Neither https://console.aws.amazon.com/console/home?region=us-east-1 nor https://console.aws.amazon.com/iam/home?region=us-east-1#/home seems to have an "Optional Configuration" option#2020-07-0500:34Joe Lane@mail524 What release of Datomic Cloud are you using?
I'm on the latest and if I wanted to add a node policy to my instances I would:
1. Find and select the compute stack
2. Click the update button on the top right
3. Use the current template
4. Scroll to the bottom of the "Specify Stack Details" page
5. Add my Policy Arn#2020-07-0500:40Joe Lane2.#2020-07-0500:40Joe Lane3.#2020-07-0500:40Joe Lane4. (Top)#2020-07-0500:41Joe Lane4. (Bottom)#2020-07-0501:55bamarcoThanks @lanejo01 I got it working well enough to move on to my next error. I'm running a solo topology com.datomic/client-cloud #:mvn{:version "0.8.81"}. Now I just have to figure out the function signature for the websockets $connect function.#2020-07-0507:18ashnurI tried running the aws s3 cp . --debug 2> log and this is the result 2020-07-05 08:14:56,113 - MainThread - urllib3.connectionpool - DEBUG - "HEAD /maven/releases/com/datomic/ion/0.9.7/ion-0.9.7.jar HTTP/1.1" 403 0
At this point I am not sure where should I look next for any fix, please if you have even just guesses, don't hold back, it would help me learn.#2020-07-0507:19ashnurfull log https://gist.githubusercontent.com/ashnur/60564b6ff515f7b317aaedb359ff24f3/raw/a397ba366dc7175f48a8a64418e0ab3776f9c4ba/aws.forbidden.s3.cp.log#2020-07-0612:33marshallYour AWS credentials need to allow access to the public datomic maven repo.
If you are not running as an AWS administrator (not just the datomic admin policy), youll need to add an s3 read permission for the datomic maven bucket to your user#2020-07-0613:11ashnuroh, that sounds it!#2020-07-0613:21ashnurit's just that I am severely confused atm. what 's3 read permission for a specific bucket' means.
Should I copy what is in the textbox? https://docs.datomic.com/cloud/operation/access-control.html or should I use https://awspolicygen.s3.amazonaws.com/policygen.html to generate something?#2020-07-0613:22marshallthis is a separate issue/policy from the datomic admin policy#2020-07-0613:22marshallone second, let me find an example#2020-07-0613:23ashnurok, thanks for clearing up that confusion 🙂#2020-07-0613:30marshall{
"Sid": "VisualEditor3",
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::datomic-releases-1fc2183a/*",
"arn:aws:s3:::datomic-releases-1fc2183a"
]
}#2020-07-0613:30marshallsomething like that#2020-07-0613:30marshallthe issue is that by default AWS users/roles/etc have no permissions#2020-07-0613:31marshallso if you don’t explicitly allow them to read from a bucket, even if that bucket is publicly accessible, the client permissions for the AWS role will prevent#2020-07-0613:40ashnurmakes sense, I was suspicious of something like this, but being completely new to most of the terms, I got lost easily and since I used search, it lead me to the wrong places#2020-07-0613:40marshallwe are actively working on improving the docs/forum search#2020-07-0613:40marshallfor finding answers to this (and other) questions#2020-07-0613:41ashnurwell, if you know someone who works on the datomic website/docs, I would happily help for free#2020-07-0614:59marshallI just realized the role rule i posted wasnt quite right#2020-07-0614:59marshallgive me a few to correct it#2020-07-0615:00marshallfixed#2020-07-1010:04ashnurFinally I got time to get back to this, but it says that Policy has invalid resource
this is the json I am trying to save:
{
"Id": "Policy1594355345891",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": ["s3:GetObject", "s3:GetBucketLocation"],
"Resource": "arn:aws:s3:::datomic-releases-1fc2183a",
"Principal": {
"AWS": ["arn:aws:iam::263904136339:user/same-page-dev"]
}
}
]
}#2020-07-0510:25ashnurIf I could at least know if the error is with my local or my remote aws config, but the more docs I read the more confused I get. Nothing seems to have any effect for the better.#2020-07-0513:00alexmillerthe log is actually very helpful as that removes everything but the s3 call. your iam user is in eu-west-1 but is correctly trying to get to the bucket in us-east-1. from the head request failing this is almost certainly something to do with your iam permissions for this user, like not being permitted to do s3 downloads#2020-07-0513:03alexmillerat the top of the tutorial, there is a list of prereqs, the last of which are
> Run in an environment with https://docs.datomic.com/cloud/getting-started/connecting.html.
> Have https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_administrator permissions.#2020-07-0513:04alexmillerI'm thinking maybe your iam user does not have aws administrator permissions?#2020-07-0513:04alexmillerthe steps are at https://docs.datomic.com/cloud/getting-started/configure-access.html#authorize-user#2020-07-0513:20ashnurI will double check it now#2020-07-0513:36ashnurAfaik I can tell, everything is set as it is written. I have checked this yesterday when I said that I had a suspicion. I wrote it then that "the user is added to the group the policy is attached to the group", hoping if that's not enough someone will point it out. Should I make screenshots? What would be a troubleshoot option here?#2020-07-0513:44alexmilleryou used the Datomic administrator policy?#2020-07-0513:53ashnurI think yes, but these are specifically the kind of questions that if I misunderstand it even a bit, that can lead to much confusion.
When I subscribed, the template created a policy called arn:aws:iam::263904136339:policy/datomic-admin-datomic-same-page-eu-west-1 which I then attached to a new group and my user is added to this group, so if I go to https://console.aws.amazon.com/iam/home?#/users/same-page-dev?section=permissions where same-page-dev is the username, I can see the name of the policy listed. (datomic-admin-datomic-same-page-eu-west-1)#2020-07-0513:54ashnurI also wish I could specify a default profile for datomic, but I haven't found this without specifying a default for aws, but that makes the named profile thing a bit useless right now, but probably I just misunderstand the reason for these named profiles#2020-07-0514:43alexmillerthat sounds right, but I'm not an expert on this end of things. maybe @jaret or @marshall can confirm tomorrow#2020-07-0515:23ashnurthanks, I think I will just clear anything and start completely over#2020-07-0613:34jaretThanks @U064X3EF3! @U0VQ4N5EE catching up from the weekend, were you able to resolve after starting over or are you still seeing permission errors? If so, it may be useful to log a case to support so we can share your specific policy and review. I suspect that you are in fact having IAM issues and have previously seen issues with setting the specific region.
I can also double check how you have your profiles configured, because using profiles is our recommended resolution to having local AWS creds defaulted to a different AWS region than your Datomic system.#2020-07-0613:38jaretScratch that I see that @marshall spotted the issue and helped you up higher in the threads.#2020-07-0613:38marshallhttps://clojurians.slack.com/archives/C03RZMDSH/p1594042271484500?thread_ts=1593933535.453500&cid=C03RZMDSH#2020-07-1010:04ashnurlinking for jaret, sorry if redundant : ) I don't want to spam the channel https://clojurians.slack.com/archives/C03RZMDSH/p1594375442110400?thread_ts=1593933535.453500&cid=C03RZMDSH#2020-07-1010:51ashnuralso tried
{
"Id": "Policy1594355345891",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DatomicS3BucketAccess",
"Effect": "Allow",
"Action": [
"*"
],
"Resource": [
"arn:aws:s3:::datomic-releases-1fc2183a",
"arn:aws:s3:::datomic-releases-1fc2183a/*",
"arn:aws:s3:::datomic-code-7cf69135-6e19-4e99-878e-9c3f4a48ad48",
"arn:aws:s3:::datomic-code-7cf69135-6e19-4e99-878e-9c3f4a48ad48/*"
]
}
]
}
But this says Missing required field Principal#2020-07-0515:24ashnursometimes it helps 🙂#2020-07-0517:19bamarcoI am attempting to log a message by using cast/dev as shown here https://docs.datomic.com/cloud/ions/ions-monitoring.html#dev
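On ashnur's two policy errors above, a hedged reading: "Policy has invalid resource" and "Missing required field Principal" are what the S3 bucket-policy editor reports, which suggests the JSON is being pasted in as a resource-based bucket policy (where Principal is required) rather than created as an identity-based IAM policy (IAM console, Policies, attached to the user or group, no Principal). A sketch of the identity-policy shape:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DatomicReleasesRead",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": [
        "arn:aws:s3:::datomic-releases-1fc2183a",
        "arn:aws:s3:::datomic-releases-1fc2183a/*"
      ]
    }
  ]
}
```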
The first thing I do in my ion function is call (cast/dev {:msg "socket-connect" :req (str req)})
I can not find this message output in cloudwatch anywhere. I have checked the log group for the datomic system overall and for the specific connect ion. I also tried calling with cast/event with no luck.
I do get a thrown error printed out for my function, but I don't get the log that happens before that error occurs.#2020-07-0523:52Joe Lane@mail524
1. Dev is only for local, and will never show up in cloudwatch
2. If the payload is too large it won't be submitted to cloudwatch.
3. In the process of debugging like this, try printing the (cast/event {:msg "socket-connect" :req (str (keys req))})#2020-07-0604:12ataggartIs there a way to unify two logic variables together, similar to how ground unifies a logic variable with a constant? I tried =, but it doesn't appear to work, as this contrived example shows:
(def query '[:find ?x ?y
:in $ % ?x
:where
(foo? ?x ?y)])
(def ground-y '[[(foo? ?x ?y)
[(ground :y) ?y]]])
(def unify-x-y '[[(foo? ?x ?y)
[(= ?x ?y)]]])
(d/q query (d/db conn) ground-y :x)
; #{[:x :y]}
(d/q query (d/db conn) unify-x-y :x)
; Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg
; (error.clj:79). :db.error/insufficient-binding [?y] not bound in expression
; clause: [(= ?x ?y)]
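As a sketch of the workaround suggested in the replies, a function expression can bind ?y from ?x instead of asking the engine to unify two already-bound variables (untested against a live connection):

```clojure
;; Sketch: [(identity ?x) ?y] binds ?y to each value of ?x, so ?y never
;; needs to be bound beforehand -- which is what made [(= ?x ?y)] fail.
(def unify-x-y '[[(foo? ?x ?y)
                  [(identity ?x) ?y]]])

;; (d/q query (d/db conn) unify-x-y :x)
;; ; #{[:x :x]}
```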
#2020-07-0613:20favilathe identity function#2020-07-0613:21favila[(identity ?x) ?y]#2020-07-0613:22favilaI’m not sure what you want to do is called unification#2020-07-0613:23favilayou want a “alias” or “clone” of a set with a different name so you can avoid some unifications or unify in different clauses#2020-07-0613:24favilathis identity is also useful for self-joins:#2020-07-0613:25favila[(identity ?x) ?y] [(!= ?x ?y)] [?x :foo/position ?xp] [?y :foo/position ?yp] [(> ?xp ?yp)] silly example#2020-07-0622:47ataggart@U09R86PA4 That did it, thanks!#2020-07-0610:39Ramon Rios:db.error/not-an-entity Unable to resolve entity: :policy-coverage/vt"
Hello, what could be the reasons that my datomic is not founding this field?#2020-07-0613:37marshallyou have likely not installed an attribute :policy-coverage/vt#2020-07-0613:38Ramon RiosI did it, it was that. Now my issue is convert the date type#2020-07-0613:38Ramon RiosNow i'm looking after how to convert local-date to inst#2020-07-0611:00ertugrulcetinHi all, there is a Datomic migration library called conformity which supports only Peer API not Client API, so I was considering to use Datomic Cloud and is there any migration library that supports Client API?#2020-07-0612:13Kaue SchltzDepending on how active is this repo, I would consider reaching out to conformity's maintainers and ask if there are any plans on supporting client api, if not, maybe asses how much work would be needed to add it yourself, If you're lucky, maybe it isnt much of a hassle#2020-07-0613:30ertugrulcetin@UNAPH1QMN how are you guys handling migrations in Datomic Cloud? Just sending all edns to transact fn?#2020-07-0613:46Kaue SchltzBasically, yes. We centered our schema in a exclusive repository, shared among the contexts where it is relevant and agreed upon only extend it, never removing fields, something along the lines of: https://docs.datomic.com/on-prem/best-practices.html#grow-schema#2020-07-0614:01ertugrulcetinThanks#2020-07-0621:13bamarco@U0UL1KDLN I am considering updating conformity for use with cloud (I am still not completely settled on migration). It should not be too difficult I rewrote the internals to work with datascript at one point (never submitted a pr though as it seemed pretty specific to our use case). I am trying to get my cloud instance up and running first though.#2020-07-0621:08bamarco@lanejo01 Thanks I got the logging working. I am back to the IAM permissions problem again unfortunately.
I am getting arn:aws:sts::<42>:assumed-role/<my-datomic>-Compute-<compute-id>-us-east-1/i-<other-id> is not authorized to perform: dynamodb:PutItem on resource: arn:aws:dynamodb:us-east-1:<42>:table/sockets (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request ID: <ABCDEFG>) in my lambda output
I have attached arn:aws:iam::<42>:policy/sockets-lambda-policy to the Optional Configuration Existing IAM policy for instances for my <my-datomic> stack. (that is root stack, not the compute stack, I am not sure if that was correct, but when I went to update the compute stack it recommended I update the root stack)
The policy is the following:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:BatchGetItem",
"dynamodb:GetItem",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:PutItem",
"dynamodb:DeleteItem"
],
"Resource": "arn:aws:dynamodb:us-east-1:<42>:table/socket"
}
]
}#2020-07-0621:09Joe LaneYou have to bounce the compute nodes before the policy is picked up, have you done that?#2020-07-0621:10Joe LaneAlso, your resource is socket but the error message is sockets#2020-07-0621:10Joe Lane@mail524 ^ Maybe check that first?#2020-07-0621:14bamarcoI don't know what bounding the compute nodes means#2020-07-0621:14bamarcobouncing*#2020-07-0621:16Joe LaneRestarting the machines, either through a deployment, redeployment, an upgrade, or adjusting the EC2 ASG. A deployment would likely be simplest#2020-07-0621:17Joe LaneBut first I would check if the resource name in your policy is correct.#2020-07-0621:17bamarcothe names was in fact the problem#2020-07-0621:17bamarcothanks again 🙂#2020-07-0622:05souenzzoHello
I'm getting
(cast/dev {:msg "ok"})
Execution error (IllegalArgumentException) at datomic.ion.cast.impl/fn$G (impl.clj:14).#2020-07-0622:05souenzzoWhen I look at
datomic.ion.cast.impl/Cast
There is no :impls key#2020-07-0622:06souenzzocom.datomic/ion {:mvn/version "0.9.43"}
com.datomic/client-cloud {:mvn/version "0.8.96"}#2020-07-0622:10marshallIs this running locally or deployed in an ion?#2020-07-0622:12souenzzolocally#2020-07-0622:13souenzzoWeird:
(cast/initialize-redirect :stdout)
=> :stdout
(cast/initialize-redirect :stderr)
=> :stdout
(cast/initialize-redirect "out.txt")
=> :stdout
#2020-07-0622:13souenzzodatomic.ion.cast.impl/Cast
=>
{:on datomic.ion.cast.impl.Cast,
:on-interface datomic.ion.cast.impl.Cast,
:sigs {:-alert {:name -alert, :arglists ([_ alert-map]), :doc nil},
:-event {:name -event, :arglists ([_ event-map]), :doc nil},
:-dev {:name -dev, :arglists ([_ dev-map]), :doc nil},
:-metric {:name -metric, :arglists ([_ metric-map]), :doc nil}},
:var #'datomic.ion.cast.impl/Cast,
:method-map {:-metric :-metric, :-event :-event, :-dev :-dev, :-alert :-alert},
:method-builders {#'datomic.ion.cast.impl/-event #object[datomic.ion.cast.impl$fn__2905
0x3931e29f
"#2020-07-0713:17souenzzo*HALP*#2020-07-0713:18marshallcan you share the full stack trace when you called cast/dev please#2020-07-0713:27souenzzoIt's working today 😞
What should i do in the days that it's not working?#2020-07-0712:12Ivar RefsdalTrying to retract a non-existing entity does not fail in 0.9.5951 and later.
Is that intentional?
Here is my code:
@(d/transact conn [#:db{:ident :order/id, :cardinality :db.cardinality/one, :valueType :db.type/string, :unique :db.unique/identity}])
@(d/transact conn [[:db/retractEntity [:order/id "missing"]]])
; no error in 0.9.5951 (and later)
; error in 0.9.5930:
; datomic.impl.Exceptions$IllegalArgumentExceptionInfo: :db.error/not-an-entity Unable to resolve entity: [:order/id "missing"]
; java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity: [:order/id "missing"]
Thanks!#2020-07-0712:13Ivar RefsdalCC @U11RVUGP7#2020-07-0712:29souenzzo## Changed in 0.9.6021
* Fix: Prevent a rare scenario where retracting non-existent entities
could prevent future transactions from succeeding.
#2020-07-0712:29souenzzo(maybe related)#2020-07-0809:41Ivar RefsdalMaybe. But the change of behaviour already happened in 0.9.5951, so I'm not sure 0.9.6021 would affect that#2020-07-0715:01craftybonesHello.#2020-07-0715:01craftybonesHow do I go about declaring a tuple of instants?#2020-07-0715:02craftybones(d/transact conn {:tx-data [{:db/ident :foo/bar
:db/valueType :db.type/tuple
:db/tupleType :db.type/instant
:db/cardinality :db.cardinality/one}
]})#2020-07-0715:02craftybonesAnd then when I insert, I did the following:
(d/transact conn {:tx-data [{:foo/bar [#inst "2020-10-10"]}]})#2020-07-0716:48souenzzoAs @ghadi pointed:
https://docs.datomic.com/cloud/schema/schema-reference.html#tuples
A tuple is a collection of 2-8 scalar values, represented in memory as a Clojure vector. There are three kinds of tuples:
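Sketches of the two fixes that come up in the replies, reusing the schema from the question (untested): pad the tuple to a legal length, or switch to a cardinality-many attribute.

```clojure
;; Option 1: keep :foo/bar a tuple, but give it at least 2 elements;
;; per the replies, nil is legal inside a tuple.
(d/transact conn {:tx-data [{:foo/bar [#inst "2020-10-10" nil]}]})

;; Option 2: a cardinality-many instant attribute, one datom per date.
(d/transact conn {:tx-data [{:db/ident :tourney/dates
                             :db/valueType :db.type/instant
                             :db/cardinality :db.cardinality/many}]})
```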
#2020-07-0715:02craftybonesIt says its an invalid tuple value#2020-07-0716:42craftybones?#2020-07-0716:44ghadineed to be length 2-8 IIRC#2020-07-0716:45ghadiyou sent in a one-ple#2020-07-0717:01craftybonesAh! Such a silly error! Thanks. So how do I model something that might have a range of 1 to 5 dates?#2020-07-0717:01craftybonesas a many cardinality?#2020-07-0717:01marshallno, you can use a tuple#2020-07-0717:01marshallnil is legal in tuples#2020-07-0717:01craftybonesOk. got it#2020-07-0717:02ghadiI would want to understand the semantics before modeling it#2020-07-0717:02marshallbut i’d only use a tuple if the 1-5 things “go together”#2020-07-0717:02ghadiwhat do the dates represent?#2020-07-0717:02marshallvs. card/many#2020-07-0717:02marshallyeah, what @ghadi said 🙂#2020-07-0717:02craftybonesThe dates represent a range of dates on which a certain tournament occurred. The tournament may range anywhere from 1 to 6 days#2020-07-0717:03ghadicontiguous dates?#2020-07-0717:03craftybonesOften, not necessarily, there might be rest days etc#2020-07-0717:04craftybonesI could obviously simply date range it and say :date/from :date/to#2020-07-0717:04ghadithat's one option#2020-07-0717:04ghadianother is a card-many instant with all the dates enumerated#2020-07-0717:04craftybonesBut, the tuple can also preserve order which might become important when querying, potentially#2020-07-0717:05craftybonesYou mean have the date as a {:db/ident #inst “some date” }#2020-07-0717:05craftybonesInteresting…#2020-07-0717:05ghadino I mean:#2020-07-0717:05craftybonesso just :tournament/date has 1-6 values#2020-07-0717:06ghadi:tourney/dates [#inst "2020-07-01" #inst "2020-07-05" #inst "2020-07-07"]#2020-07-0717:06craftybonesyeah. Got it#2020-07-0717:06ghadinot a tuple, just card-many#2020-07-0717:06craftybonesRight. Got it#2020-07-0717:06craftybonesyeah, that’s certainly an option. That’s what I was wondering.#2020-07-0717:06craftybonesCool. I’ll figure it out. 
Thanks for the help#2020-07-0717:06ghadidates are comparable - so the ordering concern shouldn't matter#2020-07-0717:06craftybonesI had a question around composite tuples.#2020-07-0717:07craftybonesso let us say I have a match played between two teams on certain dates#2020-07-0717:07craftybonesdoes it make sense to turn this into a composite tuple?#2020-07-0717:07craftybonesAssume that the two teams are actually db/ref#2020-07-0717:08craftybonesIt is likely each of the teams have thousands of entries#2020-07-0717:08craftybonesI could potentially query thousands of matches that are of specific teams#2020-07-0717:08ghadiwhat is the query?#2020-07-0717:09craftybonesgive me the final score of all matches played between two teams in a given date range#2020-07-0717:10ghadi[:find (pull ?match [:match/score])
:where
[?match :participating/team ?t1]
[?match :participating/team ?t2]
[?match :tourney/dates ?d]
[(>= ?d ?start-date)]
[(<= ?d ?end-date)]]
#2020-07-0717:11ghadiwith ?t1 ?t2 ?start-date and ?end-date as inputs#2020-07-0717:11craftybonesyeah, that’s mostly it#2020-07-0717:11ghadimatches don't span days, I assume#2020-07-0717:11craftybonesNo.#2020-07-0717:11ghadiunless cricket#2020-07-0717:11craftybonesthey don’t#2020-07-0717:11craftybonesbasketball#2020-07-0717:11ghadinice#2020-07-0717:12ghadi@jaret (on the Datomic team) and I are big NBA fans 🙂#2020-07-0717:12craftybonesAh. There’s no easy place for easy open NBA data. ESPN took their API down#2020-07-0717:12ghadianyways, I gotta run 🙂#2020-07-0717:12craftybonesThanks a ton mate.#2020-07-0717:33jaret@srijayanth I was using https://www.mysportsfeeds.com/ for NBA data#2020-07-0717:34jaretadmittedly the project I was using it on was OK with stale data from last season, but they have live subscriptions. Not sure what your budget is.#2020-07-0717:34craftybones@jaret - thanks#2020-07-0717:34jaretThey have a very helpful slack community#2020-07-0717:34craftybonesCool.#2020-07-0717:34craftybonesThanks.#2020-07-0717:34jaretbut no clojure programmers that I could find 😞#2020-07-0717:34craftybonesIf you’re on the datomic team, that puts you on the east coast….Knicks fan?#2020-07-0717:35craftybonesI hope not.#2020-07-0717:35jaretOh god no! furthest thing from it. I am a Pacers fan!#2020-07-0717:35craftybonesAh. The late 90s teams of the Pacers were really good. Early 2000s too.#2020-07-0717:36craftybonesthe PG era is a bit too R for me#2020-07-0717:39jaretWho do you follow?#2020-07-0804:02craftybones@jaret - my formative years were in the Bay Area, so the Kings and Warriors would be the teams I root for, but I enjoy basketball too much to really care who wins.#2020-07-0804:02craftybonesbut I am not a fan of the Knicks, that’s for sure 🙂#2020-07-0804:03craftybonesI have a question around bulk data entry. 
Is it better to transact one large transaction with temp-ids to associate refs or, transact entities which form references first and then transact the rest of the data?#2020-07-0804:03craftybonesThe second option sounds tedious, the first one sounds bulky#2020-07-0813:16alex-dixonHi. I’m wondering how you would model “friendship” in Datomic (assuming “friendship” is mutual)#2020-07-0813:35marshall@alex-dixon one example of an approach: https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/pull_recursion.clj#2020-07-0813:35marshallall references in datomic are bidirectional#2020-07-0814:03alex-dixon@marshall Thanks. So an “add friendship” transaction would need to add two facts: “a is friends with b” and “b is friends with a”?#2020-07-0814:03marshallinteresting - i see that the example does do that#2020-07-0814:04marshallIt is done that way for the recursive pull#2020-07-0814:04marshallyou can navigate the ‘friends’ relationship “backwards”#2020-07-0814:04marshallin a query you just reverse the clause#2020-07-0814:07marshalli.e. :
if you have asserted:
{:user/name "a"
:friend [:user/name "b"]}#2020-07-0814:07marshallyou can navigate that relationship from a to b in a query with:#2020-07-0814:08marshall[[?e :user/name "a"]
[?e :friend ?x]]
#2020-07-0814:08marshalland you can do the “reverse” as well:#2020-07-0814:08marshall[[?e :user/name "b"]
[?x :friend ?e]]
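The reverse-lookup pull linked in the messages that follow can be sketched with the same :friend attribute (untested; assumes :user/name is unique so it can serve as a lookup ref):

```clojure
;; Forward: the friends that "a" points to.
(d/pull db [:user/name {:friend [:user/name]}] [:user/name "a"])

;; Reverse: everyone whose :friend ref points at "b" (underscore form).
(d/pull db [:user/name {:_friend [:user/name]}] [:user/name "b"])
```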
#2020-07-0814:09marshallthe first one says find friends of “a”#2020-07-0814:09marshallthe second says “who are friends with b”#2020-07-0814:10marshallalso, you can pull ref relationships in reverse: https://docs.datomic.com/cloud/query/query-pull.html#reverse-lookup#2020-07-0814:11favilaI think the problem to solve is deduping friendship relationships. You can transact a :friend b and b :friend a and have your read patterns understand that both are equivalent, but when you add/update/remove friend relationships, you want to make sure those updates are consistent and not vulnerable to transaction races#2020-07-0814:11marshallAgreed @favila ^ generally I would leverage the bidirectional ref so you don’t hit that issue#2020-07-0814:14favilaI think using a bidirectional ref may still be vulnerable to read skew unless there’s some canonicalization? I can’t think of a case offhand though#2020-07-0814:16marshallIf the relationship is always considered bidirectional, it should be ok. I.e. i cant be friends with you if youre not friends with me#2020-07-0814:16marshallSo the ref as a single relationship either exists or doesnt#2020-07-0814:17favilaI mean during updates, some ordering of friendship add or remove transactions may end up by failing to remove one of the directions#2020-07-0814:17marshallNot if you only model it from one side#2020-07-0814:17favilabut which side?#2020-07-0814:17marshallDoes it matter?#2020-07-0814:18marshallWrite the query to handle either#2020-07-0814:18marshallVAET makes it efficient either way#2020-07-0814:19alex-dixonI sort of have a feature request in mind…which would the addition of tuple value types that are effectively sets…they don’t bring ordereding into their equality semantics. For things like this that aren’t really directional#2020-07-0814:20favilaI don’t think that would help @alex-dixon? 
where would that assertion live?#2020-07-0814:20favilaunless you mean a tuple of sets?#2020-07-0814:20favilarather a set of sets#2020-07-0814:20alex-dixon;; [0 :user/name "foo"]
;; [1 :user/name "bar"]
;; [2 :friendship/between [0 1]] ; I'd like [0 1] and [1 0] to be equal, but they're treated as different.
;; [2 :friendship/since 1949] ; :friendship exists as a fact independent of :user. There can be facts about it.
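One concrete shape for such an unordered pair, anticipating the canonicalization idea in the replies (attribute names hypothetical): sort two stable public ids so both argument orders produce identical datoms.

```clojure
;; Sketch: canonicalize the pair by sorting; :friendship/left+right would
;; be declared as a unique value tuple so only one such entity can exist.
(defn friendship-tx [id-a id-b since]
  (let [[lo hi] (sort [id-a id-b])]
    [{:friendship/left+right [lo hi]
      :friendship/since since}]))

;; (friendship-tx "u2" "u1" 1949) and (friendship-tx "u1" "u2" 1949)
;; produce identical transaction data.
```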
#2020-07-0814:21favilaok, I have wanted something like that#2020-07-0814:22alex-dixonSomething like that. Part of what I’m looking for also is a way to add information about the “friendship”#2020-07-0814:22favilaI’ve solved it by canonicalizing the A and B parts, or by serializing the relationship consistently#2020-07-0814:23favilae.g. canonicalization: {:friendship/left (always the entity with the lower identity id) :friendship/right (always the other one)}#2020-07-0814:26alex-dixonWas thinking the same with a tuple type. Do a sort. Make ordering consistent or not matter#2020-07-0814:26favilajust don’t use raw entity ids for this#2020-07-0814:26favilause some public unchanging id you control#2020-07-0814:27favilanowdays I think the best is to combine this approach with a :friendship/left+right composite tuple--best of both worlds#2020-07-0814:28favilabut if you make that unique, watch out for deletions. when retracting a member of a friendship, you need to remove the friendships they’re involved in first#2020-07-0814:29favilaotherwise you’ll end up with :friendship/left+right [nil something] and get constraint violations#2020-07-0814:31favilaI kinda wish there was a composite tuple mode that would retract the tuple instead of allowing nil in it#2020-07-0814:33favilaactually on balance it’s probably less trouble to use a value tuple with :friendship/left-id+right-id that you control, as long as reads check friendship/left and friendship/right are nonnull#2020-07-0814:33favilathat allows more flexibility about when precisely you schedule the friendship deletions when deleting users#2020-07-0814:34favilawhich may be necessary to control transaction size#2020-07-0814:24dpsuttonbeen a bit since i've done datomic but could a rule easily add the reflexive checking?#2020-07-0814:24favilae.g. 
serializing {:friendship/id “lower-identity-id+higher-identity-id”}#2020-07-0814:24favila(this was before tuples--I’d use tuples now)#2020-07-0814:25favilathe issue is still that you have an entity that you want to be unique by a value which is a set#2020-07-0814:25favilawith the right read patterns and using transaction fns or :db/ensure, you could keep the constraint that only one of these exists, but you can’t model it directly with a unique index without doing tricks like this#2020-07-0814:40alex-dixonOk. Thanks. I was thinking along those lines but didn’t know if I was missing something. I’m still curious about the possibility of something like a tuple type that would ignore ordering for its equality semantics. It seems that might open up a fact representation that doesn’t require encoding directedness, something like :friendship/between #{a b}, and annotate those with other facts#2020-07-0814:42marshallyou could reify the “friendship”#2020-07-0814:42marshallas an entity#2020-07-0814:43marshallwhich then has a card-many :members attribute#2020-07-0814:43marshallpresumably limited (in your app) to 2#2020-07-0814:43marshallbut the indirection will have some performance considerations#2020-07-0814:43marshallit adds a step to each friend lookup#2020-07-0815:09Joe LaneAn alternative approach could be to model friendship as the hash of two hashes, then keep track of that as a identifier of "friendship" between two people.#2020-07-0815:09Joe LaneThink Merkle Trees#2020-07-0815:39bamarcoI am trying to get websockets working with datomic ions. My connection function currently looks like:
clj
(defn add-connection! [id]
  (faraday/put-item nil sockets-table
                    {:connection-id id}))

(defn connect [{:as req :keys [input context]}]
  (let [event (json/parse-string input true)
        id (-> event :requestContext :connectionId)]
    (add-connection! id)
    ;(str {:status 200})
    ))
This guide on aws websocket https://aws.amazon.com/blogs/compute/announcing-websocket-apis-in-amazon-api-gateway/ suggests
js
exports.handler = function(event, context, callback) {
  var putParams = {
    TableName: process.env.TABLE_NAME,
    Item: {
      connectionId: { S: event.requestContext.connectionId }
    }
  };
  DDB.putItem(putParams, function(err, data) {
    callback(null, {
      statusCode: err ? 500 : 200,
      body: err ? "Failed to connect: " + JSON.stringify(err) : "Connected"
    });
  });
};
There doesn't seem to be access to this callback function with a datomic ion. This thread https://forum.datomic.com/t/websockets-from-ions/1255 claims to have gotten this working.#2020-07-0815:41bamarcoMaybe they are not using clojure for the connect function?#2020-07-0815:44Joe Lane@mail524 I don't understand what problem you're having, could you describe what you expect to happen that isn't happening?#2020-07-0815:46Sam AdamsHey all, question re: the Ions tutorial: once I’ve deployed my solo-topology, lambda-proxied ion and wired it up to an API gateway, why must I append “/datomic” to the invocation URL (or else get a 403 response)? https://docs.datomic.com/cloud/ions/ions-tutorial.html#deploy-apigw#2020-07-0816:03Sam AdamsIt seems that any appended path will do. But I want to wire up my own domain, and serve my homepage from the empty path.#2020-07-0816:25Sam Adams:face_palm: resolved - it’s an [API gateway quirk](https://stackoverflow.com/questions/52909329/aws-api-gateway-missing-authentication-token), nothing to do with Ions - apologies for the spam#2020-07-0815:50bamarco@lanejo01 when I call wscat -c wss://<api-key>. I get error: Unexpected server response: 502 which is describe here https://stackoverflow.com/questions/57438756/error-unexpected-server-response-502-on-trying-to-connect-to-a-lambda-function
because I am not calling the callback function the connection does not stay alive. I do successfully get the connection id logged into dynamodb.#2020-07-0815:58Joe Lane@mail524 First, are you sure that you need wss://<api-key>... and not the <api-id>? How do you know that URL is correct?
Second, it appears you aren't returning the right thing from your ion. It has nothing to do with callbacks.#2020-07-0816:00Joe LaneThird, are you sure that your call to Faraday is returning with success? Maybe add some calls to cast/event so you can confirm they are being added?#2020-07-0816:00bamarco1. it is the app-id I just called it the wrong thing. I know because I get logging for it in cloudwatch when I connect.#2020-07-0816:01bamarco3. I check the dynamodb directly and ids are being logged into the table#2020-07-0816:02bamarco2. I am not sure what to return that may be the problem. I have tried {:status 200} and (str {:status 200}).#2020-07-0816:03bamarcoI have not json-ified it#2020-07-0816:07Joe LaneSo, Per the stack overflow article, the body needs to be a string#2020-07-0816:08Joe LaneYou should follow the "Set up API Logging Using the API Gateway Console" in the S.O. post and look at what the logs say (different logs than the datomic logs)#2020-07-0817:05bamarcoIt seems to work now by returning (chesire/generate-string {:statusCode 200}). Got the api gateway logs setup too. Hopefully I can get sockets working entirely without further help. Thanks so much!#2020-07-0817:06Joe LaneGreat to hear!#2020-07-0821:28bmaddyIs it possible to query maps with datomic? If so, does anyone know where I could find a simple example? (I see the nested vector example here: https://docs.datomic.com/on-prem/query.html)#2020-07-0822:09favilaNot directly. Datomic datalog can query datasources, which are fundamentally sets of tuples#2020-07-0822:10favilahowever, you can transform maps to tuple sets by assigning ids to each map#2020-07-0822:10favilaLike this does: https://github.com/djjolicoeur/datamaps#2020-07-0822:12favila(It uses datascript’s query engine, but the same technique could be used with datomic)#2020-07-0822:17bmaddyOh, neat. I didn't realize that's how datamaps worked.
Thanks @favila!#2020-07-0917:08JAtkinsCan data-types be changed in datomic cloud? I have a :db.type/string that I want to convert to :db.type/long. Every string in the db is a number, but the datomic cloud docs make no reference to type changes.#2020-07-0917:08marshallhttps://docs.datomic.com/cloud/schema/schema-change.html#2020-07-0917:09marshallsee the note at the bottom#2020-07-0917:09marshall@jatkin ^#2020-07-0917:09marshall> `NOTE: You can never alter :db/valueType, :db/tupleAttrs, :db/tupleTypes, or :db/tupleType.`
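Since `:db/valueType` can never be altered, the usual workaround is a parallel attribute plus a backfill. A rough, untested sketch using the Cloud client API; all attribute names (`:thing/code`, `:thing/code-long`) are hypothetical:

```clojure
(require '[datomic.client.api :as d])

;; 1. install the new attribute with the desired type
(d/transact conn {:tx-data [{:db/ident       :thing/code-long
                             :db/valueType   :db.type/long
                             :db/cardinality :db.cardinality/one}]})

;; 2. backfill in modest batches (~1000 datoms per tx is a common rule of thumb)
(doseq [batch (->> (d/q {:query '[:find ?e ?s
                                  :where [?e :thing/code ?s]]
                         :args [(d/db conn)]})
                   (partition-all 1000))]
  (d/transact conn {:tx-data (for [[e s] batch]
                               [:db/add e :thing/code-long (Long/parseLong s)])}))

;; 3. optionally rename the old attribute out of the way
(d/transact conn {:tx-data [{:db/id    :thing/code
                             :db/ident :thing/code-deprecated}]})
```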
#2020-07-0917:09marshallyou can create a new attribute with the type you want and manually “move” the data over#2020-07-0917:10marshalland you can rename the ‘old’ attr if you want to end up with a new attribute that uses the same name as the original#2020-07-1014:47jarethttps://forum.datomic.com/t/dev-and-test-locally-with-dev-local/1518#2020-07-1015:12kennySo excited for this. The timing could not have been more perfect. Thank you Datomic team! ❤️#2020-07-1014:51danierouxDaaaaaaang. Thank you!#2020-07-1016:01ashnurI am not sure what I am doing wrong when I am trying to add this policy under my bucket / permissions/ bucket policy
{
  "Id": "Policy1594355345891",
  "Statement": [
    {
      "Sid": "DatomicS3BucketAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": ["arn:aws:iam::263904136339:user/same-page-dev"]
      },
      "Action": ["*"],
      "Resource": ["arn:aws:s3:::datomic-releases-1fc2183a","arn:aws:s3:::datomic-releases-1fc2183a/*","arn:aws:s3:::datomic-code-7cf69135-6e19-4e99-878e-9c3f4a48ad48","arn:aws:s3:::datomic-code-7cf69135-6e19-4e99-878e-9c3f4a48ad48/*"]
    }
  ]
}
I get the error
Policy has invalid resource
I think the resource links are fine, maybe I am trying to use it in the wrong place?
I don't hope that anyone can just randomly solve this for me remotely, but any kind of pointer, even suggestion on what I should read up on could help. I've tried to understand this error message by reading the docs but it seems that there are multiple reasons for it popping up.#2020-07-1016:49csmDo you want s3:* in Action instead of *?#2020-07-1016:54ashnuryeah, and I am doing it at the wrong place, I was informed on freenode. I have to admit that I am utterly confused.#2020-07-1017:06Joe Lane@ashnur I don't understand why you're manually manipulating the datomic bucket policy, what are you trying to do?#2020-07-1017:12ashnursame as last time#2020-07-1017:14ashnurbut I think I might've got it now, I created an completely new policy and added it to the user#2020-07-1017:14alexmillerthe user he's using had datomic admin policy, but that does not include s3 access, which prevents clj from downloading jars from the datomic s3 bucket#2020-07-1017:15ashnurthe download works now, so I am hoping the rest will too#2020-07-1017:21alexmillercool, it should#2020-07-1017:26kennyDeleting a dev-local system is as simple as deleting <storage-dir>/<system-name>, right?#2020-07-1017:41marshallYep#2020-07-1023:41steveb8nThis is great. Importing from cloud will be great for reproducing prod issues and generative testing.#2020-07-1023:44steveb8nA dream would be if https://github.com/vvvvalvalval/datomock can be ported. It’s really useful with local dev. I wonder if this is possible?#2020-07-1100:36Joe Lane@steveb8n What do you think there is that needs to be "ported"?#2020-07-1100:37steveb8ntbh I haven’t looked into it. I’m pretty sure that datomock depends on the peer api so it can’t be used as is#2020-07-1100:37steveb8nare you saying there’s a way to “fork” with this new lib?#2020-07-1100:39steveb8nlike many others, I have proxied the client protocol impl to inject datomic free underneath for local/CI use. in that layer I am able to use datomock to great effect. 
ideally I don’t want to lose that capability because it really speeds up integration tests#2020-07-1100:40Joe LaneI mean, the whole library is ~ 200 lines of clojure, you could probably do it yourself in less than an hour if you wanted to keep using datomock.#2020-07-1100:40steveb8nthat had occurred to me 🙂#2020-07-1100:41Joe LaneI'd have to understand how you use datomock better before I could make any sort of recommendation, but I'd maybe just see how far you get using dev-local#2020-07-1100:42steveb8nyeah. it’ll be much clearer then. no time to try it now but, when I do, I’ll report back here#2020-07-1100:42Joe LaneI know you've invested already in the indirection but it feels weird to un-indirect datomock, no?#2020-07-1100:42steveb8nand will share if I port datomock#2020-07-1100:42Joe LaneCool#2020-07-1100:43steveb8nnot sure I grok “un-indirect”?#2020-07-1214:26craftybonesHello. If I store :match/teams as a tuple, then the query will have to untuple in order to query if either team is X right? is there a better way?#2020-07-1214:27craftybonesif I stored it with a many cardinality then the query semantics get easier, however the storage semantics is wrong since I now can store multiple teams for a single match#2020-07-1214:27craftybonesthe final option is :match/team-1 with the query containing an or now#2020-07-1214:30craftybonesI least like using team-1, but it is the best compromise from a storage semantic and a query semantic perspective. Which of these would you recommend? Is there an alternative I am missing?#2020-07-1218:31Joe Lane@srijayanth Are you still trying to do https://clojurians.slack.com/archives/C03RZMDSH/p1594141743022500#2020-07-1218:32Joe LaneI'm not sure what you mean by "the query semantics get easier".#2020-07-1218:36craftybones@lanejo01 - Yes. So if I have it in a tuple, my query will have to unpack the tuple and then do a check to see if either of the teams are the teams I want. 
I was wondering if there’s a way of matching elements within a tuple without having to unpack. It’s an additional step#2020-07-1218:38Joe LaneIs there a problem with that additional step? Is it too verbose, or too slow, etc?#2020-07-1218:41Joe LaneFWIW I'm pretty sure you can also use tuple with your inputs to create the tuple you want to find. Then you don't need to untuple#2020-07-1218:46Joe LaneYou may even be able to just pass in the tuple pairs as inputs#2020-07-1218:50Joe Lane(d/q '[:find (pull ?match [:match/score])
       :in $ [?teams ...] ?start-date ?end-date
       :where
       [?match :match/teams ?teams]
       [?match :tourney/dates ?d]
       [(>= ?d ?start-date)]
       [(<= ?d ?end-date)]]
     the-db [[team1 team2] [team2 team1]] start-date end-date)
;; Note, works for an arbitrary number of team tuples, you just need to permute the order of #{team1 team2} => [[team1 team2] [team2 team1]]
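The permutation note above can be handled by a tiny helper so callers pass unordered pairs; this helper is hypothetical (not from the thread), just expanding each pair into both orderings:

```clojure
;; Hypothetical helper: expand unordered pairs into both orderings so the
;; query can match :match/teams tuples stored in either order.
(defn pair-permutations [pairs]
  (mapcat (fn [[a b]] [[a b] [b a]]) pairs))

;; (pair-permutations [[:team-a :team-b]])
;; => ([:team-a :team-b] [:team-b :team-a])
```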
#2020-07-1315:58craftybones@lanejo01 - absolutely works! Thanks this is what I was looking for#2020-07-1218:51Joe Lanetry that#2020-07-1312:18Kaue SchltzHi, there. I have the following predicament
We have a production cloud setup with over 3B datoms, and I need to "update" the whole base without interfering with, or at least causing as little impact as possible to, the ongoing operation.
I need to transform something like:
{:ns.some-field "foo"
 :ns.time {:zone "America/Sao Paulo"
           :utc #inst "2020-01"}}
into:
{:ns.some-field "foo"
 :ns.utc-time #inst "2020-01"}
#2020-07-1312:19Kaue SchltzI was considering to deploy a transaction function to do that, but I wasn't so sure about not interfering with my transactor. Any advice is welcome#2020-07-1312:42favila1. make normal operation use old and new style; 2. backfill the old style; 3. make normal operation use only new style; 4. retract the old style (optional)#2020-07-1312:45favilaif you can insert an abstraction layer between the application and the database, you could probably produce your new style on-the-fly from the old style. If so, you can cut or simplify some of these steps#2020-07-1312:46Kaue SchltzWe're already on step 1#2020-07-1312:46Kaue SchltzNow we're trying to figure out what would be the best approach to accomplish the backfill#2020-07-1312:46favilabest approach in what sense?#2020-07-1312:47Kaue Schltzwe tried to walk through the indexes, retrieve the old style, the transact it back, doing what we needed via :db/cas#2020-07-1312:48Kaue Schltzbut it proved harmful, since it took a toll on our write throuput#2020-07-1312:48favilahow large were your transactions?#2020-07-1312:49Kaue Schltz500 swaps each#2020-07-1312:49faviladid you retract also?#2020-07-1312:49Kaue Schltznot yet#2020-07-1312:50favilaso about 500 datoms per tx? the rule of thumb is ~1000, that may help. 
you can also not pipeline if you are worried about working too fast#2020-07-1312:52favilaWhat is your estimate of how many datoms you need to commit?#2020-07-1312:53Kaue Schltzthe best would be to convert the whole base#2020-07-1312:53Kaue Schltzwhich is over 3B datoms#2020-07-1312:54favilayou have no other datoms than these?#2020-07-1312:54Kaue Schltzin this particular database, no#2020-07-1312:56favilawell, then writing 3b additional datoms is unavoidable#2020-07-1312:56favilaplan your write capacity accordingly?#2020-07-1312:56favilaif this database is really this simple, perhaps you could copy it to a new database?#2020-07-1312:56favilathen switch the application over#2020-07-1312:57Kaue SchltzI really liked the idea of copying the database#2020-07-1312:57Kaue Schltzseems to be the most practical one#2020-07-1312:58favilado you care about transaction history?#2020-07-1312:58favilai.e. preserving transaction times and metadata#2020-07-1312:59Kaue Schltzfor a portion of it, yes#2020-07-1312:59Kaue SchltzI'll discuss it with our team#2020-07-1312:59favilaok, this is frought with edge cases, but you can do something called “decanting”#2020-07-1313:00favilayou read the transaction log and transform it before transacting each transaction to a new db#2020-07-1313:00favilathink of it as a git-rebase#2020-07-1313:01Kaue SchltzI was just thinking about it#2020-07-1313:01favilathe advantage is only that you avoid having a single db with old+new style sum of datoms#2020-07-1313:01favilayou could also perform most of it offline (writing into a dev database if you are on-prem), then backup+restore to get it into production#2020-07-1313:02favilayou will need some catchup and downtime to switch the application over, though#2020-07-1313:02Kaue Schltzthats manageable#2020-07-1313:02favilaif you can afford it, it’s probably better to just go the way you are going now, honestly#2020-07-1313:03favilaI assume you’re using dynamodb with provisioned write capacity?#2020-07-1313:03Kaue 
Schltzyes#2020-07-1313:04favilayour call, but bumping it up for a few days will probably be cheaper than the engineer time to get a decant running smoothly#2020-07-1313:05Kaue SchltzWell, I think we have a few paths to consider now#2020-07-1313:05Kaue SchltzI much appreciate your insights#2020-07-1313:05Kaue Schltzthanks a lot#2020-07-1313:06favilaglad to help#2020-07-1322:24unbalancedfinally have #datomic deployed in production. HUGE shout-out to @favila for a tremendous amount of help along the way!! partywombat thanks so much to @marshall for getting us hooked up with the license 🙂#2020-07-1323:42kennyComposite tuple attrs don't support reverse lookups, right?#2020-07-1405:46favilaNo#2020-07-1415:22kennyIt seems like a useful addition 🙂 I'd like to express "all entities in a card many attribute must be unique by a particular other attribute."#2020-07-1416:40favilacould you add that invariant to the attribute itself and ensure it with an attribute predicate?#2020-07-1416:40favila“the attribute itself” = a normal cardinality-many ref#2020-07-1416:45kennyPerhaps. It's not clear that enough context is passed to the predicate to be able to check something like that.#2020-07-1416:46kenny(looking at https://docs.datomic.com/cloud/schema/schema-reference.html#attribute-predicates)#2020-07-1416:47favilaah, yeah, attr predicate may not work#2020-07-1416:47faviladb/ensure will though#2020-07-1416:49kennyYeah, looks possible with an entity predicate. It'd be "built in" if composite tuples supported reverse lookups though 🙂#2020-07-1416:50favilathe semantics are a bit unclear though#2020-07-1416:51kennyEven if the card many ref is required to be a component?#2020-07-1416:52favilaI’m not sure I follow?#2020-07-1416:52favilaa composite tuple is non-homogenous, so which backref is it following?#2020-07-1416:53favilait puts you in a weird situation where :eavt and :vaet are inconsistent#2020-07-1416:58kennyI think I'm missing something. 
What's the inconsistency?#2020-07-1416:59favilaSuppose you have a composite tuple :foo+bar consisting of refs :foo and :bar from the same entity#2020-07-1417:00favila:eavt entries will be [123 :foo 456] [123 :bar 789] [123 :foo+bar [456 789]]#2020-07-1417:01favilaif :_foo+bar worked and brought you to 123, from where could you follow it, and what would the :vaet index look like?#2020-07-1417:04kennyI think I see what you're getting at but I was after something different. I see how my question was poorly worded. I'm after something like this:
[{:db/ident :user/addresses
  :db/valueType :db.type/ref
  :db/cardinality :db.cardinality/many}
 {:db/ident :address/type
  :db/valueType :db.type/keyword
  :db/cardinality :db.cardinality/one}
 {:db/ident :address/user+type
  :db/valueType :db.type/tuple
  :db/tupleAttrs [:user/_addresses :address/type]
  :db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}]#2020-07-1417:05kennyi.e., a particular user can have many address types but only one of each type.#2020-07-1417:07favilawhy not :user/address_type1 :user/address_type2 etc?#2020-07-1417:07favilais the type open?#2020-07-1417:07kennyYes#2020-07-1417:07kennyBut you're right - if it was closed I'd go that route.#2020-07-1417:09favilaah, ok, you want this index to keep an invariant#2020-07-1417:09favilaI think I get it now#2020-07-1417:11favilawait, I still don’t get it#2020-07-1417:11favilahow would your attribute ever not be unique?#2020-07-1417:11favilayour hypothetical composite attr#2020-07-1417:12kennyIf you try asserting one that already exists?#2020-07-1417:13favila?user :user/addresses ?address has two ?address with the same type?#2020-07-1417:13kennyThat would fail or upsert (if :db/id is specified) on transact#2020-07-1417:14favilais that the scenario you are trying to exclude by this index?#2020-07-1417:14kennyYes#2020-07-1417:15kennyHaving multiple of the same type in this card many coll would result in undefined behavior for us. The card many must by "distinct-by" the :address/type.#2020-07-1417:19favilaok I can see this making sense as a feature. I completely misunderstood what you were asking for originally. Although I think db/ensure is the right way to tackle this.#2020-07-1417:19favilayou could also introduce another joining entity#2020-07-1417:20kennyYeah, sorry... Reading my question again I realize how ambiguous it is 😬#2020-07-1417:20kennyI thought about that. It feels a bit gross though...#2020-07-1403:28deadghostDid datomic cloud lose the collection find spec that was in on-prem? :find [?release-name ...] https://docs.datomic.com/on-prem/query.html#find-specifications https://docs.datomic.com/cloud/query/query-data-reference.html#find-specs. 
I haven't found cloud docs on it and it doesn't look like it's working when I've tried it in the REPL.#2020-07-1405:44favilaYep, cloud query cannot destructure in :find#2020-07-1405:45favila(Actually client api query)#2020-07-1412:43deadghostIf datomic cloud doesn't have the collection find spec, what is the recommended way to return a collection instead? Massage it on the programming language level?#2020-07-1412:44favilayes#2020-07-1412:44favilathat’s all it was doing anyway.#2020-07-1412:45favila[?a …] => (map peek results)#2020-07-1412:45favila[?a ?b] => (first results)#2020-07-1412:45favila?a . => (ffirst results)#2020-07-1413:50unbalancedwhat's the preferred method of daemonizing the transactor on a linux server?#2020-07-1414:01favilaJust the general unix advice? Don’t daemonize: manage the process with an init system like systemd#2020-07-1414:37ghadiwould also recommend a systemd unit#2020-07-1415:19unbalancedgot it. yep that's exactly the advice I was looking for. Also I thought systemd was a daemonization process?? Maybe I'm just confused on terminology. I meant "robust way to run the thing in the background better than a screen session"#2020-07-1416:37favilaare you just running a one-off transactor in a terminal? screen, new terminal tab, or even bash job-control is fine#2020-07-1416:38favilaare you running in production as a service? use a process manager#2020-07-1416:39favilaby “don’t daemonize” I mean don’t do the double fork/join “daemon” mode of older style inits#2020-07-1416:39favilawhere the transactor gets detached from its parent process#2020-07-1416:51ghadi^ +100#2020-07-1416:51ghadilet systemd sort it out#2020-07-1416:51ghadisystemd supports daemonization but it's not the preferred default service type#2020-07-1416:21Jon WalchHas anyone set up dev-local with lein?#2020-07-1418:24kennyWe uploaded dev-local to a private s3 repo and bring it in with tools-deps s3 support. 
I think lein has a s3 wagon to do the same.#2020-07-1416:50marciolHi, most probably someone already asked if in some point in the future Datomic Cloud will allow queries across databases. It’d be very nice to have in certain cases when working with sharded databases.#2020-07-1416:52kennyA query can take in multiple dbs.#2020-07-1416:54marciolThere was a limitation, but seems to be solved now, can you confirm it @U083D6HK9?#2020-07-1416:54marciolGood news indeed!#2020-07-1416:55kennyDepends on what limitation you're referring to.#2020-07-1416:56marciol@U083D6HK9 I read in the current #datomic channel and here also: https://forum.datomic.com/t/filtering-and-multi-database-queries/481 that cross join is not allowed when one is using Datomic Cloud.#2020-07-1416:59timcreasy(Just following this conversation in case there has been an update, as we’ve been handling joins in client code)#2020-07-1417:02marciolGreat @U4Z2TFMJ9, I’m reading the Datomic Cloud documentation and seems to be the case. Sometimes is a little bit painful to revisit all docs to know if something changed. Maybe the Release Notes is the best place to be aware of all changes.#2020-07-1417:07kennyWe have code that queries Datomic like the last query in that blog post.#2020-07-1417:09timcreasyYeah, thanks @U28A9C90Q.
I think @U083D6HK9 rightly pointed out that query can take in multiple dbs, however it appears in cloud those dbs must be the same.
(d/q {:query '[:find (count ?a)
               :in $1
               :where
               [$1 _ :db/ident ?a]]
      :args [db-1]})
=> [[661]]
(d/q {:query '[:find (count ?a)
               :in $2
               :where
               [$2 _ :db/ident ?a]]
      :args [db-2]})
=> [[68]]
(d/q {:query '[:find (count ?a)
               :in $1 $2
               :where
               [$1 _ :db/ident ?a]
               [$2 _ :db/ident ?a]]
      :args [db-1 db-2]})
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Query db arg does not match connection
I’ve always based the fact that it’s unsupported off this: https://docs.datomic.com/on-prem/clients-and-peers.html#2020-07-1417:13marciolExactly @U35LPCHNH#2020-07-1417:14kennyOh, right - I misread the q. Our queries only ever cross the same connection.#2020-07-1417:17kenny> Sometimes is a little bit painful to revisit all docs
@U28A9C90Q They post releases to https://forum.datomic.com/c/datomic-gen/announcements/6. You can follow it for updates.#2020-07-1417:23Kaue SchltzHi there, I was wondering what tools are mostly used when monitoring datomic cloud, I mean are there any other options other than cloud watch?#2020-07-1418:09bamarcoI am trying to write basic ping functionality with a websocket ion, but it is failing with the error:
no protocol: <api-id>.>: datomic.ion.lambda.handler.exceptions.Incorrect
clojure.lang.ExceptionInfo: no protocol: <api-id>.> {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "no protocol: <api-id>.
at datomic.ion.lambda.handler$throw_anomaly.invokeStatic(handler.clj:24)
at datomic.ion.lambda.handler$throw_anomaly.invoke(handler.clj:20)
at datomic.ion.lambda.handler.Handler.on_anomaly(handler.clj:171)
at datomic.ion.lambda.handler.Handler.handle_request(handler.clj:196)
at datomic.ion.lambda.handler$fn__3841$G__3766__3846.invoke(handler.clj:67)
at datomic.ion.lambda.handler$fn__3841$G__3765__3852.invoke(handler.clj:67)
at clojure.lang.Var.invoke(Var.java:399)
at datomic.ion.lambda.handler.Thunk.handleRequest(Thunk.java:35)
I am using the following code
clj
(defn messages-lambda [{:keys [input context]}]
  (let [input (-> input
                  (json/parse-string true)
                  (update-in [:body] json/parse-string true))
        {:as response :keys [::recipients]} (action input)
        response (dissoc response ::recipients)]
    (doseq [id recipients]
      (try
        (cast/event {:msg "messages-lambda"
                     :response (str response)
                     :url (send-url input id)})
        (client/post (send-url input id)
                     {:form-params {:action "pong"}, :content-type :json})
        (catch clojure.lang.ExceptionInfo e
          (let [{:keys [status reason-phrase]} (ex-data e)]
            (cast/event {:msg "messages-lambda" :status status :reason reason-phrase})
            (far/delete-item nil sockets-table {:connection-id id})))))
    ;; return the response outside the doseq so the handler doesn't return nil
    (json/generate-string {:statusCode 200})))
I get the event msg
{"Msg": "messages-lambda",
"Response": "{:action :pong}",
"Url": "<api-id>.>",
"Type": "Event",
"Tid": 65,
"Timestamp": 1594749027640}#2020-07-1519:42WillIs there a way to access the entity id of the entity which a pull pattern is operating on? For example I'd like to get back ?e in addition to the value of :entity/attr2
(d/qseq {:query '[:find (pull ?e [:entity/attr2])
                  :where
                  [?e :entity/attr1 "Attr1Value"]]
         :args [(d/db conn)]})
I need to stream the results set into memory since there are millions of these entities in our database and they may not all fit in memory at once. I would use the datoms API but the query is asking for the entity id and value of all the :entity/attr2 datoms that have the value "Attr1Value" for :entity/attr1. It seems like the datoms API is more geared towards operating on a single attribute. I'm hoping qseq will allow me to express this logic while also streaming the results, I just need access to ?e. Anyone have any thoughts?#2020-07-1519:44favilasomething wrong with (pull ?e [:db/id :entity/attr2]) ?#2020-07-1519:49Willnope, that totally does it, thank you!#2020-07-1603:13dregreRelative newcomer to Datomic and I’ve been banging my head against a wall… wonder if anyone has any hints.
If the entity “person” has a cardinality-many attribute “friends,” I want to fetch a person where all of that person’s friends have a certain predicate. Plain and simple unification seems to return a person when any of its friends have the predicate. So I need to reach for something else, but I don’t know what that is.#2020-07-1604:44Jacob O'BryantYou can use not/not-join to check that a person doesn't have any friends that don't satisfy the predicate.#2020-07-1605:33dregreThat’s logical!
Is it idiomatic?#2020-07-1610:23Jacob O'Bryantyep!#2020-07-1615:05dregreLet me take it one step further. If my “friends” entities all have a cardinality-many attribute “pets,” how could I grab a person who only has friends that have at least one dog? I don’t necessarily want to exclude people who have friends who have cats, so long as each friend also has at least one dog.#2020-07-1615:07dregreUsing not/not-join starts to get hairy in this situation, I would think.#2020-07-1615:40favilahttps://stackoverflow.com/questions/43784258/find-entities-whose-ref-to-many-attribute-contains-all-elements-of-input#2020-07-1615:40favilaIf you are on on-prem, I recommend just using predicates#2020-07-1615:41favilaif you are on cloud, you can try to fit it into subqueries using q#2020-07-1615:55dregreThis is incredibly helpful, thank you!#2020-07-1619:59Jacob O'BryantI believe this should work also (haven't tested it fyi):
; "Doesn't have a friend who doesn't have a dog"
(not-join [?person]
[?person :person/friends ?friend]
(not-join [?friend]
[?friend :person/pets ?pet]
[?pet :pet/type :dog]))#2020-07-1620:00Jacob O'BryantDoesn't seem too hairy IMO#2020-07-1704:27dregreYou’re right, not bad at all! I’ll pinch this, thanks muchly!#2020-07-1614:39bamarcoWell I figured out the "no protocol" error message. It just wanted me to add "https::/" to the url. I hadn't thought about protocols with the web since I last used ftp.
Anyways I figured out which aws api I should be using, so I switched to aws-api from clj-http. So my questions now are not really datomic related (not really a problem using ions anymore, but aws-api). I'll head over to #aws I guess, but on my way out does anyone know why the following does not actually send a response to wscat? I have tried all kinds of things for :Data which is supposed to be a blob including (a json string, a json smile, a .getBytes of json string, a plain test string, and the object shown below)
(aws/invoke (aws/client {:api :apigatewaymanagementapi})
{:op :PostToConnection
:request {:Data {"action" "pong"}, :ConnectionId id}})#2020-07-1614:44Joe Lane@mail524 https://github.com/cognitect-labs/aws-api#posttoconnection#2020-07-1614:44Joe LaneCheck that#2020-07-1617:38daniel.spanieldoes datomic allow for doing queries with limit and offset and maybe sorting? ( trying to do pagination )#2020-07-1617:45favilaNo: datomic query results are sets or bags and so inherently unordered, so limit and offset doesn’t make sense. You are expected to either find a natural paging boundary and use that as input to the query to limit results (e.g. results a day at a time), or sort and page yourself afterwards.#2020-07-1617:46favilaThat said, the client api has limit and offset parameters, but I wouldn’t rely on them for user-facing paging for the reasons stated#2020-07-1618:05daniel.spanielgot it, oh well .. i guess i just suck it up ( thanks @U09R86PA4#2020-07-1618:54kennyIs there any guarantee that the order of a carnality many attribute remain the same given no new transactions? e.g., something like this would be true.
(->> (range 1000)
(map (fn [_] (d/pull db [::card-many] eid)))
(partition 2 1)
(every? (fn [[a b]] (= a b))))#2020-07-1618:59ghadicarnality many is such a great misspelling for 2020#2020-07-1619:00kennyWow, didn't even notice!#2020-07-1619:00ghadiI don't think there is a guarantee of order#2020-07-1619:00ghadiunless you're iterating via d/datoms, which are sorted#2020-07-1619:01kennyHmm, okay. It makes testing certain things a bit more of a pain. Since pull returns vecs I have to convert everything to sets to test equality.#2020-07-1619:02ghaditry xform ?#2020-07-1619:02kennyConverting tests to dev-local unearthed this.#2020-07-1619:02kennyGood idea!#2020-07-1619:02ghadinever mind, doesn't do set#2020-07-1619:03kennyHmm. Seems like any fully qualified fn will work.#2020-07-1619:05kennyOh but I'd need to allow it...#2020-07-1619:06ghadihttps://docs.datomic.com/cloud/query/query-pull.html#xform-option
is the scripture here#2020-07-1619:07kennySeems like dev-local should support passing in a custom ion-config map.#2020-07-1619:18kennyAm I missing something here?
(slurp (io/resource "datomic/ion-config.edn"))
=> "{:xform [clojure.core/set]}"
(d/pull db '[{[::card-many :xform clojure.core/set] [*]}] eid)
Execution error (ExceptionInfo) at datomic.ion.resolver/anomaly! (resolver.clj:41).
'clojure.core/set' is not allowed by datomic/ion-config.edn#2020-07-1619:19ghadidev-local isn't an ion#2020-07-1619:21kennyNot sure what you mean. I thought it supports all Datomic features, one of which is :xform. You config :xform via datomic/ion-config.edn, no?#2020-07-1619:21ghadidunno -- will defer to @marshall#2020-07-1619:26kennyMore complete example...#2020-07-1619:28marshallCan i see your client config map#2020-07-1619:29kenny{:server-type :dev-local
:system "kenny-test10"}#2020-07-1619:30marshallI'll have to investigate. Looks like a bug#2020-07-1619:31kennyOk. Should I open a support ticket?#2020-07-1619:32ghadibtw the original question was "does pulling cardinality many attrs guarantee a particular order?"#2020-07-1619:32marshallNo :)#2020-07-1619:33marshall@U083D6HK9 sure if you could drop that example in a ticket we can track it there#2020-07-1619:34kennyWill do. Thanks.#2020-07-1619:35kennyWait, my question was more specific @U050ECB92. Is there any guarantee that the order of a carnality many attribute remain the same given no new transactions?#2020-07-1619:36marshallAlso no#2020-07-1619:36kennyOkay, interesting. Thanks.#2020-07-1619:33kennyThe problem this is solving is that I'd like to use certain Datomic features while running particular tests. An example would be allowing a custom :xform used only during tests.#2020-07-1715:48mafcocincoIs there a way to fetch the Datomic version that is running using a Datalog query? In Postgres, for example, I can run SELECT version(); to get the version string/info about the Postgres instance that is processing the query.#2020-07-1716:05defaI’d like to build a small command line jar tool to wipe a datomic database on the transactor. I bundled the Postgres JDBC driver and excluded the H2 driver but the tool will access the h2 driver anyways. Any ideas?#2020-07-1716:06defaException in thread "main" java.lang.ExceptionInInitializerError
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
...
at datomic.coordination$loading__5569__auto____8224.invoke(coordination.clj:4)
at datomic.coordination__init.load(Unknown Source)
at datomic.coordination__init.<clinit>(Unknown Source)
...
Caused by: java.lang.ClassNotFoundException: org.h2.tools.Server
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at clojure.lang.RT.classForName(RT.java:2211)
at clojure.lang.RT.classForNameNonLoading(RT.java:2224)
at datomic.h2$loading__5569__auto____8226.invoke(h2.clj:4)
at datomic.h2__init.load(Unknown Source)
at datomic.h2__init.<clinit>(Unknown Source)
... 117 more
#2020-07-1716:08defadeps.edn:
{:paths ["src" "resources"]
 :mvn/repos {"" {:url ""}}
 :deps {org.clojure/clojure {:mvn/version "1.10.1"}
        com.datomic/datomic-pro {:mvn/version "1.0.6165"
                                 :exclusions [com.h2database/h2]}
        org.postgresql/postgresql {:mvn/version "9.3-1102-jdbc41"}}
 :aliases {:uberjar {:extra-deps {seancorfield/depstar {:mvn/version "0.5.1"}}
                     :main-opts ["-m" "hf.depstar.uberjar" "wipedb.jar"
                                 "-C" "-m" "tools.wipedb"]}}}#2020-07-1716:09defaand wipedb just takes the main args and calls (datomic.api/delete-database (first args)) …#2020-07-1716:37favilaI don’t think you can drop that dependency#2020-07-1716:41favilaOther than tidiness, is there a reason you want to specifically exclude h2?#2020-07-1716:48defaYes, because when running clj -A:uberjar I get an error that the java.sql.Driver clashes…#2020-07-1716:49defaI’ll try to not put the postgres jdbc in the uberjar… and add it to the classpath.#2020-07-1716:52defaAh, that’s the solution. You cannot bundle the jdbc-driver jar with the uberjar…#2020-07-1716:52defaAnyway, thanks for replying!#2020-07-1718:53jaretDatomic dev-local version 0.9.180 now available. https://forum.datomic.com/t/dev-local-0-9-180-now-available/1522#2020-07-1718:55WillIs there a way to "un-register" a transactor with a database?#2020-07-1816:31ziltiI fail to connect to my freshly set up datomic-pro. I get this error, which seems to be the usual one:#2020-07-1816:32ziltiDatomic itself seems to have started fine:
System started datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver#2020-07-1816:34ziltiIn the transactor config, I've set protocol to sql, host to 127.0.0.1 and port to 4334. I've added the license key and set the sql-url, -user, -password and -driver-class.#2020-07-1818:12favilaIs the peer also on localhost? If not you need to change transactor host= to something the peer can reach#2020-07-1818:14favilaTransactors bind to host then write host and alt-host to storage. Peers then read those out of storage and attempt to connect to either of them. So you need host and/or alt-host to be routeable hosts or ips peers can connect to#2020-07-1912:30ziltiYes, the PostgreSQL server is separate, but everything else is running on localhost#2020-07-1822:58dregreNoobie question: Is it possible to create a Datomic schema where a value has an expected type of any, or at least a union type? I'd like a value to be either a string, integer, or float.#2020-07-1908:22David PhamI am not a datomic expert, but you could store your value as a symbol? Otherwise you could save it as a ref and create the correct entity for each value (probably it has a memory disadvantage, but it would solve your problem).#2020-07-1913:07favilaIt is not possible #2020-07-1913:07favilaYou must represent it some other way#2020-07-1913:09favilaFor cardinality one attributes, you can do a kind of tagged union thing.
Make your polymorphic attr with a ref type; its value should be the attr with a specific variant attr#2020-07-1913:10favila{:foo :fooString, :fooString “string”} for example#2020-07-1913:10favilaThis makes nice query patterns#2020-07-1913:12favilaFor cardinality many, if your type options are only scalar types maybe you can use fixed length heterogeneous tuples where only one value is non nil#2020-07-1916:56dregreWhere I landed is I created three different entity schemas — one with a string attr, one with a float attr, one with an integer attr — then created an OR rule that will get the appropriate attribute value based on the entity, so that, given one of the three kinds of entities, I can uniformly access the value. It’s a little like a union type, and appears to work nicely.#2020-07-1916:58dregreAny drawbacks to this approach do you think?#2020-07-1918:32favilaYou have to check three indexes, and you have to ensure only one is asserted at a time#2020-07-1918:33favilaUnless you mean there’s some other value on the entity that tells you which one it is? “One of the three kinds of entities” I’m not sure what that means#2020-07-1919:11dregreLet me put more flesh on these bones.
;; Schema
#:db{:ident :foo-string/val
     :valueType :db.type/string
     :cardinality :db.cardinality/one}
#:db{:ident :foo-float/val
     :valueType :db.type/float
     :cardinality :db.cardinality/one}
#:db{:ident :foo-integer/val
     :valueType :db.type/long
     :cardinality :db.cardinality/one}
;; Rules
[[(val ?e ?v)
  [?e :foo-string/val ?v]]
 [(val ?e ?v)
  [?e :foo-float/val ?v]]
 [(val ?e ?v)
  [?e :foo-integer/val ?v]]]#2020-07-1919:16dregreWon’t it consult one index only?#2020-07-1919:21dregreI'm also interested in learning more about profiling the performance of my rule — are you familiar with any recs or literature on how to do that?#2020-07-2000:02favilaYeah that will have the two downsides I talked about#2020-07-2000:02favila3 index lookups and it's structurally possible to have more than one asserted on same entity#2020-07-2000:03favilaWhat I was suggesting would give a query pattern like this#2020-07-2000:05favila[?e :foo ?foo-attr] [?e ?foo-attr ?foo-val]#2020-07-2000:06favilaWhere ?foo-attr is one of your foo-*/val attrs#2020-07-2000:10favila(A ref to the attr not a keyword)#2020-07-2001:40dregreThanks
I reckon a join is less expensive in Datomic than an index lookup?#2020-07-2001:43dregre(In my case I’m not very concerned about the possibility of more than one assertion on the entity, but I am concerned about slowness.)#2020-07-2012:40favilaIt’s not join vs index lookup, it’s 1-2 lookups vs 3 where at least two of them are misses#2020-07-2012:40favilaIf your db still fits in object cache this is unlikely to matter#2020-07-2012:50dregre🙏:skin-tone-2: #2020-07-1903:38mafcocincoAlso Noobie question: Typically where are schema creation files stored and when are they run? Is it every deployment, similar to migrations or something else? I assume running the same schema code over and over again is functionally idempotent, though I suppose it might generate duplicate, redundant data?#2020-07-2001:53dregre> functionally idempotent
I think schema datoms are just datoms, and since set semantics apply, if you assert a fact that is already in place no new fact is created; although I believe a transaction is.#2020-07-2013:13mafcocincoah, cool. So the only downside of running a schema update with every deploy, for example, would be the generation of a new transaction, which is relatively inexpensive.#2020-07-2013:13mafcocincoThanks!#2020-07-1908:17David PhamIs there a notion of datalog injection? What kind of security issue would we need to think about if we accept an arbitrary valid edn data structure for performing the query?#2020-07-1913:16favila(Assuming you are already using an edn reader that doesn’t evaluate) datalog queries can contain function calls, but you can discover them syntactically#2020-07-1913:16favilaAside from that, they can DOS your service just by being slow#2020-07-1913:17favilaJust like in sql, you should build the query in code and accept user input only as parameters#2020-07-1913:45David PhamSecurity is a bit annoying. Because then you can’t let your users build their own queries. Oh sad thing.#2020-07-2015:21WillIs there a way to "un-register" a transactor with a database? i.e. replace the value specified by "host" in sql.properties? also is there a way to query for what the currently configured transactor host is?#2020-07-2015:42favilatransactors actively write themselves into storage continually while they are running. to “un-register”, start a new transactor with different settings.#2020-07-2015:42favilathere’s no long-term hard association between a transactor and a storage#2020-07-2015:44Willok, that makes sense, thanks!
is there a way to query for what the currently registered transactor(s) are?#2020-07-2015:45favilaThere’s a way that relies on internal implementation details; but, why do you want to do this?#2020-07-2015:48Willwe're just having some deployment issues and wanted to confirm what the current host name is since we recently changed the VNet that the transactor VM is on#2020-07-2015:49favilayou can select the ‘pod-coord’ and ‘pod-standby’ ids, but note the timestamp#2020-07-2015:49favilathe presence of something there isn’t a guarantee a transactor is using it#2020-07-2015:52Willsorry, I'm not sure I follow, are ‘pod-coord’ and ‘pod-standby’ entities or attributes? what would that query look like?#2020-07-2015:53favilaThese are sql keys#2020-07-2015:54favilaselect * from <storagename>.datomic_kv where id='pod-coord'#2020-07-2015:55favilaif you can already get a peer to connect, then you can look at its logs#2020-07-2015:57Willok, thank you, I'll give that a try#2020-07-2020:00mafcocincoIIRC the datomic documentation recommends using db/ident for enumeration types. Could someone share schema declaration example of this usage? I was trying to wrap my head around how that will work but could not find any code examples demonstrating the usage. Thanks in advance.#2020-07-2020:02favila{:db/id "new-entity" :db/ident :my-name}#2020-07-2020:03mafcocincoThanks. Found this example: https://docs.datomic.com/cloud/schema/schema-modeling.html#2020-07-2020:04marshallhttps://github.com/Datomic/mbrainz-importer/blob/master/subsets/entities/enums.edn#2020-07-2020:04marshallThe mbrainz example uses enums for artist type, release formats, etc#2020-07-2022:33drewverleeHow does a q-seq query compare to a datoms avet index query in terms of performance?#2020-07-2022:49Jon WalchI'm pretty confused at this point. I'm using lein. I removed my dependency on [com.datomic/client-cloud "0.8.101"]. I only have a dependency for [com.datomic/dev-local "0.9.180"].
I can execute:
(ns db.core
  (:require
   [datomic.client.api :as d]
   [datomic.client.api.async :as d.async]
   [clojure.core.async :as async]
   [cognitect.anomalies :as anom]))

(defonce ^:private local-config
  {:server-type :dev-local
   :system "dev"})

(comment (d/client local-config))
perfectly fine in my repl. But (d.async/client local-config) throws:
Execution error (ExceptionInfo) at datomic.client.api.impl/incorrect (impl.clj:43).
:server-type must be :cloud or :peer-server
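[editor's note] The error above suggests that, at the time, only the synchronous client namespace accepted :server-type :dev-local, while datomic.client.api.async validated the server type against :cloud / :peer-server. A minimal sketch of staying on the sync API, assuming com.datomic/dev-local is on the classpath (this is not runnable without it, and "scratch" is a hypothetical database name):

```clojure
;; Sketch only: route all calls through the synchronous datomic.client.api,
;; which accepts :server-type :dev-local, instead of the async namespace.
(require '[datomic.client.api :as d])

(def client (d/client {:server-type :dev-local :system "dev"}))

(d/create-database client {:db-name "scratch"})
(def conn (d/connect client {:db-name "scratch"}))

;; Typical sync usage from here, e.g.:
;; (d/transact conn {:tx-data [{:db/ident :hello}]})
;; (d/q '[:find ?e :where [?e :db/ident :hello]] (d/db conn))
```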
#2020-07-2111:27Ramon RiosHello everyone#2020-07-2111:27Ramon RiosSyntax error (Exceptions$IllegalArgumentExceptionInfo) compiling at (datomic_utils.clj:1:1).
:db.error/not-enough-memory (datomic.objectCacheMax + datomic.memoryIndexMax) exceeds 75% of JVM RAM
Did someone pass through this before when initializing datomic?#2020-07-2111:44favilaDo those two settings together exceed the Xmx of your JVM?#2020-07-2111:44favilaOr maybe your Jvm Xmx is too small?#2020-07-2111:46favilaI wouldn’t even try to run a peer with less than 2-3g Xmx. There’s a lower limit where it won’t start at all, but I don’t remember where that ends up being#2020-07-2111:59Ramon RiosI increased heap memory#2020-07-2112:00Ramon RiosStill have this issue#2020-07-2112:00favilawhat is your xmx, datomic.objectCacheMax, and datomic.memoryIndexMax?#2020-07-2112:34Ramon RiosI'm trying to find it#2020-07-2115:01Ramon RiosI resolved it by updating my java version#2020-07-2115:01Ramon RiosI had a too old java#2020-07-2115:45kennyI created a small Clojure lib/script to do (some of) the right things to upload a jar file to a S3 Maven repo. Folks using dev-local may find it useful. You can paste it into a bb script if you want it to go a bit faster. https://github.com/kennyjwilli/s3-mvn-upload#2020-07-2118:41drewverleeDoes qseq lazily get all the entities? or is that eager and only the datomic/pull and xforms are lazy?#2020-07-2118:46drewverleeFrom the reading it's eager#2020-07-2119:05drewverleeI take it the absence of a lazy datomic query means that's a very hard problem.
As in if you can't get laziness on something involving joins (and not just an index).#2020-07-2119:11alexmillerI think you are assuming things that are not necessarily true#2020-07-2119:12alexmillerbut I am not in a place of enough knowledge to answer definitively#2020-07-2119:15alexmillerconceptually, there is nothing about joining per se that prevents "laziness"#2020-07-2119:18ghadihttps://docs.datomic.com/cloud/query/query-executing.html#qseq#2020-07-2119:18ghadi> Item transformations such as pull are deferred until the seq is consumed.#2020-07-2119:22alexmillerthe qseq docstring at https://docs.datomic.com/client-api/datomic.client.api.html#var-qseq says "returning a lazy seq" ?#2020-07-2119:27favilaI believe qseq only performs the work in :find (e.g. pulling) lazily. There’s still an eagerly realized query result set back there with all the values for the vars referenced by that find#2020-07-2119:28favilacorrect me if I’m wrong, but that was my understanding of the tradeoff qseq gives you#2020-07-2119:32ghadithat sounds coherent#2020-07-2119:32ghaditime to first thing is reduced, still a full query done on the remote end#2020-07-2119:33favilaalso explains “efficient count”--i.e. because it’s really backed by a set#2020-07-2120:37mafcocincoWhen querying for an entity that contains a :db/valueType of :db.type/ref, what is the proper Datalog syntax for pulling attributes from the child entity? Does that have to happen as a two step query (i.e.
pull the parent with the ref value, then pull the attributes on that ref entity that you want) or is there a way to accomplish the fetching of attributes from a ref entity in a single query?#2020-07-2120:38ghadi@mafcocinco there are some examples in here (datomic cloud docs) https://docs.datomic.com/cloud/query/query-pull.html#orgdf2185f#2020-07-2120:39mafcocincothanks!#2020-07-2121:16jarethttps://forum.datomic.com/t/dev-local-0-9-183-now-available/1526#2020-07-2205:07onetomwhat's the current recommendation for datomic.api/pull-many when using the datomic.client.api?
there is a nice comparison of the peer and the client apis on the https://docs.datomic.com/on-prem/clients-and-peers.html#peer-only page, but it doesn't mention datomic.api/pull-many or what its equivalent should be in datomic.client.api.
i know it's possible to provide pull patterns as dc/q, but that couples the queries more to pulls, than it would otherwise, using the q+find-rel + pull-many.#2020-07-2205:16onetombased on the introduction of this new qseq function, i suspect that's the one I should use instead of pull-many.
Is my suspicion correct?#2020-07-2210:24maxtI get this error when adding dev-local as a dependency to a previously working ions project:
Could not locate cognitect/hmac_authn__init.class, cognitect/hmac_authn.clj or cognitect/hmac_authn.cljc on classpath. Please check that namespaces with dashes use underscores in the Clojure file name.
I guess it's a dependency conflict but I've checked with -Stree and I get the same versions of com.cognitect/hmac-authn 0.1.195 in both with and without dev-local.
Still trying to track it down, but wanted to ask if anyone else has seen it or something similar?#2020-07-2211:04danieroux@maxt I had this issue too, and it was solved by having the latest tools.deps - 1.10.1.561
Upgrade on osx with:
brew upgrade clojure/tools/clojure#2020-07-2212:49maxt@danie Thank you for the hint! Just noticed that it works from cli, but not through cursive, which indeed seems to be stuck at an older version of tools deps#2020-07-2212:51alexmillerYou can have cursive use your local version of clj too#2020-07-2213:04maxt@alexmiller that would be great, but when I try I get this
The following errors were found during project resolve: /home/max/wavy/wavy-v2/deps.edn: Coordinate type :mvn not loaded for library org.clojure/clojure in coordinate {:mvn/version "1.10.1"}
I don't yet understand why I see that in cursive but not from cli#2020-07-2213:06alexmillerThat's a weird error#2020-07-2213:06alexmillerI'd ask in #cursive#2020-07-2213:08maxtYep, thank you#2020-07-2215:11onetom@maxt yes, you can specify Cursive to use the installed CLI tools, by pointing it to the clojure executable#2020-07-2215:13maxt@onetom Thank you. That option sadly does not work for me, I get a weird error when doing so.#2020-07-2215:13onetomi would recommend using nix to manage your projects' dependencies#2020-07-2215:14onetomthat way it's guaranteed that you have the exact version of everything in a read-only folder and it doesn't clash with the needs of any other project#2020-07-2215:17onetomhere is an example of a shell.nix file, which can provide you the exact same environment in a shell, even years later:
# To upgrade pinned versions, get the latest git commit SHAs:
# git ls-remote nixos-20.03 nixpkgs-unstable
with import (builtins.fetchGit {
  name = "nixos-20.03";
  ref = "refs/heads/nixos-20.03";
  rev = "bb8f0cc2279934cc2274afb6d0941de30b6187ae";
  url = ;
}) {};
let
  sharedJDK = jdk11;
  clojure = callPackage ../clojure { jdk11 = sharedJDK; };
  maven = maven3.override { jdk = sharedJDK; };
in
mkShell rec {
  buildInputs = [
    coreutils cacert wget unzip overmind
    sharedJDK maven clojure
  ];
  shellHook = ''
    export LC_CTYPE="UTF-8"
  '';
}#2020-07-2215:19onetomthe ../clojure folder contains a copy of https://github.com/NixOS/nixpkgs/blob/99afbadaca7a7edead14dc5f930aff4ca4636608/pkgs/development/interpreters/clojure/default.nix and i just adjusted the cli tool version and sha in it.#2020-07-2215:20onetomthe Nix pkg manager works great under any Linux or macOS, but Windows is not really supported or might never be supported.#2020-07-2215:21onetomI'm happy to hop on a https://screen.so/ session and help you to set nix up or i can show it in action on my machine#2020-07-2215:21onetommaybe i should do a screencast about it... 😕#2020-07-2215:21maxtI did not know about Nix, thank you for mentioning it.#2020-07-2215:22maxtIn this case, it works from my command line but not from inside IntelliJ, and I don't think a package manager would help me then.#2020-07-2215:25maxtAnd thank you for offering help. In this case, I just found a workaround that solves my current issue.#2020-07-2504:51staypufd@maxt What was the work-around?#2020-07-2713:49maxt@U060WE55Z Using the option "Use tools.deps directly" on an updated Cursive.#2020-07-2215:25maxtUsing tools.deps version 0.8.709 from inside Cursive finally did allow me to use dev-local. Thank you again @danie#2020-07-2217:50joshkhhello! i am attempting to get a handle on the AWS_ACCESS_KEY_ID environment variable from within my Ion lambda -- something that is typically available based on the execution role. however, listing the environment variables from within my Ion returns the environment variables of the query group, which is expected but not very helpful in my case. is there another way to retrieve the execution role credentials?#2020-07-2217:51marshallusually you would use the instance role credentials#2020-07-2217:51marshallone sec - let me get you an example#2020-07-2218:34joshkhhi marshall, ghadi and kenny sorted me out with a good example.
thanks just the same for investigating#2020-07-2217:52ghadi@joshkh you shouldn't need AWS_ACCESS_KEY_ID to use an AWS SDK#2020-07-2217:53ghadiall AWS SDKs detect that they are running in an EC2 machine with an instance role, and transparently fetch and remember credentials#2020-07-2217:53ghadibut I'm not sure what you're trying to do#2020-07-2217:54ghadiif it's "use an aws API" just call the constructor function for whatever SDK you're using, and don't pass any explicit credentials#2020-07-2217:55ghadiEC2 machines don't have that env var unless you've explicitly set it -- which is not recommended#2020-07-2217:55joshkhthanks for the feedback ghadi. in my case i need to manually sign an HTTP request#2020-07-2217:55ghadifor an AWS API?#2020-07-2217:56joshkhspecifically, a raw GraphQL query to an AppSync endpoint#2020-07-2217:57joshkhie a server side mutation from an ion#2020-07-2218:00ghadiif manually signing, you could ask an sdk for credentials#2020-07-2218:00ghadicredentials periodically rotate in ec2 machines, and the sdk's handle this for you#2020-07-2218:00ghadiwould love to see some sample code#2020-07-2218:04joshkhi'm using cognitect's aws-api, although there is no "server side" appsync client or API to perform mutations, so it appears that manually posting signed requests to AppSync is the way to go. i'm drawing some inspiration from the code at the end of this article: https://adrianhall.github.io/cloud/2018/10/26/backend-graphql-trigger-appsync/
nodejs has the benefit of using the AWS AppSync client, but that's not an option in the JVM#2020-07-2218:04ghadicool you are in luck, I am a maintainer of that sdk 🙂#2020-07-2218:05joshkhit's my lucky day!#2020-07-2218:06ghadiinteresting, I didn't realize the aws js sdk exposes sigv4 as an API#2020-07-2218:06joshkhhttps://github.com/cognitect-labs/aws-api/blob/master/latest-releases.edn#L531#2020-07-2218:07ghadithat is not the same thing#2020-07-2218:08ghadithat is a code signing api for IOT#2020-07-2218:08ghadihttps://docs.aws.amazon.com/signer/latest/api/Welcome.html#2020-07-2218:08ghadian actual service#2020-07-2218:08ghadiwhat you are looking for is a thing that signs HTTP maps with AWS credentials#2020-07-2218:11ghadiwe don't yet have v4 request signing a la carte in cognitect labs' aws-api#2020-07-2218:11ghadibut I think @U083D6HK9 may have done it#2020-07-2218:11ghadiit's useful for API Gateway, as well as GraphQL#2020-07-2218:12joshkh(re: AWS Signer, yes thanks for pointing that out. i was crossing wires)#2020-07-2218:13ghadiin any case: don't rely on the datomic machine to have static credentials in the env vars#2020-07-2218:13joshkhfor sure. i didn't really expect them to be there as with a typical lambda.#2020-07-2218:14ghadihttps://github.com/cognitect-labs/aws-api#credentials#2020-07-2218:14ghadibut that doesn't solve your larger problem of signing a request#2020-07-2218:14kennyHaven’t read the background on your issue @joshkh but perhaps this is useful https://gist.github.com/kennyjwilli/aa9e99321d9443a8ae80448974850e79#2020-07-2218:16joshkhforgive me ghadi, but doesn't that demonstrate a way to provide custom credentials? 
whereas in my case i need to retrieve the current access-key-id etc.?#2020-07-2218:17ghadiyeah#2020-07-2218:17joshkh@U083D6HK9 i think that's exactly what i'm looking for#2020-07-2218:18ghadiif you call (cognitect.aws.client.shared/credentials-provider) you'll have access to the default provider which will pull and refresh the creds automatically#2020-07-2218:18ghadicombine that with kenny's script#2020-07-2218:20ghadithanks @U083D6HK9!#2020-07-2218:20joshkhyup, this looks great. thank you both for your help. it's always interesting to dive a little deeper into ions / aws api.#2020-07-2218:20joshkhand thanks for your work on aws-api. it was a game changer when moving from the aws java sdk.#2020-07-2218:25joshkhon a side note, i wasn't able to sign S3 / CloudFront urls with aws-api in the past (although maybe that has changed), so i ported over an AWS java example to Clojure. might be useful to someone who finds this thread in the future. https://gist.github.com/joshkh/99718bfd4cd95cd48cda8533f162ffbf#2020-07-2217:55joshkhcorrect#2020-07-2217:59ghadihmm which sdk ?#2020-07-2217:59joshkh^ i started a thread above 🙂#2020-07-2312:52nlessaHi, any thoughts about future of Datomic as a product after acquisition of Cognitect by Nubank?#2020-07-2312:56David PhamI hope it becomes open source xD#2020-07-2312:56stuarthallowayWe expect the future for Datomic to be "like today, but more so." Datomic development, product offerings, and customer relationships will continue and grow, as they have.#2020-07-2312:57David PhamDo you think companies might see some conflict of interest on Datomic?
Especially financial companies?#2020-07-2313:05nlessawell, I foresee some fintechs in Brazil second guessing their use of Datomic… In the end what matters is how Nubank will treat Datomic as a product by itself or their “inner most important technology”… For me, who lived through the story of Apple's acquisition of NeXT (and WebObjects), and the long demise of WebObjects as a product and more and more as the “most important technology of Apple's back-end”, it’s a bit frightening.#2020-07-2313:24tvaughanThat was a looong time ago. The thinking that some critical piece of infrastructure is best kept proprietary as some sort of competitive advantage has changed considerably. Adapting to new technologies and swapping one for another when the new thing offers cost savings or improves customer experience, for example, and, perhaps most importantly, pushing off job training costs to the broader open source community are the new normal in this regard, I think#2020-07-2313:32nlessaI think like you and hope that Nubank/Cognitect act like that. Let's see.#2020-07-2313:55stuarthallowayNubank doesn't see any benefits in restricting access to awesome technology. We exist as part of a broader ecosystem, not in isolation, and we all benefit from broader adoption and more scale for both Clojure and Datomic. Nubank doesn't have any incentives or interests in this sense that would conflict with other companies leveraging the same technologies, regardless of industry.#2020-07-2313:56stuarthallowayNubank believes that our experience with Datomic at scale can help optimize and enhance the product and we would love to see all Datomic users benefiting from that progress.#2020-07-2314:10calebpI’m very grateful for “like today, but more so.” Thanks Datomic team#2020-07-2315:34drewverleein the datoms call https://docs.datomic.com/client-api/datomic.client.api.html#var-datoms given the :avet index what is the big O time? is it proportional to the number
of datoms with the given attribute?
Put another way, should i pass an attribute that is rarer?
person.name = "drew"
Or something where the attribute = value combination is more strict?
person.id = id
I'm fairly sure the former is more correct but i can't articulate why.#2020-07-2315:43drewverleeFrom these docs: https://docs.datomic.com/on-prem/indexes.html#avet
I guess if the attribute has an index (which it must) and the value column has an inner sort within an attribute. Then assuming it's an efficient tree search it's going to be constant to look up the attribute=value right?#2020-07-2316:48favilaCorrect. This may help understand why: https://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2020-07-2316:49favilaPretty much any three “start” segments will have an efficient--if not direct node lookup, an array-bisection over a selective segment of the index#2020-07-2316:49favilaso you can always rely on getting the first item from datoms in effectively constant time#2020-07-2317:48drewverleeas always ty @U09R86PA4#2020-07-2316:12calebpWe seem to have a schema modeling conflict between using isComponent and using tuple attributes to create unique values on the component entities. Before tuples were released, we usually defined the relationship from the owning entity to the owned entities and marked the ref attribute as isComponent=true. But if we want to create unique tuple attributes on the owned entities that include the owned entity, we need to define the relationship from the owned to the owner and lose the isComponent. Should it just be a matter of deciding whether isComponent or uniqueness is more important in the relationship? We could model the relationship both ways, but then we have to manage making sure they’re both written.#2020-07-2316:52favila> But if we want to create unique tuple attributes on the owned entities that include the owned entity
Do you mean “unique tuple attributes on the owned entities that include the owner entity”?#2020-07-2316:56calebpYes.#2020-07-2317:31favilaWhat features of isComponent are you sad to lose? The direction of the reference? That backrefs in pulls and entity-map walking are scalar values?#2020-07-2317:38calebpSome of it is the documentation aspect that the owned entities shouldn’t exist without the owners. Whether that’s enforced through retractEntity or other custom code, it’s a piece of data that is lost. Not a huge deal.#2020-07-2413:27calebpI guess there’s nothing stopping me from creating my own attribute to mark the relationship as “isComponent” when modeled in the other direction. The convenience features of retractEntity and (pull [*]) wouldn’t work, but the data would still be there.#2020-07-2320:14drewverleedatomic.api/q takes a map {:query .. :args ... }
Where query can be {:find ...} right? I'm getting a spec error that no find clause is specified.
1. Caused by java.lang.IllegalArgumentException
No :find clause specified
query.clj: 310 datomic.query/validate-query
do the datomic functions have specs? I'm worried this/my project might be shadowing things in a way i don't see yet.#2020-07-2320:20faviladatomic.api/q does not take a map#2020-07-2320:20favilayou are thinking of either datomic.client.api/q or datomic.api/query#2020-07-2320:21favilathat said, (datomic.api/q {:find …} arg1 arg2) works#2020-07-2320:21favila(i.e. anywhere the vector form of a query is accepted, a map form is ok too--the vector is just sugar for the map)#2020-07-2320:23drewverleeI'm looking at these docs:
https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/q
> query can be a map, list or string#2020-07-2320:24drewverleeoh#2020-07-2320:24drewverleequery & inputs#2020-07-2320:25drewverleeso qseq (which is my final goal) does take a map. i just sort of skipped reading between the lines it seems.
Usage: (qseq query-map)
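To make favila's point above concrete, here is a hedged sketch (untested; it assumes a running on-prem peer and a `db` value already in scope): `datomic.api/q` accepts the query itself in vector, map, or string form, the vector being sugar for the map, while `d/qseq` and `datomic.api/query` take a single arg-map with `:query` and `:args` keys.

```clojure
;; Sketch only -- requires a connected Datomic peer; `db` is assumed to exist.
(require '[datomic.api :as d])

;; Vector form and its map-form equivalent, both valid as the query argument:
(d/q '[:find ?e :where [?e :db/ident :db/cardinality]] db)
(d/q '{:find [?e] :where [[?e :db/ident :db/cardinality]]} db)

;; d/qseq (like d/query) instead takes one arg-map:
(d/qseq {:query '[:find ?e :where [?e :db/ident :db/cardinality]]
         :args  [db]})
```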
#2020-07-2414:53tvaughanI have Datomic On-Prem 1.0.6165. When I try to run the console script I get ERROR: This version of Console requires Datomic 0.8.4096.0 to run What am I doing wrong?#2020-07-2415:18jaretHey @tvaughan! Sorry about this, we released a fix for this issue with our standalone conole. You can download it from: https://my.datomic.com/downloads/console#2020-07-2415:19jaretYou’ll need to follow the included README and use bin/install-console to install over the version of console in your release.#2020-07-2415:19jaretWe’re going to correct this issue in the next release of Datomic On-Prem by packaging this version of console with the download.#2020-07-2415:20tvaughanOK. I saw The Datomic Console is included in the Datomic Pro distribution on https://my.datomic.com/downloads/console. Perhaps this should be updated too. Thanks!#2020-07-2415:21jaretOh it is still going to be included going forward its just that this particular version of console had a bug that prevented it from starting with that particular version of Console that wasn’t caught by our CI#2020-07-2415:21jaretBut because we include it, we can’t rip it out after the fact 😞#2020-07-2415:23jarethttps://docs.datomic.com/on-prem/changes.html#console-225#2020-07-2415:25tvaughanI mean a note like "Users of versions x, y, and z will need to download the console separately. Follow these instructions..." I saw this download page and thought I had everything I needed#2020-07-2415:44jaretAh understood! I’ll take a look at that and see if we can get a warning there or some kind of call out#2020-07-2416:11marciolI work at a company that is the second in terms of Clojure developers in Brazil, and the news about the acquisition of Cognitect by Nubank are concerning our C-level board, given that we compete with Nubank in several fronts.
We are using the Datomic Cloud offering right now, but as I really want to keep using Datomic, I'm tempted to suggest a migration to Datomic On-Prem as a way to calm their anxiety about the whole story. The question is: will Datomic On-Prem remain a perennial offering for the foreseeable future? Can I strongly defend this option?
cc: @stuarthalloway @marshall @alexmiller#2020-07-2416:22stuarthallowayHi @marciol. In addition to what @alexmiller said, did you see this in Slack yesterday? https://clojurians.slack.com/archives/C03RZMDSH/p1595512519395700#2020-07-2416:24marciolAh yes, I saw it @stuarthalloway, and I sincerely believe that Clojure and the whole ecosystem, including Datomic, will benefit even more. I just need some arguments for dealing with business people.#2020-07-2416:25stuarthallowayTo the extent this is somehow about On-Prem vs. Cloud, we plan to continue to enhance both products in parallel, as we have to date.#2020-07-2416:26marciolNice, so I’ll use that as an argument, and this can be a real option to relieve the anxiety 😄#2020-07-2416:27marciolYou should know how paranoid business people are about competition and things like that.#2020-07-2416:20ziltiIs there by any chance software that allows editing and adding data in a Datomic database? Basically a Datomic Console with transact capabilities#2020-07-2416:21alexmiller@marciol From https://building.nubank.com.br/welcoming-cognitect-nubank/ (CTO of Nubank): "
• The existing development team will continue to enhance both Datomic products: Pro and Cloud
• Nubank is best served by the widespread use of Datomic at other companies. Datomic will continue to be developed as a commercially available, general-purpose database (as opposed to being pulled in-house or restricted)"
#2020-07-2416:41tvaughanThe console is not compatible with a "mem peer server"?
Removing storage with unsupported protocol: mem = datomic:
No storages specified
#2020-07-2416:47favila1. The console is a peer, not a client. So it can’t connect to peer-servers anyway, mem or not.
2. The console doesn’t support mem peers. In theory it could, but that would be nearly pointless because it has no transaction capabilities.#2020-07-2416:48tvaughanGotcha. That makes sense. Thanks for the clarification @U09R86PA4#2020-07-2417:39jarethttps://forum.datomic.com/t/dev-local-0-9-184-now-available/1537#2020-07-2613:47husaynthe datomic-console only seems to be able to do queries … anyone know of a datomic tool which can be used to perform transactions ?#2020-07-3018:52bhurlowhyperfiddle#2020-08-0312:15husaynlooks promising , thanks @U0FHWANJK#2020-07-2618:55vinnyataidehello guys. any idea why the datomic rest-api is deprecated?#2020-07-2618:56vinnyataideI loved it because that makes it trivial to interface my backend with my clojure natal apps#2020-07-2618:56vinnyataidebut I am concerned#2020-07-2618:57vinnyataidethat someday I'll have to learn all the aws stack to keep using datomic lol#2020-07-2619:32aaroncodingI'm trying to use conformity with datomic pro (on-prem). It breaks though, seems to be trying to require datomic.api instead of datomic.client.api.
Is there anything I can be doing differently to make it work? Or even an alternative to conformity?#2020-07-2619:54ghadisearch for cloudformity @coding.aaronp #2020-07-2619:54ghadiI’m on mobile otherwise I’d link you#2020-07-2619:56aaroncodingawesome thanks!#2020-07-2620:15Giovani Altelino@husayn , I use the REPL to do the transactions, I find it easier to "replay" the transactions too, since I can just save everything in a namespace.
https://github.com/giovanialtelino/hackernews-lacinia-datomic/blob/master/src/hackernews_lacinia_datomic/db_start.clj#2020-07-2620:46husayn@galtelino yeah, that’s what I use now, was just hoping there was a better tool#2020-07-2713:30SvenIs it possible to preload multiple databases after instance restart or speed up the initial d/connect? I’d like to deploy ions often but every deployment results in the initial d/connect to a non preloaded db taking 10+ seconds.#2020-07-2713:36Joe Lane@sl What kind of topology are you using?#2020-07-2713:49Sven@lanejo01 solo for dev and testing and production for qa and live#2020-07-2713:53Joe Lane• Is it the same production topology for QA and Live?
• How do you know the problem the d/connect time is taking a long time?
• Are you sure it's not because of lambda coldstart?
• Have you considered using HTTP Direct?#2020-07-2714:03Joe Lane@sl I'm not sure how you're measuring the time, but the first thing I would look at is switching to HTTP Direct if you aren't already using it.#2020-07-2714:03Sven1) yes. Though the issue is way more pronounced on solo.
3) lambda cold starts occasionally contribute to the issue but I run lambda warmers to help mitigate this
4) I am using appsync so I have to invoke lambdas#2020-07-2714:05Joe Lane3) I think that lambdas are rebuilt on redeploy so I'm not sure your warmers would be able to help
4) Doesn't appsync also support an http proxy?#2020-07-2714:33Svenhmmm, I have somehow completely missed ions with HTTP direct. I’ll do some testing on production topology. Thanks for that tip!
2) as for d/connect taking a long time - when I manually reboot the solo topology instance and after restart connect to that system from my laptop (bypassing the lambda function altogether), the first connect + query/transaction takes a long time.#2020-07-2714:33David PhamWith dev-local, what are the limitations on storage, number of transactors, and readers?#2020-07-2716:30stuarthallowayHi David. dev-local is in process, so there are no transactors. Memory usage is described at https://docs.datomic.com/cloud/dev-local.html#limitations.#2020-07-2905:19David PhamThanks Stuart!#2020-07-2715:07Kaue SchltzHi there, I've heard rumours about datomic cloud support for cross db queries, do you guys know something about this? Any reading material on the subject is more than welcome#2020-07-2716:05Joe LaneCross DB queries in cloud refers to the same db at two different points on its timeline, not two different databases across timelines.#2020-07-2717:09Kaue SchltzI see, given that I have db-A and db-B, aren't there any built-in features to support simultaneous queries in both databases?#2020-07-2721:36Kaue SchltzAnother thing that isn't very clear to me regarding datomic cloud. Suppose I have a production topology with several dbs, would the write ops interfere with one another among those databases or are they served by different transactors?#2020-07-2723:05Joe Lane@schultzkaue Different transactors. They would not compete for resources in the way you described.#2020-07-2723:26Kaue SchltzGreat, thanks#2020-07-2816:43stuarthallowayI would just add that you can increase the number of processes in the primary compute group if you have many databases: https://docs.datomic.com/cloud/operation/scaling.html#database-scaling#2020-07-2914:24Kaue SchltzThat would be a perfect fit#2020-07-2914:24Kaue Schltzthx#2020-07-2804:06zebuIs there a way to restore into dev-local a backup taken from datomic-free?#2020-07-2812:59stuarthallowayNot at present. 
There are a number of differences in core attributes. You would have to write a program that reads the log from one database, keeps track of entity ids, drops or alters unsupported things (e.g. the bytes value type) and transacts into the other database.#2020-07-2813:21zebuThanks Stu 🙂 I'll look into that#2020-07-2816:05fugbixGood evening everyone!! Is there a way to manipulate arrays with Datomic? (unfortunately I can’t use tuples, as they’re limited to 8 scalars).#2020-07-2816:07favilaI think that limit only applies to heterogeneous tuples?#2020-07-2816:07favilahttps://docs.datomic.com/on-prem/schema.html#homogeneous-tuples vs https://docs.datomic.com/on-prem/schema.html#heterogeneous-tuples#2020-07-2816:24fugbixWell I thought so too, but apparently I am unable to transact tuples larger than 8 values using :db.type/tuple :
(d/transact conn [{:db/ident :weights
:db/valueType :db.type/tuple
:db/tupleType :db.type/double
:db/cardinality :db.cardinality/one}])
(d/transact conn [{:weights [0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8]}])
(d/transact conn [{:weights [0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9]}])#2020-07-2816:26fugbixhttps://docs.datomic.com/on-prem/schema.html#tuples
says: A tuple is a collection of 2-8 scalar values#2020-07-2816:41favilathat is unfortunate#2020-07-2816:47favilaconsider using bytes and Double/doubleToLongBits?#2020-07-2816:49fugbixI’ll give this a try thank you!!#2020-07-2816:49favilaor DataOutputStream#2020-07-2816:49favila(http://java.io class)#2020-07-2816:49fugbixThanks a lot#2020-07-2819:09Nassindev-local can be migrated to the peer server correct?#2020-07-2820:01Nassinguess it's just a matter of pointing the transactor to the data dir#2020-07-2912:24arohnerAre there any recommendations on the size of a cardinalityMany attribute? Is there a problem with storing a million uuids in a single datom EAV?#2020-07-2912:53stuarthallowayThere is no particular limit, but you should keep in mind the memory implications of future use.#2020-07-2912:54stuarthallowayFor example, if you gradually build up 1 million EAs, and then retract the entire entity, the transaction that does the retraction will have 1 million datoms in it.#2020-07-2912:55stuarthallowayAlso consider pull expressions, which might have been written (or displayed in a UI) with the presumption that their results are smallish and don't need to be e.g. paginated.#2020-07-2912:58stuarthallowayPrograms consuming a high cardinality attribute may want to use https://docs.datomic.com/cloud/query/query-index-pull.html#aevt to consume in chunks.#2020-07-2913:14arohnerThanks#2020-07-2914:28souenzzoReminder: pull by default gets only 1000 elements on ref-to-many
https://docs.datomic.com/on-prem/pull.html#limit-option#2020-07-2915:41NassinFor example, if you gradually build up 1 million EAs, and then retract the entire entity, the transaction that does the retraction will have 1 million datoms in it.#2020-07-2915:41NassinOnly if isComponent is true correct?#2020-07-2917:07favila@U011VD1RDQT no, isComponent will propagate the delete to other entities#2020-07-2917:09favila[E A 1millionV] is going to delete one million E datoms regardless of whether A is an isComponent attr.#2020-07-2917:14Nassinah true, was thinking it was of type :db.type/ref :+1:#2020-07-2914:37Kaue SchltzHi there.
My current scenario is that we're using Datomic Cloud in one of our major services; it is around 60M entities/3.5B datoms and some particular queries are underperforming.
As we plan to grow by some orders of magnitude, I was exploring alternatives to scale both our writes and reads.
From my understanding so far, given that I'm able to scale the number of processes serving my dbs, and transactors don't compete for resources among those dbs, I started experimenting with the following:
1 Have my service write in parallel to multiple dbs (let's say db0, db1, db2, all with the same schema), ensuring that the same entity always ends up in the correct db so I don't end up with partial data split across my databases
2 When querying, I issue the queries in parallel, then merge the results in my application, something like
(pcalls query-for-satellites0 query-for-satellites1 query-for-satellites2)
So far, this parallel read/write scenario has proven to be really performant.
Now my question to you guys is whether I'm missing something, or are there any architectural gotchas that would make this a bad idea?#2020-07-2919:51marciol@UNAPH1QMN It’d be nice to know if anyone has already experimented with this kind of topology.
You are sharding your data across several db’s and writing/issuing queries in parallel right?#2020-07-2920:03Kaue SchltzYup#2020-07-3013:31cpdeanIs it possible to save rules to a datomic database? I've noticed that datalog rules seem to only be used (in the examples in the docs) when scoped to a single query request
; use recursive rules to implement a graph traversal
; (copied from learndatalogtoday)
(d/q {:query '{:find [?sequel]
:in [$ % ?title]
:where [[?m :movie/title ?title]
(sequels ?m ?s)
[?s :movie/title ?sequel]]}
:args [@loaded-db
'[[(sequels ?m1 ?m2) [?m1 :movie/sequel ?m2]]
[(sequels ?m1 ?m2) [?m :movie/sequel ?m2] (sequels ?m1 ?m)]]
"Mad Max"]})
Is it possible to save a rule to a database so that requests do not need to specify all of their rules like that? I'm looking at modelling programming languages in datalog and so there will be a lot of foundational rules that need to be added and then higher-level ones that build on top of those.#2020-07-3014:47val_waeselynck@UGHND87PG you may want to read about the perils of stored procedures 🙂
But AFAICT, for your use case, you don't really need durable storage or rules, you merely need calling convenience. I suggest you either put all your rules in a Clojure Var, or use a library like https://github.com/vvvvalvalval/datalog-rules (shameless plug).#2020-07-3014:50val_waeselynckAll that being said, datalog rules are just EDN data, nothing keeps you from storing them e.g. in :db.type/string attributes.#2020-07-3016:34cpdeangotcha so it's idiomatic to just collect rules that define various bits of business logic on the application side as a large vec or something and then ship that per request?#2020-07-3016:41cpdeanalso -- i would love to read anything you recommend about the perils of stored procedures! I've gone back and forth quite a bit during my career about relying on a database to process your data, but since i now sit firmly on the side of "process your data with a database", i don't feel like discounting them wholesale. but in any case, since datalog rules are more closely related to views than stored procs, i kinda want them to be stored in the database the way that table views can be defined in a database. but, i'd love to read anything you have about how that feature might be bad and if it's better to force clients to supply their table views.#2020-07-3017:13favilaphilosophically datomic is very much on the side of databases being “dumb” and loosely constrained and having smarts in an application layer. The stored-procedure-like features that exist are there mostly to manage concurrent updates safely, not to enforce business logic. (attribute predicates being a possible, late, narrow exception)#2020-07-3017:14favila(at least IMHO, I don’t speak for cognitect)#2020-07-3018:53cpdeanyeah i'm finding a lot of clever things about its ideas of the data layer -- like, most large scale data systems do well when they enshrine immutability. 
the fact that datomic does that probably resolves a lot of issues around concurrency/transaction management when you allow append-only accretion of data and have applications know at what point in time a fact was true#2020-07-3019:08cpdeanit'd be nice to see if my guess is accurate in the reason for not storing datalog rules in the database, but maybe keeping rules and the complicated business logic they could implement out of the database means you avoid problems where a change to a rule would break a client that's old versus a newer client that expects the change. tracing data provenance when the definition of a view is allowed to change makes things difficult to reason about or trace where a number is coming from. By forcing the responsibility of interpretation on the client, it allows clients to manage the complicated parts and keep the extremely boring fact-persistence/data-observations in one place#2020-07-3016:52mafcocincoI have added a composite tuple to my schema in Datomic and marked it as unique to provide a composite unique constraint on the data. The :db.cardinality is set to :db.cardinality/one and the :db/unique is set to db.unique/identity. When a unique constraint is set to db.unique/identity on a single attribute, if a transaction is executed against an existing entity, upsert will be enabled as described https://docs.datomic.com/cloud/schema/schema-reference.html#db-unique-identity. I would have expected the behavior to be the same for a composite unique constraint, provided the :db/unique was set to :db.unique/identity. However, that does not appear to be the case as when I try to commit a transaction against an entity that already exists with the specified composite unique constraint, a unique conflict exception is thrown. AFAIK, this is what would happen in the single attribute example if the :db/unique was set to :db.unique/value. Am I missing something or misunderstanding how things are working? 
I’m new to Datomic and I’m assuming this is just a misunderstanding on my part.#2020-07-3017:05favilaResolving tempids to entity ids occurs before adjusting composite indexes, so by the time the composite tuple datom is added to the datom set the transaction processor has already decided on the entity id for that datom#2020-07-3017:06favilaTo get the behavior you want, you would need to reassert the composite value and its components explicitly every time you updated them#2020-07-3017:07favilaThe reason it’s like this is because there’s a circular dependency: to know what the composite tuple should be to update, it needs to know the entity to get its component values to compute the tuple, but to know there’s a conflict it needs to know the tuple value#2020-07-3017:26mafcocincoah, that makes sense. It is relatively trivial to handle the exception and, in the application I’m working on, it is perfectly acceptable to just return an error indicating that the entity already exists. Any individual attributes on the entity that need to be updated can be done as separate operations.#2020-07-3017:26mafcocincoThanks for the explanation.#2020-07-3017:31favilaIf that’s the case, consider using only :db.unique/value instead of identity to avoid possibly surprising upserting in the future.#2020-07-3017:32mafcocincoJust so I’m clear, that is under the assumption that the behavior we discussed above changes such that upserting works with composite unique constraints?#2020-07-3017:32mafcocincoThat makes sense to me, just want to make sure I’m understanding correctly.#2020-07-3017:35favilaI guess that’s possible, but I just mean :db.unique/identity is IMHO a footgun in general#2020-07-3017:35favilaif you don’t need upserting, don’t turn it on#2020-07-3017:57mafcocincogotcha. thanks.#2020-07-3017:57Kaue SchltzHi there. 
I was looking for a more straightforward doc on how to scale up my primary group nodes for my datomic cloud production topology, any of you guys could help me on that?#2020-07-3018:25marshall@schultzkaue do you mean make your instance(s) larger or add more of them?#2020-07-3018:25Kaue SchltzI wanted more nodes#2020-07-3018:26marshallhttps://docs.datomic.com/cloud/operation/howto.html#update-parameter
^ this is how you choose a larger instance size - change the instance type parameter
for increasing the # of nodes:
https://docs.datomic.com/cloud/operation/scaling.html#database-scaling
Edit the AutoScaling Group for your primary compute group, set it larger#2020-07-3018:27marshallsame approach as is used here: https://docs.datomic.com/cloud/tech-notes/turn-off.html#org7fdb7ff but you set it higher instead of setting it down to 0#2020-07-3018:27Kaue Schltzneat! Thank you#2020-07-3120:05hadilsHi there, I am using Datomic Cloud. I would like to compile the code in my CI pipeline before deploying it, to save time and money. Can anyone tell me how Datomic Cloud invokes the compiler, and if it's reproducible?#2020-07-3120:37stuarthallowayHi @hadilsabbagh18. Are you writing ion code that runs inside a cluster node?#2020-07-3120:37hadilsYes sir.#2020-07-3120:39stuarthallowayIf you compile your code before deploying it to an ion, it will load into the cluster node faster, but I am not sure that will save you a visible amount of time or money.#2020-07-3120:40hadils@stuarthalloway I have deployed code that has had Java compiler errors, which costs time and money. I am just trying to pre-compile the code to make sure that it will pass.#2020-07-3120:41stuarthallowayDo you mean Clojure compiler errors? The cluster node does not compile Java for you.#2020-07-3120:41hadilsYes, I mean Clojure compiler errors...#2020-07-3120:41stuarthallowayYou have some options:#2020-07-3120:43stuarthallowayIf you are already going to the trouble of running the compiler locally, then you can make a jar with the compiled code instead of with source. Then there is no compilation on the cluster node, and no possibility of (that class of) error.#2020-07-3120:44stuarthallowayIn that case the cluster node will also start faster after an ion deploy, although the difference may not matter much.#2020-07-3120:45hadilsHow would I indicate to the Ion deployment that I have already compiled my code into a jar? I can figure out that part...#2020-07-3120:45stuarthallowayGood news: you don't have to.#2020-07-3120:45stuarthallowayJars are jars are jars#2020-07-3120:46hadilsok. 
So I just declare :gen-class in my code and compile it?#2020-07-3120:46stuarthallowayHave your ion depend on your compiled code as a maven dep.#2020-07-3120:46hadilsOk. Understood.#2020-07-3120:47stuarthallowayYou definitely do not need gen-class#2020-07-3120:47hadilsIn deps.edn right?#2020-07-3120:47stuarthallowayright#2020-07-3120:47stuarthallowayThis leads to a two-project structure, where your code is in one project, and your ion has deps on that code and probably just ion-config.edn.#2020-07-3120:48hadilsAha! Interesting idea!#2020-07-3120:48stuarthallowayI do this all the time. As soon as code is nontrivial I want to use it from more than one ion.#2020-07-3120:49stuarthallowayTo get the compilation benefit, you still need to do whatever maven/leiningen/boot magic you need to compile all your Clojure code in the code project.#2020-07-3120:50hadilsCan I use maven with tools.deps.alpha?#2020-07-3120:51stuarthallowayFor some definitions of "use", yes 🙂#2020-07-3120:53stuarthallowayThis space is evolving https://github.com/clojure/tools.deps.alpha/wiki/Tools#packaging#2020-07-3120:58hadilsI have found @seancorfield's depstar repo. I will use that. Thanks for your help @stuarthalloway!#2020-07-3121:22Nassinis the dev-local client compatible with on-premise client? (ignoring the features that on-premise supports but cloud doesn't)#2020-07-3121:32cpdeanWhat's the idiomatic way to model something like a link table but against multiple other entities? in old datalog/prolog you'd do something like attrName(entity1, other1, other2, other3). assuming entity1, other1, etc are either scalar values or entity ids.
but in datomic's datalog, if vecs are allowed as a value in a datom, you might be able to do something like this
[entity1 :attrName [other1, other2, other3]]
or if not, you could... maybe this is how you'd do it?
[entity1 :attrName1 other1]
[entity1 :attrName2 other2]
[entity1 :attrName3 other3]
the fact attrName is meant to be something that must join entity1 with 3 other entities, rather than it representing an unordered collection of linked entities, like the :movie/cast attr in http://learndatalogtoday.org#2020-07-3121:38Nassindo all :attrName* express the same relation?#2020-07-3121:40cpdeanyeah. maybe i should have come up with a better concrete example for this...#2020-07-3121:41cpdeanboughtHouse(buyer, seller, house, notary). maybe? i don't actually know how houses are sold haha#2020-07-3121:43favilawhy is this different from having separate ref attributes? each assertion has a different meaning#2020-07-3121:43cpdeanmaybe the orientation of what an entity is can be reversed?
[house-sale-eid :housesale/buyer buyer-eid]
[house-sale-eid :housesale/seller seller-eid]
[house-sale-eid :housesale/house house-eid]
[house-sale-eid :housesale/notary notary-eid]
#2020-07-3121:43favila^^ this is what I would expect#2020-07-3121:44cpdeani don't know if it's different - i'm totally new to this and only have a background in dimensional modelling, datavault, and datalog#2020-07-3121:44favilaI think you’re getting at something though. Is it maybe a constraint you’re trying to enforce?#2020-07-3121:45cpdeani definitely know that i want some constraints to be enforced, but i don't know what the term means in datomic's context yet 😬#2020-07-3121:51cpdeanyeah i guess orienting the entity around the event and not the buyer, or whatever the 'primary subject' of the event is is how you'd avoid having more than one instance of an entity for a given field#2020-08-0114:50alidlorenzoWhat exactly counts as an entity in Datomic? Is it whatever datoms are transacted together as part of a single transaction form?
I’m asking bc I’m pondering what would be the best way to model a belongs-to relationships of different namespaced datoms that are always created together?
e.g. an account and a user
should they be transacted* together as a single db entity?
{:tx-data [{:account/username "admin"
:user/email "
or would it be better to make them separate db entities and give one a db ref to the other?
{:tx-data [{:account/username "admin"}
{:user/email "
i imagine making them a single db entity and adding a db ref would be redundant since the datom would be referencing its own db id
{:tx-data [{:account/username "admin"
:user/email "#2020-08-0300:56hadilsI read somewhere that we should never use the Synchronous Client API in Production. Does anyone have experience with using Async in Production?#2020-08-0300:56hadilsPerhaps they can share some insights. I am using Datomic Cloud...#2020-08-0301:18Joe Lane@hadilsabbagh18 I've never heard that before and disagree with "never". I've almost exclusively used the Synchronous api.#2020-08-0301:19hadilsThanks @lanejo01. Congratulations on joining Cognitect/Nubank!#2020-08-0301:19Joe LaneThanks!#2020-08-0308:22plexusFor folks who are interested in Datalog databases in general please come and hang out with us over at #datalog#2020-08-0407:47robert-stuttaford@jaret @marshall what does it mean if i can see a datom in a d/db but not in a d/history of that same db?#2020-08-0412:13jaret@robert-stuttaford any chance the attribute has :db/noHistory set to true?#2020-08-0412:48jaret@robert-stuttaford second thought, you’re getting the history db from the db you see the datom in? If so, that sounds like something we would want to investigate. Would you be able to give us a small repro or better yet, a backup that shows this behavior?#2020-08-0413:26robert-stuttafordthat's right - db and (d/history db)#2020-08-0413:27robert-stuttaford@jaret it's in our prod db, which has all our PII in it, started circa 2012 🙂#2020-08-0413:27robert-stuttafordperhaps we could arrange a zoom and i could show you via screen share, and then we can see about next steps from there?#2020-08-0414:51Lennart BuitHey there, we are using datomic and are currently diagnosing a performance issue related to a recursive rule. We have a tree structure in datomic, that for each node, links a parent (or not), so a schema like this:
(def schema
[;; Additional tree attributes omitted
{:db/ident :node/parent
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}])
Now, we sometimes look through the tree to find, say a descendant of a node. To do so, we have a recursive descendants-of? rule that binds this collection of descendants to ?node:
(def descendants-of?
'[[(descendants-of? ?root ?node)
[?node :node/parent ?root]]
[(descendants-of? ?root ?node)
[?node :node/parent ?intermediate]
(descendants-of? ?root ?intermediate)]])
So far so good, we can do queries that reason about descendants, for example finding all descendants for a root:
(d/q '[:find ?node
:in $ ?e %
:where
(descendants-of? ?e ?node)]
(d/db (conn))
root-eid
descendants-of?)
Now, sometimes we have a candidate set of nodes, and of those candidates, we need to find the descendants, say like this:
(d/q '[:find ?node
:in $ ?name %
:where
;; assuming that `?name` only exists in one tree
[?e :node/name ?name]
(descendants-of? ?e ?node)]
(d/db (conn))
"name"
descendants-of?)
In the most pathological case, where all nodes are named ?name, we will be binding all nodes in a tree to ?e, and then find the descendants of those ?e s and bind those to ?node. The result will be the same as the query above: all nodes but the root.
However, this query appears to be much slower. I think that makes sense intuitively if we assume that descendants-of? is kinda expanded per ?e. For each member of ?e, we can potentially redo descendant seeking, if those descendants are also in ?e.
Is there a way to optimise here, if there are potential queries that bind descendants and ancestors to ?e ?#2020-08-0418:18favilaWhy not:
'[[(descendants-of? ?root ?node)
[?node :node/parent ?root]]
[(descendants-of? ?root ?node)
[?intermediate :node/parent ?root]
(descendants-of? ?intermediate ?node)]]
?#2020-08-0418:19favilaThe second implementation of descendants-of? necessarily scans all :node/parent if ?node is unbound#2020-08-0418:19favilain fact, datomic has a syntax for ensuring a rule input var is bound:#2020-08-0418:20favila'[[(descendants-of? [?root] ?node)
[?node :node/parent ?root]]
[(descendants-of? [?root] ?node)
[?intermediate :node/parent ?root]
(descendants-of? ?intermediate ?node)]]
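To connect favila's suggestion back to the original query: marking `[?root]` as a required binding means the rule can only run top-down from an already-bound root, so it never scans every `:node/parent` datom. A hedged invocation sketch (untested; `conn` and `root-eid` are assumed names carried over from the earlier snippets):

```clojure
;; Sketch: calling the required-binding version of the rule.
(def descendants-of?
  '[[(descendants-of? [?root] ?node)     ; ?root must be bound at call time
     [?node :node/parent ?root]]
    [(descendants-of? [?root] ?node)
     [?intermediate :node/parent ?root]  ; walk down from the bound root
     (descendants-of? ?intermediate ?node)]])

(d/q '[:find ?node
       :in $ ?root %
       :where (descendants-of? ?root ?node)]
     (d/db (conn))
     root-eid
     descendants-of?)
```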
#2020-08-0418:46Lennart BuitAh let me take a look, I may have simplified the example a bit too much when lifting it from our codebase#2020-08-0419:11Lennart BuitOh you are on to something! Thank you so much#2020-08-0418:03kennyUsing :as with :db/id does not appear to have any effect. Is this expected?
(d/q
'[:find (pull ?e [(:db/id :as "foo")])
:where
[?e :db/ident :db/cardinality]]
(d/db conn))
=> [[#:db{:id 41, :ident :db/cardinality}]]#2020-08-0419:05kennyOpened a support ticket for this https://support.cognitect.com/hc/en-us/requests/2797#2020-08-0418:04kennyOther db/* attrs work:
(d/pull (d/db conn)
'[(:db/id :as "foo")
(:db/ident :as "ident")]
:db/cardinality)
=> {:db/id 41, :db/ident :db/cardinality, "ident" :db/cardinality}#2020-08-0418:05kennyThis one is strange since :db/ident is included twice.#2020-08-0418:05kenny:db/doc is not included twice.
(d/pull (d/db conn)
        '[(:db/id :as "foo")
          (:db/ident :as "ident")
          (:db/doc :as "doc")]
        :db/cardinality)
=>
{:db/id 41,
 :db/ident :db/cardinality,
 "ident" :db/cardinality,
"doc" "Property of an attribute. Two possible values: :db.cardinality/one for single-valued attributes, and :db.cardinality/many for many-valued attributes. Defaults to :db.cardinality/one."}#2020-08-0418:14souenzzo@kenny :db/id isn't an attribute. where is no Datom[e :db/id v]. It's a "special thing" that d/pull assoc into it's response.#2020-08-0418:15kennySo? I don't think pull requires the selection to be attributes. From the doc "Pull is a declarative way to make hierarchical (and possibly nested) selections of information about entities."#2020-08-0418:17kennyEven if that was a requirement, I don't think it makes sense for it to behave differently than everything else.#2020-08-0418:17souenzzoI agree that it's a bug#2020-08-0508:42thumbnailHi, I'm trying to pull an entity and pull 1 attribute of it's recursive parents.
'[:node/name, :node/bunch-of-attrs, :node/other-attr, {:node/parent ...}] works, but pulls all attrs for every parent. I want to limit that to just name. Any way to achieve that?#2020-08-0512:05favilaDon't know for sure but try this [,,, {:node/parent [:node/name {:node/parent ...}]}]#2020-08-0512:14thumbnailThat worked! Thanks.#2020-08-0513:59wsbfgDoes anyone know of a good comparison between the various on-prem datomic stores? We're thinking about a new project based on datomic and I was wondering if there exists a discussion of the pros and cons? I think our options would be (1) Postgres (2) Cassandra (3) Some other SQL (4) datomic cloud with the caveat that we're running in google cloud so would require data to cross clouds which may be an issue for datomic (I can't find an opinion on this either).
Sounds like the choice of data store isn't critical then. That's good to know.#2020-08-0516:07ghadiI wouldn't do cross cloud AWS <> GCP without evaluating ingress/egress costs, or latency#2020-08-0610:54wsbfgYeah that's the big question with that approach. Although we could potentially host our read clients inside AWS meaning that only the writes would originate in GCP which seems likely to work for our usecase. Not ideal to have to run a service away from our others but we need to weigh that up against running a database. Usually running services is easier than databases!#2020-08-0514:07robert-stuttafordjust to forewarn that datomic cloud is a totally different system to on-prem, you'll architect differently and use completely different libraries#2020-08-0520:45arohnerHow do you architect differently?#2020-08-0515:26wsbfgThat's interesting - I'd not really appreciated that. I've found a good link on on-prem vs cloud. Thanks!#2020-08-0517:27Kaue SchltzHi there, does datomic cloud have any restrictions regarding AWS regions?#2020-08-0517:35marshallYes, Datomic Cloud is only available in certain regions
the current list is:#2020-08-0517:46Kaue SchltzThanks#2020-08-0617:31onetomthe hong kong region would be a welcome addition to that list.
i have the gut feeling that it supports all the required aws features already.#2020-08-0518:40marciolAnyone here is doing serious business with Datomic Cloud without the 24x7 support? The contracted support can be pretty expensive and I notice this is a matter of risk management. We run an application from 9 months and we had problems only when we needed a hands-on help to carry a major migration. It’d be nice to heard about other experiences.#2020-08-0518:58dregreHi folks —
Any tips on how to best approach writing tests for Datomic rules?
My app makes extensive use of rules and I would like to button up the testing.
Much obliged.#2020-08-0617:30onetomim also just about to explore this topic in our current project.
what's your current approach?#2020-08-0619:35dregreMy approach has been to use an in-memory database loaded with the right schema and mock data (fixtures) and then run queries against them — but I wonder if there’s a better approach.#2020-08-0619:37dregreI’m also interested in finding a query profiler or explainer, if anyone’s come across any.#2020-08-0617:39onetomi've just started to use the client api recently (after a few years of dealing only with the peer api).
when i tried to test something simple, i was getting this error:
Execution error (ExceptionInfo) at datomic.client.api.impl/incorrect (impl.clj:43).
Query args must include a database
on a closer look, i can provoke this error by trying to run some of the official examples from the cloud docs (https://docs.datomic.com/cloud/query/query-data-reference.html#calling-static-methods), against a com.datomic/client-pro 0.9.63 backed by a datomic-pro peer-server & transactor 1.0.6165 (and i also have a com.datomic/dev-local 0.9.183 loaded into the same process too):
(dc/q '[:find ?k ?v
        :where [(System/getProperties) [[?k ?v]]]])
or this even simpler query:
(dc/q '[:find [?b]
        :in [?tup]
        :where [(untuple ?tup) [?a ?b]]]
      [1 2])
is that expected or a bug?#2020-08-0617:41onetomfor the 1st case, where the :in clause is omitted, i might understand the error, but for the second case, i definitely don't expect it#2020-08-0617:41marshallclient query must take a db, can’t be run against a collection the way peer query can#2020-08-0617:42onetomso it's a "bug" in the documentation then, isn't it?#2020-08-0617:42marshalli can look at the docs, if they are client-specific docs then yes#2020-08-0617:43onetomthe cloud docs - i linked - is all client api specific, no?#2020-08-0617:43marshallyes, that example should take a db#2020-08-0617:44onetombut now that we are talking about it, of course it needs a "db", since that's how the query itself can reach the query engine over the network...#2020-08-0617:45marshallright 🙂#2020-08-0617:41marshall@onetom ^#2020-08-0617:48onetomi keep finding myself reaching for the peer api functions, like entid, ident etc.
the https://docs.datomic.com/on-prem/clients-and-peers.html#peer-only section of the docs makes it clear that we should use the pull api instead of these functions, but that's just much more verbose. same issue with the lack of the find-coll and find-scalar from find-spec.
is there any official or popular compatibility lib which fills this gap?#2020-08-0617:49onetomor are there any good examples of how to structure an app in a way that it's concise to test?#2020-08-0617:52onetomfor example, given this function:
(defn by-name
  [db merchant-name]
  (-> {:query '{:find [?merchant]
                :in [$ ?merchant-name]
                :where [(or [?merchant :merchant/name ?merchant-name]
                            [?merchant :merchant/name-en ?merchant-name])]}
       :args [db merchant-name]}
      (dc/q)
      (ffirst)))
my test would look like this:
(deftest by-name-test
  (testing "exact match"
    (let [db (db-of [{:db/ident :some-merchant
                      :merchant/name "<merchant name in any language>"}])]
      (is (match?
            (->> :some-merchant (dc/pull db [:db/id]) :db/id)
            (merchant/by-name db "<merchant name in any language>"))))))
where db-of is just a with-db with some schema, made from a dev-local test db.
that (dc/pull db [:db/id]) :db/id is the annoying part and it's even more annoying if im expecting multiple values.#2020-08-0617:54onetomthe benefit of operating with idents is that the test failure messages are symbolic and i don't have to muck around with destructuring string temp-ids, potentially across multiple transactions#2020-08-0618:02onetomi can understand that the client api doesn't want to provide find-scalar and find-coll and ident, entid, so the interface size is small, which helps providing alternative implementations, like the dev-local one, but these functions are just too useful for REPL work and automated tests.
i can also understand how they might seep into application code, promoting inefficient code, but that's not a strong reason for not providing them officially.#2020-08-0619:07onetomfor now, i made a custom matcher, which results in tests like this:
(is (match?
      (idents-in db
                 :matching-merchant-1
                 :matching-merchant-2)
      (merchant/named-like db "matching")))
where idents-in looks like this:
(ns ...
  (:require
    [matcher-combinators.core :refer [Matcher]] ...))

(defrecord MatchIdents [db expected-idents]
  Matcher
  (-matcher-for [this] this)
  (-matcher-for [this _] this)
  (-match [_this actual-entity-refs]
    (if-let [issue (#'matcher-combinators.core/validate-input
                     expected-idents
                     actual-entity-refs
                     sequential? 'in-any-order "sequential")]
      issue
      (#'matcher-combinators.core/match-any-order
        expected-idents
        (mapv (comp :db/ident (partial dc/pull db [:db/ident]))
              actual-entity-refs)
        false))))

(defn idents-in [db & entity-idents]
  (->MatchIdents db entity-idents))
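Relatedly, the scalar and collection find specs onetom misses from the peer API can be approximated with small post-processing helpers, since the client API always returns a collection of tuples. A minimal sketch (the helper names here are made up, not part of any Datomic API):

```clojure
;; Hypothetical helpers emulating the peer API's `:find ?x .` and
;; `:find [?x ...]` find specs over client-API results (colls of tuples).
(defn scalar-result
  "Single value of the first result tuple, like the peer's scalar find spec."
  [tuples]
  (ffirst tuples))

(defn coll-result
  "First element of every result tuple, like the peer's collection find spec."
  [tuples]
  (mapv first tuples))

(scalar-result [[41]])       ;; => 41
(coll-result [[1] [2] [3]])  ;; => [1 2 3]
```

Usage would be e.g. `(scalar-result (dc/q ...))`, keeping the query itself unchanged.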
#2020-08-0623:37Kaue SchltzI have an incident where datomic cloud started giving me busy indexing errors#2020-08-0623:38Kaue SchltzI supposed I exceeded my node limit, then it stopped transacting entirely#2020-08-0623:38Kaue Schltzbut after 40min it still gives me busy indexing errors#2020-08-0700:07marciol@U05120CBV can you help us with a tip?#2020-08-0700:09marciol@U0CJ19XAM or @U09R86PA4 do you have any tips about this kind of issue?#2020-08-0715:02favilaSorry, no, I don't have any production experience with cloud#2020-08-0719:29marciolYes, in the end we restarted all machines from the primary computational group.#2020-08-0719:58Kaue Schltzas it turns out we largely exceeded our nodes capacity believing it would scale automatically#2020-08-0717:50jarethttps://forum.datomic.com/t/datomic-cloud-704-8957/1571#2020-08-0717:50jarethttps://forum.datomic.com/t/ion-dev-0-9-276-and-ion-0-9-48/1572#2020-08-0717:50jarethttps://forum.datomic.com/t/datomic-1-0-6202-now-available/1570#2020-08-0720:58Jake ShelbyI tried upgrading my ion dep to the latest above, was having trouble with a datomic/java-io dep, anybody else having this issue, am I missing something? (simple example):
~/ion-test-0.9.48
▶ cat deps.edn
{:deps {com.datomic/ion {:mvn/version "0.9.43"}}
:mvn/repos {"datomic-cloud" {:url ""}}}
~/ion-test-0.9.48
▶ clojure -Srepro
Clojure 1.10.1
user=> ^C%
~/ion-test-0.9.48
▶ cat deps.edn
{:deps {com.datomic/ion {:mvn/version "0.9.48"}}
:mvn/repos {"datomic-cloud" {:url ""}}}
~/ion-test-0.9.48
▶ clojure -Srepro
Downloading: com/datomic/java-io/0.1.19/java-io-0.1.19.pom from datomic-cloud
Downloading: com/datomic/java-io/0.1.19/java-io-0.1.19.jar from datomic-cloud
Error building classpath. Could not find artifact com.datomic:java-io:jar:0.1.19 in central ()#2020-08-0804:58mafcocincoRandom newb question: If an entity is retracted, are all references to that entity automatically retracted as well or are those retractions explicitly required?#2020-08-0814:18potetm> It retracts all the attribute values where the given id is either the entity or value, effectively retracting the entity’s own data and any references to the entity as well.
https://docs.datomic.com/on-prem/transactions.html#dbfn-retractentity#2020-08-0814:19potetmIt’s just a shorthand for retracting all facts involving that entity.#2020-08-0920:18drewverleeWhen is it appropriate to namespace/qualify an datomic datom attribute instead of having it un namespaced/unqualified? E.g ( entity human/name drew) vs (entity race human)(entity name drew).
I would say it depends on if you ever need to query those attributes separately.
#2020-08-1002:55Saurabh SharanHas anyone tried to build a real-time clojurescript (fulcro) app w/ Datomic? All I could find is https://medium.com/adstage-engineering/realtime-apps-with-om-next-and-datomic-470be2c8204b, but it uses tx-report-queue which isn't supported in Datomic Cloud.#2020-08-1304:44Saurabh Sharan@U09R86PA4 Thanks for the explanation!#2020-08-1013:28Joe Lane@saurabh.sharan1 Can you be more specific about what you mean by "Real-Time"?#2020-08-1023:18hadilsHow do I list the dependencies used by Datomic Cloud? I want to fix my deps.edn...#2020-08-1023:29hadilsSpecifically I need to find the version of tools.deps.alpha that has the reader namespace.#2020-08-1023:34alexmillerThe reader namespace was recently removed - the changelog is at https://github.com/clojure/tools.deps.alpha/blob/master/CHANGELOG.md #2020-08-1023:34alexmillerThe reader ns was removed in 0.9.745#2020-08-1023:34alexmillerPrior was 0.8.709#2020-08-1023:42hadilsThanks @alexmiller!#2020-08-1113:05simongrayHow would you generally model something like inheritance of attributes from entities in an is_a relationship using datalog, e.g. modelling the class hierarchy found in OOP? The naïve solution would be to query for one entity's parent and then fetch the parent's attributes, repeating the process by looping through each entity's parent entity and collecting any attributes found along the way until there is no parent.
I was wondering if there is a datalog pattern to do this in a single query - or if I need to run many successive queries using Clojure instead?#2020-08-1113:25Joe Lane@simongray https://github.com/cognitect-labs/onto may be a good reference for you. #2020-08-1113:42arohnerI have a transaction that I want to commit iff an existing value hasn't changed. i.e. [:db.cas eid ::foo bar bar] Is db.cas the best way to do that, or is there a better way?#2020-08-1113:50dmarjenburgh@jake.shelby I have the same problem. It can't download com/datomic/java-io/0.1.19/java-io-0.1.19.pom from datomic-cloud. Problem occurs when trying to upgrade to com.datomic/ion {:mvn/version "0.9.48"}#2020-08-1114:29marshall@jake.shelby @dmarjenburgh - We've released that missing dep; sorry about that and thanks for catching it#2020-08-1116:46Jake ShelbyAwesome thanks, working for me now:
~/ion-test-0.9.48 ⍉
▶ clojure
Downloading: com/datomic/java-io/0.1.19/java-io-0.1.19.pom from datomic-cloud
Downloading: com/datomic/java-io/0.1.19/java-io-0.1.19.jar from datomic-cloud
Clojure 1.10.1
user=>
#2020-08-1116:46marshall:+1:#2020-08-1121:25JoshHey there, I was testing what error I would get when I hit the string size limit (4096 chars; https://docs.datomic.com/cloud/schema/schema-reference.html#:~:text=Strings%20are%20limited%20to%204096%20characters.) and was surprised to find that transacting strings larger than 4096 characters does not result in an error.
Is the Datomic cloud string size limit a soft limit? If so what other problems could I run into by storing strings larger than 4096 characters?
Here’s a sample of the code I’m running
(ns user
  (:require
    [datomic.client.api :as d]))

(def db-name "test")

(def get-client
  "Return a shared client. Set datomic/ion/starter/config.edn resource
  before calling this function."
  #(d/client {:server-type :ion
              :region "us-west-2"
              :system "<system>"
              :endpoint "<endpoint>"
              :proxy-port 8182}))

(defn get-conn
  "Get shared connection."
  []
  (d/connect (get-client) {:db-name db-name}))

(defn get-db
  "Returns current db value from shared connection."
  []
  (d/db (get-conn)))

(def schema
  [{:db/ident :string
    :db/valueType :db.type/string
    :db/cardinality :db.cardinality/one}])

(comment
  (d/create-database (get-client) {:db-name db-name})
  (d/transact (get-conn) {:tx-data schema})
  ;; test string limit
  (let [s (apply str (repeat 10000 "a"))
        tx-report (d/transact (get-conn)
                              {:tx-data [{:db/id "tempid"
                                          :string s}]})
        id (get-in tx-report [:tempids "tempid"])
        stored-s (:string (d/pull (get-db) '[*] id))]
    (println "s length in: " (count s))
    (println "s length out: " (count stored-s))
    (println "equal? " (= s stored-s)))
  ;; =>
  ;; s length in: 10000
  ;; s length out: 10000
;; equal? true#2020-08-1200:31Jake Shelbyinteresting … one thing I can think of that would still be limited is the index - are you still able to look up that entity using the value of that large string in a query?#2020-08-1201:17JoshIt seems so, this query returns the expected id
(d/q '[:find ?e
       :in $ ?s
       :where
       [?e :string ?s]]
     (get-db)
     s)#2020-08-1213:15souenzzoDatomic Team
Is there an issue/tracker for this issue?
(let [db-uri (doto (str "datomic:mem://" (d/squuid))
               d/create-database)
      conn (d/connect db-uri)
      db (d/db conn)]
  (first (d/index-pull db
                       {:index :avet
                        :start [:db/txInstant]
                        :reverse true
                        :selector '[(:db/id :as :not-id :xform clojure.edn/read-string)]})))
The selector's :as does not work on :db/id in any context (pull, query-pull, index-pull)...
PS: :db/id does not respect ANY param. :xform also does not work with it.#2020-08-1300:30kennyI opened a support ticket for this as well. I suggest doing the same so they know there's interest in getting this fixed. #2020-08-1220:36Sam DeSotaI'm trying to transact a :db/fn via a custom client that works from javascript, but I'm struggling to get the schema right. I keep getting an error:
Value {:lang :clojure, :params [db eids pull-expr], :code "(map eids (fn [eid] (datomic.api/pull db pattern eid)))", :imports [], :requires []} is not a valid :datomic/fn for attribute :db/fn#2020-08-1220:41favilaYou should show your code, but it sounds like you are transacting a map instead of a d/function object#2020-08-1220:42favilaYou can make one either with d/function or the #db/fn reader literal#2020-08-1220:44Sam DeSotaYes, I'm transacting a map, I assumed that's how transit serialized the d/function object. The issue is that I'm doing this from a javascript library that doesn't have access to d/function, on the transit level how is this object serialized? (using this from javascript is obviously non-standard, but necessary for my org)#2020-08-1220:50favilaI missed that you were using a custom client#2020-08-1220:50favilaYou control all of this, so whatever you are doing is what you are doing :)#2020-08-1220:50favilaBut when you call d/transact, you need a function object#2020-08-1220:51favilaIt's unclear to me how you made a d/function object from JavaScript?#2020-08-1223:55Sam DeSotaI ended up just shifting to putting the txfns in the datomic install classpath#2020-08-1220:37Sam DeSotaIt's not clear what's missing to correctly transact this function, using the on-prem obviously. Any ideas?#2020-08-1408:56ziltiHas someone here used Datomic with Algolia before, or with something similar? If so, are there any gotchas or something I should be aware of?#2020-08-1409:47categoryRe:
https://docs.datomic.com/cloud/getting-started/configure-access.html#authorize-user
https://docs.datomic.com/cloud/ions/ions-tutorial.html#org6699cd4
Please can you confirm whether or not administrator permissions are required for all applications to connect to datomic cloud?#2020-08-1416:48kennyWhen initially transacting your schema on a fresh database and using tuple attributes, do folks typically do 2 transactions -- one for the schema without tupleAttrs and one with the tupleAttrs?#2020-08-1416:51kennyWait, order in the transaction appears to matter! This transaction fails
(d/transact conn {:tx-data [#:db{:ident ::a+b,
                                 :valueType :db.type/tuple,
                                 :tupleAttrs [::a ::b],
                                 :cardinality :db.cardinality/one,
                                 :unique :db.unique/identity}
                            #:db{:ident ::a,
                                 :valueType :db.type/string,
                                 :cardinality :db.cardinality/one}
                            #:db{:ident ::b,
                                 :valueType :db.type/string,
                                 :cardinality :db.cardinality/one}]})
Execution error (ExceptionInfo) at datomic.core.error/raise (error.clj:55).
:db.error/invalid-install-attribute First error: :db.error/invalid-tuple-attrs
And this one succeeds.
(d/transact conn {:tx-data [#:db{:ident ::a,
                                 :valueType :db.type/string,
                                 :cardinality :db.cardinality/one}
                            #:db{:ident ::b,
                                 :valueType :db.type/string,
                                 :cardinality :db.cardinality/one}
                            #:db{:ident ::a+b,
                                 :valueType :db.type/tuple,
                                 :tupleAttrs [::a ::b],
                                 :cardinality :db.cardinality/one,
                                 :unique :db.unique/identity}]})
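For reference, the two-transaction approach kenny first asked about also works; a sketch reusing the same ::a/::b attributes from the messages above, installing the scalar attributes before the composite tuple attribute that refers to them:

```clojure
;; Sketch: the first transaction installs the underlying attributes...
(d/transact conn {:tx-data [#:db{:ident ::a
                                 :valueType :db.type/string
                                 :cardinality :db.cardinality/one}
                            #:db{:ident ::b
                                 :valueType :db.type/string
                                 :cardinality :db.cardinality/one}]})

;; ...then a second transaction installs the composite tuple over them.
(d/transact conn {:tx-data [#:db{:ident ::a+b
                                 :valueType :db.type/tuple
                                 :tupleAttrs [::a ::b]
                                 :cardinality :db.cardinality/one
                                 :unique :db.unique/identity}]})
```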
#2020-08-1721:48Jake Shelby[datomic cloud] Trying to update my development solo stack to the latest version. Following the instructions from the documentation (as I understand them), I click “update” on the nested CF stack for “compute” … however, I am presented with a warning:
> It is recommended to update through the root stack
> Updating a nested stack may result in an unstable state where the nested stack is out-of-sync with its root stack.
Is this something I should worry about? It's not mentioned in the documentation (am I updating the wrong stack?)#2020-08-1722:01kennyDon't know the answer to your specific question but in the future I recommend always deploying storage and compute as separate stacks.#2020-08-1723:25marshall@U018P5YRB8U you should split your stack before you upgrade#2020-08-1723:25marshallyou should definitely NOT upgrade a nested stack#2020-08-1723:25marshallsee https://docs.datomic.com/cloud/operation/split-stacks.html#2020-08-1723:40Jake Shelbyokay thanks, it wasn't clear to me if a solo stack needed to be split at all (because these docs seemed to be specifically for production deployments)#2020-08-1723:41marshallah, yes it should be#2020-08-1723:41marshalland we'll look at improving those docs#2020-08-1723:41Jake Shelbygreat, thanks for the responsiveness!#2020-08-1813:04mafcocincoWhat is an idiomatic way to express uniqueness within a set of attribute values in Datomic? That is, if I have an attribute that is of type db.type/ref and it is of db.cardinality/many, how do I enforce a uniqueness constraint on the set of values that is being referred to, in the context of the containing value?
#2020-08-1813:09favilaAre you saying that the uniqueness constraint is expressed among the entities referenced?#2020-08-1813:10favilaso normal ref uniqueness is not enough, and the referenced entities themselves don't have inherent uniqueness#2020-08-1813:10favilawould an example be: A person may have many addresses, but only one may be a primary address#2020-08-1813:11favilaor: A person may hold many cards, but they must all have the same color or different numbers#2020-08-1813:13favilaif there's a fixed number of different states like in the address example, you can break up the single card-many attribute into more attributes with more specific meanings. e.g. instead of :person/addresses, you have :person/address-primary, :person/address-secondaries#2020-08-1813:15favilaif the set is more open (e.g. color or number constraints) but the entities are semantically "components" of the primary entity, you can reverse the direction of reference so the component points to the primary entity, then make a unique composite tuple that includes the component and the other attributes that mark uniqueness#2020-08-1813:16favilaif the referenced entities are not components (i.e. 
are shared among other entities, so you can’t add an ownership attribute to it), you have to do this through an intermediate entity#2020-08-1813:17favilafinally, you can just use a :db/ensure function that enforces your constraint#2020-08-1814:23mafcocincoIt is more the open set concept, so reversing the relationship might be the way to go, though I’m going to look at :db/ensure first.#2020-08-1814:23mafcocincoThank you for your help.#2020-08-1819:12joshkhis there a path to audit access to a datomic cloud system, perhaps using cloudtrail?#2020-08-1908:05cmdrdatsI have a schema with two attributes - :ent/uuid :db.unique/identity and :ent/context :db.type/ref - is there a way I can make this automatically create the referenced entity?
(d/transact datomic
  [{:ent/context [:ent/uuid (d/squuid)]}])
(obviously this is a contrived example, the (d/squuid) would almost always be a uuid from somewhere else)#2020-08-1908:06cmdrdats(ideally, there's a schema attribute that can set this as a property of :ent/uuid - but I suspect I'll have to actually go lookup and create the entity if it doesn't exist?)#2020-08-1911:24cmdrdatshmm
(d/transact datomic
[{:ent/uuid #uuid "093eaee6-ebc7-45f2-9a3f-b923f4864e5c"}
{:ent/user [:ent/uuid #uuid "093eaee6-ebc7-45f2-9a3f-b923f4864e5c"]}])
doesn't work? that's a shame - am I looking at this wrong, or do I pretty much have to resort to lookup+tempid's?#2020-08-1912:16joshkhif i'm reading this correctly, you want to transact an entity while also transacting a reference to that entity in the same transaction, right?#2020-08-1912:18joshkhif so, then you can use temporary ids within the same transaction
(d/transact datomic
[{:ent/uuid #uuid "093eaee6-ebc7-45f2-9a3f-b923f4864e5c"
:db/id "newuser"}
{:ent/user "newuser"}])#2020-08-1912:19cmdrdatsye, that's basically what I'm doing as a workaround (using d/tempid ) - trouble is that you have to actually have to think about it, at a higher level#2020-08-1912:21cmdrdatswhere I would have preferred to just (effectively):
(->>
  [(when user-not-found {:ent/uuid #uuid "093eaee6-ebc7-45f2-9a3f-b923f4864e5c"})
   {:ent/user [:ent/uuid #uuid "093eaee6-ebc7-45f2-9a3f-b923f4864e5c"]}]
  (remove nil?))
instead, I need to coordinate the two pieces of code to pass the temporary id around#2020-08-1912:21cmdrdatsnot a major problem, but seems a bit odd, given how lovely lookup refs are#2020-08-1912:22joshkh> I'm wanting to write my code for adaptive gradual backfill of existing entities with uuid's from other database
i do something similar from time to time, and what i do is first transact the entities in the DB, and then in a second transaction i establish their references. you might be able to do something clever, like walk your input data and create a tempid out of the uuid (i never had to go that far)#2020-08-1912:22joshkhit's a pain though 😉#2020-08-1912:25cmdrdatsye, I'm trying to whittle down a simple paradigm for consistently interacting with datomic that I can then teach to the team.. save then load would work, but a bit too much of fickle#2020-08-1912:25cmdrdatsbut thanks 🙂#2020-08-1912:28joshkhwell if you ever write a generic walker/id replacer for the input code let us know! in my case i migrate slices of the datomic graph from one database to another, and over time i have just added steps for each "kind" of entity. another useful tool would be to replace UUIDs with new ones.. in other words clone parts of the graph and their relationships (within the same db).#2020-08-1912:32joshkhjust another thought, and i don't know if this applies to your scenario, but you can transact new entities as reference values if the reference is a component#2020-08-1912:34joshkh{:tx-data [{:user/id #uuid "093eaee6-ebc7-45f2-9a3f-b923f4864e5c"
:user/address {:address/id #uuid"f0ef5f94-8125-4aa1-a906-524ec0274941"
:address/street "Maple Street"
:address/country-code "US"}}]}#2020-08-1912:48cmdrdatsthat is good to know, thanks! but no, this is for the generic case, not really components#2020-08-1912:50cmdrdatsI guess I could handle a generic walker, but that would add a bunch of overhead for 99% of the code that doesn't need it + be an extra bit of complexity to keep having to think about#2020-08-1912:51cmdrdatsso prefer to just shift paradigm a little xD#2020-08-1913:49marshallYou can transact the whole thing as a nested map:
[{:ent/user {:ent/uid #uuid "MY_UUID_HERE"}}]#2020-08-1913:55cmdrdatsooh - even without being a component?#2020-08-1913:55cmdrdatsand if you do it twice, it just works?#2020-08-1913:56cmdrdats[{:ent/user {:ent/uid #uuid "MY_UUID_HERE"} :ent/ident "1"}
{:ent/user {:ent/uid #uuid "MY_UUID_HERE"} :ent/ident "2"}]
for instance#2020-08-1913:59cmdrdatsalso - is that infinitely nestable?#2020-08-1913:59marshallsure#2020-08-1913:59cmdrdatsvery cool - I'll be abusing that, thanks!#2020-08-1914:00marshallhttps://docs.datomic.com/on-prem/transactions.html#nested-maps-in-transactions#2020-08-1914:00marshallthere are a couple of caveats ^#2020-08-1914:01cmdrdatshehe, thanks - I'll have a read 🙂#2020-08-1914:01marshallin particular you need a unique attribute in the inside entity#2020-08-1914:01marshallor it has to be component#2020-08-1914:03cmdrdatscool - ye, I'll be enforcing :ent/uuid - so that it can always gradually converge to the same entity.. I do have a few more identity attributes, but there's a possibility of creating two rows for the same entity, then not being able to reconcile later.#2020-08-1914:21marshallyep, you'd need to make sure that was handled when you build the nested map#2020-08-1914:23cmdrdats👍 thank you!#2020-08-1911:25cmdrdats(for context, I'm wanting to write my code for adaptive gradual backfill of existing entities with uuid's from other databases - as opposed to a dedicated ETL process)#2020-08-1915:23franquitoDoes anyone actually uses the schema type :db.type/uri ? There's some real benefit apart from semantics? Using :db.type/string works just fine and I don't need any code to serialize/deserialize the data.#2020-08-1915:28favilaI have used it. I like that it rejects invalid uris. I also see more types as a bonus.#2020-08-1915:29favilaI don’t think there’s anything wrong with using strings instead#2020-08-1917:20franquitoOh, I didn't have that in mind. Thanks! I think I'll use :db.type/uri because of the free type checking.#2020-08-1917:12twashingI took this :db.type/tuple schema example from Datomic’s On-Prem documentation. https://docs.datomic.com/on-prem/schema.html#heterogeneous-tuples
{:db/ident :player/location
:db/valueType :db.type/tuple
:db/tupleTypes [:db.type/long :db.type/long]
:db/cardinality :db.cardinality/one}#2020-08-1917:13twashingBut I get this error when running a transact (with com.datomic/client-pro “0.9.57” ( also tried “0.9.63" )).
Is this just a bug? Or is there something else to making tuple types work?
Caused by: datomic.impl.Exceptions$IllegalArgumentExceptionInfo: :db.error/not-an-entity Unable to resolve entity: :db/tupleTypes
{:entity :db/tupleTypes, :db/error :db.error/not-an-entity}#2020-08-1917:20favilaDid you upgrade your base schema? https://docs.datomic.com/cloud/operation/howto.html#upgrade-base-schema#2020-08-1917:23twashingI haven’t deployed yet. This is all in development, with this version.
com.datomic/client-pro "0.9.57"
#2020-08-1917:33favilaThis is all tied to the storage not the client#2020-08-1917:33favilaThe db you are connecting to: was it created with a version of datomic earlier than the one that started supporting tuples?#2020-08-1917:33favilaif so, then you need to follow the instructions to upgrade the base schema#2020-08-1917:34favilaupgrading the base schema will transact entities like :db/tupleType#2020-08-1918:10twashingHmm, right. Probably something with the datomic engine.
Ok, cheers.#2020-08-1920:54Jon WalchHas anyone gotten dev-local working on windows 10? I'm having a hard time with specifying {:storage-dir "C:\\Users\user\foo"}
Execution error at datomic.dev-local.impl/user-config (impl.clj:324).
Unsupported escape character: \U
#2020-08-1921:00alexmilleryou probably need "C:\\\\Users\\user\\foo" to escape the \'s ?#2020-08-1921:01Jon Walch@alexmiller Different error message, but still doesn't work#2020-08-1921:01alexmillerwhat's the error?#2020-08-1921:02Jon WalchExecution error (ExceptionInfo) at datomic.core.anomalies/throw-if-anom (anomalies.clj:94).
You must specify an absolute path under :storage-dir in a map in
your ~/.datomic/dev-local.edn file, or in your call to client.
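One untested workaround (an assumption, not something confirmed in the thread): the JVM's file APIs accept forward slashes on Windows, so a config like the following sidesteps edn backslash escaping entirely, assuming dev-local hands the string straight to the JVM's path APIs:

```clojure
;; hypothetical ~/.datomic/dev-local.edn — forward slashes avoid \-escape issues
{:storage-dir "C:/Users/user/foo"}
```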
#2020-08-1921:02alexmillerinteresting, it's checking for absolute path - maybe "\\C:\\\\Users\\user\\foo" ?#2020-08-1921:03Jon WalchSame error with that unfortunately#2020-08-1921:07alexmillersorry, that extra pair of \\ isn't needed and paths on windows starting with the drive should be considered absolute. "C:\\Users\\user\\foo" I would expect to work, but maybe I'm missing something#2020-08-1921:08Jon WalchSadly, same error again. Thanks for your help alex!#2020-08-1921:08alexmillerwell, I'll invoke @jaret and @marshall to check with the Datomic team then :)#2020-08-1921:30jaretHi @jonwalch could you try:
{:storage-dir "C:\\Users\\user\\foo"}#2020-08-1921:32Jon Walch@jaret yes, that's the last one I tried#2020-08-1921:38jaretFiring up a windows machine to test. Will update here or on the ticket 🙂#2020-08-1921:39Jon WalchCool, thanks @jaret! this isn't super urgent for me. my main box is debian. my clojure code talks to a game that I'm running on my Windows machine. Going to go forward with ngrok in the mean time.#2020-08-2012:54LukasHello 👋 is there a way to get the entity ID of a entity that was created using the clojure d/transact function? The function returns the tempids, but I need the actual entity IDs as I use these as unique identifiers and need a reference to a user I just created.#2020-08-2012:55marshall@lukasmoench1113 the return from transact includes a :tempids map that provides the mapping from the tempids to the final entity IDs#2020-08-2012:55marshallsee https://docs.datomic.com/cloud/transactions/transaction-processing.html#results#2020-08-2012:56Lukasoh the return is actually the mapping 😅 thanks!#2020-08-2012:56marshallthe :tempids key - there is an example of doing that just below the table#2020-08-2015:25maxtI get the following error while trying to upgrade datomic cloud storage. Is it known?
The runtime parameter of nodejs8.10 is no longer supported for creating or updating AWS Lambda functions. We recommend you use the new runtime (nodejs12.x) while creating or updating functions.
#2020-08-2016:20marshall@maxt Datomic cloud 569-8835 moved to using nodejs 10.x#2020-08-2016:20marshallyou need to update to at least that version#2020-08-2016:21marshallbut you should go to latest#2020-08-2017:57donyormHey I'm trying to get a codebuild instance to be able to push up ions with datomic cloud. What permissions would the codebuild role need in order to run {:op 'push'} ?#2020-08-2018:55maxt@marshall I was able to upgrade my (all 44) lambdas using the aws cli tools, and then I could update my other compute node. But the one I started with, got stuck in UPDATE_ROLLBACK_FAILED , and then I can't rerun the upgrade. The rollback also fails, because the old version is ofcourse also using old nodejs, and is not allowed to proceed anymore.#2020-08-2018:59marshall@maxt can you please file a support ticket (<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>)
Manually altering components of the system is not recommended and we may need to help you work through how to resolve that#2020-08-2018:59marshallupgrading the system overall (storage stack and compute stack) should have done it automatically for you#2020-08-2019:00maxtok, I thought that part maybe wasn't automatic. Sure, I'll file a ticket, thanks.#2020-08-2019:58tvaughanI'm running across a strange problem. I have this in my schema (abbreviated):
{:db/ident :length/mm
:db/valueType :db.type/double
:db/cardinality :db.cardinality/one}
{:db/ident :dash-space/mm
:db/valueType :db.type/tuple
:db/tupleAttrs [:length/mm :length/mm]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/value}
I never transact :dash-space/mm, but I do transact other things like:
{:radius/key unique-string-1 :length/mm 254.0}
{:length/key unique-string-2 :length/mm 254.0}
At some point (still more digging required) I get a unique constraint error on :dash-space/mm for the values of [254.0 254.0] . I can't say I really understand this problem. Getting rid of :dash-space/mm in the schema does eliminate it, but I assume there's something incorrect about the use of :db/tupleAttrs above, right?#2020-08-2019:59marshallyou've created a composite tuple#2020-08-2019:59marshallhttps://docs.datomic.com/cloud/schema/schema-reference.html#composite-tuples#2020-08-2020:00marshallyour :`dash-space/mm` attribute is automatically created on any entity with either of the attrs in the :db/tupleAttrs vector#2020-08-2020:01marshallbecause you have it set to :db.unique/value if you ever try to transact data that would create a composite with the same value (254 254 in this case), you get the unique value error#2020-08-2020:01marshallhttps://docs.datomic.com/cloud/schema/schema-reference.html#db-unique#2020-08-2020:01marshallif it were set to :db.unique/identity it would "upsert" instead#2020-08-2020:02marshalland if it were not set to any uniqueness, it would create a new entity#2020-08-2020:02marshallall that said, i'm not sure the purpose of making a composite tuple with a repeated attr#2020-08-2020:03marshall@tvaughan ^#2020-08-2020:03marshallif you just wanted a tuple that would hold two arbitrary longs, unrelated to other attributes on that entity, you'd need to use :db/tupleTypes or :db/tupleType#2020-08-2020:04marshallinstead of :db/tupleAttrs#2020-08-2020:04marshallhttps://docs.datomic.com/cloud/schema/schema-reference.html#tuples#2020-08-2020:06tvaughanAwesome @marshall! 
Thanks for the quick and clear response!#2020-08-2020:07tvaughanFYI, I didn't create this part of the schema so I can't say why it was done this way#2020-08-2020:07marshalldo you know what the intended use/purpose of that attribute is?#2020-08-2020:10tvaughanI suspect it's trying to capture the length of the dash and space, and place a unique constraint on the pair of values (which are really references)#2020-08-2020:21marshallincidentally, using doubles as identity is pretty iffy; given the semantics of doubles / precision / comparison /etc, i'd definitely be wary of using them as any kind of identifier
Thanks to @alexmiller for mentioning it#2020-08-2020:36tvaughanOh, I'm aware, thanks. The values shouldn't have a unique constraint. This is definitely an error on our part#2020-08-2020:45marshall👍#2020-08-2021:49stuarthalloway#rebl and Datomic dev-local are now free as part of Cognitect dev-tools https://cognitect.com/dev-tools/#2020-08-2121:01Lennart BuitHey, I’m looking for some understanding, what makes this a much faster query:
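To illustrate marshall's distinction above: a composite tuple (:db/tupleAttrs) is assembled automatically from other attributes on the same entity, while :db/tupleTypes holds values you assert yourself. A sketch of the latter for the dash/space case — the schema and the identifying attribute below are illustrative, not from the thread:

```clojure
;; Illustrative: a pair of doubles asserted directly, not derived from
;; other attributes on the entity — so no composite is auto-created and
;; no surprise uniqueness conflicts arise from unrelated :length/mm datoms.
[{:db/ident       :dash-space/mm
  :db/valueType   :db.type/tuple
  :db/tupleTypes  [:db.type/double :db.type/double]
  :db/cardinality :db.cardinality/one}]

;; later, in a separate transaction (:pattern/name is hypothetical):
[{:pattern/name  "dashed-thin"
  :dash-space/mm [254.0 127.0]}]
```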
(time (d/q '{:find [(count ?e)]
:where [[?e :person/last-name ?last-name]
[(>= ?last-name "A")]
[(< ?last-name "B")]]}
(d/db conn)))
Than this:
(time (d/q '{:find [(count ?e)]
:where [(or-join [?e]
(and [?e :person/last-name ?last-name]
[(>= ?last-name "A")]
[(< ?last-name "B")]))]}
(d/db conn)))
I’d say, these are the same queries right? Interestingly, before I added an index for :person/last-name they were about as fast... as in, same speed as the or-join version now is#2020-08-2121:01Lennart BuitHey, I’m looking for some understanding, what makes this a much faster query:
(time (d/q '{:find [(count ?e)]
:where [[?e :person/last-name ?last-name]
[(>= ?last-name "A")]
[(< ?last-name "B")]]}
(d/db conn)))
Than this:
(time (d/q '{:find [(count ?e)]
:where [(or-join [?e]
(and [?e :person/last-name ?last-name]
[(>= ?last-name "A")]
[(< ?last-name "B")]))]}
(d/db conn)))
I’d say, these are the same queries right? Interestingly, before I added an index for :person/last-name they were about as fast... as in, same speed as the or-join version now is#2020-08-2121:02Lennart BuitI understand the or-join is superfluous in this small example, I’m just wondering where this performance difference comes from#2020-08-2210:44Kevin MungaiHi, I would like to use Datomic Cloud but I am having a hard time trying to figure out or (work out) how to create a web application that has routing. All the web applications I have created with Clojure just run after calling main and then accept http requests. These kind of web applications have no invokable function unlike what is suggested with Datomic Ions. How do I write a web application using routing libraries like bidi while exposing an invokable function via :http-direct ? Thanks in advance for your answers.#2020-08-2302:43rapskalianThe ion wraps (or adapts) the handler function. So for example, if you’re using bidi, then the handler that you get when you call bidi.ring/make-handler can be configured as an invokable function. This is thanks to http ions being “ring compatible” as per the docs.
In the main function that you’re used to, you probably have been starting an http server (eg jetty) by passing it your handler function. With ions, you don’t run the http server yourself. This is abstracted away by API Gateway. All you have to do is define your handler function and configure it properly as an http direct endpoint. #2020-08-2302:48rapskalianThe gist is that, from this example in bidi’s readme, app is just a function (http request -> response). That is your invokable function.
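A minimal sketch of what rapskalian describes — the bidi handler itself is the ion; the namespace and route table here are hypothetical:

```clojure
;; Hypothetical namespace: a Ring-compatible handler built with bidi,
;; suitable for wiring up as an :http-direct ion.
(ns my.app.ion
  (:require [bidi.ring :as bidi-ring]))

(defn index-handler [_request]
  {:status  200
   :headers {"Content-Type" "text/plain"}
   :body    "hello"})

(def routes ["/" {"index" index-handler}])

;; request map in, response map out — no Jetty needed
(def handler (bidi-ring/make-handler routes))
```

ion-config.edn then points :http-direct at my.app.ion/handler (see the ion docs for the exact shape).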
https://github.com/juxt/bidi#wrapping-as-a-ring-handler#2020-08-2314:17Kevin MungaiI didn't know that you can just pass the handler as the invokable function to :http-direct. Thanks :+1:.#2020-08-2218:20jdhollisYou can put the router in the ion that's being invoked via :http-direct.#2020-08-2218:22jdhollisTreat the ion like you would main.#2020-08-2314:18Kevin MungaiThanks :+1:. Datomic Cloud keeps looking better and better.#2020-08-2421:40JAtkinsAnyone had issues with cloud where endpoints got requests missing "content-type" and where "content-length" was 0? This has just started happening to my prod server.#2020-08-2614:21joshkhcan i sync a metaschema per database in Datomic Analytics? (edit) in other words, have two different metaschemas in parallel - one for a dev db, and another for a prod db#2020-08-2709:04joshkhhttps://docs.datomic.com/cloud/analytics/analytics-concepts.html#one-metaschema#2020-08-2623:09kennyDoes dev-local import-cloud allow the :dest to be a different cloud system?#2020-08-2711:37marshallNo, import cloud is only for dev local#2020-08-2716:11marshallYou could write a program to read the log and transact the data into another system#2020-08-2716:11marshallthere are some community blogs/etc about that approach#2020-08-2716:12marshallit requires some work because you have to maintain a mapping of entity IDs, as there will be new EIDs in the new database#2020-08-2716:14kennyRight. Curious, is there a technical reason import-cloud couldn't connect to a separate Datomic system?#2020-08-2716:19marshallI don't know 🙂#2020-08-2702:42adamtaitI have a query on a Datomic Ion that consistently returns partially different results when run remotely vs on the Ion. The query returns 9 results (both remotely & on-ion) but remotely returns correct entity-ids and on-ion returns 3 correct entity-ids and 6 empty/new entity-ids. I think this will be hard to reproduce b/c I can't even reproduce it on a 2nd Ion with the same schema (it works correctly there).
Here’s the query in case anyone can see something obviously wrong:
(d/q
'[:find ?mp
:in $ ?low ?high
:where
[?mp :membership-period/end ?end]
[(< ?low ?end)]
[(< ?end ?high)]]
db inst-low inst-high)
Has anyone experienced this strange inconsistent behavior before? I’d love to hear from you.#2020-08-2708:01joshkhi've seen inconsistencies when using > < operators on dates. try using [(.after ?date1 ?date2)]#2020-08-2708:04joshkhhttps://datomic.narkive.com/wxKhH0Qz/how-to-work-with-dates-in-datomic#2020-08-2713:05Joe LaneAre you using (Date.) in the ion?
I would confirm time zones aren’t messing this up. #2020-08-2722:36adamtaitThanks both of you!
I’m using the standard Java (Date.) objects in the comparison. I also thought it might be a timezone issue, so I ran the same query with wider date ranges and got similarly strange results.
I was able to get the query working consistently using [(.before ?low ^java.util.Date ?end)] .
I’m still not sure why Datomic was returning :db/ids for non-referenced entities in the results but at least I can move on…
Thanks again!#2020-08-2707:58cmdrdatsGood morning! I remember seeing/hearing somewhere that :db/index is now superfluous, or, at least, we should actually always make it :db/index true , since it's basically doing that under the hood in Datomic now anyhow... I can't find the source now, but the docs still state :db/index defaults to false - so is this something we still need to consider for our schema?#2020-08-2710:30favilaYou are probably thinking of datomic cloud: it makes a value index for everything so it doesn't have a :db/index attribute at all#2020-08-2710:30favilaFor on prem nothing has changed, and you should still consider whether to index or not#2020-09-0113:49cmdrdatsah, cool, thanks!#2020-08-2707:59cmdrdats(I'm using on-prem datomic)#2020-08-2709:38Ben SlessHi all, working through the https://docs.datomic.com/on-prem/tutorial.html I find myself missing a slightly more complex use case. For an example of order items, what if I wanted to edit an order and retract a specific item or update an item's amount? I find it a bit complicated when dealing with components#2020-08-2712:16vWhat's so complicated about that ? #2020-08-2712:17vDo you have a code that you would like to share? #2020-08-2715:05Ben Sless@UGMEQUCTV sure
(def schema
[{:db/ident :product/name
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity
:db/valueType :db.type/string}
{:db/ident :order/items
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many
:db/isComponent true}
{:db/ident :order/id
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
{:db/ident :item/product
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :item/amount
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}])
#2020-08-2715:05Ben Sless@(d/transact conn schema)
@(d/transact conn [{:product/name "coke"} {:product/name "pizza"}])#2020-08-2715:06Ben SlessThen a simple order
@(d/transact
conn
[{:order/id 1
:order/items
[{:item/product [:product/name "pizza"] :item/amount 3}
{:item/product [:product/name "coke"] :item/amount 1}]}])#2020-08-2715:07Ben SlessNow assuming there was a mistake and the client ordered just one pizza, can I write it with reference only to the order/id? Because if I do the following it just adds another item
@(d/transact
conn
[{:order/id 1
:order/items [{:item/product [:product/name "pizza"] :item/amount 1}]}])#2020-08-2715:07Ben SlessOr do I have to refer to the item's eid?#2020-08-2717:26v@UK0810AQ2 you will need the item id to update the amount. So here is the code that does that
(defn update-product [product-name amount]
(let [item-id (d/q '[:find ?eid .
:in $ ?product
:where
[?eid :item/product ?product]]
(d/db conn) [:product/name product-name])]
(d/transact conn [{:db/id item-id
:item/amount amount}])))
(update-product "pizza" 1)
(d/q '[:find ?count .
;; lookup refs can't appear in a query body, so bind via :in
:in $ ?product
:where
[?id :item/product ?product]
[?id :item/amount ?count]]
(d/db conn) [:product/name "pizza"])#2020-08-2717:43Ben SlessI see. No way around it, then. Thanks#2020-08-2722:11Jake Shelbyjust to push on this use case some more (because I've had similar wishes to @UK0810AQ2 's) ... @UGMEQUCTV since you are getting the item id first, and then transacting it, it's not concurrent safe, especially since Ben is talking about possible retracting the sub-entities (like replacing) ... I wonder if there is a way to do this in one transaction, maybe using a TX fn??#2020-08-2722:12Jake Shelby(another similar use case to this, is a many cardinality attr, and you want to replace the current set of values with new values, but it's important to do it in one transaction for concurrency concerns)#2020-08-2808:33Ben Slessyeah, it looks like to be safe and atomic you must use a tx fn#2020-08-2813:31rapskalianSomewhat out of date now, but Valentin's Datofu library had some helpers for resetting to-many relations as a tx fn. It might be worth studying the code for ideas: https://github.com/vvvvalvalval/datofu#resetting-to-many-relationships#2020-08-2712:18Kevin MungaiHi, while choosing an AWS region for Datomic Cloud I found out that https://aws.amazon.com/local/africa/ is https://aws.amazon.com/marketplace/pp/prodview-otb76awcrb7aa?qid=1598530512130&sr=0-1&ref_=srh_res_product_title. How do I go about choosing the Africa Region (Capetown)? Also, is it possible to migrate from one AWS region using Datomic Cloud to another AWS Region? e.g. AWS EU (London) to AWS Africa Region (Capetown). Thanks in advance for your answers.#2020-08-2713:20souenzzoAbout move across instances,
Once you do a datomic backup, you can restore it in any other instance, including a local instance on your machine.#2020-08-2713:34marshall@U2J4FRT2T That is true for Datomic On-Prem, not for Datomic Cloud#2020-08-2713:37marshall@U015L4S1C72 Datomic Cloud is not currently available in the South Africa region. I can investigate whether we could add support for that region in a future release.
We don't currently have a built-in mechanism for transferring a Datomic Cloud database from one AWS region to another. However, because Datomic keeps the history of your entire database (the log) you could write a tool that reads the log of an existing database and transacts the same data into a new database in your new region.#2020-08-2713:57Kevin MungaiLooking forward to it. I am from Nairobi, Kenya and the nearest AWS Region is South Africa, followed by UAE, India and then the EU. The EU regions are not bad because African fibre optic cables connect to the EU countries and many African companies use cloud providers that are thousands of miles away from their customers. Adding more regions would certainly help with latency. Thanks.#2020-08-2714:15marshallWe do have some India regions available for now
I'll investigate the Cape Town region#2020-08-2714:28Kevin MungaiSorry I didn't find the India Region. Maybe I am restricted from choosing the India Region due to my location.#2020-08-2714:53marshallMy apologies - I believe we're working on the Mumbai region support now#2020-08-2714:53marshallit should be available in a future release#2020-08-2714:56Kevin MungaiThanks. Looking forward to it.#2020-08-2715:08kardanI guess we're all interested in different regions, I've been looking out for Stockholm 🙂#2020-08-2716:10marshall@U051F5T93 we attempted to launch in Stockholm region earlier this year, but AWS Marketplace didn't have the correct support there yet#2020-08-2718:21kardanThat became an odd emoji.. 🙂#2020-08-2800:49kennyHow do folks handle deploying the same ion config to N different Datomic Cloud systems? Datomic only allows the ion config to be specified as a static edn file on the classpath. Right now we have to dynamically generate that file for each datomic system at deploy time. It feels quite hacky though. I'd really like a single "master" config where I can customize the :app-name at deployment time.
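The deploy-time generation kenny mentions can be as small as a script that renders ion-config.edn from a shared template before push; the file paths and the :app-name key usage below are hypothetical, not an official mechanism:

```clojure
;; Sketch: render a per-system ion-config.edn from a shared template.
;; Template path and output path are illustrative.
(require '[clojure.edn :as edn])

(defn write-ion-config!
  "Reads the shared template, swaps in the target system's app name,
  and writes the edn file that ion push picks up from the classpath."
  [app-name]
  (let [template (edn/read-string (slurp "resources/ion-config.template.edn"))]
    (spit "resources/datomic/ion-config.edn"
          (pr-str (assoc template :app-name app-name)))))

;; called from the deploy script, before clojure -A:ion-dev '{:op :push}'
(write-ion-config! "my-app-staging")
```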
Pinging you all since you seemed interested in this. I created a post on the Datomic forum so this doesn't get lost in Slack. Post is here: https://forum.datomic.com/t/dynamic-ion-config-edn-app-name/1607#2020-08-2901:47p14nMay be misunderstanding the use case, but all our environments have the same app name, only the aws profile (in gitlab deployment params) determines where it's being deployed to.#2020-08-2902:04kennyI think you can customize profile when calling the deploy functions, right?#2020-08-2907:41p14nYes#2020-08-2913:57rapskalianMy use case is for dev/staging/prod deploys of the app. By "AWS profile", I'm assuming that means you have an AWS account per deploy? That seems overkill for my simple use case, but perhaps there are benefits to keeping envs isolated in that way. #2020-08-3006:14p14nYes, separate accounts. It probably is overkill for a simple case. I use terraform for the AWS stuff so creating another account/deployment is relatively low effort#2020-08-2801:33Kaue SchltzI was wondering, what's the procedure to delete data from datomic cloud, due to data regulations?#2020-08-2817:40jdhollisMy understanding is that excision is not available in Datomic Cloud. So if you're worried about GDPR or similar, you're better off storing any identifying data elsewhere.#2020-08-2819:39Kaue Schltz@U7JK67T3N This would be our primary path#2020-08-2822:48Jon WalchIs it possible to downgrade a production topology to a solo topology?#2020-08-2918:08jdhollisI haven't tried it, but I suspect it should be possible. I believe both topologies share the same storage layer.#2020-08-2901:54p14nDatomic seems the ideal system to reference entities at a particular point in time (i.e. the order was sent to the customer address as it was on Saturday 29th). I haven't seen a way of doing this in a way that feels natural - I could use the order time to get a database as of that point in time and read the address of course.
Anyone discovered a better way?#2020-08-2902:28seancorfield@p14n I thought Datomic had a specific API for viewing the database as of a given timestamp?#2020-08-2907:40p14nIt does, but not a way I know of to store that fact 'this reference is of time T'#2020-08-2910:37pithyless@p14n I think what you're asking for is what bitemporal databases (e.g. https://github.com/juxt/crux) are designed to model explicitly: differentiating between when information is recorded vs when that information is considered valid. The Datomic API only really concerns itself with the former (when information is actually recorded). If you want the latter, you can either model it using the same API (but then, e.g. you can't go back and easily change when something should be considered valid, since you can't change the time of recording); or you model it as additional valid-time attributes and make sure all your time sensitive queries take them into account. Valentin had a good blog post about this: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2020-08-3023:57drewverleewhat are we defining as valid? when i see some information is a straight forward idea, i don't have a strong understanding of what valid means in this context?#2020-08-3101:00drewverleeSo as-of doesn't give all attached facts past the given time? It gives just what was true as of that time.#2020-08-2910:59p14nAh, yes, I think you're right#2020-08-2910:59p14nThanks#2020-08-2920:15Joe Lane@p14n Isn't this just an attribute on an entity?#2020-08-3006:12p14nYes, it could be, but then you need to pull the entity, and use the attribute to get a new dB, and pull the second entity from that#2020-08-3011:10Joe LaneI don't understand the problem you're trying to solve here but am interested in hearing more.#2020-08-3004:51dmarjenburghDoes every transaction in Datomic Cloud go via DynamoDB? 
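For reference, p14n's two-step read above — store a point-in-time on the order, then resolve the address against d/as-of — might look like this (the attribute names are made up for illustration):

```clojure
;; Sketch: resolve an order's customer address as it was when the order
;; was sent. :order/sent-basis-t, :order/customer, :customer/address and
;; order-eid are hypothetical names, not from the thread.
(require '[datomic.client.api :as d])

(let [db      (d/db conn)
      order   (d/pull db
                      '[:order/sent-basis-t {:order/customer [:db/id]}]
                      order-eid)
      ;; rewind the db to the basis-t recorded on the order itself
      past-db (d/as-of db (:order/sent-basis-t order))]
  (d/pull past-db '[:customer/address] (get-in order [:order/customer :db/id])))
```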
And if so, can DynamoDB Streams be used to listen/subscribe for transactions?#2020-09-0113:51jaretYes, every transaction is written to durable storage. However, Cloud's use of DDB is opaque unless accessed from Datomic. So, I am not sure what insight you would get by reviewing the stream of writes other than "writes are occurring," which you could also see by monitoring with metrics in CloudWatch (txdatoms, txbatchbytes) or by reviewing the Cloud Dashboard (txes,Txbytes DDB usage, write count). I am interested in hearing more about what you're envisioning with DBstreams, maybe there is something here we should look at as a feature for Datomic. 🙂#2020-09-0119:53dmarjenburghI was thinking of the tx-report-queue functionality in the peer library and how that might be possible in Cloud.#2020-08-3013:18kennytiltonSo just getting serious about Datomic I decided to list all the datoms in a newly created DB:
:db.error/insufficient-binding Insufficient binding of db clause: [?eid ?a ?v] would cause full scan
Reminds me of Mommy telling me I am going to put my eye out if I keep doing sth (rather too hopefully, I might add). I just want to see the initial set of predicates. I already listed all the :db/idents, that was fun. Hmm, maybe a lower level API call hitting indexes? I'll see what I can see.#2020-08-3013:19kennytiltonSorry, query was:
(d/q '[:find ?eid ?a ?v
:in $
:where [?eid ?a ?v]]
(d/db cw))#2020-08-3013:25rapskalian@U0PUGPSFR you might try using the datoms api to do that: https://docs.datomic.com/cloud/query/raw-index-access.html#2020-08-3013:28rapskalian(datoms db {:index :eavt}) would give you an Iterable of all datoms in the db (on mobile, untested)
https://docs.datomic.com/client-api/datomic.client.api.html#var-datoms#2020-08-3013:31kennytiltonDoh!
:db.error/insufficient-binding Insufficient binding of db clause: [?eid ?a ?v] would cause full scan
That was
(->> (d/datoms (d/db cw)
{:index :avet})
(take 3)
(map :a))
They saw us coming. 🙂#2020-08-3013:33kennytiltonOh, hang on.....#2020-08-3013:48kennytiltonOK, that works, my REPL was a mess. Thx!#2020-08-3013:58rapskalianCool!#2020-08-3014:26kennytiltonIn the beginning....
([[:fressian/tag]]
[[:db/txInstant]]
[[:db/valueType]]
[[:db.install/attribute]]
[[:db/cardinality]]
[[:db/fulltext]]
[[:db.install/valueType]]
[[:db/tupleType]]
[[:db.install/partition]]
[[:db/ident]]
[[:db/unique]]
[[:db/doc]])
Now I have to google "fressian". 🙂#2020-08-3019:09kennytiltonBingo: https://github.com/Datomic/fressian#2020-08-3020:32kennytiltonOK, who picked the word "ident" for a "name"? :db/doc "Attribute used to uniquely name an entity." 🙂#2020-08-3022:04favilaIdent as in "identifier", stronger than a name#2020-08-3023:05drewverleeIt works better than "name" because it would be hard to search for it. As in, name is too widely used to refer to something this specific.#2020-08-3023:07drewverleeWhich is exactly what favila is saying upon reflection.#2020-08-3023:21kennytiltonPuh-leaze! 🙂 We are now corrupting our coding to optimize SEO?! Man, is this full circle or what? Btw, had they used the full expansion "identifier" I would not have my shorts in such a bunch. ps. Yes, it is pretty funny when we have to google something that is such an ordinary word. Funny as in hopeless!#2020-08-3023:49drewverlee@U0PUGPSFR
I meant it would be harder to communicate to other developers. In either case the name "name" or "ident" will need further clarification. But i agree "name" also works.#2020-08-3100:00kennytiltonAs a noob, ident had my head spinning because everywhere I looked in Datomic I saw IDs. And dto is similar to allegrograph which I grok OK so it is not even alien technology. I guess idents are closest to enum in programmerspeak; they let us give a number a mnemonic (another term I would consider, along with alias). But wait, we are in the Land of Hickey, what does the dictionary say?
"name noun 1. a word by which a thing is known, addressed, or referred to."
Lovely. Btw, I do not like identifier because the entity-id is an identifier, and the real one needing no translation.
Fun stuff. 🙂#2020-08-3100:18drewverleeYep. it takes time to learn the language 🙂.#2020-08-3101:23favilaIdents have enough unique properties viz entity ids that it’s worth giving them another name imo. Entity id values are not user-controllable, are not a public contract, and should not be stored durably for long periods outside the system. Idents are all of these, and are also reassignable and guaranteed unique. You can maybe think of them as a special case of lookup ref where the :db/ident attr is implied, although historically lookup refs came later#2020-08-3101:28favilaThis May be helpful as a primer on the kinds of “identifying” datomic has: https://docs.datomic.com/on-prem/identity.html#2020-08-3101:32favilaAn impl note: at least on on-prem there is a full in-memory map of idents to eids and vice versa, so they are faster forms of reference than normal unique attributes and that is also why you shouldn’t have very large numbers of them#2020-08-3101:35favilaIdents are also valid even after retracted. This is so you can rename an attribute without breaking code—the old name will still work#2020-08-3102:55kennytiltonThx! I had indeed seen all that. I had not picked up, tho, that idents would work if stored outside the system even when entity-ids had changed. Interesting. But in that same section we see the sentence that actually, I recall, made me stop reading and look for a different tutorial:
When an entity has an ident, you can use that ident in place of the numeric identifier, e.g.
Thy syllables "id" and "ent" are doing the flight of the bumblebees in there. 🐝
The sentence before it was better. A little. The first half was fine. then it got weird.
Idents associate a programmatic name (a keyword) with an entity id, by setting a value for the :db/ident attribute
That's great like NYC street signs are great if we already know where we are going.
But yeah, "name" by itself would have its own challenges. I will close by noting that, if a db/ident is a name for an entity, then per Gertrude Stein:
The :db/ident of :db/ident is :db/ident.
I'd give that a 10.#2020-08-3109:54ashnurfwiw, ident is also in the dictionary, it just means identification, certainly I see this choice a much better name than name which is such an overloaded name for names that trying to google anything with it would be extremely frustrating.
Definition of identification
a: an act of identifying : the state of being identified
b: evidence of identity#2020-08-3112:11kennytiltonAh, but the identity of :db/ident is 10. :db/ident is just an enum, an alias, an aka.
A good counterargument here is @U09R86PA4's point that idents survive as external references where the numeric ID does not, but that is just an example of the power of idents as implemented by Datomic and given some operation on a database. (What would alter entity-ids of the "same" entity?)
If one looks inside Datomic, one would see that the true identity of :db/ident is 10. 10 gets linked to :db/ident by that :db/ident being the value where the attribute is :db/ident and --wait for it -- the entity-id is 10. :db/ident, after all, is just a namespaced keyword. This brings us to that other quagmire, tempid. With tempid we see we can have the numeric entity-id absent a :db/ident. And not the other way around.
Do we sense the walls closing in on :db/ident? :)#2020-08-3112:50favilaI think this is confusing two different concerns. an entity-id's only purpose is to join facts asserted about the same "thing". In that sense it is an "identity", but a very weak one that isn't aware of the meaning of the data. It's also weak because it's kind of an implementation detail of datomic: the only guarantee is that they will be referentially consistent, not that their values will be stable. In the google dictionary, this is the second meaning of the word "identity"#2020-08-3112:51favilain the data modeling domain, "identity" is the assertion that makes an entity "be" that thing. so :db/ident's identity is not 10, 10's identity is :db/ident#2020-08-3112:51favilanote also the attribute schema for these : :db/unique :db.unique/identity, i.e., this is an attribute that, when asserted, gives an entity an identity#2020-08-3112:55favila> What would alter entity-ids of the "same" entity?
These are rare or hypothetical in practice, but: 1. cognitect has said in the past that it couldn’t rule out entity renumbering in future versions of datomic. This is actually useful for performance because you can rearrange commonly-accessed values to be together in the datom sort order. (You can do this manually with on-prem using partitions, which are the top 20ish bits of an entity id. cloud stopped exposing partition control.)#2020-08-3112:56favila2. “decanting”, which is essentially a “git rebase”-like operation. You run through the transaction log of a db, and reapply the transactions with transformations to a second db. At Clubhouse we did this in order to renumber entity ids with partitions for performance. A key property of this operation is that entity ids are not guaranteed the same between the two dbs.#2020-08-3112:58favila3. base schema version changes. Datomic at some point introduced new base schema attributes (the version that introduced tuples.) To do this you need to install new attributes with the d/administer-system function. The entity id of these new attributes depends on what transaction in your particular db performed the upgrade--they are not the same on all dbs like the older attributes are.#2020-08-3112:59kennytiltonGood points all. I am going to save all these for my Datomic tutorial before they scroll off the Clojurian history.#2020-08-3113:21kennytiltonOne perhaps acceptable result I am seeing, where tempids are involved, is that datomic identities can be identity-less. I just transacted this twice: `
[:db/add "Hi, Mom!" :movie/title "Harvey"]
:movie/title is not unique, in the tutorial schema or life, so good. But then the two entity IDs assigned are meaningless, if we agree that nothing that cannot survive the above DB transformations can be considered meaningful. E.g., after, say, a decanting, we can still retrieve those two entities along with their arbitrary eids, but we cannot pair them off before/after.
I guess that is OK. The physical DB has identity, but the abstract application DB identity relies on the developer managing identity capably. How'm I doin?#2020-08-3113:31favilasure, but I still think using the word “identity” to think about entity or temp ids is going to be confusing in the long run. Identity is from/for humans, ids are for machines#2020-08-3113:32favilaa tempid is a degenerate case of an entity id: it’s an entity id that is referentially consistent, but scoped to a particular transaction instead of a particular db, so it’s even more short-lived#2020-08-3113:33favilaif necessary to preserve references in the db, the tempid will be “upgraded” to a newly minted entity id that has a longer lifetime; or if it is involved in an assertion about an identity attribute, datomic will ensure that all other facts in the transaction about that tempid will be asserted on the same entity that has the same identity#2020-08-3113:34favilain an attribute/assertion-centric data model, entities are very intangible#2020-08-3113:35favilaan entity id is even less than an autoincrement column in a relational db, which is already not very much#2020-08-3113:38favilamaybe another angle on this: an entity is a map-projection of all the facts asserted about one thing. that thing may not have an identifier--the only thing that identifies it is the collection of assertions about it#2020-08-3113:38favila(or the entity id--if you peek under the covers)#2020-08-3113:39favilathis is another way of saying “entity ids are only for joining facts” and don’t really grant an entity identity#2020-08-3113:39favilaIf this were pure mathematics I’m sure they’d find a way to do this without an entity id#2020-08-3114:21kennytilton"that thing may not have an identifier--the only thing that identifies it is the collection of assertions about it" and "this is another way of saying “entity ids are only for joining facts” and don’t really grant an entity identity". 
But how do you have the "it" in "collection of assertions about it" without something designating object identity? Put another way, the "only" in "only for joining facts" seems unjust. 🙂 Joining the facts is where object identity begins, it seems.
So I agree that ghosts and the edge cases of re-partitioning and decanting mean entity IDs are not part of object identity: that must be arranged by the user, via db/ident, a great, well, name for it!#2020-08-3120:29favilaI think the question is a trick question. Entities (and entity ids) don’t meaningfully exist or not-exist the same way a row in a sql table exists. Only assertions about entities (i.e. datoms) exist. if nothing is said about an entity, you can still pull that entity id and see nothing. I think you can even do this if the entity id has never been used before. You can also use an entity id only as the object (ref value) of a datom--so now you have something which is the object of assertions but never the subject of any assertions. when you write an application, “does this record exist” isn’t usefully answered by looking at entity ids but by looking for a datom with a unique-value or unique-identity attribute and matching value.#2020-08-3120:34ashnur'by definition'? where is the definition stored then? 🙂 how do you access it?#2020-08-3122:35kennytilton"You can also use an entity id only as the object (ref value) of a datom--so now you have something which is the object of assertions but never the subject of any assertions." I think I tried using 42 as an entity-id (not a ref) and got yelled at. I should think I would at least have had to create a :db/entity with a nice name and used the resulting eid. In which case there would be one fact: the :db/ident value, yes?#2020-09-0105:16ashnurI don't think you should care so much about entity ids. And no, I don't think an entity can exist just with one datom, but I could be wrong, haven't checked it.#2020-09-0105:47seancorfield@U0PUGPSFR :
> Good points all. I am going to save all these for my Datomic tutorial before they scroll off the Clojurian history.
This channel is archived to Zulip as https://clojurians.zulipchat.com/#narrow/stream/180378-slack-archive/topic/datomic which is a full, searchable history back to whenever the @UFNE73UF4 was added to this channel.#2020-09-0105:48seancorfield(I figured this was a good opportunity to remind folks of the free, searchable archive of many channels here!)#2020-08-3020:43ashnurwhy#2020-08-3023:00drewverleeDoes datomic keep a log of all the queries that were made?#2020-08-3100:29val_waeselynckI would be very surprised if it did 🙂 that said, it probably keeps a cache of compiled Datalog queries somewhere in the JVM.#2020-08-3100:31val_waeselynckDatomic has a philosophy of "I'm not slowing down because you're watching me", it seems to me that logging every query would go against that philosophy, especially given the aim that most queries be fast walks through in-memory data structures.#2020-08-3100:37drewverleeseems reasonable. Having that log isn't a goal of mine; if it existed it might be useful though.
I'm considering how to extend the datalog language so it could be used across multiple databases. I have no real plans on doing this, just a thought experiment.#2020-08-3101:10favilaIt can already do this?#2020-08-3101:14favilaNot sure if you are thinking exactly of this, but you can already supply multiple data sources, :in $ds1 $ds2 then reference in pattern clauses by [$ds1 e-match a-match ...] or in rules by ($ds1 rulename ...)#2020-08-3101:15favilaNot having to mention the default $ name most of the time is syntax sugar#2020-08-3103:12vThere is a really good video on this by folks at nubank. I believe it's called 4 super powers of Datomic, where they talk about querying across multiple databases. Highly recommended #2020-08-3116:32Robert RandolphHere is that video: https://www.youtube.com/watch?v=7lm3K8zVOdY#2020-08-3117:03drewverleeThanks, I'll give it a watch.#2020-09-0110:51thumbnailFor development we have a custom implementation of d/q-protocol which sends queries to tap>. It enables us to see what the performance of specific queries is. We just wrap the existing connection in our system-map.#2020-08-3114:19souenzzoWhere do I get an older version of datomic dev-tools???
My project uses com.datomic/dev-local {:mvn/version "0.9.184"} and I can't find how to download it.#2020-08-3119:16stuarthallowayI will look into that -- in the meantime, can you just update the dep to latest. They are all compatible.#2020-09-0114:27souenzzo@U072WS7PE a bit off-topic, but which maven repository do you use/recommend for small corps/personal experimentation?#2020-09-0119:47stuarthallowayI tend to use an S3 bucket -- a maven repo is just a convention about files.#2020-09-0113:50cmdrdatsis it possible to lock the usage of d/delete-database (on-prem)? it feels a little too easy to kill the entire db?#2020-09-0115:55favila(intern 'datomic.api 'delete-database (fn [_] (println "Nope!")))#2020-09-0116:16jaretHaha! Yeah, outside of Francis's approach, we do not have a built in method for limiting api access. I have logged a feature request in this space and we will review options in this area.#2020-09-0117:27Robert RandolphHello there! We, at Cognitect, are collecting feedback regarding the https://docs.datomic.com/cloud/dev-local.html getting started experience and entry points to dev-local.
We'd love it if you could share your feedback with us on the forum: https://forum.datomic.com/t/requesting-feedback-on-dev-local-getting-started/1608/2#2020-09-0117:45seancorfield@UEFE6JWG4 I'm curious how quickly new accounts are approved on the forum (since I just signed up so I can read/reply to this).#2020-09-0117:46Robert Randolph@U04V70XH6 I just approved yours. We're working to improve the forum experience in this regard.#2020-09-0117:46seancorfieldThanks -- I suspect you'll get quite a few new Datomic devs now dev-local is available 🙂#2020-09-0122:48kennytiltondev-local utterly rocks the casbah.#2020-09-0123:10seancorfieldI'm watching the feedback on the forum and the main theme so far (and I've seen it mentioned on Twitter etc too) is the process for getting dev-local means it's hard to use for CI and for multi-dev teams since you can't just depend on a version on Maven/Clojars. The fill-out-form-and-download-via-email-link is fine for me for experimenting but I wouldn't like it much if I was trying to set up repeatable builds across a team or in a CI system. Could Cognitect perhaps clarify who the target audience really is for dev-local @UEFE6JWG4?#2020-09-0123:27kennyIt would be very surprising, imo, if one of the objectives with dev-local was not to address the difficulty of testing against Datomic Cloud during CI. Previously you'd need to figure out a complex docker setup with the Datomic socks proxy on CI and build your own db name prefix system. All of that is removed with dev-local.#2020-09-0214:15stuarthalloway@U04V70XH6 @U083D6HK9 we certainly intend for people to use dev-local in CI (and do so ourselves.)#2020-09-0214:16stuarthallowayWe have always maintained a private maven repo for our CI system, and in that world it doesn't matter where a dep comes from.
(The time and effort is in reviewing/approving a lib, not in copying it into S3.)#2020-09-0214:17stuarthallowayThat said, we want to meet our users where they are, not where we are, so we are considering ways to make this better.#2020-09-0216:29seancorfieldWe used to run an instance of Apache Archiva for CI but it was a pain because it would randomly lock up/crash, and we only did it because Clojars wasn't always reliable. Since Clojars got a CDN, we pulled the plug on Archiva, and for the only custom dependency we have left, we use a :local/root dependency to the JAR (and we keep versions of the JAR in a separate third-party repo under git because it's a lot easier than needing to worry about some external repo and making sure it's always available). Having to maintain a separate Maven-style repo just for a couple of JARs or deal with custom upload code and S3 is an overhead a lot of people don't want. Like I said on the forum, the current process works for me and could work for us, because of how we have things set up, but I also have sympathy with other folks who feel this doesn't scale to larger teams or larger CI pipelines, in its current form.#2020-09-0119:54dregreHi folks,
Does anyone know if there is a way to specify a unique identity that encompasses multiple attributes?
For example, say I have entities of the following shape:
{:foo/a ...
:foo/b ...
:foo/c ...}
I'd like such entities to have a unique identity specified by the combination of the attributes :foo/a and :foo/b and their values (but not :foo/c).
(Roughly in SQL terms, I'm looking for a composite primary key.)#2020-09-0120:05timcreasySounds like you are looking for these: https://docs.datomic.com/cloud/schema/schema-reference.html#composite-tuples#2020-09-0120:21dregreThat seems like it! Did I understand this correctly: underlying the tuple are multiple datoms? IOW a tuple is not a single datom, whose value is a tuple, but rather multiple datoms, joined logically into a tuple?#2020-09-0120:29favilaa composite-tuple (there are two other kinds of tuples--this is only composite tuples) is a denormalization that datomic keeps up to date for you.#2020-09-0120:29favilait’s not magic, if that’s what you are thinking. In the index will be a datom corresponding to that assertion#2020-09-0120:31favilaif you define a composite :foo/a+b composed of :foo/a and :foo/b and you assert or retract :foo/a or :foo/b on an entity, an additional assertion will be added`[entity1 :foo/a+b [value-of-a value-of-b]]`#2020-09-0120:31favilait’s fully materialized#2020-09-0120:53dregreInteresting#2020-09-0120:53dregreAnd can one of the elements of the tuple be an inverse relation?#2020-09-0120:57dregreFor example, given the following entities:
{:bar/foos [{:foo/a ...
:foo/b ...
:foo/c ...}
{:foo/a ...
:foo/b ...
:foo/c ...}]}
I'd like the tuple to be asserted on my foo entities to encompass :foo/a and :bar/_foos#2020-09-0120:58dregreMy goal is to make it such that the children of bar are all unique with respect to :foo/a#2020-09-0121:05favila> And can one of the elements of the tuple be an inverse relation?
no#2020-09-0121:06favilayou can either reverse the relation, or consider using db/ensure or a transaction function to enforce your invariant#2020-09-0121:07dregreThanks -- I'll explore the other routes#2020-09-0121:07favilanote that an absent value for a component will write a nil into the composite value, which can be a problem if you’re using this to enforce uniqueness with rel types#2020-09-0121:08favilabecause of retractEntity’s behavior#2020-09-0121:08favila(you don’t have to use it, but it’s common to)#2020-09-0121:09dregreah yes#2020-09-0121:09dregrenoted, many thanks#2020-09-0122:16kennyIf I have a list of tx-ids where I'll need to call d/as-of on each, are there any performance trade offs to consider? (e.g., should I sort the list desc/asc by tx before calling as-of)#2020-09-0214:19stuarthallowayIt depends on what you are trying to do. Getting multiple asOf points against a particular entity may lose against walking the entire history. See also https://docs.datomic.com/cloud/time/filters.html#filter-or-log.#2020-09-0213:16souenzzohttps://docs.datomic.com/on-prem/clojure/index.html#datomic.api/transact
This doc string needs an update
it says "If the transaction times out, the call to transact itself will throw a RuntimeException"
I got "clojure.lang.ExceptionInfo: :db.error/transaction-timeout Transaction timed out."#2020-09-0213:20alexmillerExceptionInfo is a RuntimeException#2020-09-0214:42joshkhi remember once having to run d/administer-system as part of a Datomic Cloud upgrade, but i can't find any historical mention of it in the release notes. is that something we should be doing regularly when upgrading?#2020-09-0216:00favilaIts only purpose right now is to upgrade schema on a db created with a version that predates the various features that introduced new schema. so you only run it once#2020-09-0216:00favilahttps://docs.datomic.com/on-prem/deployment.html#upgrading-schema#2020-09-0216:00favilathese are on-prem docs, but the same applies#2020-09-0216:00favilaI can’t find equivalent cloud docs#2020-09-0216:01favilaso you should definitely not be running it regularly#2020-09-0220:14nandoBeginner here. To "update" an entity, must the entity have a :db.unique/identity attribute? Or is there a way to rely on the datomic assigned entity id for this?
https://docs.datomic.com/cloud/transactions/transaction-processing.html#unique-identities#2020-09-0220:23favilaIt may help to think backwards. All transactions must eventually fully expand to a set of [:db/add e a v] or [:db/retract e a v] operations. Maps are a syntax sugar for :db/add s#2020-09-0220:26favilathe sugar is either 1) Your map has an explicit :db/id with an entity identifier (entity id, lookup-ref, or ident) 2) your map has a tempid, or 3) your map has no :db/id at all, so it gets a tempid automatically#2020-09-0220:27favilalater, after full expansion, some “e”s will be entity ids and some will still be tempids#2020-09-0220:28favilaif an e is a tempid and there’s an assertion that mentions a unique-identity attribute that already exists, the tempid can be substituted for the already-existing id#2020-09-0220:28favilaotherwise the tempid will be replaced with a newly-minted id#2020-09-0220:30nandoSo I can use the entity id of an entity in :db/id to update that entity?#2020-09-0220:30favilaso, stepping back, you can do this {:db/id 1234 :foo "bar"} or {:db/id [:unique-attr "value"] :foo "bar"} or {:db/id :ident-value :foo "bar"} if you want to be explicit about what entity you want to assert on#2020-09-0220:31favilathis “upsertion” case should really be the less common case, and only matters if you want create-or-update behavior.#2020-09-0220:32favilathat said, you should still use a unique attribute of some kind (not necessarily unique-identity--unique-value is ok) to identify your entities rather than raw entity ids#2020-09-0220:34nandoOk, so generally speaking, I should add UUID attributes to all entities in my schema, unless some other attribute will be unique. Correct?#2020-09-0220:35favilaall entities that need to be identifiable from outside datomic should have a unique attribute#2020-09-0220:35favilasometimes you don’t need this. e.g. 
many isComponent entities have no meaning aside from their “parent” entity, and it’s often ok to not give these unique ids#2020-09-0220:37favilathe functions that construct txs that manipulate them will have a proper identifier for the parent and will find them that way#2020-09-0220:38nandoOk, thanks very much for your help.#2020-09-0220:40favilaglad to help, sorry if that was long-winded#2020-09-0220:28Yuriy ZaytsevHi there. I have a question about analytics. I want to have separate metaschemas for development and for production. What is the best way to do it? If I have, for example, 2 metaschemas, in which order will they be applied?#2020-09-0309:14henrikWhy does,
(d/q {:query {:find [?d]
:where [[_ :some.test/thing ?d]]}
:args [db]})
Given me a `java.lang.RuntimeException: "Unable to resolve symbol: ?d in this context"`?#2020-09-0309:16cmdrdatsit needs to be quoted so that clojure doesn't try resolve the symbol - '{:find ...}#2020-09-0309:17henrikDoh, thanks#2020-09-0309:17cmdrdats👍#2020-09-0309:18henrikDocs don’t specify quoting as necessary: https://docs.datomic.com/cloud/query/query-executing.html#querying-a-database#2020-09-0309:18cmdrdatscurious - that must be a typo#2020-09-0309:19cmdrdatsunless d/q is a macro in the cloud api#2020-09-0309:19henrikMaybe, or I’m missing some subtlety. @U064X3EF3?#2020-09-0309:19cmdrdatsI'm used to the on-prem datomic#2020-09-0309:19henrikI’ve only used Cloud, but it was a while back#2020-09-0309:20cmdrdatsusing on-prem now?#2020-09-0309:23henrikNo, I’ve been doing some other stuff for a while (UX, PLing), but I can’t let dev-local just sit there without trying it out. 🙂#2020-09-0310:07cmdrdatsxD#2020-09-0313:31favilaI think the necessity of quoting is considered “obvious”--a query is a data structure and the symbols in it are not supposed to be resolved eagerly in clojure but used inside the query engine. This can’t be solved with a macro because queries don’t need to be literals#2020-09-0313:35favilathat example in the doc you linked to is just a flat-out bug 🙂#2020-09-0314:00cmdrdats@U09R86PA4 surely a macro could technically solve this, since it would be able to walk the inputs and quote the symbols.. it would be ugly, so I way prefer the quoting, but doable?#2020-09-0314:01favilaHow would a macro look at a query provided as an argument by reference?#2020-09-0314:01favilavs a literal#2020-09-0314:02favila(let [q my-query] (d/q q db))?#2020-09-0314:15cmdrdatsI guess it could rewrite the symbols to function calls that would try resolve in local environment or resolve in actual symbols instead.. it would be such a terrible hack#2020-09-0314:24henrikIt’s not that far-fetched, if you consider the find clause a kind of declaration.
It’s semantically not much weirder than naming parameters in a defn , even though they may not be declared anywhere beforehand.#2020-09-0314:25cmdrdatsit would be fickle as anything though xD#2020-09-0314:26cmdrdatsquoted is so much simpler#2020-09-0314:27henrikAnd yet we don’t have to (defn hello '[person-name] …). But hey, it doesn’t matter much. 🙂#2020-09-0314:39favilathe difference is that the slot in defn must be a literal#2020-09-0314:40favilayou can’t (let [my-arg-vector '[person-name]] (defn hello my-arg-vector,,,))#2020-09-0314:40favilarequiring a query literal would be a poor limitation to impose on d/q#2020-09-0314:40favila(IMO)#2020-09-0314:49cmdrdats(defmacro q [body]
(println &env))
#2020-09-0314:49cmdrdats(let [qr [:find]] (q qr))
{qr #object[clojure.lang.Compiler$LocalBinding 0x694600ab #2020-09-0314:49cmdrdatsbut yes - there would be all sorts of caveats#2020-09-0314:49cmdrdatsit would be terrible xD#2020-09-0314:56henrik(defmacro d-q [q-map]
`(d/q ~(assoc q-map :query `(quote ~(:query q-map)))))
😅#2020-09-0309:15henrik(Trying out dev-local)#2020-09-0314:47StefanHi! In my new job we’re using Datomic Pro on-prem. We’re now trying to setup continuous integration (CircleCI), but lein deps raises an error that it cannot get datomic from http://my.datomic.com (“not authorized”). What is the “idiomatic” way to use Datomic in continuous integration? Both for testing and for generating release builds? Thanks!#2020-09-0314:48chrisblomdo you have a private maven repo that the CI can access? If so you could mirror the dependency there#2020-09-0314:49StefanNo I’m afraid not.#2020-09-0314:50alexmilleryou do have a private maven repo with your on-prem license#2020-09-0314:51alexmillerI'm not sure what's supported in circleci wrt setting up access to it#2020-09-0314:51marshallif you login to your http://my.datomic.com dashboard there are instructions for configuring access to the private maven repo#2020-09-0314:51marshallyou’ll need to add user/pw to your leiningen config. older versions required gpg-encrypted key files#2020-09-0314:51marshalli dont know if that’s still true#2020-09-0314:52StefanAh that’s good to know, I had no idea. I will have to check with the person who set it up originally then. I think given your info we should be able to get it going, thanks!!#2020-09-0314:52chrisblomCircleCi has a feature to pass secrets as env. vars., you could use that to pass the pw to the build#2020-09-0314:56chrisblomif you use leiningen you can call clojure code with unquote, to inject the env var:
:repositories {"my repo"
{:url ""
:name "repo name"
:username "username"
:password ~(System/getenv "DATOMIC_PASSWORD")}}
#2020-09-0315:42JasperGreat, thanks a lot, we got it working like that#2020-09-0316:39Björn EbbinghausIs there a way to pull an entity by a composite tuple made out of refs?#2020-09-0316:52favilaYes, but you need to use raw entity ids#2020-09-0316:52favila[:aref+bref [123 456]]#2020-09-0316:53favila(for e.g., assuming that attr is :db/unique and that a and b are both refs)#2020-09-0316:59Björn EbbinghausHm.. 😕
I hoped I could get away with lookup refs instead of eids…#2020-09-0316:59Björn EbbinghausThank you anyway#2020-09-0320:40Jake Shelby[Datomic Cloud] I've seen several references in the documentation that say the Application Name, in context of a CF template param, "cannot be changed later" - however, that parameter does show up if I attempt a parameter upgrade in CF. Will bad things happen if I change that? ... Also - what if I really really do need to change the application name? Like in the case of scaling up my system, by breaking the main app out into a new compute group, but needing to still deploy tx fns to the primary group from a separate application name?#2020-09-0405:15David PhamWith Datomic on-prem with PostgreSQL, would it be possible to increase the read capacity of datomic by adding Postgres replicas of the database? I did not see any documentation on whether it was feasible in the docs. I guess it would not make sense because of transactors.#2020-09-0409:10cmdrdatsI'm finding using enums as refs in datomic (on-prem) extremely tedious - in order to find the enum keyword, I need to pull or query/join the :db/ident every time... writing and explicitly querying is simple enough, but where I want to do something like (case (:message/status m) :message.status/read ...) it's a pain.. is there something I'm missing?
Currently, I'm heavily leaning toward just using the :db.type/keyword#2020-09-0409:40pithylessOne advantage is you can hang other attributes off of an :db/ident enum (e.g. docs, etc.). This may or may not be useful to you. If it's just a question of dealing with the pull result - and all I care about in a specific context are the keywords - I find it useful to have a helper function that will postwalk the pull result and replace all the enum maps (cardinality one-or-many) with the :db/ident keyword.#2020-09-0411:34cmdrdatsye, I'm just not sure it's worth it when I can just keep a map about the enums around (most might have a displayname of some sort, but that's really it).. the postwalk sounds too fiddly for my liking, and I'd much rather have us not have to think about it 😕
Trying to find out if there's another reason for using refs instead of keywords? is the indexing more efficient?#2020-09-0412:11favilaPull with xform is another option #2020-09-0412:13favila{[:ident-Val-attr :xform lift-ident] [:db/ident]}#2020-09-0412:14favilaDefine lift-ident as :db/ident#2020-09-0416:49cmdrdatsBut why do all that instead of just keyword? Besides being able to attach other metadata to the enum value, of course? Surely there's another tradeoff?#2020-09-0420:25Nassinenums gives you more constraints (if you need that), a keyword type would accept any keyword, with refs, only already asserted enums/entities are valid#2020-09-0420:32cmdrdatsThat's ok, I can work around that easily enough, even with the constraint functions.. i forget what they're called.. surely there's a more fundamental reason?#2020-09-0420:56NassinUsing keywords directly will require more storage I guess(duplicates), instead of pointing always to the same enum.#2020-09-0420:57Nassinyou could create refs to keywords but no idea if that is less or more efficient than refs to enums#2020-09-0420:58Nassindocs don't say, datomic being a black box, only cognitect knows#2020-09-0421:20favilaThis is really just the difference between a pure value and something you would assert things about#2020-09-0421:21favilaif it’s really just a pure value: sure, keyword, fine; but if you want to assert anything about it (e.g. give it unique names, change its name, say it belongs to a set of other values in the same domain, give them aliases for different contexts, say who introduced them, give them sorting priorities, etc etc), then entities give you that flexibility#2020-09-0506:15cmdrdats@U011VD1RDQT interesting point about the storage requirements, would be interesting to know if this is the case#2020-09-0506:17cmdrdats@U09R86PA4 thanks, that's useful - some great examples. I guess it's about whether that data should live in code or in database#2020-09-0409:22kipzHi. 
I've set up a Datomic Cloud system in us-east-1. I'm going through the ion tutorial at https://docs.datomic.com/cloud/ions/ions-tutorial.html. Everything is working until I try to run clojure -A:ion-dev '{:op :push}', when I get a 403 AccessDenied trying to download the ion-dev pom/jars. I have an AWS_PROFILE environment variable exported, which seems to be working well enough to get the ion jars downloaded and the samples working up until this point (i.e. I can load the sample db, run queries etc). Any thoughts on this would be much appreciated.#2020-09-0409:24kipzCould not transfer artifact com.datomic:ion-dev:pom:0.9.265 from/to datomic-cloud (): Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: XXX; S3 Extended Request ID: XXX)
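[Editor's note] For context on the error above: the push step resolves com.datomic/ion-dev from Cognitect's S3-backed Maven repo, so tools.deps goes through the AWS credential chain for the download itself. A sketch of the relevant deps.edn pieces (the repo URL is the one documented at the time, and the 0.9.265 version is taken from the error message above; verify both against the current release notes before copying):

```clojure
;; deps.edn fragment (sketch, not a definitive setup). Because the repo
;; uses the s3:// scheme, a hiccup in AWS IAM/credentials can surface here
;; as a 403, even while already-cached datomic jars continue to resolve.
{:mvn/repos {"datomic-cloud" {:url "s3://datomic-releases-1fc2183a/maven/releases"}}
 :aliases
 {:ion-dev {:extra-deps {com.datomic/ion-dev {:mvn/version "0.9.265"}}
            :main-opts  ["-m" "datomic.ion.dev"]}}}
```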
#2020-09-0409:32kipzIncidentally, I'm curious as to why these are authenticated maven repos in the first place. Anyone know why this would be?#2020-09-0411:14kipzThis is now working, but intermittently. Unfortunately I can't say what fixed it as I've made no changes - I've just tried again. Perhaps an AWS issue?#2020-09-0415:41Jake Shelbythere were pretty big IAM issues yesterday - so it's possibly an AWS thing, depending on when you were getting the issue#2020-09-0715:32kipzThanks - yeah, must have been that. I'm not seeing issues doing this now at all. It didn't make sense that some artefacts could be downloaded, but not others.#2020-09-0416:06Jake ShelbyI just noticed that https://docs.datomic.com/cloud/releases.html#current shows older versions for ion and ion-dev than is listed in the https://docs.datomic.com/cloud/releases.html#ion-dev-276#2020-09-0416:09Jake Shelby(in case anyone else is scratching their heads wondering why it looks as if there are conflicts for the latest Datomic Cloud versions not using the latest client libs ... you just have to use the actual latest version of ion-dev)#2020-09-0417:04jaret@jake.shelby Thanks for the report! I just fixed this …an artifact in our docs build failed to get updated.#2020-09-0419:07Matheus Serpellone NashHello!
We noticed our application tends to crash after a few weeks of no restarts (due to no deploys or cycles); we see increased GC activity and CPU usage until it becomes unhealthy and we have to cycle it.
I used jmap to extract a heap dump of a recent pod (24h), and ran it through Eclipse Memory Analyzer, to see if I can find any leaks.
It’s telling me that there is an object datomic.cache.WrappedGCache retaining 9+GB of heap. I have attached the stack from gcroot
I tried to search for this class to understand what it does, but nothing shows up.
Do you guys have any advice on how to debug this? Is this Datomic’s peer getting too bloated from all the queries? Can I tune this to not take too much space? I’m a bit uncertain on how to proceed.
Also, sorry if this is not the right channel.#2020-09-0419:28csmI think https://docs.datomic.com/on-prem/caching.html#object-cache is what you're looking for?#2020-09-0419:29Matheus Serpellone NashAh ok! I just also realized that the “95% usage” I'm getting is probably from the used heap, not total heap#2020-09-0419:30csmThough we also see our peer processes eventually start consuming a significant amount of CPU, and inspecting the process, it's almost all GC scans, and almost all the memory is in datomic's cache. I haven't figured out if there is a good way to tune the GC to handle a large in-memory cache, to avoid this CPU usage#2020-09-0419:41Matheus Serpellone NashWhat are you using to inspect the process? visualvm?#2020-09-0419:58csmtop -H -p <pid>#2020-09-0420:01csm(and jmap)#2020-09-0420:21marshallwhat JVM gc flags are you using and what heap size#2020-09-0420:22marshallyou most likely want to use the recommended flags from our docs: -XX:+UseG1GC and -XX:MaxGCPauseMillis=50#2020-09-0420:22Matheus Serpellone NashLet me confirm. I'm sure we use G1. Not sure about gc pause#2020-09-0420:22marshall@m.serpellone this is a peer application, yes?#2020-09-0420:23Matheus Serpellone Nashyes#2020-09-0420:23marshallkeep in mind the peer is your app so you need to be aware of things like head holding/etc#2020-09-0420:24marshallif looking for memory leaks/etc#2020-09-0421:08Lennart BuitCan I ask more lighthearted questions here? Stuart Halloway has this party trick of using a vector of vectors as a database value; why doesn't that work for the client api? Sounded like a simple way to test rules ^^#2020-09-0421:09marshall@lennart.buit you can do that in the peer because the work of query happens in the peer#2020-09-0421:10marshallclient sends the request over the wire to the peer server (on prem) or to a datomic cloud system (cloud)#2020-09-0421:10Lennart BuitAh yes that makes sense 🙂!
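The peer-side trick mentioned above can be sketched roughly like this (hedged: the peer API's q accepts any collection of tuples as a data source; the attribute names here are made up for illustration):

```clojure
;; Hedged sketch of using a plain vector of [e a v] tuples as the
;; "database" argument to the peer API's q. :person/name is a
;; hypothetical attribute, not from the thread.
(require '[datomic.api :as d])

(d/q '[:find ?name
       :where [?e :person/name ?name]]
     [[1 :person/name "Ada"]
      [2 :person/name "Alan"]])
;; expected: #{["Ada"] ["Alan"]}
```

As noted above, this only works with the peer library: the client API ships the query to a remote process, so an in-memory vector on the caller's side can't serve as the source.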
I should have thought of that#2020-09-0421:14kennyIt'd be great if ion-dev bumped its version of tools.deps. The version it is using still uses the s3-wagon-private lib which transitively brings in a logback.xml file. This causes the multiple slf4j bindings warning. I can manually bump tools.deps to 0.8.709 to work around the warning.#2020-09-0518:20vI am working on an app where I am using d/with function to store intermediate transaction steps. I want to apply all the changes I made using d/with to actual database. How can that be achieved #2020-09-0522:31val_waeselynckI don't see any trivial way to do that. You could keep track of your transaction results, compacting the added datoms into one transaction of the same effect.
Such a compaction is not straightforward, as the intermediary transactions might produce conflicting datoms, and new entities might have been created along the way.
In addition, in the general case, you're facing a potential concurrency issue here, because computing changes speculatively and submitting them later is not transactional. Between the moment you compute your changes locally, and the moment you transact them, another transaction might have occurred, causing your changes to violate an invariant (for example, setting a banking account's balance to a wrong amount, effectively erasing the result of a transfer).#2020-09-0522:33val_waeselynckMy point here is that there's no way to make this foolproof without knowing more about the nature of your changes.#2020-09-0613:24joshkhi'm having an issue with dl/import-cloud when importing a database to dev-local. I can see in the exception (excluded for brevity) that the datom value is a very lengthy string
Importing
Execution error (ExceptionInfo) at datomic.dev-local.tx/sized-fressian-bbuf (tx.clj:103).
Item too large
java.util.concurrent.ExecutionException: clojure.lang.ExceptionInfo: Item too large
at datomic.dev_local.tx$sized_fressian_bbuf.invokeStatic(tx.clj:103)
at datomic.dev_local.tx$sized_fressian_bbuf.invoke(tx.clj:96)
at datomic.dev_local.tx$marshal_tx$fn__17053.invoke(tx.clj:180)
at clojure.core$mapv$fn__8445.invoke(core.clj:6912)
at clojure.lang.PersistentVector.reduce(PersistentVector.java:343)
at clojure.core$reduce.invokeStatic(core.clj:6827)
at clojure.core$mapv.invokeStatic(core.clj:6903)
at clojure.core$mapv.invoke(core.clj:6903)
at datomic.dev_local.tx$marshal_tx.invokeStatic(tx.clj:179)
at datomic.dev_local.tx$marshal_tx.invoke(tx.clj:176)
at clojure.core$pmap$fn__8462$fn__8463.invoke(core.clj:7022)
at clojure.core$binding_conveyor_fn$fn__5754.invoke(core.clj:2030)
at clojure.lang.AFn.call(AFn.java:18)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:844)
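One way to anticipate this failure is to scan the source database for strings over the documented 4096-character limit before attempting the import. A hedged sketch, assuming the Datomic client API and an open connection `conn`; `oversized-strings` is a hypothetical helper, not part of any Datomic API:

```clojure
;; Hedged sketch: report string datoms longer than dev-local's
;; documented 4096-char limit before running dl/import-cloud.
(require '[datomic.client.api :as d])

(defn oversized-strings
  "Returns [e a length] for each string value longer than limit.
  Passes :limit -1 so the client API returns all datoms, not one page."
  [db limit]
  (for [{:keys [e a v]} (d/datoms db {:index :aevt :limit -1})
        :when (and (string? v) (< limit (count v)))]
    [e a (count v)]))

(comment
  ;; anything returned here would trip the "Item too large" check
  (oversized-strings (d/db conn) 4096))
```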
#2020-09-0613:32joshkhso it seems that strings are limited to 4096 characters, but Datomic Cloud allows for longer strings? https://forum.datomic.com/t/4096-character-string-limit/1579#2020-09-0812:26stuarthallowayThanks for the report! The documented limit for strings in Cloud is 4096, as you describe. The dev-local importer enforces this limit. We understand that this is a problem and are considering options.#2020-09-0711:34arohnerAre there any known issues transacting ::foo false values when using :db.entity/attrs [::foo], with ::foo db.type/boolean? I'm transacting a map, and all of the other required keys are working fine, but requiring ::foo and adding ::foo false is causing the transaction to fail#2020-09-0817:02kipzI posted a question on the dev forum, but because it's on an old thread, I'm not sure it'll draw any attention, so I was wondering if there's anyone here in Slack who might be able to provide some insight into the behaviour of unique composite-tuples we are seeing? https://forum.datomic.com/t/upsert-behavior-with-composite-tuple-key/1075?u=jcarnegie#2020-09-0817:17favilaThis is a caveat of unique-identity composite ref tuples#2020-09-0817:17favilaI would even say a “gotcha”#2020-09-0817:18favilahowever, I’m not sure there’s an easy fix. “upsertion” works by looking for a to-be-applied assertion with a tempid and an upserting attr and resolving the tempid to an existing id if the value matches an existing id#2020-09-0817:19favilacomposite tuples need to look at a just-completed transaction, see which composite component attributes were “touched”, and add an additional datom to update the composite#2020-09-0817:19favilahaving upsertion resolve to a composite tuple would create a cycle here#2020-09-0817:20favilainstead of two simple phases, it would become a constraint problem#2020-09-0817:22kipzYeah, I see what you mean.
It's just that this is the sort of thing that transaction isolation could give us, right?#2020-09-0817:23favilaI’m not sure what you mean?#2020-09-0817:37favilais your scenario combining upserting of the components of the ref also with upserting of the composite ref itself?#2020-09-0817:38favilaI’m trying to imagine why you don’t either have an entity id already, or know you are creating the entity and thus cannot conflict#2020-09-0817:49kipzWell, yeah, this can be solved by issuing multiple transactions, but I'm trying to avoid that. The system itself receives events (the entities) from different sources at different times/orders with partial data - enough to create id's of (potentially) new entities that are required refs of other entities. So in general, we can't know, without issuing queries, if a particular entity already exists. So we want to upsert all the time, and we need a single transaction. Like I said in my post, we've solved this by generating unique id fields from our own external definition of composite-ids - and this is a bit of a pain (we have to manage the lifecycle of this schema and related code between clients). I'm wondering how folk are using these (unique) composite-tuples in the real world given how they currently work.#2020-09-0817:50kipz> I’m not sure what you mean?
What I mean is, it seems feasible that this constraint could be solved within the transaction if the datomic team wanted to implement this. I understand that it's currently not the case.#2020-09-0817:52kipz> is your scenario combining upserting of the components of the ref also with upserting of the composite ref itself?
I'm still trying to grok this 🙂 but I think so. I can post a little schema and transaction if you're interested?#2020-09-0817:59favilayeah, I am#2020-09-0819:14kipz[
;; commit entity
{:db/ident :kipz.commit/sha
:db/cardinality :db.cardinality/one
:db/valueType :db.type/string}
{:db/ident :kipz/repo
:db/cardinality :db.cardinality/one
:db/valueType :db.type/ref}
{:db/ident :kipz.commit/id
:db/valueType :db.type/tuple
:db/unique :db.unique/identity
:db/tupleAttrs [:kipz.commit/sha
:kipz/repo]
:db/cardinality :db.cardinality/one}
;; repo entity
{:db/ident :kipz.repo/id
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity
:db/valueType :db.type/string}
{:db/ident :kipz.repo/name
:db/cardinality :db.cardinality/one
:db/valueType :db.type/string}
{:db/ident :kipz.repo/owner
:db/cardinality :db.cardinality/one
:db/valueType :db.type/string}]#2020-09-0819:14kipz[[:db/add "r1" :kipz.repo/id "repo-id-1"]
[:db/add "r1" :kipz.repo/name "repo-name-1"]
[:db/add "r1" :kipz.repo/owner "repo-owner-1"]
[:db/add "c1" :kipz.commit/sha "commit-sha-1"]
;; always fails without this, fails after first time with it
;; this is the line that the docs says we should never do, but only works
;; with specific known "r1" eid
[:db/add "c1" :kipz.commit/id ["commit-sha-1" "r1"]]
[:db/add "c1" :kipz/repo "r1"]]
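One alternative, sketched here as an assumption rather than anything from the thread: replace the composite ref tuple with a plain heterogeneous tuple built only from external string ids, so the commit's identity no longer depends on resolving an entity id.

```clojure
;; Hypothetical alternative schema: a plain (non-composite) tuple
;; identity with no ref component.
[{:db/ident       :kipz.commit/id
  :db/valueType   :db.type/tuple
  :db/tupleTypes  [:db.type/string :db.type/string] ; [repo-id commit-sha]
  :db/unique      :db.unique/identity
  :db/cardinality :db.cardinality/one}]

;; Upserting then needs no tempid resolution inside the tuple
;; ("r1" below is a tempid for the repo entity elsewhere in the tx),
;; and [:kipz.commit/id ["repo-id-1" "commit-sha-1"]] becomes a
;; valid lookup ref usable without a db in hand.
[{:kipz.commit/id ["repo-id-1" "commit-sha-1"]
  :kipz/repo      "r1"}]
```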
#2020-09-0819:16kipz:kipz.repo/id has been made a scalar to help show the issue#2020-09-0917:58favilawhat was it before?#2020-09-0918:01favila> is your scenario combining upserting of the components of the ref also with upserting of the composite ref itself?
That’s indeed what is happening. you are trying to allow repo upserting (one of the components of the ref) while also allowing the commit entity to upsert (the composite ref itself)#2020-09-0918:05favilaI think this may be a complecting in :db.unique/identity itself. To identify an entity you generally can’t use refs--refs are internal identifiers, but :db/unique is to mark external identifiers#2020-09-0918:05favilabut :db.unique/identity also has this upserting behavior you want#2020-09-0918:06favilawhich I’m guessing you want to use here for deduplication#2020-09-0918:07favilaso, if the repository id never changes, and the commit->repo reference never changes, and repo id is always available to the application at tx time (I don’t see how it couldn’t be with this schema design) consider denormalizing by putting the repo id on the commit entity#2020-09-0918:08favilayou can do this a few ways#2020-09-0918:09favila1. add :commit/repo-id, and make the :commit/id use that as one of its components (I suggest putting repo id first for better indexing)#2020-09-0918:09favila2. just write :commit/id as a tuple with those two values (don’t use a composite, just a tuple). This has the advantage of not adding a datom, but the disadvantage of being less clear#2020-09-0918:12favilaboth these have the advantage that you can now produce lookup refs for commits without a db: [:kipz.commit/id [commit-sha-string repo-id-string]]#2020-09-0918:12favilathis is not possible for ref composites generally--it’s that notion of external identity again#2020-09-0918:14favilaalternatively, if you just want to enforce uniqueness, consider not using upserting or a composite attribute at all. You can query first and speculatively create entities if they’re not found, and use :db/ensure to allow the transaction to fail if you violate the constraint.#2020-09-0918:14favila(i.e. 
optimistic commit style)#2020-09-0918:14favilayou can use some, none, or all indexes according to your preference and the concurrency of the workload#2020-09-0918:17favilae.g. here, you could look up the repo and use that id; and if not found create a repo but allow the tx to fail (using :db.unique/value instead of identity) if someone else made the same repo in the meantime. you can recalculate and reissue the tx#2020-09-0918:17favila(that doesn’t actually need :db/ensure at all)#2020-09-0918:17favilaanyway, those are just some ideas#2020-09-0918:18favilaI don’t think expecting composite tuple upserting constraint resolution is a realistic expectation because of performance: the transactor has a global write lock on the db (essentially) while it’s doing all this tempid resolution and composite tuple maintenance, so it has to be as fast as possible#2020-09-0918:21favilathat said, you can always write a transaction function that does what you want. it would take the repo and the commits plus some DSL for your own tempid replacement for the other assertions you want to make on those entities, do the lookup-or-create, then replace your tempid and return the expanded transaction. Essentially implementing the upserting logic yourself before the transactor does tempid resolution#2020-09-1514:02kipz> so, if the repository id never changes, and the commit->repo reference never changes, and repo id is always available to the application at tx time (I don’t see how it couldn’t be with this schema design) consider denormalizing by putting the repo id on the commit entity
Yeah - I kind of added that external repo-id to simplify the example, but perhaps that just confused things. I had wanted repos to have unique composite tuples made from other attributes too.#2020-09-1514:07kipzAgain - we've moved forwards with generating our own unique id attributes for all entities grounded in the attributes of those entities, and this leaves us free to use non-unique composite tuples as we like. This gives us the overall behaviour we like. However, to me, this feels like exactly the sort of constraint problem I want my database to solve for me and doesn't seem unreasonable - at least from the outside. In any case, I'm still wondering which uses cases these unique composite tuples (as they are currently implemented) are suitable for.#2020-09-1514:07kipzThanks for all your insights! 🙂#2020-09-1514:40favilathey are suitable for ensuring uniqueness violations fail a tx (vs upsert), and for having more-selective lookups#2020-09-0822:31nandoDatomic beginner here. I have a question about schema evolution from initial experience. Developing a simple web app, I began with the following query to populate a form:
(defn find-nutrient [eid]
(d/q '[:find ?eid ?name ?grams-in-stock ?purchase-url ?note
:keys eid name grams-in-stock purchase-url note
:in $ ?eid
:where [?eid :nutrient/name ?name]
[?eid :nutrient/grams-in-stock ?grams-in-stock]
[?eid :nutrient/purchase-url ?purchase-url]
[?eid :nutrient/note ?note]]
(d/db conn) eid))
Got all CRUD operations working as expected. Delightful.
Decided to add categories of nutrients to this app to work through using ref types. Made appropriate changes to schema and codebase, added a few categories, dropdown populates, all good, changed the find-nutrient function to the following:
(defn find-nutrient [eid]
(d/q '[:find ?eid ?name ?grams-in-stock ?purchase-url ?note ?category-eid
:keys eid name grams-in-stock purchase-url note category-eid
:in $ ?eid
:where [?eid :nutrient/name ?name]
[?eid :nutrient/grams-in-stock ?grams-in-stock]
[?eid :nutrient/purchase-url ?purchase-url]
[?eid :nutrient/note ?note]
[?eid :nutrient/category ?category-eid]]
(d/db conn) eid))
Oops, and now the form no longer populates with existing data, because none of the entities have a category.#2020-09-0822:39Nassinhave you read about the pull api?#2020-09-0822:39Nassinsounds more like a job for it#2020-09-0822:41nandoOk, I will look into it.#2020-09-0822:44Nassinyes, did you add the new attribute to existing entities?#2020-09-0822:45nandoI'm thinking ahead if this could be a potential issue with the evolution of a production app.#2020-09-0822:46nandoThere are only a few entities, so one has and the others don't.#2020-09-0822:49nandoI'll work on modifying the function to use the pull api and see what happens.#2020-09-0822:52Nassinyes, the pull api is designed for this#2020-09-0823:03nandoOk, got it working. Is there a handy way to specify the keys used in the map that is returned using the pull api? I didn't find one in a scan of the docs.#2020-09-0823:05Nassinlike this? https://docs.datomic.com/cloud/query/query-pull.html#as-option#2020-09-0823:07Nassinclojure has select-keys#2020-09-0823:09nando
:nutrient/name "Vitamin A",
:nutrient/grams-in-stock 40,
:nutrient/purchase-url "",
:nutrient/note "beta carotene and palmitate",
:nutrient/category #:db{:id 96757023244374}}
Here's the data returned from the pull. I'll see what I can do with it.#2020-09-0823:10nandoMaybe I should be using the fully qualified keys in my web forms?#2020-09-0823:20Nassindon't see why not, as long as you are inside the same process one should rely on them IMO#2020-09-0823:25Nassinsometimes they aren't pretty to work with though#2020-09-0823:26nandoHow would I flatten :nutrient/category #:db{:id 96757023244374}}#2020-09-0823:27nandoI'm not yet familiar with what #: designates#2020-09-0823:28nandoI should just try to do it myself ...#2020-09-0823:30Joe Lane@nando That is just clojure shorthand for :nutrient/category {:db/id 96757023244374} .#2020-09-0823:31nandoOh! That helps a lot! Then a get-in should do it.#2020-09-0823:34Joe LaneWell, wait, what is the type of ref that :nutrient/category is pointing at?#2020-09-0823:34nandoI'm sufficiently sorted out. Thanks very much @kaxaw75836 & @lanejo01#2020-09-0823:37nando@lanejo01 It's essentially a string, a category name#2020-09-0823:38nando
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :category/name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity
:db/doc "Nutrient category"}#2020-09-0823:40Joe LaneCool. If you're using d/pull or using pull in a query you can supply a pull pattern like this.
(d/pull db '[:nutrient/name
:nutrient/grams-in-stock
:nutrient/purchase-url
:nutrient/note
{:nutrient/category [:category/name]}] eid)#2020-09-0823:43nandoOk, got it!#2020-09-0823:44Joe LaneHave fun, reach out if you have more questions!#2020-09-0823:46nandoThis is fun!#2020-09-0823:47Joe LaneAre you using dev-local?#2020-09-0823:48nandoYes, I am.#2020-09-0823:49Joe LaneCool, I would love to hear some feedback on your experience.#2020-09-0823:50Joe LaneIf you were interested, of course 🙂#2020-09-0823:51Joe LaneEither way, glad to hear you think it's fun!#2020-09-0823:57nandoI've always wanted to use datomic, for years, but it was difficult to find a sensible path in as a solo developer. For me, datomic justified learning clojure. Anyway, some weeks back i decided to bite the bullet and dive into developing an app to learn Clojure. I thought I was going to use next.jdbc and a relational database. Well, I had trouble getting the mysql driver to work with the mysql version installed on my dev laptop ... and the next day I decided to hell with it, I'm going to find a way to use datomic instead! So I started looking around and found Stu's message here that dev-local had been released the day before. 🙂#2020-09-0900:12seancorfieldLOL! Sounds like me... I've been meaning to learn Datomic for "ages" but now that dev-local is available, I think I actually might.#2020-09-0900:20nandoI'm not yet at the stage of working with anything that complex, but so far I'm very happy, probably just because it all makes so much sense ...#2020-09-0905:07John LeidegrenHow can I move the identity :test/id from the entity 17592186045418 to the new entity (referenced by :test/ref). Do I have to do this in two separate transactions? All I want to do is move the identity to a new entity in a single transaction. I understand why the temp ID resolution is taking place and resolving the temp ID to a conflict but how can I avoid it. 
How can I force a new entity here?#2020-09-0905:31cmdrdatsInteresting problem! I'm interested to see the solution for this.. I expect you'd have to split to two transactions since you're working with db.unique/identity though#2020-09-0906:43John LeidegrenYeah. That's what I did. I don't like it because now there's a point in time where the database sort of has an inconsistent state. It's not the end of the world but I really want it to commit as a single transaction.
For this to actually go through, the transactor would have to somehow react to the fact that the identity is being retracted during the transaction and, because of that, it mustn't be allowed to partake in temp ID resolution. (either that, or you tag the temp ID as unresolvable to force a new entity...)#2020-09-0906:45cmdrdatsit seems like a modelling problem if you need to get to a state where an entity has an 'identity', then loses it and gives it to another entity - so I could see why this could be unsupported behaviour#2020-09-0906:46John LeidegrenI'm fixing a data problem, or rather I'm doing this because I'm revising the data model. I ran into this as part of an upgrade script I wrote.#2020-09-0906:48John LeidegrenI know Marshall has commented in the past on some of these transactional isolation behaviours and why they might need to work this way, but I'm curious what the reasoning for it is. I can see a way to program around it, but I can also understand that you might not want to just do that.#2020-09-0906:54John LeidegrenYou could argue that I'm violating https://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html or some such.#2020-09-0906:54cmdrdatsI suspect that it's a bit of a trade-off - if you have this behaviour, it's simpler to reason about transactions, since it's likely implemented in a ref-resolution phase, then an actual write phase#2020-09-0906:55cmdrdatsbut if you have clever transactions where you effectively mutate the state for each fact, then things get trickier to accurately reason about#2020-09-0906:56cmdrdatsI had this kind of issue for new schema, and schema that used the new schema:
[{:db/ident :ent/displayname}
{:db/ident :ent/something
:ent/displayname "Hello"}]
would complain that :ent/displayname is not part of the schema yet#2020-09-0906:57cmdrdatsso I had to write a function that checks existence of the properties and then split the schema assertion into multiple phases#2020-09-0906:58John LeidegrenYeah, so this rule applies to attributes. Which I sort of understand. You cannot refer to schema before it exists but for data though. Are the same constraints equally valid?#2020-09-0907:01cmdrdatsI imagine the same ref-resolution phase code applies.. I don't know the exact implementation details of course, but that's the picture I have in my head xD It would basically handle every datom against the immutable value prior to the transaction#2020-09-0907:04cmdrdatsinterestingly, this implementation also implies that you can't provide two values of a :cardinality/one field in the same transaction#2020-09-0907:04cmdrdats(d/transact (:conn datomic)
[[:db/add "temp" :ent/displayname "hello"]
[:db/add "temp" :ent/displayname "bye"]]
)#2020-09-0907:05cmdrdats:db.error/datoms-conflict Two datoms in the same transaction conflict
{:d1 [17592186045457 :ent/displayname \"hello\" 13194139534352 true],
:d2 [17592186045457 :ent/displayname \"bye\" 13194139534352 true]}#2020-09-0907:05cmdrdatssince it can't imply the db/retract for the "hello" value#2020-09-0907:05cmdrdats/s/field/attribute, of course#2020-09-0907:11John LeidegrenYeah, the application of transactions is unordered, so if you say add twice for the same attribute of cardinality one it cannot know which one you meant so it rejects the transaction.#2020-09-0907:17cmdrdatsah, I see - so by that constraint, the same applies for retracting and to re-using identity on a new entity#2020-09-0911:57marshallwhat version of datomic are you using?#2020-09-0911:57marshalland cloud or on-prem?#2020-09-0912:24John Leidegren@U05120CBV It's actually datomic-free-0.9.5703.21 so maybe this isn't a problem elsewhere#2020-09-0913:16marshalli believe this was fixed in https://docs.datomic.com/on-prem/changes.html#0.9.5390#2020-09-0913:16marshallbut its possible this is unrelated#2020-09-0913:16John Leidegrenhehe, that description seems to fit my problem very well. oh well. Thanks for letting me know.#2020-09-0913:43marshall@UNV3H01PS do you have a starter license? can you try it in starter and/or with Cloud?#2020-09-0913:44marshalli can also look at trying to reproduce#2020-09-0914:16John LeidegrenThanks but I'm just fooling around. As long as I know this isn't the intended behavior that's fine. I know what to do now.#2020-09-0914:29marshall👍 we’ll look into it anyway#2020-09-1015:10jaretHey @UNV3H01PS,
Marshall tasked me with looking into this and I wanted to clarify that this is indeed intended behavior and not related to the fix Marshall described. You already have the rough reason here:
> Yeah, the application of transactions is unordered, so if you say add twice for the same attribute of cardinality one it cannot know which one you meant so it rejects the transaction.
You cannot transact on the same datom twice and have it mean separate things in the same transaction. You have to split the transactions up to retract the entity then assert the new identity. Ultimately what you're doing here is cleaning up a modeling decision and in addition to separating your retraction and add transactions you could alternatively model a new identity and use that identity going forward, preserving the initial decision.#2020-09-1015:10jaretI know you were already past this problem, but I hope that clears things up.#2020-09-1110:44John Leidegren@U1QJACBUM Oh, thanks for getting back to me. I really appreciate it.#2020-09-0917:32AbeHello, I'm new to Clojure and Datomic. I'm using the min aggregate to find the lowest-priced product, but can't seem to figure out how to get the entity ID of the product along with it -
;; schema
(def product-offer-schema
[{:db/ident :product-offer/product
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :product-offer/vendor
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :product-offer/price
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}
{:db/ident :product-offer/stock-quantity
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}
])
(d/transact conn product-offer-schema)
;; add data
(d/transact conn
[{:db/ident :vendor/Alice}
{:db/ident :vendor/Bob}
{:db/ident :product/BunnyBoots}
{:db/ident :product/Gum}
])
(d/transact conn
[{:product-offer/vendor :vendor/Alice
:product-offer/product :product/BunnyBoots
:product-offer/price 9981 ;; $99.81
:product-offer/stock-quantity 78
}
{:product-offer/vendor :vendor/Alice
:product-offer/product :product/Gum
:product-offer/price 200 ;; $2.00
:product-offer/stock-quantity 500
}
{:product-offer/vendor :vendor/Bob
:product-offer/product :product/BunnyBoots
:product-offer/price 9000 ;; $90.00
:product-offer/stock-quantity 15
}
])
;; This returns the lowest price for bunny boots as expected, $90:
(def cheapest-boots-q '[:find (min ?p) .
:where
[?e :product-offer/product :product/BunnyBoots]
[?e :product-offer/price ?p]
])
(d/q cheapest-boots-q db)
;; => 9000
;; However I also need the entity ID for the lowest-priced offer, and
;; when I try adding it, I get the $99.81 boots:
(def cheapest-boots-q '[:find [?e (min ?p)]
:where
[?e :product-offer/product :product/BunnyBoots]
[?e :product-offer/price ?p]
])
(d/q cheapest-boots-q db)
;; => [17592186045423 9981]
I think I might see what's going on - it's grouping on entity ID, and returning a (min ?p) aggregate for each one (so basically useless). But I'm not sure how else to get the entity ID in the result tuple... should I not be using an aggregate at all for this?#2020-09-0917:48faviladatalog doesn’t support this kind of aggregation (neither does sql!)#2020-09-0917:48favilayou can do this with a subquery that finds the max, then find the e with a matching max in the outer query; or, do it in clojure#2020-09-0917:50favila:find ?e ?p then (apply max-key peek results) (for example)#2020-09-0917:51favilathe reason datalog and sql don’t do this is because the aggregation is uncorrelated: suppose multiple ?e values have the same max value: which ?e is selected? the aggregation demands only one row for the grouping#2020-09-0917:52favila(you still have that problem BTW--you may need to add some other selection criteria)#2020-09-0918:41AbeAh I see, thank you!#2020-09-0920:00vHello I am playing with dev-local datomic. When I try to create a database I get error.
java.nio.file.NoSuchFileException: "/resources/dev/quizzer/db.log"
....
Here is the full code
(ns quizzer.core
(:require
[datomic.client.api :as d]))
(def client (d/client {:server-type :dev-local
:storage-dir "/resources"
:system "dev"}))
;; Creating a database
(defn make-conn [db-name]
(d/create-database client {:db-name db-name})
(d/connect client {:db-name db-name}))
(comment
(d/create-database client {:db-name "quizzer"}))
Any ideas? 🙂#2020-09-0920:02alexmillerdoes /resources/dev/quizzer exist?#2020-09-0920:02alexmilleror more simply, does /resources exist?#2020-09-0920:06vI placed it in the root directory. Here is project structure
.
├── README.md
├── deps.edn
├── resources
│ └── dev
│ └── quizzer
│ └── db.log
├── src
│ └── quizzer
│ └── core.clj
└── test
└── quizzer
└── core_test.clj
7 directories, 5 files#2020-09-0920:06vAnd the deps.edn structure
{:paths ["src" "resources" "test"]
:deps {org.clojure/clojure {:mvn/version "1.10.1"}
com.datomic/dev-local {:mvn/version "0.9.195"}}
:aliases {:server {:main-opts ["-m" "quizzer.core"]}
:test {:extra-paths ["test/quizzer"]
:extra-deps {lambdaisland/kaocha {:mvn/version "0.0-529"}
lambdaisland/kaocha-cloverage {:mvn/version "1.0.63"}}
:main-opts ["-m" "kaocha.runner"]}}}#2020-09-0920:09alexmiller"/resources" is an absolute path#2020-09-0920:09alexmillerI assume that's in your ~/.datomic/dev-local.edn#2020-09-0920:29vAbsolute path was the problem, thank you @alexmiller#2020-09-0921:34Jake ShelbyI have a datomic cloud production topology, which shows the correct number of datoms in the corresponding CloudWatch dashboard panel..... however, the datoms panel for my other solo topology never shows any datoms, no matter how many I transact into the system#2020-09-0921:48kennyI know solo reports a subset of the metrics, but according to https://docs.datomic.com/cloud/operation/monitoring.html#metrics solo should report that datoms metric.
> Note In order to reduce cost, the Solo Topology reports only a small subset of the metrics listed above: Alerts, Datoms, HttpEndpointOpsPending, JvmFreeMb, and HttpEndpointThrottled.
Not sure what's going on. I'm seeing the same on our solo stacks though @U018P5YRB8U.#2020-09-0921:49kennyEven the solo https://docs.datomic.com/cloud/operation/monitoring.html#dashboards shows the datoms metric.#2020-09-0921:52Jake Shelbythanks for checking your system @U083D6HK9, what version is yours? (I just launched mine last week, so it's the latest version
▶ datomic cloud list-systems
[{"name":"core-dev", "storage-cft-version":"704", "topology":"solo"},
{"name":"core-prod",
"storage-cft-version":"704",
"topology":"production"},#2020-09-0922:01kennySame version#2020-09-0923:42vDoes anyone have an example on how tx-report-queue is used #2020-09-1011:55val_waeselynckhttps://docs.datomic.com/on-prem/transactions.html#monitoring-transactions#2020-09-1011:55val_waeselynckSee also: https://docs.datomic.com/on-prem/javadoc/datomic/Connection.html#txReportQueue--#2020-09-1018:23davewoI am trying to update schema to add uniqueness to an attribute like so:
[{:db/id :owsy/dot-number
:db/unique :db.unique/identity}]
but I get the following error:
clojure.lang.ExceptionInfo: java.lang.IllegalArgumentException: :db.error/invalid-alter-attribute Error: {:db/error :db.error/unique-without-index, :attribute :owsy/dot-number} {:succeeded [{:norm-name :V20200901/clearfork-last-funded-date-SNAPSHOT, :tx-index 0, :tx-result {:db-before
This is confusing because I thought that adding :db/unique would also set :db/index true#2020-09-1018:26ghadi@davewo I think it's saying that your data isn't already unique :db.error/unique-without-index so you can't add a uniqueness constraint#2020-09-1018:32davewohttps://docs.datomic.com/on-prem/schema.html#schema-alteration
"In order to add :db/unique, you must first have an AVET index including that attribute."#2020-09-1018:32davewothat seems more in line with the error message#2020-09-1018:33ghadiah yeah#2020-09-1018:33ghadigood catch!#2020-09-1018:34davewoand because those indexes are added asynch, adding the :db/index in the same tx doesn't work (which I tried)#2020-09-1021:21colinkahnIs there anyway to clone an in memory peer connection? Lets say I want to transact a large schema and then be able to make clones of it at that point for testing purposes and not incur the schema transaction cost each time?#2020-09-1021:36kennyPerhaps https://github.com/vvvvalvalval/datomock?#2020-09-1022:32colinkahn@U083D6HK9 cool, I’ve run across this lib a couple times but I guess it never sunk in what usecase it solved 😄#2020-09-1022:39Joe Lane@colinkahn Have you considered using the new dev-local database? Copying a database is literally cp my-db the-copy-of-my-db#2020-09-1022:45colinkahnI just saw that today. I’ve been using https://github.com/ComputeSoftware/datomic-client-memdb for testing, but they’re saying to switch over to dev-local. But at first glance seems like it writes to disk which I don’t need for tests.#2020-09-1100:44val_waeselynckWhait, copying Datomic dbs is no longer O(1) ? What a pity :)#2020-09-1122:40nandoIf an attribute has a :db.cardinality/many, can it have other value types besides :db.type/ref? If so, what does a query or pull of that attribute return? A vector? A set? 
And what does a transaction expect if multiple values are passed into an attribute with a cardinality of many?#2020-09-1122:48favilaCardinality many means you are allowed more than one assertion datom per e+a at a time#2020-09-1122:48favilaThey’re still separate assertions#2020-09-1122:49favilaSo in query there is no difference #2020-09-1122:49favilaIn pull results, they are vectors of values, but they will be unique #2020-09-1122:50favilaAnd any type can be cardinality many#2020-09-1122:51nandoSo I cannot transact several values in an array, for instance. They should be in separate maps?#2020-09-1122:52favilaYou can have more than one db/assert#2020-09-1122:53favilaAnd a map with a set value will desugar to multiple assertions#2020-09-1122:53favilaI want to emphasize that each value is from separate datom. It’s not that you have a datom with many values in it#2020-09-1122:53nandoOh, good. I'm working with a web app.#2020-09-1122:54favilaThe vector you get from pulls, and the transaction map form are projections#2020-09-1122:54nandoUnderstand they are separate datoms, good.#2020-09-1122:55nandoOk, so I'll work it out through experimentation from here. Thanks for the pointers!#2020-09-1122:42nandoI'm so used to relational databases, I'm not sure what mental model I should have here.#2020-09-1222:41azHi all, is there a way to enforce authorization rules in datomic? I’m trying to figure out the authorization story for clojure and wondering if this can partially be done with datomic.#2020-09-1300:10val_waeselynckWhat would be the SQL equivalent of what you're looking for? Authorization rules tend to consist in highly application-specific invariants, I'd be surprised if you found a generic solution.#2020-09-1302:00az@U06GS6P1N - I was thinking maybe there was something similar to what Hasura, Dgraph, Firebase are doing for their authorization system. 
A rule engine that runs against a query.#2020-09-1320:17val_waeselynckI don't think so, but this answer might help: https://stackoverflow.com/a/48269377/2875803#2020-09-1412:48nando@lanejo01 About a week ago, you asked for feedback regarding the use of dev-local.
It gave me an easy on-ramp to work with Datomic in the development of an app for the first time, one with an open-ended future that can easily be upgraded to a solo or production topology. Clojure and Datomic are generally regarded as languages for experienced programmers. What I'm finding is that Datomic is a heck of a lot simpler to use and understand compared to SQL and relational databases - once you get through the initial learning curve, which isn't that steep at all. And that's my experience already, after a few weeks of working with it a few hours a day. And there is a lot I haven't explored yet.
I'm not smart enough to deal with the incidental complexity that relational databases introduce. The issues that folks run into with time and versioning in a relational database are well known. But what I'm experiencing is Datomic's simplicity, intelligent simplicity, well designed simplicity, at a more fundamental level. Relationships are mapped in a line, point to point. My monkey brain can grok that - swing from tree to tree to get to the bananas.
So suddenly, I have the impression that Datomic is the right database for beginners, or anyone like me that has difficulty modelling complex, arbitrarily designed 2D/3D/4D relationships in their head. It is really hard for me to imagine going back to relational databases at this point, if I have any say in the matter. I should not have to think that hard to get to the bananas.
So, dev-local is an on-ramp, that you know. But my feedback is it may be / could be a very broad on-ramp, simply because Datomic with datalog and pull is so much easier than a relational database with SQL.#2020-09-1414:35gwsOn-prem, is there ever a situation where transactions submitted with datomic.api/transact that have been waiting longer than the peer's specified timeout are purged before processing by the transactor in order to make room for new transactions, or is the queue effectively unbounded, where the transactor will continue working through the queue of transactions potentially much longer than peers will wait on them? From the docs:
> When a transaction times out, the peer does not know whether the transaction succeeded, and will need to query a recent value of the database to discover what happened.
... which makes sense given that a peer timeout set close to the amount of time it took the transaction to be processed could succeed, but doesn't seem to clarify whether it's ever the case that old transactions that peers no longer care about might be dropped by a queue and not processed by the transactor.#2020-09-1416:00mMeijdenHi,
The security/compliance dept. requires that all S3 buckets are server-side encrypted and all CMK keys have a rotation enabled. By default, the datomic template does not do this. Can anyone confirm my expectation that making these changes to the CFTs ourselves will not break anything?#2020-09-1416:06ghadi@matthijs.van.der.meij I treat the resources inside the CloudFormation stacks as implementation details. I would expect that things will break if you make this change.#2020-09-1417:11jeremyWhat are valid values for this property? The documentation is not clear given this error message. https://docs.datomic.com/on-prem/system-properties.html#backup-properties
Caused by: java.lang.IllegalArgumentException: :db.error/invalid-config-value Invalid value '1' for system property 'datomic.backupPaceMsec'
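(For reference: `datomic.backupPaceMsec` is a JVM system property on the backup invocation; a minimal sketch, assuming the standard `bin/datomic` launcher forwards `-D` flags — the URIs and the value `100` are illustrative, not taken from this thread:)

```shell
# Sketch: pass the pacing property to an on-prem backup run.
# Database URI, backup URI, and the value 100 are illustrative.
bin/datomic -Ddatomic.backupPaceMsec=100 backup-db \
  datomic:dev://localhost:4334/my-db file:/backups/my-db
```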
#2020-09-1420:02jeremyWe've learned that any value >=5 is valid. 0, the "default", is not valid.#2020-09-1421:26tvaughanHi @U050LF1LR!#2020-09-1513:12jeremyHey @U0P7ZBZCK!! How goes it?#2020-09-1513:32tvaughanSent you a DM. Sorry I can't help with this issue 😞#2020-09-1422:55ivanaCan anyone return my faith in reality?
;; lets play a little bit: create empty database and get connection
(d/create-database (db-uri "test-database"))
(def dev-conn (d/connect (db-uri "test-database")))
;; transact some schema
(d/transact dev-conn [{:db/ident :order/type
                       :db/valueType :db.type/keyword
                       :db/cardinality :db.cardinality/one}
                      {:db/ident :order/customer
                       :db/valueType :db.type/long
                       :db/cardinality :db.cardinality/one}])
;; transact some data
(d/transact dev-conn [{:order/type :a :order/customer 1}
                      {:order/type :b :order/customer 1}
                      {:order/type :b :order/customer 1}])
(def db (d/db dev-conn))
;; lets make trivial query
(d/q '[:find ?o ?c ?is-b
       :where
       [?o :order/customer ?c]
       [?o :order/type ?o-type]
       [(= :b ?o-type) ?is-b]]
     db)
;; #{[17592186045420 1 true] [17592186045419 1 true]}
;; this is not what I expected
;; lets cure it by... changing = on clojure.core/=
(d/q '[:find ?o ?c ?is-b
       :where
       [?o :order/customer ?c]
       [?o :order/type ?o-type]
       [(clojure.core/= :b ?o-type) ?is-b]]
     db)
;; #{[17592186045420 1 true] [17592186045419 1 true] [17592186045418 1 false]}
;; much better! lets cure it by changing where-clauses order!
(d/q '[:find ?o ?c ?is-b
       :where
       [?o :order/type ?o-type]
       [?o :order/customer ?c]
       [(= :b ?o-type) ?is-b]]
     db)
;; #{[17592186045420 1 true] [17592186045419 1 true] [17592186045418 1 false]}
;; also fine!
;; how do you like it, Elon Musk?
;; Am I stupid? Or there is some rules which I violated?
;; How can I sleep after that and beleive my all other queries?#2020-09-1423:35marshallCan you submit this as a support ticket to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> ?#2020-09-1423:35marshallI would like to look into this tomorrow#2020-09-1423:37ivanaYep, thanks, I'l send this example on given email#2020-09-1423:43marshallThanks#2020-09-2123:57unbalancedI'm actually surprised the 2nd and 3rd examples work. I would've expected [(clojure.core/= :b ?o-type) ?is-b]] to fail on unification when false#2020-09-1423:30ivanayep, com.datomic/datomic-pro "0.9.6045"#2020-09-1423:48Jake Shelby[Datomic Cloud]: The documentation seems to indicate that encryption at rest is automatic and I do see that the DynamoDB table is set to have encryption enabled .... however I've noticed that the S3 bucket and EFS instances that are created, are not set to be encrypted. Am I missing something, like a parameter somewhere? or do I need to manually enable encryption for some of these other resources?#2020-09-1511:45marshallEverything in storage is encrypted using a CMK (customer master key) automatically#2020-09-1511:46marshallThis is done by datomic itself, instead of through the specific aws services#2020-09-1511:47marshall@U018P5YRB8U ^#2020-09-1512:29mMeijdenExtending on this, if we would like to also have SSE available on the S3 buckets from a company policy perspective, can datomic support this? Would this affect the way datomic performs? I've ran it in a sandbox environment and it looks like datomic can work with the SSE bucket and objects. Can you maybe confirm this @U05120CBV?
Our security department would like to see that all buckets are encrypted by default, as this makes it from an auditing perspective slightly easier
Altering the template is something we already have to do unfortunately to run datomic in our managed accounts since we are required to implement a role boundary on our iam roles (which works perfectly fine, having it with an automated script.)
I added logback.xml to my resources dir so it’s in my class path, not using bin/run, it’s a peer that’s also a server 🙂#2020-09-1517:53Sam DeSotaThank you#2020-09-1517:54favilaah, ok. that’s different. I recommend always using the property btw instead of putting it on the classpath (except maybe in dev, where you can put the logback in dev-resources)#2020-09-1517:55favilahttps://docs.datomic.com/on-prem/configuring-logging.html#peer-logging#2020-09-1517:55favilathat’s maybe not especially helpful#2020-09-1517:55Sam DeSotaYup, I’m on that page. I’ll add the property on deploy.#2020-09-1521:44ennI'd like to write a :where clause which will unify and bind a certain var if the relevant attribute is present, but which won't prevent the whole clause from matching if that attribute isn't present.
Something like this:
:where [?foo-id :foo/id "123"]
       [?foo-id :foo/bar ?foo-bar]
Except I want it to match every :foo/id of 123, regardless of whether :foo/bar is present. But if it is present, I'd like to bind it to ?foo-bar.
Is this possible?#2020-09-1603:22cmdrdatsThe key here is to understand what you're wanting this for. 9/10, you're actually just wanting to pull the field information out into ?foo-bar
For that, I would recommend using the pull syntax, ie.#2020-09-1603:22cmdrdats(d/q
  '[:find (pull ?foo-id [:foo/id :foo/bar])
    :where [?foo-id :foo/id "123"]] db)#2020-09-1522:16ivanaI'm not sure I understand you correctly, but it sounds like get-else#2020-09-1522:58kennytiltonLook for get-else half way down: https://docs.datomic.com/on-prem/query.html @enn Could be a fit.#2020-09-1522:58ennthank you, I’ll check that out#2020-09-1615:20joshkhi found myself in the middle of a fun (:man-shrugging:) debate today, the topic being:
> can a function still be pure if it calls d/pull?
some people say yes, d/pull operates on db as a value.
other people say no, d/pull is a side effect that fetches data over the network
who's right, and who has to buy the next round at the pub?#2020-09-1615:23joshkhexample function taken from the Cloud documentation:
(defn inc-attr
  "Transaction function that increments the value of entity's card-1
   attr by amount, treating a missing value as 0."
  [db entity attr amount]
  (let [m (d/pull db {:eid entity :selector [:db/id attr]})]
    [[:db/add (:db/id m) attr (+ (or (attr m) 0) amount)]]))
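Stripped of the d/pull call, the core of inc-attr is plain data-in, data-out; a minimal pure-Clojure analogue (hypothetical helper name and attribute, no Datomic dependency) that can be unit-tested in isolation:

```clojure
;; Hypothetical pure analogue of inc-attr's core logic: given the map m
;; that d/pull would have returned, build the :db/add form, treating a
;; missing value as 0. Deterministic, so trivially testable.
(defn inc-attr-tx [m attr amount]
  [[:db/add (:db/id m) attr (+ (or (attr m) 0) amount)]])

(inc-attr-tx {:db/id 42 :counter/value 7} :counter/value 3)
;; => [[:db/add 42 :counter/value 10]]
(inc-attr-tx {:db/id 42} :counter/value 3)
;; => [[:db/add 42 :counter/value 3]]
```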
#2020-09-1615:28benoitI think it is more important to worry about what your function returns rather than what it actually does. Does your "pure" function always return the same result for the same input?#2020-09-1615:28danstoneThe difference between a memory load of an immutable value and a network load is mostly going to be performance and possibility of exception. Well I can write a 'pure' function that takes a long time to compute and I can write a 'pure' function that might throw e.g out-of-memory.#2020-09-1615:30danstoneExcision may well remove the value-ness of your as-of db though.#2020-09-1615:32joshkhyes, i agree with both of your points. Datomic being immutable, you will always be returned the same output for a given input, except for the case of excision (although we don't worry about that in Cloud).#2020-09-1615:51favilaYou also can’t just look at the function to decide purity, you have to look at its arguments and return values#2020-09-1615:51favilad/pull is absolutely pure if db is an in-memory db, for example#2020-09-1615:52favilabut it’s absolutely not pure if it’s an in-memory db that randomly generates results when read#2020-09-1615:57favilaI think more than purity I usually want to know things like: does this function take/return values or objects (lazyness being an important grey area--look at d/entity return values, or lazy-seqs that perform IO); does it return the same thing on repeated calls with the same arguments (“same” depending on whether they are values or objects); could it ever possibly perform io or idle-blocking#2020-09-1617:10benoitYes, same is not an obvious concept 🙂 But I think the important part to get across here is that the notion of pure function matters for the user of the function and not its implementation. You can have pure functions that manage state internally (e.g. 
any memoized function).#2020-09-1617:30favila“pure” can mean same return for same arguments, or no side effects (I think this excludes memoization and IO), or both. Because it can mean any of these things, I think it’s better to be more specific. We know the properties of d/pull, so I think the original question is really an argument about what “pure” should mean.#2020-09-1617:32favilawhich I guess is a fine argument for the pub 🙂#2020-09-1617:33favila(just keep sharp objects away)#2020-09-1622:14Nassinanything that needs to leave the process in impure, networks failing is common#2020-09-1707:01joshkhthe example function inc-attr at the top of this thread is actually a transactor function taken from the Datomic documentation. and according to the same documentation, they must be pure. so perhaps inc-attr is pure (in this context) because transactor functions run "inside" of Datomic?#2020-09-1707:25favilaI think “pure” is used loosely here to mean “I may execute this function multiple times while holding a db lock at the head of a queue of other transactions, and you are ok with the consequences of whatever you do in there”#2020-09-1711:32joshkhagreed!#2020-09-1711:33joshkhthanks for sharing your thoughts, it's always insightful to pick other peoples' brains.#2020-09-1619:41Lennart BuitSo you can specify that a rule requires bindings by enclosing the argument in brackets, but from experimenting I noticed that that doesn’t take clause ordering into account, e.g. this is fine, even though the rule is invoked in a clause preceding the binding of ?e:
(d/q '{:find [?v]
       :in [$ %]
       :where [(my-rule? ?e)
               [?e :attr ?v]]}
     db
     '[[(my-rule? [?e])
        [?e :attr 12]]])
Why is that? And how should I go about making sure that consumers of this rule don’t accidentally use it in ways that binds large parts of the database?#2020-09-1620:06favilaThat looks like a bug to me#2020-09-1620:07faviladoes that only happen if it’s the very first clause?#2020-09-1620:15Lennart BuitIt also doesn’t complain like this:
(d/q '{:find [?v]
       :in [$ %]
       :where [[?other-e :attr ?v]
               (my-rule? ?e)
               [?e :attr ?v]]}
     db
     '[[(my-rule? [?e])
        [?e :attr 12]]])
#2020-09-1620:15Lennart BuitLets see what it does on a more contemporary version of datomic#2020-09-1620:20Lennart Buit(same behaviour on 1.0.6202, both queries don’t complain about missing bindings)#2020-09-1620:40favilayeah, I tried a bunch of things. I can only get this to fail:
(d/q '{:find [?v]
       :in [$ %]
       :where [(my-rule ?e ?v)]}
     [[1 :attr 2]
      [2 :attr 12]]
     '[[(my-rule [?e] ?v)
        [?e :attr ?v]
        [(ground 12) ?v]]])
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/insufficient-binding [?e] not bound in clause: (my-rule ?e ?v)#2020-09-1620:41favilamaybe, there is some clause reordering? I wouldn’t know how to predict the performance of this#2020-09-1620:43favilaif it isn’t somehow delaying rule evaluation until after everything’s bound that can be, I would expect your examples to throw#2020-09-1620:48Lennart BuitHmm interesting. Yeah, if it is just performant, you will not hear me complain. I just got pretty fixated on getting my clause orders right, right, so I was surprised this was allowed#2020-09-1621:50marshallRules will be pushed down until required bindings are satisfied#2020-09-1621:51marshallYoull get an error if it cant ever satisfy them#2020-09-1621:54marshallThat is documented for or and not clauses but not for rules, i will double check and also add documentation#2020-09-1621:58Lennart BuitAh thank you! Then I learned something today and it was worth to ask#2020-09-1709:04StefanHi all, I’m hoping that this is not a FAQ that I missed somehow, but this is what I’d like to accomplish: for my unit tests, I’d like to be able to pass a simple Clojure hashmap into Datomic query functions, instead of a real Datomic connection, so that I can test my queries without actually round-tripping to a database. Is there something out there to do this? Or am I on a wrong track here?#2020-09-1709:36pithylessYou can actually pass in a vector of datum tuples as your DB and query can unify them. But that's probably not what you're looking for.
Why not just create an in-memory datomic connection? Something like:
(str "datomic:mem://" (gensym))
You may also be interested in a tool like https://github.com/vvvvalvalval/datomock for setting up and reusing more complex test data scenarios#2020-09-1709:38pithylessI suppose none of this is relevant if you're using Datomic Cloud. Seems to be a primary driver for releasing https://docs.datomic.com/cloud/dev-local.html#2020-09-1709:38thumbnailnote that passing datum tuples only works for the peer library, not for the client library iirc.#2020-09-1710:10Stefan@U05476190 We’re using on-prem, so… I’ve indeed found datomock, which is a nice concept, but then you still need to specify both a schema and the test data; I was hoping for something even simpler 😉#2020-09-1717:48val_waeselynck@UGNFXV1FA I find this use case strange. Wouldn't you have more confidence in your tests if they ran in an environment more similar to production?#2020-09-1717:49val_waeselynckI personally find it hugely advantageous to have a full-featured implementation of Datomic in-memory, I would recommend embracing it#2020-09-1807:36Stefan@U06GS6P1N Yeah we’re already experimenting with that, and maybe it’s good enough. But if those tests take 1 second each because of setup/teardown of Datomic databases, that’s too long for me. For unit tests, I prefer to keep things as lean as possible.#2020-09-1813:11val_waeselynck@UGNFXV1FA forking solves that problem#2020-09-1813:13val_waeselynckPut a populated db value in some Var, and then create a Datomock connection from it in each test, there's virtually no overhead to this#2020-09-1813:23StefanSounds good, will definitely try, thanks! 🙂#2020-09-1715:08souenzzoI'm running datomic /cdn-cgi/l/email-protection
Can I generate an "AWS Event"¹ on every transaction?
¹ AWS Event is something that i can plugin into lambda/SNS/SQS#2020-09-2109:42souenzzoBump#2020-09-2114:22bhurlowWe use the transaction report queue to push data into a kinesis stream, then run lambdas on those events#2020-09-2114:23bhurlowTriggering side effects on Dynamo db writes is likely not what you want since datomic is writing full blocks to storage (not a datom at a time)#2020-09-2115:12souenzzo@U0FHWANJK when running on multiple/scalled instances, how do you manage the tx-report-queue?#2020-09-2115:13bhurlowWe run a single, global process which just subscribes to the queue and pushes events to kinesis#2020-09-2115:13bhurlowother Datomic traffic is scaled horizontally but doesn't invoke the queue#2020-09-2115:13bhurlowKinesis -> Lambda integration works reasonably well#2020-09-2115:14bhurlowone bonus is you can do one queue to many lambda consumers#2020-09-2115:14souenzzo@U0FHWANJK can you share which instance size you use for this report-queue?#2020-09-2115:15bhurlowsubscribing to the tx report queue and putting into lambda is not a very intensive process#2020-09-2115:15bhurlowt3.large would be fine imo#2020-09-2115:19souenzzotnks @U0FHWANJK#2020-09-2115:28joshkhhmm, i don't suppose you know if something like the "transaction report queue" is available on Datomic Cloud, do you? i have often been in need of exactly what souenzzo mentioned, but instead settled for querying / sipping the transaction log on a timer#2020-09-2115:34bhurlowI'm not sure about cloud, have only used the above in on-prem#2020-09-2115:34bhurlowI'd assume it's inside the system but possibly not exposed#2020-09-2115:37val_waeselynckClients don't have a txReportQueue indeed. Polling the Log is usually fine IMO (and having a machine dedicated solely to pushing events seems wasteful, and it's also fragile as it creates a SPoF).#2020-09-2115:38souenzzoI work with datomic cloud and datomic on-prem (on different products)
IMHO, datomic on-prem is still way easier/more flexible than cloud
Cloud has to many limitations. You can't edit IAM for example, and if you edit, you break any future updates.#2020-09-2115:40joshkhthanks guys#2020-09-2115:42val_waeselynckOne interesting construction might be using AWS Step Functions + Lambda for polling the Datomic Log into Kinesis, using the Step Functions state to keep track of where you are in consuming the Log#2020-09-1721:00donyormLooking to try and figure out how to handle sessions/authentication with ions, is there a best practice for that in ions? https://forum.datomic.com/t/best-way-to-handle-session-in-ions/1630#2020-09-1723:21kennyJust confirming, it's okay to pass a db created with datomic.client.api/db to datomic.client.api.async/q, correct?#2020-09-1809:37ziltiI just updated Datomic from 1.0.6165 to 1.0.6202, both the transactor, and the peer library in my program. Now, "nothing works anymore". Interestingly, Datomic Console can still connect fine. But my program cannot anymore, giving me `AMQ212037: Connection failure has been detected: AMQ119011: Did not receive data from server for org.a/cdn-cgi/l/email-protection3145e5fe[local= /127.0.0.1:36065, remote=localhost/127.0.0.1:4334
] [code=CONNECTION_TIMEDOUT]. AMQ119010: Connection is destroyed` https://termbin.com/wnhzu#2020-09-1809:39ziltiAny ideas what could be causing this?#2020-09-1815:55NassinWhat Java version?#2020-09-1915:28ziltiJava 13#2020-09-1814:21xcenoHi, I need some clarification about Datomic Cloud vs. OnPrem setup.
Difference 1 in the cloud guide (https://docs.datomic.com/on-prem/moving-to-cloud.html#aws-integration) states:
> Datomic apps that do not run on AWS must target On-Prem
Is this because of technical reasons or a licence thing?
Our clients are all in on Azure, but we need a Datomic database. Should I convince them to bite the bullet and let us deploy on AWS, or do we have to deploy Datomic OnPremise on a Azure environment?#2020-09-1814:25marshallI suppose that is a bit draconian
You certainly could run Datomic Cloud in AWS and run your application in Azure#2020-09-1814:25marshallyou'd have to handle the network stuff to make sure it was secure#2020-09-1814:25marshalland you'd be paying the cross-cloud latency#2020-09-1814:27xcenoThat's what I thought. Just plug in the client config pointing to AWS, but the main app runs on Azure. So it must be a licensing issue then#2020-09-1814:28marshallit's not a licenseing issue#2020-09-1814:28marshallit's a 'we need to write that sentence better` issue#2020-09-1814:28marshallyou are definitely free to do that ^#2020-09-1814:28xcenoAhh okay got it, thank you 🙂#2020-09-1814:28marshallthere is no way to run Datomic Cloud in Azure#2020-09-1814:29marshallbut if you're OK with the cross-cloud configs/tradeoffs, there is no reason you can't do that#2020-09-1814:29xcenoYeah it would be like a datomic onPrem installation targeting a postgres DB on azure, but even typing this sounds a bit stupid#2020-09-1814:29marshalli mean, it's not that bad; I've definitely talked to several customers using Cloud and hosting their apps elsewhere#2020-09-1814:29marshallGCP mostly#2020-09-1814:30xcenoI see, fair enough. I talk to my client then. Thanks again!#2020-09-1814:30marshallsure#2020-09-1814:25Lennart BuitWhen datomic reports an anomaly :cognitect.anomalies/busy with category :cluster.error/db-not-ready, what is exactly the problem that datomic is having (cpu load?) and how could I go about mitigating this in the short term? Or is it just that my peers are severly overloaded and I need to add more 😛?#2020-09-1814:26marshallthe set of "active" databases on each node (query group or primary compute group instance) is dynamic. 
Datomic 'unloads' inactive databases after a period of time#2020-09-1814:27marshallif you issue a request to connect to a db that's not currently 'active', the serving node has to load that DB's current memory index/context/etc#2020-09-1814:27marshallthat's what the anomaly you're seeing indicates#2020-09-1814:27marshallif you have only a few DBs in your system, you can use the preload db parameter in your compute group (or query group) cloudformation to automatically load those DBs on any node at startup#2020-09-1814:33Lennart Buit(This is an on prem peer server btw). But thats unique databases, or also database values at different t?#2020-09-1814:35marshallunique databases#2020-09-1814:39TwanIs there a way we can deal with this? We have around 12 databases in total, with only 1 (production) db being largely hit (and 2 little production dbs). How come it changes the active database after all?#2020-09-1814:40marshalldoes this only occur on starting up a new peer server?#2020-09-1814:40Lennart BuitNo, it appears to occur randomly every few seconds or so#2020-09-1814:41marshallis it always db-not-ready?#2020-09-1814:41marshallhow many peer servers do you have running?#2020-09-1814:41Lennart BuitWe did plug a second peer server today, so we have 2 now. Loadbalanced by haproxy, no sticky sessions#2020-09-1814:44Lennart BuitPredominantly, yeah. We did see an ops limit reached exception before, but I can’t confirm right now when I last saw that#2020-09-1814:48marshalldo you have cpu and memory metrics from the peer server?#2020-09-1815:14Lennart BuitJust for posterity/googlers: We ended up severing datomics connection to a badly provisioned memcached, which reduced these errors significantly. Can’t say for sure thats the problem, tho#2020-09-1816:35favilaRunning datomic on-prem+dynamodb with a very large database (>6 billion datoms). I’m noticing large amounts of data (3-5GB) written to the data directory that appear to be lucene fulltext indexes. 
Is this a scratch space for the transactor’s fulltext indexing?#2020-09-1816:35favilaI ask because I see three items with old timestamps and I’m wondering if I can delete them.#2020-09-1816:35favilaAlso, that seems really big, is this normal?#2020-09-1816:36favilashould I be provisioning a separate or faster disk for this?#2020-09-1816:41favilaTo be clear, this is 3-5gb per directory under and I currently have 3 of them#2020-09-2009:32João FernandesHi, I've been trying do find a way to do a left join for last three days and I finally decided to ask for help 😅
How can I get all owners and their pets even if they don't currently have a pet?
[:find ?owner-name ?pet-name
:where [?owner :owner/name ?owner-name]
[?owner :owner/pets ?pet]
[?pet :pet/name ?pet-name]]#2020-09-2114:22Giovani AltelinoYou could use an or-join#2020-09-2114:29Giovani Altelino[:find ?owner-name ?pet-name
:with ?data-point
:where [?owner :owner/name ?owner-name]
[?owner :owner/pets ?pet]
[?pet :pet/name ?pet-name]
(or-join [?owner-name ?pet-name ?data-point]
(and [(identity ?owner-name) ?data-point])
(and [(identity ?pet-name) ?data-point]))]
#2020-09-2114:30Giovani AltelinoI guess something like this should work, although I don't have datomic installed right now to confirm#2020-09-2009:38David Phamfind all owners and pull the results?#2020-09-2009:39João FernandesIs there a way to do it in "one trip" or am I making a conceptual mistake here?#2020-09-2009:58pithyless@joaovitorfernandes2 as a rule of thumb, I suggest using where clauses for filtering and pull for pulling data (that may or may not exist)
[:find (pull ?owner [:owner/name {:owner/pets [:pet/name]}])
:where [?owner :owner/name _]]
So, you could use get-else in the where clause to optionally find pets, but that only makes sense if you then want to filter with additional rules (e.g. if the owner has a pet, one of the pet's names needs to be "Rex")#2020-09-2009:59pithylessWhen you want conditional matching, you need to add a datalog rule (or one of the sugar syntaxes - e.g. or)#2020-09-2010:11João FernandesThank you so much! Yesterday I tried to use pull but I didn't know I could nest maps in there! Again, thank you!
That should get rid of them#2020-09-2114:19bhurlowI'm going to try excising, though I remember reading that it's not made to reduce the size of stored data necessarily#2020-09-2114:26favilaIt’s not, but if you have a too-large value that’s the only way to ensure it’s not written to segments again#2020-09-2114:27favilaIt’s actually OK to have item-too-large occasionally. All this means is that the item will be fetched from storage instead of memcache/valcache#2020-09-2114:27favilait will still be kept in object cache#2020-09-2114:27favilathat said, there’s a reason they say to keep strings under 4k#2020-09-2115:17bhurlowthanks. In this case the data size was by accident#2020-09-2103:19jeff tangdoes the not= predicate work for datalog queries? e.g.
(d/q '[:find ?uid ?order
:in $ ?parent-eid [?source-uids ...]
:where
[?parent-eid :block/children ?ch]
[?ch :block/uid ?uid]
[?ch :block/order ?order]
[(= ?order ?source-uids)]]
@db/dsdb 48 #{0 1 2})
works but
(d/q '[:find ?uid ?order
:in $ ?parent-eid [?source-uids ...]
:where
[?parent-eid :block/children ?ch]
[?ch :block/uid ?uid]
[?ch :block/order ?order]
[(not= ?order ?source-uids)]]
@db/dsdb 48 #{0 1 2})
does not work
to elaborate, = works for both value and collection comparisons, whereas not= only seems to work for value comparisons#2020-09-2114:21bhurlowthere's a datalog specific not impl here https://docs.datomic.com/on-prem/query.html#not-caluses#2020-09-2114:23favilaThis still isn’t what I expect, but note that in datalog it’s more idiomatic to use = and !=#2020-09-2114:23favilanot= is clojure.core/not=, but those two are not necessarily clojure’s#2020-09-2114:23favilaAlso, why not this?#2020-09-2114:23favila(d/q '[:find ?uid ?order
:in $ ?parent-eid [?source-uids ...]
:where
[?parent-eid :block/children ?ch]
[?ch :block/uid ?uid]
[?ch :block/order ?source-uids]]
@db/dsdb 48 #{0 1 2})#2020-09-2114:23favilaor this for the negation?#2020-09-2114:24favila(d/q '[:find ?uid ?order
:in $ ?parent-eid [?source-uids ...]
:where
[?parent-eid :block/children ?ch]
[?ch :block/uid ?uid]
(not [?ch :block/order ?source-uids])]
@db/dsdb 48 #{0 1 2})
#2020-09-2114:24favilaor, if you want to keep a set:#2020-09-2114:24favila(d/q '[:find ?uid ?order
:in $ ?parent-eid ?source-uids
:where
[?parent-eid :block/children ?ch]
[?ch :block/uid ?uid]
[?ch :block/order ?order]
[(contains? ?source-uids ?order)]]
@db/dsdb 48 #{0 1 2})#2020-09-2114:25favila(which is faster in some cases)#2020-09-2116:02jeff tang@U09R86PA4 your first two codeblocks make sense to me! I tried your third codeblock earlier (`contains?`) but datascript didn't recognize my custom predicate for negation. It was fully qualified but idk#2020-09-2116:02favilaoh this is datascript?#2020-09-2116:04jeff tangyeah, not sure if custom predicates are different in that case#2020-09-2109:40arohnerIs there a way to get a ‘projection’ of a database? For authZ purposes, I would like to run queries on a db that only contains the set of datoms that were returned from a query#2020-09-2114:37rapskalianI know on-prem has something like this via d/filter but Cloud does not afaik #2020-09-2114:45arohnerthanks#2020-09-2204:58steveb8nyou could do what I have done for cloud. proxy the db/conn values and inject middleware at that layer. then you can implement your own filtering before or after the reads occur#2020-09-2204:58steveb8nit’s complex but works well#2020-09-2217:00rapskalian@U0510KXTU interesting. With what data does your middleware stack work with? Are the filters queries themselves, the results of which are then used as input to the next query, and so on?
It seems with the client api you could end up performing large scans of the database if your filter was relatively wide...#2020-09-2217:01rapskalian(I think this may be part of the reason why the client api doesn’t support filter)#2020-09-2222:52steveb8nI went all out and added Pedestal interceptors in the proxy. The enter fns decorate the queries before execution with extra where clauses. In that way you can 1/ limit access 2/ maintain good performance#2020-09-2222:52steveb8ndoesn’t work for d/pull so, in that case, I filter the data in the leave fn instead#2020-09-2222:53steveb8nfor writes, the enter fns check that all references can be read using the same filters as the reads#2020-09-2323:04rapskalianAh clever, that’s a great use of queries as data. I can see how you could have a toolbox of interceptors for common things like [?e :user/id ?user-id-from-cookie]#2020-09-2323:29rapskalianThinking more, you could even build :accessible/to into the schema, and assert it onto entities to authorize access by the referenced entity (ie a user). That might be generalized into an interceptor more gracefully. #2020-09-2402:29steveb8nexactly. almost anything can be generalised with this design. It’s non-trivial but worth it imho#2020-09-2411:02arohnerWhere are the API docs for the proxy? I’m not finding anything#2020-09-2422:54steveb8nThere aren’t any docs. This technique relies upon undocumented (i.e. unsupported) use of the api client protocols. you can see an example of this here https://github.com/ComputeSoftware/datomic-client-memdb/blob/master/src/compute/datomic_client_memdb/core.clj#2020-09-2423:13steveb8nin the (unlikely) event that Cognitect changes these protocols, you can always refactor using this technique (which is where I first tried the middleware idea) https://github.com/stevebuik/ns-clone#2020-09-2115:49joshkhis there a more efficient way to find all entities Y with any tuple attribute that references X?
(d/q '{:find [?tuple-entity]
:in [$ ?target-entity]
:where [[?tuple-attr :db/valueType :db.type/tuple]
[?tuple-entity ?tuple-attr ?refs]
[(untuple ?refs) [?target-entity ...]]]}
db entity-id)
this runs in around ~500ms given a few hundred thousand ?tuple-entitys which isn't too slow for its purpose, but i am worried that it won't scale with my data#2020-09-2117:13Joe Lane@joshkh What problem do you have that necessitates that kind of schema structure?#2020-09-2117:16joshkhi knew someone would ask that 😉#2020-09-2117:35joshkhi'm working with one database that has been modelled in such a way that entities with tuple attributes that are unique are no longer "valid" when any one of their tuple reference values is retracted. one drawback to having unique tuples is that you can end up with {:enrollment/player+server+board [p1 s1 nil]} after retracting a board, and then any subsequent retraction to another course will fail due to a uniqueness constraint so long as there is another enrollment for [p1 s1 b2].
i have implemented a business layer API for retracting different "kinds" of entities that cleanup any tuples known to be "about" them. but in my real data i have many, many different kinds of entities, and many tuples that could be about any one+ of them. so when adding a new tuple to the schema, or transacting an existing tuple that includes a new kind of entity, there is a feeling of technical debt when the developer must know which retraction API functions to update.
since the schema was designed in such a way that tuples should not exist with nil values, i was hoping for a "catch all" transactor function that can clean up related tuples without making complicated decisions about which ones to look for.#2020-09-2117:36joshkh(another option i explored was having component references from all entities back to tuples so that they are automatically retracted, but this proves to be just as tedious on the other end when transacting new entities)#2020-09-2117:55favilawhat if instead you wrote your own retractentities function which does what you want?#2020-09-2117:56favilaThis is possible if an enrollment becomes invalid (i.e. should be completely retracted) if any of player, server, or board are not asserted#2020-09-2117:56favilais that true?#2020-09-2117:57favilaI think that’s what you mean by this:
entities with tuple attributes that are unique are no longer "valid" when any one of their tuple reference values is retracted
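An editorial sketch of that cascading cleanup, using the on-prem peer API: scan the VAET index for everything referring to the entity being retracted, and retract any referrer whose pointing attribute is a tuple component. The `cascade-attrs` set and the `:enrollment/*` attribute names are assumptions for illustration.

```clojure
(require '[datomic.api :as d])

;; Attributes whose composite tuples must never be left with a nil
;; component; this set is an assumption for the sketch.
(def cascade-attrs #{:enrollment/player :enrollment/server :enrollment/board})

(defn retract-entity-cascading
  "Returns tx-data retracting eid plus every entity that refers to eid
   through one of the tuple-component attributes in cascade-attrs."
  [db eid]
  (let [referrers (->> (d/datoms db :vaet eid)            ; everything pointing at eid
                       (filter #(cascade-attrs (d/ident db (:a %))))
                       (map :e)
                       (distinct))]
    (into [[:db/retractEntity eid]]
          (map (fn [e] [:db/retractEntity e]) referrers))))
```

This is the "annotate/enumerate the attributes" variant: only attributes listed in `cascade-attrs` trigger the cascade, so adding a new tuple to the schema only means extending that set.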
#2020-09-2118:00favilathen you could query for [?referring ?attr ?e] (vaet index), see if the attr is one of your special ones, and if so, emit [:db/retractEntity ?referring]#2020-09-2118:01joshkh> your own retractentities function
as in an API layer function or a transactor level function?#2020-09-2118:01favilaYou could look at tupletype membership, but I think it’s going to be less surprising to have either a hardcoded list or your own annotation on the attribute, e.g. :required-for-entity-validity?#2020-09-2118:02favilatransactor level function would be safest#2020-09-2118:05joshkhagreed, and that's where i'm at. but if i'm understanding you correctly, the problem still stands of knowing which tuple attributes reference which entities if i want to shorten the list of possible matches. in my case, nearly any tuple can reference nearly any entity.#2020-09-2118:06favilayou don’t need to know about the tuples, but the attributes that compose the tuple#2020-09-2118:07favilasince you know you are retracting, if you retract an attribute which is a member of a tuple, you know the tuple is going to get a null in it, so you can retract the entire referring entity#2020-09-2118:07joshkhoh hey, that just might work...#2020-09-2118:07joshkhthank you for clarifying 🙂#2020-09-2118:10favilaI still think it’s probably safer to annotate/enumerate attributes which you want this cascading behavior on#2020-09-2118:11joshkhyes, i'm with you on that. ideally it's something i can update via annotations on the schema rather than in the codebase.#2020-09-2118:11favilacorrect#2020-09-2118:12favilaThis is just wanting one piece of isComponent’s behavior#2020-09-2118:12joshkhi've always thought of it as a "reverse component reference" :man-shrugging:#2020-09-2118:13joshkhwhich isn't 100% accurate, but for some reason it's stuck in my head#2020-09-2200:13twashingI’m trying to launch a “Solo” Datomic Cloud CloudFormation stack. But it consistently fails with
CREATE_FAILED: Embedded stack arn:aws:cloudformation:blah:blah:stack/appname-StorageF7F305E7-S5WZ9NKP2OOE/ebc51350-fc61-11ea-9f8a-12709ad3d671 was not successfully created: The following resource(s) failed to create: [GetSystemName].
Is there anything basic I’m missing? Details are in this SO post.
https://stackoverflow.com/questions/64001410/stack-resource-fails-datomic-cloud-cloudformation-launch#2020-09-2211:48TwanWe are running HAProxy to load balance peer requests. What do you recommend to stick our sessions to? Is SSL session ID a good idea?#2020-09-2211:49Twancc: @UDF11HLKC @UHJH8MG6S#2020-09-2212:10TwanWe were looking for something along the lines of https://www.haproxy.com/blog/maintain-affinity-based-on-ssl-session-id/#2020-09-2213:04tvaughanThis was many years ago, but I worked on a project that tried to use SSL session ids as a session id. We abandoned it pretty quickly. There was no guarantee that clients would even try to resume an SSL session. Some had built-in timers to reinitiate SSL sessions every n minutes where n was very small. I have no idea if things have changed, but I certainly wouldn't just do this without a lot of research first#2020-09-2215:17TwanBut that wasn't an project on Datomic I presume?#2020-09-2216:21tvaughanCorrect. It was not Datomic related#2020-09-2310:53TwanDoes anyone have an advice on how to do this on a Datomic peer cluster?#2020-09-2211:54xcenoI'm currently trying to follow the datomic ions tutorial, but after installing the datomic cli tools I'm stuck:
Executing datomic system list-instances <system> fails with AWS Error: Unable to fetch region..
The alternative command using the aws cli directly, succeeds though. I've installed the aws cli via docker and made an alias. Could this be my problem?
Edit: Installed AWS CLI tools via docker and tried again with a normal installation of the V1 and V2 tools. But the problem remains.
Edit2: Okay, never mind. The old docker aws process was locking the .aws/credential file. With a standard aws cli V1 version everything runs fine now.#2020-09-2213:16ghadi@rob703 is the region set in your AWS profile?#2020-09-2213:40xcenoYes it's all running now, thanks! My problem was the aws docker image blocking my entire ~/.aws directory. So when i finally installed a regular V1 CLI it couldn't access the config files.#2020-09-2213:49xcenoI have another question though, that got buried in #beginners.
I want to deploy a fulcro3 project with a dependency to libpython-clj to datomic ions. It includes http-kit as server.
1. Can I just ssh into the ION EC2 instance and set up the necessary python environment?
2. When deploying a :web ion: Do I just ignore http-kit and route the incoming request straight through my ring middleware stack, like so?
(def get-items-by-type-lambda-proxy
(apigw/ionize my-middleware-stack))
(defn -main []
  (start-dev-server-normally))
#2020-09-2215:15joshkhyes to 2.#2020-09-2215:18joshkhyour web ion points to the ionized handler, whereas an http-direct connection points to your standard, un-ionized handler#2020-09-2215:20xcenoPerfect thanks!
If I can figure out the python part now, I'm all set#2020-09-2218:10marshallthe ec2 instance(s) that run datomic are intended to be ephemeral and replaceable, i would definitely recommend against modifying their environments directly#2020-09-2218:17xcenoI see, any other ideas than on how to achieve my requirements?
Further down the line we might even have the need for EC2 instances with GPU's for some machine learning and rendering tasks. I'd love to have it all neatly packaged in an ion, so we don't have to mess with all the AWS setups ourselves#2020-09-2218:18marshallinteresting;
I don’t have any ideas offhand, but I’ll bring it up w the team#2020-09-2218:18xcenoawesome, thank you!#2020-09-2309:19Yuriy Zaytsev@rob703 consider putting your python code into a separate environment and interacting via lambdas#2020-09-2312:35xcenoI did consider this yesterday evening, it just loses the charm of being able to work with the code through clojure. Setting up a completely separate python environment and accessing it via remote calls just isn't the same as simply using it directly.
But yes, if all things fail that will be our workaround. It won't be too bad, but I still hope someone comes up with another solution#2020-09-2309:01Michaël Salihi@val_waeselynck Hi Valentin,
I just finished reading your very interesting post "Using Datomic in your app: a practical guide" https://vvvvalvalval.github.io/posts/2016-07-24-datomic-web-app-a-practical-guide.html
I would like to know, 4 years later, what would be the points that have changed the most?#2020-09-2314:59val_waeselynck@UFBL6R4P3 nothing much has changed really.
Datofu has been added to automate some common tasks such as schema declaration and migrations: https://github.com/vvvvalvalval/datofu
Some frameworks now make Datomic a bit more battery-included, such as Fulcro or Hodur. I've used neither.#2020-09-2320:31Michaël Salihi@val_waeselynck Perfect, thanks.
This is clearly one of the strengths of Clojure. (y)
Another thanks for the Datofu link, I was just telling myself that the schema declaration had perhaps evolved.#2020-09-2314:55kennytiltonIs it just me, or does everyone think of John McCarthy’s http://www-formal.stanford.edu/jmc/elephant/elephant.html when they contemplate Datomic?#2020-09-2317:07seancorfieldIt may be just you @hiskennyness -- I'd never heard of Elephant 2000 until you mentioned it. Looking at the syntax, Elephant reminds me more of Prolog than Datalog but I guess that's where the connection comes in?#2020-09-2317:12kennytilton“It may be just you”. It happens, @seancorfield.The connection I see is “An elephant never forgets!“, ie the possibility of painlessly referring to the past. Fun note: I actually remembered E2K as a DB proposal!#2020-09-2317:51kennytiltonA more substantial http://www-formal.stanford.edu/jmc/elephant/node3.html#SECTION00030000000000000000.#2020-09-2317:14seancorfieldAh, yes, gotcha!#2020-09-2318:02jarethttps://forum.datomic.com/t/datomic-cloud-715-8973/1634#2020-09-2318:07jaretNew Datomic Cloud Release ^#2020-09-2322:51twashingFrom my AWS EBS application, I’m failing to make a simple datomic.client.api/client call to a Datomic Cloud Solo topology.
Details here. Am I missing something?
https://stackoverflow.com/questions/64037250/datomic-client-api-client-failing-to-reach-a-datomic-cloud-solo-topology#2020-09-2323:28twashingOk, got past this. Now getting the error.
{:cognitect.anomalies/category :cognitect.anomalies/forbidden, :cognitect.anomalies/message "Forbidden to read keyfile at s3://<path>/db/<my-app>/read/.keys. Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile."}
https://docs.datomic.com/cloud/troubleshooting.html#aws-creds
But where to find my “ambient AWS credentials”?#2020-09-2400:19twashingOk, never mind. Sorted it!#2020-09-2406:22tatutany recommendations for backup/restore in datomic cloud? the feature request hasn’t seen activity in a long time. we’ve been rolling our own application level export/import but that unfortunately makes every entity id change#2020-09-2406:23tatutmainly for disaster recovery or just migrating the database to a new environment#2020-09-2409:23steveb8nbasically there isn’t a solution except to build your own. the (un)official answer seems to be that backups are not required because s3 is so reliable and, because there’s no excision, no data is ever lost#2020-09-2409:26steveb8nI can see the sense in that response but I think it doesn’t account for our customers who don’t understand our powerful new toolset. it forces us to take our customers out of their comfort zone which is not great for conservative (i.e. many enterprise) customers#2020-09-2409:28steveb8nthe new local dev client supports importing cloud data to local if that’s one of your use-cases#2020-09-2409:28steveb8nbut for migration, you have to roll your own#2020-09-2409:30steveb8nI’ll be happy to be corrected on any of these interpretations. FWIW it doesn’t change the fact that I really like the cloud managed service.#2020-09-2410:12steveb8nI have a uuid attr on every entity. If you have this, you don't care about entity IDs changing. Is that something you have tried?#2020-09-2411:47kennytiltonNot sure if you have the luxury of time, but from what little I know it would be a good idea to do what it takes to abide entity-ids changing. My understanding is that backup/restore will not guarantee entity-ids being unchanged.#2020-09-2421:36joshkhi am in the exact same boat right now, and am finding it challenging to justify to our enterprise customers that we can't "simply" backup/restore a db from storage to meet their (i.e. not our) DR requirements. 
and unfortunately for us, dev-local is not yet an option because we have string values that exceed dev-local's max character limit. that being said, i did a small test with dev-local to replay the demo mbrainz db transaction log into a new db and it worked well, but the t values are of course different which is a real shame.#2020-09-2502:03steveb8n@U0GC1C09L exactly! despite the fact that we might not need backup/recovery, in an enterprise sales cycle this can be a real problem. Even worse if it’s an RFP situation because the prospect writing the RFP might consider common DR techniques to be a must have. It doesn’t matter that we can explain it away, the internal politicians in the customer can simply use this as a battering ram to avoid choosing our product. It’s not always a technical question: I hope one day Cognitect will provide an answer for export so that Datomic Cloud can be used without this risk in the sales cycle. @U05120CBV any comments on this?#2020-09-2502:06steveb8nwhat’s interesting is that Datomic provides DR features that other dbs cannot e.g. you can recover a single tenants data to any point in time, even in a multi-tenant system. Update in-place dbs cannot do this. So we are technically superior for DR. But that doesn’t always work in enterprise sales.#2020-09-2502:09NassinCurious, why does your application relies on entiti ids ?#2020-09-2505:00tatutDR is also to protect against human error, customers want to have a backup file stored in a completely different AWS account S3 bucket… or even have it downloaded to their own infastructure#2020-09-2505:01tatutand when you have backups you need to be able to restore from them… now we’ve rolled our own and fixed some mistakes we’ve made in relying too much on :db/id values#2020-09-2506:22steveb8nthat’s true. you could accidentally delete your Datomic s3 bucket and then you’d be finished! 
goodbye biz 😞#2020-09-2506:23steveb8nI wonder if some kind of s3 level backup would be supported by Datomic to guard against this?#2020-09-2519:31stuarthalloway@U0GC1C09L @U0510KXTU We hear you and working on things in this area.#2020-09-2519:33joshkhcheers. i know that we're in good hands. 🙂#2020-09-2718:23daniel.spanielThis is mega important, not just for enterprise, but for any datomic cloud user who wants to preserve their sanity. The sooner the better Stuart. This is huge deal.#2020-09-2421:47joshkhfor the sake of testing query performance, is there a way to flush/bust the query cache in Datomic Cloud other than by renaming bound variables?#2020-09-2519:31jarethttps://forum.datomic.com/t/cognitect-dev-tools-version-0-9-46-now-available/1636#2020-09-2519:34jaret^ New release of Dev Tools provides dev-tools via Cognitect maven repo!#2020-09-2519:51seancorfieldI didn't see anything stating the new version of REBL in dev-tools -- I had to download the ZIP to find out what the latest version was (0.9.242) so that I could put it in my deps.edn and pull it from the Cognitect Maven repo.#2020-09-2519:51seancorfield(and, yes, I know that having downloaded the ZIP, I don't need to set it up that way -- I could just use a local install -- but I prefer the idea of being able to get things direct from Maven)#2020-09-2519:54jaretThank Sean, I'll share this with the team and see what we can do.#2020-09-2519:56Michael J DorianHey! I want to pull all data on an entity based on it's entity id, can anyone tell me what's wrong with this query? (d/pull (d/db conn) [:db/id 79164837199970])#2020-09-2519:56Michael J DorianHey! I want to pull all data on an entity based on it's entity id, can anyone tell me what's wrong with this query? (d/pull (d/db conn) [:db/id 79164837199970])#2020-09-2519:57Michael J DorianI know the id based on a previous query#2020-09-2520:00manutter51missing colon on :db/id#2020-09-2520:02Michael J DorianAh, had that messed up. 
This isn't working for me either, it just says
Execution error (NullPointerException) at com.google.common.base.Preconditions/checkNotNull (Preconditions.java:782).
null
(d/pull (d/db conn) [:db/id 79164837199970])#2020-09-2520:02souenzzo[:db/id 42] isn't a valid lookup eid
it should be just 42#2020-09-2520:03Michael J Dorianlike so? (d/pull (d/db conn) 79164837199970)#2020-09-2520:03manutter51Ah, right, that’s true#2020-09-2520:03souenzzoyup#2020-09-2520:03Michael J DorianI guess there's something wrong with the query I'm getting this id from then, I'm still getting null pointer exceptions#2020-09-2520:04manutter51funny, though, why wouldn’t that work just like any other unique ID pair?#2020-09-2520:04souenzzo:db/id isn't a "datomic ident"
There is no [42 :db/id 42] tuple in "datoms stack"#2020-09-2520:04manutter51@UNMBR6ATT Are you sure you haven’t lost the conn? That could give you an NPE#2020-09-2520:05manutter51@U2J4FRT2T Ok, that makes sense#2020-09-2520:05manutter51I guess I’ll aways be an old SQL guy at heart#2020-09-2520:06Michael J DorianI don't think so, I can still run a query that gives me [[79164837199970]] and all of my functions are making a fresh (d/db conn)#2020-09-2520:07Michael J Dorianthe query that gives me the id is
[:find ?e :where [?e :user/email ?email]]
#2020-09-2520:08Michael J Dorianwhich, if I'm not mistaken, should just give me the entity ids of anything with an email address#2020-09-2520:19csmyou don’t have a selector, you’d need (d/pull (d/db conn) '[*] id) to pull everything for id.#2020-09-2520:20csmClient api can also use an arg map: (d/pull db {:selector '[*] :eid id})#2020-09-2520:21Michael J Dorianthank you, that did it!#2020-09-2520:21manutter51Yeah that was it, I was looking up the docs:
datomic.api/pull
([db pattern eid])
Returns a hierarchical selection of attributes for eid.
See for more information.#2020-09-2520:21manutter51I was just working with pull expressions too, should have spotted that sooner.#2020-09-2520:22Michael J DorianSorry for the silly question, these docs have a lot of "..." that really throws me off. I'm curious why I didn't get an arity exception though#2020-09-2520:23manutter51Yeah, seems like you should have.#2020-09-2520:23Michael J DorianOh, I guess I could have included the selector and :eid all in a map. All makes sense now. Thanks everyone!#2020-09-2713:37nandoI'm trying to work out how to sort a collection of items nested within the data structure returned from a pull pattern, particularly a pull that uses a reverse lookup. Here's the pull pattern I'm working with:
[:db/id
{:batch/formula [:db/id :formula/name]}
:batch/doses
:batch/date
{:batch-item/_batch [:db/id
{:batch-item/nutrient [:nutrient/name
{:nutrient/category [:category/sort-order]}]}
:batch-item/weight
:batch-item/complete?]}]
The :batch-item/_batch bit returns a rather large collection and I want to sort it by :category/sort-order and :nutrient/name#2020-09-2713:39souenzzo@nando you can use #specter with something like (transform [(walker :batch-item/nutrient) :batch-item/nutrient] (partial sort-by :nutrient/category) (d/pull ...))#2020-09-2713:44nandoSo I would wrap the pull in a specter transform? With a query that returns a flat structure, I'd use
(sort-by (juxt :sort-order :nutrient-name)
(d/q ...
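For the nested pull result above, a library-free alternative to specter is to sort the reverse-lookup collection in place with plain clojure.core. A sketch, assuming exactly the keys shown in the pull pattern:

```clojure
;; Sort the :batch-item/_batch collection of a pull result by category
;; sort order, then nutrient name (key paths follow the pull pattern above).
(defn sort-batch-items [batch]
  (update batch :batch-item/_batch
          (partial sort-by
                   (juxt #(get-in % [:batch-item/nutrient
                                     :nutrient/category
                                     :category/sort-order])
                         #(get-in % [:batch-item/nutrient
                                     :nutrient/name])))))
```

`update` rewrites only the nested collection, leaving the rest of the pull result untouched.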
#2020-09-2714:22souenzzo#specter will help you to "find some sub-structure and transform it without changing anything outside it"#2020-09-2714:24souenzzoonce you find what you want to transform (the second argument, known as 'path'. On the example: find a map with this key, and 'enter' this key)#2020-09-2714:24souenzzoin this case the transform function will not be sort-by :nutrient/category, but something like #(sort-by (fn [el] ((juxt ..) el)) %)#2020-09-2714:26souenzzothe path is [(walker :batch-item/nutrient) :batch-item/nutrient ALL]
TLDR; #datomic does not do anything about sorting#2020-09-2714:50nandoThanks @souenzzo , will look into #specter next.#2020-09-2718:25daniel.spanielDoes datomic query syntax allow for group-by? I wanted to group some datums by date and then count them.#2020-09-2719:00nandoI've been looking at the clojure core function group-by for this https://clojuredocs.org/clojure.core/group-by#2020-09-2719:02daniel.spanielHas it worked ? I will try it as well .. see what happens .. good idea#2020-09-2719:06nandoI haven't incorporated it into the app I'm working on, but it certainly worked in the REPL#2020-09-2719:20nando(defn group-nutrients-by-category
[v]
(group-by :category-name v))
I've got a datomic query that returns nutrients, and each of these have a category, such as Vitamins, Minerals, Amino Acids, Plant Extracts. I just tried the above, using (group-nutrients-by-category (find-all-nutrients)) and it worked perfectly, as expected.#2020-09-2719:27daniel.spanielright, that is doing group-by after the query .. i meant in the query itself ..#2020-09-2719:30nandoIt is my understanding that sorting and grouping is done with clojure functions rather than datalog query syntax.#2020-09-2719:31daniel.spanieli reckon so. there are some other aggregate function like max, min count, but not group-by or sort-by that are built in#2020-09-2719:32nandoHave you tried to sort the results of a query yet?#2020-09-2719:32daniel.spanieloh sure, tis easy#2020-09-2719:33Joe Lanehttps://docs.datomic.com/cloud/query/query-data-reference.html#aggregate-example#2020-09-2719:34nando^^^#2020-09-2719:35Joe LaneIs this not what you mean when you say group-by?#2020-09-2719:38Joe Lane@dansudol '[:find ?date (count ?e) :where [?e :entity/date ?date]]#2020-09-2719:39Joe LaneHave you looked at https://docs.datomic.com/cloud/query/query-data-reference.html#aggregate-example#2020-09-2719:41daniel.spanielyes, that is pretty close to the query i need Joe, interesting .. i guess if that does the same as group by ( i am reading the examples now ) then that does it .. 
i am going for something a big more complicated ( count by date range ) but if this works as grouping by date then i am super close to what i want#2020-09-2719:44Joe LaneAre the date ranges contiguous and non-overlapping?#2020-09-2719:44daniel.spanielyes#2020-09-2719:44daniel.spanielbeginning ->end of a month , so finding items whose dates are in that range and counting them up, where let's say the range is a year, so each month, wanted the count of the items ( that have date field on them )#2020-09-2719:46Joe LaneWhat datomic system are you using?#2020-09-2719:46daniel.spanielcloud#2020-09-2719:51nandoIf I'm understanding the difference correctly, using group-by will return all records, while using count in an aggregate query will return a single record for each date.#2020-09-2719:53daniel.spanielyou can't use group-by in the query though, just to operate on the returned data , but the last part is right i reckon#2020-09-2719:54daniel.spanieli guess the count by date is kinda grouping dates in a way so there is the element of group by there#2020-09-2719:57Joe Lane'[:find ?month (count ?e)
:where
[(java.time.ZoneId/of "UTC") ?UTC]
[?e :entity/date ?date]
[(.toInstant ^Date ?date) ?inst]
[(.atZone ^Instant ?inst ?UTC) ?inst-in-zone]
[(.getMonthValue ^ZonedDateTime ?inst-in-zone) ?month]]
#2020-09-2719:58Joe LaneConsider the above a sketch, written in slack, untested, likely need to add a few things.#2020-09-2719:59daniel.spanielthat is pretty hilarious Joe, nifty idea , i will hack around it#2020-09-2720:03Joe LaneThe instant type in datomic is a java.util.Date, so if you want to use the nice .getMonthValue method you'll need some combination of that.
There are several other things you could do like make a custom query function to do all the gnarly time conversion stuff in an isolated way. https://docs.datomic.com/cloud/query/query-data-reference.html#deploying
Other than that time conversion stuff, this is a pretty trivial query, right?
It's basically:
'[:find ?month (count ?e)
:where
[?e :entity/date ?date]
[(my.ions/date->month ?date) ?month]]
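A plain-function version of that conversion may make the sketch easier to test on its own; the my.ions/date->month name in the query above is hypothetical, and this is one way it could be written, assuming UTC is the zone you want:

```clojure
(import '(java.time ZoneId)
        '(java.util Date))

;; Hypothetical helper in the spirit of my.ions/date->month above.
;; Datomic instants are java.util.Date values, so go via java.time.
(defn date->month
  "Month number (1-12) of a java.util.Date, interpreted in UTC."
  [^Date d]
  (.getMonthValue (.atZone (.toInstant d) (ZoneId/of "UTC"))))

(date->month #inst "2020-09-27")  ;; => 9
```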
#2020-09-2720:04Joe Lane(You might need to use :with ?month in that query, I'd have to think about it...)#2020-09-2720:04daniel.spanielpretty much, your idea is good .. me like#2020-09-2720:26daniel.spanielinteresting @lanejo01 .. this works, very nice ( i made my own database function as you suggested ) slick !#2020-09-2720:27Joe LaneGreat to hear! #2020-09-2806:38David PhamIs it possible to find the entity with the maximum of some attribute in datalog?#2020-09-2808:37Yuriy Zaytsev(d/q '{:find [(max ?attr)] :in [$] :where [[_ :some/attribute ?attr]]} db)#2020-09-2815:58David PhamHow do you get the entity whose attribute is the maximum?#2020-09-2816:08Yuriy Zaytsev(d/q '{:find [?entity]
:in [$]
:where [[(datomic.client.api/q '{:find [(max ?attr)]
:in [$]
:where [[?entity :some/attribute ?attr]]} $) [[?attr]]]
[?entity :some/attribute ?attr]]} db)
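An alternative to the nested query, under the assumption that the result set is of modest size: pull the entity/value pairs out and take the max on the peer side. The rows below are a stand-in for what (d/q '[:find ?e ?v :where [?e :some/attribute ?v]] db) would return:

```clojure
;; rows stands in for datalog results: [[entity-id attr-value] ...]
(def rows [[100 5] [101 42] [102 7]])

(defn entity-with-max
  "Entity id whose attribute value is largest."
  [rows]
  (first (apply max-key second rows)))

(entity-with-max rows)  ;; => 101
```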
#2020-09-2816:42David PhamSo nested queries?#2020-09-2816:43Yuriy Zaytsevyes#2020-09-2822:20steveb8nQ: I have a very slow memory leak in a production Cloud system. Before I start dumping logs and digging around, I wonder if folks out there have any tricks/tips for this process. I’ll post the chart in the thread…..#2020-09-2822:22steveb8n#2020-09-2822:22steveb8nIn particular, I wonder why the indexer line goes up. And does that provide a clue about the leak?#2020-09-2912:15jaretHi @U0510KXTU, have you actually seen a node go OOM or are you just noticing this in your metrics/dashboard? This small snippet matches with the expectations I have for indexing. The indexing job occurs in the background. Indexing is done in memory and then the in-memory index is merged with the persistent index and a new persistent index is written to the storage service. If you widen the time scale you should see a saw tooth pattern on your indexing line.#2020-09-2922:20steveb8n@U1QJACBUM No I haven’t yet in prod but the same code running on Solo (test system) has gone OOM. That chart is 2 weeks, hence no saw tooth. Here’s the hour just gone. Saw tooth as expected#2020-09-2922:21steveb8nInteresting that you think this is normal. Is there some doc somewhere that describes what “normal” is for charts in the dashboard? That would help me (and others I suspect)#2020-09-2922:22steveb8nWhenever I deploy new code, the FreeMem line jumps back up to 10Mb and starts the slow decline#2020-09-2906:29armedHi. Is there any way to make a custom transaction function omit an operation (e.g. return nil instead of a transaction operation)? I want to omit insertion of data in some situations.#2020-09-2906:36armedI have a permission entity with a composite tuple (unique) on all three attributes. When I try to bulk insert a list of permissions the transaction sometimes aborts with a unique exception.
I want to make something like postgres's on conflict do nothing. Here is my transaction function, which is obviously not working.
(defn try-add-permission
[db {:keys [permission/app
permission/user
permission/role] :as perm}]
(if (d/q '[:find ?p .
:in $ ?app ?user ?role
:where
[?p :permission/app ?app]
[?p :permission/user ?user]
[?p :permission/role ?role]]
db app user role)
nil
perm))#2020-09-2907:06tatuthow about returning empty vector instead of nil?#2020-09-2907:18armedI already tried that. Got error
{:status :failed,
:val #error{:cause "Cannot write #2020-09-2907:21armed@(d/transact (db/get-connection) [[auth.server.cas-sync/try-add-permission perm]])
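One detail worth noting about the try-add-permission function above: a transaction function is expected to return transaction data, so the branches should yield vectors ([] for "do nothing", [perm] to add the entity map) rather than nil or a bare map. With the d/q existence check factored out into a boolean, the remaining logic is plain Clojure (a sketch; names are illustrative):

```clojure
;; exists? stands in for the truthy result of the d/q lookup above.
;; A tx fn returns a vector of tx data; [] means "on conflict do nothing".
(defn permission-tx [exists? perm]
  (if exists?
    []        ;; permission already present: transact nothing
    [perm]))  ;; otherwise add the entity map

(permission-tx true {:permission/role :admin})   ;; => []
(permission-tx false {:permission/role :admin})  ;; => [{:permission/role :admin}]
```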
#2020-09-2907:32tatutyou need to quote db fn name#2020-09-2907:35armedquoting does not help. And the docs do not use quoting https://docs.datomic.com/on-prem/database-functions.html#using-transaction-functions#2020-09-2907:36tatutah, it's on-prem, don't know about that#2020-09-2908:28favilaThis is a quoting issue. The exception is related to serializing the function, which you can't do. Your function is not being executed yet#2020-09-2908:31favilaIn fact your transaction data hasn't left the peer. What is the error you get when you quote the function name?#2020-09-2909:25armedwhen I quote like this:#2020-09-2909:25armed@(d/transact
(db/get-connection) [['auth.server.cas-sync/try-add-permission perm]])
#2020-09-2909:26armedI get error: Could not locate auth/server/cas_sync__init.class, auth/server/cas_sync.clj or auth/server/cas_sync.cljc on classpath. Please check that namespaces with dashes use underscores in the Clojure file name.#2020-09-2909:32favilais this function installed on your DB?#2020-09-2909:33favilayou’ll note in your link you need to make classpath transaction functions available on the classpath of the transactor. This error looks like it can’t find the function#2020-09-2909:35favilaactually it can’t even find the namespace#2020-09-2910:44armed@U09R86PA4 thanks, It seems that I misunderstood how transaction functions work.#2020-09-2909:50tatutUpdating datomic cloud compute group to 2020/09/23 715-8973 release, the log shows that the compute nodes don’t seem to get up after upgrade… complaining that our application code has syntax error (which it shouldn’t as it worked in previous version)#2020-09-2909:51tatutproduction topology#2020-09-2909:53tatut"Msg": ":datomic.cloud.cluster-node/-main failed: Syntax error compiling at …
#2020-09-2909:54tatutit doesn’t seem to find a required .cljc file#2020-10-0107:53onetomhas this been solved yet?#2020-10-0111:33tatutyes, workaround with support… it seemd you can’t have paths that point to for example `“../common/src” (like we have sharing backend and frontend cljc code)#2020-10-0111:34tatutit worked in all previous versions, but it doesn’t anymore with the latest#2020-10-0111:34tatutworkaround with symlinks seems ok#2020-09-2911:49onetomWhen I connect a web ion via an API Gateway proxied thru a lambda, my ion function is supposed to receive a ring-compatible map as an argument (according to the official Ion docs).
However, the map I receive only contains :headers, :server-name and a :datomic.ion.edn.api-gateway/data and /json keys, so I can't just use the typical routing libs to build my web-app or http API, because those depend on the :request-method and :uri keys of the request map.
Is it a known issue?
Is it something related to the Lambda proxy data format version?
Is it just some kind of mis-configuration?#2020-09-2911:49onetomhere is an example request map I observed:
{:headers
{"accept-encoding" "gzip, deflate",
"content-length" "0",
"host" "",
"user-agent" "http-kit/2.0",
"x-amzn-trace-id" "Root=1-5f730b24-4ac64db84deabaf53c38af60",
"x-forwarded-for" "42.200.88.157",
"x-forwarded-port" "443",
"x-forwarded-proto" "https"},
:server-name "",
:datomic.ion.edn.api-gateway/json
"{\"version\":\"2.0\",\"routeKey\":\"$default\",\"rawPath\":\"/\",\"rawQueryString\":\"\",\"headers\":{\"accept-encoding\":\"gzip, deflate\",\"content-length\":\"0\",\"host\":\"\",\"user-agent\":\"http-kit/2.0\",\"x-amzn-trace-id\":\"Root=1-5f730b24-4ac64db84deabaf53c38af60\",\"x-forwarded-for\":\"42.200.88.157\",\"x-forwarded-port\":\"443\",\"x-forwarded-proto\":\"https\"},\"requestContext\":{\"accountId\":\"191560372108\",\"apiId\":\"8g759uq7nb\",\"domainName\":\"\",\"domainPrefix\":\"8g759uq7nb\",\"http\":{\"method\":\"GET\",\"path\":\"/\",\"protocol\":\"HTTP/1.1\",\"sourceIp\":\"42.200.88.157\",\"userAgent\":\"http-kit/2.0\"},\"requestId\":\"Tn6trha8yQ0EMGg=\",\"routeKey\":\"$default\",\"stage\":\"$default\",\"time\":\"29/Sep/2020:10:23:32 +0000\",\"timeEpoch\":1601375012276},\"isBase64Encoded\":false}",
:datomic.ion.edn.api-gateway/data
{:version "2.0",
:routeKey "$default",
:rawPath "/",
:rawQueryString "",
:headers
{:accept-encoding "gzip, deflate",
:content-length "0",
:host "",
:user-agent "http-kit/2.0",
:x-amzn-trace-id "Root=1-5f730b24-4ac64db84deabaf53c38af60",
:x-forwarded-for "42.200.88.157",
:x-forwarded-port "443",
:x-forwarded-proto "https"},
:requestContext
{:routeKey "$default",
:stage "$default",
:time "29/Sep/2020:10:23:32 +0000",
:domainPrefix "8g759uq7nb",
:requestId "Tn6trha8yQ0EMGg=",
:domainName "",
:http
{:method "GET",
:path "/datomic",
:protocol "HTTP/1.1",
:sourceIp "42.200.88.157",
:userAgent "http-kit/2.0"},
:accountId "191560372108",
:apiId "8g759uq7nb",
:timeEpoch 1601375012276},
:isBase64Encoded false},
}#2020-09-2911:52onetommy ion-config.edn looks like this:
{:allow [datomic.ion.starter.http/ionized-app]
:lambdas {:app
{:fn datomic.ion.starter.http/ionized-app
:integration :api-gateway/proxy
:description "return html app"}}
;:http-direct {:handler-fn datomic.ion.starter.http/return-something-json}
:app-name "kyt-dev"}#2020-09-2911:55onetomI'm using the Solo topology (the version, which was the latest last week), otherwise I wouldn't bother with lamdba gateways if I could use the production topology.#2020-09-2911:57onetomthe docs are mentioning these /json and /data keys in a note, but just in the table above the note, they are not namespaced:
https://docs.datomic.com/cloud/ions/ions-reference.html#web-ion#2020-09-2912:43Joe LaneHey @U086D6TBN , have a look at https://github.com/pedestal/pedestal.ions
And https://github.com/pedestal/pedestal-ions-sample
#2020-09-2915:27onetomthanks, I had a look, but I don't see how it would deal with my situation.
it does have a great example of a protocol which converts the response body into an input stream, which I still need, because the reitit.ring/create-resource-handler just returns a java.io.File as a :body and Datomic threw some ->>bbuff conversion error as a result.
for now, I just have a middleware to transform the above mentioned request map to be ring compatible:
(if-let [gw-data (:datomic.ion.edn.api-gateway/data gw-req)]
(-> gw-req
(assoc :uri (-> gw-data :requestContext :http :path))
;; ring expects a lowercase keyword (e.g. :get), not the "GET" string:
(assoc :request-method (-> gw-data :requestContext :http :method
clojure.string/lower-case
keyword)))
gw-req)#2020-09-2915:33Joe LaneI want to make sure I understand, did you call apigw/ionize on your ring handler function?
https://docs.datomic.com/cloud/ions/ions-tutorial.html#lambda-proxy#2020-09-2915:34onetomI have the suspicion that our ion-config.edn doesn't need the :integration :api-gateway/proxy option anymore if I use the newer style HTTP API gateway setup (as opposed to the RESTful API style), it just hasn't been documented...#2020-09-2915:34onetomhttps://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vs-rest.html#2020-09-2915:37onetom@U0CJ19XAM yes, that .../ionized-app is defined as (apigw/ionize app), where app is simply:
(fn [req]
{:status 200
:headers {"content-type" "text/plain"}
:body (with-out-str
(clojure.pprint/pprint req))})#2020-09-2915:37onetomthat's how I obtained the request map I showed above#2020-09-2915:38Joe LaneTo make sure I understand correctly, you're not using the supported integration. Is there still a problem if you use the supported one?#2020-09-2915:39onetomwhat do you mean by supported integration?#2020-09-2915:40Joe Lanehttps://clojurians.slack.com/archives/C03RZMDSH/p1601393646108500?thread_ts=1601380150.089100&cid=C03RZMDSH#2020-09-2915:41onetomI'm just realizing that probably the Datomic docs are talking about how to integrate a web ion with the traditional RESTful API gateway, not the new "HTTP API".
I'm using this "new style" gateway, because it supports JWT authorizers out of the box, without the need to deploy a lambda function just for that purpose.#2020-09-2915:43onetomyes, the mentioned request map was observed when my ion-config.edn contained that :integration :api-gateway/proxy setting#2020-09-2915:46onetommy API gw was created by this sample CF template though:
https://github.com/awsdocs/amazon-api-gateway-developer-guide/blob/master/cloudformation-templates/HTTP/http-with-jwt-auth.yaml
which I found in these AWS docs:
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-jwt-authorizer.html#2020-09-2915:48Joe LaneI'm not sure that approach is compatible with the current mechanism for making a ring compatible handler. I think you're in uncharted territory, rife with undefined behavior.#2020-09-2915:49onetomProbably... Thanks looking into it!#2020-09-2915:51onetomI will probably just transition to the production topology, though I can foresee issues with the VPC link and NLB in that case, which I have just as little experience with as I have with Cognito and JWT authorizers :)#2020-09-2915:52Joe LaneYou will get a raw payload if you use http-direct#2020-09-2915:52Joe Laneit will not be a nice map like above.#2020-09-2915:53onetomAlternatively, I can write a replacement ionizer function for this new "HTTP API" gateway#2020-09-2915:54Joe LaneYou're free to do that, but we won't be able to support that if there are issues.#2020-09-3000:43jeff.terrell@U086D6TBN try switching the payload format version in the integration config for your API Gateway instance. Saying that based purely on memory so details may be off, but I’m pretty sure I got cut by this exact issue and that was the solution that I eventually found.#2020-09-3002:37onetom@U056QFNM5 thanks a lot for the advice. it worked indeed and I can even see how the JWT authorizer has decoded the token!#2020-09-3002:38onetomso i don't have to fall back to the old, RESTful-style API gateway creation#2020-09-3003:15jeff.terrellYou're very welcome. Also keep an eye out for setting cookies. I ran into an issue where the value of my set cookie header in my ring response map was a vector rather than a string. Apparently this is legal in ring, but it didn't work in an Ions context. Again, going from memory here, but I think that was right. 
A simple middleware to detect such values and only take the first value out of the vector worked.#2020-10-0710:28xcenoHi guys, just found this thread because I'm working on the exact same thing right now (trying to deploy an SPA as ion / lambda proxy)
I got confused by the mismatch between the Ion tutorial and the API Gateway console, so just to clarify once more:
What the Datomic Ion docs are talking about is now called REST API on AWS?
And the HTTP API is a new thing, that is not officially supported?#2020-10-0713:41jeff.terrellYes, REST is the old kind that the docs implicitly refer to. HTTP can work, but it's not what the docs describe specifically.#2020-10-0714:00xcenoUnderstood, thank you!#2020-09-2915:29Petrus TheronJust gotten bitten for an hour getting 401 Unauthorized for Datomic Pro due to missing XML schema in ~/.m2/settings.xml, which is not mentioned in https://my.datomic.com/account. Previously: https://clojurians-log.clojureverse.org/datomic/2019-01-30/1548890962.888000#2020-09-2915:34jaretSorry about that, what is missing in https://my.datomic.com/account ? I see the .m2/settings.xml described as:
;; In ~/.m2/settings.xml:
<!-- ~/.m2/settings.xml (see the Maven server settings docs) -->
<servers>
…
<server>
<id></id>
<username>REDACTED</username>
<password>REDACTED</password>
</server>
…
</servers>
;; In deps.edn:
{:mvn/repos
{"" {:url ""}}
:deps
{com.datomic/datomic-pro {:mvn/version "${VERSION}"}}}#2020-09-2915:34Petrus Theron<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns=""
xmlns:xsi=""
xsi:schemaLocation="">
#2020-09-2915:35jaretack! let me talk to Alex so i fully understand and I can update our http://my.datomic.com account to reflect that!#2020-09-2915:39Petrus TheronAlso - and this is probably out of scope - but even after fixing settings.xml (and running clj from terminal), IntelliJ Cursive still reported a 401 for deps because it caches the 401 error for a given deps.edn (not sure if this is due to Maven or IntelliJ). Fixed after reordering any two items under :deps, then I could start a REPL.#2020-10-0920:42tekacsI'm now debugging this 401 issue in my own case, where Datomic fails to download consistently on Github Actions for CI purposes but not locally on my machine.#2020-09-2918:52Michael J DorianHey! I have a transaction that always puts one entry into datomic, and I'd like to get it to return the entity id of the new entry.
I notice that the returned map contains :tx-data, which has the data I need. But I'm not sure how to read the contents of the returned datum, and, indeed, if this is considered bad practice or not.
Help appreciated!#2020-09-2918:54ghadithe returned map also contains :tempids which is a map of tempid -> entity id#2020-09-2918:54ghadi@doby162#2020-09-2918:55Michael J DorianI'm getting an empty map on that one, do I just need to add a temp-id to the transaction?#2020-09-2918:57ghadipaste your code/input#2020-09-2918:58ghadiif your transaction included tempids, datomic returns the resolved ids after it transacts#2020-09-2919:01Michael J Dorian{:tx-data [#:user{:email "e", :password "q", :name "q", :token "q"}]} ; this query is generated by (make-record :user) and executed
(def q (make-record :user "q" "q" "q" "q"))
(:tempids q)#2020-09-2919:06Michael J DorianAh, ok! Just had to add :db/id "nonsense" to my query and now the map gives me {"nonsense" id} !#2020-09-2919:06Michael J Dorianthanks!#2020-09-2919:09ghadiif :user/email is a unique attribute in your database, you can use it to lookup entities without entity ids#2020-09-2919:11Michael J DorianOh, nice#2020-09-3007:21Ben SlessHi all, I have a silly question regarding the pricing model, maybe I'm just missing something:
Is the pricing only of instances running Datomic (transactor, etc), or for application instances using the client library as well?#2020-09-3013:26marshall@ben.sless Datomic Cloud presumably?
The pricing is only for the nodes/instances running Datomic Cloud software (the nodes started by the Cloudformation template)#2020-09-3013:32Ben Slessalright, then no charge for the number of clients, only for instances running Datomic itself.
What about on-prem?#2020-10-0107:59onetom@ben.sless i think this page answers that question well:
https://www.datomic.com/get-datomic.html
> All Datomic On-Prem licenses are perpetual and include all features:
>
> • Unlimited Peers and/or Clients#2020-09-3013:43ziltiI've seen datomic.api/entity. How is it supposed to work? I give it the db plus a :db/id and it is then supposed to give me a map with all attributes? Or how do I use it? There is only documentation for the Java version, not the Clojure one.#2020-09-3013:45ziltiThe immediate result is a map with the key :db/id and nothing else#2020-09-3013:47marshalldocumentation for entity: https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/entity#2020-09-3013:48marshallseveral day of datomic examples use the entity API:
https://github.com/Datomic/day-of-datomic/blob/a5f7f0bd084a62df7cc58b9a1c6fe7f8340f9b23/tutorial/hello_world.clj
https://github.com/Datomic/day-of-datomic/blob/a5f7f0bd084a62df7cc58b9a1c6fe7f8340f9b23/tutorial/data_functions.clj#2020-09-3013:49marshallhowever, you should also familiarize yourself with the pull API, as it provides much of the same functionality as entity (some differences), but is available in both peer and client#2020-09-3014:24ziltiThanks, ah, so it only fetches the key names when explicitly asked! Well, I don't think I'll ever use the client library, but I have looked at pull as well (and use it regularly inside a query)#2020-10-0108:01onetomwhy do u think u would never use the client library?#2020-10-0109:52ziltiWe're using Datomic on-prem and only use Clojure, so there's simply no reason to#2020-09-3017:52jI'm running the free version datomic via https://github.com/fulcrologic/fulcro-rad-demo. How do I interface with datamic to peek inside it?#2020-10-0110:55joshkhpulling a reference to an ident returns an entity with the ident's db id as well as its db ident:
(d/q '{:find [(pull ?n [:some/ref])]
:in [$ ?n]}
db 999)
=> [{:some/ref [{:db/id 987 :db/ident :some/ident}]}]
whereas pulling a tuple with a reference to an ident only returns its db id
(d/q '{:find [(pull ?n [:some/tuple])]
:in [$ ?n]}
db 999)
=> [{:some/tuple [987]}]
it would be great if pulling a reference to an ident from a tuple behaved the same way as pulling a reference to an ident from outside a tuple. i could untuple the values in the query constraints and find their idents, but that constrains my results to only entities that have :some/tuple values, unlike pull#2020-10-0111:07joshkhin other words, i can't seem to pull https://docs.datomic.com/cloud/schema/schema-modeling.html#enums from within tuples#2020-10-0114:15vnczHey!
I have recently been looking at Datomic, reading the documentation and watching almost every single video I could possibly find on the internet. I really like what I've seen so far; I do have couple of questions that I still haven't figured out. Any help here would be appreciated.
I've seen this idea that once you get a db from a connection — it's an immutable value and you can do queries on it by leveraging the query engine that's embedded in the application.
That is a great abstraction, but I am assuming that under the hood the peer library will be grabbing the required datoms from the storage engine; that inevitably will go over the network. With that in mind:
• What happens if there's a network failure while fetching the data from the storage? Is the peer library going to retry that automatically? What if it fails continuously? Will I ultimately see a thrown exception out of nowhere?
• What happens if, to satisfy a query, the peer library needs to grab more data from the storage engine? Is that going to block the thread where the query is being executed? (I'm assuming this depends on whether I'm using the sync or async API)#2020-10-0115:07marshallThe details of the answers to these questions depend a little bit on whether you're talking about the client API (cloud or on-prem) or the peer API (on-prem only)#2020-10-0115:07marshallhttps://docs.datomic.com/on-prem/clients-and-peers.html#2020-10-0115:52vnczAh ok interesting. I'll definitely take a look at it then#2020-10-0122:59vncz@U05120CBV I've just reviewed the document. I guess my confusion is here
> Compared to the Peer API, the Client API introduces a network hop for read operations, increasing latency.
Doesn't the Peer API also need to grab the data from the storage engine? How does the data get delivered then?#2020-10-0123:21marshallPeer reads directly from storage itself, client sends the request to peer server or a cloud node, where the storage read occurs#2020-10-0200:32vnczWell ok, so my point still stands @U05120CBV
• What happens if there's a network failure while fetching the data from the storage? Is the peer library going to retry that automatically? What if it fails continuously? Will I ultimately see a thrown exception out of nowhere?
#2020-10-0200:33marshallYes it will retry. It may eventually time out and/or throw#2020-10-0201:20vnczOk understood. So although the db value is immutable, it might fail to deliver the data in edge cases. That clarifies, thanks a lot!#2020-10-0114:26Sam DeSotaHey all, we just had an issue where ##NaN was transacted into a datomic on-prem db, and a couple weird things happened:
• It was impossible to update the values, unless you manually used db/retract + db/add, just using db/add would not automatically retract ##NaN value
• We also couldn’t search for the ##NaN values with a query
Is this known undefined behavior or a bug that should be reported? Seems like ##NaN values shouldn’t even be allowed to be transacted.#2020-10-0114:26vnczI am also kind of confused of what client I should be using here 🤔#2020-10-0115:08marshallcloud or on-prem ?
or dev-local?#2020-10-0115:51vnczI have a local Datomic instance running on my computer but I could switch to dev-local if that makes the things easier. I'm more curious about why 3 different libraries#2020-10-0116:10marshallif you're using on-prem you can use the peer library or you can use the peer-server & the client-pro library#2020-10-0116:16vncz@U05120CBV Is there a documentation page that explains a little bit the differences and when to use what?#2020-10-0116:17marshallthe clients vs peer page i linked in the other thread#2020-10-0116:26vnczAh all right, I'll check that out before continuing the conversation. Thanks for the help @U05120CBV#2020-10-0114:28pvillegas12Does somebody know how to increase the number of instances in a production topology? Switching the auto scaling group to 3 for example failed when trying to deploy our ion in Datomic Cloud#2020-10-0114:55Joe LaneHave you investigated query groups @U6Y72LQ4A?#2020-10-0114:56Joe LaneSee https://docs.datomic.com/cloud/operation/scaling.html#2020-10-0114:56Joe LaneYou likely DON'T want to be autoscaling your primary group.#2020-10-0115:07zaneI recall someone saying there’s a library out there with a clojure.spec spec for Datomic queries. Does anyone know where I could find it?#2020-10-0117:32JoshThis library defines a bunch of datomic specs https://github.com/alexanderkiel/datomic-spec/blob/master/src/datomic_spec/core.clj#2020-10-0118:12zaneBrilliant. Cheers!#2020-10-0119:58Lennart BuitNote, this is the on prem dialect, cloud (and using client to access a peer server on prem), has slight variations#2020-10-0119:59Lennart BuitFor example, cloud only allows one shape of :find#2020-10-0120:24ivanaIs there any way to check that entity id is temporary? Does (*instance?* datomic.db.DbId val) work?#2020-10-0121:58faviladepends on context. strings and negative numbers can also possibly be tempids#2020-10-0122:35ivanaHm... 
So, having an entity id, we cannot choose the right way to resolve the entity; we should add a boolean flag for whether it is a tempid or not, and then resolve it in different ways depending on this flag...#2020-10-0509:45Linus EricssonIn on-prem (maybe also on client) you should use the :tempids key in the transaction result and use datomic.api/resolve-tempid to resolve the tempids to realized entity-ids.#2020-10-0514:51ivana@UQY3M3F6D yep, and I do this. But anyway I need a criterion for whether some ids are temporary, for resolving them that way. Or do you suggest resolving any id as temporary first, and if it is not resolved this way (or throws an exception), then it probably is a real id and should be used as non-temporary?#2020-10-0515:00Linus Ericssonif you create your ids with datomic.api/tempid then you can check if they are an instance of datomic.db.DbId. But that requires your code to use the tempid function, of course.
If you transact data with tempids that can already be resolved to entities (through external indexes and more), the tempids can resolve to already existing entities, yes. Tempids do not have to create new entities; they can resolve to already existing entities.#2020-10-0121:25ziltiWhat is Datomic's way to achieve this:
[:find ?dbid .
:in $ ?name ?domain
:where
(or [?dbid :company/name ?name]
[?dbid :company/domain ?domain])]
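One way to express this is with or-join, which requires listing the unifying variables explicitly (a sketch, untested):

```clojure
;; Same query rewritten with or-join; ?dbid, ?name and ?domain unify
;; with the :in bindings and the enclosing clauses.
(def company-query
  '[:find ?dbid .
    :in $ ?name ?domain
    :where
    (or-join [?dbid ?name ?domain]
             [?dbid :company/name ?name]
             [?dbid :company/domain ?domain])])
```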
#2020-10-0121:52souenzzo@zilti or-join #2020-10-0217:39zaneIs it possible to pull all attributes, but with a default? I’m imagining something like:
(pull [(* :default :no-value)] …
#2020-10-0217:43kennyI don't see how that could be possible. "*" means all attributes that an entity has so there can't be a default.#2020-10-0217:44kennyi.e., there is no info on what an entity does not have.#2020-10-0217:45zaneLet me try to explain how I would do it in two queries.#2020-10-0217:48zane(let [attributes (d/q '[:find [?a ...]
:where
[?e ?aid _]
[?aid :db/ident ?a]]
db)
pattern (mapv (fn [attribute]
`(~attribute :default :no-value))
attributes)]
(d/q `[:find (pull ?e ~pattern)
:in $
:where
[?e _ _]]
db))#2020-10-0217:48zaneSomething along those lines.#2020-10-0221:53kennyDefinitely not something built in. I'd advise against that. What is your use case?#2020-10-0217:45donyormSo I have the following query:
{:query
{:find [?e],
:in [$ ?string-val-0],
:where
[(or-join
[?e]
(and
[?e :exception/message ?message-0]
[(.contains ?message-0 ?string-val-0)])
(and
[?e :exception/message ?explanation-0]
[?explanation-0 :message/explanation ?explanation-val-0]
[(.contains ?message-0 ?string-val-0)]))]},
:args
[#object[compute.datomic_client_memdb.core.LocalDb 0x2a589f86 "#2020-10-0217:45donyormBut I'm getting Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:57).
:db.error/insufficient-binding [?string-val-0] not bound in expression clause: [(.contains ?message-0 ?string-val-0)], and I'm not sure why#2020-10-0217:59zaneIf you want ?string-val-0 to unify the outer clause you'll need to include it in the rules-vars vector: (or-join [?e ?string-val-0] …)#2020-10-0217:59zaneAt least I suspect that's what's wrong.#2020-10-0217:46donyormIs it not enough to have ?string-val-0 defined in the in list?#2020-10-0222:30nandoI'm trying to figure out how to format db.type/instant values, for display in a UI. Using clojure.java-time as follows:
(:require [java-time :as t])
evaluating
(t/format #inst "2020-09-26T23:08:27.619-00:00")
returns "Sun Sep 27 01:08:27 CEST 2020"
but if I add a custom format
(t/format "dd/MM/yyyy" #inst "2020-09-26T23:08:27.619-00:00")
I get the error
Execution error (ClassCastException) at java-time.format/format (format.clj:50).
java.util.Date cannot be cast to java.time.temporal.TemporalAccessor#2020-10-0222:36nandoAny suggestions for formatting db.type/instant values ?#2020-10-0223:31favilaYou need to coerce the Java.util.date to an instant#2020-10-0223:44nando(t/instant #inst "2020-09-26T23:08:27.619-00:00")
=> #time/instant "2020-09-26T23:08:27.619Z"#2020-10-0223:45nando(t/format "yyyy/MM/dd" (t/instant #inst "2020-09-26T23:08:27.619-00:00"))
=> Execution error (UnsupportedTemporalTypeException) at java.time.Instant/getLong (Instant.java:603).
Unsupported field: YearOfEra#2020-10-0223:47nandothe clojure.java-time docs for t/instant say this function "Creates an Instant" https://cljdoc.org/d/clojure.java-time/clojure.java-time/0.3.2/api/java-time.temporal#instant#2020-10-0300:06nando(t/instant? (.toInstant #inst "2020-09-26T23:08:27.619-00:00")) => true
(t/instant? (t/instant #inst "2020-09-26T23:08:27.619-00:00")) => true#2020-10-0300:23nandoThis works: (t/format ".dd" (t/zoned-date-time 2015 9 28))
=> "2015.09.28"
but I can't find a way to convert a datomic db.type/instant to a zoned-date-time.#2020-10-0300:36nandoThere must be a more straightforward way to format datomic datetime values.#2020-10-0304:45seancorfield@U078GPYL8
user=> (t/format "dd/MM/yyyy" (t/zoned-date-time #inst "2020-09-26T23:08:27.619-00:00" (t/zone-id "UTC")))
"26/09/2020"
user=>
#2020-10-0304:45seancorfield(or whatever TZ you need there)#2020-10-0304:51seancorfieldAlthough if you're dealing with #inst which I believe are just regular java.util.Date objects, this should work (without clojure.java-time at all):
user=> (let [f (java.text.SimpleDateFormat. "dd/MM/yyyy")]
(.format f #inst "2020-09-26T23:08:27.619-00:00"))
"26/09/2020"
user=> #2020-10-0304:53seancorfieldYup, Datomic docs say it's just a java.util.Date:
:db.type/instant instant in time java.util.Date #inst "2017-09-16T11:43:32.450-00:00"
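[Editor's note] A quick REPL check of the table entry above (plain Clojure, no Datomic needed):

```clojure
;; Clojure's #inst reader literal produces a java.util.Date by default,
;; the same type Datomic uses for :db.type/instant.
(class #inst "2017-09-16T11:43:32.450-00:00")
;; => java.util.Date
```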
#2020-10-0310:51nando@U04V70XH6 Thanks very much! I've confirmed that both approaches work as expected with an #inst returned from datomic.#2020-10-0311:37nandoI've learned a lot here, both by dipping my toes into the clojure.java-time and tick libraries, and getting a more practical sense of how java interop works through your example.#2020-10-0312:33favila@U078GPYL8 I meant something like this (sorry, was on a phone earlier):
(let [d #inst"2020-10-03T12:18:02.445-00:00"
f (-> (java.time.format.DateTimeFormatter/ofPattern "dd/MM/yyyy")
(.withZone (java.time.ZoneId/systemDefault)))]
(.format f (.toInstant d)))
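[Editor's note] The snippet above can be wrapped in a define-once helper: java.time.format.DateTimeFormatter is immutable and thread-safe, so one instance can be shared freely across threads. A minimal sketch (the var and function names are hypothetical, and UTC is chosen arbitrarily):

```clojure
;; DateTimeFormatter is immutable and thread-safe, so it is safe to define
;; once and reuse across threads, unlike java.text.SimpleDateFormat.
(def dd-mm-yyyy
  (-> (java.time.format.DateTimeFormatter/ofPattern "dd/MM/yyyy")
      (.withZone (java.time.ZoneId/of "UTC"))))

(defn format-inst
  "Format a java.util.Date (Datomic :db.type/instant) with a shared formatter."
  [^java.util.Date d]
  (.format dd-mm-yyyy (.toInstant d)))

(format-inst #inst "2020-09-26T23:08:27.619-00:00")
;; => "26/09/2020"
```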
#2020-10-0312:33favilaI avoid java.text.SimpleDateFormat because it’s the “old” way and it’s not thread-safe#2020-10-0312:35favilaI think what sean posted is nearly the equivalent, except he coerces to a zoned date time instead of specifying the zone in the formatter#2020-10-0312:35favilabut I’m not familiar with clojure.java-time, I just use java.time directly#2020-10-0313:30nando@U09R86PA4 I see what you originally meant evaluating
(let [d #inst"2020-10-03T12:18:02.445-00:00"
f (-> (java.time.format.DateTimeFormatter/ofPattern "dd/MM/yyyy")
(.withZone (java.time.ZoneId/systemDefault)))]
(.format f d))#2020-10-0313:32nandoThe same error is produced without the date being wrapped in a .toInstant call.#2020-10-0313:40nandoIn what type of use case would the fact that SimpleDateFormat is not thread safe produce an unexpected result, particularly in the context of a web application?#2020-10-0314:19favilaDef the format object and then use it in functions running in my tools threads #2020-10-0314:20favila*multiple#2020-10-0314:24favilaThere are a few strata of Java date systems#2020-10-0314:25favilaThe oldest is java.util.Date objects. The newest is java.time.*, which represents instants as java.time.Instant objects instead. #2020-10-0314:26favilaThere are some in between that aren’t worth learning anymore#2020-10-0317:55seancorfieldYup, going via Java Time is definitely the safest route and the best set of APIs to learn. At work, over the past decade we've gone from java.util.Date to date-clj (date arithmetic for that old Date type), to clj-time (wrapping Joda Time), to Java Time (with clojure.java-time in some parts of the code and plain interop in a lot of places). Converting java.util.Date to java.time.Instant and doing everything in Java Time is a bit painful/verbose, but you can write utility functions for stuff you need frequently to hide that interop/verbosity.#2020-10-0223:31favilatoInstant method#2020-10-0400:03nandoI'm getting an inconsistent result using the sum aggregate function on dev-local. If I include only the sum function, the result is much less than it should be. If I add the count function to the same query, the result of the sum function is then correct.
Here's the query with only the sum function. There are multiple batch items per batch and I need the total weight of all batch items.
[:find ?e ?formula-name ?doses ?date (sum ?weight)
:keys e formula-name doses date total-weight
:in $ ?e
:where [?e :batch/formula ?fe]
[?fe :formula/name ?formula-name]
[?e :batch/doses ?doses]
[?e :batch/date ?date]
[?bi :batch-item/batch ?e]
[?bi :batch-item/weight ?weight]]
=> :total-weight 1027800#2020-10-0400:12nandoHere's the query with both the sum and count aggregate functions:
[:find ?e ?formula-name ?doses ?date (sum ?weight) (count ?bi)
:keys e formula-name doses date total-weight count
:in $ ?e
:where [?e :batch/formula ?fe]
[?fe :formula/name ?formula-name]
[?e :batch/doses ?doses]
[?e :batch/date ?date]
[?bi :batch-item/batch ?e]
[?bi :batch-item/weight ?weight]]
=> :total-weight 2009250,
:count 45
I've confirmed that 2009250 is the correct amount.
What am I not understanding here?#2020-10-0400:13Joe Lanehttps://docs.datomic.com/cloud/query/query-data-reference.html#with#2020-10-0400:14Joe Lane[:find ?e ?formula-name ?doses ?date (sum ?weight)
:keys e formula-name doses date total-weight
:with ?bi
:in $ ?e
:where [?e :batch/formula ?fe]
[?fe :formula/name ?formula-name]
[?e :batch/doses ?doses]
[?e :batch/date ?date]
[?bi :batch-item/batch ?e]
[?bi :batch-item/weight ?weight]]#2020-10-0400:14Joe LaneWhat does that return?#2020-10-0400:15nandoReading now .... so duplicates are being excluded from the sum?#2020-10-0400:17nandoIt's correct now!#2020-10-0400:18nandoThat's subtle. Thanks @lanejo01#2020-10-0400:19Joe LaneDoes the concept of a set vs a bag make sense to you from the docs?#2020-10-0400:21nandoI understood immediately that duplicates might be excluded from (sum ...) when I saw the example, but that's not what one would expect from a sum function.#2020-10-0400:21nando2 + 2 = 2 ???#2020-10-0400:23nandoSo I think it might be good to point this out in the sum section of the documentation (if it isn't there already)#2020-10-0400:24Joe LaneIt's not related to the sum aggregate though, it's related to whether or not you want a bag vs a set of the ?bi lvar.#2020-10-0400:24Joe LaneIt's a more general concept.#2020-10-0400:24nando;; query
[:find (sum ?count)
:with ?medium
:where [?medium :medium/trackCount ?count]]
I see an example in there, but I didn't understand the significance.#2020-10-0400:27nandoI understand it doesn't only apply to the sum aggregate. I'm only saying that if it has a non-obvious impact on a specific aggregate function, it might be helpful for beginners like me to point that out.#2020-10-0400:30seancorfieldInteresting. I hadn't learned enough about Datomic to realize it specifically deals in sets by default instead of bags...#2020-10-0400:34nandoIt is still quite vague to me when a query would return a set.#2020-10-0400:36nandoI guess it has to be always kept in mind, because as the example in the documentation on With Clauses shows, it isn't only an issue with some aggregate functions.#2020-10-0401:03nando@lanejo01 Here's a specific suggestion for the docs that might help to make this more clear for beginners. In the subsection on sum where it says
"The following query uses sum to find the total number of tracks on all media in the database."
You might change that to something like
"The following query uses sum to find the total number of tracks on all media in the database. Note carefully the use of the with-clause in the query so that all trackCounts are summed. If the with-clause is excluded, only unique trackCounts will be summed."#2020-10-0511:57onetomwe have upgraded clojure cli tools x.x.x.590 to 1.10.1.697, then the following error appeared:
$ clj -Srepro -e "(require 'datomic.client.api)"
WARNING: When invoking clojure.main, use -M
Execution error (FileNotFoundException) at clojure.core.async.impl.ioc-macros/eval774$loading (ioc_macros.clj:12).
Could not locate clojure/tools/analyzer__init.class, clojure/tools/analyzer.clj or clojure/tools/analyzer.cljc on classpath.
i think im on the latest dependencies in my ./deps.edn file:
org.clojure/clojure {:mvn/version "1.10.1"}
com.datomic/client-cloud {:mvn/version "0.8.102"}
com.datomic/ion {:mvn/version "0.9.48"}
#2020-10-0512:50onetomI tried it with both nixpkgs.jdk8 and jdk11.
I tried it with and without the deps overrides recommended by the latest ion-dev push operation.
the error is always the same.
I have no other dependencies specified and still get this error.
I guess I can specify this missing dependency explicitly, but it feels like I'm doing something wrong if such a bare-bones ion project doesn't work out of the box.#2020-10-0513:10alexmillerCan you share your full deps.edn?#2020-10-0513:36onetom{:paths
["src" ;"rsc" "classes"
]
:deps
{
org.clojure/clojure {:mvn/version "1.10.1"}
com.datomic/client-cloud {:mvn/version "0.8.102"}
com.datomic/ion {:mvn/version "0.9.48"}
;org.clojure/data.json {:mvn/version "0.2.6"}
;http-kit/http-kit {:mvn/version "2.5.0"}
;metosin/reitit-ring {:mvn/version "0.5.6"}
;org.clojure/tools.analyzer {:mvn/version "1.0.0"}
;; Deps to avoid conflicts with Datomic Cloud
;; commons-codec/commons-codec #:mvn{:version "1.13"},
;; com.fasterxml.jackson.core/jackson-core #:mvn{:version "2.10.1"},
;; com.amazonaws/aws-java-sdk-core #:mvn{:version "1.11.826"},
;; com.cognitect/transit-clj #:mvn{:version "0.8.319"},
;; com.cognitect/s3-creds #:mvn{:version "0.1.23"},
;; com.amazonaws/aws-java-sdk-kms #:mvn{:version "1.11.826"},
;; com.amazonaws/aws-java-sdk-s3 #:mvn{:version "1.11.826"}
}
:mvn/repos
{"datomic-cloud"
{:url ""}}
:aliases
{:test
{:extra-paths
["test"]
:extra-deps
{nubank/matcher-combinators {:mvn/version "3.1.3"}
lambdaisland/kaocha {:mvn/version "1.0.700"}}}}
}
#2020-10-0513:40onetomi tried brew install clojure and ran /usr/local/bin/clojure directly; same result.
i haven't tried it under linux yet, but it feels like a tools.deps.alpha issue.#2020-10-0513:43alexmillerI don't think the os matters so no reason to do that#2020-10-0513:46onetomi just retried again on a different machine:
no error:
$ /nix/store/0v7kwppxygj3wln9j104vfi1kx21fssj-clojure-1.10.1.590/bin/clojure -Srepro -e "(require 'datomic.client.api)"
analyzer error:
$ /nix/store/9g4xqjpzi7vkr5a5n2q3fd1cyymvh68r-clojure-1.10.1.697/bin/clojure -Srepro -e "(require 'datomic.client.api)"
#2020-10-0513:49onetomthese are the differences in the dependency tree:
$ diff -u <(/nix/store/0v7kwppxygj3wln9j104vfi1kx21fssj-clojure-1.10.1.590/bin/clojure -Srepro -Stree) <(/nix/store/9g4xqjpzi7vkr5a5n2q3fd1cyymvh68r-clojure-1.10.1.697/bin/clojure -Srepro -Stree)
--- /dev/fd/63 2020-10-05 21:48:46.147069426 +0800
+++ /dev/fd/62 2020-10-05 21:48:46.147513695 +0800
@@ -31,10 +31,6 @@
com.datomic/client-api 0.8.54
org.clojure/core.async 0.5.527
org.clojure/tools.analyzer.jvm 0.7.2
- org.clojure/tools.analyzer 0.6.9
- org.clojure/tools.reader 1.0.0-beta4
- org.clojure/core.memoize 0.5.9
- org.ow2.asm/asm-all 4.2
com.cognitect/http-client 0.1.105
org.eclipse.jetty/jetty-http 9.4.27.v20200227
org.eclipse.jetty/jetty-io 9.4.27.v20200227#2020-10-0513:54alexmillerI'm looking at it, give me a bit#2020-10-0514:50alexmillerthis is a tools.deps bug - it's pretty subtle and will take me a bit to isolate and fix properly but adding a top level dep on org.clojure/core.async 0.5.527 should be a sufficient workaround for the moment#2020-10-0514:50onetomthank you!#2020-10-0514:53onetomwith that core.async, it worked on my side too#2020-10-0523:33alexmillerhey, a new prerelease of clj is out if you'd like to test it - 1.10.1.708, will promote to stable after a bit more use#2020-10-0523:34alexmillerand I guess I implied but should say that it fixes this problem - thanks for the report, it would have been challenging to find this otherwise!#2020-10-0514:58kennytilton@nando I am a Datomic noob myself, but I got curious about the proposed enhanced doc (+1 on that, btw) and how :with might work and ran a little experiment:
(d/q '[:find ?year
:with ?language
:where [?artist :artist/name "Bob Dylan"]
[?release :release/artists ?artist]
[?release :release/year ?year]
[?release :release/language ?language]]
db) ;; => [[1968] [1973] [1969] [1970] [1971]]
So to my unwitting eyes, the :with per se does not block collapsing of duplicates: rather, one must concoct a :with clause based on domain knowledge to force a bag with the desired population over which to aggregate. Maybe? :shrug:#2020-10-0515:11favilaThe columns in the initial set are with+find, (in this case ?year ?language), then aggregation happens, then the :with columns are removed (in this case ?language) leaving a bag#2020-10-0515:11favilamaybe it’s easier to think of it as :find ?year ?language :removing ?language#2020-10-0515:12favilainstead of :find ?year :with ?language#2020-10-0515:25nandoThat's a very helpful explanation.#2020-10-0515:15nando@hiskennyness I can only respond by saying that I think :with is a very important clause to understand, and I'm not sure I fully understand it yet. Your example is the first I've seen targeting an attribute rather than an entity id. I don't have sufficient grasp of the inner workings of datomic or concept behind :with to make a guess how that works, but I'm easily confused.#2020-10-0515:24kennytiltonYou remind me of this gem: https://ace.home.xs4all.nl/Literaria/Poem-Graves.html.
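[Editor's note] favila's with+find explanation can be illustrated in plain Clojure, without Datomic. The data below is hypothetical, mimicking the batch-item weights from earlier in the thread:

```clojure
;; The initial query result is a SET of [:find + :with] tuples; after
;; aggregation the :with columns are dropped, leaving a bag.
;; Each tuple is [?weight ?bi]; two batch items share the same weight.
(def rows #{[100 :bi1] [100 :bi2] [50 :bi3]})

;; Without :with ?bi, only the distinct weights survive before aggregation:
(reduce + (set (map first rows)))
;; => 150 (the duplicated weight is counted once)

;; With :with ?bi the tuples stay distinct; dropping ?bi afterwards leaves
;; a bag of weights, so the sum is correct:
(reduce + (map first rows))
;; => 250
```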
I have been toying with doing a ground-up Datomic for the Easily Confused tutorial series, maybe I should do it as I struggle up my own learning curve.#2020-10-0515:40nandoIf you do a tutorial series, of course please send the link!#2020-10-0516:29timohow do I create a client with datomic.client.api with datomic free?#2020-10-0517:18timocan I even use datomic.client.api with datomic free?#2020-10-0517:21Michael WYes you have to run a peer server with datomic free to do that. See: https://docs.datomic.com/on-prem/peer-server.html#2020-10-0517:21marshall@timok Datomic free does not support peer server#2020-10-0517:21marshall@timok You should look at dev-local: https://docs.datomic.com/cloud/dev-local.html#2020-10-0517:21marshall^ no cost way to use datomic client library locally#2020-10-0517:21Michael WI have it running a peer server here...#2020-10-0517:22marshallDatomic Pro Starter, which is free (no cost) does include peer server#2020-10-0517:22marshallhttps://www.datomic.com/get-datomic.html#2020-10-0517:22Michael WOk so I am running that not free then. Sorry for the confusion.#2020-10-0517:23timoalright, thanks...will try dev-local then#2020-10-0522:38onetomi would like to implement some cognito triggers, using ions.
how can i see what payload an api service calls an ion with?
can i "log" such info to some standard location easily?
for example, where can i see, if i just clojure.pprint/pprint something in an ion?#2020-10-0523:08marshall@onetom https://docs.datomic.com/cloud/ions/ions-monitoring.html#events#2020-10-0613:42xcenoNot sure if I run into the same bug as @onetom reported above, or if I don't understand the docs correctly.
I've added both com.datomic/dev-local and com.datomic/client-cloud to my project. Now whenever I try to call (d/client <some-cfg>), it crashes with:
> Syntax error (FileNotFoundException) compiling at (datomic/client/impl/shared.clj:1:1).
> Could not locate cognitect/hmac_authn__init.class, cognitect/hmac_authn.clj or cognitect/hmac_authn.cljc on classpath. Please check that namespaces with dashes use underscores in the Clojure file name
If I only add one or the other dependency it works fine. So I can either connect locally or to datomic-cloud, but as I understand it, we should be able to add both dependencies at the same time and then construct one client or the other, or call divert(?)
I'm still on clj 1.10.1.536
You can replicate the behaviour by simply checking out the ion-starter project and adding dev-local as a dependency. Leave everything else unchanged and try to create a client.#2020-10-0613:50xcenoOh, I just found this post in the forums: https://forum.datomic.com/t/dev-and-test-locally-with-dev-local/1518/9
So, nevermind. I'll try the latest clj then#2020-10-0613:52alexmillerif you do upgrade to latest stable clj (1.10.1.697) you are likely to run into the issue that @onetom was seeing (these issues are related) so you might actually need to go to the prerelease (1.10.1.708) or wait for that to be promoted to stable, should be soon#2020-10-0614:01xcenoNice thanks!
I was just going to ask if I can install the pre-release via homebrew for linux, but I can also use that script for now#2020-10-0614:02alexmillerprereleases are not in brew (or they'd be releases) but you can just follow the instructions at https://clojure.org/guides/getting_started but with that version number on linux#2020-10-0614:10alexmillerjust fyi, if you are posting large text, using a snippet (the lightning bolt in the bottom left of the edit pane) will fold it and syntax highlight it#2020-10-0614:29onetomI'm posting from mobile and that lightning icon brings some search dialog up, but thx for the feedback; I will try to figure this out#2020-10-0614:30onetombtw, what's the license of the Datomic CLI?
I haven't found any mention of that in the ZIP file#2020-10-0614:58alexmilleryou should ask in main channel, don't know#2020-10-0614:40onetomNix package for the Datomic Cloud CLI Tools#2020-10-0614:42onetomAdaptation of the official Clojure CLI Nix package to the latest version: 1.10.1.708#2020-10-0621:47zhuxun2Is there a way to subscribe to entity changes in Datomic?#2020-10-0621:48Lennart BuitWould the tx-report-queue work ^^: https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/tx-report-queue ?#2020-10-0621:54zhuxun2@UDF11HLKC Looks like it's what I'm looking for, thanks!#2020-10-0622:18schmeeheads up: tx-report-queue does not exist in Datomic Cloud#2020-10-0701:27mruzekwIs there an alternative for this? ^#2020-10-0700:08cjmurphyIs there a library out there such that whether using :peer or :client is mostly unknown from your application code's point of view?#2020-10-0706:32Lennart BuitYou can use the client library to connect to an on-prem peer server. The client library is the lowest common denominator in that sense ^^#2020-10-0708:42cjmurphyYes, in a sense the peer library is on the way out?? I was asking because I use such a peer/client ambivalent library internally, and was thinking to make it open source.#2020-10-0709:01Lennart BuitWell I’m not at cognitect, so I can’t answer that. But if you intend to migrate from on-prem to the cloud at some point, you are better off with the client api, so it appears to be ‘good advice’ to start new projects with the client api#2020-10-0709:06Lennart BuitAlso; there are quite a few subtle differences between the api’s. The query dialect is slightly different (most notably, not all :find specs in peer are supported in client), and many functions have slightly different arguments/return values (for example, many functions return deref’able things in peer, but direct results in client).#2020-10-0714:01cjmurphyI must have ignored the 'good advice' initially, hence the compatibility library. 
For the :find differences I've just changed all the queries to work for the client, which means they can work for both. Other subtleties I've found have just been taken care of by the library - basically it is a map that includes some wrapper functions that choose to use one function or the other - for example either one of the two variants of transact , depending on the 'mode' being used.#2020-10-0712:07xcenoCan't I build/push a datomic ion that includes an alias? Meaning: I have an application and bundle datomic-ion specific stuff under a :datomic alias. I then want to push it like this: clojure -M:datomic:ion-dev '{:op :push}', but when I check the zip file that's generated all the stuff from my alias (specific namespaces & resources) is missing. Am I doing something wrong or is this just not supported?#2020-10-0713:54marshall@rob703 I believe you want -A:aliases#2020-10-0713:54marshallnot -M#2020-10-0713:58alexmillernot with new clj ...#2020-10-0713:59xcenoI actually tried both and various combinations thereof yesterday. My initial cmd was clojure -A:datomic -M:ion-dev ... and I also tried the variant straight from the docs: clojure -A:datomic:ion-dev -m datomic.ion.dev '{:op :push (options)}' to no avail. But I can double check again.
On another note:
Right now, I've pulled all my deps from the alias into my main deps so I can at least try the deployment of my lambda proxy.
Now I'm a step further, I see in the Api Gateway console that my app is returning a proper ring response, but the lambda crashes with a 502 error:
> Wed Oct 07 13:51:50 UTC 2020 : Execution failed due to configuration error: Malformed Lambda proxy response
> Wed Oct 07 13:51:50 UTC 2020 : Method completed with status: 502
The only thing I don't see in my response body is the isBase64Encoded flag, so maybe that's the issue right now#2020-10-0714:49alexmillerwhich doc was clojure -A:datomic:ion-dev -m datomic.ion.dev '{:op :push (options)}' from ?#2020-10-0715:06xcenoFrom what I've seen so far it's every command in the ion tutorials, e.g. https://docs.datomic.com/cloud/ions/ions-reference.html#push#2020-10-0715:08xcenoThe ion tutorials would also need some updates regarding the new AWS Api-Gateway Options, see here: https://clojurians.slack.com/archives/C03RZMDSH/p1601380150089100#2020-10-0715:09alexmillerthanks those commands seem wrong - if using :ion-dev with a :main-opts, the -m datomic.ion.dev isn't need there /cc @U05120CBV#2020-10-0715:46marshallI’ve updated the commands in the reference doc. Thanks @rob703#2020-10-0713:59xcenoAh yeah and I updated to the latest clj yesterday#2020-10-0713:59xcenoso that's why I converted to -M#2020-10-0714:09alexmilleryes, that clj syntax is fine with the new clj (but I don't think that has anything to do with your issue)#2020-10-0714:09vnczHey, how do I create a new database in Datomic-Local?#2020-10-0714:11marshallIf you mean dev-local, once you’ve made a client (https://docs.datomic.com/cloud/dev-local.html#using) you can use it exactly the same way as you would using client against cloud (i.e. you can call create-database https://docs.datomic.com/client-api/datomic.client.api.html#var-create-database)#2020-10-0714:12vnczI do recall creating the database from the cmd line argument when using Datomic on premise locally on my computer#2020-10-0714:12vnczMy memory might be flaking though#2020-10-0714:40marshallgenerally that would not be the case unless you were just using peer-server with a mem database#2020-10-0714:41vnczTotally my memory flaking then#2020-10-0714:10marshall@rob703 what versions of the various ion tools are you using? I’m going to try to reproduce/investigate#2020-10-0714:13xcenoThank you!
This is part of my deps edn:#2020-10-0714:13xcenoSo initially, all those deps where under my :datomic alias#2020-10-0714:19xcenoOh and I'm using the latest ion-dev tools as an alias in my user config#2020-10-0714:19xcenobasically just following the tutorial#2020-10-0714:39marshall@rob703 the zip file that is created will not contain all of your deps themselves#2020-10-0714:39marshallit only contains your ion code
The deps are fetched when you deploy#2020-10-0714:39marshallwere you seeing a problem with your actual deploy#2020-10-0714:41xceno> it only contains your ion code
Yes, that's the other part of my problem, it's not only the deps but also the additional paths.
For example:
:aliases {:datomic {:extra-paths ["src/datomic"]}}
My entire code in the datomic folder is missing#2020-10-0714:50marshallyep, I’ve reproduced that behavior. looking into it further now#2020-10-0714:53xcenoThank you!#2020-10-0714:54marshallfor now I would say you’ll want to put those extra paths in the main body of the deps, not in an alias#2020-10-0714:55xcenoYeah I moved everything from my alias up for now.
I'm now battling with AWS itself, trying to get the lambda proxy to work. But that's another issue in itself#2020-10-0723:14m0smithIs there a clear example of using :db.entity/preds? I have defined it as {:db/ident :transaction/pred
:db.entity/preds 'ledger.transaction/existing-transaction-entity-pred}#2020-10-0723:15m0smithWhen I try and transact with (d/transact conn {:tx-data [{:transaction/user-id #uuid "9550f401-fb16-4e42-8940-d683dbad3a3d" :transaction/txn-hash "Pl3b9f7ba2-eb0d-412d-b305-f76b5150c711" :db/ensure :transaction/pred}]})#2020-10-0723:16m0smithI get Execution error (IndexOutOfBoundsException) at datomic.core.datalog/bound-consts$fn (datalog.clj:1570).#2020-10-0723:16m0smithAny hints?#2020-10-0723:19m0smithAfter taking a closer look at the stack trace, the predicate is being called but erroring#2020-10-0801:21ziltiIs it a known bug that when there's a bunch of datums that get transacted simultaneously, it can randomly cause a :db.error/tempid-not-an-entity tempid '17503138' used only as value in transaction error?#2020-10-0801:36favilaThe meaning of this error is that the string “17503138” is used as a tempid that is the value of an assertion, but there is no place where the tempid is used as the entityid of an assertion; the latter is necessary for datomic to decide whether to mint a new entity id or resolve it to an existing one#2020-10-0801:37ziltiWell, as you can see in the actual datums I posted, it clearly is being used as :db/id.#2020-10-0801:38ziltiI had my program dump all datums into a file before transacting, and I copied the two that refer to this string over into here#2020-10-0801:38favilaIn your example, I see the second item says :account/accounts “17503138”. Are both these maps together in the same transaction?#2020-10-0801:39favila(Btw a map is not a datum but syntax sugar for many assertions—it’s a bit confusing to call it that)#2020-10-0801:40ziltiYes, they are both together in the same transaction.
True, I mixed up the terminology... Entity would be more fitting#2020-10-0801:43favilaIf they are indeed both in the same tx I would call that a bug. Can you reproduce?#2020-10-0801:44favilaWhy is each map in its own vector?#2020-10-0801:44ziltiYes, reliably, every time with the same dataset. Both locally with a dev database as well as on our staging server using PostgreSQL.#2020-10-0801:45ziltiConformity wants it that way, for some reason#2020-10-0801:45favilaConformity for data?#2020-10-0801:45ziltiI had that same issue a while back in a normal transaction without conformity as well though#2020-10-0801:45favilaSeparate vectors in conformity means separate transactions...#2020-10-0801:45ziltiThe migration library called conformity#2020-10-0801:48favilaI’ve only ever used conformity for schema migrations; using it for data seems novel; but I’m suspicious that these are really not in the same transaction#2020-10-0801:49favilaSee if you can get it to dump the full transaction that fails and make sure both maps mentioning that tempid are in the same transaction#2020-10-0801:22ziltiIt is often caused by one single entry that is the same structure as many others. Everything is fine, but for some reason, Datomic doesn't like it. Removing that one entry solves the problem.#2020-10-0812:55marshallwhy are both of those entity maps in separate vectors?
If you’re adding them with d/transact , all of the entity maps and/or datoms passed under the :tx-data key need to be in the same collection#2020-10-0812:56marshallbased on the problem you described, I would expect that error if you transacted the first of those, and then tried the second of those in a separate transaction#2020-10-0812:56marshallif they’re asserted in the same single transaction it should be fine#2020-10-0801:23ziltiOrdering of the entries in the transaction vector doesn't seem to matter either#2020-10-0801:26ziltiThe two datums causing problems:
[{:account/photo
"REDACTED",
:account/first-name "REDACTED",
:account/bio
"REDACTED",
:account/email-verified? false,
:account/location 2643743,
:account/vendor-skills [17592186045491],
:account/id #uuid "dd33747e-5c13-4779-8c23-9042460eb3f3",
:account/vendor-industry-experiences [],
:account/languages [17592186045618 17592186045620],
:account/vendor-specialism 17592186045640,
:account/links
[{:db/id "REDACTED",
:link/id #uuid "ea51184c-d027-44d0-8f20-df222e58daf3",
:link/type :link-type/twitter,
:link/url "REDACTED"}
{:db/id
"REDACTED",
:link/id #uuid "c9577ca4-332d-41f0-b617-c00e89fc94b4",
:link/type :link-type/linkedin,
:link/url
"REDACTED"}],
:account/last-name "REDACTED",
:account/email "REDACTED",
:account/vendor-geo-expertises
[17592186045655 17592186045740 17592186045648],
:db/id "17503138",
:account/vendor-type 17592186045484,
:account/roles [:account.role/vendor-admin],
:account/job-title "Investor"}]
and
[{:account/primary-account "17503138",
:company/headline "REDACTED",
:account/accounts ["17503138"],
:tenant/tenants [[:tenant/name "REDACTED"]],
:company/name "REDACTED",
:company/types [:company.type/contact],
:db/id "REDACTED",
:company/id #uuid "ee26b11f-53ba-43f9-a59b-f7ad1a408d41",
:company/domain "REDACTED"}]#2020-10-0809:02Adrian SmithDuring a meetup recording that I haven't uploaded yet I recorded my own maven private token from https://cognitect.com/dev-tools/view-creds.html is there a way I can regenerate that token?#2020-10-0813:10marshallCan you send an email to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> and we will help with this?#2020-10-0821:14Adrian Smiththank you, I've just sent an email over#2020-10-0811:38BlackHey, I just missing something and can't figure out what. I am calling tx on datomic:
(defn add-source [conn {:keys [id name]
:or {id (d/squuid)}}]
(let [tx {;; Source initial state
:db/id (d/tempid :db.part/user)
:source/id id
:source/storage-type :source.storage-type/disk
:source/job-status :source.job-status/dispatched
:source/created (java.util.Date.)
:source/name name}]
@(d/transact conn [tx])))
;; and then later API will call
(add-source conn entity-data)
After I call add-source the entity is created, but after another call is made the old entity is overwritten. Only if I call transact with multiple transactions can I create multiple entities; otherwise the old entity is overwritten. I am new to Datomic and I can't find any resources about this. Can anyone help?#2020-10-0812:19favilatempids resolve to existing entities if you assert a :db.unique/identity attribute value on them that already exists. Are any of these attributes :db.unique/identity? are you sure you are not supplying an id argument to your function?#2020-10-0812:20favila(btw I would separate transaction data creation into a separate function so it’s easier to inspect)
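[Editor's note] A sketch of the separation favila suggests: a pure function producing tx-data (inspectable at the REPL) plus a thin transact wrapper. The names are hypothetical, and (d/squuid) is swapped for java.util.UUID/randomUUID so the pure part runs without Datomic on the classpath:

```clojure
;; Pure: turns entity data into transaction data; easy to inspect and test.
(defn add-source-tx
  [{:keys [id name] :or {id (java.util.UUID/randomUUID)}}]
  [{:source/id           id
    :source/storage-type :source.storage-type/disk
    :source/job-status   :source.job-status/dispatched
    :source/created      (java.util.Date.)
    :source/name         name}])

;; Side-effecting wrapper, using the on-prem peer API as in the original:
(comment
  (defn add-source! [conn entity-data]
    @(d/transact conn (add-source-tx entity-data))))

;; The tx-data can now be inspected at the REPL before transacting:
(add-source-tx {:name "my source"})
```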
:db/ident :source/id
:db/valueType :db.type/uuid
:db/cardinality :db.cardinality/one
:db/id #db/id [:db.part/db]
:db.install/_attribute :db.part/db}#2020-10-0812:23Blackthis is the schema for source/id, I am not using :db.unique/identity#2020-10-0812:24BlackAnd I agree with separating tx data creation, but first I would like to get it working#2020-10-0812:25BlackIf I removed :db/id from the transaction, I should still be able to create a new entity, right? But every time the first one is rewritten#2020-10-0812:26favilacan you give a clearer get/expect case? maybe a repl console?#2020-10-0812:28favilasomething that shows you calling add-source twice with the returned tx data, and pointing out what you think is wrong with the result of the second call?#2020-10-0812:42BlackOk, I had unique on another parameter:
{:db/doc "Source name"
:db/ident :source/name
:db/unique :db.unique/identity
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/id #db/id [:db.part/db]
:db.install/_attribute :db.part/db}
If I removed it, all entities are created and it works how I expected. So I will read more about the unique attribute. Thanks @U09R86PA4, I would not have noticed it without your help!#2020-10-0814:45ziltiWell, I guess I am going to do my migrations using a home-made solution now. I just lost all trust in Conformity. It doesn't write anything to the database most of the time, I noticed.#2020-10-0814:46ziltiOr are there alternatives?#2020-10-0814:50ghadican you describe your problem with conformity in more detail?#2020-10-0815:21ziltiI have a migration that is in a function. Conformity runs the function normally, but instead of transacting the data returned from it, it just discards it. The data is definitely valid; I made my migration so it also dumps the data into a file. I can load that file as EDN and transact it to the db using d/transact perfectly fine.#2020-10-0815:23ziltiConformity doesn't even give an error, it just silently discards it.#2020-10-0815:28ghadiis this cloud or on prem?#2020-10-0815:30ziltiOn prem, both for the dev backend and the postgresql one#2020-10-0815:33ghadinot sure what to tell you. you need to analyze this further before throwing up your hands#2020-10-0815:37favilaConformity does bookkeeping to decide whether a “conform” was already run on that database. If you’re running the same key name against the same database a second time, it won’t run again.
Is that what you are doing?#2020-10-0815:38favilaConformity is really for schema management, not data imports#2020-10-0815:40ziltiNo, that is not what I am doing.#2020-10-0815:41ziltiWell, the transaction is changing the schema, and then transforming the data that is in there.#2020-10-0815:41ziltiOr at least, that is what it is supposed to be doing.#2020-10-0815:42ghadihttps://github.com/avescodes/conformity#norms-versioning#2020-10-0815:42favilahttps://github.com/avescodes/conformity#norms-versioning#2020-10-0815:42ghadijinx#2020-10-0815:42favilajinx#2020-10-0815:42favilaWe’re pointing out a case where it may evaluate the function but not transact#2020-10-0815:44favilayou can use conforms-to? to test whether conformity thinks the db already has the norm you are trying to transact#2020-10-0815:44favilathat may help you debug#2020-10-0815:47ziltiWell, what is the second argument to conforms-to? ? It's neither the file name nor the output of c/read-resource#2020-10-0815:49ziltiIt wants a keyword, but what keyword?#2020-10-0815:55favilathe keyword in the conform map#2020-10-0815:56favila{:name-of-norm {:txes [[…]] :requires […] :tx-fn …}}#2020-10-0815:56favilathe :name-of-norm part#2020-10-0815:57favilathat’s the “norm”#2020-10-0815:09Filipe Silvaheya, coming here for a question about datomic cloud. I've noticed that while developing on a repl, I get exceptions as described in the datomic.api.client api:
All errors are reported via ex-info exceptions, with map contents
as specified by cognitect.anomalies.
See .
But on the live system, these exceptions don't seem to be ex-info exceptions, just normal errors. At any rate, ex-data returns nil for them. Does anyone know if this is intended? I couldn't find information about this differing behaviour.
A good example of these exceptions is malformed queries for q . On the repl, connected via the datomic binary, I get this return from ex-data
{:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message \"Query is referencing unbound variables: #{?string}\", :variables #{?string}, :db/error :db.error/unbound-query-variables, :dbs [{:database-id \"48e8dd4d-84bb-4216-a9d7-4b4d17867050\", :t 97901, :next-t 97902, :history false}]}
But on the live system, I get nil.#2020-10-0815:09marshall@filipematossilva are you using the same API (sync or async) in both cases?#2020-10-0815:11Filipe Silvathink so, yeah#2020-10-0815:11Filipe Silvahave a ion handling http requests directly, and the repl is calling the handler that's registered on the ion#2020-10-0815:11Filipe Silvaso it should be the same code running#2020-10-0815:12Filipe Silvawe can see on the aws logs that the error is of a different shape#2020-10-0815:12Filipe Silvalet me dig it up#2020-10-0815:13Filipe Silvaon the aws logs, logging the exception, shows this#2020-10-0815:13Filipe Silva{
"Msg": "Alpha API Failed",
"Ex": {
"Via": [
{
"Type": "com.google.common.util.concurrent.UncheckedExecutionException",
"Message": "clojure.lang.ExceptionInfo: :db.error/not-a-binding-form Invalid binding form: :entity/graph {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message \"Invalid binding form: :entity/graph\", :db/error :db.error/not-a-binding-form}",
"At": [
"com.google.common.cache.LocalCache$Segment",
"get",
"LocalCache.java",
2051
]
},
{
"Type": "clojure.lang.ExceptionInfo",
"Message": ":db.error/not-a-binding-form Invalid binding form: :entity/graph",
"Data": {
"CognitectAnomaliesCategory": "CognitectAnomaliesIncorrect",
"CognitectAnomaliesMessage": "Invalid binding form: :entity/graph",
"DbError": "DbErrorNotABindingForm"
},
"At": [
"datomic.core.error$raise",
"invokeStatic",
"error.clj",
55
]
}
],#2020-10-0815:13Filipe Silva(note: this was not the same unbound var query as above)#2020-10-0815:14Filipe Silvaprinting the error on the repl, we see this instead
#error {
:cause "Invalid binding form: :entity/graph"
:data {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Invalid binding form: :entity/graph", :db/error :db.error/not-a-binding-form, :dbs [{:database-id "48e8dd4d-84bb-4216-a9d7-4b4d17867050", :t 97058, :next-t 97059, :history false}]}
:via
[{:type clojure.lang.ExceptionInfo
:message "Invalid binding form: :entity/graph"
:data {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Invalid binding form: :entity/graph", :db/error :db.error/not-a-binding-form, :dbs [{:database-id "48e8dd4d-84bb-4216-a9d7-4b4d17867050", :t 97058, :next-t 97059, :history false}]}
:at [datomic.client.api.async$ares invokeStatic "async.clj" 58]}]#2020-10-0815:14marshallthat ^ is an anomaly#2020-10-0815:14marshallwhich is a data map#2020-10-0815:15Filipe Silvamore precisely, (ex-data e) returns the anomaly inside that exception#2020-10-0815:16marshallah, instead of ex-info ?#2020-10-0815:17Filipe SilvaI imagine the datomic client wraps the exception doing something like (ex-info e anomaly cause)#2020-10-0815:18Filipe Silvawe're not wrapping it on our end, just calling ex-data over it to get the anomaly#2020-10-0815:18Filipe Silvabut on the live system, ex-data over the exception returns nil#2020-10-0815:18Filipe Silvawhich I think means it wasn't created with ex-info#2020-10-0815:20Filipe SilvaI mean, I wouldn't be surprised if this is indeed intended to not leak information on the live system#2020-10-0815:20Filipe Silvathat anomaly contains database ids, time info, and history info#2020-10-0815:21Filipe Silvajust wanted to make sure if it was intended or not before working around it#2020-10-0815:29ghadi@filipematossilva are you saying that you are not able to get a :cognitect.anomalies/incorrect from your failing query on the client side?#2020-10-0815:34Filipe Silvaif by client side you mean "what calls the live datomic cloud system", then yes, that's it#2020-10-0815:35ghadi@filipematossilva so what's different about your "live system" vs. the repl?#2020-10-0815:35ghadiclearly it's an ex-info at the repl#2020-10-0815:36Filipe SilvaI really don't know, that's what prompted this question#2020-10-0815:36ghadiperhaps print (class e) and (supers e) in your live system when you get the error#2020-10-0815:36ghadior (Throwable->map e)#2020-10-0815:36ghadisync api or async api?#2020-10-0815:37Filipe Silvasync#2020-10-0815:38Filipe Silvaregarding printing the error#2020-10-0815:39Filipe SilvaI'm printing the exception proper like this:
(cast/alert {:msg "Alpha API Failed"
:ex e})#2020-10-0815:39ghadido you have wrappers/helpers around your query? running it in a future?#2020-10-0815:39Filipe Silvaon the live system the cast prints this#2020-10-0815:39Filipe Silvahttps://clojurians.slack.com/archives/C03RZMDSH/p1602169996347800#2020-10-0815:39ghadioh, yeah that's a com.google.common.util.concurrent.UncheckedExecutionException
at the outermost layer#2020-10-0815:40ghadithen the inner exception is an ex-info#2020-10-0815:40Filipe Silvaon the repl, when cast is redirected to stderr, the datomic binary shows this#2020-10-0815:40ghadithanks. @marshall ^#2020-10-0815:40Filipe Silva#2020-10-0815:44Filipe Silvajust realized that the logged response there on the live system wasn't complete, let me fetch the full thing#2020-10-0815:46Filipe Silvaok this is the full casted thing on aws logs#2020-10-0815:46Filipe Silva#2020-10-0815:47ghadiunderstood#2020-10-0815:49Filipe Silvanow that I look at the full cast on life, I can definitely see the cause and data fields there#2020-10-0815:50Filipe Silvawhich leaves me extra confused 😐#2020-10-0815:50ghadilet me clarify:#2020-10-0815:51ghadiin your REPL, you are getting an exception that is:
* clojure.lang.ExceptionInfo + anomaly data
in your live system you are getting:
* com.google.common.util.concurrent.UncheckedExecutionException
* clojure.lang.ExceptionInfo + anomaly data#2020-10-0815:52ghadiwhere the Ion has the ex-info as the cause (chained to the UEE)#2020-10-0815:52ghadimake sense? seems like a bug @marshall#2020-10-0815:53ghadito work around temporarily, you can do (-> e ex-cause ex-data) to unwrap the outer layer#2020-10-0815:53ghadiand access the data#2020-10-0815:53Filipe SilvaI can see that via indeed shows different things, as you say#2020-10-0815:54Filipe Silvabut the toplevel still shows data and cause for both situations#2020-10-0815:55Filipe SilvaI imagine that data would be returned from ex-data#2020-10-0815:56Filipe Silvalet me edit those code blocks to remove the trace, I think it's adding a lot of noise and not helping#2020-10-0815:57Filipe Silvadone#2020-10-0815:59alexmillerI think it's important to separate the exception object chain from the data that represents it (which may pull data from the root exception, not from the top exception)#2020-10-0816:00alexmillerThrowable->map for example pulls :cause, :data, :via from the root exception (deepest in the chain)#2020-10-0816:02Filipe Silva@alexmiller it's not clear to me what you mean by that in the current context#2020-10-0816:03Filipe Silva(besides the factual observation)#2020-10-0816:04Filipe Silvais it that you also think that the different behaviour between the repl+datomic binary and live system should be overcome by calling Throwable->map prior to extracting the data via ex-data?#2020-10-0816:05ghadiroot exception is the wrapped ex-info#2020-10-0816:06ghadiyou could do (-> e Throwable->map :data) to get at the :incorrect piece#2020-10-0816:06alexmillerI’m just saying that the data you’re seeing is consistent with what Ghadi is saying#2020-10-0816:06alexmillerEven though that may be confusing#2020-10-0816:07Filipe Silvaok I think I understand what you mean now#2020-10-0816:07Filipe Silvathank you for explaining#2020-10-0816:07ghadibut the inconsistency is a bug 🙂#2020-10-0816:19Filipe Silvacurrently deploying your 
workaround, and testing#2020-10-0816:20marshall@filipematossilva this is in an Ion correct?#2020-10-0816:49Filipe Silvathe workaround is fine enough for me, but maybe you'd like more information about this?#2020-10-0817:40marshallnope, that’s enough thanks; we’ll investigate#2020-10-0820:28marshallI’ve reproduced this behavior and will report it to the dev team#2020-10-0816:34Filipe Silva@marshall correct#2020-10-0816:35Filipe Silvain a handler-fn for http-direct#2020-10-0816:36Filipe Silva@ghadi I replaced my (ex-data e) with this fn
(defn error->error-data [e]
;; Workaround for a difference in the live datomic system where clojure exceptions
;; are wrapped in a com.google.common.util.concurrent.UncheckedExecutionException.
;; To get the ex-data on live, we must convert it to a map and access :data directly.
(or (ex-data e)
(-> e Throwable->map :data)))#2020-10-0816:36Filipe SilvaI can confirm this gets me the anomaly for the live system#2020-10-0816:37Filipe Silvaslightly different than on the repl still#2020-10-0816:37Filipe Silvalive:
{:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Invalid binding form: :entity/graph", :db/error :db.error/not-a-binding-form}
repl:
{:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message \"Invalid binding form: :entity/graph\", :db/error :db.error/not-a-binding-form, :dbs [{:database-id \"48e8dd4d-84bb-4216-a9d7-4b4d17867050\", :t 97901, :next-t 97902, :history false}]}#2020-10-0816:38Filipe Silvawhich makes sense, because in the live exception the :dbs property just isn't there#2020-10-0816:38Filipe Silvabut tbh that's the one that really shouldn't be exposed#2020-10-0816:38Filipe Silvaso that's fine enough for me#2020-10-0816:38Filipe Silvathank you#2020-10-0816:41Nassinis there an official method to move data from dev-local to cloud?#2020-10-0819:54ChicãoDoes anyone know how to get the t from a tx with (d/tx->t tx)? My tx is a map and I get this error in the conversion:
{:db-before
java.lang.ClassCastException: clojure.lang.PersistentArrayMap cannot be cast to java.lang.Number#2020-10-0819:59csmYou need to grab the tx from a datom in :tx-data , in your case 13194139534369. I think something like (-> result :tx-data first :tx) will give you it#2020-10-0820:03csmI think also (-> result :db-after :basisT) will give you your new t directly#2020-10-0820:09Chicãothks#2020-10-0823:07steveb8nQ: I want to store 3rd party oauth tokens in Datomic. Storing them as cleartext is not secure enough so I plan to use KMS to symmetrically encrypt them before storage. Has anyone done something like this before? If so, any advice? Or is there an alternative you would recommend?#2020-10-0823:11steveb8nOne alternative I am considering is DynamoDB#2020-10-0823:11ghadihow many oauth keys? how often they come in/change/expire?#2020-10-0823:13steveb8nI provide a multi-tenant SAAS so at least 1 set per tenant#2020-10-0823:14steveb8nAlso looking at AWS Secrets Manager for this. Clearly I’m in the discovery phase 🙂#2020-10-0823:14steveb8nbut appreciate any advice#2020-10-0823:29ghadiinteraction patterns within KMS are not supposed to be for encryption/decryption of fine granularity items#2020-10-0823:30ghadiusually you generate key material known as a "DEK" (Data Encryption Key) using KMS#2020-10-0823:30ghadithen you use the DEK to encrypt/decrypt a bunch of data#2020-10-0823:30steveb8nok. 
I can see I’m going down the wrong path with Datomic for this data#2020-10-0823:31ghadithat's not the conclusion for me#2020-10-0823:31steveb8nit looks like Secrets Manager with a local/client cache is the way to go#2020-10-0823:31ghadiyou talk to KMS when you want to encrypt/decrypt the DEK#2020-10-0823:31ghadiso when you boot up, you ask KMS to decrypt the DEK, then you use the DEK to decrypt fine-grained things in the application#2020-10-0823:32ghadiwhere to store it (Datomic / wherever) is orthogonal to how you manage keys#2020-10-0823:32ghadiif you talk to KMS every time you want to decrypt a token, you'll pay a fortune and add a ton of latency#2020-10-0823:33ghadithe oauth ciphertexts could very well be in datomic#2020-10-0823:33steveb8nif I am weighing pros/cons of DEK/Datomic vs Secrets Manager, what are the advantages of using Datomic?#2020-10-0823:34ghadisecrets manager is for service level secrets#2020-10-0823:34steveb8nit seems like the same design i.e. cached DEK to read/write from Datomic#2020-10-0823:34ghadiyou could store your DEK in Secrets manager#2020-10-0823:34steveb8nthe downside would be no excision c.f. Secrets Manager#2020-10-0823:34ghadiyou cannot put thousands of oauth tokens in secrets manager#2020-10-0823:35steveb8nexcision is desirable for this kind of data#2020-10-0823:35ghadiwell, depending on how rich you are#2020-10-0823:35steveb8nI’m not rolling in money 🙂#2020-10-0823:35ghadiif you need to excise, you can throw away a DEK#2020-10-0823:35steveb8nhmm. is 1 DEK per tenant practical?#2020-10-0823:36ghadiI would google keystretching, HMAC, hierarchical keys#2020-10-0823:36steveb8nseems like same scale problem#2020-10-0823:36ghadiyou can have a root DEK, then create per tenant DEKs using HMAC#2020-10-0823:36ghadideterministically#2020-10-0823:36steveb8nok. that’s an interesting idea. 
a mini DEK chain#2020-10-0823:37ghaditenantDEK = HMAC(rootDEK, tenantID)#2020-10-0823:37steveb8nthen the root is stored in Secrets Manager#2020-10-0823:37ghadiright#2020-10-0823:37steveb8nwhere would the tenant DEKs be stored?#2020-10-0823:37ghadineed to store an identifier so that you can rotate the DEK periodically#2020-10-0823:37ghadiyou don't store the tenant DEKs#2020-10-0823:37ghadiyou derive them on the fly with HMAC#2020-10-0823:38steveb8nok. I’ll start reading up on this. thank you!#2020-10-0823:38ghadisure. with HMAC you'll have to figure out a different excision scheme#2020-10-0823:38ghadiyou could throw away the ciphertext instead of the DEK#2020-10-0823:38ghadibecause you can't throw away the DEK (you can re-gen it!)#2020-10-0823:38ghadietc.#2020-10-0823:39ghadibut yeah db storage isn't your issue :)#2020-10-0823:39ghadikey mgmt is#2020-10-0823:39steveb8ninteresting. that means Datomic is no good for this i.e. no excision#2020-10-0823:39steveb8nor am I missing a step?#2020-10-0823:39ghadiare you using cloud or onprem?#2020-10-0823:40steveb8ncloud / prod topo#2020-10-0823:40ghadistay tuned#2020-10-0823:40steveb8nnow that’s just not fair 🙂#2020-10-0823:41steveb8nI will indeed#2020-10-0823:41ghadihow often does a tenant's 3p oauth token change?#2020-10-0823:41steveb8nIt’s a Salesforce OAuth so the refresh period is configurable I believe. would need to check#2020-10-0823:42steveb8ni.e. enterprise SAAS is why good design matters here#2020-10-0823:42steveb8nI’ll need to build a v1 of this in the coming weeks#2020-10-0823:50steveb8nnow that I think about it, I could deliver an interim solution without this for a couple of months and “stay tuned” for a better solution#2020-10-0823:50steveb8nI’ll hammock this…#2020-10-0823:50steveb8n🙏#2020-10-0911:52ziltiIs there a way to query for entities that don't have a certain attribute? 
Something like "show me all entities that have a :company/id but don't have a :company/owner"#2020-10-0911:53manutter51Check out missing? in the query docs#2020-10-0913:59souenzzo@zilti you can also do
[?e .....]
(not [?e :attr])
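To make the two suggestions above concrete, here is what full queries for "entities with a :company/id but no :company/owner" could look like, as a sketch against a hypothetical db value:

```clojure
;; Using the built-in missing? predicate, as suggested:
(d/q '[:find ?e
       :where
       [?e :company/id]
       [(missing? $ ?e :company/owner)]]
     db)

;; Equivalent using a not clause:
(d/q '[:find ?e
       :where
       [?e :company/id]
       (not [?e :company/owner])]
     db)
```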
#2020-10-0919:33souenzzoLooks like that datomic-peer do not respect socks proxy JVM props -DsocksProxyHost=127.0.0.1 -DsocksProxyPort=5000
Is it a known issue? slurp respects these settings both for DNS resolution and packages.
Datomic does not respect the proxy for name resolution;
I can't tell about packages#2020-10-0920:33ChicãoHi, I want to restore my backup db, so I ran the transactor
datomic-pro-0.9.5561 bin/transactor config/dev-transactor-template.properties
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:>, storing data in: data ...
System started datomic:>, storing data in: data
and I ran this command and I got an error
datomic-pro-0.9.5561 bin/datomic restore-db backup.tgz datomic:
java.lang.IllegalArgumentException: :storage/invalid-uri Unsupported protocol:
at datomic.error$arg.invokeStatic(error.clj:57)
at datomic.error$arg.invoke(error.clj:52)
at datomic.error$arg.invokeStatic(error.clj:55)
at datomic.error$arg.invoke(error.clj:52)
at datomic.backup$fn__19707.invokeStatic(backup.clj:306)
at datomic.backup$fn__19707.invoke(backup.clj:304)
at clojure.lang.MultiFn.invoke(MultiFn.java:233)
Can someone help me?#2020-10-0920:34marshallyour backup (source) needs to be an unzipped backup, not a tar#2020-10-0920:35ChicãoI got the same error when I unzipped#2020-10-0920:35marshalluntarred/unzipped#2020-10-0920:35marshallit should be a directory#2020-10-0920:36marshalla top level dir with roots and values dirs inside of it#2020-10-0920:38Chicãobackup ls
owner roots values
#2020-10-0920:38marshallah#2020-10-0920:38marshallyou need to make a URI for it#2020-10-0920:38marshallsorry#2020-10-0920:38marshallit will be like: file:///User/Home/backup/#2020-10-0920:39marshallhttps://docs.datomic.com/on-prem/backup.html#uri-syntax#2020-10-0920:42Chicãoit worked#2020-10-0920:42Chicãothanks !#2020-10-0920:42marshallno problem#2020-10-0922:35ziltiOkay, I don't get it... or-join works completely different from what I expect. When there's no result fulfilling any of the clauses in or-join it will match everything. Is that on purpose? How can I avoid that?#2020-10-0922:53ziltiI thought this:
(d/q '[:find ?eid .
       :in $ ?comp-domain ?comp-name
       :where
       (or-join [?eid]
         [?eid :company/name ?comp-name]
         [?eid :company/domain ?comp-domain])]
     db comp-domain (:company/name data))
Would be equivalent to this:
(or (d/q '[:find ?eid .
           :in $ ?comp-domain
           :where
           [?eid :company/domain ?comp-domain]]
         db comp-domain)
    (d/q '[:find ?eid .
           :in $ ?comp-name
           :where
           [?eid :company/name ?comp-name]]
         db (:company/name data)))
But it is not.#2020-10-0922:57Lennart BuitIn the first query, you are getting all ?eid s because the or-join you specify does not unify with ?comp-name nor ?comp-domain. So, practically, the ?comp-domain/`?comp-name` in your :in clause are not the same as the ones you use in the or branches of your or-join#2020-10-0922:58Lennart BuitSo your first query now says “Give me al entity ids of entities that have either a name, or a domain”, the bindings in your :in make no difference#2020-10-0922:59Lennart BuitIf you change (or-join [?eid] ...) to (or-join [?eid ?comp-domain ?comp-name] ...), do you get what you want?#2020-10-0923:00ziltiI'm trying...#2020-10-0923:01ziltiYes, that gives me an empty result, which is correct in this case#2020-10-0923:02ziltiAnd it works for a valid binding too. Thanks! I misinterpreted how that first vector works in or-join, I thought that is to declare the common variable.#2020-10-0923:03Lennart BuitIt declares what variables from outside the or-join to unify with ^^#2020-10-0923:04zilti🙂#2020-10-0923:03steveb8nQ: what’s the best way to query for the most recently created entity (with other conditions) in Datalog?#2020-10-0923:04steveb8nI can include a :where [?e :some/attr _ ?t] and then sort all results by ?t but it feels like there must be some way to use max to do this#2020-10-0923:04Lennart BuitHere are some interesting time rules: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/time-rules.clj maybe that helps ^^#2020-10-0923:06steveb8nperfect! thank you 🙂#2020-10-0923:08Lennart BuitNot sure if your use is exactly in there, but I often use it as a reference if I want to find something history related 🙂#2020-10-0923:08steveb8nit’s a good start. I’ll be able to make it work from this#2020-10-0923:09Lennart Buithaha, teach a man to fish, and all 🙂#2020-10-0923:09steveb8nexactly. you supplied the bait#2020-10-1111:03joshkhi'm curious if anyone here makes use of recursion in their queries? 
i tend to find myself thinking "ah, recursion can solve this problem!" but then later i find myself implementing some tricky manipulations outside of the query to get back the results i want. for example, if i want to find the top level post of a nested post, then i have to walk the resulting tree to its maximum depth, which of course is pretty quick, but does not feel elegant.
; the "Third Level" post
(d/q '{:find [(pull ?post [:post/id :post/text {:post/_posts ...}])]
:in [$]
:where [[?post :post/id #uuid"40b8151d-d5f4-45a6-b78c-67655cdf1583"]]}
db)
=>
; and the top post being the most nested
[[{:post/id #uuid"40b8151d-d5f4-45a6-b78c-67655cdf1583",
:post/text "Third Level",
:post/_posts {:post/id #uuid"9209c1c6-d553-4632-848a-d9929fd7652a",
:post/text "Second Level",
:post/_posts {:post/id #uuid"0b15be5d-84f2-45d1-8b44-b9928d67f388",
:post/text "Top Level"}}}]]
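The "walk the resulting tree" step mentioned above can be a tiny pure function. A sketch assuming the pull-result shape shown (a single map per :post/_posts reverse ref; with cardinality-many parents pull can return a vector instead, which this ignores):

```clojure
(defn top-post
  "Follow the reverse ref :post/_posts upward until the post
  with no parent, i.e. the top-level post."
  [post]
  (if-let [parent (:post/_posts post)]
    (recur parent)
    post))

;; Walking a result shaped like the example above:
(top-post {:post/text "Third Level"
           :post/_posts {:post/text "Second Level"
                         :post/_posts {:post/text "Top Level"}}})
;; => {:post/text "Top Level"}
```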
#2020-10-1111:42schmeeyou can also implement recursion with rules: https://docs.datomic.com/cloud/query/query-data-reference.html#rules#2020-10-1111:43schmeeit might be easier to write a rule to get the id of the nested post, and then do a pull on that id#2020-10-1111:43schmeeinstead of pulling the whole thing#2020-10-1112:11joshkhhmm. i'm not sure how i would write a recursive rule though, e.g. one that unifies on a top level post in the example above#2020-10-1112:12joshkhthen again i haven't given it much thought. i'll play around and see what i can come up with. thanks for the advice 🙂#2020-10-1112:13teodorluIf you use recursion without a recursion limit, you open yourself to an infinite loop. Perhaps it makes sense to have a hard-coded recursion limit regardless.#2020-10-1112:14joshkhoh yes, i've been down that road before#2020-10-1209:46steveb8nI have a tree read at the core of my app. It's was tricky to make it perform well and it's still not super fast. I'm using recursive rules. My summary: it's possible but non-trivial#2020-10-1210:07joshkhdoes Datomic Cloud have a REST API similar to on-prem? https://docs.datomic.com/on-prem/rest.html#2020-10-1215:41vnczI am almost sure it does not; you can build one with Java/Clojure that would internally use the client API#2020-10-1216:04joshkhi haven't looked through the code, but this NodeJS library claims to be cloud compatible, so perhaps there is an accessible rest API? 
https://github.com/csm/datomic-client-js#2020-10-1212:49vnczAre EntityIDs in Datomic something we can use as "user space" identifiers or shall we use our own?#2020-10-1212:51marshallyou should use domain identifiers#2020-10-1212:51marshallthere are a variety of reasons not to expose entity IDs to your applications layers as user-space identifiers#2020-10-1212:52marshallif there isn’t a straightforward choice for an identifier in your particular domain, you can always generate a UUID for entities and just use that#2020-10-1213:43vnczIs there a way to get the created id without having to re-query? I can see there is a tempId field but it's empty#2020-10-1213:43vnczThis is the current schema I am using#2020-10-1213:53vnczOh ok my mistake, it is not autogenerated, I'm still responsible for generating it#2020-10-1214:04marshallright, and the :tempids map will return the mapping between actual assigned entity IDs and the tempids you supply#2020-10-1214:04marshallwhen/if that’s relevant#2020-10-1214:15vnczOk, I guess I'll have to use a regular java.uuid to get my number; was hoping Datomic would handle the ids for me somehow but that ain't a problem#2020-10-1212:52vnczFair. Thanks.#2020-10-1212:53marshall👍#2020-10-1215:42vnczWhat is the best practice about the schema? Do you usually transact it every time the application starts? Or only when running a "migration"?#2020-10-1215:55joshkhcan i use :db/cas to swap an attribute value for one that is missing to one that is present (and vice versa), or is it only compatible with swapping two "non-nil" values?#2020-10-1215:59joshkh(d/transact conn {:tx-data [[:db/cas 12345 :reader/nickname nil "joshkh"]]})
=>
entity, attribute, and new-value must be specified
i suspect i'll have to roll out my own transactor function for that?#2020-10-1216:53benoitThe old value can be nil (per doc): "You can use nil for the old value to specify that the new value should be asserted only if no value currently exists."#2020-10-1309:52joshkhhuh, thanks for pointing that out. i thought i tried that... sure enough nil to non-nil works. thanks!#2020-10-1218:10ChicãoSomeone can help me? I want to restore my backup but I've got this problem
java.lang.IllegalArgumentException: :restore/collision The name 'db-dev' is already in use by a different database
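For reference, :restore/collision means the target database name already holds a different database; per the Datomic restore docs, one way around it is restoring the backup under a fresh name. A sketch with illustrative paths and URIs:

```shell
# Restore the backup into a new database name instead of the
# existing 'db-dev' (backup path and target URI are hypothetical):
bin/datomic restore-db \
  file:///path/to/backup \
  datomic:dev://localhost:4334/db-dev-restored
```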
#2020-10-1218:17ChicãoI deleted folder data from datomic/#2020-10-1303:41vnczWhat error is this? Why am I only limited to use find-rel ?#2020-10-1305:08kennyDatomic client API doesn't support all the find specs that the peer API supports. See https://docs.datomic.com/cloud/query/query-data-reference.html#find-specs for what is supported.#2020-10-1312:45vncz@U083D6HK9 So shall I use the peer server in case I'd like to do such query?#2020-10-1303:41vnczThis is the query I am trying to run#2020-10-1312:57marshall@vincenz.chianese change your find to: :find ?name ?surname#2020-10-1312:57marshallthen you can manipulate the collection(s) returned in your client application if necessary#2020-10-1313:29vnczYeah, I was trying to avoid such boilerplate per each query @marshall#2020-10-1313:29vnczBecause I'm receiving something like [[{"name": "name", "surname": "surname"}]]#2020-10-1313:30vnczThat's kind of weird as structure (although I am sure there's a reason for that#2020-10-1313:33marshallyou could pull in the find#2020-10-1313:34marshalli.e. :find (pull ?id [:name :surname])#2020-10-1313:43vnczI tried that but I think that still gave me a weird structured result#2020-10-1313:50vnczIndeed: [[{"person/name":"Porcesco","person/surname":"Gerbone"}]]#2020-10-1313:51vnczThat's the same result I'm regularly getting using the regular query#2020-10-1319:07SvenAfter recently changing a backend stack from
AWS Appsync -> AWS Lambda -> Datomic ions
to AWS Appsync -> HTTP direct -> API Gateway -> Datomic ions, I am now getting errors like
Syntax error compiling at (clojure/data/xml/event.clj:1:1)
java.lang.IllegalAccessError: xml-str does not exist
Syntax error compiling at (clojure/data/xml/impl.clj:66:12).
java.lang.IllegalAccessError: element-nss does not exist
Syntax error compiling at (******/aws/cognito.clj:20:38).
java.lang.RuntimeException: No such var: aws/invoke
They happen every now and then with seemingly no way to reliably reproduce them and never happened when calling ions via Lambdas.
I have updated ion, ion-dev, client api, datomic storage and compute to latest as of current date with no effect.
Does anyone have ideas where to look for hints or what could be a cause for such behaviour?#2020-10-1319:27SvenThere is one major change compared to the Lambda configuration - I am now resolving functions in other namespaces based on routes. Could this have any effect and if so then why? There are no errors with resolving these functions tough.#2020-10-1319:29alexmillerthose all look like they could be a case of not having the expected version of a dependency (OR that it's asynchronously loading and you're seeing partial state)#2020-10-1319:29alexmillerwhen you resolve functions, how are you doing it?#2020-10-1319:29alexmillerI would recommend using requiring-resolve#2020-10-1319:31SvenI parse a string to a symbol and then resolve it e.g. (when-let [f (resolve 'app.ions.list-something/list-something)] (f args))#2020-10-1319:32alexmillerso you're not ever dynamically loading namespaces?#2020-10-1319:33alexmillerI mean, where does 'app.ions.list-something coming from? is that a dynamic value?#2020-10-1319:35SvenI get a string from a route e.g. list-something and then I convert it into a symbol app.ions.list-something/list-something and then resolve it. Just like in the datomi cion starter example https://github.com/Datomic/ion-starter/blob/7d2a6e0bda89ac3bb4756501c3ada3d1fbc80c1a/src/datomic/ion/starter.clj#L26#2020-10-1319:40SvenFixed my examples 😊. I guess I’ll try requiring-resolve .#2020-10-1319:42Svenand I am also requiring the namespace dynamically just like in that example (-> ion-sym namespace symbol require)#2020-10-1319:46SvenThis is my http direct handler fn
(defn handler
[{:keys [uri] :as req}]
(try
(let [arg-map (-> req parse-request validate authenticate)
{:keys [ion-sym]} arg-map]
(-> ion-sym namespace symbol require)
(let [ion-fn (resolve ion-sym)]
(when-not ion-fn
(throw (ex-info ...)))
(ion-fn arg-map)))
(catch ....)))#2020-10-1320:05alexmilleryeah, I would strongly recommend requiring-resolve - it uses a shared loading lock#2020-10-1320:48SvenI changed resolve -> requiring-resolve . The issue still persists with the exception that now only specific namespaces fail and in almost 100% of cases. What makes them different is that they implement cognitect aws api and fail at cognitect/aws/client.clj 😕#2020-10-1320:57alexmillerwhat does "they implement cognitect `aws api` " mean? they == what? implement == what?#2020-10-1320:58alexmilleraws api does do some dynamic loading but should be doing safer things already#2020-10-1321:16SvenIf I resolve and execute a symbol that uses cognitect.aws.client.api to invoke an operation on an AWS service then I always get Syntax error compiling at (cognitect/aws/client.clj…).
I added (:require [cognitect.aws.client.api]) to the handler namespace and no longer seem to get the "Syntax error compiling at" errors. I guess it’s a fix for now.#2020-10-1321:35alexmilleryeah, don't know off the top of my head but that would have been my suggestion#2020-10-1322:53Brandon OlivierDoes the Datomic client lib support fulltext ?#2020-10-1400:19joshkhis it normal for SOCKS4 tunnel failed, connection closed to occur when running a query in the REPL during a :deploy?#2020-10-1402:51steveb8nQ: is there a metric somewhere that shows the hit-rate for cached queries? I’d like to know if I accidentally add queries that are not cached in their parsed state#2020-10-1412:41motformI have a question about using :db/ident as an enum. In my model, a :session/type is modelled as an enum of :db/idents, which works great when writing queries. However, there are times when I want to return the :session/type to be consumed as a value like :type/online, but I get the datomic id instead. Is there a way to get idents as values or should I just use a keyword instead?#2020-10-1413:38Lennart BuitYou can just pull them: [{:session/type [:db/ident]} ...rest]#2020-10-1413:42Lennart BuitYou can also just join them in your queries, say because you are finding tuples of entities and statuses:
(d/q '[:find ?e ?type-ident
:in $ ?e
:where [?e :session/type ?type]
[?type :db/ident ?type-ident]
...)#2020-10-1413:42Lennart BuitDoes that help ^^?#2020-10-1413:43motformYes, that was exactly was I was wondering about. Thank you!#2020-10-1416:09joshkhjust one thing to note, that will only return entities that have a :session/type value. i made a similar post about it here: https://forum.datomic.com/t/enumerated-values-in-tuples-are-only-eids/1644#2020-10-1416:11joshkhno response in 11 days, so if you would find an answer to the question useful then perhaps give it a bump or a like 🙂#2020-10-1417:41souenzzo@love.lagerkvist you can do (d/pull db [(:session/type :xform :db/ident)] id) => {:session/type :type/online}
Using #eql libraries you can programmatically add it to all your refs
(defn add-ident-xform
[ident? query]
(->> query
eqld/query->ast
(eql/transduce-children (map (fn [{:keys [dispatch-key] :as node}]
(if (ident? dispatch-key)
(assoc-in node [:params :xform] :db/ident)
node))))
eqld/ast->query))
(add-ident-xform
#{:session/type}
'[:foo
{:bar [:session/type]}])
;; => [:foo {:bar [(:session/type :xform :db/ident)]}]
But as a #fulcro and #eql developer, I like to return :session/type {:db/ident :type/online} because it allows you to include useful data for the frontend, like :session/type {:db/ident :type/online :label "OnLine" :icon "green-dot"}#2020-10-1423:57onetomWe had the impression that sometimes the ion code we deploy using the datomic CLI command takes a while (a few minutes) to actually replace the previously running version.
We are using unreproducible deployments into a solo topology.
The issue is with a web-ion GET request, which is called through an APIGW, using their new HTTP API (instead of RESTful API) and integrating the datomic lambda as a proxy, using the v1.0 payload format.
All versions of tools and libs are the latest (as of yesterday).
Has anyone experienced anything like this?#2020-10-1423:59onetomThe :deploy-status reports SUCCESS for both keys in its response of course.#2020-10-1500:17steveb8n@onetom not sure what problem you are describing here. one useful tool is to watch the deploy in the AWS console in “Code Deploy”. That can provide useful info#2020-10-1500:18onetomthanks!
i never looked at that console yet, only the various cloudwatch logs.#2020-10-1500:22steveb8nsure. are you also enabling api-gw level logging (as well as lambda/ion logs)? I have debugged many issues with that level of detail#2020-10-1501:45onetomis there a way to deploy ions from a clojure repl?
i tried this:
(datomic.ion.dev/-main
(pr-str
{:op :push
:uname "grace"
:region "ap-southeast-1"
:creds-profile "gini-dev"}))
but it quits my repl after executing the operation.
it would be nice to just expose the function which gets called with that map provided as a command line argument and return the printed result map, so i can just grab the :deploy-command (or rather just the map itself, which describes the next operation)#2020-10-1504:40steveb8nhttps://github.com/jacobobryant/trident/blob/master/src/trident/ion_dev/deploy.clj#2020-10-1504:40steveb8nhttps://gist.github.com/jacobobryant/9c13f4cd692ff69d8f87b0d872aeb64e#2020-10-1513:14joshkhhere’s what i use, which lets me deploy from the command line to a provided group name via an alias:
$ clj -Adeploy-to-aws some-group-name
https://gist.github.com/joshkh/3455a6905517a814b4623d01925baf0e#2020-10-1900:20solussdThere is. 🙂
You can use the functions push and deploy in the namespace datomic.ion.dev
Here's some code from my dev ns on a project. I think you can visually extract the important bits and ignore the project-specific ones:
(defn deploy-unrepro-build!
([]
(deploy-unrepro-build! nil))
([system-config-overrides]
(deploy-unrepro-build! system-config-overrides
(str "dev-" (java.util.UUID/randomUUID))))
([system-config-overrides uname]
(let [system-config (system/get-config system-config-overrides)]
(ion/push {:uname uname})
(ion/deploy {:group (:pawlytics/deployment-group system-config)
:uname uname}))))
(defn deploy-rev-build!
([rev] (deploy-rev-build! rev nil))
([rev system-config-overrides]
(let [system-config (system/get-config system-config-overrides)]
(ion/push {:rev rev})
(ion/deploy {:group (:pawlytics/deployment-group system-config)
:rev rev}))))
(defn deploy-current-rev-build!
([]
(deploy-current-rev-build! nil))
([system-config-overrides]
(deploy-rev-build! (-> (shell/sh "git" "rev-parse" "HEAD")
:out
str/trim-newline)
system-config-overrides)))
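Building on the push/deploy calls above, here is a minimal sketch of a REPL deploy that also waits for the deploy to finish. The datomic.ion.dev deploy-status function and the :execution-arn / :deploy-status keys are my assumptions about that namespace's return shapes, so treat this as untested:

```clojure
(require '[datomic.ion.dev :as ion])

;; Push, deploy, then poll deploy-status until it leaves "RUNNING".
;; `group` is your compute group name; `uname` an unreproducible build name.
(defn deploy-and-wait! [group uname]
  (ion/push {:uname uname})
  (let [{:keys [execution-arn]} (ion/deploy {:group group :uname uname})]
    (loop []
      (let [{:keys [deploy-status]} (ion/deploy-status {:execution-arn execution-arn})]
        (if (= "RUNNING" deploy-status)
          (do (Thread/sleep 5000) (recur))
          deploy-status)))))
```

Unlike datomic.ion.dev/-main, none of this calls System/exit, so the REPL survives the deploy.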
#2020-10-1900:21solussdwarning though: repro builds don't check that the working directory is clean like they do using the clj command.#2020-10-1911:40onetomthanks everyone!
I will give these a try!#2020-10-1508:06ErweeHey, coming from a typical app, if you have heavy read operations, you could spin up a sql read only replica and point your data guys there, safely knowing you won’t topple your prod db.
How is this generally solved in the on prem postgresql storage datomic world? Something like memcache won’t offer much value; it’s one-off huge queries being run.#2020-10-1518:56favilaHow big is your table in bytes and what is your current read/write to Postgres? The load on storage is purely IO—Postgres is basically used as a dumb key-value store. It seems very unlikely (but not impossible) that this is going to be a problem.
I appreciate it!#2020-10-1521:40kennyOnly other thing that might help is posting a snippet of what you're doing to get the error.#2020-10-1521:41kennyFwiw, this is what my dev-local looks like from -Stree
com.datomic/dev-local 0.9.203
com.google.errorprone/error_prone_annotations 2.3.4
com.datomic/client-api 0.8.54
com.google.guava/listenablefuture 9999.0-empty-to-avoid-conflict-with-guava
com.datomic/client 0.8.111
com.cognitect/http-client 0.1.105
org.eclipse.jetty/jetty-client 9.4.27.v20200227
org.checkerframework/checker-compat-qual 2.5.5
com.google.guava/failureaccess 1.0.1
com.google.guava/guava 28.2-android
com.datomic/client-impl-shared 0.8.80
com.cognitect/hmac-authn 0.1.195
com.google.j2objc/j2objc-annotations 1.3
com.datomic/query-support 0.8.27
org.fressian/fressian 0.6.5
com.google.code.findbugs/jsr305 3.0.2
org.ow2.asm/asm-all 4.2#2020-10-1521:41kennyI don't see any dep in my -Stree for com.datomic/client-impl-local#2020-10-1521:45donyormYeah I wonder why it's looking for that#2020-10-1521:46donyormoh I used the wrong type of :server-type in the config. I did :local instead of :dev-local . That would do it#2020-10-1520:50Lennart BuitLittle data modeling question: Say that I have a category with tags, and these tags are component/many of this category. Now, I’d like to add a composite (tuple) key to this tag entity that says [tagName, category] is unique, but there is no explicit relation from tag -> category. Do I have to reverse this relation / lose the component-ness to add this composite key?#2020-10-1522:03Brandon OlivierI’m trying to do a fulltext search on my Datomic instance, but I get this error:
The following forms do not name predicates or fns: (fulltext)
Anybody know why that might be? I’m following straight from what’s in the docs#2020-10-1522:15Lennart BuitAre you using on prem, or cloud?#2020-10-1615:19Brandon Olivier@UDF11HLKC This is local. It should be the on-prem version, but I’m connecting via the datomic api client.#2020-10-1615:20Lennart BuitIirc you can’t use fulltext from the client api#2020-10-1618:30Brandon OlivierThat was my suspicion, but I couldn't confirm. So I need to convert my application to use the peer server internally?#2020-10-1618:30Brandon Olivieror I guess "should", not "need"#2020-10-1619:34Lennart BuitThat depends on what you’d like to achieve. If your goal is to move to the cloud at some point, you may want to consider sticking with the client API and instead using some other store for your full text needs.
Here is a thread on the datomic forums about it: https://forum.datomic.com/t/datomic-fulltext-search-equivalent/874/6#2020-10-1619:34Lennart BuitI can’t decide what your architecture should look like, but this is advice I’ve seen before 🙂#2020-10-1616:29roninhackerWe're running with datomic on the backend and datascript on the front end. I'd like to just mirror datomic ids on the client side, but datascript uses longs for its ids, which means some datomic ids don't fit. Are there datomic settings that can constrain the id space? (if not I'll just write some transformation glue on the FE)#2020-10-1618:30favilaDatomic also uses longs....#2020-10-1618:32favilaDo you mean doubles? Are you worried about the 52 bit integer representation limit?#2020-10-1622:24roninhackerah, yes, I guess it's not strictly datatype I'm worried about, but datascript's id limit of 2147483647#2020-10-1622:26roninhacker(which seems to be 2^31 -1 )#2020-10-1623:17favilaD/tx->t will give you a 42 bit unique id per entity, which is extremely likely to be < 32 bits u less you have a huge number of entities or transactions. Maybe that’s useful info for some clever encoding scheme#2020-10-1623:44roninhackerhmmm, thank you#2020-10-1716:31kennyI have a query that looks like this .
'[:find ?r
:in $ ?c [?cur-r ...]
:where
[?c ::rs ?r]]
I'd like to restrict ?r to be all ?r's that are not in ?cur-r. Is there a way to do this?#2020-10-1716:38kennyI could make ?cur-r a data source but that requires me to have ?cur-r db ids for cur-r. Currently only have a list of lookup refs.
'[:find ?r
:in $ $cur-r ?c
:where
[?c ::rs ?r]
(not [$cur-r ?r])]
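One workaround when only lookup refs are at hand (and the client api offers no entid) is to resolve them to eids with a first query, then exclude those eids; a sketch, untested, reusing the ::rs attribute from the query above:

```clojure
;; cur-lookup-refs is a collection like [[:r/id "a"] [:r/id "b"]]
(let [cur-eids (->> (d/q {:query '[:find ?e
                                   :in $ [[?a ?v]]
                                   :where [?e ?a ?v]]
                          :args [db cur-lookup-refs]})
                    (into #{} (map first)))
      rs       (d/q {:query '[:find ?r
                              :in $ ?c
                              :where [?c ::rs ?r]]
                     :args [db c]})]
  ;; drop the ?r's whose eids were named by the lookup refs
  (remove (comp cur-eids first) rs))
```

Two round trips instead of one, but each query stays simple and there is no reliance on peer-only built-ins.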
#2020-10-1716:41Lennart BuitMaybe (not [(identity ?cur-r) ?r]) works#2020-10-1716:42kennyReturns all ?r's#2020-10-1716:43kenny?cur-r is passed in as a list of lookup refs#2020-10-1716:46kennyI may just have to convert the ?cur-r lookup refs to eids. Not a big deal but it seems like there should be a way to make this happen in a single query 🙂#2020-10-1718:11favila(not [(datomic.api/entid $ ?cur-r) ?r])#2020-10-1718:12kennyOoo, nice! Is datomic.api documented somewhere?#2020-10-1718:12favilaIt’s the peer api#2020-10-1718:12kennyOh, right - I'm on cloud.#2020-10-1718:13favilaIt might still be there#2020-10-1718:13kennyPerhaps. Not documented though: https://docs.datomic.com/client-api/datomic.client.api.html#2020-10-1718:13favilaIt would just be on the server’s classpath#2020-10-1718:14kennyYeah. Curious if that's able to be depended on though haha.#2020-10-1718:18favilaIf you know they are lookup refs you can decompose and resolve the ref#2020-10-1718:19favilaIf not you could reimplement entid as a rule#2020-10-1718:20favila:in [[?cur-a ?cur-v]] :where [?cur-r ?cur-a ?cur-v]#2020-10-1718:24kennyAh, that seems like it’d work! Will try it in a bit. Thanks @U09R86PA4#2020-10-1718:24favilaAs a rule [[(entid [?x] ?eid)[(vector? ?x)]...] [(entid [?x] ?eid) [(int? ?x)][(identity ?x) ?eid]] and a keyword case looking up ident#2020-10-1718:25favilaSorry I can’t type out the whole thing, on a phone#2020-10-1719:53ChicãoHi, does anyone know how to solve this problem?
bin/transactor config/dev-transactor-template.properties
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver
...
System started datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver
Terminating process - Lifecycle thread failed
java.util.concurrent.ExecutionException: org.postgresql.util.PSQLException: ERROR: relation "datomic_kvs" does not exist
Position: 31
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at clojure.core$deref_future.invokeStatic(core.clj:2300)
at clojure.core$future_call$reify__8454.deref(core.clj:6974)
at clojure.core$deref.invokeStatic(core.clj:2320)
at clojure.core$deref.invoke(core.clj:2306)
at datomic.lifecycle_ext$standby_loop.invokeStatic(lifecycle_ext.clj:42)
at datomic.lifecycle_ext$standby_loop.invoke(lifecycle_ext.clj:40)
at clojure.lang.Var.invoke(Var.java:384)
at datomic.lifecycle$start$fn__28718.invoke(lifecycle.clj:73)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.postgresql.util.PSQLException: ERROR: relation "datomic_kvs" does not exist
Position: 31#2020-10-1719:54Chicãothis is my config/properties...
protocol=sql
host=localhost
port=4334
sql-url=jdbc:
sql-user=datomic
sql-password=datomic
sql-driver-class=org.postgresql.Driver
#2020-10-1720:04ChicãoI solved this problem by running
CREATE TABLE datomic_kvs (id text NOT NULL, rev integer, map text, val bytea, CONSTRAINT pk_id PRIMARY KEY (id)) WITH (OIDS = FALSE);
ALTER TABLE datomic_kvs OWNER TO datomic; GRANT ALL ON TABLE datomic_kvs TO datomic; GRANT ALL ON TABLE datomic_kvs TO public;
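For reference, the on-prem distribution ships ready-made Postgres setup scripts under bin/sql, so the DDL above does not need to be hand-written. A sketch (assuming psql is on the PATH and you are in the root of the unzipped datomic-pro directory) of running them from a REPL:

```clojure
(require '[clojure.java.shell :as sh])

;; run from the root of the unzipped datomic-pro distribution
(sh/sh "psql" "-U" "postgres" "-f" "bin/sql/postgres-db.sql")                   ; creates the datomic database
(sh/sh "psql" "-U" "postgres" "-d" "datomic" "-f" "bin/sql/postgres-table.sql") ; creates datomic_kvs
(sh/sh "psql" "-U" "postgres" "-d" "datomic" "-f" "bin/sql/postgres-user.sql")  ; creates the datomic user
```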
#2020-10-1721:34cjmurphyhttps://docs.datomic.com/on-prem/storage.html#sql-database#2020-10-1721:34cjmurphyI can see that's where creating that table is documented.#2020-10-1805:54zhuxun2In datomic, is there a way to enforce that a relation be one-to-many as opposed to many-to-many? For example, setting :folder/files to have :db.cardinality/many does not prohibit the co-existence of [x :folder/file k] and [y :folder/file k].#2020-10-1809:44joshkhif i'm reading that correctly, then file k can only exist in one :folder/files relationship, correct?
you could put a :db.unique/value constraints on :folder/files
(d/transact (client/get-conn)
{:tx-data [#:db{:ident :some/id
:valueType :db.type/string
:cardinality :db.cardinality/one
:unique :db.unique/identity}
#:db{:ident :folder/files
:valueType :db.type/ref
:cardinality :db.cardinality/many
:unique :db.unique/value}]})
here is a folder with two files:
(d/transact (client/get-conn)
{:tx-data [{:db/id "file1"
:some/id "file1"}
{:db/id "file2"
:some/id "file2"}
; file1 and file2 are part of folder1
{:db/id "folder1"
:some/id "folder1"
:folder/files ["file1" "file2"]}]})
=> Success
then adding file1, which is already claimed by folder1, throws an exception when adding it to a new folder2:
(d/transact (client/get-conn)
{:tx-data [{:db/id "folder2"
:some/id "folder2"
:folder/files [{:some/id "file1"}]}]})
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Unique conflict: :folder/files, value: 30570821298684032 already held by: 44692948645838978 asserted for: 69889357107953795#2020-10-1909:40schmeecan I write a shortest path query in Datomic, e.g can I determine if it is possible to navigate from Entity A to Entity B via some reference attribute?#2020-10-1917:03favilaHere is a trivial example:#2020-10-1917:03favila'[[(path-exists? ?e1 ?a ?e2)
[?e1 ?a ?e2]]
[(path-exists? ?e1 ?a ?e2)
[?e1 ?a ?e-mid]
[(!= ?e2 ?e-mid)]
[(path-exists? ?e-mid ?a ?e2)]]]
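A sketch of invoking such a rule set, passed through the % input; :node/next is a hypothetical ref attribute and this is untested:

```clojure
;; same shape as the rule above: one terminal impl, one recursive impl
(def rules
  '[[(path-exists? ?e1 ?a ?e2)
     [?e1 ?a ?e2]]
    [(path-exists? ?e1 ?a ?e2)
     [?e1 ?a ?e-mid]
     [(!= ?e2 ?e-mid)]
     (path-exists? ?e-mid ?a ?e2)]])

;; every entity reachable from start-eid over :node/next
(d/q '[:find ?reachable
       :in $ % ?start
       :where (path-exists? ?start :node/next ?reachable)]
     db rules start-eid)
```

To test reachability of one specific target, bind it too (:in $ % ?start ?target) and use (path-exists? ?start :node/next ?target); a non-empty result means a path exists.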
#2020-10-1917:04favilathe general pattern with recursive rules is to define the rule multiple times, and have one that is terminal, and the rest recursive, and (generally but not required) the rule impls match disjoint sets.#2020-10-1917:05favilaunfortunately there’s no “cut” to stop evaluation early. I’m pretty sure this example will exhaustively discover every possible path, even though any one will do. However, it may discover them in parallel.#2020-10-1917:13schmeethank you for the detailed example! 🙂#2020-10-1917:18favilaNote this example only searches refs in a forward direction. With two additional implementations, it could search backwards also#2020-10-1909:41schmeeI’ve looked at all the examples of recursive rules that I could find and they all “hardcode” the depth of the search (such as the MBrainz example: https://github.com/Datomic/mbrainz-sample/blob/master/src/clj/datomic/samples/mbrainz/rules.clj#L37)#2020-10-1916:31kennyI would’ve expected the below query to return all txes where ?tx is not in ?ignore-tx. I actually get all txes, as if the not is completely ignored. ?ignore-tx is passed in as a set of tx ids. Why would this happen?
'[:find ?t ?status ?tx ?added
:in $ [?ignore-tx ...]
:where
[?t ::task/status ?status ?tx ?added]
(not [(identity ?ignore-tx) ?tx])]
#2020-10-1916:38faviladatalog comparisons are not “type”-aware. are all ?ignore-tx actually tx longs and not some other representation?#2020-10-1916:38kennyYes
(type (first ignore-txes))
=> java.lang.Long
#2020-10-1916:38favilaare they T or TX?#2020-10-1916:39kennytx#2020-10-1916:39favila(both are longs, but TXs have partition bits)#2020-10-1916:39faviladoes this behave differently? [(!= ?ignore-tx ?tx)]#2020-10-1916:39favila(instead of (not …)#2020-10-1916:40kennySame result#2020-10-1916:41favilaprint (first ignore-txes) ?#2020-10-1916:41kenny(first ignore-txes)
=> 13194142112981
#2020-10-1916:43favilaand you’re actually sure this is in the result set? You can test with `
'[:find ?t ?status ?tx ?added
:in $ [?tx ...]
:where
[?t ::task/status ?status ?tx ?added]
]
#2020-10-1916:44kenny(d/q {:query '[:find ?t ?status ?tx ?added
:in $ [?ignore-tx ...]
:where
[?t ::task/status ?status ?tx ?added]
[(!= ?ignore-tx ?tx)]
[?tx :audit/user-id ?user]]
:args [(d/history (d/db conn))
#{13194142035321 13194142112981}]
:limit 10000})
=>
[[606930421025569 :cs.model.task/status-in-progress 13194142112981 false]
[606930421025569 :cs.model.task/status-in-progress 13194142035321 true]
[606930421025569 :cs.model.task/status-open 13194142112981 true]
[606930421025569 :cs.model.task/status-open 13194142035321 false]]#2020-10-1916:46kennyIdentical result with (not [(identity ?ignore-tx) ?tx]).#2020-10-1916:49favilaThat is really weird. I can’t reproduce with a toy example#2020-10-1916:49favila(d/q '[:find ?e ?stat ?tx ?op
:in $ [?ignore-tx ...]
:where
[?e :status ?stat ?tx ?op]
[(!= ?ignore-tx ?tx)]
]
[[1 :status :foo 100 true]
[1 :status :bar 100 false]]
#{100}
)#2020-10-1916:49favila=> #{}#2020-10-1916:50kennyYeah - that's what I would expect#2020-10-1916:53favilawhat about using contains?#2020-10-1916:53favila(d/q '[:find ?e ?stat ?tx ?op
:in $ ?ignore-txs
:where
[?e :status ?stat ?tx ?op]
(not [(contains? ?ignore-txs ?tx)])
]
[[1 :status :foo 13194142112981 true]
[1 :status :bar 13194142112981 false]
[1 :status :baz 13194142112982 true]]
#{13194142112981}
)
=> #{[1 :baz 13194142112982 true]}
#2020-10-1916:54favilaI’m just kinda probing to see if this is a problem with comparisons or something deeper#2020-10-1916:54kenny(d/q {:query '[:find ?t ?status ?tx ?added
:in $ ?ignore-tx
:where
[?t ::task/status ?status ?tx ?added]
(not [(contains? ?ignore-tx ?tx)])
[?tx :audit/user-id ?user]]
:args [(d/history (d/db conn))
#{13194142035321 13194142112981}]})
=> []#2020-10-1916:55kennyThat's the expected result. Still odd that the former didn't work.#2020-10-1916:57kennyEven odder is that it worked in your toy example.#2020-10-1916:57favilaI think that points to something funky with the numeric comparisons done by the datalog engine, like it’s using object identity or something.#2020-10-1916:58favilamy toy example used on-prem, but should be able to replicate with cloud or peer-server#2020-10-1917:00favilaI was using 1.0.6165#2020-10-1917:01kennyThis is using the client api 0.8.102 and connecting to a system running in the cloud.#2020-10-1917:08kennySeems to work as expected with dev-local as well.#2020-10-1917:16kennyDatomic Cloud includes :db-name and :database-id as get'able keys from a d/db. Are these part of the official API?#2020-10-1917:17kennye.g.,
(d/db conn)
=>
{:t 2580397,
:next-t 2580398,
:db-name "my-db",
:database-id "74353541-feea-4ea2-afa6-f522a169856d",
:type :datomic.client/db}#2020-10-1917:19kennyIt would appear so (for :db-name at least) https://docs.datomic.com/client-api/datomic.client.api.html#var-db#2020-10-1917:20kennyIf that is true, shouldn't dev-local support that? See below example using dev-local 0.9.203.
(def c2 (d/client {:server-type :dev-local,
:system "dev-local-bB7z07Io_A",
:storage-dir "/home/kenny/.datomic/data/dev-local-bB7z07Io_A"}))
(d/db (d/connect c2 {:db-name "cust-db__0535019e-79fe-44a1-a8d9-b19394abd958"}))
(:db-name *1)
=> nil#2020-10-1917:31kennyFairly certain this is a bug so I opened a support req: https://support.cognitect.com/hc/en-us/requests/2879#2020-10-2017:15daniel.spanieldoes datomic mem-db support tuple type ? i tried to add a tuple field and it barfed so not sure ?#2020-10-2017:54favilaThis should be dependent on datomic lib version, not storage type#2020-10-2018:06daniel.spaniellib version? where is that found ? we use cloud db for production#2020-10-2018:12favilahow do you create a mem db with cloud?#2020-10-2018:43daniel.spanielyou dont .. you use one or the other. looks like dev-local has some thing dev-local-tu for doing test like things where you blow away the db around each test, which is what we want. but i think mem-db does not support tuple#2020-10-2018:44favilaAFAIK before dev-local there were no mem-dbs with cloud#2020-10-2018:44favilaso I’m not sure what you are doing#2020-10-2018:46favilaif you use a peer-server with on-prem you could do it, but that depends on the peer lib’s version. There was also this: https://github.com/ComputeSoftware/datomic-client-memdb#2020-10-2018:48daniel.spanielthat the one we using, but we just run that locally , when on prod using cloud db , we switch between one and the other#2020-10-2018:49favilaso, that depends on an on-prem lib, and that on-prem lib’s version is what’s dictating whether tuples are supported or not (most likely)#2020-10-2018:49favilaI’m just saying there’s more to the story than “mem-db -> no tuple types”#2020-10-2018:51favilaon-prem 0.9.5927 added tuples: https://docs.datomic.com/on-prem/changes.html#0.9.5927#2020-10-2019:22kennyI imported a prod db via dev-local/import-cloud. 
Is there a way to get a breakdown of the size of the db.log file?#2020-10-2019:25kennyI'm also curious if import-cloud provides a way to import the current version of the database with no historical retracts.#2020-10-2019:52kennyAre there any issues with running multiple import-cloud in parallel?#2020-10-2021:36donyormSo I have an entity with a child with cardinality many, and I query for all entities where one of these child entities matches a value. I tried
'(or
(and
[?e :child-element-key ?ste]
[(.contains ^java.lang.String ?ste "value")]))
But that didn’t work, is there another way to do this?#2020-10-2021:39favilaclojure.core/list isn’t needed--you are already quoting#2020-10-2021:43donyormThanks, sorry I copied and modified this from my code where I wasn’t quoting#2020-10-2021:47favilathis is generally how you do it; it’s going to be difficult to diagnose your problem without a complete example. You could try simplifying the query with specific data to see what’s going wrong. e.g.:
(d/q '[:find ?e
:where
(or
(and
[?e :child-element-key ?ste]
[(.contains ^java.lang.String ?ste "value")]))]
[[1 :child-element-key "value1"]
[2 :child-element-key "nope"]])
=> #{[1]}#2020-10-2021:48donyormOk thanks, I wasn't sure if I was completely off base, probably an issue in my data then. Thank you!#2020-10-2021:51donyormYes definitely was a problem in the data, thanks for the help though!#2020-10-2100:49Michael Stokleyare subqueries only possible with ions? i'm fooling around and i'm running into cognitect/not-found errors that tell me "'datomic/ion-config.edn' is not on the classpath"#2020-10-2100:49Michael Stokleyhere's the subquery i attempted:
(d/q `[:find ~'?contract ~'?latest-snapshot-tx-instant
:where
[~'?contract :contract/id]
[(datomic.client.api/q [:find (~'max ~'?snapshot-tx-instant)
:where
[~'?contract :contract/snapshots ~'?_ ~'?snapshot-tx]
[~'?snapshot-tx :db/txInstant ~'?snapshot-tx-instant]])
~'?latest-snapshot-tx-instant]]
db)
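For what it's worth, the same shape can be written without syntax-quote by using the built-in q inside :where and passing ?contract into the inner query; a sketch, untested, keeping the attribute names from the attempt above:

```clojure
(d/q '[:find ?contract ?latest
       :in $
       :where
       [?contract :contract/id]
       ;; run the aggregate as a subquery per ?contract, then destructure
       ;; the single-row result into ?latest
       [(q [:find (max ?ti)
            :in $ ?c
            :where
            [?c :contract/snapshots _ ?snapshot-tx]
            [?snapshot-tx :db/txInstant ?ti]]
           $ ?contract)
        [[?latest]]]]
     db)
```

Because the whole outer query is one quoted form, the inner query needs no quoting of its own, and no ~' unquoting gymnastics.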
#2020-10-2100:57Joe Laneuse [(q @U7EFFJG73#2020-10-2101:02Michael Stokleythat gets me further... thank you!#2020-10-2101:10Michael Stokleyhere we are: https://docs.datomic.com/cloud/query/query-data-reference.html#q#2020-10-2101:26Michael Stokleya humble suggestion to whoever may have control over the documentation: i could not find the q function documentation when googling "datomic subquery"#2020-10-2104:46steveb8nQ: I want to use an API-Gateway custom authorizer (lambda) with Ions. The authorizer decorates the request which is passed through to the Ion Lambda (I’m not using http-direct yet). The auth data is in the lambda request “context”, not in headers. Using a ring handler which has been “ionized” I can’t figure out how to access that data. Has anyone got any experience with this?#2020-10-2105:02steveb8nI found the answer in the docs. the “requestContext” is in the web ion request#2020-10-2105:02steveb8nhttps://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#api-gateway-simple-proxy-for-lambda-input-format#2020-10-2113:45jaretCognitect dev-tools version 0.9.51 now available
Version 0.9.51 of Cognitect dev-tools is now available for download.
See https://forum.datomic.com/t/cognitect-dev-tools-version-0-9-51-now-available/1666#2020-10-2113:57vncz@U1QJACBUM I can't find anything in the documentation about the MemDB feature; what's that about?#2020-10-2113:59jaret@U015VUATAVC Sorry the doc's cache wasn't busted#2020-10-2113:59jarethttps://docs.datomic.com/cloud/dev-local.html#memdb#2020-10-2114:28vnczAh ok sweet, thanks!#2020-10-2119:41zhuxun2What's the idiomatic way to
> retract (`:db.fn/retractEntity`) all the releases that have a particular given :release/artist, and return the list of :release/name's of the releases retracted, all done atomically
I know that the first part can be done with a transaction function, and the second part can be extracted manually from the "TX_DATA" key in the transaction report map. However, I found manual extraction to be too dependent on the structure of the transaction report map, which seems to be rather subject to future changes. I was wondering if there’s a more elegant way of doing this that I am not aware of.#2020-10-2120:08favilathe tx-data report map has been stable for years AFAIK. what difficulty are you encountering specifically?#2020-10-2120:10favilaMy go-to strategy in this case would be to look in tx-data for datoms matching pattern [_ :release/name ?value _ false] . That will only tell you that a value was retracted, not that a retractEntity caused it. With some domain knowledge you could refine that further#2020-10-2120:10favila(note you’ll have to resolve :release/name to its entity id somehow)#2020-10-2120:18zhuxun2> look in tx-data for datoms matching pattern ...
Interesting ... does Datomic provide a mechanism to do this kind of matching against a list of datoms? @U09R86PA4#2020-10-2120:24zhuxun2@U09R86PA4 Or do I have to do (map #(nth % 2) (filter (fn [[e a v t f]] (and (= :release/name a) (not f))) (:tx-data query-report)))?#2020-10-2120:27favilaThat should be enough; if you need more sophistication you can use d/q with the :db-before or :db-after dbs#2020-10-2120:29favilae.g., I want all release names for all entities which lose a release name but don’t gain a new one within a transaction (i.e. they only lose, not change their name):#2020-10-2120:30favila(d/q '[:find ?release-name
:in $before $txdata
:where
[$before ?release-name-attr :db/ident :release/name]
[$txdata ?e ?release-name-attr ?release-name _ false]
($txdata not [?e ?release-name-attr _ _ true])
]
(:db-before result) (:tx-data result))#2020-10-2120:30favila(untested)#2020-10-2120:40zhuxun2I think this is what I was looking for. Thanks!#2020-10-2120:52zhuxun2@U09R86PA4 Wait ... wasn't [$before ?release-name-attr :db/ident :release/name] implied? Why did you have to put it in the query?#2020-10-2120:55favila$txdata is a vector of datoms, which is a valid data source, but doesn’t understand that :release/name is an ident with an equivalent entity id. So just putting [$txdata _ :release/name] would never match anything#2020-10-2120:56favilathat you can say [_ :attr] or [_ :attr [:lookup "value"] or [_ :attr :ident-value] is magic provided by the datasource#2020-10-2120:56favilathe database “knows” that those map to entity ids and does it for you#2020-10-2120:57favilabut a vector datasource isn’t that smart#2020-10-2120:57favilaso you need to match the entity id of :release/name exactly#2020-10-2121:03zhuxun2This is very interesting detail. Thanks for the explanation.#2020-10-2207:34onetomTo make Ion deployment more comfortable, we started using the datomic.ion.dev/push and friends directly.
Those functions, however, seem to shell out to call clojure and expect it to be available on the PATH.
Since we are using nix-shells to manage our development environments, we deliberately have no clojure on our PATH by default, only within direnv managed shells.
Would it be possible to just call the necessary functions directly from the JVM process which runs the datomic.ion.dev/push function?
It seems like it's only doing some classpath computation, which should be possible to do directly with clojure.tools.deps...#2020-10-2209:44cmdrdatsWe've got a datomic on-prem transactor running, saving the data in mysql - we got this, and it died... how would I go about diagnosing the cause and fixing so it doesn't die?#2020-10-2209:45cmdrdatswe're running datomic-transactor-pro-1.0.6165.jar#2020-10-2209:51favilasuper-high-level, the transactor tried to update one of the top-level mutable rows in the database (“pods”) and it couldn’t so it self-destructed. If this happened immediately on txor startup, I would check the connection parameters and credentials are correct and the mysql user has the right permissions (it needs SELECT UPDATE DELETE on table datomic_kvs). Assuming the transactor started correctly and connected to mysql correctly and this happened randomly later, I don’t know. It could be something transient on mysql itself.#2020-10-2209:52favilaI would actually look at mysql logs first#2020-10-2209:59cmdrdatshmm - ok, makes sense - this happened randomly much later#2020-10-2210:00cmdrdatsI do know we've had weird random connection issues on our mysql hosts that we've had to workaround with reconnecting.. so that seems the most likely explanation, thanks for the info!#2020-10-2217:02Michael Stokleyi'm seeing some old materials around datomic that suggest you can query vanilla clojure data structures, such as vectors of vectors. eg:
(d/q '[:find ?first ?height
:in $a $b
:where [$a ?last ?first ?email]
[$b ?email ?height]]
[["Doe" "John" "
this does not run for me. is there a version of this that does work?#2020-10-2217:07favilaThis is on-prem with the peer api; if you are using the client api, you need to include a “real” database as a parameter (first parameter?) even if you don’t use it because that is how the client api finds a machine to send the query to#2020-10-2217:07favilaon-prem runs the query in-process; client (typically-not necessarily) sends the query over the network to another process#2020-10-2217:11Michael Stokleyi see, thank you. it would be terrific to be able to use generic data structures as databases, seems like that would have been the clojure way of doing things as opposed to locking you in to a nominal type#2020-10-2217:11favilathe client api doesn’t provide a query engine, so, they’re kind of at cross purposes#2020-10-2217:13favilato be clear: this works with the client api just fine, but you have to send it to something that can evaluate it#2020-10-2217:14Michael Stokleycan you say more, i'm not sure i follow, yet. this works - this being, using generic data structures as the db?#2020-10-2217:15favila(d/q '[:find ?first ?height
:in $ $a $b
:where [$a ?last ?first ?email]
[$b ?email ?height]]
(d/db client-connection)
[["Doe" "John" "#2020-10-2217:15Michael Stokleyi am using datomic.client.api, it's not on-prem#2020-10-2217:15favilaI’m saying that this should work#2020-10-2217:15favilanote I added a client db, but I didn’t use it in the query#2020-10-2217:16Michael Stokleyoh, interesting.#2020-10-2217:16favilaall of those arguments will be sent to the server (probably not in-process), the query will run, and you will get the result#2020-10-2217:16favilathe server that is backing the db object#2020-10-2217:16Michael Stokleyyeah, that works!#2020-10-2217:24Michael Stokleyi wonder if there's a way to use this for testing? maybe not, since my production query will necessarily be referring to $ (ie not $a or $b)#2020-10-2217:25Michael Stokleyit would be great if i could throw together a very simple database out of generic data structures and exercise my production query on that, instead of a real db#2020-10-2217:32favilaYou could, but real databases normalize entity references to entity ids for you (e.g. it knows :some-attr-keyword is eid 123). Without that you would have to construct your query or data carefully so that the comparisons are exact#2020-10-2217:33favilaalso many query helper functions only work on a real database because they use indexes directly#2020-10-2217:33favila(e.g. d/datoms)#2020-10-2217:33favilaI’m pretty sure get-else would fail, for example#2020-10-2217:37Michael Stokleyperhaps it's more practical to use a real db in tests, then.#2020-10-2219:59Joe LaneHey @michael740 , try the new memdb feature in the latest dev-local! #2020-10-2302:44Michael Stokleythanks @U0CJ19XAM, I'll check it out#2020-10-2217:05motformIs it possible to express a recursive query with the pull api where the data looks like :c/b m..1-> :b/a m..1 -> :a syntax starting from a/gid ? I.e. walking down refs (not components) that point “upward” from the top of hierarchy. I can easily do it from the bottom up, from c/gid, but I guess I just don’t get how to reverse the query. 
EDIT: Never mind, I just realised that this is what _ is for.#2020-10-2218:22Michael Stokleyit looks like the two argument comparison predicates such as < work perfectly well to compare instants when inside of a datomic query but not in normal clojure. it's confusing because the documentation says that most of the datomic query functions are the same as those found in clojure.core. anyone have any insight?#2020-10-2218:23schmee@michael740 < and some other common predicates are the exception: https://docs.datomic.com/cloud/query/query-data-reference.html#range-predicates#2020-10-2218:26Michael Stokleythe documentation does not indicate that < works with strings, inst, etc.#2020-10-2218:26Michael Stokleyi'm glad they do, though!#2020-10-2221:12jaretHi All! I wanted to announce the release of the Datomic Knowledgebase: http://ask.datomic.com/#2020-10-2221:22jaretFor anyone wondering... we will be migrating over all of the pendo/receptive requests we've received in the past. So if you don't see something you've requested with me or on our old portal feel free to re-ask or check back again in a week or so.#2020-10-2221:23kennyCurious when this should be used over the forum. #2020-10-2221:25jaretThe big gain over the forums, which are still the place to have discussions about Datomic applications and to see announcements, is the upvote button for features and harder questions.#2020-10-2221:28jaretOur previous tool Pendo/receptive had several limitations. It just wasn't as accessible as we wanted it to be to get that feedback loop on what features are important to the community.#2020-10-2221:29jaretI'll definitely be cross linking/posting from forum posts going forward if we get to a point where a feature is the best answer for whatever is being discussed.#2020-10-2311:58cmdrdatshi - we're trying to restore a backup of our datomic database (it's a tiny 9mb db, as a test), and it just seems to be hanging forever..
how do we go about figuring out what we're doing wrong? We've set the log level in the logback.xml for datomic to TRACE - and there's a tiny bit of logging (I'll attach that in thread), but nothing else#2020-10-2311:59cmdrdats#2020-10-2314:48jaret@U050CLJ53 Can you share what command you're using to run backup/restore?#2020-10-2314:49jaretIf you'd like feel free to log a case to me by e-mailing <e-mail redacted> and we can dive further.#2020-10-2319:10cmdrdats@U1QJACBUM we're doing this:
/storage/datomic/current/bin/datomic -Xmx1g -Xms1g restore-db "file:/storage/datomic/archive/restore/" "datomic:sql://...?jdbc:..."
if there's nothing we can really self-diagnose at a high level, then sure, I'll send a mail on monday 🙂#2020-10-2314:33millettjonI am trying to get started with datomic in a dev-local setup. Any idea why my dev-local db is not found? I can see db.log and log.idx files were created.
(ns flow.db
  (:require [datomic.client.api :as d]))

(let [dir (str (System/getProperty "user.dir") "/var/datomic")
      db-name "flow"
      client (d/client {:server-type :dev-local
                        :storage-dir dir
                        :system "dev"})
      _ (d/create-database client db-name)
      conn (d/connect client {:db-name db-name})]
  conn)
;; Unhandled clojure.lang.ExceptionInfo
;; Db not found: flow
;; #:cognitect.anomalies{:category :cognitect.anomalies/not-found,
;; :message "Db not found: flow"}#2020-10-2314:48jaretCan you try testing with the absolute path of dir and just execute d/client without the let? Also does the absolute path already have a "dev" system folder?#2020-10-2314:49jaretthen list-dbs on the client#2020-10-2314:50jaret(d/list-databases client {})#2020-10-2314:56millettjonSure.
(def client (d/client {:server-type :dev-local
                       :storage-dir "/home/jam/src/flow/var/datomic"
                       :system "dev"}))
(d/create-database client "flow") ; => true
(d/list-databases client {}) ; => []#2020-10-2314:57millettjonfile system looks like this:
$ tree /home/jam/src/flow
/home/jam/src/flow
├── deps.edn
├── src
│ └── flow
│ └── db.clj
└── var
└── datomic
└── dev
├── db.log
└── log.idx#2020-10-2315:00millettjonI created var/datomic.
dev/ gets created by create-database fn#2020-10-2315:17millettjonI tried setting :storage-dir in ~/.datomic/dev-local.edn and it has the same problem. Creates some files but no db found.#2020-10-2315:30jaretWhat version of Dev-local are you using?#2020-10-2315:31jaretCan you share the .datomic/dev-local.edn file in your home directory?#2020-10-2315:32jaretMight be best to just share your deps.edn and I will try to re-create. So I can see version of client and dev-local.#2020-10-2315:37millettjon(shared file; content lost to e-mail obfuscation)#2020-10-2315:39millettjon(shared file; content lost to e-mail obfuscation)#2020-10-2316:02jaret@U071T1PT6 I don't see a version of client in your deps? Do you have datomic client in your .clojure/deps.edn? Could you include com.datomic/client-cloud "0.8.102"#2020-10-2316:07jaretAlso worth testing with the latest dev-local and use 0.9.225#2020-10-2316:38millettjonOk. I was following instructions here: https://docs.datomic.com/cloud/dev-local.html and didn't know about that additional dep. Unfortunately, adding it didn't make any difference. I will try updating to 0.9.225.#2020-10-2317:41jaret@U071T1PT6 I can't reproduce; is there anything else I could be missing? Are you starting a new repl to do this? Can you try making a new system name to confirm you are local when a new dir is made? What OS are you using on your system?#2020-10-2319:16ghadilook at the args for create-database#2020-10-2319:16ghadineeds to be {:db-name "flow"} not "flow"#2020-10-2319:17ghadi@U071T1PT6 @U1QJACBUM#2020-10-2319:25jaret!!!
Ghadi, great catch!#2020-10-2319:25jaretThat's it#2020-10-2319:26ghadicalling it with a raw string probably shouldn't return true#2020-10-2319:26jaretNo it should not!#2020-10-2319:56millettjonThanks!#2020-10-2404:45onetomthis bit me a few times initially.
it would be very helpful to provide a better error message for this situation.#2020-10-2414:05jaretTotally agree, this seems to be a bug in using the dev-local client and create DB. It should throw an error like in cloud for expected map. I've logged a bug and we'll look at fixing!#2020-10-2414:06jaret;Cloud client
(d/create-database client "testing")
Execution error (ExceptionInfo) at datomic.client.impl.shared/api->client-req (shared.clj:258).
Expected a map
;Dev local client
(d/create-database client "testing")
=> true
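Putting ghadi's catch together with the earlier snippets, a minimal sketch of the corrected dev-local calls (assumes the same client config and "flow" db-name used above; requires the com.datomic/dev-local dependency, so not runnable without it):

```clojure
;; Sketch only: the client api fns take an arg-map, not a bare string.
(require '[datomic.client.api :as d])

(def client (d/client {:server-type :dev-local
                       :storage-dir "/home/jam/src/flow/var/datomic"
                       :system "dev"}))

(d/create-database client {:db-name "flow"})   ; arg-map, not "flow"
(d/list-databases client {})                   ; should now include "flow"
(def conn (d/connect client {:db-name "flow"}))
```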
#2020-10-2414:12jaretin either case no DB is created, but we should throw an error in both cases!#2020-10-2415:48millettjonThanks for all the help. Working as expected now.😀#2020-10-2317:52drewverleeWhat's the best way to report/ask about questions on the official datomic docs? e.g The docs for import-cloud on the docs:
https://docs.datomic.com/cloud/dev-local.html#import-cloud
Don't match the docstring. I assume the docstring is correct, as the documentation page lists the same value for source and dest.#2020-10-2320:07jaretI'll fix that!#2020-10-2320:07jaretThanks for catching it.#2020-10-2418:06drewverlee@U1QJACBUM thanks for fixing it. Do you know if there is a lightweight way to get datomic cloud consulting? I run into little blockers here and there and I would rather shell out some money than get stuck for a day if someone can easily help me troubleshoot things.#2020-10-2418:12jaret@U0DJ4T5U1 I am going to tag @U05120CBV on this. You can also e-mail him directly at <e-mail redacted>. He is running point on our Datomic consulting services along with a few other folks at Cognitect. I'll bring this up with him on Monday if you two don't connect here.#2020-10-2317:54dpsuttonThere’s a new ask.datomic site that probably fits the bill #2020-10-2317:58joshkhshould one be alarmed when periodically seeing an Unable to load index root ref <uuid> exception appear in Datomic Cloud logs? (from com.amazonaws.services.dynamodbv2.model.InternalServerErrorException )#2020-10-2320:14jaretDo you often delete DBs? In general, you can see this error if a client or node is asking for a deleted db. The error would also correlate to an outage, but these calls are retried, so if you don't see this error often or repeatedly it's probably not a major issue.#2020-10-2320:15jaretAs always, it is my support-person duty to recommend upgrading to the latest Cloud release CFT, and if you'd like me to look more closely at your system logs I'd be happy to poke around with a Read-Only CloudWatch account. If you want to go down that path, log me a case at <e-mail redacted> and I can take a look.#2020-10-2612:40joshkhthanks, Jaret. we haven't noticed any performance issues related to the exception, so i was mostly just curious.
but since you mentioned that it could be related to often deleting DBs, which we very rarely do, then i'll just mention it in some future support ticket. it's a low priority for us. 🙂#2020-10-2322:51drewverleeWhen i try to use dev-tools to import a cloud locally it complains that it can't use the endpoint to connect/"name or service is not known". I'm connected and can query the databases though so i'm not sure what the issue is.#2020-10-2412:42Petrus Therondatomic-pro-1.0.6202 throws ActiveMQInternalErrorException when I try to create or connect to a Datomic DB:
clj
Clojure 1.10.1
user=> (require '[datomic.api :as d])
nil
user=> (d/connect "datomic:)
Execution error at datomic.peer/get-connection$fn (peer.clj:661).
Could not find newdb in catalog
user=> (d/create-database "datomic:)
Execution error (ActiveMQInternalErrorException) at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl/sendBlocking (ChannelImpl.java:404).
null
I’ve tried with both Oracle JDK 15 and OpenJDK 15.#2020-10-2413:32jaretI see you are connecting to the DB and then attempting to create the DB? Did this DB already exist or was it the product of a backup/restore? Did you recently upgrade to the new version of Datomic-pro? Or are you saying that this worked before you moved to JDK15? If so, what version were you previously running where this worked? I am going to go test with JDK 15 right now.#2020-10-2413:47Petrus TheronFull story here: https://stackoverflow.com/q/64512606/198927#2020-10-2422:42NassinDatomic doesn't work with jdk15, your safest bet with datomic is java 8#2020-10-2614:45jaretI've re-created the behavior and logged an anomaly for us to investigate further. In general, I am updating our docs to indicate that Datomic on-prem is tested to run against LTS versions of Java (8 and 11). @U051SPP9Z I agree with your assertion elsewhere that we should have a feature to detect when not on an LTS java version and throw a warning to move to one. I am looking at options for such a feature and logging a feature request for further investigation.#2020-10-2618:14NassinFWIW, datomic 1.0.6202 with Java 11 throws some jackson reflection warnings#2020-10-2414:41vnczDoes anybody know if there's a relation between db's T value and the txInstant of an entity?#2020-10-2414:43vnczEssentially I have a database and if I do (:t db) I get 7 as value. 
On the other hand, if I look for a txInstant for an entity via (def query '[:find ?tx :where [?e :person/id _ ?tx]]) I get a very long number instead#2020-10-2414:48vnczWhat I am trying to do is "Given a certain entity ID, what was the t that has introduced/updated it?"#2020-10-2414:58vnczFor anybody interested: https://ask.datomic.com/index.php/457/relation-between-t-and-db-txinstant#2020-10-2415:51Lennart BuitForgive me for not answering on ask, but this blog may interest you: https://observablehq.com/@favila/datomic-internals#2020-10-2416:32vnczOh sweet, let's check that out#2020-10-2416:32vnczThanks @UDF11HLKC#2020-10-2416:44vnczI can't find these functions in datomic.client.api 🤔#2020-10-2416:44vnczWhere are they?#2020-10-2418:44vnczThese functions seem to be in datomic.api but I can't find it anywhere on maven/clojars#2020-10-2421:03Lennart BuitYou can call datomic.api functions in your queries. Or you can at least on client + peer server#2020-10-2421:19vncz@UDF11HLKC Ah ok so maybe it's only executed on the peer?#2020-10-2421:44vnczI'm a bit confused, I can't find such namespace anywhere and it does not work when doing it in a query (d/q '[:find ?e ?tx ?t :where [?e :person/id _ ?tx] [((t->tx ?tx)) ?t]] db) which makes sense, since even the docs say that the functions executed must be on the classpath.#2020-10-2414:49alexmillerThis would be a great question to ask on the new forum https://ask.datomic.com#2020-10-2414:49vnczAh ok, I was not aware there was a specific forum#2020-10-2415:21alexmillerJust opened this week!#2020-10-2421:49vnczOh ok I found it, it seems like it's in com.datomic/datomic-free#2020-10-2422:00vnczAnd I got it working#2020-10-2422:01vncz#2020-10-2509:28Petrus TheronHey guys, I’ve been blocked for two days trying to get Datomic to talk to any non-memory storage on my machine. Any leads on why Datomic works fine for in-memory DB, but can’t connect to my local dev transactor?
➜ datomic-debug clj
Clojure 1.10.1
user=> (require '[datomic.api :as d])
nil
user=> (d/create-database "datomic:)
true ;; in-mem works
user=> (d/create-database "datomic:)
Oct 25, 2020 9:26:42 AM org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector createConnection
ERROR: AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source)
Execution error (ActiveMQNotConnectedException) at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl/createSessionFactory (ServerLocatorImpl.java:787).
AMQ119007: Cannot connect to server(s). Tried with all available servers.
I suspect an incompatibility with my JDK and Datomic’s queuing dependency, but having tried different versions of Clojure, Datomic (Pro and Free), Netty, HornetMQ and different JDKs, I can’t figure out why I can’t connect to or create a DB with :dev storage. What am I doing wrong?#2020-10-2509:40Petrus TheronOMG. Datomic transactor requires Java 8. Fixed by switching the transactor env to Java 1.8. https://forum.datomic.com/t/java-11-0-1-ssl-exception/734
Maybe the transactor can try to connect to itself on startup and complain if the Java version is wrong?
(Thanks for the tip, @U011VD1RDQT.)
Depending on which client version of Datomic you’re running, you’ll get different error messages ranging from ActiveMQ, to SSL handshakes, to Netty errors.#2020-10-2515:45dustingetzI recall this being fixed in recent versions of Datomic, i could be wrong#2020-10-2608:55Petrus TheronHappens when running Datomic Pro 1.0.6202#2020-10-2612:59joshkhi have an HTTP Direct project setup behind an API Gateway, with a VPC Link resource/method configured to Use Proxy Integration. everything works fine. the proxy method request is configured to use AWS_IAM authorization which also works as expected.
when i inspect a request that makes it through the gateway and to my project, i see all of the keys listed in the Web Ion table [1] except for two: datomic.ion.edn.api-gateway/json and datomic.ion.edn.api-gateway/data
presumably these keys have the values i need to identify the requester's cognito identity, know about the gateway details etc. are they available when using HTTP Direct integration?
[1] https://docs.datomic.com/cloud/ions/ions-reference.html#web-ion#2020-10-2711:20joshkhper yesterday's discussion, i've moved this to the forums https://forum.datomic.com/t/where-can-i-find-cognito-or-iam-details-from-api-gateway-when-using-http-direct/1675#2020-10-2616:47drewverleeDouble checking here that the new forum is the ideal way to ask questions of this type: https://ask.datomic.com/index.php/476/how-to-use-import-cloud-to-import-cloud-data-locally#2020-10-2617:12jaretIdeal place! I think I added a potential answer to your question. I believe you're missing :proxy-port which you need when going through a proxy (i.e. client access)#2020-10-2617:17drewverleeAdding proxy-port moves me forward. I must have tried to add it before when my connection was down.#2020-10-2617:33joshkh^ piggybacking on that question, is that also the ideal place for my question? i'm never quite sure where to post: slack, datomic forums, and i only just learned about http://ask.datomic.com#2020-10-2617:36joshkhpublic archives are ideal over slack's limited history. i'm just not sure which of the ones i listed get the most attention (sometimes feels like Slack to me)#2020-10-2618:18jaretin my dream world we would all feel the compulsion to cross post to ask/forums all of the great answers that get worked out here quickly in slack. 🙂#2020-10-2618:26joshkhagreed! and i'm happy to do that. but i wasn't sure of the level of tolerance for already answered questions getting posted to the forum... 'suppose Ask is a good place for that 🙂#2020-10-2618:27jaretMy level of tolerance is infinite. 
We lose so much to slack archive 🙂#2020-10-2618:32joshkhspeaking of the forums, sometimes i find unanswered posts (including my own*) and wonder if we're opening the wrong kind of discussions to garner responses * https://forum.datomic.com/t/enumerated-values-in-tuples-are-only-eids/1644.#2020-10-2618:38joshkhit makes me wonder if no response (here in Slack or on the forums) means the question or topic is nonsense, with my full understanding that i'm noisy and ask some dumb questions from time to time 🙂#2020-10-2619:24jarethaha! No they aren't dumb questions. I just overlook some questions or need to check with the team for a better personal understanding. I look at this tomorrow and ask the team if I can't reason through it. Sorry for not responding on this post!#2020-10-2619:26jaretAnd just so I am clear, are you asking why ref's in tuples are EIDs? Trying to discern if you need a feature request or are questioning if this is useful/intended?#2020-10-2620:15joshkhwell, to me, pull is acting differently for references to idents than it is to references to the same idents within tuples, and the impact is on unification. when i pull a typical reference to an ident, i get back {:db/id 999 :db/ident :some/enumerated-value} which is perfect because that value doesn't have to exist or unify.. it's a pull, and i can return that value as-is. this entity might have some enumerated value, or not. but when i pull a reference to an ident within a tuple, i get back just the EID 999. 
then, to resolve its :db/ident, i have to unify in the constraints [?maybe-some-enumerated-value :db/ident ?ident] which excludes any entities in the query that do not have a reference in a tuple to an enumerated value.
ConsumedReadCapacityUnits < 750 for 15 datapoints within 15 minutes
ConsumedWriteCapacityUnits < 150 for 15 datapoints within 15 minutes
A third CloudWatch alarm showed up sometime after I tried to deploy ions:
JvmFreeMb < 1 for 2 datapoints within 2 minutes#2020-10-2719:02jaret@UF9AED8CC No, likely unrelated. Those are the Dynamo DB scaling alarms. If you have been unable to deploy it's unlikely that you're getting far enough for DDB to be a factor as those alarms will fire when you exceed read and write capacity on a Dynamo DB. To diagnose your failure, you'll want to look in a few places:
1. The Code Deploy console (you can drill into the failure and find out which step it failed on)
2. CloudWatch logs for the ion deploy exception. You can find these logs by searching your CloudWatch console for datomic-<systemname>. Shown in our docs https://docs.datomic.com/cloud/troubleshooting.html#troubleshooting-ions#2020-10-2719:15Nate IsleyOk, thank you. I saw a CodeDeploy message about health of nodes, which made me wonder if the Alarms were preventing the deploy.#2020-10-2718:56Nate IsleyCould these alarms be the cause of the ion deploy failing?#2020-10-2720:33localshredHi friends, I'm working on a retry+backoff for the :cognitect.anomalies/unavailable "Loading Database" exception that we get after ion restarts. If this is a common issue like the troubleshooting docs suggest, I'm wondering how others are handling this. My current approach when getting the connection with d/connect is to try/catch when performing a simple check if the connection is active (something like (d/db-stats (d/db conn))). I've also considered doing a d/q query for some specific datom that I know is present. Any thoughts or other ideas?#2020-10-2720:35alexmilleryou might want to ask on https://ask.datomic.com#2020-10-2720:35localshredOk, thanks @alexmiller, here's the question on the forum https://ask.datomic.com/index.php/486/approach-connection-cognitect-anomalies-unavailable-exceptions#2020-10-2722:45vnczIs there anywhere specified the logic that Datomic uses to get the data from the storage server to the peer (whether it's in process or a separate server)?#2020-10-2723:13favilaThe peer reads storage directly for whatever it needs#2020-10-2723:42vncz@U09R86PA4 I heard around in some videos that the Transactor pushes the updates?#2020-10-2723:51favilaIt broadcasts just-completed txs to already connected peers, but not index data#2020-10-2800:09vnczUnderstood. I must have understood incorrectly then. I recall a video saying something different#2020-10-2814:21vnczIs there a way to DRY these two queries? They look almost identical apart from the id parameter.
Is there a way without having to manipulate the list manually?#2020-10-2814:23vnczWell I can probably get away with it by using the map form, but I was wondering whether there's a better way 🤔#2020-10-2814:55favilaI question whether this is better, but it is DRY:#2020-10-2814:56favila(defn people [db person-pull person-ids]
  (d/q '[:find (pull ?e person-pull)
         :in $ person-pull ?ids
         :where
         (or-join [?e ?ids]
           (and [(= ?ids *)]
                [?e :person/id])
           (and [(!= ?ids *)]
                [(identity ?ids) [?id ...]]
                [?e :person/id ?id]))]
       db
       person-pull
person-ids))#2020-10-2814:56favilause the special id value * to mean “everyone”#2020-10-2814:56favilaotherwise it’s a vector of entity identifiers#2020-10-2815:08vnczHmm does not seem worth the hassle. I was thinking of using the map form and manually inject the parameter @U09R86PA4 …what do you think?#2020-10-2815:10favilathere may be a penalty from not caching the query plan, since each query is a new instance. But I’m not sure if cache lookup is by identity or by value#2020-10-2816:11vnczGot it. good point. Seems like a case where duplication is cheaper than the wrong abstraction 🙂#2020-10-2814:24ghadiUse d/pull directly with the first form#2020-10-2814:27vnczCan I? I do not really have the entity id, I have my "internal" id#2020-10-2814:49favilapull (and most things) can take any entity identifier, which includes entity ids, idents, and lookup refs. In your case, (d/pull db [:person/id :person/name :person/surname] [:person/id "the-id"]) would work#2020-10-2815:07vnczAh interesting, that I didn't know. Thanks!#2020-10-2814:24ghadiRather than pull in the find spec of a query#2020-10-2814:24vnczOk fair#2020-10-2814:29vnczI was thinking that if I would be switching to map form I could manipulate the query easily and add the parameter?#2020-10-2815:43Aleh AtsmanHello, can somebody clarify for me the purpose of cast/event , is it only for infrastructure level events or can it be used for application level events as well?#2020-10-2817:10joshkhsomeone else can correct me if i'm wrong, but i use cast/event for all sorts of things including application level logging#2020-10-2818:16jaret@U4N27TADS to echo what Josh is saying it's for ordinary occurrences of interest to the operator. Whereas an Alert is for an extraordinary occurrence that requires operator intervention. These are conventions you can choose to follow or not in your use of the ion.cast.#2020-10-2914:08Aleh AtsmanHello @U0GC1C09L, @U1QJACBUM! Thank you for explanation. 
In the end we decided to go with EventBridge or an SNS topic directly.
It is problematic to route events from CloudWatch Logs to AWS Lambda functions, as the only option there is a subscription filter (max 2 per log group).
Maybe I am missing something, but I haven't found a solution where I am able to get events submitted using cast/event to Lambda functions.#2020-10-2914:20joshkhi don't know if this is useful to you, but we cast/alert exceptions to CloudWatch, and then use a CLJS Lambda to send them to Slack to torture our developers. cast/events are not really any different, except for maybe the frequency at which they appear, but i think log streams are batched. (sorry for the lack of a proper README, it was a hobby project)
https://github.com/joshkh/datomic-alerts-to-slack
> It is problematic to route events from cloudwatch logs to aws lambda functions. As the only option there is subscription filter (max 2 per log group).
for application level logs and alerts, we tend to use a common message throughout the application (e.g. {:msg "<app-name>-application-log"}), and then we attach other keys such as :env and :query-group. this provides us different levels of filtering while keeping our logs all in one place#2020-10-2914:22joshkhbut if SNS works for you then go for it! 🙂 i just prefer CloudWatch because cast serialises Clojure to JSON very well, and being able to add arbitrary keys at any level is useful for filtering#2020-10-2912:35Matheus Moreirahello! today i noticed a weird interaction between datomic client api and djblue/portal (https://github.com/djblue/portal): when the latter is not on the classpath, i can obtain a connection to my (local, datomic pro) database; when portal is on the classpath, connecting to the database fails with the following error:
Exception in thread "async-dispatch-1" java.lang.RuntimeException: java.lang.NoClassDefFoundError: org/msgpack/MessagePack
at com.cognitect.transit.TransitFactory.writer(TransitFactory.java:104)
at cognitect.transit$writer.invokeStatic(transit.clj:161)
at cognitect.transit$writer.invoke(transit.clj:139)
at $marshal.invokeStatic(io.clj:48)
at $marshal.invoke(io.clj:38)
at $client_req__GT_http_req.invokeStatic(io.clj:76)
at $client_req__GT_http_req.invoke(io.clj:73)
at datomic.client.impl.shared.Client._async_op(shared.clj:398)
at datomic.client.impl.shared.Client$fn__34578$state_machine__5717__auto____34593$fn__34595.invoke(shared.clj:423)
at datomic.client.impl.shared.Client$fn__34578$state_machine__5717__auto____34593.invoke(shared.clj:422)
at clojure.core.async.impl.ioc_macros$run_state_machine.invokeStatic(ioc_macros.clj:973)
at clojure.core.async.impl.ioc_macros$run_state_machine.invoke(ioc_macros.clj:972)
at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invokeStatic(ioc_macros.clj:977)
at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invoke(ioc_macros.clj:975)
at datomic.client.impl.shared.Client$fn__34578.invoke(shared.clj:422)
at clojure.lang.AFn.run(AFn.java:22)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at clojure.core.async.impl.concurrent$counted_thread_factory$reify__469$fn__470.invoke(concurrent.clj:29)
at clojure.lang.AFn.run(AFn.java:22)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.NoClassDefFoundError: org/msgpack/MessagePack
at com.cognitect.transit.impl.WriterFactory.getMsgpackInstance(WriterFactory.java:77)
at com.cognitect.transit.TransitFactory.writer(TransitFactory.java:95)
... 20 more
Caused by: java.lang.ClassNotFoundException: org.msgpack.MessagePack
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 22 more
#2020-10-2912:37Matheus Moreirai noticed that portal has the com.cognitect transit libraries (transit-cljs, transit-js, transit-clj, and transit-java) as dependencies. why would these dependencies interfere with datomic obtaining a connection?#2020-10-2912:38Matheus Moreira(there are other dependencies but i don’t believe they have anything to do with the case: cheshire 5.10.0 and http-kit 2.5.0).#2020-10-2912:42alexmillera) what are you using to make the classpath
b) if clj, what version? (prob best to upgrade to latest if you haven’t)
c) please provide the set of project deps that repro this#2020-10-2912:53Matheus Moreirai am using clojure tools (clojure 1.10.1) and i open a repl using cider-jack-in-clj. then i start my system via integrant.repl.#2020-10-2912:54Matheus Moreirathis is my deps.edn. when connecting to the repl, i add -A:dev to the clj command.#2020-10-2913:04favilaPortal requires transit but excludes msgpack#2020-10-2913:06Matheus Moreiraand datomic requires msgpack?#2020-10-2913:07Matheus Moreiraif this is the case, is it a classpath resolution conflict/problem, i.e. msgpack should be in the final classpath because datomic requires it, even if portal excludes it?#2020-10-2913:26alexmillerwhat version of the clojure tools? clj -Sdescribe#2020-10-2913:27alexmillerthere were issues with this kind of scenario that were fixed a few versions ago#2020-10-2913:27alexmillerrelease info here https://clojure.org/releases/tools - latest is 1.10.1.727#2020-10-2913:33favilaThe client apparently requires msgpack but doesn’t depend on it directly, expecting it via transit#2020-10-2913:33favilaYes#2020-10-2913:33favilaPortal uses transit but not the msgpack encoding option, so it doesn’t want to bring it in#2020-10-2913:33favilaDatomic client uses transit with msgpack encoding but doesn’t require it directly#2020-10-2913:34favilaI wonder what maven would compute in this case#2020-10-2913:38alexmillerI'm stepping away till this afternoon but I'd be happy to look at this in depth when I get back. My recommendation would be to move to latest clj if you haven't already as there have been fixes in this area.#2020-10-2914:14Matheus Moreirathanks, @U064X3EF3 and @U09R86PA4. i’ll update clj tools if mine is not up-to-date.#2020-10-2914:17Matheus Moreirahttps://clojurians.slack.com/archives/C03RZMDSH/p1603978063215500?thread_ts=1603974900.204700&cid=C03RZMDSH
mine was 1.10.1.716#2020-10-2914:20favila(Sorry if I’m confusing, slack must have barfed on my messages because they’re all out of order and 20 min late)#2020-10-2914:31Matheus Moreira@U064X3EF3 fyi i updated clj tools and the error still happens.#2020-10-2918:57alexmillerI do actually see msgpack on the classpath with these deps. Can you run clj -Sforce -Stree -A:dev and grep the output for msgpack? force will force recomputing the classpath - it's possible you are seeing an older cached classpath#2020-10-2918:59alexmilleralso if you're still seeing it after that, do clj -Strace -A:dev and attach the trace.edn file it emits here#2020-11-0311:39Matheus Moreiraclj -Sforce -Stree -A:dev returns nothing. djblue/portal was commented out in deps.edn, maybe that is why you see it in your output.#2020-11-0311:40Matheus Moreira@U064X3EF3 sorry for the delay in my reply…#2020-10-2919:04joshkhwhen untruthifying™ a boolean value of a schema attribute, is there an advantage to choosing one method over the other?
[:db/retract :some/ident :db/isComponent true]
vs
[:db/add :some/ident :db/isComponent false]
just curious#2020-10-2919:46Lennart Buitfwiw, they are not equivalent, right? The first removes the fact about being a component altogether, and the second asserts it as false#2020-10-2921:59joshkhyup. in this case the resulting behaviours are the same (:some/ident is no longer a component), but the resulting annotations of the ident are different (false vs missing). i'm just wondering why someone might choose one over the other.#2020-10-2920:09Michael Stokleydo folks compose pull patterns? suppose i have an entity and for one use case, i need a set of attrs; for another, i need a non-overlapping different set of attrs. since it's all data, maybe they can be defined separately but then composed so i can make one db call instead of n#2020-11-0103:21steveb8n@michael740 I do this a lot. I use the apply-template fn in clojure core to inject pull vectors into other pull vectors or queries. it’s simple and it works well.#2020-11-0103:21steveb8neven better if you match a transform fn for the results to each pull expr. then you can compose them to process the results as well.#2020-11-0223:47Michael Stokleydo you mean apply-template in clojure.template?#2020-11-0321:57steveb8nsorry, yes, that’s the one. in my case, I generate the pull expressions and post query transform fns from a domain model. something like https://github.com/stevebuik/clj-code-gen-hodur#2020-10-2920:10Michael Stokleyi probably want to use sets instead of vectors in the initial representation? merge those, then swap the sets for vectors before use with datomic#2020-10-2920:12Michael Stokleyhandling the vector syntax vs the map syntax might be tricky#2020-10-2920:14kenny@michael740 We use the https://github.com/edn-query-language/eql for this. #2020-10-2920:15Michael Stokleyhttps://github.com/edn-query-language/eql#unions ?#2020-10-2920:17kennyWe go pull pattern -> ast -> merge -> pull pattern. #2020-10-2920:17kennyThe merge is typically very simple since it’s in the ast format. 
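kenny's pull pattern -> ast -> merge -> pull pattern pipeline can also be approximated without the EQL dependency for simple cases. Here is a minimal sketch (my own, under the assumption that patterns contain only keyword attributes and single-level join maps, nothing like EQL's full AST handling):

```clojure
;; Merge two Datomic pull patterns: union the plain keyword attributes and
;; merge join maps key-by-key. A simplified stand-in for an EQL AST merge.
(defn merge-pull-patterns [a b]
  (let [joins  (fn [p] (apply merge (filter map? p)))
        plain  (fn [p] (remove map? p))
        joined (merge-with (fn [x y] (vec (distinct (concat x y))))
                           (joins a) (joins b))]
    (vec (concat (distinct (concat (plain a) (plain b)))
                 (when (seq joined) [joined])))))

(merge-pull-patterns [:user/id {:user/friends [:user/name]}]
                     [:user/email {:user/friends [:user/email]}])
;; => [:user/id :user/email {:user/friends [:user/name :user/email]}]
```

The merged pattern can then be handed to a single d/pull call instead of n separate ones.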
#2020-10-2920:19Michael Stokleythanks!#2020-10-2920:19Michael Stokleythis looks cool as all get out#2020-10-2920:20kennyWe also use pathom so this is a very natural lib for us to use. You could probably write a smaller version to just do what you need if you don’t want to bring on a new dep. #2020-10-3016:01kennyI am running dev-local/import-cloud (in parallel for unique db-names. May be relevant, not sure) and received this exception. Any idea what would cause this?#2020-10-3016:07kennyThis definitely has something to do with being parallel. It will consistently fail when ran in parallel.#2020-10-3016:09kenny@U1QJACBUM I'm not sure that your answer https://ask.datomic.com/index.php/493/is-dev-local-import-cloud-thread-safe to true 🙂#2020-10-3017:25jarethuh. Ok I will run this down @U083D6HK9 . Could you copy your comment over to the ask question so we don't lose this to slack archiving?#2020-10-3017:25jaretI am happy to copy it, but would make more sense if you replied to me there#2020-10-3017:27kennySure. I was hoping to provide a bit more insight than this exception though 🙂 Just wasn't sure what would be helpful for you all. I may be able to create a repro.#2020-10-3017:29jaretIt's helpful! and if you can make a repro that'd be better.#2020-10-3017:30kennyOk. Will follow up in a bit.#2020-10-3017:30jaret@U083D6HK9 also it might be relevant to know if these DBs are special (i.e. large or something)#2020-10-3017:49kennyWhat is large?#2020-10-3021:57kennyI've also noticed that import-cloud can hang forever. I left an import running for the past 4 hours and the normal red "Loading ..." didn't even appear.#2020-10-3022:02kennyThere exception thrown is also very inconsistent. Will try to include everything.#2020-10-3022:10kennyComment blocks don't have rich formatting 😢#2020-11-0315:18kennyfyi, you can delete https://ask.datomic.com/index.php/493/is-dev-local-import-cloud-thread-safe?show=498#c498. 
I didn't see a way to on my end.#2020-11-0317:50jaretKenny, I think I deleted the right one for you#2020-11-0317:50jaretI'll look at turning on the ability for a user to delete their own post#2020-10-3110:01holyjakI had no idea Datomic was a place in Finland. Though perhaps nothing about :flag-fi: should surprise me :rolling_on_the_floor_laughing:#2020-11-0102:55yubrshenWhat's the meaning of the following error message, and how can I investigate and fix it?
:db.error/lookup-ref-attr-not-unique Attribute values not unique: :user/email
Here is the source code that will recreate the error:
(ns grok.db.add-user
(:require [grok.db.core :as SUT]
[grok.db.schema :refer [schema]]
[datomic.api :as d]))
(def sample-user
{:user/id (d/squuid)
:user/email "
The code will create a user in the mem database, and retrieve it by its email address (for the purpose of having a user for tests)
Here is the related schema code for user:
[
;; ## User
;; - id (uuid)
;; - full-name (string)
;; - username (string)
;; - email (string => unique)
;; - password (string => hashed)
;; - token (string)
{:db/ident :user/id
:db/valueType :db.type/uuid
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity
:db/doc "ID of the User"}
{:db/ident :user/email
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "Email of the User"}
{:db/ident :user/password
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "Hashed Password of the User"}
]
Here is the code of create-conn:
(defn create-conn [db-uri]
(when db-uri
(d/create-database db-uri)
(let [conn (d/connect db-uri)]
conn)))
Here is the full error trace:
2. Unhandled clojure.lang.Compiler$CompilerException
Error compiling test/grok/db/add_user.clj at (21:1)
#:clojure.error{:phase :compile-syntax-check,
:line 21,
:column 1,
:source
"/home/yshen/programming/clojure/learn-immutable-stack-with-live-coding-ankie/grok/server/test/grok/db/add_user.clj"}
Compiler.java: 7648 clojure.lang.Compiler/load
REPL: 1 user/eval19658
REPL: 1 user/eval19658
Compiler.java: 7177 clojure.lang.Compiler/eval
Compiler.java: 7132 clojure.lang.Compiler/eval
core.clj: 3214 clojure.core/eval
core.clj: 3210 clojure.core/eval
interruptible_eval.clj: 87 nrepl.middleware.interruptible-eval/evaluate/fn/fn
AFn.java: 152 clojure.lang.AFn/applyToHelper
AFn.java: 144 clojure.lang.AFn/applyTo
core.clj: 665 clojure.core/apply
core.clj: 1973 clojure.core/with-bindings*
core.clj: 1973 clojure.core/with-bindings*
RestFn.java: 425 clojure.lang.RestFn/invoke
interruptible_eval.clj: 87 nrepl.middleware.interruptible-eval/evaluate/fn
main.clj: 437 clojure.main/repl/read-eval-print/fn
main.clj: 437 clojure.main/repl/read-eval-print
main.clj: 458 clojure.main/repl/fn
main.clj: 458 clojure.main/repl
main.clj: 368 clojure.main/repl
RestFn.java: 137 clojure.lang.RestFn/applyTo
core.clj: 665 clojure.core/apply
core.clj: 660 clojure.core/apply
regrow.clj: 20 refactor-nrepl.ns.slam.hound.regrow/wrap-clojure-repl/fn
RestFn.java: 1523 clojure.lang.RestFn/invoke
interruptible_eval.clj: 84 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 56 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 152 nrepl.middleware.interruptible-eval/interruptible-eval/fn/fn
AFn.java: 22 clojure.lang.AFn/run
session.clj: 202 nrepl.middleware.session/session-exec/main-loop/fn
session.clj: 201 nrepl.middleware.session/session-exec/main-loop
AFn.java: 22 clojure.lang.AFn/run
Thread.java: 834 java.lang.Thread/run
1. Caused by datomic.impl.Exceptions$IllegalArgumentExceptionInfo
:db.error/lookup-ref-attr-not-unique Attribute values not unique: :user/email
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message "Attribute values not unique: :user/email",
:db/error :db.error/lookup-ref-attr-not-unique}
error.clj: 79 datomic.error/arg
error.clj: 74 datomic.error/arg
error.clj: 77 datomic.error/arg
error.clj: 74 datomic.error/arg
db.clj: 590 datomic.db/resolve-lookup-ref
db.clj: 569 datomic.db/resolve-lookup-ref
db.clj: 610 datomic.db/extended-resolve-id
db.clj: 606 datomic.db/extended-resolve-id
db.clj: 621 datomic.db/resolve-id
db.clj: 614 datomic.db/resolve-id
db.clj: 2295 datomic.db.Db/entity
api.clj: 171 datomic.api/entity
api.clj: 169 datomic.api/entity
add_user.clj: 21 grok.db.add-user/eval19672
add_user.clj: 21 grok.db.add-user/eval19672
Compiler.java: 7177 clojure.lang.Compiler/eval
Compiler.java: 7636 clojure.lang.Compiler/load
REPL: 1 user/eval19658
REPL: 1 user/eval19658
Compiler.java: 7177 clojure.lang.Compiler/eval
Compiler.java: 7132 clojure.lang.Compiler/eval
core.clj: 3214 clojure.core/eval
core.clj: 3210 clojure.core/eval
interruptible_eval.clj: 87 nrepl.middleware.interruptible-eval/evaluate/fn/fn
AFn.java: 152 clojure.lang.AFn/applyToHelper
AFn.java: 144 clojure.lang.AFn/applyTo
core.clj: 665 clojure.core/apply
core.clj: 1973 clojure.core/with-bindings*
core.clj: 1973 clojure.core/with-bindings*
RestFn.java: 425 clojure.lang.RestFn/invoke
interruptible_eval.clj: 87 nrepl.middleware.interruptible-eval/evaluate/fn
main.clj: 437 clojure.main/repl/read-eval-print/fn
main.clj: 437 clojure.main/repl/read-eval-print
main.clj: 458 clojure.main/repl/fn
main.clj: 458 clojure.main/repl
main.clj: 368 clojure.main/repl
RestFn.java: 137 clojure.lang.RestFn/applyTo
core.clj: 665 clojure.core/apply
core.clj: 660 clojure.core/apply
regrow.clj: 20 refactor-nrepl.ns.slam.hound.regrow/wrap-clojure-repl/fn
RestFn.java: 1523 clojure.lang.RestFn/invoke
interruptible_eval.clj: 84 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 56 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 152 nrepl.middleware.interruptible-eval/interruptible-eval/fn/fn
AFn.java: 22 clojure.lang.AFn/run
session.clj: 202 nrepl.middleware.session/session-exec/main-loop/fn
session.clj: 201 nrepl.middleware.session/session-exec/main-loop
AFn.java: 22 clojure.lang.AFn/run
Thread.java: 834 java.lang.Thread/run
I ran into this problem when I followed the coding example at https://www.youtube.com/watch?v=Fz6LxSSc_GE at 6:06 (The Immutable Stack - Building Anki Clone using Clojure, Datomic and ClojureScript (Part 5)). In the video, there was no error, bui I did, and could reproduce it as above.#2020-11-0103:26favilaThe schema for :user/email does not include a :db/unique constraint, therefore you cannot use this attribute for lookups as you do in your d/entity call#2020-11-0103:27favilaThus “lookup ref attr not unique” in the error message#2020-11-0103:33yubrshen@U09R86PA4 Thanks for the quick and effective help!#2020-11-0121:49holyjakI guess "lookup ref attr not declared as :db/unique" would be a more helpful error message.#2020-11-0202:10yubrshenIt's strange that once I added :db/unique to the existing schema without transacting, but just evaluate the schema definition, it works for retrieving the user by email address.
But later, when I actually transacted the updated schema, I ran into "Error: {:db/error :db.error/unique-without-index, :attribute :user/email}"
"Error: {:db/error :db.error/unique-without-index, :attribute :user/email}"
The error happened when I added more entities to the schema and did (d/transact conn schema) again to update the schema.
This is a consequence of my prior error referred in https://clojurians.slack.com/archives/C03RZMDSH/p1604199338251700 where I had to change my schema to add :db/unique constraint to :user/email
I wouldn't mind starting from scratch with a new database, but I have not learned how to do that yet.
But if it were in a production system, when I modify my schema, what's the proper way to correct and update?#2020-11-0121:53holyjakI wish there was a catalog of datomic errors with explanations and guidance. Searching the net for "datomic unique-without-index" yields nothing :-(#2020-11-0121:56favilaYou need a value index before you can make a value unique. See the docs on schema changes, it has a table of all legal transitions#2020-11-0202:02yubrshenMy problems are that I don't know enough Datomic to figure out how to have "a value index". I'm studying the documentation on schema change at https://docs.datomic.com/cloud/schema/schema-change.html
There are two pre-conditions. In order to add a uniqueness constraint to an attribute, both of the following must be true:
> The attribute must have a cardinality of `:db.cardinality/one`.
> If there are values present for that attribute, they must be unique in the set of current database assertions.
For the first one, my schema for :user/email already has :db.cardinality/one.
For the second one, I don't know how to handle:
1. how to check if the values present for :user/email are unique or not
2. If not, how to fix them.#2020-11-0212:18favilaAre you using cloud or on prem? You cite cloud docs but this sounds like an on-prem problem. (Cloud indexes all values by default)#2020-11-0212:20favilaOn on-prem, there is a :db/index true#2020-11-0215:09yubrshenI'm using on-prem, actually just dev one. I'll take a look at :db/index true @U09R86PA4 Thanks for the pointer!#2020-11-0215:10favilahttps://docs.datomic.com/on-prem/schema.html#altering-schema-attributes#2020-11-0215:10favila> All alterations happen synchronously, except for adding an AVET index. If you want to know when the AVET index is available, call https://docs.datomic.com/on-prem/javadoc/datomic/Connection.html#syncSchema(long). In order to add :db/unique, you must first have an AVET index including that attribute.#2020-11-0215:11favila(quote from the docs)#2020-11-0215:25yubrshen@U09R86PA4 Yes, the following worked:
(def tx-add-index @(d/transact conn [{:db/id :user/email
:db/index true}]))
(def tx-fix @(d/transact conn [{:db/id :user/email
:db/unique :db.unique/identity}]))
where conn is a connection to the on-prem (dev) database.
Thanks again for your coaching!#2020-11-0210:11lambdamHello,
I'm discovering Datomic entity specs.
I tried to trigger a spec error and here is the message:
"Entity temp-id missing attributes
The doc example gives:
"Entity 42 missing attributes [:user/email] of by :user/validate"}
Clearly, the serialization of the missing attribute seems to go wrong.
I'm using the latest version of Datomic (`1.0.6202` ).
Is it a known problem?#2020-11-0217:27marshallcan you share your :admin/validate spec ?#2020-11-0316:37lambdamHere is the spec:
{:db/ident :admin/validate
:db.entity/attrs [:admin/email :admin/hashed_password]}
and here are the attribute declarations:
{:db/ident :admin/email
:db/valueType :db.type/string
:db/unique :db.unique/identity
:db/cardinality :db.cardinality/one
:db.attr/preds myproject.entities.entity/email?}
{:db/ident :admin/hashed_password
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
The only "particular" thing that I see is that the email field has an attibute predicate.
Thanks#2020-11-0317:15jaretHey @U94V75LDV I've made a ticket to look at this more closely. I'll keep you updated on what I find could you DM me an e-mail so I can contact you in the event that this slack convo gets archived while I am poking around?#2020-11-0317:26lambdamThank you very much!
I do it right now.#2020-11-0210:34lambdamAlso I noted those points that seem weird:
1 - The documentation says :db/ensure is a virtual attribute that is not added in the database. When I then pull all the attributes of the entity, the :db/ensure field appears.
{:db/id 17592186045425,
:db/ensure [#:db{:id 17592186045420}],
:user/hashed-password "...",
...}
I then don't get what a "virtual attribute" is, then.
2 - After transacting successfully an entity with its spec "activated", I could then retract a field without triggering the entity spec:
(datomic.api/transact
conn*
[[:db/retract 17592186045425 :user/hashed_password]])
The resulting entity violates the spec but nothing was triggered. Is it the desired behaviour of entity specs?
Thanks#2020-11-0214:14bhurlowdoes Datomic use auto-discovery when integrating with the managed memcached in AWS? Or do we need to pass in each relevant memcached node individually?#2020-11-0312:46joshkhshould i be able to upsert an entity via an attribute which is a reference and also unique-by-identity?#2020-11-0313:06favilayes#2020-11-0312:50joshkhfor example
(d/transact conn
{:tx-data
[; upsert a school entity where :school/president is a reference and unique-by-identity
{:school/president {:president/id 12345}
:school/name "Bowling Academy of the Sciences"}]})
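For context, an upsert like this presumes a schema along these lines (a hypothetical sketch inferred from the attribute names in the example, not posted in the thread): both attributes need :db.unique/identity for upsert semantics to apply.

```clojure
;; Hypothetical schema sketch for the example above (names follow the thread;
;; value types are guesses). :school/president is a ref that is also
;; unique-by-identity, and :president/id is the president's own identity attr.
[{:db/ident       :school/president
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}
 {:db/ident       :president/id
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}
 {:db/ident       :school/name
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}]
```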
#2020-11-0313:07favilaThis is actually two upserts isn’t it? :president/id also?#2020-11-0315:26joshkhyes, you are correct and that is indeed the problem. it seems that you cannot upsert two entities that reference each other within the same transaction.
for example, running this transaction twice causes a datom conflict
(d/transact conn
{:tx-data
[
; a president
{:president/id "The Dude" :db/id "temp-president"}
; a school with a unique-by-identity
; :school/president reference to the president
{:school/president "temp-president"
:school/name "Bowling Academy of Sciences"}
]})
whereas both of these transactions upsert as expected
(d/transact conn
{:tx-data
[; a president
{:president/id "The Dude" :db/id "temp-president"}
]})
(d/transact conn
{:tx-data
[; a school with a unique-by-identity
; :school/president reference to the president
{:school/president 101155069755476 ;<- known dbid
:school/name "Bowling Academy of Sciences"}]})
#2020-11-0315:27joshkh(note the known eid in the second transaction)#2020-11-0314:15vnczIs there any specific reason why some kind of selection can only be done using the Peer Server?#2020-11-0314:16favilaWhat do you mean by “selection”?#2020-11-0314:16vnczLet me give you an example#2020-11-0314:17vncz:find [?name ?surname] :in $ :where [?e :p/name ?name] [?e :p/surname ?surname]#2020-11-0314:17vnczThis query cannot be executed by the peer library#2020-11-0314:18vnczThis one can
:find ?name ?surname :in $ :where [?e :p/name ?name] [?e :p/surname ?surname]#2020-11-0314:18vncz@U09R86PA4#2020-11-0314:19favilaah, ok, those are called “find specifications”#2020-11-0314:20vnczYes, these ones. It seems like the Peer Library can only execute the "Collection of List" one#2020-11-0314:20favilaand it’s the opposite: only the peer API supports these; the client API (the peer server provides an endpoint for the client api) does not#2020-11-0314:20vnczThis is weird, I'm using Datomic-dev (which I guess it's using the peer library?!) and I can't execute such queries#2020-11-0314:21faviladev-local?#2020-11-0314:21vnczYes#2020-11-0314:21favilathat uses the client api. (require '[datomic.client.api])#2020-11-0314:21favilathe peer api is datomic.api#2020-11-0314:22vncz#2020-11-0314:22favilacorrect#2020-11-0314:22favilabut you are using a client api#2020-11-0314:23favilathe client api does not support these#2020-11-0314:23vnczHmm 🤔#2020-11-0314:23vnczOk so in theory I should just change the namespace requirement?#2020-11-0314:23favilano, datomic.api is not supported by dev-local#2020-11-0314:23vnczAh ok so there's no way around it basically#2020-11-0314:24favilaMaybe historical background would help: in the beginning was datomic on-prem and the peer (`datomic.api` ), then came cloud and the client-api, and the peer-server as a bridge from clients to on-prem peers.#2020-11-0314:24faviladev-local is “local cloud”#2020-11-0314:24favilathat came even later#2020-11-0314:24favila(like, less than two months ago?)#2020-11-0314:24vnczOh ok, so it's a simulation of a cloud environment. 
I guess I was confused by the fact that's all in the same process#2020-11-0314:25favilathe client-api is designed to be networked or in-process; in dev-local or inside an ion, it’s actually in-process#2020-11-0314:25vnczGot it. So to keep it short I should either move to Datomic-free on Premise or work around the limitation in the code#2020-11-0314:26favilaas to why they dropped the find specifications, I don’t know. My guess would be that people incorrectly thought that it actually changed the query performance characteristics, but actually it’s just a convenience for first, map first, etc#2020-11-0314:27favilathe query does just as much work and produces a full result in either case#2020-11-0314:27vnczI could see these conveniences being useful though. The idea of having to manually do that every time is annoying.#2020-11-0314:27vnczNot the end of the world, but still#2020-11-0317:09Kaue SchltzHi there.
We've been facing an awkward situation with our Cloud system
From what I've seen of Datomic Cloud architecture, it seemed like I can have several databases in the same system, as long as there are transactor machines available in my Transactor group.
With that in mind, we scaled our compute group to have 20 machines, to serve our 19 dbs. All went well for a few months, until 3/4 days ago, when we started facing issues transacting data, with "Busy Indexing" errors.
If I'm not wrong this is due to our transactors being unable to ingest data at the same pace we are transacting it, or is there something else I'm missing here?
Thanks :D#2020-11-0317:37Kaue Schltz@U28A9C90Q#2020-11-0321:18Kaue SchltzAnother odd thing is that my Dynamo Write Actual is really low, despite my IndexMemDb metric being really high#2020-11-0321:19Kaue SchltzI have 130 Write provisioned, but only 2 is used#2020-11-0322:07tony.kayare you running your application on the compute group? Or are you carefully directing clients to query groups that service a narrow number of dbs? If you hit the compute group randomly for app stuff, then you’re going to really stress the object cache on those nodes.#2020-11-0322:08tony.kaywhich will lead to segment thrashing and all manner of badness#2020-11-0322:09Kaue SchltzIm pointing my client directly to compute group#2020-11-0322:10tony.kayyeah, I don’t work for cognitect, but my understanding of how it works leads me to the very strong belief that doing what you’re doing will not scale. Remember that each db needs it’s own RAM cache space for queries. The compute group has no db affinity, so with 20 dbs you’re ending up causing every compute node to need to cache stuff for all 20 dbs.#2020-11-0322:11Kaue Schltz@U0CKQ19AQ would you say it would be best if I transacted to a query group fed by a specific set of databases?#2020-11-0322:11tony.kayright, so a given user goes with a given db?#2020-11-0322:12tony.kay(a given user won’t need to query across all dbs?)#2020-11-0322:12Kaue SchltzFrom what Ive read, transactions to query groups end up in compute group#2020-11-0322:12tony.kayyes, but that is writes, not memory pressure#2020-11-0322:12Kaue Schltzthis application is write only#2020-11-0322:12tony.kaywrites always go to a primary compute node for the db in question. 
no way around that#2020-11-0322:13tony.kaythe problem is probably that you’re also causing high memory and CPU pressure on those nodes for queries#2020-11-0322:13tony.kayyou could also just be ingesting things faster than datomic can handle…that is also possible#2020-11-0322:13tony.kaybut 20dbs on compute sounds like a recipe for trouble if you’re using that for general application traffic#2020-11-0322:14Kaue SchltzI tried shutting my services down and give time to datomic to ingest, but to no avail. IndexMemDB is just a flat line#2020-11-0322:15Kaue SchltzI will give your suggestion a try, thanks in advance#2020-11-0322:15tony.kaythere’s also the possibility that the txes themselves need to read enough of the 20 diff dbs to be causing mem problems. I’d contact support with a high prio ticket and see what they say.#2020-11-0322:15tony.kaycould be something broke 🙂#2020-11-0322:17Kaue SchltzThe way things are built, there is a client connection for each one of the databases, depending on the body of a tx it is transacted to a specific db#2020-11-0322:18tony.kaythe tx determines the db?#2020-11-0322:18Kaue Schltzyes#2020-11-0322:19tony.kayooof. much harder to pin limited dbs to a query group then.#2020-11-0322:19tony.kaygood luck#2020-11-0322:19Kaue SchltzThanks#2020-11-0403:16NassinIf each node will be indexing/caching all 19 DBs, what's the point of increasing the node count to 20?#2020-11-0403:27NassinIf the answer is writes, will each DB have a different preferred node for transactions and does cloud tries to distribute this evenly? or will a single node, at any point in time can be the preferred one for multiple databases?#2020-11-0403:37NassinIf it's the latter, sounds like you are better served by creating multiple production stacks to better distribute writes among databases than by increasing the node count for a single production stack (sounds like a nightmare though) or.. 
instead of increasing to so many nodes, have fewer nodes but increase their size, ex: to a i3.xlarge#2020-11-0413:40Kaue SchltzWe are writing. From what we could gather, theoretically datomic would spread traffic across nodes since it was writing to different databases. We are stable since we upgraded our stack from 704 to 715. Looks like we were having issues in GC#2020-11-0412:21pithylessUsing Datomic on-prem, I am trying to migrate a :db/ident to a new alias (while keeping the old one for existing code). The docs suggest this is possible: https://docs.datomic.com/on-prem/best-practices.html#use-aliases
Unfortunately, the documented approach asserts the new ident and removes the previous one:
[:db/add :old/id :db/ident :new/id]
This would make sense, since the :db/ident attribute is cardinality-one:
;; => #:db{:id 10, :ident :db/ident, :valueType :db.type/keyword, :cardinality :db.cardinality/one, :unique :db.unique/identity, :doc "Attribute used to uniquely name an entity."}
Was this changed in some version of Datomic and the docs are not up-to-date? Is there a better way to go about introducing backwards-compatible idents? I suppose I could just change the cardinality to many, but not sure if that would break other assumptions and/or performance?#2020-11-0412:40favilaIdent lookups are special because they ignore retractions. Go ahead and try it: (d/ident db :old/id)#2020-11-0412:42favilaCardinality many wouldn’t solve the problem: it would just make it ambiguous which ident was the preferred vs deprecated one#2020-11-0412:43favilaIt also wouldn’t solve the problem of moving an ident to a different attribute#2020-11-0414:29pithylessThanks @U09R86PA4; what threw me off was querying [?e :db/ident :old/id] returned an empty set; it would only find it via [?e :db/ident :new/id]. But that makes sense if the idents are special via ignoring retractions.#2020-11-0414:30favilaquerying won’t act like this---only ident resolution#2020-11-0414:30pithylessQuerying for [?e :old/id ...] and [?e :new/id ...] does work. But I've still got to debug why it's not working with my datofu/conformity migrations.#2020-11-0414:31pithylessThanks for pointing me in the right direction!#2020-11-0519:10dogenpunkHas anyone run into this error using the datomic CLI?
Syntax error (FileNotFoundException) compiling at (clojure/core/async/impl/ioc_macros.clj:1:1).
Could not locate clojure/tools/analyzer__init.class, clojure/tools/analyzer.clj or clojure/tools/analyzer.cljc on classpath.
I’m coming back to a datomic cloud project after ~8mo. The datomic script loads tools.ops version 0.10.82. Seems to only occur if I run datomic commands from my project directory, so I assume this is an issue with the project deps.edn.#2020-11-0519:55alexmilleryou might need to update to latest version of the clojure tools (clj) - there were some dependency issues in past versions that would prevent necessary transitive deps from being included in the classpath#2020-11-0519:56alexmillerthe error above could definitely be a sign of that#2020-11-0520:21dogenpunkHmmm… clojure -h reports version is 1.10.1.727 which seems to be the latest. This is obviously not critical as I can just run from outside the project directory. I’m mostly worried that I screwed something up when upgrading datomic, ions, etc. Thanks for the help though!#2020-11-0521:11alexmilleryep, that should be the latest#2020-11-0521:12alexmillerif you want to ask at https://ask.datomic.com that would be a good place to file a question - would be helpful to include exactly what you ran (and your deps.edn if relevant)#2020-11-0812:17dogenpunkThanks, Alex. I’ll do that. #2020-11-0616:17joshkhjust nudging this post because we are keen to make use of all the great things dev-local has to offer 😇 https://forum.datomic.com/t/execution-error-when-importing-from-cloud-with-dev-local-0-9-225#2020-11-0616:20joshkhdev-local is our only best option right now to satisfy some customer requirements regarding backups, so any help would be much appreciated#2020-11-0618:26Jon WalchFor index-pull is it possible to specify multiple attributes and their values with :start? I want to index-pull all entities where every entity has an attribute's value as x and another attribute is a numerical value in sorted order.#2020-11-0618:32Jon WalchI basically want this returned:
[
{:x :foo
:y 1000}
{:x :foo
:y 9780}
...
]#2020-11-0619:14g3oHello, today I started finally playing around with Datomic, and when I open the db.log file I see some weird chars, is that normal#2020-11-0619:27alexmillersure, it's binary data#2020-11-0619:41g3ooh I see. is there a way to make this more readable?#2020-11-0619:45alexmillerno? what are you trying to do?#2020-11-0619:45g3onothing special, just curious what is that file.#2020-11-0622:02jjttjjanyone else having trouble with the datomic-cloud maven repo? I'm trying to run https://github.com/Datomic/ion-starter but seemingly keep timing out when downloading the jars when trying to start a repl#2020-11-0701:06tony.kayI am. I’m trying to update topology to prod, and it keeps timing out there#2020-11-0718:43joshkhany luck? what is timing out?#2020-11-0719:43tony.kaywell, not yet. The code deploy step: “Script at specified location: sync-libs failed to complete in 120 seconds” and the logs are trying to do an s3 copy#2020-11-0719:43tony.kayI realized that I need to up my i3 quota by one, so I’m waiting for that to finish before trying again#2020-11-0701:07tony.kaybeen wasting life on this all afternoon 😞#2020-11-0718:42joshkhis there a way to return the current base schema version of a cloud db without upgrading it?#2020-11-0801:37ivangalbansHi everyone,
I’m trying to use attribute predicates in a project and i’m following https://docs.datomic.com/on-prem/schema.html#attribute-predicates
I have a running transactor with datomic-pro-0.9.6045 :
bin/transactor config/samples/dev-transactor-template.properties
and I have a simple file as in the doc:
(ns datomic.samples.attrpreds
(:require [datomic.api :as d]))
(defn user-name?
[s]
(<= 3 (count s) 15))
(def user-schema
[{:db/ident :user/name,
:db/valueType :db.type/string,
:db/cardinality :db.cardinality/one,
:db.attr/preds 'datomic.samples.attrpreds/user-name?}])
(def uri "datomic:")
(d/create-database uri)
(defonce conn (d/connect uri))
@(d/transact conn user-schema)
This file is outside the Datomic directory. I have a running repl via cider-jack-in-clj.
I have evaluated the buffer and the schema is installed as expected, but when I run
@(d/transact conn [{:user/name "X"}])
I get the following error:
> Could not locate datomic/samples/attrpreds__init.class, datomic/samples/attrpreds.clj or datomic/samples/attrpreds.cljc on classpath.
I have read this in the doc:
> Attribute predicates must be on the classpath of a process that is performing a transaction.
But I don’t know how to do it. I have a proof of concept project with datomic-pro-0.9.6045 (dev), cider, deps.edn…
Should I set the DATOMIC_EXT_CLASSPATH variable to datomic/samples/attrpreds.jar (or .clj if possible) before running the transactor?
How do you configure your projects in this case?
Thanks in advance#2020-11-0811:37souenzzo@ivan.galban.smith for learning, use datomic:
For "production", you will generate an artifact or something like that and do that classpath thing#2020-11-0817:33ivangalbansis there a workaround?
I avoid using mem because I don’t wanna lose the datomic console. Persistent data is not important to me at this moment, although I would like to have it too#2020-11-0817:41ivangalbansI understand what you say, and generating an artifact is overkill for my purpose.#2020-11-0816:00joshkhthis one is driving me a little crazy. given the following tree:
a -> b -> c -> d -> e
how might i write a rule to find the most ancestral entity of e that has :item/active? true? in this example, the result would be b
[; schema
{:db/ident :item/id
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
{:db/ident :item/children
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many}
{:db/ident :item/active?
:db/valueType :db.type/boolean
:db/cardinality :db.cardinality/one}
; entities
{:item/id "a"
:item/children ["b"]}
{:db/id "b"
:item/id "b"
:item/children ["c"]
:item/active? true}
{:db/id "c"
:item/id "c"
:item/children ["d"]}
{:db/id "d"
:item/id "d"
:item/children ["e"]
:item/active? true}
{:db/id "e"
:item/id "e"}]
here is my starting point, which at least finds all ancestors of e (whereas I want just the farthest active ancestor of e, which is b)
(let [rules '[[(anc ?par ?child)
[?par :item/children ?child]]
[(anc ?anc ?child)
[?par :item/children ?child]
(anc ?anc ?par)]]]
(d/q '{:find [(pull ?anc [*])]
:in [$ % ?item-id]
:where [[?n :item/id ?item-id]
(anc ?anc ?n)]}
(db) rules "e"))#2020-11-0823:19benoitFrom a logical point of view, you're looking for an ancestor that is active and that itself doesn't have an active ancestor. So I would write a rule to check if an entity has an active ancestor and negate it.#2020-11-0915:31joshkhthat sounds like a good way to approach it. thanks.#2020-11-0916:14xcenoI have an application running as datomic ion. I'll need to upload a bunch of binary files between 1MB-100MB. There's a stack overflow answer (https://stackoverflow.com/a/10569048/932315) that brings up some points I'd like to clarify:
• Is it a good idea to store blobs in datomic if I disable the history for them?
• Does it make sense to store files as a datomic byte array?
• Or should I rather upload the files to S3 and save the URL in an attribute?#2020-11-0916:31NassinLast one, cloud doesn't have the byte array type#2020-11-0916:33NassinIn on-premise it's only used for very small binary data anyway#2020-11-0916:35joshkhseconded - put those files in S3 where they belong#2020-11-0916:36xcenoAlright, thanks guys!#2020-11-1013:24souenzzothere is docs about how to do permissions on datomic-ions?
Last time I tried there were no docs; my solution broke the CloudFormation and I got 1 day of downtime#2020-11-1013:29xcenoIt took me almost two weeks to get my initial setup up and running. I initially went with solo but upgraded to production later on. Anyhow, I have my own permission / auth system right now as ring-middleware. It provides the very basics, but if I find time I want/need to switch over to using AWS Cognito.
From what I've seen in the forums there are some people using Ions + Cognito in production, but there aren't any docs or examples in the wild. At least I haven't found any; if you do, please let me know#2020-11-1716:19joshkhonly just saw this thread, but in case you haven't found an answer yet @U2J4FRT2T, can you clarify what you mean by permissions? user permissions to your api? permissions for your ion to access other AWS services?#2020-11-1718:10souenzzoHow to customize the IAM of the machines created by the Datomic Cloud CloudFormation template
It isn't just "find the group and add the permission"
If you do that (like i did) you will not be able to remove/upgrade the CloudFormation because it will fail#2020-11-0918:49camdezIf I transact against a connection, then get a db value from that connection via datomic.api/db , and then query that db, are the newly transacted values guaranteed to be included in the db queried?#2020-11-0918:52camdez(Note that I’m not talking about explicitly using the :db-after value here.)#2020-11-0918:59favilaThat db is guaranteed to be at or after. Because other transactions may have happened in the meantime, you’re not guaranteed that any particular value is in there#2020-11-0919:01favilaput it differently: the peer receives transaction updates in-order via a single queue. Your d/transact future finalizes when it sees the result of its request on that queue. So it’s not possible for a d/db call to not see that tx yet if it ran after the future finished#2020-11-0919:02favilayou can access this queue yourself via d/tx-report-queue#2020-11-0919:09camdezThanks, @U09R86PA4! That’s what I meant to ask, I just worded it poorly. I’ve been operating under this assumption for a while and had a bug today that had me questioning my sanity. 😛 Just got it figured out though. Much appreciated.#2020-11-1012:39geodrome“High Availability (HA) is a Datomic Pro feature for ensuring the availability of a Datomic transactor in the event of a single machine failure.” according to https://docs.datomic.com/on-prem/ha.html. “All Datomic On-Prem licenses are perpetual and include all features…” including High Availability for Failover according to https://www.datomic.com/get-datomic.html. Please clarify whether HA is included with Datomic Starter. Thanks.
#2020-11-1013:04jaretYes, you can use HA with a Datomic starter license.#2020-11-1014:15favilaWith the client API, is there any equally-performant alternative to seek-datoms to find a next-v in an :aevt index? (d/seek-datoms db :aevt :known-attr some-e) . I’ve tried index-pull with :aevt, but it requires :a to be cardinality-many (?!); Query with > and <= seems to not-work quickly (which I didn’t expect: I expected either too slow or error)
> :v must be `db.type/ref` and `:db.cardinality/many`#2020-11-1015:13jaretIs that what you meant with trying :aevt? ^#2020-11-1015:15favilaI mean I want to start matching (fuzzily) on e#2020-11-1015:15favilapeer code: (-> (d/seek-datoms db :aevt :db/txInstant tx) first :v)#2020-11-1015:15favilahow would I do that efficiently with the client api?#2020-11-1015:18favilaThe error I got using dc/index-pull was `Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
:db/txInstant is not card-many, as required for :aevt`#2020-11-1015:19favilaI see now the error message was just misleading, the problem is really that it’s not a ref attr#2020-11-1015:28jaretYeah, I think index-pull is the answer here, but that error message is something I want to look at and I am going to talk with the team to see if there is a more efficient way that doesn't have the requirements of index-pull.#2020-11-1015:33favilaI’m not sure how index-pull could be the answer, as I would need to pull the second element not the third#2020-11-1015:34favilaI would want to pull from the e in the :aevt, not the :v#2020-11-1015:34favila(actually I don’t want to pull at all; I just want the e and v)#2020-11-1015:14Carey HayHello! Our organisation has recently migrated to kubernetes using Amazon EKS. We are running the datomic transactor inside a pod in k8s and are hoping to also create a cronjob to create database backups to s3 using the standard backup commands. Has anyone done this successfully using IAM Roles for service accounts? https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
We are using these successfully for other applications to negate the need to mount aws credentials files into pods, but datomic does not seem to be able to interact with the auth tokens that are created in each pod. According to the supported sdk list, an application requires the following:
• Java (Version 2) — https://github.com/aws/aws-sdk-java-v2/releases/tag/2.10.11
• Java — https://github.com/aws/aws-sdk-java/releases/tag/1.11.704
The most recent reference to the aws sdk in the change log is here: https://docs.datomic.com/on-prem/changes.html#0.9.5561.50, citing "Peers and transactors now use version 1.11.82 of the AWS SDK". Depending on what sdk is used, it may or may not be supported!#2020-11-1208:08Ivar RefsdalHi. And thanks for a fine piece of software!
I'm having a problem with excision and on-prem:
My history database (eavto) looks like this:
[17592186045418 :m/info "secret data that should be removed" 13194139534313 true]
[17592186045418 :m/info "secret data that should be removed" 13194139534315 false]
[17592186045418 :m/info "OK data" 13194139534315 true]
Then I execute excision:
{:db/excise 17592186045418,
:db.excise/attrs [:m/info],
:db.excise/beforeT 13194139534315}
After waiting for syncing of excision, my history database looks like this:
[17592186045418 :m/info "secret data that should be removed" 13194139534315 false]
[17592186045418 :m/info "OK data" 13194139534315 true]
Thus the bad secret data is still present in the history, though only as the retraction, which does not make sense in my opinion.
Is it possible to fix this? To also get rid of the retraction information?
Here is a gist that reproduces this issue:
https://gist.github.com/ivarref/f92d9efd45d1c0cbd2d239bf4904a323
Thanks and kind regards.#2020-11-1208:12Ivar RefsdalCC @UQGBDTAR4 @U11RVUGP7#2020-11-1212:14favilaBeforeT is not inclusive#2020-11-1212:15favilaThe history items that remain have a tx == your beforeT argument to the excision#2020-11-1217:38Ørnulf Risnes@U09R86PA4 (I'm a colleague of @UGJE0MM0W)
Thank you for the response.
If you look at Ivar's example, you will see that the problem is that the retraction of the problematic datom that we want to excise and its benign counterpart that we want to keep - they have the same tx.
This typically happens when we add a new value to an attribute with cardinality one.
So - beforeT isn't expressive enough to distinguish between the retraction of the problematic value and the adding of the benign value.#2020-11-1217:41favilaAh I understand your problem now.#2020-11-1217:41favilaYes, it’s not expressive enough. There’s no way to get exactly what you want.#2020-11-1218:03Ørnulf Risnes@U09R86PA4 Thank you again.
Since the first entry of the datom in the post-excision history now is a (logically invalid) retraction, we were hoping for some kind of "garbage collection" mechanisms to rescue us here, and help us get rid of the problematic value completely.
Will send a question about possible workarounds to Datomic support.
(Cc @U1QJACBUM)#2020-11-1218:04favilaI suspect any gc or reindex mechanisms, even if they remove the item from the index, will not remove them from the tx-log#2020-11-1218:05favilaexcision is special in that it alters tx-log entries; even noHistory doesn’t do that#2020-11-1218:35Ivar RefsdalThanks @U09R86PA4 and @UQGBDTAR4
I've noticed the following:
retraction is about existing data, thus it does not make sense to keep [17592186045418 :m/info "secret data that should be removed" 13194139534315 false] in the history database.
Ref: https://docs.datomic.com/on-prem/transactions.html#retracting-data
If I do
@(d/transact conn [[:db/retract "item-that-does-not-yet-exist" :m/info "secret data"]])
this will be silently discarded, which is OK, though I would prefer an exception. It does not end up in the history database.
In this respect I think there is a mismatch between retract and excision, and I think the excision logic should be improved with the following: the new database history of the excised entity and attribute should never contain a retraction in the first transaction. This simple rule would solve the problem (I think!).
Thanks and kind regards again.#2020-11-1219:23favilaI don’t speak for cognitect, but because this alters transactions which happened after the beforeT, I can see this as a semantic grey area about the meaning of excision#2020-11-1219:23favilait’s probably also a performance concern because many more datoms and transactions need inspection#2020-11-1219:25favilayour rule is also too simple for cardinality-many attributes#2020-11-1308:09Ivar RefsdalI agree it's a grey area. I wouldn't be too concerned about performance as excision is a seldom thing, but yes I do not know the performance / implementation implications of this suggestion.
Are you sure the rule is too simple?
Why would the first transaction of an entity's attribute (cardinality-many) have any retractions? It's the equivalent of:
@(d/transact conn [[:db/retract "new-item" :m/many "data-1"]
[:db/retract "new-item" :m/many "data-2"]])
which does not make sense.
Or did I miss something?#2020-11-1316:30favilawhat I mean is that the retracts may be spread throughout the transaction history after the excision time. You need to know what values used-to-be asserted at moment T, and you need to look for the first retraction or assertion of any of those values forward in time. For cardinality-many, there won’t be just one transaction. You can terminate early if all values are accounted for, not on the first transaction#2020-11-1316:30favilain the worst-case, a value is never retracted later, so you scan all of time#2020-11-1214:32avocadeHey guys! Anyone else having an issue when using expound and datomic dev-local's (d/db conn) value in specs (either directly, or using guardrails/ghostwheel which wraps expound)?
We filed an issue on it here for reference: https://github.com/bhb/expound/issues/205#2020-11-1216:10Michael Stokleythese are logically equivalent. are they equivalent from a perf standpoint?
(d/q '[:find ?e
:in $ ?id
:where [?e :e/id ?id]]
db id)
;; vs
(d/q '[:find ?e
:in $ ?e]
db [:e/id id])
where `[:e/id id]` is a lookup ref#2020-11-1216:23tatutI don’t think they are completely equivalent, the first will return a :db/id number and the latter will just return the lookup ref as is in the results#2020-11-1216:28Michael Stokleyah, you're right.#2020-11-1216:29Michael Stokleyi should have included a pull pattern in the example#2020-11-1216:29tatutso I would think the latter should be faster as it does nothing#2020-11-1216:29Michael Stokleymy question is more around whether it matters to pass in the unique identifier or the lookup ref#2020-11-1216:29tatutyou can give the latter a non-existing lookup ref and it just happily returns it#2020-11-1216:31Michael Stokley(d/q '[:find (pull ?e pull-pattern)
:in $ ?id pull-pattern
:where [?e :e/id ?id]]
db id pull-pattern)
;; vs
(d/q '[:find (pull ?e pull-pattern)
:in $ ?e]
db [:e/id id] pull-pattern)#2020-11-1216:31Michael Stokleydo you think there would be a performance difference in the above?#2020-11-1216:32tatutfeels to me that there shouldn’t be, but I don’t really know#2020-11-1216:32tatutand if there is, it is likely negligible#2020-11-1216:33tatutbut in both cases, if you have a lookup ref, wouldn’t you just use (d/pull db pattern id) instead of q?#2020-11-1216:41Michael Stokleyyou could. it's a bad example, sorry. in truth, the real code has additional where clauses, so it's a real query.#2020-11-1216:21ziltiIs there a usable tutorial somewhere on how to set up Metabase with Presto?#2020-11-1216:35ziltiI've set everything up, but all I get is
Nov 12 16:35:28 the-network java[12958]: 2020-11-12 16:35:28,337 ERROR driver.util :: Database connection error
Nov 12 16:35:28 the-network java[12958]: java.io.EOFException: SSL peer shut down incorrectly
#2020-11-1219:05respatializeddata modeling question: is there any semantics for disjoint attributes in Datomic - something like "an entity can have attribute x or attribute y, but not both"? Or is that anathema to the open composition of attributes that Datomic's data model encourages and those constraints should be left up to the application?#2020-11-1220:30benoitYou cannot express this constraint with the Datomic schema attributes but you can always enforce it with a custom database function.
Whether it is a good idea from a logical perspective, I'm not sure. This looks like a sum type to me.
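A sketch of the custom database-function approach, in on-prem syntax; the attribute names :thing/x and :thing/y and the function name :thing/add-xor are hypothetical:

```clojure
;; Hypothetical transaction function enforcing "x or y, but not both".
;; It runs inside the transactor, so the check is atomic with the write.
[{:db/ident :thing/add-xor
  :db/fn #db/fn {:lang   "clojure"
                 :params [db eid attr value]
                 :code   (let [other ({:thing/x :thing/y
                                       :thing/y :thing/x} attr)]
                           (if (seq (datomic.api/datoms db :eavt eid other))
                             (throw (ex-info "x and y are mutually exclusive"
                                             {:eid eid :attr attr}))
                             [[:db/add eid attr value]]))}}]
```

After transacting that definition, writes go through the function, e.g. @(d/transact conn [[:thing/add-xor eid :thing/x 42]]), which throws if the entity already has :thing/y.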
You can also think about other ways to implement it like creating a ref attribute that points to an entity that can have the x or y attribute.#2020-11-1316:31marshall@jdkida you can use the pull API directly
https://docs.datomic.com/on-prem/pull.html (onprem)#2020-11-1316:31marshallhttps://docs.datomic.com/cloud/tutorial/read.html#pull (cloud)#2020-11-1316:33jkidaahh, i see. eid or (unique-id) work?#2020-11-1316:33marshallor lookup ref#2020-11-1316:33marshallhttps://docs.datomic.com/on-prem/identity.html#entity-identifiers#2020-11-1316:33marshallit takes an entity identfier#2020-11-1317:18gabor.veresHi all, newbie question: does Datomic support "ordered" :db/cardinality many attributes? I'd like to store a vector of values, and somehow retrieve the same, ordered vector. The actual use case would be an entity that refers to other entities, but those references do have a defined order. I can't seem to find a way to do this on the data model/schema level. Is this an application/client concern rather, meaning I store data required to reconstruct the order and reorder after retrieval?#2020-11-1518:56val_waeselynckCheck out Datofu, it has helpers for that IIRC#2020-11-1317:26Braden Shepherdsonwell, the underlying indexes are always sorted, but that order doesn't necessarily survive in a query.#2020-11-1317:27Braden Shepherdsongenerally you have to do your own sorting in memory. if there's some arbitrary order (say, tracks on an album) then you need to record those as attributes.#2020-11-1317:30Braden Shepherdsonputting it slightly differently, you might transact {:foo/id (uuid "...") :foo/things [19 12 22]} but that's just a shorthand. it swiftly gets unpacked to a set of entity-attribute-value triples, and the order of your vector is lost. 
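In application code, the record-the-order-as-an-attribute suggestion might look like this; the :album/* and :track/* attributes are hypothetical:

```clojure
;; Children come back from pull as an unordered collection; a
;; hypothetical :track/position attribute lets the application
;; restore the intended order after retrieval.
(def pulled
  {:album/title  "Example"
   :album/tracks [{:track/name "Closer" :track/position 2}
                  {:track/name "Opener" :track/position 1}]})

(->> (:album/tracks pulled)
     (sort-by :track/position)
     (mapv :track/name))
;; => ["Opener" "Closer"]
```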
it's just a set to Datomic.#2020-11-1317:39gabor.veresThanks @braden.shepherdson, that's what I suspected - this is an application level concern then.#2020-11-1318:14favilaIf you need control over partially-fetching items in a certain order, use d/index-pull#2020-11-1610:03hanDerPederAny harm in transacting a schema multiple times?#2020-11-1612:22souenzzoI transact on every application start (even on elastic ones)#2020-11-1613:43vnczI also do the same and I have not noticed any problem#2020-11-1715:47tvaughanSame#2020-11-1616:14Michael Stokleyis calling d/db to create a db from a conn expensive?#2020-11-1616:29favilaNo#2020-11-1616:30favilaYou should think more about consistent values for a unit of work than about the expense of creating a db object: https://docs.datomic.com/on-prem/best-practices.html#consistent-db-value-for-unit-of-work#2020-11-1616:31favilaalso by passing down a db you guarantee that the entire subtree of function calls cannot transact#2020-11-1616:32favila(so you don’t have to worry about accidental writers)#2020-11-1618:07holyjakHello! I would like to start playing with Datomic. I have this project, clj_tumblr_summarizer , that will run monthly as AWS Lambda, fetch fresh posts from Tumblr, and store them somewhere for occasional use. Now I would like the "somewhere" to be Datomic. It is far from the optimal choice but I want to learn it 🙂
So my idea is to use dev-local and store its data to S3 (fetch it at the lambda start, re-upload when it is done).
My question is: Is this too crazy? Thank you!#2020-11-1619:20ghadiYes too crazy because of concurrency #2020-11-1620:08holyjakThank you. Could you be so kind and expand on that a little? Do you mean it breaks when concurrent access is attempted? I don't think I have any concurrency there..#2020-11-1620:41ghadiYou’d have to guarantee that the lambda is not being concurrently called#2020-11-1620:42ghadiAt which point it would be better to just use datomic proper or ddb#2020-11-1620:49holyjakWell, the lambda is run once a month by a schedule so I wouldn't worry about that. Yeah, dynamodb is a much better choice but then I don't get to learn Datomic 😢#2020-11-1618:10gdanovhi, is there any performance or other difference in how 1-n relations are implemented? refs 1 --> n or the other way round?#2020-11-1618:59Braden ShepherdsonBecause of the VAET reverse lookup index, there's no major performance impact here either way I think, provided you write your :where clauses properly.
think about how you'd write the query for each case (have child find parent, have parent list children, etc.), and you'll see they work out about the same.#2020-11-1619:01gdanovthanks...what would be an 'improper' :where clause in this case? I'm asking exactly because query-wise there's no difference#2020-11-1619:01Braden Shepherdsonoh I just meant the usual principles of writing your :where clauses so that they (a) start as specific as possible, and (b) always have overlap between one line and the next, so you don't get a big cross product.
[:find ?parent ?child
:in $ ?param ?child-param
:where
[?parent :some/attrib ?param]
[?child :has/a ?parent] ;; or the other way round
[?child :other/attrib ?child-param]]
#2020-11-1619:07gdanovyes, I really don't see what difference it could make#2020-11-1619:08Braden Shepherdsonthat's perfectly fine. what you want to avoid is this order:
[?parent :some/attrib ?param]
[?child :other/attrib ?child-param]
[?child :has/a ?parent]
because that finds all plausible parents, and all plausible children, and then finally just the intersection.#2020-11-1619:08Braden Shepherdsonbut that's a general query design thing and doesn't really have anything to do with 1-n relationships.#2020-11-1619:10gdanovyes you are right. my thinking is still SQL influenced sometimes and I get weird feelings and need to double-check#2020-11-1619:12Braden ShepherdsonI'll remark, finally, that the "parent with list of children" approach actually makes a n-n relationship, in principle. it's just a coincidence if every child appears in the list of exactly one parent. having a :db.cardinality/one parent attribute on each child makes it certain that it's 1-n.#2020-11-1619:13gdanovgood one, this is important if I need to enforce restriction. thanks!#2020-11-1703:21onetomHow can we restrict client apps / IAM users to only have access to certain databases?
The https://docs.datomic.com/cloud/operation/access-control.html article defines the DbName metavariable at the beginning, but then it's not mentioned afterwards.
It does have a section called Authorize Client Applications, linking to https://docs.datomic.com/cloud/operation/client-applications.html , but that page doesn't mention DbName either.
Is it not possible to restrict access to certain dbs, or is it just not documented?#2020-11-1716:23joshkhi'd like to know this as well. i had started defining a policy to grant access to just certain access keys in the datomic s3 bucket, but in the end gave up (admittedly after not much trial and error)#2020-11-1704:17mruzekwHas anyone been able to install dev-local on a Windows machine (not WSL or VM)?#2020-11-1704:30mruzekwLooks like Powershell is particular about . in args. When you run the mvn commands from ./install wrap the whole -Dfile arg in quotes (`"-Dfile=…"`)#2020-11-1715:01jaretI was able to get Dev-local running on windows 10, using powershell. I created the .datomic\dev-local.edn file and populated with:
:storage-dir "C:\\Users\\<COMPUTER NAME>\\dev-local-proj\\storage"
:system "dev"}))#2020-11-1716:56mruzekwThanks, jaret!